
Showing papers on "Graphical user interface published in 2020"



Journal ArticleDOI
TL;DR: In this paper, the authors present an approach that enables accurate prototyping of graphical user interfaces (GUIs) via three tasks: detection, classification, and assembly, where logical components of a GUI are detected from a mock-up artifact using either computer vision techniques or mock-up metadata.
Abstract: It is common practice for developers of user-facing software to transform a mock-up of a graphical user interface (GUI) into code. This process takes place both at an application's inception and in an evolutionary context as GUI changes keep pace with evolving features. Unfortunately, this practice is challenging and time-consuming. In this paper, we present an approach that automates this process by enabling accurate prototyping of GUIs via three tasks: detection, classification, and assembly. First, logical components of a GUI are detected from a mock-up artifact using either computer vision techniques or mock-up metadata. Then, software repository mining, automated dynamic analysis, and deep convolutional neural networks are utilized to accurately classify GUI-components into domain-specific types (e.g., toggle-button). Finally, a data-driven, K-nearest-neighbors algorithm generates a suitable hierarchical GUI structure from which a prototype application can be automatically assembled. We implemented this approach for Android in a system called ReDraw. Our evaluation illustrates that ReDraw achieves an average GUI-component classification accuracy of 91 percent and assembles prototype applications that closely mirror target mock-ups in terms of visual affinity while exhibiting reasonable code structure. Interviews with industrial practitioners illustrate ReDraw's potential to improve real development workflows.

141 citations
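The assembly step described in this abstract turns a flat list of classified components into a hierarchical GUI structure. Below is a minimal Python sketch of that general idea, using simple bounding-box containment rather than ReDraw's data-driven K-nearest-neighbors algorithm; the `Component` fields and the example components are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Component:
    """A classified GUI component with its bounding box (x, y, width, height)."""
    kind: str
    x: int
    y: int
    w: int
    h: int
    children: List["Component"] = field(default_factory=list)

def contains(outer: Component, inner: Component) -> bool:
    """True if inner's box lies fully inside outer's box."""
    return (outer.x <= inner.x and outer.y <= inner.y and
            outer.x + outer.w >= inner.x + inner.w and
            outer.y + outer.h >= inner.y + inner.h)

def assemble(components: List[Component]) -> List[Component]:
    """Nest each component under the smallest component that contains it."""
    roots = []
    by_area = sorted(components, key=lambda c: c.w * c.h)
    for i, comp in enumerate(by_area):
        # Candidate parents are larger components that fully contain this one.
        parents = [p for p in by_area[i + 1:] if contains(p, comp)]
        if parents:
            min(parents, key=lambda p: p.w * p.h).children.append(comp)
        else:
            roots.append(comp)
    return roots

# Hypothetical detection/classification output for a simple screen.
detected = [
    Component("LinearLayout", 0, 0, 1080, 1920),
    Component("ToggleButton", 40, 100, 200, 80),
    Component("TextView", 40, 220, 400, 60),
]
for root in assemble(detected):
    print(root.kind, [c.kind for c in root.children])
```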



Journal ArticleDOI
TL;DR: The main characteristics, functionalities and performance of the sen2r package are described, and its usefulness as the operational back-end of service-oriented architectures, as illustrated by the Saturno project, is highlighted.

68 citations


Journal ArticleDOI
TL;DR: The current version (i.e. Version 4) of X-Seed has a new interface designed to be more interactive and user friendly, and the software can be downloaded and used free of charge.
Abstract: X-Seed is a native Microsoft Windows program with three primary functions: (i) to serve as a graphical user interface to the SHELX suite of programs, (ii) to facilitate exploration of crystal packing and intermolecular interactions, and (iii) to generate high-quality molecular graphics artwork suitable for publication and presentation. Development of X-Seed Version 1.0 began in 1998, when point-and-click crystallographic software was still limited in scope and power. Considerable enhancements have been implemented within X-Seed over the past two decades. Of particular importance are support for the SHELX2019 programs (SHELXS, SHELXD, SHELXT and SHELXL) for structure solution and refinement, and MSRoll for rendering void spaces in crystal structures. The current version (i.e. Version 4) of X-Seed has a new interface designed to be more interactive and user friendly, and the software can be downloaded and used free of charge.

66 citations


Proceedings ArticleDOI
TL;DR: A new GUI-specific old-fashioned method for non-text GUI element detection which adopts a novel top-down coarse-to-fine strategy, and incorporate it with the mature deep learning model for GUI text detection is designed.
Abstract: Detecting Graphical User Interface (GUI) elements in GUI images is a domain-specific object detection task. It supports many software engineering tasks, such as GUI animation and testing, GUI search and code generation. Existing studies for GUI element detection directly borrow the mature methods from computer vision (CV) domain, including old fashioned ones that rely on traditional image processing features (e.g., canny edge, contours), and deep learning models that learn to detect from large-scale GUI data. Unfortunately, these CV methods are not originally designed with the awareness of the unique characteristics of GUIs and GUI elements and the high localization accuracy of the GUI element detection task. We conduct the first large-scale empirical study of seven representative GUI element detection methods on over 50k GUI images to understand the capabilities, limitations and effective designs of these methods. This study not only sheds the light on the technical challenges to be addressed but also informs the design of new GUI element detection methods. We accordingly design a new GUI-specific old-fashioned method for non-text GUI element detection which adopts a novel top-down coarse-to-fine strategy, and incorporate it with the mature deep learning model for GUI text detection.Our evaluation on 25,000 GUI images shows that our method significantly advances the start-of-the-art performance in GUI element detection.

64 citations
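For context on the "old-fashioned" computer-vision baselines the study examines, the sketch below shows a generic Canny-edge/contour pipeline for proposing GUI element boxes with OpenCV. It is only a baseline of the kind compared in the paper, not the GUI-specific top-down coarse-to-fine method the authors propose; the file name and thresholds are placeholders.

```python
import cv2  # pip install opencv-python

def detect_gui_elements(image_path: str, min_area: int = 400):
    """Crude non-text GUI element detection: Canny edges -> contours -> boxes."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h >= min_area:          # drop tiny, noisy regions
            boxes.append((x, y, w, h))
    return boxes

# Placeholder path; any GUI screenshot works.
print(detect_gui_elements("screenshot.png"))
```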


Journal ArticleDOI
17 Feb 2020
TL;DR: Combinatorial optimization is surveyed as a flexible and powerful tool for computational generation and adaptation of GUIs; the position of this application area within optimization and human–computer interaction research is discussed, and challenges for future work are outlined.
Abstract: The graphical user interface (GUI) has become the prime means for interacting with computing systems. It leverages human perceptual and motor capabilities for elementary tasks such as command exploration and invocation, information search, and multitasking. For designing a GUI, numerous interconnected decisions must be made such that the outcome strikes a balance between human factors and technical objectives. Normally, design choices are specified manually and coded within the software by professional designers and developers. This article surveys combinatorial optimization as a flexible and powerful tool for computational generation and adaptation of GUIs. As recently as 15 years ago, applications were limited to keyboards and widget layouts. The obstacle has been the mathematical definition of design tasks, on the one hand, and the lack of objective functions that capture essential aspects of human behavior, on the other. This article presents definitions of layout design problems as integer programming tasks, a coherent formalism that permits identification of problem types, analysis of their complexity, and exploitation of known algorithmic solutions. It then surveys advances in formulating evaluative functions for common design-goal foci such as user performance and experience. The convergence of these two advances has expanded the range of solvable problems. Approaches to practical deployment are outlined with a wide spectrum of applications. This article concludes by discussing the position of this application area within optimization and human–computer interaction research and outlines challenges for future work.

62 citations
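To make the flavor of these layout-design problems concrete, the toy sketch below assigns commands to menu slots so that frequently used commands land in quickly reachable positions, by brute force over permutations. Real formulations in the article's sense use integer programming and behaviorally grounded objective functions; the frequencies and slot costs here are invented.

```python
from itertools import permutations

# Hypothetical click frequencies (per session) and slot access costs (seconds).
commands = {"Open": 12, "Save": 9, "Export": 3, "Print": 1}
slot_cost = [0.4, 0.6, 0.8, 1.0]   # top slot is fastest to reach

def expected_cost(order):
    """Expected selection time if commands appear in this slot order."""
    return sum(commands[c] * slot_cost[i] for i, c in enumerate(order))

best = min(permutations(commands), key=expected_cost)
print("best layout:", best, "cost:", round(expected_cost(best), 2))
```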


Journal ArticleDOI
TL;DR: DolphinNext is a flexible, intuitive, web-based data processing and analysis platform that enables creating, deploying, sharing, and executing complex Nextflow pipelines with extensive revisioning and interactive reporting to enhance reproducible results.
Abstract: The emergence of high throughput technologies that produce vast amounts of genomic data, such as next-generation sequencing (NGS), is transforming biological research. The dramatic increase in the volume of data, and the variety and continuous change of data processing tools, algorithms and databases, make analysis the main bottleneck for scientific discovery. The processing of high throughput datasets typically involves many different computational programs, each of which performs a specific step in a pipeline. Given the wide range of applications and organizational infrastructures, there is a great need for highly parallel, flexible, portable, and reproducible data processing frameworks. Several platforms currently exist for the design and execution of complex pipelines. Unfortunately, current platforms lack the necessary combination of parallelism, portability, flexibility and/or reproducibility that is required by the current research environment. To address these shortcomings, workflow frameworks that provide a platform to develop and share portable pipelines have recently arisen. We complement these new platforms by providing a graphical user interface to create, maintain, and execute complex pipelines. Such a platform simplifies robust and reproducible workflow creation for non-technical users and provides a robust platform to maintain pipelines for large organizations. To simplify development, maintenance, and execution of complex pipelines we created DolphinNext. DolphinNext facilitates building and deployment of complex pipelines using a modular approach implemented in a graphical interface that relies on the powerful Nextflow workflow framework, by providing:
1. A drag-and-drop user interface that visualizes pipelines and allows users to create pipelines without familiarity with the underlying programming languages.
2. Modules to execute and monitor pipelines in distributed computing environments such as high-performance clusters and/or the cloud.
3. Reproducible pipelines with version tracking and stand-alone versions that can be run independently.
4. Modular process design with process revisioning support to increase reusability and pipeline development efficiency.
5. Pipeline sharing with GitHub and automated testing.
6. Extensive reports with R Markdown and Shiny support for interactive data visualization and analysis.
DolphinNext is a flexible, intuitive, web-based data processing and analysis platform that enables creating, deploying, sharing, and executing complex Nextflow pipelines with extensive revisioning and interactive reporting to enhance reproducible results.

59 citations
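Workflow frameworks such as Nextflow, on which DolphinNext builds, model a pipeline as processes linked by data dependencies and run each process only when its inputs are ready. The plain-Python sketch below illustrates that dependency-driven execution model with a hypothetical NGS pipeline; it is not DolphinNext or Nextflow code.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical NGS pipeline: each step lists the steps it depends on.
pipeline = {
    "fastqc": [],
    "trim":   [],
    "align":  ["trim"],
    "count":  ["align"],
    "report": ["fastqc", "count"],
}

def run(step: str) -> None:
    # In a real framework each step would be a containerized, versioned process.
    print(f"running {step}")

# Execute steps in an order that respects every dependency.
for step in TopologicalSorter(pipeline).static_order():
    run(step)
```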



Journal ArticleDOI
TL;DR: The first version of a MATLAB-based graphical user interface (GUI) for multi-block data analysis (MBA) is presented, capable of performing data visualisation, regression, classification and variable selection for up to 4 different sensors.

44 citations




Proceedings ArticleDOI
21 Apr 2020
TL;DR: GUIComp as discussed by the authors is a GUI prototyping assistance tool that can be connected to GUI design software as an extension, and it provides real-time, multi-faceted feedback on a user's current design.
Abstract: Users may face challenges while designing graphical user interfaces, due to a lack of relevant experience and guidance. This paper aims to investigate the issues users face during the design process, and how to resolve them. To this end, we conducted semi-structured interviews, based on which we built a GUI prototyping assistance tool called GUIComp. This tool can be connected to GUI design software as an extension, and it provides real-time, multi-faceted feedback on a user's current design. Additionally, we conducted two user studies, in which we asked participants to create mobile GUIs with or without GUIComp, and requested online workers to assess the created GUIs. The experimental results show that GUIComp facilitated iterative design, and that the participants with GUIComp had a better user experience and produced more acceptable designs than those who did not use it.

Journal ArticleDOI
11 Feb 2020
TL;DR: How crYOLO has evolved since its initial release is described, including filament picking, a new denoising technique, and a new graphical user interface, and its usage in automated processing pipelines is outlined.
Abstract: Particle selection is a crucial step when processing electron cryo-microscopy data. Several automated particle picking procedures were developed in the past, but most struggle with non-ideal data sets. In our recent Communications Biology article, we presented crYOLO, a deep learning-based particle picking program. It enables fast, automated particle picking at human levels of accuracy with low effort. A general model allows the use of crYOLO for selecting particles in previously unseen data sets without further training. Here we describe how crYOLO has evolved since its initial release. We have introduced filament picking, a new denoising technique, and a new graphical user interface. Moreover, we outline its usage in automated processing pipelines, which is an important advancement on the horizon of the field. Wagner and Raunser recently presented a deep learning-based particle picking program for cryo-EM, crYOLO. Here they discuss recent improvements to the program, including a graphical user interface, and share their thoughts on desired future developments.

Proceedings ArticleDOI
TL;DR: This work proposes a novel approach, OwlEye, based on deep learning for modelling visual information of the GUI screenshot, which can detect GUIs with display issues and also locate the specific region of the issue in a given GUI to guide developers in fixing the bug.
Abstract: The Graphical User Interface (GUI) provides a visual bridge between a software application and end users, through which they can interact with each other. With the development of technology and aesthetics, the visual effects of GUIs are increasingly attractive. However, such GUI complexity poses a great challenge to GUI implementation. According to our pilot study of crowdtesting bug reports, display issues such as text overlap, blurred screens, and missing images frequently occur during GUI rendering on different devices due to software or hardware compatibility problems. They negatively influence app usability, resulting in poor user experience. To detect these issues, we propose a novel approach, OwlEye, based on deep learning for modelling visual information of the GUI screenshot. OwlEye can thus detect GUIs with display issues and also locate the specific region of the issue in a given GUI to guide developers in fixing the bug. We manually construct a large-scale labelled dataset with 4,470 GUI screenshots with UI display issues and develop a heuristics-based data augmentation method for boosting the performance of our OwlEye. The evaluation demonstrates that OwlEye can achieve 85% precision and 84% recall in detecting UI display issues, and 90% accuracy in localizing these issues. We also evaluate OwlEye with popular Android apps on Google Play and F-droid, and successfully uncover 57 previously-undetected UI display issues, with 26 of them confirmed or fixed so far.
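The heuristics-based data augmentation mentioned in the abstract synthesizes display issues from clean screenshots. The Pillow sketch below shows one plausible form of such heuristics (blurring a region to mimic a blurred screen, overlaying text to mimic text overlap); it illustrates the idea rather than OwlEye's actual augmentation rules, and the file names are placeholders.

```python
import random
from PIL import Image, ImageDraw, ImageFilter  # pip install Pillow

def blur_region(img: Image.Image) -> Image.Image:
    """Mimic a 'blurred screen' issue by blurring a random rectangle."""
    w, h = img.size
    x0, y0 = random.randint(0, w // 2), random.randint(0, h // 2)
    box = (x0, y0, x0 + w // 3, y0 + h // 3)
    region = img.crop(box).filter(ImageFilter.GaussianBlur(radius=6))
    out = img.copy()
    out.paste(region, box)
    return out

def overlay_text(img: Image.Image) -> Image.Image:
    """Mimic a 'text overlap' issue by drawing text over existing content."""
    out = img.copy()
    draw = ImageDraw.Draw(out)
    w, h = out.size
    draw.text((random.randint(0, w // 2), random.randint(0, h // 2)),
              "Lorem ipsum dolor sit amet", fill=(0, 0, 0))
    return out

# Placeholder input; any clean GUI screenshot works.
screenshot = Image.open("clean_screenshot.png").convert("RGB")
blur_region(screenshot).save("aug_blurred.png")
overlay_text(screenshot).save("aug_overlap.png")
```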



Proceedings ArticleDOI
08 Nov 2020
TL;DR: User Interface Element Detection (UIED) is a toolkit designed to provide users with a simple and easy-to-use platform for accurate GUI element detection, exporting the detected UI elements in a GUI image to design files that can be further edited in popular UI design tools.
Abstract: Graphical User Interface (GUI) element detection is critical for many GUI automation and GUI testing tasks. Acquiring the accurate positions and classes of GUI elements is also the very first step in GUI reverse engineering or GUI testing. In this paper, we implement User Interface Element Detection (UIED), a toolkit designed to provide users with a simple and easy-to-use platform for accurate GUI element detection. UIED integrates multiple detection methods, including old-fashioned computer vision (CV) approaches and deep learning models, to handle diverse and complicated GUI images. In addition, it is equipped with a novel customized GUI element detection method that produces state-of-the-art detection results. Our tool enables the user to change and edit the detection result in an interactive dashboard. Finally, it exports the detected UI elements in the GUI image to design files that can be further edited in popular UI design tools such as Sketch and Photoshop. UIED is evaluated to be capable of accurate detection and useful for downstream tasks. Tool URL: http://uied.online Github Link: https://github.com/MulongXie/UIED
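Because UIED combines a non-text element detector with a separate text detector, the two sets of boxes have to be merged. The sketch below shows one simple merging rule based on intersection-over-union, dropping non-text boxes that overlap a text box too heavily; this is a generic illustration of that step, not UIED's published code, and the example boxes are made up.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax0, ay0, ax1, ay1 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx0, by0, bx1, by1 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax1, bx1) - max(ax0, bx0))
    iy = max(0, min(ay1, by1) - max(ay0, by0))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def merge(non_text, text, threshold=0.5):
    """Keep all text boxes, plus non-text boxes that do not overlap text too much."""
    kept = [("text", t) for t in text]
    for box in non_text:
        if all(iou(box, t) < threshold for t in text):
            kept.append(("element", box))
    return kept

# Hypothetical detector outputs.
non_text_boxes = [(10, 10, 100, 40), (10, 60, 300, 50)]
text_boxes = [(12, 12, 96, 36)]
print(merge(non_text_boxes, text_boxes))
```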


Proceedings ArticleDOI
01 Jun 2020
TL;DR: The results show that the optimised placement can improve performance by up to 20% with respect to manual execution of the same task, and reveal the high potential of MOCA for increasing the performance of collaborative palletizing tasks.
Abstract: In this paper, a novel human-robot collaborative framework for mixed case palletizing is presented. The framework addresses several challenges associated with the detection and localisation of boxes and pallets through visual perception algorithms, high-level optimisation of the collaborative effort through effective role-allocation principles, and maximisation of packing density. A graphical user interface (GUI) is additionally developed to ensure an intuitive allocation of roles and the optimal placement of the boxes on target pallets. The framework is evaluated in two conditions where humans operate with and without the support of a Mobile COllaborative robotic Assistant (MOCA). The results show that the optimised placement can improve performance by up to 20% with respect to manual execution of the same task, and reveal the high potential of MOCA for increasing the performance of collaborative palletizing tasks.
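Packing density is one of the objectives such a framework optimizes when proposing box placements on a pallet. As a rough illustration of that objective, the sketch below runs a greedy "shelf" packing of rectangular box footprints onto a pallet footprint and reports the resulting density; it is not the optimization used by MOCA, and the dimensions are invented.

```python
def shelf_pack(pallet_w, pallet_d, boxes):
    """Greedy shelf packing of (width, depth) footprints onto one pallet layer."""
    placements, x, y, shelf_depth = [], 0, 0, 0
    for w, d in sorted(boxes, key=lambda b: b[1], reverse=True):  # deepest first
        if x + w > pallet_w:            # start a new shelf (row)
            x, y = 0, y + shelf_depth
            shelf_depth = 0
        if y + d > pallet_d:            # no room left on this layer
            break
        placements.append((x, y, w, d))
        x += w
        shelf_depth = max(shelf_depth, d)
    used = sum(w * d for _, _, w, d in placements)
    return placements, used / (pallet_w * pallet_d)

# Hypothetical EUR-pallet footprint (cm) and mixed-case box footprints.
layout, density = shelf_pack(120, 80, [(40, 30), (40, 30), (60, 40), (30, 20), (50, 40)])
print(layout, f"density={density:.0%}")
```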


Journal ArticleDOI
TL;DR: The results show an overall positive impression of the FriWalk and an evident flexibility and adaptability of its guidance system across different categories of users (e.g., with or without visual impairments); the implications of these findings for service social robotics are discussed.
Abstract: The present study aims to investigate the interaction between older adults and a robotic walker named FriWalk, which has the capability to act as a navigation support and to guide the user through indoor environments along a planned path. To this purpose, we developed a guidance system named Simulated Passivity, which leaves the responsibility for locomotion to the user, both to increase the mobility of older users and to enhance their perception of control over the robot. Moreover, the robotic walker can be integrated with a tablet and graphical user interface (GUI) which provides visual indications to the user on the path to follow. Since the FriWalk and Simulated Passivity were developed to suit the needs of users with different deficits, we conducted a human–robot interaction experiment, complemented with direct interviews of the participants. The goals of the present work were to observe the relation between elders (with and without visual impairments) and the robot in completing a path (with and without the support of the GUI), and to collect the older adult participants' impressions of the interaction. Our results show an overall positive impression of the FriWalk and an evident flexibility and adaptability of its guidance system across different categories of users (e.g., with or without visual impairments). In the paper, we discuss the implications of these findings for service social robotics.

Book ChapterDOI
01 Jan 2020
TL;DR: SPSS AMOS provides an intuitive graphical or programmatic user interface for evaluating complex relationships among constructs and is extensively used by researchers for multivariate analysis, integrating various multivariate analysis methods.
Abstract: AMOS is a highly popular software package with a unique graphical user interface (GUI) for solving structural equation modelling problems. The software is developed by IBM and SPSS Inc. Before 2003, AMOS software was a part of SmallWaters Corp. AMOS is extensively used by researchers for multivariate analysis, integrating various multivariate analysis methods such as regression, factor analysis, correlation and analysis of variance. AMOS provides an intuitive graphical or programmatic user interface for evaluating complex relationships among constructs. SPSS AMOS is available for the Windows operating system.


Proceedings ArticleDOI
01 Jul 2020
TL;DR: SUGILITE is an intelligent task automation agent that can learn new tasks and relevant associated concepts interactively from the user’s natural language instructions and demonstrations, using the graphical user interfaces (GUIs) of third-party mobile apps.
Abstract: We summarize our past five years of work on designing, building, and studying Sugilite, an interactive task learning agent that can learn new tasks and relevant associated concepts interactively from the user's natural language instructions and demonstrations, leveraging the graphical user interfaces (GUIs) of third-party mobile apps. Through its multi-modal and mixed-initiative approaches to human-AI interaction, Sugilite has made important contributions in improving the usability, applicability, generalizability, flexibility, robustness, and shareability of interactive task learning agents. Sugilite also represents a new human-AI interaction paradigm for interactive task learning, in which existing app GUIs serve as a medium for users to communicate their intents to an AI agent, rather than merely as interfaces for interacting with the underlying computing services. In this chapter, we describe the Sugilite system, explain the design and implementation of its key features, and show a prototype in the form of a conversational assistant on Android.

Journal ArticleDOI
TL;DR: REMAP is a visual analytics tool that allows a model builder to discover a deep learning model quickly via exploration and rapid experimentation of neural network architectures through visual exploration and user-defined semi-automated searches through the model space.
Abstract: The performance of deep learning models is dependent on the precise configuration of many layers and parameters. However, there are currently few systematic guidelines for how to configure a successful model. This means model builders often have to experiment with different configurations by manually programming different architectures (which is tedious and time consuming) or rely on purely automated approaches to generate and train the architectures (which is expensive). In this paper, we present Rapid Exploration of Model Architectures and Parameters, or REMAP, a visual analytics tool that allows a model builder to discover a deep learning model quickly via exploration and rapid experimentation of neural network architectures. In REMAP, the user explores the large and complex parameter space for neural network architectures using a combination of global inspection and local experimentation. Through a visual overview of a set of models, the user identifies interesting clusters of architectures. Based on their findings, the user can run ablation and variation experiments to identify the effects of adding, removing, or replacing layers in a given architecture and generate new models accordingly. They can also handcraft new models using a simple graphical interface. As a result, a model builder can build deep learning models quickly, efficiently, and without manual programming. We inform the design of REMAP through a design study with four deep learning model builders. Through a use case, we demonstrate that REMAP allows users to discover performant neural network architectures efficiently using visual exploration and user-defined semi-automated searches through the model space.
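The ablation and variation experiments described here generate candidate architectures by removing or replacing layers of a baseline model. The sketch below illustrates that generation step with architectures represented as lists of layer specs; the layer vocabulary and baseline are invented for illustration, and in a REMAP-like workflow each candidate would then be trained and compared visually.

```python
from itertools import product

# A hypothetical baseline architecture as an ordered list of layer specs.
baseline = ["conv3x3-32", "conv3x3-64", "maxpool", "dense-128", "softmax"]

def ablations(arch):
    """Variants with one intermediate layer removed (keep the input/output ends)."""
    return [arch[:i] + arch[i + 1:] for i in range(1, len(arch) - 1)]

def variations(arch, alternatives=("conv5x5-64", "dense-256")):
    """Variants with one intermediate layer replaced by an alternative spec."""
    return [arch[:i] + [alt] + arch[i + 1:]
            for i, alt in product(range(1, len(arch) - 1), alternatives)]

for candidate in ablations(baseline) + variations(baseline):
    print(candidate)
```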

Journal ArticleDOI
TL;DR: An eye-tracking experiment with users was performed to examine how people experience interaction through the graphical user interface, and several valuable recommendations for the design of interactive multimedia maps were presented.
Abstract: The purpose of this article is to show the differences in users' experience when performing an interactive task with GUI button arrangements based on Google Maps and OpenStreetMap in a simulation environment. The graphical user interface is part of an interactive multimedia map, and the interaction experience depends mainly on it. For this reason, we performed an eye-tracking experiment with users to examine how people experience interaction through the GUI. Based on the results related to eye movement, we present several valuable recommendations for the design of interactive multimedia maps. For better GUI efficiency, it is advisable to group buttons with similar functions in screen corners: users first analyze corners and only then search for the desired button. The frequency of using a given web map does not translate into generally better performance while using any GUI; users perform more efficiently if they work with their preferred GUI.

Journal ArticleDOI
TL;DR: The Tcl/Tk toolkit is used to develop a user-friendly GUI for VMD, named Molcontroller, which allows users to quickly and conveniently perform various molecular manipulations.
Abstract: Visual Molecular Dynamics (VMD) is one of the most widely used molecular graphics programs in the theoretical simulation community. So far, however, it still lacks a graphical user interface (GUI) for molecular manipulations when performing certain modeling tasks. For instance, translation or rotation of selected molecules or parts of a molecule can currently only be achieved using Tcl scripts. Here, we use the Tcl/Tk toolkit to develop a user-friendly GUI for VMD, named Molcontroller, which allows users to quickly and conveniently perform various molecular manipulations. This GUI might be helpful for improving the modeling efficiency of VMD users.
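Underneath a manipulation GUI like this, translating or rotating a selection amounts to applying a rigid transform to the selected atomic coordinates. The NumPy sketch below shows that geometry on made-up coordinates; it does not use Molcontroller's Tcl implementation or VMD's API.

```python
import numpy as np

def rotate_about_center(coords: np.ndarray, angle_deg: float) -> np.ndarray:
    """Rotate Nx3 coordinates about the z axis through their geometric center."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    center = coords.mean(axis=0)
    return (coords - center) @ rot.T + center

# Made-up coordinates for a three-atom selection (angstroms).
selection = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.5, 0.0]])
moved = rotate_about_center(selection, 90.0) + np.array([0.0, 0.0, 2.0])  # rotate, then translate along z
print(moved.round(3))
```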


Patent
29 Apr 2020
TL;DR: In this article, the design of a display panel with graphical user interface is described, which consists of the features of shape, ornament, pattern and configuration of the portion of the display shown in solid lines in the drawings.
Abstract: The design consists of the features of shape, ornament, pattern and configuration of the portion of the DISPLAY PANEL WITH GRAPHICAL USER INTERFACE shown in solid lines in the drawings. The portions shown in stippled lines do not form a part of the design. Drawings of the design are enclosed in which: Figure 1 is a front view of a DISPLAY PANEL WITH GRAPHICAL USER INTERFACE showing the design; and Figure 2 is a front view of a variant of the design shown in Figure 1. Drawings of the design are included.