Showing papers on "Graphical user interface" published in 2019


Journal ArticleDOI
TL;DR: This protocol describes how to use an open-source toolbox, DeepLabCut, to train a deep neural network to precisely track user-defined features with limited training data, which allows noninvasive behavioral tracking of movement.
Abstract: Noninvasive behavioral tracking of animals during experiments is critical to many scientific pursuits. Extracting the poses of animals without using markers is often essential to measuring behavioral effects in biomechanics, genetics, ethology, and neuroscience. However, extracting detailed poses without markers in dynamically changing backgrounds has been challenging. We recently introduced an open-source toolbox called DeepLabCut that builds on a state-of-the-art human pose-estimation algorithm to allow a user to train a deep neural network with limited training data to precisely track user-defined features that match human labeling accuracy. Here, we provide an updated toolbox, developed as a Python package, that includes new features such as graphical user interfaces (GUIs), performance improvements, and active-learning-based network refinement. We provide a step-by-step procedure for using DeepLabCut that guides the user in creating a tailored, reusable analysis pipeline with a graphical processing unit (GPU) in 1–12 h (depending on frame size). Additionally, we provide Docker environments and Jupyter Notebooks that can be run on cloud resources such as Google Colaboratory. This protocol describes how to use an open-source toolbox, DeepLabCut, to train a deep neural network to precisely track user-defined features with limited training data. This allows noninvasive behavioral tracking of movement.

674 citations
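The protocol above is driven by a small set of Python API calls. A minimal sketch of that workflow, assuming the DeepLabCut 2.x package API as documented (project name, experimenter, video paths, and iteration counts are placeholders):

```python
# Hypothetical end-to-end sketch of the DeepLabCut 2.x workflow described above.
# Paths, project/experimenter names, and video files are placeholders; function
# names follow the DeepLabCut Python API as documented for version 2.x.
import deeplabcut

videos = ["/data/mouse_reach_01.mp4"]  # placeholder video list

# 1. Create a project; this returns the path to the project's config.yaml.
config = deeplabcut.create_new_project("reach-task", "lab-user", videos,
                                       copy_videos=True)

# 2. Extract and label a small set of frames (labeling opens the GUI).
deeplabcut.extract_frames(config, mode="automatic", algo="kmeans")
deeplabcut.label_frames(config)

# 3. Create the training set, then train and evaluate the network (GPU recommended).
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config, maxiters=200000)
deeplabcut.evaluate_network(config)

# 4. Apply the trained network to new videos and write an annotated copy.
deeplabcut.analyze_videos(config, videos, videotype=".mp4")
deeplabcut.create_labeled_video(config, videos)
```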


Journal ArticleDOI
Vassilis Angelopoulos1, P. Cruce1, Alexander Drozdov1, Eric Grimes1, N. Hatzigeorgiu2, D. A. King2, Davin Larson2, James W. Lewis2, J. M. McTiernan2, D. A. Roberts3, C. L. Russell1, Tomoaki Hori4, Yoshiya Kasahara5, Atsushi Kumamoto6, Ayako Matsuoka, Yukinaga Miyashita7, Yoshizumi Miyoshi4, I. Shinohara, Mariko Teramoto4, Jeremy Faden, Alexa Halford8, Matthew D. McCarthy9, Robyn Millan10, John Sample11, David M. Smith12, L. A. Woodger10, Arnaud Masson, A. A. Narock3, Kazushi Asamura, T. F. Chang4, C. Y. Chiang13, Yoichi Kazama14, Kunihiro Keika15, S. Matsuda4, Tomonori Segawa4, Kanako Seki15, Masafumi Shoji4, Sunny W. Y. Tam13, Norio Umemura4, B. J. Wang16, B. J. Wang14, Shiang-Yu Wang14, Robert J. Redmon17, Juan V. Rodriguez17, Juan V. Rodriguez18, Howard J. Singer17, Jon Vandegriff19, S. Abe20, Masahito Nose21, Masahito Nose4, Atsuki Shinbori4, Yoshimasa Tanaka22, S. UeNo21, L. Andersson23, P. Dunn2, Christopher M. Fowler23, Jasper Halekas24, Takuya Hara2, Yuki Harada21, Christina O. Lee2, Robert Lillis2, David L. Mitchell2, Matthew R. Argall25, Kenneth R. Bromund3, James L. Burch26, Ian J. Cohen19, Michael Galloy27, Barbara L. Giles3, Allison Jaynes24, O. Le Contel28, Mitsuo Oka2, T. D. Phan2, Brian Walsh29, Joseph Westlake19, Frederick Wilder23, Stuart D. Bale2, Roberto Livi2, Marc Pulupa2, Phyllis Whittlesey2, A. DeWolfe23, Bryan Harter23, E. Lucas23, U. Auster30, John W. Bonnell2, Christopher Cully31, Eric Donovan31, Robert E. Ergun23, Harald U. Frey2, Brian Jackel31, A. Keiling2, Haje Korth19, J. P. McFadden2, Yukitoshi Nishimura29, Ferdinand Plaschke32, P. Robert28, Drew Turner8, James M. Weygand1, Robert M. Candey3, R. C. Johnson3, T. Kovalick3, M. H. Liu3, R. E. McGuire3, Aaron Breneman33, Kris Kersten33, P. Schroeder2 
TL;DR: The SPEDAS development history, goals, and current implementation are reviewed; its "modes of use" are explained with examples geared for users, and its technical implementation and requirements are outlined with software developers in mind.
Abstract: With the advent of the Heliophysics/Geospace System Observatory (H/GSO), a complement of multi-spacecraft missions and ground-based observatories to study the space environment, data retrieval, analysis, and visualization of space physics data can be daunting. The Space Physics Environment Data Analysis System (SPEDAS), a grass-roots software development platform ( www.spedas.org ), is now officially supported by NASA Heliophysics as part of its data environment infrastructure. It serves more than a dozen space missions and ground observatories and can integrate the full complement of past and upcoming space physics missions with minimal resources, following clear, simple, and well-proven guidelines. Free, modular, and configurable to the needs of individual missions, it works in both command-line (ideal for experienced users) and Graphical User Interface (GUI) mode (reducing the learning curve for first-time users). Both options have "crib-sheets," user-command sequences in ASCII format that can facilitate record-and-repeat actions, especially for complex operations and plotting. Crib-sheets enhance scientific interactions, as users can move rapidly and accurately from exchanges of technical information on data processing to efficient discussions regarding data interpretation and science. SPEDAS can readily query and ingest all International Solar Terrestrial Physics (ISTP)-compatible products from the Space Physics Data Facility (SPDF), enabling access to a vast collection of historic and current mission data. The planned incorporation of Heliophysics Application Programmer's Interface (HAPI) standards will facilitate data ingestion from distributed datasets that adhere to these standards. Although SPEDAS is currently Interactive Data Language (IDL)-based (and interfaces to Java-based tools such as Autoplot), efforts are underway to expand it further to work with Python (first as an interface tool and potentially even receiving an under-the-hood replacement). We review the SPEDAS development history, goals, and current implementation. We explain its "modes of use" with examples geared for users and outline its technical implementation and requirements with software developers in mind. We also describe SPEDAS personnel and software management, interfaces with other organizations, resources and support structure available to the community, and future development plans.

371 citations
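The crib-sheet style mentioned above carries over to the Python effort (pySPEDAS). A hedged sketch of what such a command sequence looks like, assuming the pySPEDAS/pytplot interface described in its documentation (mission loader, time range, probe, and tplot variable name are illustrative):

```python
# A crib-sheet-style sketch using pySPEDAS, the Python interface mentioned in the
# abstract. The time range, probe, and tplot variable name are illustrative; the
# loader and plotting calls follow the pySPEDAS/pytplot documentation.
import pyspedas
from pytplot import tplot

# Load THEMIS fluxgate magnetometer data for probe A over one day.
fgm_vars = pyspedas.themis.fgm(probe="a", trange=["2007-03-23", "2007-03-24"])

# Plot one of the loaded tplot variables (spin-fit field in GSE coordinates).
tplot("tha_fgs_gse")
```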



Journal ArticleDOI
TL;DR: Sequenceserver is a tool for running BLAST and visually inspecting BLAST results for biological interpretation and uses simple algorithms to prevent potential analysis errors and provides flexible text-based and visual outputs to support researcher productivity.
Abstract: Comparing newly obtained and previously known nucleotide and amino-acid sequences underpins modern biological research. BLAST is a well-established tool for such comparisons but is challenging to use on new data sets. We combined a user-centric design philosophy with sustainable software development approaches to create Sequenceserver, a tool for running BLAST and visually inspecting BLAST results for biological interpretation. Sequenceserver uses simple algorithms to prevent potential analysis errors and provides flexible text-based and visual outputs to support researcher productivity. Our software can be rapidly installed for use by individuals or on shared servers.

124 citations




Journal ArticleDOI
TL;DR: This research presents a novel and scalable approach called “Smart Towns” to solve the challenge of integrating bioinformatics and data science into the design and engineering of smart devices.
Author affiliations: 1 Design Lab, UC San Diego, La Jolla, California, United States of America, 2 Center for Computational Biology and Bioinformatics, UC San Diego, La Jolla, California, United States of America, 3 Department of Pediatrics, UC San Diego, La Jolla, California, United States of America, 4 Data Science Hub, San Diego Supercomputer Center, UC San Diego, La Jolla, California, United States of America, 5 Departments of Bioengineering, and Computer Science and Engineering, and Center for Microbiome Innovation, UC San Diego, La Jolla, California, United States of America, 6 Bioinformatics and Systems Biology Graduate Program, UC San Diego, La Jolla, California, United States of America, 7 Department of Statistics and Berkeley Institute for Data Science, UC Berkeley, and Lawrence Berkeley National Laboratory, Berkeley, California, United States of America

88 citations


Journal ArticleDOI
TL;DR: The main strengths of Tracktor lie in its ability to track single individuals under noisy conditions, its robustness to perturbations, and its capacity to track multiple unmarked individuals while maintaining their identities.
Abstract: 1. Automated movement tracking is essential for high-throughput quantitative analyses of the behaviour and kinematics of organisms. Automated tracking also improves replicability by avoiding observer biases and allowing reproducible workflows. However, few automated tracking programs exist that are open access, open source, and capable of tracking unmarked organisms in noisy environments. 2. Tracktor is an image-based tracking freeware designed to perform single-object tracking in noisy environments, or multi-object tracking in uniform environments while maintaining individual identities. Tracktor is code-based but requires no coding skills other than the user being able to specify tracking parameters in a designated location, much like in a graphical user interface (GUI). The installation and use of the software is fully detailed in a user manual. 3. Through four examples of common tracking problems, we show that Tracktor is able to track a variety of animals in diverse conditions. The main strengths of Tracktor lie in its ability to track single individuals under noisy conditions (e.g. when the object shape is distorted), its robustness to perturbations (e.g. changes in lighting conditions during the experiment), and its capacity to track multiple individuals while maintaining their identities. Additionally, summary statistics and plots allow measuring and visualizing common metrics used in the analysis of animal movement (e.g. cumulative distance, speed, acceleration, activity, time spent in specific areas, distance to neighbour, etc.). 4. Tracktor is a versatile, reliable, easy-to-use automated tracking software that is compatible with all operating systems and provides many features not available in other existing freeware. Access Tracktor and the complete user manual here: https://github.com/vivekhsridhar/tracktor

76 citations
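Tracktor itself ships as a short, parameterized Python script rather than a GUI application. The following is not Tracktor's code, but a minimal OpenCV sketch of the adaptive-threshold-and-contour approach such trackers rely on (assumes OpenCV 4.x; the block size, offset, area limits, and video path are placeholders the user would tune):

```python
# Not Tracktor's source code: a minimal OpenCV sketch of the adaptive-threshold +
# contour-detection approach that Tracktor-style trackers use. Parameter values
# (block size, offset, area limits) are arbitrary placeholders to be tuned.
import cv2

BLOCK_SIZE, OFFSET = 51, 20        # adaptive-threshold parameters (block size must be odd)
MIN_AREA, MAX_AREA = 50, 5000      # accepted object size range in pixels

cap = cv2.VideoCapture("example_video.mp4")  # placeholder path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, BLOCK_SIZE, OFFSET)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if MIN_AREA <= cv2.contourArea(c) <= MAX_AREA:
            m = cv2.moments(c)
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # object centroid
            print(cx, cy)
cap.release()
```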





Proceedings ArticleDOI
17 Oct 2019
TL;DR: Sketch-n-Sketch, as discussed by the authors, is an output-directed programming system for creating vector graphics, in which direct manipulation of the program's graphical output corresponds to writing code in a general-purpose programming language, and edits not possible with the mouse can still be enacted through ordinary text edits to the program.
Abstract: For creative tasks, programmers face a choice: Use a GUI and sacrifice flexibility, or write code and sacrifice ergonomics? To obtain both flexibility and ease of use, a number of systems have explored a workflow that we call output-directed programming. In this paradigm, direct manipulation of the program's graphical output corresponds to writing code in a general-purpose programming language, and edits not possible with the mouse can still be enacted through ordinary text edits to the program. Such capabilities provide hope for integrating graphical user interfaces into what are currently text-centric programming environments. To further advance this vision, we present a variety of new output-directed techniques that extend the expressive power of Sketch-n-Sketch, an output-directed programming system for creating programs that generate vector graphics. To enable output-directed interaction at more stages of program construction, we expose intermediate execution products for manipulation and we present a mechanism for contextual drawing. Looking forward to output-directed programming beyond vector graphics, we also offer generic refactorings through the GUI, and our techniques employ a domain-agnostic provenance tracing scheme. To demonstrate the improved expressiveness, we implement a dozen new parametric designs in Sketch-n-Sketch without text-based edits. Among these is the first demonstration of building a recursive function in an output-directed programming setting.
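Sketch-n-Sketch is not implemented in Python, and its provenance machinery is far richer than what fits here; the toy sketch below only illustrates the domain-agnostic idea that output values remember which program expressions produced them, so a direct-manipulation edit can be traced back to candidate code locations (all names are hypothetical):

```python
# Toy illustration (not Sketch-n-Sketch's implementation): values carry provenance
# tags naming the program expressions that produced them, so a direct-manipulation
# edit on the output can be attributed back to source code.
from dataclasses import dataclass

@dataclass(frozen=True)
class Traced:
    value: float
    sources: frozenset  # names of program expressions this value depends on

    def __add__(self, other):
        return Traced(self.value + other.value, self.sources | other.sources)

    def __mul__(self, other):
        return Traced(self.value * other.value, self.sources | other.sources)

# "Program": two named parameters and an output x-coordinate derived from them.
margin = Traced(10.0, frozenset({"margin"}))
width = Traced(40.0, frozenset({"width"}))
x_right = margin + width + width        # appears in the rendered output

# A GUI drag of x_right is attributed to the expressions it came from,
# telling the system which code locations are candidates for the edit.
print(x_right.value, sorted(x_right.sources))   # 90.0 ['margin', 'width']
```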


Proceedings ArticleDOI
10 Jul 2019
TL;DR: This work proposes a technique for improving GUI testing by automatically identifying GUI widgets in screen shots using machine learning techniques; the resulting model provides guidance to GUI testing tools in environments not currently supported, by deriving GUI widget information from screen shots only.
Abstract: Graphical User Interfaces (GUIs) are amongst the most common user interfaces, enabling interactions with applications through mouse movements and key presses. Tools for automated testing of programs through their GUI exist; however, they usually rely on operating system or framework specific knowledge to interact with an application. Due to frequent operating system updates, which can remove required information, and a large variety of different GUI frameworks using unique underlying data structures, such tools rapidly become obsolete. Consequently, for an automated GUI test generation tool, supporting many frameworks and operating systems is impractical. We propose a technique for improving GUI testing by automatically identifying GUI widgets in screen shots using machine learning techniques. As training data, we generate randomized GUIs to automatically extract widget information. The resulting model provides guidance to GUI testing tools in environments not currently supported by deriving GUI widget information from screen shots only. In our experiments, we found that identifying GUI widgets in screen shots and using this information to guide random testing achieved a significantly higher branch coverage in 18 of 20 applications, with an average increase of 42.5% when compared to conventional random testing.
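A hedged sketch of the guidance loop described above, not the authors' tool: screenshots are passed to a widget detector trained on randomized GUIs, and random testing then clicks the center of a detected widget. The detect_widgets function is a hypothetical placeholder for such a model; screenshot and click helpers use pyautogui:

```python
# Sketch of the guidance loop described above, not the authors' tool. The widget
# detector is a placeholder for a model trained on randomized GUIs; screenshot and
# click helpers use pyautogui. detect_widgets() is hypothetical.
import random
import pyautogui

def detect_widgets(image):
    """Hypothetical: run the trained model and return widget bounding boxes
    as (left, top, width, height) tuples."""
    raise NotImplementedError

def random_widget_click():
    screenshot = pyautogui.screenshot()          # capture the current GUI state
    boxes = detect_widgets(screenshot)           # model-predicted widget locations
    if not boxes:
        return
    left, top, width, height = random.choice(boxes)
    pyautogui.click(left + width / 2, top + height / 2)  # click the widget center

for _ in range(100):   # random testing guided by detected widgets
    random_widget_click()
```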


Journal ArticleDOI
TL;DR: A reflective analysis on the experience of virtual environment (VE) design is presented, leading to proposals for presenting HCI and cognitive knowledge in the context of design trade-offs in the choice of VR design techniques.
Abstract: A reflective analysis on the experience of virtual environment (VE) design is presented focusing on the human–computer interaction (HCI) challenges presented by virtual reality (VR). HCI design gui...



Journal ArticleDOI
TL;DR: To properly assess the hygrothermal properties of walls located in historic buildings, this study discloses the development of a remote sensing technology compatible with an in-situ measurement implemented in Palazzo Tassoni (Italy).
Abstract: To properly assess the hygrothermal properties of walls located in historic buildings, this study discloses the development of a remote sensing technology compatible with in-situ measurement, implemented in Palazzo Tassoni (Italy). As required by the international recommendations adapted to cultural heritage (CH), this monitoring system balances CH conservation, performance aspects, and economic costs using an integrated multidisciplinary approach. The electronics for measuring environmental parameters comprise the sensors, a data acquisition system, and a data storage and communication system. The data acquisition system, equipped with a standard Modbus-RTU interface, is designed to run standalone and is based on two cloned single-board PCs to reduce the possibility of data loss. To reduce costs, Raspberry Pi single-board PCs were chosen. These run C/C++ software based on a standard Modbus library that implements a multi-client TCP/IP server to allow communication with other devices. The storage and communication system is based on an industrial PC; it communicates with the sensor measurement system through a Modbus-TCP/IP bridge. The PC runs LabVIEW software to provide data storage in a local database and a graphical user interface to view all acquired data. Herein, some sensing options and approaches of measurement are described, unveiling different possible ways of enhancing the retrofit of CH with adapted technology.
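The acquisition software described above is written in C/C++, with LabVIEW on the storage PC; purely to illustrate the Modbus-TCP readout pattern it relies on, here is a Python sketch assuming the pymodbus 3.x client API (host address, register addresses, slave id, and scaling are placeholders):

```python
# Illustration only: the system described uses C/C++ and LabVIEW, but the Modbus-TCP
# readout pattern can be sketched with pymodbus (assuming pymodbus 3.x). The host,
# register addresses, slave id, and scaling factors are placeholders.
from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.168.0.10", port=502)   # data acquisition bridge
client.connect()

# Read two holding registers holding, e.g., temperature and relative humidity.
result = client.read_holding_registers(address=0, count=2, slave=1)
if not result.isError():
    temperature_c = result.registers[0] / 10.0       # placeholder scaling
    humidity_pct = result.registers[1] / 10.0
    print(temperature_c, humidity_pct)

client.close()
```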




Proceedings ArticleDOI
02 Mar 2019
TL;DR: A graphical user interface containing menu-guided instructions and inspection documentation is proposed to increase the efficiency of manual processes, and ideas for the integration of further analysis functions into the interface are provided.
Abstract: In this paper, we present a concept study to facilitate maintenance of an operating aircraft based on its lifelong collected data, called a Digital Twin. It demonstrates a damage assessment scenario on a real aircraft component. We propose a graphical user interface that contains menu-guided instructions and inspection documentation to increase the efficiency of manual processes. Furthermore, experts located at different sites can join via a virtual session. By inspecting a 3D model of the aircraft component, they can see synchronized information from a Digital Twin database. With Augmented Reality glasses, the Microsoft HoloLens, a Digital Twin can be experienced personally. In the inspector's view, the 3D model of the Digital Twin is directly superimposed on the physical component. This Mixed Reality vision can be used for inspection purposes. Any inspection-related information can be directly attached to the component. For example, damage locations are marked by the inspector on the component's surface and are stored in the Digital Twin database. Our scenario demonstrates how new information can be derived from the combination of collected data and analyses from the Digital Twin database. This information is used to maintain the continued airworthiness of the aircraft. Feedback from domain-related engineers confirms that our interface has enormous potential for solving current maintenance problems in the aviation industry. Additionally, our study provides ideas for the integration of further analysis functions into the interface.
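As a rough illustration of the kind of record the abstract describes storing in the Digital Twin database, a hypothetical schema (not the authors' data model; all field names are assumptions):

```python
# Hypothetical schema, not the authors' database model: one inspector-marked damage
# record as it might be stored in a Digital Twin database for an aircraft component.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DamageRecord:
    component_id: str                 # which aircraft component the mark belongs to
    surface_point: tuple              # (x, y, z) location on the component's 3D model
    description: str                  # inspector's note, e.g. "dent, ~2 mm deep"
    inspector: str
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = DamageRecord("wing-flap-L3", (0.42, 1.10, 0.03), "surface dent, ~2 mm", "J. Doe")
print(record)
```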

Journal ArticleDOI
TL;DR: Visbrain is a Python open-source package that offers a comprehensive visualization suite for neuroimaging and electrophysiological brain data and is developed on top of VisPy, a Python package providing high-performance 2D and 3D visualization by leveraging the computational power of the graphics card.
Abstract: We present Visbrain, a Python open-source package that offers a comprehensive visualization suite for neuroimaging and electrophysiological brain data. Visbrain consists of two levels of abstraction: 1) objects, which represent highly configurable neuro-oriented visual primitives (3D brain, sources connectivity, etc.), and 2) graphical user interfaces for higher-level interactions. The object level offers flexible and modular tools to produce and automate the production of figures using an approach similar to that of Matplotlib with subplots. The second level visually connects these objects by controlling properties and interactions through graphical interfaces. The current release of Visbrain (version 0.4.2) contains 14 different objects and three responsive graphical user interfaces, built with PyQt: Signal, for the inspection of time-series and spectral properties; Brain, for any type of visualization involving a 3D brain; and Sleep, for polysomnographic data visualization and sleep analysis. Each module has been developed in tight collaboration with end-users, i.e. primarily neuroscientists and domain experts, who bring their experience to make Visbrain as transparent as possible to the recording modalities (e.g. intracranial EEG, scalp-EEG, MEG, anatomical and functional MRI). Visbrain is developed on top of VisPy, a Python package providing high-performance 2D and 3D visualization by leveraging the computational power of the graphics card. Visbrain is available on GitHub and comes with documentation, examples, and datasets (http://visbrain.org).
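A short sketch of the object level described above, following the pattern shown in the Visbrain documentation for version 0.4.x (the source coordinates are random placeholders):

```python
# A short sketch of Visbrain's object level, following the examples in its
# documentation (version 0.4.x); the random source coordinates are placeholders.
import numpy as np
from visbrain.objects import BrainObj, SceneObj, SourceObj

sc = SceneObj(bgcolor="black")                 # a scene works like a Matplotlib figure
b_obj = BrainObj("B1", translucent=True)       # one of the bundled 3D brain templates
xyz = np.random.uniform(-50, 50, (20, 3))      # placeholder source coordinates (mm)
s_obj = SourceObj("sources", xyz, color="red")

sc.add_to_subplot(b_obj, row=0, col=0)         # compose objects into subplots
sc.add_to_subplot(s_obj, row=0, col=0)
sc.preview()                                   # open the interactive view
```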

Journal ArticleDOI
TL;DR: It is proposed here that recent speech engineering advances present an opportunity to revolutionize the design of voice interactions, and that the path toward revolutionary new ubiquitous conversational voice interactions must be based on several evolutionary steps that build VUI heuristics off existing GUI design principles.
Abstract: The evolution of designing interactive interfaces has been rather incremental over the past few decades, largely focused on graphical user interfaces (GUIs), even as these extended from the desktop, to mobile or to wearables. Only recently can we engage in ubiquitous, ambient, and seamless interactions, as afforded by voice user interfaces (VUIs) such as smart speakers. We posit here that recent speech engineering advances present an opportunity to revolutionize the design of voice interactions. Yet current design guidelines or heuristics are heavily oriented towards GUI interaction, and thus may not fully facilitate the design of VUIs. We survey current research revealing the challenges of applying GUI design principles to this space, as well as critique efforts to develop VUI-specific heuristics. We use these to argue that the path toward revolutionary new ubiquitous conversational voice interactions must be based on several evolutionary steps that build VUI heuristics off existing GUI design principles.

Journal ArticleDOI
TL;DR: Usability analysis focused on retention, error rate, and convenience and showed that, although no difference between the two interfaces was recorded, students' perceived retention favored the tangible interface, which was perceived as more playful by all students and as more appropriate for collaborative work by older students and girls.

Journal ArticleDOI
TL;DR: QSWATMOD is equipped with functionalities that assist in storing and retrieving user and default configuration settings and parameter values and in performing the linkage and simulation processes, and it uses various geo-processing functionalities of QGIS.
Abstract: This article presents QSWATMOD, a QGIS-based graphical user interface for application and evaluation of SWAT-MODFLOW models. QSWATMOD includes: (i) pre-processing modules to prepare input data for model execution, (ii) configuration modules for SWAT-MODFLOW options, and (iii) post-processing modules to view and interpret model results. QSWATMOD, written in Python, creates linkage files between SWAT and MODFLOW models, runs a simulation, and displays results within the open source Quantum Geographic Information System (QGIS) environment. QSWATMOD is equipped with functionalities that assist in storing and retrieving user and default configuration settings and parameter values and in performing the linkage and simulation processes, and it uses various geo-processing functionalities (e.g., selection, intersection, union) of QGIS. The use of QSWATMOD is demonstrated through an application to the 471 km² Middle Bosque River Watershed in central Texas. As the number of SWAT-MODFLOW users grows worldwide, QSWATMOD can be a valuable tool to assist in creating and managing SWAT-MODFLOW projects.
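QSWATMOD's own modules are not reproduced here; as a generic illustration of the PyQGIS pattern a plugin like this relies on to display results inside QGIS, a hedged sketch (the shapefile path and layer name are placeholders; run inside the QGIS Python environment):

```python
# Not QSWATMOD code: a generic PyQGIS sketch of how a plugin can load a result
# shapefile into the current QGIS project for display. The path and layer name are
# placeholders; run inside the QGIS Python console or a plugin environment.
from qgis.core import QgsVectorLayer, QgsProject

layer = QgsVectorLayer("/path/to/swatmf_results.shp", "SWAT-MODFLOW results", "ogr")
if layer.isValid():
    QgsProject.instance().addMapLayer(layer)   # make the layer visible in QGIS
else:
    print("Failed to load result layer")
```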

Proceedings ArticleDOI
10 Jul 2019
TL;DR: An approach based on reinforcement learning is presented that automatically learns which interactions can be used for which elements and uses this information to guide test generation; in the evaluation, it shows improvements in statement coverage.
Abstract: When generating tests for graphical user interfaces, one central problem is to identify how individual UI elements can be interacted with—clicking, long- or right-clicking, swiping, dragging, typing, or more. We present an approach based on reinforcement learning that automatically learns which interactions can be used for which elements, and uses this information to guide test generation. We model the problem as an instance of the multi-armed bandit problem (MAB problem) from probability theory, and show how its traditional solutions work on test generation, with and without relying on previous knowledge. The resulting guidance yields higher coverage. In our evaluation, our approach shows improvements in statement coverage between 18% (when not using any previous knowledge) and 20% (when reusing previously generated models).
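A minimal epsilon-greedy sketch of the multi-armed bandit formulation described above (not the paper's implementation): each arm is an interaction type for a UI element, and the reward would come from the test harness, e.g. statements newly covered by the triggered interaction:

```python
# Minimal epsilon-greedy illustration of the MAB formulation described above (not
# the paper's implementation). Arms are interaction types for a UI element; the
# reward would come from the test harness, e.g. newly covered statements.
import random

class EpsilonGreedyBandit:
    def __init__(self, arms, epsilon=0.1):
        self.arms = list(arms)
        self.epsilon = epsilon
        self.counts = {a: 0 for a in self.arms}
        self.values = {a: 0.0 for a in self.arms}   # running mean reward per arm

    def select(self):
        if random.random() < self.epsilon:                    # explore
            return random.choice(self.arms)
        return max(self.arms, key=lambda a: self.values[a])   # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n   # incremental mean

# One bandit per widget; arms are the candidate interactions.
bandit = EpsilonGreedyBandit(["click", "long_click", "swipe", "type_text"])
for _ in range(50):
    action = bandit.select()
    reward = random.random()     # placeholder: real reward = coverage gained
    bandit.update(action, reward)
print(bandit.values)
```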


Proceedings ArticleDOI
11 Mar 2019
TL;DR: The design process of drone.io, a projected body-centric graphical user interface for human-drone interaction embedded on a drone to provide both input and output capabilities, is described, and it is reported that people were able to use the interface with little prior training.
Abstract: Drones are becoming ubiquitous and offer support to people in various tasks, such as photography, in increasingly interactive social contexts. We introduce drone.io, a projected body-centric graphical user interface for human-drone interaction. Using two simple gestures, users can interact with a drone in a natural manner. drone.io is the first human-drone graphical user interface embedded on a drone to provide both input and output capabilities. This paper describes the design process of drone.io. We present a proof-of-concept, drone-based implementation, as well as a fully functional prototype for a drone tour-guide scenario. We report drone.io's evaluation in three user studies (N = 27) and show that people were able to use the interface with little prior training. We contribute to the field of human-robot interaction and the growing field of human-drone interaction.