
Showing papers on "User interface" published in 2021


Journal ArticleDOI
TL;DR: Changes to the text-mining system, a new scoring-mode for physical interactions, as well as extensive user interface features for customizing, extending and sharing protein networks are described.
Abstract: Cellular life depends on a complex web of functional associations between biomolecules. Among these associations, protein-protein interactions are particularly important due to their versatility, specificity and adaptability. The STRING database aims to integrate all known and predicted associations between proteins, including both physical interactions and functional associations. To achieve this, STRING collects and scores evidence from a number of sources: (i) automated text mining of the scientific literature, (ii) databases of interaction experiments and annotated complexes/pathways, (iii) computational interaction predictions from co-expression and from conserved genomic context and (iv) systematic transfers of interaction evidence from one organism to another. STRING aims for wide coverage; the upcoming version 11.5 of the resource will contain more than 14 000 organisms. In this update paper, we describe changes to the text-mining system, a new scoring mode for physical interactions, as well as extensive user interface features for customizing, extending and sharing protein networks. In addition, we describe how to query STRING with genome-wide, experimental data, including the automated detection of enriched functionalities and potential biases in the user's query data. The STRING resource is available online, at https://string-db.org/.

3,253 citations
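
STRING is also queryable programmatically through its documented REST API. Below is a minimal Python sketch of pulling interaction partners for a protein; the endpoint layout and field names follow the public API docs but are worth re-checking at https://string-db.org/help/api/ before use.

```python
# A minimal sketch of querying STRING's REST API for interaction partners
# of a protein. Endpoint layout (/api/<format>/<method>) and parameter
# names follow the public API documentation; verify them at
# https://string-db.org/help/api/ before relying on this.
import requests

def string_partners(gene: str, species: int = 9606, limit: int = 10) -> list[dict]:
    """Fetch up to `limit` STRING interaction partners for `gene` (human by default)."""
    url = "https://string-db.org/api/json/interaction_partners"
    params = {"identifiers": gene, "species": species, "limit": limit}
    resp = requests.get(url, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()

for row in string_partners("TP53"):
    # Field names (preferredName_B, score) as documented for this endpoint.
    print(row["preferredName_B"], row["score"])
```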


Journal ArticleDOI
TL;DR: VASPKIT as mentioned in this paper is a command-line program that aims at providing a robust and user-friendly interface to perform high-throughput analysis of a variety of material properties from the raw data produced by the VASP code.

1,357 citations


Journal ArticleDOI
TL;DR: CellProfiler 4 as discussed by the authors is a new version of the software with expanded functionality; based on user feedback, the authors made several user interface refinements to improve its usability.
Abstract: Background Imaging data contains a substantial amount of information which can be difficult to evaluate by eye. With the expansion of high throughput microscopy methodologies producing increasingly large datasets, automated and objective analysis of the resulting images is essential to effectively extract biological information from this data. CellProfiler is a free, open source image analysis program which enables researchers to generate modular pipelines with which to process microscopy images into interpretable measurements. Results Herein we describe CellProfiler 4, a new version of this software with expanded functionality. Based on user feedback, we have made several user interface refinements to improve the usability of the software. We introduced new modules to expand the capabilities of the software. We also evaluated performance and made targeted optimizations to reduce the time and cost associated with running common large-scale analysis pipelines. Conclusions CellProfiler 4 provides significantly improved performance in complex workflows compared to previous versions. This release will ensure that researchers will have continued access to CellProfiler's powerful computational tools in the coming years.

268 citations
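
For the large-scale analyses the abstract describes, CellProfiler pipelines can also run headlessly. A hedged sketch of invoking the CLI from Python follows; the flags reflect CellProfiler's documented command-line interface, but verify them against your installed version.

```python
# Hedged sketch: run a CellProfiler pipeline headlessly from Python.
# Flags per CellProfiler's documented CLI (-c headless, -r run pipeline,
# -p pipeline file, -i input image folder, -o output folder); confirm
# against `cellprofiler --help` for your version.
import subprocess

subprocess.run(
    ["cellprofiler", "-c", "-r",
     "-p", "analysis.cppipe",   # hypothetical pipeline file
     "-i", "images/",
     "-o", "results/"],
    check=True,
)
```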


Posted ContentDOI
30 Jun 2021-bioRxiv
TL;DR: CellProfiler 4 as mentioned in this paper is a new version of the software, ported to the Python 3 language; based on user feedback, the authors made several user interface refinements to improve its usability.
Abstract: CellProfiler is a free, open source image analysis program which enables researchers to generate modular pipelines with which to process microscopy images into interpretable measurements. Here we describe CellProfiler 4, a new version of this software which has been ported to the Python 3 language. Based on user feedback, we have made several user interface refinements to improve the usability of the software. We introduced new modules to expand the capabilities of the software. We also evaluated performance and made targeted optimisations to reduce the time and cost associated with running common large-scale analysis pipelines. This release will ensure that researchers will have continued access to CellProfiler's powerful computational tools in the coming years.

195 citations


Journal ArticleDOI
TL;DR: The new MobiDB version presents state-of-the-art knowledge on disordered proteins and improves data accessibility for both computational and experimental users.
Abstract: The MobiDB database (URL: https://mobidb.org/) provides predictions and annotations for intrinsically disordered proteins. Here, we report recent developments implemented in MobiDB version 4, regarding the database format, with novel types of annotations and an improved update process. The new website includes a re-designed user interface, a more effective search engine and an advanced API for programmatic access. The new database schema gives users more flexibility and simplifies maintenance and updates. In addition, the new entry page provides more visualisation tools, including a customizable feature viewer and graphs of the residue contact maps. MobiDB v4 annotates the binding modes of disordered proteins, whether they undergo disorder-to-order transitions or remain disordered in the bound state. In addition, disordered regions undergoing liquid-liquid phase separation or post-translational modifications are defined. The integrated information is presented in a simplified interface, which enables faster searches and allows large customized datasets to be downloaded in TSV, Fasta or JSON formats. An alternative advanced interface allows users to drill deeper into features of interest. A new statistics page provides information at database and proteome levels. The new MobiDB version presents state-of-the-art knowledge on disordered proteins and improves data accessibility for both computational and experimental users.

151 citations
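
The abstract advertises an API for programmatic access with TSV, Fasta and JSON downloads. The sketch below shows what such a query could look like from Python; the endpoint path and parameter names are assumptions made for illustration, so consult the MobiDB documentation for the real interface.

```python
# Hypothetical sketch of MobiDB programmatic access. The endpoint path and
# parameter names below are assumptions for illustration only; consult
# https://mobidb.org/ for the documented API before use.
import requests

def fetch_disorder(acc: str, fmt: str = "json"):
    """Retrieve disorder annotations for a UniProt accession, e.g. 'P04637' (p53)."""
    url = "https://mobidb.org/api/download"  # assumed endpoint
    resp = requests.get(url, params={"acc": acc, "format": fmt}, timeout=30)
    resp.raise_for_status()
    return resp.json() if fmt == "json" else resp.text

entry = fetch_disorder("P04637")
print(sorted(entry)[:5])  # peek at a few top-level keys; names depend on the schema
```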


Journal ArticleDOI
TL;DR: The Target Central Resource Database (TCRD) and Pharos as discussed by the authors are two resources produced by the National Institutes of Health (NIH) Illuminating the Druggable Genome (IDG) program to identify and improve our understanding of poorly characterized proteins that can potentially be modulated using small molecules or biologics.
Abstract: In 2014, the National Institutes of Health (NIH) initiated the Illuminating the Druggable Genome (IDG) program to identify and improve our understanding of poorly characterized proteins that can potentially be modulated using small molecules or biologics. Two resources produced from these efforts are the Target Central Resource Database (TCRD) (http://juniper.health.unm.edu/tcrd/) and Pharos (https://pharos.nih.gov/), a web interface for browsing the TCRD. The ultimate goal of these resources is to highlight and facilitate research into currently understudied proteins by aggregating a multitude of data sources, ranking targets based on the amount of data available, and presenting data in a machine-learning-ready format. Since the 2017 release, both TCRD and Pharos have had two major releases, which have incorporated or expanded an additional 25 data sources. Recently incorporated data types include human and viral-human protein-protein interactions, protein-disease and protein-phenotype associations, and drug-induced gene signatures, among others. These aggregated data have enabled us to generate new visualizations and content sections in Pharos, in order to empower users to find new areas of study in the druggable genome.

80 citations
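
Pharos exposes a public GraphQL API at https://pharos-api.ncats.io/graphql. A minimal sketch of querying a target's development level follows; the endpoint is Pharos' public service, but the specific query shape and field names (sym, tdl, fam) are assumptions to verify in Pharos' schema explorer.

```python
# Minimal sketch of a Pharos GraphQL query. The query shape and field
# names are assumptions based on the Pharos schema; verify them in the
# API's GraphiQL explorer before relying on this.
import requests

QUERY = """
{
  target(q: { sym: "GPR151" }) {   # GPR151: an understudied GPCR
    name
    tdl   # target development level: Tclin / Tchem / Tbio / Tdark
    fam   # protein family
  }
}
"""

resp = requests.post("https://pharos-api.ncats.io/graphql",
                     json={"query": QUERY}, timeout=30)
resp.raise_for_status()
print(resp.json()["data"]["target"])
```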


Proceedings ArticleDOI
TL;DR: In this paper, the authors present a set of normative perspectives for analyzing dark patterns and their effects on individuals and society and show how future research on dark patterns can go beyond subjective criticism of user interface designs and apply empirical methods grounded in normative perspectives.
Abstract: There is a rapidly growing literature on dark patterns, user interface designs -- typically related to shopping or privacy -- that researchers deem problematic. Recent work has been predominantly descriptive, documenting and categorizing objectionable user interfaces. These contributions have been invaluable in highlighting specific designs for researchers and policymakers. But the current literature lacks a conceptual foundation: What makes a user interface a dark pattern? Why are certain designs problematic for users or society? We review recent work on dark patterns and demonstrate that the literature does not reflect a singular concern or consistent definition, but rather, a set of thematically related considerations. Drawing from scholarship in psychology, economics, ethics, philosophy, and law, we articulate a set of normative perspectives for analyzing dark patterns and their effects on individuals and society. We then show how future research on dark patterns can go beyond subjective criticism of user interface designs and apply empirical methods grounded in normative perspectives.

74 citations


Journal ArticleDOI
TL;DR: Open Reaction Database (ORD) as mentioned in this paper is an open-access schema and infrastructure for structuring and sharing organic reaction data, including a centralized data repository, which supports conventional and emerging technologies, from benchtop reactions to automated high-throughput experiments.
Abstract: Chemical reaction data in journal articles, patents, and even electronic laboratory notebooks are currently stored in various formats, often unstructured, which presents a significant barrier to downstream applications, including the training of machine-learning models. We present the Open Reaction Database (ORD), an open-access schema and infrastructure for structuring and sharing organic reaction data, including a centralized data repository. The ORD schema supports conventional and emerging technologies, from benchtop reactions to automated high-throughput experiments and flow chemistry. The data, schema, supporting code, and web-based user interfaces are all publicly available on GitHub. Our vision is that a consistent data representation and infrastructure to support data sharing will enable downstream applications that will greatly improve the state of the art with respect to computer-aided synthesis planning, reaction prediction, and other predictive chemistry tasks.

67 citations
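
The ORD schema is defined with protocol buffers and shipped as the open-source ord-schema package on GitHub. A minimal sketch of constructing one reaction record follows; the field and enum names track the published schema but should be checked against the current proto definitions.

```python
# Minimal sketch of building one reaction record against the ORD
# protocol-buffer schema (`ord-schema` package). Field and enum names
# follow the published schema; check the current proto definitions.
from ord_schema.proto import reaction_pb2

reaction = reaction_pb2.Reaction()

# Identify the reaction by a reaction SMILES string.
rid = reaction.identifiers.add()
rid.type = reaction_pb2.ReactionIdentifier.REACTION_SMILES
rid.value = "CC(=O)O.OCC>>CC(=O)OCC.O"  # esterification, for illustration

# Record a free-text note (the real schema is far richer: inputs,
# conditions, workups, measured outcomes, provenance, ...).
reaction.notes.procedure_details = "Illustrative entry only."

print(reaction)
```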


Proceedings ArticleDOI
06 May 2021
TL;DR: In this paper, a robust, fast, memory-efficient, on-device model was trained to detect UI elements using a dataset of 77,637 screens (from 4,068 iPhone apps) that were collected and annotated.
Abstract: Many accessibility features available on mobile platforms require applications (apps) to provide complete and accurate metadata describing user interface (UI) components. Unfortunately, many apps do not provide sufficient metadata for accessibility features to work as expected. In this paper, we explore inferring accessibility metadata for mobile apps from their pixels, as the visual interfaces often best reflect an app’s full functionality. We trained a robust, fast, memory-efficient, on-device model to detect UI elements using a dataset of 77,637 screens (from 4,068 iPhone apps) that we collected and annotated. To further improve UI detections and add semantic information, we introduced heuristics (e.g., UI grouping and ordering) and additional models (e.g., recognize UI content, state, interactivity). We built Screen Recognition to generate accessibility metadata to augment iOS VoiceOver. In a study with 9 screen reader users, we validated that our approach improves the accessibility of existing mobile apps, enabling even previously inaccessible apps to be used.

59 citations
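
To make the heuristics concrete: one simple ordering heuristic of the kind the paper describes is sorting detected elements into a screen-reader-friendly reading order. The sketch below illustrates that general idea only; it is not the authors' implementation.

```python
# Toy sketch of an ordering heuristic for detected UI elements: group
# into rows by vertical position, then read each row left to right.
# Illustrative only; the paper's grouping/ordering heuristics are richer.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str      # e.g. "Button", "Text", "Icon"
    x: float        # left edge, normalized 0..1
    y: float        # top edge, normalized 0..1

def reading_order(dets: list[Detection], row_tol: float = 0.05) -> list[Detection]:
    """Group detections into rows (y within row_tol), then sort each row by x."""
    rows: list[list[Detection]] = []
    for d in sorted(dets, key=lambda d: d.y):
        if rows and abs(d.y - rows[-1][0].y) < row_tol:
            rows[-1].append(d)
        else:
            rows.append([d])
    return [d for row in rows for d in sorted(row, key=lambda d: d.x)]

screen = [Detection("Button", 0.7, 0.9),
          Detection("Text", 0.1, 0.12),
          Detection("Icon", 0.05, 0.9)]
print([d.label for d in reading_order(screen)])  # ['Text', 'Icon', 'Button']
```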


Journal ArticleDOI
TL;DR: This paper provides an overall academic roadmap and useful insight into the state-of-the-art of AR/MR remote collaboration on physical tasks and presents a comprehensive survey of research between 2000 and 2018 in this domain.
Abstract: This paper provides a review of research into using Augmented Reality (AR) and Mixed Reality (MR) for remote collaboration on physical tasks. AR/MR-based remote collaboration on physical tasks has recently become more prominent in academic research and engineering applications. It has great potential in many fields, such as real-time remote medical consultation, education, training, maintenance, remote assistance in engineering, and other remote collaborative tasks. However, to the best of our knowledge there has not been any comprehensive review of research in AR/MR remote collaboration on physical tasks. Therefore, this paper presents a comprehensive survey of research between 2000 and 2018 in this domain. We collected 215 papers, more than 80% of which were published between 2010 and 2018, and all relevant works are discussed at length. We then elaborate on the review in terms of typical architectures, applications (e.g., industry, telemedicine, architecture, tele-education and others), and empathic computing. Next, we review the papers in depth from seven aspects: (1) collection and classification research, (2) use of 3D scene reconstruction environments and live panorama, (3) periodicals and conducting research, (4) local and remote user interfaces, (5) features of commonly used user interfaces, (6) architecture and sharing of non-verbal cues, (7) applications and toolkits. We find that most papers (160 articles, 74.4%) were published in conferences; using co-located collaboration to emulate remote collaboration is adopted by more than half (126, 58.6%) of the reviewed papers; the shared non-verbal cues can be classified into five main types (Virtual Replicas or Physical Proxy (VRP), AR Annotations or a Cursor Pointer (ARACP), avatar, gesture, and gaze); and the local/remote interface is mainly divided into four categories (Head-Mounted Displays (HMD), Spatial Augmented Reality (SAR), Windows-Icon-Menu-Pointer (WIMP) and Hand-Held Displays (HHD)). From this, we draw ten conclusions. Following this, we report on issues for future work. The paper also provides an overall academic roadmap and useful insight into the state-of-the-art of AR/MR remote collaboration on physical tasks. This work will be useful for current and future researchers who are interested in collaborative AR/MR systems.

Journal ArticleDOI
TL;DR: The proposed cloud-based mission control architecture is implemented on a fleet of real vehicles, H2Omni-X USVs, and the performance of the remote experimentation is demonstrated during sea trials on the Adriatic coast of Croatia, representing the practical contribution of this paper.

Journal ArticleDOI
08 Jan 2021
TL;DR: KG-COVID-19 is created, a flexible framework that ingests and integrates heterogeneous biomedical data to produce knowledge graphs (KGs) and can be applied to other problems in which siloed biomedical data must be quickly integrated for different research applications, including future pandemics.
Abstract: Integrated, up-to-date data about SARS-CoV-2 and COVID-19 is crucial for the ongoing response to the COVID-19 pandemic by the biomedical research community. While rich biological knowledge exists for SARS-CoV-2 and related viruses (SARS-CoV, MERS-CoV), integrating this knowledge is difficult and time consuming, since much of it is in siloed databases or in textual format. Furthermore, the data required by the research community varies drastically for different tasks: the optimal data for a machine learning task, for example, is much different from the data used to populate a browsable user interface for clinicians. To address these challenges, we created KG-COVID-19, a flexible framework that ingests and integrates heterogeneous biomedical data to produce knowledge graphs (KGs), and applied it to create a KG for COVID-19 response. This KG framework can also be applied to other problems in which siloed biomedical data must be quickly integrated for different research applications, including future pandemics.
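
To illustrate the merge step at the heart of such a framework, here is a toy sketch that unions node and edge tables under an assumed KGX-style TSV layout (an `id` column for nodes; subject/predicate/object columns for edges). The real framework adds per-source ingest transforms, ontology mapping and provenance.

```python
# Toy sketch of merging heterogeneous sources into one knowledge graph,
# assuming a KGX-style TSV layout. Column names are assumptions for
# illustration; KG-COVID-19's actual ingest pipeline is far richer.
import pandas as pd

def merge_graphs(node_files: list[str], edge_files: list[str]):
    nodes = pd.concat([pd.read_csv(f, sep="\t") for f in node_files])
    edges = pd.concat([pd.read_csv(f, sep="\t") for f in edge_files])
    # Deduplicate nodes on their identifier; keep the first annotation seen.
    nodes = nodes.drop_duplicates(subset="id")
    # Keep only edges whose endpoints survived the node merge.
    known = set(nodes["id"])
    edges = edges[edges["subject"].isin(known) & edges["object"].isin(known)]
    return nodes, edges

# usage: nodes, edges = merge_graphs(["drugs_nodes.tsv"], ["drugs_edges.tsv"])
```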

Proceedings ArticleDOI
06 May 2021
TL;DR: In this paper, the role of users' physiology in behavioral biometrics by virtually altering and normalizing their body proportions was investigated, and it was shown that body normalization in general increases the identification rate, in some cases by up to 38%.
Abstract: Virtual Reality (VR) is becoming increasingly popular both in the entertainment and professional domains. Behavioral biometrics have recently been investigated as a means to continuously and implicitly identify users in VR. Applications in VR can specifically benefit from this, for example, to adapt virtual environments and user interfaces as well as to authenticate users. In this work, we conduct a lab study (N = 16) to explore how accurately users can be identified during two task-driven scenarios based on their spatial movement. We show that an identification accuracy of up to 90% is possible across sessions recorded on different days. Moreover, we investigate the role of users’ physiology in behavioral biometrics by virtually altering and normalizing their body proportions. We find that body normalization in general increases the identification rate, in some cases by up to 38%; hence, it improves the performance of identification systems.
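
To make body normalization concrete, a toy sketch: express hand positions in a head-centred frame scaled by estimated reach, so users with different proportions produce comparable trajectories. Purely illustrative; the paper defines its normalization over its own tracking data.

```python
# Toy sketch of body normalization for VR behavioral biometrics:
# rescale hand positions (relative to the head) by estimated arm length.
# Illustrative only; not the paper's actual normalization procedure.
import numpy as np

def normalize_pose(head: np.ndarray, hand: np.ndarray, arm_length: float) -> np.ndarray:
    """Return the hand position in a head-centred frame, scaled to unit reach."""
    return (hand - head) / arm_length

# Two users performing the "same" reach with different arm lengths:
short_user = normalize_pose(np.array([0., 1.6, 0.]), np.array([0.3, 1.3, 0.4]), 0.60)
tall_user  = normalize_pose(np.array([0., 1.9, 0.]), np.array([0.375, 1.525, 0.5]), 0.75)
print(np.allclose(short_user, tall_user))  # True: proportions match after scaling
```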

Journal ArticleDOI
TL;DR: The FullControl design concept offers new opportunities for creative and high-precision use of additive manufacturing systems, and offers a general framework for unconstrained design and is not limited to a particular type of structure or hardware.
Abstract: A new concept is presented for the design of additive manufacturing procedures, which is implemented in open-source software called FullControl GCode Designer. In this new design approach, the user defines every segment of the print-path along with all printing parameters, which may be related to geometric and non-geometric factors, at all points along the print-path. Machine control code (GCode) is directly generated by the software, without the need for any programming skills and without using computer-aided design (CAD), STL-files or slicing software. Excel is used as the front end for the software, which is written in Visual Basic. Case studies are used to demonstrate the broad range of structures that can be designed using the software, including: precisely controlled specimens for printer calibration, parametric specimens for hardware characterisation utilising hundreds of unique parameter combinations, novel mathematically defined lattice structures, and previously inconceivable 3D geometries that are impossible for traditional slicing software to achieve. The FullControl design approach enables unconstrained freedom to create nonplanar 3D print-paths and break free from traditional restrictions of layerwise print-path planning. It also allows nozzle movements to be carefully designed, both during extrusion and while travelling between disconnected extrusion volumes, to overcome inherent limitations of the printing process or to improve capabilities for challenging materials. An industrial case study shows how explicit print-path design improved printer reliability, production time, and print quality for a production run of over 1000 parts. FullControl GCode Designer offers a general framework for unconstrained design and is not limited to a particular type of structure or hardware; transferability to lasers and other manufacturing processes is discussed. Parametric design files use a few bytes or kilobytes of data to describe all details that are sent to the printer, which greatly improves shareability by eliminating any risk of errors being introduced during STL file conversion or due to different users having inconsistent slicer settings. Adjustable parameters allow GCode for revised designs to be produced instantly, instead of the laborious traditional routine using multiple software packages and file conversions. The FullControl design concept offers new opportunities for creative and high-precision use of additive manufacturing systems. It facilitates design for additive manufacturing (DfAM) at the smallest possible scale based on the fundamental nature of the process (i.e. assembly of individual extrusions). The software and source code are provided as supplementary data and ongoing updates to improve functionality and the user interface will be available at www.fullcontrolgcode.com.
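
The core concept, emitting GCode directly from a fully specified print-path, can be sketched compactly. The following is an illustrative Python analogue (the actual software is written in Visual Basic with an Excel front end), using an assumed rectangular-bead model to convert path length into extrusion.

```python
# Illustrative analogue of direct GCode generation from a parametric
# print-path. The extrusion model (rectangular bead, 1.75 mm filament)
# is an assumption for the sketch, not FullControl's implementation.
import math

def extrusion_per_mm(width=0.4, layer=0.2, filament_d=1.75):
    """mm of filament per mm of travel, for a rectangular bead approximation."""
    return (width * layer) / (math.pi * (filament_d / 2) ** 2)

def gcode_polyline(points, feedrate=1200):
    k, e, lines = extrusion_per_mm(), 0.0, []
    (x0, y0, z0) = points[0]
    lines.append(f"G0 X{x0:.3f} Y{y0:.3f} Z{z0:.3f}")       # travel to start
    for (x, y, z) in points[1:]:
        e += k * math.dist((x0, y0, z0), (x, y, z))          # accumulate extrusion
        lines.append(f"G1 X{x:.3f} Y{y:.3f} Z{z:.3f} E{e:.5f} F{feedrate}")
        (x0, y0, z0) = (x, y, z)
    return "\n".join(lines)

# A nonplanar spiral: the kind of path layerwise slicers cannot produce.
path = [(10*math.cos(t/20), 10*math.sin(t/20), 0.2 + 0.01*t) for t in range(400)]
print(gcode_polyline(path)[:120], "...")
```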

Proceedings ArticleDOI
06 May 2021
TL;DR: In this paper, a model-based reinforcement learning method plans sequences of adaptations and consults predictive HCI models to estimate their effects, yielding a conservative adaptation policy.
Abstract: Adapting an interface requires taking into account both the positive and negative effects that changes may have on the user. A carelessly picked adaptation may impose high costs on the user – for example, due to surprise or relearning effort – or prematurely “trap” the process in a suboptimal design. However, effects on users are hard to predict as they depend on factors that are latent and evolve over the course of interaction. We propose a novel approach for adaptive user interfaces that yields a conservative adaptation policy: It finds beneficial changes when there are such and avoids changes when there are none. Our model-based reinforcement learning method plans sequences of adaptations and consults predictive HCI models to estimate their effects. We present empirical and simulation results from the case of adaptive menus, showing that the method outperforms both a non-adaptive and a frequency-based policy.
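
A toy sketch of the conservative-policy idea: score candidate menu adaptations with a predictive cost model plus a switching cost, and adapt only when the predicted benefit outweighs that cost. The cost model below is invented for illustration; the paper plans multi-step sequences with established HCI models, not this one-step search.

```python
# Toy sketch of conservative interface adaptation: reorder a menu only
# when the predicted selection-cost saving exceeds a relearning cost.
# The serial-position cost model is an assumption for illustration.
from itertools import permutations

def predicted_cost(menu, click_freq):
    # Items further down cost more, weighted by how often they are used.
    return sum(click_freq.get(item, 0) * (pos + 1) for pos, item in enumerate(menu))

def conservative_adapt(menu, click_freq, switch_cost=2.0):
    best, best_cost = tuple(menu), predicted_cost(menu, click_freq)
    for candidate in permutations(menu):
        cost = predicted_cost(candidate, click_freq) \
               + switch_cost * (candidate != tuple(menu))
        if cost < best_cost:
            best, best_cost = candidate, cost
    return list(best)

menu = ["Open", "Save", "Export", "Print"]
freq = {"Export": 9, "Open": 3, "Save": 2, "Print": 1}
print(conservative_adapt(menu, freq))  # reorders only if it beats the switch cost
```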

Proceedings ArticleDOI
06 May 2021
TL;DR: VINS as mentioned in this paper is a visual search framework that takes as input a UI image (wireframe, high-fidelity) and retrieves visually similar design examples, which can aid interface designers in gaining inspiration and comparing design alternatives.
Abstract: Searching for relevant mobile user interface (UI) design examples can aid interface designers in gaining inspiration and comparing design alternatives. However, finding such design examples is challenging, especially as current search systems rely only on text-based queries and do not take UI structure and content into account. This paper introduces VINS, a visual search framework that takes as input a UI image (wireframe, high-fidelity) and retrieves visually similar design examples. We first survey interface designers to better understand their example finding process. We then develop a large-scale UI dataset that provides an accurate specification of the interface’s view hierarchy (i.e., all the UI components and their specific location). By utilizing this dataset, we propose an object-detection based image retrieval framework that models the UI context and hierarchical structure. The framework achieves a mean Average Precision of 76.39% for the UI detection and high performance in querying similar UI designs.
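
To fix intuition for layout-aware retrieval, the sketch below scores the similarity of two UI layouts by greedily matching detected components of the same class on box overlap. It is illustrative only; the actual framework learns this with an object-detection model and deep embeddings.

```python
# Toy layout-similarity score for UI retrieval: greedy same-class
# matching by IoU of bounding boxes. Illustrative only, not VINS itself.
def iou(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    ix = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    iy = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = ix * iy
    union = (ax1-ax0)*(ay1-ay0) + (bx1-bx0)*(by1-by0) - inter
    return inter / union if union else 0.0

def layout_similarity(q, c):
    """q, c: lists of (class_name, box). Greedy best-IoU matching per class."""
    used, total = set(), 0.0
    for cls, box in q:
        best, best_j = 0.0, None
        for j, (cls2, box2) in enumerate(c):
            if j not in used and cls2 == cls:
                s = iou(box, box2)
                if s > best:
                    best, best_j = s, j
        if best_j is not None:
            used.add(best_j)
            total += best
    return total / max(len(q), len(c), 1)

q = [("Button", (0.0, 0.8, 0.4, 0.9)), ("Text", (0.0, 0.1, 1.0, 0.2))]
c = [("Text", (0.0, 0.12, 1.0, 0.22)), ("Button", (0.05, 0.8, 0.45, 0.9))]
print(round(layout_similarity(q, c), 2))  # 0.72
```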

Journal ArticleDOI
TL;DR: In this paper, self-powered optoelectronic synergistic fiber sensors (SOEFSs) were proposed that simultaneously visualize and digitize mechanical stimuli without an external power supply.
Abstract: Fiber electronics with mechanosensory functionality are highly desirable in healthcare, human-machine interfaces, and robotics. Most efforts are committed to optimizing the electronically readable interface of fiber mechanoreceptors, while user interfaces based on naked-eye readable output are rarely explored. Here, scalable fiber electronics that can simultaneously visualize and digitize mechanical stimuli without an external power supply, named self-powered optoelectronic synergistic fiber sensors (SOEFSs), are reported. By coupling space and surface charge polarization, a new mechanoluminescent (ML)-triboelectric synergistic effect is realized. It contributes to a remarkable enhancement of both electrical (by 100%) and optical output (by 30%), as well as a novel temporal-spatial resolution mode for motion capture. Based on an entirely new thermoplastic ML material system and spinning process, industrial-scale continuous manufacturing and recycling of SOEFSs are realized. Furthermore, SOEFSs' application in human-machine interfaces, virtual reality, and underwater sensing, rescue, and information interaction is demonstrated.

Proceedings ArticleDOI
06 May 2021
TL;DR: A framework for eliciting stakeholders’ subjective fairness notions is proposed by combining a user interface that allows stakeholders to examine the data and the algorithm’s predictions with an interview protocol to probe stakeholders’ thoughts while they are interacting with the interface.
Abstract: Recent work in fair machine learning has proposed dozens of technical definitions of algorithmic fairness and methods for enforcing these definitions. However, we still lack an understanding of how to develop machine learning systems with fairness criteria that reflect relevant stakeholders’ nuanced viewpoints in real-world contexts. To address this gap, we propose a framework for eliciting stakeholders’ subjective fairness notions. Combining a user interface that allows stakeholders to examine the data and the algorithm’s predictions with an interview protocol to probe stakeholders’ thoughts while they are interacting with the interface, we can identify stakeholders’ fairness beliefs and principles. We conduct a user study to evaluate our framework in the setting of a child maltreatment predictive system. Our evaluations show that the framework allows stakeholders to comprehensively convey their fairness viewpoints. We also discuss how our results can inform the design of predictive systems.

Journal ArticleDOI
TL;DR: In this article, the authors present oxDNA.org, a graphical web interface for running, visualizing and analyzing oxDNA and oxRNA molecular dynamics simulations on a GPU-enabled high performance computing server.
Abstract: OxDNA and oxRNA are popular coarse-grained models used by the DNA/RNA nanotechnology community to prototype, analyze and rationalize designed DNA and RNA nanostructures. Here, we present oxDNA.org, a graphical web interface for running, visualizing and analyzing oxDNA and oxRNA molecular dynamics simulations on a GPU-enabled high performance computing server. OxDNA.org automatically generates simulation files, including a multi-step relaxation protocol for structures exported in non-physical states from DNA/RNA design tools. Once the simulation is complete, oxDNA.org provides an interactive visualization and analysis interface using the browser-based visualizer oxView to facilitate the understanding of simulation results for a user's specific structure. This online tool significantly lowers the entry barrier to integrating simulations into the nanostructure design pipeline for users who are not experts in the technical aspects of molecular simulation. The webserver is freely available at oxdna.org.

Journal ArticleDOI
TL;DR: The goal is to provide the end-user with multiple application perspectives to Linked Data knowledge graphs, and a two-step usage cycle based on faceted search combined with ready-to-use tooling for data analysis.
Abstract: This paper presents a new software framework, Sampo-UI, for developing user interfaces for semantic portals. The goal is to provide the end-user with multiple application perspectives to Linked Data knowledge graphs, and a two-step usage cycle based on faceted search combined with ready-to-use tooling for data analysis. For the software developer, the Sampo-UI framework makes it possible to create highly customizable, user-friendly, and responsive user interfaces using current state-of-the-art JavaScript libraries and data from SPARQL endpoints, while saving substantial coding effort. Sampo-UI is published on GitHub under the open MIT License and has been utilized in several internal and external projects. The framework has been used thus far in creating six published and five forthcoming portals, mostly related to the Cultural Heritage domain, that have had tens of thousands of end-users on the Web.
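
Sampo-UI portals are driven by SPARQL endpoints. Below is a minimal Python sketch of the kind of query a faceted-search view issues, using the SPARQLWrapper package; the endpoint and query are generic stand-ins (Wikidata), not taken from an actual Sampo portal.

```python
# Minimal sketch of a faceted-search-style SPARQL query from Python.
# Endpoint and query are illustrative stand-ins, not from a Sampo portal.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://query.wikidata.org/sparql",
                       agent="sampo-ui-sketch/0.1")  # polite User-Agent
sparql.setQuery("""
    SELECT ?item ?label WHERE {
      ?item wdt:P31 wd:Q33506 .        # instance of: museum
      ?item rdfs:label ?label .
      FILTER(LANG(?label) = "en")
    } LIMIT 5
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["label"]["value"])
```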

Journal ArticleDOI
TL;DR: The Poky suite as mentioned in this paper is a platform with boundless possibilities for advancing research and technology development in signal detection, resonance assignment, structure calculation, and relaxation studies with the help of many automation and user interface tools.
Abstract: Summary: The need for an efficient and cost-effective method is compelling in biomolecular NMR. To tackle this problem, we have developed the Poky suite, a revolutionized platform with boundless possibilities for advancing research and technology development in signal detection, resonance assignment, structure calculation, and relaxation studies with the help of many automation and user interface tools. This software is extensible and scalable by scripting and batching, as well as providing modern graphical user interfaces and a diverse range of modules right out of the box. Availability: Poky is freely available to non-commercial users at https://poky.clas.ucdenver.edu. Supplementary information: Supplementary data are available at Bioinformatics online.

Proceedings ArticleDOI
Dakuo Wang, Josh Andres, Justin D. Weisz, Erick Oduor, Casey Dugan
06 May 2021
TL;DR: In this paper, the authors introduce AutoDS, an automated machine learning (AutoML) system that aims to leverage the latest ML automation techniques to support data science projects.
Abstract: Data science (DS) projects often follow a lifecycle that consists of laborious tasks for data scientists and domain experts (e.g., data exploration, model training, etc.). Only recently have machine learning (ML) researchers developed promising automation techniques to aid data workers in these tasks. This paper introduces AutoDS, an automated machine learning (AutoML) system that aims to leverage the latest ML automation techniques to support data science projects. Data workers only need to upload their dataset; the system can then automatically suggest ML configurations, preprocess data, select an algorithm, and train the model. These suggestions are presented to the user via a web-based graphical user interface and a notebook-based programming user interface. Our goal is to offer a systematic investigation of user interaction with, and perceptions of, an AutoDS system in solving a data science task. We studied AutoDS with 30 professional data scientists, where one group used AutoDS and the other did not, to complete a data science project. As expected, AutoDS improves productivity; yet surprisingly, we find that the models produced by the AutoDS group have higher quality and fewer errors, but lower human confidence scores. We reflect on the findings by presenting design implications for incorporating automation techniques into human work in the data science lifecycle.
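
AutoDS itself is not publicly released, but the workflow it automates (trying candidate algorithms and configurations, then surfacing the best model) can be sketched with standard tooling. A minimal stand-in using scikit-learn cross-validation:

```python
# Minimal stand-in for AutoDS-style algorithm selection: evaluate a few
# candidate models by cross-validation and suggest the best. Illustrative
# of the general AutoML pattern only, not the AutoDS system.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
candidates = {
    "logreg": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(f"suggested model: {best} (cv accuracy {scores[best]:.3f})")
```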

Journal ArticleDOI
TL;DR: In this article, the authors provide an overview of Human-City Interaction and related technological approaches, followed by reviewing the latest trends of information visualization, constrained interfaces, and embodied interaction for AR headsets.
Abstract: Interaction design for Augmented Reality (AR) is gaining attention from both academia and industry. This survey discusses 260 articles (68.8% of articles published between 2015–2019) to review the field of human interaction in connected cities with emphasis on augmented reality-driven interaction. We provide an overview of Human-City Interaction and related technological approaches, followed by reviewing the latest trends of information visualization, constrained interfaces, and embodied interaction for AR headsets. We highlight under-explored issues in interface design and input techniques that warrant further research and conjecture that AR with complementary Conversational User Interfaces (CUIs) is a crucial enabler for ubiquitous interaction with immersive systems in smart cities. Our work helps researchers understand the current potential and future needs of AR in Human-City Interaction.

Journal ArticleDOI
TL;DR: GWpy is a Python software package that provides an intuitive, object-oriented interface through which to access, process, and visualise data from gravitational-wave detectors.
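
GWpy's quick-start pattern for fetching and plotting public LIGO data looks like the sketch below (the GPS times bracket GW150914; network access required). The calls follow GWpy's documented API, though details may vary across versions.

```python
# Fetch 32 s of public LIGO Hanford data around GW150914 and plot it,
# following GWpy's documented quick-start pattern.
from gwpy.timeseries import TimeSeries

data = TimeSeries.fetch_open_data("H1", 1126259446, 1126259478)
plot = data.plot(ylabel="Strain")
plot.savefig("h1-strain.png")
```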

Proceedings ArticleDOI
06 May 2021
TL;DR: In this paper, a user interface for practicing coding called Faded Parsons Problems (FPPs) is proposed to support introductory computer science students in learning to recognize and apply programming patterns, i.e., reusable abstractions of code.
Abstract: Learning to recognize and apply programming patterns — reusable abstractions of code — is critical to becoming a proficient computer scientist. However, many introductory Computer Science courses do not teach patterns, in part because teaching these concepts requires significant curriculum changes. As an alternative, we explore how a novel user interface for practicing coding — Faded Parsons Problems — can support introductory Computer Science students in learning to apply programming patterns. We ran a classroom-based study with 237 students, which found that Faded Parsons Problems, or rearranging and completing partially blank lines of code into a valid program, are an effective exercise interface for teaching programming patterns, significantly surpassing the performance of the more standard approaches of code writing and code tracing exercises. Faded Parsons Problems also improve overall code writing ability at a comparable level to code writing exercises, but are preferred by students.
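
To make the exercise format concrete: a Faded Parsons Problem presents shuffled lines of a correct program with some parts blanked out for the learner to restore. Below is a toy generator under that description; the authors' interface and fading policy are more deliberate than this random version (note that whitespace splitting also discards indentation, which learners must likewise reconstruct).

```python
# Toy generator for a Faded Parsons Problem: shuffle the lines of a
# correct solution and blank a random token on some lines. Illustrative
# only; the paper's exercises are authored, not randomly generated.
import random

def make_fpp(solution: str, fade_p: float = 0.3, seed: int = 0) -> list[str]:
    rng = random.Random(seed)
    lines = [l for l in solution.strip("\n").split("\n") if l.strip()]
    faded = []
    for line in lines:
        tokens = line.split()
        if tokens and rng.random() < fade_p:
            tokens[rng.randrange(len(tokens))] = "____"   # blank one token
        faded.append(" ".join(tokens))
    rng.shuffle(faded)
    return faded

solution = """
def mean(xs):
    total = sum(xs)
    return total / len(xs)
"""
for line in make_fpp(solution):
    print(line)
```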

Journal ArticleDOI
TL;DR: Making an interface simultaneously usable for users from a diverse range of cultural backgrounds will require a very large amount of adaptation, but the powerful principles of plasticity in user interface design hold the future promise of an optimum tool for achieving cross-cultural usability.

Proceedings ArticleDOI
Bryan Wang, Gang Li, Xin Zhou, Zhourong Chen, Tovi Grossman, Yang Li
10 Oct 2021
TL;DR: In this paper, a multi-modal learning approach is proposed to generate succinct language descriptions of mobile screens for conveying important contents and functionalities of the screen, which can be useful for many language-based application scenarios.
Abstract: Mobile User Interface Summarization generates succinct language descriptions of mobile screens for conveying important contents and functionalities of the screen, which can be useful for many language-based application scenarios. We present Screen2Words, a novel screen summarization approach that automatically encapsulates essential information of a UI screen into a coherent language phrase. Summarizing mobile screens requires a holistic understanding of the multi-modal data of mobile UIs, including text, image, structures as well as UI semantics, motivating our multi-modal learning approach. We collected and analyzed a large-scale screen summarization dataset annotated by human workers. Our dataset contains more than 112k language summaries across ∼22k unique UI screens. We then experimented with a set of deep models with different configurations. Our evaluation of these models with both automatic accuracy metrics and human rating shows that our approach can generate high-quality summaries for mobile screens. We demonstrate potential use cases of Screen2Words and open-source our dataset and model to lay the foundations for further bridging language and user interfaces.