
Showing papers on "User interface published in 2017"


Journal ArticleDOI
TL;DR: ImageJ2 as mentioned in this paper is the next generation of ImageJ, which provides a host of new functionality and separates concerns, fully decoupling the data model from the user interface.
Abstract: ImageJ is an image analysis program extensively used in the biological sciences and beyond. Due to its ease of use, recordable macro language, and extensible plug-in architecture, ImageJ enjoys contributions from non-programmers, amateur programmers, and professional developers alike. Enabling such a diversity of contributors has resulted in a large community that spans the biological and physical sciences. However, a rapidly growing user base, diverging plugin suites, and technical limitations have revealed a clear need for a concerted software engineering effort to support emerging imaging paradigms, to ensure the software’s ability to handle the requirements of modern science. We rewrote the entire ImageJ codebase, engineering a redesigned plugin mechanism intended to facilitate extensibility at every level, with the goal of creating a more powerful tool that continues to serve the existing community while addressing a wider range of scientific requirements. This next-generation ImageJ, called “ImageJ2” in places where the distinction matters, provides a host of new functionality. It separates concerns, fully decoupling the data model from the user interface. It emphasizes integration with external applications to maximize interoperability. Its robust new plugin framework allows everything from image formats, to scripting languages, to visualization to be extended by the community. The redesigned data model supports arbitrarily large, N-dimensional datasets, which are increasingly common in modern image acquisition. Despite the scope of these changes, backwards compatibility is maintained such that this new functionality can be seamlessly integrated with the classic ImageJ interface, allowing users and developers to migrate to these new methods at their own pace. Scientific imaging benefits from open-source programs that advance new method development and deployment to a diverse audience. 
ImageJ has continuously evolved with this idea in mind; however, new and emerging scientific requirements have posed corresponding challenges for ImageJ’s development. The described improvements provide a framework engineered for flexibility, intended to support these requirements as well as accommodate future needs. Future efforts will focus on implementing new algorithms in this framework and expanding collaborations with other popular scientific software suites.

4,093 citations


Journal ArticleDOI
15 Feb 2017-Methods
TL;DR: TrackMate is an extensible platform where developers can easily write their own detection, particle linking, visualization or analysis algorithms within the TrackMate environment and is validated for quantitative lifetime analysis of clathrin-mediated endocytosis in plant cells.

2,356 citations


Posted Content
TL;DR: The entire ImageJ codebase was rewrote, engineering a redesigned plugin mechanism intended to facilitate extensibility at every level, with the goal of creating a more powerful tool that continues to serve the existing community while addressing a wider range of scientific requirements.
Abstract: ImageJ is an image analysis program extensively used in the biological sciences and beyond. Due to its ease of use, recordable macro language, and extensible plug-in architecture, ImageJ enjoys contributions from non-programmers, amateur programmers, and professional developers alike. Enabling such a diversity of contributors has resulted in a large community that spans the biological and physical sciences. However, a rapidly growing user base, diverging plugin suites, and technical limitations have revealed a clear need for a concerted software engineering effort to support emerging imaging paradigms, to ensure the software's ability to handle the requirements of modern science. Due to these new and emerging challenges in scientific imaging, ImageJ is at a critical development crossroads. We present ImageJ2, a total redesign of ImageJ offering a host of new functionality. It separates concerns, fully decoupling the data model from the user interface. It emphasizes integration with external applications to maximize interoperability. Its robust new plugin framework allows everything from image formats, to scripting languages, to visualization to be extended by the community. The redesigned data model supports arbitrarily large, N-dimensional datasets, which are increasingly common in modern image acquisition. Despite the scope of these changes, backwards compatibility is maintained such that this new functionality can be seamlessly integrated with the classic ImageJ interface, allowing users and developers to migrate to these new methods at their own pace. ImageJ2 provides a framework engineered for flexibility, intended to support these requirements as well as accommodate future needs.

2,156 citations


Journal ArticleDOI
TL;DR: This survey discusses advances in tracking and registration, since their functionality is crucial to any MAR application and the network connectivity of the devices that run MAR applications together with its importance to the performance of the application.
Abstract: The boom in the capabilities and features of mobile devices, like smartphones, tablets, and wearables, combined with ubiquitous and affordable Internet access and advances in the areas of cooperative networking, computer vision, and mobile cloud computing, has transformed mobile augmented reality (MAR) from science fiction into a reality. Although mobile devices are more computationally constrained than traditional computers, they have a multitude of sensors that can be used in the development of more sophisticated MAR applications, and they can be assisted by remote servers for the execution of their intensive parts. In this paper, after introducing the reader to the basics of MAR, we present a categorization of the application fields together with some representative examples. Next, we introduce the reader to the user interface and experience in MAR applications and continue with the core system components of MAR systems. After that, we discuss advances in tracking and registration, since their functionality is crucial to any MAR application, and the network connectivity of the devices that run MAR applications together with its importance to the performance of the application. We continue with the importance of data management in MAR systems and of system performance and sustainability, and before we conclude this survey, we present existing challenging problems.

285 citations


Journal ArticleDOI
TL;DR: The Pathview Web server is developed, to make pathway visualization and data integration accessible to all scientists, including those without the special computing skills or resources, and presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data.
Abstract: Pathway analysis is widely used in omics studies. Pathway-based data integration and visualization is a critical component of the analysis. To address this need, we recently developed a novel R package called Pathview. Pathview maps, integrates and renders a large variety of biological data onto molecular pathway graphs. Here we developed the Pathview Web server to make pathway visualization and data integration accessible to all scientists, including those without special computing skills or resources. Pathview Web features an intuitive graphical web interface and a user-centered design. The server not only expands the core functions of Pathview, but also provides many useful features not available in the offline R package. Importantly, the server presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data. In addition, the server also provides a RESTful API for programmatic access and convenient integration into third-party software or workflows. Pathview Web is openly and freely accessible at https://pathview.uncc.edu/.

272 citations


Book ChapterDOI
17 Jul 2017
TL;DR: A usability test of the most prestigious and internationally used Speech-based NUI (i.e., Alexa, Siri, Cortana and Google’s) shows that even though there are many services available, there is a lot to do to improve the usability of these systems.
Abstract: Natural User Interfaces (NUI) are supposed to be usable by humans in a very logical way. However, the rush to deploy Speech-based NUIs by the industry has had a large impact on the naturalness of such interfaces. This paper presents a usability test of the most prestigious and internationally used Speech-based NUIs (i.e., Alexa, Siri, Cortana, and Google’s). A comparison of the services that each one provides was also performed, considering access to music services, agenda, news, weather, To-Do lists, and maps or directions, among others. The test was designed by two Human-Computer Interaction experts and executed by eight participants. Results show that even though there are many services available, much remains to be done to improve the usability of these systems, especially in moving away from the traditional use of computers (based on applications that require parameters to function) and getting closer to real NUIs.

231 citations


Posted Content
TL;DR: A literature review of quality issues and attributes as they relate to the contemporary issue of chatbot development and implementation is presented, and a quality assessment method based on these attributes and the Analytic Hierarchy Process is proposed and examined.
Abstract: Chatbots are one class of intelligent, conversational software agents activated by natural language input (which can be in the form of text, voice, or both). They provide conversational output in response, and if commanded, can sometimes also execute tasks. Although chatbot technologies have existed since the 1960s and have influenced user interface development in games since the early 1980s, chatbots are now easier to train and implement. This is due to plentiful open source code, widely available development platforms, and implementation options via Software as a Service (SaaS). In addition to enhancing customer experiences and supporting learning, chatbots can also be used to engineer social harm - that is, to spread rumors and misinformation, or attack people for posting their thoughts and opinions online. This paper presents a literature review of quality issues and attributes as they relate to the contemporary issue of chatbot development and implementation. Finally, quality assessment approaches are reviewed, and a quality assessment method based on these attributes and the Analytic Hierarchy Process (AHP) is proposed and examined.
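The Analytic Hierarchy Process step the abstract refers to can be sketched in a few lines. The attribute names and the pairwise judgments below are illustrative assumptions, not values from the paper; only the general AHP mechanics (pairwise-comparison matrix, principal eigenvector as weight vector) are standard.

```python
# Sketch of deriving priority weights for chatbot quality attributes with the
# Analytic Hierarchy Process (AHP). Attribute names and judgments are made up.

def ahp_weights(matrix, iters=100):
    """Approximate the principal eigenvector of a pairwise-comparison
    matrix via power iteration, normalized so the weights sum to 1."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        # Multiply the matrix by the current weight vector, then renormalize.
        w_next = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w_next)
        w = [x / total for x in w_next]
    return w

# Hypothetical pairwise judgments on Saaty's 1-9 scale;
# rows/columns correspond to [accuracy, usability, security].
comparisons = [
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     3.0],
    [1 / 5.0, 1 / 3.0, 1.0],
]

weights = ahp_weights(comparisons)
print([round(w, 3) for w in weights])  # accuracy receives the largest weight
```

Once per-attribute weights are fixed, candidate chatbots can be scored as the weighted sum of their per-attribute ratings.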

226 citations


Proceedings ArticleDOI
Kun Qian1, Chenshu Wu1, Zimu Zhou2, Yue Zheng1, Zheng Yang1, Yunhao Liu1 
02 May 2017
TL;DR: This work presents WiDance, a Wi-Fi-based user interface, which is utilized to design and prototype a contactless dance-pad exergame and proposes a light-weight pipeline to detect, segment, and recognize motions without training.
Abstract: In-air interaction acts as a key enabler for ambient intelligence and augmented reality. As an increasingly popular example, exergames, and similar gesture recognition applications, have attracted extensive research in designing accurate, pervasive, and low-cost user interfaces. Recent advances in wireless sensing show promise for a ubiquitous gesture-based interaction interface with Wi-Fi. In this work, we extract complete information of motion-induced Doppler shifts with only commodity Wi-Fi. The key insight is to harness antenna diversity to carefully eliminate random phase shifts while retaining relevant Doppler shifts. We further correlate Doppler shifts with motion directions, and propose a light-weight pipeline to detect, segment, and recognize motions without training. On this basis, we present WiDance, a Wi-Fi-based user interface, which we utilize to design and prototype a contactless dance-pad exergame. Experimental results in a typical indoor environment demonstrate superior performance with an accuracy of 92%, remarkably outperforming prior approaches.
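A back-of-the-envelope calculation shows why motion-induced Doppler shifts are observable on a Wi-Fi channel at all. The sketch below simplifies the bistatic reflection geometry to the monostatic case (factor 2); the speed and carrier values are illustrative, not from the paper.

```python
# Approximate Doppler shift of a Wi-Fi signal reflected off a moving body part.

C = 3e8  # speed of light, m/s

def doppler_shift_hz(speed_mps, carrier_hz=5.825e9):
    """Doppler shift of the reflected path when the reflector moves
    toward the transceiver at speed_mps (monostatic approximation)."""
    return 2.0 * speed_mps * carrier_hz / C

# A hand gesture at roughly 0.5 m/s on a 5.825 GHz channel:
shift = doppler_shift_hz(0.5)
print(round(shift, 1))  # ~19.4 Hz
```

Shifts of tens of hertz are tiny relative to the carrier, which is why the paper's careful elimination of random phase noise across antennas matters.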

197 citations


Proceedings ArticleDOI
TL;DR: To assess the Microsoft HoloLens’ potential for delivering AR assembly instructions, the cross-platform Unity 3D game engine was used to build a proof of concept application and showed that while the HoloLens is a promising system, there are still areas that require improvement, such as tracking accuracy, before the device is ready for deployment in a factory assembly setting.
Abstract: Industry and academia have repeatedly demonstrated the transformative potential of Augmented Reality (AR) guided assembly instructions. In the past, however, computational and hardware limitations often dictated that these systems were deployed on tablets or other cumbersome devices. Often, tablets impede worker progress by diverting a user's hands and attention, forcing them to alternate between the instructions and the assembly process. Head Mounted Displays (HMDs) overcome those diversions by allowing users to view the instructions in a hands-free manner while simultaneously performing an assembly operation. Thanks to rapid technological advances, wireless commodity AR HMDs are becoming commercially available. Specifically, the pioneering Microsoft HoloLens, provides an opportunity to explore a hands-free HMD’s ability to deliver AR assembly instructions and what a user interface looks like for such an application. Such an exploration is necessary because it is not certain how previous research on user interfaces will transfer to the HoloLens or other new commodity HMDs. In addition, while new HMD technology is promising, its ability to deliver a robust AR assembly experience is still unknown. To assess the HoloLens’ potential for delivering AR assembly instructions, the cross-platform Unity 3D game engine was used to build a proof of concept application. Features focused upon when building the prototype were: user interfaces, dynamic 3D assembly instructions, and spatially registered content placement. The research showed that while the HoloLens is a promising system, there are still areas that require improvement, such as tracking accuracy, before the device is ready for deployment in a factory assembly setting.

154 citations


Proceedings ArticleDOI
01 Sep 2017
TL;DR: NeuroNER as mentioned in this paper is an easy-to-use named entity recognition tool based on ANNs, where users can annotate entities using a graphical web-based user interface (BRAT).
Abstract: Named-entity recognition (NER) aims at identifying entities of interest in a text. Artificial neural networks (ANNs) have recently been shown to outperform existing NER systems. However, ANNs remain challenging to use for non-expert users. In this paper, we present NeuroNER, an easy-to-use named-entity recognition tool based on ANNs. Users can annotate entities using a graphical web-based user interface (BRAT); the annotations are then used to train an ANN, which in turn predicts entities’ locations and categories in new texts. NeuroNER makes this annotation-training-prediction flow smooth and accessible to anyone.
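The prediction end of the annotation-training-prediction flow produces per-token labels. The sketch below shows the standard step of decoding BIO tags into entity spans; this is the generic NER convention, not NeuroNER-specific code.

```python
# Decode BIO-tagged tokens (B-X begins an entity of type X, I-X continues it,
# O is outside any entity) into (entity_type, text) spans.

def bio_to_spans(tokens, tags):
    spans, current, ctype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append((ctype, " ".join(current)))
            current, ctype = [tok], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == ctype:
            current.append(tok)
        else:
            if current:
                spans.append((ctype, " ".join(current)))
            current, ctype = [], None
    if current:  # flush a span that runs to the end of the sentence
        spans.append((ctype, " ".join(current)))
    return spans

tokens = ["Barack", "Obama", "visited", "Paris", "."]
tags = ["B-PER", "I-PER", "O", "B-LOC", "O"]
print(bio_to_spans(tokens, tags))  # [('PER', 'Barack Obama'), ('LOC', 'Paris')]
```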

141 citations


Proceedings ArticleDOI
10 Jun 2017
TL;DR: Tiles Cards, a set of 110 design cards and a workshop technique to involve non-experts in quick idea generation for augmented objects, aims to support exploring combinations of user interface metaphors, digital services, and physical objects.
Abstract: The Internet of Things (IoT) offers new opportunities to invent technology-augmented things that are more useful, efficient, or playful than their ordinary selves, yet only a few tools currently support ideation for the IoT. In this paper we present Tiles Cards, a set of 110 design cards and a workshop technique to involve non-experts in quick idea generation for augmented objects. Our tool aims to support exploring combinations of user interface metaphors, digital services, and physical objects. It then supports creative thinking through provocative design goals inspired by human values and desires. Finally, it provides critical lenses through which to analyze and judge design outcomes. We evaluated our tool in 9 ideation workshops with a total of 32 participants. Results show that the tool was useful in informing and guiding idea generation and was perceived as appealing and fun. Drawing on observations and participant feedback, we reflect on the strengths and limitations of this tool.

Journal ArticleDOI
19 Sep 2017
TL;DR: This work provides a comprehensive overview of the existing literature on user interaction aspects in recommender systems, covering existing approaches for preference elicitation and result presentation, as well as proposals that consider recommendation as an interactive process.
Abstract: Automated recommendations have become a ubiquitous part of today’s online user experience. These systems point us to additional items to purchase in online shops, they make suggestions to us on movies to watch, or recommend people to connect with on social websites. In many of today’s applications, however, the only way for users to interact with the system is to inspect the recommended items. Often, no mechanisms are implemented for users to give the system feedback on the recommendations or to explicitly specify preferences, which can limit the potential overall value of the system for its users. Academic research in recommender systems is largely focused on algorithmic approaches for item selection and ranking. Nonetheless, over the years a variety of proposals were made on how to design more interactive recommenders. This work provides a comprehensive overview of the existing literature on user interaction aspects in recommender systems. We cover existing approaches for preference elicitation and result presentation, as well as proposals that consider recommendation as an interactive process. Throughout the work, we furthermore discuss examples of real-world systems and outline possible directions for future work.
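The "recommendation as an interactive process" idea can be sketched minimally: a user's explicit feedback on one item updates a preference profile, which re-ranks the remaining items. The items, features, and update rule below are made-up illustrations of the concept, not any system from the survey.

```python
# Toy interactive recommender: explicit feedback nudges a content-based
# profile, which re-ranks the catalog. Features are hypothetical genre flags.

items = {
    "movie_a": [1.0, 0.0, 1.0],
    "movie_b": [0.0, 1.0, 0.0],
    "movie_c": [1.0, 1.0, 0.0],
}

def score(profile, features):
    # Dot product between the user profile and item features.
    return sum(p * f for p, f in zip(profile, features))

def give_feedback(profile, features, liked, rate=0.5):
    """Nudge the profile toward (liked) or away from (disliked) an item."""
    sign = 1.0 if liked else -1.0
    return [p + sign * rate * f for p, f in zip(profile, features)]

profile = [0.0, 0.0, 0.0]
profile = give_feedback(profile, items["movie_a"], liked=True)

remaining = [name for name in items if name != "movie_a"]
ranking = sorted(remaining, key=lambda n: score(profile, items[n]), reverse=True)
print(ranking[0])  # the item sharing a feature with the liked one ranks first
```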

Proceedings ArticleDOI
02 May 2017
TL;DR: This lab study suggests that users with little or no programming knowledge can successfully automate smartphone tasks using SUGILITE.
Abstract: SUGILITE is a new programming-by-demonstration (PBD) system that enables users to create automation on smartphones. SUGILITE uses Android's accessibility API to support automating arbitrary tasks in any Android app (or even across multiple apps). When the user gives verbal commands that SUGILITE does not know how to execute, the user can demonstrate by directly manipulating the regular apps' user interface. By leveraging the verbal instructions, the demonstrated procedures, and the apps’ UI hierarchy structures, SUGILITE can automatically generalize the script from the recorded actions, so SUGILITE learns how to perform tasks with different variations and parameters from a single demonstration. Extensive error handling and context checking support forking the script when new situations are encountered, and provide robustness if the apps change their user interface. Our lab study suggests that users with little or no programming knowledge can successfully automate smartphone tasks using SUGILITE.

Journal ArticleDOI
TL;DR: This article provides a systematic literature review of the existing studies on mobile UI design patterns to give an overview of recent studies on the mobile designs and provides an analysis on what topics or areas have insufficient information and what factors are concentrated upon.
Abstract: Mobile platforms have called for attention from HCI practitioners, and, ever since 2007, touchscreens have completely changed mobile user interface and interaction design. Some notable differences between mobile devices and desktops include the lack of tactile feedback, ubiquity, limited screen size, small virtual keys, and high demand of visual attention. These differences have caused unprecedented challenges to users. Most mobile user interface designs are based on the desktop paradigm, but desktop designs do not fully fit the mobile context. Although mobile devices are becoming an indispensable part of daily lives, true standards for mobile UI design patterns do not exist. This article provides a systematic literature review of the existing studies on mobile UI design patterns. The first objective is to give an overview of recent studies on mobile designs. The second objective is to provide an analysis of what topics or areas have insufficient information and what factors are concentrated upon. This article will benefit the HCI community by providing an overview of present works and helping to shape future research directions.

Journal Article
TL;DR: A systematic literature review is conducted and a comprehensive overview of relevant psychological effects and exemplary nudges in the physical and digital sphere are provided to provide a valuable basis for researchers and practitioners that aim to study or design information systems and interventions that assist user decision making on screens.
Abstract: Individuals make increasingly more decisions on screens, such as those on websites or mobile apps. However, the nature of screens and the vast amount of information available online make individuals particularly prone to deficient decisions. Digital nudging is an approach based on insights from behavioral economics that applies user interface (UI) design elements to affect the choices of users in digital environments. UI design elements include graphic design, specific content, wording or small features. To date, little is known about the psychological mechanisms that underlie digital nudging. To address this research gap, we conducted a systematic literature review and provide a comprehensive overview of relevant psychological effects and exemplary nudges in the physical and digital sphere. These insights serve as a valuable basis for researchers and practitioners that aim to study or design information systems and interventions that assist user decision making on screens.

Journal ArticleDOI
TL;DR: Qudi is a general, modular, multi-operating system suite written in Python 3 for controlling laboratory experiments that provides a structured environment by separating functionality into hardware abstraction, experiment logic and user interface layers.

Journal ArticleDOI
TL;DR: A thorough analysis of the architectural design of an intelligent operational system is completed to present a smart solution for cities to unify departments and agencies under one umbrella.

Journal ArticleDOI
TL;DR: A survey on hand posture and gesture recognition is presented, with a detailed comparative analysis of the hidden Markov model approach against other classifier techniques; difficulties and future research directions are also examined.
Abstract: Motion recognition is a topic in computer science and language technology with the goal of interpreting human gestures through mathematical algorithms. Hand gesture is a strategy for nonverbal communication, as the hands express more freely than other body parts. Hand gesture recognition has particular significance in designing an efficient human-computer interaction framework, using gestures as a natural interface suited to the context of movement. Moreover, the identification and recognition of posture, gait, proxemics, and human behaviors is also a subject of motion recognition, toward understanding human nonverbal communication and thus building a richer bridge between machines and humans than primitive text user interfaces or even graphical user interfaces, which still limit the majority of input to electronic gadgets. In this paper, a study of various motion recognition methodologies is given, with specific emphasis on hand motions. A survey of hand posture and gesture recognition is presented, with a detailed comparative analysis of the hidden Markov model approach against other classifier techniques. Difficulties and future research directions are also examined.
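The hidden Markov model approach the survey compares rests on one core operation: scoring an observation sequence under each gesture's HMM and picking the model with the highest likelihood. Below is a minimal forward-algorithm sketch; the two-state model and all probabilities are made up for illustration.

```python
# Forward algorithm: P(observations | HMM), the quantity used to classify a
# gesture as the HMM under which its feature sequence is most likely.

def forward_likelihood(obs, start, trans, emit):
    """start[i]: initial state probabilities; trans[i][j]: transition
    probabilities; emit[i][o]: probability of emitting symbol o in state i."""
    alpha = [start[i] * emit[i][obs[0]] for i in range(len(start))]
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * trans[i][j] for i in range(len(alpha))) * emit[j][o]
            for j in range(len(start))
        ]
    return sum(alpha)

# Toy 2-state model over a binary observation alphabet {0, 1}:
start = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit = [[0.9, 0.1], [0.2, 0.8]]  # state 0 mostly emits 0, state 1 mostly 1

print(forward_likelihood([0, 0, 1], start, trans, emit))
```

In a gesture classifier, one such model would be trained per gesture class, and an incoming sequence is assigned to the class whose model yields the highest likelihood.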

Patent
Amir Hoffnung1, Micha Galor1, Jonathan Pokrass1, Roee Shenberg1, Shlomo Zippel1 
16 Feb 2017
TL;DR: A gesture based user interface includes a movement monitor configured to monitor a user's hand and to provide a signal based on movements of the hand as discussed by the authors, where a processor is configured to provide at least one interface state in which a cursor is confined to movement within a single dimension region responsive to the signal from the movement monitor.
Abstract: A gesture based user interface includes a movement monitor configured to monitor a user's hand and to provide a signal based on movements of the hand. A processor is configured to provide at least one interface state in which a cursor is confined to movement within a single dimension region responsive to the signal from the movement monitor, and to actuate different commands responsive to the signal from the movement monitor and the location of the cursor in the single dimension region.
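The idea of confining a cursor to a single-dimension region can be sketched by projecting the tracked 2D hand position onto a 1D track and mapping the position along the track to a command. The geometry and command names below are illustrative assumptions, not details from the patent.

```python
# Confine a gesture cursor to a 1D track and map its position to a command.

def confine_to_track(hand_xy, track_start, track_end):
    """Project a 2D hand position onto the segment track_start -> track_end,
    returning a clamped parameter t in [0, 1] along the 1D region."""
    (hx, hy), (ax, ay), (bx, by) = hand_xy, track_start, track_end
    dx, dy = bx - ax, by - ay
    t = ((hx - ax) * dx + (hy - ay) * dy) / (dx * dx + dy * dy)
    return max(0.0, min(1.0, t))

def command_for(t, commands):
    """Divide the track into equal segments, one per command."""
    idx = min(int(t * len(commands)), len(commands) - 1)
    return commands[idx]

cmds = ["volume-down", "mute", "volume-up"]  # hypothetical commands
t = confine_to_track((0.8, 0.3), (0.0, 0.0), (1.0, 0.0))
print(t, command_for(t, cmds))  # vertical hand motion is ignored entirely
```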

Journal ArticleDOI
11 May 2017-PLOS ONE
TL;DR: Shinyheatmap as mentioned in this paper is a low memory footprint program, making it particularly well-suited for the interactive visualization of extremely large datasets that cannot typically be computed in-memory due to size restrictions.
Abstract: Background Transcriptomics, metabolomics, metagenomics, and other various next-generation sequencing (-omics) fields are known for their production of large datasets, especially across single-cell sequencing studies. Visualizing such big data has posed technical challenges in biology, both in terms of available computational resources as well as programming acumen. Since heatmaps are used to depict high-dimensional numerical data as a colored grid of cells, efficiency and speed have often proven to be critical considerations in the process of successfully converting data into graphics. For example, rendering interactive heatmaps from large input datasets (e.g., 100k+ rows) has been computationally infeasible on both desktop computers and web browsers. In addition to memory requirements, programming skills and knowledge have frequently been barriers to entry for creating highly customizable heatmaps. Results We propose shinyheatmap: an advanced user-friendly heatmap software suite capable of efficiently creating highly customizable static and interactive biological heatmaps in a web browser. shinyheatmap is a low memory footprint program, making it particularly well-suited for the interactive visualization of extremely large datasets that cannot typically be computed in-memory due to size restrictions. Also, shinyheatmap features a built-in high performance web plug-in, fastheatmap, for rapidly plotting interactive heatmaps of datasets as large as 10^5–10^7 rows within seconds, effectively shattering previous performance benchmarks of heatmap rendering speed. Conclusions shinyheatmap is hosted online as a freely available web server with an intuitive graphical user interface: http://shinyheatmap.com. The methods are implemented in R, and are available as part of the shinyheatmap project at: https://github.com/Bohdan-Khomtchouk/shinyheatmap.
Users can access fastheatmap directly from within the shinyheatmap web interface, and all source code has been made publicly available on Github: https://github.com/Bohdan-Khomtchouk/fastheatmap.
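One common trick behind rendering heatmaps of 10^5+ rows interactively is to aggregate rows into a bounded number of display bins before drawing, since a screen cannot show more rows than it has pixels anyway. The sketch below shows row binning by mean; it is an assumption about the general technique, not shinyheatmap's actual implementation (which is in R).

```python
# Collapse a tall numeric matrix to at most max_bins rows by averaging each
# contiguous bin of rows, so rendering cost depends on bins, not input size.

def bin_rows(matrix, max_bins):
    n = len(matrix)
    if n <= max_bins:
        return [row[:] for row in matrix]
    cols = len(matrix[0])
    binned = []
    for b in range(max_bins):
        lo = b * n // max_bins       # integer bin boundaries cover all rows
        hi = (b + 1) * n // max_bins
        binned.append([
            sum(matrix[r][c] for r in range(lo, hi)) / (hi - lo)
            for c in range(cols)
        ])
    return binned

# 100,000 synthetic rows collapse to 500 drawable rows:
data = [[float(r % 10), float(r % 3)] for r in range(100000)]
small = bin_rows(data, 500)
print(len(small), len(small[0]))  # 500 2
```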

Journal ArticleDOI
TL;DR: A subjective evaluation of AEPS’s effectiveness as an educational tool shows that the proposed platform not only promotes the students’ learning interest and practical ability but also consolidates their understanding and impression of theoretical concepts.
Abstract: With the purpose of further mastering and grasping the course of speech signal processing, a novel Android-based, mobile-assisted educational platform (AEPS) is proposed in this paper. The goal of this work was to design AEPS as an educational signal-processing auxiliary system by simulating signal analysis methods commonly used in speech signal processing and bridging the gap for transition from undergraduate study to industry practice or academic research. The educational platform is presented in a highly intuitive, easy-to-interpret and strongly maneuverable graphical user interface. It also has the characteristics of high portability, strong affordability, and easy adoptability for application extension and popularization. Through adequate intuitive user interface, rich visual information, and extensive hands-on experiences, it greatly facilitates students in authentic, interactive, and creative learning. This paper details a subjective evaluation of AEPS’s effectiveness as an educational tool. The re...

Book ChapterDOI
22 Apr 2017
TL;DR: The Jani model format and tool interaction protocol is a metamodel based on networks of communicating automata and has been designed for ease of implementation without sacrificing readability, to provide a stable and uniform interface between tools such as model checkers, transformers, and user interfaces.
Abstract: The formal analysis of critical systems is supported by a vast space of modelling formalisms and tools. The variety of incompatible formats and tools however poses a significant challenge to practical adoption as well as continued research. In this paper, we propose the Jani model format and tool interaction protocol. The format is a metamodel based on networks of communicating automata and has been designed for ease of implementation without sacrificing readability. The purpose of the protocol is to provide a stable and uniform interface between tools such as model checkers, transformers, and user interfaces. Jani uses the Json data format, inheriting its ease of use and inherent extensibility. Jani initially targets, but is not limited to, quantitative model checking. Several existing tools now support the verification of Jani models, and automatic converters from a diverse set of higher-level modelling languages have been implemented. The ultimate purpose of Jani is to simplify tool development, encourage research cooperation, and pave the way towards a future competition in quantitative model checking.

Proceedings ArticleDOI
02 May 2017
TL;DR: GazeSpeak is an eye gesture communication system that runs on a smartphone, and is designed to be low-cost, robust, portable, and easy-to-learn, with a higher communication bandwidth than an e-tran board.
Abstract: Current eye-tracking input systems for people with ALS or other motor impairments are expensive, not robust under sunlight, and require frequent re-calibration and substantial, relatively immobile setups. Eye-gaze transfer (e-tran) boards, a low-tech alternative, are challenging to master and offer slow communication rates. To mitigate the drawbacks of these two status quo approaches, we created GazeSpeak, an eye gesture communication system that runs on a smartphone, and is designed to be low-cost, robust, portable, and easy-to-learn, with a higher communication bandwidth than an e-tran board. GazeSpeak can interpret eye gestures in real time, decode these gestures into predicted utterances, and facilitate communication, with different user interfaces for speakers and interpreters. Our evaluations demonstrate that GazeSpeak is robust, has good user satisfaction, and provides a speed improvement with respect to an e-tran board; we also identify avenues for further improvement to low-cost, low-effort gaze-based communication technologies.
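Decoding eye gestures into letters can be sketched with an e-tran-style scheme in which the first gesture picks a letter group and the second picks a letter within it. The grouping and two-gesture protocol below are illustrative simplifications, not GazeSpeak's actual design.

```python
# Toy eye-gesture speller: each (group_gesture, index_gesture) pair of gaze
# directions selects one letter. Groups and ordering are made-up assumptions.

GROUPS = {
    "up": "ABCDEF",
    "right": "GHIJKL",
    "down": "MNOPQR",
    "left": "STUVWXYZ",
}
ORDER = ["up", "right", "down", "left"]  # maps the second gesture to an index

def decode(gesture_pairs):
    out = []
    for group_g, index_g in gesture_pairs:
        letters = GROUPS[group_g]
        out.append(letters[ORDER.index(index_g)])
    return "".join(out)

print(decode([("right", "down"), ("up", "up")]))  # 'IA'
```

A real system layers prediction on top of this, ranking likely utterances rather than forcing letter-perfect input; with only four index gestures this toy scheme can reach just four letters per group.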

Journal ArticleDOI
TL;DR: Maplab as discussed by the authors is an open, research-oriented visual-inertial mapping framework for processing and manipulating multi-session maps, written in C++, which includes a collection of multisession mapping tools that include map merging, visual inertial batch optimization, and loop closure.
Abstract: Robust and accurate visual-inertial estimation is crucial to many of today's challenges in robotics. Being able to localize against a prior map and obtain accurate and drift-free pose estimates can push the applicability of such systems even further. Most of the currently available solutions, however, either focus on a single-session use case, lack localization capabilities, or lack an end-to-end pipeline. We believe that only a complete system, combining state-of-the-art algorithms, scalable multi-session mapping tools, and a flexible user interface, can become an efficient research platform. We therefore present maplab, an open, research-oriented visual-inertial mapping framework for processing and manipulating multi-session maps, written in C++. On the one hand, maplab can be seen as a ready-to-use visual-inertial mapping and localization system. On the other hand, maplab provides the research community with a collection of multi-session mapping tools that include map merging, visual-inertial batch optimization, and loop closure. Furthermore, it includes an online frontend that can create visual-inertial maps and also track a global drift-free pose within a localization map. In this paper, we present the system architecture, five use-cases, and evaluations of the system on public datasets. The source code of maplab is freely available for the benefit of the robotics research community.
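One ingredient of multi-session map merging is rigidly aligning a new session's trajectory to the prior map from loop-closure correspondences. The toy below does this in 2D with a closed-form least-squares (Kabsch-style) fit; maplab itself works in 6-DoF and refines such an initial alignment with visual-inertial batch optimization, so this is a generic illustration, not maplab's API.

```python
import math

# Rigid 2D alignment (rotation + translation) of matched points,
# the kind of initial guess a map-merging step needs before joint
# optimization. Pure-Python Kabsch in the plane.

def align_2d(ref_pts, new_pts):
    """Least-squares rotation and translation mapping new_pts onto ref_pts."""
    n = len(ref_pts)
    rcx = sum(p[0] for p in ref_pts) / n
    rcy = sum(p[1] for p in ref_pts) / n
    ncx = sum(p[0] for p in new_pts) / n
    ncy = sum(p[1] for p in new_pts) / n
    s_cos = s_sin = 0.0
    for (rx, ry), (nx, ny) in zip(ref_pts, new_pts):
        ax, ay = nx - ncx, ny - ncy   # centered new point
        bx, by = rx - rcx, ry - rcy   # centered reference point
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    theta = math.atan2(s_sin, s_cos)
    c, s = math.cos(theta), math.sin(theta)
    tx = rcx - (c * ncx - s * ncy)
    ty = rcy - (s * ncx + c * ncy)
    return theta, (tx, ty)

def transform(pt, theta, t):
    c, s = math.cos(theta), math.sin(theta)
    return (c * pt[0] - s * pt[1] + t[0], s * pt[0] + c * pt[1] + t[1])

# The new session saw the same three landmarks, rotated 90 degrees and shifted.
ref = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
new = [transform(p, math.pi / 2, (2.0, -1.0)) for p in ref]
theta, t = align_2d(ref, new)
merged = [transform(p, theta, t) for p in new]  # new session in the prior map frame
```

In a real pipeline the correspondences come from visual loop-closure matches, and the aligned maps are then jointly refined rather than merged as-is.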

Journal ArticleDOI
TL;DR: Results suggest that the type of user interface does have an impact on children’s learning, but it is only one of many factors that affect positive academic and socio-emotional experiences.
Abstract: Aim/Purpose: Over the past few years, new approaches to introducing young children to computational thinking have grown in popularity. This paper examines the role that user interfaces have on children’s mastery of computational thinking concepts and positive interpersonal behaviors. Background: There is growing pressure to begin teaching computational thinking at a young age. This study explores the affordances of two very different programming interfaces for teaching computational thinking: a graphical coding application on the iPad (ScratchJr) and a tangible programmable robotics kit (KIBO). Methodology: This study used a mixed-method approach to explore the learning experiences that young children have with tangible and graphical coding interfaces. A sample of children ages four to seven (N = 28) participated. Findings: Results suggest that the type of user interface does have an impact on children’s learning, but it is only one of many factors that affect positive academic and socio-emotional experiences. Tangible and graphical interfaces each have qualities that foster different types of learning.

Proceedings ArticleDOI
01 Nov 2017
TL;DR: Transkribus is a comprehensive platform for the computer-aided transcription, recognition, and retrieval of digitized historical documents; its main user interface is an open-source desktop application for segmenting document images, adding transcriptions, and tagging entities within them.
Abstract: Transkribus is a comprehensive platform for the computer-aided transcription, recognition, and retrieval of digitized historical documents. The main user interface is provided via an open-source desktop application that incorporates means to segment document images, to add transcriptions, and to tag entities within them. The desktop application can connect to the platform's backend, which implements a document management system as well as several tools for document image analysis, such as layout analysis and automatic/handwritten text recognition (ATR/HTR). Access to documents uploaded to the platform may be granted to other users in order to collaborate on the transcription and to share results.

Journal ArticleDOI
TL;DR: This work proposes a novel interaction technique called TopicLens that allows a user to dynamically explore data through a lens interface where topic modeling and the corresponding 2D embedding are efficiently computed on the fly.
Abstract: Topic modeling, which reveals underlying topics of a document corpus, has been actively adopted in visual analytics for large-scale document collections. However, due to its significant processing time and non-interactive nature, topic modeling has so far not been tightly integrated into a visual analytics workflow. Instead, most such systems are limited to utilizing a fixed, initial set of topics. Motivated by this gap in the literature, we propose a novel interaction technique called TopicLens that allows a user to dynamically explore data through a lens interface where topic modeling and the corresponding 2D embedding are efficiently computed on the fly. To support this interaction in real time while maintaining view consistency, we propose a novel efficient topic modeling method and a semi-supervised 2D embedding algorithm. Our work is based on improving state-of-the-art methods such as nonnegative matrix factorization and t-distributed stochastic neighbor embedding. Furthermore, we have built a web-based visual analytics system integrated with TopicLens. We use this system to measure the performance and the visualization quality of our proposed methods. We provide several scenarios showcasing the capability of TopicLens using real-world datasets.
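At the core of such on-the-fly topic modeling is a fast NMF solver over the term-document matrix of the documents currently under the lens. The from-scratch sketch below runs plain multiplicative updates on a toy matrix with two obvious topics; the paper's actual method adds substantial refinements for speed and view consistency, so this only illustrates the baseline technique.

```python
import random

# Rank-k nonnegative matrix factorization V ~ W*H via multiplicative
# updates, in pure Python. W's columns act as topics; H gives the
# topic mixture of each document (column of V).

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def frob_err(V, W, H):
    R = matmul(W, H)
    return sum((V[i][j] - R[i][j]) ** 2
               for i in range(len(V)) for j in range(len(V[0])))

def nmf(V, k, iters=500, seed=0):
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(k)]
    eps = 1e-9
    for _ in range(iters):
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)] for i in range(k)]
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)] for i in range(m)]
    return W, H

# Tiny term-document matrix: terms 0-1 co-occur, as do terms 2-3.
V = [[2, 4, 0, 0], [1, 2, 0, 0], [0, 0, 3, 1], [0, 0, 6, 2]]
best = None
for seed in range(3):  # a few restarts to dodge poor local minima
    W, H = nmf(V, k=2, seed=seed)
    e = frob_err(V, W, H)
    if best is None or e < best[0]:
        best = (e, W, H)
err, W, H = best
```

Interactive use as in TopicLens would rerun such a solver only on the documents inside the lens, warm-starting from the previous factors to keep latency low.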

Patent
04 Jan 2017
TL;DR: The systems and methods described in this article provide highly dynamic and interactive data analysis user interfaces that enable data analysts to quickly and efficiently explore large-volume data sources by applying filters, joining other tables in a database, and viewing interactive data visualizations.
Abstract: The systems and methods described herein provide highly dynamic and interactive data analysis user interfaces which enable data analysts to quickly and efficiently explore large volume data sources. In particular, a data analysis system, such as described herein, may provide features to enable the data analyst to investigate large volumes of data over many different paths of analysis while maintaining detailed and retraceable steps taken by the data analyst over the course of an investigation, as captured via the data analyst's queries and user interaction with the user interfaces provided by the data analysis system. Data analysis paths may involve exploration of high volume data sets, such as Internet proxy data, which may include trillions of rows of data. The data analyst may pursue a data analysis path that involves, among other things, applying filters, joining to other tables in a database, viewing interactive data visualizations, and so on.
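The "detailed and retraceable steps" can be pictured as a tree of analysis actions, where branching creates a new path of analysis while earlier paths remain replayable and auditable. The class and step names below are invented for illustration and are not taken from the patent.

```python
# A toy analysis-history tree: each action (filter, join, visualize)
# becomes a node, and any node can later reproduce the full path of
# steps that led to it.
class AnalysisStep:
    def __init__(self, description, parent=None):
        self.description = description
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def path(self):
        """All step descriptions from the root to this node, in order."""
        steps, node = [], self
        while node is not None:
            steps.append(node.description)
            node = node.parent
        return list(reversed(steps))

root = AnalysisStep("load proxy_logs")
f1 = AnalysisStep("filter status == 403", root)
j1 = AnalysisStep("join users on user_id", f1)
# Branch: return to the filter step and pursue a different direction.
v1 = AnalysisStep("visualize requests per hour", f1)

print(j1.path())
```

Recording queries and interactions this way is what lets an investigation over trillions of rows be retraced step by step, rather than reconstructed from memory.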

Journal ArticleDOI
TL;DR: Existing work on declarative specifications and user interfaces for visualization construction is reviewed, summarizing how each tool produces information visualizations and improves usability, and organizing the tools into a design space that describes them along several dimensions.
Abstract: Information visualization has been widely used to convey information from data and to assist communication. There is an enormous need for efficient visualization design so that users from diverse fields can leverage the power of data. As a result, emerging construction tools for information visualization focus on providing solutions along different dimensions, including expressiveness, accessibility, and efficiency. In this paper, we review existing work on declarative specifications and user interfaces for visualization construction. By summarizing their methods for producing information visualizations and their efforts to improve usability, we express the design patterns as a design space that describes the tools along several dimensions. We discuss how this design space can support further exploration of potential research topics in the future.
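A declarative specification, the first family of tools such a survey covers, states *what* to show and leaves the *how* to the toolkit. The fragment below is written in the style of Vega-Lite's JSON grammar as an illustration; it is hand-written here and has not been validated against the actual Vega-Lite schema.

```python
import json

# A declarative bar-chart specification, Vega-Lite style: the author
# names data fields and encoding channels; layout, scales, and axes
# are derived by the toolkit.
spec = {
    "data": {"values": [
        {"category": "A", "count": 28},
        {"category": "B", "count": 55},
        {"category": "C", "count": 43},
    ]},
    "mark": "bar",
    "encoding": {
        "x": {"field": "category", "type": "nominal"},
        "y": {"field": "count", "type": "quantitative"},
    },
}

text = json.dumps(spec, indent=2)
print(text)
```

The design-space framing follows naturally from such specs: expressiveness is what the grammar can state, accessibility is how easily a non-programmer can author it, and efficiency is how quickly the spec turns into pixels.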

Patent
02 Nov 2017
TL;DR: The present disclosure relates to an information display method and device. The method includes displaying a user interface of an application that contains at least one picture, receiving a selection operation triggered on the user interface to select a target picture, acquiring search result information corresponding to the content of the target picture, and displaying that information.
Abstract: The present disclosure relates to an information display method and device. The method includes: displaying a user interface of an application, where the user interface includes at least one picture; receiving a selection operation triggered on the user interface, where the selection operation is configured to select a target picture from the at least one picture; acquiring search result information corresponding to content of the target picture; and displaying the search result information.