Showing papers on "User interface published in 2018"


Journal ArticleDOI
TL;DR: An extensive review of human–robot collaboration in industrial environments is provided, with specific focus on issues related to physical and cognitive interaction, and commercially available solutions are presented.

632 citations


Journal ArticleDOI
TL;DR: DGIdb v3.0 has received a major overhaul of its codebase, including an updated user interface, preset interaction search filters, consolidation of interaction information into interaction groups, greatly improved search response times, and an upgraded underlying web application framework.
Abstract: The drug-gene interaction database (DGIdb, www.dgidb.org) consolidates, organizes and presents drug-gene interactions and gene druggability information from papers, databases and web resources. DGIdb normalizes content from 30 disparate sources and allows for user-friendly advanced browsing, searching and filtering for ease of access through an intuitive web user interface, application programming interface (API) and public cloud-based server image. DGIdb v3.0 represents a major update of the database. Nine of the previously included 24 sources were updated. Six new resources were added, bringing the total number of sources to 30. These updates and additions of sources have cumulatively resulted in 56 309 interaction claims. This has also substantially expanded the comprehensive catalogue of druggable genes and anti-neoplastic drug-gene interactions included in the DGIdb. Along with these content updates, v3.0 has received a major overhaul of its codebase, including an updated user interface, preset interaction search filters, consolidation of interaction information into interaction groups, greatly improved search response times, and an upgraded underlying web application framework. In addition, the expanded API features new endpoints which allow users to extract more detailed information about queried drugs, genes and drug-gene interactions, including listings of PubMed IDs, interaction type and other interaction metadata.
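
As a concrete illustration of the API access described above, the sketch below queries drug-gene interactions for a couple of genes. The endpoint path and JSON field names follow DGIdb's public v2 API conventions but should be treated as assumptions; see dgidb.org for the authoritative documentation.

```python
# Hypothetical sketch of a DGIdb API query; the endpoint path and
# JSON field names are assumptions based on the public v2 conventions.
import requests

resp = requests.get(
    "https://dgidb.org/api/v2/interactions.json",
    params={"genes": "BRAF,EGFR"},  # comma-separated gene symbols
    timeout=30,
)
resp.raise_for_status()

for term in resp.json().get("matchedTerms", []):
    for interaction in term.get("interactions", []):
        # drugName / interactionTypes / pmids are assumed field names.
        print(term.get("geneName"),
              interaction.get("drugName"),
              interaction.get("interactionTypes"),
              interaction.get("pmids"))
```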

605 citations


Proceedings ArticleDOI
21 Apr 2018
TL;DR: This study documents the methodical practices of VUI users and how that use is accomplished in the complex social life of the home, raising conceptual challenges to the notion of designing 'conversational' interfaces.
Abstract: Voice User Interfaces (VUIs) are becoming ubiquitously available, being embedded both into everyday mobility via smartphones, and into the life of the home via 'assistant' devices. Yet, exactly how users of such devices practically thread that use into their everyday social interactions remains underexplored. By collecting and studying audio data from month-long deployments of the Amazon Echo in participants' homes-informed by ethnomethodology and conversation analysis-our study documents the methodical practices of VUI users, and how that use is accomplished in the complex social life of the home. Data we present shows how the device is made accountable to and embedded into conversational settings like family dinners where various simultaneous activities are being achieved. We discuss how the VUI is finely coordinated with the sequential organisation of talk. Finally, we locate implications for the accountability of VUI interaction, request and response design, and raise conceptual challenges to the notion of designing 'conversational' interfaces.

455 citations


Journal ArticleDOI
TL;DR: The networking of EDTs with real assets leads to hybrid application scenarios in which EDTs are used in combination with real hardware, thus realizing complex control algorithms, innovative user interfaces, or mental models for intelligent systems.
Abstract: Digital twins represent real objects or subjects with their data, functions, and communication capabilities in the digital world. As nodes within the internet of things, they enable networking and thus the automation of complex value-added chains. The application of simulation techniques brings digital twins to life and makes them experimentable; digital twins become experimentable digital twins (EDTs). Initially, these EDTs communicate with each other purely in the virtual world. The resulting networks of interacting EDTs model different application scenarios and are simulated in virtual testbeds, providing new foundations for comprehensive simulation-based systems engineering. Its focus is on EDTs, which become more detailed with every single application. Thus, complete digital representations of the respective real assets and their behaviors are created successively. The networking of EDTs with real assets leads to hybrid application scenarios in which EDTs are used in combination with real hardware, thus realizing complex control algorithms, innovative user interfaces, or mental models for intelligent systems.

298 citations


Journal ArticleDOI
13 Jun 2018
TL;DR: A structural and behavioural model of a generalised IML system is proposed, solution principles for building effective interfaces for IML are identified, and strands of user interface research key to unlocking more efficient and productive non-expert interactive machine learning applications are highlighted.
Abstract: Interactive Machine Learning (IML) seeks to complement human perception and intelligence by tightly integrating these strengths with the computational power and speed of computers. The interactive process is designed to involve input from the user but does not require the background knowledge or experience that might be necessary to work with more traditional machine learning techniques. Under the IML process, non-experts can apply their domain knowledge and insight over otherwise unwieldy datasets to find patterns of interest or develop complex data-driven applications. This process is co-adaptive in nature and relies on careful management of the interaction between human and machine. User interface design is fundamental to the success of this approach, yet there is a lack of consolidated principles on how such an interface should be implemented. This article presents a detailed review and characterisation of Interactive Machine Learning from an interactive systems perspective. We propose and describe a structural and behavioural model of a generalised IML system and identify solution principles for building effective interfaces for IML. Where possible, these emergent solution principles are contextualised by reference to the broader human-computer interaction literature. Finally, we identify strands of user interface research key to unlocking more efficient and productive non-expert interactive machine learning applications.
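
To make the co-adaptive loop concrete, here is a minimal, hypothetical sketch of the interaction cycle a generalised IML system manages: train on the labels gathered so far, ask the user about the instance the model is least certain of, and retrain. The toy data, uncertainty-sampling strategy, and scikit-learn model are illustrative assumptions, not the article's system.

```python
# Minimal sketch of an interactive machine learning loop: train, query
# the user about the most uncertain instance, retrain. The toy data and
# uncertainty-sampling strategy are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                  # unlabelled pool
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)   # oracle (stands in for the user)

# Seed set with both classes represented.
pos = np.where(y_true == 1)[0][:2]
neg = np.where(y_true == 0)[0][:2]
labelled = list(pos) + list(neg)

model = LogisticRegression()
for _ in range(10):                            # ten user interactions
    model.fit(X[labelled], y_true[labelled])
    proba = model.predict_proba(X)[:, 1]
    uncertainty = np.abs(proba - 0.5)
    uncertainty[labelled] = np.inf             # don't re-ask
    query = int(np.argmin(uncertainty))        # least confident instance
    # In a real IML interface the user would label `query` here.
    labelled.append(query)

print("accuracy:", model.score(X, y_true))
```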

196 citations


Proceedings ArticleDOI
19 Jun 2018
TL;DR: It is shown that deep learning methods can be leveraged to train a model end-to-end to automatically reverse engineer user interfaces and generate code from a single input image with over 77% accuracy for three different platforms.
Abstract: Transforming a graphical user interface screenshot created by a designer into computer code is a typical task conducted by a developer in order to build customized software, websites, and mobile applications. In this paper, we show that deep learning methods can be leveraged to train a model end-to-end to automatically reverse engineer user interfaces and generate code from a single input image with over 77% accuracy for three different platforms (i.e. iOS, Android and web-based technologies).
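
A compact sketch of the kind of architecture the paper describes is given below: a CNN encodes the screenshot, an LSTM encodes the code tokens emitted so far, and a decoder predicts the next token of a GUI DSL. Layer sizes, vocabulary, and sequence length are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a screenshot-to-code model: CNN image encoder + LSTM token
# encoder, merged and decoded to the next DSL token. All sizes are
# illustrative assumptions.
from tensorflow.keras import layers, Model

VOCAB, SEQ_LEN = 90, 48          # assumed DSL vocabulary / context length

img_in = layers.Input(shape=(256, 256, 3))
x = layers.Conv2D(32, 3, activation="relu")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
img_code = layers.Dense(256, activation="relu")(x)
img_code = layers.RepeatVector(SEQ_LEN)(img_code)   # one copy per token step

tok_in = layers.Input(shape=(SEQ_LEN,))
t = layers.Embedding(VOCAB, 64)(tok_in)
t = layers.LSTM(128, return_sequences=True)(t)

merged = layers.concatenate([img_code, t])
d = layers.LSTM(256)(merged)
out = layers.Dense(VOCAB, activation="softmax")(d)  # next-token distribution

model = Model([img_in, tok_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```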

179 citations


Proceedings ArticleDOI
15 Aug 2018
TL;DR: The NSRR provides a single point of access to analysis-ready physiological signals from polysomnography obtained from multiple sources and to a wide variety of clinical data to facilitate sleep research, and presents the design of a functional architecture for implementing a Sleep Data Commons.
Abstract: Objective: The gold standard for diagnosing sleep disorders is polysomnography, which generates extensive data about biophysical changes occurring during sleep. We developed the National Sleep Research Resource (NSRR), a comprehensive system for sharing sleep data. The NSRR embodies elements of a data commons aimed at accelerating research to address critical questions about the impact of sleep disorders on important health outcomes. Approach: We used a metadata-guided approach, with a set of common sleep-specific terms enforcing uniform semantic interpretation of data elements across three main components: (1) annotated datasets; (2) user interfaces for accessing data; and (3) computational tools for the analysis of polysomnography recordings. We incorporated the process for managing dataset-specific data use agreements, evidence of Institutional Review Board review, and the corresponding access control in the NSRR web portal. The metadata-guided approach facilitates structural and semantic interoperability, ultimately leading to enhanced data reusability and scientific rigor. Results: The authors curated and deposited retrospective data from 10 large, NIH-funded sleep cohort studies, including several from the Trans-Omics for Precision Medicine (TOPMed) program, into the NSRR. The NSRR currently contains data on 26,808 subjects and 31,166 signal files in European Data Format. Launched in April 2014, over 3000 registered users have downloaded over 130 terabytes of data. Conclusions: The NSRR offers a use case and an example for creating a full-fledged data commons. It provides a single point of access to analysis-ready physiological signals from polysomnography obtained from multiple sources, and a wide variety of clinical data to facilitate sleep research. The NIH Data Commons (or Commons) is an ambitious vision for a shared virtual space to allow digital objects to be stored and computed upon by the scientific community. The Commons would allow investigators to find, manage, share, use and reuse data, software, metadata and workflows. It imagines an ecosystem that makes digital objects Findable, Accessible, Interoperable and Reusable (FAIR). Four components are considered integral parts of the Commons: a computing resource for accessing and processing of digital objects; a "digital object compliance model" that describes the properties of digital objects that enable them to be FAIR; datasets that adhere to the digital object compliance model; and software and services to facilitate access to and use of data. This paper describes the contributions of NSRR along several aspects of the Commons vision: metadata for sleep research digital objects; a collection of annotated sleep data sets; and interfaces and tools for accessing and analyzing such data. More importantly, the NSRR provides the design of a functional architecture for implementing a Sleep Data Commons. The NSRR also reveals complexities and challenges involved in making clinical sleep data conform to the FAIR principles. Future directions: Shared resources offered by emerging resources such as cloud instances provide promising platforms for the Data Commons. However, simply expanding storage or adding compute power may not allow us to cope with the rapidly expanding volume and increasing complexity of biomedical data. Concurrent efforts must be spent to address digital object organization challenges. 
To make our approach future-proof, we need to continue advancing research in data representation and interfaces for human-data interaction. A possible next phase of NSRR is the creation of a universal self-descriptive sequential data format. The idea is to break large, unstructured, sequential data files into minimal, semantically meaningful, fragments. Such fragments can be indexed, assembled, retrieved, rendered, or repackaged on-the-fly, for multitudes of application scenarios. Data points in such a fragment will be locally embedded with relevant metadata labels, governed by terminology and ontology. Potential benefits of such an approach may include precise levels of data access, increased analysis readiness with on-the-fly data conversion, multi-level data discovery and support for effective web-based visualization of contents in large sequential files.
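
Since NSRR signal files are distributed in European Data Format, a recording can be loaded with any open EDF reader. The sketch below uses pyedflib as one such reader; its use here (and the file name) is an assumption, not an NSRR-provided tool, and method names should be verified against the installed version.

```python
# Hypothetical sketch of loading a polysomnography recording in
# European Data Format with pyedflib (one common open EDF reader;
# not an NSRR-provided tool). The file name is a placeholder.
import pyedflib

edf = pyedflib.EdfReader("example_psg.edf")
try:
    for i, label in enumerate(edf.getSignalLabels()):
        fs = edf.getSampleFrequency(i)   # sampling rate in Hz
        signal = edf.readSignal(i)       # one channel as a numpy array
        print(f"{label}: {len(signal)} samples at {fs} Hz")
finally:
    edf.close()                          # older pyedflib versions: _close()
```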

173 citations


Journal ArticleDOI
31 Jan 2018
TL;DR: Maplab as discussed by the authors is an open, research-oriented visual-inertial mapping framework for processing and manipulating multisession maps, written in C++, which can be seen as a ready-to-use visual-inertial mapping and localization system.
Abstract: Robust and accurate visual-inertial estimation is crucial to many of today's challenges in robotics. Being able to localize against a prior map and obtain accurate and drift-free pose estimates can push the applicability of such systems even further. Most of the currently available solutions, however, either focus on a single session use case, lack localization capabilities, or do not provide an end-to-end pipeline. We believe that only a complete system, combining state-of-the-art algorithms, scalable multisession mapping tools, and a flexible user interface, can become an efficient research platform. We, therefore, present maplab, an open, research-oriented visual-inertial mapping framework for processing and manipulating multisession maps, written in C++. On the one hand, maplab can be seen as a ready-to-use visual-inertial mapping and localization system. On the other hand, maplab provides the research community with a collection of multisession mapping tools that include map merging, visual-inertial batch optimization, and loop closure. Furthermore, it includes an online frontend that can create visual-inertial maps and also track a global drift-free pose within a localization map. In this letter, we present the system architecture, five use cases, and evaluations of the system on public datasets. The source code of maplab is freely available for the benefit of the robotics research community.

172 citations


Proceedings ArticleDOI
19 Apr 2018
TL;DR: Empirical data on how users interact with the authors' VUI calendar system, DiscoverCal, is analyzed and it is found that while NLP Error obstacles occurred the most, other obstacles are more likely to frustrate or confuse the user.
Abstract: Voice User Interfaces (VUIs) are growing in popularity. However, even the most current VUIs regularly cause frustration for their users. Very few studies exist on what people do to overcome VUI problems they encounter, or how VUIs can be designed to aid people when these problems occur. In this paper, we analyze empirical data on how users (n=12) interact with our VUI calendar system, DiscoverCal, over three sessions. In particular, we identify the main obstacle categories and types of tactics our participants employ to overcome them. We analyzed the patterns of how different tactics are used in each obstacle category. We found that while NLP Error obstacles occurred the most, other obstacles are more likely to frustrate or confuse the user. We also found patterns that suggest participants were more likely to employ a "guessing" approach rather than rely on visual aids or knowledge recall.

159 citations


Proceedings ArticleDOI
21 Apr 2018
TL;DR: A prototype, DuetDraw, an AI interface that allows users and an AI agent to draw pictures collaboratively, is presented, and implications for user interfaces where users can collaborate with AI in creative work are discussed.
Abstract: Recent advances in artificial intelligence (AI) have increased the opportunities for users to interact with the technology. Now, users can even collaborate with AI in creative activities such as art. To understand the user experience in this new user-AI collaboration, we designed a prototype, DuetDraw, an AI interface that allows users and the AI agent to draw pictures collaboratively. We conducted a user study employing both quantitative and qualitative methods. Thirty participants performed a series of drawing tasks with the think-aloud method, followed by post-hoc surveys and interviews. Our findings are as follows: (1) Users were significantly more content with DuetDraw when the tool gave detailed instructions. (2) While users always wanted to lead the task, they also wanted the AI to explain its intentions but only when the users wanted it to do so. (3) Although users rated the AI relatively low in predictability, controllability, and comprehensibility, they enjoyed their interactions with it during the task. Based on these findings, we discuss implications for user interfaces where users can collaborate with AI in creative works.

145 citations


Journal ArticleDOI
TL;DR: The HTEM database may enable scientists to explore materials through a web-based user interface and an application programming interface, and this manuscript illustrates how advanced machine learning algorithms can be applied to materials science problems using this open data resource.
Abstract: The use of advanced machine learning algorithms in experimental materials science is limited by the lack of sufficiently large and diverse datasets amenable to data mining. If publicly open, such data resources would also enable materials research by scientists without access to expensive experimental equipment. Here, we report on our progress towards a publicly open High Throughput Experimental Materials (HTEM) Database (htem.nrel.gov). This database currently contains 140,000 sample entries, characterized by structural (100,000), synthetic (80,000), chemical (70,000), and optoelectronic (50,000) properties of inorganic thin film materials, grouped in >4,000 sample libraries across >100 materials systems; more than half of these data are publicly available. This article shows how the HTEM database may enable scientists to explore materials by browsing a web-based user interface and using an application programming interface. This paper also describes the HTE approach to generating materials data, and discusses the laboratory information management system (LIMS) that underpins the HTEM database. Finally, this manuscript illustrates how advanced machine learning algorithms can be applied to materials science problems using this open data resource.
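
Programmatic access of the kind described might look like the sketch below. The API host, route, and response structure are hypothetical placeholders, not documented HTEM endpoints; consult htem.nrel.gov for the actual API reference.

```python
# Hypothetical sketch of HTEM API access; the host, route, and JSON
# structure are placeholders, not documented HTEM endpoints.
import requests

BASE = "https://htem-api.nrel.gov"           # assumed API host

resp = requests.get(f"{BASE}/api/sample_library", timeout=30)
resp.raise_for_status()
for entry in resp.json()[:5]:                # inspect the first records
    print(entry)
```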

Proceedings ArticleDOI
21 Apr 2018
TL;DR: The results show that pointing using tracked hand-held controllers outperforms all other methods; the findings are summarized as guidelines for choosing optimal virtual keyboard text entry methods in VR.
Abstract: In recent years, Virtual Reality (VR) and 3D User Interfaces (3DUI) have seen a drastic increase in popularity, especially in terms of consumer-ready hardware and software. While the technology for input as well as output devices is market ready, only a few solutions for text input exist, and empirical knowledge about performance and user preferences is lacking. In this paper, we study text entry in VR by selecting characters on a virtual keyboard. We discuss the design space for assessing selection-based text entry in VR. Then, we implement six methods that span different parts of the design space and evaluate their performance and user preferences. Our results show that pointing using tracked hand-held controllers outperforms all other methods. Other methods such as head pointing can be viable alternatives depending on available resources. We summarize our findings by formulating guidelines for choosing optimal virtual keyboard text entry methods in VR.

Journal ArticleDOI
07 Jul 2018-Sensors
TL;DR: The purpose of this paper is to survey the state-of-the-art Human-Computer Interaction techniques with a focus on the special field of three-dimensional interaction, including an overview of currently available interaction devices, their fields of application and underlying methods for gesture design and recognition.
Abstract: Modern hardware and software development has led to an evolution of user interfaces from command-line to natural user interfaces for virtual immersive environments. Gestures imitating real-world interaction tasks increasingly replace classical two-dimensional interfaces based on Windows/Icons/Menus/Pointers (WIMP) or touch metaphors. Thus, the purpose of this paper is to survey the state-of-the-art Human-Computer Interaction (HCI) techniques with a focus on the special field of three-dimensional interaction. This includes an overview of currently available interaction devices, their fields of application and underlying methods for gesture design and recognition. Focus is on interfaces based on the Leap Motion Controller (LMC) and corresponding methods of gesture design and recognition. Further, a review of evaluation methods for the proposed natural user interfaces is given.

Journal ArticleDOI
06 Jan 2018
TL;DR: MONTE is the prime operational orbit determination software for all JPL navigated missions and can be used seamlessly with other canonical scientific programming tools such as SciPy, NumPy, and Matplotlib.
Abstract: The Mission analysis, Operations and Navigation Toolkit Environment (MONTE) (Sunseri et al. in NASA Tech Briefs 36(9), 2012) is an astrodynamic toolkit produced by the Mission Design and Navigation Software Group at the Jet Propulsion Laboratory. It provides a single integrated environment for all phases of deep space and Earth orbiting missions. Capabilities include: trajectory optimization and analysis, operational orbit determination, flight path control, and 2D/3D visualization. MONTE is presented to the user as an importable Python language module. This allows for a simple but powerful user interface via a command-line user interface (CLUI) or scripts. In addition, the Python interface allows MONTE to be used seamlessly with other canonical scientific programming tools such as SciPy, NumPy, and Matplotlib. MONTE is the prime operational orbit determination software for all JPL navigated missions.

Proceedings ArticleDOI
21 Apr 2018
TL;DR: This work introduces a mobile app for collecting in-the-wild data, including sensor measurements and self-reported labels describing people's behavioral context, which is necessary for developing context-recognition systems that serve health monitoring, aging care, and more.
Abstract: We introduce a mobile app for collecting in-the-wild data, including sensor measurements and self-reported labels describing people's behavioral context (e.g., driving, eating, in class, shower). Labeled data is necessary for developing context-recognition systems that serve health monitoring, aging care, and more. Acquiring labels without observers is challenging and previous solutions compromised ecological validity, range of behaviors, or amount of data. Our user interface combines past and near-future self-reporting of combinations of relevant context-labels. We deployed the app on the personal smartphones of 60 users and analyzed quantitative data collected in-the-wild and qualitative user-experience reports. The interface's flexibility was important to gain frequent, detailed labels, support diverse behavioral situations, and engage different users: most preferred reporting their past behavior through a daily journal, but some preferred reporting what they're about to do. We integrated insights from this work back into the app, which we make available to researchers for conducting in-the-wild studies.

Journal ArticleDOI
TL;DR: The results show that the proposed methodology outperforms existing approaches in adapting user interfaces by utilizing the user's context and experience.
Abstract: Personalized services have a strong impact on user experience and affect the level of user satisfaction. Many approaches provide personalized services in the form of an adaptive user interface. The focus of these approaches is limited to specific domains rather than offering a generalized approach applicable to every domain. In this paper, we propose a domain- and device-independent model-based adaptive user interfacing methodology. Unlike state-of-the-art approaches, the proposed methodology depends on the evaluation of user context and user experience (UX). The proposed methodology is implemented as an adaptive UI/UX authoring (A-UI/UX-A) tool: a system capable of adapting the user interface at runtime based on contextual factors, such as user disabilities, environmental factors (e.g., light level, noise level, and location) and the device used, using adaptation rules devised for rendering the adapted interface. To validate the effectiveness of the proposed A-UI/UX-A tool and methodology, user-centric and statistical evaluation methods are used. The results show that the proposed methodology outperforms existing approaches in adapting user interfaces by utilizing the user's context and experience.
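
A minimal sketch of rule-based adaptation driven by contextual factors, in the spirit of the methodology described above, is shown below. The rule set and property names are illustrative assumptions, not the A-UI/UX-A tool's actual adaptation rules.

```python
# Minimal sketch of rule-based UI adaptation from contextual factors
# (disability, ambient light, noise). The rules and property names are
# illustrative assumptions, not the A-UI/UX-A tool's rule set.
def adapt_ui(context):
    ui = {"font_size": 14, "contrast": "normal", "feedback": "visual"}
    if context.get("low_vision"):
        ui["font_size"] = 24
        ui["contrast"] = "high"
    if context.get("light_level") == "bright":
        ui["contrast"] = "high"            # improve outdoor readability
    if context.get("noise_level") == "loud":
        ui["feedback"] = "visual"          # audio cues would be missed
    elif context.get("hands_busy"):
        ui["feedback"] = "audio"
    return ui

print(adapt_ui({"low_vision": True, "noise_level": "loud"}))
```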

Proceedings Article
01 Jan 2018
TL;DR: Evidence is provided that a chatbot's response time represents a social cue that triggers social responses shaped by social expectations, supporting researchers and practitioners in understanding and designing more natural human-chatbot interactions.
Abstract: A key challenge in designing conversational user interfaces is to make the conversation between the user and the system feel natural and human-like. In order to increase perceived humanness, many systems with conversational user interfaces (e.g., chatbots) use response delays to simulate the time it would take humans to respond to a message. However, delayed responses may also negatively impact user satisfaction, particularly in situations where fast response times are expected, such as in customer service. This paper reports the findings of an online experiment in a customer service context that investigates how user perceptions differ when interacting with a chatbot that sends dynamically delayed responses compared to a chatbot that sends near-instant responses. The dynamic delay length was calculated based on the complexity of the response and complexity of the previous message. Our results indicate that dynamic response delays not only increase users' perception of humanness and social presence, but also lead to greater satisfaction with the overall chatbot interaction. Building on social response theory, we provide evidence that a chatbot's response time represents a social cue that triggers social responses shaped by social expectations. Our findings support researchers and practitioners in understanding and designing more natural human-chatbot interactions.
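
The delay mechanism the paper describes can be sketched as a simple function of message complexity. Using word counts as the complexity proxy and these particular coefficients is an assumption; the study does not publish this exact formula.

```python
# Sketch of a dynamic response delay computed from message complexity.
# Word counts as the complexity proxy and these coefficients are
# assumptions, not the study's published formula.
def dynamic_delay(response: str, previous_message: str,
                  base=0.5, per_word=0.15, cap=6.0) -> float:
    """Return a delay in seconds before the chatbot replies."""
    complexity = len(response.split()) + 0.5 * len(previous_message.split())
    return min(base + per_word * complexity, cap)

print(dynamic_delay("Your order shipped yesterday and should arrive Friday.",
                    "Where is my order?"))
```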

Journal ArticleDOI
01 Aug 2018
TL;DR: This paper presents Northstar, the Interactive Data Science System, which has been developed over the last 4 years to explore designs that make advanced analytics and model building more accessible.
Abstract: In order to democratize data science, we need to fundamentally rethink the current analytics stack, from the user interface to the "guts." Most importantly, enabling a broader range of users to unfold the potential of (their) data requires a change in the interface and the "protection" we offer them. On the one hand, visual interfaces for data science have to be intuitive, easy, and interactive to reach users without a strong background in computer science or statistics. On the other hand, we need to protect users from making false discoveries. Furthermore, it requires that technically involved (and often boring) tasks have to be automatically done by the system so that the user can focus on contributing their domain expertise to the problem. In this paper, we present Northstar, the Interactive Data Science System, which we have developed over the last 4 years to explore designs that make advanced analytics and model building more accessible.

Journal ArticleDOI
TL;DR: UMATracker is proposed, which supports flexible image preprocessing by visual programming, multiple tracking algorithms and a manual tracking error-correction system, and enables the user to visualize the effect of image processing.
Abstract: Image-based tracking software is regarded as a valuable tool in collective animal behaviour studies. For such operations, image preprocessing is a prerequisite, and users are required to build an appropriate image-processing pipeline for extracting the shape of animals. Even if the users successfully design an image-processing pipeline, unexpected noise in the video frame may significantly reduce the tracking accuracy in the tracking step. To address these issues, we propose UMATracker (Useful Multiple Animal Tracker), which supports flexible image preprocessing by visual programming, multiple tracking algorithms and a manual tracking error-correction system. UMATracker employs a visual programming user interface, wherein the user can intuitively design an image-processing pipeline. Moreover, the software also enables the user to visualize the effect of image processing. We implement four different tracking algorithms to enable the users to choose the most suitable algorithm. In addition, UMATracker provides a manual correction tool for identifying and correcting tracking errors.
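
The kind of image-processing pipeline a user would assemble visually in UMATracker can be sketched with OpenCV calls standing in for the tool's visual blocks; the file name and parameter values below are illustrative.

```python
# Sketch of an animal-extraction pipeline of the kind assembled
# visually in UMATracker: grayscale, blur, threshold, find contours.
# OpenCV stands in for the tool's blocks; values are illustrative.
import cv2

frame = cv2.imread("frame_0001.png")        # hypothetical video frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
_, mask = cv2.threshold(blurred, 0, 255,
                        cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
animals = [c for c in contours if cv2.contourArea(c) > 50]  # drop noise
print(f"detected {len(animals)} candidate animals")
```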

Journal ArticleDOI
TL;DR: In this forum, trends are scouted and new technologies with the potential to influence interaction design are discussed.
Abstract: Envisioning, designing, and implementing the user interface require a comprehensive understanding of interaction technologies. In this forum we scout trends and discuss new technologies with the potential to influence interaction design. --- Albrecht Schmidt, Editor

Journal ArticleDOI
11 Apr 2018-Sensors
TL;DR: A Mission Definition System and the automated flight process it enables to implement measurement plans for discrete infrastructure inspections using aerial platforms, and specifically multi-rotor drones are described.
Abstract: This paper describes a Mission Definition System and the automated flight process it enables to implement measurement plans for discrete infrastructure inspections using aerial platforms, and specifically multi-rotor drones. The mission definition aims at improving planning efficiency with respect to state-of-the-art waypoint-based techniques, using high-level mission definition primitives and linking them with realistic flight models to simulate the inspection in advance. It also provides flight scripts and measurement plans which can be executed by commercial drones. Its user interfaces facilitate mission definition, pre-flight 3D synthetic mission visualisation and flight evaluation. Results are delivered for a set of representative infrastructure inspection flights, showing the accuracy of the flight prediction tools in actual operations using automated flight control.
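
A sketch of how a high-level mission primitive might expand into a discrete, waypoint-style measurement plan is shown below; the data structures are hypothetical, not the paper's actual mission file format.

```python
# Hypothetical sketch of expanding a high-level inspection primitive
# into discrete waypoints; not the paper's actual mission format.
from dataclasses import dataclass

@dataclass
class Waypoint:
    lat: float
    lon: float
    alt_m: float
    action: str              # e.g. "photo", "hover_measure"

def vertical_scan(lat, lon, alt_min, alt_max, step):
    """Expand an 'inspect facade' primitive into discrete waypoints."""
    alts = range(int(alt_min), int(alt_max) + 1, int(step))
    return [Waypoint(lat, lon, float(a), "photo") for a in alts]

plan = vertical_scan(40.4168, -3.7038, alt_min=10, alt_max=40, step=10)
for wp in plan:
    print(wp)
```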

Journal ArticleDOI
TL;DR: Based on the experiments in this study, Sf-GDT can generate creative design alternatives for a given model and outperforms existing state-of-the-art techniques.

Journal ArticleDOI
TL;DR: The user-centred design and the experimental evaluation, in realistic environments, of a web-based multi-modal user interface tailored for elderly users of near-future multi-robot services are presented; the results demonstrate positive evaluations of usability and willingness to use by elderly users.
Abstract: Socially assistive robotic platforms are now a realistic option for the long-term care of ageing populations. Elderly users may benefit from many services provided by robots operating in different environments, such as providing assistance inside apartments, serving in shared facilities of buildings or guiding people outdoors. In this paper, we present the experience gained within the EU FP7 ROBOT-ERA project towards the objective of implementing easy-to-use and acceptable service robotic system for the elderly. In particular, we detail the user-centred design and the experimental evaluation in realistic environments of a web-based multi-modal user interface tailored for elderly users of near future multi-robot services. Experimental results demonstrate positive evaluation of usability and willingness to use by elderly users, especially those less experienced with technological devices who could benefit more from the adoption of robotic services. Further analyses showed how multi-modal modes of interaction support more flexible and natural elderly–robot interaction, make clear the benefits for the users and, therefore, increase its acceptability. Finally, we provide insights and lessons learned from the extensive experimentation, which, to the best of our knowledge, is one of the largest experimentation of a multi-robot multi-service system so far.

Proceedings ArticleDOI
03 Sep 2018
TL;DR: The findings show that the Google Home is usable and user-friendly, and that it shows potential for international users.
Abstract: Recently, commercial Voice User Interfaces (VUIs) have been introduced to the market (e.g. Amazon Echo and Google Home). Although they have drawn much attention from users, little is known about their usability, user experiences, and usefulness. In this study, we conducted a web-based survey to investigate the usability, user experiences, and usefulness of the Google Home smart speaker. A total of 114 users, who are active in a social-media based interest group, participated in the study. The findings show that the Google Home is usable and user-friendly and shows potential for international users. Based on the users' feedback, we identified the challenges encountered by the participants. The findings from this study can be insightful for researchers and developers to take into account in future VUI research.

Journal ArticleDOI
TL;DR: An evaluation study performed to investigate the usability of PSS for some specific tasks confirmed the mismatch between what PSS provide and what planners expect, and indicated poor usability of PSS.

Proceedings ArticleDOI
26 Jun 2018
TL;DR: WiVo is presented, a device-free voice liveness detection system based on the prevalent wireless signals generated by IoT devices without any additional devices or sensors carried by the users, which is expected to significantly enhance the security of the existing VCS.
Abstract: With the prevalence of smart devices and home automation, voice command has become a popular User Interface (UI) channel in the IoT environment. Although Voice Control Systems (VCS) have the advantage of great convenience, they are extremely vulnerable to spoofing attacks (e.g., replay attacks, hidden/inaudible command attacks) due to their broadcast nature. In this study, we present WiVo, a device-free voice liveness detection system based on the prevalent wireless signals generated by IoT devices, without any additional devices or sensors carried by the users. The basic motivation of WiVo is to distinguish an authentic voice command from a spoofed one via its corresponding mouth motions, which can be captured and recognized by wireless signals. To achieve this goal, WiVo builds a theoretical model to characterize the correlation between wireless signal dynamics and the user's voice syllables. WiVo extracts the unique features from both voice and wireless signals, and then calculates the consistency between these different types of signals in order to determine whether the voice command is generated by the authentic user of the VCS or an adversary. To evaluate the effectiveness of WiVo, we build a testbed based on the Samsung SmartThings framework and include WiVo as a new application, which is expected to significantly enhance the security of the existing VCS. We have evaluated WiVo with 6 participants and different voice commands. Experimental evaluation results demonstrate that WiVo achieves an overall 99% detection rate with a 1% false accept rate and has low latency.
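
The consistency computation WiVo is described as performing can be sketched as a correlation between the voice's syllable envelope and the wireless channel dynamics. The normalised cross-correlation and toy signals below are assumptions; the paper's actual model is more detailed.

```python
# Sketch of a voice/wireless consistency check in the spirit of WiVo:
# correlate the syllable envelope with channel dynamics. The toy
# signals and scoring are assumptions, not the paper's model.
import numpy as np

def consistency(voice_envelope: np.ndarray, csi_dynamics: np.ndarray) -> float:
    """Correlation in [-1, 1]; high values suggest a live speaker."""
    v = (voice_envelope - voice_envelope.mean()) / voice_envelope.std()
    c = (csi_dynamics - csi_dynamics.mean()) / csi_dynamics.std()
    return float(np.mean(v * c))

t = np.linspace(0, 1, 500)
mouth = np.abs(np.sin(8 * np.pi * t))          # toy syllable envelope
live = mouth + 0.2 * np.random.default_rng(1).normal(size=t.size)
replay = 0.2 * np.random.default_rng(2).normal(size=t.size)  # no mouth motion

print("live  :", round(consistency(mouth, live), 2))
print("replay:", round(consistency(mouth, replay), 2))
```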

Journal ArticleDOI
TL;DR: The results indicate that employing a k-means machine learning technique enables the automatic configuration of an HVAC system to reduce energy consumption while keeping the majority of occupants within acceptable comfort levels.
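
A minimal sketch of the k-means idea: cluster occupants by their comfort responses and derive one setpoint per cluster. The features, data, and choice of k below are illustrative assumptions, not the study's setup.

```python
# Sketch of k-means over occupant comfort preferences to derive HVAC
# setpoints per cluster. Features, data, and k are illustrative
# assumptions, not the study's configuration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Columns: preferred temperature (C), tolerance (C), for 40 occupants.
prefs = np.vstack([
    rng.normal([21.0, 1.0], 0.5, size=(20, 2)),
    rng.normal([24.0, 1.5], 0.5, size=(20, 2)),
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(prefs)
for label, center in enumerate(km.cluster_centers_):
    print(f"zone {label}: setpoint {center[0]:.1f} C "
          f"(tolerance {center[1]:.1f} C)")
```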

Journal ArticleDOI
TL;DR: A Python interface for each of the software tools GINsim, BioLQM, Pint, MaBoSS, and Cell Collective is developed to offer a seamless integration in the Jupyter web interface and ease the chaining of complementary analyses.
Abstract: Analysing models of biological networks typically relies on workflows in which different software tools with sensitive parameters are chained together, often with additional manual steps. The accessibility and reproducibility of such workflows is challenging, as publications often overlook analysis details, and because some of these tools may be difficult to install, and/or have a steep learning curve. The CoLoMoTo Interactive Notebook provides a unified environment to edit, execute, share, and reproduce analyses of qualitative models of biological networks. This framework combines the power of different technologies to ensure repeatability and to reduce users' learning curve for these technologies. The framework is distributed as a Docker image with the tools ready to be run without any installation step besides Docker, and is available on Linux, macOS, and Microsoft Windows. The embedded computational workflows are edited with a Jupyter web interface, enabling the inclusion of textual annotations, along with the explicit code to execute, as well as the visualisation of the results. The resulting notebook files can then be shared and re-executed in the same environment. To date, the CoLoMoTo Interactive Notebook provides access to software tools including GINsim, BioLQM, Pint, MaBoSS, and Cell Collective for the modelling and analysis of Boolean and multi-valued networks. More tools will be included in the future. We developed a Python interface for each of these tools to offer a seamless integration in the Jupyter web interface and ease the chaining of complementary analyses.
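
Inside the notebook, a typical cell chains two of the bundled tools. The module and function names below follow the CoLoMoTo Python API naming but should be treated as assumptions, and the model file is hypothetical; the environment itself is started with the distributed Docker image (e.g., docker run -p 8888:8888 colomoto/colomoto-docker).

```python
# Sketch of a CoLoMoTo notebook cell chaining two bundled tools.
# Function names (ginsim.load, ginsim.to_biolqm, biolqm.fixpoints)
# follow the CoLoMoTo Python API naming but are assumptions here,
# and the model file is a placeholder.
import ginsim
import biolqm

lrg = ginsim.load("example_network.zginml")  # GINsim regulatory graph
lqm = ginsim.to_biolqm(lrg)                  # hand the model to bioLQM
for state in biolqm.fixpoints(lqm):          # stable states of the network
    print(state)
```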

Book
05 Apr 2018
TL;DR: Human-Machine Interaction for Vehicles: Review and Outlook surveys and explores the significant and growing body of research on the topic of modern in-vehicle user interfaces and reviews the key findings, as well as recommending areas for future work.
Abstract: Human-Machine Interaction for Vehicles: Review and Outlook surveys and explores the significant and growing body of research on the topic of modern in-vehicle user interfaces. Today’s vehicles have myriad user interfaces, from those related to the moment-to-moment control of the vehicle, to those that allow the consumption of information and entertainment. The bulk of the research in this domain is related to manual driving. With recent advances in automated vehicles, attention has increasingly focused on user interactions. In exploring human-machine interaction for both manual and automated driving, a key issue has been how to create safe in-vehicle interactions that assist the driver in completing the driving task, as well as to allow drivers to accomplish various non-driving tasks. In automated vehicles, human-machine interactions will increasingly allow users to reclaim their time, so that they can engage in non-driving tasks. Given that it is unlikely that most vehicles will be fully automated in the near future, there are also significant efforts to understand how to help the driver switch between different modes of automation. Human-Machine Interaction for Vehicles: Review and Outlook reviews the key findings from this line of research, as well as recommending areas for future work. It is an ideal primer for researchers and user interface designers working in the area.

Book ChapterDOI
01 Jan 2018
TL;DR: This chapter focuses on the gesture recognition task for HMI, introduces current deep learning methods that have been used for human motion analysis and RGB-D-based gesture recognition, and briefly introduces convolutional neural networks.
Abstract: Human–machine interaction (HMI) refers to the communication and interaction between a human and a machine via a user interface. Nowadays, natural user interfaces such as gestures have gained increasing attention as they allow humans to control machines through natural and intuitive behaviors. In gesture-based HMI, a sensor such as Microsoft Kinect is used to capture the human postures and motions, which are processed to control a machine. The key task of gesture-based HMI is to recognize the meaningful expressions of human motions using the data provided by Kinect, including RGB (red, green, blue), depth, and skeleton information. In this chapter, we focus on the gesture recognition task for HMI and introduce current deep learning methods that have been used for human motion analysis and RGB-D-based gesture recognition. More specifically, we briefly introduce convolutional neural networks (CNNs), and then present several deep learning frameworks based on CNNs that have been used for gesture recognition using RGB, depth, and skeleton sequences.
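
A minimal sketch of a CNN of the kind the chapter surveys, applied to skeleton sequences arranged as image-like tensors (coordinate channels by frames by joints), is given below; the tensor layout and layer sizes are illustrative assumptions.

```python
# Minimal sketch of a CNN over skeleton sequences arranged as images
# (channels = x/y/z coordinates, height = frames, width = joints).
# Layout and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 5, n_classes)

    def forward(self, x):          # x: (batch, 3, 32 frames, 20 joints)
        x = self.features(x)
        return self.classifier(x.flatten(1))

batch = torch.randn(4, 3, 32, 20)  # four toy skeleton sequences
print(GestureCNN()(batch).shape)   # torch.Size([4, 10])
```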