scispace - formally typeset

Showing papers on "User interface published in 2022"


Journal ArticleDOI
TL;DR: The semantic context model supports adaptive environments and can be mapped to individual user interface (UI) displays through adaptation algorithms for versatile UIs.
Abstract: Currently, many mobile devices provide various interaction styles and modes, which creates complexity in the use of interfaces. Context offers the information base for the development of Adaptive user interface (AUI) frameworks to overcome this heterogeneity. For this purpose, an ontological model has been built for the specific context and environment. Such an ontology states the relationships among elements (e.g., classes, relations, or capacities) with an understandable content representation. With these formal definitions expressed in the Web Ontology Language (OWL)/Resource Description Framework (RDF), the context mechanisms can be examined and understood by any machine or computational framework. Protégé is used to create a taxonomy in which the system is framed around four contexts: user, device, task, and environment. Competency questions and use-cases are utilized for knowledge acquisition, while the information is refined through instances of the relevant parts of the context tree. The consistency of the model has been verified with a reasoner, while SPARQL querying ensured data availability in the models for the defined use-cases. The semantic context model is aimed at enabling adaptive environments. This exploration resulted in a versatile, scalable, and semantically verified context learning system. The model can be mapped to individual user interface (UI) displays through adaptation algorithms for versatile UIs.
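The four-context taxonomy and SPARQL-style querying described above can be illustrated with a minimal sketch. The triple vocabulary (names like `usesDevice`) is hypothetical, and a pure-Python pattern matcher stands in for a real OWL/RDF store queried with SPARQL:

```python
# Context facts stored as subject-predicate-object triples, the way an
# RDF graph holds them; the vocabulary below is illustrative only and
# is not the paper's actual ontology.
triples = {
    ("alice", "rdf:type", "User"),
    ("alice", "usesDevice", "phone1"),
    ("phone1", "rdf:type", "Device"),
    ("phone1", "screenSize", "small"),
    ("alice", "environment", "outdoor"),
}

def query(pattern):
    """Match a (subject, predicate, object) pattern against the store;
    None acts as a wildcard variable, like a SPARQL ?var."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Which device does this user hold, and how big is its screen?
# An AUI framework would use such answers to pick a UI variant.
device = query(("alice", "usesDevice", None))[0][2]
size = query((device, "screenSize", None))[0][2]
```

A real system would express the same lookup as a SPARQL `SELECT` over the OWL model; the wildcard-matching logic is the same idea.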

49 citations


Journal ArticleDOI
TL;DR: A unified and flexible experimental framework for massive online experimentation of control education that enables all the functionalities such as web-based algorithm design, parameter tuning, visual configuration of customized user interface, and real-time control with remote and virtual laboratories is introduced.
Abstract: This article introduces a unified and flexible experimental framework for massive online experimentation in control education. The new architecture adopts a front-end and back-end separation scheme based on React at the front end and Nginx at the back end, and a single-page application has been built to improve user experience. Multiple features and supporting technologies that provide a flexible, interactive, and real-time platform with an enhanced sense of presence and user experience are discussed. The design and implementation of the proposed system are described in detail. With the implementation of online algorithm design, the unified framework enables all the core functionalities, such as web-based algorithm design, parameter tuning, visual configuration of customized user interfaces, and real-time control with remote and virtual laboratories, which cover the entire process of control education experimentation. A case study with students from two universities shows that the implemented online laboratory framework can support large numbers of students with minimal maintenance.
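The web-based parameter tuning described above implies that user input must be validated before it reaches real laboratory hardware. Here is a minimal back-end sketch under assumed message and parameter names (`kp`, `ki`, `kd` and their limits are illustrative, not the paper's actual protocol):

```python
import json

# Hypothetical safe ranges for controller gains; a real remote lab
# would derive these from the plant being controlled.
LIMITS = {"kp": (0.0, 100.0), "ki": (0.0, 50.0), "kd": (0.0, 10.0)}

def apply_tuning(message: str, current: dict) -> dict:
    """Validate a JSON parameter-tuning request from the front end and
    return the updated parameter set, leaving `current` untouched."""
    update = json.loads(message)
    tuned = dict(current)
    for name, value in update.items():
        lo, hi = LIMITS[name]            # unknown parameter names raise KeyError
        if not lo <= float(value) <= hi:
            raise ValueError(f"{name}={value} outside [{lo}, {hi}]")
        tuned[name] = float(value)
    return tuned
```

In the architecture described, a React front end would POST such a message through Nginx; rejecting out-of-range values server-side keeps a remote experiment safe regardless of the UI.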

25 citations




Journal ArticleDOI
TL;DR: In this article, the authors study the differences in user satisfaction between a chatbot system and a menu-based interface system, and identify factors that influence user satisfaction.

16 citations


Journal ArticleDOI
TL;DR: A new dynamic scheduler that adapts to the system variability, and a novel way of communicating instructions to the human operators based on haptic guidance are presented, suggesting that a combination of visual and tactile stimuli is a viable and effective solution for displaying instructions in complex HRC scenarios.
Abstract: Human–robot collaboration (HRC) is expected to add flexibility and agility to production lines in manufacturing plants. In this context, versatile scheduling algorithms are needed to organize the increasingly complex work-flow and to exploit the gained flexibility, ensuring the optimal use of resources and the smart management of failures. Moreover, intuitive user interfaces are needed to communicate with the human worker, informing him/her of the next operation to perform. Usually, grounded or wearable screens are used to this aim. Whenever human sight is impaired or needs to be free, other sensory channels could be used as well. In this work, we present a new dynamic scheduler that adapts to the system variability, and a novel way of communicating instructions to the human operators based on haptic guidance. The proposed strategies were applied to a complex assembly task involving three agents and compared to baseline methods with an experimental campaign involving 16 subjects. Results show the clear advantage of using dynamic scheduling over the static one and suggest that a combination of visual and tactile stimuli is a viable and effective solution for displaying instructions in complex HRC scenarios.

14 citations


Journal ArticleDOI
TL;DR: In this article, a cognitive analytics platform for anomaly detection is presented, capable of handling, analyzing, and exploiting machine data from a factory shop floor to support the emerging and growing needs of the manufacturing industry.

12 citations


Book ChapterDOI
01 Jan 2022
TL;DR: In this article, the authors integrated the (EM)\(^3\) framework into a local IoT platform named Home-Assistant to help centralize all the connected sensors, and two smart plug systems were proposed to be part of the ecosystem.
Abstract: Owing to fast economic growth and the enhancement of people's living standards, overall household energy consumption is becoming more and more substantial. Thus, conserving energy is becoming a critical task to help preserve energy resources and slow down climate change, which in turn protects the environment. The development of an Internet of Things (IoT) system that monitors the consumer's power consumption behavior and provides energy-saving recommendations in a timely manner can be advantageous in shaping the user's energy-saving habits. In this paper, we integrate the (EM)\(^3\) framework into a local IoT platform named Home-Assistant to help centralize all the connected sensors. Additionally, two smart plug systems are proposed as part of the (EM)\(^3\) ecosystem. The plugs are employed to collect appliance energy consumption data and also provide home automation capabilities. Through the Home-Assistant User Interface (UI), end-users can visualize their consumption trends together with ambient environmental data. The comparative analysis performed demonstrates great potential and highlights areas of future work focusing on integrating more sensing systems into the developed platform for the sake of enriching the existing database.

11 citations


Journal ArticleDOI
TL;DR: The results suggest that the physiological measure of facial expression and its extracted feature, automatic facial expression-based valence, is most informative of emotional events lived through voice user interface interactions.
Abstract: The rapid rise of voice user interface technology has changed the way users traditionally interact with interfaces, as tasks requiring gestural or visual attention are swapped by vocal commands. This shift has equally affected designers, required to disregard common digital interface guidelines in order to adapt to non-visual user interaction (No-UI) methods. The guidelines regarding voice user interface evaluation are far from the maturity of those surrounding digital interface evaluation, resulting in a lack of consensus and clarity. Thus, we sought to contribute to the emerging literature regarding voice user interface evaluation and, consequently, assist user experience professionals in their quest to create optimal vocal experiences. To do so, we compared the effectiveness of physiological features (e.g., phasic electrodermal activity amplitude) and speech features (e.g., spectral slope amplitude) to predict the intensity of users’ emotional responses during voice user interface interactions. We performed a within-subjects experiment in which the speech, facial expression, and electrodermal activity responses of 16 participants were recorded during voice user interface interactions that were purposely designed to elicit frustration and shock, resulting in 188 analyzed interactions. Our results suggest that the physiological measure of facial expression and its extracted feature, automatic facial expression-based valence, is most informative of emotional events lived through voice user interface interactions. By comparing the unique effectiveness of each feature, theoretical and practical contributions may be noted, as the results contribute to voice user interface literature while providing key insights favoring efficient voice user interface evaluation.

10 citations


Journal ArticleDOI
TL;DR: SDTrimSP is a popular simulation program to compute several effects of the interaction between an impinging ion and a solid, such as ion implantation ranges, damage formation, or sputtering of surface atoms.
Abstract: SDTrimSP is a popular simulation program to compute several effects of the interaction between an impinging ion and a solid, such as ion implantation ranges, damage formation or sputtering of surface atoms. We now introduce a graphical user interface for SDTrimSP to make its operation more accessible for a broad group of users. It is written as a separate Python program and is not restricted to any specific operating system. The interface allows a quick and easy start as well as the direct evaluation of SDTrimSP simulations. Its capabilities are demonstrated here in the form of several example cases, including the dynamic simulations with SDTrimSP, where ion-induced target changes are taken into account. The presented graphical user interface is made freely available to support a large number of users in performing simulations of ion–solid interaction.
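A front-end like the one described typically assembles the solver's command line and launches it as a subprocess so it can surface results in the interface. The sketch below illustrates that pattern; the executable name, the `tri.inp` input file, and the single-argument invocation are assumptions for illustration, and SDTrimSP's actual invocation may differ:

```python
import subprocess
from pathlib import Path

def build_run_command(exe: str, input_dir: str) -> list:
    """Assemble the command line for one batch simulation run.
    The input file name and layout are hypothetical."""
    return [exe, str(Path(input_dir) / "tri.inp")]

def run_simulation(exe: str, input_dir: str) -> int:
    """Launch the solver as a subprocess so a GUI can report its
    exit code and captured output to the user."""
    result = subprocess.run(build_run_command(exe, input_dir),
                            capture_output=True, text=True)
    return result.returncode
```

Keeping command assembly separate from process launch makes the wrapper easy to test without the solver installed, which matters for a GUI meant to run on any operating system.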

10 citations


Journal ArticleDOI
TL;DR: In this article, the authors identify the accessibility barriers that occur when design patterns are used to build mobile app user interfaces, propose guidelines to prevent the most frequently encountered problems, and present a catalog describing 9 user interface design patterns, the accessibility barriers linked to the use of each pattern, and the guidelines that can be followed to avoid those barriers.

10 citations


Journal ArticleDOI
TL;DR: In this article, the authors used a bespoke data collection interface to generate speaking chatbots and made them available as tasks on the crowdsourcing platform Mechanical Turk to simulate how privacy can be communicated in a dialogue between user and machine.

Journal ArticleDOI
TL;DR: KiMoPack is open source and provides a comprehensive front-end for preprocessing, fitting and plotting of 2-dimensional data that simplifies the access to a powerful python-based data-processing system and forms the foundation for a well documented, reliable, and reproducible data analysis.
Abstract: Herein, we present KiMoPack, an analysis tool for the kinetic modeling of transient spectroscopic data. KiMoPack enables a state-of-the-art analysis routine including data preprocessing and standard fitting (global analysis), as well as fitting of complex (target) kinetic models, interactive viewing of (fit) results, and multiexperiment analysis via user accessible functions and a graphical user interface (GUI) enhanced interface. To facilitate its use, this paper guides the user through typical operations covering a wide range of analysis tasks, establishes a typical workflow and is bridging the gap between ease of use for less experienced users and introducing the advanced interfaces for experienced users. KiMoPack is open source and provides a comprehensive front-end for preprocessing, fitting and plotting of 2-dimensional data that simplifies the access to a powerful python-based data-processing system and forms the foundation for a well documented, reliable, and reproducible data analysis.
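The core step of a simple global analysis — kinetics shared across all probe wavelengths — can be sketched as follows. This is a generic illustration of the idea (amplitudes solved by linear least squares for fixed trial rates), not KiMoPack's actual API:

```python
import numpy as np

def global_fit_amplitudes(times, data, rates):
    """Solve for the amplitude spectra given trial decay rates shared
    across all probe wavelengths -- the linear step of a simple
    global analysis of 2-dimensional transient data."""
    C = np.exp(-np.outer(times, rates))       # (n_times, n_components) basis
    amps, *_ = np.linalg.lstsq(C, data, rcond=None)
    return amps                               # (n_components, n_wavelengths)

# Synthetic two-component dataset: D = C @ A with known rates and spectra.
t = np.linspace(0.0, 10.0, 200)
true_rates = np.array([1.0, 0.2])
true_amps = np.array([[1.0, 0.5],
                      [0.3, 0.8]])            # component x wavelength
D = np.exp(-np.outer(t, true_rates)) @ true_amps
est = global_fit_amplitudes(t, D, true_rates)
```

A full analysis would additionally optimize the rates themselves (and, for target analysis, a kinetic scheme) in an outer nonlinear loop, which is the kind of workflow the tool wraps in its GUI.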

Proceedings ArticleDOI
25 Apr 2022
TL;DR: The findings show that the conversational interface was significantly more effective in building user trust and satisfaction in the online housing recommendation system when compared to the conventional web interface.
Abstract: Trust is an important component of human-AI relationships and plays a major role in shaping the reliance of users on online algorithmic decision support systems. With recent advances in natural language processing, text and voice-based conversational interfaces have provided users with new ways of interacting with such systems. Despite the growing applications of conversational user interfaces (CUIs), little is currently understood about the suitability of such interfaces for decision support and how CUIs inspire trust among humans engaging with decision support systems. In this work, we aim to address this gap and answer the following question: to what extent can a conversational interface build user trust in decision support systems in comparison to a conventional graphical user interface? To this end, we built a text-based conversational interface, and a conventional web-based graphical user interface. These served as the means for users to interact with an online decision support system to help them find housing, given a fixed set of constraints. To understand how the accuracy of the decision support system moderates user behavior and trust across the two interfaces, we considered an accurate and inaccurate system. We carried out a 2 × 2 between-subjects study (N = 240) on the Prolific crowdsourcing platform. Our findings show that the conversational interface was significantly more effective in building user trust and satisfaction in the online housing recommendation system when compared to the conventional web interface. Our results highlight the potential impact of conversational interfaces for trust development in decision support systems.

Proceedings ArticleDOI
29 Apr 2022
TL;DR: This paper addressed the limitations of existing manual UI transition approaches in spatially diverse tasks by designing and evaluating three UI transition mechanisms with different levels of automation and controllability, and simulated imperfect contextual awareness by introducing prediction errors with different costs to correct them.
Abstract: Imagine in the future people comfortably wear augmented reality (AR) displays all day, how do we design interfaces that adapt to the contextual changes as people move around? In current operating systems, the majority of AR content defaults to staying at a fixed location until being manually moved by the users. However, this approach puts the burden of user interface (UI) transition solely on users. In this paper, we first ran a bodystorming design workshop to capture the limitations of existing manual UI transition approaches in spatially diverse tasks. Then we addressed these limitations by designing and evaluating three UI transition mechanisms with different levels of automation and controllability (low-effort manual, semi-automated, fully-automated). Furthermore, we simulated imperfect contextual awareness by introducing prediction errors with different costs to correct them. Our results provide valuable lessons about the trade-offs between UI automation levels, controllability, user agency, and the impact of prediction errors.

Proceedings ArticleDOI
30 Jun 2022
TL;DR: The biophotonics app enables multidisciplinary and self-paced learning in both in-person and virtual environments; it can work offline and has a user-friendly interface well accepted by students.
Abstract: The biophotonics app enables multidisciplinary and self-paced learning in both in-person and virtual environments. The app can work offline and has a user-friendly interface well accepted by students. App instructions are publicly available.


Journal ArticleDOI
TL;DR: The study concludes that a one-size-fits-all UI design is unsuitable for shared devices such as the smart TV and recommends a personalized adaptive UI, which may enhance the learnability and UX of smart TV viewers.
Abstract: The user interface (UI) is the primary means of interaction with a device. Since the introduction of the graphical user interface (GUI), software engineers and designers have been trying to make user-friendly UIs for various computing devices, including smartphones, tablets, and computers. The modern smart TV also comes with a built-in operating system. However, little attention has been given to this prominent entertainment device. Technological advancement and the proliferation of the smart TV have enabled manufacturers to provide rich functionalities and features; however, this richness has resulted in more cluttered and attention-demanding interfaces. Moreover, the smart TV is a lean-back device with a diverse range of users. Therefore, the smart TV's usability and user experience (UX) are questionable due to diverse user interests and the limited capabilities of traditional remote controls. This study aimed to discuss and critically analyze the features and functionalities of existing well-known smart TV UIs of various operating systems in the context of usability, cognition, and UX. Moreover, this study highlights the issues and challenges in current smart TV UIs and recommends research opportunities for addressing them. This study further reports and validates some overlooked factors affecting smart TV UIs and UX. A subjective study and usability tests with diverse users are presented to validate these factors. The study concludes that a one-size-fits-all UI design is unsuitable for shared devices such as the smart TV, and recommends a personalized adaptive UI, which may enhance the learnability and UX of smart TV viewers.

Journal ArticleDOI
TL;DR: In this paper, the authors present a user interface for whole-farm MP models implemented in the open-source statistical programming language R. They apply the tool in a participatory research process in Paraguay, analyzing the opportunities and barriers to adoption of new agroforestry options for smallholder farmers.

Journal ArticleDOI
29 Apr 2022
TL;DR: This work assesses different user interfaces that effectively transfer a mentor's hand gestures to the movements of virtual surgical instruments.
Abstract: Recent tele-mentoring technologies for minimally invasive surgery (MIS) augment the operative field with movements of virtual surgical instruments as visual cues. The objective of this work is to assess different user interfaces that effectively transfer a mentor's hand gestures to the movements of virtual surgical instruments.

Proceedings ArticleDOI
26 Jul 2022
TL;DR: The results indicate that using a CUI for maintenance reports saves a significant amount of time, is no more cognitively demanding than writing a report, and results in maintenance reports of higher quality.
Abstract: Maintaining a complex system, such as a modern production line, is a knowledge-intensive task. Many firms use maintenance reports as a decision support tool. However, reports are often poor quality and tedious to compile. A Conversational User Interface (CUI) could streamline the reporting process by validating the user’s input, eliciting more valuable information, and reducing the time needed. In this paper, we use a Technology Probe to explore the potential of a CUI to create instructional maintenance reports. We conducted a between-groups study (N = 24) in which participants had to replace the inner tube of a bicycle tire. One group documented the procedure using a CUI while replacing the inner tube, whereas the other group compiled a paper report afterward. The CUI was enacted by a researcher according to a set of rules. Our results indicate that using a CUI for maintenance reports saves a significant amount of time, is no more cognitively demanding than writing a report, and results in maintenance reports of higher quality.

Journal ArticleDOI
TL;DR: The proposed multimodal interface outperforms traditional interfaces, as evaluated with the support of police agents of the Explosive Ordnance Disposal Unit-Arequipa (UDEX-AQP), who assessed the developed interfaces to find the most intuitive system that imposes the least stress on the operator.
Abstract: A global human–robot interface that meets the needs of Technical Explosive Ordnance Disposal Specialists (TEDAX) for the manipulation of a robotic arm is of utmost importance to make the task of handling explosives safer and more intuitive while providing high usability and efficiency. This paper evaluates the performance of a multimodal system for a robotic arm that is based on a Natural User Interface (NUI) and a Graphical User Interface (GUI). The interfaces are compared to determine the best configuration for controlling the robotic arm in Explosive Ordnance Disposal (EOD) applications and to improve the user experience of TEDAX agents. Tests were conducted with the support of police agents of the Explosive Ordnance Disposal Unit-Arequipa (UDEX-AQP), who evaluated the developed interfaces to find the most intuitive system that imposes the least stress on the operator; the proposed multimodal interface performed better than the traditional interfaces. The evaluation of the laboratory experiences was based on measuring the workload and usability of each interface.

Proceedings ArticleDOI
08 Mar 2022
TL;DR: The GANSpiration approach is proposed, which suggests design examples for both targeted and serendipitous inspiration by leveraging a style-based Generative Adversarial Network, paving the road toward using advanced generative machine learning techniques to support creative design practice.
Abstract: Inspiration from design examples plays a crucial role in the creative process of user interface design. However, current tools and techniques that support inspiration usually only focus on example browsing with limited user control or similarity-based example retrieval, leading to undesirable design outcomes such as focus drift and design fixation. To address these issues, we propose the GANSpiration approach that suggests design examples for both targeted and serendipitous inspiration, leveraging a style-based Generative Adversarial Network. A quantitative evaluation revealed that the outputs of GANSpiration-based example suggestion approaches are relevant to the input design, and at the same time include diverse instances. A user study with professional UI/UX practitioners showed that the examples suggested by our approach serve as viable sources of inspiration for overall design concepts and specific design elements. Overall, our work paves the road of using advanced generative machine learning techniques in supporting the creative design practice.
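One classical way to balance the "relevant yet diverse" suggestion goal described above is maximal marginal relevance over embedding vectors. The sketch below illustrates that idea as a generic stand-in; it is not the paper's GAN-based method:

```python
import numpy as np

def mmr_select(query, candidates, k=3, lam=0.7):
    """Maximal Marginal Relevance: greedily pick k candidate indices
    that are similar to the query embedding but mutually diverse.
    lam trades relevance (1.0) against diversity (0.0)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    chosen = []
    remaining = list(range(len(candidates)))
    while remaining and len(chosen) < k:
        def score(i):
            rel = cos(query, candidates[i])
            # Penalize similarity to anything already chosen.
            div = max((cos(candidates[i], candidates[j]) for j in chosen),
                      default=0.0)
            return lam * rel - (1 - lam) * div
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

With UI screenshots embedded into vectors (by any feature extractor), this returns examples related to the current design while avoiding the near-duplicates that drive design fixation.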


Journal ArticleDOI
TL;DR: In this article, a set of RFID (Radio Frequency Identification)-based patient care systems driven by physiological signals was established in pursuit of a remote medical care system; experiments and comparative analyses show that the proposed system is superior to competing systems in use.
Abstract: The safety of patients and the quality of medical care provided to them are vital for their wellbeing. This study establishes a set of RFID (Radio Frequency Identification)-based patient care systems driven by physiological signals in pursuit of a remote medical care system. The RFID-based positioning system allows medical staff to continuously observe the patient's health and location. The staff can thus respond to medical emergencies in time and appropriately care for the patient. When the COVID-19 pandemic broke out, the proposed system was used to provide timely information on the location and body temperature of patients who had been screened for the disease. The results of experiments and comparative analyses show that the proposed system is superior to competing systems in use. The use of remote monitoring technology makes it easier for the user interface to deliver high-quality medical services to sparsely populated remote areas, and enables better care of the elderly and patients with mobility issues. The experiments in this research show that the accuracy of the position sensor and the package delivery capability are the best among related studies. The graphical interface is also the most user-friendly for human-computer interaction, and its operation is simple and clear. © 2022 CRL Publishing. All rights reserved.

Journal ArticleDOI
TL;DR: In this article, a picture-based and conversational interaction with users is proposed to elicit their feedback on perceived similarities in order to determine the most likely diagnosis of a diseased target apple.
Abstract: This article presents the development of an expert system to support the diagnosis of post-harvest diseases of stored apples. We propose a picture-based and conversational interaction with users, where sampled images depicting symptoms of apples with known diseases are presented to users to elicit their feedback on perceived similarities in order to determine the most likely diagnosis of a diseased target apple. Besides the description of the industrial application scenario, this article makes multiple contributions centered on three rounds of user studies: (i) a usability and effectiveness assessment of the approach, where three user interface configurations are put to the test and the effectiveness of different types of user feedback mechanisms is assessed; (ii) contextual multi-armed bandit approaches for dynamic selection of displayed images with symptoms of diseased apples, which clearly outperform random and greedy sampling baseline strategies; (iii) a comparison of two different strategies for determining the context representation of a contextual multi-armed bandit approach, namely one based on PCA of image features and one based on a gamified large-scale user study. We therefore provide design insights for the development of such diagnosis applications for diseases that manifest themselves through visual symptoms in general, and hence the findings can also be valid for domains other than post-harvest fruit diseases.
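The contextual multi-armed bandit of contribution (ii) can be illustrated with LinUCB, a standard linear contextual bandit that maintains one ridge-regression reward model per arm. The toy reward model below is an assumption for illustration, not the paper's actual setup:

```python
import numpy as np

class LinUCBArm:
    """One arm (e.g., one candidate symptom image) of a LinUCB bandit."""
    def __init__(self, d, alpha=1.0):
        self.A = np.eye(d)        # ridge-regularized design matrix
        self.b = np.zeros(d)
        self.alpha = alpha        # exploration strength
    def ucb(self, x):
        """Upper confidence bound on this arm's reward for context x."""
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b    # per-arm linear reward estimate
        return float(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x

# Toy scenario: the context vector encodes the case at hand, and arm i
# pays off only when feature i of the context is large.
rng = np.random.default_rng(0)
arms = [LinUCBArm(2), LinUCBArm(2)]
for _ in range(300):
    x = rng.random(2)
    i = max(range(2), key=lambda k: arms[k].ucb(x))
    reward = x[i] + 0.05 * rng.normal()   # noisy, context-dependent payoff
    arms[i].update(x, reward)

best_for_context0 = max(range(2),
                        key=lambda k: arms[k].ucb(np.array([1.0, 0.0])))
```

The PCA-based strategy in contribution (iii) corresponds to choosing what the context vector `x` contains; the bandit machinery itself is unchanged.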


Journal ArticleDOI
Di Zhu, Dahua Wang, Ruonan Huang, Yu Jing, Li Qiao, Wei Li 
TL;DR: This paper presents the design and evaluation of a to-do list application to help older adults encode, store, and retrieve non-declarative memory, such as tasks they plan to do; the evaluation found three usability issues, and an iteration plan is proposed.

Proceedings ArticleDOI
01 Mar 2022
TL;DR: In this article, a new interactive virtual environment (IVE) is introduced to compare user interaction between a mode with a traditional graphical user interface (GUI) and a mode in which every interface element is replaced by a voice user interface (VUI).
Abstract: A trend of using natural interaction such as speech is clearly visible in human-computer interaction, while in interactive virtual environments (IVEs) it has not yet become common practice. Most input interface elements are graphical, and they are usually implemented as non-diegetic 2D boards hanging in 3D space. Such holographic interfaces are usually hard to learn and operate, especially for inexperienced users. We have observed a need to explore the potential of using multimodal interfaces in VR and to conduct systematic research comparing interaction modes in order to optimize the interface and increase the quality of user experience (UX). We introduce a new IVE designed to compare user interaction between a mode with a traditional graphical user interface (GUI) and a mode in which every element of the interface is replaced by a voice user interface (VUI). In each version, four scenarios of interaction with a virtual assistant in a sci-fi location are implemented using Unreal Engine, each of them lasting several minutes. The IVE is supplemented with tools for automatically generating reports on user behavior (clicktracking, audiotracking, and eyetracking), which makes it useful for UX and usability studies.

Journal ArticleDOI
01 Aug 2022-IT
TL;DR: An overview of the current research in adaptive systems is provided and methodological approaches used in mixed reality, physiological computing, visual analytics, and proficiency-aware systems are discussed.
Abstract: Adaptive visualization and interfaces pervade our everyday tasks to improve interaction from the point of view of user performance and experience. This approach allows using several user inputs, whether physiological, behavioral, qualitative, or multimodal combinations, to enhance the interaction. Due to the multitude of approaches, we outline the current research trends of inputs used to adapt visualizations and user interfaces. Moreover, we discuss methodological approaches used in mixed reality, physiological computing, visual analytics, and proficiency-aware systems. With this work, we provide an overview of the current research in adaptive systems.

Proceedings ArticleDOI
29 Apr 2022
TL;DR: In this article, the authors propose AutoVCI, a novel approach to automatically generate voice command interfaces from smartphone operation sequences, avoiding the tremendous labeling effort that existing approaches require to compile graphical user interface (GUI) and utterance data.
Abstract: Using voice commands to automate smartphone tasks (e.g., making a video call) can effectively augment the interactivity of numerous mobile apps. However, creating voice command interfaces requires a tremendous amount of effort in labeling and compiling the graphical user interface (GUI) and the utterance data. In this paper, we propose AutoVCI, a novel approach to automatically generate a voice command interface (VCI) from smartphone operation sequences. The generated voice command interface has two distinct features. First, it automatically maps a voice command to GUI operations and fills in parameters accordingly, leveraging the GUI data instead of a corpus or hand-written rules. Second, it launches a complementary Q&A dialogue to confirm the intention in case of ambiguity. In addition, the generated voice command interface can learn and evolve from user interactions. It accumulates historical command-understanding results to annotate the user's input and improve its semantic understanding ability. We implemented this approach on Android devices and conducted a two-phase user study with 16 and 67 participants in each phase. Experimental results of the study demonstrated the practical feasibility of AutoVCI.
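The two features described — mapping a command to GUI operations with parameter filling, and falling back to a clarifying question under ambiguity — can be sketched as follows. The template format and operation names are hypothetical, not AutoVCI's actual representation:

```python
import re

# Hypothetical command templates mapping utterances to recorded GUI
# operation sequences, with named groups as fillable parameters.
TEMPLATES = {
    r"call (?P<contact>\w+)":    ["open:Contacts", "tap:{contact}", "tap:VideoCall"],
    r"message (?P<contact>\w+)": ["open:Messages", "tap:{contact}"],
}

def interpret(utterance):
    """Map an utterance to a GUI operation sequence; if zero or several
    templates match, fall back to a clarifying Q&A turn instead."""
    matches = []
    for pattern, ops in TEMPLATES.items():
        m = re.fullmatch(pattern, utterance.strip().lower())
        if m:
            matches.append([op.format(**m.groupdict()) for op in ops])
    if len(matches) == 1:
        return matches[0]
    return ["ask:Which action did you mean?"]
```

The real system derives such mappings from demonstrated operation sequences and GUI data rather than hand-written regexes; the sketch only shows the parameter-filling and ambiguity-fallback structure.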