
Showing papers on "Interface (computing)" published in 2018


Journal ArticleDOI
TL;DR: HiGlass is presented, an open source visualization tool built on web technologies that provides a rich interface for rapid, multiplex, and multiscale navigation of 2D genomic maps alongside 1D genomic tracks, allowing users to combine various data types, synchronize multiple visualization modalities, and share fully customizable views with others.
Abstract: We present HiGlass, an open source visualization tool built on web technologies that provides a rich interface for rapid, multiplex, and multiscale navigation of 2D genomic maps alongside 1D genomic tracks, allowing users to combine various data types, synchronize multiple visualization modalities, and share fully customizable views with others. We demonstrate its utility in exploring different experimental conditions, comparing the results of analyses, and creating interactive snapshots to share with collaborators and the broader public. HiGlass is accessible online at http://higlass.io and is also available as a containerized application that can be run on any platform.
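HiGlass composes its views from declarative view configurations. As a hedged illustration only, the sketch below builds a HiGlass-style view configuration as a Python dict; the schema keys and tileset IDs shown are assumptions for illustration and are not verified against current HiGlass documentation.

```python
# A minimal sketch of a HiGlass-style view configuration, expressed as a
# Python dict and serialized to JSON. The keys shown here (views, tracks,
# server, tilesetUid) are assumptions for illustration only.
import json

view_config = {
    "views": [
        {
            "uid": "view-1",
            "tracks": {
                "top": [
                    {"type": "horizontal-gene-annotations",
                     "server": "http://higlass.io/api/v1",
                     "tilesetUid": "EXAMPLE-1D-TILESET"}  # hypothetical ID
                ],
                "center": [
                    {"type": "heatmap",  # 2D genomic map alongside 1D tracks
                     "server": "http://higlass.io/api/v1",
                     "tilesetUid": "EXAMPLE-2D-TILESET"}  # hypothetical ID
                ],
            },
        }
    ]
}

print(json.dumps(view_config, indent=2))
```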

569 citations


Journal ArticleDOI
TL;DR: A physical haptic feedback mechanism is introduced to elicit muscle activity that generates EMG signals in a natural manner, in order to achieve intuitive human impedance transfer through a designed coupling interface.
Abstract: It has been established that the transfer of human adaptive impedance is of great significance for physical human–robot interaction (pHRI). By processing the electromyography (EMG) signals collected from human muscles, the limb impedance can be extracted and transferred to robots. The existing impedance transfer interfaces rely only on visual feedback and, thus, may be insufficient for skill transfer in a sophisticated environment. In this paper, a physical haptic feedback mechanism is introduced to elicit muscle activity that generates EMG signals in a natural manner, in order to achieve intuitive human impedance transfer through a designed coupling interface. Relevant processing methods are integrated into the system, including a spectral collaborative representation-based classification method for hand motion recognition, and a fast smooth envelope and dimensionality reduction algorithm for arm endpoint stiffness estimation. The tutor's arm endpoint motion trajectory is directly transferred to the robot by the designed coupling module without restricting the hands. Haptic feedback is provided to the human tutor according to skill learning performance to enhance the teaching experience. The interface has been experimentally tested with a plugging-in task and a cutting task. Compared with the existing interfaces, the developed one has shown better performance. Note to Practitioners—This paper is motivated by the limited performance of skill transfer in existing human–robot interfaces. Conventional robots perform tasks independently without interaction with humans. However, the new generation of robots, with characteristics such as flexibility and compliance, is becoming more involved in interacting with humans. Thus, advanced human–robot interfaces are required to enable robots to learn human manipulation skills. In this paper, we propose a novel interface for human impedance adaptive skill transfer in a natural and intuitive manner. The developed interface has the following functionalities: 1) it transfers human arm impedance adaptive motion to the robot intuitively; 2) it senses human motion signals that are decoded into the human hand gesture and arm endpoint stiffness that is employed for natural human–robot interaction; and 3) it provides the human tutor haptic feedback for an enhanced teaching experience. The interface can potentially be used in pHRI, teleoperation, human motor training systems, etc.
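The abstract mentions a fast smooth envelope step for arm endpoint stiffness estimation. The paper's exact algorithm is not given here, so the following Python sketch shows only a generic EMG envelope computation (rectification followed by low-pass filtering); the cutoff frequency and filter order are illustrative assumptions.

```python
# Generic EMG amplitude-envelope estimation: full-wave rectification
# followed by a zero-phase low-pass Butterworth filter. Parameters are
# illustrative, not the paper's.
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(emg, fs, cutoff_hz=4.0, order=4):
    """Estimate a smooth EMG amplitude envelope."""
    rectified = np.abs(emg - np.mean(emg))      # remove offset, rectify
    b, a = butter(order, cutoff_hz / (fs / 2))  # low-pass filter design
    return filtfilt(b, a, rectified)            # zero-phase smoothing

# Example: 2 s of synthetic EMG sampled at 1 kHz
fs = 1000
t = np.arange(0, 2, 1 / fs)
emg = np.random.randn(t.size) * (1 + np.sin(2 * np.pi * 0.5 * t))
env = emg_envelope(emg, fs)
```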

172 citations


Journal ArticleDOI
TL;DR: The review identifies the potential of electroencephalography (EEG)-based BCI applications for locomotion and mobility rehabilitation and suggests structuring EEG-BCI controlled LL assistive devices within the presented framework, for a future generation of intent-based multifunctional controllers.
Abstract: Over recent years, the brain-computer interface (BCI) has emerged as an alternative communication system between the human brain and an output device. Deciphered intents, after detecting electrical signals from the human scalp, are translated into control commands used to operate external devices, computer displays and virtual objects in real time. BCI provides augmentative communication by creating a muscle-free channel between the brain and the output devices, primarily for subjects having neuromotor disorders, or trauma to the nervous system, notably spinal cord injuries (SCI), and subjects with unaffected sensorimotor functions but disarticulated or amputated residual limbs. This review identifies the potential of electroencephalography (EEG) based BCI applications for locomotion and mobility rehabilitation. Patients could benefit from its advancements, such as wearable lower-limb (LL) exoskeletons, orthoses, prostheses, wheelchairs, and assistive-robot devices. The EEG communication signals employed by the aforementioned applications, which also provide feasibility for future development in the field, are sensorimotor rhythms (SMR), event-related potentials (ERP) and visual evoked potentials (VEP). The review is an effort to advance the development of user mental tasks related to the LL for BCI reliability and confidence measures. As a novel contribution, the reviewed BCI control paradigms for wearable LL and assistive robots are presented within a general control framework organized in hierarchical layers. It reflects the information interactions between the user, the BCI operator, the shared controller, the robotic device and the environment. Each sub-layer of the BCI operator is discussed in detail, highlighting the feature extraction, classification and execution methods employed by the various systems. All applications' key features and their interaction with the environment are reviewed for EEG-based activity mode recognition, and presented in the form of a table. It is suggested to structure EEG-BCI controlled LL assistive devices within the presented framework, for a future generation of intent-based multifunctional controllers. Despite the development of controllers for BCI-based wearable or assistive devices that can seamlessly integrate user intent, practical challenges associated with such systems exist and have been discerned, which can be constructive for future developments in the field.
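The BCI-operator layers discussed above cover feature extraction, classification, and execution. As a hedged illustration of one common SMR pipeline, not a method prescribed by the review, this Python sketch extracts mu/beta band-power features from synthetic trials and fits a linear discriminant classifier.

```python
# Band-power feature extraction + LDA classification: one common
# SMR-based BCI pipeline, shown here purely for illustration.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def bandpower_features(trials, fs, bands=((8, 12), (18, 26))):
    """trials: (n_trials, n_channels, n_samples) EEG array."""
    feats = []
    for trial in trials:
        f, psd = welch(trial, fs=fs, nperseg=fs)      # per-channel PSD
        feats.append([psd[:, (f >= lo) & (f <= hi)].mean(axis=1)
                      for lo, hi in bands])            # mean power per band
    return np.array(feats).reshape(len(trials), -1)

# Synthetic example: 40 trials, 6 channels, 2 s at 250 Hz
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 6, 500))
y = rng.integers(0, 2, 40)                             # two mental tasks
clf = LinearDiscriminantAnalysis().fit(bandpower_features(X, 250), y)
```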

145 citations


Proceedings ArticleDOI
19 Mar 2018
TL;DR: Capybara improves event detection accuracy by 2x-4x over statically-provisioned energy capacity, maintains response latency within 1.5x of a continuously-powered baseline, and enables reactive applications that are intractable with existing power systems.
Abstract: Battery-free, energy-harvesting devices operate using energy collected exclusively from their environment. Energy-harvesting devices allow maintenance-free deployment in extreme environments, but require a power system to provide the right amount of energy when an application needs it. Existing systems must provision energy capacity statically based on an application's peak demand, which compromises efficiency and responsiveness when not at peak demand. This work presents Capybara: a co-designed hardware/software power system with dynamically reconfigurable energy storage capacity that meets varied application energy demand. The Capybara software interface allows programmers to specify the energy mode of an application task. Capybara's runtime system reconfigures its hardware energy capacity to match application demand. Capybara also allows a programmer to write reactive application tasks that pre-allocate a burst of energy that they can spend in response to an asynchronous (e.g., external) event. We instantiated Capybara's hardware design in two EH devices and implemented three reactive sensing applications using its software interface. Capybara improves event detection accuracy by 2x-4x over statically-provisioned energy capacity, maintains response latency within 1.5x of a continuously-powered baseline, and enables reactive applications that are intractable with existing power systems.
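Capybara itself is an embedded hardware/software system programmed in C; the Python sketch below only mimics the shape of the described interface, with invented names (Task, Runtime, the reactive flag), to illustrate how per-task energy modes could drive capacity configuration.

```python
# Hypothetical sketch of a Capybara-style interface: tasks declare an
# energy mode, and a runtime sizes storage capacity to match demand
# rather than statically provisioning for the peak. All names invented.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    energy_uj: float        # energy this task needs per activation
    reactive: bool = False  # needs a pre-allocated burst for async events

class Runtime:
    def __init__(self):
        self.capacity_uj = 0.0

    def register(self, task: Task):
        # Size capacity to the task's demand instead of peak-provisioning;
        # reactive tasks additionally reserve a burst up front.
        self.capacity_uj = max(self.capacity_uj, task.energy_uj)
        if task.reactive:
            self.capacity_uj += task.energy_uj  # pre-allocated burst

rt = Runtime()
rt.register(Task("sample_sensor", energy_uj=50.0))
rt.register(Task("detect_event", energy_uj=200.0, reactive=True))
print(f"configured capacity: {rt.capacity_uj} uJ")
```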

143 citations


Proceedings ArticleDOI
26 Feb 2018
TL;DR: This paper explores how advances in augmented reality (AR) technologies are creating a new design space for mediating robot teleoperation by enabling novel forms of intuitive, visual feedback, and shows several objective and subjective performance benefits over existing systems.
Abstract: Robot teleoperation can be a challenging task, often requiring a great deal of user training and expertise, especially for platforms with high degrees-of-freedom (e.g., industrial manipulators and aerial robots). Users often struggle to synthesize information robots collect (e.g., a camera stream) with contextual knowledge of how the robot is moving in the environment. We explore how advances in augmented reality (AR) technologies are creating a new design space for mediating robot teleoperation by enabling novel forms of intuitive, visual feedback. We prototype several aerial robot teleoperation interfaces using AR, which we evaluate in a 48-participant user study where participants completed an environmental inspection task. Our new interface designs provided several objective and subjective performance benefits over existing systems, which often force users into an undesirable paradigm that divides user attention between monitoring the robot and monitoring the robot’s camera feed(s).

132 citations


Journal ArticleDOI
TL;DR: This paper surveys human-computer interaction for smart glasses, classifying the interaction methods into hand-held, touch, and touchless input.
Abstract: Since the launch of Google Glass in 2014, smart glasses have mainly been designed to support micro-interactions. The ultimate goal for them to become an augmented reality interface has not yet been attained due to an encumbrance of controls. Augmented reality involves superimposing interactive computer graphics images onto physical objects in the real world. This survey reviews current research issues in the area of human–computer interaction for smart glasses. The survey first studies the smart glasses available on the market and afterwards investigates the interaction methods proposed in the wide body of literature. The interaction methods can be classified into hand-held, touch, and touchless input. This paper mainly focuses on touch and touchless input. Touch input can be further divided into on-device and on-body, while touchless input can be classified into hands-free and freehand. Next, we summarize the existing research efforts and trends, in which touch and touchless input are evaluated by a total of eight interaction goals. Finally, we discuss several key design challenges and the possibility of multi-modal input for smart glasses.

113 citations


Journal ArticleDOI
Meng Wang, Li Renjie, Ruofan Zhang, Guangye Li, Dingguo Zhang
TL;DR: A wearable BCI system based on the steady-state visual evoked potential (SSVEP), which enables 3-D navigation of quadcopter flight with immersive first-person visual feedback using a head-mounted device and provides asynchronous switch control to alleviate the user's operational burden.
Abstract: Restoring the interaction between disabled people and the 3-D physical world via a brain-computer interface (BCI) is an exciting topic. To this end, we designed a wearable BCI system based on the steady-state visual evoked potential (SSVEP), which enables 3-D navigation of quadcopter flight with immersive first-person visual feedback using a head-mounted device. In addition, to alleviate the user's operational burden, the system provides asynchronous switch control. The transitional state due to head movement in an asynchronous BCI was isolated online and translated into hover to eliminate its influence. The experimental results in the physical environment showed that the subjects could accomplish the 3-D flight tasks accurately and smoothly using our system. In particular, we propose an information transfer rate metric that is suitable for the asynchronous task. We demonstrated the feasibility of using the head-mounted device and a proper control strategy to facilitate the portability and practicability of the SSVEP-based BCI system for its navigation utility.
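The abstract does not spell out the SSVEP decoding method. Canonical correlation analysis (CCA) against sine/cosine reference signals is a standard choice for SSVEP frequency detection, so the sketch below shows that generic approach as an assumption, not the authors' exact decoder.

```python
# Generic CCA-based SSVEP frequency detection: correlate multichannel EEG
# with sine/cosine references at each candidate stimulus frequency and
# pick the frequency with the highest canonical correlation.
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_detect(eeg, fs, stim_freqs, n_harmonics=2):
    """eeg: (n_samples, n_channels). Returns the most likely stimulus frequency."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in stim_freqs:
        refs = np.column_stack(
            [fn(2 * np.pi * f * (h + 1) * t)
             for h in range(n_harmonics) for fn in (np.sin, np.cos)])
        cca = CCA(n_components=1).fit(eeg, refs)
        u, v = cca.transform(eeg, refs)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return stim_freqs[int(np.argmax(scores))]

# Demo call on 2 s of 8-channel noise (real use would pass recorded EEG)
fs, freqs = 250, [8.0, 10.0, 12.0, 15.0]
eeg = np.random.randn(2 * fs, 8)
print(ssvep_detect(eeg, fs, freqs))
```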

112 citations


Journal ArticleDOI
21 Nov 2018-PLOS ONE
TL;DR: This study demonstrates, for the first time, high-performance iBCI control of an unmodified, commercially available, general-purpose mobile computing device by people with tetraplegia.
Abstract: General-purpose computers have become ubiquitous and important for everyday life, but they are difficult for people with paralysis to use. Specialized software and personalized input devices can improve access, but often provide only limited functionality. In this study, three research participants with tetraplegia who had multielectrode arrays implanted in motor cortex as part of the BrainGate2 clinical trial used an intracortical brain-computer interface (iBCI) to control an unmodified commercial tablet computer. Neural activity was decoded in real time as a point-and-click wireless Bluetooth mouse, allowing participants to use common and recreational applications (web browsing, email, chatting, playing music on a piano application, sending text messages, etc.). Two of the participants also used the iBCI to “chat” with each other in real time. This study demonstrates, for the first time, high-performance iBCI control of an unmodified, commercially available, general-purpose mobile computing device by people with tetraplegia.

112 citations


Journal ArticleDOI
TL;DR: The task motion and self-motion (CTS) methods are coordinated to enhance the intelligence of the shared control system by equipping the robot with an autonomous obstacle avoidance function.
Abstract: This paper reports the development of an intelligent shared control system for a robotic manipulator that is commanded by the user's mind. The target objects are detected by a vision system and then displayed to the user in a video that shows them fused with flicking diamonds that are designed to excite electroencephalograph (EEG) signals at different frequency bands. Through the analysis of the invoked EEG signals, a brain–computer interface is developed to infer the exact object that is required by the user. These results are then transferred to the shared control system, which is enabled by visual servoing techniques to achieve accurate object manipulation. The task motion and self-motion (CTS) methods are coordinated to enhance the intelligence of the shared control system by equipping the robot with an autonomous obstacle avoidance function. Extensive experimental studies are performed to verify that the adaptive object tracking algorithm, the CTS method, and the least-squares method are helpful in improving the performance of the intelligent robotic system.

104 citations


Journal ArticleDOI
27 Feb 2018
TL;DR: This paper details the development of a gesture recognition technique using a mm-wave radar sensor for in-car infotainment control, including a machine learning engine that can perform real-time gesture recognition.
Abstract: This article details the development of a gesture recognition technique using a mm-wave radar sensor for in-car infotainment control. Gesture recognition is becoming a more prominent form of human-computer interaction and can be used in the automotive industry to provide a safe and intuitive control interface that will limit driver distraction. We use a 60 GHz mm-wave radar sensor to detect precise features of fine motion. Specific gesture features are extracted and used to build a machine learning engine that can perform real-time gesture recognition. This article discusses the user requirements and in-car environmental constraints that influenced design decisions. Accuracy results of the technique are presented, and recommendations for further research and improvements are made.
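As a hedged sketch of the kind of pipeline described, extracting features of fine motion and feeding them to a machine learning engine, the Python example below computes simple micro-Doppler features from synthetic complex radar samples and trains a random-forest classifier; both the feature set and the classifier choice are illustrative assumptions, not the authors' design.

```python
# Illustrative micro-Doppler feature extraction (STFT-based) followed by
# a standard classifier. Synthetic data stands in for real radar returns.
import numpy as np
from scipy.signal import stft
from sklearn.ensemble import RandomForestClassifier

def doppler_features(iq, fs):
    """iq: complex baseband radar samples for one gesture."""
    f, t, Z = stft(iq, fs=fs, nperseg=128, return_onesided=False)
    power = np.abs(Z) ** 2
    centroid = (f[:, None] * power).sum(0) / power.sum(0)  # Doppler centroid
    bandwidth = power.std(axis=0)                          # spectral spread
    return np.concatenate([centroid, bandwidth])

rng = np.random.default_rng(1)
X = np.array([doppler_features(rng.standard_normal(2048)
                               + 1j * rng.standard_normal(2048), 10_000)
              for _ in range(60)])
y = rng.integers(0, 3, 60)                                 # three gestures
clf = RandomForestClassifier().fit(X, y)
```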

98 citations


Journal ArticleDOI
TL;DR: The authors recommend that HMI designers and automated vehicle manufacturers take a more holistic perspective on trust rather than focusing on single, “isolated” events, for example understanding that trust formation is a dynamic process that starts long before a user's first contact with the system, and continues long thereafter.
Abstract: While automated vehicle technology progresses, potentially leading to a safer and more efficient traffic environment, many challenges remain within the area of human factors, such as user trust for automated driving (AD) vehicle systems. The aim of this paper is to investigate how an appropriate level of user trust for AD vehicle systems can be created via human–machine interaction (HMI). A guiding framework for implementing trust-related factors into the HMI interface is presented. This trust-based framework incorporates usage phases, AD events, trust-affecting factors, and levels explaining each event from a trust perspective. Based on the research findings, the authors recommend that HMI designers and automated vehicle manufacturers take a more holistic perspective on trust rather than focusing on single, "isolated" events, for example, understanding that trust formation is a dynamic process that starts long before a user's first contact with the system, and continues long thereafter. Furthermore, factors affecting trust change both during user interactions with the system and over time; thus, HMI concepts need to be able to adapt. Future work should be dedicated to understanding how trust-related factors interact, as well as validating and testing the trust-based framework.

Journal ArticleDOI
TL;DR: A novel service workflow reconfiguration architecture is designed to provide guidance, which ranges from monitoring to recommendations for project implementation, and experiments are conducted to demonstrate the effectiveness and efficiency of the proposed method.

Journal ArticleDOI
TL;DR: The novel online learning method presented consists of a self-adaptive GT2 FS that can autonomously self-adapt both its parameters and structure via creation, fusion, and scaling of the fuzzy system rules in an online BMI experiment with a real robot.
Abstract: This paper presents a self-adaptive autonomous online learning through a general type-2 fuzzy system (GT2 FS) for the motor imagery (MI) decoding of a brain-machine interface (BMI) and navigation of a bipedal humanoid robot in a real experiment, using electroencephalography (EEG) brain recordings only. GT2 FSs are applied to BMI for the first time in this study. We also account for several constraints commonly associated with BMI in real practice: 1) the maximum number of EEG channels is limited and fixed; 2) no possibility of performing repeated user training sessions; and 3) desirable use of unsupervised and low-complexity feature extraction methods. The novel online learning method presented in this paper consists of a self-adaptive GT2 FS that can autonomously self-adapt both its parameters and structure via creation, fusion, and scaling of the fuzzy system rules in an online BMI experiment with a real robot. The structure identification is based on an online GT2 Gath–Geva algorithm where every MI decoding class can be represented by multiple fuzzy rules (models), which are learnt on a continuous (trial-by-trial), non-iterative basis. The effectiveness of the proposed method is demonstrated in a detailed BMI experiment, in which 15 untrained users were able to accurately interface with a humanoid robot, in a single session, using signals from six EEG electrodes only.

Journal ArticleDOI
11 Jan 2018-PLOS ONE
TL;DR: QCloud is presented, a cloud-based system to support proteomics laboratories in daily quality assessment using a user-friendly interface, easy setup, automated data processing and archiving, and unbiased instrument evaluation.
Abstract: The increasing number of biomedical and translational applications in mass spectrometry-based proteomics poses new analytical challenges and raises the need for automated quality control systems. Despite previous efforts to set standard file formats, data processing workflows and key evaluation parameters for quality control, automated quality control systems are not yet widespread among proteomics laboratories, which limits the acquisition of high-quality results, inter-laboratory comparisons and the assessment of variability of instrumental platforms. Here we present QCloud, a cloud-based system to support proteomics laboratories in daily quality assessment using a user-friendly interface, easy setup, automated data processing and archiving, and unbiased instrument evaluation. QCloud supports the most common targeted and untargeted proteomics workflows, it accepts data formats from different vendors and it enables the annotation of acquired data and the reporting of incidents. A complete version of the QCloud system has successfully been developed and it is now open to the proteomics community (http://qcloud.crg.eu). The QCloud system is an open-source project, publicly available under a Creative Commons License Attribution-ShareAlike 4.0.

Journal ArticleDOI
TL;DR: This article discusses the operation of graphene solution-gated field-effect transistors (SGFETs), characterizes their performance in saline solution, and compares it with the performance of state-of-the-art neural technologies.
Abstract: Brain–computer interfaces and neural prostheses based on the detection of electrocorticography (ECoG) signals are rapidly growing fields of research. Several technologies are currently competing to be the first to reach the market; however, none of them yet fulfill all the requirements of the ideal interface with neurons. Thanks to its biocompatibility, low dimensionality, mechanical flexibility, and electronic properties, graphene is one of the most promising material candidates for neural interfacing. After discussing the operation of graphene solution-gated field-effect transistors (SGFETs) and characterizing their performance in saline solution, it is reported here that this technology is suitable for μ-ECoG recordings through studies of spontaneous slow-wave activity, sensory-evoked responses on the visual and auditory cortices, and synchronous activity in a rat model of epilepsy. An in-depth comparison of the signal-to-noise ratio of graphene SGFETs with that of platinum black electrodes confirms that graphene SGFET technology is approaching the performance of state-of-the-art neural technologies.

Journal ArticleDOI
Deyi Li, Hongbo Gao
TL;DR: The formalization of driving cognition reduces the influence of sensor type, model, quantity, and location on the whole software architecture, which makes the software architecture portable across different intelligent driving hardware platforms.

Journal ArticleDOI
01 Aug 2018
TL;DR: This paper presents Northstar, the Interactive Data Science System, which has been developed over the last 4 years to explore designs that make advanced analytics and model building more accessible.
Abstract: In order to democratize data science, we need to fundamentally rethink the current analytics stack, from the user interface to the "guts." Most importantly, enabling a broader range of users to unfold the potential of (their) data requires a change in the interface and the "protection" we offer them. On the one hand, visual interfaces for data science have to be intuitive, easy, and interactive to reach users without a strong background in computer science or statistics. On the other hand, we need to protect users from making false discoveries. Furthermore, it requires that technically involved (and often boring) tasks have to be automatically done by the system so that the user can focus on contributing their domain expertise to the problem. In this paper, we present Northstar, the Interactive Data Science System, which we have developed over the last 4 years to explore designs that make advanced analytics and model building more accessible.

Journal ArticleDOI
TL;DR: A major SimVascular (SV) release is introduced that includes a new graphical user interface (GUI) designed to improve the user experience; major changes to the software platform are described and features added in the new release are outlined.
Abstract: Patient-specific simulation plays an important role in cardiovascular disease research, diagnosis, surgical planning and medical device design, as well as education in cardiovascular biomechanics. SimVascular is an open-source software package encompassing an entire cardiovascular modeling and simulation pipeline from image segmentation, three-dimensional (3D) solid modeling, and mesh generation, to patient-specific simulation and analysis. SimVascular is widely used for cardiovascular basic science and clinical research as well as education, following increased adoption by users and development of a GATEWAY web portal to facilitate educational access. Initial efforts of the project focused on replacing commercial packages with open-source alternatives and adding increased functionality for multiscale modeling, fluid-structure interaction (FSI), and solid modeling operations. In this paper, we introduce a major SimVascular (SV) release that includes a new graphical user interface (GUI) designed to improve user experience. Additional improvements include enhanced data/project management, interactive tools to facilitate user interaction, new boundary condition (BC) functionality, a plug-in mechanism to increase modularity, a new 3D segmentation tool, and new computer-aided design (CAD)-based solid modeling capabilities. Here, we focus on major changes to the software platform and outline features added in this new release. We also briefly describe our recent experiences using SimVascular in the classroom for bioengineering education.

Journal ArticleDOI
TL;DR: In this article, the authors present a comprehensive review of the open literature on motivations, methods and applications of linking stratified airflow simulation to building energy simulation (BES) and show that an external coupling scheme is substantially more popular in implementations of co-simulation than an internal coupling scheme.

Journal ArticleDOI
TL;DR: The core of Xi-cam is an extensible plugin-based graphical user interface platform which provides users with an interactive interface to processing algorithms, and targets cross-facility and cross-technique collaborative development, in support of multi-modal analysis.
Abstract: Xi-cam is an extensible platform for data management, analysis and visualization. Xi-cam aims to provide a flexible and extensible approach to synchrotron data treatment as a solution to rising demands for high-volume/high-throughput processing pipelines. The core of Xi-cam is an extensible plugin-based graphical user interface platform which provides users with an interactive interface to processing algorithms. Plugins are available for SAXS/WAXS/GISAXS/GIWAXS, tomography and NEXAFS data. With Xi-cam's 'advanced' mode, data processing steps are designed as a graph-based workflow, which can be executed live, locally or remotely. Remote execution utilizes high-performance computing or de-localized resources, allowing for the effective reduction of high-throughput data. Xi-cam's plugin-based architecture targets cross-facility and cross-technique collaborative development, in support of multi-modal analysis. Xi-cam is open-source and cross-platform, and available for download on GitHub.

Proceedings ArticleDOI
01 Dec 2018
TL;DR: This study analyzes the optimum configuration of the parameters that most influence the operation and performance of C-V2X (LTE-V) Mode 4 under different channel loads and traffic conditions, and compares the conclusions with those of existing studies, highlighting the importance of using accurate models for adequately configuring the C-V2X Mode 4 interface.
Abstract: The 3GPP has released the C-V2X standard to support V2X (Vehicle-to-Everything) communications using the LTE sidelink PC5 interface. This standard includes two modes of operation, and this study focuses on Mode 4. This mode does not require the support of the cellular infrastructure, and vehicles can autonomously select their sub-channels for their V2V transmissions. The adequate operation of C-V2X Mode 4 requires a careful configuration of its main parameters. This study analyzes the optimum configuration of the parameters that most influence the operation and performance of C-V2X (LTE-V) Mode 4. This analysis is conducted for different channel loads and traffic conditions. The conclusions obtained are compared with those of existing studies, taking into account the importance of using accurate models for adequately configuring the C-V2X Mode 4 interface.
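In Mode 4, vehicles autonomously select sub-channels using sensing-based semi-persistent scheduling. The sketch below is a heavily simplified illustration of that idea (exclude recently busy sub-channels, then choose randomly among the quietest candidates); the threshold and fraction values are illustrative, and the normative procedure is specified in 3GPP TS 36.321.

```python
# Simplified sensing-based sub-channel selection, loosely modeled on the
# Mode 4 idea: filter by measured RSSI, then pick randomly among the best
# candidates. Not the normative 3GPP algorithm.
import random

def select_subchannel(avg_rssi_dbm, threshold_dbm=-90.0, best_fraction=0.2):
    """avg_rssi_dbm: per-sub-channel average RSSI over the sensing window."""
    candidates = [i for i, r in enumerate(avg_rssi_dbm) if r < threshold_dbm]
    if not candidates:                      # all busy: relax the threshold
        candidates = list(range(len(avg_rssi_dbm)))
    candidates.sort(key=lambda i: avg_rssi_dbm[i])   # quietest first
    keep = max(1, int(best_fraction * len(avg_rssi_dbm)))
    return random.choice(candidates[:keep])

rssi = [-95.2, -88.1, -99.7, -91.4, -84.3, -97.0]
print("selected sub-channel:", select_subchannel(rssi))
```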

Journal ArticleDOI
TL;DR: This paper proposes a novel human-computer interaction system exploiting gesture recognition, based on the combined usage of a head-mounted display and a multi-modal sensor setup that also includes a depth camera.
Abstract: This paper proposes a novel human-computer interaction system exploiting gesture recognition. It is based on the combined usage of a head-mounted display and a multi-modal sensor setup that also includes a depth camera. The depth information is used both to seamlessly include augmented reality elements into the real world and as input for a novel gesture-based interface. Reliable gesture recognition is obtained through a real-time algorithm exploiting novel feature descriptors arranged in a multi-dimensional structure fed to an SVM classifier. The system has been tested with various augmented reality applications, including an innovative human-computer interaction scheme where virtual windows can be arranged into the real world observed by the user.
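The recognition stage described above feeds multi-dimensional feature descriptors to an SVM classifier. The following minimal sketch, with random placeholder descriptors standing in for the paper's depth/color features, shows the generic shape of such a classification stage.

```python
# Generic SVM classification of gesture descriptors. The descriptors are
# random placeholders; only the pipeline shape is illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 64))   # 200 gesture samples, 64-dim descriptors
y = rng.integers(0, 5, 200)          # five gesture classes

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```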

Journal ArticleDOI
TL;DR: In this article, the fabrication of electrically conductive yarns made of natural fiber yarns coated with graphene nanoplatelets (GNPs) and carbon black (CB) is reported.

Journal ArticleDOI
TL;DR: The robust structure, stable output performance, and self-powered sensing capability make the VR-3D-CS an ideal human-machine interface for AR, batteryless, and energy-saving applications.

Book
10 Feb 2018
TL;DR: This paper proposes several modifications to the process management facilities of the UNIX kernel, some of which are primarily of interest for parallel processing, such as a generalized fork system call that can efficiently create many processes at once.
Abstract: Despite early development exclusively on uniprocessors, a growing number of UNIX systems are now available for shared memory (MIMD) multiprocessors. While much of this trend has been driven by the general success of the UNIX interface as an emerging industry standard, experience has shown that the basic UNIX design is amenable to such environments. Relatively simple extensions such as shared memory and synchronization mechanisms suffice for many parallel programs. While simple needs can be satisfied in a simple fashion, the desire to support more sophisticated applications has created pressure for ever more complex extensions. Is there a better way to meet such needs? Although some argue that it is time to abandon the UNIX model completely, we believe that viable alternatives exist within the traditional framework. In this paper we propose several modifications to the process management facilities of the UNIX kernel. Some of them are primarily of interest for parallel processing, such as a generalized fork system call that can efficiently create many processes at once, while others are equally attractive in other contexts, such as mechanisms for improved I/O and IPC performance. While the primary goals are improved performance and reliability, a strong aesthetic judgement is applied to create a total design that is cohesively integrated. While the concepts presented here are applicable to any UNIX environment, they have been conceived in the context of very large scale parallel computing, with hundreds or thousands of processors. An initial implementation of these extensions is currently underway for the NYU Ultracomputer prototype and the IBM RP3.
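For context only: the conventional UNIX idiom that the proposed generalized fork would replace is a loop of single fork() calls. The Python sketch below (POSIX systems only) shows that baseline idiom; it is not the paper's proposed system call.

```python
# Baseline UNIX idiom: create n children with repeated fork() calls.
# A generalized "fork many" call, as the paper proposes, would replace
# this loop with a single, more efficient kernel operation.
import os, sys

def fork_n(n):
    """Spawn n children; returns the child index in each child, None in parent."""
    for i in range(n):
        pid = os.fork()
        if pid == 0:          # child process: stop forking, report index
            return i
    return None

idx = fork_n(4)
if idx is not None:
    print(f"child {idx} running as pid {os.getpid()}")
    sys.exit(0)               # children exit here
for _ in range(4):
    os.wait()                 # parent reaps all four children
```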

Journal ArticleDOI
TL;DR: A systematic methodology is presented to identify the spontaneous gesture-based interaction strategies of naive individuals with a distant device, and to exploit this information to develop a data-driven body–machine interface (BoMI) for efficiently controlling that device.
Abstract: The accurate teleoperation of robotic devices requires simple, yet intuitive and reliable control interfaces. However, current human–machine interfaces (HMIs) often fail to fulfill these characteristics, leading to systems requiring intensive practice to reach sufficient operational expertise. Here, we present a systematic methodology to identify the spontaneous gesture-based interaction strategies of naive individuals with a distant device, and to exploit this information to develop a data-driven body–machine interface (BoMI) to efficiently control this device. We applied this approach to the specific case of drone steering and derived a simple control method relying on upper-body motion. The identified BoMI allowed participants with no prior experience to rapidly master the control of both simulated and real drones, outperforming joystick users, and comparing with the control ability reached by participants using the bird-like flight simulator Birdly.
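The paper derives its control mapping from recorded spontaneous gestures. One common data-driven way to build such a body-machine mapping, shown here purely as an assumption rather than the authors' exact method, is to run PCA on upper-body motion data and map the leading components to control axes.

```python
# Illustrative data-driven body-to-drone mapping: PCA on upper-body
# motion, with the two leading components mapped to roll and pitch.
# The data here are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
motion = rng.standard_normal((1000, 12))   # e.g., 4 IMUs x 3 axes of torso data

pca = PCA(n_components=2).fit(motion)      # 2 control DoFs assumed

def body_to_drone(sample, gain=0.5):
    """Map one 12-d posture sample to (roll, pitch) commands."""
    roll, pitch = gain * pca.transform(sample[None, :])[0]
    return float(roll), float(pitch)

print(body_to_drone(motion[0]))
```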

Proceedings ArticleDOI
23 Sep 2018
TL;DR: Results show that combining gestures with mid-air haptic feedback was particularly promising, reducing the number of long glances and mean off-road glance time associated with the in-vehicle tasks.
Abstract: Employing a 2x2 within-subjects design, forty-eight experienced drivers (28 male, 20 female) undertook repeated button selection and 'slider-bar' manipulation tasks, to compare a traditional touchscreen with a virtual mid-air gesture interface in a driving simulator. Both interfaces were tested with and without haptic feedback generated using ultrasound. Results show that combining gestures with mid-air haptic feedback was particularly promising, reducing the number of long glances and mean off-road glance time associated with the in-vehicle tasks. For slider-bar tasks in particular, gestures-with-haptics was also associated with the shortest interaction times, highest number of correct responses and fewest 'overshoots', and was favoured by participants. In contrast, for button-selection tasks, the touchscreen was most popular, enabling the highest accuracy and quickest responses, particularly when combined with haptic feedback to guide interactions, although this also increased visual demand. The study shows clear potential for gestures with mid-air ultrasonic haptic feedback in the automotive domain.

Patent
09 Jan 2018
TL;DR: An interworking method between networks of a user equipment (UE) in a wireless communication system is provided, including performing a first interworking procedure for changing the network of the UE from a 5-generation core network (5GC) to an evolved packet core (EPC) network.
Abstract: According to an aspect of the present invention, there is provided an interworking method between networks of a user equipment (UE) in a wireless communication system, including: performing a first interworking procedure for changing a network of the UE from a 5-generation core network (5GC) network to an evolved packet core (EPC) network, wherein, when an interface between the 5GC and the EPC networks does not exist, the performing of the first interworking procedure includes: receiving a first indication from an access and mobility management function (AMF) of the 5GC network; and performing a handover attach procedure in the EPC network based on the first indication.

Journal ArticleDOI
01 Aug 2018
TL;DR: JedAI 2.0 is presented, which enhances the original version in three important respects: time efficiency, effectiveness and usability.
Abstract: JedAI is an Entity Resolution toolkit that can be used in three ways: (i) as an open-source library that combines state-of-the-art methods into a plethora of end-to-end workflows, (ii) as a user-friendly desktop application with a wizard-like interface that provides complex, out-of-the-box solutions even to lay users, and (iii) as a workbench for comparing the performance of numerous workflows over both structured and semi-structured data. Here, we present its significant upgrade, JedAI 2.0, which enhances the original version in three important respects: (i) time efficiency, as the running time has been drastically reduced with the use of high-performance data structures and multi-core processing, (ii) effectiveness, since we enriched its library with more established methods, a new layer that exploits loose schema binding, and the automatic, data-driven configuration of individual methods or entire workflows, and (iii) usability, as the GUI now enables users to manually configure any method based on concrete guidelines, to store the matching results in any of the supported data formats, and to visually explore both input and output data.
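JedAI itself is a Java toolkit, and its real API is not reproduced here. The toy Python sketch below only illustrates the generic end-to-end workflow shape it assembles: blocking to generate candidate pairs, then matching with a similarity function.

```python
# Toy Entity Resolution workflow: token blocking followed by string
# similarity matching. Generic illustration only, not JedAI's API.
from difflib import SequenceMatcher
from collections import defaultdict

records = [
    {"id": 1, "name": "Jon Smith", "city": "Boston"},
    {"id": 2, "name": "John Smith", "city": "Boston"},
    {"id": 3, "name": "Mary Jones", "city": "Austin"},
]

# Token blocking: records sharing any token become candidate pairs.
blocks = defaultdict(list)
for r in records:
    for tok in (r["name"] + " " + r["city"]).lower().split():
        blocks[tok].append(r["id"])

pairs = {tuple(sorted((a, b)))
         for ids in blocks.values() for a in ids for b in ids if a != b}

# Matching: simple string similarity over candidate pairs.
by_id = {r["id"]: r for r in records}
for a, b in sorted(pairs):
    sim = SequenceMatcher(None, by_id[a]["name"], by_id[b]["name"]).ratio()
    if sim > 0.8:
        print(f"match: {a} ~ {b} (similarity {sim:.2f})")
```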

Journal ArticleDOI
TL;DR: VisTiles is a conceptual framework that uses a set of mobile devices to distribute and coordinate visualization views for the exploration of multivariate data and presents a web-based prototype implementation as a specific instance of the concepts.
Abstract: We present VisTiles, a conceptual framework that uses a set of mobile devices to distribute and coordinate visualization views for the exploration of multivariate data. In contrast to desktop-based interfaces for information visualization, mobile devices offer the potential to provide a dynamic and user-defined interface supporting co-located collaborative data exploration with different individual workflows. As part of our framework, we contribute concepts that enable users to interact with coordinated & multiple views (CMV) that are distributed across several mobile devices. The major components of the framework are: (i) dynamic and flexible layouts for CMV focusing on the distribution of views and (ii) an interaction concept for smart adaptations and combinations of visualizations utilizing explicit side-by-side arrangements of devices. As a result, users can benefit from the possibility to combine devices and organize them in meaningful spatial layouts. Furthermore, we present a web-based prototype implementation as a specific instance of our concepts. This implementation provides a practical application case enabling users to explore a multivariate data collection. We also illustrate the design process including feedback from a preliminary user study, which informed the design of both the concepts and the final prototype.