
Showing papers on "Interface (computing)" published in 2020


Journal ArticleDOI
TL;DR: This article reviews journal publications on TL approaches in EEG-based BCIs in the last few years, i.e., since 2016; it groups the TL approaches into cross-subject/session, cross-device, and cross-task settings and reviews them separately.
Abstract: A brain-computer interface (BCI) enables a user to communicate with a computer directly using brain signals. The most common non-invasive BCI modality, electroencephalogram (EEG), is sensitive to noise/artifact and suffers from between-subject/within-subject non-stationarity. Therefore, it is difficult to build a generic pattern recognition model in an EEG-based BCI system that is optimal for different subjects, during different sessions, for different devices and tasks. Usually, a calibration session is needed to collect some training data for a new subject, which is time-consuming and user-unfriendly. Transfer learning (TL), which utilizes data or knowledge from similar or relevant subjects/sessions/devices/tasks to facilitate learning for a new subject/session/device/task, is frequently used to reduce the amount of calibration effort. This paper reviews journal publications on TL approaches in EEG-based BCIs in the last few years, i.e., since 2016. Six paradigms and applications – motor imagery, event-related potentials, steady-state visual evoked potentials, affective BCIs, regression problems, and adversarial attacks – are considered. For each paradigm/application, we group the TL approaches into cross-subject/session, cross-device, and cross-task settings and review them separately. Observations and conclusions are made at the end of the paper, which may point to future research directions.
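
One representative cross-subject technique from this line of work is Euclidean alignment, which re-centers each subject's EEG trial covariances at the identity matrix so that trials from different subjects become more comparable before a shared classifier is trained. A minimal numpy/scipy sketch (array shapes are assumptions, not from the review):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def euclidean_align(trials):
    """Euclidean alignment for cross-subject EEG transfer.
    Whitens every trial by the inverse square root of the subject's
    mean spatial covariance, so the aligned mean covariance becomes
    the identity. trials: (n_trials, n_channels, n_samples)."""
    covs = np.array([t @ t.T / t.shape[1] for t in trials])
    r_inv_sqrt = fractional_matrix_power(covs.mean(axis=0), -0.5)
    return np.array([r_inv_sqrt @ t for t in trials])

# Align each subject/session separately, then pool the aligned trials
# to train a single cross-subject model.
```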

128 citations


Book ChapterDOI
01 Jan 2020
TL;DR: A virtual reality interface that allows users to remotely teleoperate a physical robot in real time and to directly move the robot’s end effector by moving a hand controller in 3D space, enabling fine-grained dexterous control.
Abstract: Teleoperation allows a human to remotely operate a robot to perform complex and potentially dangerous tasks such as defusing a bomb, repairing a nuclear reactor, or maintaining the exterior of a space station. Existing teleoperation approaches generally rely on computer monitors to display sensor data and joysticks or keyboards to actuate the robot. These approaches use 2D interfaces to view and interact with a 3D world, which can make using them difficult for complex or delicate tasks. To address this problem, we introduce a virtual reality interface that allows users to remotely teleoperate a physical robot in real-time. Our interface allows users to control their point of view in the scene using virtual reality, increasing situational awareness (especially of object contact), and to directly move the robot’s end effector by moving a hand controller in 3D space, enabling fine-grained dexterous control. We evaluated our interface on a cup-stacking manipulation task with 18 participants, comparing the relative effectiveness of a keyboard and mouse interface, virtual reality camera control, and positional hand tracking. Our system reduces task completion time from 153 s (±44) to 53 s (±37), a reduction of 66%, while improving subjective assessments of system usability and workload. Additionally, we have shown the effectiveness of our system over long distances, successfully completing a cup stacking task from over 40 miles away. Our paper contributes a quantitative assessment of robot grasping teleoperation across desktop and virtual reality interfaces.
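
In interfaces like this, the positional hand-tracking mode typically streams controller pose deltas to the end-effector while a clutch button is held, letting the user reposition their hand without moving the robot. A simplified, position-only sketch of that mapping (class and parameter names are hypothetical, not from the paper):

```python
import numpy as np

class ClutchedTeleop:
    """Map hand-controller motion to end-effector targets while a clutch
    button is held; releasing the clutch lets the user reposition the
    hand without moving the robot. Position-only for brevity."""
    def __init__(self, scale=1.0):
        self.scale = scale        # motion scaling (<1 for fine work)
        self.ref_hand = None      # hand position when clutch engaged
        self.ref_robot = None     # end-effector position at engagement

    def update(self, hand_pos, robot_pos, clutch_pressed):
        if not clutch_pressed:
            self.ref_hand = None
            return robot_pos      # clutch released: hold current pose
        if self.ref_hand is None: # clutch just engaged: store references
            self.ref_hand = np.asarray(hand_pos, dtype=float)
            self.ref_robot = np.asarray(robot_pos, dtype=float)
        # Target = robot reference + scaled hand displacement
        return self.ref_robot + self.scale * (np.asarray(hand_pos) - self.ref_hand)
```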

101 citations


Journal ArticleDOI
TL;DR: A new strategy to interface silicon-based chips with three-dimensional microwire arrays is presented, providing the link between rapidly developing electronics and high-density neural interfaces; the resulting system has excellent recording performance.
Abstract: Multi-channel electrical recording of neural activity in the brain is an increasingly powerful method revealing new aspects of neural communication, computation, and prosthetics. However, while planar silicon-based CMOS devices in conventional electronics scale rapidly, neural interface devices have not kept pace. Here, we present a new strategy to interface silicon-based chips with three-dimensional microwire arrays, providing the link between rapidly developing electronics and high-density neural interfaces. The system consists of a bundle of microwires mated to large-scale microelectrode arrays, such as camera chips. This system has excellent recording performance, demonstrated via single-unit and local-field potential recordings in isolated retina and in the motor cortex or striatum of awake moving mice. The modular design enables a variety of microwire types and sizes to be integrated with different types of pixel arrays, connecting the rapid progress of commercial multiplexing, digitisation and data acquisition hardware together with a three-dimensional neural interface.

92 citations


Journal ArticleDOI
01 Apr 2020
TL;DR: The development of neural interfaces, which can provide a direct, electrical bridge between analogue human nervous systems and digital man-made devices, is examined, considering the challenges and opportunities created by such technology.
Abstract: Devices such as keyboards and touchscreens allow humans to communicate with machines. Neural interfaces, which can provide a direct, electrical bridge between analogue nervous systems and digital man-made systems, could provide a more efficient route to future information exchange. Here we review the development of electronic neural interfaces. The interfaces typically consist of three modules — a tissue interface, a sensing interface, and a neural signal processing unit — and based on technical milestones in the development of the electronic sensing interface, we group and analyse the interfaces into four generations: the patch clamp technique, multi-channel neural interfaces, implantable/wearable neural interfaces and integrated neural interfaces. We also consider key circuit and system challenges in the design of neural interfaces and explore the opportunities that arise with the latest technology. This Review Article examines the development of neural interfaces, which can provide a direct, electrical bridge between analogue human nervous systems and digital man-made devices, considering challenges and opportunities created by such technology.

88 citations


Proceedings ArticleDOI
30 May 2020
TL;DR: The insight is that many prior accelerator architectures can be approximated by composing a small number of hardware primitives, specifically those from spatial architectures; this insight is used to develop the DSAGEN framework, which automates the hardware/software co-design process for reconfigurable accelerators.
Abstract: Domain-specific hardware accelerators can provide orders of magnitude speedup and energy efficiency over general purpose processors. However, they require extensive manual effort in hardware design and software stack development. Automated ASIC generation (e.g., HLS) can be insufficient, because the hardware becomes inflexible. An ideal accelerator generation framework would be automatable, enable deep specialization to the domain, and maintain a uniform programming interface. Our insight is that many prior accelerator architectures can be approximated by composing a small number of hardware primitives, specifically those from spatial architectures. With careful design, a compiler can understand how to use available primitives, with modular and composable transformations, to take advantage of the features of a given program. This suggests a paradigm where accelerators can be generated by searching within such a rich accelerator design space, guided by the affinity of input programs for hardware primitives and their interactions. We use this approach to develop the DSAGEN framework, which automates the hardware/software co-design process for reconfigurable accelerators. For several existing accelerators, our evaluation demonstrates that the compiler can achieve 89% of the performance of manually tuned versions. For automated design space exploration, we target multiple sets of workloads which prior accelerators are designed for; the generated hardware achieves a mean of 1.3× performance²/mm² over prior programmable accelerators.

73 citations


Journal ArticleDOI
TL;DR: A millimeter-scale pressure sensor that adopts a soft, three-dimensional design and integrates into a thin, flexible, battery-free wireless platform with a built-in temperature sensor to allow operation in a noninvasive, imperceptible fashion directly at the skin-prosthesis interface is introduced.
Abstract: Precise form-fitting of prosthetic sockets is important for the comfort and well-being of persons with limb amputations. Capabilities for continuous monitoring of pressure and temperature at the skin-prosthesis interface can be valuable in the fitting process and in monitoring for the development of dangerous regions of increased pressure and temperature as limb volume changes during daily activities. Conventional pressure transducers and temperature sensors cannot provide comfortable, irritation-free measurements because of their relatively rigid construction and requirements for wired interfaces to external data acquisition hardware. Here, we introduce a millimeter-scale pressure sensor that adopts a soft, three-dimensional design that integrates into a thin, flexible battery-free, wireless platform with a built-in temperature sensor to allow operation in a noninvasive, imperceptible fashion directly at the skin-prosthesis interface. The sensor system mounts on the surface of the skin of the residual limb, in single or multiple locations of interest. A wireless reader module attached to the outside of the prosthetic socket wirelessly provides power to the sensor and wirelessly receives data from it, for continuous long-range transmission to a standard consumer electronic device such as a smartphone or tablet computer. Characterization of both the sensor and the system, together with theoretical analysis of the key responses, illustrates linear, accurate responses and the ability to address the entire range of relevant pressures and to capture skin temperature accurately, both in a continuous mode. Clinical application in two prosthesis users demonstrates the functionality and feasibility of this soft, wireless system.

72 citations


Posted Content
TL;DR: This work introduces a new quantum simulation framework that enables developers to delegate all complicated aspects of hardware or platform implementation to the library so they can focus on the problem and quantum algorithms at hand.
Abstract: We present Qibo, a new open-source software for fast evaluation of quantum circuits and adiabatic evolution which takes full advantage of hardware accelerators. The growing interest in quantum computing and the recent developments of quantum hardware devices motivate the development of new advanced computational tools focused on performance and usage simplicity. In this work we introduce a new quantum simulation framework that enables developers to delegate all complicated aspects of hardware or platform implementation to the library so they can focus on the problem and quantum algorithms at hand. This software is designed from scratch with simulation performance, code simplicity and a user-friendly interface as target goals. It takes advantage of hardware acceleration such as multi-threaded CPU execution and single- and multi-GPU devices.
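
To give a flavor of the "focus on the algorithm" interface described above, a Bell-state circuit in Qibo looks roughly like the following (sketched from the library's documented usage around the time of the paper; exact calls may differ between versions):

```python
# Minimal Qibo sketch: build and simulate a 2-qubit Bell-state circuit.
# Based on the documented API; version details may vary.
from qibo import gates
from qibo.models import Circuit

circuit = Circuit(2)             # 2-qubit circuit
circuit.add(gates.H(0))          # Hadamard on qubit 0
circuit.add(gates.CNOT(0, 1))    # entangle qubits 0 and 1
circuit.add(gates.M(0, 1))       # measure both qubits

result = circuit(nshots=1000)    # execute on the active backend
print(result.frequencies())      # expect roughly half '00', half '11'
```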

63 citations


Journal ArticleDOI
TL;DR: The interaction behavior of users on smartphones was studied, a model of the interface visual design method of the interdisciplinary “Shared Communication” system was constructed for mobile app interface design, and a case study of Didi Chuxing preliminarily confirmed the feasibility of the model.
Abstract: In order to achieve information visualization, realize good interaction between users and information, and meet the needs of users, this study first examined users’ interaction behavior when using smartphones, and analyzed the visual factors of the smartphone interface at the level of user sensory interaction and the level of user operation modes, from the expression of visual form to commonly used interface patterns and User Interface (UI) component space. On this basis, the situational visual expression of scenes in different interaction contexts was analyzed. Secondly, the basic theory of visual design for smartphone application interfaces was explained from the perspectives of aesthetics, semiotics and Gestalt psychology. In other words, the visual design of the application interface should be metaphorical, highlight the key points in the overall visual style, and conform to the user’s mental model. At the same time, in order to meet users’ personalized needs for control, it must offer customized options. Finally, a model of the interface visual design method of the interdisciplinary “Shared Communication” system was constructed for mobile app interface design, and the case of Didi Chuxing was analyzed, which preliminarily confirmed the feasibility of the model.

62 citations


Journal ArticleDOI
27 Jun 2020-Sensors
TL;DR: Various BCI applications such as tele-presence, grasping of objects, navigation, etc. that use multi-sensor fusion and machine learning to control a humanoid robot to perform a desired task are discussed.
Abstract: A Brain-Computer Interface (BCI) acts as a communication mechanism using brain signals to control external devices. The generation of such signals is sometimes independent of the nervous system, such as in Passive BCI. This is especially beneficial for those who have severe motor disabilities. Traditional BCI systems have been dependent only on brain signals recorded using Electroencephalography (EEG) and have used a rule-based translation algorithm to generate control commands. However, the recent use of multi-sensor data fusion and machine learning-based translation algorithms has improved the accuracy of such systems. This paper discusses various BCI applications such as tele-presence, grasping of objects, navigation, etc. that use multi-sensor fusion and machine learning to control a humanoid robot to perform a desired task. The paper also includes a review of the methods and system design used in the discussed applications.

58 citations


Journal ArticleDOI
TL;DR: This paper focuses on connecting the brain with a mobile home robot by translating brain signals into computer commands, building a brain-computer interface that may greatly enhance the quality of life of disabled and able-bodied people by considerably improving their autonomy, mobility, and abilities.
Abstract: The assistive, adaptive, and rehabilitative applications of EEG-based robot control and navigation are undergoing a major transformation in dimension as well as scope. Against the background of artificial intelligence, medical and nonmedical robots have rapidly developed and have gradually been applied to enhance the quality of people’s lives. We focus on connecting the brain with a mobile home robot by translating brain signals into computer commands to build a brain-computer interface that may offer the promise of greatly enhancing the quality of life of disabled and able-bodied people by considerably improving their autonomy, mobility, and abilities. Several types of robots have been controlled using BCI systems to complete real-time simple and/or complicated tasks with high performance. In this paper, a new EEG-based intelligent teleoperation system was designed for a mobile wall-crawling cleaning robot. This robot uses a crawler type instead of the traditional wheel type so that it can be used for window as well as floor cleaning. For the EEG-based system to control the robot’s position as it climbs the wall and completes cleaning tasks, we extracted steady-state visually evoked potentials (SSVEP) from the collected electroencephalography (EEG) signals. The visual stimulation interface in the proposed SSVEP-based BCI was composed of four flicker pieces with different frequencies (i.e., 6 Hz, 7.5 Hz, 8.57 Hz, and 10 Hz). Seven subjects were able to smoothly control the movement directions of the cleaning robot by looking at the corresponding flicker using their brain activity. To solve the multiclass problem, thereby achieving the purpose of cleaning the wall within a short period, the canonical correlation analysis (CCA) classification algorithm was used. Offline and online experiments were conducted to analyze/classify EEG signals and use them as real-time commands. The proposed system was efficient in the classification and control phases, with an obtained accuracy of 89.92%, and had an efficient response speed and timing with a bit rate of 22.23 bits/min. These results suggest that the proposed EEG-based cleaning robot system is promising for smart home control in terms of completing the tasks of cleaning the walls with efficiency, safety, and robustness.
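
For reference, standard CCA-based SSVEP classification scores each candidate stimulus frequency by the canonical correlation between the multi-channel EEG window and sinusoidal references at that frequency and its harmonics, then picks the best-scoring frequency. A minimal scikit-learn sketch using the paper's four stimulus frequencies (sampling rate, shapes, and harmonic count are assumptions):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_score(eeg, freq, fs, n_harmonics=2):
    """Max canonical correlation between EEG (n_samples, n_channels)
    and sin/cos references at `freq` and its harmonics."""
    t = np.arange(eeg.shape[0]) / fs
    ref = np.column_stack([f(2 * np.pi * h * freq * t)
                           for h in range(1, n_harmonics + 1)
                           for f in (np.sin, np.cos)])
    u, v = CCA(n_components=1).fit_transform(eeg, ref)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

def classify_ssvep(eeg, freqs=(6.0, 7.5, 8.57, 10.0), fs=250):
    """Pick the stimulus frequency with the highest canonical correlation."""
    scores = [cca_score(eeg, f, fs) for f in freqs]
    return freqs[int(np.argmax(scores))]
```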

56 citations


Posted Content
TL;DR: This work presents an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots and uses commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
Abstract: Visual imitation learning provides a framework for learning complex manipulation behaviors by leveraging human demonstrations. However, current interfaces for imitation such as kinesthetic teaching or teleoperation prohibitively restrict our ability to efficiently collect large-scale data in the wild. Obtaining such diverse demonstration data is paramount for the generalization of learned skills to novel scenarios. In this work, we present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots. We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector. To extract action information from these visual demonstrations, we use off-the-shelf Structure from Motion (SfM) techniques in addition to training a finger detection network. We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task. For both tasks, we use standard behavior cloning to learn executable policies from the previously collected offline demonstrations. To improve learning performance, we employ a variety of data augmentations and provide an extensive analysis of their effects. Finally, we demonstrate the utility of our interface by evaluating on real robotic scenarios with previously unseen objects, achieving an 87% success rate on pushing and a 62% success rate on stacking. Robot videos are available at this https URL.
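
The behavior-cloning stage described here is standard supervised regression from observations to demonstrated actions; a minimal PyTorch sketch (feature and action dimensions are placeholders, not the paper's):

```python
import torch
import torch.nn as nn

# Illustrative behavior cloning: per-frame demo features (e.g., derived
# from SfM poses and detected finger positions) regress to end-effector
# actions. Sizes below are placeholders.
policy = nn.Sequential(
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 4),             # e.g., (dx, dy, dz, gripper)
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def bc_step(features, actions):
    """One supervised step: regress demonstrated actions from observations."""
    opt.zero_grad()
    loss = loss_fn(policy(features), actions)
    loss.backward()
    opt.step()
    return loss.item()
```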

Proceedings ArticleDOI
21 Apr 2020
TL;DR: This paper presents multimodal interactions that function consistently across different visualizations, supporting common operations during visual data analysis by considering standard interface elements and grounding the design in a set of core concepts including operations, parameters, targets, and instruments.
Abstract: While tablet devices are a promising platform for data visualization, supporting consistent interactions across different types of visualizations on tablets remains an open challenge. In this paper, we present multimodal interactions that function consistently across different visualizations, supporting common operations during visual data analysis. By considering standard interface elements (e.g., axes, marks) and grounding our design in a set of core concepts including operations, parameters, targets, and instruments, we systematically develop interactions applicable to different visualization types. To exemplify how the proposed interactions collectively facilitate data exploration, we employ them in a tablet-based system, InChorus, that supports pen, touch, and speech input. Based on a study with 12 participants performing replication and fact-checking tasks with InChorus, we discuss how participants adapted to using multimodal input and highlight considerations for future multimodal visualization systems.

Journal ArticleDOI
TL;DR: A comprehensive up-to-date survey identifies the main trade-offs and limitations of the existing hardware-accelerated platforms and infrastructures for NFs and outlines directions for future research.
Abstract: In order to facilitate flexible network service virtualization and migration, network functions (NFs) are increasingly executed by software modules as so-called “softwarized NFs” on General-Purpose Computing (GPC) platforms and infrastructures. GPC platforms are not specifically designed to efficiently execute NFs with their typically intense Input/Output (I/O) demands. Recently, numerous hardware-based accelerations have been developed to augment GPC platforms and infrastructures, e.g., the central processing unit (CPU) and memory, to efficiently execute NFs. This article comprehensively surveys hardware-accelerated platforms and infrastructures for executing softwarized NFs. This survey covers both commercial products, which we consider to be enabling technologies, as well as relevant research studies. We have organized the survey into the main categories of enabling technologies and research studies on hardware accelerations for the CPU, the memory, and the interconnects (e.g., between CPU and memory), as well as custom and dedicated hardware accelerators (that are embedded on the platforms); furthermore, we survey hardware-accelerated infrastructures that connect GPC platforms to networks (e.g., smart network interface cards). We find that the CPU hardware accelerations have mainly focused on extended instruction sets and CPU clock adjustments, as well as cache coherency. Hardware accelerated interconnects have been developed for on-chip and chip-to-chip connections. Our comprehensive up-to-date survey identifies the main trade-offs and limitations of the existing hardware-accelerated platforms and infrastructures for NFs and outlines directions for future research.

Journal ArticleDOI
TL;DR: Empirical results show that the proposed method can sense air quality and, to a certain degree, reveal patterns of change in air quality.

Journal ArticleDOI
16 Nov 2020
TL;DR: The proposed robotic arm control system was designed and realized by combining augmented reality, computer vision, and steady-state visual evoked potential (SSVEP)-BCI, demonstrating the potential of combining AR-BCI and computer vision to control robotic arms, which is expected to further promote the practicality of BCI-controlled robots.
Abstract: Recent advances in robotics, neuroscience, and signal processing make it possible to operate a robot through an electroencephalography (EEG)-based brain-computer interface (BCI). Although some successful attempts have been made in recent years, the practicality of the entire system still has much room for improvement. The present study designed and realized a robotic arm control system by combining augmented reality (AR), computer vision, and steady-state visual evoked potential (SSVEP)-BCI. The AR environment was implemented on a Microsoft HoloLens. Flickering stimuli for eliciting SSVEPs were presented on the HoloLens, which allowed users to see both the robotic arm and the user interface of the BCI. Thus, users did not need to switch attention between the visual stimulator and the robotic arm. A four-command SSVEP-BCI was built for users to choose the specific object to be operated by the robotic arm. Once an object was selected, computer vision provided the location and color of the object in the workspace. Subsequently, the object was autonomously picked up and placed by the robotic arm. According to the online results obtained from twelve participants, the mean classification accuracy of the proposed system was 93.96 ± 5.05%. Moreover, all subjects could utilize the proposed system to successfully pick and place objects in a specific order. These results demonstrated the potential of combining AR-BCI and computer vision to control robotic arms, which is expected to further promote the practicality of BCI-controlled robots.

Journal ArticleDOI
TL;DR: The main problems in modeling accuracy, system bandwidth and stability, limitations on the communication and energy interfaces, and the cost of platform construction are examined.
Abstract: Real-time simulation of power electronics has been recognized by the industry as an effective tool for developing power electronic devices and systems. Since there is no energy transfer during use, real-time simulation offers many advantages during development and experimentation. From the perspective of real-time simulation, this paper focuses on the main problems in modeling accuracy, system bandwidth and stability, limitations on the communication interface and energy interface, and the cost of platform construction. Finally, we provide further research directions.

Journal ArticleDOI
TL;DR: A general, modular and expandable framework for the application of HMs to peripheral neural interfaces, in which the correct degree of approximation required to answer different kinds of research questions can be readily determined and implemented is illustrated.
Abstract: Peripheral neural interfaces have been successfully used in the recent past to restore sensory-motor functions in disabled subjects and for the neuromodulation of the autonomic nervous system. The optimization of these neural interfaces is crucial for ethical, clinical and economic reasons. In particular, hybrid models (HMs) constitute an effective framework to simulate direct nerve stimulation and optimize virtually every aspect of implantable electrode design: the type of electrode (for example, intrafascicular versus extrafascicular), their insertion position and the used stimulation routines. They are based on the combined use of finite element methods (to calculate the voltage distribution inside the nerve due to the electrical stimulation) and computational frameworks such as NEURON ( https://neuron.yale.edu/neuron/ ) to determine the effects of the electric field generated on the neural structures. They have already provided useful results for different applications, but the overall usability of this powerful approach is still limited by the intrinsic complexity of the procedure. Here, we illustrate a general, modular and expandable framework for the application of HMs to peripheral neural interfaces, in which the correct degree of approximation required to answer different kinds of research questions can be readily determined and implemented. The HM workflow is divided into the following tasks: identify and characterize the fiber subpopulations inside the fascicles of a given nerve section, determine different degrees of approximation for fascicular geometries, locate the fibers inside these geometries and parametrize electrode geometries and the geometry of the nerve-electrode interface. These tasks are examined in turn, and solutions to the most relevant issues regarding their implementation are described. Finally, some examples related to the simulation of common peripheral neural interfaces are provided.
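
To make the NEURON half of this workflow concrete, the sketch below drives a model fiber through the `extracellular` mechanism; in a real hybrid model the analytic point source would be replaced by the FEM voltage solution interpolated at each compartment. Geometry, membrane model, and scaling are illustrative placeholders, not the paper's settings.

```python
# Hybrid-model sketch: apply an extracellular potential to a NEURON fiber.
from neuron import h
import numpy as np

h.load_file("stdrun.hoc")

axon = h.Section(name="axon")
axon.L, axon.diam, axon.nseg = 10000.0, 10.0, 101   # um; many segments
axon.insert("hh")             # stand-in for a fiber membrane model
axon.insert("extracellular")  # exposes e_extracellular per segment

SIGMA = 0.003  # extracellular conductivity, S/cm (illustrative)

def apply_potential(i_ua):
    """Set e_extracellular from a point source 100 um above the fiber
    midpoint -- the slot where the FEM solution would be plugged in."""
    for seg in axon:
        x_um = seg.x * axon.L
        r_cm = np.hypot(x_um - 5000.0, 100.0) * 1e-4   # um -> cm
        v_uv = i_ua / (4 * np.pi * SIGMA * r_cm)       # point source, uV
        seg.e_extracellular = v_uv * 1e-3              # uV -> mV

h.finitialize(-65)
apply_potential(0.0);   h.continuerun(1.0)   # baseline until t = 1 ms
apply_potential(-50.0); h.continuerun(1.2)   # 0.2 ms cathodic pulse
apply_potential(0.0);   h.continuerun(5.0)   # watch for a propagating spike
```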

Journal ArticleDOI
TL;DR: A framework for performance evaluation of a recommending interface that takes individual user characteristics and goals into consideration is proposed; it can be used to automate optimal recommending-interface adjustment according to the characteristics of the user and their goals.
Abstract: The increasing amount of marketing content in e-commerce websites results in the limited attention of users. For recommender systems, the way recommended items are presented becomes as important as the underlying algorithms for product selection. In order to improve the effectiveness of content presentation, marketing experts experiment with the layout and other visual aspects of website elements to find the most suitable solution. This study investigates those aspects for a recommending interface. We propose a framework for performance evaluation of a recommending interface, which takes into consideration individual user characteristics and goals. At the heart of the proposed solution is a deep neural network trained to predict the efficiency of a particular recommendation presented in a selected position and with a chosen degree of intensity. The proposed Performance Evaluation of a Recommending Interface (PERI) framework can be used to automate optimal recommending-interface adjustment according to the characteristics of the user and their goals. The experimental results of the study are based on data from research-grade Gazepoint GP3 eye-tracking equipment, together with synthetic data used to perform pre-assessment training of the neural network.
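
A minimal stand-in for the described predictor, assuming a hypothetical feature layout (recommendation position, intensity, and user descriptors) and synthetic labels in place of the eye-tracker data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical features: [pos_x, pos_y, intensity, user_trait, user_goal]
# mapped to an observed efficiency score. All names are illustrative.
rng = np.random.default_rng(0)
X = rng.random((1000, 5))   # stand-in for logged interface data
y = rng.random(1000)        # stand-in efficiency labels

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500)
model.fit(X, y)

# Adjust the interface by scoring candidate placements/intensities
# for a given user and picking the best-predicted one.
candidates = np.array([[x, ypos, i, 0.3, 1.0]
                       for x in (0.1, 0.5, 0.9)
                       for ypos in (0.2, 0.8)
                       for i in (0.3, 0.6, 1.0)])
best = candidates[model.predict(candidates).argmax()]
```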

Journal ArticleDOI
TL;DR: In this paper, a scalable photonic neuro-inspired architecture based on the reservoir computing paradigm is proposed for real-time video processing; unlike deep learning, it does not require large training datasets or special-purpose, energy-consuming hardware.
Abstract: The recognition of human actions in video streams is a challenging task in computer vision, with cardinal applications in, e.g., brain-computer interfaces and surveillance. Deep learning has shown remarkable results recently, but can be hard to use in practice, as its training requires large datasets and special-purpose, energy-consuming hardware. In this work, we propose a scalable photonic neuro-inspired architecture based on the reservoir computing paradigm, capable of recognising video-based human actions with state-of-the-art accuracy. Our experimental optical setup comprises off-the-shelf components, and implements a large parallel recurrent neural network that is easy to train and can be scaled up to hundreds of thousands of nodes. This work paves the way towards simply reconfigurable and energy-efficient photonic information processing systems for real-time video processing.
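
In software terms, the reservoir computing paradigm used here keeps a large random recurrent network fixed (realized optically in the paper) and trains only a linear readout. A compact echo-state-network sketch in numpy (dimensions and hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n_in, n_res = 64, 1000              # input features per frame, reservoir size

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def run_reservoir(frames):
    """frames: (T, n_in) video features -> (T, n_res) reservoir states."""
    x = np.zeros(n_res)
    states = []
    for u in frames:
        x = np.tanh(W_in @ u + W @ x)   # fixed random dynamics, never trained
        states.append(x)
    return np.array(states)

def train_readout(state_list, labels, n_classes, lam=1e-3):
    """Ridge-regression readout on temporally mean-pooled states --
    the only trained part of the system."""
    Z = np.array([s.mean(axis=0) for s in state_list])
    Y = np.eye(n_classes)[labels]                    # one-hot targets
    return np.linalg.solve(Z.T @ Z + lam * np.eye(n_res), Z.T @ Y)
```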

Journal ArticleDOI
TL;DR: A passive, device-free FDS based on a commodity WiFi framework for smart homes is presented; it is mainly composed of two modules, a hardware platform and a client application, and achieves satisfactory performance compared with state-of-the-art algorithms.
Abstract: Falls among the elderly living on their own have been regarded as a major public health worry that can even lead to death. A fall detection system (FDS) that alerts caregivers or family members can potentially save the lives of the elderly. However, conventional FDSs involve wearable sensors and specialized hardware installations. This article presents a passive, device-free FDS based on a commodity WiFi framework for the smart home, which is mainly composed of two modules: a hardware platform and a client application. Concretely, commercial WiFi devices collect the disturbance signal induced by human motions in the smart home and transmit the data to a data analysis platform for further processing. On this basis, a discrete wavelet transform (DWT) method is used to eliminate the influence of random noise present in the collected data. Next, a recurrent neural network (RNN) model is utilized to classify human motions and identify fall status automatically. By leveraging a Web Application Programming Interface (API), the analyzed data can be uploaded to the proxy server, from which the client application then obtains the corresponding fall information. Moreover, the system has been implemented as a consumer mobile app that can help save the lives of the elderly in the smart home, and the detection performance of the proposed FDS has been evaluated by conducting comprehensive experiments on a real-world dataset. The results confirm that the proposed FDS is able to achieve satisfactory performance compared with some state-of-the-art algorithms.
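
The DWT denoising step described above can be sketched with PyWavelets; the wavelet choice, decomposition level, and universal threshold below are common defaults, not necessarily the paper's settings:

```python
import numpy as np
import pywt

def dwt_denoise(csi, wavelet="db4", level=4):
    """Soft-threshold DWT denoising of a 1-D WiFi CSI amplitude stream,
    in the spirit of the preprocessing step described above. The cleaned
    stream would then feed the RNN motion classifier."""
    coeffs = pywt.wavedec(csi, wavelet, level=level)
    # Universal threshold estimated from the finest detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(csi)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(csi)]
```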

Proceedings ArticleDOI
21 Apr 2020
TL;DR: This work develops a handheld dispenser tool for directly applying continuous functional tapes of desired length as well as discrete patches, and introduces versatile composition techniques that allow for creating complex circuits, utilizing commodity textile accessories and sketching custom-shaped I/O modules.
Abstract: Rapid prototyping of interactive textiles is still challenging, since manual skills, several processing steps, and expert knowledge are involved. We present Rapid Iron-On User Interfaces, a novel fabrication approach for empowering designers and makers to enhance fabrics with interactive functionalities. It builds on heat-activated adhesive materials consisting of smart textiles and printed electronics, which can be flexibly ironed onto the fabric to create custom interface functionality. To support rapid fabrication in a sketching-like fashion, we developed a handheld dispenser tool for directly applying continuous functional tapes of desired length as well as discrete patches. We introduce versatile composition techniques that allow for creating complex circuits, utilizing commodity textile accessories and sketching custom-shaped I/O modules. We further contribute a comprehensive library of components for input, output, wiring and computing. Three example applications, results from technical experiments and expert reviews demonstrate the functionality, versatility and potential of this approach.

Journal ArticleDOI
TL;DR: Experimental results show that the new approach allows users to continuously drive the mobile robot via BMI; leads to significant improvements in the navigation performance; and promotes a better coupling between user and robot.
Abstract: Despite the growing interest in brain–machine interface (BMI)-driven neuroprostheses, the translation of the BMI output into a suitable control signal for the robotic device is often neglected. In this article, we propose a novel control approach based on dynamical systems that was explicitly designed to take into account the nature of the BMI output, that actively supports the user in delivering real-valued commands to the device and, at the same time, reduces the false positive rate. We hypothesize that such a control framework would allow users to continuously drive a mobile robot and would enhance the navigation performance. Thirteen healthy users evaluated the system during three experimental sessions. Users exploited a 2-class motor imagery BMI to drive the robot to five targets in two experimental conditions: with a discrete control strategy, traditionally exploited in the BMI field, and with the novel continuous control framework developed herein. Experimental results show that the new approach: 1) allows users to continuously drive the mobile robot via BMI; 2) leads to significant improvements in the navigation performance; and 3) promotes a better coupling between user and robot. These results highlight the importance of designing a suitable control framework to improve the performance and the reliability of BMI-driven neurorobotic devices.
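
The paper's dynamical-systems controller is not reproduced here, but its flavor can be conveyed by a leaky integrator that accumulates the BMI's class probabilities into a continuous steering command and drifts back to neutral when the evidence is weak, which is one way such a design suppresses false positives (equations and parameters are illustrative, not the authors'):

```python
import numpy as np

def steer_update(x, p_left, dt=0.0625, alpha=4.0, beta=6.0):
    """One illustrative integration step of a control dynamical system.
    x: current steering state in [-1, 1]; p_left: BMI probability of
    the 'turn left' class (p_right = 1 - p_left)."""
    evidence = p_left - 0.5                # signed evidence from the BMI
    dx = -alpha * x + beta * evidence      # leak toward neutral + drive
    return float(np.clip(x + dt * dx, -1.0, 1.0))

# x near 0 -> go straight; saturated x -> full turn. Uncertain BMI output
# (p_left ~ 0.5) lets the state decay to neutral instead of triggering
# a spurious command.
```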

Journal ArticleDOI
TL;DR: A systematic review of state-of-the-art brain-switch techniques published from 2000 to 2019, analyzed in terms of neuroimaging modality, paradigm, operation algorithm, and performance, together with future research directions.
Abstract: A brain–computer interface (BCI) has been extensively studied to develop a novel communication system for disabled people using their brain activities. An asynchronous BCI system is more realistic and practical than a synchronous BCI system, in that BCI commands can be generated whenever the user wants. However, the relatively low performance of an asynchronous BCI system is problematic because redundant BCI commands are required to correct false-positive operations. To significantly reduce the number of false-positive operations of an asynchronous BCI system, a two-step approach has been proposed using a brain-switch that first determines whether the user wants to use an asynchronous BCI system before the operation of the asynchronous BCI system. This study presents a systematic review of the state-of-the-art brain-switch techniques and future research directions. To this end, we reviewed brain-switch research articles published from 2000 to 2019 in terms of their (a) neuroimaging modality, (b) paradigm, (c) operation algorithm, and (d) performance.

Journal ArticleDOI
TL;DR: A highly efficient system for the configuration, deployment, and management of smart street lights, which utilizes container-based virtualization to deploy all edge computing devices on the server and validates the feasibility of simultaneously operating multiple container-based services on edge computing devices.
Abstract: Street lights are among the most common infrastructure in cities. Street lights and sensors can be combined to generate an interface for data collection. The analysis of massive data serves as an integral element of a smart city. This paper proposes a highly efficient system for the configuration, deployment, and management of smart street lights. The fast deployment and high scalability of container-based system management enable virtualized deployment. Additionally, for database design, NoSQL and in-memory databases are integrated to realize flexible data management. For data transmission, this paper designs an asymmetric-key scheme and an SSH encrypted tunnel. Moreover, when all the services are connected, the system conducts legitimacy validation via a token. Therefore, this system can help meet a smart city's demands for data throughput, low latency, and configuration. It offers high efficiency and security, as well as a flexible data storage and management service to facilitate the massive data processing of a smart city. For the experiments, this paper designs a street lighting simulation system with edge computing devices (consisting of a micro-controller, a sensor, and an IP camera) and a street lighting function. The system collects real-time sensed environmental data, enables live streaming of images, and offers an API for historical data queries. This paper utilizes container-based virtualization to deploy all edge computing devices on the server and validates the feasibility of simultaneous operation of multiple container-based services on edge computing devices. This system has high commercial value.

Journal ArticleDOI
TL;DR: A passive four-degree-of-freedom foot interface to control a robotic surgical instrument, based on a parallel–serial hybrid mechanism with springs and force sensors, that provides an operator with intuitive control of continuous direction and speed, with force and position feedback.
Abstract: This article introduces a passive four-degree-of-freedom foot interface to control a robotic surgical instrument. This interface is based on a parallel–serial hybrid mechanism with springs and force sensors. In contrast to existing switch-based interfaces that can command a slave robot arm at constant speed in only discrete directions, the novel interface provides an operator with intuitive control in continuous directions and speed with force and position feedback. The output command of the interface was initially derived based on the kinematics and statics of the interface. Since distinct movement patterns among different subjects were observed in a pilot test, a data-driven approach using independent component analysis was developed to convert the foot inputs to the control commands of the user. The capability of this interface in controlling a robotic arm in multiple degrees of freedom was further verified with a teleoperation test.
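
As a rough illustration of the data-driven mapping described above, one can fit an ICA decomposition to calibration recordings of the foot-sensor channels and use the recovered components as continuous command axes (the pairing of components with degrees of freedom, and all array shapes, are assumptions for illustration):

```python
import numpy as np
from sklearn.decomposition import FastICA

def fit_foot_mapping(sensors, n_commands=4):
    """Fit ICA on calibration data recorded while the subject performs
    each foot motion pattern. sensors: (n_samples, n_sensors) raw
    force/position channels."""
    ica = FastICA(n_components=n_commands, random_state=0)
    ica.fit(sensors)
    return ica

def foot_to_command(ica, sample):
    """Project one sensor frame onto the subject-specific independent
    components; each component drives one command axis."""
    return ica.transform(np.asarray(sample).reshape(1, -1))[0]
```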

Journal ArticleDOI
TL;DR: Current implementations and a roadmap for leveraging high-performance computing tools and methods on small satellites with radiation-tolerant hardware are discussed, including runtime analysis with benchmarks of convolutional neural networks and matrix multiplications using industry-standard tools (e.g., TensorFlow and PlaidML).
Abstract: The last decade has seen a dramatic increase in small satellite missions for commercial, public, and government intelligence applications. Given the rapid commercialization of constellation-driven services in Earth Observation, situational domain awareness, communications including machine-to-machine interface, exploration, etc., small satellites represent an enabling technology for a large growth market generating truly Big Data. Examples of modern sensors that can generate very large amounts of data are optical sensing, hyperspectral, Synthetic Aperture Radar (SAR), and infrared imaging. Traditional handling and downloading of Big Data from space requires a large onboard mass storage and a high-bandwidth downlink, with a trend towards optical links. Many missions and applications can benefit significantly from onboard cloud computing, similarly to Earth-based cloud services, enabling space systems to provide near real-time data and low-latency distribution of critical and time-sensitive information to users. In addition, the downlink capability can be more effectively utilized by applying more onboard processing to reduce the data and create high-value information products. This paper discusses current implementations and a roadmap for leveraging high-performance computing tools and methods on small satellites with radiation-tolerant hardware. This includes runtime analysis with benchmarks of convolutional neural networks and matrix multiplications using industry-standard tools (e.g., TensorFlow and PlaidML). In addition, a ½ CubeSat volume unit (0.5U, 10 × 10 × 5 cm³) cloud computing solution, called SpaceCloud™ iX5100 and based on AMD 28 nm APU technology, is presented as an example of a heterogeneous computer solution. An evaluation of the AMD 14 nm Ryzen APU is presented as a candidate for future advanced onboard processing for space vehicles.
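
To give a flavor of the benchmarking the paper describes, a matrix-multiplication microbenchmark in TensorFlow might look like the following (matrix size, repetition count, and FLOP accounting are illustrative; the paper's actual harness is not shown):

```python
import time
import numpy as np
import tensorflow as tf

def bench_matmul(n=2048, reps=20):
    """Rough GFLOP/s estimate for dense float32 matmul, the style of
    microbenchmark run on candidate onboard hardware."""
    a = tf.constant(np.random.rand(n, n).astype(np.float32))
    b = tf.constant(np.random.rand(n, n).astype(np.float32))
    _ = tf.matmul(a, b)              # warm-up (kernel selection, caches)
    start = time.perf_counter()
    for _ in range(reps):
        c = tf.matmul(a, b)
    _ = c.numpy()                    # force any async execution to finish
    elapsed = time.perf_counter() - start
    return 2 * n**3 * reps / elapsed / 1e9   # 2n^3 FLOPs per matmul

print(f"{bench_matmul():.1f} GFLOP/s")
```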

Journal ArticleDOI
TL;DR: Experimental results demonstrated that the proposed method can generate the motion path of the mobile robot according to the specific requirements of the operator and achieve good obstacle avoidance performance.
Abstract: This paper proposes a novel control system for the path planning of an omnidirectional mobile robot based on mixed reality. Most research on mobile robots is carried out in a completely real environment or a completely virtual environment. However, a real environment containing virtual objects has important practical applications. The proposed system can control the movement of the mobile robot in the real environment, as well as the interaction between the mobile robot’s motion and virtual objects that can be added to a real environment. First, an interactive interface is presented on the mixed-reality device HoloLens. The interface can display the map, path, control commands, and other information related to the mobile robot, and it can add virtual objects to the real map to realize a real-time interaction between the mobile robot and the virtual objects. Then, the original path planning algorithm, vector field histogram* (VFH*), is modified in the aspects of the threshold, candidate direction selection, and cost function, to make it more suitable for scenes with virtual objects, reduce the number of calculations required, and improve safety. Experimental results demonstrated that this proposed method can generate the motion path of the mobile robot according to the specific requirements of the operator, and achieve good obstacle avoidance performance.
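
For orientation, the VFH family of planners that the paper modifies works by accumulating range readings into a polar obstacle-density histogram, thresholding it, and steering toward the admissible sector closest to the target heading. A bare-bones sketch of that basic step (the paper's VFH* changes to the threshold, candidate selection, and cost function are not reproduced):

```python
import numpy as np

def vfh_direction(obstacles, target_angle, n_sectors=72, thresh=1.0):
    """Basic VFH step: build a polar obstacle-density histogram, find
    free sectors, and pick the one closest to the target heading.
    obstacles: (N, 2) array of (angle_rad, distance_m) readings."""
    hist = np.zeros(n_sectors)
    for ang, dist in obstacles:
        sector = int((ang % (2 * np.pi)) / (2 * np.pi) * n_sectors)
        hist[sector] += 1.0 / max(dist, 1e-6)    # closer -> higher density
    free = np.where(hist < thresh)[0]
    if free.size == 0:
        return None                              # fully blocked: stop/replan
    centers = (free + 0.5) * 2 * np.pi / n_sectors
    diff = np.angle(np.exp(1j * (centers - target_angle)))  # wrapped diff
    return centers[np.argmin(np.abs(diff))]
```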

Journal ArticleDOI
TL;DR: A MATLAB-based program with an easy-to-use graphical interface is developed to estimate the 3D geometry of magnetic basement interfaces from gridded magnetic anomalies, using a rapid iterative procedure based on a relationship between the Fourier transforms of the magnetic data and the interface topography.

Journal ArticleDOI
TL;DR: The results of these usability tests show that the proposed approach is more intuitive, ergonomic, and easy to use, and clearly improves the speed of the teleoperation task, regardless of the user’s previous experience with robotics and augmented reality technology.
Abstract: This research develops a novel teleoperation interface for robot manipulators based on augmented reality. The proposed interface is equipped with full capabilities to replace the classical teach pendant of the robot for carrying out teleoperation tasks. The proposed interface is based on an augmented reality headset for projecting computer-generated graphics onto the real environment and a gamepad to interact with the computer-generated graphics and provide robot commands. In order to demonstrate the benefits of the proposed method, several usability tests were conducted using a 6R industrial robot manipulator to compare the proposed interface and the conventional teach pendant interface for teleoperation tasks. In particular, the results of these usability tests show that the proposed approach is more intuitive, ergonomic, and easy to use. Furthermore, the comparison results also show that the proposed method clearly improves the speed of the teleoperation task, regardless of the user’s previous experience with robotics and augmented reality technology.

Journal ArticleDOI
01 Sep 2020
TL;DR: The advancement of neuroscience and computer science promotes the ability of the human brain to communicate and interact with the environment, making the brain-computer interface (BCI) a top interdisciplinary field.
Abstract: The advancement in neuroscience and computer science promotes the ability of the human brain to communicate and interact with the environment, making the brain–computer interface (BCI) a top interdisciplinary field.