
Showing papers on "Interface (computing)" published in 2019


Posted Content
TL;DR: New design-criteria for next-generation hyperparameter optimization software are introduced, including define-by-run API that allows users to construct the parameter search space dynamically, and easy-to-setup, versatile architecture that can be deployed for various purposes.
Abstract: The purpose of this study is to introduce new design-criteria for next-generation hyperparameter optimization software. The criteria we propose include (1) define-by-run API that allows users to construct the parameter search space dynamically, (2) efficient implementation of both searching and pruning strategies, and (3) easy-to-setup, versatile architecture that can be deployed for various purposes, ranging from scalable distributed computing to light-weight experiment conducted via interactive interface. In order to prove our point, we will introduce Optuna, an optimization software which is a culmination of our effort in the development of a next generation optimization software. As an optimization software designed with define-by-run principle, Optuna is particularly the first of its kind. We will present the design-techniques that became necessary in the development of the software that meets the above criteria, and demonstrate the power of our new design through experimental results and real world applications. Our software is available under the MIT license (this https URL).
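The define-by-run idea is easiest to see in code. The sketch below is a minimal pure-Python imitation of the principle, not the Optuna library itself: the `Trial`/`suggest_*` names mirror Optuna's public API, the optimizer is plain random search, and the two toy loss surfaces are invented for illustration. The point is that the search space is constructed dynamically while the objective executes, so a categorical choice can gate which hyperparameters exist at all.

```python
import random

class Trial:
    """Minimal stand-in for a define-by-run trial object. The suggest_*
    names mirror Optuna's public API, but this is a toy, not the library."""
    def __init__(self):
        self.params = {}

    def suggest_float(self, name, low, high):
        self.params[name] = random.uniform(low, high)
        return self.params[name]

    def suggest_categorical(self, name, choices):
        self.params[name] = random.choice(choices)
        return self.params[name]

def objective(trial):
    # Define-by-run: the search space is built while the objective runs,
    # so this categorical choice decides which hyperparameters exist.
    model = trial.suggest_categorical("model", ["ridge", "tree"])
    if model == "ridge":
        alpha = trial.suggest_float("alpha", 1e-4, 1.0)
        return (alpha - 0.3) ** 2            # toy loss surface
    depth = trial.suggest_float("depth", 1.0, 8.0)
    return (depth - 5.0) ** 2 / 10.0

def optimize(objective, n_trials):
    """Plain random search standing in for Optuna's samplers."""
    best_value, best_params = float("inf"), None
    for _ in range(n_trials):
        trial = Trial()
        value = objective(trial)
        if value < best_value:
            best_value, best_params = value, trial.params
    return best_value, best_params

best_value, best_params = optimize(objective, n_trials=200)
```

In the real library, `optuna.create_study()` and `study.optimize(objective, n_trials=...)` play the role of `optimize` here, with far smarter sampling and pruning.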

1,448 citations


Proceedings ArticleDOI
25 Jul 2019
TL;DR: Optuna as mentioned in this paper is a next-generation hyperparameter optimization software with define-by-run (DBR) API that allows users to construct the parameter search space dynamically.
Abstract: The purpose of this study is to introduce new design-criteria for next-generation hyperparameter optimization software. The criteria we propose include (1) define-by-run API that allows users to construct the parameter search space dynamically, (2) efficient implementation of both searching and pruning strategies, and (3) easy-to-setup, versatile architecture that can be deployed for various purposes, ranging from scalable distributed computing to light-weight experiment conducted via interactive interface. In order to prove our point, we will introduce Optuna, an optimization software which is a culmination of our effort in the development of a next generation optimization software. As an optimization software designed with define-by-run principle, Optuna is particularly the first of its kind. We will present the design-techniques that became necessary in the development of the software that meets the above criteria, and demonstrate the power of our new design through experimental results and real world applications. Our software is available under the MIT license (https://github.com/pfnet/optuna/).

1,248 citations


Journal ArticleDOI
TL;DR: A high-level overview of the features of the MRtrix3 framework and general-purpose image processing applications provided with the software is provided.

1,228 citations


Posted ContentDOI
15 Feb 2019-bioRxiv
TL;DR: A high-level overview of the features of the MRtrix3 framework and general-purpose image processing applications provided with the software is provided.
Abstract: MRtrix3 is an open-source, cross-platform software package for medical image processing, analysis and visualization, with a particular emphasis on the investigation of the brain using diffusion MRI. It is implemented using a fast, modular and flexible general-purpose code framework for image data access and manipulation, enabling efficient development of new applications, whilst retaining high computational performance and a consistent command-line interface between applications. In this article, we provide a high-level overview of the features of the MRtrix3 framework and general-purpose image processing applications provided with the software.

728 citations


Journal ArticleDOI
TL;DR: A novel DT prototype is developed to analyze the communication requirements of a mission-critical application such as mobile-network-supported remote surgery, along with the cybersecurity technologies needed to develop the DT architecture.
Abstract: The concept of digital twin (DT) has emerged to enable the benefits of future paradigms such as the industrial Internet of Things and Industry 4.0. The idea is to bring every data source and control interface description related to a product or process available through a single interface, for auto-discovery and automated communication establishment. However, designing the architecture of a DT to serve every future application is an ambitious task. Therefore, prototyping systems for specific applications are required to design the DT incrementally. We developed a novel DT prototype to analyze the communication requirements of a mission-critical application such as mobile-network-supported remote surgery. Such operations require low latency and high levels of security and reliability, and therefore are a perfect subject for analyzing DT communication and cybersecurity. The system consisted of a robotic arm and an HTC Vive virtual reality (VR) system connected over a 4G mobile network. More than 70 test users were employed to assess the system. To address the cybersecurity of the system, we incorporated a network manipulation module to test the effect of network outages and attacks; we studied state-of-the-art practices and their utilization within DTs. The capability of the system for actual remote surgery is limited by the capabilities of the VR system and insufficient feedback from the robot. However, simulations and research on remote surgeries could be conducted with the system. As a result, we propose ideas for communication establishment and the necessary cybersecurity technologies that will help in developing the DT architecture. Furthermore, we concluded that developing the DT requires cross-disciplinary development in several different engineering fields. Each field makes use of its own tools and methods, which do not always fit together perfectly. This is a potentially major obstacle in the realization of Industry 4.0 and similar concepts.

185 citations


Journal ArticleDOI
TL;DR: A flexible triboelectric interaction patch with only four sensing electrodes detects various human-machine interactions; functional interfaces for writing-trace recognition, an identification-code system, and remote control are successfully realized, showing the device's high applicability in diversified human-machine interaction.

142 citations


Journal ArticleDOI
01 Jan 2019
TL;DR: This paper provides a comprehensive review of dominant feature extraction methods and classification algorithms in brain-computer interface for motor imagery tasks.
Abstract: Motor Imagery Brain Computer Interface (MI-BCI) provides a non-muscular channel of communication for those suffering from neuronal disorders. Designing an accurate and reliable MI-BCI system requires the extraction of informative and discriminative features. Common Spatial Pattern (CSP) has proven potent and is widely used in BCI for extracting features in motor imagery tasks. Classifiers translate these features into device commands. Many classification algorithms have been devised; among them, Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA) have been widely used. In recent studies, researchers have used deep neural networks for the classification of motor imagery tasks. This paper provides a comprehensive review of dominant feature extraction methods and classification algorithms in brain-computer interfaces for motor imagery tasks. The authors discuss existing challenges in the domain of motor imagery brain-computer interfaces and suggest possible research directions.
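As a concrete illustration of the CSP pipeline such reviews cover, the sketch below computes CSP spatial filters from two classes of EEG trials via a generalized eigenproblem and extracts the classic log-variance features. This is a minimal sketch assuming NumPy/SciPy; the array shapes and the `n_pairs` choice are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Common Spatial Patterns via a generalized eigenproblem.
    trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def avg_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
    # Eigenvectors of Ca relative to the composite covariance Ca + Cb;
    # the extreme eigenvalues mark the most class-discriminative directions.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    pick = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, pick]

def csp_features(trial, W):
    """Classic CSP feature: log of normalized variance after spatial filtering."""
    z = W.T @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())
```

The resulting feature vectors are what an SVM or LDA classifier (as surveyed above) would consume.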

123 citations


Journal ArticleDOI
TL;DR: A freely available web-based "point and click" interactive tool that allows users to input their DTA study data and conduct meta-analyses for DTA reviews, including sensitivity analyses performed in a timely manner.
Abstract: Recommended statistical methods for meta-analysis of diagnostic test accuracy studies require relatively complex bivariate statistical models which can be a barrier for non-statisticians. A further barrier exists in the software options available for fitting such models. Software accessible to non-statisticians, such as RevMan, does not support the fitting of bivariate models thus users must seek statistical support to use R, Stata or SAS. Recent advances in web technologies make analysis tool creation much simpler than previously. As well as accessibility, online tools can allow tailored interactivity not found in other packages allowing multiple perspectives of data to be displayed and information to be tailored to the user’s preference from a simple interface. We set out to: (i) Develop a freely available web-based “point and click” interactive tool which allows users to input their DTA study data and conduct meta-analyses for DTA reviews, including sensitivity analyses. (ii) Illustrate the features and benefits of the interactive application using an existing DTA meta-analysis for detecting dementia. To create our online freely available interactive application we used the existing R packages lme4 and Shiny to analyse the data and create an interactive user interface respectively. MetaDTA, an interactive online application was created for conducting meta-analysis of DTA studies. The user interface was designed to be easy to navigate having different tabs for different functions. Features include the ability for users to enter their own data, customise plots, incorporate quality assessment results and quickly conduct sensitivity analyses. All plots produced can be exported as either .png or .pdf files to be included in report documents. All tables can be exported as .csv files. 
MetaDTA is a freely available interactive online application which meta-analyses DTA studies, plots the summary ROC curve, incorporates quality assessment results, and allows sensitivity analyses to be conducted in a timely manner. Due to the rich feature set and user-friendliness of the software, it should appeal to a wide audience, including those without specialist statistical knowledge. We encourage others to create similar applications for specialist analysis methods to encourage broader uptake, which in turn could improve research quality.
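The raw ingredients such a tool works from are simple: each study contributes a 2x2 table, which is summarised as sensitivity and specificity and mapped to the logit scale before the bivariate model is fitted. The sketch below illustrates only that per-study step; the function name and the 0.5 continuity correction are illustrative conventions, not MetaDTA's actual code.

```python
import math

def study_summaries(tp, fp, fn, tn, correction=0.5):
    """Per-study sensitivity and specificity on the natural and logit
    scales: the inputs a bivariate DTA meta-analysis model works with.
    Adding 0.5 to every cell is a common (not universal) convention
    that keeps the logits finite when a cell is zero."""
    tp, fp, fn, tn = (x + correction for x in (tp, fp, fn, tn))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "logit_sens": math.log(sens / (1 - sens)),
        "logit_spec": math.log(spec / (1 - spec)),
    }
```

In MetaDTA itself, the bivariate random-effects model over these logit pairs is fitted with the R package lme4, behind the Shiny interface.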

121 citations


Journal ArticleDOI
TL;DR: The BISNC interface shows high scalability with a single electrode for detection and/or control of multiple directions, by detecting different output signal patterns, and has excellent reliability and robustness in actual usage scenarios.
Abstract: Human-machine interfaces are essential components between various human and machine interactions such as entertainment, robotics control, smart home, virtual/augmented reality, etc. Recently, various triboelectric-based interfaces have been developed toward flexible wearable and battery-less applications. However, most of them exhibit complicated structures and a large number of electrodes for multidirectional control. Herein, a bio-inspired spider-net-coding (BISNC) interface with great flexibility, scalability, and single-electrode output is proposed, through connecting information-coding electrodes into a single triboelectric electrode. Two types of coding designs are investigated, i.e., information coding by large/small electrode width (L/S coding) and information coding with/without electrode at a predefined position (0/1 coding). The BISNC interface shows high scalability with a single electrode for detection and/or control of multiple directions, by detecting different output signal patterns. In addition, it also has excellent reliability and robustness in actual usage scenarios, since recognition of signal patterns is independent of absolute amplitude and thereby not affected by sliding speed/force, humidity, etc. Based on the spider-net-coding concept, single-electrode interfaces for multidirectional 3D control, security code systems, and flexible wearable electronics are successfully developed, indicating the great potential of this technology in diversified applications such as human-machine interaction, virtual/augmented reality, security, robotics, Internet of Things, etc.

120 citations


Journal ArticleDOI
TL;DR: This glove-based interface provides a novel minimalist design concept distinct from the present rigid and bulky HMIs to open up a new direction of the further development of the HMIs with advantages of flexibility/wearability, low cost, easy operation, simplified signal processing, and low power consumption.

117 citations


Posted Content
TL;DR: This article systematically investigates brain signal types for BCI and related deep learning concepts for brain signal analysis, and presents a comprehensive survey of deep learning techniques used forBCI.
Abstract: Brain-Computer Interface (BCI) bridges the human's neural world and the outer physical world by decoding individuals' brain signals into commands recognizable by computer devices. Deep learning has lifted the performance of brain-computer interface systems significantly in recent years. In this article, we systematically investigate brain signal types for BCI and related deep learning concepts for brain signal analysis. We then present a comprehensive survey of deep learning techniques used for BCI, summarizing over 230 contributions, most of which were published in the past five years. Finally, we discuss the applied areas, open challenges, and future directions for deep learning-based BCI.

Journal ArticleDOI
TL;DR: The results show that the proposed training system, based on process mining and virtual reality, is competitive against conventional alternatives and user evaluations are better in terms of mental demand, perception, learning, results, and performance.
Abstract: Industry 4.0 aims at integrating machines and operators through network connections and information management. It proposes the use of a set of technologies in industry, such as data analysis, Internet of Things, cloud computing, cooperative robots, and immersive technologies. This paper presents a training system for industrial operators in assembly tasks, which takes advantage of tools such as virtual reality and process mining. First, expert workers use an immersive interface to perform assemblies according to their experience. Then, process mining algorithms are applied to obtain assembly models from event logs. Finally, trainee workers use an improved immersive interface with hints to learn the assemblies that the expert workers introduced in the system. A toy example has been developed with building blocks and tests have been performed with a set of volunteers. The results show that the proposed training system, based on process mining and virtual reality, is competitive against conventional alternatives. Furthermore, user evaluations are better in terms of mental demand, perception, learning, results, and performance.
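At its simplest, process mining derives a directly-follows graph from event logs: which assembly step follows which, and how often. The paper applies more sophisticated process-mining algorithms, but the minimal sketch below (with invented activity names) shows the kind of model that is extracted from the expert workers' logs.

```python
from collections import Counter

def directly_follows(event_log):
    """Directly-follows graph, the simplest process model one can mine.
    event_log is a list of traces, each a list of activity names;
    the result counts how often activity a is immediately followed by b."""
    edges = Counter()
    for trace in event_log:
        for a, b in zip(trace, trace[1:]):
            edges[(a, b)] += 1
    return edges
```

The edge counts can then drive the hints shown to trainee workers in the immersive interface, e.g. suggesting the most frequent next step.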

Journal ArticleDOI
TL;DR: A vibration sensor based human-machine interface with an unlimited sensing area on a ubiquitous surface by detecting and positioning an external vibration source by mimicking the ampulla in the lateral line of a fish is presented.

Journal ArticleDOI
TL;DR: A reservoir computer that recognizes different forms of human action from video streams using photonic neural networks, comparable to state-of-the-art digital implementations, while promising a higher processing speed in comparison to the existing hardware approaches.
Abstract: The recognition of human actions in video streams is a challenging task in computer vision, with cardinal applications in e.g. brain-computer interface and surveillance. Deep learning has shown remarkable results recently, but can be found hard to use in practice, as its training requires large datasets and special purpose, energy-consuming hardware. In this work, we propose a photonic hardware approach. Our experimental setup comprises off-the-shelf components and implements an easy-to-train recurrent neural network with 16,384 nodes, scalable up to hundreds of thousands of nodes. The system, based on the reservoir computing paradigm, is trained to recognise six human actions from the KTH video database using either raw frames as inputs, or a set of features extracted with the histograms of oriented gradients algorithm. We report a classification accuracy of 91.3%, comparable to state-of-the-art digital implementations, while promising a higher processing speed in comparison to the existing hardware approaches. Because of the massively parallel processing capabilities offered by photonic architectures, we anticipate that this work will pave the way towards simply reconfigurable and energy-efficient solutions for real-time video processing.
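The reservoir-computing paradigm the authors implement photonically can be sketched in a few lines of NumPy: a fixed random recurrent network is driven by the input, and only a linear ridge-regression readout is trained. The sketch below is a generic echo-state network, not the photonic setup; the sizes, input scaling, and spectral radius are illustrative choices.

```python
import numpy as np

def run_reservoir(inputs, n_nodes=200, spectral_radius=0.9, seed=0):
    """Drive a fixed random recurrent network with an input sequence and
    collect its states. inputs: array of shape (T, n_in)."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, (n_nodes, inputs.shape[1]))
    W = rng.normal(size=(n_nodes, n_nodes))
    # Echo-state scaling: keep the recurrent dynamics contractive.
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    x = np.zeros(n_nodes)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Ridge-regression readout: the only trained part of the system."""
    X = states
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ targets)
```

The appeal for photonic hardware is that the untrained reservoir can be any fast physical dynamical system; training reduces to one linear solve.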

Journal ArticleDOI
TL;DR: This paper attempts to utilize channel state information (CSI) derived from wireless signals to realize the device-free air-write recognition called Wri-Fi, and uses the Hidden Markov model for character modeling and classification.
Abstract: Recently, handwriting recognition approaches have been widely applied to Human-Computer Interface (HCI) applications. The emergence of novel mobile terminals calls for a more user-friendly interface mode. Previous air-writing recognition approaches have relied on cameras and sensors. However, vision-based approaches are susceptible to lighting conditions, and sensor-based methods have disadvantages in deployment and high cost. The latest research has demonstrated that pervasive wireless signals can be used to identify different gestures. In this paper, we attempt to utilize channel state information (CSI) derived from wireless signals to realize device-free air-writing recognition, called Wri-Fi. Compared to gesture recognition, the increased diversity and complexity of the characters of the alphabet make this challenging. Principal Component Analysis (PCA) is used for effective denoising, and an energy indicator derived from the Fast Fourier Transform (FFT) is used to detect actions continuously. The unique CSI waveforms caused by the distinct writing patterns of the 26 letters serve as the feature space. Finally, a Hidden Markov Model (HMM) is used for character modeling and classification. We conduct experiments in our laboratory and obtain average Wri-Fi accuracies of 86.75 and 88.74 percent in two writing areas, respectively.
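The preprocessing front end described here, PCA denoising followed by FFT-based activity detection, can be sketched generically. The sampling rate, frequency band, and component count below are illustrative placeholders, not Wri-Fi's actual parameters, and the HMM stage is omitted.

```python
import numpy as np

def pca_denoise(csi, n_components=3):
    """Project a CSI stream (T samples x n_subcarriers) onto its top
    principal components, discarding low-variance noise directions."""
    centered = csi - csi.mean(axis=0)
    U, s, _ = np.linalg.svd(centered, full_matrices=False)
    return U[:, :n_components] * s[:n_components]  # component scores

def motion_energy(component, fs=100.0, band=(2.0, 30.0)):
    """Spectral energy in a motion band; thresholding this value acts as
    the continuous writing-activity detector."""
    spec = np.abs(np.fft.rfft(component)) ** 2
    freqs = np.fft.rfftfreq(len(component), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(spec[mask].sum())
```

Segments whose energy exceeds a threshold would then be passed to the per-letter HMMs for classification.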

Journal ArticleDOI
TL;DR: An architecture for combining the AR interface with IoT for an improved shopping experience that is distributed and therefore scalable such that any IoT product can be accessed on the spot locally without any server restriction and provide intuitive AR-based visualization and interaction for a flexible product trial in the showroom.
Abstract: The current bare Internet of Things (IoT) infrastructure has recently been extended to include smarter and more effective user interactions. Individual or meaningful sets and groups of IoT objects can be imbued with data and/or content in a distributed manner and efficiently utilized by the client. This distribution makes it possible to scale and customize interaction techniques such as augmented reality (AR). This paper proposes an architecture for combining the AR interface with IoT for an improved shopping experience. The proposed architecture is distributed and therefore scalable such that any IoT product can be accessed on the spot locally without any server restriction and provide intuitive AR-based visualization and interaction for a flexible product trial in the showroom. We identify three key architectural components required to support such a seamless and scalable AR service and experience for IoT-ready products: (1) object-centric data management and visualization, (2) mechanism for accessing, controlling, and interacting with the object, and (3) content exchange interoperability. We illustrate the possible scenarios of shopping in the future with the interactive and smart digital information combined with the analog, that is, the real world. A proof-of-concept implementation is presented as applied to such a “digital–analog” style of shopping. In addition, its usability is experimentally assessed as compared to using the conventional control interface. Our experimental study shows that the subjects clearly experience higher usability and greater satisfaction with AR-interactive shopping, thereby demonstrating the potential of the proposed approach.

Journal ArticleDOI
15 Jul 2019
TL;DR: A sequence of challenging actions, i.e., navigation, door opening, and wall drilling, has been considered in the experimental setup to evaluate the performance of the proposed teleoperation interface in the execution of remote tasks with dynamic uncertainties.
Abstract: This letter presents a novel teleoperation interface that enables remote loco-manipulation control of a MObile Collaborative robotic Assistant (MOCA). MOCA is a new research platform developed at the Istituto Italiano di Tecnologia (IIT), which is composed of a lightweight manipulator arm, a Pisa/IIT SoftHand, and a mobile platform driven by four omni-directional wheels. A whole-body impedance controller is consequently developed to ensure accurate tracking of the impedance and position trajectories at MOCA end-effector by considering the causal interactions in such a dynamic system. The proposed teleoperation interface provides the user with two control modes: locomotion and manipulation. The locomotion mode receives inputs from a personalized human center-of-pressure model, which enables real-time navigation of the MOCA mobile base in the environment. The manipulation mode receives inputs from a tele-impedance interface, which tracks human arm endpoint stiffness and trajectory profiles in real time and replicates them using the MOCA's whole-body impedance controller. To evaluate the performance of the proposed teleoperation interface in the execution of remote tasks with dynamic uncertainties, a sequence of challenging actions, i.e., navigation, door opening, and wall drilling, has been considered in the experimental setup.

Posted Content
TL;DR: SGX-LKL, a system for running Linux binaries inside of Intel SGX enclaves that only exposes a minimal, protected and oblivious host interface, is described and it is shown that SGX- LKL protects TensorFlow training with a 21% overhead.
Abstract: Hardware support for trusted execution in modern CPUs enables tenants to shield their data processing workloads in otherwise untrusted cloud environments. Runtime systems for the trusted execution must rely on an interface to the untrusted host OS to use external resources such as storage, network, and other functions. Attackers may exploit this interface to leak data or corrupt the computation. We describe SGX-LKL, a system for running Linux binaries inside of Intel SGX enclaves that only exposes a minimal, protected and oblivious host interface: the interface is (i) minimal because SGX-LKL uses a complete library OS inside the enclave, including file system and network stacks, which requires a host interface with only 7 calls; (ii) protected because SGX-LKL transparently encrypts and integrity-protects all data passed via low-level I/O operations; and (iii) oblivious because SGX-LKL performs host operations independently of the application workload. For oblivious disk I/O, SGX-LKL uses an encrypted ext4 file system with shuffled disk blocks. We show that SGX-LKL protects TensorFlow training with a 21% overhead.

Proceedings ArticleDOI
17 Oct 2019
TL;DR: ShapeBots as discussed by the authors is a concept prototype of shape-changing swarm robots that can change its shape by leveraging small linear actuators that are thin (2.5 cm) and highly extendable (up to 20 cm).
Abstract: We introduce shape-changing swarm robots. A swarm of self-transformable robots can both individually and collectively change their configuration to display information, actuate objects, act as tangible controllers, visualize data, and provide physical affordances. ShapeBots is a concept prototype of shape-changing swarm robots. Each robot can change its shape by leveraging small linear actuators that are thin (2.5 cm) and highly extendable (up to 20 cm) in both horizontal and vertical directions. The modular design of each actuator enables various shapes and geometries of self-transformation. We illustrate potential application scenarios and discuss how this type of interface opens up possibilities for the future of ubiquitous and distributed shape-changing interfaces.

Proceedings ArticleDOI
01 Jul 2019
TL;DR: A Minecraft-based collaborative building task in which one player is shown a target structure and needs to instruct the other player to build this structure, and the subtask of Architect utterance generation is considered, and how challenging it is is considered.
Abstract: We wish to develop interactive agents that can communicate with humans to collaboratively solve tasks in grounded scenarios. Since computer games allow us to simulate such tasks without the need for physical robots, we define a Minecraft-based collaborative building task in which one player (A, the Architect) is shown a target structure and needs to instruct the other player (B, the Builder) to build this structure. Both players interact via a chat interface. A can observe B but cannot place blocks. We present the Minecraft Dialogue Corpus, a collection of 509 conversations and game logs. As a first step towards our goal of developing fully interactive agents for this task, we consider the subtask of Architect utterance generation, and show how challenging it is.

Journal ArticleDOI
TL;DR: A novel concept is presented for enhancing brain-computer interface systems by adopting fuzzy integrals, especially in the fusion stage for classifying brain-computer interface commands.
Abstract: Brain-computer interface technologies, such as steady-state visually evoked potential, P300, and motor imagery are methods of communication between the human brain and the external devices. Motor imagery-based brain-computer interfaces are popular because they avoid unnecessary external stimuli. Although feature extraction methods have been illustrated in several machine intelligent systems in motor imagery-based brain-computer interface studies, the performance remains unsatisfactory. There is increasing interest in the use of the fuzzy integrals, the Choquet and Sugeno integrals, that are appropriate for use in applications in which fusion of data must consider possible data interactions. To enhance the classification accuracy of brain-computer interfaces, we adopted fuzzy integrals, after employing the classification method of traditional brain-computer interfaces, to consider possible links between the data. Subsequently, we proposed a novel classification framework called the multimodal fuzzy fusion-based brain-computer interface system. Ten volunteers performed a motor imagery-based brain-computer interface experiment, and we acquired electroencephalography signals simultaneously. The multimodal fuzzy fusion-based brain-computer interface system enhanced performance compared with traditional brain-computer interface systems. Furthermore, when using the motor imagery-relevant electroencephalography frequency alpha and beta bands for the input features, the system achieved the highest accuracy, up to 78.81% and 78.45% with the Choquet and Sugeno integrals, respectively. Herein, we present a novel concept for enhancing brain-computer interface systems that adopts fuzzy integrals, especially in the fusion for classifying brain-computer interface commands.
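To make the fuzzy-integral fusion step concrete, the discrete Choquet integral aggregates classifier scores with respect to a fuzzy measure whose non-additive weights can encode interactions between classifiers. The sketch below is a minimal generic implementation; the dict-of-frozensets representation of the measure is an illustrative choice, not the paper's. For an additive measure the integral reduces to a plain weighted average.

```python
def choquet_integral(values, measure):
    """Discrete Choquet integral of non-negative scores with respect to a
    fuzzy measure, given as {frozenset_of_indices: weight} with the full
    index set mapped to 1.0. Sorting ascending, each increment is weighted
    by the measure of the set of sources still at or above that level."""
    order = sorted(range(len(values)), key=lambda i: values[i])  # ascending
    remaining = set(range(len(values)))
    total, prev = 0.0, 0.0
    for i in order:
        total += (values[i] - prev) * measure[frozenset(remaining)]
        prev = values[i]
        remaining.remove(i)
    return total
```

With two classifier scores [0.2, 0.8], an additive measure giving each classifier weight 0.5 yields exactly the mean 0.5, while a measure that rewards agreement (singletons weighted 0.2, the pair 1.0) pulls the fused score toward the lower value.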

Journal ArticleDOI
TL;DR: A transparent and flexible 3D touch system is demonstrated to draw a complex three-dimensional structure by utilizing the pressure as a third coordinate and rigorous theoretical analysis is carried out to achieve the target pressure performances with successful 3D data acquisition in wireless and wearable conditions.
Abstract: Pressure-sensitive touch panels can measure pressure and location (3D) information simultaneously and provide an intuitive and natural method for expressing one's intention with a higher level of controllability and interactivity. However, they have generally been realized by a simple combination of a pressure and a location sensor or by a stylus-based interface, which limits their implementation in a wide spectrum of applications. Here, we report a first demonstration (to our knowledge) of a transparent and flexible 3D touch sensor which can sense the 3D information in a single device with the assistance of functionally designed self-generated multiscale structures. The single 3D touch system is demonstrated to draw a complex three-dimensional structure by utilizing the pressure as a third coordinate. Furthermore, rigorous theoretical analysis is carried out to achieve the target pressure performances with successful 3D data acquisition in wireless and wearable conditions, which, in turn, paves the way for future wearable devices. Touch technology holds potential for the development of smartphones and touchscreens, yet conventional devices are usually built on separate pressure and location sensing units. Kim et al. show a flexible and transparent touch sensor capable of mapping position and pressure at the same time.

Patent
30 Jan 2019
TL;DR: In this article, a transmission device performs autonomous radio resource allocation for transmitting vehicular data via a sidelink interface to one or more receiving devices, based on the received priority indication and the quality of service parameters.
Abstract: The present disclosure relates to a transmitting device for transmitting vehicular data via a sidelink interface to one or more receiving devices. The transmitting device performs autonomous radio resource allocation for transmitting the vehicular data via the sidelink interface. An application layer generates the vehicular data and forwards the vehicular data together with a priority indication and one or more quality of service parameters to a transmission layer responsible for transmission of the vehicular data via the sidelink interface. The transmission layer performs autonomous radio resource allocation based on the received priority indication and the one or more quality of service parameters. The transmission layer transmits the vehicular data via the sidelink interface to the one or more receiving devices according to the performed autonomous radio resource allocation.

Journal ArticleDOI
19 Aug 2019-ACS Nano
TL;DR: This work presents a human-microrobot user interface that performs direct and agile recognition of user commands and signal conversion for driving the microrobots; such an interface is expected to serve as a compelling and versatile platform for myriad potential scenarios in transportation units of microrobots, single-cell analysis instruments, lab-on-chip systems, microfactories, etc.
Abstract: Micro/nanorobotic systems capable of targeted transport and release hold considerable promise for drug delivery, cellular surgery, biosensing, nanoassembly, etc. However, on-demand precise control of micro/nanorobot movement remains a major challenge. In particular, a practical interface to realize instant and customized interactions between humans and micro/nanorobots, which is essential for developing next-generation intelligent micro/nanorobots, has seldom been explored. Here, we present a human-microrobot user interface that performs direct and agile recognition of user commands and signal conversion for driving the microrobot. The microrobot platform is built on locally enhanced acoustic streaming, which can precisely transport microparticles and cells along a given pathway, while the interface is enabled by tuning the actuation frequency and time with different instructions and inputs. Our numerical simulations and experimental demonstrations illustrate that microparticles can be readily transported along the path by the acoustic robotic system, owing to the vibration-induced locally enhanced acoustic streaming and the resultant propulsion force. The acoustic robotic platform allows large-scale parallel transportation of microparticles and cells along given paths. The human-microrobot interface enables the micromanipulator to respond promptly to users' commands, input by typing or music playing, for accurate transport. For example, the tone of a playing melody is used to steer a cancer cell to a targeted position. The interface offers several attractive capabilities, including tunable speed and orientation, quick response, considerable delivery capacity, high precision, and favorable controllability. We expect that such an interface will serve as a compelling and versatile platform for myriad potential scenarios in transportation units of microrobots, single-cell analysis instruments, lab-on-chip systems, microfactories, etc.
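The abstract describes mapping musical tones to actuation commands (frequency and duration). As an illustrative sketch of such a mapping, not the authors' implementation, the following converts a melody given as hypothetical (MIDI note, beats) pairs into (drive frequency in Hz, pulse duration in ms) commands using the standard equal-temperament relation f = 440 · 2^((n − 69)/12):

```python
import math

def midi_to_freq(note: int) -> float:
    """Equal-temperament pitch: A4 (MIDI note 69) = 440 Hz."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def melody_to_commands(melody, unit_ms=250):
    """Map (MIDI note, beats) pairs to (drive frequency Hz, duration ms) commands."""
    return [(round(midi_to_freq(n), 1), beats * unit_ms) for n, beats in melody]

# C4-E4-G4 arpeggio, one beat each
commands = melody_to_commands([(60, 1), (64, 1), (67, 1)])
print(commands)  # [(261.6, 250), (329.6, 250), (392.0, 250)]
```

The `unit_ms` parameter and the command tuple layout are assumptions for illustration; the actual note-to-frequency-to-actuation mapping in the paper's platform is not specified in the abstract.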

Proceedings ArticleDOI
TL;DR: This work introduces shape-changing swarm robots, a swarm of self-transformable robots that can both individually and collectively change their configuration to display information, actuate objects, act as tangible controllers, visualize data, and provide physical affordances.
Abstract: We introduce shape-changing swarm robots. A swarm of self-transformable robots can both individually and collectively change their configuration to display information, actuate objects, act as tangible controllers, visualize data, and provide physical affordances. ShapeBots is a concept prototype of shape-changing swarm robots. Each robot can change its shape by leveraging small linear actuators that are thin (2.5 cm) and highly extendable (up to 20 cm) in both horizontal and vertical directions. The modular design of each actuator enables various shapes and geometries of self-transformation. We illustrate potential application scenarios and discuss how this type of interface opens up possibilities for the future of ubiquitous and distributed shape-changing interfaces.

Journal ArticleDOI
TL;DR: In this paper, a reinforcement learning-based selective attention mechanism (SAM) was proposed to discover the distinctive features from the input brain signals and a modified long short-term memory (LSTM) was used to distinguish the interdimensional information forwarded from the SAM.
Abstract: A brain–computer interface (BCI) acquires brain signals, analyzes them, and translates them into commands that are relayed to actuation devices for carrying out desired actions. With the widespread connectivity of everyday devices realized by the advent of the Internet of Things (IoT), BCI can empower individuals to control objects such as smart home appliances or assistive robots directly via their thoughts. However, realization of this vision faces a number of challenges, the most important being the issue of accurately interpreting the intent of the individual from raw brain signals that are often of low fidelity and subject to noise. Moreover, preprocessing brain signals and the subsequent feature engineering are both time-consuming and highly reliant on human domain expertise. To address the aforementioned issues, in this paper, we propose a unified deep learning-based framework that enables effective human-thing cognitive interactivity in order to bridge individuals and IoT objects. We design a reinforcement learning-based selective attention mechanism (SAM) to discover the distinctive features from the input brain signals. In addition, we propose a modified long short-term memory to distinguish the interdimensional information forwarded from the SAM. To evaluate the efficiency of the proposed framework, we conduct extensive real-world experiments and demonstrate that our model outperforms a number of competitive state-of-the-art baselines. Two practical real-time human-thing cognitive interaction applications are presented to validate the feasibility of our approach.
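The paper's selective attention mechanism is trained with reinforcement learning; as a much simpler stand-in illustrating only the core idea of weighting input channels by learned relevance scores, here is a softmax attention over multi-channel samples (the channel data and scores are hypothetical, not from the paper):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(channels, scores):
    """Collapse multi-channel samples into one attended signal,
    weighting each channel by its softmax-normalized relevance score."""
    w = softmax(scores)
    n = len(channels[0])
    return [sum(w[c] * channels[c][t] for c in range(len(channels)))
            for t in range(n)]

# Two channels of equal length; the second gets a much higher relevance score,
# so the attended output is pulled strongly toward it.
sig = attend([[1.0, 1.0, 1.0], [5.0, 5.0, 5.0]], scores=[0.0, 4.0])
print(sig)
```

In the paper the scores would come from an RL policy rather than being fixed; this sketch only shows how such scores, once learned, reweight the raw channels before the downstream LSTM.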

Journal ArticleDOI
TL;DR: To properly assess the hygrothermal properties of walls located in historic buildings, this study discloses the development of a remote sensing technology compatible with an in-situ measurement implemented in Palazzo Tassoni (Italy).
Abstract: To properly assess the hygrothermal properties of walls located in historic buildings, this study discloses the development of a remote-sensing technology compatible with an in-situ measurement implemented in Palazzo Tassoni (Italy). As required by the international recommendations adapted to cultural heritage (CH), this monitoring system balances CH conservation, performance aspects, and economic costs using an integrated multidisciplinary approach. The electronics for measuring environmental parameters comprise the sensors, a data acquisition system, and a data storage and communication system. The data acquisition system, equipped with a standard Modbus RTU interface, is designed to run standalone and is based on two cloned single-board PCs to reduce the possibility of data loss. To reduce costs, Raspberry Pi single-board PCs were chosen. They run C/C++ software, based on a standard Modbus library, that implements a multi-client TCP/IP server to allow communication with other devices. The storage and communication system is based on an industrial PC, which communicates with the sensor measurement system through a Modbus TCP/IP bridge. The PC runs LabVIEW software that stores data in a local database and provides a graphical user interface for viewing all acquired data. Herein, some sensing options and measurement approaches are described, unveiling different possible ways of enhancing the retrofit of CH with adapted technology.
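The two cloned single-board PCs exist to reduce the possibility of data loss. One way such redundancy pays off, sketched here with hypothetical timestamp-keyed readings rather than the system's actual storage format, is merging the two nodes' logs so each fills the other's gaps:

```python
def merge_redundant_logs(primary, backup):
    """Merge timestamp->reading dicts from two cloned acquisition nodes.

    The primary node's reading wins when both nodes logged a timestamp;
    the backup fills timestamps the primary dropped. Result is sorted
    by timestamp."""
    merged = dict(backup)
    merged.update(primary)       # primary overwrites duplicates
    return dict(sorted(merged.items()))

node_a = {0: 21.3, 10: 21.4, 30: 21.6}   # primary lost the t=20 sample
node_b = {0: 21.3, 10: 21.4, 20: 21.5}   # backup lost the t=30 sample
print(merge_redundant_logs(node_a, node_b))
# {0: 21.3, 10: 21.4, 20: 21.5, 30: 21.6}
```

The real system presumably reconciles at the database level; this is only a minimal illustration of why cloned acquisition nodes recover samples that either node alone would miss.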

Journal ArticleDOI
TL;DR: A distributed parallel approach using TensorFlow to accelerate geostatistical seismic inversion; the results indicate that it is feasible for practical application and that computation time can be greatly reduced by using multiple GPUs.
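The paper parallelizes the inversion with TensorFlow across multiple GPUs; as a dependency-free analogue of that data-parallel pattern, the sketch below maps independent seismic traces across a thread pool. The per-trace "inversion" is a deliberate placeholder (a running sum), not the paper's geostatistical algorithm:

```python
from concurrent.futures import ThreadPoolExecutor

def invert_trace(trace):
    """Placeholder per-trace 'inversion': cumulative sum of samples."""
    out, acc = [], 0.0
    for v in trace:
        acc += v
        out.append(acc)
    return out

def parallel_invert(traces, workers=4):
    """Data-parallel map over independent traces, analogous to
    assigning chunks of the trace volume to separate devices."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(invert_trace, traces))

results = parallel_invert([[1.0, 2.0], [3.0, 4.0]])
print(results)  # [[1.0, 3.0], [3.0, 7.0]]
```

Because traces are independent, `pool.map` preserves input order and the result is identical to a sequential loop; the same property is what lets the paper shard the workload across GPUs.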

Journal ArticleDOI
TL;DR: The proposed identification model of a coal-rock interface provides the theoretical foundation and a technical basis for realizing automatic, intelligent mining.

Posted ContentDOI
11 Mar 2019-bioRxiv
TL;DR: A new strategy to interface silicon-based chips with three-dimensional microwire arrays is presented, providing the link between rapidly developing electronics and high-density neural interfaces, and has excellent recording performance.
Abstract: Multi-channel electrical recording of neural activity in the brain is an increasingly powerful method, revealing new aspects of neural communication, computation, and prosthetics. However, while planar silicon-based CMOS devices in conventional electronics scale rapidly, neural interface devices have not kept pace. Here, we present a new strategy to interface silicon-based chips with three-dimensional microwire arrays, providing the link between rapidly developing electronics and high-density neural interfaces. The system consists of a bundle of microwires mated to large-scale microelectrode arrays, such as camera chips. This system has excellent recording performance, demonstrated via single-unit and local-field-potential recordings in isolated retina and in the motor cortex or striatum of awake moving mice. The modular design enables a variety of microwire types and sizes to be integrated with different types of pixel arrays, connecting the rapid progress of commercial multiplexing, digitisation, and data-acquisition hardware with a three-dimensional neural interface.
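Single-unit recordings such as those described are commonly extracted by threshold crossing against a robust noise estimate. The following sketch is a standard convention in the spike-sorting literature, not necessarily the authors' pipeline: it flags samples whose deviation from the median exceeds k·σ, with σ estimated as MAD/0.6745:

```python
import statistics

def detect_spikes(trace, k=5.0):
    """Return sample indices whose deviation from the median exceeds
    k * sigma, with sigma estimated robustly from the median absolute
    deviation (common convention: sigma ~= MAD / 0.6745)."""
    med = statistics.median(trace)
    mad = statistics.median(abs(v - med) for v in trace)
    sigma = mad / 0.6745
    return [i for i, v in enumerate(trace) if abs(v - med) > k * sigma]

# Quiet baseline with two large deflections at indices 3 and 6
trace = [0.1, -0.2, 0.0, 6.0, 0.1, -0.1, -7.0, 0.2]
print(detect_spikes(trace))  # [3, 6]
```

Using the MAD rather than the standard deviation keeps the threshold stable even when the trace contains large spikes, which would otherwise inflate a naive noise estimate.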