
Showing papers on "Interface (computing) published in 2017"


Journal ArticleDOI
21 Feb 2017-eLife
TL;DR: A high-performance intracortical BCI for communication is reported, which was tested by three clinical trial participants with paralysis and demonstrates the potential utility of iBCIs as powerful assistive communication devices for people with limited motor function.
Abstract: People with various forms of paralysis not only have difficulties getting around, but are also less able to use many communication technologies, including computers. In particular, strokes, neurological injuries, or diseases such as ALS can lead to severe paralysis and make it very difficult to communicate. In rare instances, these disorders can result in a condition called locked-in syndrome, in which the affected person is aware but completely unable to move or speak. Several researchers are looking to help people with severe paralysis to communicate again, via a system called a brain-computer interface. These devices record activity in the brain either from the surface of the scalp or directly using a sensor that is surgically implanted. Computers then interpret this activity via algorithms to generate signals that can control various tools, including robotic limbs, powered wheelchairs or computer cursors. Such tools would be invaluable for many people with paralysis. Pandarinath, Nuyujukian et al. set out to study the performance of an implanted brain-computer interface in three people with varying forms of paralysis and focused specifically on a typing task. Each participant used a brain-computer interface known as “BrainGate” to move a cursor on a computer screen displaying the letters of the alphabet. The participants were asked to “point and click” on letters – similar to using a normal computer mouse – to type specific sentences, and their typing rate in words per minute was measured. With recently developed computer algorithms, the participants typed faster using the brain-computer interface than anyone with paralysis has ever managed before. Indeed, the highest performing participant could, on average, type nearly 8 words per minute. The next steps are to adapt the system so that brain-computer interfaces can control commercial computers, phones and tablets. These devices are widely available, and would allow paralyzed users to take advantage of a range of applications that can be easily downloaded and customized. This development might enable brain-computer interfaces to not only allow people with neurological disorders to communicate, but also assist other people with paralysis in a number of ways.

355 citations


Journal ArticleDOI
TL;DR: This work proposes a smartphone-based mobile gateway acting as a flexible and transparent interface between different IoT devices and the Internet, which supports opportunistic IoT device discovery, control, and management coupled with data processing, collection, and diffusion functionalities.

257 citations


Journal ArticleDOI
TL;DR: Focus provides the functionality required to remotely monitor the progress of data collection and data processing, which is essential now that automation in cryo-EM allows a steady flow of images of single particles, two-dimensional crystals, or electron tomography data to be recorded in overnight sessions.

187 citations


Journal ArticleDOI
TL;DR: This work studies optocomputational display modes and shows their potential to improve experiences for users across ages and with common refractive errors, and lays the foundations of next generation computational near-eye displays that can be used by everyone.
Abstract: From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.

184 citations


Journal ArticleDOI
01 Apr 2017
TL;DR: A practical state-of-the-art method to develop a machine-learning-based humanoid robot that can work as a production-line worker is proposed; the approach exhibits the following characteristics: task performing capability, task reiteration ability, generalizability, and easy applicability.
Abstract: We propose a practical state-of-the-art method to develop a machine-learning-based humanoid robot that can work as a production line worker. The proposed approach provides an intuitive way to collect data and exhibits the following characteristics: task performing capability, task reiteration ability, generalizability, and easy applicability. The proposed approach utilizes a real-time user interface with a monitor and provides a first-person perspective using a head-mounted display. Through this interface, teleoperation is used for collecting task operating data, especially for tasks that are difficult to address with a conventional method. A two-phase deep learning model is also utilized in the proposed approach. A deep convolutional autoencoder extracts image features and reconstructs images, and a fully connected deep time delay neural network learns the dynamics of a robot task process from the extracted image features and motion angle signals. The “Nextage Open” humanoid robot is used as an experimental platform to evaluate the proposed model. The object folding task was evaluated using 35 trained and 5 untrained sensorimotor sequences for testing. Testing the trained model with online generation demonstrates a 77.8% success rate for the object folding task.
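
To make the two-phase pipeline concrete, here is a minimal PyTorch sketch of a convolutional autoencoder feeding a fully connected time-delay network; the layer sizes, feature dimension, and joint count are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch (PyTorch) of the two-phase model described above, with
# illustrative layer sizes -- not the paper's exact architecture.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Phase 1: compress camera images into a low-dimensional feature vector."""
    def __init__(self, feat_dim=20):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, feat_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, img):
        feat = self.encoder(img)
        return self.decoder(feat), feat

class TimeDelayNet(nn.Module):
    """Phase 2: predict the next joint angles from a short window of
    past (image feature, joint angle) pairs."""
    def __init__(self, feat_dim=20, n_joints=15, delay=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear((feat_dim + n_joints) * delay, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_joints),
        )

    def forward(self, window):            # window: (batch, delay, feat+joints)
        return self.net(window.flatten(1))

# Example shapes: a batch of 8 RGB images (64x64) and a 5-step history.
imgs = torch.rand(8, 3, 64, 64)
recon, feats = ConvAutoencoder()(imgs)
next_angles = TimeDelayNet()(torch.rand(8, 5, 20 + 15))
print(recon.shape, feats.shape, next_angles.shape)
```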

183 citations


Patent
19 Dec 2017
TL;DR: In this paper, a surgical tool configured to be interchangeably attached to a first controller interface and a second controller interface that is different from the first controller interface is described and discussed.
Abstract: A surgical tool configured to be interchangeably attached to a first controller interface and a second controller interface that is different from the first controller interface is disclosed.

142 citations


Journal ArticleDOI
TL;DR: A suitable piece of software is presented to connect Abaqus, a sophisticated finite element package, with Matlab, the most comprehensive program for mathematical analysis, and its potential to create and train neural networks is used to identify damage parameters through a hybrid experimental–numerical scheme.

137 citations


Proceedings ArticleDOI
24 Jun 2017
TL;DR: A general architecture called stream-dataflow, which can more efficiently express programs with broad common properties, is defined; its dataflow component enables high concurrency, and its stream component enables communication and coordination at very low power and area overhead.
Abstract: Demand for low-power data processing hardware continues to rise inexorably. Existing programmable and "general purpose" solutions (e.g., SIMD, GPGPUs) are insufficient, as evidenced by the order-of-magnitude improvements and industry adoption of application and domain-specific accelerators in important areas like machine learning, computer vision and big data. The stark tradeoffs between efficiency and generality at these two extremes pose a difficult question: how could domain-specific hardware efficiency be achieved without domain-specific hardware solutions? In this work, we rely on the insight that "acceleratable" algorithms have broad common properties: high computational intensity with long phases, simple control patterns and dependences, and simple streaming memory access and reuse patterns. We define a general architecture (a hardware-software interface), called stream-dataflow, which can more efficiently express programs with these properties. The dataflow component of this architecture enables high concurrency, and the stream component enables communication and coordination at very low power and area overhead. This paper explores the hardware and software implications, describes its detailed microarchitecture, and evaluates an implementation. Compared to a state-of-the-art domain specific accelerator (DianNao), and fixed-function accelerators for MachSuite, Softbrain can match their performance with only 2x power overhead on average.
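
As a rough software illustration of the stream-dataflow idea (this is not the Softbrain ISA or microarchitecture), the sketch below models a computation as strided address streams feeding a tiny dataflow graph whose nodes fire as their operands arrive.

```python
# Minimal software model of the stream-dataflow idea (illustrative only):
# address streams with (start, stride, length) patterns feed a small dataflow
# graph, and the result is produced without explicit address computation in
# the "compute" part.
import numpy as np

def read_stream(mem, start, stride, length):
    """Yield elements of a strided access pattern -- the 'stream' half."""
    for i in range(length):
        yield mem[start + i * stride]

def dataflow_dot(a_stream, b_stream):
    """Tiny dataflow graph: a multiply node feeding an accumulate node.
    Each 'node' fires as soon as both of its operands arrive."""
    acc = 0.0
    for a, b in zip(a_stream, b_stream):   # operands arrive element by element
        prod = a * b                        # multiply node fires
        acc += prod                         # accumulate node fires
    return acc

mem = np.arange(32, dtype=np.float64)
# Dot product of mem[0:8] with mem[16:32:2], expressed as two streams.
result = dataflow_dot(read_stream(mem, 0, 1, 8), read_stream(mem, 16, 2, 8))
print(result)  # equals np.dot(mem[0:8], mem[16:32:2])
```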

125 citations


Patent
08 Jun 2017
TL;DR: In this article, the authors present a data transmission method and device to be applied to a source client, wherein the source client is a client in a foreground running status in a mobile terminal; the mobile terminal, by means of a split-screen function, divides its display screen into a first split screen for displaying a running interface of the source client and a second split screen for displaying a running interface of a target client.
Abstract: Embodiments of the present invention provide a data transmission method and device, which are to be applied to a source client, wherein the source client is a client in a foreground running status in a mobile terminal; the mobile terminal, by means of a split-screen function, divides the display screen thereof into a first split screen for displaying a running interface of the source client and a second split screen for displaying a running interface of a target client, and the target client is a client in a foreground running status in the mobile terminal. The method comprises: establishing a communication connection between the source client and the target client; receiving a drag instruction for a thumbnail of target data in the source client and moving the thumbnail according to the drag instruction; monitoring whether a data transmission instruction for the target data is received and, if so, transmitting the target data to the target client through the established communication connection. By applying the embodiments of the present invention, users can take full advantage of split-screen technology, which simplifies the data transmission operation.
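
A hedged sketch of the claimed flow is given below; the class and method names are hypothetical and only illustrate the sequence of establishing a connection, handling the drag, and transmitting once a data transmission instruction arrives.

```python
# Hedged sketch of the claimed flow; class and method names are hypothetical.
import queue

class TargetClient:
    def __init__(self):
        self.inbox = queue.Queue()     # stands in for the inter-client channel
    def receive(self, payload):
        self.inbox.put(payload)

class SourceClient:
    def __init__(self):
        self.connection = None
    def connect(self, target):                 # step 1: establish connection
        self.connection = target
    def on_drag(self, thumbnail, position):    # step 2: move the thumbnail
        print(f"thumbnail {thumbnail} moved to {position}")
    def on_drop(self, target_data):            # step 3: transmit on instruction
        if self.connection is not None:
            self.connection.receive(target_data)

source, target = SourceClient(), TargetClient()
source.connect(target)
source.on_drag("photo.jpg", (120, 540))
source.on_drop(b"...file bytes...")
print(target.inbox.get_nowait())
```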

109 citations


Proceedings ArticleDOI
02 May 2017
TL;DR: A novel design for a home control interface in the form of a social robot, commanded via tangible icons and giving feedback through expressive gestures is presented, suggesting that embodied social robots could provide for an engaging interface with high situation awareness, but also that their usability remains a considerable design challenge.
Abstract: With domestic technology on the rise, the quantity and complexity of smart-home devices are becoming an important interaction design challenge. We present a novel design for a home control interface in the form of a social robot, commanded via tangible icons and giving feedback through expressive gestures. We experimentally compare the robot to three common smart-home interfaces: a voice-control loudspeaker; a wall-mounted touch-screen; and a mobile application. Our findings suggest that interfaces that rate higher on flow rate lower on usability, and vice versa. Participants' sense of control is highest using familiar interfaces, and lowest using voice control. Situation awareness is highest using the robot, and also lowest using voice control. These findings raise questions about voice control as a smart-home interface, and suggest that embodied social robots could provide for an engaging interface with high situation awareness, but also that their usability remains a considerable design challenge.

105 citations


Proceedings ArticleDOI
14 Oct 2017
TL;DR: This work designs a set of application programming interfaces (APIs) that can be used by the host application to offload a data-intensive task to the SSD processor, and describes how these APIs can be implemented by simple modifications to the existing Non-Volatile Memory Express (NVMe) command interface between the host and the SSD processor.
Abstract: Modern data center solid state drives (SSDs) integrate multiple general-purpose embedded cores to manage the flash translation layer, garbage collection, wear-leveling, etc., to improve the performance and the reliability of SSDs. As the performance of these cores steadily improves, there are opportunities to repurpose these cores to perform application-driven computations on stored data, with the aim of reducing the communication between the host processor and the SSD. Reducing host-SSD bandwidth demand cuts down the I/O time, which is a bottleneck for many applications operating on large data sets. However, embedded core performance is still significantly lower than that of the host processor, as generally wimpy embedded cores are used within the SSD for cost reasons. So there is a trade-off between the computation overhead associated with near-SSD processing and the reduction in communication overhead to the host system. In this work, we design a set of application programming interfaces (APIs) that can be used by the host application to offload a data-intensive task to the SSD processor. We describe how these APIs can be implemented by simple modifications to the existing Non-Volatile Memory Express (NVMe) command interface between the host and the SSD processor. We then quantify the computation versus communication tradeoffs for near-storage computing using applications from two important domains, namely data analytics and data integration. Using a fully functional SSD evaluation platform, we perform design space exploration of our proposed approach by varying the bandwidth and computation capabilities of the SSD processor. We evaluate static and dynamic approaches for dividing the work between the host and SSD processor, and show that our design may improve performance by up to 20% when compared to processing at the host processor only, and by up to 6x when compared to processing at the SSD processor only.
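
The computation-versus-communication trade-off described above can be illustrated with a toy cost model; the bandwidth and throughput numbers, and the placement policy, are illustrative assumptions rather than figures from the paper.

```python
# Illustrative model of the computation-vs-communication tradeoff behind
# near-SSD processing; numbers and the placement policy are hypothetical.

def host_time(data_bytes, link_gbps=3.2, host_gops=100, ops_per_byte=0.5):
    """Move all data over the host link, then process on the fast host CPU."""
    transfer = data_bytes / (link_gbps * 1e9 / 8)
    compute = data_bytes * ops_per_byte / (host_gops * 1e9)
    return transfer + compute

def ssd_time(data_bytes, result_bytes, link_gbps=3.2, ssd_gops=4, ops_per_byte=0.5):
    """Filter/aggregate on the wimpy SSD cores, ship only the result."""
    compute = data_bytes * ops_per_byte / (ssd_gops * 1e9)
    transfer = result_bytes / (link_gbps * 1e9 / 8)
    return compute + transfer

def choose_placement(data_bytes, selectivity, ops_per_byte):
    """Dynamic policy: offload only when the SSD-side estimate wins."""
    result_bytes = int(data_bytes * selectivity)
    t_ssd = ssd_time(data_bytes, result_bytes, ops_per_byte=ops_per_byte)
    t_host = host_time(data_bytes, ops_per_byte=ops_per_byte)
    return "ssd" if t_ssd < t_host else "host"

# A selective, lightweight scan favors near-SSD filtering; a compute-heavy
# task favors the much faster host cores despite the extra data movement.
print(choose_placement(8 * 2**30, selectivity=0.001, ops_per_byte=0.5))  # -> ssd
print(choose_placement(8 * 2**30, selectivity=0.5, ops_per_byte=20.0))   # -> host
```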

Proceedings ArticleDOI
24 Sep 2017
TL;DR: An interface that communicates the intentions of SAVs to pedestrians was designed and implemented in a virtual reality (VR) environment and shows that the pedestrians' level of perceived safety and comfort is higher in encounters with the interface than in encounters without the interface.
Abstract: To study future communication needs between pedestrians and shared automated vehicles (SAVs), an interface that communicates the intentions of SAVs to pedestrians was designed and implemented in a virtual reality (VR) environment. This enabled the exploration of behaviors and experiences of 34 pedestrians when encountering SAVs, both with and without the interface, in several street crossing situations. All pedestrians assessed the level of perceived safety and comfort directly after each encounter with the SAV. The results show that the pedestrians' level of perceived safety and comfort is higher in encounters with the interface than in encounters without the interface. This may have a positive influence on the acceptance of SAVs, and implies that future SAVs may benefit from this or a similar interface.

Journal ArticleDOI
TL;DR: An event-driven visualization mechanism fusing multimodal information for a large-scale intelligent video surveillance system that proactively helps security personnel intuitively be aware of events through close cooperation among visualization, data fusion, and sensor tasking is presented.
Abstract: Wide-area monitoring for a smart community can be challenging in systems engineering because of its large scale and heterogeneity at the sensor, algorithm, and visualization levels. A smart interface to visualize high-level information fused from a diversity of low-level surveillance data, and to facilitate rapid response to events, is critical for the design of the system. This paper presents an event-driven visualization mechanism fusing multimodal information for a large-scale intelligent video surveillance system. The mechanism proactively helps security personnel intuitively be aware of events through close cooperation among visualization, data fusion, and sensor tasking. The visualization not only displays 2-D, 3-D, and geographical information within a condensed form of interface but also automatically shows only the important video streams corresponding to spontaneous alerts and events by a decision process called display switching arbitration. The display switching arbitration decides the importance of cameras by score ranking that considers event urgency and semantic object features. This system has been successfully deployed in a campus to demonstrate its usability and efficiency for an installation with two camera clusters that include dozens of cameras, and with a variety of video analytics to detect alerts and events. A further simulation comparing the display switching arbitration with similar camera selection methods shows that our method improves the visualization by selecting better representative camera views and reducing redundant switchover among multiview videos.
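
A minimal sketch of score-ranking-based display switching arbitration is shown below; the weights and feature names are invented for illustration and do not reflect the deployed system's configuration.

```python
# Illustrative sketch of display switching arbitration by score ranking;
# the weights and feature names are hypothetical.
def camera_score(event_urgency, object_features, w_urgency=0.7, w_objects=0.3):
    """Combine event urgency (0-1) with semantic object evidence (0-1 each)."""
    object_term = max(object_features.values(), default=0.0)
    return w_urgency * event_urgency + w_objects * object_term

def arbitrate(cameras, n_displays=4):
    """Pick the n most important camera views for the condensed interface."""
    ranked = sorted(cameras, key=lambda c: camera_score(c["urgency"], c["objects"]),
                    reverse=True)
    return [c["id"] for c in ranked[:n_displays]]

cameras = [
    {"id": "gate-01",  "urgency": 0.9, "objects": {"person": 0.8, "vehicle": 0.1}},
    {"id": "lobby-03", "urgency": 0.2, "objects": {"person": 0.4}},
    {"id": "lot-07",   "urgency": 0.6, "objects": {"vehicle": 0.9}},
    {"id": "hall-12",  "urgency": 0.1, "objects": {}},
]
print(arbitrate(cameras, n_displays=2))   # -> ['gate-01', 'lot-07']
```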

Journal ArticleDOI
TL;DR: How haptic technology works, along with its devices, applications, and disadvantages, is described, and a description of some of its future applications and a few limitations of this technology is provided.

Journal ArticleDOI
TL;DR: A modular and efficient FPGA design of an in silico spiking neural network exploiting the Izhikevich model is presented, able to simulate a fully connected network counting up to 1,440 neurons, in real-time, at a sampling rate of 10 kHz, which is reasonable for small to medium scale extra-cellular closed-loop experiments.
Abstract: In recent years, the idea of dynamically interfacing biological neurons with artificial ones has become more and more urgent. The reason is essentially the design of innovative neuroprostheses where biological cell assemblies of the brain can be substituted by artificial ones. For closed-loop experiments with biological neuronal networks interfaced with in silico modeled networks, several technological challenges need to be faced, from the low-level interfacing between the living tissue and the computational model to the implementation of the latter in a suitable form for real-time processing. Field programmable gate arrays (FPGAs) can improve flexibility when simple neuronal models are required, obtaining good accuracy, real-time performance, and the possibility to create a hybrid system without any custom hardware, just programming the hardware to achieve the required functionality. In this paper, this possibility is explored by presenting a modular and efficient FPGA design of an in silico spiking neural network exploiting the Izhikevich model. The proposed system, prototypically implemented on a Xilinx Virtex 6 device, is able to simulate a fully connected network counting up to 1,440 neurons, in real time, at a sampling rate of 10 kHz, which is reasonable for small to medium scale extracellular closed-loop experiments.
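
The Izhikevich model equations are standard (v' = 0.04v^2 + 5v + 140 - u + I, u' = a(bv - u), with reset v <- c, u <- u + d on a spike); the NumPy sketch below integrates them at the 0.1 ms step implied by the 10 kHz rate above. The network size matches the abstract, but the connectivity, weights, and input current are illustrative, and the FPGA's fixed-point arithmetic is not modeled.

```python
# Minimal NumPy sketch of the Izhikevich model at a 0.1 ms step (10 kHz).
# Regular-spiking parameters; connectivity, weights and input are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps = 1440, 0.1, 2000                  # neurons, ms, 200 ms of activity
a, b, c, d = 0.02, 0.2, -65.0, 8.0              # regular-spiking parameters
W = rng.normal(0.0, 0.5, (N, N))                # fully connected weights
v = np.full(N, -65.0)                           # membrane potential (mV)
u = b * v                                       # recovery variable

spike_count = 0
for _ in range(steps):
    I = 5.0 * rng.standard_normal(N)            # noisy thalamic-like input
    fired = v >= 30.0                           # threshold crossing
    spike_count += int(fired.sum())
    v[fired], u[fired] = c, u[fired] + d        # reset after a spike
    I += W @ fired.astype(float)                # synaptic input from spikes
    v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)

print(f"mean rate: {spike_count / N / (steps * dt / 1000):.1f} Hz")
```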

Journal ArticleDOI
TL;DR: This paper represents the first attempt to utilize fuzzy fusion technique to attack the individual differences problem of MI applications in real-world noisy environments and demonstrates the practical feasibility of implementing the proposed method for real- world applications.
Abstract: A brain-computer interface (BCI) system using electroencephalography signals provides a convenient means of communication between the human brain and a computer. Motor imagery (MI), in which motor actions are mentally rehearsed without engaging in actual physical execution, has been widely used as a major BCI approach. One robust algorithm that can successfully cope with the individual differences in MI-related rhythmic patterns is to create diverse ensemble classifiers using the subband common spatial pattern (SBCSP) method. To aggregate outputs of ensemble members, this study uses fuzzy integral with particle swarm optimization (PSO), which can regulate subject-specific parameters for the assignment of optimal confidence levels for classifiers. The proposed system combining SBCSP, fuzzy integral, and PSO exhibits robust performance for offline single-trial classification of MI and real-time control of a robotic arm using MI. This paper represents the first attempt to utilize a fuzzy fusion technique to attack the individual differences problem of MI applications in real-world noisy environments. The results of this study demonstrate the practical feasibility of implementing the proposed method for real-world applications.
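
One common way to realize fuzzy-integral fusion is a Choquet integral over a Sugeno lambda-measure; the sketch below shows that aggregation step with hand-set densities, whereas the paper tunes subject-specific densities with PSO and may use a different integral formulation.

```python
# Hedged sketch of fuzzy-integral fusion of ensemble classifier outputs
# (Choquet integral over a Sugeno lambda-measure); densities are hand-set here.
import numpy as np
from scipy.optimize import brentq

def sugeno_lambda(densities):
    """Solve prod(1 + lam*g_i) = 1 + lam for the unique lam > -1, lam != 0."""
    g = np.asarray(densities, dtype=float)
    if abs(g.sum() - 1.0) < 1e-12:
        return 0.0                               # densities already additive
    f = lambda lam: np.prod(1.0 + lam * g) - (1.0 + lam)
    if g.sum() > 1.0:                            # root lies in (-1, 0)
        return brentq(f, -1.0 + 1e-9, -1e-9)
    return brentq(f, 1e-9, 1e9)                  # root lies in (0, inf)

def choquet(scores, densities):
    """Choquet integral of per-classifier confidence scores for one class."""
    scores = np.asarray(scores, float)
    densities = np.asarray(densities, float)
    lam = sugeno_lambda(densities)
    order = np.argsort(scores)[::-1]             # classifiers by descending score
    g_cum, total = 0.0, 0.0
    for idx in order:
        g_new = g_cum + densities[idx] + lam * g_cum * densities[idx]
        total += scores[idx] * (g_new - g_cum)
        g_cum = g_new
    return total

# Four sub-band CSP classifiers voting for "right-hand MI" on one trial.
scores = [0.82, 0.55, 0.71, 0.40]                # per-classifier confidences
densities = [0.40, 0.25, 0.35, 0.20]             # importances (PSO-tuned in the paper)
print(round(choquet(scores, densities), 3))
```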

Proceedings ArticleDOI
01 Feb 2017
TL;DR: KAML is presented, an SSD with a key-value interface that uses a novel multi-log architecture and stores data as variable-sized records rather than fixed-sized sectors and provides native transaction support tuned to support fine-grained locking.
Abstract: Modern solid state drives (SSDs) unnecessarily confine host programs to the conventional block I/O interface, leading to suboptimal performance and resource under-utilization. Recent attempts to replace or extend this interface with a key-value-oriented interface and/or built-in support for transactions offer some improvements, but the details of their implementations make them a poor match for many applications. This paper presents the key-addressable, multi-log SSD (KAML), an SSD with a key-value interface that uses a novel multi-log architecture and stores data as variable-sized records rather than fixed-sized sectors. Exposing a key-value interface allows applications to remove a layer of indirection between application-level keys (e.g., database record IDs or file inode numbers) and data stored in the SSD. KAML also provides native transaction support tuned to support fine-grained locking, achieving improved performance compared to previous designs that require page-level locking. Finally, KAML includes a caching layer analogous to a conventional page cache that leverages host DRAM to improve performance and provides additional transactional features. We have implemented a prototype of KAML on a commercial SSD prototyping platform, and our results show that compared with existing key-value stores, KAML improves the performance of online transaction processing (OLTP) workloads by 1.1X – 4.0X, and NoSQL key-value store applications by 1.1X – 3.0X.
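
A simplified in-memory sketch of a key-value, multi-log record store with per-key transactional locking is given below; the API names and structure are illustrative and do not reproduce KAML's firmware or host interface.

```python
# In-memory sketch of a key-value, multi-log record store with per-key
# (fine-grained) transactional locking; names are illustrative, not KAML's API.
import threading
from collections import defaultdict

class KVLogStore:
    def __init__(self, num_logs=4):
        self.logs = [[] for _ in range(num_logs)]      # append-only logs
        self.index = {}                                 # key -> (log, offset)
        self.locks = defaultdict(threading.Lock)        # one lock per key

    def _log_for(self, key):
        return hash(key) % len(self.logs)               # spread keys over logs

    def put(self, key, record):
        """Append a variable-sized record and update the key index."""
        log_id = self._log_for(key)
        self.logs[log_id].append((key, record))
        self.index[key] = (log_id, len(self.logs[log_id]) - 1)

    def get(self, key):
        log_id, offset = self.index[key]
        return self.logs[log_id][offset][1]

    def transaction(self, updates):
        """Atomically apply {key: record} under per-key locks (no page locks)."""
        keys = sorted(updates)                          # fixed order avoids deadlock
        for k in keys:
            self.locks[k].acquire()
        try:
            for k in keys:
                self.put(k, updates[k])
        finally:
            for k in keys:
                self.locks[k].release()

store = KVLogStore()
store.put("inode:42", b"metadata for file 42")
store.transaction({"row:7": b"alice,2017", "row:8": b"bob,2017"})
print(store.get("row:7"))
```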

Proceedings ArticleDOI
01 Jul 2017
TL;DR: A smart system of greenhouse management based on the Internet of Things is proposed using sensor networks and web-based technologies to remotely manage the temperature, humidity and irrigation in the greenhouses.
Abstract: China is a large agricultural country with the largest population in the world. This creates a high demand for food, which is prompting the study of high-quality and high-yielding crops. China's current agricultural production is sufficient to feed the nation; however, compared with developed countries, agricultural farming is still lagging behind, mainly because the system of growing agricultural crops is not based on maximizing output, which would include scientific sowing, irrigation and fertilization. In the past few years many seasonal fruits have been offered for sale in markets, but these crops are grown in traditional, backward agricultural greenhouses, and large-scale changes are needed to modernize production. The reform of small-scale greenhouse agricultural production is relatively easy and could be implemented. The concept of the Agricultural Internet of Things utilizes networking technology in agricultural production; the hardware part of this agricultural IoT includes temperature, humidity and light sensors and processors with a large data processing capability. These hardware devices are connected by short-distance wireless communication technologies such as Bluetooth, WiFi or Zigbee. In fact, Zigbee technology, because of its convenient networking and low power consumption, is widely used in the agricultural internet. The sensor network is combined with well-established web technology, in the form of a wireless sensor network, to remotely control and monitor data from the sensors. In this paper a smart system of greenhouse management based on the Internet of Things is proposed using sensor networks and web-based technologies. The system consists of sensor networks and a software control system. The sensor network consists of the master control center and various sensors using Zigbee protocols. The hardware control center communicates with a middleware system via serial network interface converters. The middleware communicates with the hardware network using an underlying interface, and it also communicates with a web system using an upper interface. The top web system provides users with an interface to view and manage the hardware facilities; administrators can thus view the status of agricultural greenhouses and issue commands to the sensors through this system in order to remotely manage the temperature, humidity and irrigation in the greenhouses. The main topics covered in this paper are: (1) researching the current development of new technologies applicable to agriculture, summarizing the strong points concerning the application of the Agricultural Internet of Things both at home and abroad, and proposing some new methods of agricultural greenhouse management; (2) an analysis of system requirements, the users' expectations of the system, the response to the needs analysis, and the overall design of the system to determine its architecture; (3) using software engineering to ensure that the functional modules of the system, as far as possible, meet the requirements of high cohesion and low coupling between modules, together with the detailed design and implementation of each module.
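
To illustrate the middleware layer described above, here is a hedged sketch of a control loop that parses sensor reports arriving from the serial/Zigbee side, applies thresholds, pushes state to the web layer, and issues actuator commands; the device names, thresholds, and message format are hypothetical.

```python
# Hedged sketch of the middleware control loop: poll sensor reports from the
# Zigbee/serial side, apply simple thresholds, expose state to the web layer,
# and issue actuator commands. Device names and thresholds are hypothetical.
import json

SETPOINTS = {"temperature_max_c": 30.0, "humidity_min_pct": 55.0,
             "soil_moisture_min_pct": 35.0}

def decide(report):
    """Map one greenhouse sensor report to a list of actuator commands."""
    commands = []
    if report["temperature_c"] > SETPOINTS["temperature_max_c"]:
        commands.append({"device": "vent_fan", "action": "on"})
    if report["humidity_pct"] < SETPOINTS["humidity_min_pct"]:
        commands.append({"device": "mist_nozzle", "action": "on"})
    if report["soil_moisture_pct"] < SETPOINTS["soil_moisture_min_pct"]:
        commands.append({"device": "drip_irrigation", "action": "on"})
    return commands

def handle_serial_frame(frame_bytes, send_command, publish_to_web):
    """Called for every JSON frame arriving from the serial network converter."""
    report = json.loads(frame_bytes)
    publish_to_web(report)                      # upper interface: web dashboard
    for cmd in decide(report):                  # underlying interface: actuators
        send_command(report["greenhouse_id"], cmd)

# Example frame as it might arrive from a Zigbee coordinator.
frame = json.dumps({"greenhouse_id": 3, "temperature_c": 32.5,
                    "humidity_pct": 48.0, "soil_moisture_pct": 61.0}).encode()
handle_serial_frame(frame, send_command=lambda gid, c: print("->", gid, c),
                    publish_to_web=lambda r: print("web:", r))
```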

Patent
15 Jun 2017
TL;DR: In this article, the authors present a method for initiating an embedded application in association with a chat interface displayed by a messaging application that executes at least in part on a first user device.
Abstract: Implementations relate to embedded programs and interfaces for chat conversations. In some implementations, a method includes initiating an embedded application in association with a chat interface displayed by a messaging application that executes at least in part on a first user device. The chat interface displays messages originating from other user devices participating in a chat conversation over a network and associated with chat users. An indication is received over the network that one or more particular devices of the other user devices have connected to an embedded session associated with the embedded application. In response, chat identities associated with particular users of the particular user devices are provided from the messaging application to the embedded application. The particular users are designated as member users of the embedded session, and the embedded application is updated based on data received from particular user devices of the embedded session.

Journal ArticleDOI
TL;DR: The novel interaction paradigm attains a seamless interaction between the physical and digital worlds and shortens the operation time on various tasks involving operating physical devices.
Abstract: We describe a new set of interaction techniques that allow users to interact with physical objects through augmented reality (AR). Previously, to operate a smart device, physical touch is generally needed and a graphical interface is normally involved. These become limitations and prevent the user from operating a device out of reach or operating multiple devices at once. Ubii (Ubiquitous interface and interaction) is an integrated interface system that connects a network of smart devices together, and allows users to interact with the physical objects using hand gestures. The user wears a smart glass which displays the user interface in an augmented reality view. Hand gestures are captured by the smart glass, and upon recognizing the right gesture input, Ubii will communicate with the connected smart devices to complete the designated operations. Ubii supports common inter-device operations such as file transfer, printing, projecting, as well as device pairing. To improve the overall performance of the system, we implement computation offloading to perform the image processing computation. Our user test shows that Ubii is easy to use and more intuitive than traditional user interfaces. Ubii shortens the operation time on various tasks involving operating physical devices. The novel interaction paradigm attains a seamless interaction between the physical and digital worlds.

Journal ArticleDOI
TL;DR: From experimental results in a hospital it was confirmed that the robot can move along its global path, and reach the goal without colliding with static and moving objects.
Abstract: This paper describes the development of Pathfinder—an autonomous guided vehicle intended for the transportation of material in hospital environments. Pathfinder is equipped with the latest industrial hardware components and employs the most recent software stacks for simultaneous localization, navigation, and mapping. As the most significant contribution to current robotics development, a POWERLINK interface enabling direct data transfers between the Robot Operating System and POWERLINK-compatible hardware was developed. This combination is, to the best of our knowledge, reported here for the first time, and the results, together with a comprehensive tutorial, were made publicly available as a GitHub repository. The capabilities of Pathfinder were explored during preliminary on-site tests in a local hospital. From the experimental results in the hospital, it was confirmed that the robot can move along its global path and reach the goal without colliding with static or moving objects.

Journal ArticleDOI
TL;DR: Results show that the most important factor for human-robot interface usability is the number and placement of views, whereas the effect of the screen output type was only significant on the participants' perceived workload index.

Journal ArticleDOI
TL;DR: The results suggest that more extensive training with the proposed control system can be an effective and safe option that will allow the displacement of a wheelchair in a controlled environment for potential users suffering from some types of motor neuron diseases.
Abstract: Certain diseases affect brain areas that control the movements of the patients’ body, thereby limiting their autonomy and communication capacity. Research in the field of Brain-Computer Interfaces aims to provide patients with an alternative communication channel not based on muscular activity, but on the processing of brain signals. Through these systems, subjects can control external devices such as spellers to communicate, robotic prostheses to restore limb movements, or domotic systems. The present work focuses on the non-muscular control of a robotic wheelchair. A proposal to control a wheelchair through a Brain-Computer Interface based on the discrimination of only two mental tasks is presented in this study. The wheelchair displacement is performed with discrete movements. The control signals used are sensorimotor rhythms modulated through a right-hand motor imagery task or a mental idle state. The peculiarity of the control system is that it is based on a serial auditory interface that provides the user with four navigation commands. The use of two mental tasks to select commands may facilitate control and reduce error rates compared to other endogenous control systems for wheelchairs. Seventeen subjects initially participated in the study; nine of them completed the three sessions of the proposed protocol. After the first calibration session, seven subjects were discarded due to a low control of their electroencephalographic signals; nine out of ten subjects controlled a virtual wheelchair during the second session; these same nine subjects achieved a medium accuracy level above 0.83 on the real wheelchair control session. The results suggest that more extensive training with the proposed control system can be an effective and safe option that will allow the displacement of a wheelchair in a controlled environment for potential users suffering from some types of motor neuron diseases.
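
The serial auditory selection scheme can be sketched as a simple state machine: the interface announces the four navigation commands one at a time, an idle-state output advances to the next command, and a detected right-hand motor imagery output selects the currently announced one. The decoder below is stubbed out and the command names are illustrative.

```python
# Hedged sketch of driving a four-command serial auditory menu with a
# two-class decoder (right-hand motor imagery = select, idle = skip).
COMMANDS = ["forward", "turn left", "turn right", "stop"]

def serial_menu(classify_epoch, announce):
    """Cycle through commands; return the first one confirmed by MI."""
    i = 0
    while True:
        announce(COMMANDS[i])                 # auditory cue for current command
        label = classify_epoch()              # "mi" (right-hand imagery) or "idle"
        if label == "mi":
            return COMMANDS[i]                # imagery confirms the announced command
        i = (i + 1) % len(COMMANDS)           # idle state advances to next command

# Simulated decoder output: user stays idle twice, then imagines movement.
outputs = iter(["idle", "idle", "mi"])
chosen = serial_menu(classify_epoch=lambda: next(outputs),
                     announce=lambda c: print("announcing:", c))
print("selected:", chosen)                    # -> "turn right"
```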

Journal ArticleDOI
TL;DR: The use of Director, the open-source user interface developed by Team MIT to pilot the Atlas robot in the DARPA Robotics Challenge (DRC), resulted in efficient high-level task operation while being fully competitive with approaches focusing on teleoperation by highly trained operators.
Abstract: Operating a high-degree-of-freedom mobile manipulator, such as a humanoid, in a field scenario requires constant situational awareness, capable perception modules, and effective mechanisms for interactive motion planning and control. A well-designed operator interface presents the operator with enough context to quickly carry out a mission and the flexibility to handle unforeseen operating scenarios robustly. By contrast, an unintuitive user interface can increase the risk of catastrophic operator error by overwhelming the user with unnecessary information. With these principles in mind, we present the philosophy and design decisions behind Director, the open-source user interface developed by Team MIT to pilot the Atlas robot in the DARPA Robotics Challenge (DRC). At the heart of Director is an integrated task execution system that specifies sequences of actions needed to achieve a substantive task, such as drilling a wall or climbing a staircase. These task sequences, developed a priori, make online queries to automated perception and planning algorithms with outputs that can be reviewed by the operator and executed by our whole-body controller. Our use of Director at the DRC resulted in efficient high-level task operation while being fully competitive with approaches focusing on teleoperation by highly trained operators. We discuss the primary interface elements that comprise Director, and we provide an analysis of its successful use at the DRC.

Journal ArticleDOI
TL;DR: The design of interfaces that support users in working with free-flying robots to accomplish tasks including inventory logistics and management, environmental data collection, and visual inspection are explored and the utility of a data-driven design process is demonstrated.
Abstract: Robots are becoming increasingly prevalent and are already providing assistance in a variety of activities, ranging from space exploration to domestic housework. Recent advances in the design of sensors, motors, and microelectromechanical systems have enabled the development of a new class of small aerial robots. These free-flying robots hold great promise in assisting humans by acting as mobile sensor platforms to collect data in areas that are difficult to access or infeasible to instrument. In this work, we explored the design of interfaces that support users in working with free-flying robots to accomplish tasks including inventory logistics and management, environmental data collection, and visual inspection. Extending prior work in control interfaces for ground robots, we conducted a formative study in order to identify key design requirements for free-flyer interfaces. We designed several realistic tasks for use in evaluating human-robot interaction within the context of indoor free-flyer operation. We implemented three prototype interfaces that each provide varying degrees of support in enabling remote users to work with a flying robot to plan, communicate goals, accomplish tasks, and respond to changes in a dynamic environment. An experimental evaluation of each interface found that the interface designed to support collaborative planning and replanning using an interactive timeline and three-dimensional spatial waypoints significantly improved users' efficiency in accomplishing tasks, their ability to intervene in response to spontaneous changes in task demands, and their ratings of the robot as a teammate compared to interfaces that support low-level teleoperation or waypoint-based supervisory control. Our results demonstrate the utility of a data-driven design process and show the need for free-flyer interfaces to consider planning phases in addition to task execution. In addition, we demonstrate the importance of providing interface support for interrupting robot operations as unplanned events arise.

Journal ArticleDOI
TL;DR: This work provides an overview of modeling approaches for RRAM simulation, at the level of technology computer aided design and high-level compact models for circuit simulations, including Finite element method modeling, kinetic Monte Carlo models, and physics-based analytical models.
Abstract: The semiconductor industry is currently challenged by the emergence of Internet of Things, Big data, and deep-learning techniques to enable object recognition and inference in portable computers. These revolutions demand new technologies for memory and computation going beyond the standard CMOS-based platform. In this scenario, resistive switching memory (RRAM) is extremely promising in the frame of storage technology, memory devices, and in-memory computing circuits, such as memristive logic or neuromorphic machines. To serve as enabling technology for these new fields, however, there is still a lack of industrial tools to predict the device behavior under certain operation schemes and to allow for optimization of the device properties based on materials and stack engineering. This work provides an overview of modeling approaches for RRAM simulation, at the level of technology computer aided design and high-level compact models for circuit simulations. Finite element method modeling, kinetic Monte Carlo models, and physics-based analytical models will be reviewed. The adaptation of modeling schemes to various RRAM concepts, such as filamentary switching and interface switching, will be discussed. Finally, application cases of compact modeling to simulate simple RRAM circuits for computing will be shown.
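
As a flavor of the compact-model class discussed above, here is a generic, hedged sketch of a filamentary-switching model: a normalized filament state evolves with a strongly voltage-dependent rate, and the read current interpolates between on and off conductances. The equations and constants are generic illustrations, not a specific model from this review.

```python
# Generic, hedged sketch of a filamentary-RRAM compact model for circuit-level
# simulation; constants are illustrative, not fitted to any device.
import numpy as np

G_ON, G_OFF = 1e-3, 1e-6        # fully-formed / ruptured filament conductance (S)
K0, V0 = 1e3, 0.25              # switching-rate prefactor (1/s) and voltage scale (V)

def step(w, v, dt):
    """Advance the filament state w in [0, 1] by dt seconds under voltage v."""
    rate = K0 * np.sinh(v / V0)          # exponential-like voltage acceleration
    return np.clip(w + rate * dt, 0.0, 1.0)  # positive v grows, negative v dissolves

def current(w, v):
    """Ohmic read: conductance interpolates between the two filament states."""
    return (G_OFF + w * (G_ON - G_OFF)) * v

# Apply a +1.2 V set pulse, read at 0.1 V, then a -1.4 V reset pulse, read again.
w, dt = 0.0, 1e-6
for _ in range(200):                      # 200 us set pulse
    w = step(w, 1.2, dt)
print("after set:  I_read = %.2e A" % current(w, 0.1))
for _ in range(200):                      # 200 us reset pulse
    w = step(w, -1.4, dt)
print("after reset: I_read = %.2e A" % current(w, 0.1))
```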

Journal ArticleDOI
TL;DR: Recent advances in the GPGPU-accelerated software packages PowerFit and DisVis are reported; these combine high-resolution structures of atomic subunits with density maps from cryo-electron microscopy and with distance restraints typically acquired by chemical cross-linking coupled with mass spectrometry, respectively.

Patent
18 Dec 2017
TL;DR: In this paper, the objective function is computed by a combination of a distortion of the useful attributes in the transformed data and a mutual information between the sensitive attributes and the input data.
Abstract: A communication system including a receiver to receive training data. An input interface to receive input data coupled to a hardware processor and a memory. The hardware processor is configured to initialize the privacy module using the training data. Generate a trained privacy module, by iteratively optimizing an objective function. Wherein for each iteration the objective function is computed by a combination of a distortion of the useful attributes in the transformed data and of a mutual information between the sensitive attributes and the transformed data. Such that the mutual information is estimated by the auxiliary module that maximizes a conditional likelihood of the sensitive attributes given the transformed data. Receive the input data via the input interface. Apply the trained privacy module on the input data to produce an application specific transformed data. A transmitter to transmit the application specific transformed data over a communication channel.
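
In generic notation (not necessarily the patent's symbols), the training objective described above can be written as a distortion term plus a weighted mutual-information term, where the mutual information is replaced by its variational estimate obtained from the auxiliary module that maximizes the conditional likelihood of the sensitive attributes given the transformed data:

```latex
% X: input data,  Y: useful attributes,  S: sensitive attributes,
% Z = f_\theta(X): transformed (released) data,  \beta: privacy-utility trade-off weight.
\min_{\theta}\;
  \underbrace{\mathbb{E}\big[\, d(Y,\, Z) \big]}_{\text{distortion of useful attributes}}
  \;+\; \beta\,
  \underbrace{I(S;\, Z)}_{\text{privacy leakage}},
\qquad
I(S;\, Z) \;\approx\; H(S) \;+\; \max_{\phi}\; \mathbb{E}\big[ \log q_{\phi}(S \mid Z) \big].
```

Because the inner maximization becomes tight when the auxiliary model matches the true posterior, training alternates between the auxiliary module (maximizing the conditional likelihood) and the privacy module (minimizing the full objective), which reads as an adversarial min-max scheme.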

Journal ArticleDOI
TL;DR: The results suggest that this inductive tongue computer interface provides an esthetically acceptable and functionally efficient environmental control for a severely disabled user.
Abstract: Purpose: Individuals with tetraplegia depend on alternative interfaces in order to control computers and other electronic equipment. Current interfaces are often limited in the number of available control commands, and may compromise the social identity of an individual due to their undesirable appearance. The purpose of this study was to implement an alternative computer interface, which was fully embedded into the oral cavity and which provided multiple control commands. Methods: The development of a wireless, intraoral, inductive tongue computer interface was described. The interface encompassed a 10-key keypad area and a mouse pad area. This system was embedded wirelessly into the oral cavity of the user. The functionality of the system was demonstrated in two tetraplegic individuals and two able-bodied individuals. Results: The system was invisible during use and allowed the user to type on a computer using either the keypad area or the mouse pad. The maximal typing rate was 1.8 s for repetitively typing ...

Patent
04 Jan 2017
TL;DR: In this article, a system for remotely assisting an autonomous vehicle is described, which includes: aggregating sensor data from the autonomous vehicle; identifying an assistance-desired scenario; generating an assistance request based on the sensor data; transmitting the assistance request to a remote assistance interface; and receiving and processing a response to the requested assistance request.
Abstract: Systems and methods are provided for remotely assisting an autonomous vehicle. The method includes: aggregating sensor data from the autonomous vehicle; identifying an assistance-desired scenario; generating an assistance request based on the sensor data; transmitting the assistance request to a remote assistance interface; and receiving and processing a response to the assistance request. The remote assistance interface is used in generating the response to the assistance request.
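
A hedged sketch of the claimed flow follows; the scenario heuristic, function names, and message format are hypothetical stand-ins for details the abstract does not specify.

```python
# Hedged sketch of the claimed remote-assistance flow; names and the scenario
# heuristic are hypothetical.
import json, time

def identify_assistance_scenario(sensor_data):
    """Stand-in heuristic: ask for help when perception is unsure or blocked."""
    if sensor_data["object_confidence"] < 0.4 or sensor_data["blocked_seconds"] > 30:
        return "ambiguous_obstacle"
    return None

def request_remote_assistance(sensor_data, send_to_interface):
    scenario = identify_assistance_scenario(sensor_data)
    if scenario is None:
        return None
    request = {"scenario": scenario, "timestamp": time.time(),
               "snapshot": sensor_data}              # aggregated sensor payload
    send_to_interface(json.dumps(request))           # transmit to remote operator
    return request

def apply_response(response):
    """Process the operator's answer, e.g. a proposed maneuver to execute."""
    print("executing remote guidance:", response["maneuver"])

req = request_remote_assistance(
    {"object_confidence": 0.25, "blocked_seconds": 42.0, "speed_mps": 0.0},
    send_to_interface=lambda msg: print("sent:", msg))
if req is not None:
    apply_response({"maneuver": "proceed_around_obstacle_left"})
```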