
Showing papers on "Interface (computing)" published in 2021


Journal ArticleDOI
TL;DR: A collection of optimizing transformations for HLS, targeting scalable and efficient architectures for high-performance computing (HPC) applications, is presented, aiming to establish a common toolbox to guide both performance engineers and compiler engineers in tapping into the performance potential offered by spatial computing architectures using HLS.
Abstract: Spatial computing architectures promise a major stride in performance and energy efficiency over the traditional load/store devices currently employed in large scale computing systems. The adoption of high-level synthesis (HLS) from languages such as C++ and OpenCL has greatly increased programmer productivity when designing for such platforms. While this has enabled a wider audience to target spatial computing architectures, the optimization principles known from traditional software design are no longer sufficient to implement high-performance codes, due to fundamentally distinct aspects of hardware design, such as programming for deep pipelines, distributed memory resources, and scalable routing. To alleviate this, we present a collection of optimizing transformations for HLS, targeting scalable and efficient architectures for high-performance computing (HPC) applications. We systematically identify classes of transformations (pipelining, scalability, and memory), the characteristics of their effect on the HLS code and the resulting hardware (e.g., increasing data reuse or resource consumption), and the objectives that each transformation can target (e.g., resolve interface contention, or increase parallelism). We show how these can be used to efficiently exploit pipelining, on-chip distributed fast memory, and on-chip dataflow, allowing for massively parallel architectures. To quantify the effect of various transformations, we cover the optimization process of a sample set of HPC kernels, provided as open source reference codes. We aim to establish a common toolbox to guide both performance engineers and compiler engineers in tapping into the performance potential offered by spatial computing architectures using HLS.

83 citations


Journal ArticleDOI
TL;DR: A BCI-based system to remotely control a robot by continuously capturing workers' brainwaves acquired from a wearable electroencephalogram (EEG) device and interpreting them as robotic commands with 90% accuracy is proposed.

54 citations


Journal ArticleDOI
TL;DR: Benefiting from this multi-parameter sensing capability, the interacting patch is successfully demonstrated as a writing pad, a 2D virtual car control, and a 3D virtual drone control interface, indicating its high adaptability in various human-machine interactions.

53 citations


Journal ArticleDOI
TL;DR: The current research in SSVEP-based BCI is reviewed, focusing on the data analytics that enables continuous, accurate detection of SSVEPs and thus high information transfer rate, and the main technical challenges are described.
Abstract: Electroencephalograph (EEG) has been widely applied for brain-computer interface (BCI) which enables paralyzed people to directly communicate with and control external devices, due to its portability, high temporal resolution, ease of use and low cost. Of various EEG paradigms, steady-state visual evoked potential (SSVEP)-based BCI system which uses multiple visual stimuli (such as LEDs or boxes on a computer screen) flickering at different frequencies has been widely explored in the past decades due to its fast communication rate and high signal-to-noise ratio. In this article, we review the current research in SSVEP-based BCI, focusing on the data analytics that enables continuous, accurate detection of SSVEPs and thus high information transfer rate. The main technical challenges, including signal pre-processing, spectrum analysis, signal decomposition, spatial filtering in particular canonical correlation analysis and its variations, and classification techniques are described in this article. Research challenges and opportunities in spontaneous brain activities, mental fatigue, transfer learning as well as hybrid BCI are also discussed.

53 citations


Journal ArticleDOI
TL;DR: In HBDIAIM, a differential evolutionary algorithm has been incorporated to build adequate security for the confidential data management interface in smart city applications, and a Big Data analytics assisted decision privacy scheme has been used within the differential evolutionary algorithm, improving the scalability and accessibility of the information in the data management interface based on its corresponding storage location.

42 citations


Journal ArticleDOI
TL;DR: This protocol provides ImageJ-based workflows for the analysis of images obtained from colorimetric assays and is accessible to uninitiated users with little experience in image processing or color science and does not require fluorescence signals, expensive imaging equipment or custom-written algorithms.
Abstract: Recently, there has been an explosion of scientific literature describing the use of colorimetry for monitoring the progression or the endpoint result of colorimetric reactions. The availability of inexpensive imaging technology (e.g., scanners, Raspberry Pi, smartphones and other sub-$50 digital cameras) has lowered the barrier to accessing cost-efficient, objective detection methodologies. However, to exploit these imaging devices as low-cost colorimetric detectors, it is paramount that they interface with flexible software that is capable of image segmentation and probing a variety of color spaces (RGB, HSB, Y'UV, L*a*b*, etc.). Development of tailor-made software (e.g., smartphone applications) for advanced image analysis requires complex, custom-written processing algorithms, advanced computer programming knowledge and/or expertise in physics, mathematics, pattern recognition and computer vision and learning. Freeware programs, such as ImageJ, offer an alternative, affordable path to robust image analysis. Here we describe a protocol that uses the ImageJ program to process images of colorimetric experiments. In practice, this protocol consists of three distinct workflow options. This protocol is accessible to uninitiated users with little experience in image processing or color science and does not require fluorescence signals, expensive imaging equipment or custom-written algorithms. We anticipate that total analysis time per region of interest is ~6 min for new users and <3 min for experienced users, although initial color threshold determination might take longer.
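The color-space probing this protocol relies on — segmenting a region of interest and reading out its mean color in RGB and HSB — can be illustrated outside ImageJ too. The following Python sketch is a hypothetical stand-in for that workflow on a synthetic assay image; the image, ROI, and spot color are assumptions for illustration.

```python
import numpy as np
import colorsys

def roi_color_summary(img, mask):
    """Mean colour of the masked region in RGB and HSB, i.e. the values
    ImageJ-style colour-space probing would report for the ROI."""
    pixels = img[mask].astype(float)        # (n_pixels, 3), values 0..255
    mean_rgb = pixels.mean(axis=0)
    h, s, b = colorsys.rgb_to_hsv(*(mean_rgb / 255.0))
    return mean_rgb, (h * 360.0, s * 100.0, b * 100.0)

# Synthetic assay image: a red reaction spot on a white background.
img = np.full((64, 64, 3), 255, dtype=np.uint8)
yy, xx = np.ogrid[:64, :64]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2     # circular ROI
img[mask] = (200, 30, 30)
mean_rgb, (hue, sat, bri) = roi_color_summary(img, mask)
print(mean_rgb, round(hue, 1), round(sat, 1))
```

For an endpoint assay, the hue or saturation of the ROI would then be compared against a calibration series, exactly as the thresholding step of the protocol does interactively.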

41 citations


Journal ArticleDOI
TL;DR: Wang et al. propose an efficient summarization technique to summarize and then search events in multi-view videos over the cloud through text queries, where a deep learning framework is employed to extract the features of moving objects in the frames.
Abstract: In the digital era, the growth of multimedia data is increasing at a rapid pace, which demands both effective and efficient summarization techniques. Such advanced techniques are required so that users can quickly access video content recorded by multiple cameras over a certain period. At present, it is very challenging to manage and search a huge amount of multi-view video data, which contains inter-view dependencies, significant illumination changes, and many low-active frames. This work highlights an efficient summarization technique to summarize and then search the events in such multi-view videos over the cloud through text queries. A deep learning framework is employed to extract the features of moving objects in the frames. The inter-view dependencies among multiple views of the video are captured via local alignment. Parallel virtual machines (VMs) in the cloud environment are used to process multiple video clips independently at a time. Object tracking is applied to filter the low-active frames. Experimental results indicate that the model successfully reduces the video content while preserving the momentous information in the form of events. A computing analysis also indicates that it meets the requirements of real-time applications.

39 citations


Journal ArticleDOI
TL;DR: This article presents a wireless bidirectional grid power interface for EVs that facilitates power flow between the grid, EV, and homes with nonlinear loads, and uses the grid-side low-frequency-to-dc converter to improve power quality by compensating for the harmonic power and reactive power of nonlinear household loads.
Abstract: Electric vehicles (EVs) that offer grid services using the vehicle-to-grid (V2G) concept essentially require an interface that allows for bidirectional power flow. This article presents a wireless bidirectional grid power interface for EVs that facilitates power flow between the grid, EV, and homes with nonlinear loads. The proposed wireless vehicle-to-grid-to-home power interface (WVGH-PI) uses the grid-side low-frequency-to-dc converter, used in typical wireless bidirectional chargers only to facilitate two-way energy flow, to improve power quality by compensating for the harmonic power and reactive power of nonlinear household loads. To improve the overall efficiency of the interface, an adaptive dc-link voltage controller is also proposed. Operation of the proposed wireless interface is investigated under dynamic conditions and in different modes. The simulation and experimental results obtained from a 1-kW prototype are presented, benchmarked against conventional control, to demonstrate the validity of the proposed concept.

37 citations


Journal ArticleDOI
TL;DR: In this paper, self-powered optoelectronic synergistic fiber sensors (SOEFSs) are proposed to simultaneously visualize and digitize mechanical stimuli without an external power supply.
Abstract: Fiber electronics with mechanosensory functionality are highly desirable in healthcare, human-machine interfaces, and robotics. Most efforts are committed to optimizing the electronically readable interface of fiber mechanoreceptors, while the user interface based on naked-eye readable output is rarely explored. Here, scalable fiber electronics that can simultaneously visualize and digitize mechanical stimuli without an external power supply, named self-powered optoelectronic synergistic fiber sensors (SOEFSs), are reported. By coupling space and surface charge polarization, a new mechanoluminescent (ML)-triboelectric synergistic effect is realized. It contributes to a remarkable enhancement of both the electrical output (by 100%) and the optical output (by 30%), as well as a novel temporal-spatial resolution mode for motion capturing. Based on an entirely new thermoplastic ML material system and spinning process, industrial-level continuous manufacturing and recycling of SOEFSs are realized. Furthermore, SOEFSs' application in human-machine interfaces, virtual reality, and underwater sensing, rescue, and information interaction is demonstrated.

35 citations


Journal ArticleDOI
TL;DR: In this article, an open-source decentralized peer-to-peer (P2P) energy trading system, designed on the blockchain and internet of things (IoT) architecture is proposed.

31 citations


Journal ArticleDOI
TL;DR: Two prototypes of neuromorphic photonic computation units based on chalcogenide phase-change materials are presented: a neuromorphic hardware accelerator designed to carry out matrix-vector multiplication in convolutional neural networks, and an all-optical spiking neuron that can serve as a building block for large-scale artificial neural networks.
Abstract: The integration of artificial intelligence systems into daily applications like speech recognition and autonomous driving rapidly increases the amount of data generated and processed. However, satisfying the hardware requirements with the conventional von Neumann architecture remains challenging due to the von Neumann bottleneck. Therefore, new architectures inspired by the working principles of the human brain are being developed, an approach called neuromorphic computing. The key principles of neuromorphic computing are in-memory computing to reduce data shuffling and parallelization to decrease computation time. One promising framework for neuromorphic computing is phase-change photonics. By switching to the optical domain, parallelization is inherently possible through wavelength division multiplexing, and high modulation speeds can be deployed. Non-volatile phase-change materials are used to perform multiplications and non-linear operations in an energetically efficient manner. Here, we present two prototypes of neuromorphic photonic computation units based on chalcogenide phase-change materials. The first is a neuromorphic hardware accelerator designed to carry out matrix-vector multiplication in convolutional neural networks. Due to the neuromorphic architecture, this prototype can already operate at tera-multiply-accumulate-per-second speeds. The second is an all-optical spiking neuron, which can serve as a building block for large-scale artificial neural networks. Here, the whole computation is carried out in the optical domain, and the device only needs an electrical interface for data input and readout.

Journal ArticleDOI
TL;DR: This work proposes a novel virtual-mouse method using RGB-D images and fingertip detection to track fingertips in real time at 30 FPS on a desktop computer using a single CPU and Kinect V2.
Abstract: A real-time fingertip-gesture-based interface is still challenging for human–computer interactions, due to sensor noise, changing light levels, and the complexity of tracking a fingertip across a variety of subjects. Using fingertip tracking as a virtual mouse is a popular method of interacting with computers without a mouse device. In this work, we propose a novel virtual-mouse method using RGB-D images and fingertip detection. The hand region of interest and the center of the palm are first extracted using in-depth skeleton-joint information images from a Microsoft Kinect Sensor version 2, and then converted into a binary image. Then, the contours of the hands are extracted and described by a border-tracing algorithm. The K-cosine algorithm is used to detect the fingertip location, based on the hand-contour coordinates. Finally, the fingertip location is mapped to RGB images to control the mouse cursor based on a virtual screen. The system tracks fingertips in real-time at 30 FPS on a desktop computer using a single CPU and Kinect V2. The experimental results showed a high accuracy level; the system can work well in real-world environments with a single CPU. This fingertip-gesture-based interface allows humans to easily interact with computers by hand.
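The K-cosine step — scoring each contour point by the cosine of the angle between the vectors to its k-th previous and k-th next neighbors, so that sharp convex points (fingertip candidates) score highest — can be sketched as follows. This is an illustrative reimplementation on a toy contour, not the authors' code; the contour shape and the value of k are assumptions.

```python
import numpy as np

def k_cosine(contour, k):
    """K-cosine curvature: for each point, the cosine of the angle between
    the vectors to its k-th previous and k-th next contour points.
    Sharp convex points (fingertip candidates) score close to +1."""
    n = len(contour)
    cos = np.empty(n)
    for i in range(n):
        a = contour[(i - k) % n] - contour[i]
        b = contour[(i + k) % n] - contour[i]
        cos[i] = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return cos

# Toy closed contour: a 40x10 rectangle whose top edge carries one sharp
# spike at x = 20, mimicking a single extended finger.
top = [(float(x), 30.0 if x == 20 else 0.0) for x in range(41)]
right = [(40.0, float(-y)) for y in range(1, 11)]
bottom = [(float(x), -10.0) for x in range(39, 0, -1)]
left = [(0.0, float(-y)) for y in range(10, 0, -1)]
contour = np.array(top + right + bottom + left)
tip = contour[int(np.argmax(k_cosine(contour, k=5)))]
print(tip)  # the spike apex at (20, 30)
```

In the paper's pipeline the contour comes from border tracing of the binarized hand region, and the detected fingertip coordinate is then mapped to the RGB frame to drive the cursor.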

Proceedings ArticleDOI
19 May 2021
TL;DR: GARMI, a service robotics platform conceptualized with a focus on assisting the elderly at home, is designed to provide support with household tasks, serve as an avatar for tactile-enabled communication, and act as an interface for telemedicine and emergency assistance.
Abstract: This letter introduces GARMI, a service robotics platform conceptualized with a focus on assisting the elderly at home. GARMI is designed to provide support with household tasks, as an avatar for tactile-enabled communication, and as an interface for telemedicine and emergency assistance. Its unique humanoid design features a sensor-equipped multi-modal head designed for natural human-machine communication as well as a whole-body torque-control interface for safe physical human-machine interaction. GARMI's modular software architecture and distinctive whole-body control scheme allow multimodal dynamic coupling. Additionally, every system component can actively produce as well as sense forces and can thus serve as a haptic feedback interface when interacting with its environment and users. Furthermore, GARMI is the first mobile humanoid designed with specialized use-inspired avatar stations: PARTI for dual-arm-based exoskeleton-like remote control with force feedback and MUCKI for transparent remote doctor-patient interaction with both audiovisual and safe haptic feedback. The specialized software architecture allows for rapid prototyping and field-testing of new behaviors for telemedicine, multi-modal interaction, and autonomous physical and service assistance. Our first results reveal the potential of our use-driven, force-based, whole-body-controlled mobile humanoid for daily living and telemedicine scenarios in an elderly care research facility.

Journal ArticleDOI
TL;DR: Tidyseurat as discussed by the authors is a lightweight adapter to the tidyverse, which allows intuitively interfacing Seurat with dplyr, tidyr, ggplot2 and plotly packages powering efficient data manipulation, integration, and visualisation.
Abstract: Motivation Seurat is one of the most popular software suites for the analysis of single-cell RNA sequencing data. Considering the popularity of the tidyverse ecosystem, which offers a large set of data display, query, manipulation, integration, and visualisation utilities, a great opportunity exists to interface the Seurat object with the tidyverse. This interface gives the large data science community of tidyverse users the possibility to operate with familiar grammar. Results To provide Seurat with a tidyverse-oriented interface without compromising efficiency, we developed tidyseurat, a lightweight adapter to the tidyverse. Tidyseurat displays cell information as a tibble abstraction, which allows intuitively interfacing Seurat with dplyr, tidyr, ggplot2 and plotly packages powering efficient data manipulation, integration, and visualisation. Iterative analyses on data subsets are enabled by interfacing with the popular nest-map framework. Availability The software is freely available at cran.r-project.org/web/packages/tidyseurat/ and github.com/stemangiola/tidyseurat. Supplementary information Supplementary data are available at Bioinformatics online.

Journal ArticleDOI
TL;DR: This work presents a computationally efficient non-invasive, real-time interface based on the decoding of the activity of spinal motoneurons from wearable high-density electromyogram (EMG) sensors, and shows that the accuracy in control of the proposed neural interface may approach that of the natural control of force.
Abstract: Interfacing with human neural cells during natural tasks provides the means for investigating the working principles of the central nervous system and for developing human-machine interaction technologies. Here we present a computationally efficient, non-invasive, real-time interface based on the decoding of the activity of spinal motoneurons from wearable high-density electromyogram (EMG) sensors. We validate this interface by comparing its decoding results with those obtained with invasive EMG sensors and offline decoding, as reference. Moreover, we test the interface in a series of studies involving real-time feedback on the behavior of a relatively large number of decoded motoneurons. The results on accuracy, intuitiveness, and stability of control demonstrate the possibility of establishing a direct non-invasive interface with the human spinal cord without the need for extensive training. Moreover, in a control task, we show that the accuracy in control of the proposed neural interface may approach that of the natural control of force. These results are the first to demonstrate the feasibility and validity of a non-invasive direct neural interface with the spinal cord, with wearable systems and matching the neural information flow of natural movements.

Journal ArticleDOI
05 Feb 2021-Robotics
TL;DR: This paper classifies applications of haptic devices in various fields based on their construction and functionality, then addresses major limitations of haptic technology and discusses its prospects.

Journal ArticleDOI
TL;DR: In this article, the authors proposed an enhanced fusion framework for brain-computer interface (BCI) applications, which includes an additional preprocessing step of the EEG signal that makes it time invariant and an additional frequency band as a feature for the system.
Abstract: Brain-computer interface (BCI) technologies are popular methods of communication between the human brain and external devices. One of the most popular approaches to BCI is motor imagery (MI). In BCI applications, electroencephalography (EEG) is a very popular measurement for brain dynamics because of its noninvasive nature. Although there is high interest in the BCI topic, the performance of existing systems is still far from ideal, due to the difficulty of performing pattern recognition tasks in EEG signals. This difficulty lies in the selection of the correct EEG channels, the signal-to-noise ratio of these signals, and how to discern the redundant information among them. BCI systems are composed of a wide range of components that perform signal preprocessing, feature extraction, and decision making. In this article, we define a new BCI framework, called the enhanced fusion framework, in which we propose three different ideas to improve existing MI-based BCI frameworks. First, we include an additional preprocessing step of the signal: a differentiation of the EEG signal that makes it time invariant. Second, we add an additional frequency band as a feature for the system, the sensorimotor rhythm band, and we show its effect on the performance of the system. Finally, we make a thorough study of how to make the final decision in the system. We propose the usage of up to six different types of classifiers and a wide range of aggregation functions (including classical aggregations, Choquet and Sugeno integrals and their extensions, and overlap functions) to fuse the information given by the considered classifiers. We have tested this new system on a dataset of 20 volunteers performing MI-based brain-computer interface experiments. On this dataset, the new system achieved 88.80% accuracy. We also propose an optimized version of our system that is able to obtain up to 90.76%. Furthermore, we find that the pair of Choquet/Sugeno integrals and overlap functions are the ones providing the best results.
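A minimal sketch of the Choquet-integral fusion step may help: the discrete Choquet integral aggregates per-classifier scores with respect to a fuzzy measure over classifier coalitions. The cardinality-based measure below is an assumption chosen for illustration; the paper evaluates many aggregation functions and its own measures.

```python
import numpy as np
from itertools import combinations

def choquet(scores, mu):
    """Discrete Choquet integral of per-classifier scores with respect to
    a fuzzy measure mu: {frozenset of classifier indices -> weight}."""
    order = np.argsort(scores)                     # ascending score order
    total, prev = 0.0, 0.0
    for rank, i in enumerate(order):
        coalition = frozenset(order[rank:].tolist())  # scorers >= scores[i]
        total += (scores[i] - prev) * mu[coalition]
        prev = scores[i]
    return total

def cardinality_measure(n, q=2.0):
    """Symmetric fuzzy measure mu(A) = (|A|/n)**q; with q > 1 it rewards
    agreement among several classifiers over a single confident outlier."""
    return {frozenset(s): (r / n) ** q
            for r in range(n + 1) for s in combinations(range(n), r)}

# Fuse three classifiers' class-1 probabilities for one trial.
scores = np.array([0.9, 0.4, 0.3])
fused = choquet(scores, cardinality_measure(3))
print(round(fused, 4))  # -> 0.4
```

Because only one classifier is confident here, the q = 2 measure pulls the fused score well below a plain average, illustrating how the choice of measure shapes the final decision.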

Journal ArticleDOI
Yi Zhang1, Shuqin Dong1, Chengkai Zhu1, Marcel Balle1, Bin Zhang1, Lixin Ran1 
TL;DR: This work proposes a noncontact solution to implement hand gesture recognition for smart devices based on a continuous-wave, time-division-multiplexing (TDM), single-input multiple-output (SIMO) Doppler radar sensor, and a machine-learning algorithm to recognize predefined gestures by classifying deterministic Doppler signals.
Abstract: Personal devices such as smartphones and tablets are rapidly becoming personal communication, information, and control centers. Apart from multitouch screens, human gestures are considered as a new interactive human–smart device interface. In this work, we propose a noncontact solution to implement hand gesture recognitions for smart devices. It is based on a continuous wave, time-division-multiplexing (TDM), single-input multiple-output (SIMO) Doppler radar sensor that can be realized by slightly modifying existing RF front ends of smart devices, and a machine-learning algorithm to recognize predefined gestures by classifying deterministic Doppler signals. An experimental setup emulating a smartphone-based radar sensor was implemented, and the experimental results verified the robustness and the accuracy of the proposed approach.
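The Doppler signal underlying such gesture recognition can be illustrated with a toy single-channel example: a hand moving toward a continuous-wave radar shifts the baseband return by f_d = 2v/λ, which a spectrum peak search recovers. The carrier frequency, hand speed, and noise model below are assumptions for illustration, not the paper's experimental setup.

```python
import numpy as np

def doppler_peak(iq, fs):
    """Signed dominant Doppler frequency (Hz) of a complex baseband I/Q
    record, found via a windowed FFT peak search."""
    spec = np.fft.fftshift(np.fft.fft(iq * np.hanning(iq.size)))
    freqs = np.fft.fftshift(np.fft.fftfreq(iq.size, 1.0 / fs))
    return freqs[int(np.argmax(np.abs(spec)))]

# Hand approaching a 24 GHz CW radar at 0.5 m/s: fd = 2*v/lambda = 80 Hz.
fs = 1000
t = np.arange(fs) / fs                       # 1 s record
v, lam = 0.5, 3e8 / 24e9
fd = 2 * v / lam
rng = np.random.default_rng(1)
noise = 0.1 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))
iq = np.exp(2j * np.pi * fd * t) + noise
print(doppler_peak(iq, fs))                  # positive shift -> approaching
```

A push/pull gesture flips the sign of the recovered shift; the paper's classifier operates on the richer time-frequency patterns produced by full gestures across the SIMO channels.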

ReportDOI
TL;DR: In this article, the authors focus on the networking and computing aspects of the NVIDIA BlueField-2 SmartNIC when used in an Ethernet environment, and show that while the BlueField-2 provides a flexible means of processing data at the network's edge, great care must be taken to not overwhelm the hardware.
Abstract: High-performance computing (HPC) researchers have long envisioned scenarios where application workflows could be improved through the use of programmable processing elements embedded in the network fabric. Recently, vendors have introduced programmable Smart Network Interface Cards (SmartNICs) that enable computations to be offloaded to the edge of the network. There is great interest in both the HPC and high-performance data analytics communities in understanding the roles these devices may play in the data paths of upcoming systems. This paper focuses on characterizing both the networking and computing aspects of NVIDIA's new BlueField-2 SmartNIC when used in an Ethernet environment. For the networking evaluation we conducted multiple transfer experiments between processors located at the host, the SmartNIC, and a remote host. These tests illuminate how much processing headroom is available on the SmartNIC during transfers. For the computing evaluation we used the stress-ng benchmark to compare the BlueField-2 to other servers and place realistic bounds on the types of offload operations that are appropriate for the hardware. Our findings from this work indicate that while the BlueField-2 provides a flexible means of processing data at the network's edge, great care must be taken to not overwhelm the hardware. While the host can easily saturate the network link, the SmartNIC's embedded processors may not have enough computing resources to sustain more than half the expected bandwidth when using kernel-space packet processing. From a computational perspective, encryption operations, memory operations under contention, and on-card IPC operations on the SmartNIC perform significantly better than the general-purpose servers used for comparisons in our experiments. Therefore, applications that mainly focus on these operations may be good candidates for offloading to the SmartNIC.

Journal ArticleDOI
TL;DR: In this article, the authors present a research data infrastructure for materials science, extending and combining the features of an electronic lab notebook and a repository, which can be used throughout the entire research process.
Abstract: The concepts and current developments of a research data infrastructure for materials science are presented, extending and combining the features of an electronic lab notebook and a repository. The objective of this infrastructure is to incorporate the possibility of structured data storage and data exchange with documented and reproducible data analysis and visualization, which finally leads to the publication of the data. This way, researchers can be supported throughout the entire research process. The software is being developed as a web-based and desktop-based system, offering both a graphical user interface and a programmatic interface. The focus of the development is on the integration of technologies and systems based on both established as well as new concepts. Due to the heterogeneous nature of materials science data, the current features are kept mostly generic, and the structuring of the data is largely left to the users. As a result, an extension of the research data infrastructure to other disciplines is possible in the future. The source code of the project is publicly available under a permissive Apache 2.0 license.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the effect of using different user interfaces for planning transrectal robot-assisted MR-guided prostate biopsy (MRgPBx) in an augmented reality (AR) environment.
Abstract: Background: User interfaces play a vital role in the planning and execution of an interventional procedure. The objective of this study is to investigate the effect of using different user interfaces for planning transrectal robot-assisted MR-guided prostate biopsy (MRgPBx) in an augmented reality (AR) environment. Method: End-user studies were conducted by simulating an MRgPBx system with end- and side-firing modes. The information from the system to the operator was rendered on HoloLens as an output interface. Joystick, mouse/keyboard, and holographic menus were used as input interfaces to the system. Results: The studies indicated that using a joystick improved the interactive capacity and enabled the operator to plan MRgPBx in less time. It efficiently captures the operator's commands to manipulate the augmented environment representing the state of the MRgPBx system. Conclusions: The study demonstrates an alternative to conventional input interfaces to interact with and manipulate an AR environment within the context of MRgPBx planning.


Proceedings ArticleDOI
11 Oct 2021
TL;DR: In this paper, the authors implement the 5G data plane using two P4 programs: one that acts as an open-source model data plane to simplify the interface with the control plane, and one that runs efficiently on hardware switches to minimize latency and maximize bandwidth.
Abstract: The demands on mobile networks are constantly evolving, but designing and integrating new high-speed packet processing remains a challenge due to the complexity of requirements and the opacity of protocol specifications. 5G data planes should be implemented in programmable hardware for both speed and flexibility, and extending or replacing these data planes should be painless. In this paper we implement the 5G data plane using two P4 programs: one that acts as an open-source model data plane to simplify the interface with the control plane, and one that runs efficiently on hardware switches to minimize latency and maximize bandwidth. The model data plane enables testing changes made to the control plane before integrating with a performant data plane, and vice versa. The hardware data plane implements the fast path for device traffic, and makes use of microservices to implement functions that high-speed switch hardware cannot perform. Our data plane implementation is currently in limited deployment on three university campuses, where it is enabling new research on mobile networks.

Journal ArticleDOI
TL;DR: A novel electroencephalograph (EEG)-based brain–computer interface (BCI) system for ground vehicle control with potential application of mobility assistance to the disabled is established and the application potential of BCI on the vehicle control and automation is revealed.
Abstract: This article establishes a novel electroencephalograph (EEG)-based brain–computer interface (BCI) system for ground vehicle control with potential application of mobility assistance to the disabled. To enable an intuitive motor imagery (MI) paradigm of “left,” “right,” “push,” and “pull,” a driving simulator based EEG data recording and automatic labeling platform is built for dataset making. In the preprocessing stage, a wavelet and canonical correlation analysis (CCA) combined method is used for artifact removal and improving signal-to-noise ratio. An ensemble learning based training and testing framework is proposed for MI EEG data classification. The average classification accuracy of proposed framework is about 91.75%. This approach essentially takes advantage of the common spatial pattern (CSP) with ability of extracting the feature of event-related potentials and the convolutional neural networks (CNNs) with powerful capacity of feature learning and classification. To convert the classification results of EEG data segments into motion control signals of ground vehicle, shared control strategy is used to realize the control command of “left-steering,” “right-steering,” “acceleration,” and “stop” considering collision avoidance with obstacles detected by a single-line LIDAR. The online experimental results on a model vehicle platform validate the significant performance of the established BCI system and reveal the application potential of BCI on the vehicle control and automation.
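The CSP feature-extraction stage mentioned above can be sketched generically: CSP derives spatial filters from the two classes' covariance matrices via a generalized eigenproblem, and the log-variance of the filtered signals becomes the classifier input. This sketch on synthetic 4-channel data is illustrative, not the authors' pipeline; trial counts, channel layout, and power levels are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Common spatial pattern filters from two sets of EEG trials
    (each trial: channels x samples). Solves Ca w = lambda (Ca + Cb) w
    and keeps the eigenvectors with extreme eigenvalues: directions of
    maximal variance for one class and minimal for the other."""
    def mean_cov(trials):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)           # ascending eigenvalues
    idx = np.argsort(vals)
    keep = np.r_[idx[-n_pairs:], idx[:n_pairs]]
    return vecs[:, keep].T                   # (2*n_pairs, channels)

def log_var_features(trials, W):
    """Log-variance of the spatially filtered trials: the standard CSP
    feature vector handed to the classifier stage."""
    return np.array([np.log(np.var(W @ x, axis=1)) for x in trials])

# Synthetic 4-channel data: class A has extra power on channel 0,
# class B on channel 3.
rng = np.random.default_rng(2)
def make(strong_channel, n_trials=20):
    trials = []
    for _ in range(n_trials):
        x = rng.standard_normal((4, 256))
        x[strong_channel] *= 3.0
        trials.append(x)
    return trials

W = csp_filters(make(0), make(3))
fa, fb = log_var_features(make(0), W), log_var_features(make(3), W)
print(fa[:, 0].mean() > fb[:, 0].mean())     # first filter favours class A
```

In the paper these CSP features feed an ensemble that also includes CNN-learned features, and the classifier output is mapped to vehicle commands under the shared control strategy.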

Journal ArticleDOI
TL;DR: In this article, the authors comprehensively review bio-cyber interfaces for the IoBNT architecture, focusing on bio-cyber interfacing options for IoBNT such as biologically inspired bio-electronic devices, RFID-enabled implantable chips, and electronic tattoos.
Abstract: Advances in synthetic biology and nanotechnology have contributed to the design of tools that can be used to control, reuse, modify, and re-engineer cells’ structure, as well as enabling engineers to effectively use biological cells as programmable substrates to realize Bio-NanoThings (biological embedded computing devices). Bio-NanoThings are generally tiny, non-intrusive, and concealable devices that can be used for in-vivo applications such as intra-body sensing and actuation networks, where the use of artificial devices can be detrimental. Such (nano-scale) devices can be used in various healthcare settings such as continuous health monitoring, targeted drug delivery, and nano-surgeries. These services can also be grouped to form a collaborative network (i.e., a nanonetwork), whose performance can potentially be improved when connected to higher-bandwidth external networks such as the Internet, say via 5G. However, to realize the Internet of Bio-NanoThings (IoBNT) paradigm, it is also important to seamlessly connect the biological environment with the technological landscape by having a dynamic interface design to convert biochemical signals from the human body into equivalent electromagnetic signals (and vice versa). This, unfortunately, risks the exposure of internal biological mechanisms to cyber-based sensing and medical actuation, with potential security and privacy implications. This paper comprehensively reviews the bio-cyber interface for the IoBNT architecture, focusing on bio-cyber interfacing options for IoBNT such as biologically inspired bio-electronic devices, RFID-enabled implantable chips, and electronic tattoos. This study also identifies known and potential security and privacy vulnerabilities and mitigation strategies for consideration in future IoBNT designs and implementations.

Journal ArticleDOI
TL;DR: This paper shows how the hardware abstraction layer interfaces of optical transceivers are implemented for multivendor and heterogeneous environments, coherent digital signal processor interoperability, and optical transport whiteboxes, and describes the effort, driven with partners, to define the transponder abstraction interface.
Abstract: In this paper, we identify challenges in developing future optical network infrastructure for new services based on technologies such as 5G, virtual reality, and artificial intelligence, and we suggest approaches to handling these challenges that include a business model, architecture, and diversity. Through activities in multi-source agreement and de facto standard organizations, we have shown how the hardware abstraction layer interfaces of optical transceivers are implemented for multivendor and heterogeneous environments, coherent digital signal processor interoperability, and optical transport whiteboxes. We have driven the effort to define the transponder abstraction interface with partners. The feasibility of such an implementation was verified through demonstrations and trials. In addition, we are constructing an open-transport platform by combining existing open-source software and implementing software components that automate and enhance operations. An open architecture maintains a healthy ecosystem for industry and allows for a flexible, operator-driven network.
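The point of a hardware abstraction layer here is that a network operating system programs transceivers from different vendors through one interface, with each vendor supplying its own driver behind it. A minimal sketch of that pattern is below; the class and method names are illustrative assumptions, not taken from the transponder abstraction interface specification.

```python
from abc import ABC, abstractmethod

class CoherentModuleHAL(ABC):
    """Hypothetical hardware abstraction layer for a coherent optical module.

    Vendor-specific drivers implement this interface so that the controller
    can manage heterogeneous transceivers uniformly.
    """

    @abstractmethod
    def set_tx_frequency_mhz(self, freq_mhz: int) -> None: ...

    @abstractmethod
    def set_output_power_dbm(self, power_dbm: float) -> None: ...

    @abstractmethod
    def get_pre_fec_ber(self) -> float: ...

class VendorAModule(CoherentModuleHAL):
    """Toy driver: stores settings and reports a canned BER reading."""

    def __init__(self) -> None:
        self.freq_mhz = 0
        self.power_dbm = 0.0

    def set_tx_frequency_mhz(self, freq_mhz: int) -> None:
        self.freq_mhz = freq_mhz

    def set_output_power_dbm(self, power_dbm: float) -> None:
        self.power_dbm = power_dbm

    def get_pre_fec_ber(self) -> float:
        return 1.2e-3  # placeholder telemetry value

def provision(module: CoherentModuleHAL) -> float:
    """Controller code that only sees the abstract interface."""
    module.set_tx_frequency_mhz(193_100_000)  # 193.1 THz expressed in MHz
    module.set_output_power_dbm(-1.0)
    return module.get_pre_fec_ber()
```

Swapping in a second vendor's module requires no change to `provision`, which is exactly the multivendor property the abstract argues for.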

Journal ArticleDOI
07 Jan 2021-Robotics
TL;DR: Results from the online adaptation phase showed that the CHMI2 system was able to support real-time inference and human-machine interface and interaction (HMI2) adaptation; however, the accuracy of the inferred workload varied across participants.

Journal ArticleDOI
TL;DR: A fully-integrated wireless sensor-brain-machine interface (SBMI) system for communicating key somatosensory signals, fingertip forces and limb joint angles, to the brain is presented, providing a novel solution for delivering somatosensory feedback to next-generation neural prostheses.
Abstract: Sensory feedback is critical to the performance of neural prostheses that restore movement control after neurological injury. Recent advances in direct neural control of paralyzed arms present new requirements for miniaturized, low-power sensor systems. To address this challenge, we developed a fully-integrated wireless sensor-brain-machine interface (SBMI) system for communicating key somatosensory signals, fingertip forces and limb joint angles, to the brain. The system consists of a tactile force sensor, an electrogoniometer, and a neural interface. The tactile force sensor features a novel optical-waveguide-on-CMOS design for sensing. The electrogoniometer integrates an ultra-low-power digital signal processor (DSP) for real-time joint angle measurement. The neural interface enables bidirectional neural stimulation and recording. Innovative designs of sensors and sensing interfaces, analog-to-digital converters (ADCs), and ultra-wideband (UWB) wireless transceivers have been developed. The prototypes have been fabricated in 180 nm standard CMOS technology and tested on the bench and in vivo. The developed system provides a novel solution for providing somatosensory feedback to next-generation neural prostheses.
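The joint-angle path in such a system amounts to digitizing the goniometer output, mapping the ADC code to degrees, and lightly filtering it in real time on the DSP. A software sketch of that signal path is below; the paper does not give these details, so the linear calibration, 12-bit resolution, and filter window are illustrative assumptions.

```python
def angle_from_adc(adc_code, adc_bits=12, angle_min=-90.0, angle_max=90.0):
    """Map a raw goniometer ADC code to a joint angle in degrees.

    Assumes a linear sensor response across the measurement range;
    the calibration constants here are placeholders.
    """
    full_scale = (1 << adc_bits) - 1
    return angle_min + (angle_max - angle_min) * adc_code / full_scale

def smooth(samples, window=4):
    """Causal fixed-window moving average: the kind of lightweight
    filtering an ultra-low-power DSP can apply per sample."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        chunk = samples[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```

A streaming implementation on the DSP would keep a running sum instead of re-summing the window, but the input/output behavior is the same.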

Journal ArticleDOI
TL;DR: This work presents an investigation into an alternative approach which aims to manage the incoming information while offering an uncluttered, timely, and safe way of presenting and interacting with the incoming data, through the use of an augmented reality (AR) head-up display (HUD) system.
Abstract: The plurality of current infotainment devices within the in-vehicle space produces an unprecedented volume of incoming data that overwhelms the typical driver, leading to higher collision probability. This work presents an investigation into an alternative approach which aims to manage the incoming information while offering an uncluttered and timely way of presenting and interacting with the incoming data safely. The latter is achieved through the use of an augmented reality (AR) head-up display (HUD) system, which projects the information within the driver’s field of view. An uncluttered gesture recognition interface provides the interaction with the AR visuals. For the assessment of the system’s effectiveness, we developed a full-scale virtual reality driving simulator which immerses drivers in challenging, collision-prone scenarios. The scenarios unfold within a digital twin model of the motorways surrounding the city of Glasgow. The proposed system was evaluated against a typical head-down display (HDD) interface system by 30 users, showing promising results that are discussed in detail.

Proceedings ArticleDOI
17 Feb 2021
TL;DR: NASCENT is a near-storage accelerator for database sort which utilizes the Samsung SmartSSD, an NVMe flash drive with an on-board FPGA chip that processes data in-situ.
Abstract: As the size of data generated every day grows dramatically, the computational bottleneck of computer systems has shifted toward the storage devices. Despite recent developments in storage devices, the interface between the storage and the computational platforms has become the main limitation, as it provides limited bandwidth which does not scale when the number of storage devices increases. Interconnect networks limit the performance of the system when independent operations are executing on different storage devices, since they do not provide simultaneous access to all the storage devices. Offloading the computations to the storage devices eliminates the burden of data transfer from the interconnects. Emerging as a nascent computing trend, near-storage computing offloads a portion of computation to the storage devices to accelerate big data applications. In this paper, we propose NASCENT, a near-storage accelerator for database sort, which utilizes the Samsung SmartSSD, an NVMe flash drive with an on-board FPGA chip that processes data in-situ. We propose, to the best of our knowledge, the first near-storage database sort based on bitonic sort, which considers the specifications of the storage devices to increase the scalability of computer systems as the number of storage devices increases. NASCENT improves both performance and energy efficiency as the number of storage devices increases. With 12 SmartSSDs, NASCENT is 7.6x (147.2x) faster and 5.6x (131.4x) more energy efficient than the FPGA (CPU) baseline.
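Bitonic sort is a natural fit for an FPGA because its compare-exchange network is fixed in advance, independent of the data, so every stage has the same structure and can be deeply pipelined. The following is a minimal software sketch of the algorithm itself, not the paper's FPGA implementation, assuming a power-of-two input size:

```python
def bitonic_sort(data):
    """In-place bitonic sort for a list whose length is a power of two.

    The sequence of (i, i ^ j) compare-exchange pairs below is the fixed
    sorting network; in hardware each inner loop becomes one pipelined
    stage of comparators.
    """
    n = len(data)
    assert n & (n - 1) == 0, "bitonic sort needs a power-of-two input size"
    k = 2
    while k <= n:            # size of the bitonic sequences being merged
        j = k // 2
        while j > 0:         # compare-exchange distance within this stage
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    ascending = (i & k) == 0   # merge direction of this block
                    if (data[i] > data[partner]) == ascending:
                        data[i], data[partner] = data[partner], data[i]
            j //= 2
        k *= 2
    return data
```

For n elements the network has O(log² n) stages of n/2 comparators each, and since the comparison pattern never depends on the values, the accelerator's dataflow can be laid out statically.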