Author
Jiarui Liu
Bio: Jiarui Liu is an academic researcher whose work spans computer science and artificial intelligence. The author has an h-index of 1, having co-authored 1 publication that has received 4 citations.
Papers
TL;DR: In this article, a tactile-olfactory sensing array inspired by the natural sense-fusion system of the star-nose mole was presented, which permits real-time acquisition of the local topography, stiffness, and odor of a variety of objects without visual input.
Abstract: Object recognition is among the basic survival skills of human beings and other animals. To date, artificial intelligence (AI)-assisted high-performance object recognition has been primarily visual, empowered by the rapid development of sensing and computational capabilities. Here, we report a tactile-olfactory sensing array, inspired by the natural sense-fusion system of the star-nose mole, that permits real-time acquisition of the local topography, stiffness, and odor of a variety of objects without visual input. The tactile-olfactory information is processed by a bioinspired olfactory-tactile associated machine-learning algorithm, essentially mimicking the biological fusion procedures in the neural system of the star-nose mole. Aiming at human identification during rescue missions in challenging environments such as dark or buried scenarios, our tactile-olfactory intelligent sensing system classified 11 typical objects with an accuracy of 96.9% in a simulated rescue scenario at a fire department test site. The system required no visual input and showed superior tolerance to environmental interference, highlighting its great potential for robust object recognition in difficult environments where other methods fall short.
39 citations
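The olfactory-tactile associated algorithm is only described at a high level in the abstract; a minimal sketch of the general fusion idea (separate tactile and olfactory encoders whose features are concatenated before an 11-way classification head) might look like the following, where every layer size, input dimension, and name is a hypothetical illustration rather than the paper's actual architecture.

```python
# Minimal sketch of a tactile-olfactory fusion classifier.
# All dimensions and names are hypothetical; the paper's actual
# "olfactory-tactile associated" algorithm is not reproduced here.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, tactile_dim=78, olfactory_dim=6, n_classes=11):
        super().__init__()
        # Separate encoders mimic modality-specific processing ...
        self.tactile_enc = nn.Sequential(nn.Linear(tactile_dim, 32), nn.ReLU())
        self.olfactory_enc = nn.Sequential(nn.Linear(olfactory_dim, 32), nn.ReLU())
        # ... whose features are fused before classification,
        # loosely analogous to cross-modal association.
        self.head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, n_classes))

    def forward(self, tactile, olfactory):
        z = torch.cat([self.tactile_enc(tactile), self.olfactory_enc(olfactory)], dim=-1)
        return self.head(z)

model = FusionClassifier()
logits = model(torch.randn(4, 78), torch.randn(4, 6))  # batch of 4 samples
print(logits.shape)  # torch.Size([4, 11]) -> scores for 11 object classes
```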
TL;DR: An off-line optimizing visual servoing algorithm is proposed to minimize base disturbance during the visual servoing process when the manipulator lacks the degrees of freedom needed for zero-reaction control.
Abstract: During visual servoing space activities, the attitude of a free-floating space robot may be disturbed by the dynamic coupling between the satellite base and the manipulator, and this disturbance may interrupt communication between the space robot and the control center on Earth. However, the redundancy of the manipulator is often insufficient to fully eliminate the disturbance. In this paper, a method named the off-line optimizing visual servoing algorithm is proposed to minimize base disturbance during the visual servoing process when the degrees of freedom of the manipulator are not sufficient for zero-reaction control. Based on the characteristics of the visual servoing process and the robot system model, optimal control is applied to achieve the optimization, and a pose planning method is presented that achieves second-order continuity of the quaternion trajectory, removing the interruption caused by the sign ambiguity of quaternions. Simulations verify the method, showing that the robot is controlled with optimized results during the visual servoing process and that the joint trajectories are smooth.
9 citations
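The pose planning step turns on a well-known property of unit quaternions: q and -q represent the same rotation, so a naively sampled attitude trajectory can jump between the two representatives and break continuity. A minimal NumPy sketch of removing that sign ambiguity (the paper's full second-order-continuous planner is not reproduced here) is:

```python
# Minimal sketch of removing the quaternion double-cover ambiguity
# along a sampled attitude trajectory: q and -q encode the same
# rotation, so consecutive samples are flipped onto the same
# hemisphere to keep the trajectory continuous.
import numpy as np

def make_continuous(quats):
    """quats: (N, 4) array of unit quaternions (w, x, y, z)."""
    out = np.asarray(quats, dtype=float).copy()
    for i in range(1, len(out)):
        # If the dot product is negative, the interpolant would take
        # the "long way" and the trajectory would appear to jump.
        if np.dot(out[i - 1], out[i]) < 0.0:
            out[i] = -out[i]
    return out

# Example: the second sample is the negated (equivalent) rotation.
q = np.array([[1.0, 0.0, 0.0, 0.0], [-0.999, -0.04, 0.0, 0.0]])
print(make_continuous(q))  # second row flipped back to [0.999, 0.04, 0, 0]
```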
TL;DR: In this paper, a Neural Network (NN) event-triggered finite-time consensus control method is developed for uncertain nonlinear multi-agent systems (MASs) with dead-zone input and actuator failures.
Abstract: This paper develops a Neural Network (NN) event-triggered finite-time consensus control method for uncertain nonlinear Multi-Agent Systems (MASs) with dead-zone input and actuator failures. In practical applications, actuator failures inevitably arise in MASs, and the time, pattern, and magnitude of the failures are unknown. The actuators of MASs also suffer from dead-zone nonlinearity. Both actuator failures and dead-zone input can dramatically degrade the performance and stability of MASs. To address these issues, finite-time adaptive controllers capable of simultaneously compensating for actuator failures and dead-zone input are constructed using the backstepping technique. Meanwhile, an NN control scheme handles the unknown nonlinear dynamics of each agent. Furthermore, an event-triggered control mechanism is established that no longer requires continuous communication over the control network. Under the proposed method, all followers achieve finite-time synchronization despite limited bandwidth, unknown failures, and dead-zone input. These results are demonstrated by simulations.
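The event-triggered mechanism described here replaces continuous communication with updates that fire only when a triggering condition is met. A minimal single-agent sketch of that idea, with an integrator plant and every constant chosen purely for illustration (the paper's NN-based finite-time consensus law for multi-agent systems is far richer), is:

```python
# Minimal sketch of the event-triggered idea only: the control input
# is recomputed just when the deviation from the last broadcast state
# exceeds a threshold, instead of at every step. All constants are
# hypothetical illustration values.
dt, steps, k, threshold = 0.01, 1000, 2.0, 0.05
x = 1.0                 # agent state, to be driven to 0
x_hat = x               # last state broadcast over the network
u = -k * x_hat          # control based on the broadcast state
events = 0
for _ in range(steps):
    if abs(x - x_hat) > threshold:   # triggering condition
        x_hat = x                    # broadcast and update control
        u = -k * x_hat
        events += 1
    x += dt * u                      # simple integrator dynamics
print(f"final state {x:.3f}, control updates {events}/{steps}")
```

The point of the sketch is the ratio it prints: the state still converges while the control is updated far fewer times than the plant is sampled.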
Cited by
TL;DR: An intelligent noncontact gesture-recognition system is presented through the integration of a triboelectric touchless sensor (TTS) and deep learning; it recognizes diverse complex gestures using the charges naturally carried on human fingers, without wearable accessories, complicated device structures, or controlled lighting, and achieves high recognition accuracy.
Abstract: Human-machine interfaces (HMIs) play an important role in communication between humans and robots. Touchless HMIs offering high hand dexterity and hygiene hold great promise in medical applications, especially during the coronavirus disease 2019 (COVID-19) pandemic, to reduce the spread of the virus. However, current touchless HMIs are restricted by the limited types of gestures they recognize, the need to wear accessories, complex sensing platforms, lighting requirements, and low recognition accuracy, obstructing their practical application. Here, an intelligent noncontact gesture-recognition system is presented through the integration of a triboelectric touchless sensor (TTS) and deep learning. Combined with a deep-learning-based multilayer perceptron neural network, the TTS can recognize 16 different gestures with a high average accuracy of 96.5%. The system is further applied to control a robot collecting throat swabs in a noncontact mode. Compared with existing touchless HMIs, the proposed system recognizes diverse complex gestures by utilizing the charges naturally carried on human fingers, without wearable accessories, complicated device structures, or strict light conditions, while achieving high recognition accuracy. This system could provide exciting opportunities for a new generation of touchless medical equipment, as well as touchless public facilities, smart robots, virtual reality, and the metaverse.
38 citations
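The classification stage of such a system reduces to a multilayer perceptron mapping sensor readings to one of 16 gesture classes. A minimal scikit-learn sketch trained on synthetic stand-in data (feature dimensions and hyperparameters are hypothetical, not taken from the paper) is:

```python
# Minimal sketch of the classification stage only: an MLP mapping
# triboelectric sensor feature vectors to 16 gesture classes,
# trained here on synthetic data with class-dependent means.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_classes, n_per_class, n_features = 16, 50, 64
X = np.concatenate([rng.normal(loc=c, scale=2.0, size=(n_per_class, n_features))
                    for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```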
TL;DR: In this article, the authors identify bottlenecks hindering the maturation of flexible sensors and propose promising solutions to ease and expedite their deployment, highlighting environmental concerns and emphasizing nontechnical issues such as business, regulatory, and ethical considerations.
Abstract: Humans rely increasingly on sensors to address grand challenges and to improve quality of life in the era of digitalization and big data. For ubiquitous sensing, flexible sensors have been developed to overcome the limitations of their conventional rigid counterparts. Despite rapid advancement in bench-side research over the last decade, market adoption of flexible sensors remains limited. To ease and expedite their deployment, here we identify bottlenecks hindering the maturation of flexible sensors and propose promising solutions. We first analyze the challenges in achieving satisfactory sensing performance for real-world applications, then summarize issues in compatible sensor-biology interfaces, followed by brief discussions on powering and connecting sensor networks. Issues en route to commercialization and sustainable growth of the sector are also analyzed, highlighting environmental concerns and emphasizing nontechnical issues such as business, regulatory, and ethical considerations. Additionally, we look ahead to future intelligent flexible sensors. In proposing a comprehensive roadmap, we hope to steer research efforts toward common goals and to guide coordinated development strategies across disparate communities. Through such collaborative efforts, scientific breakthroughs can be made sooner and capitalized on for the betterment of humanity.
34 citations
TL;DR: In this paper, the authors present the basic model of bioinspired interactive neuromorphic devices and discuss performance metrics as well as the pros and cons of computing neurons and integrated sensory neurons at the material, device, network, and system levels.
Abstract: The performance of conventional computers based on the von Neumann architecture is limited by the physical separation of memory and processor. By synergistically integrating various sensors with synaptic devices, recently emerging interactive neuromorphic devices can directly sense, store, and process stimuli from external environments and implement the functions of perception, learning, memory, and computation. In this review, we present the basic model of bioinspired interactive neuromorphic devices and discuss their performance metrics. Next, we summarize recent progress in bioinspired interactive neuromorphic devices, classified into neuromorphic tactile, visual, auditory, and multisensory systems; these are discussed in detail in terms of materials, device architectures, operating mechanisms, synaptic plasticity, and potential applications. Additionally, bioinspired interactive neuromorphic devices that can fuse multiple or mixed sensing signals are proposed to address more realistic and sophisticated problems. Finally, we discuss the pros and cons of computing neurons and integrated sensory neurons and offer perspectives on interactive neuromorphic devices at the material, device, network, and system levels. It is believed that neuromorphic devices can provide promising solutions for the next generation of interactive sensation, memory, and computation, toward multimodal, low-power, and large-scale intelligent systems endowed with neuromorphic features.
20 citations
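The synaptic behavior such reviews describe can be illustrated with a toy model: a device weight that is potentiated by incoming stimulus pulses and relaxes exponentially between them, a crude stand-in for short-term plasticity. All time constants and gains below are hypothetical illustration values:

```python
# Minimal sketch of short-term synaptic plasticity: the weight is
# potentiated by each stimulus pulse and decays exponentially
# between pulses. Constants are hypothetical illustration values.
import numpy as np

dt, tau, increment = 1.0, 50.0, 0.2   # step (ms), decay constant, pulse gain
t = np.arange(0, 300, dt)
pulses = np.zeros_like(t)
pulses[(t > 20) & (t < 60) & (t.astype(int) % 10 == 0)] = 1.0  # pulse train

w = np.zeros_like(t)
for i in range(1, len(t)):
    # Exponential relaxation toward zero plus pulse-driven potentiation.
    w[i] = w[i - 1] * np.exp(-dt / tau) + increment * pulses[i]
print(f"peak weight {w.max():.2f}, weight 100 ms after last pulse {w[int(150/dt)]:.3f}")
```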