Author

Hao Su

Bio: Hao Su is an academic researcher from the University of California, San Diego. He has contributed to research on topics including computer science and point clouds, has an h-index of 57, and has co-authored 302 publications receiving 55,902 citations. Previous affiliations of Hao Su include Philips and Jiangxi University of Science and Technology.


Papers
Proceedings ArticleDOI
06 Jul 2020
TL;DR: The control design takes advantage of the custom-built, high-performance knee assistive device and a muscle synergy-based model of human walking actuation; a model predictive control (MPC) design tunes physical human-robot interaction in real time.
Abstract: Knee joint actuation plays a critical role in maintaining human walking locomotion and balance under abnormal conditions such as foot slips or work-related musculoskeletal disorders. Wearable assistive robotic knee devices provide additional support and actuation for human walkers. We present an assist-as-needed control strategy for a lightweight, highly backdrivable soft knee assistive device. The control design takes advantage of the custom-built, high-performance knee assistive device and a muscle synergy-based model of human walking actuation. A model predictive control (MPC) design tunes physical human-robot interaction in real time. Human-in-the-loop simulation results demonstrate the performance of the robotic control system under normal walking conditions.
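The receding-horizon idea behind MPC can be sketched in a few lines: at every control step, simulate candidate torques over a short prediction horizon, pick the one with the lowest predicted tracking cost, apply only the first action, and re-plan. The joint model, cost weights, and torque range below are invented for illustration; the paper's actual controller uses a muscle synergy-based actuation model, which is not reproduced here.

```python
# Minimal receding-horizon (MPC-style) controller for a 1-DoF joint.
# Illustrative sketch only: the joint parameters, cost weights, and
# candidate torques are assumptions, not the values from the paper.

def step(state, u, dt=0.01, inertia=0.05, damping=0.2):
    """One Euler step of a simple damped joint: returns (angle, velocity)."""
    theta, omega = state
    alpha = (u - damping * omega) / inertia
    return (theta + dt * omega, omega + dt * alpha)

def rollout_cost(state, u, target, horizon=10):
    """Predicted cost of holding torque u over the prediction horizon."""
    cost = 0.0
    for _ in range(horizon):
        state = step(state, u)
        cost += (state[0] - target) ** 2 + 0.01 * state[1] ** 2 + 1e-4 * u ** 2
    return cost

def mpc_control(state, target, candidates=None):
    """Pick the torque whose predicted rollout tracks the target best."""
    if candidates is None:
        candidates = [i * 0.05 for i in range(-40, 41)]  # -2.0 .. 2.0 N*m
    return min(candidates, key=lambda u: rollout_cost(state, u, target))

# Closed loop: re-plan at every step, apply only the chosen torque.
state, target = (0.0, 0.0), 0.5  # start at 0 rad, track 0.5 rad
for _ in range(400):
    state = step(state, mpc_control(state, target))
```

After 4 simulated seconds the joint angle settles near the 0.5 rad target; the velocity term in the cost supplies predictive damping, which is why the re-planning loop does not oscillate badly despite the very short horizon.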

3 citations

Journal ArticleDOI
TL;DR: In this article, the authors design a fully physics-grounded simulation pipeline for active stereovision depth sensors that includes material acquisition, ray-tracing-based infrared (IR) image rendering, IR noise simulation, and depth estimation.
Abstract: In this article, we focus on the simulation of active stereovision depth sensors, which are popular in both academic and industry communities. Inspired by the underlying mechanism of the sensors, we designed a fully physics-grounded simulation pipeline that includes material acquisition, ray-tracing-based infrared (IR) image rendering, IR noise simulation, and depth estimation. The pipeline is able to generate depth maps with material-dependent error patterns similar to a real depth sensor in real time. We conduct real experiments to show that perception algorithms and reinforcement learning policies trained in our simulation platform could transfer well to the real-world test cases without any fine-tuning. Furthermore, due to the high degree of realism of this simulation, our depth sensor simulator can be used as a convenient testbed to evaluate the algorithm performance in the real world, which will largely reduce the human effort in developing robotic algorithms. The entire pipeline has been integrated into the SAPIEN simulator and is open-sourced to promote the research of vision and robotics communities.
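The last stage of such a pipeline, depth estimation from a rectified stereo pair, is classically done by block matching: for each pixel in one image, search horizontal shifts for the window that best matches the other image, and the winning shift (the disparity) is inversely proportional to depth. A toy 1-D sketch with made-up image rows follows; the paper's matcher operates on rendered IR images and is not reproduced here.

```python
# Toy 1-D block matching, the classic depth-estimation step of a stereo
# pipeline. The image rows and window size are invented for illustration.

def match_disparity(left, right, window=3, max_disp=5):
    """For each pixel of `left`, find the horizontal shift into `right`
    minimizing sum-of-absolute-differences over a small window."""
    half = window // 2
    disparities = []
    for x in range(half, len(left) - half):
        patch = left[x - half : x + half + 1]
        best_d, best_cost = 0, float("inf")
        for d in range(0, max_disp + 1):
            if x - d - half < 0:
                break  # candidate window would fall off the image
            cand = right[x - d - half : x - d + half + 1]
            cost = sum(abs(a - b) for a, b in zip(patch, cand))
            if cost < best_cost:
                best_d, best_cost = d, cost
        disparities.append(best_d)
    return disparities

# A pattern shifted by 2 pixels should yield disparity 2 away from the border.
left  = [0, 0, 9, 1, 7, 3, 8, 2, 0, 0]
right = [9, 1, 7, 3, 8, 2, 0, 0, 0, 0]  # same pattern, shifted left by 2
print(match_disparity(left, right))  # -> [0, 0, 2, 2, 2, 2, 2, 2]
```

The first two pixels fall back to disparity 0 because the shifted window would leave the image; real matchers handle such border and low-texture pixels with invalidation masks, which is one source of the material-dependent error patterns the paper simulates.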

2 citations

Journal ArticleDOI
TL;DR: Exoskeleton assistance does not alter the existing synergy modules but can induce a new module to emerge, and it alters the control of these modules, i.e., modifies the neural commands, as indicated by the reduced amplitude of the activation profiles.
Abstract: Objective: Gait deficits after multiple sclerosis (MS) can be characterized by altered muscle activation patterns. There is preliminary evidence of improved walking with a lower-limb exoskeleton in persons with MS. However, the effects of exoskeleton-assisted walking on neuromuscular modifications are relatively unclear. The objective of this study was to investigate muscle synergies, their activation patterns, and the differences in neural strategies during walking with (EXO) and without (No-EXO) an exoskeleton. Methods: Ten subjects with MS walked under the EXO and No-EXO conditions. Electromyography signals from seven leg muscles were recorded. Muscle synergies and their activation profiles were extracted using non-negative matrix factorization. Results: The stance phase duration was significantly shorter in the EXO condition than in the No-EXO condition (p<0.05). Typically, 3-5 modules were extracted in each condition. Module-1 (Vastus Medialis and Rectus Femoris), module-2 (Soleus and Medial Gastrocnemius), module-3 (Tibialis Anterior), and module-4 (Biceps Femoris and Semitendinosus) were comparable between conditions. In the EXO condition, Semitendinosus and Vastus Medialis emerged in module-5 in 7/10 subjects. Compared to No-EXO, the average activation amplitude during EXO was significantly reduced for module-2 during the stance phase and for module-3 during the swing phase. Conclusion: Exoskeleton assistance does not alter the existing synergy modules but can induce a new module to emerge, and it alters the control of these modules, i.e., modifies the neural commands, as indicated by the reduced amplitude of the activation profiles. Significance: This work provides insight into the potential mechanisms underlying improved gait function after exoskeleton-assisted locomotor training.
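Non-negative matrix factorization decomposes the EMG matrix V (muscles x time) into non-negative factors V ~ W @ H, where columns of W are the synergy modules and rows of H their activation profiles. A minimal sketch using the classic multiplicative update rules follows; the 4x6 "EMG" matrix and rank 2 are invented for illustration, and real synergy analyses use far more samples and cross-validated module counts.

```python
# Toy non-negative matrix factorization by multiplicative updates,
# the decomposition used to extract muscle synergies: V ~ W @ H.
# The 4x6 "EMG" matrix and rank 2 are invented for illustration.

import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, rank, iters=1000, eps=1e-9, seed=0):
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() for _ in range(rank)] for _ in range(m)]
    H = [[rng.random() for _ in range(n)] for _ in range(rank)]
    for _ in range(iters):
        # H <- H * (W^T V) / (W^T W H)   (elementwise)
        WtV = matmul(transpose(W), V)
        WtWH = matmul(transpose(W), matmul(W, H))
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(n)]
             for i in range(rank)]
        # W <- W * (V H^T) / (W H H^T)   (elementwise)
        VHt = matmul(V, transpose(H))
        WHHt = matmul(matmul(W, H), transpose(H))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(rank)]
             for i in range(m)]
    return W, H

# Two ground-truth "synergies" mixed into four "muscle" channels (rank 2).
V = [[1, 2, 3, 2, 1, 0],
     [2, 4, 6, 4, 2, 0],
     [0, 1, 2, 3, 2, 1],
     [0, 2, 4, 6, 4, 2]]
W, H = nmf(V, rank=2)
WH = matmul(W, H)
err = sum((V[i][j] - WH[i][j]) ** 2 for i in range(4) for j in range(6))
print("squared reconstruction error:", round(err, 4))
```

The multiplicative updates keep every entry of W and H non-negative by construction, which is what makes the modules interpretable as additive muscle weightings rather than signed components as in PCA.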

2 citations

Journal ArticleDOI
TL;DR: One study presents a pediatric exoskeleton that provides adaptive assistance to knee extension to alleviate crouch and its evaluation in a child with cerebral palsy and two manuscripts present novel controllers which leverage reinforcement learning and their evaluation in simulation.
Abstract: One study presents a pediatric exoskeleton that provides adaptive assistance to knee extension to alleviate crouch, and its evaluation in a child with cerebral palsy (Chen et al.). Two manuscripts present novel controllers that leverage reinforcement learning, evaluated in simulation: one for assisting squatting motion (Luo et al.) and one for bipedal exoskeleton walking in three dimensions (Liu et al.). The final manuscript evaluates the fusion of surface electromyography (EMG) and muscle sonography to estimate limb movement in a variety of locomotor tasks (Rabe and Fey). Another contribution presents a novel Motor Assisted Hybrid Neuroprosthesis (MAHNP) with actuated hip and knee joints and a distributed control architecture that integrates the exoskeleton with customized FES systems. A supervisory gait event detector split the gait cycle into four discrete states, and the hip and/or knee motors could be activated with bursts of torque to assist the stimulation-driven limb motion. The system was evaluated in two participants with SCI, each with a different implanted stimulation system.
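The supervisory gait event detector described above amounts to a small finite-state machine over the gait cycle: each detected event advances the controller to the next phase, which in turn selects which motors and stimulation channels fire. A minimal sketch; the four state names and triggering events here are invented placeholders, not the MAHNP's actual detector.

```python
# Minimal four-state gait-phase state machine. State names and events
# are illustrative assumptions, not taken from the MAHNP paper.

# Each (state, event) pair maps to the next gait phase.
TRANSITIONS = {
    ("early_stance", "heel_off"): "late_stance",
    ("late_stance", "toe_off"): "early_swing",
    ("early_swing", "peak_knee_flexion"): "late_swing",
    ("late_swing", "heel_strike"): "early_stance",
}

def advance(state, event):
    """Advance one gait phase if `event` matches; otherwise stay put."""
    return TRANSITIONS.get((state, event), state)

state = "early_stance"
for ev in ["heel_off", "toe_off", "peak_knee_flexion", "heel_strike"]:
    state = advance(state, ev)
print(state)  # a full event sequence returns to "early_stance"
```

Ignoring non-matching events (rather than raising) is the usual choice for such supervisors, since sensor noise can deliver events out of order.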

2 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks substantially deeper than those used previously; an ensemble of these residual nets won 1st place in the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
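The core of the framework is the shortcut connection: a block outputs y = F(x) + x, so its layers only need to learn the residual F(x) = H(x) - x, and with F near zero the block defaults to the identity, letting depth be added without degrading what the network already computes. A toy numeric sketch with hand-picked weights, illustrative only:

```python
# The residual idea in miniature: y = F(x) + x with an identity shortcut.
# Weights are hand-picked for illustration, not learned.

def relu(v):
    return [max(0.0, x) for x in v]

def linear(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def residual_block(x, W1, W2):
    """y = W2 @ relu(W1 @ x) + x  (identity shortcut connection)."""
    fx = linear(W2, relu(linear(W1, x)))
    return [f + xi for f, xi in zip(fx, x)]

x = [1.0, -2.0, 3.0]

# With all weights zero, F(x) = 0 and the block is exactly the identity:
# stacking more such blocks cannot hurt the function being computed.
zeros = [[0.0] * 3 for _ in range(3)]
print(residual_block(x, zeros, zeros))  # [1.0, -2.0, 3.0]

# Nonzero weights learn only a small correction on top of the identity.
W1 = [[0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 0.1]]
print(residual_block(x, W1, W1))  # x plus a small residual perturbation
```

This is exactly the degradation argument in the abstract: a plain stack must learn to imitate the identity through its weights, whereas a residual stack gets the identity for free and only perturbs it.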

123,388 citations

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
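The case for very small filters is that stacked 3x3 convolutions cover the same receptive field as one larger filter while using fewer parameters and more non-linearities: for stride-1 layers the receptive field is rf = 1 + sum(k_i - 1), so two 3x3 layers match a single 5x5 and three match a 7x7. A quick check:

```python
# Receptive field of a stack of stride-1 convolutions: rf = 1 + sum(k - 1).
# This is the arithmetic behind preferring stacked 3x3 filters.

def receptive_field(kernel_sizes):
    """Receptive field (in input pixels) of stacked stride-1 conv layers."""
    rf = 1
    for k in kernel_sizes:
        rf += k - 1
    return rf

print(receptive_field([3, 3]))     # 5 -> same coverage as one 5x5 filter
print(receptive_field([3, 3, 3]))  # 7 -> same coverage as one 7x7 filter
```

Parameter counts favor the stack too: for C channels in and out, three 3x3 layers cost 27C^2 weights versus 49C^2 for one 7x7, while inserting two extra ReLUs.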

55,235 citations

Book
18 Nov 2016
TL;DR: This book introduces deep learning, a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and surveys applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations