Author

Hao Su

Bio: Hao Su is an academic researcher from the University of California, San Diego. The author has contributed to research on topics including computer science and point clouds. The author has an h-index of 57 and has co-authored 302 publications receiving 55,902 citations. Previous affiliations of Hao Su include Philips and Jiangxi University of Science and Technology.


Papers
Proceedings ArticleDOI
11 Mar 2007
TL;DR: The evaluation showed that the SNet system achieves considerable service discovery efficiency; to guarantee efficient and semantic service discovery, SNet adopts WSDL-S as its Semantic Web Services description language and extracts its semantic attributes as indexing keys in the Skip Graph.
Abstract: This paper presents the design of the SNet system, a P2P overlay for Semantic Web Services discovery. SNet differs from previous P2P Web Services discovery systems in that it supports complex search through its locality-preserving feature based on Skip Graph. To guarantee efficient and semantic service discovery, SNet adopts WSDL-S as its Semantic Web Services description language and extracts its semantic attributes as indexing keys in the Skip Graph, so that similar keys are aggregated and load stays balanced across peer nodes. Our evaluation showed that the SNet system achieves considerable service discovery efficiency.

16 citations
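As a rough illustration of the locality-preserving indexing idea described above, the sketch below derives a sortable key from semantic attributes so that similar services become neighbors in the overlay's key order. The attribute names, key scheme, and services are hypothetical stand-ins for the WSDL-S attributes and Skip Graph routing of the actual system.

```python
# Hypothetical sketch: turning semantic attributes into sortable indexing keys so
# that similar services sit next to each other in a locality-preserving overlay.
# Attribute names and services are invented for illustration only.

def make_key(category: str, operation: str) -> str:
    """Concatenate semantic attributes into a single sortable key."""
    return f"{category}/{operation}"

services = [
    (make_key("weather", "forecast"), "WeatherForecastService"),
    (make_key("weather", "current"), "CurrentWeatherService"),
    (make_key("finance", "exchange"), "CurrencyExchangeService"),
    (make_key("weather", "alerts"), "StormAlertService"),
]

# A real Skip Graph maintains this key order across peers; sorting stands in for it here.
overlay = sorted(services)

# A prefix/range query over one semantic category touches only contiguous keys,
# which is what makes complex (range) search efficient in a locality-preserving index.
weather = [name for key, name in overlay if key.startswith("weather/")]
print(weather)  # ['StormAlertService', 'CurrentWeatherService', 'WeatherForecastService']
```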

Proceedings ArticleDOI
06 Sep 2022
TL;DR: This work proposes that manipulation skills should include mobility, giving them the flexibility to interact with the target object from multiple locations, and that the navigation skill should allow multiple end points that lead to successful manipulation; these ideas are operationalized by implementing mobile rather than stationary manipulation skills and by training the navigation skill with region goals instead of point goals.
Abstract: We study a modular approach to tackle long-horizon mobile manipulation tasks for object rearrangement, which decomposes a full task into a sequence of subtasks. To tackle the entire task, prior work chains multiple stationary manipulation skills with a point-goal navigation skill, each learned individually on subtasks. Although more effective than monolithic end-to-end RL policies, this framework suffers from compounding errors in skill chaining, e.g., navigating to a bad location from which a stationary manipulation skill cannot reach its target. To address this, we propose that the manipulation skills should include mobility, giving them the flexibility to interact with the target object from multiple locations, and that the navigation skill should allow multiple end points that lead to successful manipulation. We operationalize these ideas by implementing mobile manipulation skills rather than stationary ones and by training the navigation skill with region goals instead of point goals. We evaluate our multi-skill mobile manipulation method M3 on 3 challenging long-horizon mobile manipulation tasks in the Home Assistant Benchmark (HAB), and show superior performance compared to the baselines.

16 citations
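The region-goal idea in the abstract above can be sketched with a toy success check: navigation succeeds anywhere within a region from which a manipulation skill could plausibly reach the object, rather than only at one exact point. The radius, coordinates, and function names are illustrative assumptions, not the paper's implementation.

```python
import math

def point_goal_success(robot_xy, goal_xy, tol=0.1):
    """Conventional point-goal check: the robot must reach one exact location."""
    return math.dist(robot_xy, goal_xy) <= tol

def region_goal_success(robot_xy, object_xy, reach_radius=0.8):
    """Region-goal check (illustrative): any endpoint within the manipulation
    skill's assumed reach of the target object counts as a success."""
    return math.dist(robot_xy, object_xy) <= reach_radius

# Example: the robot stops 0.5 m from the object but misses the scripted point goal.
robot, goal, obj = (1.0, 1.0), (1.3, 1.3), (1.3, 0.6)
print(point_goal_success(robot, goal))  # False: not at the exact point
print(region_goal_success(robot, obj))  # True: still inside the reach region
```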

Journal ArticleDOI
TL;DR: In this article, an upflow porous-plate anaerobic reactor (UPPAR) was developed and explored to promote biomass (anammox) retention and growth, and its nitrogen removal efficiencies, as well as the microbial community dynamics involved in the nitrogen removal process, were evaluated and reported.
Abstract: The anaerobic ammonium oxidation (anammox) process has gained much popularity in recent years following its success in nitrogen removal. However, little has been reported on techniques to promote anammox bacteria immobilization and the associated microbial community evolution. In this study, a novel upflow porous-plate anaerobic reactor (UPPAR) was developed and explored to promote biomass (anammox) retention and growth. To assess the performance of the UPPAR, its nitrogen removal efficiencies, as well as the microbial community dynamics involved in the nitrogen removal process, were evaluated and reported. When a nitrogen loading rate (NLR) ranging from 0.98 to 1.08 kg m⁻³ d⁻¹ was applied at various stages of UPPAR operation, a rapid start-up was achieved in 63 d, and the overall nitrogen removal rate reached 90–95%. By the end of the start-up period, Proteobacteria abundance had decreased by 43.92%, whereas Planctomycetes increased from 2.95% to 43.52%. After the UPPAR had been operated for 124 d and reached steady state, the most pronounced phylum observed was Planctomycetes (43.52%), followed by Proteobacteria (26.63%), Chloroflexi (5.87%), Ignavibacteriae (5.55%), and Bacteroidetes (4.9%). The predominant genera observed were Candidatus Kuenenia (25.46%) and Candidatus Brocadia (3.15%), indicating that nitrogen removal within the UPPAR was mainly conducted via the autotrophic anammox process. Scanning electron microscopy (SEM) revealed that sludge samples obtained at steady state were predominantly in granular form with sizes ranging from 2 mm to 5 mm, and that granule surfaces were dominated by normal to coccoid-shaped cells.

16 citations
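For context on the quantities reported above, the snippet below applies the standard definitions of nitrogen loading rate (NLR = Q·C/V) and nitrogen removal efficiency; the flow, volume, and concentration values are invented examples, not data from the UPPAR study.

```python
def nitrogen_loading_rate(flow_m3_per_d, tn_in_mg_per_l, volume_m3):
    """NLR in kg N per m^3 of reactor per day (1 mg/L == 1 g/m^3)."""
    return flow_m3_per_d * tn_in_mg_per_l / 1000.0 / volume_m3

def removal_efficiency(tn_in_mg_per_l, tn_out_mg_per_l):
    """Percentage of influent total nitrogen removed."""
    return 100.0 * (tn_in_mg_per_l - tn_out_mg_per_l) / tn_in_mg_per_l

# Invented example: 0.5 m^3/d of feed at 200 mg N/L into a 0.1 m^3 reactor.
print(nitrogen_loading_rate(0.5, 200, 0.1))  # 1.0 kg m^-3 d^-1, similar to the reported NLR range
print(removal_efficiency(200, 14))           # 93.0 %, inside the reported 90-95 % band
```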

Posted Content
TL;DR: This letter presents a new wearable robot design approach, the continuum soft exoskeleton, which is conformal to human anatomy and can reduce multiple types of forces along the human spine, such as the spinae muscle force and the shear and compression forces on the lumbar vertebrae.
Abstract: Back injuries are the most prevalent work-related musculoskeletal disorders and represent a major cause of disability. Although innovations in wearable robots aim to alleviate this hazard, the majority of existing exoskeletons are obtrusive because their rigid linkage designs limit natural movement, thus causing ergonomic risk. Moreover, these existing systems are typically suitable for only one type of movement assistance rather than a wide variety of activities. To fill this gap, this paper presents a new wearable robot design approach: the continuum soft exoskeleton. This spine-inspired wearable robot is unobtrusive and assists both squatting and stooping while not impeding walking motion. Because the unique anatomy of the spine cannot appropriately be simplified as a single-degree-of-freedom joint, our robot is conformal to human anatomy and can reduce multiple types of forces along the human spine, such as the spinae muscle force and the shear and compression forces on the lumbar vertebrae. We derived kinematics and kinetics models of this mechanism and established an analytical biomechanics model of human-robot interaction. Quantitative analysis of disc compression force, disc shear force, and muscle force was performed in simulation. We further developed a virtual impedance control strategy to deliver force control and compensate for the hysteresis of the Bowden cable transmission. The feasibility of the prototype was experimentally tested on three healthy subjects. The root mean square error of force tracking is 6.63 N (3.3% of the 200 N peak force), and the system demonstrated that it can actively control stiffness to the desired value. This continuum soft exoskeleton represents a feasible solution with the potential to reduce back pain during multiple activities and to reduce multiple forces along the human spine.

16 citations
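A minimal sketch of a virtual impedance force law of the general kind the abstract describes, assuming a simple spring-damper form F = k(x_d − x) + b(v_d − v) plus a feedforward term standing in for cable hysteresis compensation; the gains, names, and numbers are illustrative assumptions rather than the paper's controller.

```python
def virtual_impedance_force(x_d, x, v_d, v, k=500.0, b=20.0, f_hyst=0.0):
    """Spring-damper impedance law plus an optional feedforward term standing in
    for Bowden-cable hysteresis compensation (all values illustrative)."""
    return k * (x_d - x) + b * (v_d - v) + f_hyst

def rms_error(desired, measured):
    """Root-mean-square force-tracking error, the metric quoted in the abstract."""
    n = len(desired)
    return (sum((d - m) ** 2 for d, m in zip(desired, measured)) / n) ** 0.5

# Example: a commanded force ramp vs. a slightly lagging measured response.
desired = [0.0, 50.0, 100.0, 150.0, 200.0]
measured = [0.0, 44.0, 95.0, 143.0, 196.0]
print(rms_error(desired, measured))  # ~5.0 N, the same order as the reported 6.63 N
```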

Proceedings ArticleDOI
11 Nov 2010
TL;DR: The developed neck has a parallel manipulator embodiment with two cable-driven limbs embedded with preloaded springs and one passive limb, and the robot's appearance reflects a non-threatening image of a penguin, which could translate into a possible therapeutic intervention for children with Autism Spectrum Disorders.
Abstract: This paper presents the development of a new prismatic actuation approach and its application in human-safe humanoid head design. To reduce actuator output impedance and mitigate unexpected external shocks, the prismatic actuation method uses cables to drive a piston with a preloaded spring. By leveraging the advantages of a parallel manipulator and a cable-driven mechanism, the developed neck has a parallel manipulator embodiment with two cable-driven limbs embedded with preloaded springs and one passive limb. The eye mechanism is adapted for a low-cost webcam with a succinct “ball-in-socket” structure. Based on human head anatomy and biomimetics, the neck has 3-degree-of-freedom (DOF) motion: pan, tilt, and one decoupled roll, while each eye has independent pan and synchronous tilt motion (3-DOF eyes). A Kalman-filter-based face-tracking algorithm is implemented to interact with humans. This neck and eye structure is translatable to other human-safe humanoid robots. The robot's appearance reflects a non-threatening image of a penguin, which could translate into a possible therapeutic intervention for children with Autism Spectrum Disorders.

15 citations
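The Kalman-filter-based face tracking mentioned above lends itself to a short sketch: a constant-velocity filter smoothing the detected face centre so a gaze controller can follow it. The state layout, noise values, and frame rate below are assumptions for illustration, not the robot's actual code.

```python
import numpy as np

# Constant-velocity Kalman filter over the face centre state (x, y, vx, vy).
dt = 1.0 / 30.0                            # assumed camera frame rate
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # only pixel position is measured
Q = np.eye(4) * 1e-2                       # process noise (illustrative)
R = np.eye(2) * 4.0                        # measurement noise (illustrative)

x = np.zeros(4)                            # state estimate
P = np.eye(4) * 100.0                      # state covariance

def kalman_step(z):
    """One predict/update cycle given a detected face centre z = (u, v) in pixels."""
    global x, P
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = np.asarray(z, dtype=float) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x[:2]                           # smoothed face centre for the gaze controller

# Example: noisy detections of a face drifting to the right across a few frames.
for z in [(320, 240), (324, 241), (330, 239), (333, 242)]:
    print(kalman_step(z))
```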


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks substantially deeper than those used previously; this framework won 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

123,388 citations
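The residual idea, y = F(x) + x with an identity shortcut, is compact enough to sketch. The block below is a generic basic residual block written in PyTorch from the paper's public description; the channel count is arbitrary and this is not the authors' released implementation.

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """y = F(x) + x, where F is two 3x3 conv-BN layers and the shortcut is identity."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        residual = x                      # identity shortcut
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + residual              # the stacked layers only learn F(x) = y - x
        return self.relu(out)

# Example: a 64-channel feature map passes through with its shape unchanged.
block = BasicResidualBlock(64)
print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```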

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

55,235 citations
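A quick worked check of the 3x3-filter argument above: three stacked 3x3 convolutions cover the same 7x7 receptive field as a single 7x7 convolution but with fewer weights (27C² vs. 49C² for C channels). The channel count below is an arbitrary example.

```python
def conv_params(k, c_in, c_out):
    """Weight count of one k x k convolution layer (biases ignored)."""
    return k * k * c_in * c_out

C = 256  # arbitrary example channel width, kept constant across layers

three_3x3 = 3 * conv_params(3, C, C)  # three stacked layers -> 7x7 receptive field
one_7x7 = conv_params(7, C, C)        # one layer with the same 7x7 receptive field

print(three_3x3, one_7x7)   # 1769472 3211264  (27*C*C vs 49*C*C weights)
print(three_3x3 / one_7x7)  # ~0.55: the 3x3 stack needs roughly 45% fewer weights
```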

Proceedings Article
01 Jan 2015
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

49,914 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations

Book
18 Nov 2016
TL;DR: Deep learning, as presented in this book, is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations