Institution
Beihang University
Education • Beijing, China
About: Beihang University is an education organization based in Beijing, China. It is known for research contributions in the topics of Control theory and Microstructure. The organization has 67002 authors who have published 73507 publications receiving 975691 citations. The organization is also known as Beijing University of Aeronautics and Astronautics.
Topics: Control theory, Microstructure, Nonlinear system, Artificial neural network, Feature extraction
Papers
TL;DR: The architecture of 5G-based IIoT is proposed, and the implementation methods of different advanced manufacturing scenarios and manufacturing technologies are described under the three typical application modes of 5G, i.e., enhanced mobile broadband, massive machine-type communication, and ultra-reliable low-latency communication.
317 citations
TL;DR: A smartphone inertial accelerometer-based architecture for HAR is designed, and a real-time human activity classification method based on a convolutional neural network (CNN) is proposed; the CNN performs local feature extraction and is evaluated on the UCI and Pamap2 datasets.
Abstract: With the widespread application of mobile edge computing (MEC), MEC is serving as a bridge to narrow the gaps between medical staff and patients. Relatedly, MEC is also moving toward supervising in ...
316 citations
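The entry above describes CNN-based local feature extraction over accelerometer data. A minimal sketch of the core operation, assuming a single-axis trace and an illustrative edge-detecting kernel (the paper's actual network, window size, and kernels are not given here):

```python
# Hedged sketch, not the paper's code: segment a raw accelerometer stream
# into fixed-size windows, then apply a 1-D convolution to each window --
# the local-feature-extraction step a CNN-based HAR pipeline performs
# before pooling and classification.
def conv1d(signal, kernel):
    """Valid-mode 1-D cross-correlation over one axis."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def sliding_windows(samples, size, step):
    """Segment a sample stream into overlapping fixed-size windows."""
    return [samples[i:i + size]
            for i in range(0, len(samples) - size + 1, step)]

# Toy one-axis trace; a real input would be 3-axis at ~50 Hz.
trace = [0.0, 0.1, 0.9, 1.0, 0.8, 0.1, 0.0, -0.1, -0.9, -1.0]
windows = sliding_windows(trace, size=5, step=2)
edge_kernel = [1.0, 0.0, -1.0]  # crude motion-change detector (illustrative)
features = [conv1d(w, edge_kernel) for w in windows]
```

In a full pipeline these feature maps would feed pooling and dense layers; here they only illustrate how one learned kernel slides over each window.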
TL;DR: Research results indicate that, within the confined score ranges of technological innovation capability and competitiveness, there is still much room for enterprises to improve their competitiveness.
314 citations
TL;DR: A self-powered, highly stretchable, and transparent triboelectric tactile sensor with patterned Ag-nanofiber electrodes for detecting and spatially mapping trajectory profiles is reported, which has widespread potential in tactile sensing and touchpad technology applications.
Abstract: Recently, the quest for new highly stretchable transparent tactile sensors with large-scale integration and rapid response time continues to be a great impetus to research efforts to expand the promising applications in human-machine interactions, artificial electronic skins, and smart wearable equipment. Here, a self-powered, highly stretchable, and transparent triboelectric tactile sensor with patterned Ag-nanofiber electrodes for detecting and spatially mapping trajectory profiles is reported. The Ag-nanofiber electrodes demonstrate high transparency (>70%), low sheet resistance (1.68–11.1 Ω □⁻¹), excellent stretchability, and stability (>100% strain). Based on the electrode patterning and device design, an 8 × 8 triboelectric sensor matrix is fabricated, which works well under high strain owing to the effect of the electrostatic induction. Using cross-locating technology, the device can execute more rapid tactile mapping, with a response time of 70 ms. In addition, the object being detected can be made from any commonly used materials or can even be human hands, indicating that this device has widespread potential in tactile sensing and touchpad technology applications.
314 citations
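The cross-locating idea in the abstract above — reading 8 row and 8 column electrodes instead of 64 individual pixels, and intersecting the strongest of each — can be sketched as follows. The signal values and threshold are illustrative, not from the paper:

```python
# Hedged sketch of cross-locating on an 8 x 8 sensor matrix: intersect the
# strongest row electrode with the strongest column electrode to find the
# touch point, using 16 readout channels rather than 64.
def locate_touch(row_signals, col_signals, threshold=0.5):
    """Return (row, col) of the touch, or None if no electrode fires."""
    if max(row_signals) < threshold or max(col_signals) < threshold:
        return None
    row = max(range(len(row_signals)), key=lambda i: row_signals[i])
    col = max(range(len(col_signals)), key=lambda j: col_signals[j])
    return (row, col)

# A touch near electrode row 5, column 2 (signal amplitudes are made up):
rows = [0.1, 0.0, 0.2, 0.1, 0.3, 0.9, 0.2, 0.0]
cols = [0.0, 0.2, 0.8, 0.3, 0.1, 0.0, 0.1, 0.0]
print(locate_touch(rows, cols))  # -> (5, 2)
```

Scanning row and column buses rather than every pixel is what makes the rapid mapping (70 ms response) plausible at matrix scale.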
15 Jun 2019 • TL;DR: In this article, an efficient 3D object detection framework based on a single RGB image in the scenario of autonomous driving is presented. The 3D structure information of the object is explored by employing the visual features of visible surfaces.
Abstract: We present an efficient 3D object detection framework based on a single RGB image in the scenario of autonomous driving. Our efforts are put on extracting the underlying 3D information in a 2D image and determining the accurate 3D bounding box of the object without point cloud or stereo data. Leveraging an off-the-shelf 2D object detector, we propose an artful approach to efficiently obtain a coarse cuboid for each predicted 2D box. The coarse cuboid has enough accuracy to guide us in determining the 3D box of the object by refinement. In contrast to previous state-of-the-art methods that only use the features extracted from the 2D bounding box for box refinement, we explore the 3D structure information of the object by employing the visual features of visible surfaces. The new features from surfaces are utilized to eliminate the problem of representation ambiguity brought by using only the 2D bounding box. Moreover, we investigate different methods of 3D box refinement and discover that a classification formulation with a quality-aware loss has much better performance than regression. Evaluated on the KITTI benchmark, our approach outperforms current state-of-the-art methods for single RGB image-based 3D object detection.
313 citations
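The abstract above seeds a coarse 3D cuboid from each 2D detection. One common way to do this (a hedged sketch, not the paper's actual method) is to back-project the 2D box through the pinhole camera model, using an assumed physical object height to solve for depth; the intrinsics and the 1.5 m object height below are illustrative:

```python
# Hedged sketch: estimate a coarse 3D box centre from a single 2D detection
# via the pinhole model. Depth follows from similar triangles given an
# assumed real-world object height; X, Y follow by back-projecting the
# 2D box centre at that depth. All numeric values are illustrative.
def coarse_center(box, fx, fy, cx, cy, object_height_m=1.5):
    """box = (u1, v1, u2, v2) in pixels; returns (X, Y, Z) in metres."""
    u1, v1, u2, v2 = box
    pixel_height = v2 - v1
    Z = fy * object_height_m / pixel_height  # similar triangles
    u_mid = (u1 + u2) / 2.0
    v_mid = (v1 + v2) / 2.0
    X = (u_mid - cx) * Z / fx                # back-project box centre
    Y = (v_mid - cy) * Z / fy
    return (X, Y, Z)

# KITTI-like intrinsics (illustrative): f ~ 720 px, principal point (620, 190).
center = coarse_center((600, 160, 700, 220), fx=720, fy=720, cx=620, cy=190)
```

Such a cuboid is only a rough seed; the framework's refinement stage (surface features, classification with a quality-aware loss) is what produces the final accurate 3D box.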
Authors
| Name | H-index | Papers | Citations |
| --- | --- | --- | --- |
| Yi Chen | 217 | 4342 | 293080 |
| H. S. Chen | 179 | 2401 | 178529 |
| Alan J. Heeger | 171 | 913 | 147492 |
| Lei Jiang | 170 | 2244 | 135205 |
| Wei Li | 158 | 1855 | 124748 |
| Shu-Hong Yu | 144 | 799 | 70853 |
| Jian Zhou | 128 | 3007 | 91402 |
| Chao Zhang | 127 | 3119 | 84711 |
| Igor Katkov | 125 | 972 | 71845 |
| Tao Zhang | 123 | 2772 | 83866 |
| Nicholas A. Kotov | 123 | 574 | 55210 |
| Shi Xue Dou | 122 | 2028 | 74031 |
| Li Yuan | 121 | 948 | 67074 |
| Robert O. Ritchie | 120 | 659 | 54692 |
| Haiyan Wang | 119 | 1674 | 86091 |