Author

Jie Tang

Bio: Jie Tang is an academic researcher from South China University of Technology. The author has contributed to research in topics: optimization problems & energy consumption. The author has an h-index of 18 and has co-authored 120 publications receiving 1,212 citations. Previous affiliations of Jie Tang include University of California, Riverside & Intel.


Papers
Journal ArticleDOI
24 Jun 2019
TL;DR: The authors review state-of-the-art approaches to delivering enough computing power, redundancy, and security to guarantee the safety of autonomous vehicles, and explore potential solutions to the remaining challenges.
Abstract: Safety is the most important requirement for autonomous vehicles; hence, the ultimate challenge of designing an edge computing ecosystem for autonomous vehicles is to deliver enough computing power, redundancy, and security so as to guarantee the safety of autonomous vehicles. Specifically, autonomous driving systems are extremely complex; they tightly integrate many technologies, including sensing, localization, perception, decision making, as well as the smooth interactions with cloud platforms for high-definition (HD) map generation and data storage. These complexities impose numerous challenges for the design of autonomous driving edge computing systems. First, edge computing systems for autonomous driving need to process an enormous amount of data in real time, and often the incoming data from different sensors are highly heterogeneous. Since autonomous driving edge computing systems are mobile, they often have very strict energy consumption restrictions. Thus, it is imperative to deliver sufficient computing power with reasonable energy consumption, to guarantee the safety of autonomous vehicles, even at high speed. Second, in addition to the edge system design, vehicle-to-everything (V2X) provides redundancy for autonomous driving workloads and alleviates stringent performance and energy constraints on the edge side. With V2X, more research is required to define how vehicles cooperate with each other and the infrastructure. Last, safety cannot be guaranteed when security is compromised. Thus, protecting autonomous driving edge computing systems against attacks at different layers of the sensing and computing stack is of paramount concern. In this paper, we review state-of-the-art approaches in these areas as well as explore potential solutions to address these challenges.

369 citations

Journal ArticleDOI
TL;DR: To enable autonomous driving, a computing stack must simultaneously ensure high performance, consume minimal power, and have low thermal dissipation—all at an acceptable cost.
Abstract: To enable autonomous driving, a computing stack must simultaneously ensure high performance, consume minimal power, and have low thermal dissipation—all at an acceptable cost. An architecture that matches workload to computing units and implements task time-sharing can meet these requirements.

145 citations
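The architecture idea in the entry above, matching each workload to the computing unit best suited to it and letting co-located tasks time-share that unit, can be sketched in a few lines. All unit names, task types, and throughput numbers below are invented for illustration; the paper's actual mapping and measurements are not reproduced here.

```python
# Toy sketch of workload-to-unit matching with time-sharing.
# The affinity table (tasks/sec per unit) is hypothetical.
UNITS = {"cpu": {"perception": 1.0, "planning": 3.0, "localization": 2.0},
         "gpu": {"perception": 8.0, "planning": 1.5, "localization": 2.5},
         "dsp": {"perception": 3.0, "planning": 0.5, "localization": 6.0}}

def best_unit(task):
    # Map each task to the unit with the highest throughput for it.
    return max(UNITS, key=lambda u: UNITS[u][task])

def schedule(tasks):
    # Group tasks per unit; co-located tasks time-share the unit equally.
    plan = {u: [] for u in UNITS}
    for t in tasks:
        plan[best_unit(t)].append(t)
    # Effective throughput of each task = unit throughput / tasks sharing it.
    return {t: UNITS[u][t] / len(plan[u]) for u, ts in plan.items() for t in ts}

print(schedule(["perception", "planning", "localization"]))
```

With one task per unit, nothing is shared and each task runs at its unit's full throughput; adding a second perception stream would halve the GPU share of each.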

Journal ArticleDOI
TL;DR: Two ways to successfully integrate deep learning with low-power IoT products are explored.
Abstract: Deep learning can enable Internet of Things (IoT) devices to interpret unstructured multimedia data and intelligently react to both user and environmental events but has demanding performance and power requirements. The authors explore two ways to successfully integrate deep learning with low-power IoT products.

129 citations

Book
25 Oct 2017
TL;DR: The first technical overview of autonomous vehicles written for a general computing and engineering audience, in which the authors share their practical experiences of creating autonomous vehicle systems.
Abstract: This book is the first technical overview of autonomous vehicles written for a general computing and engineering audience. The authors share their practical experiences of creating autonomous vehicle systems. These systems are complex, consisting of three major subsystems: (1) algorithms for localization, perception, and planning and control; (2) client systems, such as the robotics operating system and hardware platform; and (3) the cloud platform, which includes data storage, simulation, high-definition (HD) mapping, and deep learning model training. The algorithm subsystem extracts meaningful information from sensor raw data to understand its environment and make decisions about its actions. The client subsystem integrates these algorithms to meet real-time and reliability requirements. The cloud platform provides offline computing and storage capabilities for autonomous vehicles. Using the cloud platform, we are able to test new algorithms and update the HD map—plus, train better recognition,...

109 citations

Journal ArticleDOI
TL;DR: A worst-case secrecy rate maximization problem is formulated that jointly optimizes the position of the UAV, the AN transmit power, and the PS and TS ratios; a multi-dimensional search and a numerical method are proposed to handle each subproblem.
Abstract: In this paper, we consider an energy-constrained unmanned aerial vehicle (UAV)-enabled mobile relay assisted secure communication system in the presence of a legitimate source-destination pair and multiple eavesdroppers with imperfect locations. The energy-constrained UAV employs the power splitting (PS) scheme to simultaneously receive information and harvest energy from the source, and then exploits the time switching (TS) protocol to perform information relaying. Furthermore, we consider a full-duplex destination node which can simultaneously receive confidential signals from the UAV and cooperatively transmit artificial noise (AN) signals to confuse malicious eavesdroppers. To further enhance the reliability and security of this system, we formulate a worst case secrecy rate maximization problem, which jointly optimizes the position of the UAV, the AN transmit power, as well as the PS and TS ratios. The formulated problem is non-convex and generally intractable. In order to circumvent the non-convexity, we decouple the original optimization problem into three subproblems; this facilitates the design of a suboptimal iterative algorithm. In each iteration, we propose a multi-dimensional search and numerical method to handle the subproblem. Numerical simulation results are provided to demonstrate the effectiveness and superior performance of the proposed joint design versus the conventional schemes in the literature.

82 citations
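The decoupling strategy described in the abstract above, splitting a non-convex joint design into subproblems and solving them in turn, is an instance of alternating (block-coordinate) optimization. A minimal sketch follows; the `secrecy_surrogate` objective, its coefficients, and the variable ranges are invented stand-ins for the paper's actual worst-case secrecy rate, which depends on channel models not reproduced here.

```python
import math

# Hypothetical stand-in for the worst-case secrecy rate over
# (UAV position p, AN power pa, PS ratio rho, TS ratio tau).
def secrecy_surrogate(p, pa, rho, tau):
    signal = tau * math.log2(1 + rho * 10.0 / (1.0 + (p - 3.0) ** 2))
    jamming_gain = 0.3 * math.log1p(pa)   # AN confuses the eavesdroppers...
    an_cost = 0.1 * pa                    # ...but spends harvested energy
    harvest = (1 - rho) * (1 - tau) * 0.2 # crude energy-feasibility bonus
    return signal + jamming_gain - an_cost + harvest

def grid_argmax(f, lo, hi, cur, n=200):
    # 1-D grid search; seeding with the current point keeps each
    # block update non-decreasing in the objective.
    best_x, best_v = cur, f(cur)
    for i in range(n + 1):
        x = lo + (hi - lo) * i / n
        v = f(x)
        if v > best_v:
            best_x, best_v = x, v
    return best_x

def alternating_opt(iters=10):
    p, pa, rho, tau = 0.0, 1.0, 0.5, 0.5
    history = []
    for _ in range(iters):  # one subproblem per variable block
        p = grid_argmax(lambda x: secrecy_surrogate(x, pa, rho, tau), 0.0, 10.0, p)
        pa = grid_argmax(lambda x: secrecy_surrogate(p, x, rho, tau), 0.0, 5.0, pa)
        rho = grid_argmax(lambda x: secrecy_surrogate(p, pa, x, tau), 0.01, 0.99, rho)
        tau = grid_argmax(lambda x: secrecy_surrogate(p, pa, rho, x), 0.01, 0.99, tau)
        history.append(secrecy_surrogate(p, pa, rho, tau))
    return (p, pa, rho, tau), history

sol, hist = alternating_opt()
print("solution (p, Pa, rho, tau):", [round(v, 2) for v in sol])
```

Each block update can only increase the objective, so the iterates converge to a stationary point; as in the paper, this yields a suboptimal but tractable solution to a non-convex joint design.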


Cited by
Reference EntryDOI
15 Oct 2004

2,118 citations

Journal ArticleDOI
TL;DR: This paper bridges the gap between deep learning and mobile and wireless networking research by presenting a comprehensive survey of the crossovers between the two areas, and provides an encyclopedic review of deep-learning-based mobile and wireless networking research, categorized by domain.
Abstract: The rapid uptake of mobile devices and the rising popularity of mobile applications and services pose unprecedented demands on mobile and wireless networking infrastructure. Upcoming 5G systems are evolving to support exploding mobile traffic volumes, real-time extraction of fine-grained analytics, and agile management of network resources, so as to maximize user experience. Fulfilling these tasks is challenging, as mobile environments are increasingly complex, heterogeneous, and evolving. One potential solution is to resort to advanced machine learning techniques, in order to help manage the rise in data volumes and algorithm-driven applications. The recent success of deep learning underpins new and powerful tools that tackle problems in this space. In this paper, we bridge the gap between deep learning and mobile and wireless networking research, by presenting a comprehensive survey of the crossovers between the two areas. We first briefly introduce essential background and state-of-the-art in deep learning techniques with potential applications to networking. We then discuss several techniques and platforms that facilitate the efficient deployment of deep learning onto mobile systems. Subsequently, we provide an encyclopedic review of mobile and wireless networking research based on deep learning, which we categorize by different domains. Drawing from our experience, we discuss how to tailor deep learning to mobile environments. We complete this survey by pinpointing current challenges and open future directions for research.

975 citations

Book ChapterDOI
01 Jan 1997
TL;DR: In this paper, a nonlinear fractional programming problem is considered; it is assumed that the objective function has a finite optimal value, that g(x) + β > 0 for all x ∈ S, and that S is non-empty.
Abstract: In this chapter we deal with the following nonlinear fractional programming problem: $$P:\mathop{{\max }}\limits_{{x \in S}} q(x) = (f(x) + \alpha )/(g(x) + \beta )$$ where f, g: R n → R, α, β ∈ R, S ⊆ R n . To simplify things, and without restricting the generality of the problem, it is usually assumed that g(x) + β > 0 for all x ∈ S, that S is non-empty, and that the objective function has a finite optimal value.

797 citations
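The standard parametric approach to fractional programs of this form is Dinkelbach's algorithm: repeatedly solve max over x in S of f(x) + α − λ(g(x) + β) and update λ with the resulting ratio, until the parametric optimum reaches zero. A minimal sketch on a toy instance of my choosing (f(x) = x, g(x) = x², α = β = 1, S = [0, 5], not an example from the chapter):

```python
# Dinkelbach's algorithm for max_{x in S} (f(x) + alpha) / (g(x) + beta),
# assuming g(x) + beta > 0 on S and a finite optimal value.
def dinkelbach(f, g, alpha, beta, inner_argmax, x0, tol=1e-9, max_iter=100):
    x = x0
    for _ in range(max_iter):
        lam = (f(x) + alpha) / (g(x) + beta)    # current ratio value
        x = inner_argmax(lam)                   # max f + alpha - lam*(g + beta)
        F = f(x) + alpha - lam * (g(x) + beta)  # parametric optimal value
        if abs(F) < tol:                        # F(lam*) = 0 at the optimum
            break
    return x, (f(x) + alpha) / (g(x) + beta)

f = lambda x: x
g = lambda x: x * x

# Inner problem max_{0<=x<=5} x + 1 - lam*(x^2 + 1) is concave for lam > 0;
# its unconstrained maximizer 1/(2*lam) is clipped to S = [0, 5].
inner = lambda lam: min(max(1.0 / (2.0 * lam), 0.0), 5.0)

x_star, q_star = dinkelbach(f, g, 1.0, 1.0, inner, x0=1.0)
print(round(x_star, 4), round(q_star, 4))  # x* = sqrt(2)-1 ~ 0.4142, q* ~ 1.2071
```

Each outer iteration only requires the non-fractional inner maximization, which is why the parametric reformulation is so widely used.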

Journal ArticleDOI
TL;DR: This paper provides an overview of the state of the art and focuses on emerging trends to highlight the hardware, software, and application landscape of big-data analytics.

699 citations

Journal ArticleDOI
TL;DR: In this article, the authors provide a comprehensive survey to draw a picture of the 6G system in terms of drivers, use cases, usage scenarios, requirements, key performance indicators (KPIs), architecture, and enabling technologies.
Abstract: As of today, the fifth generation (5G) mobile communication system has been rolled out in many countries and the number of 5G subscribers has already reached a very large scale. It is time for academia and industry to shift their attention towards the next generation. At this crossroad, an overview of the current state of the art and a vision of future communications are definitely of interest. This article thus aims to provide a comprehensive survey to draw a picture of the sixth generation (6G) system in terms of drivers, use cases, usage scenarios, requirements, key performance indicators (KPIs), architecture, and enabling technologies. First, we attempt to answer the question of "Is there any need for 6G?" by shedding light on its key driving factors, in which we predict the explosive growth of mobile traffic until 2030, and envision potential use cases and usage scenarios. Second, the technical requirements of 6G are discussed and compared with those of 5G with respect to a set of KPIs in a quantitative manner. Third, the state-of-the-art 6G research efforts and activities from representative institutions and countries are summarized, and a tentative roadmap of definition, specification, standardization, and regulation is projected. Then, we identify a dozen potential technologies and introduce their principles, advantages, challenges, and open research issues. Finally, conclusions are drawn to paint a picture of what 6G may look like. This survey is intended to serve as an enlightening guideline to spur interest and further investigation for subsequent research and development of 6G communications systems.

475 citations