
Showing papers by "Chen Liu published in 2017"


Journal ArticleDOI
TL;DR: This article provides a comprehensive overview of the data analytics landscape for electric vehicle integration into green smart cities and serves as a roadmap to future data analytics needs and solutions for electric vehicle integration into smart cities.
Abstract: The huge amount of data generated by devices, vehicles, buildings, the power grid, and many other connected things, coupled with increased rates of data transmission, constitutes the big data challenge. Among many areas associated with the Internet of Things, the smart grid and electric vehicles have their share of this challenge by being both producers and consumers (i.e., prosumers) of big data. Electric vehicles can significantly help smart cities become greener by reducing emissions from the transportation sector, and thus play an important role in green smart cities. In this article, we first survey the data analytics techniques used for handling the big data of the smart grid and electric vehicles. The data generated by electric vehicles come from sources that vary from sensors to trip logs. Once this vast amount of data is analyzed using big data techniques, it can be used to develop charging-station siting policies, design smart charging algorithms, address energy efficiency issues, evaluate the capacity of power distribution systems to handle extra charging loads, and, finally, determine the market value of the services provided by electric vehicles (i.e., vehicle-to-grid opportunities). This article provides a comprehensive overview of the data analytics landscape for electric vehicle integration into green smart cities. It serves as a roadmap to future data analytics needs and solutions for electric vehicle integration into smart cities.
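
As a toy illustration of the kind of analytics task the survey covers, the sketch below clusters trip-end coordinates to propose candidate charging-station sites. The file name and column names are hypothetical; the paper surveys such techniques but prescribes no specific pipeline.

```python
# Toy sketch: siting charging stations by clustering trip-end locations.
# "ev_trip_logs.csv" and its column names are hypothetical stand-ins.
import pandas as pd
from sklearn.cluster import KMeans

trips = pd.read_csv("ev_trip_logs.csv")            # hypothetical trip logs
coords = trips[["end_lat", "end_lon"]].to_numpy()

# Each cluster centroid is a candidate station site near frequent trip ends.
kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(coords)
for i, (lat, lon) in enumerate(kmeans.cluster_centers_):
    print(f"candidate site {i}: ({lat:.5f}, {lon:.5f})")
```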

78 citations


Book ChapterDOI
22 Nov 2017
TL;DR: In this paper, the authors present their experiences of building a production distributed autonomous driving simulation platform based on the Spark distributed framework for distributed computing management and the Robot Operating System (ROS) for data playback simulation.
Abstract: Autonomous vehicle safety and reliability are the paramount requirements when developing autonomous vehicles. These requirements are guaranteed by massive functional and performance tests. Conducting these tests on real vehicles is extremely expensive and time-consuming, so it is imperative to develop a simulation platform to perform these tasks. For simulation, we can utilize the Robot Operating System (ROS) for data playback to test newly developed algorithms. However, due to the massive amount of simulation data, performing simulation on single machines is not practical. Hence, a high-performance distributed simulation platform is a critical piece in autonomous driving development. In this paper, we present our experiences of building a production distributed autonomous driving simulation platform, built on the Spark distributed framework for distributed computing management and on ROS for data playback simulation.
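
The paper describes the architecture (Spark for distributed job management, ROS for playback) but not its code; a minimal PySpark sketch of the idea, with a hypothetical replay_sim playback harness and bag paths, might look like this:

```python
# Minimal sketch: fan ROS-bag playback simulations out over a Spark cluster.
# "replay_sim" and the bag paths are hypothetical stand-ins, not the
# paper's actual interface.
import subprocess
from pyspark import SparkContext

def run_simulation(bag_path):
    """Replay one recorded ROS bag through the algorithm under test."""
    result = subprocess.run(["replay_sim", "--bag", bag_path],
                            capture_output=True, text=True)
    return (bag_path, result.returncode)

sc = SparkContext(appName="driving-simulation")
bags = ["/data/bags/run_%03d.bag" % i for i in range(100)]  # hypothetical
statuses = sc.parallelize(bags, numSlices=100).map(run_simulation).collect()
print(sum(1 for _, rc in statuses if rc == 0), "of", len(statuses), "runs passed")
```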

8 citations


Proceedings ArticleDOI
TL;DR: This work explores the utilization of hardware events in conjunction with machine learning algorithms to detect which version of OpenSSL is being run during the encryption process, allowing for the immediate detection of any unknown downgrade attacks in real time.
Abstract: Many forms of malware and security breaches exist today. One type of breach downgrades a cryptographic program by employing a man-in-the-middle attack. In this work, we explore the utilization of hardware events in conjunction with machine learning algorithms to detect which version of OpenSSL is being run during the encryption process. This allows for the immediate detection of any unknown downgrade attack in real time. Our experimental results indicate that this detection method is both feasible and practical. When trained with normal TLS and SSL data, our classifier was able to detect which protocol was being used with 99.995% accuracy. After the scope of the hardware event recording was enlarged, the accuracy dropped sharply, to 53.244%. Upon removal of TLS 1.1 from the data set, the accuracy returned to 99.905%.
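
The paper does not publish its pipeline; as a hedged sketch of the approach, the snippet below trains an SVM on windows of hardware performance-counter samples labeled by the protocol in use. The input files, their layout, and the choice of an RBF-kernel SVM are assumptions for illustration only.

```python
# Hedged sketch: classify the protocol/version from hardware event counts.
# Input files and feature layout are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

# X: one row of event counts (instructions, branches, cache misses, ...)
# per sampling window; y: the protocol label observed during that window.
X = np.load("hw_event_windows.npy")   # hypothetical captured features
y = np.load("protocol_labels.npy")    # hypothetical labels (TLS 1.2, SSL 3, ...)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```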

3 citations


Journal ArticleDOI
TL;DR: In this article, high-k dielectric films with an Er2O3/Al2O3/Si structure were fabricated by the pulsed laser deposition (PLD) technique.

3 citations


Journal ArticleDOI
TL;DR: SVM-JADE, a machine learning enhanced version of an adaptive differential evolution algorithm (JADE), is proposed; it achieves energy-aware computing on many-core platforms when running multiple-program workloads and converges faster than JADE.
Abstract: The modern era of computing involves increasing the core count of the processor, which in turn increases its energy usage. Identifying the most energy-efficient way to run a multiple-program workload on a many-core processor while still maintaining a satisfactory performance level is a standing challenge. Automatically tuning the voltage and frequency levels of a many-core processor is an effective way to address this dilemma. The metrics we focus on optimizing are energy usage and the energy-delay product (EDP). To this end, we propose SVM-JADE, a machine learning enhanced version of an adaptive differential evolution algorithm (JADE). We monitor the energy and EDP values of different voltage and frequency combinations of the cores, or power islands, as the algorithm evolves through generations. By adding a well-tuned support vector machine (SVM) to JADE, creating SVM-JADE, we are able to achieve energy-aware computing on many-core platforms when running multiple-program workloads. Our experimental results show that our algorithm improves energy usage by a further 8.3% and EDP by a further 7.7% relative to JADE. Moreover, with both EDP-based and energy-based fitness functions, SVM-JADE converges faster than JADE.
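
A much-simplified sketch of the core idea follows: a classic differential-evolution loop in which an SVM, trained on configurations evaluated so far, screens trial voltage/frequency settings before the costly energy/EDP measurement. JADE's adaptive parameter control is omitted, and the fitness function is a stand-in for a real power measurement.

```python
# Simplified sketch of SVM-filtered differential evolution. The SVM skips
# the expensive evaluation of trials it predicts to be poor. Not the
# paper's exact algorithm: JADE's parameter adaptation is left out.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fitness(x):
    # Placeholder for measuring energy or EDP of a V/F configuration x.
    return np.sum((x - 0.3) ** 2)

dim, pop_size, F, CR = 8, 20, 0.5, 0.9
pop = rng.random((pop_size, dim))
fit = np.array([fitness(x) for x in pop])
history_X, history_y = list(pop), list(fit < np.median(fit))

for gen in range(50):
    svm = SVC().fit(np.array(history_X), np.array(history_y))
    for i in range(pop_size):
        a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
        trial = np.clip(np.where(rng.random(dim) < CR,
                                 a + F * (b - c), pop[i]), 0.0, 1.0)
        # Skip the costly measurement if the SVM predicts a poor trial.
        if not svm.predict(trial.reshape(1, -1))[0]:
            continue
        f_trial = fitness(trial)
        history_X.append(trial); history_y.append(f_trial < np.median(fit))
        if f_trial < fit[i]:
            pop[i], fit[i] = trial, f_trial

print("best EDP-like fitness:", fit.min())
```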

2 citations


Journal ArticleDOI
TL;DR: This paper presents the effort of porting two EEG processing algorithms to Intel's concept vehicle, the single-chip cloud computer (SCC), a fully programmable 48-core prototype with an on-chip network, advanced power management technologies, and support for message passing.
Abstract: Epilepsy is the most frequent neurological disorder other than stroke. The electroencephalogram (EEG) is the main tool used for monitoring and recording brain signals. In this study, we target two detection algorithms that are essential in the diagnosis of epileptic patients: they detect high frequency oscillations (HFO) and interictal spikes (IIS), respectively, in subdural EEG recordings. This paper presents our efforts in porting both EEG processing algorithms to Intel's concept vehicle, the single-chip cloud computer (SCC), a fully programmable 48-core prototype provided with an on-chip network, advanced power management technologies, and support for message passing. Several experiments are presented for different SCC configurations, in which we vary the number of cores used and their respective voltage/frequency settings. The application was decomposed into two execution regions (i.e., load and execution). Results are presented in the form of performance, power, energy, and energy-delay product (EDP) metrics for each experiment.
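
The SCC parallelizes work via on-chip message passing; as an analogy only, the sketch below splits a simple HFO-style detector across worker processes, echoing the paper's load/execution decomposition. The sampling rate, band edges, and threshold rule are illustrative assumptions, not the paper's algorithms.

```python
# Analogy sketch: per-channel HFO-style detection fanned out over workers.
# On the SCC this would use on-chip message passing; multiprocessing
# stands in here. All detector parameters below are assumed values.
import numpy as np
from multiprocessing import Pool
from scipy.signal import butter, filtfilt

FS = 2000.0  # sampling rate in Hz (assumed)

def detect_hfo(channel):
    """Flag windows whose 80-500 Hz band energy exceeds a baseline multiple."""
    b, a = butter(4, [80 / (FS / 2), 500 / (FS / 2)], btype="band")
    band = filtfilt(b, a, channel)
    win = int(0.1 * FS)                       # 100 ms windows
    energy = np.array([np.sum(band[i:i + win] ** 2)
                       for i in range(0, len(band) - win, win)])
    return np.nonzero(energy > 5 * np.median(energy))[0]

if __name__ == "__main__":
    eeg = np.random.randn(48, int(60 * FS))   # stand-in for subdural recordings
    with Pool(8) as pool:                     # "execution" region, parallelized
        events = pool.map(detect_hfo, list(eeg))
    print([len(e) for e in events])
```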

1 citation


Book ChapterDOI
15 Jun 2017
TL;DR: Distributed SDN controllers can address the scalability, reliability, and performance issues that a centralized SDN controller suffers from.
Abstract: SDN is a new network operation and management solution that decouples the control and data planes in a communication network. SDN makes it easier for network administrators to create, modify, and manage dynamic networks by abstracting low-level functions and network structure. Generally, SDN uses a centralized controller, which offers a global view of the entire network. This feature gives administrators the flexibility to define, at the software level, strategies for how network flows are forwarded. As research on SDN has advanced, however, distributed SDN controllers have also been introduced. One reason for the emergence of distributed SDN controllers is that they can address the scalability, reliability, and performance issues that a centralized SDN controller suffers from.
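
As a toy illustration of the control/data-plane split the abstract describes, the sketch below has switches hold only flow tables while a controller with a global view computes routes on a table miss. None of this is a real controller API; a distributed deployment would replicate the controller's topology state across instances rather than keep it in one process.

```python
# Toy control/data-plane split. Switches keep flow tables (data plane);
# the controller holds the global view and installs rules on a miss.
# Hypothetical classes for illustration, not a real SDN controller API.
class Controller:
    """Centralized global view; the scalability/reliability single point."""
    def __init__(self, topology):
        self.topology = topology                         # global network view

    def route(self, src, dst):
        return f"{src}->{self.topology.get((src, dst), 'drop')}"

class Switch:
    def __init__(self, name, controller):
        self.name, self.controller, self.flow_table = name, controller, {}

    def forward(self, dst):
        if dst not in self.flow_table:                   # table miss
            self.flow_table[dst] = self.controller.route(self.name, dst)
        return self.flow_table[dst]

topo = {("s1", "h2"): "s2", ("s2", "h1"): "s1"}
s1 = Switch("s1", Controller(topo))
print(s1.forward("h2"))   # controller installs the rule on the first miss
print(s1.forward("h2"))   # later packets hit the cached flow entry
```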

1 citation


Proceedings ArticleDOI
01 Nov 2017
TL;DR: This paper first demonstrates the severity of communication latency problems, then uses LQCD simulations as a case study to show how value prediction techniques can reduce the communication overheads, thus leading to higher performance without adding more expensive hardware.
Abstract: Communication latency problems are universal and have become a major performance bottleneck as we scale in big data infrastructure and many-core architectures. Specifically, research institutes around the world have built specialized supercomputers with powerful computation units in order to accelerate scientific computation. However, the problem often comes from the communication side instead of the computation side. In this paper we first demonstrate the severity of communication latency problems. Then we use Lattice Quantum Chromo Dynamic (LQCD) simulations as a case study to show how value prediction techniques can reduce the communication overheads, thus leading to higher performance without adding more expensive hardware. In detail, we first implement a software value predictor on LQCD simulations: our results indicate that 22.15% of the predictions result in performance gain and only 2.65% of the predictions lead to rollbacks. Next we explore the hardware value predictor design, which results in a 20-fold reduction of the prediction latency. In addition, based on the observation that the full range of floating point accuracy may not always be needed, we propose and implement an initial design of the tolerance value predictor: as the tolerance range increases, the prediction accuracy also increases dramatically.
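
A hedged sketch of the tolerance value predictor idea: speculate on a remote operand with a last-value prediction, proceed immediately, and count a rollback only when the real value arrives outside the tolerance band. The predictor choice, tolerance policy, and synthetic value stream below are illustrative, not the paper's exact design.

```python
# Sketch: last-value prediction with a relative tolerance band. Widening
# the band converts more arrivals into "hits", mirroring the paper's
# observation that prediction accuracy rises with the tolerance range.
import numpy as np

class ToleranceValuePredictor:
    def __init__(self, tolerance):
        self.tolerance = tolerance
        self.last = {}                      # last observed value per message tag

    def predict(self, tag):
        return self.last.get(tag, 0.0)      # last-value prediction

    def confirm(self, tag, actual):
        """Return True if speculation stands; False forces a rollback."""
        ok = abs(self.predict(tag) - actual) <= self.tolerance * max(abs(actual), 1e-12)
        self.last[tag] = actual
        return ok

rng = np.random.default_rng(1)
pred = ToleranceValuePredictor(tolerance=1e-3)
hits = rollbacks = 0
value = 1.0
for step in range(10000):                   # stand-in for halo-exchange operands
    value *= 1.0 + rng.normal(0, 5e-4)      # slowly drifting remote value
    if pred.confirm("halo", value):
        hits += 1                           # speculative work is kept
    else:
        rollbacks += 1                      # recompute with the real value
print(f"kept {hits}, rolled back {rollbacks}")
```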
