Author

Cong Liu

Bio: Cong Liu is an academic researcher from the University of Texas at Dallas. The author has contributed to research in topics such as Scheduling (computing) and Metrical task system. The author has an h-index of 29 and has co-authored 128 publications receiving 2,790 citations. Previous affiliations of Cong Liu include the University of Alberta and the University of North Carolina at Chapel Hill.


Papers
Proceedings ArticleDOI
03 Sep 2018
TL;DR: The experimental results demonstrate that DeepRoad can detect thousands of inconsistent behaviors in DNN-based autonomous driving systems and can effectively validate input images, potentially enhancing system robustness.
Abstract: While Deep Neural Networks (DNNs) have established the fundamentals of image-based autonomous driving systems, they may exhibit erroneous behaviors and cause fatal accidents. To address the safety issues in autonomous driving systems, a recent set of testing techniques have been designed to automatically generate artificial driving scenes to enrich the test suite, e.g., by generating new input images transformed from the original ones. However, these techniques are insufficient due to two limitations: first, many such synthetic images lack diversity of driving scenes, which compromises the resulting efficacy and reliability; second, for machine-learning-based systems, a mismatch between the training and application domains can dramatically degrade system accuracy, so inputs must be validated to improve system robustness. In this paper, we propose DeepRoad, an unsupervised DNN-based framework for automatically testing the consistency of DNN-based autonomous driving systems and for online input validation. First, DeepRoad automatically synthesizes large amounts of diverse driving scenes without using image transformation rules (e.g., scale, shear, and rotation). In particular, DeepRoad is able to produce driving scenes with various weather conditions (including rather extreme ones) by applying Generative Adversarial Networks (GANs) along with the corresponding real-world weather scenes. Second, DeepRoad utilizes metamorphic testing techniques to check the consistency of such systems using the synthetic images. Third, DeepRoad validates input images for DNN-based systems by measuring the distance between the input and training images using their VGGNet features. We implement DeepRoad to test three well-recognized DNN-based autonomous driving systems from the Udacity self-driving car challenge. The experimental results demonstrate that DeepRoad can detect thousands of inconsistent behaviors for these systems and can effectively validate input images, potentially enhancing system robustness.
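The input-validation step described above, measuring the distance between an input image and the training images in VGGNet feature space, can be illustrated with a minimal sketch. The `vgg_features` helper below is a hypothetical stand-in for a real VGGNet feature extractor, and the distance threshold is an assumed tuning parameter; this is an illustration of the idea, not DeepRoad's implementation.

```python
import numpy as np

def vgg_features(image):
    """Hypothetical stand-in for a VGGNet-based feature extractor.

    The paper uses VGGNet features; here we simply flatten the image so the
    sketch runs without a deep-learning dependency.
    """
    return np.asarray(image, dtype=float).ravel()

def validate_input(image, training_images, threshold):
    """Accept an input only if its minimum feature distance to the training
    set is within `threshold`; otherwise treat it as out-of-domain."""
    feat = vgg_features(image)
    dists = [np.linalg.norm(feat - vgg_features(t)) for t in training_images]
    return min(dists) <= threshold

# Toy usage with random "images"; the threshold value is arbitrary.
rng = np.random.default_rng(0)
train = [rng.random((8, 8)) for _ in range(10)]
print(validate_input(rng.random((8, 8)), train, threshold=2.5))
```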

445 citations

Journal ArticleDOI
TL;DR: This paper adopts a power-law decaying data model verified by real data sets and proposes a random projection-based estimation algorithm for this data model, which requires fewer compressed measurements and greatly reduces the energy consumption.
Abstract: Data collection is a crucial operation in wireless sensor networks. The design of data collection schemes is challenging due to the limited energy supply and the hot-spot problem. Leveraging empirical observations that sensory data possess strong spatiotemporal compressibility, this paper proposes a novel compressive data collection scheme for wireless sensor networks. We adopt a power-law decaying data model verified by real data sets and then propose a random projection-based estimation algorithm for this data model. Our scheme requires fewer compressed measurements and thus greatly reduces the energy consumption. It allows a simple routing strategy without much computation and control overhead, which leads to strong robustness in practical applications. Analytically, we prove that it achieves the optimal estimation error bound. Evaluations on real data sets (from the GreenOrbs, Intel Lab, and NBDC-CTD projects) show that, compared with existing approaches, this new scheme prolongs the network lifetime by $1.5 \times$ to $2 \times$ for estimation errors of 5-20 percent.
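The core idea, recovering a signal with power-law-decaying coefficients from a few random projections, can be sketched as follows. The recovery loop here is a generic orthogonal matching pursuit used purely as an illustrative stand-in for the paper's estimation algorithm, and the problem sizes are made-up toy values.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 64, 8               # signal length, measurements, coefficients estimated (assumed)

# Power-law decaying coefficients (the data model adopted in the paper),
# placed at randomly permuted positions with random signs.
decay = np.arange(1, n + 1, dtype=float) ** -1.5
x = np.zeros(n)
x[rng.permutation(n)] = decay * rng.choice([-1.0, 1.0], size=n)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random projection matrix
y = Phi @ x                                      # compressed measurements

# Generic orthogonal matching pursuit: greedily estimate the k largest coefficients.
residual, idx = y.copy(), []
for _ in range(k):
    idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
    sub = Phi[:, idx]
    coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
    residual = y - sub @ coef

x_hat = np.zeros(n)
x_hat[idx] = coef
print("relative estimation error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```

Because the coefficients decay rapidly, a small number of measurements already captures most of the signal energy, which is the intuition behind the scheme's reduced measurement count.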

263 citations

Proceedings ArticleDOI
24 Mar 2014
TL;DR: It is shown that by utilizing an appropriate error recovery, the proposed approximate multiplier achieves similar processing accuracy as traditional exact multipliers but with significant improvements in power and performance.
Abstract: Approximate circuits have been considered for error-tolerant applications that can tolerate some loss of accuracy with improved performance and energy efficiency. Multipliers are key arithmetic circuits in many such applications such as digital signal processing (DSP). In this paper, a novel approximate multiplier with a lower power consumption and a shorter critical path than traditional multipliers is proposed for high-performance DSP applications. This multiplier leverages a newly-designed approximate adder that limits its carry propagation to the nearest neighbors for fast partial product accumulation. Different levels of accuracy can be achieved through a configurable error recovery by using different numbers of most significant bits (MSBs) for error reduction. The approximate multiplier has a low mean error distance, i.e., most of the errors are not significant in magnitude. Compared to the Wallace multiplier, a 16-bit approximate multiplier implemented in a 28nm CMOS process shows a reduction in delay and power of 20% and up to 69%, respectively. It is shown that by utilizing an appropriate error recovery, the proposed approximate multiplier achieves similar processing accuracy as traditional exact multipliers but with significant improvements in power and performance.
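The key idea of limiting carry propagation to the nearest neighbor can be shown with a small behavioral model. This is an illustrative simplification rather than the circuit proposed in the paper, and the bit width and operands below are arbitrary.

```python
import random

def approx_add(a, b, width=16):
    """Behavioral sketch of an approximate adder in which the carry into each
    bit comes only from the immediately lower bit pair, so no long carry
    chains form. Not the exact design from the paper."""
    result = 0
    for i in range(width):
        ai = (a >> i) & 1
        bi = (b >> i) & 1
        # Carry-in generated only by the neighboring lower bit pair.
        ci = ((a >> (i - 1)) & 1) & ((b >> (i - 1)) & 1) if i > 0 else 0
        result |= (ai ^ bi ^ ci) << i
    return result

# Compare against exact addition on a few random operands.
random.seed(0)
errors = []
for _ in range(5):
    a, b = random.randrange(1 << 15), random.randrange(1 << 15)
    errors.append(abs(approx_add(a, b) - (a + b)))
print("error distances:", errors)
```

Errors arise only where a carry would have needed to propagate further than one position, which is why the mean error distance of such designs tends to be small.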

225 citations

Journal ArticleDOI
TL;DR: A review and classification are presented for current designs of approximate arithmetic circuits, including adders, multipliers, and dividers; improvements in delay, power, and area are obtained for the detection of differences in images by using approximate dividers.
Abstract: Often as the most important arithmetic modules in a processor, adders, multipliers, and dividers determine the performance and energy efficiency of many computing tasks. The demand of higher speed and power efficiency, as well as the feature of error resilience in many applications (e.g., multimedia, recognition, and data analytics), have driven the development of approximate arithmetic design. In this article, a review and classification are presented for the current designs of approximate arithmetic circuits including adders, multipliers, and dividers. A comprehensive and comparative evaluation of their error and circuit characteristics is performed for understanding the features of various designs. By using approximate multipliers and adders, the circuit for an image processing application consumes as little as 47% of the power and 36% of the power-delay product of an accurate design while achieving similar image processing quality. Improvements in delay, power, and area are obtained for the detection of differences in images by using approximate dividers.
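A comparative evaluation like the one described rests on error metrics such as the error rate and the mean error distance (MED). The sketch below computes these metrics for a toy truncation-based approximate multiplier; the design, bit width, and trial count are hypothetical and only show how such metrics are gathered.

```python
import random

def truncated_multiply(a, b, cut=8):
    """Toy approximate multiplier: drop the `cut` least-significant bits of
    each operand before multiplying (an illustrative design, not one surveyed
    in the article)."""
    return ((a >> cut) * (b >> cut)) << (2 * cut)

def error_stats(approx, trials=10000, width=16, seed=0):
    """Error metrics commonly used to compare approximate arithmetic circuits:
    error rate (fraction of inputs with any error) and mean error distance."""
    rng = random.Random(seed)
    errors = []
    for _ in range(trials):
        a, b = rng.randrange(1 << width), rng.randrange(1 << width)
        errors.append(abs(approx(a, b) - a * b))
    error_rate = sum(e > 0 for e in errors) / trials
    med = sum(errors) / trials
    return error_rate, med

print(error_stats(truncated_multiply))
```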

197 citations

Journal ArticleDOI
TL;DR: A systematic description of how self-suspending tasks can be handled in both soft and hard real-time systems is provided, along with an explanation of the existing misconceptions and their potential remedies.
Abstract: In general computing systems, a job (process/task) may suspend itself whilst it is waiting for some activity to complete, e.g., an accelerator to return data. In real-time systems, such self-suspension can cause substantial performance/schedulability degradation. This observation, first made in 1988, has led to the investigation of the impact of self-suspension on timing predictability, and many relevant results have been published since. Unfortunately, as it has recently come to light, a number of the existing results are flawed. To provide a correct platform on which future research can be built, this paper reviews the state of the art in the design and analysis of scheduling algorithms and schedulability tests for self-suspending tasks in real-time systems. We provide (1) a systematic description of how self-suspending tasks can be handled in both soft and hard real-time systems; (2) an explanation of the existing misconceptions and their potential remedies; (3) an assessment of the influence of such flawed analyses on partitioned multiprocessor fixed-priority scheduling when tasks synchronize access to shared resources; and (4) a discussion of the computational complexity of analyses for different self-suspension task models.
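A common treatment in this literature is the suspension-oblivious approach, in which a task's suspension time is folded into its execution time before the standard fixed-priority response-time recurrence is applied. The sketch below illustrates that treatment on a hypothetical uniprocessor task set with implicit deadlines; it is only a minimal example, not the paper's analysis.

```python
import math

def response_times(tasks):
    """Suspension-oblivious response-time bounds for fixed-priority tasks on a
    uniprocessor. Each task is (C, S, T) = (WCET, max suspension, period),
    listed in decreasing priority; suspension is treated as extra execution."""
    inflated = [(C + S, T) for C, S, T in tasks]     # fold suspension into WCET
    results = []
    for i, (Ci, Ti) in enumerate(inflated):
        R = Ci
        while True:
            R_new = Ci + sum(math.ceil(R / Tj) * Cj for Cj, Tj in inflated[:i])
            if R_new == R or R_new > Ti:
                break
            R = R_new
        results.append(R_new if R_new <= Ti else None)   # None: deadline missed
    return results

# Hypothetical task set: (WCET, suspension, period) per task.
tasks = [(1, 0, 5), (2, 1, 10), (3, 2, 20)]
print(response_times(tasks))
```

Treating suspension as computation is safe but pessimistic, which is one source of the accuracy/complexity trade-offs the survey examines.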

90 citations


Cited by
Christopher M. Bishop
01 Jan 2006
TL;DR: This book covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, continuous latent variables, sequential data, and combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI
01 May 1975
TL;DR: Fundamentals of Queueing Theory, Fourth Edition, as discussed by the authors, provides a comprehensive overview of simple and more advanced queueing models, with a self-contained presentation of key concepts and formulae.
Abstract: Praise for the Third Edition: "This is one of the best books available. Its excellent organizational structure allows quick reference to specific models and its clear presentation . . . solidifies the understanding of the concepts being presented." (IIE Transactions on Operations Engineering) Thoroughly revised and expanded to reflect the latest developments in the field, Fundamentals of Queueing Theory, Fourth Edition continues to present the basic statistical principles that are necessary to analyze the probabilistic nature of queues. Rather than presenting a narrow focus on the subject, this update illustrates the wide-reaching, fundamental concepts in queueing theory and its applications to diverse areas such as computer science, engineering, business, and operations research. This update takes a numerical approach to understanding and making probable estimations relating to queues, with a comprehensive outline of simple and more advanced queueing models. Newly featured topics of the Fourth Edition include retrial queues, approximations for queueing networks, numerical inversion of transforms, and determining the appropriate number of servers to balance quality and cost of service. Each chapter provides a self-contained presentation of key concepts and formulae, allowing readers to work with each section independently, while a summary table at the end of the book outlines the types of queues that have been discussed and their results. In addition, two new appendices have been added, discussing transforms and generating functions as well as the fundamentals of differential and difference equations. New examples are now included along with problems that incorporate QtsPlus software, which is freely available via the book's related Web site. With its accessible style and wealth of real-world examples, Fundamentals of Queueing Theory, Fourth Edition is an ideal book for courses on queueing theory at the upper-undergraduate and graduate levels. It is also a valuable resource for researchers and practitioners who analyze congestion in the fields of telecommunications, transportation, aviation, and management science.
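One of the newly featured topics, determining the appropriate number of servers to balance quality and cost of service, can be illustrated with the classical M/M/c (Erlang C) formulas. The sketch below is a minimal example with assumed arrival and service rates, not material taken from the book.

```python
from math import factorial

def erlang_c(c, a):
    """Probability that an arriving customer must wait in an M/M/c queue with
    offered load a = lambda/mu (requires a < c for stability)."""
    top = (a ** c / factorial(c)) * (c / (c - a))
    bottom = sum(a ** k / factorial(k) for k in range(c)) + top
    return top / bottom

def servers_needed(lam, mu, max_wait):
    """Smallest number of servers keeping the mean queueing delay below
    `max_wait` -- the quality/cost trade-off mentioned above."""
    c = int(lam / mu) + 1                       # minimum stable server count
    while True:
        wq = erlang_c(c, lam / mu) / (c * mu - lam)   # mean wait in queue
        if wq <= max_wait:
            return c, wq
        c += 1

# Hypothetical rates: 8 arrivals per unit time, service rate 1 per server.
print(servers_needed(lam=8.0, mu=1.0, max_wait=0.5))
```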

2,562 citations

01 Nov 1981
TL;DR: In this paper, the authors studied the effect of local derivatives on the detection of intensity edges in images, where the local difference of intensities is computed for each pixel in the image.
Abstract: Most of the signal processing that we will study in this course involves local operations on a signal, namely transforming the signal by applying linear combinations of values in the neighborhood of each sample point. You are familiar with such operations from Calculus, namely taking derivatives, and you are also familiar with this from optics, namely blurring a signal. We will be looking at sampled signals only. Let's start with a few basic examples. Local difference: Suppose we have a 1D image and we take the local difference of intensities, $DI(x) = \frac{1}{2}(I(x+1) - I(x-1))$, which gives a discrete approximation to a partial derivative. (We compute this for each x in the image.) What is the effect of such a transformation? One key idea is that such a derivative would be useful for marking positions where the intensity changes. Such a change is called an edge. It is important to detect edges in images because they often mark locations at which object properties change. These can include changes in illumination along a surface due to a shadow boundary, or a material (pigment) change, or a change in depth as when one object ends and another begins. The computational problem of finding intensity edges in images is called edge detection. We could look for positions at which DI(x) has a large negative or positive value. Large positive values indicate an edge that goes from low to high intensity, and large negative values indicate an edge that goes from high to low intensity. Example: Suppose the image consists of a single (slightly sloped) edge:
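Picking up that example, here is a minimal sketch of the local-difference edge detector applied to a 1-D image containing a single soft edge; the sample values and threshold are made up for illustration.

```python
import numpy as np

# 1-D image with a single (slightly sloped) low-to-high edge.
I = np.array([2, 2, 2, 3, 6, 9, 10, 10, 10], dtype=float)

# Central difference DI(x) = (I(x+1) - I(x-1)) / 2 at interior samples.
DI = 0.5 * (I[2:] - I[:-2])

# Positions where |DI| is large mark candidate edges; positive values
# indicate a low-to-high edge, negative values a high-to-low edge.
threshold = 1.0
edges = np.nonzero(np.abs(DI) >= threshold)[0] + 1   # +1 offsets interior indexing
print(DI, edges)
```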

1,829 citations

Book ChapterDOI
Eric V. Denardo
01 Jan 2011
TL;DR: This chapter shows how the simplex method simplifies when it is applied to a class of optimization problems known as "network flow models," and that when such a model has integer-valued data, the simplex method finds an optimal solution that is integer-valued.
Abstract: In this chapter, you will see how the simplex method simplifies when it is applied to a class of optimization problems that are known as “network flow models.” You will also see that if a network flow model has “integer-valued data,” the simplex method finds an optimal solution that is integer-valued.
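The integrality property described here can be seen on a tiny example. The sketch below uses networkx's network simplex routine on a small min-cost flow model with integer demands, capacities, and costs; the particular network and numbers are hypothetical.

```python
import networkx as nx

# A small network flow model with integer-valued data: node demands,
# edge capacities, and edge costs are all integers.
G = nx.DiGraph()
G.add_node("s", demand=-4)                      # source supplies 4 units
G.add_node("t", demand=4)                       # sink requires 4 units
G.add_edge("s", "a", capacity=3, weight=1)
G.add_edge("s", "b", capacity=2, weight=2)
G.add_edge("a", "t", capacity=3, weight=1)
G.add_edge("b", "t", capacity=2, weight=1)

# Network simplex returns an optimal flow; with integer data, every flow
# value in the solution is an integer.
cost, flow = nx.network_simplex(G)
print(cost)
print(flow)
```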

828 citations