Institution

University of California

Education · Oakland, California, United States
About: University of California is an education organization based in Oakland, California, United States. It is known for research contributions in the topics of Population and Layer (electronics). The organization has 55,175 authors who have published 52,933 publications, receiving 1,491,169 citations. It is also known as UC and the University of California System.


Papers
Proceedings Article · 02 Jun 2018
TL;DR: This work designs Bit Fusion, a bit-flexible accelerator comprising an array of bit-level processing elements that dynamically fuse to match the bitwidth of individual DNN layers, and compares it to two state-of-the-art DNN accelerators, Eyeriss and Stripes.
Abstract: Hardware acceleration of Deep Neural Networks (DNNs) aims to tame their enormous compute intensity. Fully realizing the potential of acceleration in this domain requires understanding and leveraging algorithmic properties of DNNs. This paper builds upon the algorithmic insight that the bitwidth of operations in DNNs can be reduced without compromising their classification accuracy. However, to prevent loss of accuracy, the bitwidth varies significantly across DNNs and may even be adjusted for each layer individually. Thus, a fixed-bitwidth accelerator would either offer limited benefits by accommodating the worst-case bitwidth requirements, or inevitably degrade final accuracy. To alleviate these deficiencies, this work introduces dynamic bit-level fusion/decomposition as a new dimension in the design of DNN accelerators. We explore this dimension by designing Bit Fusion, a bit-flexible accelerator that comprises an array of bit-level processing elements that dynamically fuse to match the bitwidth of individual DNN layers. This flexibility in the architecture enables minimizing the computation and the communication at the finest granularity possible with no loss in accuracy. We evaluate the benefits of Bit Fusion using eight real-world feed-forward and recurrent DNNs. The proposed microarchitecture is implemented in Verilog and synthesized in 45 nm technology. Using the synthesis results and cycle-accurate simulation, we compare the benefits of Bit Fusion to two state-of-the-art DNN accelerators, Eyeriss [1] and Stripes [2]. In the same area, frequency, and process technology, Bit Fusion offers 3.9X speedup and 5.1X energy savings over Eyeriss. Compared to Stripes, Bit Fusion provides 2.6X speedup and 3.9X energy reduction at the 45 nm node when Bit Fusion's area and frequency are set to those of Stripes. Scaling to the 16 nm GPU technology node, Bit Fusion almost matches the performance of a 250-Watt Titan Xp, which uses 8-bit vector instructions, while consuming merely 895 milliwatts of power.
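The arithmetic identity underlying bit-level fusion is easy to demonstrate. Below is a minimal Python sketch, assuming unsigned operands and a 2-bit slice width of our own choosing; it illustrates only the fuse/decompose arithmetic, not the Bit Fusion microarchitecture itself, which is implemented in Verilog.

```python
# Minimal sketch: decompose an unsigned multiply into 2-bit partial
# products, mirroring how bit-level processing elements could fuse to
# realize wider bitwidths. Illustrative only; names and the 2-bit slice
# width are our assumptions, not the paper's hardware.

def slices(x: int, bitwidth: int, slice_bits: int = 2):
    """Split x into little-endian chunks of slice_bits bits each."""
    return [(x >> i) & ((1 << slice_bits) - 1)
            for i in range(0, bitwidth, slice_bits)]

def fused_multiply(a: int, b: int, a_bits: int, b_bits: int) -> int:
    """Multiply via 2-bit x 2-bit partial products, shifted and summed."""
    total = 0
    for i, sa in enumerate(slices(a, a_bits)):
        for j, sb in enumerate(slices(b, b_bits)):
            # Each partial product is weighted by its slice positions.
            total += (sa * sb) << (2 * (i + j))
    return total

assert fused_multiply(13, 11, 8, 8) == 13 * 11  # 8-bit x 8-bit: 16 slice products
assert fused_multiply(3, 2, 2, 2) == 6          # 2-bit x 2-bit: a single product
```

The point of the decomposition is visible in the asserts: a layer quantized to a narrow bitwidth needs proportionally fewer slice products, which is where the speedup and energy savings at matched accuracy come from.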

442 citations

Posted Content
TL;DR: Long Short-Term Memory (LSTM), a special type of recurrent neural network, is used to model the variable-range dependencies entailed in the task of video summarization; domain adaptation across auxiliary annotated video datasets further improves summarization by reducing the discrepancies in statistical properties across those datasets.
Abstract: We propose a novel supervised learning technique for summarizing videos by automatically selecting keyframes or key subshots. Casting the problem as a structured prediction problem on sequential data, our main idea is to use Long Short-Term Memory (LSTM), a special type of recurrent neural network, to model the variable-range dependencies entailed in the task of video summarization. Our learning models attain state-of-the-art results on two benchmark video datasets. Detailed analysis justifies the design of the models. In particular, we show that it is crucial to take into consideration the sequential structures in videos and to model them. Besides advances in modeling techniques, we introduce techniques to address the need for a large amount of annotated data when training complex learning models. There, our main idea is to exploit the existence of auxiliary annotated video datasets, albeit heterogeneous in visual styles and contents. Specifically, we show that domain adaptation techniques can improve summarization by reducing the discrepancies in statistical properties across those datasets.
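As a rough illustration of the modeling idea, here is a minimal PyTorch sketch of an LSTM frame scorer. The class name, layer sizes, and the sigmoid scoring head are our illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch: score each frame's importance with a bidirectional
# LSTM over per-frame CNN features, then keep the top-scoring frames.
import torch
import torch.nn as nn

class FrameScorer(nn.Module):
    def __init__(self, feat_dim: int = 1024, hidden: int = 256):
        super().__init__()
        # A bidirectional LSTM captures variable-range temporal
        # dependencies in both directions along the frame sequence.
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # per-frame importance score

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, feat_dim) CNN features, one per frame
        h, _ = self.lstm(frames)
        return torch.sigmoid(self.head(h)).squeeze(-1)  # (batch, time)

scores = FrameScorer()(torch.randn(1, 120, 1024))   # 120 frames of features
keyframes = scores.topk(k=10, dim=1).indices        # 10 highest-scoring frames
```

A bidirectional recurrence is the natural fit here because a frame's importance depends on context both before and after it, which is the variable-range dependency the abstract refers to.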

441 citations

Journal Article
TL;DR: In this article, the authors consider a class of zero-sum two-player stochastic games called tug-of-war and use them to prove that every bounded real-valued Lipschitz function F on a subset Y of a length space X admits a unique absolutely minimal (AM) extension to X.
Abstract: We consider a class of zero-sum two-player stochastic games called tug-of-war and use them to prove that every bounded real-valued Lipschitz function F on a subset Y of a length space X admits a unique absolutely minimal (AM) extension to X, i.e., a unique Lipschitz extension u : X → ℝ for which Lip_U u = Lip_{∂U} u for all open U ⊂ X \ Y.
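Written out, the notation is the standard one for length spaces (a minimal LaTeX rendering; requires amsmath):

```latex
% Lipschitz constant of u over a set S, with d the length-space metric:
\[
  \operatorname{Lip}_S u \;=\; \sup_{\substack{x, y \in S \\ x \neq y}}
    \frac{|u(x) - u(y)|}{d(x, y)}.
\]
% u is an absolutely minimal extension of F when it agrees with F on Y
% and no open set away from Y has an interior Lipschitz constant larger
% than the one its boundary already forces:
\[
  u|_Y = F
  \quad\text{and}\quad
  \operatorname{Lip}_U u = \operatorname{Lip}_{\partial U} u
  \quad\text{for all open } U \subset X \setminus Y.
\]
```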

438 citations

Journal Article
TL;DR: This review sets out to identify and classify the sources of the unexpected divergences between design and actual function of synthetic systems and analyze possible methodologies aimed at controlling, if not preventing, unwanted contextual issues.
Abstract: Despite the efforts that bioengineers have exerted in designing and constructing biological processes that function according to a predetermined set of rules, their operation remains fundamentally circumstantial. The contextual situation in which molecules and single-celled or multi-cellular organisms find themselves shapes the way they interact, respond to the environment and process external information. Since the birth of the field, synthetic biologists have had to grapple with contextual issues, particularly when the molecular and genetic devices inexplicably fail to function as designed when tested in vivo. In this review, we set out to identify and classify the sources of the unexpected divergences between design and actual function of synthetic systems and analyze possible methodologies aimed at controlling, if not preventing, unwanted contextual issues.

438 citations

Proceedings Article · 01 Aug 2017
TL;DR: This paper proposes the CREST algorithm, which reformulates DCFs as a one-layer convolutional neural network and applies residual learning to account for appearance changes, reducing model degradation during online updates.
Abstract: Discriminative correlation filters (DCFs) have been shown to deliver superior performance in visual tracking. They need only a small set of training samples from the initial frame to generate an appearance model. However, existing DCFs learn the filters separately from feature extraction and update these filters using a moving-average operation with an empirical weight; such DCF trackers hardly benefit from end-to-end training. In this paper, we propose the CREST algorithm to reformulate DCFs as a one-layer convolutional neural network. Our method integrates feature extraction, response map generation, and model update into the neural network for end-to-end training. To reduce model degradation during online updates, we apply residual learning to take appearance changes into account. Extensive experiments on benchmark datasets demonstrate that our CREST tracker performs favorably against state-of-the-art trackers.
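The reformulation can be sketched in a few lines of PyTorch. The module below is our illustrative reading of the idea, with assumed kernel sizes and channel counts, and it collapses the paper's separate residual branches into a single one.

```python
# Minimal sketch: a correlation filter expressed as one conv layer that
# produces a response map, plus a small residual branch that absorbs
# appearance changes during online updates. Sizes are our assumptions.
import torch
import torch.nn as nn

class CRESTLike(nn.Module):
    def __init__(self, channels: int = 64, filt: int = 31):
        super().__init__()
        # Base branch: the DCF reformulated as a single convolutional
        # layer whose kernel plays the role of the correlation filter.
        self.base = nn.Conv2d(channels, 1, filt, padding=filt // 2)
        # Residual branch: learns what the base filter misses, so online
        # adaptation happens without overwriting the base model.
        self.residual = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, channels, H, W) features of the search region;
        # the output response map peaks at the target location.
        return self.base(feats) + self.residual(feats)

response = CRESTLike()(torch.randn(1, 64, 125, 125))  # (1, 1, 125, 125)
```

Because both branches are ordinary convolutions, feature extraction, response generation, and model update all sit in one differentiable pipeline, which is what makes the end-to-end training possible.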

437 citations


Authors

Showing the top 15 of 55,232 authors, ranked by h-index:

Name                    H-index   Papers   Citations
Meir J. Stampfer        277       1,414    283,776
George M. Whitesides    240       1,739    269,833
Michael Karin           236       704      226,485
Fred H. Gage            216       967      185,732
Rob Knight              201       1,061    253,207
Martin White            196       2,038    232,387
Simon D. M. White       189       795      231,645
Scott M. Grundy         187       841      231,821
Peidong Yang            183       562      144,351
Patrick O. Brown        183       755      200,985
Michael G. Rosenfeld    178       504      107,707
George M. Church        172       900      120,514
David Haussler          172       488      224,960
Yang Yang               171       2,644    153,049
Alan J. Heeger          171       913      147,492
Network Information

Related Institutions (5)

Institution                           Papers    Citations   Relatedness
Cornell University                    235.5K    12.2M       95%
University of California, Berkeley    265.6K    16.8M       94%
University of Minnesota               257.9K    11.9M       94%
University of Wisconsin-Madison       237.5K    11.8M       94%
Stanford University                   320.3K    21.8M       93%

Performance Metrics

No. of papers from the institution in previous years:

Year   Papers
2023   22
2022   105
2021   775
2020   1,069
2019   1,225
2018   1,684