Author

Jan van Leeuwen

Bio: Jan van Leeuwen is an academic researcher at Utrecht University. He has contributed to research on topics including Turing machines and chordal graphs, has an h-index of 32, and has co-authored 144 publications receiving 6,689 citations. Previous affiliations of Jan van Leeuwen include the State University of New York System and the University of California, Berkeley.


Papers
Book
01 Jan 2001
TL;DR: Simplified variants of RC6 that omit its quadratic function and fixed rotation are examined to clarify the essential contribution of these two features to the overall security of RC6.
Abstract: RC6 has been submitted as a candidate for the Advanced Encryption Standard (AES). Two important features of RC6 that were absent from its predecessor RC5 are a quadratic function and a fixed rotation. By examining simplified variants that omit these features we clarify their essential contribution to the overall security of RC6.

1,487 citations
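
The two features under study are small enough to sketch directly. Below is a minimal Python illustration, assuming the standard RC6 parameters (32-bit words, hence a fixed rotation by lg 32 = 5 bits); it shows only the quadratic function and the fixed rotation, not a full RC6 implementation. The simplified variants examined in the paper omit one or both of these steps.

```python
# A minimal sketch (not a full RC6 implementation) of the two features the
# paper studies: the quadratic function f(x) = x(2x + 1) mod 2^32 and the
# fixed left-rotation by lg(w) = 5 bits that follows it in each round.

W = 32                  # word size in bits
MASK = (1 << W) - 1
LGW = 5                 # lg(32); amount of the fixed rotation

def rotl(x: int, r: int) -> int:
    """Rotate a 32-bit word left by r bits."""
    r %= W
    return ((x << r) | (x >> (W - r))) & MASK

def quad_then_rotate(x: int) -> int:
    """f(x) = x(2x + 1) mod 2^32, followed by the fixed rotation.

    The quadratic makes each output bit depend on many input bits; the
    fixed rotation moves the strongest (high-order) bits into positions
    that later drive RC6's data-dependent rotation amounts.
    """
    f = (x * (2 * x + 1)) & MASK
    return rotl(f, LGW)

# Omitting either step yields the kind of simplified variant the paper analyzes.
print(hex(quad_then_rotate(0x01234567)))
```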

Book
04 Jan 1994
TL;DR: Theoretical computer science provides the foundations for understanding and exploiting the concepts and mechanisms of computing and information processing, as discussed by the authors; the handbook gives professionals and students a comprehensive overview of the main results and developments in this rapidly evolving field.
Abstract: From the Publisher: Theoretical computer science provides the foundations for understanding and exploiting the concepts and mechanisms in computing and information processing. This handbook will provide professionals and students with a comprehensive overview of the main results and developments in this rapidly evolving field. It consists of thirty-seven chapters in two volumes, all addressing core areas of theoretical computer science as it is practiced today. The material is written by leading American and European researchers, and each volume may be used independently. Volume A covers models of computation, complexity theory, data structures, and efficient computation in many recognized subdisciplines of theoretical computer science. Volume B presents a choice of material on the theory of automata and rewriting systems, the foundations of modern programming languages, logics for program specification and verification, and several chapters on the theoretic modeling of advanced information processing. The organization of each volume reflects the development of theoretical computer science from its classical roots to the modern theoretical approaches in parallel and distributed computing. Extensive bibliographies, a subject index, and a list of contributors are included in each volume.

616 citations

Journal ArticleDOI
TL;DR: A fully dynamic maintenance algorithm for convex hulls that can process insertions and deletions of single points in only O(log² n) steps per transaction, where n is the number of points currently in the set.

505 citations
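
To make the problem concrete, here is a deliberately naive baseline, written as a sketch of my own and not the paper's data structure: it supports the same insert/delete/query interface but simply recomputes the hull with Andrew's monotone chain, at O(n log n) per query, rather than achieving the paper's O(log² n) per update.

```python
# A naive baseline for fully dynamic convex hulls: store the point set and
# recompute the hull on demand with Andrew's monotone chain. The paper's
# structure handles each insertion/deletion in O(log^2 n); this sketch only
# illustrates the interface such a structure supports.

def cross(o, a, b):
    """Cross product of OA x OB; positive means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

class NaiveDynamicHull:
    def __init__(self):
        self.points = set()

    def insert(self, p):
        self.points.add(p)

    def delete(self, p):
        self.points.discard(p)

    def hull(self):
        pts = sorted(self.points)
        if len(pts) <= 2:
            return pts
        lower, upper = [], []
        for p in pts:                      # build lower chain
            while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        for p in reversed(pts):            # build upper chain
            while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        return lower[:-1] + upper[:-1]     # counter-clockwise hull

h = NaiveDynamicHull()
for p in [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2)]:
    h.insert(p)
h.delete((2, 2))
print(h.hull())   # the four corners of the square
```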

Journal ArticleDOI
TL;DR: It is shown that two one-pass methods proposed by van Leeuwen and van der Weide are asymptotically optimal, whereas several other methods, including one proposed by Rem and advocated by Dijkstra, are slower than the best methods.
Abstract: This paper analyzes the asymptotic worst-case running time of a number of variants of the well-known method of path compression for maintaining a collection of disjoint sets under union. We show that two one-pass methods proposed by van Leeuwen and van der Weide are asymptotically optimal, whereas several other methods, including one proposed by Rem and advocated by Dijkstra, are slower than the best methods.

475 citations
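
The one-pass methods in question compact a find path in a single traversal, rather than walking it twice as classical path compression does. Below is a sketch of union by rank with path halving, one of the one-pass compaction rules of the family the paper analyzes (path halving and path splitting are the variants usually attributed to van Leeuwen and van der Weide); the class and names are my own illustration.

```python
# Disjoint-set union with a one-pass compaction rule. Path halving walks
# the find path once, making every other node point to its grandparent;
# combined with union by rank it attains the near-constant amortized
# bound the paper proves for the optimal one-pass methods.

class DisjointSets:
    def __init__(self, n: int):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x: int) -> int:
        # One pass: halve the path as we ascend.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # point to grandparent
            x = self.parent[x]
        return x

    def union(self, a: int, b: int) -> None:
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra                # attach the shallower tree
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

ds = DisjointSets(6)
ds.union(0, 1); ds.union(1, 2); ds.union(3, 4)
print(ds.find(2) == ds.find(0))   # True: same set
print(ds.find(3) == ds.find(5))   # False: different sets
```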


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories.

First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules.

Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs.

Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.

Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up to date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically.

Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations
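
The fourth scenario in the abstract, learning a per-user mail filter from examples, is easy to make concrete. The following is a hypothetical sketch of a tiny naive Bayes text classifier with add-one smoothing; it is my own illustration, not code from the paper, and the data and function names are invented.

```python
# A hypothetical illustration of learning a mail filter from examples
# instead of hand-written rules: a tiny naive Bayes text classifier
# with Laplace (add-one) smoothing.

import math
from collections import Counter

def train(examples):
    """examples: list of (text, label) pairs with label 'spam' or 'ham'."""
    counts = {'spam': Counter(), 'ham': Counter()}
    docs = Counter()
    for text, label in examples:
        docs[label] += 1
        counts[label].update(text.lower().split())
    return counts, docs

def classify(text, counts, docs):
    vocab = set(counts['spam']) | set(counts['ham'])
    total = sum(docs.values())
    best, best_score = None, float('-inf')
    for label in ('spam', 'ham'):
        score = math.log(docs[label] / total)           # class prior
        n = sum(counts[label].values())
        for word in text.lower().split():
            # Laplace-smoothed per-word likelihood.
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

mail = [("win a free prize now", "spam"),
        ("meeting agenda for monday", "ham"),
        ("free offer claim your prize", "spam"),
        ("lunch on monday?", "ham")]
counts, docs = train(mail)
print(classify("claim your free prize", counts, docs))  # -> spam
```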

Proceedings Article
15 Feb 2016
TL;DR: Deep Compression as mentioned in this paper proposes a three-stage pipeline: pruning, quantization, and Huffman coding to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy.
Abstract: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce "deep compression", a three-stage pipeline: pruning, trained quantization, and Huffman coding, which work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing; finally, we apply Huffman coding. After the first two steps we retrain the network to fine-tune the remaining connections and the quantized centroids. Pruning reduces the number of connections by 9x to 13x; quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x, from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU, and mobile GPU, the compressed network has a 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.

7,256 citations
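
The first two pipeline stages can be sketched on a single weight matrix. The following is a minimal illustration, not the authors' code: magnitude pruning zeroes the smallest weights, and a 1-D k-means pass replaces each surviving weight with one of 2^5 = 32 shared centroids, matching the 5-bit encoding described above. The retraining/fine-tuning step and the Huffman coding stage are omitted.

```python
# Sketch of the first two stages of deep compression on one weight matrix:
# magnitude pruning, then k-means weight sharing so each surviving weight
# is replaced by one of 2**bits shared centroid values.

import numpy as np

def prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude fraction of the weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

def quantize(weights, bits=5, iters=20):
    """Cluster nonzero weights into 2**bits shared values (1-D k-means)."""
    mask = weights != 0
    values = weights[mask]
    centroids = np.linspace(values.min(), values.max(), 2 ** bits)
    for _ in range(iters):
        assign = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        for k in range(centroids.size):
            members = values[assign == k]
            if members.size:                 # leave empty clusters untouched
                centroids[k] = members.mean()
    # Final assignment with the updated centroids.
    assign = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
    out = weights.copy()
    out[mask] = centroids[assign]
    return out

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
w_compressed = quantize(prune(w))
print(np.mean(w_compressed == 0))                       # ~0.9 sparsity
print(np.unique(w_compressed[w_compressed != 0]).size)  # <= 32 shared values
```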

MonographDOI
01 Jan 2006
TL;DR: This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms, and extends to planning under the differential constraints that arise when automating the motions of virtually any mechanical system.
Abstract: Planning algorithms are impacting technical disciplines and industries around the world, including robotics, computer-aided design, manufacturing, computer graphics, aerospace applications, drug design, and protein folding. This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms. The treatment is centered on robot motion planning but integrates material on planning in discrete spaces. A major part of the book is devoted to planning under uncertainty, including decision theory, Markov decision processes, and information spaces, which are the “configuration spaces” of all sensor-based planning problems. The last part of the book delves into planning under differential constraints that arise when automating the motions of virtually any mechanical system. Developed from courses taught by the author, the book is intended for students, engineers, and researchers in robotics, artificial intelligence, and control theory as well as computer graphics, algorithms, and computational biology.

6,340 citations
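
Since the abstract singles out Markov decision processes as a core tool for planning under uncertainty, a small worked example may help. The following is a hypothetical sketch of value iteration on a toy one-dimensional world; the world, costs, and slip model are invented for illustration and are not taken from the book.

```python
# A hypothetical sketch of value iteration, the basic dynamic-programming
# solver for Markov decision processes. States are cells 0..4 on a line;
# each move costs 1, and actions slip (reverse) with probability 0.1.

N, GOAL, GAMMA, SLIP = 5, 4, 0.95, 0.1

def clamp(s: int) -> int:
    """Keep the state on the track."""
    return max(0, min(N - 1, s))

V = [0.0] * N
for _ in range(200):                       # sweeps until effectively converged
    new_V = []
    for s in range(N):
        if s == GOAL:
            new_V.append(0.0)              # absorbing goal, zero cost-to-go
            continue
        best = float('-inf')
        for a in (-1, +1):
            intended = -1.0 + GAMMA * V[clamp(s + a)]
            slipped = -1.0 + GAMMA * V[clamp(s - a)]
            best = max(best, (1 - SLIP) * intended + SLIP * slipped)
        new_V.append(best)
    V = new_V

print([round(v, 2) for v in V])  # cost-to-go shrinks as states near the goal
```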

Book
07 Jan 1999

4,478 citations

Book
01 Jan 1980
TL;DR: This classic introduction to artificial intelligence describes fundamental AI ideas that underlie applications such as natural language processing, automatic programming, robotics, machine vision, automatic theorem proving, and intelligent data retrieval.
Abstract: A classic introduction to artificial intelligence intended to bridge the gap between theory and practice, "Principles of Artificial Intelligence" describes fundamental AI ideas that underlie applications such as natural language processing, automatic programming, robotics, machine vision, automatic theorem proving, and intelligent data retrieval. Rather than focusing on the subject matter of the applications, the book is organized around general computational concepts involving the kinds of data structures used, the types of operations performed on the data structures, and the properties of the control strategies used. "Principles of Artificial Intelligence" evolved from the author's courses and seminars at Stanford University and the University of Massachusetts, Amherst, and is suitable for text use in a senior or graduate AI course, or for individual study.

3,754 citations