Author

Jiří Matoušek

Bio: Jiří Matoušek is an academic researcher from Charles University in Prague. The author has contributed to research in topics: Convex polytope & Metric space. The author has an h-index of 46 and has co-authored 195 publications receiving 8,100 citations. Previous affiliations of Jiří Matoušek include Georgia Institute of Technology & Free University of Berlin.


Papers
Book
01 Jan 2003

724 citations

Proceedings ArticleDOI
01 Jun 1991
TL;DR: A theorem on partitioning point sets in E^d (d fixed) is proved and an efficient construction of partition trees based on it is given, which yields a simplex range searching structure with linear space, O(n log n) deterministic preprocessing time, and O(n^{1-1/d} (log n)^{O(1)}) query time.
Abstract: We prove a theorem on partitioning point sets in E^d (d fixed) and give an efficient construction of partition trees based on it. This yields a simplex range searching structure with linear space, O(n log n) deterministic preprocessing time, and O(n^{1-1/d} (log n)^{O(1)}) query time. With O(n^{1+δ}) preprocessing time, where δ is an arbitrary positive constant, a more complicated data structure yields query time O(n^{1-1/d} (log log n)^{O(1)}). This attains the lower bounds due to Chazelle [C1] up to polylogarithmic factors, improving and simplifying previous results of Chazelle et al. [CSW]. The partition result implies that, for r^d ≤ n^{1-δ}, a (1/r)-approximation of size O(r^d) with respect to simplices for an n-point set in E^d can be computed in O(n log r) deterministic time. A (1/r)-cutting of size O(r^d) for a collection of n hyperplanes in E^d can be computed in O(n log r) deterministic time, provided that r ≤ n^{1/(2d-1)}.

404 citations
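The way a partition tree is queried can be illustrated with a much simpler (and asymptotically weaker) sketch than the construction in the paper above: a kd-tree-style recursive partition of the plane answering halfplane counting queries by pruning cells disjoint from the range, taking whole subtrees contained in it, and scanning small leaves. This is not the paper's partitioning scheme, and all names in the code are made up for illustration.

```python
# A minimal, illustrative partition tree for halfplane range counting in the
# plane.  This is NOT the partitioning scheme of the paper above (which gives
# the stated O(n^{1-1/d}) polylogarithmic bounds); it only shows how a query
# walks the tree: prune cells disjoint from the range, take whole subtrees
# contained in it, and scan small leaves.
# All names (Node, build, count_halfplane) are made up for illustration.

from dataclasses import dataclass
from typing import List, Optional, Tuple

Point = Tuple[float, float]

@dataclass
class Node:
    points: List[Point]                        # stored points (leaves only)
    count: int                                 # points in this subtree
    bbox: Tuple[float, float, float, float]    # (xmin, ymin, xmax, ymax)
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def build(points: List[Point], depth: int = 0, leaf_size: int = 4) -> Node:
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    bbox = (min(xs), min(ys), max(xs), max(ys))
    if len(points) <= leaf_size:
        return Node(points, len(points), bbox)
    axis = depth % 2                           # alternate x / y median splits
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return Node([], len(points), bbox,
                build(pts[:mid], depth + 1, leaf_size),
                build(pts[mid:], depth + 1, leaf_size))

def count_halfplane(node: Node, a: float, b: float, c: float) -> int:
    """Count points p in node's subtree with a*p.x + b*p.y <= c."""
    xmin, ymin, xmax, ymax = node.bbox
    corners = [(xmin, ymin), (xmin, ymax), (xmax, ymin), (xmax, ymax)]
    inside = [a * x + b * y <= c for x, y in corners]
    if all(inside):                            # cell entirely inside the range
        return node.count
    if not any(inside):                        # a linear function over a box is
        return 0                               # extremal at a corner: cell disjoint
    if node.left is None:                      # leaf: check the few stored points
        return sum(1 for x, y in node.points if a * x + b * y <= c)
    return (count_halfplane(node.left, a, b, c) +
            count_halfplane(node.right, a, b, c))

if __name__ == "__main__":
    import random
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(1000)]
    tree = build(pts)
    # Count points below the line x + y = 1 and check against a direct scan.
    assert count_halfplane(tree, 1.0, 1.0, 1.0) == sum(1 for x, y in pts if x + y <= 1.0)
```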

Book
05 Oct 2006
TL;DR: This book introduces linear programming, with emphasis on the Simplex Method and duality, and shows how it applies to Integer Programming via LP Relaxation.
Abstract: Contents: What Is It, and What For? - Examples - Integer Programming and LP Relaxation - Theory of Linear Programming: First Steps - The Simplex Method - Duality of Linear Programming - Not Only the Simplex Method - More Applications - Software and Further Reading

358 citations
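For a concrete feel for the Simplex Method discussed in the book above, here is a minimal tableau implementation, a teaching sketch rather than anything from the book, restricted to the easy case of maximizing c·x subject to Ax ≤ b, x ≥ 0 with b ≥ 0, so the all-slack basis is feasible and no Phase I is needed; the function name and example are ours.

```python
# Minimal tableau simplex for:  maximize c.x  subject to  A x <= b, x >= 0,
# with b >= 0 so the slack basis is feasible and no Phase I is needed.
# A dense teaching sketch: no anti-cycling guarantees, no numerical care.

def simplex(A, b, c, eps=1e-9):
    m, n = len(A), len(c)
    # Tableau rows: constraint coefficients, slack columns, right-hand side.
    T = [row[:] + [1.0 if i == j else 0.0 for j in range(m)] + [float(b[i])]
         for i, row in enumerate(A)]
    # Objective row with negated costs (maximization).
    T.append([-float(ci) for ci in c] + [0.0] * m + [0.0])
    basis = [n + i for i in range(m)]           # slack variables start basic

    while True:
        # Entering column: first column with a negative reduced cost.
        col = next((j for j in range(n + m) if T[-1][j] < -eps), None)
        if col is None:
            break                               # optimal
        # Ratio test picks the leaving row.
        ratios = [(T[i][-1] / T[i][col], i) for i in range(m) if T[i][col] > eps]
        if not ratios:
            raise ValueError("LP is unbounded")
        _, row = min(ratios)
        # Pivot on (row, col).
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row and abs(T[i][col]) > eps:
                f = T[i][col]
                T[i] = [vi - f * vr for vi, vr in zip(T[i], T[row])]
        basis[row] = col

    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[-1][-1]                         # optimal solution and value

if __name__ == "__main__":
    # maximize 3x + 5y  s.t.  x <= 4,  2y <= 12,  3x + 2y <= 18,  x, y >= 0
    x, value = simplex([[1, 0], [0, 2], [3, 2]], [4, 12, 18], [3, 5])
    print(x, value)                             # expect [2.0, 6.0] and 36.0
```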

Journal ArticleDOI
TL;DR: The halfspace range reporting problem (given a finite set P of points in R^d, report the points of P lying in a query halfspace) can be solved substantially more efficiently than the more general simplex range searching problem.
Abstract: We consider the halfspace range reporting problem: given a finite set P of points in R^d, preprocess it so that given a query halfspace γ, the points of P ∩ γ can be reported efficiently. We show that with almost linear storage, this problem can be solved substantially more efficiently than the more general simplex range searching problem. We give a data structure for halfspace range reporting in dimensions d ≥ 4 with O(n log log n) space, O(n log n) deterministic preprocessing time, and O(n^{1-1/⌊d/2⌋} (log n)^c + k) query time, where c = c(d) is a constant and k = |P ∩ γ| (efficient solutions were known for d = 2, 3). For the halfspace emptiness problem, where we only want to know whether P ∩ γ = ∅, we can achieve query time O(n^{1-1/⌊d/2⌋} 2^{c' log* n}) with linear space and O(n^{1+δ}) preprocessing time (c' = c'(d) is a constant and δ > 0 is arbitrarily small but fixed).

292 citations
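The problem statement above is easy to make concrete. The brute-force baseline below (names ours, no preprocessing) simply scans P in O(nd) time per query; the point of the result is a data structure that answers the same query in roughly O(n^{1-1/⌊d/2⌋} polylog n + k) time instead.

```python
# Brute-force baseline for halfspace range reporting: no preprocessing,
# O(n*d) per query.  A halfspace is written as {x : a.x <= b}.  The data
# structure in the paper above answers the same query in roughly
# O(n^{1-1/floor(d/2)} polylog(n) + k) time after preprocessing.

from typing import List, Sequence

def report_halfspace(P: List[Sequence[float]],
                     a: Sequence[float], b: float) -> List[Sequence[float]]:
    """Return all points p in P with a.p <= b."""
    return [p for p in P if sum(ai * pi for ai, pi in zip(a, p)) <= b]

def halfspace_empty(P, a, b) -> bool:
    """Halfspace emptiness: is the intersection of P with {x : a.x <= b} empty?"""
    return not any(sum(ai * pi for ai, pi in zip(a, p)) <= b for p in P)

if __name__ == "__main__":
    P = [(0.0, 0.0, 0.0, 0.0), (1.0, 1.0, 1.0, 1.0), (2.0, 0.0, 0.0, 0.0)]
    print(report_halfspace(P, (1.0, 1.0, 1.0, 1.0), 2.0))   # first and third points
    print(halfspace_empty(P, (1.0, 0.0, 0.0, 0.0), -1.0))   # True
```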

Journal ArticleDOI
TL;DR: It is pointed out that if the number of points is not large enough in terms of the dimension, then nearly the lowest possible L2-discrepancy is attained by a pathological point set, and hence the L2-discrepancy may not be very relevant for relatively small sets.

290 citations
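For anchored (corner) boxes, the L2-discrepancy of a finite point set has a closed-form expression, Warnock's formula, so it can be computed exactly in O(n^2 d) time; the sketch below (function name ours) evaluates it and compares a spread-out set with a clumped one.

```python
# L2 star discrepancy (anchored boxes) of a finite point set in [0,1]^d via
# Warnock's closed-form formula, computable exactly in O(n^2 d) time.
# The function name is ours; the formula follows from expanding the squared
# discrepancy integral.

from math import prod, sqrt
from typing import Sequence

def l2_star_discrepancy(points: Sequence[Sequence[float]]) -> float:
    n = len(points)
    d = len(points[0])
    term1 = 3.0 ** (-d)
    term2 = (2.0 / n) * sum(
        prod((1.0 - x * x) / 2.0 for x in p) for p in points)
    term3 = (1.0 / (n * n)) * sum(
        prod(1.0 - max(pi, qi) for pi, qi in zip(p, q))
        for p in points for q in points)
    return sqrt(term1 - term2 + term3)

if __name__ == "__main__":
    # Compare a tiny spread-out set with a clumped ("pathological") one.
    good = [(0.125, 0.375), (0.375, 0.875), (0.625, 0.125), (0.875, 0.625)]
    bad = [(0.9, 0.9)] * 4
    print(l2_star_discrepancy(good), l2_star_discrepancy(bad))
```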


Cited by
Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, covering algorithmic and structural questions and touching on newer models, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations
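The basic model behind the survey above, the Erdős–Rényi random graph G(n, p), is straightforward to sample; the small sketch below (names ours) generates one instance and compares its edge count to the expectation n(n-1)p/2.

```python
# Sampling an Erdos-Renyi random graph G(n, p): each of the C(n, 2) possible
# edges is included independently with probability p.  Names are ours; this
# only illustrates the basic model discussed in the survey above.

import random
from itertools import combinations

def gnp(n: int, p: float, seed: int = 0):
    rng = random.Random(seed)
    return [(u, v) for u, v in combinations(range(n), 2) if rng.random() < p]

if __name__ == "__main__":
    n, p = 1000, 0.01
    edges = gnp(n, p)
    print(len(edges), "edges; expected about", n * (n - 1) * p / 2)
```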

Journal ArticleDOI
TL;DR: This article presents a practical convex hull algorithm that combines the two-dimensional Quickhull algorithm with the general-dimension Beneath-Beyond Algorithm, and provides empirical evidence that the algorithm runs faster when the input contains nonextreme points and that it uses less memory.
Abstract: The convex hull of a set of points is the smallest convex set that contains the points. This article presents a practical convex hull algorithm that combines the two-dimensional Quickhull algorithm with the general-dimension Beneath-Beyond Algorithm. It is similar to the randomized, incremental algorithms for convex hull and Delaunay triangulation. We provide empirical evidence that the algorithm runs faster when the input contains nonextreme points and that it uses less memory. Computational geometry algorithms have traditionally assumed that input sets are well behaved. When an algorithm is implemented with floating-point arithmetic, this assumption can lead to serious errors. We briefly describe a solution to this problem when computing the convex hull in two, three, or four dimensions. The output is a set of “thick” facets that contain all possible exact convex hulls of the input. A variation is effective in five or more dimensions.

5,050 citations
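The planar half of the combination described above, two-dimensional Quickhull, fits in a few lines. The sketch below (function names ours) returns the extreme points in clockwise order and deliberately omits the floating-point robustness machinery ("thick" facets) that the article describes.

```python
# A minimal planar Quickhull (function names ours).  Returns the extreme
# points in clockwise order and does none of the floating-point robustness
# work ("thick" facets) that the article above discusses.

from typing import List, Tuple

Point = Tuple[float, float]

def _cross(o: Point, a: Point, b: Point) -> float:
    """Twice the signed area of triangle (o, a, b); > 0 iff b is left of o->a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def _chain(a: Point, b: Point, pts: List[Point]) -> List[Point]:
    """Hull vertices strictly left of segment a->b, in order from a to b."""
    left = [p for p in pts if _cross(a, b, p) > 0]
    if not left:
        return []
    far = max(left, key=lambda p: _cross(a, b, p))    # farthest from line a-b
    return _chain(a, far, left) + [far] + _chain(far, b, left)

def quickhull(points: List[Point]) -> List[Point]:
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    a, b = pts[0], pts[-1]                            # leftmost and rightmost
    # Walk a -> (chain above the line a-b) -> b -> (chain below) -> back to a.
    return [a] + _chain(a, b, pts) + [b] + _chain(b, a, pts)

if __name__ == "__main__":
    square = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5), (0.2, 0.8)]
    print(quickhull(square))   # the four corners; interior points are dropped
```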

Proceedings ArticleDOI
23 May 1998
TL;DR: In this paper, the authors present two algorithms for the approximate nearest neighbor problem in high-dimensional spaces, for data sets of size n living in R^d, which require space that is only polynomial in n and d.
Abstract: We present two algorithms for the approximate nearest neighbor problem in high-dimensional spaces. For data sets of size n living in R^d, the algorithms require space that is only polynomial in n and d, while achieving query times that are sub-linear in n and polynomial in d. We also show applications to other high-dimensional geometric problems, such as the approximate minimum spanning tree. The article is based on the material from the authors' STOC'98 and FOCS'01 papers. It unifies, generalizes and simplifies the results from those papers.

4,478 citations
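A standard route to query time sublinear in n with space polynomial in n and d is locality-sensitive hashing. The sketch below uses random-hyperplane sign hashes for angular distance; it is in the same spirit as the results described above but is not a reproduction of the authors' schemes, and the class name and the parameters k and L are arbitrary illustrative choices.

```python
# Locality-sensitive hashing sketch for approximate nearest neighbor under
# angular (cosine) distance, using random-hyperplane sign hashes.  This is in
# the spirit of the high-dimensional ANN results above but is not the
# authors' construction; k, L, and all names are illustrative choices.

import random
from collections import defaultdict
from math import sqrt

def _dot(u, v):
    return sum(a * b for a, b in zip(u, v))

class SignLSH:
    def __init__(self, dim, k=8, L=10, seed=0):
        rng = random.Random(seed)
        # L tables, each keyed by k random-hyperplane sign bits.
        self.planes = [[[rng.gauss(0.0, 1.0) for _ in range(dim)]
                        for _ in range(k)] for _ in range(L)]
        self.tables = [defaultdict(list) for _ in range(L)]
        self.points = []

    def _key(self, t, p):
        return tuple(_dot(h, p) >= 0.0 for h in self.planes[t])

    def add(self, p):
        idx = len(self.points)
        self.points.append(p)
        for t, table in enumerate(self.tables):
            table[self._key(t, p)].append(idx)

    def query(self, q):
        """Return the best point found in the buckets that q falls into."""
        candidates = set()
        for t, table in enumerate(self.tables):
            candidates.update(table.get(self._key(t, q), []))
        if not candidates:
            return None
        qnorm = sqrt(_dot(q, q)) + 1e-12
        def cosine(i):
            p = self.points[i]
            return _dot(p, q) / ((sqrt(_dot(p, p)) + 1e-12) * qnorm)
        return self.points[max(candidates, key=cosine)]

if __name__ == "__main__":
    rng = random.Random(1)
    dim = 50
    data = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(2000)]
    index = SignLSH(dim)
    for p in data:
        index.add(p)
    query = [x + 0.05 * rng.gauss(0, 1) for x in data[123]]   # perturbed data point
    print(index.query(query) == data[123])                     # usually True
```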

Journal ArticleDOI
TL;DR: In this paper, it was shown that given an integer k ≥ 1, (1 + ϵ)-approximations to the k nearest neighbors of q can be computed in additional O(kd log n) time.
Abstract: Consider a set S of n data points in real d-dimensional space, R^d, where distances are measured using any Minkowski metric. In nearest neighbor searching, we preprocess S into a data structure so that, given any query point q ∈ R^d, the closest point of S to q can be reported quickly. Given any positive real ϵ, a data point p is a (1 + ϵ)-approximate nearest neighbor of q if its distance from q is within a factor of (1 + ϵ) of the distance to the true nearest neighbor. We show that it is possible to preprocess a set of n points in R^d in O(dn log n) time and O(dn) space, so that given a query point q ∈ R^d and ϵ > 0, a (1 + ϵ)-approximate nearest neighbor of q can be computed in O(c_{d,ϵ} log n) time, where c_{d,ϵ} ≤ d⌈1 + 6d/ϵ⌉^d is a factor depending only on dimension and ϵ. In general, we show that given an integer k ≥ 1, (1 + ϵ)-approximations to the k nearest neighbors of q can be computed in additional O(kd log n) time.

2,813 citations
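The guarantee itself is easy to state in code: a returned point is acceptable if its distance to q is within a factor (1 + ϵ) of the true nearest-neighbor distance. The checker below (names ours) spells this out for Minkowski metrics, with exact search done by brute force.

```python
# The (1 + eps)-approximate nearest neighbor guarantee spelled out for a
# Minkowski (L_m) metric, as in the abstract above.  Exact search here is a
# brute-force scan; the paper's data structure finds such a point in
# O(c_{d,eps} log n) time after O(d n log n) preprocessing.  Names are ours.

from typing import List, Sequence

def minkowski(p: Sequence[float], q: Sequence[float], m: float = 2.0) -> float:
    return sum(abs(a - b) ** m for a, b in zip(p, q)) ** (1.0 / m)

def exact_nn(S: List[Sequence[float]], q: Sequence[float], m: float = 2.0):
    """True nearest neighbor by brute force (O(nd) per query)."""
    return min(S, key=lambda p: minkowski(p, q, m))

def is_approx_nn(S, q, candidate, eps: float, m: float = 2.0) -> bool:
    """Is `candidate` within a factor (1 + eps) of the nearest-neighbor distance?"""
    best = minkowski(exact_nn(S, q, m), q, m)
    return minkowski(candidate, q, m) <= (1.0 + eps) * best

if __name__ == "__main__":
    S = [(0.0, 0.0), (1.0, 1.0), (1.05, 1.0), (3.0, 4.0)]
    q = (2.0, 2.0)
    # (1.05, 1.0) is the true nearest neighbor; (1.0, 1.0) is only slightly
    # farther, so it qualifies as a 1.2-approximate answer.
    print(is_approx_nn(S, q, (1.0, 1.0), eps=0.2))   # True
```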

Book
22 Jun 2009
TL;DR: This book provides a complete background on metaheuristics and shows readers how to design and implement efficient algorithms to solve complex optimization problems across a diverse range of applications, from networking and bioinformatics to engineering design, routing, and scheduling.
Abstract: A unified view of metaheuristics. This book provides a complete background on metaheuristics and shows readers how to design and implement efficient algorithms to solve complex optimization problems across a diverse range of applications, from networking and bioinformatics to engineering design, routing, and scheduling. It presents the main design questions for all families of metaheuristics and clearly illustrates how to implement the algorithms under a software framework to reuse both the design and code. Throughout the book, the key search components of metaheuristics are considered as a toolbox for: designing efficient metaheuristics (e.g., local search, tabu search, simulated annealing, evolutionary algorithms, particle swarm optimization, scatter search, ant colonies, bee colonies, artificial immune systems) for optimization problems; designing efficient metaheuristics for multi-objective optimization problems; designing hybrid, parallel, and distributed metaheuristics; and implementing metaheuristics on sequential and parallel machines. Using many case studies and treating design and implementation independently, this book gives readers the skills necessary to solve large-scale optimization problems quickly and efficiently. It is a valuable reference for practicing engineers and researchers from diverse areas dealing with optimization or machine learning, and for graduate students in computer science, operations research, control, engineering, business and management, and applied mathematics.

2,735 citations
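As a concrete instance of one item in the toolbox described above, here is a bare-bones simulated annealing loop minimizing a toy two-dimensional function; it is a generic textbook sketch, not code from the book, and the objective, neighborhood, and cooling schedule are arbitrary choices.

```python
# Bare-bones simulated annealing (one of the single-solution metaheuristics
# listed above) minimizing a toy 2-D function.  Generic textbook sketch; the
# objective, neighborhood move, and cooling schedule are arbitrary choices.

import math
import random

def objective(x, y):
    # Himmelblau's function: several local minima, all with value 0.
    return (x * x + y - 11) ** 2 + (x + y * y - 7) ** 2

def simulated_annealing(steps=20000, t0=10.0, cooling=0.9995, seed=0):
    rng = random.Random(seed)
    x, y = rng.uniform(-5, 5), rng.uniform(-5, 5)
    best = cur = objective(x, y)
    best_xy = (x, y)
    t = t0
    for _ in range(steps):
        # Neighborhood move: small Gaussian perturbation of the current point.
        nx, ny = x + rng.gauss(0, 0.3), y + rng.gauss(0, 0.3)
        cand = objective(nx, ny)
        # Accept improvements always, worse moves with Boltzmann probability.
        if cand < cur or rng.random() < math.exp((cur - cand) / max(t, 1e-12)):
            x, y, cur = nx, ny, cand
            if cur < best:
                best, best_xy = cur, (x, y)
        t *= cooling            # geometric cooling schedule
    return best_xy, best

if __name__ == "__main__":
    point, value = simulated_annealing()
    print(point, value)         # value should be close to 0 (a Himmelblau minimum)
```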