Topic

Time complexity

About: Time complexity is a research topic. Over the lifetime, 36,062 publications have been published within this topic, receiving 879,522 citations.


Papers
Book
01 Jan 1988
TL;DR: This book develops geometric techniques for proving the polynomial time solvability of problems in convexity theory, geometry, and combinatorial optimization; it extends earlier work for which the authors received the Fulkerson Prize, awarded by the Mathematical Programming Society and the American Mathematical Society.
Abstract: This book develops geometric techniques for proving the polynomial time solvability of problems in convexity theory, geometry, and - in particular - combinatorial optimization. It offers a unifying approach based on two fundamental geometric algorithms: the ellipsoid method for finding a point in a convex set, and the basis reduction method for point lattices. The ellipsoid method was used by Khachiyan to show the polynomial time solvability of linear programming. The basis reduction method yields a polynomial time procedure for certain diophantine approximation problems. A combination of these techniques makes it possible to show the polynomial time solvability of many questions concerning polyhedra - for instance, of linear programming problems having possibly exponentially many inequalities. Utilizing results from polyhedral combinatorics, the book provides short proofs of the polynomial time solvability of many combinatorial optimization problems. For a number of these problems, the geometric algorithms discussed in this book are the only techniques known to derive polynomial time solvability. This book is a continuation and extension of previous research of the authors, for which they received the Fulkerson Prize, awarded by the Mathematical Programming Society and the American Mathematical Society.
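To make the first of the two algorithms concrete, here is a minimal numerical sketch of the central-cut ellipsoid method for finding a point satisfying Ax <= b. This is our illustration, not the book's treatment (which works in a more general oracle model); the function name, starting radius, tolerance, and iteration budget are all assumptions chosen for the example.

```python
import numpy as np

def ellipsoid_feasible(A, b, radius=1e3, max_iter=10_000, tol=1e-9):
    """Find a point with A @ x <= b via the central-cut ellipsoid method.

    Starts from a ball of the given radius centred at the origin (assumed
    to contain the feasible region, if any) and repeatedly shrinks the
    ellipsoid using a violated constraint as the cutting hyperplane.
    """
    _, n = A.shape
    c = np.zeros(n)                  # ellipsoid centre
    P = (radius ** 2) * np.eye(n)    # ellipsoid shape matrix
    for _ in range(max_iter):
        violations = A @ c - b
        i = int(np.argmax(violations))
        if violations[i] <= tol:
            return c                 # centre satisfies every constraint
        a = A[i]
        # Standard central-cut update along the violated constraint.
        g = P @ a / np.sqrt(a @ P @ a)
        c = c - g / (n + 1)
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(g, g))
    return None                      # no feasible point found within budget

# Example: the box 1 <= x <= 2, 1 <= y <= 2 written as Ax <= b.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([2.0, -1.0, 2.0, -1.0])
print(ellipsoid_feasible(A, b))      # a point inside the box
```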

3,676 citations

Proceedings ArticleDOI
15 Dec 2008
TL;DR: The use of isolation enables the proposed method, iForest, to exploit sub-sampling to an extent that is not feasible in existing methods, creating an algorithm which has a linear time complexity with a low constant and a low memory requirement.
Abstract: Most existing model-based approaches to anomaly detection construct a profile of normal instances, then identify instances that do not conform to the normal profile as anomalies. This paper proposes a fundamentally different model-based method that explicitly isolates anomalies instead of profiling normal points. To the best of our knowledge, the concept of isolation has not been explored in the current literature. The use of isolation enables the proposed method, iForest, to exploit sub-sampling to an extent that is not feasible in existing methods, creating an algorithm which has a linear time complexity with a low constant and a low memory requirement. Our empirical evaluation shows that iForest compares favourably with ORCA (a near-linear time complexity distance-based method), LOF and random forests in terms of AUC and processing time, especially on large data sets. iForest also works well in high-dimensional problems which have a large number of irrelevant attributes, and in situations where the training set does not contain any anomalies.
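For illustration, a minimal sketch of the isolation idea the abstract describes: trees built from random axis-parallel splits isolate anomalies near the root, so short average path lengths signal anomalies. All function names and parameter defaults below are ours; the average-path-length normalisation follows the form given in the paper.

```python
import numpy as np

def grow_itree(X, rng, depth=0, max_depth=8):
    """Grow one isolation tree: split on a random feature at a random value."""
    if depth >= max_depth or len(X) <= 1:
        return {"size": len(X)}
    f = int(rng.integers(X.shape[1]))
    lo, hi = X[:, f].min(), X[:, f].max()
    if lo == hi:
        return {"size": len(X)}
    split = rng.uniform(lo, hi)
    mask = X[:, f] < split
    return {"f": f, "split": split,
            "left": grow_itree(X[mask], rng, depth + 1, max_depth),
            "right": grow_itree(X[~mask], rng, depth + 1, max_depth)}

def path_length(x, node, depth=0):
    """Depth at which x is isolated; unresolved leaves get an average-case term."""
    if "size" in node:
        n = node["size"]
        # c(n): average path length of an unsuccessful BST search (harmonic
        # numbers approximated by ln(n-1) + Euler's constant), as in the paper.
        return depth + (2 * (np.log(n - 1) + 0.5772156649) - 2 * (n - 1) / n
                        if n > 1 else 0.0)
    child = "left" if x[node["f"]] < node["split"] else "right"
    return path_length(x, node[child], depth + 1)

def anomaly_scores(X, n_trees=100, sample_size=256, seed=0):
    """Higher score = shorter average path = more anomalous."""
    rng = np.random.default_rng(seed)
    sample_size = min(sample_size, len(X))
    trees = [grow_itree(X[rng.choice(len(X), sample_size, replace=False)], rng)
             for _ in range(n_trees)]
    c = 2 * (np.log(sample_size - 1) + 0.5772156649) \
        - 2 * (sample_size - 1) / sample_size
    depths = np.array([[path_length(x, t) for t in trees] for x in X])
    return 2.0 ** (-depths.mean(axis=1) / c)

# Example: a cluster of points plus one obvious outlier.
X = np.vstack([np.random.default_rng(1).normal(0, 1, (200, 2)), [[8.0, 8.0]]])
scores = anomaly_scores(X)
print(scores[-1], scores.mean())   # the outlier scores well above the average
```

Sub-sampling (here 256 points per tree) is what keeps both time and memory low: each tree is built on a small fixed-size sample regardless of the full data size.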

3,455 citations

Journal ArticleDOI
TL;DR: An algorithm is presented which finds all occurrences of one given string within another, in running time proportional to the sum of the lengths of the strings; as a theoretical application, it shows that the set of concatenations of even palindromes, i.e., the language $\{\alpha \alpha ^R\}^*$, can be recognized in linear time.
Abstract: An algorithm is presented which finds all occurrences of one given string within another, in running time proportional to the sum of the lengths of the strings. The constant of proportionality is low enough to make this algorithm of practical use, and the procedure can also be extended to deal with some more general pattern-matching problems. A theoretical application of the algorithm shows that the set of concatenations of even palindromes, i.e., the language $\{\alpha \alpha ^R\}^*$, can be recognized in linear time. Other algorithms which run even faster on the average are also considered.
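The abstract describes the classic Knuth-Morris-Pratt approach; below is a minimal sketch of linear-time matching via a failure function (names are our own). Each character of the text is examined a bounded number of times, giving running time proportional to the sum of the string lengths.

```python
def find_all(pattern, text):
    """Return the start index of every occurrence of pattern in text.

    Precompute, for each prefix of the pattern, the length of its longest
    proper prefix that is also a suffix (the "failure function"), then scan
    the text once.  Total work is O(len(pattern) + len(text)).
    """
    if not pattern:
        return []
    # Failure function over the pattern.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Single left-to-right scan of the text.
    hits, k = [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)   # full match ending at position i
            k = fail[k - 1]          # keep going to catch overlapping matches
    return hits

print(find_all("aba", "ababababa"))  # [0, 2, 4, 6]: overlapping occurrences
```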

3,156 citations

Journal ArticleDOI
TL;DR: This paper investigates a simple label propagation algorithm that uses the network structure alone as its guide and requires neither optimization of a predefined objective function nor prior information about the communities.
Abstract: Community detection and analysis is an important methodology for understanding the organization of various real-world networks and has applications in problems as diverse as consensus formation in social communities or the identification of functional modules in biochemical networks. Currently used algorithms that identify the community structures in large-scale real-world networks require a priori information such as the number and sizes of communities or are computationally expensive. In this paper we investigate a simple label propagation algorithm that uses the network structure alone as its guide and requires neither optimization of a predefined objective function nor prior information about the communities. In our algorithm every node is initialized with a unique label and at every step each node adopts the label that most of its neighbors currently have. In this iterative process densely connected groups of nodes form a consensus on a unique label to form communities. We validate the algorithm by applying it to networks whose community structures are known. We also demonstrate that the algorithm runs in near-linear time and is hence computationally less expensive than previously possible.
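A minimal sketch of the label propagation procedure the abstract describes, assuming an adjacency-list graph; the update order, tie-breaking rule, and stopping criterion below are common choices, not necessarily the paper's exact ones.

```python
import random
from collections import Counter

def label_propagation(adj, max_sweeps=100, seed=0):
    """Community detection by label propagation.

    adj maps each node to a list of its neighbours.  Every node starts with
    a unique label; in each sweep, nodes (in random order) adopt the label
    most common among their neighbours.  Densely connected groups converge
    to a shared label, which identifies their community.
    """
    rng = random.Random(seed)
    labels = {v: v for v in adj}          # unique initial labels
    nodes = list(adj)
    for _ in range(max_sweeps):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            top = max(counts.values())
            ties = [l for l, c in counts.items() if c == top]
            # Keep the current label if it is already among the majority
            # labels; otherwise break the tie randomly.
            best = labels[v] if labels[v] in ties else rng.choice(ties)
            if labels[v] != best:
                labels[v], changed = best, True
        if not changed:                    # every node agrees with its neighbours
            break
    return labels

# Two triangles joined by a single edge: two communities typically emerge.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation(adj))
```

Each sweep touches every edge a constant number of times, which is the source of the near-linear running time the abstract claims.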

3,095 citations

Book ChapterDOI
07 Jul 1992
TL;DR: Comparison with other feature selection algorithms shows Relief's advantages in terms of learning time and the accuracy of the learned concept, suggesting Relief's practicality.
Abstract: In real-world concept learning problems, the representation of data often uses many features, only a few of which may be related to the target concept. In this situation, feature selection is important both to speed up learning and to improve concept quality. A new feature selection algorithm, Relief, uses a statistical method and avoids heuristic search. Relief requires linear time in the number of given features and the number of training instances, regardless of the target concept to be learned. Although the algorithm does not necessarily find the smallest subset of features, the size tends to be small because only statistically relevant features are selected. This paper focuses on empirical test results in two artificial domains: the LED Display domain and the Parity domain, with and without noise. Comparison with other feature selection algorithms shows Relief's advantages in terms of learning time and the accuracy of the learned concept, suggesting Relief's practicality.
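A minimal sketch of the Relief weight update for a two-class problem with numeric features, assuming Manhattan distance for neighbour search and the squared-difference update; all names and defaults are illustrative.

```python
import numpy as np

def relief(X, y, n_samples=None, seed=0):
    """Relief feature weights for a two-class problem.

    For each sampled instance, find its nearest neighbour of the same class
    (near-hit) and of the other class (near-miss); reward features that
    separate the classes and penalise features that separate the near-hit.
    The weight update is linear in the number of features per instance.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0                      # avoid division by zero
    w = np.zeros(d)
    idx = rng.choice(n, n_samples or n, replace=False)
    for i in idx:
        dist = np.abs(X - X[i]).sum(axis=1)    # distance to every instance
        dist[i] = np.inf                       # exclude the instance itself
        same, diff = y == y[i], y != y[i]
        hit = int(np.argmin(np.where(same, dist, np.inf)))
        miss = int(np.argmin(np.where(diff, dist, np.inf)))
        w -= (np.abs(X[i] - X[hit]) / span) ** 2
        w += (np.abs(X[i] - X[miss]) / span) ** 2
    return w / len(idx)

# Example: feature 0 determines the class, feature 1 is pure noise.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)
X = np.column_stack([y + 0.1 * rng.normal(size=200), rng.normal(size=200)])
print(relief(X, y))   # the weight of feature 0 should dominate
```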

2,908 citations


Network Information
Related Topics (5)
Graph (abstract data type): 69.9K papers, 1.2M citations, 91% related
Upper and lower bounds: 56.9K papers, 1.1M citations, 88% related
Optimization problem: 96.4K papers, 2.1M citations, 87% related
Heuristics: 32.1K papers, 956.5K citations, 86% related
Scheduling (computing): 78.6K papers, 1.3M citations, 86% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2024    1
2023    265
2022    538
2021    1,607
2020    1,805
2019    1,850