Institution
University of Paderborn
Education • Paderborn, Nordrhein-Westfalen, Germany
About: University of Paderborn is an education organization based in Paderborn, Nordrhein-Westfalen, Germany. It is known for its research contributions in the topics: Computer science & Context (language use). The organization has 6684 authors who have published 16929 publications receiving 323154 citations.
Topics: Computer science, Context (language use), Software, Control reconfiguration, Nonlinear system
Papers published on a yearly basis
Papers
Karlsruhe Institute of Technology, University of Ulm, Goethe University Frankfurt, Technische Universität München, Technical University of Dortmund, Braunschweig University of Technology, Dresden University of Technology, University of Erlangen-Nuremberg, University of Paderborn, University of Tübingen, Kaiserslautern University of Technology, University of Stuttgart
TL;DR: Presents an overview of a major research project on dependable embedded systems that started in Fall 2010 with a projected duration of six years, including a new classification of faults, errors, and failures.
Abstract: The paper presents an overview of a major research project on dependable embedded systems that started in Fall 2010 and is running for a projected duration of six years. The aim is a ‘dependability co-design’ that spans various levels of abstraction in the embedded-system design process, from gate level through operating system and application software to system architecture. In addition, we present a new classification of faults, errors, and failures.
99 citations
05 Jun 2006
TL;DR: Develops an efficient implementation of a k-means clustering algorithm that uses coresets to speed up computation; it significantly outperforms KMHybrid on most input instances, and clusterings with approximate average silhouette coefficients were computed for k=1,…,100 on the authors' input instances.
Abstract: In this paper we develop an efficient implementation for a k-means clustering algorithm. Our algorithm is a variant of KMHybrid [28, 20], i.e. it uses a combination of Lloyd-steps and random swaps, but as a novel feature it uses coresets to speed up the algorithm. A coreset is a small weighted set of points that approximates the original point set with respect to the considered problem. The main strength of the algorithm is that it can quickly determine clusterings of the same point set for many values of k. This is necessary in many applications, since, typically, one does not know a good value for k in advance. Once we have clusterings for many different values of k we can determine a good choice of k using a quality measure of clusterings that is independent of k, for example the average silhouette coefficient. The average silhouette coefficient can be approximated using coresets.To evaluate the performance of our algorithm we compare it with algorithm KMHybrid [28] on typical 3D data sets for an image compression application and on artificially created instances. Our data sets consist of 300,000 to 4.9 million points. We show that our algorithm significantly outperforms KMHybrid on most of these input instances. Additionally, the quality of the solutions computed by our algorithm deviates less than that of KMHybrid.We also computed clusterings and approximate average silhouette coefficient for k=1,…,100 for our input instances and discuss the performance of our algorithm in detail.
99 citations
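The coreset idea in the abstract above can be sketched in a few lines: build a small weighted sample that stands in for the full point set, then run weighted Lloyd steps on that sample instead of the full data. This is an illustrative sketch using a simple importance-sampling ("lightweight") coreset, not the paper's exact construction; the function names, sample sizes, and synthetic data below are assumptions for the demo.

```python
import numpy as np

def lightweight_coreset(points, m, rng):
    """Sample a weighted coreset of m points.

    Sampling probability mixes a uniform term with a term proportional
    to squared distance from the mean; weights 1/(m*p) make the coreset
    an unbiased stand-in for the full set. (Illustrative scheme only,
    not the paper's construction.)
    """
    n = len(points)
    d2 = ((points - points.mean(axis=0)) ** 2).sum(axis=1)
    p = 0.5 / n + 0.5 * d2 / d2.sum()
    idx = rng.choice(n, size=m, p=p)
    return points[idx], 1.0 / (m * p[idx])

def weighted_lloyd(points, weights, k, iters, rng):
    """Plain Lloyd iterations, with weighted centroid updates."""
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        d = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for j in range(k):
            mask = assign == j
            if mask.any():
                w = weights[mask]
                centers[j] = (w[:, None] * points[mask]).sum(0) / w.sum()
    return centers

rng = np.random.default_rng(0)
data = rng.normal(size=(10000, 3))           # stand-in for the 3D color data
core, w = lightweight_coreset(data, 500, rng)
centers = weighted_lloyd(core, w, k=4, iters=10, rng=rng)
print(centers.shape)  # (4, 3)
```

Because the Lloyd steps touch only the 500-point coreset rather than all 10,000 points, re-running the clustering for many values of k (as the paper does for k=1,…,100) stays cheap.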
19 Jun 2016
TL;DR: This work considers the problem of (macro) F-measure maximization in the context of extreme multilabel classification (XMLC) and proposes to solve the problem by classifiers that efficiently deliver sparse probability estimates (SPEs), that is, probability estimates restricted to the most probable labels.
Abstract: We consider the problem of (macro) F-measure maximization in the context of extreme multilabel classification (XMLC), i.e., multi-label classification with extremely large label spaces. We investigate several approaches based on recent results on the maximization of complex performance measures in binary classification. According to these results, the F-measure can be maximized by properly thresholding conditional class probability estimates. We show that a naive adaptation of this approach can be very costly for XMLC and propose to solve the problem by classifiers that efficiently deliver sparse probability estimates (SPEs), that is, probability estimates restricted to the most probable labels. Empirical results provide evidence for the strong practical performance of this approach.
99 citations
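The key mechanism described above — maximizing the F-measure by properly thresholding conditional class probability estimates, label by label — can be illustrated on a small dense example. A real XMLC setting would restrict this to sparse estimates over the most probable labels; the function names and synthetic data here are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def f1_at_threshold(y_true, p, t):
    """F1 of the prediction obtained by thresholding probabilities p at t."""
    pred = p >= t
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    fn = np.sum(~pred & (y_true == 1))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def tune_per_label_thresholds(Y, P):
    """Pick, for each label, the threshold maximizing its F1 on (Y, P).

    Averaging the per-label F1 values afterwards gives the macro F-measure;
    the naive sweep over all distinct probabilities shown here is exactly
    what becomes too costly with extremely large label spaces.
    """
    thresholds = []
    for j in range(Y.shape[1]):
        cand = np.unique(P[:, j])
        best = max(cand, key=lambda t: f1_at_threshold(Y[:, j], P[:, j], t))
        thresholds.append(best)
    return np.array(thresholds)

rng = np.random.default_rng(1)
Y = (rng.random((200, 5)) < 0.1).astype(int)           # sparse ground-truth labels
P = np.clip(Y * 0.6 + rng.random((200, 5)) * 0.4, 0, 1)  # noisy probability estimates
th = tune_per_label_thresholds(Y, P)
print(th.shape)  # (5,)
```

With sparse probability estimates, only the few labels with non-negligible probability per instance would enter this sweep, which is what makes the approach tractable for extremely large label spaces.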
TL;DR: This paper uses transaction-level trading data to show that investors significantly increased their trading activities as the COVID-19 pandemic unfolded, both at the extensive and at the intensive margin.
99 citations
Authors
Showing all 6872 results
Name | H-index | Papers | Citations |
---|---|---|---|
Martin Karplus | 163 | 831 | 138492 |
Marco Dorigo | 105 | 657 | 91418 |
Robert W. Boyd | 98 | 1161 | 37321 |
Thomas Heine | 84 | 423 | 24210 |
Satoru Miyano | 84 | 811 | 38723 |
Wen-Xiu Ma | 83 | 420 | 20702 |
Jörg Neugebauer | 81 | 491 | 30909 |
Thomas Lengauer | 80 | 477 | 34430 |
Gotthard Seifert | 80 | 445 | 26136 |
Reshef Tenne | 74 | 529 | 24717 |
Tim Meyer | 74 | 548 | 24784 |
Qiang Cui | 71 | 292 | 20655 |
Thomas Frauenheim | 70 | 451 | 17887 |
Walter Richtering | 67 | 332 | 14866 |
Marcus Elstner | 67 | 209 | 18960 |