scispace - formally typeset
Institution

Justsystem Pittsburgh Research Center

About: Justsystem Pittsburgh Research Center is known for research contributions in the topics of Population and Face detection. The organization has 9 authors who have published 17 publications receiving 5302 citations.

Papers
Journal ArticleDOI
TL;DR: A neural network-based upright frontal face detection system that arbitrates between multiple networks to improve performance over a single network, together with a straightforward procedure for aligning positive face examples for training.
Abstract: We present a neural network-based upright frontal face detection system. A retinally connected neural network examines small windows of an image and decides whether each window contains a face. The system arbitrates between multiple networks to improve performance over a single network. We present a straightforward procedure for aligning positive face examples for training. To collect negative examples, we use a bootstrap algorithm, which adds false detections into the training set as training progresses. This eliminates the difficult task of manually selecting nonface training examples, which must be chosen to span the entire space of nonface images. Simple heuristics, such as using the fact that faces rarely overlap in images, can further improve the accuracy. Comparisons with several other state-of-the-art face detection systems are presented, showing that our system has comparable performance in terms of detection and false-positive rates.

4,105 citations
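The bootstrap procedure for collecting negative examples can be sketched roughly as follows. This is an illustrative toy, not the authors' implementation: image windows are reduced to single scores, and `train_detector` is a hypothetical stand-in for training the neural network.

```python
def train_detector(positives, negatives):
    """Hypothetical stand-in for training the face/nonface network.
    Windows are reduced to single scores; the 'detector' is a threshold
    halfway between the weakest face and the strongest known nonface."""
    hardest_negative = max(negatives) if negatives else 0.0
    threshold = (min(positives) + hardest_negative) / 2
    return lambda window: window >= threshold

def bootstrap(positives, nonface_pool, rounds=3):
    """Grow the negative set from false detections, as in the paper's
    bootstrap procedure: scan nonface scenery, add anything falsely
    detected as a face to the training set, and retrain."""
    negatives = []
    detector = train_detector(positives, negatives)
    for _ in range(rounds):
        false_positives = [w for w in nonface_pool if detector(w)]
        if not false_positives:
            break  # no false detections left on the nonface pool
        negatives.extend(false_positives)
        detector = train_detector(positives, negatives)
    return detector, negatives
```

With face scores [0.8, 0.9, 1.0] and a nonface pool [0.1, 0.3, 0.6, 0.7], one round adds 0.6 and 0.7 as negatives and the retrained threshold rises to 0.75, after which no false detections remain.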

Journal ArticleDOI
TL;DR: A multistrategy approach that combines several information extraction learners and yields performance competitive with or better than the best of them; the approach is modular and flexible, and could find application in other machine learning problems.
Abstract: We consider the problem of learning to perform information extraction in domains where linguistic processing is problematic, such as Usenet posts, email, and finger plan files. In place of syntactic and semantic information, other sources of information can be used, such as term frequency, typography, formatting, and mark-up. We describe four learning approaches to this problem, each drawn from a different paradigm: a rote learner, a term-space learner based on Naive Bayes, an approach using grammatical induction, and a relational rule learner. Experiments on 14 information extraction problems defined over four diverse document collections demonstrate the effectiveness of these approaches. Finally, we describe a multistrategy approach which combines these learners and yields performance competitive with or better than the best of them. This technique is modular and flexible, and could find application in other machine learning problems.

410 citations
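The combination step can be illustrated with a generic voting scheme. This is a hedged sketch, not the paper's actual multistrategy algorithm: each learner is modeled as a function from a document to a predicted field value, and ties are broken arbitrarily by `Counter.most_common`.

```python
from collections import Counter

def multistrategy_extract(document, learners):
    """Combine field predictions from several extractors by simple voting.
    Each learner maps a document to a predicted field value, or None
    when it declines to make a prediction."""
    votes = Counter()
    for learner in learners:
        prediction = learner(document)
        if prediction is not None:
            votes[prediction] += 1
    return votes.most_common(1)[0][0] if votes else None
```

For example, a rote learner and a term-frequency learner that both predict "2pm" would outvote a rule learner predicting "3pm".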

Proceedings ArticleDOI
01 Sep 1995
TL;DR: An experimental comparison of a number of different algorithms for computing the Delaunay triangulation, which analyzes the major high-level primitives the algorithms use and how often implementations of these algorithms perform each operation.
Abstract: This paper presents an experimental comparison of a number of different algorithms for computing the Delaunay triangulation. The algorithms examined are: Dwyer’s divide and conquer algorithm, Fortune’s sweepline algorithm, several versions of the incremental algorithm (including one by Ohya, Iri, and Murota, a new bucketing-based algorithm described in this paper, and Devillers’s version of a Delaunay-tree based algorithm that appears in LEDA), an algorithm that incrementally adds a correct Delaunay triangle adjacent to a current triangle in a manner similar to gift wrapping algorithms for convex hulls, and Barber’s convex hull based algorithm. Most of the algorithms examined are designed for good performance on uniformly distributed sites. However, we also test implementations of these algorithms on a number of non-uniform distributions. The experiments go beyond measuring total running time, which tends to be machine-dependent. We also analyze the major high-level primitives that algorithms use and do an experimental analysis of how often implementations of these algorithms perform each operation.

171 citations
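The kind of primitive counting the paper performs can be illustrated by instrumenting the in-circle test, the central geometric predicate in most Delaunay algorithms. The determinant formulation below is the standard textbook one; the `counts` dictionary is a hypothetical instrumentation hook, not code from the paper.

```python
def incircle(a, b, c, d, counts):
    """Standard 3x3 determinant in-circle predicate: True when point d
    lies strictly inside the circumcircle of triangle abc (a, b, c in
    counterclockwise order). Increments a primitive-operation counter
    so a driver can report how often the predicate is evaluated."""
    counts["incircle"] = counts.get("incircle", 0) + 1
    rows = []
    for p in (a, b, c):
        dx, dy = p[0] - d[0], p[1] - d[1]
        rows.append((dx, dy, dx * dx + dy * dy))
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = rows
    det = (ax * (by * cz - bz * cy)
           - ay * (bx * cz - bz * cx)
           + az * (bx * cy - by * cx))
    return det > 0
```

For the triangle (0, 0), (1, 0), (0, 1), the point (0.25, 0.25) lies inside the circumcircle and (2, 2) lies outside; after both calls the counter reads 2. Note that a floating-point determinant like this is not robust to near-degenerate inputs, one reason the paper's implementations differ in practice.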

Journal ArticleDOI
TL;DR: An experimental comparison of a number of different algorithms for computing the Delaunay triangulation is presented, together with an experimental analysis of how often implementations of these algorithms perform each high-level primitive operation.
Abstract: This paper presents an experimental comparison of a number of different algorithms for computing the Delaunay triangulation. The algorithms examined are: Dwyer's divide and conquer algorithm, Fortune's sweepline algorithm, several versions of the incremental algorithm (including one by Ohya, Iri and Murota, a new bucketing-based algorithm described in this paper, and Devillers's version of a Delaunay-tree based algorithm that appears in LEDA), an algorithm that incrementally adds a correct Delaunay triangle adjacent to a current triangle in a manner similar to gift wrapping algorithms for convex hulls, and Barber's convex hull based algorithm. Most of the algorithms examined are designed for good performance on uniformly distributed sites. However, we also test implementations of these algorithms on a number of non-uniform distributions. The experiments go beyond measuring total running time, which tends to be machine-dependent. We also analyze the major high-level primitives that algorithms use and do an experimental analysis of how often implementations of these algorithms perform each operation.

137 citations

Patent
07 Nov 1997
TL;DR: A method for transforming an original message into a final message via an untrusted service: at least one sensitive term is identified in the original message and replaced with a standard token before the message is handed to the service, and the stored terms are merged back afterwards.
Abstract: A method for transforming an original message into a final message by including an untrusted service, includes the steps of identifying at least one sensitive term from the original message; replacing the at least one sensitive term with a standard token to create a sanitized message; storing the at least one sensitive term; transmitting the sanitized message to a provider of the untrusted service; performing the untrusted service on the sanitized message to create a serviced message; merging the serviced message with the at least one sensitive term stored in the storing step to create the final message.

129 citations
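The claimed pipeline (identify, tokenize, store, service, merge) can be sketched as follows. This is an illustrative reading of the claim, not the patented implementation; the `__TOKEN0__` token format is an assumption.

```python
def sanitize(message, sensitive_terms):
    """First steps of the claim: identify sensitive terms, replace each
    with a standard token, and store the token-to-term mapping for the
    later merge step."""
    mapping = {}
    for i, term in enumerate(sensitive_terms):
        token = f"__TOKEN{i}__"   # assumed token format, not from the patent
        mapping[token] = term
        message = message.replace(term, token)
    return message, mapping

def merge(serviced_message, mapping):
    """Final step of the claim: restore the stored sensitive terms into
    the message returned by the untrusted service."""
    for token, term in mapping.items():
        serviced_message = serviced_message.replace(token, term)
    return serviced_message
```

For example, sanitizing "Meet Alice at headquarters" with the terms "Alice" and "headquarters" yields "Meet __TOKEN0__ at __TOKEN1__"; after the untrusted service edits the non-sensitive text, merging restores the original terms.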


Authors

Showing all 9 results

Name                        H-index   Papers   Citations
Rahul Sukthankar            70        240      28630
Shumeet Baluja              61        232      17270
Henry Allan Rowley          33        89       9374
Dayne Freitag               30        47       8610
Scott E. Fahlman            21        45       6498
Peter Su                    3         3        319
Vibhu O. Mittal             3         3        83
Richard Caruana             1         1        19
Antoine Brusseau            1         1        129
Network Information
Related Institutions (5)

Institution                                   Papers   Citations   Related
Facebook                                      10.9K    570.1K      82%
Google                                        39.8K    2.1M        82%
Microsoft                                     86.9K    4.1M        80%
Adobe Systems                                 8K       214.7K      80%
Mitsubishi Electric Research Laboratories     3.8K     131.6K      79%

Performance Metrics
No. of papers from the Institution in previous years

Year   Papers
2000   2
1998   8
1997   5
1996   1
1995   1