
Showing papers by "Hewlett-Packard" published in 1999


Proceedings ArticleDOI
27 Sep 1999
TL;DR: Some of the research challenges in understanding context and in developing context-aware applications are discussed; these are increasingly important in the fields of handheld and ubiquitous computing, where the user's context changes rapidly.
Abstract: When humans talk with humans, they are able to use implicit situational information, or context, to increase the conversational bandwidth. Unfortunately, this ability to convey ideas does not transfer well to humans interacting with computers. In traditional interactive computing, users have an impoverished mechanism for providing input to computers. By improving the computer's access to context, we increase the richness of communication in human-computer interaction and make it possible to produce more useful computational services. The use of context is increasingly important in the fields of handheld and ubiquitous computing, where the user's context is changing rapidly. In this panel, we want to discuss some of the research challenges in understanding context and in developing context-aware applications.

4,842 citations


Book
01 Jul 1999
TL;DR: In the past few years elliptic curve cryptography has moved from a fringe activity to a major challenger to the dominant RSA/DSA systems; as digital signatures become more important commercially, elliptic curve-based signatures are expected to become all-pervasive.
Abstract: In the past few years elliptic curve cryptography has moved from a fringe activity to a major challenger to the dominant RSA/DSA systems. Elliptic curves offer major advances on older systems such as increased speed, less memory and smaller key sizes. As digital signatures become more and more important in the commercial world the use of elliptic curve-based signatures will become all pervasive. This book summarizes knowledge built up within Hewlett-Packard over a number of years, and explains the mathematics behind practical implementations of elliptic curve systems. Due to the advanced nature of the mathematics there is a high barrier to entry for individuals and companies to this technology. Hence this book will be invaluable not only to mathematicians wanting to see how pure mathematics can be applied but also to engineers and computer scientists wishing (or needing) to actually implement such systems.

1,697 citations


Journal ArticleDOI
26 Mar 1999-Science
TL;DR: The problem is solved by showing that, given fault-tolerant quantum computers, quantum key distribution over an arbitrarily long distance of a realistic noisy channel can be made unconditionally secure.
Abstract: Quantum key distribution is widely thought to offer unconditional security in communication between two users. Unfortunately, a widely accepted proof of its security in the presence of source, device, and channel noises has been missing. This long-standing problem is solved here by showing that, given fault-tolerant quantum computers, quantum key distribution over an arbitrarily long distance of a realistic noisy channel can be made unconditionally secure. The proof is reduced from a noisy quantum scheme to a noiseless quantum scheme and then from a noiseless quantum scheme to a noiseless classical scheme, which can then be tackled by classical probability theory.

1,694 citations


Journal ArticleDOI
TL;DR: The use of either lean thinking or agile manufacturing has to be combined with a total supply chain strategy, particularly considering market knowledge and the positioning of the decoupling point, since agile manufacturing is best suited to satisfying fluctuating demand while lean manufacturing requires a level schedule.

1,613 citations


Journal ArticleDOI
16 Jul 1999-Science
TL;DR: Logic gates were fabricated from an array of configurable switches, each consisting of a monolayer of redox-active rotaxanes sandwiched between metal electrodes; the separation between the gates' high and low current levels was a significant enhancement over that expected for wired-logic gates.
Abstract: Logic gates were fabricated from an array of configurable switches, each consisting of a monolayer of redox-active rotaxanes sandwiched between metal electrodes. The switches were read by monitoring current flow at reducing voltages. In the “closed” state, current flow was dominated by resonant tunneling through the electronic states of the molecules. The switches were irreversibly opened by applying an oxidizing voltage across the device. Several devices were configured together to produce AND and OR logic gates. The high and low current levels of those gates were separated by factors of 15 and 30, respectively, which is a significant enhancement over that expected for wired-logic gates.

1,553 citations


Journal ArticleDOI
TL;DR: The concept of quantum secret sharing was investigated in this article, where it was shown that the only constraint on the existence of threshold schemes comes from the quantum "no-cloning theorem".
Abstract: We investigate the concept of quantum secret sharing. In a $(k,n)$ threshold scheme, a secret quantum state is divided into $n$ shares such that any $k$ of those shares can be used to reconstruct the secret, but any set of $k-1$ or fewer shares contains absolutely no information about the secret. We show that the only constraint on the existence of threshold schemes comes from the quantum "no-cloning theorem," which requires that $n < 2k$, and we give efficient constructions of all threshold schemes. We also show that, for $k \le n < 2k-1$, any $(k,n)$ threshold scheme must distribute information that is globally in a mixed state.
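
A worked instance of the bound, for concreteness (example ours, not part of the abstract): a $(2,3)$ threshold scheme is admissible, since $n = 3 < 2k = 4$; a $(2,4)$ scheme is impossible, because its two disjoint pairs of shares could each reconstruct the secret, which would amount to cloning the quantum state.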

1,200 citations


Proceedings Article
01 Jan 1999
TL;DR: In this paper, a review of the existing frameworks and measures for coupling measurement in object-oriented systems is presented; a unified framework, based on the issues discovered in the review, is provided, and all existing measures are classified according to it.
Abstract: The increasing importance being placed on software measurement has led to an increased amount of research developing new software measures. Given the importance of object-oriented development techniques, one specific area where this has occurred is coupling measurement in object-oriented systems. However, despite a very interesting and rich body of work, there is little understanding of the motivation and empirical hypotheses behind many of these new measures. It is often difficult to determine how such measures relate to one another and for which application they can be used. As a consequence, it is very difficult for practitioners and researchers to obtain a clear picture of the state of the art in order to select or define measures for object-oriented systems. This situation is addressed and clarified through several different activities. First, a standardized terminology and formalism for expressing measures is provided which ensures that all measures using it are expressed in a fully consistent and operational manner. Second, to provide a structured synthesis, a review of the existing frameworks and measures for coupling measurement in object-oriented systems takes place. Third, a unified framework, based on the issues discovered in the review, is provided and all existing measures are then classified according to this framework. This paper contributes to an increased understanding of the state of the art: a mechanism is provided for comparing measures and their potential use, integrating existing measures which examine the same concepts in different ways, and facilitating more rigorous decision making regarding the definition of new measures and the selection of existing measures for a specific goal of measurement. In addition, our review of the state of the art highlights that many measures are not defined in a fully operational form, and relatively few of them are based on explicit empirical models, as recommended by measurement theory.

815 citations


Journal ArticleDOI
TL;DR: This paper contributes to an increased understanding of the state of the art in coupling measurement in object-oriented systems by providing a standardized terminology and formalism for expressing measures which ensures that all measures using it are expressed in a fully consistent and operational manner.
Abstract: The increasing importance being placed on software measurement has led to an increased amount of research developing new software measures. Given the importance of object-oriented development techniques, one specific area where this has occurred is coupling measurement in object-oriented systems. However, despite a very interesting and rich body of work, there is little understanding of the motivation and empirical hypotheses behind many of these new measures. It is often difficult to determine how such measures relate to one another and for which application they can be used. As a consequence, it is very difficult for practitioners and researchers to obtain a clear picture of the state of the art in order to select or define measures for object-oriented systems. This situation is addressed and clarified through several different activities. First, a standardized terminology and formalism for expressing measures is provided which ensures that all measures using it are expressed in a fully consistent and operational manner. Second, to provide a structured synthesis, a review of the existing frameworks and measures for coupling measurement in object-oriented systems takes place. Third, a unified framework, based on the issues discovered in the review, is provided and all existing measures are then classified according to this framework. This paper contributes to an increased understanding of the state of the art.

775 citations


Journal ArticleDOI
TL;DR: An implementation of NeTra, a prototype image retrieval system that uses color, texture, shape, and spatial location information in segmented image regions and incorporates a robust automated image segmentation algorithm that allows object- or region-based search.
Abstract: We present here an implementation of NeTra, a prototype image retrieval system that uses color, texture, shape and spatial location information in segmented image regions to search and retrieve similar regions from the database. A distinguishing aspect of this system is its incorporation of a robust automated image segmentation algorithm that allows object- or region-based search. Image segmentation significantly improves the quality of image retrieval when images contain multiple complex objects. Images are segmented into homogeneous regions at the time of ingest into the database, and image attributes that represent each of these regions are computed. In addition to image segmentation, other important components of the system include an efficient color representation, and indexing of color, texture, and shape features for fast search and retrieval. This representation allows the user to compose interesting queries such as "retrieve all images that contain regions that have the color of object A, texture of object B, shape of object C, and lie in the upper part of the image", where the individual objects could be regions belonging to different images. A Java-based web implementation of NeTra is available at http://vivaldi.ece.ucsb.edu/Netra.
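
The flavor of region-based matching described here can be sketched with a plain color-histogram comparison; this is a generic NumPy illustration, not NeTra's actual color codebook representation, and all names and data are made up:

    import numpy as np

    def color_histogram(region_pixels, bins=8):
        # Normalized 3-D RGB histogram for one segmented region;
        # region_pixels is an (N, 3) array of values in [0, 256).
        hist, _ = np.histogramdd(region_pixels, bins=(bins,) * 3,
                                 range=((0, 256),) * 3)
        h = hist.ravel()
        return h / h.sum()

    def retrieve_similar(query_hist, region_db, k=5):
        # Rank database regions by L1 histogram distance to the query.
        scored = sorted(region_db.items(),
                        key=lambda item: np.abs(query_hist - item[1]).sum())
        return scored[:k]

    rng = np.random.default_rng(0)
    db = {i: color_histogram(rng.integers(0, 256, size=(500, 3)))
          for i in range(20)}
    query = color_histogram(rng.integers(0, 256, size=(500, 3)))
    print([region_id for region_id, _ in retrieve_similar(query, db)])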

624 citations


Patent
06 Jan 1999
TL;DR: A data protection system that integrates a database with Windows Explorer in the Microsoft Windows 9X and NT environments and mimics the Windows Explorer user interface, enabling the user to apply already-known use paradigms.
Abstract: A data protection system that integrates a database with Windows Explorer in the Microsoft Windows 9X and NT Environments that mimics the Windows Explorer user interface, enabling the user to apply already known use paradigms. The data protection system appears as an extension to Windows Explorer and visibly appears as a folder item called the data vault. The data vault is a virtual disk that represents the underlying database. The database creates records and stores information about files backed up to removable secondary storage medium. Files may be backed up manually or automatically. A schedule can be set up for automatic protection of selected files and file types. The database can be searched to find files for restoration purposes without having to load secondary storage medium. Once a file or files are selected, the data protection system indicates which labeled removable secondary storage medium must be loaded for retrieval.

602 citations


Journal ArticleDOI
TL;DR: Though this result raises questions about NMR quantum computation, further analysis would be necessary to assess the power of the general unitary transformations, which are indeed implemented in these experiments, in their action on separable states.
Abstract: We give a constructive proof that all mixed states of N qubits in a sufficiently small neighborhood of the maximally mixed state are separable (unentangled). The construction provides an explicit representation of any such state as a mixture of product states. We give upper and lower bounds on the size of the neighborhood, which show that its extent decreases exponentially with the number of qubits. The bounds show that no entanglement appears in the physical states at any stage of present NMR experiments. Though this result raises questions about NMR quantum computation, further analysis would be necessary to assess the power of the general unitary transformations, which are indeed implemented in these experiments, in their action on separable states.
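
Restated in symbols (notation ours, for illustration): every $N$-qubit density matrix $\rho$ with $\|\rho - I/2^N\| \le \epsilon(N)$ is separable, and the paper's upper and lower bounds force $\epsilon(N)$ to decrease exponentially with $N$; present NMR experiments operate inside this separable neighborhood.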

Patent
29 Mar 1999
TL;DR: In this article, a molecular wire crossbar memory (MWCM) system is provided. The MWCM comprises a two-dimensional array of a plurality of nanometer-scale devices, each device comprising a junction formed by a pair of crossed wires where one wire crosses another and at least one connector species connecting the pair of wires in the junction.
Abstract: A molecular wire crossbar memory (MWCM) system is provided. The MWCM comprises a two-dimensional array of a plurality of nanometer-scale devices, each device comprising a junction formed by a pair of crossed wires where one wire crosses another and at least one connector species connecting the pair of crossed wires in the junction. The connector species comprises a bi-stable molecular switch. The junction forms either a resistor or a diode or an asymmetric non-linear resistor. The junction has a state that is capable of being altered by application of a first voltage and sensed by application of a second, non-destructive voltage.

Patent
Robert J Sims
30 Mar 1999
TL;DR: In this article, a system and method for providing protection of content stored on a bulk storage media is described, and a technique for protecting from unauthorized utilization of the content stored is provided publicly in order to allow for those utilizing a conforming media device to master or generate content protected according to the present invention.
Abstract: A system and method for providing protection of content stored on a bulk storage media is disclosed. The technique for providing protection from unauthorized utilization of the content so stored is provided publicly in order to allow for those utilizing a conforming media device to master or generate content protected according to the present invention. Various ways in which to protect content are disclosed including verification of the authenticity of a particular media, utilization of an accepted list of media play-back devices and their corresponding published public keys in order to securely pass media content keys thereto, and utilization of an external contact to provide media content keys and/or updates of accepted media play-back devices.
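
The key-delivery step described, passing a media content key securely under a playback device's published public key, is standard key wrapping; a minimal sketch using the Python `cryptography` package (illustrative only, not the patent's actual protocol):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Stand-in for a playback device's key pair; in the scheme described,
    # the public half would come from the accepted-device list.
    device_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    content_key = os.urandom(32)  # symmetric key protecting the media content

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Mastering side: wrap the content key for one accepted device.
    wrapped = device_key.public_key().encrypt(content_key, oaep)

    # Playback side: only the holder of the device private key can unwrap.
    assert device_key.decrypt(wrapped, oaep) == content_key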

Proceedings ArticleDOI
12 Dec 1999
TL;DR: This paper describes the design, implementation, and performance of the Elephant file system, which automatically retains all important versions of user files, in contrast with checkpointing file systems such as Plan-9, AFS, and WAFL that periodically generate efficient checkpoints of entire file systems.
Abstract: Modern file systems associate the deletion of a file with the immediate release of storage, and file writes with the irrevocable change of file contents. We argue that this behavior is a relic of the past, when disk storage was a scarce resource. Today, large cheap disks make it possible for the file system to protect valuable data from accidental delete or overwrite. This paper describes the design, implementation, and performance of the Elephant file system, which automatically retains all important versions of user files. Users name previous file versions by combining a traditional pathname with a time when the desired version of a file or directory existed. Storage in Elephant is managed by the system using file-grain user-specified retention policies. This approach contrasts with checkpointing file systems such as Plan-9, AFS, and WAFL that periodically generate efficient checkpoints of entire file systems and thus restrict retention to be guided by a single policy for all files within that file system. Elephant is implemented as a new Virtual File System in the FreeBSD kernel.
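
The "pathname plus time" naming idea can be illustrated with a toy version log; the names and structures below are hypothetical, not Elephant's on-disk format:

    from bisect import bisect_right

    def resolve(log, at_time):
        # log: time-sorted [(timestamp, version_id)] for one file.
        # Return the version current at at_time, i.e. the newest entry
        # whose timestamp is <= at_time.
        times = [t for t, _ in log]
        i = bisect_right(times, at_time)
        return log[i - 1][1] if i else None

    log = [(100, "v1"), (250, "v2"), (400, "v3")]
    print(resolve(log, 300))  # -> 'v2' (the version that existed at time 300)
    print(resolve(log, 50))   # -> None (the file did not exist yet)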

Proceedings ArticleDOI
01 Apr 1999
TL;DR: This paper defines an appropriate stochastic error model on the input, proves that, under the conditions of the model, the algorithm recovers the cluster structure with high probability, and presents a practical heuristic based on the same algorithmic ideas.
Abstract: Recent advances in biotechnology allow researchers to measure expression levels for thousands of genes simultaneously, across different conditions and over time. Analysis of data produced by such experiments offers potential insight into gene function and regulatory mechanisms. A key step in the analysis of gene expression data is the detection of groups of genes that manifest similar expression patterns. The corresponding algorithmic problem is to cluster multicondition gene expression patterns. In this paper we describe a novel clustering algorithm that was developed for analysis of gene expression data. We define an appropriate stochastic error model on the input, and prove that under the conditions of the model, the algorithm recovers the cluster structure with high probability. The running time of the algorithm on an n-gene dataset is O(n^2 [log(n)]^c). We also present a practical heuristic based on the same algorithmic ideas. The heuristic was implemented and its performance is demonstrated on simulated data.
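
A toy affinity-threshold pass conveys the spirit of such a clustering heuristic; this is a simplification for illustration (the similarity matrix and threshold are made up), not the authors' algorithm:

    import numpy as np

    def affinity_cluster(S, t=0.5):
        # Greedy one-pass clustering on a symmetric similarity matrix S:
        # a gene joins the open cluster while its average similarity to
        # the cluster's members stays above t.
        unassigned = set(range(len(S)))
        clusters = []
        while unassigned:
            cluster = [unassigned.pop()]
            changed = True
            while changed:
                changed = False
                for g in sorted(unassigned):
                    if np.mean([S[g][c] for c in cluster]) >= t:
                        cluster.append(g)
                        unassigned.remove(g)
                        changed = True
            clusters.append(cluster)
        return clusters

    # Two noisy blocks of mutually similar expression patterns:
    S = np.array([[1.0, 0.9, 0.1, 0.2],
                  [0.9, 1.0, 0.2, 0.1],
                  [0.1, 0.2, 1.0, 0.8],
                  [0.2, 0.1, 0.8, 1.0]])
    print(affinity_cluster(S))  # -> [[0, 1], [2, 3]] (seed order may vary)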

Patent
17 Mar 1999
TL;DR: Methods and a system for locating network services with distributed network address translation are proposed, allowing an external network device to locate and request a service from an internal network device on an internal distributed network address translation network.
Abstract: Methods and system for locating network services with distributed network address translation. Digital certificates are created that allow an external network device on an external network, such as the Internet, to request a service from an internal network device on an internal distributed network address translation network, such as a stub local area network. The digital certificates include information obtained with a Port Allocation Protocol used for distributed network address translation. The digital certificates are published on the internal network so they are accessible to external network devices. An external network device retrieves a digital certificate, extracts appropriate information, and sends a service request packet to an internal network device on an internal distributed network address translation network. The external network device is able to locate and request a service from an internal network device. An external network device can also request a security service, such as an Internet Protocol security ("IPsec") service from an internal network device. The external network device and the internal network device can establish a security service (e.g., Internet Key Exchange protocol service). The internal network device and external network device can then establish a Security Association using Security Parameter Indexes ("SPI") obtained using a distributed network address translation protocol. External network devices can request services and security services on internal network devices on an internal distributed network address translation network that were previously unknown and unavailable to the external network devices.

Journal ArticleDOI
Nigel P. Smart
TL;DR: An elementary technique is described which leads to a linear algorithm for solving the discrete logarithm problem on elliptic curves of trace one; in practice this means that, when choosing elliptic curves to use in cryptography, one has to eliminate all curves whose group order is equal to the order of the finite field.
Abstract: In this short note we describe an elementary technique which leads to a linear algorithm for solving the discrete logarithm problem on elliptic curves of trace one. In practice the method described means that when choosing elliptic curves to use in cryptography one has to eliminate all curves whose group orders are equal to the order of the finite field.
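
Concretely, the curves to eliminate are those with $\#E(\mathbb{F}_p) = p$ (trace of Frobenius equal to one). A brute-force check for toy field sizes, as a sketch only; real parameter validation would use proper point counting such as Schoof's algorithm:

    def curve_order(p, a, b):
        # #E(F_p) for y^2 = x^3 + ax + b, by brute force (toy sizes only).
        roots = {}
        for y in range(p):
            roots.setdefault(y * y % p, []).append(y)
        count = 1  # the point at infinity
        for x in range(p):
            count += len(roots.get((x * x * x + a * x + b) % p, []))
        return count

    def is_anomalous(p, a, b):
        # Trace-one curves (#E(F_p) = p) fall to the linear-time attack.
        return curve_order(p, a, b) == p

    p = 23
    bad = [(a, b) for a in range(p) for b in range(p)
           if (4 * a**3 + 27 * b**2) % p != 0 and is_anomalous(p, a, b)]
    print(bad[:5])  # curves over F_23 that must be rejected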

Patent
29 Mar 1999
TL;DR: In this article, a demultiplexer for a two-dimensional array of a plurality of nanometer-scale switches (molecular wire crossbar network) is disclosed.
Abstract: A demultiplexer for a two-dimensional array of a plurality of nanometer-scale switches (molecular wire crossbar network) is disclosed. Each switch comprises a pair of crossed wires which form a junction where one wire crosses another and at least one connector species connecting said pair of crossed wires in said junction. The connector species comprises a bi-stable molecule. The demultiplexer comprises a plurality of address lines accessed by a first set of wires in the two-dimensional array by randomly forming contacts between each wire in the first set of wires to at least one of the address lines. The first set of wires crosses a second set of wires to form the junctions. The demultiplexer solves both the problems of data input and output to a molecular electronic system and also bridges the size gap between CMOS and molecules with an architecture that can scale up to extraordinarily large numbers of molecular devices. Further, the demultiplexer is very defect tolerant, and can work despite a large number of defects in the system.

Patent
10 Aug 1999
TL;DR: In this article, a method for more efficiently installing a subset of software components and data files contained in a component pool in a distributed processing network such as the Internet is presented, where an installation package delivered to a requesting end user is custom configured at a remote server location prior to delivery to a client system operated by the user, in response to user's inputs.
Abstract: This invention includes a method for more efficiently installing a subset of software components and data files contained in a component pool in a distributed processing network such as the Internet. An installation package delivered to a requesting end user is custom configured at a remote server location prior to delivery to a client system operated by the user, in response to the user's inputs. The delivered installation package contains only the programs, data, and local installation tools required for the user's unique installation requirements. The user initiates the installation process by connecting to the remote server system via a telecommunications link within a distributed processing network, such as the Internet. Engaging in a dialog with the server which provides informational links to server-side databases, the user chooses all software components and options that he desires his software package to have. Such a package may be, for example, a subset of a software suite. After selection of all options, a single package is manufactured on the server. A single download then occurs of a single file. This is no bigger or smaller than what is absolutely required by the components and options selected. Upon receipt of the downloaded file, the user executes the file to unpack the installation directory. An auto-start feature can also be included which immediately launches the installation of the selected applications and options.

Journal ArticleDOI
TL;DR: In this paper, the potential impact of high-κ gate dielectrics on device short-channel performance is studied over a wide range of dielectric permittivities using a two-dimensional (2-D) simulator implemented with quantum mechanical models.
Abstract: The potential impact of high-κ gate dielectrics on device short-channel performance is studied over a wide range of dielectric permittivities using a two-dimensional (2-D) simulator implemented with quantum mechanical models. It is found that the short-channel performance degradation is caused by the fringing fields from the gate to the source/drain regions. These fringing fields in the source/drain regions further induce electric fields from the source/drain to channel which weaken the gate control. The gate dielectric thickness-to-length aspect ratio is a proper parameter to quantify the percentage of the fringing field and thus the short channel performance degradation. In addition, the gate stack architecture plays an important role in the determination of the device short-channel performance degradation. Using double-layer gate stack structures and low-κ dielectric as spacer materials can well confine the electric fields within the channel thereby minimizing short-channel performance degradation. The introduction of a metal gate not only eliminates the poly gate depletion effect, but also improves short-channel performance. Several approaches have been proposed to adjust the proper threshold voltage when midgap materials or metal gates are used.
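
The thickness-to-length argument can be made concrete with the standard equivalent-oxide-thickness relation $t_{phys} = EOT \cdot \kappa / 3.9$; a back-of-the-envelope sketch with illustrative numbers:

    K_SIO2 = 3.9  # relative permittivity of SiO2

    def physical_thickness(eot_nm, kappa):
        # Physical dielectric thickness with the same gate capacitance
        # as eot_nm of SiO2: t_phys = EOT * kappa / 3.9.
        return eot_nm * kappa / K_SIO2

    def aspect_ratio(eot_nm, kappa, gate_length_nm):
        # Thickness-to-gate-length ratio, the paper's proxy for the
        # fraction of fringing field.
        return physical_thickness(eot_nm, kappa) / gate_length_nm

    # Same 1.5 nm EOT at a 100 nm gate length, two dielectrics:
    for kappa in (3.9, 25.0):
        print(f"kappa={kappa:5.1f}  t_phys={physical_thickness(1.5, kappa):5.2f} nm"
              f"  t/L={aspect_ratio(1.5, kappa, 100.0):.3f}")
    # Higher kappa at fixed EOT means a physically thicker film, a larger
    # t/L, and hence stronger fringing-field degradation.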

Patent
17 Sep 1999
TL;DR: For ultrasound imaging, a transducer array with a very large number of transducer elements, or with many more transducer elements than beamformer channels, is proposed.
Abstract: The disclosed ultrasound imaging apparatus and method use a transducer array with a very large number of transducer elements or a transducer array with many more transducer elements than beamformer channels. The imaging apparatus includes a transmit array including a multiplicity of transducer elements allocated into several transmit sub-arrays, and a receive array including a multiplicity of transducer elements allocated into several receive sub-arrays. The apparatus also includes several intra-group transmit processors, connected to the transmit sub-arrays, constructed and arranged to generate a transmit acoustic beam directed into a region of interest, and several intra-group receive processors connected to the receive sub-arrays. Each intra-group receive processor is arranged to receive, from the transducer elements of the connected sub-array, transducer signals in response to echoes from the transmit acoustic beam. Each intra-group receive processor includes delay and summing elements constructed to delay and sum the received transducer signals. The apparatus also includes a receive beamformer including several processing channels connected to the intra-group receive processors, wherein each processing channel includes a beamformer delay constructed and arranged to synthesize receive beams from the echoes by delaying signals received from the intra-group receive processor, and a beamformer summer (a summing junction) constructed and arranged to receive and sum signals from the processing channels. An image generator is constructed and arranged to form an image of the region of interest based on signals received from the receive beamformer. The apparatus is practical in size, cost and complexity and is sufficiently fast to provide two-dimensional or three-dimensional images of moving body organs.
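
The delay-and-sum operation performed by each intra-group receive processor is the classic beamforming primitive; a minimal NumPy sketch for one sub-array, with sample-accurate delays and a made-up geometry:

    import numpy as np

    def delay_and_sum(signals, delays_samples):
        # signals: (n_elements, n_samples) echoes from one sub-array.
        # delays_samples: integer per-element delays aligning a chosen
        # steering direction. Returns the summed sub-array output.
        n_el, n_s = signals.shape
        out = np.zeros(n_s)
        for sig, d in zip(signals, delays_samples):
            out[d:] += sig[:n_s - d]  # shift each element's signal, then sum
        return out

    # Toy example: an echo arriving 1 sample later at each successive
    # element is realigned by the matching delays and sums coherently.
    pulse = np.sin(np.linspace(0, 2 * np.pi, 16))
    signals = np.zeros((4, 64))
    for i in range(4):
        signals[i, 10 + i:26 + i] = pulse
    aligned = delay_and_sum(signals, delays_samples=[3, 2, 1, 0])
    print(np.argmax(aligned))  # coherent peak where the delayed copies align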

Proceedings ArticleDOI
01 May 1999
TL;DR: FotoFile is an experimental system for multimedia organization and retrieval that blends human and automatic annotation methods, based upon the design goal of making multimedia content accessible to non-expert users.
Abstract: FotoFile is an experimental system for multimedia organization and retrieval, based upon the design goal of making multimedia content accessible to non-expert users. Search and retrieval are done in terms that are natural to the task. The system blends human and automatic annotation methods. It extends textual search, browsing, and retrieval technologies to support multimedia data types.

Journal ArticleDOI
Sandra Hirsh
TL;DR: The authors explored the relevance criteria and search strategies elementary school children applied when searching for information related to a class assignment in a school library setting and found that children exhibited little concern for the authority of the textual and graphical information they found.
Abstract: This study explores the relevance criteria and search strategies elementary school children applied when searching for information related to a class assignment in a school library setting. Students were interviewed on two occasions at different stages of the research process; field observations involved students thinking aloud to explain their search processes and shadowing as students moved around the school library. Students performed searches on an on-line catalog, an electronic encyclopedia, an electronic magazine index, and the World Wide Web. Results are presented for children selecting the topic, conducting the search, examining the results, and extracting relevant results. A total of 254 mentions of relevance criteria were identified, including 197 references to textual relevance criteria that were coded into nine categories and 57 references to graphical relevance criteria that were coded into five categories. Students exhibited little concern for the authority of the textual and graphical information they found, based the majority of their relevance decisions for textual material on topicality, and identified information they found interesting. Students devoted a large portion of their research time to finding pictures. Understanding the ways that children use electronic resources and the relevance criteria they apply has implications for information literacy training and for systems design.

Proceedings ArticleDOI
01 Oct 1999
TL;DR: Jalapeño is a virtual machine for Java™ servers, itself written in Java; by drawing the Java / non-Java boundary below the virtual machine rather than above it, Jalapeño reduces boundary-crossing overhead and opens up more opportunities for optimization.
Abstract: Jalapeño is a virtual machine for Java™ servers written in Java. A running Java program involves four layers of functionality: the user code, the virtual machine, the operating system, and the hardware. By drawing the Java / non-Java boundary below the virtual machine rather than above it, Jalapeño reduces the boundary-crossing overhead and opens up more opportunities for optimization. To get Jalapeño started, a boot image of a working Jalapeño virtual machine is concocted and written to a file. Later, this file can be loaded into memory and executed. Because the boot image consists entirely of Java objects, it can be concocted by a Java program that runs in any JVM. This program uses reflection to convert the boot image into Jalapeño's object format. A special MAGIC class allows unsafe casts and direct access to the hardware. Methods of this class are recognized by Jalapeño's three compilers, which ignore their bytecodes and emit special-purpose machine code. User code will not be allowed to call MAGIC methods so Java's integrity is preserved. A small non-Java program is used to start up a boot image and as an interface to the operating system. Java's programming features (object orientation, type safety, automatic memory management) greatly facilitated development of Jalapeño. However, we also discovered some of the language's limitations.

Journal ArticleDOI
TL;DR: The proposed methodology suggests least squares estimators which adapt in time, based on the adaptive filters least mean squares (LMS) or recursive least squares (RLS); the adaptation enables the treatment of linear space- and time-variant blurring and arbitrary motion.
Abstract: This paper presents a new method based on adaptive filtering theory for superresolution restoration of continuous image sequences. The proposed methodology suggests least squares (LS) estimators which adapt in time, based on the adaptive filters least mean squares (LMS) or recursive least squares (RLS). The adaptation enables the treatment of linear space- and time-variant blurring and arbitrary motion, both of them assumed known. The proposed new approach is shown to be of relatively low computational requirements. Simulations demonstrating the superresolution restoration algorithms are presented.
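
The LMS building block underlying this approach looks as follows in its generic form; this is a sketch of the adaptive-filter core only (identifying a short FIR response from noisy data), not the paper's full superresolution estimator:

    import numpy as np

    def lms(x, d, n_taps=4, mu=0.05):
        # Least mean squares: move the weights along the instantaneous
        # error gradient, w <- w + mu * e * x_n.
        w = np.zeros(n_taps)
        for n in range(n_taps - 1, len(d)):
            xn = x[n - n_taps + 1:n + 1][::-1]   # current input window
            e = d[n] - w @ xn                    # estimation error
            w += mu * e * xn                     # LMS weight update
        return w

    rng = np.random.default_rng(1)
    x = rng.standard_normal(5000)
    true_h = np.array([0.6, -0.3, 0.1, 0.05])
    d = np.convolve(x, true_h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
    print(np.round(lms(x, d), 2))  # converges toward true_h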

Journal ArticleDOI
TL;DR: This article demonstrates that through classification, admission control, and scheduling, WebQoS can support distinct performance levels for different classes of users and maintain predictable performance even when the server is subjected to a client request rate that is several times greater than the server's maximum processing rate.
Abstract: The evolving needs of conducting commerce using the Internet require more than just network quality of service mechanisms for differentiated services. Empirical evidence suggests that overloaded servers can have a significant impact on user-perceived response times. Furthermore, FIFO scheduling done by servers can eliminate any QoS improvements made by network-differentiated services. Consequently, server QoS is a key component in delivering end-to-end predictable, stable, and tiered services to end users. This article describes our research and results for WebQoS, an architecture for supporting server QoS. We demonstrate that through classification, admission control, and scheduling, we can support distinct performance levels for different classes of users and maintain predictable performance even when the server is subjected to a client request rate that is several times greater than the server's maximum processing rate.
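
The classification / admission control / scheduling pipeline can be sketched schematically; the policy, class names, and limits below are illustrative, not the WebQoS implementation:

    from collections import deque

    # Hypothetical per-class queue limits: premium requests get more
    # headroom, so overload sheds low-priority work first.
    LIMITS = {"premium": 100, "basic": 20}
    queues = {cls: deque() for cls in LIMITS}

    def classify(request):
        # Classification: map a request to a service class
        # (here by a made-up 'tier' field).
        return "premium" if request.get("tier") == "gold" else "basic"

    def admit(request):
        # Admission control: enqueue if the class queue has room,
        # otherwise reject at once instead of degrading everyone.
        cls = classify(request)
        if len(queues[cls]) >= LIMITS[cls]:
            return False
        queues[cls].append(request)
        return True

    def schedule():
        # Scheduling: strict priority, premium before basic.
        for cls in ("premium", "basic"):
            if queues[cls]:
                return queues[cls].popleft()
        return None

    # Overload: 50 basic requests arrive; only the first 20 are admitted.
    print(sum(admit({"tier": "standard"}) for _ in range(50)))  # -> 20
    print(schedule())  # -> {'tier': 'standard'}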

Journal ArticleDOI
TL;DR: This paper rederives these algorithms as approximations of the Kalman filter and then carries out a thorough analysis of their performance, which shows the computational feasibility of these algorithms.
Abstract: In an earlier work (1999), we introduced the problem of reconstructing a super-resolution image sequence from a given low resolution sequence. We proposed two iterative algorithms, the R-SD and the R-LMS, to generate the desired image sequence. These algorithms assume the knowledge of the blur, the down-sampling, the sequence's motion, and the measurement noise characteristics, and apply a sequential reconstruction process. It has been shown that the computational complexity of these two algorithms makes both of them practically applicable. In this paper, we rederive these algorithms as approximations of the Kalman filter and then carry out a thorough analysis of their performance. For each algorithm, we calculate a bound on its deviation from the Kalman filter performance. We also show that the propagated information matrix within the R-SD algorithm remains sparse in time, thus ensuring the applicability of this algorithm. To support these analytical results we present some computer simulations on synthetic sequences, which also show the computational feasibility of these algorithms.

Proceedings ArticleDOI
24 Oct 1999
TL;DR: This method first locates candidate text regions directly in the DCT compressed domain, and then reconstructs the candidate regions for further refinement in the spatial domain, so that only a small amount of decoding is required.
Abstract: We present a method to automatically locate captions in MPEG video. Caption text regions are segmented from the background using their distinguishing texture characteristics. This method first locates candidate text regions directly in the DCT compressed domain, and then reconstructs the candidate regions for further refinement in the spatial domain. Therefore, only a small amount of decoding is required. The proposed algorithm achieves about 4.0% false reject rate and less than 5.7% false positive rate on a variety of MPEG compressed video containing more than 42,000 frames.
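
The texture cue amounts to AC-coefficient energy per 8×8 DCT block; a pixel-domain sketch of that test follows (in MPEG the coefficients come straight from the compressed stream, which is what saves the decoding; the threshold here is ad hoc):

    import numpy as np
    from scipy.fft import dctn

    def block_ac_energy(gray, block=8):
        # AC energy of each block's DCT: high values flag the dense,
        # high-contrast texture that is typical of caption text.
        h, w = gray.shape
        h, w = h - h % block, w - w % block
        energy = np.zeros((h // block, w // block))
        for i in range(0, h, block):
            for j in range(0, w, block):
                c = dctn(gray[i:i + block, j:j + block], norm="ortho")
                energy[i // block, j // block] = (c ** 2).sum() - c[0, 0] ** 2
        return energy

    rng = np.random.default_rng(2)
    frame = np.full((64, 64), 30.0)                            # flat background
    frame[24:40, 8:56] = rng.integers(0, 255, size=(16, 48))   # "text-like" strip
    candidates = block_ac_energy(frame) > 1e4                  # ad hoc threshold
    print(candidates.astype(int))                              # 1s mark candidate blocks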

Journal ArticleDOI
TL;DR: The constant-statistics (CS) algorithm for nonuniformity correction of infrared focal plane arrays (IRFPAs) and other imaging arrays is developed and shown to improve the overall accuracy of the correction procedure.
Abstract: Using clues from neurobiological adaptation, we have developed the constant-statistics (CS) algorithm for nonuniformity correction of infrared focal plane arrays (IRFPAs) and other imaging arrays. The CS model provides an efficient implementation that can also eliminate much of the ghosting artifact that plagues all scene-based nonuniformity correction (NUC) algorithms. The CS algorithm with deghosting is demonstrated on synthetic and real infrared (IR) sequences and shown to improve the overall accuracy of the correction procedure.
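
The constant-statistics assumption, that over enough frames every detector observes the same temporal mean and deviation, reduces to a per-pixel offset/gain normalization; a minimal sketch without the paper's deghosting step:

    import numpy as np

    def cs_correct(frames, eps=1e-6):
        # Constant-statistics correction: normalize each pixel by its own
        # temporal mean (offset) and standard deviation (gain), mapping
        # all detectors to a common response.
        mu = frames.mean(axis=0)
        sigma = frames.std(axis=0) + eps
        return (frames - mu) / sigma

    # Simulate fixed-pattern nonuniformity: per-pixel gain g and offset o.
    rng = np.random.default_rng(3)
    scene = rng.standard_normal((200, 32, 32))     # 200 frames of moving scene
    g = 1 + 0.2 * rng.standard_normal((32, 32))    # detector gains
    o = 5 * rng.standard_normal((32, 32))          # detector offsets
    fixed = cs_correct(g * scene + o)
    print(np.std(fixed.mean(axis=0)))              # residual fixed pattern ~ 0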

Proceedings ArticleDOI
07 Jun 1999
TL;DR: A learning paradigm to incrementally train the classifiers as additional training samples become available is developed, and preliminary results for feature size reduction using clustering techniques are shown.
Abstract: Grouping images into (semantically) meaningful categories using low level visual features is a challenging and important problem in content based image retrieval. Using binary Bayesian classifiers, we attempt to capture high level concepts from low level image features under the constraint that the test image does belong to one of the classes of interest. Specifically, we consider the hierarchical classification of vacation images; at the highest level, images are classified into indoor/outdoor classes, outdoor images are further classified into city/landscape classes, and finally, a subset of landscape images is classified into sunset, forest, and mountain classes. We demonstrate that a small codebook (the optimal size of codebook is selected using a modified MDL criterion) extracted from a vector quantizer can be used to estimate the class-conditional densities of the observed features needed for the Bayesian methodology. On a database of 6931 vacation photographs, our system achieved an accuracy of 90.5% for indoor vs. outdoor classification, 95.3% for city vs. landscape classification, 96.6% for sunset vs. forest and mountain classification, and 95.5% for forest vs. mountain classification. We further develop a learning paradigm to incrementally train the classifiers as additional training samples become available and also show preliminary results for feature size reduction using clustering techniques.
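
The codebook-based density estimation can be sketched as: vector-quantize features, build class-conditional codeword histograms, and classify by Bayes rule. The data, dimensions, and codebook size below are illustrative (the paper selects codebook size with a modified MDL criterion), and using one feature vector per image is a simplification:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)

    # Made-up low-level features for two classes (say, indoor vs. outdoor):
    indoor = rng.normal(0.0, 1.0, size=(300, 8))
    outdoor = rng.normal(1.5, 1.0, size=(300, 8))
    X = np.vstack([indoor, outdoor])
    y = np.array([0] * 300 + [1] * 300)

    # Codebook from a vector quantizer (size chosen arbitrarily here):
    K = 16
    vq = KMeans(n_clusters=K, n_init=10, random_state=0).fit(X)

    # Class-conditional codeword histograms, Laplace-smoothed:
    codes = vq.predict(X)
    p_code_given_class = np.ones((2, K))
    for code, cls in zip(codes, y):
        p_code_given_class[cls, code] += 1
    p_code_given_class /= p_code_given_class.sum(axis=1, keepdims=True)

    def classify(features):
        # Bayes rule with equal class priors.
        code = vq.predict(features.reshape(1, -1))[0]
        return int(np.argmax(p_code_given_class[:, code]))

    print(classify(rng.normal(1.5, 1.0, size=8)))  # -> 1 (the "outdoor" model)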