
Showing papers by "Hewlett-Packard" published in 2001


Proceedings ArticleDOI
02 Apr 2001
TL;DR: This work proposes a novel sequential pattern mining method, called PrefixSpan (i.e., Prefix-projected Sequential pattern mining), which explores prefix-projection in sequential pattern mining, and shows that PrefixSpan outperforms both the Apriori-based GSP algorithm and another recently proposed method, FreeSpan, in mining large sequence databases.
Abstract: Sequential pattern mining is an important data mining problem with broad applications. It is challenging since one may need to examine a combinatorially explosive number of possible subsequence patterns. Most of the previously developed sequential pattern mining methods follow the methodology of Apriori, which may substantially reduce the number of combinations to be examined. However, Apriori still encounters problems when a sequence database is large and/or when sequential patterns to be mined are numerous. We propose a novel sequential pattern mining method, called PrefixSpan (i.e., Prefix-projected Sequential pattern mining), which explores prefix-projection in sequential pattern mining. PrefixSpan mines the complete set of patterns but greatly reduces the efforts of candidate subsequence generation. Moreover, prefix-projection substantially reduces the size of projected databases and leads to efficient processing. Our performance study shows that PrefixSpan outperforms both the Apriori-based GSP algorithm and another recently proposed method, FreeSpan, in mining large sequence databases.

1,975 citations
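The prefix-projection idea in the abstract can be sketched in a few lines of Python. This is a toy illustration, not the authors' implementation: it handles only single-item sequence elements (the real PrefixSpan also supports itemset elements and pseudo-projection).

```python
from collections import defaultdict

def prefixspan(db, min_support, prefix=None):
    """Mine frequent sequential patterns from db (a list of sequences,
    each a list of items) by recursive prefix-projection."""
    prefix = prefix or []
    patterns = []
    # Count which items can extend the current prefix.
    counts = defaultdict(int)
    for seq in db:
        for item in set(seq):
            counts[item] += 1
    for item, count in counts.items():
        if count < min_support:
            continue
        new_prefix = prefix + [item]
        patterns.append((new_prefix, count))
        # Project the database: keep only the suffix after the
        # first occurrence of `item` in each sequence.
        projected = []
        for seq in db:
            if item in seq:
                suffix = seq[seq.index(item) + 1:]
                if suffix:
                    projected.append(suffix)
        patterns.extend(prefixspan(projected, min_support, new_prefix))
    return patterns
```

For example, `prefixspan([['a','b','c'], ['a','c'], ['a','b','c'], ['b','c']], min_support=2)` finds `(['a','c'], 3)` and `(['a','b','c'], 2)` among others; the recursion only ever extends prefixes already known to be frequent, which is what avoids Apriori-style candidate generation.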


Journal ArticleDOI
TL;DR: In this paper, the theory underpinning the measurement of density matrices of a pair of quantum two-level systems is described, and a detailed error analysis is presented, allowing errors in quantities derived from the density matrix, such as the entropy or entanglement of formation, to be estimated.
Abstract: We describe in detail the theory underpinning the measurement of density matrices of a pair of quantum two-level systems ("qubits"). Our particular emphasis is on qubits realized by the two polarization degrees of freedom of a pair of entangled photons generated in a down-conversion experiment; however, the discussion applies in general, regardless of the actual physical realization. Two techniques are discussed, namely, a tomographic reconstruction (in which the density matrix is linearly related to a set of measured quantities) and a maximum likelihood technique which requires numerical optimization (but has the advantage of producing density matrices that are always non-negative definite). In addition, a detailed error analysis is presented, allowing errors in quantities derived from the density matrix, such as the entropy or entanglement of formation, to be estimated. Examples based on down-conversion experiments are used to illustrate our results.

1,838 citations


Journal ArticleDOI
TL;DR: The focus of this work is on spatial segmentation, where a criterion for "good" segmentation using the class-map is proposed; applying the criterion to local windows in the class-map results in the "J-image," in which high and low values correspond to possible boundaries and interiors of color-texture regions.
Abstract: A method for unsupervised segmentation of color-texture regions in images and video is presented. This method, which we refer to as JSEG, consists of two independent steps: color quantization and spatial segmentation. In the first step, colors in the image are quantized to several representative classes that can be used to differentiate regions in the image. The image pixels are then replaced by their corresponding color class labels, thus forming a class-map of the image. The focus of this work is on spatial segmentation, where a criterion for "good" segmentation using the class-map is proposed. Applying the criterion to local windows in the class-map results in the "J-image," in which high and low values correspond to possible boundaries and interiors of color-texture regions. A region growing method is then used to segment the image based on the multiscale J-images. A similar approach is applied to video sequences. An additional region tracking scheme is embedded into the region growing process to achieve consistent segmentation and tracking results, even for scenes with nonrigid object motion. Experiments show the robustness of the JSEG algorithm on real images and video.

1,476 citations
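The J criterion described in the abstract can be made concrete. The sketch below is my own notation, not the authors' code: it computes J = (S_T − S_W)/S_W over a class-map, where S_T is the total spatial scatter of pixel positions and S_W is the scatter within each color class. Spatially separated classes give high J; intermixed classes give J near zero.

```python
def j_value(class_map):
    """Compute the J criterion for a 2-D class-map (list of rows of
    class labels): J = (S_T - S_W) / S_W."""
    points = [(y, x, c) for y, row in enumerate(class_map)
              for x, c in enumerate(row)]

    def scatter(pts):
        # Sum of squared distances of positions to their centroid.
        n = len(pts)
        my = sum(p[0] for p in pts) / n
        mx = sum(p[1] for p in pts) / n
        return sum((p[0] - my) ** 2 + (p[1] - mx) ** 2 for p in pts)

    s_t = scatter(points)                       # total scatter
    classes = set(p[2] for p in points)
    s_w = sum(scatter([p for p in points if p[2] == c])
              for c in classes)                 # within-class scatter
    return (s_t - s_w) / s_w
```

On a 4x4 map split into left and right halves J is large, while on a checkerboard J is essentially zero, matching the paper's intuition that high J signals region boundaries.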


Proceedings ArticleDOI
11 Jun 2001
TL;DR: This work proposes an on-demand routing scheme called split multipath routing (SMR) that establishes and utilizes multiple routes of maximally disjoint paths and uses a per-packet allocation scheme to distribute data packets into multiple paths of active sessions.
Abstract: In recent years, routing has been the most focused area in ad hoc networks research. On-demand routing in particular is widely developed in bandwidth-constrained mobile wireless ad hoc networks because of its effectiveness and efficiency. Most proposed on-demand routing protocols, however, build and rely on a single route for each data session. Whenever there is a link disconnection on the active route, the routing protocol must perform a route recovery process. In QoS routing for wired networks, multiple path routing is popularly used. Multiple routes are, however, constructed using link-state or distance vector algorithms which are not well-suited for ad hoc networks. We propose an on-demand routing scheme called split multipath routing (SMR) that establishes and utilizes multiple routes of maximally disjoint paths. Providing multiple routes helps minimize the route recovery process and control message overhead. Our protocol uses a per-packet allocation scheme to distribute data packets into multiple paths of active sessions. This traffic distribution efficiently utilizes available network resources and prevents nodes of the route from being congested in heavily loaded traffic situations. We evaluate the performance of our scheme using extensive simulation.

1,325 citations
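The per-packet allocation scheme mentioned in the abstract can be sketched as a round-robin splitter over the discovered routes. This is an illustrative guess at the allocation policy only; SMR's route discovery and maintenance machinery is omitted.

```python
from itertools import cycle

def split_packets(packets, routes):
    """Distribute a session's data packets round-robin across multiple
    (maximally disjoint) routes, one route per packet."""
    assignment = {tuple(r): [] for r in routes}
    for packet, route in zip(packets, cycle(routes)):
        assignment[tuple(route)].append(packet)
    return assignment
```

Spreading packets this way is what lets the load on any one path stay low, which is the congestion-avoidance argument the abstract makes.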


Journal ArticleDOI
TL;DR: A number of local search strategies that utilize high degree nodes in power-law graphs and that have costs scaling sublinearly with the size of the graph are introduced and demonstrated on the GNUTELLA peer-to-peer network.
Abstract: Many communication and social networks have power-law link distributions, containing a few nodes that have a very high degree and many with low degree. The high connectivity nodes play the important role of hubs in communication and networking, a fact that can be exploited when designing efficient search algorithms. We introduce a number of local search strategies that utilize high degree nodes in power-law graphs and that have costs scaling sublinearly with the size of the graph. We also demonstrate the utility of these strategies on the GNUTELLA peer-to-peer network.

1,254 citations
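The high-degree-seeking strategy the abstract describes is simple to sketch: at each hop, move to the unvisited neighbor with the most links, so the walk quickly reaches hubs that "see" a large fraction of the network. This is a minimal sketch with my own conventions (graph as a dict of neighbor sets), not the paper's exact algorithm.

```python
def high_degree_search(graph, start, target, max_steps=100):
    """Walk from start toward target by always moving to the
    highest-degree unvisited neighbor; return the hop count at which
    target becomes a neighbor, or None on failure."""
    visited = {start}
    node = start
    for step in range(max_steps):
        if target in graph[node]:
            return step + 1
        candidates = [n for n in graph[node] if n not in visited]
        if not candidates:
            return None  # dead end (this sketch does not backtrack)
        node = max(candidates, key=lambda n: len(graph[n]))
        visited.add(node)
    return None
```

In a star-shaped (hub-dominated) graph the walk finds any node in two hops via the hub, which is the sublinear-cost behavior the paper exploits in power-law networks.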


Journal ArticleDOI
TL;DR: It is shown that it is possible to build a hybrid classifier that will perform at least as well as the best available classifier for any target conditions, and in some cases, the performance of the hybrid actually can surpass that of the best known classifier.
Abstract: In real-world environments it usually is difficult to specify target operating conditions precisely, for example, target misclassification costs. This uncertainty makes building robust classification systems problematic. We show that it is possible to build a hybrid classifier that will perform at least as well as the best available classifier for any target conditions. In some cases, the performance of the hybrid actually can surpass that of the best known classifier. This robust performance extends across a wide variety of comparison frameworks, including the optimization of metrics such as accuracy, expected cost, lift, precision, recall, and workforce utilization. The hybrid also is efficient to build, to store, and to update. The hybrid is based on a method for the comparison of classifier performance that is robust to imprecise class distributions and misclassification costs. The ROC convex hull (ROCCH) method combines techniques from ROC analysis, decision analysis and computational geometry, and adapts them to the particulars of analyzing learned classifiers. The method is efficient and incremental, minimizes the management of classifier performance data, and allows for clear visual comparisons and sensitivity analyses. Finally, we point to empirical evidence that a robust hybrid classifier indeed is needed for many real-world problems.

1,134 citations
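The ROC convex hull at the heart of the ROCCH method can be computed with a standard monotone-chain scan. The sketch below (my implementation, under the usual convention that operating points are (false positive rate, true positive rate) pairs) keeps only the upper hull; classifiers strictly below it are suboptimal for every combination of class priors and misclassification costs.

```python
def roc_convex_hull(points):
    """Return the upper convex hull of classifier operating points
    given as (fpr, tpr) pairs, including the trivial classifiers
    (0, 0) and (1, 1)."""
    pts = sorted(set(points) | {(0.0, 0.0), (1.0, 1.0)})
    hull = []
    for p in pts:
        # Pop the last point while it would make the hull concave
        # from above (non-negative cross product).
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull
```

A hybrid classifier then interpolates between adjacent hull vertices: for any target conditions, the best achievable operating point lies on this hull, which is the robustness claim in the abstract.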


Patent
05 Nov 2001
TL;DR: In this article, a system and method for maintaining consistent server-side state across a pool of collaborating servers with independent state repositories is presented; when a client performs an event on a collaborating server which affects such state on the server, it publishes notification of the event into a queue maintained in client-side state which is shared by all of the collaborating servers in the pool.
Abstract: A system and method are provided for maintaining consistent server-side state across a pool of collaborating servers with independent state repositories. When a client performs an event on a collaborating server which affects such state on the server, it publishes notification of the event into a queue maintained in client-side state which is shared by all of the collaborating servers in the pool. As the client makes requests to servers within the pool, the queue is thus included in each request. When a collaborating server needs to access its server-side state in question, it first discerns events new to it from the queue and replicates their effects into such server-side state. As a result, the effects of events upon server-side state are replicated asynchronously across the servers in the pool, as the client navigates among them.

992 citations


Book
01 Nov 2001
TL;DR: Sellen and Harper as discussed by the authors used ethnography and cognitive psychology to study the use of paper from the level of the individual up to that of organizational culture, and concluded that paper will continue to play an important role in office life.
Abstract: From the Publisher: Over the past thirty years, many people have proclaimed the imminent arrival of the paperless office. Yet even the World Wide Web, which allows almost any computer to read and display another computer's documents, has only increased the amount of printing done by computer users. The use of e-mail in an organization increases paper consumption by an average of 40 percent. In The Myth of the Paperless Office, Abigail Sellen and Richard Harper study paper usage as a way to understand the work that people do and the reasons they do it the way they do. Using the tools of ethnography and cognitive psychology, they look at paper use from the level of the individual up to that of organizational culture. Central to Sellen and Harper's investigation is the concept of "affordances" -- the activities that an object allows, or affords. The physical properties of paper (its being thin, light, porous, opaque, and flexible) afford the human actions of grasping, carrying, folding, writing, and so on. The concept of affordance allows us to compare the affordances of paper with those of existing digital devices. We can then ask what kinds of devices or systems would make new kinds of activities possible or better support current activities. The authors argue that paper will continue to play an important role in office life. Rather than pursue the ideal of the paperless office, we should work toward a future in which paper and electronic document tools work in concert and organizational processes make optimal use of both.

795 citations


Proceedings ArticleDOI
01 Aug 2001
TL;DR: A new form of texture mapping that produces increased photorealism, and several reflectance function transformations that act as contrast enhancement operators are presented that are useful in the study of ancient archeological clay and stone writings.
Abstract: In this paper we present a new form of texture mapping that produces increased photorealism. Coefficients of a biquadratic polynomial are stored per texel, and used to reconstruct the surface color under varying lighting conditions. Like bump mapping, this allows the perception of surface deformations. However, our method is image based, and photographs of a surface under varying lighting conditions can be used to construct these maps. Unlike bump maps, these Polynomial Texture Maps (PTMs) also capture variations due to surface self-shadowing and interreflections, which enhance realism. Surface colors can be efficiently reconstructed from polynomial coefficients and light directions with minimal fixed-point hardware. We have also found PTMs useful for producing a number of other effects such as anisotropic and Fresnel shading models and variable depth of focus. Lastly, we present several reflectance function transformations that act as contrast enhancement operators. We have found these particularly useful in the study of ancient archeological clay and stone writings.

640 citations
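The per-texel reconstruction the abstract describes is easy to state concretely. Assuming the standard six-term biquadratic (the coefficient ordering below is my assumption), luminance for a light direction projected into the texture plane as (lu, lv) is a single polynomial evaluation:

```python
def ptm_luminance(coeffs, lu, lv):
    """Evaluate one Polynomial Texture Map texel: six stored
    biquadratic coefficients reconstruct luminance for a light
    direction projected to (lu, lv) in the texture plane."""
    a0, a1, a2, a3, a4, a5 = coeffs
    return (a0 * lu * lu + a1 * lv * lv + a2 * lu * lv
            + a3 * lu + a4 * lv + a5)
```

Because this is just a dot product of six coefficients with six light-direction terms, it maps directly onto the "minimal fixed-point hardware" evaluation the paper mentions; the coefficients themselves are fit per texel by least squares from photographs under known lights.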


Journal ArticleDOI
TL;DR: A new probabilistic instantiation of this correlation framework is proposed and shown to deliver very good color constancy on both synthetic and real images, and is rich enough to allow many existing algorithms to be expressed within it.
Abstract: The paper considers the problem of illuminant estimation: how, given an image of a scene, recorded under an unknown light, we can recover an estimate of that light. Obtaining such an estimate is a central part of solving the color constancy problem. Thus, the work presented will have applications in fields such as color-based object recognition and digital photography. Rather than attempting to recover a single estimate of the illuminant, we instead set out to recover a measure of the likelihood that each of a set of possible illuminants was the scene illuminant. We begin by determining which image colors can occur (and how these colors are distributed) under each of a set of possible lights. We discuss how, for a given camera, we can obtain this knowledge. We then correlate this information with the colors in a particular image to obtain a measure of the likelihood that each of the possible lights was the scene illuminant. Finally, we use this likelihood information to choose a single light as an estimate of the scene illuminant. Computation is expressed and performed in a generic correlation framework which we develop. We propose a new probabilistic instantiation of this correlation framework and show that it delivers very good color constancy on both synthetic and real images. We further show that the proposed framework is rich enough to allow many existing algorithms to be expressed within it: the gray-world and gamut-mapping algorithms are presented in this framework and we also explore the relationship of these algorithms to other probabilistic and neural network approaches to color constancy.

612 citations


Journal ArticleDOI
TL;DR: A study of mobile workers that highlights different facets of access to remote people and information, and different facets of anytime, anywhere; four key factors in mobile work are identified.
Abstract: The rapid and accelerating move towards use of mobile technologies has increasingly provided people and organizations with the ability to work away from the office and on the move. The new ways of working afforded by these technologies are often characterized in terms of access to information and people anytime, anywhere. This article presents a study of mobile workers that highlights different facets of access to remote people and information, and different facets of anytime, anywhere. Four key factors in mobile work are identified: the role of planning, working in "dead time," accessing remote technological and informational resources, and monitoring the activities of remote colleagues. By reflecting on these issues, we can better understand the role of technology and artifacts in mobile work and identify the opportunities for the development of appropriate technological solutions to support mobile workers.

Journal ArticleDOI
TL;DR: A watermarking scheme for ownership verification and authentication that requires a user key during both the insertion and the extraction procedures, and which can detect any modification made to the image and indicate the specific locations that have been modified.
Abstract: We describe a watermarking scheme for ownership verification and authentication. Depending on the desire of the user, the watermark can be either visible or invisible. The scheme can detect any modification made to the image and indicate the specific locations that have been modified. If the correct key is specified in the watermark extraction procedure, then an output image is returned showing a proper watermark, indicating the image is authentic and has not been changed since the insertion of the watermark. Any modification would be reflected in a corresponding error in the watermark. If the key is incorrect, or if the image was not watermarked, or if the watermarked image is cropped, the watermark extraction algorithm will return an image that resembles random noise. Since it requires a user key during both the insertion and the extraction procedures, it is not possible for an unauthorized user to insert a new watermark or alter the existing watermark so that the resulting image will pass the test. We present secret key and public key versions of the technique.

Proceedings ArticleDOI
31 Oct 2001
TL;DR: The present invention computer method and apparatus determines music similarity by generating a K-means cluster signature and a beat signature and the beat of the music is included in the subsequent distance measurement.
Abstract: The present invention computer method and apparatus determines music similarity by generating a K-means (instead of Gaussian) cluster signature and a beat signature for each piece of music. The beat of the music is included in the subsequent distance measurement.

PatentDOI
05 Oct 2001
TL;DR: In this paper, a cooling system is configured to adjust cooling fluid flow to various racks located throughout a data center based upon the detected or anticipated temperatures at various locations throughout the data center.
Abstract: A cooling system is configured to adjust cooling fluid flow to various racks located throughout a data center based upon the detected or anticipated temperatures at various locations throughout the data center. In one respect, by substantially increasing the cooling fluid flow to those racks dissipating greater amounts of heat and by substantially decreasing the cooling fluid flow to those racks dissipating lesser amounts of heat, the amount of energy required to operate the cooling system may be relatively reduced. Thus, instead of operating the devices, e.g., compressors, fans, etc., of the cooling system at substantially 100 percent of the anticipated heat dissipation from the racks, those devices may be operated according to the actual cooling needs. In addition, the racks may be positioned throughout the data center according to their anticipated heat loads to thereby enable computer room air conditioning (CRAC) units located at various positions throughout the data center to operate in a more efficient manner. In one respect, the positioning of the racks may be determined through implementation of numerical modeling and metrology of the cooling fluid flow throughout the data center. In addition, the numerical modeling may be implemented to control the volume flow rate and velocity of the cooling fluid flow through each of the vents.

Proceedings Article
01 May 2001
TL;DR: Jena, an RDF API in Java based on an interpretation of the W3C RDF Model and Syntax Specification, is described.
Abstract: Some aspects of W3C's RDF Model and Syntax Specification require careful reading and interpretation to produce a conformant implementation. Issues have arisen around anonymous resources, reification and RDF Graphs. These and other issues are identified, discussed and an interpretation of each is proposed. Jena, an RDF API in Java based on this interpretation, is described.

Patent
Michael Padovano1
21 Feb 2001
TL;DR: In this paper, a method, system, and apparatus for accessing a plurality of storage devices in a storage area network (SAN) as network attached storage (NAS) in a data communication network is described.
Abstract: A method, system, and apparatus for accessing a plurality of storage devices in a storage area network (SAN) as network attached storage (NAS) in a data communication network is described. A SAN server includes a first interface and a second interface. The first interface is configured to be coupled to the SAN. The second interface is coupled to a first data communication network. A NAS server includes a third interface and a fourth interface. The third interface is configured to be coupled to a second data communication network. The fourth interface is coupled to the first data communication network. The SAN server allocates a first portion of the plurality of storage devices in the SAN to be accessible through the second interface to at least one first host coupled to the first data communication network. The SAN server allocates a second portion of the plurality of storage devices in the SAN to the NAS server. The NAS server configures access to the second portion of the plurality of storage devices to at least one second host coupled to the second data communication network.

Patent
04 Dec 2001
TL;DR: In an enterprise-wide network which includes at least one centralized computer and a plurality of desktop computers, a method for enterprise system management comprising the steps of: storing an Already Have list for each desktop; storing a plurality of Should Have sub-lists; and generating a respective Should Have list from the stored sub-lists for a respective desktop computer during configuration of the desktop computer, as mentioned in this paper.
Abstract: In an enterprise-wide network which includes at least one centralized computer and a plurality of desktop computers, a method for enterprise system management comprising the steps of: storing an Already Have list for each desktop; storing a plurality of Should Have sub-lists; and generating a respective Should Have list from the stored sub-lists for a respective desktop computer during configuration of the desktop computer; wherein the schema of the generated Should Have list includes at least one dynamic linkage which encompasses more than one Should Have sub-list.

Proceedings ArticleDOI
11 Jun 2001
TL;DR: This work presents dynamic load-aware routing (DLAR) protocol that considers intermediate node routing loads as the primary route selection metric and describes three DLAR algorithms and shows their effectiveness by presenting and comparing simulation results with an ad hoc routing protocol that uses the shortest paths.
Abstract: Ad hoc networks are deployed in situations where no base station is available and a network has to be built impromptu. Since there is no wired backbone, each host is a router and a packet forwarder. Each node may be mobile, and topology changes frequently and unpredictably. Routing protocol development has received much attention because mobility management and efficient bandwidth and power usage are critical in ad hoc networks. No existing protocol, however, considers load as the main route selection criterion. This routing philosophy can lead to network congestion and create bottlenecks. We present the dynamic load-aware routing (DLAR) protocol that considers intermediate node routing loads as the primary route selection metric. The protocol also monitors the congestion status of active routes and reconstructs the path when nodes of the route have their interface queue overloaded. We describe three DLAR algorithms and show their effectiveness by presenting and comparing simulation results with an ad hoc routing protocol that uses the shortest paths.

Journal ArticleDOI
TL;DR: In this paper, the authors present eFlow, a system that supports the specification, enactment, and management of composite e-services, modeled as processes that are enacted by a service process engine.

Journal ArticleDOI
Yining Deng1, B.S. Manjunath, Charles Kenney, M.S. Moore, H. Shin 
TL;DR: Experimental results show that this compact color descriptor is effective and compares favorably with the traditional color histogram in terms of overall computational complexity.
Abstract: A compact color descriptor and an efficient indexing method for this descriptor are presented. The target application is similarity retrieval in large image databases using color. Colors in a given region are clustered into a small number of representative colors. The feature descriptor consists of the representative colors and their percentages in the region. A similarity measure similar to the quadratic color histogram distance measure is defined for this descriptor. The representative colors can be indexed in the three-dimensional (3-D) color space thus avoiding the high-dimensional indexing problems associated with the traditional color histogram. For similarity retrieval, each representative color in the query image or region is used independently to find regions containing that color. The matches from all of the query colors are then combined to obtain the final retrievals. An efficient indexing scheme for fast retrieval is presented. Experimental results show that this compact descriptor is effective and compares favorably with the traditional color histogram in terms of overall computational complexity.
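A distance of the kind the abstract describes (similar to the quadratic color histogram distance) can be sketched for two descriptors, each a list of (representative RGB color, percentage) pairs. The linear similarity weighting below is a common choice for such quadratic-form distances, not necessarily the authors' exact definition.

```python
def color_dist(c1, c2):
    """Euclidean distance between two RGB colors."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def descriptor_distance(d1, d2, d_max=255 * 3 ** 0.5):
    """Quadratic-form distance between two compact color descriptors,
    each a list of (representative_color, percentage) pairs."""
    def sim(ca, cb):
        # Similarity weight: 1 for identical colors, 0 beyond d_max.
        return max(0.0, 1.0 - color_dist(ca, cb) / d_max)

    def quad(da, db):
        return sum(sim(ca, cb) * p * q
                   for ca, p in da for cb, q in db)

    # D^2 = p'Ap + q'Aq - 2 p'Aq with A the color-similarity matrix.
    return quad(d1, d1) + quad(d2, d2) - 2 * quad(d1, d2)
```

Unlike a full-resolution histogram, each descriptor carries only a handful of representative colors, so this double loop stays tiny, which is the computational advantage the experiments report.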

Journal ArticleDOI
TL;DR: This paper found that women are more likely than men to maintain kin relationships by e-mail, and that women's messages sent to people far away are more filled with personal content and more likely to be exchanged in intense bursts.
Abstract: Do the gender differences found when men and women maintain personal relationships in person and on the phone also emerge when they use electronic mail? Alternately, does e-mail change these ways of interacting? The authors explore the types of relationships women and men maintain by e-mail, differences in their e-mail use locally and at a distance, and differences in the contents of messages they send. The findings are based on qualitative and quantitative data collected during a 4-year period. These data suggest that using e-mail to communicate with relatives and friends replicates preexisting gender differences. Compared to men, women find e-mail contact with friends and family more gratifying. Women are more likely than men to maintain kin relationships by e-mail. They are more likely than men to use e-mail to keep in touch with people who live far away. Women's messages sent to people far away are more filled with personal content and are more likely to be exchanged in intense bursts. The fit between women's expressive styles and the features of e-mail seems to be making it especially easy for women to expand their distant social networks.

Journal ArticleDOI
TL;DR: In this article, the authors proposed two potential realizations for quantum bits based on nanometre-scale magnetic particles of large spin S and high-anisotropy molecular clusters.
Abstract: We propose two potential realizations for quantum bits based on nanometre-scale magnetic particles of large spin S and high-anisotropy molecular clusters. In case (1) the bit-value basis states |0⟩ and |1⟩ are the ground and first excited spin states Sz = S and S-1, separated by an energy gap given by the ferromagnetic resonance frequency. In case (2), when there is significant tunnelling through the anisotropy barrier, the qubit states correspond to the symmetric, |0⟩, and antisymmetric, |1⟩, combinations of the twofold degenerate ground state Sz = ±S. In each case the temperature of operation must be low compared to the energy gap, Δ, between the states |0⟩ and |1⟩. The gap Δ in case (2) can be controlled with an external magnetic field perpendicular to the easy axis of the molecular cluster. The states of different molecular clusters and magnetic particles may be entangled by connecting them by superconducting lines with Josephson switches, leading to the potential for quantum computing hardware.

Patent
22 May 2001
TL;DR: In this article, a web server in the device provides access to the user interface functions for the device through a device web page, such that a user of the web browser accesses the user interfaces through the web page.
Abstract: Web access functionality is embedded in a device to enable low cost widely accessible and enhanced user interface functions for the device. A web server in the device provides access to the user interface functions for the device through a device web page. A network interface in the device enables access to the web page by a web browser such that a user of the web browser accesses the user interface functions for the device through the web page.

Journal ArticleDOI
TL;DR: This paper presents Minerva, a suite of tools for designing storage systems automatically, and shows that Minerva can successfully handle a workload with substantial complexity (a decision-support database benchmark).
Abstract: Enterprise-scale storage systems, which can contain hundreds of host computers and storage devices and up to tens of thousands of disks and logical volumes, are difficult to design. The volume of choices that need to be made is massive, and many choices have unforeseen interactions. Storage system design is tedious and complicated to do by hand, usually leading to solutions that are grossly over-provisioned, substantially under-performing or, in the worst case, both. To solve the configuration nightmare, we present Minerva: a suite of tools for designing storage systems automatically. Minerva uses declarative specifications of application requirements and device capabilities; constraint-based formulations of the various sub-problems; and optimization techniques to explore the search space of possible solutions. This paper also explores and evaluates the design decisions that went into Minerva, using specialized micro- and macro-benchmarks. We show that Minerva can successfully handle a workload with substantial complexity (a decision-support database benchmark). Minerva created a 16-disk design in only a few minutes that achieved the same performance as a 30-disk system manually designed by human experts. Of equal importance, Minerva was able to predict the resulting system's performance before it was built.

Proceedings ArticleDOI
08 Jul 2001
TL;DR: A method for modeling the background that uses per-pixel, time-adaptive, Gaussian mixtures in the combined input space of depth and luminance-invariant color is proposed, making it well-suited for a wide range of practical applications in video event detection and recognition.
Abstract: Segmentation of novel or dynamic objects in a scene, often referred to as "background subtraction" or "foreground segmentation", is a critical early step in most computer vision applications in domains such as surveillance and human-computer interaction. All previously described real-time methods fail to handle properly one or more common phenomena, such as global illumination changes, shadows, inter-reflections, similarity of foreground color to background, and non-static backgrounds (e.g. active video displays or trees waving in the wind). The advent of hardware and software for real-time computation of depth imagery makes better approaches possible. We propose a method for modeling the background that uses per-pixel, time-adaptive, Gaussian mixtures in the combined input space of depth and luminance-invariant color. This combination in itself is novel, but we further improve it by introducing the ideas of (1) modulating the background model learning rate based on scene activity, and (2) making color-based segmentation criteria dependent on depth observations. Our experiments show that the method possesses much greater robustness to problematic phenomena than the prior state-of-the-art, without sacrificing real-time performance, making it well-suited for a wide range of practical applications in video event detection and recognition.
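The per-pixel, time-adaptive Gaussian mixture idea can be sketched for a single pixel and a scalar feature. The paper models depth and luminance-invariant color jointly; the sketch below is a simplified, Stauffer-Grimson-style online update with class name and constants of my own choosing.

```python
class PixelMixture:
    """Adaptive Gaussian mixture for one pixel (scalar feature);
    a minimal background-modeling sketch."""

    def __init__(self, k=3, alpha=0.05, match_sigmas=2.5):
        self.k = k                  # max number of modes per pixel
        self.alpha = alpha          # learning rate
        self.match = match_sigmas   # match threshold in std deviations
        self.modes = []             # each mode: [weight, mean, variance]

    def update(self, x):
        """Fold observation x into the model; return True if x is
        explained by a well-established (background) mode."""
        for m in self.modes:
            if (x - m[1]) ** 2 <= (self.match ** 2) * m[2]:
                m[0] += self.alpha * (1.0 - m[0])        # grow weight
                m[1] += self.alpha * (x - m[1])          # move mean
                m[2] += self.alpha * ((x - m[1]) ** 2 - m[2])
                matched = m
                break
        else:
            matched = [self.alpha, x, 1.0]               # new mode
            self.modes.append(matched)
            if len(self.modes) > self.k:                 # evict weakest
                self.modes.remove(min(self.modes, key=lambda m: m[0]))
        for m in self.modes:
            if m is not matched:
                m[0] *= 1.0 - self.alpha                 # decay others
        return matched[0] > 0.5
```

A stable observation accumulates weight and is eventually labeled background, while a novel value spawns a low-weight mode and is labeled foreground; the paper's contributions layer depth-dependent matching and activity-modulated learning rates on top of this basic machinery.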

Journal ArticleDOI
TL;DR: In this paper, the authors investigated entangled mixed states and presented a class of states that have the maximum amount of entanglement for a given linear entropy, which can be succinctly characterized by their degree of entanglement and purity.
Abstract: Two-qubit states occupy a large and relatively unexplored Hilbert space. Such states can be succinctly characterized by their degree of entanglement and purity. In this article we investigate entangled mixed states and present a class of states that have the maximum amount of entanglement for a given linear entropy.
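The two quantities used to characterize states here are standard and easy to compute numerically: linear entropy (mixedness) and concurrence (entanglement, via Wootters' spin-flip formula). The sketch below is generic; nothing in it is specific to the paper's maximally-entangled-mixed-state construction.

```python
# Linear entropy and concurrence of a two-qubit density matrix.
import numpy as np

SY = np.array([[0, -1j], [1j, 0]])  # Pauli sigma-y

def linear_entropy(rho):
    # Normalized so the maximally mixed two-qubit state gives 1.
    return 4.0 / 3.0 * (1.0 - np.trace(rho @ rho).real)

def concurrence(rho):
    spin_flip = np.kron(SY, SY)
    rho_tilde = spin_flip @ rho.conj() @ spin_flip
    lam = np.sqrt(np.sort(np.linalg.eigvals(rho @ rho_tilde).real.clip(0))[::-1])
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# A Bell state is pure (zero linear entropy) and maximally entangled (C = 1).
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_bell = np.outer(psi, psi.conj())
print(linear_entropy(rho_bell), concurrence(rho_bell))
```

Sweeping a family of mixed states through these two functions traces out the entanglement-purity plane the abstract refers to; the paper's class of states sits on the boundary of maximum concurrence at each linear entropy.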

Journal ArticleDOI
TL;DR: A direct analogue of Burke's theorem for the Brownian queue was stated and proved by Harrison (Brownian Motion and Stochastic Flow Systems, Wiley, New York, 1985) as discussed by the authors.

Journal ArticleDOI
TL;DR: It is proved that one-way communications is necessary and sufficient for entanglement manipulations of a pure bipartite state and the supremum probability of obtaining a maximally entangled state (of any dimension) from an arbitrary state is determined.
Abstract: Suppose two distant observers Alice and Bob share a pure bipartite quantum state. By applying local operations and communicating with each other using a classical channel, Alice and Bob can manipulate it into some other states. Previous investigations of entanglement manipulations have been largely limited to a small number of strategies and their average outcomes. Here we consider a general entanglement manipulation strategy, and go beyond the average property. For a pure entangled state shared between two separated persons Alice and Bob, we show that the mathematical interchange symmetry of the Schmidt decomposition can be promoted into a physical symmetry between the actions of Alice and Bob. Consequently, the most general (multistep two-way-communications) strategy of entanglement manipulation of a pure state is, in fact, equivalent to a strategy involving only a single (generalized) measurement by Alice followed by one-way communications of its result to Bob. We also prove that strategies with one-way communications are generally more powerful than those without communications. In summary, one-way communications is necessary and sufficient for entanglement manipulations of a pure bipartite state. The supremum probability of obtaining a maximally entangled state (of any dimension) from an arbitrary state is determined, and a strategy for achieving this probability is constructed explicitly. One important question is whether collective manipulations in quantum mechanics can greatly enhance the probability of large deviations from the average behavior. We answer this question in the negative by showing that, given $n$ pairs of identical partly entangled pure states $|\Psi\rangle$ with entropy of entanglement $E(|\Psi\rangle)$, the probability of getting $nK$ $[K > E(|\Psi\rangle)]$ singlets out of entanglement concentration tends to zero as $n$ tends to infinity.
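The argument turns on the Schmidt decomposition of a pure bipartite state and its entropy of entanglement $E$. Both are straightforward to compute: reshape the state vector into a coefficient matrix and take its SVD; the squared singular values are the Schmidt coefficients, identical whichever party is traced out (the interchange symmetry the abstract exploits). The example state is purely illustrative.

```python
# Schmidt coefficients and entropy of entanglement of a pure bipartite state.
import numpy as np

def schmidt_coefficients(psi, dim_a, dim_b):
    # Squared singular values of the dim_a x dim_b coefficient matrix.
    return np.linalg.svd(psi.reshape(dim_a, dim_b), compute_uv=False) ** 2

def entropy_of_entanglement(psi, dim_a, dim_b):
    p = schmidt_coefficients(psi, dim_a, dim_b)
    p = p[p > 1e-12]            # drop zero coefficients: 0 log 0 := 0
    return float(-(p * np.log2(p)).sum())

# A partly entangled two-qubit state: E lies strictly between 0 and 1,
# so fewer than one singlet per pair can be concentrated on average.
psi = np.array([np.sqrt(0.8), 0.0, 0.0, np.sqrt(0.2)])
print(entropy_of_entanglement(psi, 2, 2))
```

The abstract's large-deviation result says concentration cannot beat this average: extracting singlets at any rate $K > E$ per pair succeeds with probability tending to zero.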

Patent
11 Jan 2001
TL;DR: In this article, the authors present a short-term disposable certificate to a verifier for authentication and demonstrate that the subject has knowledge of a private key corresponding to the public key in the short-time disposable certificate.
Abstract: A PKI (30) includes an off-line registration authority (38) that issues a first unsigned certificate (60) to a subject (34) that binds a public key (62) of the subject to long-term identification information (63) related to the subject and maintains a certificate database (40) of unsigned certificates in which it stores the first unsigned certificate. An on-line credentials server (42) issues a short-term disposable certificate (70) to the subject that binds the public key of the subject from the first unsigned certificate to the long-term identification information related to the subject from the first unsigned certificate. The credentials server maintains a table (44) that contains entries corresponding to valid unsigned certificates stored in the certificate database. The subject presents the short-term disposable certificate to a verifier (36) for authentication and demonstrates that the subject has knowledge of a private key corresponding to the public key (46) in the short-term disposable certificate.
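The credentials-server flow can be sketched schematically: look up the subject's unsigned certificate, check the validity table, and issue a short-lived certificate binding the same public key to the same identity. An HMAC stands in for the server's real public-key signature (so this verifier shares the server key, unlike the patent's scheme), and every name, field, and the 10-minute lifetime are illustrative assumptions.

```python
# Schematic short-term disposable certificate issuance and verification.
# HMAC is a stand-in for a real digital signature; illustrative only.
import hashlib, hmac, json, time

SERVER_KEY = b"credentials-server-signing-key"  # stand-in for a private key

def issue_short_term_cert(cert_db, valid_table, subject_id, lifetime_s=600):
    if subject_id not in valid_table:          # revoked or unknown subject
        raise PermissionError("no valid unsigned certificate on file")
    unsigned = cert_db[subject_id]             # from the registration authority
    body = {"subject": unsigned["identity"],
            "public_key": unsigned["public_key"],
            "expires": time.time() + lifetime_s}
    payload = json.dumps(body, sort_keys=True).encode()
    return {"body": body,
            "sig": hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()}

def verify(cert):
    payload = json.dumps(cert["body"], sort_keys=True).encode()
    good_sig = hmac.compare_digest(
        cert["sig"], hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest())
    return good_sig and cert["body"]["expires"] > time.time()
```

Because the certificate expires quickly, revocation reduces to deleting the subject's entry from the validity table: no long-lived revocation lists need to reach verifiers.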

Journal ArticleDOI
TL;DR: CoolTown as mentioned in this paper offers a Web model for supporting nomadic users, based on the convergence of Web technology, wireless networks and portable devices, and describes how CoolTown ties Web resources to physical objects and places, and how users interact with resources using the information appliances they carry.