
Showing papers by "Hewlett-Packard published in 2000"


Journal ArticleDOI
TL;DR: LOCO-I, as discussed by the authors, is a low-complexity projection of the universal context modeling paradigm that matches its modeling unit to a simple coding unit; it is based on a simple fixed context model that approaches the capability of more complex universal techniques for capturing high-order dependencies.
Abstract: LOCO-I (LOw COmplexity LOssless COmpression for Images) is the algorithm at the core of the new ISO/ITU standard for lossless and near-lossless compression of continuous-tone images, JPEG-LS. It is conceived as a "low complexity projection" of the universal context modeling paradigm, matching its modeling unit to a simple coding unit. By combining simplicity with the compression potential of context models, the algorithm "enjoys the best of both worlds." It is based on a simple fixed context model, which approaches the capability of the more complex universal techniques for capturing high-order dependencies. The model is tuned for efficient performance in conjunction with an extended family of Golomb (1966) type codes, which are adaptively chosen, and an embedded alphabet extension for coding of low-entropy image regions. LOCO-I attains compression ratios similar to or superior to those obtained with state-of-the-art schemes based on arithmetic coding. Moreover, it is within a few percentage points of the best available compression ratios, at a much lower complexity level. We discuss the principles underlying the design of LOCO-I, and its standardization into JPEG-LS.

1,668 citations
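
To make the prediction-plus-Golomb-coding idea concrete, here is a minimal Python sketch of the median edge detector (MED) predictor used in JPEG-LS together with a Golomb-Rice encoding of the folded prediction residual. The pixel values, the fixed parameter k, and the bit-string output are illustrative assumptions; the standard chooses k adaptively per context and packs bits differently.

    def med_predict(a, b, c):
        # Median edge detector (MED): a = left, b = above, c = above-left neighbor.
        if c >= max(a, b):
            return min(a, b)
        if c <= min(a, b):
            return max(a, b)
        return a + b - c

    def fold(e):
        # Map signed residuals onto non-negative integers: 0, -1, 1, -2, 2, ...
        return 2 * e if e >= 0 else -2 * e - 1

    def golomb_rice(value, k):
        # Golomb-Rice code with divisor 2**k: unary quotient, then k remainder bits.
        q, r = value >> k, value & ((1 << k) - 1)
        return "1" * q + "0" + format(r, "0" + str(k) + "b")

    # Toy example: encode one pixel given its causal neighbors.
    a, b, c, x = 100, 104, 101, 107            # hypothetical pixel values
    residual = x - med_predict(a, b, c)
    print(golomb_rice(fold(residual), k=2))    # JPEG-LS would pick k adaptively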


Posted Content
TL;DR: The ROC convex hull (ROCCH) method as mentioned in this paper combines techniques from ROC analysis, decision analysis and computational geometry, and adapts them to the particulars of analyzing learned classifiers.
Abstract: In real-world environments it usually is difficult to specify target operating conditions precisely, for example, target misclassification costs. This uncertainty makes building robust classification systems problematic. We show that it is possible to build a hybrid classifier that will perform at least as well as the best available classifier for any target conditions. In some cases, the performance of the hybrid actually can surpass that of the best known classifier. This robust performance extends across a wide variety of comparison frameworks, including the optimization of metrics such as accuracy, expected cost, lift, precision, recall, and workforce utilization. The hybrid also is efficient to build, to store, and to update. The hybrid is based on a method for the comparison of classifier performance that is robust to imprecise class distributions and misclassification costs. The ROC convex hull (ROCCH) method combines techniques from ROC analysis, decision analysis and computational geometry, and adapts them to the particulars of analyzing learned classifiers. The method is efficient and incremental, minimizes the management of classifier performance data, and allows for clear visual comparisons and sensitivity analyses. Finally, we point to empirical evidence that a robust hybrid classifier indeed is needed for many real-world problems.

1,114 citations
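
A minimal sketch of the ROCCH idea in Python: build the upper convex hull of (false-positive rate, true-positive rate) points for a set of classifiers, then pick the hull vertex that minimizes expected cost for given class priors and misclassification costs. The toy points, priors, and costs are illustrative assumptions, not data from the paper.

    def roc_convex_hull(points):
        # Upper convex hull of ROC points, anchored at (0, 0) and (1, 1).
        pts = sorted(set(points) | {(0.0, 0.0), (1.0, 1.0)})
        hull = []
        for px, py in pts:
            while len(hull) >= 2:
                (ax, ay), (bx, by) = hull[-2], hull[-1]
                # Drop the last point if the turn a->b->p is not a right turn,
                # i.e. b lies on or below the chord and is not on the upper hull.
                if (bx - ax) * (py - ay) - (by - ay) * (px - ax) >= 0:
                    hull.pop()
                else:
                    break
            hull.append((px, py))
        return hull

    def best_operating_point(hull, p_pos, cost_fp, cost_fn):
        # Expected cost of operating at (FPR, TPR) under the given conditions.
        expected_cost = lambda fpr, tpr: ((1 - p_pos) * cost_fp * fpr
                                          + p_pos * cost_fn * (1 - tpr))
        return min(hull, key=lambda v: expected_cost(*v))

    classifiers = [(0.1, 0.5), (0.2, 0.7), (0.3, 0.8), (0.5, 0.85)]  # hypothetical (FPR, TPR)
    hull = roc_convex_hull(classifiers)
    print(hull, best_operating_point(hull, p_pos=0.2, cost_fp=1.0, cost_fn=5.0))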


Journal ArticleDOI
01 May 2000
TL;DR: The design and implementation of Dynamo, a software dynamic optimization system that is capable of transparently improving the performance of a native instruction stream as it executes on the processor, are described and evaluated.
Abstract: We describe the design and implementation of Dynamo, a software dynamic optimization system that is capable of transparently improving the performance of a native instruction stream as it executes on the processor. The input native instruction stream to Dynamo can be dynamically generated (by a JIT for example), or it can come from the execution of a statically compiled native binary. This paper evaluates the Dynamo system in the latter, more challenging situation, in order to emphasize the limits, rather than the potential, of the system. Our experiments demonstrate that even statically optimized native binaries can be accelerated by Dynamo, and often by a significant degree. For example, the average performance of -O optimized SpecInt95 benchmark binaries created by the HP product C compiler is improved to a level comparable to their -O4 optimized version running without Dynamo. Dynamo achieves this by focusing its efforts on optimization opportunities that tend to manifest only at runtime, and hence opportunities that might be difficult for a static compiler to exploit. Dynamo's operation is transparent in the sense that it does not depend on any user annotations or binary instrumentation, and does not require multiple runs, or any special compiler, operating system or hardware support. The Dynamo prototype presented here is a realistic implementation running on an HP PA-8000 workstation under the HP-UX 10.20 operating system.

935 citations
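
Dynamo's gains come from finding hot instruction sequences at run time; the sketch below illustrates the general counter-and-threshold flavor of hot-trace selection (count executions of candidate trace heads such as targets of backward taken branches, and record a fragment once a counter crosses a threshold). This is a generic illustration, not HP's implementation; the threshold value and the interface to the interpreter are assumptions.

    HOT_THRESHOLD = 50                  # assumed value; a real system tunes this

    class TraceSelector:
        """Counter-based hot-trace selection in the spirit of a dynamic optimizer."""

        def __init__(self):
            self.counters = {}          # candidate trace head (pc) -> execution count
            self.fragments = {}         # trace head (pc) -> recorded instruction list

        def on_backward_branch(self, target_pc):
            # Targets of backward taken branches are candidate trace heads (loop heads).
            self.counters[target_pc] = self.counters.get(target_pc, 0) + 1
            return (self.counters[target_pc] >= HOT_THRESHOLD
                    and target_pc not in self.fragments)

        def record_fragment(self, head_pc, executed_instructions):
            # Once hot, the path executed next is captured and handed to the optimizer.
            self.fragments[head_pc] = list(executed_instructions)

    # Usage: an interpreter would call on_backward_branch() at every backward taken
    # branch and switch into recording mode whenever it returns True.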


Proceedings ArticleDOI
01 Aug 2000

896 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied the characteristic polynomials Z(U, θ) of matrices U in the Circular Unitary Ensemble (CUE) of Random Matrix Theory and derived exact expressions for any matrix size N for the moments of |Z| and Z/Z*, and from these they obtained the asymptotics of the value distributions and cumulants of real and imaginary parts of log Z as N→∞.
Abstract: We study the characteristic polynomials Z(U, θ) of matrices U in the Circular Unitary Ensemble (CUE) of Random Matrix Theory. Exact expressions for any matrix size N are derived for the moments of |Z| and Z/Z*, and from these we obtain the asymptotics of the value distributions and cumulants of the real and imaginary parts of log Z as N→∞. In the limit, we show that these two distributions are independent and Gaussian. Costin and Lebowitz [15] previously found the Gaussian limit distribution for Im log Z using a different approach, and our result for the cumulants proves a conjecture made by them in this case. We also calculate the leading order N→∞ asymptotics of the moments of |Z| and Z/Z*. These CUE results are then compared with what is known about the Riemann zeta function ζ(s) on its critical line Re s = 1/2, assuming the Riemann hypothesis. Equating the mean density of the non-trivial zeros of the zeta function at a height T up the critical line with the mean density of the matrix eigenvalues gives a connection between N and T. Invoking this connection, our CUE results coincide with a theorem of Selberg for the value distribution of log ζ(1/2+iT) in the limit T→∞. They are also in close agreement with numerical data computed by Odlyzko [29] for large but finite T. This leads us to a conjecture for the moments of |ζ(1/2+it)|. Finally, we generalize our random matrix results to the Circular Orthogonal (COE) and Circular Symplectic (CSE) Ensembles.

823 citations
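
For reference, the exact CUE moment formula at the heart of this work, and the Gaussian limit it implies, are commonly written as follows (a LaTeX restatement of the standard result; consult the paper for the precise hypotheses and normalizations):

    \[
      M_N(\lambda) \;=\; \bigl\langle\, |Z(U,\theta)|^{2\lambda} \,\bigr\rangle_{U(N)}
      \;=\; \prod_{j=1}^{N} \frac{\Gamma(j)\,\Gamma(j+2\lambda)}{\Gamma(j+\lambda)^{2}},
    \]
    \[
      \frac{\operatorname{Re}\log Z}{\sqrt{\tfrac{1}{2}\log N}} \quad\text{and}\quad
      \frac{\operatorname{Im}\log Z}{\sqrt{\tfrac{1}{2}\log N}}
      \;\xrightarrow[N\to\infty]{}\; \text{independent standard Gaussians}.
    \]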


Proceedings ArticleDOI
01 May 2000
TL;DR: Based on a workshop discussion of multiple views, and based on the authors' own design and implementation experience with these systems, eight guidelines for the design of multiple view systems are presented.
Abstract: A multiple view system uses two or more distinct views to support the investigation of a single conceptual entity. Many such systems exist, ranging from computer-aided design (CAD) systems for chip design that display both the logical structure and the actual geometry of the integrated circuit to overview-plus-detail systems that show both an overview for context and a zoomed-in view for detail. Designers of these systems must make a variety of design decisions, ranging from determining layout to constructing sophisticated coordination mechanisms. Surprisingly, little work has been done to characterize these systems or to express guidelines for their design. Based on a workshop discussion of multiple views, and based on our own design and implementation experience with these systems, we present eight guidelines for the design of multiple view systems.

794 citations


Journal ArticleDOI
TL;DR: It is found that improvements in the caching architecture of the World Wide Web are changing the workloads of Web servers, but major improvements to that architecture are still necessary.
Abstract: This article presents a detailed workload characterization study of the 1998 World Cup Web site. Measurements from this site were collected over a three-month period. During this time the site received 1.35 billion requests, making this the largest Web workload analyzed to date. By examining this extremely busy site and through comparison with existing characterization studies, we are able to determine how Web server workloads are evolving. We find that improvements in the caching architecture of the World Wide Web are changing the workloads of Web servers, but major improvements to that architecture are still necessary. In particular, we uncover evidence that a better consistency mechanism is required for World Wide Web caches.

743 citations


Journal ArticleDOI
07 Dec 2000
TL;DR: The HP Labs' “Cooltown” project has been exploring opportunities through an infrastructure to support “web presence” for people, places and things, providing a model for supporting nomadic users without a central control point.
Abstract: The convergence of Web technology, wireless networks, and portable client devices provides new design opportunities for computer/communications systems. In the HP Labs' Cooltown project we have been exploring these opportunities through an infrastructure to support Web presence for people, places and things. We put Web servers into things like printers and put information into Web servers about things like artwork; we group physically related things into places embodied in Web servers. Using URLs for addressing, physical URL beaconing and sensing of URLs for discovery, and localized Web servers for directories, we can create a location-aware but ubiquitous system to support nomadic users. On top of this infrastructure we can leverage Internet connectivity to support communications services. Web presence bridges the World Wide Web and the physical world we inhabit, providing a model for supporting nomadic users without a central control point.

711 citations
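
The discovery mechanism sketched above (things beacon the URL of their web presence; nearby devices sense the URL and dereference it) can be illustrated with a toy broadcast example. The UDP transport, port number, and URL below are purely illustrative assumptions and not Cooltown's actual beaconing protocol.

    import socket, time

    BEACON_PORT = 5005                                     # hypothetical port
    PRESENCE_URL = "http://printer.example.com/presence"   # hypothetical web-presence URL

    def beacon(url, interval=1.0, count=3):
        # A "thing" periodically broadcasts the URL of its web presence.
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        for _ in range(count):
            s.sendto(url.encode(), ("<broadcast>", BEACON_PORT))
            time.sleep(interval)

    def sense(timeout=5.0):
        # A nomadic client listens for beacons and learns the URLs of nearby things.
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("", BEACON_PORT))
        s.settimeout(timeout)
        data, sender = s.recvfrom(1024)
        return data.decode(), sender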


Journal ArticleDOI
TL;DR: Jalapeno is a virtual machine for Java™ servers, written in the Java language and designed to be as self-sufficient as possible and to obtain high quality code for methods that are observed to be frequently executed or computationally intensive.
Abstract: Jalapeno is a virtual machine for Java™ servers written in the Java language. To be able to address the requirements of servers (performance and scalability in particular), Jalapeno was designed "from scratch" to be as self-sufficient as possible. Jalapeno's unique object model and memory layout allow a hardware null-pointer check as well as fast access to array elements, fields, and methods. Run-time services conventionally provided in native code are implemented primarily in Java. Java threads are multiplexed by virtual processors (implemented as operating system threads). A family of concurrent object allocators and parallel type-accurate garbage collectors is supported. Jalapeno's interoperable compilers enable quasi-preemptive thread switching and precise location of object references. Jalapeno's dynamic optimizing compiler is designed to obtain high quality code for methods that are observed to be frequently executed or computationally intensive.

632 citations


Book ChapterDOI
05 Jun 2000
TL;DR: In this paper, the authors present eFlow, a system that supports the specification, enactment, and management of composite e-services, modeled as processes that are enacted by a service process engine.
Abstract: E-Services are typically delivered point-to-point. However, the e-service environment creates the opportunity for providing value-added, integrated services, which are delivered by composing existing e-services. In order to enable organizations to pursue this business opportunity we have developed eFlow, a system that supports the specification, enactment, and management of composite e-services, modeled as processes that are enacted by a service process engine. Composite e-services have to cope with a highly dynamic business environment in terms of services and service providers. In addition, the increased competition forces companies to provide customized services to better satisfy the needs of every individual customer. Ideally, service processes should be able to transparently adapt to changes in the environment and to the needs of different customers with minimal or no user intervention. In addition, it should be possible to dynamically modify service process definitions in a simple and effective way to manage cases where user intervention is indeed required. In this paper we show how eFlow achieves these goals.

614 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a survey of the field of process migration by summarizing the key concepts and giving an overview of the most important implementations, including MOSIX, Sprite, Mach, and Load Sharing Facility.
Abstract: Process migration is the act of transferring a process between two machines. It enables dynamic load distribution, fault resilience, eased system administration, and data access locality. Despite these goals and ongoing research efforts, migration has not achieved widespread use. With the increasing deployment of distributed systems in general, and distributed operating systems in particular, process migration is again receiving more attention in both research and product development. As high-performance facilities shift from supercomputers to networks of workstations, and with the ever-increasing role of the World Wide Web, we expect migration to play a more important role and eventually to be widely adopted. This survey reviews the field of process migration by summarizing the key concepts and giving an overview of the most important implementations. Design and implementation issues of process migration are analyzed in general, and then revisited for each of the case studies described: MOSIX, Sprite, Mach, and Load Sharing Facility. The benefits and drawbacks of process migration depend on the details of implementation and, therefore, this paper focuses on practical matters. This survey will help in understanding the potential of process migration and why it has not caught on.

Journal ArticleDOI
TL;DR: A new, sequential algorithm is presented, which is faster in typical applications and is especially advantageous for image sequences: the KL basis calculation is done with much lower delay and allows for dynamic updating of image databases.
Abstract: The Karhunen-Loeve (KL) transform is an optimal method for approximating a set of vectors or images, which was used in image processing and computer vision for several tasks such as face and object recognition. Its computational demands and its batch calculation nature have limited its application. Here we present a new, sequential algorithm for calculating the KL basis, which is faster in typical applications and is especially advantageous for image sequences: the KL basis calculation is done with much lower delay and allows for dynamic updating of image databases. Systematic tests of the implemented algorithm show that these advantages are indeed obtained with the same accuracy available from batch KL algorithms.
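
The sequential flavor described above can be illustrated with a generic incremental subspace update: project each new image vector onto the current basis, add the normalized residual as a new basis direction, and keep the dimension bounded. This sketch (NumPy) is a standard incremental scheme shown only for illustration; it is not the authors' specific algorithm, and the dimension cap and truncation rule are assumptions.

    import numpy as np

    def update_basis(basis, mean, count, x, max_dim=20):
        # One sequential update of an orthonormal basis (d x k) with a new vector x.
        count += 1
        mean = mean + (x - mean) / count              # running mean
        centered = x - mean
        coeffs = basis.T @ centered                   # projection onto current basis
        residual = centered - basis @ coeffs
        norm = np.linalg.norm(residual)
        if norm > 1e-8:                               # new direction worth keeping
            basis = np.hstack([basis, (residual / norm)[:, None]])
        if basis.shape[1] > max_dim:                  # crude truncation; a full method
            basis = basis[:, :max_dim]                # would weight directions by energy
        return basis, mean, count

    d = 64 * 64                                       # hypothetical image size (flattened)
    basis, mean, count = np.empty((d, 0)), np.zeros(d), 0
    for _ in range(10):                               # stream of hypothetical images
        basis, mean, count = update_basis(basis, mean, count, np.random.rand(d))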

Journal ArticleDOI
TL;DR: In this paper, the authors explore the link between the value distributions of the L-functions within these families at the central point s = 1/2 and those of the characteristic polynomials Z(U,θ) of matrices U with respect to averages over SO(2N) and USp(2N) at the corresponding point θ = 0, using techniques previously developed for U(N).
Abstract: Recent results of Katz and Sarnak [8, 9] suggest that the low-lying zeros of families of L-functions display the statistics of the eigenvalues of one of the compact groups of matrices U(N), O(N) or USp(2N). We here explore the link between the value distributions of the L-functions within these families at the central point s = 1/2 and those of the characteristic polynomials Z(U,θ) of matrices U with respect to averages over SO(2N) and USp(2N) at the corresponding point θ = 0, using techniques previously developed for U(N) in [10]. For any matrix size N we find exact expressions for the moments of Z(U,0) for each ensemble, and hence calculate the asymptotic (large N) value distributions for Z(U,0) and log Z(U,0). The asymptotic results for the integer moments agree precisely with the few corresponding values known for L-functions. The value distributions suggest consequences for the non-vanishing of L-functions at the central point.

Journal ArticleDOI
TL;DR: A line-based approach for the implementation of the wavelet transform is introduced, which yields the same results as a "normal" implementation, but where, unlike prior work, the memory issues arising from the need to synchronize encoder and decoder are addressed.
Abstract: This paper addresses the problem of low memory wavelet image compression. While wavelet or subband coding of images has been shown to be superior to more traditional transform coding techniques, little attention has been paid until recently to the important issue of whether both the wavelet transforms and the subsequent coding can be implemented in low memory without significant loss in performance. We present a complete system to perform low memory wavelet image coding. Our approach is "line-based" in that the images are read line by line and only the minimum required number of lines is kept in memory. There are two main contributions of our work. First, we introduce a line-based approach for the implementation of the wavelet transform, which yields the same results as a "normal" implementation, but where, unlike prior work, we address memory issues arising from the need to synchronize encoder and decoder. Second, we propose a novel context-based encoder which requires no global information and stores only a local set of wavelet coefficients. This low memory coder achieves performance comparable to state of the art coders at a fraction of their memory utilization.
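
To make the line-based idea concrete, the sketch below applies the reversible 5/3 lifting wavelet step to a single line; a line-based 2-D transform runs this horizontally on each incoming line and keeps only a small rolling buffer of filtered lines for the vertical pass. The 5/3 filter and the boundary handling are used here as a concrete example and are not necessarily the exact filters of the paper.

    def lift_53(line):
        # Reversible 5/3 lifting on one line, with symmetric boundary extension.
        x, n = list(line), len(line)

        def sx(i):                                   # symmetric extension of the line
            return x[2 * (n - 1) - i] if i >= n else x[i]

        detail = [sx(2 * i + 1) - (sx(2 * i) + sx(2 * i + 2)) // 2
                  for i in range(n // 2)]

        def sd(i):                                   # symmetric extension of the details
            if i < 0:
                return detail[-i - 1]
            if i >= len(detail):
                return detail[2 * len(detail) - 1 - i]
            return detail[i]

        approx = [sx(2 * i) + (sd(i - 1) + sd(i) + 2) // 4
                  for i in range((n + 1) // 2)]
        return approx, detail

    print(lift_53([10, 12, 14, 13, 11, 9, 8, 8]))    # hypothetical line of pixels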

Journal ArticleDOI
01 Jun 2000
TL;DR: This paper presents experiments designed to estimate users' tolerance of QoS in the context of e-commerce and discusses contextual factors that influence these thresholds and shows how users' conceptual models of Web tasks affect their expectations.
Abstract: As the number of Web users and the diversity of Web applications continues to explode, Web Quality of Service (QoS) is an increasingly critical issue in the domain of e-commerce. This paper presents experiments designed to estimate users' tolerance of QoS in the context of e-commerce. In addition to objective measures, we discuss contextual factors that influence these thresholds and show how users' conceptual models of Web tasks affect their expectations. We then show how user thresholds of tolerance can be taken into account when designing Web servers. This integration of user requirements for QoS into systems design is ultimately of benefit to all stakeholders in the design of Internet services.

Journal ArticleDOI
TL;DR: A method to automatically localize captions in JPEG compressed images and the I-frames of MPEG compressed videos and locates candidate caption text regions directly in the DCT compressed domain using the intensity variation information encoded in theDCT domain.
Abstract: We present a method to automatically localize captions in JPEG compressed images and the I-frames of MPEG compressed videos. Caption text regions are segmented from background images using their distinguishing texture characteristics. Unlike previously published methods which fully decompress the video sequence before extracting the text regions, this method locates candidate caption text regions directly in the DCT compressed domain using the intensity variation information encoded in the DCT domain. Therefore, only a very small amount of decoding is required. The proposed algorithm takes about 0.006 seconds to process a 240×350 image and achieves a recall rate of 99.17 percent while falsely accepting about 1.87 percent of nontext DCT blocks on a variety of MPEG compressed videos containing more than 2,300 I-frames.
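
A minimal sketch of the underlying idea: measure horizontal intensity variation directly from the AC coefficients of each 8x8 DCT block (as present in JPEG images and MPEG I-frames) and flag blocks whose energy exceeds a threshold as caption candidates. The coefficient selection and the threshold value are illustrative assumptions, not the paper's tuned parameters.

    import numpy as np

    def candidate_text_blocks(dct_blocks, energy_threshold=500.0):
        # dct_blocks: (rows, cols, 8, 8) array of per-block DCT coefficients.
        ac = dct_blocks.copy()
        ac[..., 0, 0] = 0.0                          # ignore the DC (block average) term
        # Text-like texture shows up as energy in the horizontal AC coefficients.
        horiz_energy = (ac[..., 0, 1:] ** 2).sum(axis=-1)
        return horiz_energy > energy_threshold       # boolean mask of candidate blocks

    blocks = np.random.randn(30, 44, 8, 8) * 10.0    # hypothetical grid of DCT blocks
    mask = candidate_text_blocks(blocks)             # downstream steps would merge
    print(mask.sum(), "candidate blocks")            # neighboring candidates into regions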

Patent
19 Dec 2000
TL;DR: In this article, the authors describe a content tracking and incentives system that encourages commercial distributors, broadcasters and users to distribute digital content to new potential customers, which is based on meta-data associated with the digital content.
Abstract: Systems and methods of distributing digital content are described. In one aspect, a portable media device includes a memory, a wireless transceiver, an output, and a controller. The memory is configured to store digital content. The wireless transceiver is configured to wirelessly transmit and receive digital content. The output is configured to render digital content. The controller is coupled to the memory, the wireless transceiver and the output, and is configured to control wireless transmission of digital content based upon meta-data associated with the digital content. In another aspect, a digital content distribution system includes two or more portable media devices and a license manager. Each of the portable media devices comprises a memory for storing digital content and a transceiver for wirelessly transmitting digital content to and wirelessly receiving digital content from another portable media device. The license manager is configured to associate digital content with meta-data for controlling wireless transmission and rendering of digital content from one portable media device to another. A content tracking and incentives system that encourages commercial distributors, broadcasters and users to distribute digital content to new potential customers also is described.

Proceedings ArticleDOI
01 Apr 2000
TL;DR: It is shown that, while users' perceptions of World Wide Web QoS are influenced by a number of contextual factors, it is possible to correlate objective measures of QoS with subjective judgements made by users, and therefore influence system design.
Abstract: Growing usage and diversity of applications on the Internet makes Quality of Service (QoS) increasingly critical [15]. To date, the majority of research on QoS is systems oriented, focusing on traffic analysis, scheduling, and routing. Relatively minor attention has been paid to user-level QoS issues. It is not yet known how objective system quality relates to users' subjective perceptions of quality. This paper presents the results of quantitative experiments that establish a mapping between objective and perceived QoS in the context of Internet commerce. We also conducted focus groups to determine how contextual factors influence users' perceptions of QoS. We show that, while users' perceptions of World Wide Web QoS are influenced by a number of contextual factors, it is possible to correlate objective measures of QoS with subjective judgements made by users, and therefore influence system design. We argue that only by integrating users' requirements for QoS into system design can the utility of the future Internet be maximized.

Patent
14 Sep 2000
TL;DR: In this paper, a technique for training links in a computing system is disclosed, which includes configuring a first receiver in a first port using a first training sequence or a second training sequence; transmitting the second training sequence from the first port indicating the first receiver is configured; and receiving a second training sequence transmitted by a second port indicating that a second receiver in the second port is configured.
Abstract: A technique for training links in a computing system is disclosed. In one aspect, the technique includes configuring a first receiver in a first port using a first training sequence or a second training sequence; transmitting the second training sequence from the first port indicating the first receiver is configured; and receiving a second training sequence transmitted by a second port at the first port, the second training sequence transmitted by the second port indicating that a second receiver in the second port is configured. In a second aspect, the technique includes locking a communication link; handshaking across the locked link to indicate readiness for data transmission; transmitting information after handshaking across the locked link. And, in a third aspect, the technique includes transmitting a first training sequence from a first port and a second port; and synchronizing the receipt of the first training sequence at the first and second ports; transmitting a second training sequence from the first and second ports upon the synchronized receipt of the first training sequence at the first and second ports; and receiving the second training sequence transmitted by the first and second ports and the second and first ports, respectively, in synchrony.
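
The first aspect amounts to a two-phase handshake: each port sends the first training sequence until its own receiver is configured, then sends the second training sequence, and treats the link as trained once it has both configured its receiver and seen the second sequence from its peer. The toy state machine below illustrates that handshake; the message names, the tick-based exchange, and the Python model are assumptions for illustration only.

    class LinkPort:
        # Toy model of the two-phase training handshake described in the claim.
        def __init__(self, name):
            self.name = name
            self.receiver_configured = False         # our receiver has locked/configured
            self.peer_ready = False                  # peer advertised a configured receiver

        def tick(self, incoming):
            if incoming in ("TS1", "TS2"):           # either sequence can configure the receiver
                self.receiver_configured = True
            if incoming == "TS2":                    # TS2 signals the peer's receiver is configured
                self.peer_ready = True
            # Send TS1 until locally configured, then TS2 to advertise readiness.
            return "TS2" if self.receiver_configured else "TS1"

        @property
        def trained(self):
            return self.receiver_configured and self.peer_ready

    a, b = LinkPort("A"), LinkPort("B")
    msg_a, msg_b = "TS1", "TS1"
    while not (a.trained and b.trained):
        msg_a, msg_b = a.tick(msg_b), b.tick(msg_a)  # exchange one symbol per tick
    print("link trained")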

Journal ArticleDOI
TL;DR: Instruction-level parallelism (ILP) as mentioned in this paper is a family of processor and compiler design techniques that speed up execution by causing individual machine operations to execute in parallel, and it has become a much more significant force in computer design.
Abstract: Instruction-level parallelism (ILP) is a family of processor and compiler design techniques that speed up execution by causing individual machine operations to execute in parallel. Although ILP has appeared in the highest performance uniprocessors for the past 30 years, the 1980s saw it become a much more significant force in computer design. Several systems were built and sold commercially, which pushed ILP far beyond where it had been before, both in terms of the amount of ILP offered and in the central role ILP played in the design of the system. By the end of the decade, advanced microprocessor design at all major CPU manufacturers had incorporated ILP, and new techniques for ILP had become a popular topic at academic conferences. This article provides an overview and historical perspective of the field of ILP and its development over the past three decades.

Proceedings ArticleDOI
01 May 2000
TL;DR: The experiments described in the paper show that specialization for an application domain is effective, yielding large gains in price/performance ratio, and that scaling machine resources scales performance, although not uniformly across all applications.
Abstract: Lx is a scalable and customizable VLIW processor technology platform designed by Hewlett-Packard and STMicroelectronics that allows variations in instruction issue width, the number and capabilities of structures and the processor instruction set. For Lx we developed the architecture and software from the beginning to support both scalability (variable numbers of identical processing resources) and customizability (special purpose resources). In this paper we consider the following issues. When is customization or scaling beneficial? How can one determine the right degree of customization or scaling for a particular application domain? What architectural compromises were made in the Lx project to contain the complexity inherent in a customizable and scalable processor family? The experiments described in the paper show that specialization for an application domain is effective, yielding large gains in price/performance ratio. We also show how scaling machine resources scales performance, although not uniformly across all applications. Finally we show that customization on an application-by-application basis is today still very dangerous and much remains to be done for it to become a viable solution.

Journal ArticleDOI
01 Sep 2000
TL;DR: The first lower bound on the peak-to-average power ratio (PAPR) of a constant energy code of a given length n, minimum Euclidean distance and rate is established and there exist asymptotically good codes whose PAPR is at most 8 log n.
Abstract: The first lower bound on the peak-to-average power ratio (PAPR) of a constant energy code of a given length n, minimum Euclidean distance and rate is established. Conversely, using a nonconstructive Varshamov-Gilbert style argument yields a lower bound on the achievable rate of a code of a given length, minimum Euclidean distance and maximum PAPR. The derivation of these bounds relies on a geometrical analysis of the PAPR of such a code. Further analysis shows that there exist asymptotically good codes whose PAPR is at most 8 log n. These bounds motivate the explicit construction of error-correcting codes with low PAPR. Bounds for exponential sums over Galois fields and rings are applied to obtain an upper bound of order (log n)² on the PAPRs of a constructive class of codes, the trace codes. This class includes the binary simplex code, duals of binary, primitive Bose-Chaudhuri-Hocquenghem (BCH) codes and a variety of their nonbinary analogs. Some open problems are identified.
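
For concreteness, the quantity being bounded is the peak-to-average power ratio of the multicarrier signal associated with a codeword; in the standard formulation (notation chosen here for illustration), for a constant-energy codeword c = (c_0, ..., c_{n-1}):

    \[
      \mathrm{PAPR}(c) \;=\; \frac{1}{n}\,
      \max_{0 \le t < 1}\;\Bigl|\,\sum_{j=0}^{n-1} c_j\, e^{2\pi i j t}\Bigr|^{2},
      \qquad \sum_{j=0}^{n-1} |c_j|^{2} = n,
    \]

so the results above say that asymptotically good codes exist with PAPR at most 8 log n, while the constructive trace codes achieve PAPR of order (log n)².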

Journal ArticleDOI
Martin Arlitt, Ludmila Cherkasova, John Dilley, Rich Friedrich, Tai Jin
01 Mar 2000
TL;DR: A trace of client requests to a busy Web proxy in an ISP environment is utilized to evaluate the performance of several existing replacement policies and of two new, parameterless replacement policies that are introduced in this paper.
Abstract: The continued growth of the World-Wide Web and the emergence of new end-user technologies such as cable modems necessitate the use of proxy caches to reduce latency, network traffic and Web server loads. Current Web proxy caches utilize simple replacement policies to determine which files to retain in the cache. We utilize a trace of client requests to a busy Web proxy in an ISP environment to evaluate the performance of several existing replacement policies and of two new, parameterless replacement policies that we introduce in this paper. Finally, we introduce Virtual Caches, an approach for improving the performance of the cache for multiple metrics simultaneously.
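
As background for this kind of replacement-policy comparison, here is a sketch of one well-known size-aware Web cache policy, GreedyDual-Size, which evicts the object with the smallest value H = L + cost/size (L is an inflation value raised at each eviction). It is included purely as an illustrative baseline; it is not one of the new parameterless policies introduced in the paper, and the cost model is an assumption.

    import heapq

    class GreedyDualSizeCache:
        # GreedyDual-Size: evict the object with the lowest H = L + cost / size.
        def __init__(self, capacity):
            self.capacity, self.used = capacity, 0   # capacity and usage in bytes
            self.L = 0.0                             # inflation value of the last eviction
            self.h, self.size = {}, {}               # url -> H value, url -> object size
            self.heap = []                           # (H, url) entries, possibly stale

        def access(self, url, size, cost=1.0):
            if size > self.capacity:
                return                               # object too large to cache
            if url in self.h:                        # hit: refresh the object's value
                self.h[url] = self.L + cost / size
            else:                                    # miss: evict until the object fits
                while self.used + size > self.capacity and self.h:
                    h_val, victim = heapq.heappop(self.heap)
                    if self.h.get(victim) == h_val:  # ignore stale heap entries
                        self.L = h_val
                        self.used -= self.size.pop(victim)
                        del self.h[victim]
                self.h[url] = self.L + cost / size
                self.size[url] = size
                self.used += size
            heapq.heappush(self.heap, (self.h[url], url))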

Patent
20 Jan 2000
TL;DR: In this paper, a portable computer system has a screen attached to a base housing having an accessory bay and a first wireless transceiver, and a cursor is positioned by a mouse having a second transceiver in contact with the first transceiver of the portable computer.
Abstract: A portable computer system has a screen attached to a base housing having an accessory bay and a first wireless transceiver. The screen has a cursor positioned by a mouse having a second wireless transceiver in contact with the first wireless transceiver of the portable computer. A module, capable of insertion and removal from the accessory bay, has a mouse bay and connector for coupling to the mouse. When the mouse is in the mouse bay, a battery in the mouse is recharged.

Journal ArticleDOI
TL;DR: Volumetric MRI and volumetric echocardiographic measures of LV volume and LVEF agree well and give similar results when used to stratify patients with dilated cardiomyopathy according to systolic function.

Patent
03 Nov 2000
TL;DR: In this article, a tactile cue such as a textured border or textured surface is used to identify the specialized function of a track pad and an on-off button is included to activate or deactivate the track pad operations.
Abstract: Prescribed areas of a track pad surface are dedicated to one or more prescribed or programmable pointing, clicking, scrolling or hot-key functions. These specialized touch sensing areas are adjacent to a main touch sensing area, and include a tactile cue such as a textured border or textured surface. A visual indication such as a label also may be used to identify the specialized function. A visual cue, such as a light, also may be generated when the user's finger is in a specialized touch sensing area. One specialized touch sensing area is dedicated to correspond to a window scrolling function adding convenience for scrolling web pages, word processing documents, tables and other content windows displayed on a computer screen. An on-off button is included to activate or deactivate the track pad operations.

Journal ArticleDOI
TL;DR: A review of the evolution of the understanding of correlated two-electron dynamics and its importance for doubly excited resonance states is presented in this article, with an emphasis on the concepts introduced.
Abstract: Since the first attempts to calculate the helium ground state in the early days of Bohr-Sommerfeld quantization, two-electron atoms have posed a series of unexpected challenges to theoretical physics. Despite the seemingly simple problem of three charged particles with known interactions, it took more than half a century after quantum mechanics was established to describe the spectra of two-electron atoms satisfactorily. The evolution of the understanding of correlated two-electron dynamics and its importance for doubly excited resonance states is presented here, with an emphasis on the concepts introduced. The authors begin by reviewing the historical development and summarizing the progress in measuring the spectra of two-electron atoms and in calculating them by solving the corresponding Schrödinger equation numerically. They devote the second part of the review to approximate quantum methods, in particular adiabatic and group-theoretical approaches. These methods explain and predict the striking regularities of two-electron resonance spectra, including propensity rules for decay and dipole transitions of resonant states. This progress was made possible through the identification of approximate dynamical symmetries leading to corresponding collective quantum numbers for correlated electron-pair dynamics. The quantum numbers are very different from the independent particle classification, suitable for low-lying states in atomic systems. The third section of the review describes modern semiclassical concepts and their application to two-electron atoms. Simple interpretations of the approximate quantum numbers and propensity rules can be given in terms of a few key periodic orbits of the classical three-body problem. This includes the puzzling existence of Rydberg series for electron-pair motion. Qualitative and quantitative semiclassical estimates for doubly excited states are obtained for both regular and chaotic classical two-electron dynamics using modern semiclassical techniques. These techniques set the stage for a theoretical investigation of the regime of extreme excitation towards the three-body breakup threshold. Together with periodic orbit spectroscopy, they supply new tools for the analysis of complex experimental spectra.

Patent
11 Aug 2000
TL;DR: In this paper, a tamper-proof component of a computer platform in conjunction with software, running within the tamperproof component, that controls the uploading and usage of data on the platform as a generic dongle for that platform.
Abstract: A computer platform (100) uses a tamper-proof component (120), or 'trusted module', of a computer platform in conjunction with software, preferably running within the tamper-proof component, that controls the uploading and usage of data on the platform as a generic dongle for that platform. Licensing checks can occur within a trusted environment (in other words, an environment which can be trusted to behave as the user expects); this can be enforced by integrity checking of the uploading and licence-checking software. Metering records can be stored in the tamper-proof device and reported back to administrators as required. There can be an associated clearinghouse mechanism to enable registration and payment for data.

Proceedings ArticleDOI
29 Dec 2000
TL;DR: In this article, the authors proposed a path diversity transmission system for video communication over lossy packet networks, where the system is composed of two subsystems: (1) multiple state video encoder/decoder and (2) a path-diversity transmission system.
Abstract: Video communication over lossy packet networks such as the Internet is hampered by limited bandwidth and packet loss. This paper presents a system for providing reliable video communication over these networks, where the system is composed of two subsystems: (1) multiple state video encoder/decoder and (2) a path diversity transmission system. Multiple state video coding combats the problem of error propagation at the decoder by coding the video into multiple independently decodable streams, each with its own prediction process and state. If one stream is lost the other streams can still be decoded to produce usable video, and furthermore, the correctly received streams provide bidirectional (previous and future) information that enables improved state recovery for the corrupted stream. This video coder is a form of multiple description coding (MDC), and its novelty lies in its use of information from the multiple streams to perform state recovery at the decoder. The path diversity transmission system explicitly sends different subsets of packets over different paths, as opposed to the default scenarios where the packets proceed along a single path, thereby enabling the end-to-end video application to effectively see an average path behavior. We refer to this as path diversity. Generally, seeing this average path behavior provides better performance than seeing the behavior of any individual random path. For example, the probability that all of the multiple paths are simultaneously congested is much less than the probability that a single path is congested. The resulting path diversity provides the multiple state video decoder with an appropriate virtual channel to assist in recovering from lost packets, and can also simplify system design, e.g. FEC design. We propose two architectures for achieving path diversity, and examine the effectiveness of path diversity in communicating video over a lossy packet network.
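
A toy sketch of the two subsystems working together: split the frame sequence into two independently decodable streams (even and odd frames), send each stream over a different path, and when one path loses a frame, estimate it from its received temporal neighbors in the other stream. Even/odd splitting and simple averaging stand in for the real coder's prediction and state recovery, and the numeric "frames" are purely illustrative.

    def split_streams(frames):
        # Two independently decodable streams: even-indexed and odd-indexed frames.
        return frames[0::2], frames[1::2]

    def send_over_path(stream, loss_pattern):
        # Simulate a lossy path: a frame is dropped wherever loss_pattern is True.
        return [None if lost else f for f, lost in zip(stream, loss_pattern)]

    def merge_and_recover(even, odd):
        frames = [f for pair in zip(even, odd) for f in pair]   # interleave the streams
        for i, f in enumerate(frames):
            if f is None:                                       # lost frame: use neighbors
                nbrs = [frames[j] for j in (i - 1, i + 1)
                        if 0 <= j < len(frames) and frames[j] is not None]
                frames[i] = sum(nbrs) / len(nbrs) if nbrs else 0
        return frames

    frames = list(range(10, 22))                                # hypothetical frame "contents"
    even, odd = split_streams(frames)
    even_rx = send_over_path(even, [False] * len(even))         # path 1: no losses
    odd_rx = send_over_path(odd, [False, True] + [False] * 4)   # path 2: loses one frame
    print(merge_and_recover(even_rx, odd_rx))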

Patent
Jacques H. Helot
29 Dec 2000
TL;DR: A docking station includes mechanisms to accommodate multiple devices simultaneously as discussed by the authors, including a docking connector that can mate with the notebook computer and a docking cradle that can accommodate the handheld device, and a slot in the housing to accommodate a handheld device instead of the docking cradle.
Abstract: A docking station includes mechanisms to accommodate multiple devices simultaneously. In the preferred embodiment, the docking station can accommodate at least a notebook computer and a palmtop-type handheld device. The docking station preferably facilitates a communication link between the handheld device and the notebook computer when the two devices are docked to the docking station. The communication link allows transmission and synchronization of data between the handheld device and the notebook computer. In a first embodiment of the invention, the docking station includes a docking connector that can mate with the notebook computer. The docking station also includes a docking cradle that can accommodate the handheld device. In the preferred embodiment, the docking cradle is configured to be adjustable in angle, so that the docked handheld device can be positioned at a desired angle. In the most preferred embodiment, the docking cradle includes a security feature that locks the handheld device to the docking cradle to prevent theft. In a second embodiment of the invention, the docking station includes a slot in the housing to accommodate the handheld device, instead of the docking cradle. In a third embodiment of the invention, the docking station is comprised of two modules, a primary docking module and a supplemental docking module. The primary docking module is configured to accommodate the notebook computer, while the supplemental docking module is configured to accommodate the palmtop-type handheld device.