
Showing papers in "IBM Journal of Research and Development in 2004"


Journal ArticleDOI
Zhe Xiang1, Song Song1, Jing M. Chen2, Han Wang1, Jian Huang1, X. Gao1 
TL;DR: This paper presents a wireless-local-area-network-based (WLAN-based) indoor positioning technology that deploys a position-determination model to gather location information from collected WLAN signals and presents a tracking-assistant positioning algorithm to employ knowledge of the area topology to assist the procedure of position determination.
Abstract: Context-aware computing is an emerging computing paradigm that can provide new or improved services by exploiting user context information. In this paper, we present a wireless-local-area-network-based (WLAN-based) indoor positioning technology. The wireless device deploys a position-determination model to gather location information from collected WLAN signals. A model-based signal distribution training scheme is proposed to trade off the accuracy of signal distribution and training workload. A tracking-assistant positioning algorithm is presented to employ knowledge of the area topology to assist the procedure of position determination. We have set up a positioning system at the IBM China Research Laboratory. Our experimental results indicate an accuracy of 2 m with a 90% probability for static devices and, for moving (walking) devices, an accuracy of 5 m with a 90% probability. Moreover, the complexity of the training procedure is greatly reduced compared with other positioning algorithms.
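
The paper's own position-determination model is not reproduced in the abstract; as a rough, generic illustration of signal-strength fingerprinting (the family of techniques this kind of WLAN positioning builds on), the sketch below matches an observed RSSI vector against trained per-location Gaussian signal models. All names, values, and the Gaussian assumption are invented for the example.

```python
import math

# Hypothetical training output: (mean, std) of RSSI per access point at each grid location.
# This is a generic fingerprinting sketch, not the paper's position-determination model.
radio_map = {
    (0.0, 0.0): {"ap1": (-45.0, 4.0), "ap2": (-60.0, 5.0)},
    (2.0, 0.0): {"ap1": (-50.0, 4.0), "ap2": (-55.0, 5.0)},
    (0.0, 2.0): {"ap1": (-58.0, 4.0), "ap2": (-48.0, 5.0)},
}

def log_likelihood(observation, model):
    """Log-probability of an observed RSSI vector under a per-AP Gaussian model."""
    total = 0.0
    for ap, rssi in observation.items():
        if ap not in model:
            continue
        mean, std = model[ap]
        total += -((rssi - mean) ** 2) / (2 * std * std) - math.log(std)
    return total

def estimate_position(observation):
    """Pick the trained location whose signal model best explains the observation."""
    return max(radio_map, key=lambda loc: log_likelihood(observation, radio_map[loc]))

print(estimate_position({"ap1": -49.0, "ap2": -56.0}))  # -> (2.0, 0.0)
```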

273 citations


Journal ArticleDOI
Barbara M. Terhal1
TL;DR: By considering the question of whether entanglement is "monogamous," Charles Bennett's influence on modern quantum information theory is illustrated and the recent answers to this entanglement question are reviewed.
Abstract: In this paper I discuss some of the early history of quantum information theory. By considering the question of whether entanglement is "monogamous," I illustrate Charles Bennett's influence on modern quantum information theory. Finally, I review our recent answers to this entanglement question and its relation to Bell inequalities.
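
For readers unfamiliar with the term, "monogamy" is usually made quantitative by an inequality of the Coffman--Kundu--Wootters type, which bounds how entanglement with one party limits entanglement with another; it is quoted here only as background and is not taken from the paper's text.

```latex
% Monogamy of entanglement for three qubits (CKW-type inequality), stated as background.
% C denotes the concurrence, a measure of bipartite entanglement.
C^{2}(A|B) + C^{2}(A|C) \;\le\; C^{2}(A|BC)
```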

240 citations


Journal ArticleDOI
TL;DR: This work explains how the quantum state of a system of n qubits can be expressed as a real function--a generalized Wigner function--on a discrete 2^n × 2^n phase space.
Abstract: Focusing particularly on one-qubit and two-qubit systems, I explain how the quantum state of a system of n qubits can be expressed as a real function--a generalized Wigner function--on a discrete 2^n × 2^n phase space. The phase space is based on the finite field having 2^n elements, and its geometric structure leads naturally to the construction of a complete set of 2^n + 1 mutually conjugate bases.
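
For orientation, discrete Wigner functions of this general type assign a real value to each of the 2^n × 2^n phase-space points by pairing the density matrix with a phase-point operator; a commonly used form (shown as an illustration of the construction, not necessarily the paper's exact conventions) is:

```latex
% Generic discrete Wigner function over a 2^n x 2^n phase space:
% rho is the n-qubit density matrix and A(q,p) is the phase-point operator at (q,p).
W(q,p) \;=\; \frac{1}{2^{n}}\,\mathrm{Tr}\!\left[\rho\,A(q,p)\right],
\qquad \sum_{q,p} W(q,p) = 1 .
```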

119 citations


Journal ArticleDOI
TL;DR: This paper presents two new techniques that have been used to build a large-vocabulary continuous Hindi speech recognition system and proposes a hybrid approach that combines rule-based and statistical approaches in a two-step fashion.
Abstract: In this paper we present two new techniques that have been used to build a large-vocabulary continuous Hindi speech recognition system. We present a technique for fast bootstrapping of initial phone models of a new language. The training data for the new language is aligned using an existing speech recognition engine for another language. This aligned data is used to obtain the initial acoustic models for the phones of the new language. Following this approach requires less training data. We also present a technique for generating baseforms (phonetic spellings) for phonetic languages such as Hindi. As is inherent in phonetic languages, rules generally capture the mapping of spelling to phonemes very well. However, deep linguistic knowledge is required to write all possible rules, and there are some ambiguities in the language that are difficult to capture with rules. On the other hand, pure statistical techniques for baseform generation require large amounts of training data that are not readily available. We propose a hybrid approach that combines rule-based and statistical approaches in a two-step fashion. We evaluate the performance of the proposed approaches through various phonetic classification and recognition experiments.
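
The two-step hybrid can be pictured as: apply deterministic spelling-to-phoneme rules first, then hand only the ambiguous residue to a statistically trained model. The sketch below is a toy version of that control flow; the rule table, the fallback, and the symbols are invented for the example and are not the paper's actual rules or models.

```python
# Toy two-step baseform (phonetic spelling) generation: rules first, statistics second.
RULES = {"k": "k", "a": "a", "m": "m", "l": "l"}   # unambiguous letter -> phone
AMBIGUOUS = {"c"}                                   # letters the rules cannot resolve

def statistical_fallback(letter, left, right):
    """Stand-in for a trained classifier that picks a phone from letter context."""
    return "s" if right in ("e", "i") else "k"

def baseform(word):
    phones = []
    for i, letter in enumerate(word):
        if letter in RULES and letter not in AMBIGUOUS:
            phones.append(RULES[letter])            # step 1: rule-based mapping
        else:
            left = word[i - 1] if i > 0 else ""
            right = word[i + 1] if i + 1 < len(word) else ""
            phones.append(statistical_fallback(letter, left, right))  # step 2: statistics
    return phones

print(baseform("kamal"))   # ['k', 'a', 'm', 'a', 'l']
```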

96 citations


Journal ArticleDOI
W. Hsu1, A. J. Smith1
TL;DR: The results suggest that a reliable method for improving performance is to use larger caches up to and even beyond 1% of the storage used, and that to effectively utilize the available disk bandwidth, data should be reorganized such that accesses become more sequential.
Abstract: In this paper, we use real server and personal computer workloads to systematically analyze the true performance impact of various I/O optimization techniques, including read caching, sequential prefetching, opportunistic prefetching, write buffering, request scheduling, striping, and short-stroking. We also break down disk technology improvement into four basic effects--faster seeks, higher RPM, linear density improvement, and increase in track density--and analyze each separately to determine its actual benefit. In addition, we examine the historical rates of improvement and use the trends to project the effect of disk technology scaling. As part of this study, we develop a methodology for replaying real workloads that more accurately models I/O arrivals and that allows the I/O rate to be more realistically scaled than previously. We find that optimization techniques that reduce the number of physical I/Os are generally more effective than those that improve the efficiency in performing the I/Os. Sequential prefetching and write buffering are particularly effective, reducing the average read and write response time by about 50% and 90%, respectively. Our results suggest that a reliable method for improving performance is to use larger caches up to and even beyond 1% of the storage used. For a given workload, our analysis shows that disk technology improvement at the historical rate increases performance by about 8% per year if the disk occupancy rate is kept constant, and by about 15% per year if the same number of disks are used. We discover that the actual average seek time and rotational latency are, respectively, only about 35% and 60% of the specified values. We also observe that the disk head positioning time far dominates the data transfer time, suggesting that to effectively utilize the available disk bandwidth, data should be reorganized such that accesses become more sequential.
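
As a rough illustration of why reducing the number of physical I/Os matters, the toy replay below runs a synthetic block trace through an LRU read cache in which a miss inside a detected sequential run fetches several consecutive blocks in one physical I/O; it is a deliberately simplified model, not the authors' replay methodology.

```python
from collections import OrderedDict

def replay(trace, cache_blocks, prefetch_degree=1):
    """Count physical I/Os for a block trace under LRU caching. On a miss inside a
    detected sequential run, one physical I/O fetches `prefetch_degree` consecutive
    blocks (a toy model of sequential prefetching)."""
    cache = OrderedDict()          # block -> None, ordered by recency of use
    physical_ios = 0
    prev = None
    for block in trace:
        if block in cache:
            cache.move_to_end(block)
        else:
            physical_ios += 1
            run = prefetch_degree if prev is not None and block == prev + 1 else 1
            for b in range(block, block + run):      # blocks brought in by this I/O
                cache[b] = None
                if len(cache) > cache_blocks:
                    cache.popitem(last=False)        # evict least recently used
        prev = block
    return physical_ios

sequential = list(range(1000))
print(replay(sequential, cache_blocks=64, prefetch_degree=1))   # 1000 physical I/Os
print(replay(sequential, cache_blocks=64, prefetch_degree=8))   # far fewer physical I/Os
```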

79 citations


Journal ArticleDOI
Todd W. Arnold1, L. P. Van Doorn1
TL;DR: This paper describes the PCIXCC, the new coprocessor introduced in the IBM z990 server; it is a watershed design that satisfies all requirements across all IBM server platforms.
Abstract: IBM has designed special cryptographic processors for its servers for more than 25 years. These began as very simple devices, but over time the requirements have become increasingly complex, and there has been a never-ending demand for increased speed. This paper describes the PCIXCC, the new coprocessor introduced in the IBM z990 server. In many ways, PCIXCC is a watershed design. For the first time, a single product satisfies all requirements across all IBM server platforms. It offers the performance demanded by today's Web servers; it supports the complex and specialized cryptographic functions needed in the banking and finance industry, and it uses packaging technology that leads the world in resistance to physical or electrical attacks against its secure processes and the secret data it holds. Furthermore, it is programmable and highly flexible, so that its function can be easily modified to meet new requirements as they appear. These features are possible because of innovative design in both the hardware and embedded software for the card. This paper provides an overview of that design.

76 citations


Journal ArticleDOI
Lisa Cranton Heller1, M. S. Farrell1
TL;DR: This paper is a review of millicode on previous zSeries CMOS systems and also describes enhancements made to the z990 system for processing of the millicode.
Abstract: Because of the complex architecture of the zSeries® processors, an internal code, called millicode, is used to implement many of the functions provided by these systems. While the hardware can execute many of the logically less complex and high-performance instructions, millicode is required to implement the more complex instructions, as well as to provide additional support functions related primarily to the central processor. This paper is a review of millicode on previous zSeries CMOS systems and also describes enhancements made to the z990 system for processing of the millicode. It specifically discusses the flexibility millicode provides to the z990 system.

51 citations


Journal ArticleDOI
T. J. Siegel1, Erwin Pfeffer1, J. A. Magee1
TL;DR: The IBM eServerTM z990 microprocessor implements many features designed to give excellent performance on both newer and traditional mainframe applications, including a new superscalar instruction execution pipeline, high-bandwidth caches, a huge secondary translation-lookaside buffer (TLB), and an onboard cryptographic coprocessor.
Abstract: The IBM eServerTM z990 microprocessor implements many features designed to give excellent performance on both newer and traditional mainframe applications. These features include a new superscalar instruction execution pipeline, high-bandwidth caches, a huge secondary translation-lookaside buffer (TLB), and an onboard cryptographic coprocessor. The microprocessor maintains zSeries® leadership in RAS (reliability, availability, serviceability) capabilities that include state-of-the-art error detection and recovery.

51 citations


Journal ArticleDOI
Yangyi Chen1, Xiaoyan Chen1, Fangyan Rao1, Xiulan Yu1, Ying Li1, Duixian Liu1 
TL;DR: This infrastructure is based on a proposed location operating reference model (LORE), which addresses many major aspects of building location-aware services, including positioning, location modeling, location-dependent query processing, tracking, and intelligent location-aware message notification.
Abstract: With the advance in wireless Internet and mobile computing, location-based services (LBS)--the capability to deliver location-aware content to subscribers on the basis of the positioning capability of the wireless infrastructure--are emerging as key value-added services that telecom operators can offer. To support efficient and effective development and deployment of innovative location-aware applications, a flexible and resilient middleware should be built as the enabling infrastructure. This paper presents the research and efforts made in the IBM China Research Laboratory toward developing an infrastructure that supports location-aware services. This infrastructure is based on a proposed location operating reference model (LORE), which addresses many major aspects of building location-aware services, including positioning, location modeling, location-dependent query processing, tracking, and intelligent location-aware message notification. Three key components of the infrastructure--the location server, a moving object database, and a spatial publish/subscribe engine--are introduced in detail. The location server has a common location adapter framework that supports heterogeneous positioning techniques and industry-standard location application program interfaces (APIs). The moving object database manages the location stream and processes the location-based queries. The spatial publish/subscribe engine enables intelligent location-aware message notification. We also present some location-aware application demonstrations that leverage the LORE infrastructure. Part of our work has been tested in pilot projects with leading carriers in China and has been integrated into the IBM WebSphere® Everyplace® Suite.
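
A spatial publish/subscribe engine of the kind described matches each incoming location update against standing, region-based subscriptions; the sketch below shows that matching step in its simplest form. The circular regions, class names, and call shape are assumptions made for illustration, not the LORE API.

```python
import math
from dataclasses import dataclass

@dataclass
class Subscription:
    """A standing request: notify `subscriber` when `target` is inside the circle."""
    subscriber: str
    target: str
    center: tuple    # (x, y) of the region of interest
    radius: float

def matches(sub, target, x, y):
    dx, dy = x - sub.center[0], y - sub.center[1]
    return sub.target == target and math.hypot(dx, dy) <= sub.radius

def publish(subscriptions, target, x, y):
    """Called for each location update coming from the moving-object database."""
    return [s.subscriber for s in subscriptions if matches(s, target, x, y)]

subs = [
    Subscription("shop_promotion", "user42", center=(10.0, 20.0), radius=5.0),
    Subscription("fleet_monitor", "truck7", center=(0.0, 0.0), radius=50.0),
]
print(publish(subs, "user42", 12.0, 21.0))   # ['shop_promotion']
print(publish(subs, "user42", 30.0, 30.0))   # []
```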

44 citations


Journal ArticleDOI
TL;DR: This work compares and contrast Legion and Globus in terms of their underlying philosophy and the resulting architectures, and discusses how these projects converge in the context of the new standards being formulated for grids.
Abstract: Grids are collections of interconnected resources harnessed to satisfy various needs of users. Legion and Globus are pioneering grid technologies. Several of the aims and goals of both projects are similar, yet their underlying architectures and philosophies differ substantially. The scope of both projects is the creation of worldwide grids; in that respect, they subsume several distributed systems technologies. However, Legion has been designed as a virtual operating system (OS) for distributed resources with OS-like support for current and expected future interactions among resources, whereas Globus has long been designed as a "sum of services" infrastructure, in which tools are developed independently in response to current needs of users. We compare and contrast Legion and Globus in terms of their underlying philosophy and the resulting architectures, and we discuss how these projects converge in the context of the new standards being formulated for grids.

43 citations


Journal ArticleDOI
TL;DR: This paper describes these new capabilities of the IBM eServerTM zSeries® Model z990, in each case presenting the value of the feature, both in terms of enhancing the self-management capability of the server and its availability.
Abstract: The IBM eServerTM zSeries® Model z990 offers customers significant new opportunity for server growth while preserving and enhancing server availability. The z990 provides vertical growth capability by introducing the concurrent addition of processor/memory books and horizontal growth in channels by the use of extended virtualization technology. In order to continue to support the zSeries legacy for high availability and continuous reliable operation, the z990 delivers significant new features for reliability, availability, and serviceability (RAS). This paper describes these new capabilities, in each case presenting the value of the feature, both in terms of enhancing the self-management capability of the server and its availability.

Journal ArticleDOI
TL;DR: An efficient representation of operators using discontinuous multiwavelet bases that produces fast O(N) methods for multiscale solution of integral equations when combined with low separation rank methods is presented.
Abstract: We review some recent results on multiwavelet methods for solving integral and partial differential equations and present an efficient representation of operators using discontinuous multiwavelet bases, including the case for singular integral operators. Numerical calculus using these representations produces fast O(N) methods for multiscale solution of integral equations when combined with low separation rank methods. Using this formulation, we compute the Hilbert transform and solve the Poisson and Schrödinger equations. For a fixed order of multiwavelets and for arbitrary but finite-precision computations, the computational complexity is O(N). The computational structures are similar to fast multipole methods but are more generic in yielding fast O(N) algorithm development.
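
The "low separation rank" idea referred to above is that a multidimensional operator is approximated by a short sum of tensor products of one-dimensional operators, which is what keeps the cost linear in N; schematically (notation chosen here for illustration, not taken from the paper):

```latex
% Separated (low separation rank) representation of a d-dimensional operator:
% r is the separation rank and each A_i^{(k)} acts in a single dimension.
\mathcal{A} \;\approx\; \sum_{k=1}^{r} s_k\, A_{1}^{(k)} \otimes A_{2}^{(k)} \otimes \cdots \otimes A_{d}^{(k)},
\qquad r \ \text{small}.
```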

Journal ArticleDOI
TL;DR: The idea of viewing quantum states as carriers of some kind of information (albeit unknowable in classical terms) leads naturally to interesting questions that might otherwise never have been asked, and corresponding new insights as discussed by the authors.
Abstract: Over the past decade, quantum information theory has developed into a vigorous field of research despite the fact that quantum information, as a precise concept, is undefined. Indeed, the very idea of viewing quantum states as carriers of some kind of information (albeit unknowable in classical terms) leads naturally to interesting questions that might otherwise never have been asked, and corresponding new insights. We discuss some illustrative examples, including a strengthening of the well-known no-cloning theorem leading to a property of permanence for quantum information, and considerations arising from information compression that reflect on fundamental issues.
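
The no-cloning theorem mentioned above says that no fixed quantum operation can duplicate an arbitrary unknown state; its textbook form is quoted here only as background for the "strengthening" the authors refer to.

```latex
% Textbook no-cloning statement (background only, not the paper's strengthened version):
% there is no unitary U and blank state |b> that copies every input state |psi>.
\nexists\; U \ \text{unitary such that}\quad
U\bigl(\lvert\psi\rangle \otimes \lvert b\rangle\bigr)
 = \lvert\psi\rangle \otimes \lvert\psi\rangle
\quad \text{for all } \lvert\psi\rangle .
```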

Journal ArticleDOI
TL;DR: The floating-point unit (FPU) of the IBM z990 eServerTM is the first one in an IBM mainframe with a fused multiply-add dataflow and has a new extended-precision divide and square-root dataflow.
Abstract: The floating-point unit (FPU) of the IBM z990 eServerTM is the first one in an IBM mainframe with a fused multiply-add dataflow. It also represents the first time that an SRT divide algorithm (named after Sweeney, Robertson, and Tocher, who independently proposed the algorithm) was used in an IBM mainframe. The FPU supports dual architectures: the zSeries® hexadecimal floating-point architecture and the IEEE 754 binary floating-point architecture. Six floating-point formats--including short, long, and extended operands--are supported in hardware. The throughput of this FPU is one multiply-add operation per cycle. The instructions are executed in five pipeline steps, and there are multiple provisions to avoid stalls in case of data dependencies. It is able to handle denormalized input operands and denormalized results without a stall (except for architectural program exceptions). It has a new extended-precision divide and square-root dataflow. This dataflow uses a radix-4 SRT algorithm (radix-2 for square root) and is able to handle divides and square-root operations in multiple floating-point and fixed-point formats. For fixed-point divisions, a new mechanism improves the performance by using an algorithm with which the number of divide iterations depends on the effective number of quotient bits.
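
The radix-4 SRT division mentioned here retires two quotient bits per iteration by drawing each quotient digit from a redundant set, so that the digit can be chosen from only a few leading bits of the partial remainder; the standard recurrence is shown below for orientation rather than as this FPU's exact implementation.

```latex
% Radix-4 SRT division recurrence: w_j is the partial remainder, d the divisor,
% q_{j+1} a redundant quotient digit, and Q the assembled quotient.
w_{j+1} \;=\; 4\,w_j \;-\; q_{j+1}\,d,
\qquad q_{j+1} \in \{-2,-1,0,1,2\},
\qquad Q \;=\; \sum_{j \ge 1} q_j\,4^{-j} .
```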

Journal ArticleDOI
G. Grinstein1
TL;DR: This paper reviews and explains the behavior of the NEC rule and discusses the implications of the rule for the generic stabilization of complex structures.
Abstract: Toom's "NEC" cellular automaton is a simple model or dynamical "rule" that succeeds in producing two-phase coexistence generically, i.e., over a nonzero fraction of its two-dimensional parameter space. This paper reviews and explains the behavior of the NEC rule and discusses the implications of the rule for the generic stabilization of complex structures. Much of the discussion is based on work performed almost twenty years ago by Charles Bennett and the author.
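
The NEC rule itself is simple to state: every cell simultaneously becomes the majority value of itself and its north and east neighbors. A minimal sketch of a synchronous update on a small periodic grid (the grid setup and step count are illustrative) is shown below; it exhibits the rule's characteristic erosion of minority islands.

```python
def nec_step(grid):
    """One synchronous update of Toom's NEC rule on a periodic 0/1 grid:
    each cell becomes the majority of itself, its north neighbor, and its east neighbor."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            votes = grid[i][j] + grid[(i - 1) % rows][j] + grid[i][(j + 1) % cols]
            new[i][j] = 1 if votes >= 2 else 0
    return new

# A small island of 1s in a sea of 0s is eroded from its north-east corner.
grid = [[0] * 6 for _ in range(6)]
for i in range(2, 5):
    for j in range(2, 5):
        grid[i][j] = 1
for _ in range(5):
    grid = nec_step(grid)
print(sum(map(sum, grid)))   # 0: the island has been erased
```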

Journal ArticleDOI
J. Zhu1, Zhong Tian1, T. Li1, Wei Sun1, S. Ye1, W. Ding1, C. C. Wang, G. Wu, L. Weng, S. Huang, B. Liu, D. Chou 
TL;DR: This paper illustrates how each key business process integration and solution development phase was carried out and guided by business process modeling, together with major experiences gained.
Abstract: Business process integration and management (BPIM) is a critical element in enterprise business transformation. Small and medium-sized businesses have their own requirements for BPIM solutions: The engagement methodology should be fast and efficient; a reusable and robust framework is required to reduce cost; and the whole platform should be lightweight so that one can easily revise, develop, and execute solutions. We believe that model-driven technologies are the key to solving all of the challenges mentioned above. Model Blue, a set of model-driven business integration and management methods, frameworks, supporting tools, and a runtime environment, was developed by the IBM China Research Laboratory (CRL) in Beijing to study the efficacy of model-driven BPIM. To verify the technology and methodology, Model Blue was deployed with Bank SinoPac, a mid-sized bank headquartered in Taiwan. A lightweight BPIM solution platform was delivered for Bank SinoPac to design, develop, and deploy its business logic and processes. During the eight-month life span of the project, IBM teams developed four major solutions for Bank SinoPac, which also developed one solution independently. In spite of the remote working environment and the outbreak of the Severe Acute Respiratory Syndrome illness, the project was completed successfully on schedule and within budget, with up to 30% efficiency improvement compared with similar projects. Bank SinoPac was satisfied with the technology and methodology, and awarded IBM other projects. In this paper, we illustrate how each key business process integration and solution development phase was carried out and guided by business process modeling, together with major experiences gained. The following technical aspects are discussed in detail: a two-dimensional business process modeling view to integrate flow modeling and data modeling; a lightweight processing logic automation environment with tooling support; and the end-to-end BPIM methodology, with models and documents successfully integrated as part of (or replacement for) the deliverables defined in the existing servicing methodologies and software engineering approaches.

Journal ArticleDOI
TL;DR: This paper presents a system called BioAnnotator, which uses domain-based dictionary lookup for recognizing known terms and a rule engine for discovering new terms in documents and explains how the system uses a biomedical dictionary to learn extraction patterns for the rule engine.
Abstract: Journals and conference proceedings represent the dominant mechanisms for reporting new biomedical results. The unstructured nature of such publications makes it difficult to utilize data mining or automated knowledge discovery techniques. Annotation (or markup) of these unstructured documents represents the first step in making these documents machine-analyzable. Often, however, the use of similar (or the same) labels for different entities and the use of different labels for the same entity makes entity extraction difficult in biomedical literature. In this paper we present a system called BioAnnotator for identifying and classifying biological terms in documents. BioAnnotator uses domain-based dictionary lookup for recognizing known terms and a rule engine for discovering new terms. We explain how the system uses a biomedical dictionary to learn extraction patterns for the rule engine and how it disambiguates biological terms that belong to multiple semantic classes.
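
In the spirit of the two mechanisms described (dictionary lookup for known terms, rules for new ones), the toy annotator below tags dictionary hits first and then applies a single suffix rule to propose unseen protein names. The dictionary, rule, and labels are illustrative stand-ins, not BioAnnotator's actual resources.

```python
import re

# Illustrative resources only; BioAnnotator's dictionary and rule engine are far richer.
DICTIONARY = {"p53": "PROTEIN", "interleukin-2": "PROTEIN", "e. coli": "ORGANISM"}
NEW_TERM_RULE = re.compile(r"\b[A-Za-z]+(?:ase|kinase)\b")   # e.g. "telomerase"

def annotate(text):
    annotations = []
    lowered = text.lower()
    for term, label in DICTIONARY.items():           # step 1: dictionary lookup
        for m in re.finditer(re.escape(term), lowered):
            annotations.append((m.start(), m.end(), label))
    for m in NEW_TERM_RULE.finditer(text):           # step 2: rule-based discovery
        if not any(s <= m.start() < e for s, e, _ in annotations):
            annotations.append((m.start(), m.end(), "PROTEIN?"))
    return sorted(annotations)

text = "Expression of p53 and telomerase was measured in E. coli cultures."
for start, end, label in annotate(text):
    print(text[start:end], label)
```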

Journal ArticleDOI
TL;DR: A new classical information capacity of quantum channels is discovered, the adaptive capacity C1,A, which lies strictly between the C1,1 and the C1,∞ capacities; achieving the C1,1 capacity may require a positive operator valued measure (POVM) with six outcomes.
Abstract: We investigate the capacity of three symmetric quantum states in three real dimensions to carry classical information. Several such capacities have already been defined, depending on what operations are allowed in the protocols that the sender uses to encode classical information into these quantum states, and that the receiver uses to decode it. These include the C1,1 capacity, which is the capacity achievable if separate measurements must be used for each of the received states, and the C1,∞ capacity, which is the capacity achievable if joint measurements are allowed on the tensor product of all of the received states. We discover a new classical information capacity of quantum channels, the adaptive capacity C1,A, which lies strictly between the C1,1 and the C1,∞ capacities. The adaptive capacity allows the use of what is known as the LOCC (local operations and classical communication) model of quantum operations for decoding the channel outputs. This model requires each of the signals to be measured by a separate apparatus, but allows the quantum states of these signals to be measured in stages, with the first stage partially reducing their quantum states; measurements in subsequent stages may depend on the results of a classical computation taking as input the outcomes of the first round of measurements. We also show that even in three dimensions, with the information carried by an ensemble containing three pure states, achieving the C1,1 capacity may require a positive operator valued measure (POVM) with six outcomes.
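
The paper's central finding can be stated compactly: for the ensemble studied, the adaptive capacity sits strictly between the capacity of separate measurements and that of joint measurements.

```latex
% Ordering of the classical capacities discussed above: separate measurements,
% LOCC-adaptive measurements, and joint (collective) measurements.
C_{1,1} \;<\; C_{1,A} \;<\; C_{1,\infty}
```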

Journal ArticleDOI
Hiroyuki Okano1, A. J. Davenport1, M. Trumbo, Chandra Reddy1, K. Yoda1, M. Amano1 
TL;DR: The FLS system enabled a steel mill to expand its scheduling horizon from a few days to one month, and to improve decision frequency from monthly to daily.
Abstract: A new solution for large-scale scheduling in the steelmaking industry, called Finishing Line Scheduling (FLS), is described. FLS in a major steel mill is a task to create production campaigns (specific production runs) for steel coils on four continuous processes for a one-month horizon. Two process flows are involved in FLS, and the balancing of the two process flows requires resolving conflicts of due dates. There are also various constraints along the timeline for each process with respect to sequences of campaigns and coils. The two types of constraints--along process flows and timelines--make the FLS problem very complex. We have developed a high-performance solution for this problem as follows: Input coils are clustered by two clustering algorithms to reduce the complexity and size of the problem. Campaigns are created for each process from downstream to upstream processes, while propagating upward the process timings of the clusters. Timing inconsistencies along the process flows are then repaired by scheduling downward. Finally, coils are sequenced within each campaign. The FLS system enabled a steel mill to expand its scheduling horizon from a few days to one month, and to improve decision frequency from monthly to daily.
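
The overall flow (cluster the coils to shrink the problem, then cut each cluster into campaigns) can be pictured with a toy grouping step; the coil attributes, the campaign-size limit, and the grouping rule below are invented for illustration and are far simpler than the real FLS constraints.

```python
from itertools import groupby

# Each coil is (coil_id, grade, width_mm, due_day); the values are illustrative only.
coils = [
    ("c1", "A", 1200, 3), ("c2", "A", 1180, 5), ("c3", "B", 1200, 2),
    ("c4", "A", 1210, 9), ("c5", "B", 1190, 4), ("c6", "A", 1175, 6),
]

def build_campaigns(coils, max_campaign_size=2):
    """Toy campaign creation: cluster coils by grade (to reduce problem size),
    order each cluster by due date, and cut it into fixed-size campaigns."""
    campaigns = []
    ordered = sorted(coils, key=lambda c: (c[1], c[3]))          # by grade, then due day
    for grade, group in groupby(ordered, key=lambda c: c[1]):
        cluster = list(group)
        for i in range(0, len(cluster), max_campaign_size):
            campaigns.append((grade, [c[0] for c in cluster[i:i + max_campaign_size]]))
    return campaigns

for grade, members in build_campaigns(coils):
    print(grade, members)    # e.g. A ['c1', 'c2'], A ['c6', 'c4'], B ['c3', 'c5']
```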

Journal ArticleDOI
Hideki Tai1, K. Mitsui1, Takashi Nerome1, Mari Abe1, K. Ono1, Masahiro Hori2 
TL;DR: This paper discusses how a metamodel is used to describe part of the specification as a central contract among the developers and describes a tool that is implemented on the basis of the metamodel.
Abstract: This paper describes our approach to support the development of large-scale Web applications. Large development efforts have to be divided into a number of smaller tasks of different kinds that can be performed by multiple developers. Once this process has taken place, it is important to manage the consistency among the artifacts in an efficient and systematic manner. Our model-driven approach makes this possible. In this paper, we discuss how a metamodel is used to describe part of the specification as a central contract among the developers. We also describe a tool that we implemented on the basis of the metamodel. The tool provides a variety of code generators and a mechanism for checking whether view artifacts, such as JavaServer PagesTM, are compliant with the model. This feature helps developers manage the consistency between a view artifact and the related business logic--HyperText Transfer Protocol request handlers.

Journal ArticleDOI
TL;DR: The framework enables development, configuration, integration, and management of solutions at a higher semantic level, and provides commonly used services such as access to citizen and property records, access control and authentication services, public key infrastructure, and support for digital signatures.
Abstract: This paper presents a framework which simplifies the task of developing, deploying, and managing complex, integrated, and standards-compliant eGovernance solutions. The framework enables development, configuration, integration, and management of solutions at a higher semantic level. It also provides commonly used services such as access to citizen and property records, access control and authentication services, public key infrastructure, and support for digital signatures. The ability to manage solutions at a higher semantic level enables administrators who are not proficient in programming to customize solutions in order to address specific needs of the different national, state, and local governments. This includes the ability to customize interfaces for multiple local languages used in government transactions and to customize workflows to conform to the organizational structure and policies to manage access to and retention of government records.

Journal ArticleDOI
TL;DR: The paper discusses some of the storage systems and data management methods that are needed for computing facilities to address the challenges and describes some ongoing improvements.
Abstract: Increasingly, scientific advances require the fusion of large amounts of complex data with extraordinary amounts of computational power. The problems of deep science demand deep computing and deep storage resources. In addition to teraflop-range computing engines with their own local storage, facilities must provide large data repositories of the order of 10-100 petabytes, and networking to allow the movement of multi-terabyte files in a timely and secure manner. This paper examines such problems and identifies associated challenges. The paper discusses some of the storage systems and data management methods that are needed for computing facilities to address the challenges and describes some ongoing improvements.

Journal ArticleDOI
TL;DR: Under the roof of one controversial assumption about physics, five big questions are discussed using concepts from a modern understanding of digital informational processes, and experimental tests of the finite nature hypothesis are suggested.
Abstract: Under the roof of one controversial assumption about physics, we discuss five big questions that can be addressed using concepts from a modern understanding of digital informational processes. The assumption is called finite nature. The digital mechanics model is obtained by applying the assumption to physics. The questions are as follows: What is the origin of spin? Why are there symmetries and CPT (charge conjugation, parity, and time reversal)? What is the origin of length? What does a process model of motion tell us? Can the finite nature assumption account for the efficacy of quantum mechanics? Digital mechanics predicts that for every continuous symmetry of physics there will be some microscopic process that violates that symmetry. We are, therefore, able to suggest experimental tests of the finite nature hypothesis. Finally, we explain why experimental evidence for such violations might be elusive and hard to recognize.

Journal ArticleDOI
TL;DR: The design and implementation of the JIT compiler for IA-32 platforms by focusing on the recent advances achieved in the past several years is described, including the dynamic optimization framework, which focuses the expensive optimization efforts only on performance-critical methods, thus helping to manage the total compilation overhead.
Abstract: JavaTM has gained widespread popularity in the industry, and an efficient Java virtual machine (JVMTM) and just-in-time (JIT) compiler are crucial in providing high performance for Java applications. This paper describes the design and implementation of our JIT compiler for IA-32 platforms by focusing on the recent advances achieved in the past several years. We first present the dynamic optimization framework, which focuses the expensive optimization efforts only on performance-critical methods, thus helping to manage the total compilation overhead. We then describe the platform-independent features, which include the conversion from the stack-semantic Java bytecode into our register-based intermediate representation (IR) and a variety of aggressive optimizations applied to the IR. We also present some techniques specific to the IA-32 used to improve code quality, especially for the efficient use of the small number of registers on that platform. Using several industry-standard benchmark programs, the experimental results show that our approach offers high performance with low compilation overhead. Most of the techniques presented here are included in the IBM JIT compiler product, integrated into the IBM Development Kit for Microsoft Windows®, Java Technology Edition Version 1.4.0.
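
The bytecode-to-IR conversion mentioned above can be illustrated by symbolically executing the operand stack and emitting register-based three-address instructions; the toy translator below handles only a few arithmetic opcodes, and the opcode names and IR shape are simplifications for illustration, not the IBM JIT's actual intermediate representation.

```python
def bytecode_to_ir(bytecode):
    """Translate a tiny stack-based bytecode into register-based three-address IR
    by tracking which virtual register currently holds each stack slot."""
    stack, ir = [], []
    next_reg = 0

    def fresh():
        nonlocal next_reg
        next_reg += 1
        return f"r{next_reg}"

    for op, *args in bytecode:
        if op == "iconst":                      # push an integer constant
            reg = fresh()
            ir.append(f"{reg} = {args[0]}")
            stack.append(reg)
        elif op == "iload":                     # push a local variable
            reg = fresh()
            ir.append(f"{reg} = local[{args[0]}]")
            stack.append(reg)
        elif op in ("iadd", "imul"):            # binary arithmetic on the top two slots
            rhs, lhs = stack.pop(), stack.pop()
            reg = fresh()
            ir.append(f"{reg} = {lhs} {'+' if op == 'iadd' else '*'} {rhs}")
            stack.append(reg)
        elif op == "ireturn":
            ir.append(f"return {stack.pop()}")
    return ir

# (local0 + 1) * local1, expressed as stack bytecode and lowered to register IR.
program = [("iload", 0), ("iconst", 1), ("iadd",), ("iload", 1), ("imul",), ("ireturn",)]
print("\n".join(bytecode_to_ir(program)))
```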

Journal ArticleDOI
TL;DR: This paper describes the challenging first- and second-level packaging technology of a new system packaging architecture for the IBM eServerTM z990, which dramatically increases the volumetric processor density over that of the predecessor z900 by implementing a super-blade design comprising four node cards.
Abstract: In this paper, we describe the challenging first- and second-level packaging technology of a new system packaging architecture for the IBM eServerTM z990. The z990 dramatically increases the volumetric processor density over that of the predecessor z900 by implementing a super-blade design comprising four node cards. Each blade is plugged into a common center board, and a blade contains the node with up to sixteen processor cores on the multichip module (MCM), up to 64 GB of memory on two memory cards, and up to twelve self-timed interface (STI) cables plugged into the front of the node. Each glass-ceramic MCM carries 16 chips dissipating a maximum power of 800 W. In this super-blade design, the packaging complexity is increased dramatically over that of the previous zSeries® eServer z900 to achieve increased volumetric density, processor performance, and system scalability. This approach permits the system to be scaled from one to four nodes, with full interaction between all nodes using a ring structure for the wiring between the four nodes. The processor frequencies are increased to 1.2 GHz, with a 0.6-GHz nest with synchronous double-data-rate interchip and interblade communication. This data rate over these package connections demands an electrical verification methodology that includes all of the different relevant system components to ensure that the proper signal and power distribution operation is achieved. The signal integrity analysis verifies that crosstalk limits are not exceeded and proper timing relationships are maintained. The power integrity simulations are performed to optimize the hierarchical decoupling in order to maintain the voltage on the power distribution networks within prescribed limits.

Journal ArticleDOI
N. D. Mermin1
TL;DR: A very economical solution to the Bernstein--Vazirani problem that does not even hint at interference between multiple universes, and how the Copenhagen interpretation was inadvertently reinvented in the course of constructing a simple, straightforward, and transparent introduction to quantum mechanics for computer scientists.
Abstract: To celebrate the 60th birthday of Charles H. Bennett, I 1) publicly announce my referee reports for the original dense coding and teleportation papers, 2) present a very economical solution to the Bernstein--Vazirani problem that does not even hint at interference between multiple universes, and 3) describe how I inadvertently reinvented the Copenhagen interpretation in the course of constructing a simple, straightforward, and transparent introduction to quantum mechanics for computer scientists.

Journal ArticleDOI
TL;DR: It is shown that quantum teleportation is a special case of a generalized Einstein--Podolsky--Rosen (EPR) nonlocality and perfect conclusive teleportation can be obtained with any pure entangled state.
Abstract: In this paper, we show that quantum teleportation is a special case of a generalized Einstein--Podolsky--Rosen (EPR) nonlocality. On the basis of the connection between teleportation and generalized measurements, we define conclusive teleportation. We show that perfect conclusive teleportation can be obtained with any pure entangled state, and it can be arbitrarily approached with a particular mixed state.

Journal ArticleDOI
R. Y. Fu1, H. Su1, J. C. Fletcher2, W. Li1, X. X. Liu1, Shiwan Zhao1, C. Y. Chi1 
TL;DR: A framework for augmenting mobile device capabilities with surrounding devices is proposed and a possible approach to represent an augmented device as one single virtual device is discussed.
Abstract: The proliferation of mobile devices is gradually making it possible to access information anywhere at any time. However, the physical capabilities of the mobile device still greatly limit the experience of users because functionality has usually been traded off for ubiquity. Nonetheless, the enormous growth rate of new information appliances heralds the dawning of a device-rich era. In this paper, we propose a framework for augmenting mobile device capabilities with surrounding devices. We then discuss a possible approach to represent an augmented device as one single virtual device.

Journal ArticleDOI
TL;DR: This paper presents a novel approach for conducting online marketing research based on several concepts such as active learning, matched control and experimental groups, and implicit and explicit experiments that enable rapid, systematic, and cost-effective marketing research.
Abstract: Marketing decisions are typically made on the basis of research conducted using direct mailings, mall intercepts, telephone interviews, focused group discussion, and the like. These methods of marketing research can be time-consuming and expensive, and can require a large amount of effort to ensure accurate results. This paper presents a novel approach for conducting online marketing research based on several concepts such as active learning, matched control and experimental groups, and implicit and explicit experiments. These concepts, along with the opportunity provided by the increasing numbers of online shoppers, enable rapid, systematic, and cost-effective marketing research.

Journal ArticleDOI
Klaus-Dieter Schubert1, E. C. McCain1, H. Pape1, K. Rebmann1, P. M. West1, Ralf Winkelmann1 
TL;DR: This paper focuses primarily on the hardware subsystem verification of the CLK chip [which is the interface between the central electronic complex (CEC) and the service element (SE] and on enhanced co-simulation.
Abstract: System integration of an IBM eServerTM z990 begins when a z990 book, which houses the main processors, memory, and I/O adapters, is installed in a z990 frame, Licensed Internal Code is "booted" in the service element (SE), and power is turned on. This initial system "bringup," also referred to as post-silicon integration, is composed of three major steps: initializing the chips, loading embedded code (firmware) into the system, and starting an initial program load (IPL) of an operating system. These processes are serialized, and verification of the majority of the system components cannot begin until they are complete. Therefore, it is important to shorten this critical time period by improving the quality of the integrated components through more comprehensive verification prior to manufacturing. This enhanced coverage is focused on verifying the interaction between the hardware components and firmware (often referred to as hardware and software co-simulation). Verification of the activities of these components first occurs independently and culminates in a pre-silicon system integration process, or virtual power-on (VPO). This paper focuses primarily on the hardware subsystem verification of the CLK chip [which is the interface between the central electronic complex (CEC) and the service element (SE)] and on enhanced co-simulation. It also considers the various environments (collections of hardware simulation models, firmware, execution time control code, and test cases to stimulate model behavior), with their advantages and disadvantages. Finally, it discusses the results of the improved comprehensive simulation effort with respect to system integration for the z990.