
Showing papers by "IBM" published in 1999


Journal ArticleDOI
29 Oct 1999-Science
TL;DR: A thin-film field-effect transistor having an organic-inorganic hybrid material as the semiconducting channel was demonstrated and molecular engineering of the organic and inorganic components of the hybrids is expected to further improve device performance for low-cost thin-film transistors.
Abstract: Organic-inorganic hybrid materials promise both the superior carrier mobility of inorganic semiconductors and the processability of organic materials. A thin-film field-effect transistor having an organic-inorganic hybrid material as the semiconducting channel was demonstrated. Hybrids based on the perovskite structure crystallize from solution to form oriented molecular-scale composites of alternating organic and inorganic sheets. Spin-coated thin films of the semiconducting perovskite (C6H5C2H4NH3)2SnI4 form the conducting channel, with field-effect mobilities of 0.6 square centimeters per volt-second and current modulation greater than 10^4. Molecular engineering of the organic and inorganic components of the hybrids is expected to further improve device performance for low-cost thin-film transistors.

1,887 citations


Journal ArticleDOI
TL;DR: In this paper, a scheme that realizes controlled interactions between two distant quantum dot spins is proposed, where the effective long-range interaction is mediated by the vacuum field of a high finesse microcavity.
Abstract: The electronic spin degrees of freedom in semiconductors typically have decoherence times that are several orders of magnitude longer than other relevant time scales. A solid-state quantum computer based on localized electron spins as qubits is therefore of potential interest. Here, a scheme that realizes controlled interactions between two distant quantum dot spins is proposed. The effective long-range interaction is mediated by the vacuum field of a high finesse microcavity. By using conduction-band-hole Raman transitions induced by classical laser fields and the cavity mode, parallel controlled-NOT operations and arbitrary single-qubit rotations can be realized.

1,702 citations


Journal ArticleDOI
17 May 1999
TL;DR: A new hypertext resource discovery system called a Focused Crawler that is robust against large perturbations in the starting set of URLs, and capable of exploring out and discovering valuable resources that are dozens of links away from the start set, while carefully pruning the millions of pages that may lie within this same radius.
Abstract: The rapid growth of the World-Wide Web poses unprecedented scaling challenges for general-purpose crawlers and search engines. In this paper we describe a new hypertext resource discovery system called a Focused Crawler. The goal of a focused crawler is to selectively seek out pages that are relevant to a pre-defined set of topics. The topics are specified not using keywords, but using exemplary documents. Rather than collecting and indexing all accessible Web documents to be able to answer all possible ad-hoc queries, a focused crawler analyzes its crawl boundary to find the links that are likely to be most relevant for the crawl, and avoids irrelevant regions of the Web. This leads to significant savings in hardware and network resources, and helps keep the crawl more up-to-date. To achieve such goal-directed crawling, we designed two hypertext mining programs that guide our crawler: a classifier that evaluates the relevance of a hypertext document with respect to the focus topics, and a distiller that identifies hypertext nodes that are great access points to many relevant pages within a few links. We report on extensive focused-crawling experiments using several topics at different levels of specificity. Focused crawling acquires relevant pages steadily while standard crawling quickly loses its way, even though they are started from the same root set. Focused crawling is robust against large perturbations in the starting set of URLs. It discovers largely overlapping sets of resources in spite of these perturbations. It is also capable of exploring out and discovering valuable resources that are dozens of links away from the start set, while carefully pruning the millions of pages that may lie within this same radius. Our anecdotes suggest that focused crawling is very effective for building high-quality collections of Web documents on specific topics, using modest desktop hardware.
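
To make the crawl-loop idea above concrete, here is a minimal sketch in Python, assuming a caller-supplied fetch() downloader and extract_links() parser; the keyword-based relevance() function is a toy stand-in for the paper's trained classifier, and the distiller is not modeled.

```python
# Minimal, hypothetical sketch of a focused-crawl loop: pages are fetched in
# order of the relevance of the page that linked to them, and only links found
# on sufficiently relevant pages are enqueued.
import heapq
import urllib.parse

def relevance(text, topic_terms):
    """Toy stand-in for a topic classifier: fraction of topic terms present."""
    text = text.lower()
    return sum(t in text for t in topic_terms) / len(topic_terms)

def focused_crawl(seed_urls, fetch, extract_links, topic_terms,
                  threshold=0.3, max_pages=100):
    # Max-heap keyed on negated parent relevance, so promising regions come first.
    frontier = [(-1.0, url) for url in seed_urls]
    heapq.heapify(frontier)
    seen, collected = set(seed_urls), []
    while frontier and len(collected) < max_pages:
        _neg, url = heapq.heappop(frontier)
        html = fetch(url)                          # caller-supplied downloader
        score = relevance(html, topic_terms)
        if score >= threshold:
            collected.append((url, score))
            for link in extract_links(html):       # caller-supplied link parser
                link = urllib.parse.urljoin(url, link)
                if link not in seen:
                    seen.add(link)
                    heapq.heappush(frontier, (-score, link))
    return collected
```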

1,700 citations


Journal ArticleDOI
25 Nov 1999-Nature
TL;DR: It is shown that single quantum bit operations, Bell-basis measurements and certain entangled quantum states such as Greenberger–Horne–Zeilinger (GHZ) states are sufficient to construct a universal quantum computer.
Abstract: Algorithms such as quantum factoring1 and quantum search2 illustrate the great theoretical promise of quantum computers; but the practical implementation of such devices will require careful consideration of the minimum resource requirements, together with the development of procedures to overcome inevitable residual imperfections in physical systems3,4,5. Many designs have been proposed, but none allow a large quantum computer to be built in the near future6. Moreover, the known protocols for constructing reliable quantum computers from unreliable components can be complicated, often requiring many operations to produce a desired transformation3,4,5,7,8. Here we show how a single technique—a generalization of quantum teleportation9—reduces resource requirements for quantum computers and unifies known protocols for fault-tolerant quantum computation. We show that single quantum bit (qubit) operations, Bell-basis measurements and certain entangled quantum states such as Greenberger–Horne–Zeilinger (GHZ) states10—all of which are within the reach of current technology—are sufficient to construct a universal quantum computer. We also present systematic constructions for an infinite class of reliable quantum gates that make the design of fault-tolerant quantum computers much more straightforward and methodical.
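
For reference, the three-qubit GHZ state mentioned above has the standard form:

```latex
% Standard definition of the three-qubit GHZ state used as a resource above.
\[
  \lvert \mathrm{GHZ} \rangle \;=\; \frac{1}{\sqrt{2}}
  \bigl( \lvert 000 \rangle + \lvert 111 \rangle \bigr)
\]
```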

1,604 citations


Proceedings ArticleDOI
16 May 1999
TL;DR: A new paradigm for modeling and implementing software artifacts is described, one that permits separation of overlapping concerns along multiple dimensions of composition and decomposition, which addresses numerous problems throughout the software lifecycle.
Abstract: Done well, separation of concerns can provide many software engineering benefits, including reduced complexity, improved reusability, and simpler evolution. The choice of boundaries for separate concerns depends on both requirements on the system and on the kind(s) of decomposition and composition a given formalism supports. The predominant methodologies and formalisms available, however, support only orthogonal separations of concerns, along single dimensions of composition and decomposition. These characteristics lead to a number of well-known and difficult problems. The paper describes a new paradigm for modeling and implementing software artifacts, one that permits separation of overlapping concerns along multiple dimensions of composition and decomposition. This approach addresses numerous problems throughout the software lifecycle in achieving well-engineered, evolvable, flexible software artifacts and traceability across artifacts.

1,452 citations


Journal ArticleDOI
TL;DR: In this paper, the unidirectional anisotropy of a ferromagnetic film coupled to an antiferromagnetic film was studied, with a focus on the anisotropy produced by the exchange bias field in metal and oxide bilayers.

1,365 citations


Journal ArticleDOI
TL;DR: The Ball-Pivoting Algorithm is applied to datasets of millions of points representing actual scans of complex 3D objects, and the quality of the results obtained compares favorably with existing techniques.
Abstract: The Ball-Pivoting Algorithm (BPA) computes a triangle mesh interpolating a given point cloud. Typically, the points are surface samples acquired with multiple range scans of an object. The principle of the BPA is very simple: Three points form a triangle if a ball of a user-specified radius ρ touches them without containing any other point. Starting with a seed triangle, the ball pivots around an edge (i.e., it revolves around the edge while keeping in contact with the edge's endpoints) until it touches another point, forming another triangle. The process continues until all reachable edges have been tried, and then starts from another seed triangle, until all points have been considered. The process can then be repeated with a ball of larger radius to handle uneven sampling densities. We applied the BPA to datasets of millions of points representing actual scans of complex 3D objects. The relatively small amount of memory required by the BPA, its time efficiency, and the quality of the results obtained compare favorably with existing techniques.
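
The "empty ball" condition that defines a valid BPA triangle can be sketched directly; the following Python fragment illustrates that geometric test under the usual circumcenter construction and is not the authors' pivoting implementation.

```python
# Sketch of the acceptance test at the heart of the BPA: three points form a
# triangle if some ball of radius rho touches all three without containing any
# other point of the cloud.
import numpy as np

def ball_centers(p1, p2, p3, rho):
    """Return the 0, 1, or 2 centers of balls of radius rho through p1, p2, p3."""
    u, v = p2 - p1, p3 - p1
    gram = np.array([[u @ u, u @ v], [u @ v, v @ v]], dtype=float)
    if abs(np.linalg.det(gram)) < 1e-12:           # degenerate (collinear) triangle
        return []
    s, t = np.linalg.solve(gram, [u @ u / 2.0, v @ v / 2.0])
    circumcenter = p1 + s * u + t * v
    r2 = np.sum((circumcenter - p1) ** 2)          # squared circumradius
    if rho * rho < r2:                             # ball too small to touch all three
        return []
    n = np.cross(u, v)
    n = n / np.linalg.norm(n)
    h = np.sqrt(rho * rho - r2)
    return [circumcenter + h * n, circumcenter - h * n]

def empty_ball_triangle(points, i, j, k, rho, eps=1e-9):
    """True if some rho-ball through points i, j, k contains no other point."""
    p = np.asarray(points, dtype=float)
    others = np.delete(p, [i, j, k], axis=0)
    for c in ball_centers(p[i], p[j], p[k], rho):
        if others.size == 0 or np.min(np.linalg.norm(others - c, axis=1)) >= rho - eps:
            return True
    return False

pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (2, 2, 2)]
print(empty_ball_triangle(pts, 0, 1, 2, rho=0.8))   # True: the fourth point is far away
```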

1,311 citations


Journal ArticleDOI
Dieter Weller, Andreas Moser
TL;DR: In this article, the authors discuss thermal effects in the framework of basic Arrhenius-Néel statistical switching models and reveal the onset of thermal decay at "stability ratios" (K_uV/k_BT)_0 ≈ 35 ± 2.
Abstract: In current longitudinal magnetic recording media, high areal density and low noise are achieved by statistical averaging over several hundred weakly coupled ferromagnetic grains per bit cell. Continued scaling to smaller bit and grain sizes, however, may prompt spontaneous magnetization reversal processes when the stored energy per particle starts competing with thermal energy, thereby limiting the achievable areal density. Charap et al. have predicted this to occur at about 40 Gbits/in². This paper discusses thermal effects in the framework of basic Arrhenius-Néel statistical switching models. It is emphasized that magnetization decay is intimately related to high-speed-switching phenomena. Thickness-, temperature- and bit-density dependent recording experiments reveal the onset of thermal decay at "stability ratios" (K_uV/k_BT)_0 ≈ 35 ± 2. The stability requirement is grain size dispersion dependent and shifts to about 60 for projected 40 Gbits/in² conditions and ten-year storage times. Higher anisotropy and coercivity media with reduced grain sizes are logical extensions of the current technology until write field limitations are reached. Future advancements will rely on deviations from traditional scaling. Squarer bits may reduce destabilizing stray fields inside the bit transitions. Perpendicular recording may shift the onset of thermal effects to higher bit densities. Enhanced signal processing may allow signal retrieval with fewer grains per bit. Finally, single grain per bit recording may be envisioned in patterned media, with lithographically defined bits.
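
As a rough numerical illustration of the stability ratio quoted above, the following sketch converts K_uV/k_BT thresholds into minimum grain volumes; the anisotropy constant used is an assumed, representative value and is not taken from the paper.

```python
# Illustrative arithmetic for the stability ratio K_u V / (k_B T).
# K_u below is an assumed, representative value for longitudinal CoCr-alloy
# media (~2e6 erg/cm^3); it is not a figure from the paper.
k_B = 1.381e-16      # Boltzmann constant, erg/K
T = 350.0            # assumed operating temperature, K
K_u = 2.0e6          # assumed uniaxial anisotropy energy density, erg/cm^3

for ratio in (35, 60):                       # onset of decay vs. 10-year requirement
    V_min = ratio * k_B * T / K_u            # minimum thermally stable grain volume, cm^3
    d_min_nm = (V_min ** (1.0 / 3.0)) * 1e7  # edge of an equivalent cubic grain, nm
    print(f"K_u V / k_B T = {ratio}: V >= {V_min:.2e} cm^3  (~{d_min_nm:.1f} nm grains)")
```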

1,223 citations


Journal ArticleDOI
TL;DR: In this paper, a quantum-gate mechanism based on electron spins in coupled semiconductor quantum dots is considered and the magnetization and the spin susceptibilities of the coupled dots are calculated.
Abstract: We consider a quantum-gate mechanism based on electron spins in coupled semiconductor quantum dots. Such gates provide a general source of spin entanglement and can be used for quantum computers. We determine the exchange coupling $J$ in the effective Heisenberg model as a function of magnetic $(B)$ and electric fields, and of the interdot distance $a$ within the Heitler-London approximation of molecular physics. This result is refined by using $\mathrm{sp}$ hybridization, and by the Hund-Mulliken molecular-orbit approach, which leads to an extended Hubbard description for the two-dot system that shows a remarkable dependence on $B$ and $a$ due to the long-range Coulomb interaction. We find that the exchange $J$ changes sign at a finite field (leading to a pronounced jump in the magnetization) and then decays exponentially. The magnetization and the spin susceptibilities of the coupled dots are calculated. We show that the dephasing due to nuclear spins in GaAs can be strongly suppressed by dynamical nuclear-spin polarization and/or by magnetic fields.
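
For orientation, the effective spin Hamiltonian referred to in the abstract, and the standard way a pulsed exchange J(t) yields a two-qubit swap gate in this model, can be written as follows (a reminder of the standard result, not a derivation from the paper):

```latex
% Effective Heisenberg coupling between the two dot spins; the exchange J is
% controlled via B, E and the interdot distance a. Pulsing J so that its time
% integral equals \pi\hbar implements a swap of the two spin qubits (up to a phase).
\[
  H_{\mathrm{s}}(t) \;=\; J(t)\, \mathbf{S}_1 \cdot \mathbf{S}_2,
  \qquad
  \frac{1}{\hbar}\int J(t)\,\mathrm{d}t \;=\; \pi \pmod{2\pi}
  \;\;\Longrightarrow\;\; U \simeq U_{\mathrm{swap}} .
\]
```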

1,178 citations


Journal ArticleDOI
17 May 1999
TL;DR: This paper systematically enumerates over 100,000 emerging communities from a Web crawl, motivates a graph-theoretic approach to locating such communities, and describes the algorithms and the algorithmic engineering necessary to find structures that subscribe to this notion.
Abstract: The Web harbors a large number of communities — groups of content-creators sharing a common interest — each of which manifests itself as a set of interlinked Web pages. Newsgroups and commercial Web directories together contain of the order of 20,000 such communities; our particular interest here is on emerging communities — those that have little or no representation in such fora. The subject of this paper is the systematic enumeration of over 100,000 such emerging communities from a Web crawl: we call our process trawling. We motivate a graph-theoretic approach to locating such communities, and describe the algorithms and the algorithmic engineering necessary to find structures that subscribe to this notion, the challenges in handling such a huge data set, and the results of our experiment.
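
The graph signature behind trawling, an (i, j) complete bipartite core of "fan" and "center" pages, can be illustrated with a naive brute-force enumeration; this sketch only demonstrates the definition on a toy link set and is not the paper's pruning-based, crawl-scale algorithm.

```python
# Naive sketch of the community signature used in trawling: an (i, j) complete
# bipartite core, i.e. i fan pages that all link to the same j center pages.
from itertools import combinations

def bipartite_cores(links, i=3, j=3):
    """links: dict fan_url -> set of center_urls it points to."""
    cores = []
    for fans in combinations(sorted(links), i):
        common_centers = set.intersection(*(links[f] for f in fans))
        for centers in combinations(sorted(common_centers), j):
            cores.append((fans, centers))
    return cores

links = {
    "a": {"x", "y", "z", "w"},
    "b": {"x", "y", "z"},
    "c": {"x", "y", "z"},
    "d": {"w"},
}
print(bipartite_cores(links, i=3, j=3))   # -> the (a, b, c) x (x, y, z) core
```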

1,126 citations


Book ChapterDOI
26 Jul 1999
TL;DR: This paper describes two algorithms that operate on the Web graph, addressing problems from Web search and automatic community discovery, and proposes a new family of random graph models that point to a rich new sub-field of the study of random graphs, and raises questions about the analysis of graph algorithms on the Internet.
Abstract: The pages and hyperlinks of the World-Wide Web may be viewed as nodes and edges in a directed graph. This graph is a fascinating object of study: it has several hundred million nodes today, over a billion links, and appears to grow exponentially with time. There are many reasons -- mathematical, sociological, and commercial -- for studying the evolution of this graph. In this paper we begin by describing two algorithms that operate on the Web graph, addressing problems from Web search and automatic community discovery. We then report a number of measurements and properties of this graph that manifested themselves as we ran these algorithms on the Web. Finally, we observe that traditional random graph models do not explain these observations, and we propose a new family of random graph models. These models point to a rich new sub-field of the study of random graphs, and raise questions about the analysis of graph algorithms on the Web.

Journal ArticleDOI
01 Jun 1999
TL;DR: An algorithmic framework for solving the projected clustering problem, in which the subsets of dimensions selected are specific to the clusters themselves, is developed and tested.
Abstract: The clustering problem is well known in the database literature for its numerous applications in problems such as customer segmentation, classification and trend analysis. Unfortunately, all known algorithms tend to break down in high dimensional spaces because of the inherent sparsity of the points. In such high dimensional spaces not all dimensions may be relevant to a given cluster. One way of handling this is to pick the closely correlated dimensions and find clusters in the corresponding subspace. Traditional feature selection algorithms attempt to achieve this. The weakness of this approach is that in typical high dimensional data mining applications different sets of points may cluster better for different subsets of dimensions. The number of dimensions in each such cluster-specific subspace may also vary. Hence, it may be impossible to find a single small subset of dimensions for all the clusters. We therefore discuss a generalization of the clustering problem, referred to as the projected clustering problem, in which the subsets of dimensions selected are specific to the clusters themselves. We develop an algorithmic framework for solving the projected clustering problem, and test its performance on synthetic data.
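
The central idea, that each cluster carries its own subset of relevant dimensions, can be illustrated with a small sketch that selects, for a given group of points, the dimensions along which they are most tightly concentrated; this illustrates per-cluster dimension selection only and is not the algorithmic framework developed in the paper.

```python
# Sketch of the idea behind projected clustering: for each (tentative) cluster,
# keep only the dimensions in which its members are most tightly concentrated.
import numpy as np

def cluster_dimensions(points, k_dims):
    """Return the k_dims coordinates with the smallest spread within the cluster."""
    spread = np.mean(np.abs(points - points.mean(axis=0)), axis=0)  # per-dimension deviation
    return np.argsort(spread)[:k_dims]

rng = np.random.default_rng(0)
# 50 points clustered tightly in dimensions 1 and 3, random in the other three.
cluster = rng.uniform(0, 100, size=(50, 5))
cluster[:, 1] = 42 + rng.normal(0, 0.5, size=50)
cluster[:, 3] = 7 + rng.normal(0, 0.5, size=50)
print(cluster_dimensions(cluster, k_dims=2))   # expected: dimensions 1 and 3
```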

Patent
23 Jun 1999
TL;DR: In this paper, the authors propose a solution to the general problem of secure storage and retrieval of information (SSRI) which guarantees that the process of storing the information is also correct even when some processors fail.
Abstract: A solution to the general problem of Secure Storage and Retrieval of Information (SSRI) guarantees that the process of storing the information is also correct even when some processors fail. A user interacts with the storage system by depositing a file and receiving a proof that the deposit was correctly executed. The user interacts with a single distinguished processor called the gateway. The mechanism enables storage in the presence of both inactive and maliciously active faults, while maintaining (asymptotic) space optimality. This mechanism is enhanced with the added requirement of confidentiality of information; i.e., that a collusion of processors should not be able to learn anything about the information. Also, in this case space optimality is preserved.

01 Jan 1999
TL;DR: Damascene copper electroplating for on-chip interconnections, a process that was conceived and developed in the early 1990s, makes it possible to fill submicron trenches and vias with copper without creating a void or a seam, and has thus proven superior to other copper deposition technologies, as discussed by the authors.
Abstract: Damascene copper electroplating for on-chip interconnections, a process that we conceived and developed in the early 1990s, makes it possible to fill submicron trenches and vias with copper without creating a void or a seam and has thus proven superior to other technologies of copper deposition. We discuss here the relationship of additives in the plating bath to superfilling, the phenomenon that results in superconformal coverage, and we present a numerical model which accounts for the experimentally observed profile evolution of the plated metal.

Journal ArticleDOI
TL;DR: In this paper, it was shown that there is a finite gap between the mutual information obtainable by a joint measurement on these states and a measurement in which only local actions are permitted.
Abstract: We exhibit an orthogonal set of product states of two three-state particles that nevertheless cannot be reliably distinguished by a pair of separated observers ignorant of which of the states has been presented to them, even if the observers are allowed any sequence of local operations and classical communication between the separate observers. It is proved that there is a finite gap between the mutual information obtainable by a joint measurement on these states and a measurement in which only local actions are permitted. This result implies the existence of separable superoperators that cannot be implemented locally. A set of states are found involving three two-state particles that also appear to be nonmeasurable locally. These and other multipartite states are classified according to the entropy and entanglement costs of preparing and measuring them by local operations.

Journal ArticleDOI
Alfred Grill
TL;DR: A review of the state of the art of the preparation of diamond-like carbon films, the characterization and understanding of their properties, and their practical applications can be found in this article.

Journal ArticleDOI
Hervé Debar, Marc Dacier, Andreas Wespi
TL;DR: A taxonomy of intrusion-detection systems is introduced that highlights the various aspects of this area and is illustrated by numerous examples from past and current projects.

Proceedings ArticleDOI
21 Mar 1999
TL;DR: A taxonomy of multicast scenarios on the Internet is presented, together with a source-authentication scheme that can be regarded as a 'midpoint' between traditional message authentication codes and digital signatures, and an improved solution to the key revocation problem.
Abstract: Multicast communication is becoming the basis for a growing number of applications. It is therefore critical to provide sound security mechanisms for multicast communication. Yet, existing security protocols for multicast offer only partial solutions. We first present a taxonomy of multicast scenarios on the Internet and point out relevant security concerns. Next we address two major security problems of multicast communication: source authentication, and key revocation. Maintaining authenticity in multicast protocols is a much more complex problem than for unicast; in particular, known solutions are prohibitively inefficient in many cases. We present a solution that is reasonable for a range of scenarios. This approach can be regarded as a 'midpoint' between traditional message authentication codes and digital signatures. We also present an improved solution to the key revocation problem.

Journal ArticleDOI
05 Feb 1999-Science
TL;DR: An all-room-temperature fabrication process sequence was used, which enabled the demonstration of high-performance organic IGFETs on transparent plastic substrates, at low operating voltages for organic devices.
Abstract: The gate bias dependence of the field-effect mobility in pentacene-based insulated gate field-effect transistors (IGFETs) was interpreted on the basis of the interaction of charge carriers with localized trap levels in the band gap. This understanding was used to design and fabricate IGFETs with mobility of more than 0.3 square centimeters per volt-second and current modulation of 10^5, with the use of amorphous metal oxide gate insulators. These values were obtained at operating voltage ranges as low as 5 volts, which are much smaller than previously reported results. An all-room-temperature fabrication process sequence was used, which enabled the demonstration of high-performance organic IGFETs on transparent plastic substrates, at low operating voltages for organic devices.

Book
01 Sep 1999
TL;DR: This book surveys the promises and challenges of networked virtual environments, covering real-time system design and resource management and the challenges in Net-VE design and development.
Abstract: 1. The Promises and Challenges of Networked Virtual Environments. What Is a Networked Virtual Environment? Graphics Engines and Displays. Control and Communication Devices. Processing Systems. Data Network. Challenges in Net-VE Design and Development. Network Bandwidth. Heterogeneity. Distributed Interaction. Real-Time System Design and Resource Management. Failure Management. Scalability. Deployment and Configuration. Conclusion. References. 2. The Origin of Networked Virtual Environments. Department of Defense Networked Virtual Environments. SIMNET. Distributed Interactive Simulation. Networked Games and Demos. SGI Flight and Dogfight. Doom. Other Games. Academic Networked Virtual Environments. NPSNET. PARADISE. DIVE. BrickNet. MR Toolkit Peer Package. Others. Conclusion. References. 3. A Networking Primer. Fundamentals of Data Transfer. Network Latency. Network Bandwidth. Network Reliability. Network Protocol. The BSD Sockets Architecture. Sockets and Ports. The Internet Protocol. Introducing the Internet Protocols for Net-VEs. Transmission Control Protocol. User Datagram Protocol. IP Broadcasting Using UDP. IP Multicasting. Selecting a Net-VE Protocol. Using TCP/IP. Using UDP/IP. Using IP Broadcasting. Using IP Multicasting. Conclusion. References. 4. Communication Architectures. Two Players on a LAN. Multiplayer Client-Server Systems. Multiplayer Client-Server, with Multiple-Server Architectures. Peer-to-Peer Architectures. Conclusion. References. 5. Managing Dynamic Shared State. The Consistency-Throughput Tradeoff. Maintaining Shared State Inside Centralized Repositories. Reducing Coupling through Frequent State Regeneration. Dead Reckoning of Shared State. Conclusion. References. 6. Systems Design. One Thread, Multiple Threads. Important Subsystems. Conclusion. References and Further Reading. 7. Resource Management for Scalability and Performance. An Information-Centric View of Resources. Optimizing the Communications Protocol. Controlling the Visibility of Data. Taking Advantage of Perceptual Limitations. Enhancing the System Architecture. Conclusion. References. 8. Internet Networked Virtual Environments. VRML-Based Virtual Environments. Virtual Reality Transfer Protocol. Internet Gaming. Conclusion. References. 9. Perspective and Predictions. Better Library Support. Toward a Better Internet. Research Frontiers. Past, Present, and Future. References. Appendix: Network Communication in C, C++, and Java. Using TCP/IP from C and C++. Managing Concurrent Connections in C and C++. Using TCP/IP from Java. Managing Concurrent Connections in Java. Using UDP/IP from C and C++. Using UDP/IP from Java. Broadcasting from C and C++. Broadcasting from Java. Multicasting from C and C++. Multicasting from Java. References. Index.

Journal ArticleDOI
08 Jul 1999-Nature
TL;DR: An analytic solution and experimental investigation of the phase transition in K -satisfiability, an archetypal NP-complete problem, is reported and the nature of these transitions may explain the differing computational costs, and suggests directions for improving the efficiency of search algorithms.
Abstract: Non-deterministic polynomial time (commonly termed ‘NP-complete’) problems are relevant to many computational tasks of practical interest—such as the ‘travelling salesman problem’—but are difficult to solve: the computing time grows exponentially with problem size in the worst case. It has recently been shown that these problems exhibit ‘phase boundaries’, across which dramatic changes occur in the computational difficulty and solution character—the problems become easier to solve away from the boundary. Here we report an analytic solution and experimental investigation of the phase transition in K-satisfiability, an archetypal NP-complete problem. Depending on the input parameters, the computing time may grow exponentially or polynomially with problem size; in the former case, we observe a discontinuous transition, whereas in the latter case a continuous (second-order) transition is found. The nature of these transitions may explain the differing computational costs, and suggests directions for improving the efficiency of search algorithms. Similar types of transition should occur in other combinatorial problems and in glassy or granular materials, thereby strengthening the link between computational models and properties of physical systems. Many computational tasks of practical interest are surprisingly difficult to solve even using the fastest available machines. Such problems, found for example in planning, scheduling, machine learning, hardware design, and computational biology, generally belong to the class of NP-complete problems1-3. NP stands for ‘nondeterministic polynomial time’, which denotes an abstract computational model with a rather technical definition. Intuitively speaking, this class of computational tasks consists of problems for which a potential solution can be checked efficiently for correctness, yet finding such a solution appears to require exponential time in the worst case. A good analogy can be drawn from mathematics: proving open conjectures in mathematics is extremely difficult, but verifying any given proof (or solution) is generally relatively straightforward. The class of NP-complete problems lies at the foundations of the theory of computational complexity in modern computer science. Literally thousands of computational problems have been shown to be NP-complete. The completeness property of NP-complete problems means that if an efficient algorithm for solving just one of these problems could be found, one would immediately have an efficient algorithm for all NP-complete problems. However,
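
A small brute-force experiment conveys the phase-transition phenomenon described above: as the clause-to-variable ratio of random 3-SAT instances grows, the fraction of satisfiable instances drops sharply (near a ratio of roughly 4.27 for K = 3). The instance sizes and trial counts below are illustrative choices so that exhaustive checking stays fast; this is not the paper's analytic treatment.

```python
# Tiny random 3-SAT experiment illustrating the satisfiability phase transition
# as the clause-to-variable ratio alpha increases.
import random
from itertools import product

def random_3sat(n_vars, n_clauses, rng):
    clauses = []
    for _ in range(n_clauses):
        vars_ = rng.sample(range(n_vars), 3)
        clauses.append([(v, rng.random() < 0.5) for v in vars_])  # (variable, negated?)
    return clauses

def satisfiable(n_vars, clauses):
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[v] != neg for v, neg in clause) for clause in clauses):
            return True
    return False

rng = random.Random(1)
n, trials = 12, 25
for alpha in (3.0, 4.0, 4.3, 5.0):
    sat = sum(satisfiable(n, random_3sat(n, int(alpha * n), rng)) for _ in range(trials))
    print(f"alpha = {alpha:.1f}: {sat}/{trials} instances satisfiable")
```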

Book
01 Jul 1999
TL;DR: This book discusses business processes as an enterprise resource, workflow management system basics, and the development of workflow-based applications, including the relationship between workflows and objects.
Abstract: 1. Introduction. Business Processes. Business Processes as Enterprise Resource. Virtual Enterprises. Processes and Workflows. Dimensions of Workflow. User Support. Categories of Workflows. Application Structure. Workflow and Objects. Application Operating System. Software Stack. Document/Image Processing. Groupware and Workflow. Different Views of Applications. Transactional Workflow. Advanced Usage. System Requirements. Relation to Other Technologies. 2. Business Engineering. Business Modeling. Business Logic. Enterprise Structure. Information Technology Infrastructure. Business Modeling Example. Business Process Reengineering. Process Discovery. Process Optimization. Process Analysis. Business Engineering and Workflow. Monitoring. 3. Workflow Management System Basics. Main Components. Types of Users. Buildtime. Metamodel Overview. Runtime. Audit Trail. Process Management. Authorization. Application Programming Interface. System Structure. Workflow Standards. 4. Metamodel. The Notion of a Metamodel. Process Data. Activities. Control Flow. Data Flow. Summary: PM-Graphs. Navigation. Summary: G-Instances. 5. Advanced Functions. Events. Dynamic Modification of Workflows. Advanced Join Conditions. Container Materialization. Object Staging. Context Management. Performance Spheres. Compile Spheres. 6. Workflows and Objects. Component-based Software Construction. Scripts in Object-Oriented Analysis and Design. The Object Request Broker. The OMG Workflow Management Facility. 7. Workflows and Transactions. Basic Transaction Concepts. Advanced Transaction Concepts. Streams. Atomic Spheres. Compensation Spheres. Phoenix Behavior. 8. Advanced Usage. Monitoring Dynamic Integrity Rules. Software Distribution. Security Management. Business-Process-Oriented Systems Management. 9. Application Topologies. Dependent Applications. Client/Server Structures. TP Monitors. Communication Paradigms. Message Monitors. Message Broker. Object Brokers. Distributed Applications. Web Applications. Workflow-based Applications. 10. Architecture and System Structure. Architectural Principles. System Structure. Servers. Client. Program Execution. System Group. Domains. System Tuning. Workload Management. Systems Management. Exploiting Parallel Databases. Server Implementation Aspects. Navigation. Message Queuing Usage. Process Compiler. 11. Development of Workflow-based Applications. Development Environment Blueprint. Component Generation. Testing. Animation. Debugging Activity Implementations. Application Database Design. Application Tuning. Optimization. A Travel Reservation Example. B List of Symbols. Bibliography. Index.

Patent
29 Dec 1999
TL;DR: In this paper, a method, system, program, and method of doing business are disclosed for electronic commerce that includes the feature of a "thin" consumer's wallet by providing issuers with an active role in each payment.
Abstract: A method, system, program, and method of doing business are disclosed for electronic commerce that includes the feature of a “thin” consumer's wallet by providing issuers with an active role in each payment. This is achieved by adding an issuer gateway and moving the credit/debit card authorization function from the merchant to the issuer. This enables an issuer to independently choose alternate authentication mechanisms without changing the acquirer gateway. It also results in a significant reduction in complexity, thereby improving the ease of implementation and overall performance.

Proceedings ArticleDOI
01 May 1999
TL;DR: It is proved that for predicates reducible to conjunctions of elementary tests, the expected time to match a random event is no greater than O(N^(1-λ)), where N is the number of subscriptions and λ is a closed-form expression that depends on the number and type of attributes.
Abstract: Content-based subscription systems are an emerging alternative to traditional publish-subscribe systems, because they permit more flexible subscriptions along multiple dimensions. In these systems, each subscription is a predicate which may test arbitrary attributes within an event. However, the matching problem for content-based systems — determining for each event the subset of all subscriptions whose predicates match the event — is still an open problem. We present an efficient, scalable solution to the matching problem. Our solution has an expected time complexity that is sub-linear in the number of subscriptions, and it has a space complexity that is linear. Specifically, we prove that for predicates reducible to conjunctions of elementary tests, the expected time to match a random event is no greater than O(N^(1-λ)) where N is the number of subscriptions, and λ is a closed-form expression that depends on the number and type of attributes (in some cases, 1/2). We present some optimizations to our algorithms that improve the search time. We also present the results of simulations that validate the theoretical bounds and that show acceptable performance levels for tens of thousands of subscriptions.
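
The matching problem itself, deciding which conjunctive subscriptions an event satisfies, can be sketched with a simple counting matcher over attribute equality tests; this generic illustration is not the paper's matching-tree algorithm and does not carry its sub-linear bound.

```python
# Generic sketch of content-based matching: each subscription is a conjunction
# of attribute equality tests, indexed by (attribute, value) pairs; an event
# matches a subscription when it satisfies every test.
from collections import defaultdict

def build_index(subscriptions):
    """subscriptions: list of dicts {attribute: required value}."""
    index = defaultdict(list)
    for sid, sub in enumerate(subscriptions):
        for attr, value in sub.items():
            index[(attr, value)].append(sid)
    return index

def match(event, subscriptions, index):
    hits = defaultdict(int)
    for attr, value in event.items():
        for sid in index.get((attr, value), ()):
            hits[sid] += 1
    return [sid for sid, n in hits.items() if n == len(subscriptions[sid])]

subs = [{"exchange": "NYSE", "symbol": "IBM"},
        {"symbol": "IBM", "price": 120},
        {"exchange": "NASDAQ"}]
index = build_index(subs)
print(match({"exchange": "NYSE", "symbol": "IBM", "price": 120}, subs, index))  # -> [0, 1]
```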

Proceedings ArticleDOI
01 May 1999
TL;DR: This work explores a new direction in utilizing eye gaze for computer input by proposing an alternative approach, dubbed MAGIC (Manual And Gaze Input Cascaded) pointing, which might offer many advantages, including reduced physical effort and fatigue as compared to traditional manual pointing, greater accuracy and naturalness than traditional gaze pointing, and possibly faster speed than manual pointing.
Abstract: This work explores a new direction in utilizing eye gaze for computer input. Gaze tracking has long been considered as an alternative or potentially superior pointing method for computer input. We believe that many fundamental limitations exist with traditional gaze pointing. In particular, it is unnatural to overload a perceptual channel such as vision with a motor control task. We therefore propose an alternative approach, dubbed MAGIC (Manual And Gaze Input Cascaded) pointing. With such an approach, pointing appears to the user to be a manual task, used for fine manipulation and selection. However, a large portion of the cursor movement is eliminated by warping the cursor to the eye gaze area, which encompasses the target. Two specific MAGIC pointing techniques, one conservative and one liberal, were designed, analyzed, and implemented with an eye tracker we developed. They were then tested in a pilot study. This early-stage exploration showed that the MAGIC pointing techniques might offer many advantages, including reduced physical effort and fatigue as compared to traditional manual pointing, greater accuracy and naturalness than traditional gaze pointing, and possibly faster speed than manual pointing. The pros and cons of the two techniques are discussed in light of both performance data and subjective reports.
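
A hypothetical sketch of the "liberal" warping policy described above: when the gaze point settles far from the cursor, the cursor is warped into the gaze area and the hand completes the fine positioning. The threshold and update rule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of liberal MAGIC pointing: warp the cursor near the gaze
# area on large cursor-to-gaze distances, then let manual input do the fine work.
import math

WARP_THRESHOLD_PX = 120   # assumed threshold; only warp for large distances

def magic_update(cursor, gaze, manual_delta):
    """Return the new cursor position given a gaze fixation and mouse movement."""
    dx, dy = manual_delta
    cx, cy = cursor
    gx, gy = gaze
    if math.hypot(gx - cx, gy - cy) > WARP_THRESHOLD_PX:
        cx, cy = gx, gy                    # warp to the gaze area (liberal policy)
    return (cx + dx, cy + dy)              # manual input does the fine adjustment

cursor = (100, 100)
for gaze, delta in [((600, 400), (0, 0)), ((600, 400), (-8, 5)), ((605, 398), (3, -2))]:
    cursor = magic_update(cursor, gaze, delta)
    print(cursor)
```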

Book ChapterDOI
02 May 1999
TL;DR: A single-database computationally private information retrieval scheme with polylogarithmic communication complexity based on a new, but reasonable intractability assumption, which is essentially the difficulty of deciding whether a small prime divides φ(m), where m is a composite integer of unknown factorization.
Abstract: We present a single-database computationally private information retrieval scheme with polylogarithmic communication complexity. Our construction is based on a new, but reasonable intractability assumption, which we call the φ-Hiding Assumption (φHA): essentially the difficulty of deciding whether a small prime divides φ(m), where m is a composite integer of unknown factorization.

Proceedings ArticleDOI
Roberto J. Bayardo, Rakesh Agrawal
01 Aug 1999
TL;DR: It is argued that by returning a broader set of rules than previous algorithms, these techniques allow for improved insight into the data and support more user-interaction in the optimized rule-mining process.
Abstract: Several algorithms have been proposed for finding the “best,” “optimal,” or “most interesting” rule(s) in a database according to a variety of metrics including confidence, support, gain, chi-squared value, gini, entropy gain, laplace, lift, and conviction. In this paper, we show that the best rule according to any of these metrics must reside along a support/confidence border. Further, in the case of conjunctive rule mining within categorical data, the number of rules along this border is conveniently small, and can be mined efficiently from a variety of real-world data-sets. We also show how this concept can be generalized to mine all rules that are best according to any of these criteria with respect to an arbitrary subset of the population of interest. We argue that by returning a broader set of rules than previous algorithms, our techniques allow for improved insight into the data and support more user-interaction in the optimized rule-mining process.
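
The support/confidence border can be sketched directly on a toy dataset: compute support and confidence for candidate rules with a fixed consequent and keep only the rules that no other rule dominates in both measures. This illustrates the selection criterion only and is not the paper's mining algorithm.

```python
# Toy sketch of the support/confidence border: keep rules A -> c that are not
# dominated by another rule with both higher-or-equal support and confidence.
from itertools import combinations

def support_confidence(transactions, antecedent, consequent):
    covered = [t for t in transactions if antecedent <= t]
    if not covered:
        return 0.0, 0.0
    support = len(covered) / len(transactions)
    confidence = sum(consequent in t for t in covered) / len(covered)
    return support, confidence

def border_rules(transactions, items, consequent, max_len=2):
    rules = []
    for k in range(1, max_len + 1):
        for ant in combinations(items, k):
            ant = frozenset(ant)
            if consequent in ant:
                continue
            rules.append((ant, *support_confidence(transactions, ant, consequent)))
    # keep rules not dominated by another rule in both support and confidence
    return [r for r in rules
            if not any(o[1] >= r[1] and o[2] >= r[2] and (o[1] > r[1] or o[2] > r[2])
                       for o in rules)]

transactions = [frozenset(t) for t in
                [{"a", "b", "c"}, {"a", "c"}, {"a", "b"}, {"b", "c"}, {"a", "b", "c"}]]
for ant, sup, conf in border_rules(transactions, items="abc", consequent="c"):
    print(set(ant), f"support={sup:.2f}", f"confidence={conf:.2f}")
```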

Patent
19 Oct 1999
TL;DR: In this paper, metal nitrate-containing precursor compounds are employed in atomic layer deposition processes to form metal-containing films, e.g. metal, metal oxide, and metal nitride, which films exhibit an atomically abrupt interface and an excellent uniformity.
Abstract: Metal nitrate-containing precursor compounds are employed in atomic layer deposition processes to form metal-containing films, e.g. metal, metal oxide, and metal nitride, which films exhibit an atomically abrupt interface and an excellent uniformity.

Journal ArticleDOI
TL;DR: This work presents a system that adapts multimedia Web documents to optimally match the capabilities of the client device requesting them, using a representation scheme called the InfoPyramid that provides a multimodal, multiresolution representation hierarchy for multimedia.
Abstract: Content delivery over the Internet needs to address both the multimedia nature of the content and the capabilities of the diverse client platforms the content is being delivered to. We present a system that adapts multimedia Web documents to optimally match the capabilities of the client device requesting them. This system has two key components: 1) a representation scheme called the InfoPyramid that provides a multimodal, multiresolution representation hierarchy for multimedia; and 2) a customizer that selects the best content representation to meet the client capabilities while delivering the most value. We model the selection process as a resource allocation problem in a generalized rate distortion framework. In this framework, we address the issue of both multiple media types in a Web document and multiple resource types at the client. We extend this framework to allow prioritization on the content items in a Web document. We illustrate our content adaptation technique with a Web server that adapts multimedia news stories to clients as diverse as workstations, PDAs and cellular phones.
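
The selection step described above can be sketched as a small multiple-choice knapsack: pick one representation per content item from its InfoPyramid variants so as to maximize delivered value within a client resource budget. The variants, sizes, and values below are hypothetical, and brute force stands in for the paper's rate-distortion formulation.

```python
# Sketch of representation selection as resource allocation: choose one variant
# per content item to maximize total value under a byte budget (brute force).
from itertools import product

def select_representations(items, budget_bytes):
    """items: list of lists of (name, size_bytes, value). Returns the best combo."""
    best, best_value = None, -1.0
    for combo in product(*items):
        size = sum(v[1] for v in combo)
        value = sum(v[2] for v in combo)
        if size <= budget_bytes and value > best_value:
            best, best_value = combo, value
    return best, best_value

news_story = [
    [("image/full", 90_000, 1.0), ("image/thumb", 12_000, 0.6), ("image/drop", 0, 0.0)],
    [("video/clip", 400_000, 1.0), ("video/keyframe", 30_000, 0.5), ("video/drop", 0, 0.0)],
    [("text/full", 8_000, 1.0), ("text/summary", 1_500, 0.7)],
]
print(select_representations(news_story, budget_bytes=50_000))
```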