
Showing papers by "IBM" published in 1997


Journal ArticleDOI
TL;DR: A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving, and a number of "no free lunch" (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class.
Abstract: A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of "no free lunch" (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class. These theorems result in a geometric interpretation of what it means for an algorithm to be well suited to an optimization problem. Applications of the NFL theorems to information-theoretic aspects of optimization and benchmark measures of performance are also presented. Other issues addressed include time-varying optimization problems and a priori "head-to-head" minimax distinctions between optimization algorithms, distinctions that result despite the NFL theorems' enforcing of a type of uniformity over all algorithms.
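
A compact way to state the core NFL result, in the notation commonly used for it (a standard paraphrase, not a verbatim quote from the paper): for any two algorithms, the distribution of sampled cost values, averaged over all cost functions, is identical.

```latex
% d_m^y: the sequence of m cost values an algorithm has sampled so far;
% f ranges over all cost functions on a finite search space.
\[
\sum_{f} P\!\left(d_m^y \mid f, m, a_1\right)
  \;=\;
\sum_{f} P\!\left(d_m^y \mid f, m, a_2\right)
\qquad \text{for all } m,\; d_m^y \text{ and algorithms } a_1, a_2 .
\]
```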

10,771 citations


Journal ArticleDOI
27 Mar 1997-Nature
TL;DR: In this article, a gas can condense to high density inside narrow, single-walled nanotubes (SWNTs) under conditions that do not induce adsorption within a standard mesoporous activated carbon.
Abstract: Pores of molecular dimensions can adsorb large quantities of gases owing to the enhanced density of the adsorbed material inside the pores1, a consequence of the attractive potential of the pore walls. Pederson and Broughton have suggested2 that carbon nanotubes, which have diameters of typically a few nanometres, should be able to draw up liquids by capillarity, and this effect has been seen for low-surface-tension liquids in large-diameter, multi-walled nanotubes3. Here we show that a gas can condense to high density inside narrow, single-walled nanotubes (SWNTs). Temperature-programmed desorption spectroscopy shows that hydrogen will condense inside SWNTs under conditions that do not induce adsorption within a standard mesoporous activated carbon. The very high hydrogen uptake in these materials suggests that they might be effective as a hydrogen-storage material for fuel-cell electric vehicles.

3,558 citations



MonographDOI
TL;DR: In this monograph, the authors present a regional frequency analysis procedure based on L-moments: the data are screened, homogeneous regions are identified, a frequency distribution is chosen for each region, and it is estimated with the regional L-moment algorithm.
Abstract: Preface 1. Regional frequency analysis 2. L-moments 3. Screening the data 4. Identification of homogeneous regions 5. Choice of a frequency distribution 6. Estimation of the frequency distribution 7. Performance of the regional L-moment algorithm 8. Other topics 9. Examples Appendix References Index of notation.
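
For readers unfamiliar with L-moments, the standard definitions (general background, paraphrased rather than quoted from the monograph) express them through probability-weighted moments $\beta_r = E[X\,F(X)^r]$:

```latex
\[
\lambda_1 = \beta_0, \qquad
\lambda_2 = 2\beta_1 - \beta_0, \qquad
\lambda_3 = 6\beta_2 - 6\beta_1 + \beta_0, \qquad
\lambda_4 = 20\beta_3 - 30\beta_2 + 12\beta_1 - \beta_0
\]
% Dimensionless ratios used for screening the data and choosing a distribution:
\[
\tau = \lambda_2/\lambda_1 \;(\text{L-CV}), \qquad
\tau_3 = \lambda_3/\lambda_2 \;(\text{L-skewness}), \qquad
\tau_4 = \lambda_4/\lambda_2 \;(\text{L-kurtosis}).
\]
```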

2,329 citations


Journal ArticleDOI
TL;DR: A new interest measure for rules that uses the information in the taxonomy is presented; given a user-specified "minimum interest level", this measure prunes a large number of redundant rules.

1,790 citations


Book
30 Oct 1997
TL;DR: This book discusses decision problems and complexity over a ring, as well as complexity aspects of the Fundamental Theorem of Algebra.
Abstract: 1 Introduction.- 2 Definitions and First Properties of Computation.- 3 Computation over a Ring.- 4 Decision Problems and Complexity over a Ring.- 5 The Class NP and NP-Complete Problems.- 6 Integer Machines.- 7 Algebraic Settings for the Problem "P ≠ NP?".- 8 Newton's Method.- 9 Fundamental Theorem of Algebra: Complexity Aspects.- 10 Bezout's Theorem.- 11 Condition Numbers and the Loss of Precision of Linear Equations.- 12 The Condition Number for Nonlinear Problems.- 13 The Condition Number in P(H(d)).- 14 Complexity and the Condition Number.- 15 Linear Programming.- 16 Deterministic Lower Bounds.- 17 Probabilistic Machines.- 18 Parallel Computations.- 19 Some Separations of Complexity Classes.- 20 Weak Machines.- 21 Additive Machines.- 22 Nonuniform Complexity Classes.- 23 Descriptive Complexity.- References.

1,594 citations


Journal ArticleDOI
TL;DR: An improved version of the minutia extraction algorithm proposed by Ratha et al. (1995), which is much faster and more reliable, is implemented for extracting features from an input fingerprint image captured with an online inkless scanner, and an alignment-based elastic matching algorithm has been developed for minutia matching.
Abstract: Fingerprint verification is one of the most reliable personal identification methods. However, manual fingerprint verification is incapable of meeting today's increasing performance requirements. An automatic fingerprint identification system (AFIS) is needed. This paper describes the design and implementation of an online fingerprint verification system which operates in two stages: minutia extraction and minutia matching. An improved version of the minutia extraction algorithm proposed by Ratha et al. (1995), which is much faster and more reliable, is implemented for extracting features from an input fingerprint image captured with an online inkless scanner. For minutia matching, an alignment-based elastic matching algorithm has been developed. This algorithm is capable of finding the correspondences between minutiae in the input image and the stored template without resorting to exhaustive search and has the ability of adaptively compensating for the nonlinear deformations and inexact pose transformations between fingerprints. The system has been tested on two sets of fingerprint images captured with inkless scanners. The verification accuracy is found to be acceptable. Typically, a complete fingerprint verification procedure takes, on average, about eight seconds on a SPARC 20 workstation. These experimental results show that our system meets the response time requirements of online verification with high accuracy.
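
To make the align-then-match idea concrete, here is a deliberately simplified sketch in Python. It is not the paper's algorithm: the tolerances are made up, the paper's method avoids the brute-force search over reference pairs shown here, and its adaptive compensation for elastic deformation is omitted.

```python
import math

# A minutia is (x, y, theta): position plus local ridge direction.

def align(minutiae, ref, target):
    """Rotate and translate `minutiae` so that reference minutia `ref` maps onto `target`."""
    dtheta = target[2] - ref[2]
    cos_t, sin_t = math.cos(dtheta), math.sin(dtheta)
    aligned = []
    for x, y, th in minutiae:
        dx, dy = x - ref[0], y - ref[1]
        aligned.append((target[0] + dx * cos_t - dy * sin_t,
                        target[1] + dx * sin_t + dy * cos_t,
                        (th + dtheta) % (2 * math.pi)))
    return aligned

def match_score(input_min, template_min, r_tol=15.0, a_tol=math.radians(20)):
    """Fraction of template minutiae paired under the best of all candidate alignments.
    Tolerances (pixels, radians) are illustrative values, not the paper's."""
    best = 0.0
    for ref in input_min:
        for target in template_min:          # naive: try every reference pair
            used, paired = set(), 0
            for ax, ay, ath in align(input_min, ref, target):
                for j, (tx, ty, tth) in enumerate(template_min):
                    if j in used:
                        continue
                    da = abs(ath - tth)
                    da = min(da, 2 * math.pi - da)
                    if math.hypot(ax - tx, ay - ty) <= r_tol and da <= a_tol:
                        used.add(j)
                        paired += 1
                        break
            best = max(best, paired / max(len(template_min), 1))
    return best
```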

1,376 citations


Book ChapterDOI
Victor Shoup1
11 May 1997
TL;DR: Lower bounds on the complexity of the discrete logarithm and related problems are proved that match the known upper bounds: any generic algorithm must perform Ω(p^{1/2}) group operations, where p is the largest prime dividing the order of the group.
Abstract: This paper considers the computational complexity of the discrete logarithm and related problems in the context of "generic algorithms"--that is, algorithms which do not exploit any special properties of the encodings of group elements, other than the property that each group element is encoded as a unique binary string. Lower bounds on the complexity of these problems are proved that match the known upper bounds: any generic algorithm must perform Ω(p^{1/2}) group operations, where p is the largest prime dividing the order of the group. Also, a new method for correcting a faulty Diffie-Hellman oracle is presented.
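
For background (not a construction from the paper), the matching upper bound is achieved by the classical baby-step giant-step algorithm, which finds a discrete logarithm in a cyclic group of order n using O(√n) group operations. A minimal sketch, assuming g generates a group of known order n:

```python
import math

def bsgs(g, h, n, mul, identity):
    """Baby-step giant-step: return x in [0, n) with g^x == h in a group of
    order n, using O(sqrt(n)) applications of the group operation `mul`.
    Assumes g has exact order n and h lies in the subgroup generated by g."""
    m = math.isqrt(n) + 1
    # Baby steps: table of g^j for j in [0, m)
    table, e = {}, identity
    for j in range(m):
        table.setdefault(e, j)
        e = mul(e, g)
    # Giant steps: repeatedly multiply h by g^(-m) and look for a collision.
    # g^(-m) is computed as g^((n - m) mod n) by square-and-multiply.
    gm_inv, g_pow, k = identity, g, (n - m) % n
    while k:
        if k & 1:
            gm_inv = mul(gm_inv, g_pow)
        g_pow = mul(g_pow, g_pow)
        k >>= 1
    y = h
    for i in range(m):
        if y in table:
            return (i * m + table[y]) % n
        y = mul(y, gm_inv)
    return None

# Example: discrete log in (Z/101Z)*, a cyclic group of order 100 generated by 2.
p = 101
log = bsgs(2, pow(2, 57, p), p - 1, lambda a, b: (a * b) % p, 1)
# log == 57
```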

1,341 citations


Journal ArticleDOI
TL;DR: The results show that a reliable link-layer protocol that is TCP-aware provides very good performance and it is possible to achieve good performance without splitting the end-to-end connection at the base station.
Abstract: Reliable transport protocols such as TCP are tuned to perform well in traditional networks where packet losses occur mostly because of congestion. However, networks with wireless and other lossy links also suffer from significant losses due to bit errors and handoffs. TCP responds to all losses by invoking congestion control and avoidance algorithms, resulting in degraded end-to-end performance in wireless and lossy systems. We compare several schemes designed to improve the performance of TCP in such networks. We classify these schemes into three broad categories: end-to-end protocols, where loss recovery is performed by the sender; link-layer protocols that provide local reliability; and split-connection protocols that break the end-to-end connection into two parts at the base station. We present the results of several experiments performed in both LAN and WAN environments, using throughput and goodput as the metrics for comparison. Our results show that a reliable link-layer protocol that is TCP-aware provides very good performance. Furthermore, it is possible to achieve good performance without splitting the end-to-end connection at the base station. We also demonstrate that selective acknowledgments and explicit loss notifications result in significant performance improvements.

1,325 citations


Journal ArticleDOI
07 Mar 1997-Science
TL;DR: In this article, a simple technique for precisely controlling the interfacial energies and wetting behavior of polymers in contact with solid surfaces is described: end-functionalized statistical random copolymers of styrene and methylmethacrylate, with the styrene fraction f varying from 0 to 1, were synthesized and end-grafted onto silicon substrates to create random copolymer brushes about 5 nanometers thick.
Abstract: A simple technique for precisely controlling the interfacial energies and wetting behavior of polymers in contact with solid surfaces is described. End-functionalized statistical random copolymers of styrene and methylmethacrylate were synthesized, with the styrene fraction f varying from 0 to 1, and were end-grafted onto silicon substrates to create random copolymer brushes about 5 nanometers thick. For f < 0.7, polystyrene (PS) films (20 nanometers thick) rapidly dewet from the brushes when heated well above the glass transition temperature. The contact angle of the resulting polymer droplets increased monotonically with decreasing f . Similar behavior was observed for poly(methylmethacrylate) (PMMA) films but with an opposite dependence on f . The interfacial energies of the random copolymer brushes with PS and PMMA were equal when f was about 0.6. Thus, precise control of the relative surface affinities of PS and PMMA was possible, demonstrating a way to manipulate polymer-surface interactions.

1,293 citations


Proceedings ArticleDOI
01 Jun 1997
TL;DR: In this article, the authors propose an online aggregation interface that allows users to both observe the progress of their aggregation queries and control execution on the fly, and present a suite of techniques that extend a database system to meet these requirements.
Abstract: Aggregation in traditional database systems is performed in batch mode: a query is submitted, the system processes a large volume of data over a long period of time, and, eventually, the final answer is returned. This archaic approach is frustrating to users and has been abandoned in most other areas of computing. In this paper we propose a new online aggregation interface that permits users to both observe the progress of their aggregation queries and control execution on the fly. After outlining usability and performance requirements for a system supporting online aggregation, we present a suite of techniques that extend a database system to meet these requirements. These include methods for returning the output in random order, for providing control over the relative rate at which different aggregates are computed, and for computing running confidence intervals. Finally, we report on an initial implementation of online aggregation in POSTGRES.
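
As a rough illustration of the running-confidence-interval idea (a generic sketch, not the POSTGRES implementation described in the paper), a running AVG over tuples retrieved in random order can be reported with a CLT-based interval that shrinks as more tuples are seen:

```python
import math
import random

def online_avg(values, z=1.96):
    """Yield (estimate, half_width) after each tuple; z = 1.96 gives ~95% confidence."""
    n, mean, m2 = 0, 0.0, 0.0            # Welford's running mean and variance
    for v in values:
        n += 1
        delta = v - mean
        mean += delta / n
        m2 += delta * (v - mean)
        if n >= 2:
            var = m2 / (n - 1)
            yield mean, z * math.sqrt(var / n)

data = [random.gauss(100.0, 15.0) for _ in range(10_000)]
random.shuffle(data)                     # random-order retrieval is what makes the interval valid
for i, (est, hw) in enumerate(online_avg(data), start=2):
    if i % 2000 == 0:
        print(f"after {i} rows: AVG ~= {est:.2f} +/- {hw:.2f}")
```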

Journal ArticleDOI
TL;DR: In this paper, the characterization of a home-made negative photoresist developed by IBM, called SU-8, is described; it can be produced with commercially available materials and achieves an outstanding aspect ratio near 15 for lines and 10 for trenches, which, combined with the electroplating of copper, allows the fabrication of highly integrated electromagnetic coils.
Abstract: This paper describes the characterization of a home-made negative photoresist developed by IBM. This resist, called SU-8, can be produced with commercially available materials. Three blends were prepared for this article and some of its optical and mechanical properties are presented. One of its numerous advantages is the broad range of thicknesses which can be obtained in one spin with a conventional spin coater, starting from 750 nm. The resist is exposed with a standard UV aligner and has an outstanding aspect ratio near 15 for lines and 10 for trenches. These ratios, combined with the electroplating of copper, allow the fabrication of highly integrated electromagnetic coils.

Journal ArticleDOI
01 Sep 1997
TL;DR: The design and implementation of a prototype automatic identity-authentication system that uses fingerprints to authenticate the identity of an individual is described and an improved minutiae-extraction algorithm is developed that is faster and more accurate than the earlier algorithm.
Abstract: Fingerprint verification is an important biometric technique for personal identification. We describe the design and implementation of a prototype automatic identity-authentication system that uses fingerprints to authenticate the identity of an individual. We have developed an improved minutiae-extraction algorithm that is faster and more accurate than our earlier algorithm (1995). An alignment-based minutiae-matching algorithm has been proposed. This algorithm is capable of finding the correspondences between input minutiae and the stored template without resorting to exhaustive search and has the ability to compensate adaptively for the nonlinear deformations and inexact transformations between an input and a template. To establish an objective assessment of our system, both the Michigan State University and the National Institute of Standards and Technology NIST 9 fingerprint databases have been used to estimate the performance numbers. The experimental results reveal that our system can achieve a good performance on these databases. We also have demonstrated that our system satisfies the response-time requirement. A complete authentication procedure, on average, takes about 1.4 seconds on a Sun ULTRA I workstation (it is expected to run as fast or faster on a 200 MHz Pentium).

Patent
09 Jun 1997
TL;DR: A method and system are described for autoconfiguring redundant arrays of memory storage devices contained within receptacles having one or more slots with hardware sufficient to accept and electrically communicate with such memory storage devices.
Abstract: A method and system for autoconfiguring redundant arrays of memory storage devices contained within receptacles having one or more slots containing hardware sufficient to accept and electrically communicate with such memory storage devices. The capacities of the memory storage device receptacles for accepting memory storage devices are determined, and used to define an initial positioning of devices in at least one memory storage device receptacle. One or more asymmetrical groupings of memory storage devices is defined to permit an equation of electrically detected relative positions of the memory storage devices with actual physical positions within the receptacle. Thereafter, additional devices are added into the receptacles such that the ability to equate electrically detected relative positions of the devices with physical positions is preserved.

Book ChapterDOI
11 May 1997
TL;DR: A new multi-authority secret-ballot election scheme that guarantees privacy, universal verifiability, and robustness is presented, and is the first scheme for which the performance is optimal in the sense that time and communication complexity is minimal both for the individual voters and the authorities.
Abstract: In this paper we present a new multi-authority secret-ballot election scheme that guarantees privacy, universal verifiability, and robustness. It is the first scheme for which the performance is optimal in the sense that time and communication complexity is minimal both for the individual voters and the authorities. An interesting property of the scheme is that the time and communication complexity for the voter is independent of the number of authorities. A voter simply posts a single encrypted message accompanied by a compact proof that it contains a valid vote. Our result is complementary to the result by Cramer, Franklin, Schoenmakers, and Yung in the sense that in their scheme the work for voters is linear in the number of authorities but can be instantiated to yield information-theoretic privacy, while in our scheme the voter's effort is independent of the number of authorities but always provides computational privacy-protection. We will also point out that the majority of proposed voting schemes provide computational privacy only (often without even considering the lack of information-theoretic privacy), and that our new scheme is by far superior to those schemes.
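
To give a feel for why one encrypted ballot per voter can suffice, below is a toy sketch of homomorphic tallying with exponential ElGamal. It is an illustrative simplification only: the paper's actual contributions, the compact proof of ballot validity and the multi-authority threshold decryption, are omitted, and the parameters are far too small for real use.

```python
import random

p = 1019          # toy prime; real schemes use groups of ~2048-bit size
g = 2             # generator of (Z/1019Z)*, whose order is p - 1 = 2 * 509
x = random.randrange(1, p - 1)      # single authority's secret key (no threshold sharing here)
h = pow(g, x, p)                    # corresponding public key

def encrypt_vote(v):
    """Encrypt a vote v in {0, 1} as an exponential-ElGamal ciphertext."""
    r = random.randrange(1, p - 1)
    return pow(g, r, p), (pow(g, v, p) * pow(h, r, p)) % p

def tally(ciphertexts):
    """Multiply ciphertexts componentwise, decrypt once, recover the small exponent."""
    c1 = c2 = 1
    for a, b in ciphertexts:
        c1, c2 = (c1 * a) % p, (c2 * b) % p
    gm = (c2 * pow(c1, p - 1 - x, p)) % p      # = g^(sum of votes)
    for m in range(len(ciphertexts) + 1):
        if pow(g, m, p) == gm:
            return m

votes = [1, 0, 1, 1, 0, 1]
print(tally([encrypt_vote(v) for v in votes]))   # -> 4
```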

Proceedings Article
14 Aug 1997
TL;DR: In this paper, the problem of integrating constraints that are Boolean expressions over the presence or absence of items into the association discovery algorithm was considered and three integrated algorithms for mining association rules with item constraints were presented.
Abstract: The problem of discovering association rules has received considerable research attention and several fast algorithms for mining association rules have been developed. In practice, users are often interested in a subset of association rules. For example, they may only want rules that contain a specific item or rules that contain children of a specific item in a hierarchy. While such constraints can be applied as a post-processing step, integrating them into the mining algorithm can dramatically reduce the execution time. We consider the problem of integrating constraints that are Boolean expressions over the presence or absence of items into the association discovery algorithm. We present three integrated algorithms for mining association rules with item constraints and discuss their tradeoffs.
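
For illustration, an item constraint is just a Boolean predicate over itemsets; the hypothetical items and constraint below show the naive post-processing view that the paper's integrated algorithms are designed to outperform:

```python
from itertools import combinations

def constraint(itemset):
    """Example constraint: (bread AND butter) OR NOT(milk). Items are hypothetical."""
    s = set(itemset)
    return ("bread" in s and "butter" in s) or ("milk" not in s)

items = ["bread", "butter", "milk", "jam"]
# Post-processing view: enumerate candidate itemsets, then filter by the constraint.
candidates = [c for r in range(1, len(items) + 1) for c in combinations(items, r)]
satisfying = [c for c in candidates if constraint(c)]
print(len(candidates), "candidates,", len(satisfying), "satisfy the constraint")
```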

Journal ArticleDOI
Madhu Sudan1
TL;DR: To the best of our knowledge, this is the first efficient (i.e., polynomial time bounded) algorithm which provides error recovery capability beyond the error-correction bound of a code for any efficient code.

Journal ArticleDOI
01 Apr 1997
TL;DR: In this article, the key challenges in further scaling of CMOS technology into the nanometer (sub-100 nm) regime in light of fundamental physical effects and practical considerations are discussed, including power supply and threshold voltage, short-channel effect, gate oxide, high-field effects, dopant number fluctuations and interconnect delays.
Abstract: Starting with a brief review on 0.1-μm (100 nm) CMOS status, this paper addresses the key challenges in further scaling of CMOS technology into the nanometer (sub-100 nm) regime in light of fundamental physical effects and practical considerations. Among the issues discussed are: lithography, power supply and threshold voltage, short-channel effect, gate oxide, high-field effects, dopant number fluctuations and interconnect delays. The last part of the paper discusses several alternative or unconventional device structures, including silicon-on-insulator (SOI), SiGe MOSFET's, low-temperature CMOS, and double-gate MOSFET's, which may lead to the outermost limits of silicon scaling.

Journal ArticleDOI
02 May 1997-Science
TL;DR: The microfluidic networks used to pattern biomolecules with high resolution on a variety of substrates suggest a practical way to incorporate biological material on technological substrates.
Abstract: Microfluidic networks (microFNs) were used to pattern biomolecules with high resolution on a variety of substrates (gold, glass, or polystyrene). Elastomeric microFNs localized chemical reactions between the biomolecules and the surface, requiring only microliters of reagent to cover square millimeter-sized areas. The networks were designed to ensure stability and filling of the microFN and allowed a homogeneous distribution and robust attachment of material to the substrate along the conduits in the microFN. Immunoglobulins patterned on substrates by means of microFNs remained strictly confined to areas enclosed by the network with submicron resolution and were viable for subsequent use in assays. The approach is simple and general enough to suggest a practical way to incorporate biological material on technological substrates.

Journal ArticleDOI
TL;DR: In this article, an accurate determination of the physical oxide thickness is achieved by fitting experimentally measured capacitance-versus-voltage curves to quantum-mechanically simulated capacitance-versus-voltage results.
Abstract: Quantum-mechanical modeling of electron tunneling current from the quantized inversion layer of ultra-thin-oxide (<40 Å) nMOSFET's is presented, together with experimental verification. An accurate determination of the physical oxide thickness is achieved by fitting experimentally measured capacitance-versus-voltage curves to quantum-mechanically simulated capacitance-versus-voltage results. The lifetimes of quasibound states and the direct tunneling current are calculated using a transverse-resonant method. These results are used to project an oxide scaling limit of 20 Å before the chip standby power becomes excessive due to tunneling currents.

Journal ArticleDOI
Don Coppersmith1
TL;DR: It is shown how to find sufficiently small integer solutions to a polynomial in a single variable modulo N, and to a polynomial in two variables over the integers.
Abstract: We show how to find sufficiently small integer solutions to a polynomial in a single variable modulo N, and to a polynomial in two variables over the integers. The methods sometimes extend to more variables. As applications: RSA encryption with exponent 3 is vulnerable if the opponent knows two-thirds of the message, or if two messages agree over eight-ninths of their length; and we can find the factors of N=PQ if we are given the high order $\frac{1}{4} \log_2 N$ bits of P.
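
The univariate result is usually stated along the following lines (a standard paraphrase of the theorem, not quoted from the paper):

```latex
% Univariate case (paraphrased). Let p(x) be a monic polynomial of degree d
% and N an integer of unknown factorization. Then all integers x_0 with
\[
p(x_0) \equiv 0 \pmod{N}, \qquad |x_0| \le N^{1/d},
\]
% can be found in time polynomial in \log N and d.
% Example application: for RSA with e = 3 and a message whose high-order
% two-thirds m_1 is known, the unknown part x_0 is a small root of
% (m_1 + x)^3 - c \equiv 0 (mod N).
```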

Patent
18 Jun 1997
TL;DR: In this paper, an Internet access device (100) uses an automatic configuration process (600) to handle the task of configuring the Internet access devices at a customer site for communication with the Internet (10).
Abstract: An Internet access device (100) uses an automatic configuration process (600) to handle the task of configuring the Internet access device at a customer site for communication with the Internet (10). Once configured, the customer has electronic mail and other access to the Internet from his local area network. A not yet configured Internet access device is shipped directly to a customer without having to be manually configured first. The customer enters a registration identification number (326) and a telephone number onto the Internet access device. The Internet access device then automatically connects to the Internet, downloads configuration data from a configuration server (410) containing customer site specific configuration data, and then automatically configures itself for communication with the Internet. The Internet access device is simple to install for a customer and provides valuable features such as a router (240), firewall, e-mail gateway (212), web server (220), and other servers (222). The Internet access device initially connects to the Internet through an Internet service provider (14) over a standard analog telephone line using a standard modem (52) and using a dynamic IP address. Once automatically configured, the Internet access device may then communicate with the Internet using any suitable connection including an analog telephone line, or a higher-speed line such as an ISDN line or a frame relay circuit and is assigned a static IP address and a range of IP addresses for other devices on its local area network.

Proceedings ArticleDOI
Miklós Ajtai1, Cynthia Dwork1
04 May 1997
TL;DR: A probabilistic public key cryptosystem is presented which is secure unless the worst case of a certain lattice problem can be solved in polynomial time.

Journal ArticleDOI
TL;DR: A call admission algorithm is introduced, which uses current traffic and bandwidth utilization conditions, as well as the amount of resources and maximum allowable "dropping probability" being requested.
Abstract: The shadow cluster concept can be used to estimate future resource requirements and to perform call admission decisions in wireless networks. Shadow clusters can be used to decide if a new call can be admitted to a wireless network based on its quality-of-service (QoS) requirements and local traffic conditions. The shadow cluster concept can especially be useful in future wireless networks with microcellular architectures where service will be provided to users with diverse QoS requirements. The framework of a shadow cluster system is completely distributed, and can be viewed as a message system where mobile terminals inform the base stations in their neighborhood about their requirements, position, and movement parameters. With this information, base stations predict future demands, reserve resources accordingly, and admit only those mobile terminals which can be supported adequately. The shadow cluster concept involves some processing and communication overheads. These overheads have no effect on wireless resources, but only on the base stations and the underlying wireline network. It is shown how base stations determine the probabilities that a mobile terminal will be active in other cells at future times, define and maintain shadow clusters by using probabilistic information on the future position of their mobile terminals with active calls, and predict resource demands based on shadow cluster information. In addition, a call admission algorithm is introduced, which uses current traffic and bandwidth utilization conditions, as well as the amount of resources and maximum allowable "dropping probability" being requested. Performance results showing the advantages of the shadow cluster concept are also included.

Journal ArticleDOI
Daniel M. Yellin1, Robert E. Strom1
TL;DR: Notions of interface compatibility based upon protocols are defined and it is shown how compatibility can be checked; leveraging the information provided by protocols, adaptors can be automatically generated from a high-level description called an interface mapping.
Abstract: In this article we examine the augmentation of application interfaces with enhanced specifications that include sequencing constraints called protocols. Protocols make explicit the relationship between messages (methods) supported by the application. These relationships are usually only given implicitly, either in the code or in textual comments. We define notions of interface compatibility based upon protocols and show how compatibility can be checked, discovering a class of errors that cannot be discovered via the type system alone. We then define software adaptors that can be used to bridge the difference between applications that have functionally compatible but type- and protocol-incompatible interfaces. We discuss what it means for an adaptor to be well formed. Leveraging the information provided by protocols, we show how adaptors can be automatically generated from a high-level description, called an interface mapping.
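
A toy sketch of the protocol idea (a hypothetical encoding, much simpler than the paper's formalism, which also treats internal choices, final states and well-formed adaptors): each interface is a finite-state machine with send/receive transitions, and compatibility fails if a reachable joint state lets one side send a message the other cannot receive.

```python
from collections import deque

def compatible(p1, s1, p2, s2):
    """Explore the synchronous product of two protocols (dicts: state -> {(action, msg): next}).
    Return False if some reachable joint state allows an unspecified reception."""
    seen, queue = set(), deque([(s1, s2)])
    while queue:
        a, b = queue.popleft()
        if (a, b) in seen:
            continue
        seen.add((a, b))
        for (act, msg), nxt_a in p1.get(a, {}).items():
            if act == "send":
                recv = p2.get(b, {}).get(("recv", msg))
                if recv is None:
                    return False          # partner cannot receive this message here
                queue.append((nxt_a, recv))
        for (act, msg), nxt_b in p2.get(b, {}).items():
            if act == "send":
                recv = p1.get(a, {}).get(("recv", msg))
                if recv is None:
                    return False
                queue.append((recv, nxt_b))
    return True

# Toy example: a client that sends "req" then expects "resp", and a matching server.
client = {"c0": {("send", "req"): "c1"}, "c1": {("recv", "resp"): "c0"}}
server = {"s0": {("recv", "req"): "s1"}, "s1": {("send", "resp"): "s0"}}
print(compatible(client, "c0", server, "s0"))   # True
```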

Patent
20 Feb 1997
TL;DR: In this article, a client computer (102) with a scanner (118) capable of scanning objects (115) for a code (117) was used to translate the code into a URL (Uniform Ressource Locator) that specifies both a server computer (122, 160) and the location within the server of information that is relevant to the object (115).
Abstract: A client computer (102) with a scanner (118) capable of scanning objects (115) for a code (117). The client computer (102) scans the object (115) of interest and translates the code (117) into a URL (Uniform Resource Locator) that specifies both a server computer (122, 160) and the location within the server of information that is relevant to the object (115). The client computer (102) transmits the URL to the server computer (122, 160), receives the information related to the object (115) from the server computer (122, 160), and communicates the information to the customer.

Proceedings ArticleDOI
22 Mar 1997
TL;DR: A great number of studies have verified and/or applied Fitts' law to HCI problems, making Fitts' law one of the most intensively studied topics in the HCI literature.
Abstract: As a developing discipline, research results in the field of human computer interaction (HCI) tend to be "soft". Many workers in the field have argued that the advancement of HCI lies in "hardening" the field with quantitative and robust models. In reality, few theoretical, quantitative tools are available in user interface research and development. A rare exception to this is Fitts' law. Extending information theory to the human perceptual-motor system, Paul Fitts (1954) found a logarithmic relationship that models speed-accuracy tradeoffs in aimed movements. A great number of studies have verified and/or applied Fitts' law to HCI problems, such as pointing performance on a screen, making Fitts' law one of the most intensively studied topics in the HCI literature.
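
For reference, the logarithmic relationship referred to here is usually written in one of two standard forms (general background, not specific to this paper):

```latex
% Fitts' original formulation: movement time MT to a target of width W
% at distance (amplitude) A, with empirically fitted constants a and b.
\[
MT = a + b \log_2\!\left(\frac{2A}{W}\right)
\]
% Shannon formulation commonly used in HCI work:
\[
MT = a + b \log_2\!\left(\frac{A}{W} + 1\right)
\]
% The logarithmic term is the "index of difficulty", measured in bits.
```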

Journal ArticleDOI
TL;DR: In this article, a universal mechanism for cluster formation in all epochs and environments is found to be consistent with the properties and locations of young and old globular clusters, open clusters and unbound associations, and interstellar clouds.
Abstract: A universal mechanism for cluster formation in all epochs and environments is found to be consistent with the properties and locations of young and old globular clusters, open clusters and unbound associations, and interstellar clouds. The primary structural differences between various cluster types result from differences in pressure at the time of formation, combined with different ages for subsequent evolution. All clusters begin with a mass distribution similar to that for interstellar clouds, which is approximately n(M) dM ∝ M^-2 dM. Old halo globulars have a current mass distribution that falls off at low mass because of a Hubble time of cluster destruction. Young globulars have not yet had time for a similar loss, and some old open clusters have survived because of their low densities. The peak in the halo cluster luminosity function depends only on age, and is independent of the host galaxy luminosity, as observed. The peak globular cluster mass is not a characteristic or Jeans mass in the primordial galaxy, as previously suggested. The initial mass distribution functions for young and old globular clusters, open clusters and associations, and interstellar clouds are all power laws with a slope of ~ -2. This distribution could be the result of fractal structure in turbulent gas. New data on clusters in the LMC also follow this power law. The slope is so steep that it implies a significant fraction of star formation occurs in small clusters. Numerous halo field stars should come from the evaporation of small halo clusters, and a high fraction of disk field stars should arise in small unbound disk clusters. This differs significantly from previous suggestions that most disk stars form in large OB associations. Globular clusters of all ages preferentially form in high-pressure regions. This is directly evident today in the form of large kinematic pressures from the densities and relative velocities of member stars. High pressures at the time of globular cluster formation are either the result of a high background virial density in that part of the galaxy (as in dwarf galaxies or galactic nuclei and nuclear rings), turbulence compression (in halo globulars), or large-scale shocks (in interacting galaxies). Massive clusters that form in such high-pressure environments are more likely to be bound than low-mass clusters or clusters of equal mass in low-pressure regions. This is because virialized clouds are more tightly bound at high pressure. A simple model illustrates this effect. One implication of this result is that starburst regions preferentially make globular clusters, in which case some elliptical galaxies could have formed by the violent merger of spiral galaxies.

Patent
Brian John Cragun1, Paul Reuben Day1
24 Nov 1997
TL;DR: In this paper, a user friendly method for regulating the media environment of a television viewer by controlling content displayed on the television is proposed. But, the method is limited to the case where a user's profile is provided by a user which determines guidelines for an individual viewer.
Abstract: A user friendly method for regulating the media environment of a television viewer by controlling content displayed on the television. The method controls content in response to a viewer's profile, accumulated viewing time and at least one content classification source. A viewer's profile is provided by a user which determines guidelines for an individual viewer. Content classification values for television are received and stored in response to a viewer's request for viewing a program. The content classification values correspond to television program availability and values attributed to viewing time. The content classification values are categorized into desirable content and undesirable content. The viewer profile data associates a viewer with a content classification value. Thereafter, the quantity of time a viewer spends viewing desirable content and the quantity of time a viewer spends viewing undesirable content is determined. In response to a multidimensional user selected censorship structure, the media environment of the viewer is regulated. The censorship structure utilizes variables such as content classification values, rating value, rankings of rating sources and viewing time credits for desirable material and viewing time debits for undesirable material. Additionally, the method down-loads the content classification values from multiple sources utilizing an interconnected computer. Many sources can be queried utilizing the user selected ranking of rating sources and the user can edit the ratings. The present invention controls the television environment in response to past behavior of a viewer.

Proceedings ArticleDOI
01 May 1997
TL;DR: The Markov prefetcher acts as an interface between the on-chip and off-chip cache and can be added to existing computer designs; it reduces the overall execution stalls due to instruction and data memory operations by an average of 54% for various commercial benchmarks while using only two-thirds the memory of a demand-fetch cache organization.
Abstract: Prefetching is one approach to reducing the latency of memory operations in modern computer systems. In this paper, we describe the Markov prefetcher. This prefetcher acts as an interface between the on-chip and off-chip cache, and can be added to existing computer designs. The Markov prefetcher is distinguished by prefetching multiple reference predictions from the memory subsystem, and then prioritizing the delivery of those references to the processor. This design results in a prefetching system that provides good coverage, is accurate and produces timely results that can be effectively used by the processor. In our cycle-level simulations, the Markov Prefetcher reduces the overall execution stalls due to instruction and data memory operations by an average of 54% for various commercial benchmarks while only using two thirds the memory of a demand-fetch cache organization.
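
A minimal sketch of the idea behind a Markov miss predictor (illustrative only, not the paper's hardware design): record which miss addresses have historically followed a given miss address, and on a new miss issue prefetches for the most frequent successors.

```python
from collections import defaultdict, Counter

class MarkovPrefetcher:
    """Toy first-order Markov miss predictor: maps a miss address to the miss
    addresses that have followed it, and prefetches the most frequent ones."""

    def __init__(self, predictions_per_entry=2):
        self.next_counts = defaultdict(Counter)   # addr -> Counter of successor addrs
        self.prev_miss = None
        self.k = predictions_per_entry

    def on_miss(self, addr):
        # Learn: the current miss follows the previous one.
        if self.prev_miss is not None:
            self.next_counts[self.prev_miss][addr] += 1
        self.prev_miss = addr
        # Predict: issue up to k prefetches, highest observed frequency first.
        return [a for a, _ in self.next_counts[addr].most_common(self.k)]

# Example: a repeating miss pattern (e.g., pointer chasing) becomes predictable.
pf = MarkovPrefetcher()
trace = [0x100, 0x1A0, 0x240, 0x100, 0x1A0, 0x240, 0x100]
for miss in trace:
    print(hex(miss), "->", [hex(a) for a in pf.on_miss(miss)])
```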