Showing papers by "Hewlett-Packard" published in 2003


Journal Article
George Forman
TL;DR: An empirical comparison of twelve feature selection methods evaluated on a benchmark of 229 text classification problem instances, revealing that a new feature selection metric, called 'Bi-Normal Separation' (BNS), outperformed the others by a substantial margin in most situations and was the top single choice for all goals except precision.
Abstract: Machine learning for text classification is the cornerstone of document categorization, news filtering, document routing, and personalization. In text domains, effective feature selection is essential to make the learning task efficient and more accurate. This paper presents an empirical comparison of twelve feature selection methods (e.g. Information Gain) evaluated on a benchmark of 229 text classification problem instances that were gathered from Reuters, TREC, OHSUMED, etc. The results are analyzed from multiple goal perspectives (accuracy, F-measure, precision, and recall), since each is appropriate in different situations. The results reveal that a new feature selection metric we call 'Bi-Normal Separation' (BNS) outperformed the others by a substantial margin in most situations. This margin widened in tasks with high class skew, which is rampant in text classification problems and is particularly challenging for induction algorithms. A new evaluation methodology is offered that focuses on the needs of the data mining practitioner faced with a single dataset who seeks to choose one (or a pair of) metrics that are most likely to yield the best performance. From this perspective, BNS was the top single choice for all goals except precision, for which Information Gain yielded the best result most often. This analysis also revealed, for example, that Information Gain and Chi-Squared have correlated failures, and so they work poorly together. When choosing optimal pairs of metrics for each of the four performance goals, BNS is consistently a member of the pair; e.g., for greatest recall, the pair BNS + F1-measure yielded the best performance on the greatest number of tasks by a considerable margin.

2,621 citations
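
For readers who want to experiment with the metric, BNS scores a feature from its true-positive and false-positive rates via the inverse standard-normal CDF. A minimal sketch, assuming SciPy is available and clipping the rates away from 0 and 1 as the paper recommends (the function name and example counts are illustrative only):

    from scipy.stats import norm

    def bi_normal_separation(tp, fp, pos, neg, eps=0.0005):
        """BNS = |F^-1(tpr) - F^-1(fpr)|, where F^-1 is the inverse of the
        standard normal CDF; rates are clipped to avoid infinite values."""
        tpr = min(max(tp / pos, eps), 1 - eps)
        fpr = min(max(fp / neg, eps), 1 - eps)
        return abs(norm.ppf(tpr) - norm.ppf(fpr))

    # Example: a word present in 80 of 100 positive docs and 50 of 900 negatives
    print(bi_normal_separation(tp=80, fp=50, pos=100, neg=900))

Features would then be ranked by their BNS score and the top-k kept before training the classifier.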


Journal ArticleDOI
TL;DR: In this paper, the authors show that some factors are better indicators of social connections than others, and that these indicators vary between user populations, and provide potential applications in automatically inferring real world connections and discovering, labeling, and characterizing communities.

2,578 citations


Journal ArticleDOI
TL;DR: Combining Web services to create higher level, cross-organizational business processes requires standards to model the interactions.
Abstract: Combining Web services to create higher level, cross-organizational business processes requires standards to model the interactions. Several standards are working their way through industry channels and into vendor products.

1,291 citations


Journal ArticleDOI
TL;DR: This work describes a streaming algorithm that effectively clusters large data streams and provides empirical evidence of the algorithm's performance on synthetic and real data streams.
Abstract: The data stream model has recently attracted attention for its applicability to numerous types of data, including telephone records, Web documents, and clickstreams. For analysis of such data, the ability to process the data in a single pass, or a small number of passes, while using little memory, is crucial. We describe such a streaming algorithm that effectively clusters large data streams. We also provide empirical evidence of the algorithm's performance on synthetic and real data streams.

942 citations
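
The divide-and-conquer structure behind such streaming algorithms is easy to illustrate: cluster each arriving chunk, retain only weighted cluster centers, and re-cluster the retained centers at the end. The sketch below uses k-means as a stand-in for the k-median style subroutine used in the paper, so it shows the single-pass, small-memory structure rather than the authors' exact algorithm:

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_stream(chunks, k):
        """Summarize each in-memory chunk by k weighted centers, then
        cluster the retained centers to obtain the final k centers."""
        centers, weights = [], []
        for chunk in chunks:                       # one pass over the stream
            km = KMeans(n_clusters=k, n_init=10).fit(chunk)
            centers.append(km.cluster_centers_)
            weights.append(np.bincount(km.labels_, minlength=k))
        centers = np.vstack(centers)
        weights = np.concatenate(weights)
        final = KMeans(n_clusters=k, n_init=10).fit(centers, sample_weight=weights)
        return final.cluster_centers_

    # Example: a stream delivered as 10 chunks of 1,000 two-dimensional points
    stream = (np.random.randn(1000, 2) for _ in range(10))
    print(cluster_stream(stream, k=5))

Memory use stays proportional to the chunk size plus the number of retained centers, which is the property the data stream model demands.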


Proceedings ArticleDOI
03 Dec 2003
TL;DR: This paper proposes and evaluates single-ISA heterogeneous multi-core architectures as a mechanism to reduce processor power dissipation; the results indicate a 39% average energy reduction while only sacrificing 3% in performance.
Abstract: This paper proposes and evaluates single-ISA heterogeneous multi-core architectures as a mechanism to reduce processor power dissipation. Our design incorporates heterogeneous cores representing different points in the power/performance design space; during an application's execution, system software dynamically chooses the most appropriate core to meet specific performance and power requirements. Our evaluation of this architecture shows significant energy benefits. For an objective function that optimizes for energy efficiency with a tight performance threshold, for 14 SPEC benchmarks, our results indicate a 39% average energy reduction while only sacrificing 3% in performance. An objective function that optimizes for energy-delay with looser performance bounds achieves, on average, nearly a factor of three improvement in energy-delay product while sacrificing only 22% in performance. These energy savings are substantially greater than those achievable with chip-wide voltage/frequency scaling.

809 citations
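
The scheduling policy sketched in the abstract can be pictured as a small per-interval optimization: among the heterogeneous cores, choose the one with the lowest predicted energy whose slowdown relative to the fastest core stays within a threshold. The core names below echo the Alpha-like cores discussed in the paper, but all numbers are invented for illustration; this is a sketch of the objective function, not the paper's simulation infrastructure:

    def pick_core(estimates, max_slowdown=1.03):
        """estimates: {core: (time_s, energy_j)} predicted for the next interval.
        Return the lowest-energy core within max_slowdown of the fastest core."""
        best_time = min(t for t, _ in estimates.values())
        eligible = {c: (t, e) for c, (t, e) in estimates.items()
                    if t <= best_time * max_slowdown}
        return min(eligible, key=lambda c: eligible[c][1])

    # Hypothetical per-interval predictions for four cores of differing complexity
    print(pick_core({"EV4": (1.40, 0.9), "EV5": (1.20, 1.4),
                     "EV6": (1.03, 2.8), "EV8-": (1.00, 5.0)}))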


Proceedings Article
31 Mar 2003
TL;DR: The mechanisms in Plutus to reduce the number of cryptographic keys exchanged between users by using filegroups, distinguish file read and write access, handle user revocation efficiently, and allow an untrusted server to authorize file writes are explained.
Abstract: Plutus is a cryptographic storage system that enables secure file sharing without placing much trust on the file servers. In particular, it makes novel use of cryptographic primitives to protect and share files. Plutus features highly scalable key management while allowing individual users to retain direct control over who gets access to their files. We explain the mechanisms in Plutus to reduce the number of cryptographic keys exchanged between users by using filegroups, distinguish file read and write access, handle user revocation efficiently, and allow an untrusted server to authorize file writes. We have built a prototype of Plutus on OpenAFS. Measurements of this prototype show that Plutus achieves strong security with overhead comparable to systems that encrypt all network traffic.

781 citations


Journal ArticleDOI
13 Nov 2003-Nature
TL;DR: The results indicate that the hybrid organic/inorganic memory device is a reliable means for achieving rapid, large-scale archival data storage; such WORM memories enable ultralow-cost permanent storage of digital images, eliminating the need for the slow, bulky and expensive mechanical drives used in conventional magnetic and optical memories.
Abstract: Organic devices promise to revolutionize the extent of, and access to, electronics by providing extremely inexpensive, lightweight and capable ubiquitous components that are printed onto plastic, glass or metal foils. One key component of an electronic circuit that has thus far received surprisingly little attention is an organic electronic memory. Here we report an architecture for a write-once read-many-times (WORM) memory, based on the hybrid integration of an electrochromic polymer with a thin-film silicon diode deposited onto a flexible metal foil substrate. WORM memories are desirable for ultralow-cost permanent storage of digital images, eliminating the need for slow, bulky and expensive mechanical drives used in conventional magnetic and optical memories. Our results indicate that the hybrid organic/inorganic memory device is a reliable means for achieving rapid, large-scale archival data storage. The WORM memory pixel exploits a mechanism of current-controlled, thermally activated un-doping of a two-component electrochromic conducting polymer.

731 citations


Journal ArticleDOI
19 Oct 2003
TL;DR: The goal is to design tools that enable modestly-skilled programmers to isolate performance bottlenecks in distributed systems composed of black-box nodes; two very different algorithms are developed for inferring the dominant causal paths through a distributed system from message-level traces.
Abstract: Many interesting large-scale systems are distributed systems of multiple communicating components. Such systems can be very hard to debug, especially when they exhibit poor performance. The problem becomes much harder when systems are composed of "black-box" components: software from many different (perhaps competing) vendors, usually without source code available. Typical solutions-provider employees are not always skilled or experienced enough to debug these systems efficiently. Our goal is to design tools that enable modestly-skilled programmers (and experts, too) to isolate performance bottlenecks in distributed systems composed of black-box nodes. We approach this problem by obtaining message-level traces of system activity, as passively as possible and without any knowledge of node internals or message semantics. We have developed two very different algorithms for inferring the dominant causal paths through a distributed system from these traces. One uses timing information from RPC messages to infer inter-call causality; the other uses signal-processing techniques. Our algorithms can ascribe delay to specific nodes on specific causal paths. Unlike previous approaches to similar problems, our approach requires no modifications to applications, middleware, or messages.

724 citations


Tom Fawcett
01 Jan 2003
TL;DR: This article serves both as a tutorial introduction to ROC graphs and as a practical guide for using them in research.
Abstract: Receiver Operating Characteristics (ROC) graphs are a useful technique for organizing classifiers and visualizing their performance. ROC graphs are commonly used in medical decision making, and in recent years have been increasingly adopted in the machine learning and data mining research communities. Although ROC graphs are apparently simple, there are some common misconceptions and pitfalls when using them in practice. This article serves both as a tutorial introduction to ROC graphs and as a practical guide for using them in research.

722 citations
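
The central construction in the tutorial can be stated in a few lines: sort instances by decreasing classifier score and sweep a threshold, emitting one ROC point per distinct score. This is a compact paraphrase of that idea rather than the paper's exact pseudocode:

    def roc_points(scores, labels):
        """Return (fpr, tpr) points by sweeping a threshold over the sorted
        scores; labels are 1 for positive instances and 0 for negatives."""
        pos = sum(labels)
        neg = len(labels) - pos
        ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
        points, tp, fp, prev = [], 0, 0, None
        for score, label in ranked:
            if score != prev:              # emit one point per distinct threshold
                points.append((fp / neg, tp / pos))
                prev = score
            tp += label
            fp += 1 - label
        points.append((1.0, 1.0))
        return points

    print(roc_points([0.9, 0.8, 0.7, 0.6], [1, 1, 0, 1]))

The area under the resulting staircase is the familiar AUC statistic the article also discusses.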


Journal ArticleDOI
TL;DR: In this paper, the authors describe the fabrication and testing of nanoscale molecular-electronic circuits that comprise a molecular monolayer of [2]rotaxanes sandwiched between metal nanowires to form an 8 × 8 crossbar within a 1 µm² area.
Abstract: Molecular electronics offer an alternative pathway to construct nanoscale circuits in which the critical dimension is naturally associated with molecular sizes. We describe the fabrication and testing of nanoscale molecular-electronic circuits that comprise a molecular monolayer of [2]rotaxanes sandwiched between metal nanowires to form an 8 × 8 crossbar within a 1 µm² area. The resistance at each cross point of the crossbar can be switched reversibly. By using each cross point as an active memory cell, crossbar circuits were operated as rewritable, nonvolatile memory with a density of 6.4 Gbits cm⁻². By setting the resistances at specific cross points, two 4 × 4 subarrays of the crossbar were configured to be a nanoscale demultiplexer and multiplexer that were used to read memory bits in a third subarray.

701 citations


Journal ArticleDOI
TL;DR: A variational model for the Retinex problem that unifies previous methods and shows that the illumination estimation problem can be formulated as a Quadratic Programming optimization problem.
Abstract: Retinex theory addresses the problem of separating the illumination from the reflectance in a given image and thereby compensating for non-uniform lighting. This is in general an ill-posed problem. In this paper we propose a variational model for the Retinex problem that unifies previous methods. Similar to previous algorithms, it assumes spatial smoothness of the illumination field. In addition, knowledge of the limited dynamic range of the reflectance is used as a constraint in the recovery process. A penalty term is also included, exploiting a priori knowledge of the nature of the reflectance image. The proposed formulation adopts a Bayesian viewpoint of the estimation problem, which leads to an algebraic regularization term that contributes to better conditioning of the reconstruction problem. Based on the proposed variational model, we show that the illumination estimation problem can be formulated as a Quadratic Programming optimization problem. An efficient multi-resolution algorithm is proposed. It exploits the spatial correlation in the reflectance and illumination images. Applications of the algorithm to various color images yield promising results.
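
Written out, the variational model takes roughly the following form, with s the log of the observed image and l the log of the illumination (reproduced from memory, so the exact terms and weights should be checked against the paper); the pointwise constraint l ≥ s is what turns the discretized problem into a quadratic program:

    \min_{l}\; F[l] = \int_{\Omega} \Big( |\nabla l|^{2}
        + \alpha\,(l - s)^{2} + \beta\,|\nabla(l - s)|^{2} \Big)\, dx\, dy
    \qquad \text{subject to } l \ge s \ \text{on } \Omega .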

Journal ArticleDOI
TL;DR: In this paper, a complete PWM controller IC for high-frequency switching converters is described, including an A/D converter, compensator, and digital pulse-width modulator.
Abstract: This paper describes a complete digital PWM controller IC for high-frequency switching converters. A novel architecture and novel configurations of the key building blocks, namely the A/D converter, the compensator, and the digital pulse-width modulator, are introduced to meet the requirements of tight output voltage regulation, high-speed dynamic response, and programmability without external passive components. The implementation techniques are experimentally verified on a prototype chip that takes less than 1 mm² of silicon area in a standard 0.5 µm digital complementary metal oxide semiconductor (CMOS) process and operates at the switching frequency of 1 MHz.
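
The compensator in controllers of this kind is typically a discrete-time PID law evaluated once per switching period on the quantized error sample e[n]; a generic form (not necessarily the exact law implemented on this chip) is

    d[n] \;=\; d[n-1] \;+\; a\,e[n] \;+\; b\,e[n-1] \;+\; c\,e[n-2],

where d[n] is the duty-cycle command handed to the digital pulse-width modulator and the coefficients a, b, c encode the proportional, integral, and derivative gains.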

Journal ArticleDOI
TL;DR: In this paper, it was shown that quantum computation circuits using coherent states as the logical qubits can be constructed from simple linear networks, conditional photon measurements, and small coherent superposition resource states.
Abstract: We show that quantum computation circuits using coherent states as the logical qubits can be constructed from simple linear networks, conditional photon measurements, and "small" coherent superposition resource states.
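
The logical encoding behind this scheme identifies the qubit basis states with two well-separated coherent states; a commonly used convention, consistent with the abstract but to be checked against the paper's exact normalizations, is

    |0\rangle_{L} \equiv |\alpha\rangle, \qquad |1\rangle_{L} \equiv |{-\alpha}\rangle,
    \qquad \langle\alpha|{-\alpha}\rangle = e^{-2|\alpha|^{2}},

so for |α| of order a few the basis states are nearly orthogonal, and the "small" resource states mentioned in the abstract are suitably normalized superpositions of |α⟩ and |−α⟩.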

Posted Content
TL;DR: The authors show that small-world search strategies using a contact's position in physical space or in an organizational hierarchy relative to the target can effectively locate most individuals in a social network.
Abstract: We address the question of how participants in a small world experiment are able to find short paths in a social network using only local information about their immediate contacts. We simulate such experiments on a network of actual email contacts within an organization as well as on a student social networking website. On the email network we find that small world search strategies using a contact's position in physical space or in an organizational hierarchy relative to the target can effectively be used to locate most individuals. However, we find that in the online student network, where the data is incomplete and hierarchical structures are not well defined, local search strategies are less effective. We compare our findings to recent theoretical hypotheses about underlying social structure that would enable these simple search strategies to succeed and discuss the implications to social software design.
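
The simulated strategy is essentially greedy routing: at each step the current node forwards the message to whichever contact is closest to the target under some distance, such as separation in the organizational chart or physical distance between office locations. A minimal sketch of that greedy step, with the contact graph and distance function left as assumptions:

    def greedy_search(start, target, contacts, distance, max_hops=100):
        """Forward the message to the contact closest to the target.
        contacts: {node: list of neighbours}; distance: (a, b) -> float."""
        path, current = [start], start
        for _ in range(max_hops):
            if current == target:
                return path
            next_hop = min(contacts[current], key=lambda n: distance(n, target))
            if distance(next_hop, target) >= distance(current, target):
                return None          # stuck: no contact is closer to the target
            path.append(next_hop)
            current = next_hop
        return None

Runs of this kind over the email graph, with hierarchy or geography supplying the distance, are what the abstract refers to as simulated small-world experiments.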

Proceedings ArticleDOI
25 Aug 2003
TL;DR: Experiments show that pSearch can achieve performance comparable to centralized information retrieval systems by searching only a small number of nodes, and techniques that help distribute the indices more evenly across the nodes are described.
Abstract: Content-based full-text search is a challenging problem in Peer-to-Peer (P2P) systems. Traditional approaches have either been centralized or use flooding to ensure accuracy of the results returned. In this paper, we present pSearch, a decentralized non-flooding P2P information retrieval system. pSearch distributes document indices through the P2P network based on document semantics generated by Latent Semantic Indexing (LSI). The search cost (in terms of different nodes searched and data transmitted) for a given query is thereby reduced, since the indices of semantically related documents are likely to be co-located in the network. We also describe techniques that help distribute the indices more evenly across the nodes, and further reduce the number of nodes accessed using appropriate index distribution as well as using index samples and recently processed queries to guide the search. Experiments show that pSearch can achieve performance comparable to centralized information retrieval systems by searching only a small number of nodes. For a system with 128,000 nodes and 528,543 documents (from news, magazines, etc.), pSearch searches only 19 nodes and transmits only 95.5 KB of data during the search, whereas the top 15 documents returned by pSearch and LSI have a 91.7% intersection.
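
The semantic placement idea rests on ordinary LSI: documents and queries are projected into a low-dimensional latent space, and in pSearch those low-dimensional vectors additionally determine which nodes index which regions of the space. A rough sketch of the LSI ranking step only, with scikit-learn's truncated SVD standing in for the paper's LSI implementation and a toy corpus:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    docs = ["peer to peer search", "latent semantic indexing",
            "distributed hash tables", "full text retrieval"]
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(docs)
    lsi = TruncatedSVD(n_components=2).fit(tfidf)

    doc_vecs = lsi.transform(tfidf)                     # semantic vector per document
    query_vecs = lsi.transform(vectorizer.transform(["semantic search"]))
    print(cosine_similarity(query_vecs, doc_vecs))      # rank documents for the query

In the distributed system, the same low-dimensional vectors serve as keys, so a query only needs to visit the nodes responsible for nearby regions of the semantic space.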

Proceedings Article
07 Sep 2003
TL;DR: This paper describes the persistence subsystem of Jena2, which is intended to support large datasets; query optimization for RDF is identified as a promising area for future research.
Abstract: RDF and related Semantic Web technologies have been the recent focus of much research activity. This work has led to new specifications for RDF and OWL. However, efficient implementations of these standards are needed to realize the vision of a world-wide semantic Web. In particular, implementations that scale to large, enterprise-class data sets are required. Jena2 is the second generation of Jena, a leading semantic web programmers' toolkit. This paper describes the persistence subsystem of Jena2 which is intended to support large datasets. This paper describes its features, the changes from Jena1, relevant details of the implementation and performance tuning issues. Query optimization for RDF is identified as a promising area for future research.

Book ChapterDOI
01 Jan 2003
TL;DR: In this paper, the authors describe a method for the automatic identification of communities of practice from email logs within an organization using a betweenness centrality algorithm that can rapidly find communities within a graph representing information flows.
Abstract: We describe a method for the automatic identification of communities of practice from email logs within an organization. We use a betweenness centrality algorithm that can rapidly find communities within a graph representing information flows. We apply this algorithm to an email corpus of nearly one million messages collected over a two-month span, and show that the method is effective at identifying true communities, both formal and informal, within these scale-free graphs. This approach also enables the identification of leadership roles within the communities. These studies are complemented by a qualitative evaluation of the results in the field.
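
The procedure is in the spirit of Girvan-Newman betweenness clustering: repeatedly remove the edge with the highest betweenness centrality until the graph falls apart, and read the resulting components off as candidate communities. A small NetworkX illustration of that basic idea (the paper uses its own, faster approximation to cope with a corpus of nearly a million messages):

    import networkx as nx

    def betweenness_communities(graph, target_components=2):
        """Remove highest-betweenness edges until the graph splits apart."""
        g = graph.copy()
        while nx.number_connected_components(g) < target_components:
            betweenness = nx.edge_betweenness_centrality(g)
            g.remove_edge(*max(betweenness, key=betweenness.get))
        return list(nx.connected_components(g))

    # Toy "email" graph: two triangles joined by a single bridge edge
    g = nx.Graph([(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)])
    print(betweenness_communities(g))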

Proceedings ArticleDOI
20 May 2003
TL;DR: It is found that the average degree of change varies widely across top-level domains, and that larger pages change more often and more severely than smaller ones.
Abstract: How fast does the web change? Does most of the content remain unchanged once it has been authored, or are the documents continuously updated? Do pages change a little or a lot? Is the extent of change correlated to any other property of the page? All of these questions are of interest to those who mine the web, including all the popular search engines, but few studies have been performed to date to answer them. One notable exception is a study by Cho and Garcia-Molina, who crawled a set of 720,000 pages on a daily basis over four months, and counted pages as having changed if their MD5 checksum changed. They found that 40% of all web pages in their set changed within a week, and 23% of those pages that fell into the .com domain changed daily. This paper expands on Cho and Garcia-Molina's study, both in terms of coverage and in terms of sensitivity to change. We crawled a set of 150,836,209 HTML pages once every week, over a span of 11 weeks. For each page, we recorded a checksum of the page, and a feature vector of the words on the page, plus various other data such as the page length, the HTTP status code, etc. Moreover, we pseudo-randomly selected 0.1% of all of our URLs, and saved the full text of each download of the corresponding pages. After completion of the crawl, we analyzed the degree of change of each page, and investigated which factors are correlated with change intensity. We found that the average degree of change varies widely across top-level domains, and that larger pages change more often and more severely than smaller ones. This paper describes the crawl and the data transformations we performed on the logs, and presents some statistical observations on the degree of change of different classes of pages.
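
The bookkeeping behind such a study reduces to hashing each downloaded page and keeping a bag-of-words summary, so that both exact change and the degree of change can be measured between successive crawls. A minimal sketch of that bookkeeping (the real crawl also recorded page length, HTTP status codes, and similar metadata):

    import hashlib
    import re
    from collections import Counter

    def page_fingerprint(html):
        """MD5 checksum for exact-change detection plus a word-count vector
        for measuring the degree of change between downloads."""
        checksum = hashlib.md5(html.encode("utf-8")).hexdigest()
        words = Counter(re.findall(r"[a-z0-9]+", html.lower()))
        return checksum, words

    def degree_of_change(old_words, new_words):
        """Fraction of total word mass that differs (0 = identical)."""
        diff = sum((old_words - new_words).values()) + sum((new_words - old_words).values())
        total = sum(old_words.values()) + sum(new_words.values())
        return diff / total if total else 0.0

    c1, w1 = page_fingerprint("<p>hello web</p>")
    c2, w2 = page_fingerprint("<p>hello changed web page</p>")
    print(c1 == c2, degree_of_change(w1, w2))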

Proceedings Article
09 Dec 2003
TL;DR: This paper suggests an alternative procedure to the Fisher kernel for systematically finding kernel functions that naturally handle variable length sequence data in multimedia domains and derives a kernel distance based on the Kullback-Leibler (KL) divergence between generative models.
Abstract: Over the last few years significant efforts have been made to develop kernels that can be applied to sequence data such as DNA, text, speech, video and images. The Fisher Kernel and similar variants have been suggested as good ways to combine an underlying generative model in the feature space and discriminant classifiers such as SVMs. In this paper we suggest an alternative procedure to the Fisher kernel for systematically finding kernel functions that naturally handle variable length sequence data in multimedia domains. In particular for domains such as speech and images we explore the use of kernel functions that take full advantage of well known probabilistic models such as Gaussian Mixtures and single full covariance Gaussian models. We derive a kernel distance based on the Kullback-Leibler (KL) divergence between generative models. In effect our approach combines the best of both generative and discriminative methods and replaces the standard SVM kernels. We perform experiments on speaker identification/verification and image classification tasks and show that these new kernels have the best performance in speaker verification and mostly outperform the Fisher kernel based SVMs and the generative classifiers in speaker identification and image classification.
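
For the single full-covariance Gaussian case the construction is concrete: fit one Gaussian per variable-length sequence, compute the symmetric KL divergence (which has a closed form between Gaussians), and exponentiate it into a kernel value. A sketch of that computation, with the scaling constant A left as a free parameter:

    import numpy as np

    def kl_gaussians(mu0, cov0, mu1, cov1):
        """Closed-form KL( N(mu0, cov0) || N(mu1, cov1) ) for full covariances."""
        d = len(mu0)
        inv1 = np.linalg.inv(cov1)
        diff = mu1 - mu0
        return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - d
                      + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

    def kl_kernel(seq_a, seq_b, A=1.0):
        """Fit one Gaussian per sequence and map the symmetric KL divergence
        to a kernel value usable in place of a standard SVM kernel."""
        mu_a, cov_a = seq_a.mean(axis=0), np.cov(seq_a, rowvar=False)
        mu_b, cov_b = seq_b.mean(axis=0), np.cov(seq_b, rowvar=False)
        sym_kl = kl_gaussians(mu_a, cov_a, mu_b, cov_b) + kl_gaussians(mu_b, cov_b, mu_a, cov_a)
        return np.exp(-A * sym_kl)

    # Two "utterances" of different lengths over 3-dimensional features
    print(kl_kernel(np.random.randn(200, 3), np.random.randn(150, 3) + 0.5))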

Journal ArticleDOI
TL;DR: A component-based method and two global methods for face recognition are presented and evaluated with respect to robustness against pose changes; the component system clearly outperformed both global systems.

Journal ArticleDOI
TL;DR: Using the rich profile data provided by the users, the analysis of Club Nexus was able to deduce the attributes contributing to the formation of friendships, and to determine how the similarity of users decays as the distance between them in the network increases.
Abstract: We present an analysis of Club Nexus, an online community at Stanford University. Through the Nexus site we were able to study a reflection of the real world community structure within the student body. We observed and measured social network phenomena such as the small world effect, clustering, and the strength of weak ties. Using the rich profile data provided by the users we were able to deduce the attributes contributing to the formation of friendships, and to determine how the similarity of users decays as the distance between them in the network increases. In addition, we found correlations between users' personalities and their other attributes, as well as interesting correspondences between how users perceive themselves and how they are perceived by others.

Book ChapterDOI
01 Sep 2003
TL;DR: The aim of this research is to investigate this correlation and to translate the DT into the realms of computer security, thereby creating AIS that are no longer limited by self-nonself discrimination.
Abstract: We present ideas about creating a next generation Intrusion Detection System (IDS) based on the latest immunological theories. The central challenge with computer security is determining the difference between normal and potentially harmful activity. For half a century, developers have protected their systems by coding rules that identify and block specific events. However, the nature of current and future threats in conjunction with ever larger IT systems urgently requires the development of automated and adaptive defensive tools. A promising solution is emerging in the form of Artificial Immune Systems (AIS): The Human Immune System (HIS) can detect and defend against harmful and previously unseen invaders, so can we not build a similar Intrusion Detection System (IDS) for our computers? Presumably, those systems would then have the same beneficial properties as the HIS, such as error tolerance, adaptation and self-monitoring. Current AIS have been successful on test systems, but the algorithms rely on self-nonself discrimination, as stipulated in classical immunology. However, immunologists are increasingly finding fault with traditional self-nonself thinking and a new 'Danger Theory' (DT) is emerging. This new theory suggests that the immune system reacts to threats based on the correlation of various (danger) signals and it provides a method of 'grounding' the immune response, i.e. linking it directly to the attacker. Little is currently understood of the precise nature and correlation of these signals and the theory is a topic of hot debate. It is the aim of this research to investigate this correlation and to translate the DT into the realms of computer security, thereby creating AIS that are no longer limited by self-nonself discrimination. It should be noted that we do not intend to defend this controversial theory per se, although as a deliverable this project will add to the body of knowledge in this area. Rather we are interested in its merits for scaling up AIS applications by overcoming self-nonself discrimination problems.

Proceedings ArticleDOI
01 Jul 2003
TL;DR: The idea of "guaranteed visibility", where highlighted areas are treated as landmarks that must remain visually apparent at all times, is introduced in TreeJuxtaposer, a system designed to support the comparison task for large trees of several hundred thousand nodes.
Abstract: Structural comparison of large trees is a difficult task that is only partially supported by current visualization techniques, which are mainly designed for browsing. We present TreeJuxtaposer, a system designed to support the comparison task for large trees of several hundred thousand nodes. We introduce the idea of "guaranteed visibility", where highlighted areas are treated as landmarks that must remain visually apparent at all times. We propose a new methodology for detailed structural comparison between two trees and provide a new nearly-linear algorithm for computing the best corresponding node from one tree to another. In addition, we present a new rectilinear Focus+Context technique for navigation that is well suited to the dynamic linking of side-by-side views while guaranteeing landmark visibility and constant frame rates. These three contributions result in a system delivering a fluid exploration experience that scales both in the size of the dataset and the number of pixels in the display. We have based the design decisions for our system on the needs of a target audience of biologists who must understand the structural details of many phylogenetic, or evolutionary, trees. Our tool is also useful in many other application domains where tree comparison is needed, ranging from network management to call graph optimization to genealogy.

Proceedings Article
01 Jan 2003
TL;DR: Both acoustic and subjective approaches for calculating similarity between artists are evaluated and compared on a common database of 400 popular artists: acoustic techniques based on Mel-frequency cepstral coefficients and an intermediate "anchor space" of genre classification, and subjective techniques which use data from The All Music Guide, from a survey, from playlists and personal collections, and from web-text mining.
Keywords: music similarity, acoustic measures, evaluation, ground-truth
Abstract: Subjective similarity between musical pieces and artists is an elusive concept, but one that must be pursued in support of applications to provide automatic organization of large music collections. In this paper, we examine both acoustic and subjective approaches for calculating similarity between artists, comparing their performance on a common database of 400 popular artists. Specifically, we evaluate acoustic techniques based on Mel-frequency cepstral coefficients and an intermediate ‘anchor space’ of genre classification, and subjective techniques which use data from The All Music Guide, from a survey, from playlists and personal collections, and from web-text mining.

Journal ArticleDOI
01 Sep 2003-Bone
TL;DR: It is demonstrated that the male C57BL/6J mouse is a novel and appropriate model for use in studying endogenous, aging-related osteopenia and may be a useful model for the study of Type II osteoporosis.

Proceedings ArticleDOI
01 Sep 2003
TL;DR: This document describes an innovative approach and related mechanisms, leveraging identity-based encryption (IBE) and TCPA technologies, to enforce users' privacy by putting users in control and making organizations more accountable.
Abstract: Digital identities and profiles are precious assets. On one hand, they enable users to engage in transactions and interactions on the Internet. On the other hand, abuses and leakages of this information could violate the privacy of their owners, sometimes with serious consequences. Nowadays most people have a limited understanding of security and privacy policies as applied to their confidential information, and little control over the destiny of this information once it has been disclosed to third parties. In most cases this is a matter of trust. This document describes an innovative approach and related mechanisms to enforce users' privacy by putting users in control and making organizations more accountable. As part of our ongoing research activity, we introduce a technical solution based on sticky policies and tracing services that leverages identity-based encryption (IBE) and TCPA technologies. Work is in progress to build a full working prototype and deploy it in a real-life environment.

Journal ArticleDOI
TL;DR: A single molecular monolayer of bistable [2]rotaxanes sandwiched between two 40-nm metal electrodes was fabricated using imprint lithography; the resulting devices exhibited high on-off ratios and reversible switching properties.
Abstract: Nanoscale molecular-electronic devices comprising a single molecular monolayer of bistable [2]rotaxanes sandwiched between two 40-nm metal electrodes were fabricated using imprint lithography. Bistable current–voltage characteristics with high on–off ratios and reversible switching properties were observed. Such devices may function as basic elements for future ultradense electronic circuitry.

Journal ArticleDOI
TL;DR: In this article, the authors show that the form of the maximally entangled mixed states can vary with the combination of entanglement and mixedness measures chosen, and that for certain combinations, the forms can change discontinuously at a specific value of the entropy, along the way determining the states that, for a given value of entropy, achieve maximal violation of Bell's inequality.
Abstract: Maximally entangled mixed states are those states that, for a given mixedness, achieve the greatest possible entanglement. For two-qubit systems and for various combinations of entanglement and mixedness measures, the form of the corresponding maximally entangled mixed states is determined primarily analytically. As measures of entanglement, we consider entanglement of formation, relative entropy of entanglement, and negativity; as measures of mixedness, we consider linear and von Neumann entropies. We show that the forms of the maximally entangled mixed states can vary with the combination of (entanglement and mixedness) measures chosen. Moreover, for certain combinations, the forms of the maximally entangled mixed states can change discontinuously at a specific value of the entropy. Along the way, we determine the states that, for a given value of entropy, achieve maximal violation of Bell's inequality.
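
For reference, two of the measures compared in the paper have simple closed forms for a two-qubit state ρ; in standard notation (the paper should be consulted for the exact conventions and normalizations it adopts):

    S_{L}(\rho) = \tfrac{4}{3}\left(1 - \operatorname{Tr}\rho^{2}\right)
    \qquad\text{and}\qquad
    N(\rho) = 2\,\Big|\sum_{\lambda_i < 0} \lambda_i\Big| ,

where S_L is the linear entropy normalized to lie in [0, 1] and the λ_i are the eigenvalues of the partial transpose of ρ; for two qubits the negativity vanishes exactly on separable states.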

Proceedings ArticleDOI
15 Jul 2003
TL;DR: A number of issues related to identity-based authenticated key agreement protocols in the Diffie-Hellman family enabled by the Weil or Tate pairings are investigated, and formal proofs of security for these protocols are given.
Abstract: We investigate a number of issues related to identity-based authenticated key agreement protocols in the Diffie-Hellman family enabled by the Weil or Tate pairings. These issues include how to make protocols efficient, how to avoid key escrow by a Trust Authority (TA) that issues identity-based private keys for users, and how to allow users to use different TAs. We describe a few authenticated key agreement (AK) protocols and AK with key confirmation (AKC) protocols by modifying Smart's AK protocol (2002). We discuss the security of these protocols heuristically and give formal proofs of security for our AK and AKC protocols (using a security model based on the model defined in (Blake-Wilson et al., 1997)). We also prove that our AK protocol has the key compromise impersonation property. We also show that our second protocol has the TA forward secrecy property (which we define to mean that the compromise of the TA's private key will not compromise previously established session keys), and we note that this also implies that it has the perfect forward secrecy property.
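
All protocols in this family rest on the bilinearity of the Weil or Tate pairing e: G_1 × G_1 → G_2. Sketched in generic notation (this is the common shape of pairing-based key agreement, not the paper's specific protocol messages):

    e(aP,\, bQ) = e(P, Q)^{ab} \quad \text{for all } P, Q \in G_1,\ a, b \in \mathbb{Z}_q ,

so, for example, e(sQ_A, bP) = e(Q_A, P)^{sb} = e(bQ_A, sP): the holder of the TA-issued private key S_A = sQ_A and the holder of the ephemeral secret b can each compute the same group element without revealing their secrets, and elements of this kind are combined into the shared session key.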

Patent
03 Nov 2003
TL;DR: In this article, a backup apparatus and method suitable for protecting the data volume in a computer system function by acquiring a base state snapshot and a sequential series of data volume snapshots is presented.
Abstract: A backup apparatus and method suitable for protecting the data volume in a computer system function by acquiring a base state snapshot and a sequential series of data volume snapshots, the apparatus concurrently generating succedent and precedent lists of snapshot differences which are used to create succedent and precedent backups respectively. The data volume is restored by overwriting the base state data with data blocks identified in one or more succedent backups. File recovery is accomplished by overwriting data from a current snapshot with one or more precedent backups.