
Showing papers on "Server" published in 2004


ReportDOI
13 Aug 2004
TL;DR: This second-generation Onion Routing system addresses limitations in the original design by adding perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for location-hidden services via rendezvous points.
Abstract: We present Tor, a circuit-based low-latency anonymous communication service. This second-generation Onion Routing system addresses limitations in the original design by adding perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for location-hidden services via rendezvous points. Tor works on the real-world Internet, requires no special privileges or kernel modifications, requires little synchronization or coordination between nodes, and provides a reasonable tradeoff between anonymity, usability, and efficiency. We briefly describe our experiences with an international network of more than 30 nodes. We close with a list of open problems in anonymous communication.
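
Not Tor's actual protocol (which negotiates per-hop keys while building circuits and runs over TLS between relays), but a minimal sketch of the layered-encryption idea behind onion routing: the client wraps the payload once per relay, and each relay peels exactly one layer, so no single relay sees both the sender and the final destination. The relay names and the use of the Python cryptography package's Fernet are illustrative assumptions.

```python
# Toy illustration of onion-style layered encryption (not Tor's real protocol).
# Assumes the third-party 'cryptography' package; relay names are made up.
from cryptography.fernet import Fernet

relays = ["guard", "middle", "exit"]                      # circuit order, client -> exit
keys = {name: Fernet.generate_key() for name in relays}   # one symmetric key per hop

def build_onion(payload: bytes) -> bytes:
    """Wrap the payload in one encryption layer per relay, innermost layer for the exit."""
    onion = payload
    for name in reversed(relays):                          # exit layer first, guard layer last
        onion = Fernet(keys[name]).encrypt(onion)
    return onion

def relay_peel(name: str, onion: bytes) -> bytes:
    """Each relay removes exactly one layer and forwards the rest."""
    return Fernet(keys[name]).decrypt(onion)

onion = build_onion(b"GET / HTTP/1.0")
for name in relays:                                        # packet travels guard -> middle -> exit
    onion = relay_peel(name, onion)
assert onion == b"GET / HTTP/1.0"
```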

3,960 citations


Patent
03 Aug 2004
TL;DR: In this article, the authors present a method and system for providing an integration environment in which integration processes can be developed independent of integration servers and applications, and the integration environment provides an application service interface for each application that is independent of the integration servers.
Abstract: A method and system for providing an integration environment in which integration processes can be developed independent of integration servers and applications. The integration environment provides an application service interface for each application that is independent of the integration servers. An integration process that is developed to use the application service interface is compatible with any integration server that supports the applications that the integration process accesses. The integration environment provides a common service interface for each type of application. The common service interface is independent of the application that is providing the service and is also independent of the integration server. Thus, an integration process developed to use the common service interface is compatible with any application of the appropriate type and any integration server.

1,101 citations


Journal ArticleDOI
TL;DR: A modified version of the methods uses a coupling to give strong support to the design principle that it is better to have a few quick servers than many slow ones.
Abstract: In a system with one queue and several service stations, it is a natural principle to route a customer to the idle station with the distributionwise shortest service time. For the case with exponentially distributed service times, we use a coupling to give strong support to that principle. We also treat another topic: a modified version of our methods supports the design principle that it is better to have a few quick servers than many slow ones.
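
A quick numeric illustration of the "few but quick servers" principle (not the paper's coupling argument): compare the mean response time of a single M/M/1 server of rate c*mu with c parallel servers of rate mu (M/M/c) at the same total capacity, using the standard Erlang C formula. The arrival and service rates below are made-up values.

```python
# Mean response time: one fast M/M/1 server (rate c*mu) vs c slow M/M/c servers (rate mu each).
# Standard queueing formulas, not the coupling proof used in the paper.
from math import factorial

def mm1_response(lam: float, mu: float) -> float:
    """M/M/1 mean response time, requires lam < mu."""
    return 1.0 / (mu - lam)

def mmc_response(lam: float, mu: float, c: int) -> float:
    """M/M/c mean response time via the Erlang C formula, requires lam < c*mu."""
    a = lam / mu                          # offered load in Erlangs
    rho = a / c
    erlang_c = (a**c / (factorial(c) * (1 - rho))) / (
        sum(a**k / factorial(k) for k in range(c)) + a**c / (factorial(c) * (1 - rho))
    )
    return erlang_c / (c * mu - lam) + 1.0 / mu

lam, mu, c = 8.0, 1.0, 10                 # 8 jobs/s arriving; total capacity 10 jobs/s either way
print("one fast server :", mm1_response(lam, c * mu))   # ~0.50 s
print("ten slow servers:", mmc_response(lam, mu, c))    # ~1.20 s
```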

784 citations


Journal ArticleDOI
TL;DR: This work describes a hardware-based technique using Bloom filters, which can detect strings in streaming data without degrading network throughput and queries a database of strings to check for the membership of a particular string.
Abstract: There is a class of packet processing applications that inspect packets deeper than the protocol headers to analyze content. For instance, network security applications must drop packets containing certain malicious Internet worms or computer viruses carried in a packet payload. Content forwarding applications look at the hypertext transport protocol headers and distribute the requests among the servers for load balancing. Packet inspection applications, when deployed at router ports, must operate at wire speeds. With networking speeds doubling every year, it is becoming increasingly difficult for software-based packet monitors to keep up with the line rates. We describe a hardware-based technique using Bloom filters, which can detect strings in streaming data without degrading network throughput. A Bloom filter is a data structure that stores a set of signatures compactly by computing multiple hash functions on each member of the set. This technique queries a database of strings to check for the membership of a particular string. The answer to this query can be a false positive but never a false negative. An important property of this data structure is that the computation time involved in performing the query is independent of the number of strings in the database, provided the memory used by the data structure scales linearly with the number of strings stored in it. Furthermore, the amount of storage required by the Bloom filter for each string is independent of its length.
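
A software sketch of the Bloom filter described above, using k salted SHA-256 hashes over a plain bit array; the sizes and hash choice are illustrative and differ from the paper's hardware design.

```python
# Minimal Bloom filter sketch: membership queries may yield false positives, never false negatives.
# Hash choice (salted SHA-256) and sizes are illustrative, not the paper's hardware design.
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int = 1 << 16, k_hashes: int = 4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item: bytes) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: bytes) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

signatures = BloomFilter()
signatures.add(b"malicious-worm-signature")
assert signatures.might_contain(b"malicious-worm-signature")   # always true once added
print(signatures.might_contain(b"benign payload"))             # usually False; rarely a false positive
```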

707 citations


Journal Article
TL;DR: Speed up your database app with a simple, fast caching layer that uses your existing servers' spare memory.
Abstract: Speed up your database app with a simple, fast caching layer that uses your existing servers' spare memory.
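
The one-sentence abstract describes a look-aside cache; a hedged sketch of that pattern follows, with an in-process dict standing in for the networked cache and a hypothetical query_database() helper standing in for the real database call.

```python
# Cache-aside sketch: check the cache first, fall back to the database, then populate the cache.
# The in-process dict stands in for a networked cache; query_database() is a hypothetical helper.
import time

cache = {}            # sql -> (timestamp, result)
TTL_SECONDS = 60.0

def query_database(sql: str):
    """Placeholder for the real (slow) database round trip."""
    time.sleep(0.05)
    return {"rows": [], "sql": sql}

def cached_query(sql: str):
    now = time.time()
    hit = cache.get(sql)
    if hit is not None and now - hit[0] < TTL_SECONDS:
        return hit[1]                     # served from spare memory, no database work
    result = query_database(sql)          # cache miss: do the expensive query once
    cache[sql] = (now, result)
    return result

cached_query("SELECT * FROM users WHERE id = 42")   # slow: goes to the database
cached_query("SELECT * FROM users WHERE id = 42")   # fast: served from the cache
```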

680 citations


Journal ArticleDOI
TL;DR: An open-source system for analyzing, storing, and validating proteomics information derived from tandem mass spectrometry, based on a combination of data analysis servers, a user interface, and a relational database is described.
Abstract: This paper describes an open-source system for analyzing, storing, and validating proteomics information derived from tandem mass spectrometry. It is based on a combination of data analysis servers, a user interface, and a relational database. The database was designed to store the minimum amount of information necessary to search and retrieve data obtained from the publicly available data analysis servers. Collectively, this system is referred to as the Global Proteome Machine (GPM). The components of the system have been made available as open source development projects. A publicly available system has been established, comprising a group of data analysis servers and one main database server. Keywords: proteomics database • GPM • X Tandem • protein identification • XIAPE

678 citations


Proceedings ArticleDOI
07 Mar 2004
TL;DR: This work has designed scalable mechanisms to distribute the game state to the participating players and to maintain consistency in the face of node failures, and has implemented a simple game called SimMud, and experimented with up to 4000 players to demonstrate the applicability of this approach.
Abstract: We present an approach to support massively multiplayer games on peer-to-peer overlays. Our approach exploits the fact that players in MMGs display locality of interest, and therefore can form self-organizing groups based on their locations in the virtual world. To this end, we have designed scalable mechanisms to distribute the game state to the participating players and to maintain consistency in the face of node failures. The resulting system dynamically scales with the number of online players. It is more flexible and has a lower deployment cost than centralized game servers. We have implemented a simple game we call SimMud, and experimented with up to 4000 players to demonstrate the applicability of this approach.

578 citations


Book ChapterDOI
19 Apr 2004
TL;DR: This paper studies BitTorrent, a new and already very popular peer-to-peer application that allows distribution of very large contents to a large set of hosts and assesses the performance of the algorithms used in BitTorrent through several metrics.
Abstract: Popular content such as software updates is requested by a large number of users. Traditionally, to satisfy a large number of requests, large server farms or mirroring are used, both of which are expensive. An inexpensive alternative is peer-to-peer-based replication, where users who retrieve the file act simultaneously as clients and servers. In this paper, we study BitTorrent, a new and already very popular peer-to-peer application that allows distribution of very large contents to a large set of hosts. Our analysis of BitTorrent is based on measurements collected over a five-month period that involved thousands of peers. We assess the performance of the algorithms used in BitTorrent through several metrics. Our conclusions indicate that BitTorrent is a realistic and inexpensive alternative to classical server-based content distribution.

553 citations


Book Chapter
01 Sep 2004
TL;DR: CmapTools is a software environment developed at the Institute for Human and Machine Cognition that empowers users, individually or collaboratively, to represent their knowledge using concept maps, to share them with peers and colleagues, and to publish them.
Abstract: Concept maps are an effective way of representing a person’s understanding of a domain of knowledge. Technology can further help by making it easy to construct and modify that representation, to manage large representations for complex domains, and to allow groups of people to share in the construction of the concept maps. CmapTools is a software environment developed at the Institute for Human and Machine Cognition (IHMC) that empowers users, individually or collaboratively, to represent their knowledge using concept maps, to share them with peers and colleagues, and to publish them. It is available for free for educational and not-for-profit organizations, and public servers have been established to promote the sharing of knowledge. The client-server architecture of CmapTools allows easy publishing of the knowledge models in concept map servers (CmapServers), and enables concept maps to be linked to related concept maps and to other types of media (e.g., images, videos, web pages, etc.) in other servers. The collaboration features enable remote users to asynchronously and/or synchronously collaborate in the construction of concept maps, and promote comments, criticism, and peer review. Public CmapServers have resulted in a large collection of knowledge models publicly available, constructed by users of all ages in a variety of domains of knowledge and from a large number of countries.

500 citations


ReportDOI
06 Dec 2004
TL;DR: SUNDR's protocol achieves a property called fork consistency, which guarantees that clients can detect any integrity or consistency failures as long as they see each other's file modifications.
Abstract: SUNDR is a network file system designed to store data securely on untrusted servers. SUNDR lets clients detect any attempts at unauthorized file modification by malicious server operators or users. SUNDR's protocol achieves a property called fork consistency, which guarantees that clients can detect any integrity or consistency failures as long as they see each other's file modifications. An implementation is described that performs comparably with NFS (sometimes better and sometimes worse), while offering significantly stronger security.

489 citations


Journal ArticleDOI
TL;DR: This survey shows that heterogeneous server clusters can be made more efficient by conserving power and energy while exploiting information from the service level, such as request priorities established by service-level agreements.
Abstract: This survey shows that heterogeneous server clusters can be made more efficient by conserving power and energy while exploiting information from the service level, such as request priorities established by service-level agreements.

Journal ArticleDOI
TL;DR: This article introduces network tomography, a new field which it is believed will benefit greatly from the wealth of statistical methods and algorithms including the application of pseudo-likelihood methods and tree estimation formulations.
Abstract: Today's Internet is a massive, distributed network which continues to explode in size as e-commerce and related activities grow. The heterogeneous and largely unregulated structure of the Internet renders tasks such as dynamic routing, optimized service provision, service level verification and detection of anomalous/malicious behavior extremely challenging. The problem is compounded by the fact that one cannot rely on the cooperation of individual servers and routers to aid in the collection of network traffic measurements vital for these tasks. In many ways, network monitoring and inference problems bear a strong resemblance to other "inverse problems" in which key aspects of a system are not directly observable. Familiar signal processing or statistical problems such as tomographic image reconstruction and phylogenetic tree identification have interesting connections to those arising in networking. This article introduces network tomography, a new field which we believe will benefit greatly from the wealth of statistical theory and algorithms. It focuses especially on recent developments in the field, including the application of pseudo-likelihood methods and tree estimation formulations.

Proceedings Article
06 Dec 2004
TL;DR: Besides outlining the chain replication protocols themselves, the paper reports simulation experiments that explore the performance characteristics of a prototype implementation and discusses several object-placement strategies (including schemes based on distributed hash table routing).
Abstract: Chain replication is a new approach to coordinating clusters of fail-stop storage servers. The approach is intended for supporting large-scale storage services that exhibit high throughput and availability without sacrificing strong consistency guarantees. Besides outlining the chain replication protocols themselves, simulation experiments explore the performance characteristics of a prototype implementation. Throughput, availability, and several object-placement strategies (including schemes based on distributed hash table routing) are discussed.
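
A toy, in-memory rendering of the protocol described above: updates enter at the head and are forwarded replica by replica to the tail, while queries are served only by the tail, so a value becomes readable only after every replica has stored it. Failure handling (splicing a crashed replica out of the chain) and the real RPC layer are omitted; the names are invented.

```python
# Toy chain replication: writes propagate head -> ... -> tail; reads are served by the tail.
from typing import Optional

class Replica:
    def __init__(self, name: str):
        self.name = name
        self.store = {}                               # key -> value
        self.successor: Optional["Replica"] = None

    def handle_update(self, key: str, value: str) -> None:
        self.store[key] = value                       # apply locally, then forward down the chain
        if self.successor is not None:
            self.successor.handle_update(key, value)

    def handle_query(self, key: str) -> Optional[str]:
        return self.store.get(key)                    # only the tail should serve queries

head, middle, tail = Replica("r1"), Replica("r2"), Replica("r3")
head.successor, middle.successor = middle, tail

head.handle_update("object-7", "v1")                  # clients direct updates to the head
print(tail.handle_query("object-7"))                  # and queries to the tail -> prints "v1"
```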

Patent
Murali R. Krishnan1
07 Dec 2004
TL;DR: In this paper, the adaptive bandwidth throttling system implements a graceful diminution of services to the clients by delaying a first class of services provided by a network server in response to the effective bandwidth utilized by this network server exceeding a first threshold.
Abstract: The adaptive bandwidth throttling system implements a graceful diminution of services to the clients by delaying a first class of services provided by a network server in response to the effective bandwidth utilized by this network server exceeding a first threshold. If the demand for the bandwidth by this network server exceeds a second threshold, the bandwidth throttling system escalates the throttling response and blocks the first class of services from execution and can also concurrently delay execution of a second class of services. The implementation of the throttling process can be varied, to include additional levels of response or finer gradations of the response, to include subsets of a class of services. In addition, the threshold levels of bandwidth used to trigger the throttling response can be selected as desired by the system administrator.
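
The two-threshold escalation described in the abstract can be written as a tiny policy function; the class numbering and threshold values below are placeholders, not the patent's actual parameters.

```python
# Two-threshold throttling sketch: delay low-priority work past the first threshold,
# block it (and start delaying the next class) past the second. Thresholds are made up.
DELAY_THRESHOLD = 0.70    # fraction of the server's allotted bandwidth
BLOCK_THRESHOLD = 0.90

def throttle_decision(utilization: float, service_class: int) -> str:
    """service_class 1 = lowest priority (throttled first), 2 = the next class up."""
    if utilization >= BLOCK_THRESHOLD:
        return "block" if service_class == 1 else "delay"
    if utilization >= DELAY_THRESHOLD:
        return "delay" if service_class == 1 else "accept"
    return "accept"

for u in (0.5, 0.75, 0.95):
    print(u, throttle_decision(u, service_class=1), throttle_decision(u, service_class=2))
```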

Proceedings Article
29 Mar 2004
TL;DR: New techniques that resulted from this exploration include use of latency predictions based on synthetic co-ordinates, efficient integration of lookup routing and data fetching, and a congestion control mechanism suitable for fetching data striped over large numbers of servers.
Abstract: Designing a wide-area distributed hash table (DHT) that provides high-throughput and low-latency network storage is a challenge. Existing systems have explored a range of solutions, including iterative routing, recursive routing, proximity routing and neighbor selection, erasure coding, replication, and server selection. This paper explores the design of these techniques and their interaction in a complete system, drawing on the measured performance of a new DHT implementation and results from a simulator with an accurate Internet latency model. New techniques that resulted from this exploration include use of latency predictions based on synthetic co-ordinates, efficient integration of lookup routing and data fetching, and a congestion control mechanism suitable for fetching data striped over large numbers of servers. Measurements with 425 server instances running on 150 PlanetLab and RON hosts show that the latency optimizations reduce the time required to locate and fetch data by a factor of two. The throughput optimizations result in a sustainable bulk read throughput related to the number of DHT hosts times the capacity of the slowest access link; with 150 selected PlanetLab hosts, the peak aggregate throughput over multiple clients is 12.8 megabytes per second.

Book
05 Jan 2004
TL;DR: Practical systems modeling: learning exactly how to map real-life systems to accurate performance models, and use those models to make better decisions--both up front and throughout the entire system lifecycle.
Abstract: Practical systems modeling: planning performance, availability, security, and more. Computing systems must meet increasingly strict Quality of Service (QoS) requirements for performance, availability, security, and maintainability. To achieve these goals, designers, analysts, and capacity planners need a far more thorough understanding of QoS issues and the implications of their decisions. Now, three leading experts present a complete, application-driven framework for understanding and estimating performance. You'll learn exactly how to map real-life systems to accurate performance models, and use those models to make better decisions, both up front and throughout the entire system lifecycle. Coverage includes: state-of-the-art quantitative analysis techniques, supported by extensive numerical examples and exercises; QoS issues in requirements analysis, specification, design, development, testing, deployment, operation, and system evolution; specific scenarios, including e-Business and database services, servers, clusters, and data centers; techniques for identifying potential congestion at both software and hardware levels; performance engineering concepts and tools; detailed solution techniques, including exact and approximate MVA and Markov chains; and modeling of software contention, fork-and-join, service rate variability, and priority. About the Web site: the accompanying Web site provides companion Excel workbooks that implement many of the book's algorithms and numerical examples.
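
Since the coverage list mentions exact MVA, here is a hedged sketch of single-class exact Mean Value Analysis for a closed queueing network; the service demands and think time in the example are invented.

```python
# Exact single-class Mean Value Analysis (MVA) for a closed queueing network.
# demands[k] = total service demand at queueing center k (seconds); think_time = Z.
def exact_mva(demands, n_customers, think_time=0.0):
    queue_len = [0.0] * len(demands)                  # Q_k(0) = 0
    throughput, residence = 0.0, list(demands)
    for n in range(1, n_customers + 1):
        residence = [d * (1.0 + q) for d, q in zip(demands, queue_len)]  # R_k(n) = D_k (1 + Q_k(n-1))
        throughput = n / (think_time + sum(residence))                   # X(n)
        queue_len = [throughput * r for r in residence]                  # Q_k(n) = X(n) R_k(n)
    return throughput, sum(residence)                 # system throughput and response time

# Hypothetical system: CPU demand 0.04 s and disk demand 0.03 s per request, 1 s think time.
x, r = exact_mva([0.04, 0.03], n_customers=50, think_time=1.0)
print(f"throughput ~ {x:.1f} req/s, response time ~ {r:.2f} s")
```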

Patent
04 Oct 2004
TL;DR: In this article, a backplane architecture with no active components and with separate power supply lines and protection is presented to provide high reliability, power management, and workload management in multi-server environments.
Abstract: Network architecture, computer system and/or server, circuit, device, apparatus, method, and computer program and control mechanism for managing power consumption and workload in computer system and data and information servers. Further provides power and energy consumption and workload management and control systems and architectures for high-density and modular multi-server computer systems that maintain performance while conserving energy and method for power management and workload management. Dynamic server power management and optional dynamic workload management for multi-server environments is provided by aspects of the invention. Modular network devices and integrated server system, including modular servers, management units, switches and switching fabrics, modular power supplies and modular fans and a special backplane architecture are provided as well as dynamically reconfigurable multi-purpose modules and servers. Backplane architecture, structure, and method that has no active components and separate power supply lines and protection to provide high reliability in server environment.

Proceedings ArticleDOI
09 May 2004
TL;DR: This paper presents SIFF, a Stateless Internet Flow Filter, which allows an end-host to selectively stop individual flows from reaching its network, without any of the common assumptions listed above.
Abstract: One of the fundamental limitations of the Internet is the inability of a packet flow recipient to halt disruptive flows before they consume the recipient's network link resources. Critical infrastructures and businesses alike are vulnerable to DoS attacks or flash-crowds that can incapacitate their networks with traffic floods. Unfortunately, current mechanisms require per-flow state at routers, ISP collaboration, or the deployment of an overlay infrastructure to defend against these events. In this paper, we present SIFF, a Stateless Internet Flow Filter, which allows an end-host to selectively stop individual flows from reaching its network, without any of the common assumptions listed above. We divide all network traffic into two classes, privileged (prioritized packets subject to recipient control) and unprivileged (legacy traffic). Privileged channels are established through a capability exchange handshake. Capabilities are dynamic and verified statelessly by the routers in the network, and can be revoked by quenching update messages to an offending host. SIFF is transparent to legacy clients and servers, but only updated hosts will enjoy the benefits of it.

Patent
01 Oct 2004
TL;DR: In this article, a mobile handset monitors the user's progress along the route by monitoring the location and movement of the handset and, optionally, sensor data, and by comparing this information to rules that define permitted or prohibited locations or movements or threshold sensor values.
Abstract: A user selects a route between a starting point and a destination. A mobile handset monitors the user's progress along the route by monitoring the location and movement of the handset and, optionally, sensor data, and by comparing this information to rules that define permitted or prohibited locations or movements or threshold sensor values. The handset uses one or more positioning systems, such as GPS, to ascertain its location. A server provides the handset with information to correct errors in the location information. If a rule fires, possibly indicating that the user is in danger, the handset attempts to ascertain the user's wellbeing, warns the user to return to the prescribed route and begins sending the handset's location to a server, which displays the information to a dispatcher who dispatches safety or security personnel to the user's location. The handset and servers communicate via any available wireless channel(s).

Proceedings Article
06 Dec 2004
TL;DR: Failure-oblivious computing is presented, a new technique that enables servers to execute through memory errors without memory corruption and enables the servers to continue to operate successfully to service legitimate requests and satisfy the needs of their users even after attacks trigger their memory errors.
Abstract: We present a new technique, failure-oblivious computing, that enables servers to execute through memory errors without memory corruption. Our safe compiler for C inserts checks that dynamically detect invalid memory accesses. Instead of terminating or throwing an exception, the generated code simply discards invalid writes and manufactures values to return for invalid reads, enabling the server to continue its normal execution path. We have applied failure-oblivious computing to a set of widely-used servers from the Linux-based open-source computing environment. Our results show that our techniques 1) make these servers invulnerable to known security attacks that exploit memory errors, and 2) enable the servers to continue to operate successfully to service legitimate requests and satisfy the needs of their users even after attacks trigger their memory errors. We observed several reasons for this successful continued execution. When the memory errors occur in irrelevant computations, failure-oblivious computing enables the server to execute through the memory errors to continue on to execute the relevant computation. Even when the memory errors occur in relevant computations, failure-oblivious computing converts requests that trigger unanticipated and dangerous execution paths into anticipated invalid inputs, which the error-handling logic in the server rejects. Because servers tend to have small error propagation distances (localized errors in the computation for one request tend to have little or no effect on the computations for subsequent requests), redirecting reads that would otherwise cause addressing errors and discarding writes that would otherwise corrupt critical data structures (such as the call stack) localizes the effect of the memory errors, prevents addressing exceptions from terminating the computation, and enables the server to continue on to successfully process subsequent requests. The overall result is a substantial extension of the range of requests that the server can successfully process.
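
A conceptual stand-in for the checks the paper's compiler inserts: every access is bounds-checked, invalid writes are silently discarded, and invalid reads return a manufactured value so execution continues. The real system instruments C programs; this Python class is only an analogy.

```python
# Conceptual stand-in for failure-oblivious memory accesses: invalid writes are discarded,
# invalid reads return a manufactured value instead of crashing. (The real system instruments C.)
class FailureObliviousBuffer:
    def __init__(self, size: int):
        self.data = bytearray(size)

    def store(self, index: int, value: int) -> None:
        if 0 <= index < len(self.data):
            self.data[index] = value & 0xFF
        # else: silently discard the invalid write instead of corrupting memory

    def load(self, index: int) -> int:
        if 0 <= index < len(self.data):
            return self.data[index]
        return 0          # manufacture a value for the invalid read; execution continues

buf = FailureObliviousBuffer(16)
buf.store(1000, 0x41)     # out-of-bounds write: ignored, no corruption, no crash
print(buf.load(1000))     # out-of-bounds read: returns a manufactured 0
```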

Journal ArticleDOI
TL;DR: The new PocketLens collaborative filtering algorithm, along with five peer-to-peer architectures for finding neighbors, is presented and evaluated in a series of offline experiments, showing that PocketLens can run on connected servers, on usually connected workstations, or on occasionally connected portable devices, and produce recommendations that are as good as the best published algorithms to date.
Abstract: Recommender systems using collaborative filtering are a popular technique for reducing information overload and finding products to purchase. One limitation of current recommenders is that they are not portable. They can only run on large computers connected to the Internet. A second limitation is that they require the user to trust the owner of the recommender with personal preference data. Personal recommenders hold the promise of delivering high quality recommendations on palmtop computers, even when disconnected from the Internet. Further, they can protect the user's privacy by storing personal information locally, or by sharing it in encrypted form. In this article we present the new PocketLens collaborative filtering algorithm along with five peer-to-peer architectures for finding neighbors. We evaluate the architectures and algorithms in a series of offline experiments. These experiments show that PocketLens can run on connected servers, on usually connected workstations, or on occasionally connected portable devices, and produce recommendations that are as good as the best published algorithms to date.

Proceedings ArticleDOI
30 Aug 2004
TL;DR: This paper deduced typical real world values of packet loss and latency experienced on the Internet by monitoring numerous operational UT2003 game servers and designed maps that isolated the fundamental first person shooter interaction components of movement and shooting, and conducted numerous user studies under controlled network conditions.
Abstract: The growth in the popularity of interactive network games has increased the importance of a better understanding of the effects of packet loss and latency on user performance. While previous work on network games has studied user tolerance for high latencies and has studied the effects of latency on user performance in real-time strategy games, to the best of our knowledge, there has been no systematic study of the effects of loss and latency on user performance. In this paper we study user performance for Unreal Tournament 2003 (UT2003), a popular first person shooter game, under varying amounts of packet loss and latency. First, we deduced typical real world values of packet loss and latency experienced on the Internet by monitoring numerous operational UT2003 game servers. We then used these deduced values of loss and latency in a controlled networked environment that emulated various conditions of loss and latency, allowing us to monitor UT2003 at the network, application and user levels. We designed maps that isolated the fundamental first person shooter interaction components of movement and shooting, and conducted numerous user studies under controlled network conditions. We find that typical ranges of packet loss have no impact on user performance or on the quality of game play. The levels of latency typical for most UT2003 Internet servers, while sometimes unpleasant, do not significantly affect the outcome of the game. Since most first person shooter games typically consist of generic player actions similar to those that we tested, we believe that these results have broader implications.

Patent
29 Dec 2004
TL;DR: In this article, a system for executing applications designed to run on a single SMP computer on an easily scalable network of computers, while providing each application with computing resources, including processing power, memory and others that exceed the resources available on any single computer.
Abstract: A system for executing applications designed to run on a single SMP computer on an easily scalable network of computers, while providing each application with computing resources, including processing power, memory and others that exceed the resources available on any single computer. A server agent program, a grid switch apparatus and a grid controller apparatus are included. Methods for creating processes and resources, and for accessing resources transparently across multiple servers are also provided.

Proceedings ArticleDOI
26 Jun 2004
TL;DR: This paper introduces a new conservation technique, called Popular Data Concentration (PDC), that migrates frequently accessed data to a subset of the disks that achieves more consistent and robust energy savings than MAID.
Abstract: In this paper, we study energy conservation techniques for disk array-based network servers. First, we introduce a new conservation technique, called Popular Data Concentration (PDC), that migrates frequently accessed data to a subset of the disks. The goal is to skew the load towards a few of the disks, so that others can be transitioned to low-power modes. Next, we introduce a user-level file server that takes advantage of PDC. In the context of this server, we compare PDC to the Massive Array of Idle Disks (MAID). Using a validated simulator, we evaluate these techniques for conventional and two-speed disks and a wide range of parameters. Our results for conventional disks show that PDC and MAID can only conserve energy when the load on the server is extremely low. When two-speed disks are used, both PDC and MAID can conserve significant energy with only a small fraction of delayed requests. Overall, we find that PDC achieves more consistent and robust energy savings than MAID.
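
Popular Data Concentration boils down to packing the most frequently accessed data onto as few disks as possible so the remaining disks can sit in low-power modes; a simplified greedy placement sketch follows. The file list, sizes, and capacities are invented, and the paper's migration and two-speed-disk details are omitted.

```python
# Popular Data Concentration, simplified: pack the most-accessed files onto the fewest disks
# so lightly used disks can be spun down. Disk capacity and the file list are made up.
def concentrate(files, disk_capacity_gb, n_disks):
    """files: list of (name, size_gb, accesses_per_hour). Returns disk index -> list of file names."""
    placement = {d: [] for d in range(n_disks)}
    free = [disk_capacity_gb] * n_disks
    for name, size, _ in sorted(files, key=lambda f: f[2], reverse=True):  # hottest first
        for d in range(n_disks):                      # first disk with room, i.e. skew the load
            if free[d] >= size:
                placement[d].append(name)
                free[d] -= size
                break
    return placement

files = [("logs", 40, 5), ("index", 10, 900), ("videos", 120, 2), ("db", 30, 700)]
layout = concentrate(files, disk_capacity_gb=200, n_disks=3)
print(layout)   # hot files land on disk 0; disks holding only cold data can enter low-power modes
```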

Journal ArticleDOI
TL;DR: JWS Online is a repository of kinetic models, describing biological systems, which can be interactively run and interrogated over the Internet, using a client-server strategy.
Abstract: Summary: JWS Online is a repository of kinetic models, describing biological systems, which can be interactively run and interrogated over the Internet. It is implemented using a client-server strategy where the clients, in the form of web browser based Java applets, act as a graphical interface to the model servers, which perform the required numerical computations. Availability: The JWS Online website is publicly accessible at http://jjj.biochem.sun.ac.za/ with mirrors at http://www.jjj.bio.vu.nl/ and http://jjj.vbi.vt.edu/

Patent
30 Jan 2004
TL;DR: In this article, a method and system for medical device authentication is disclosed, which includes a plurality of digital assistants (118) communicating over a wired or wireless network (102) and a pluralityof medical devices (120) (e.g., infusion pumps) (i.e., the data being transmitted is confidential medial data).
Abstract: A method and system for medical device authentication is disclosed. The system may include a plurality of digital assistants (118) and a plurality of medical devices (120) (e.g., infusion pumps) communicating over a wired or wireless network (102). Because some of the data being transmitted is confidential medial data, the data is preferably encrypted and only communicated in the clear to authorized users and devices. In order to setup a new digital assistant (118) or medical device (120), a commissioning phase of the authentication process may be performed. Each time a commissioned device is powered up, an authentication process is preferably performed in order to verify communication is occurring with an authorized device and/or user. Once a device and/or user is authenticated, secure one-way and/or two-way communication may occur in order to pass parameters, instructions, data, alarms, status information, and any other type of information between digital assistants (118), medical devices (120), and/or servers (108a, 109).

Journal ArticleDOI
TL;DR: A new decentralized honey bee algorithm which dynamically allocates servers to satisfy request loads, and is compared against an omniscient optimality algorithm, a conventional greedy algorithm, and an algorithm that computes omnisciently the optimal static allocation.
Abstract: Internet centers host services for e-banks, e-auctions and other clients. Hosting centers then must allocate servers among clients to maximize revenue. The limited number of servers, costs of reallocating servers, and unpredictability of requests make server allocation optimization difficult. Based on the many similarities between server and honey bee colony forager allocation, we propose a new decentralized honey bee algorithm which dynamically allocates servers to satisfy request loads. We compare it against an omniscient optimality algorithm, a conventional greedy algorithm, and an algorithm that computes omnisciently the optimal static allocation. We evaluate performance on simulated request streams and commercial trace data. Our algorithm performs better than static or greedy for highly variable request loads, but greedy can outperform it under low variability. Honey bee forager allocation, though suboptimal for static food sources, may possess a counterbalancing responsiveness to food source variability.
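
A deliberately crude caricature of the forager analogy: at each step a small fraction of servers reconsider their assignment and drift toward the hosted service advertising the most profitable pending work. The advert and switching rules below are invented for illustration and are not the paper's actual algorithm or parameters.

```python
# Caricature of honey-bee-style server allocation: servers probabilistically reassign
# themselves toward services with higher (profit per request * pending demand).
import random

services = {"e-bank": {"profit": 1.0, "queue": 120},
            "e-auction": {"profit": 0.4, "queue": 30}}
allocation = {"e-bank": 5, "e-auction": 5}            # 10 servers, initially split evenly

def reallocate_step():
    # Each service's "dance floor" advert strength ~ profit of its pending work.
    advert = {s: v["profit"] * v["queue"] for s, v in services.items()}
    for _ in range(sum(allocation.values())):
        if random.random() < 0.1:                     # only a fraction of servers reconsider per step
            current = random.choices(list(allocation), weights=list(allocation.values()))[0]
            target = random.choices(list(advert), weights=list(advert.values()))[0]
            if target != current and allocation[current] > 0:
                allocation[current] -= 1
                allocation[target] += 1

for _ in range(50):
    reallocate_step()
print(allocation)    # most servers drift toward the service with more profitable demand
```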

Proceedings ArticleDOI
29 Oct 2004
TL;DR: This work presents a framework that models this aspect of access control using logic programming with set constraints of a computable set theory [DPPR00] and specifies policies as stratified constraint flounder-free logic programs that admit primitive recursion.
Abstract: Attribute based access control (ABAC) grants accesses to services based on the attributes possessed by the requester. Thus, ABAC differs from the traditional discretionary access control model by replacing the subject by a set of attributes and the object by a set of services in the access control matrix. The former is appropriate in an identity-less system like the Internet where subjects are identified by their characteristics, such as those substantiated by certificates. These can be modeled as attribute sets. The latter is appropriate because most Internet users are not privy to method names residing on remote servers. These can be modeled as sets of service options. We present a framework that models this aspect of access control using logic programming with set constraints of a computable set theory [DPPR00]. Our framework specifies policies as stratified constraint flounder-free logic programs that admit primitive recursion. The design of the policy specification framework ensures that they are consistent and complete. Our ABAC policies can be transformed to ensure faster runtimes.
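
The reshaped access-control matrix described above (attribute sets in place of subjects, service sets in place of objects) can be illustrated with a plain subset test, though the paper's actual formalism is a stratified constraint logic program rather than this toy check; the attribute and service names are invented.

```python
# ABAC sketch: a policy grants a set of service options to any requester whose certified
# attributes include the required set. Only illustrates the attributes-for-subjects idea.
policies = [
    {"required_attributes": {"role:physician", "org:hospital-a"},
     "services": {"records:read", "records:annotate"}},
    {"required_attributes": {"role:auditor"},
     "services": {"records:read"}},
]

def permitted_services(requester_attributes: set) -> set:
    granted = set()
    for policy in policies:
        if policy["required_attributes"] <= requester_attributes:   # attribute subset test
            granted |= policy["services"]
    return granted

print(permitted_services({"role:physician", "org:hospital-a", "cert:board"}))
print(permitted_services({"role:auditor"}))
```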

Patent
19 Aug 2004
TL;DR: In this article, a level of abstraction is created between a set of physical processors and virtual multiprocessors to form a virtualized data center, which consists of virtual isolated systems separated by a boundary referred to as a partition.
Abstract: A level of abstraction is created between a set of physical processors and a set of virtual multiprocessors to form a virtualized data center. This virtualized data center comprises a set of virtual, isolated systems separated by a boundary referred to as a partition. Each of these systems appears as a unique, independent virtual multiprocessor computer capable of running a traditional operating system and its applications. In one embodiment, the system implements this multi-layered abstraction via a group of microkernels, each of which communicates with one or more peer microkernels over a high-speed, low-latency interconnect and forms a distributed virtual machine monitor. Functionally, a virtual data center is provided, including the ability to take a collection of servers and execute a collection of business applications over a compute fabric comprising commodity processors coupled by an interconnect. Processor, memory and I/O are virtualized across this fabric, providing a single system, scalability and manageability. According to one embodiment, this virtualization is transparent to the application, and therefore, applications may be scaled to increasing resource demands without modifying the application.

Patent
30 Nov 2004
TL;DR: In this article, a power management architecture for an electrical power distribution system, or portion thereof, is disclosed, which includes multiple IEDs distributed throughout the power distribution systems to manage the flow and consumption of power from the system.
Abstract: A power management architecture for an electrical power distribution system, or portion thereof, is disclosed. The architecture includes multiple intelligent electronic devices (“IED's”) distributed throughout the power distribution system to manage the flow and consumption of power from the system. The IED's are linked via a network to back-end servers. Security mechanisms are further provided which protect and otherwise ensure the authenticity of communications transmitted via the network in furtherance of the management of the distribution and consumption of electrical power by the architecture. In particular, public key cryptography is employed to identify components of the architecture and provide for secure communication of power management data among those components. Further, certificates and certificate authorities are utilized to further ensure integrity of the security mechanism.