
Showing papers on "Server" published in 2006


Proceedings ArticleDOI
01 Sep 2006
TL;DR: This paper presents Casper, a new framework in which mobile and stationary users can entertain location-based services without revealing their location information; it consists of two main components, the location anonymizer and the privacy-aware query processor.
Abstract: This paper tackles a major privacy concern in current location-based services where users have to continuously report their locations to the database server in order to obtain the service. For example, a user asking about the nearest gas station has to report her exact location. With untrusted servers, reporting the location information may lead to several privacy threats. In this paper, we present Casper, a new framework in which mobile and stationary users can entertain location-based services without revealing their location information. Casper consists of two main components, the location anonymizer and the privacy-aware query processor. The location anonymizer blurs the users' exact location information into cloaked spatial regions based on user-specified privacy requirements. The privacy-aware query processor is embedded inside the location-based database server in order to deal with the cloaked spatial areas rather than the exact location information. Experimental results show that Casper achieves high quality location-based services while providing anonymity for both data and queries.

1,239 citations
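The cloaking idea above lends itself to a short sketch. This is a hypothetical illustration of grid-based region expansion (the function name and the region-doubling strategy are assumptions, not taken from the paper): the anonymizer grows the region around the user until it covers at least k users.

```python
def cloak(user_xy, all_xy, k, cell=1.0):
    """Return a cloaked rectangle (x0, y0, x1, y1) containing >= k users.

    Illustrative k-anonymity cloaking: start from the user's grid cell
    and double the region until enough users fall inside it.
    """
    x, y = user_xy
    half = cell / 2.0
    while True:
        x0, y0, x1, y1 = x - half, y - half, x + half, y + half
        inside = sum(1 for (ux, uy) in all_xy
                     if x0 <= ux <= x1 and y0 <= uy <= y1)
        if inside >= k:
            return (x0, y0, x1, y1)
        half *= 2.0  # not enough users: double the region and retry
```

The query processor would then answer, say, a nearest-neighbor query against the returned rectangle instead of the exact point.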


Patent
07 Feb 2006
TL;DR: This patent provides power management and workload management for multi-server environments, including a backplane architecture, structure, and method that has no active components and separate power supply lines and protection to provide high reliability in a server environment.
Abstract: Network architecture, computer system and/or server, circuit, device, apparatus, method, and computer program and control mechanism for managing power consumption and workload in computer system and data and information servers. Further provides power and energy consumption and workload management and control systems and architectures for high-density and modular multi-server computer systems that maintain performance while conserving energy and method for power management and workload management. Dynamic server power management and optional dynamic workload management for multi-server environments is provided by aspects of the invention. Modular network devices and integrated server system, including modular servers, management units, switches and switching fabrics, modular power supplies and modular fans and a special backplane architecture are provided as well as dynamically reconfigurable multi-purpose modules and servers. Backplane architecture, structure, and method that has no active components and separate power supply lines and protection to provide high reliability in server environment.

693 citations


Patent
13 Nov 2006
TL;DR: In this article, a log manager collects log data from a variety of network platforms using various protocols (e.g., Syslog, SNMP, SMTP) to determine events and transfer the events to an event manager.
Abstract: The present invention generally relates to log message processing such that events can be detected and alarms can be generated. For example, log messages are generated by a variety of network platforms (e.g., Windows servers, Linux servers, UNIX servers, databases, workstations, etc.). Often, relatively large numbers of logs are generated from these platforms in different formats. A log manager described herein collects such log data using various protocols (e.g., Syslog, SNMP, SMTP, etc.) to determine events. That is, the log manager may communicate with the network platforms using appropriate protocols to collect log messages therefrom. The log manager may then determine events (e.g., unauthorized access, logins, etc.) from the log data and transfer the events to an event manager. The event manager may analyze the events and determine whether alarms should be generated therefrom.

559 citations
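The collect-then-correlate pipeline described in the abstract can be sketched roughly as follows; the patterns, event names, and failure threshold are illustrative assumptions, not drawn from the patent.

```python
import re
from collections import Counter

# Toy event signatures; a real log manager supports many formats.
EVENT_PATTERNS = [
    ("failed_login", re.compile(r"Failed password for (\w+)")),
    ("login", re.compile(r"Accepted password for (\w+)")),
]

def to_events(log_lines):
    """Map raw log messages to (event_type, user) tuples."""
    events = []
    for line in log_lines:
        for name, pat in EVENT_PATTERNS:
            m = pat.search(line)
            if m:
                events.append((name, m.group(1)))
    return events

def alarms(events, max_failures=3):
    """Raise an alarm for any user with too many failed logins."""
    fails = Counter(user for kind, user in events if kind == "failed_login")
    return [user for user, n in fails.items() if n >= max_failures]
```

The event manager's role corresponds to the `alarms` step: deciding which event streams warrant an alert.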


Patent
25 Oct 2006
TL;DR: In this paper, the authors propose a method for providing access to a computing environment that includes the step of receiving a request from a client system for an enumeration of available computing environments.
Abstract: A method for providing access to a computing environment includes the step of receiving a request from a client system for an enumeration of available computing environments. Collected data regarding available computing environments are accessed. Accessed data are transmitted to a client system, the accessed data indicating to the client system each computing environment available to a user of the client system. A request is received from the client system to access one of the computing environments. A connection is established between the client system and a virtual machine hosting the requested computing environment via a terminal services session, the virtual machine executed by a hypervisor executing in the terminal services session provided by an operating system executing on one of a plurality of execution machines.

499 citations


Proceedings ArticleDOI
27 Jun 2006
TL;DR: This work defines a variety of essential and practical cost metrics associated with ODB systems and examines solutions that can handle dynamic scenarios, where owners periodically update the data residing at the servers, in addition to static ones.
Abstract: In outsourced database (ODB) systems the database owner publishes its data through a number of remote servers, with the goal of enabling clients at the edge of the network to access and query the data more efficiently. As servers might be untrusted or can be compromised, query authentication becomes an essential component of ODB systems. Existing solutions for this problem concentrate mostly on static scenarios and are based on idealistic properties for certain cryptographic primitives. In this work, first we define a variety of essential and practical cost metrics associated with ODB systems. Then, we analytically evaluate a number of different approaches, in search for a solution that best leverages all metrics. Most importantly, we look at solutions that can handle dynamic scenarios, where owners periodically update the data residing at the servers. Finally, we discuss query freshness, a new dimension in data authentication that has not been explored before. A comprehensive experimental evaluation of the proposed and existing approaches is used to validate the analytical models and verify our claims. Our findings show that the proposed solutions improve performance substantially over existing approaches, both for static and dynamic environments.

434 citations
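A common building block for query authentication of the kind discussed above is a Merkle hash tree: the owner signs the root, and the server returns sibling hashes so a client can verify each returned record. A minimal sketch, assuming SHA-256 and a duplicated-last-node convention for odd levels (not the paper's specific construction):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the Merkle root over a list of record byte strings."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify(leaf, path, root):
    """Check a record against the signed root.

    path: list of (sibling_hash, sibling_is_left) pairs from leaf to root,
    as would be returned by the server alongside the query answer.
    """
    node = h(leaf)
    for sib, is_left in path:
        node = h(sib + node) if is_left else h(node + sib)
    return node == root
```

An update by the owner changes only the hashes on one root-to-leaf path, which is why dynamic scenarios remain tractable.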


Journal ArticleDOI
01 May 2006
TL;DR: This paper proposes power efficiencies at a larger scale by leveraging statistical properties of concurrent resource usage across a collection of systems ("ensemble"), and discusses an implementation of this approach at the blade enclosure level to monitor and manage the power across the individual blades in a chassis.
Abstract: One of the key challenges for high-density servers (e.g., blades) is the increased costs in addressing the power and heat density associated with compaction. Prior approaches have mainly focused on reducing the heat generated at the level of an individual server. In contrast, this work proposes power efficiencies at a larger scale by leveraging statistical properties of concurrent resource usage across a collection of systems ("ensemble"). Specifically, we discuss an implementation of this approach at the blade enclosure level to monitor and manage the power across the individual blades in a chassis. Our approach requires low-cost hardware modifications and relatively simple software support. We evaluate our architecture through both prototyping and simulation. For workloads representing 132 servers from nine different enterprise deployments, we show significant power budget reductions at performances comparable to conventional systems.

421 citations
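The statistical observation behind the ensemble approach can be illustrated in a few lines: blades rarely peak simultaneously, so a budget sized for the peak of the summed traces is smaller than the sum of per-blade peaks. A sketch under the assumption of time-aligned power samples (the function name is hypothetical):

```python
def ensemble_budget(traces):
    """traces: per-server lists of power samples taken at the same instants.

    Returns (sum of individual peaks, peak of the summed trace).
    Provisioning each server for its own peak reserves headroom that
    the ensemble as a whole never uses at the same time.
    """
    sum_of_peaks = sum(max(t) for t in traces)
    peak_of_sum = max(map(sum, zip(*traces)))
    return sum_of_peaks, peak_of_sum
```

The gap between the two numbers is the budget reduction available to an enclosure-level controller.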


Proceedings ArticleDOI
03 Apr 2006
TL;DR: This paper introduces the concept of server consolidation using virtualization, points out associated issues that arise in the area of application performance, and shows how some of these problems can be solved by monitoring key performance metrics and using the data to trigger migration of virtual machines within physical servers.
Abstract: As businesses have grown, so has the need to deploy I/T applications rapidly to support the expanding business processes. Often, this growth was achieved in an unplanned way: each time a new application was needed a new server along with the application software was deployed and new storage elements were purchased. In many cases this has led to what is often referred to as "server sprawl", resulting in low server utilization and high system management costs. An architectural approach that is becoming increasingly popular to address this problem is known as server virtualization. In this paper we introduce the concept of server consolidation using virtualization and point out associated issues that arise in the area of application performance. We show how some of these problems can be solved by monitoring key performance metrics and using the data to trigger migration of Virtual Machines within physical servers. The algorithms we present attempt to minimize the cost of migration and maintain acceptable application performance levels.

364 citations
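One plausible reading of the migration trigger described above, sketched from a single overloaded host's point of view (the names, threshold, and cost model are assumptions; the paper's actual algorithm is richer):

```python
def pick_migration(host_load, vms, threshold=0.9):
    """vms: list of (name, load, migration_cost) on one host.

    If the host is over its utilization threshold, choose the cheapest
    VM whose removal brings the host back under the threshold.
    Returns the VM name, or None if no migration is needed/possible.
    """
    if host_load <= threshold:
        return None                     # no overload, nothing to do
    candidates = [(cost, name) for name, load, cost in vms
                  if host_load - load <= threshold]
    return min(candidates)[1] if candidates else None
```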


Proceedings ArticleDOI
21 May 2006
TL;DR: This work presents fast and cheap attacks that reveal the location of a hidden server; these are the first actual intersection attacks on any deployed public network, confirming general expectations from prior theory and simulation.
Abstract: Hidden services were deployed on the Tor anonymous communication network in 2004. Announced properties include server resistance to distributed DoS. Both the EFF and Reporters Without Borders have issued guides that describe using hidden services via Tor to protect the safety of dissidents as well as to resist censorship. We present fast and cheap attacks that reveal the location of a hidden server. Using a single hostile Tor node we have located deployed hidden servers in a matter of minutes. Although we examine hidden services over Tor, our results apply to any client using a variety of anonymity networks. In fact, these are the first actual intersection attacks on any deployed public network: thus confirming general expectations from prior theory and simulation. We recommend changes to route selection design and implementation for Tor. These changes require no operational increase in network overhead and are simple to make; but they prevent the attacks we have demonstrated. They have been implemented.

362 citations


Patent
10 Feb 2006
TL;DR: In this article, an agent user interface is described that gives the agent control over accepting multiple calls and lets the agent drag and drop canned responses, images, URLs, or other information into a window for immediate display on a customer's computer.
Abstract: Multiple communication types are integrated into a call center. The communication types can be chat, email, Internet Protocol (IP) voice, traditional telephone, web page, digital image, digital video and other types. Features of the invention include allowing a single agent to handle multiple customers on multiple channels, or “endpoints.” Prioritizing and assigning calls to agents based on a specific criteria such as the number of endpoints assigned to an agent, the agents availability, the priority of a customer call, the efficiency of a given agent and the agent's efficiency at handling a particular communication type call. An agent user interface is described that allows the agent to have control over accepting multiple calls. The agent can drag and drop canned responses, images, URLs, or other information into a window for immediate display on a customer's computer. The system provides for detailed agent performance tracking. The system provides failure recovery by using a backup system. If the network server fails, then the customer is connected directly to an agent. When a failed computer comes back on line, the statistics gathered are then used to synchronize the returned computer. The system provides extensive call recording or “data wake” information gathering. The system provides flexibility in transferring large amounts of historic and current data from one agent to another, and from storage to an active agent. The system integrates human agents' knowledge with an automated knowledge base. The system provides for an agent updating, or adding, to the knowledge base in real time. The system also provides for “blending” of different communication types.

285 citations


Patent
24 Oct 2006
TL;DR: In this paper, the authors provide power and energy consumption and workload management and control systems and architectures for high-density and modular multi-server computer systems that maintain performance while conserving energy.
Abstract: Network architecture, computer system and/or server, circuit, device, apparatus, method, and computer program and control mechanism for managing power consumption and workload in computer system and data and information servers. Further provides power and energy consumption and workload management and control systems and architectures for high-density and modular multi-server computer systems that maintain performance while conserving energy and method for power management and workload management. Dynamic server power management and optional dynamic workload management for multi-server environments is provided by aspects of the invention. Modular network devices and integrated server system, including modular servers, management units, switches and switching fabrics, modular power supplies and modular fans and a special backplane architecture are provided as well as dynamically reconfigurable multi-purpose modules and servers. Backplane architecture, structure, and method that has no active components and separate power supply lines and protection to provide high reliability in server environment.

284 citations


Proceedings ArticleDOI
30 Oct 2006
TL;DR: This work shows that the effect of temperature on clock skew can be remotely detected and used to locate hidden servers, because existing abstract models of anonymity-network nodes do not take into account the inevitable imperfections of the hardware they run on; the same technique could be exploited as a classical covert channel and can even provide geolocation.
Abstract: Location-hidden services, as offered by anonymity systems such as Tor, allow servers to be operated under a pseudonym. As Tor is an overlay network, servers hosting hidden services are accessible both directly and over the anonymous channel. Traffic patterns through one channel have observable effects on the other, thus allowing a service's pseudonymous identity and IP address to be linked. One proposed solution to this vulnerability is for Tor nodes to provide fixed quality of service to each connection, regardless of other traffic, thus reducing capacity but resisting such interference attacks. However, even if each connection does not influence the others, total throughput would still affect the load on the CPU, and thus its heat output. Unfortunately for anonymity, the effect of temperature on clock skew can be remotely detected through observing timestamps. This attack works because existing abstract models of anonymity-network nodes do not take into account the inevitable imperfections of the hardware they run on. Furthermore, we suggest the same technique could be exploited as a classical covert channel and can even provide geolocation.
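The skew measurement underlying this attack is essentially a least-squares slope fit of observed remote-clock offsets against local time; a temperature change shows up as a change in that slope. A minimal sketch (illustrative, not the authors' estimator):

```python
def skew_ppm(times, offsets):
    """Least-squares slope of remote-clock offset vs. local time, in ppm.

    times: local observation times (seconds).
    offsets: measured remote-clock offsets at those times (seconds).
    """
    n = len(times)
    mt = sum(times) / n
    mo = sum(offsets) / n
    num = sum((t - mt) * (o - mo) for t, o in zip(times, offsets))
    den = sum((t - mt) ** 2 for t in times)
    return 1e6 * num / den
```

Comparing skew estimates across candidate machines while modulating a hidden service's load is the linking step the paper describes.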

Journal ArticleDOI
TL;DR: A novel hybrid architecture that integrates both CDN- and P2P-based streaming media distribution is proposed and analyzed, which significantly lowers the cost of CDN capacity reservation, without compromising the media quality delivered.
Abstract: To distribute video and audio data in real-time streaming mode, two different technologies --- Content Distribution Network (CDN) and Peer-to-Peer (P2P) --- have been proposed. However, both technologies have their own limitations: CDN servers are expensive to deploy and maintain, and consequently incur a cost for media providers and/or clients for server capacity reservation. On the other hand, a P2P-based architecture requires a sufficient number of seed supplying peers to jumpstart the distribution process. Compared with a CDN server, a peer usually offers much lower out-bound streaming rate and hence multiple peers must jointly stream media data to a requesting peer. Furthermore, it is not clear how to determine how much a peer should contribute back to the system after receiving the media data, in order to sustain the overall media distribution capacity. In this paper, we propose and analyze a novel hybrid architecture that integrates both CDN- and P2P-based streaming media distribution. The architecture is highly cost-effective: it significantly lowers the cost of CDN capacity reservation, without compromising the media quality delivered. In particular, we propose and compare different limited contribution policies for peers that request media data, so that the streaming capacity of each peer can be exploited on a fair and limited basis. We present: (1) in-depth analysis of the proposed architecture under different contribution policies, and (2) extensive simulation results which validate the analysis. Our analytical and simulation results form a rigorous basis for the planning and dimensioning of the hybrid architecture.
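The dimensioning question raised above, how much CDN capacity to reserve once peers contribute under a limited policy, can be caricatured in one function; the per-peer cap stands in for the paper's contribution policies and is an assumption:

```python
def cdn_deficit(demand_kbps, peer_rates, policy_cap):
    """Bandwidth the CDN must supply after peers contribute.

    demand_kbps: aggregate streaming demand.
    peer_rates: out-bound rate each peer could offer (kbps).
    policy_cap: per-peer contribution limit imposed by the policy (kbps).
    """
    peer_supply = sum(min(r, policy_cap) for r in peer_rates)
    return max(0, demand_kbps - peer_supply)
```

A tighter cap is fairer to individual peers but shifts more load (and cost) back onto the reserved CDN capacity.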

Patent
16 Jun 2006
TL;DR: In this paper, as the first packet of a server reply to a client request is routed back, identifiers of the ingress interfaces it arrives on are stored in a send path list for server load balancing; subsequent packets are then forwarded to the same server along that path, without hop-by-hop routing decisions or repeated load-balancing decisions.
Abstract: A router for routing data from a client through load-balancing nodes to a selected load-balanced server among a plurality of servers in a network involves: receiving, at a last load balancing node associated with a selected server among the plurality of servers, a first packet of a server reply to a request from the client; storing identifiers of ingress interfaces on which the packet arrives, in a send path list for server load balancing, as the first packet of the server reply is routed from the last load balancing node to the client using hop-by-hop decisions; receiving subsequent packets of the client request; and forwarding the subsequent packets to the selected server only on a route that is defined by the send path list and without hop-by-hop routing decisions. Packet flows are routed from the same client to the same server without hop-by-hop routing decisions or repeated load-balancing decisions.

Patent
04 Dec 2006
TL;DR: In this article, a session distribution scheme is implemented such that connections are distributed to the server in the group of servers which has the fewest connections of the group; other embodiments route connections based on the predicted response times of the servers or according to a round-robin scheme.
Abstract: Disclosed is a system and method for distributing connections among a plurality of servers at an Internet site. All connections are made to a single IP address and a local director selects the server from among the plurality of servers which is to receive the connection. Thus, the DNS server is not relied upon to distribute connections, and the connection distribution scheme is not avoided when DNS is bypassed. In one embodiment, a session distribution scheme is implemented such that connections are distributed to the server in the group of servers which has the fewest connections of the group. In other embodiments, other session distribution schemes which route connections based on the predicted response times of the servers or according to a round robin scheme are used.
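The two distribution schemes named above, fewest connections and round robin, are simple to sketch; the function names are illustrative, not from the patent:

```python
import itertools

def pick_server(connections):
    """Fewest-connections scheme: connections maps server -> active count.

    Return the server currently holding the fewest connections.
    """
    return min(connections, key=connections.get)

def round_robin(servers):
    """Round-robin scheme: an iterator cycling through servers in order."""
    return itertools.cycle(servers)
```

In both cases every client connects to the single advertised IP address; the local director applies the scheme internally, so bypassing DNS cannot bypass the distribution.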

Proceedings Article
31 Jul 2006
TL;DR: This paper discusses the design and implementation of a NIDS extension to perform dynamic application-layer protocol analysis and demonstrates the power of the enhancement with three examples: reliable detection of applications not using their standard ports, payload inspection of FTP data transfers, and detection of IRC-based botnet clients and servers.
Abstract: Many network intrusion detection systems (NIDS) rely on protocol-specific analyzers to extract the higher-level semantic context from a traffic stream. To select the correct kind of analysis, traditional systems exclusively depend on well-known port numbers. However, based on our experience, increasingly significant portions of today's traffic are not classifiable by such a scheme. Yet for a NIDS, this traffic is very interesting, as a primary reason for not using a standard port is to evade security and policy enforcement monitoring. In this paper, we discuss the design and implementation of a NIDS extension to perform dynamic application-layer protocol analysis. For each connection, the system first identifies potential protocols in use and then activates appropriate analyzers to verify the decision and extract higher-level semantics. We demonstrate the power of our enhancement with three examples: reliable detection of applications not using their standard ports, payload inspection of FTP data transfers, and detection of IRC-based botnet clients and servers. Prototypes of our system currently run at the border of three large-scale operational networks. Due to its success, the bot-detection is already integrated into a dynamic inline blocking of production traffic at one of the sites.
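The first step of such a system, guessing candidate protocols from payload bytes rather than port numbers, can be sketched as follows; the signature table is a toy stand-in for the analyzers' real detection logic:

```python
# Illustrative payload signatures; a real NIDS extension verifies the
# guess by running the full protocol analyzer on the stream.
SIGNATURES = [
    ("http", (b"GET ", b"POST ", b"HTTP/")),
    ("ssh",  (b"SSH-",)),
    ("irc",  (b"NICK ", b"USER ", b"PRIVMSG ")),
]

def detect_protocol(payload: bytes):
    """Guess the application protocol from the first payload bytes,
    independent of the transport port. Returns None if no match."""
    for name, prefixes in SIGNATURES:
        if payload.startswith(prefixes):
            return name
    return None
```

A connection matching "irc" on a non-standard port is exactly the kind of traffic the paper flags for bot detection.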

Patent
31 Oct 2006
TL;DR: In this article, methods and apparatus provide secure interactive communication of text and image information between a central server computer and one or more client computers located at remote sites, for the purpose of storing and retrieving files describing and identifying unique products, services, or individuals.
Abstract: Methods and apparatus are described which provide secure interactive communication of text and image information between a central server computer and one or more client computers located at remote sites for the purpose of storing and retrieving files describing and identifying unique products, services, or individuals. Textual information and image data from one or more of the remote sites are stored separately at the location of the central server computer, with the image data being in compressed form, and with the textual information being included in a relational database with identifiers associated with any related image data. Means are provided at the central computer for management of all textual information and image data received to ensure that all information may be independently retrieved. Requests are entered from remote terminals specifying particular subject matter, and the system is capable of responding to multiple simultaneous requests. Textual information is recalled and downloaded for review, along with any subsequently requested image data, to be displayed at a remote site. Various modes of data and image formatting are also disclosed, including encryption techniques to fortify data integrity. The server computers may be interfaced with other computers to effect financial transactions, and images representing the subjects of transactions may be uploaded to the server computer to create temporary or permanent records of financial or legal transactions. A further feature of the system is the ability to associate an identification image with a plurality of accounts, transactions, or records.

Patent
24 Oct 2006
TL;DR: In this article, a content delivery network (CDN) for delivering content over the Internet is disclosed in one embodiment, which includes a domain resolution service (DNS) server, caching servers and an Internet interface.
Abstract: A content delivery network (CDN) for delivering content over the Internet is disclosed in one embodiment. The CDN is configured to deliver content for others and includes a domain resolution service (DNS) server, caching servers and an Internet interface. The DNS server receives a first domain resolution request and produces a first DNS solution, and receives a second domain resolution request and produces a second DNS solution. The first and second domain resolution requests correspond to a same domain. The caching servers correspond to a plurality of addresses. The interface receives domain resolution requests, which include the first and second domain resolution requests, and transmits DNS solutions, which include the first and second DNS solutions. The first DNS solution comprises a first plurality of addresses corresponding to at least a first subset of the plurality of caching servers, and the second DNS solution comprises a second plurality of addresses corresponding to at least a second subset of the plurality of caching servers. The first DNS solution is different from the second DNS solution in that the second subset includes an address for a caching server not in the first subset. The second subset is chosen to generally match a processing power of the first subset.
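The requirement that successive DNS answers cover different caching-server subsets of roughly matched processing power can be illustrated with a greedy split; this is an assumption-laden sketch, not the patent's method:

```python
def balanced_subsets(servers):
    """servers: dict mapping caching-server address -> capacity.

    Greedily split the servers into two answer sets whose total
    processing power roughly matches, so either set can be returned
    as a DNS solution.
    """
    a, b = [], []
    total_a = total_b = 0
    for addr, cap in sorted(servers.items(), key=lambda kv: -kv[1]):
        if total_a <= total_b:          # give next-largest to lighter set
            a.append(addr)
            total_a += cap
        else:
            b.append(addr)
            total_b += cap
    return a, b
```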

Book
24 Mar 2006
TL;DR: This book explains in detail the design and implementation of the NTP interleaved modes and some of the mechanisms used for transferring data between servers and reference clocks.
Abstract:
BASIC CONCEPTS: Time Synchronization Time Synchronization Protocols Computer Clocks Processing Time Values Correctness and Accuracy Expectations Security NTP in the Internet Parting Shots References
HOW NTP WORKS: General Infrastructure Requirements How NTP Represents the Time How NTP Reckons the Time How NTP Disciplines the Time How NTP Clients and Servers Associate How NTP Discovers Servers How NTP Manages Network Resources How NTP Avoids Errors How NTP Performance Is Determined How NTP Controls Access How NTP Watches for Terrorists How NTP Clocks Are Watched Parting Shots References Further Reading
IN THE BELLY OF THE BEAST: Related Technology Terms and Notation Process Flow Packet Processing Clock Filter Algorithm Selection Algorithm Clustering Algorithm Combining Algorithm Huff-'n-Puff Filter Mitigation Rules and the Prefer Peer Poll Process Parting Shots References Further Reading
CLOCK DISCIPLINE ALGORITHM: Feedback Control Systems Phase and Frequency Discipline Weight Factors Poll Interval Control Popcorn and Step Control Clock State Machine Parting Shots References Further Reading
NTP SUBNET CONFIGURATION: Automatic Server Discovery Manual Server Discovery and Configuration Evaluating the Sources Selecting the Stratum Selecting the Number of Configured Servers Engineering Campus and Corporate Networks Engineering Home Office and Small Business Networks Hardware and Network Considerations Parting Shots References Further Reading
NTP PERFORMANCE IN THE INTERNET: Performance Measurement Tools System Clock Latency Characteristics Characteristics of a Primary Server and Reference Clock Characteristics between Primary Servers on the Internet Characteristics of a Client and a Primary Server on a Fast Ethernet Results from an Internet Survey Server and Network Resource Requirements Parting Shots References
PRIMARY SERVERS AND REFERENCE CLOCKS: Driver Structure and Interface Reference Clock Drivers Further Reading
KERNEL TIMEKEEPING SUPPORT: System Clock Reading Algorithm Clock Discipline Algorithms Kernel PLL/FLL Discipline Kernel PPS Discipline Clock Adjust Algorithm Proof of Performance Kernel PLL/FLL Discipline Performance Kernel PPS Discipline Parting Shots References Further Reading
CRYPTOGRAPHIC AUTHENTICATION: NTP Security Model NTP Secure Groups Autokey Security Protocol Parting Shots References Further Reading
IDENTITY SCHEMES: X509 Certificates Private Certificate (PC) Identity Scheme Trusted Certificate (TC) Identity Scheme Schnorr (IFF) Identity Scheme Guillou-Quisquater (GQ) Identity Scheme Mu-Varadharajan (MV) Identity Scheme Parting Shots References Further Reading
ANALYSIS OF ERRORS: Clock Reading Errors Timestamp Errors Sawtooth Errors Maximum Error Budget Expected Error Budget Parting Shots References
MODELING AND ANALYSIS OF COMPUTER CLOCKS: Computer Clock Concepts Mathematical Model of the Generic Feedback Loop Synthetic Timescales and Clock Wranglers Parting Shots References Further Reading
METROLOGY AND CHRONOMETRY OF THE NTP TIMESCALE: Scientific Timescales Based on Astronomy and Atomic Physics Civil Timescales Based on Earth Rotation How NTP Reckons with UTC Leap Seconds On Numbering the Calendars and Days On the Julian Day Number System On Timescales, Leap Events, and the Age of Eras The NTP Era and Buddy Epoch Comparison with Other Computer Timescales Primary Frequency and Time Standards Time and Frequency Dissemination Parting Shots References Further Reading
NTP REFERENCE IMPLEMENTATION: NTP Packet Header Control Flow Main Program and Common Routines Peer Process System Process Clock Discipline Process Clock Adjust Process Poll Process Parting Shots Reference Further Reading
TECHNICAL HISTORY OF NTP: On the Antiquity of NTP On the Proliferation of NTP around the Globe Autonomous Authentication Autonomous Configuration Radios, We Have Radios Hunting the Nanoseconds Experimental Studies Theory and Algorithms Growing Pains As Time Goes By Parting Shots References Further Reading
BIBLIOGRAPHY
INDEX
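The on-wire calculation the book builds on is the standard NTP offset/delay computation from four timestamps, as specified in RFC 5905:

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Standard NTP on-wire calculation (RFC 5905).

    t1: client transmit time, t2: server receive time,
    t3: server transmit time, t4: client receive time.
    Returns (clock offset, round-trip delay) in the same time units.
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay
```

The clock filter, selection, and discipline algorithms in the chapters above all operate on streams of these (offset, delay) samples.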

Journal ArticleDOI
01 Aug 2006
TL;DR: The aim of this paper is to give an overview of DIET (Distributed Interactive Engineering Tool-box), a middleware developed by the GRAAL team: a hierarchical set of components used for the development of applications based on computational servers on the grid.
Abstract: Among existing grid middleware approaches, one simple, powerful, and flexible approach consists of using servers available in different administrative domains through the classical client-server or Remote Procedure Call (RPC) paradigm. Network Enabled Servers implement this model, also called GridRPC. Clients submit computation requests to a scheduler whose goal is to find a server available on the grid. The aim of this paper is to give an overview of a middleware developed by the GRAAL team called DIET (Distributed Interactive Engineering Tool-box). DIET is a hierarchical set of components used for the development of applications based on computational servers on the grid.

Journal ArticleDOI
TL;DR: QN-MHP expands the three discrete serial stages of MHP, of perceptual, cognitive, and motor processing, into three continuous-transmission subnetworks of servers, each performing distinct psychological functions specified with a GOMS-style language.
Abstract: Queueing Network-Model Human Processor (QN-MHP) is a computational architecture that integrates two complementary approaches to cognitive modeling: the queueing network approach and the symbolic approach (exemplified by the MHP/GOMS family of models, ACT-R, EPIC, and SOAR). Queueing networks are particularly suited for modeling parallel activities and complex structures. Symbolic models have particular strength in generating a person's actions in specific task situations. By integrating the two approaches, QN-MHP offers an architecture for mathematical modeling and real-time generation of concurrent activities in a truly concurrent manner. QN-MHP expands the three discrete serial stages of MHP, of perceptual, cognitive, and motor processing, into three continuous-transmission subnetworks of servers, each performing distinct psychological functions specified with a GOMS-style language. Multitask performance emerges as the behavior of multiple streams of information flowing through a network, with no need to devise complex, task-specific procedures to either interleave production rules into a serial program (ACT-R), or for an executive process to interactively control task processes (EPIC). Using QN-MHP, a driver performance model was created and interfaced with a driving simulator to perform a vehicle steering task and a map reading task concurrently and in real time. The performance data of the model are similar to human subjects performing the same tasks.

Patent
09 Mar 2006
TL;DR: A location-based Uniform Resource Locator (URL) is proposed to identify one or more resources on a network based on the geographical location of the resources, alongside a proxy service that intercepts content information requests to the Internet and re-directs them to an overlay.
Abstract: An aspect of the present invention is a method for routing content information to a mobile user or client application. The method preferably comprises re-directing a user request to one or more gateway servers provided via an overlay network. In another aspect, the present invention is an apparatus that includes a proxy service that intercepts content information requests to the Internet and re-directs the content requests to an overlay. Another aspect of the present invention comprises a location-based Uniform Resource Locator that includes a protocol semantic portion and a location-based resolver address portion that identifies one or more resources on a network based on the geographical location of the resources.

Journal ArticleDOI
11 Aug 2006
TL;DR: This research shows that in more than 50% of investigated scenarios, it is better to route through the nodes "recommended" by Akamai than to use the direct paths, and develops low-overhead pruning algorithms that avoid Akamai-driven paths when they are not beneficial.
Abstract: To enhance web browsing experiences, content distribution networks (CDNs) move web content "closer" to clients by caching copies of web objects on thousands of servers worldwide. Additionally, to minimize client download times, such systems perform extensive network and server measurements, and use them to redirect clients to different servers over short time scales. In this paper, we explore techniques for inferring and exploiting network measurements performed by the largest CDN, Akamai; our objective is to locate and utilize quality Internet paths without performing extensive path probing or monitoring. Our contributions are threefold. First, we conduct a broad measurement study of Akamai's CDN. We probe Akamai's network from 140 PlanetLab vantage points for two months. We find that Akamai redirection times, while slightly higher than advertised, are sufficiently low to be useful for network control. Second, we empirically show that Akamai redirections overwhelmingly correlate with network latencies on the paths between clients and the Akamai servers. Finally, we illustrate how large-scale overlay networks can exploit Akamai redirections to identify the best detouring nodes for one-hop source routing. Our research shows that in more than 50% of investigated scenarios, it is better to route through the nodes "recommended" by Akamai than to use the direct paths. Because this is not the case for the rest of the scenarios, we develop low-overhead pruning algorithms that avoid Akamai-driven paths when they are not beneficial.
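The pruning decision amounts to comparing the direct-path latency against the best Akamai-recommended one-hop detour and falling back to the direct path when no detour helps. A hypothetical sketch (RTT values and the margin are illustrative, not the paper's measured data):

```python
def best_path(direct_rtt, detour_rtts, margin=1.0):
    """Pick the direct path unless some Akamai-recommended detour node
    yields a one-hop path at least `margin` ms faster.

    direct_rtt: measured RTT of the direct path (ms).
    detour_rtts: total RTT via each candidate detour node (ms).
    All values here are illustrative placeholders."""
    best_detour = min(detour_rtts) if detour_rtts else float("inf")
    if best_detour + margin < direct_rtt:
        return ("detour", best_detour)
    return ("direct", direct_rtt)

# A detour via an Akamai-recommended node beats the direct path:
choice = best_path(direct_rtt=120.0, detour_rtts=[80.0, 95.0])
# No detour available, so the direct path is kept:
fallback = best_path(direct_rtt=120.0, detour_rtts=[])
```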

Proceedings ArticleDOI
30 Nov 2006
TL;DR: In this paper, a probabilistic network calculus with moment generating functions is presented, which achieves the objective of scaling linearly in the number of servers in series in a queuing network.
Abstract: Network calculus is a min-plus system theory for performance evaluation of queuing networks. Its elegance stems from intuitive convolution formulas for concatenation of deterministic servers. Recent research dispenses with the worst-case assumptions of network calculus to develop a probabilistic equivalent that benefits from statistical multiplexing. Significant achievements have been made, owing for example to the theory of effective bandwidths; however, the outstanding scalability set up by concatenation of deterministic servers has not been shown. This paper establishes a concise, probabilistic network calculus with moment generating functions. The presented work features closed-form, end-to-end, probabilistic performance bounds that achieve the objective of scaling linearly in the number of servers in series. The consistent application of moment generating functions put forth in this paper utilizes independence beyond the scope of current statistical multiplexing of flows. A relevant additional gain is demonstrated for tandem servers with independent cross-traffic.
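Probabilistic bounds of this kind rest on the Chernoff bound, which ties tail probabilities to moment generating functions; stated in its standard simplified form (the paper's end-to-end bounds refine this for servers in series):

```latex
% Chernoff bound underlying MGF-based network calculus: for any \theta > 0,
P\{X > x\} \;\le\; e^{-\theta x}\,\mathsf{E}\!\left[e^{\theta X}\right]
          \;=\; e^{-\theta x}\, M_X(\theta).
% For independent random variables the MGF of a sum factorizes,
%   M_{X_1 + \cdots + X_n}(\theta) = \prod_{i=1}^{n} M_{X_i}(\theta),
% which is what lets end-to-end bounds over n servers in series
% grow linearly in n rather than combinatorially.
```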

Patent
Ramy Dodin1
28 Feb 2006
TL;DR: In this paper, a computer-implemented method of effectuating an electronic on-line payment includes receiving at a computer server system a text message from a payor containing a payment request representing a payment amount sent by a device operating independently of the computer server.
Abstract: A computer-implemented method of effectuating an electronic on-line payment includes receiving at a computer server system a text message from a payor containing a payment request representing a payment amount sent by a payor device operating independently of the computer server system, determining a payment amount associated with the text message and debiting a payor account for an amount corresponding to the amount of the payment request, and crediting an account of a payee that is independent of the computer server system.
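The server-side flow (parse a payment text message, debit the payor, credit the payee) can be sketched as below. The message format, account store, and function names are invented for illustration; the patent does not specify a wire format.

```python
import re

# Sketch of the flow in the abstract: the server receives a text message,
# determines the payment amount, debits the payor, credits the payee.
# The 'PAY <amount> TO <payee> FROM <payor>' format is hypothetical.
accounts = {"alice": 100.00, "bob": 25.00}

def handle_text_message(msg):
    """Parse a payment request and move funds; returns the amount paid."""
    m = re.match(r"PAY (\d+(?:\.\d{1,2})?) TO (\w+) FROM (\w+)", msg)
    if not m:
        raise ValueError("unrecognized payment request")
    amount, payee, payor = float(m.group(1)), m.group(2), m.group(3)
    if accounts[payor] < amount:
        raise ValueError("insufficient funds")
    accounts[payor] -= amount
    accounts[payee] += amount
    return amount

paid = handle_text_message("PAY 20.50 TO bob FROM alice")
```

A production system would authenticate the payor device and make the debit and credit a single atomic transaction; both concerns are omitted here.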

Patent
09 Aug 2006
TL;DR: An encrypted file storage solution consists of a cluster of processing nodes, external data storage, and a software agent (the "File System Watcher") which is installed on the application servers as discussed by the authors.
Abstract: An encrypted file storage solution consists of a cluster of processing nodes, external data storage, and a software agent (the “File System Watcher”), which is installed on the application servers. Cluster sizes of one node up to many hundreds of nodes are possible. There are also remote “Key Servers” which provide various services to one or more clusters. The preceding describes a preferred embodiment, though in some cases it may be desirable to “collapse” some of the functionality into a smaller number of hardware devices, typically trading off cost versus security and fault-tolerance.

Patent
24 Mar 2006
TL;DR: In this paper, a distributed on-demand computing system is proposed that automatically provisions distributed computing servers with customer application programs, taking the parameters of each application program into account when a server is selected for hosting the program.
Abstract: A method and mechanism for a distributed on-demand computing system. The system automatically provisions distributed computing servers with customer application programs. The parameters of each customer application program are taken into account when a server is selected for hosting the program. The system monitors the status and performance of each distributed computing server. The system provisions additional servers when traffic levels exceed a predetermined level for a customer's application program and, as traffic demand decreases to a predetermined level, servers can be un-provisioned and returned back to a server pool for later provisioning. The system tries to fill up one server at a time with customer application programs before dispatching new requests to another server. The customer is charged a fee based on the usage of the distributed computing servers.
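The "fill up one server at a time" policy described above is essentially first-fit bin packing: dispatch each application to the first provisioned server with spare capacity, and pull a new server from the pool only when none fits. A sketch with illustrative capacities:

```python
# First-fit placement in the spirit of the abstract's policy: fill one
# server before provisioning another. Sizes and capacity are illustrative.

def provision(app_sizes, server_capacity):
    """Place each application on the first server with room, provisioning
    a new server only when no existing one fits. Returns one list of
    placed application sizes per server."""
    servers = []
    for size in app_sizes:
        for srv in servers:
            if sum(srv) + size <= server_capacity:
                srv.append(size)
                break
        else:  # no existing server fits: provision a new one from the pool
            servers.append([size])
    return servers

placement = provision([4, 3, 5, 2, 6], server_capacity=8)
```

Un-provisioning as demand drops (the reverse path in the abstract) would simply return emptied servers to the pool.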

Patent
29 Dec 2006
TL;DR: In this paper, the edge server retrieves objects embedded in pages (normally HTML content) at the same time it serves the page to the browser rather than waiting for the browser's request for these objects.
Abstract: A CDN edge server is configured to provide one or more extended content delivery features on a domain-specific, customer-specific basis, preferably using configuration files that are distributed to the edge servers using a configuration system. A given configuration file includes a set of content handling rules and directives that facilitate one or more advanced content handling features, such as content prefetching. When prefetching is enabled, the edge server retrieves objects embedded in pages (normally HTML content) at the same time it serves the page to the browser rather than waiting for the browser's request for these objects. This can significantly decrease the overall rendering time of the page and improve the user experience of a Web site. Using a set of metadata tags, prefetching can be applied to either cacheable or uncacheable content. When prefetching is used for cacheable content, and the object to be prefetched is already in cache, the object is moved from disk into memory so that it is ready to be served. When prefetching is used for uncacheable content, preferably the retrieved objects are uniquely associated with the client browser request that triggered the prefetch so that these objects cannot be served to a different end user. By applying metadata in the configuration file, prefetching can be combined with tiered distribution and other edge server configuration options to further improve the speed of delivery and/or to protect the origin server from bursts of prefetching requests.
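The prefetch step can be sketched as follows: as the edge server serves an HTML page, it extracts embedded object URLs and fetches them immediately instead of waiting for the browser's requests. The regex and fetch stub are illustrative; a real edge server would consult its cache and the configuration-file metadata described above.

```python
import re

# Sketch of edge-server prefetching: extract embedded objects from a page
# being served and fetch them right away. The URL pattern is illustrative.

def embedded_objects(html):
    """Return URLs of embedded objects (images, stylesheets, scripts)."""
    return re.findall(r'(?:src|href)="([^"]+\.(?:png|jpg|css|js))"', html)

def serve_with_prefetch(html, fetch):
    """Serve the page and kick off a fetch for each embedded object,
    rather than waiting for the browser to request them."""
    urls = embedded_objects(html)
    for url in urls:
        fetch(url)  # in practice: issued concurrently, cache-aware
    return html, urls

prefetched = []
page = '<img src="logo.png"><link href="site.css">'
_, urls = serve_with_prefetch(page, prefetched.append)
```

For uncacheable content the abstract adds a constraint this sketch omits: prefetched objects must stay bound to the triggering client's request.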

Patent
03 Jan 2006
TL;DR: In this article, the authors describe load balancing of service requests on behalf of two or more servers located geographically proximate to the load balancer, where each server provides services to the communication devices within the communication network.
Abstract: Methods and apparatus are provided for geo-locating load balancing. According to one embodiment, a communication network architecture includes multiple servers, multiple load balancers, and multiple geographically dispersed communication devices. The servers provide services to the communication devices within the communication network. The load balancers each service a shared virtual Internet Protocol (IP) address common to all of the load balancers and perform load balancing of service requests on behalf of two or more of the servers that are located geographically proximate to the load balancer. The communication devices are communicatively coupled with the load balancers and are configured to issue service requests intended for any of the servers to the shared virtual IP address, whereby, upon issuing a service request, a communication device is directed to a particular server selected by a load balancing routine that is associated with a load balancer that is closest to the communication device.
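The geo-aware selection can be sketched as below: every load balancer answers the same shared virtual IP, a client's request reaches the geographically closest one, and that balancer picks among its local servers. Coordinates, the distance metric, and the trivial local policy are all illustrative.

```python
import math

# Sketch of the shared-VIP scheme: the request lands at the closest load
# balancer, which then balances across its geographically local servers.

def route(client_xy, balancers):
    """balancers: list of (name, (lat, lon), local_servers).
    Returns the chosen balancer and the server it selects."""
    name, _, servers = min(balancers,
                           key=lambda b: math.dist(client_xy, b[1]))
    # Stand-in for the balancer's local policy: just take the first server.
    return name, servers[0]

balancers = [("lb-east", (40.7, -74.0), ["s1", "s2"]),
             ("lb-west", (37.8, -122.4), ["s3"])]
lb, server = route((39.0, -77.0), balancers)
```

In the actual architecture the "closest balancer" step happens in the network (all balancers advertise the same virtual IP), not in client code; the sketch only mimics the outcome.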

Patent
28 Feb 2006
TL;DR: In this paper, a backplane architecture, structure, and method that has no active components and separate power supply lines and protection to provide high reliability in server environment is presented for power management and workload management for multi-server environments.
Abstract: Network architecture, computer system and/or server, circuit, device, apparatus, method, and computer program and control mechanism for managing power consumption and workload in computer system and data and information servers. The invention further provides power and energy consumption and workload management and control systems and architectures for high-density and modular multi-server computer systems that maintain performance while conserving energy, and a method for power management and workload management. Dynamic server power management and optional dynamic workload management for multi-server environments is provided by aspects of the invention. Modular network devices and integrated server system, including modular servers, management units, switches and switching fabrics, modular power supplies and modular fans and a special backplane architecture are provided as well as dynamically reconfigurable multi-purpose modules and servers. A backplane architecture, structure, and method with no active components, plus separate power supply lines and protection, provide high reliability in a server environment.

Patent
28 Aug 2006
TL;DR: In this article, a system, method, and computer program product for publishing transcoded media content in response to publishing service requests from end users is presented; a user request for media content is processed intelligently, either by directing the processing of the request to one of a set of transcoding servers so as to effectively balance the load among the servers, or by directing the processing of the request to an appropriate alternative means for satisfying the request.
Abstract: A system, method, and computer program product for publishing transcoded media content in response to publishing service requests from end users. A user request for media content is processed intelligently, either by directing the processing of the request to one of a set of transcoding servers so as to effectively balance the load among the servers, or by directing the processing of the request to an appropriate alternative means for satisfying the request. Transcoding tasks can be prioritized. Moreover, the current load on any particular transcoding server can be monitored in conjunction with determination of the load to be created by a transcoding task, in order to facilitate server selection. Transcoding can be performed on-demand or in a batch mode. Alternatively, a request can be satisfied by distributing media content that has already been transcoded and is resident in cache memory in anticipation of such requests.
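The load-aware dispatch described above (monitor each server's current load, estimate the load a task will add, and select accordingly) can be sketched as follows; load units and the cost model are illustrative.

```python
# Sketch of load-aware transcoding dispatch: pick the least-loaded server,
# then charge it for the estimated cost of the new task. With per-server
# cost models, the projected load (load[s] + cost[s]) would be compared
# instead of the raw load. All numbers here are illustrative.

def dispatch(task_cost, loads):
    """loads: dict server -> current load. Mutates loads; returns the
    chosen server."""
    server = min(loads, key=loads.get)
    loads[server] += task_cost
    return server

loads = {"t1": 5.0, "t2": 2.0}
first = dispatch(3.0, loads)   # t2 is least loaded
second = dispatch(3.0, loads)  # loads now tie at 5.0; min keeps t1
```

The abstract's cache shortcut would sit in front of this: if the transcoded object is already resident, no server is dispatched at all.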