
Showing papers on "Server" published in 2003


Journal ArticleDOI
TL;DR: The objective of this web server is to provide easy access to RNA and DNA folding and hybridization software to the scientific community at large by making use of universally available web GUIs (Graphical User Interfaces).
Abstract: The abbreviated name, ‘mfold web server’, describes a number of closely related software applications available on the World Wide Web (WWW) for the prediction of the secondary structure of single-stranded nucleic acids. The objective of this web server is to provide easy access to RNA and DNA folding and hybridization software to the scientific community at large. By making use of universally available web GUIs (Graphical User Interfaces), the server circumvents the problem of portability of this software. Detailed output, in the form of structure plots with or without reliability information, single-strand frequency plots and ‘energy dot plots’, is available for the folding of single sequences. A variety of ‘bulk’ servers give less information, but in a shorter time and for up to hundreds of sequences at once. The portal for the mfold web server is http://www.bioinfo.rpi.edu/applications/mfold. This URL will be referred to as ‘MFOLDROOT’.

12,535 citations


Journal ArticleDOI
TL;DR: Google's architecture features clusters of more than 15,000 commodity-class PCs with fault-tolerant software that achieves superior performance at a fraction of the cost of a system built from fewer, but more expensive, high-end servers.
Abstract: Amenable to extensive parallelization, Google's web search application lets different queries run on different processors and, by partitioning the overall index, also lets a single query use multiple processors. To handle this workload, Google's architecture features clusters of more than 15,000 commodity-class PCs with fault-tolerant software. This architecture achieves superior performance at a fraction of the cost of a system built from fewer, but more expensive, high-end servers.

1,129 citations


Patent
10 Dec 2003
TL;DR: In this paper, a distributed data storage system is presented for sharing data among client computers running different types of operating systems by separating metadata from data; the client computers communicate with the metadata servers over a control network using a Storage Tank protocol.
Abstract: A distributed data storage system for sharing data among client computers running different types of operating systems by separating metadata from data. Data is stored in storage pools that are accessed by the client computers through a storage network. Metadata is stored in a metadata store and provided to the client computers by a cluster of metadata servers. The client computers communicate with the metadata servers using a Storage Tank protocol over a control network. Each client computer runs an operating system-specific client program that provides the client side functions of the Storage Tank protocol. The client program preferably includes a file system interface for communicating with the file system in the storage system and user applications, a client state manager for providing data consistency, and a plurality of operating system services for communicating with the metadata servers.

976 citations


Journal ArticleDOI
TL;DR: This work proposes a fully self-organized public-key management system that allows users to generate their public-private key pairs, to issue certificates, and to perform authentication regardless of the network partitions and without any centralized services.
Abstract: In contrast with conventional networks, mobile ad hoc networks usually do not provide online access to trusted authorities or to centralized servers, and they exhibit frequent partitioning due to link and node failures and to node mobility. For these reasons, traditional security solutions that require online trusted authorities or certificate repositories are not well-suited for securing ad hoc networks. We propose a fully self-organized public-key management system that allows users to generate their public-private key pairs, to issue certificates, and to perform authentication regardless of the network partitions and without any centralized services. Furthermore, our approach does not require any trusted authority, not even in the system initialization phase.
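
As an illustration of the authentication step this entails, the sketch below searches merged certificate repositories for a chain of certificates linking a key the verifier already trusts to the target key. It is a minimal Python sketch under simplifying assumptions (certificates reduced to issuer/subject key pairs, no signature checking, no expiry); the key names are illustrative, not from the paper.

```python
# Minimal sketch of certificate-chain authentication in a self-organized
# PKI, assuming a simplified model: each certificate is a directed edge
# from the issuer's public key to the subject's public key, and a user
# authenticates a key by finding a chain from a key it already trusts.
from collections import deque

def find_certificate_chain(certificates, trusted_key, target_key):
    """Breadth-first search for a chain of certificates linking
    trusted_key to target_key. `certificates` is an iterable of
    (issuer_key, subject_key) pairs."""
    graph = {}
    for issuer, subject in certificates:
        graph.setdefault(issuer, []).append(subject)

    queue = deque([[trusted_key]])
    visited = {trusted_key}
    while queue:
        chain = queue.popleft()
        if chain[-1] == target_key:
            return chain            # chain of keys; adjacent pairs are certificates
        for nxt in graph.get(chain[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(chain + [nxt])
    return None                     # no chain found: authentication fails

# Example: Alice trusts her own key and has merged certificate repositories.
certs = [("K_alice", "K_bob"), ("K_bob", "K_carol"), ("K_dave", "K_erin")]
print(find_certificate_chain(certs, "K_alice", "K_carol"))  # ['K_alice', 'K_bob', 'K_carol']
print(find_certificate_chain(certs, "K_alice", "K_erin"))   # None
```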

877 citations


Patent
09 Jan 2003
TL;DR: In this paper, a power management architecture for an electrical power distribution system, or portion thereof, is disclosed, which includes multiple intelligent electronic devices (IEDs) distributed throughout the power distribution system to manage the flow and consumption of power from the system using real time communications.
Abstract: A power management architecture for an electrical power distribution system, or portion thereof, is disclosed. The architecture includes multiple intelligent electronic devices (“IED's”) distributed throughout the power distribution system to manage the flow and consumption of power from the system using real time communications. Power management application software and/or hardware components operate on the IED's and the back-end servers and inter-operate via the network to implement a power management application. The architecture provides a scalable and cost effective framework of hardware and software upon which such power management applications can operate to manage the distribution and consumption of electrical power by one or more utilities/suppliers and/or customers which provide and utilize the power distribution system. Autonomous communication on the network between IED's, back-end servers and other entities coupled with secure networks, themselves interconnected, via firewalls, by one or more unsecure networks, is facilitated by the use of a back-channel protocol. The back-channel protocol allows a device coupled with a secure network to solicit communications from a device on the unsecure network, thereby opening a back-channel through the firewall through which the unsecure network device may send unsolicited messages to the secure network device. Communications between multiple secure networks are accomplished using an unsecure device on an intermediary unsecure network to relay communications between the secure network devices using the protocol described above.

707 citations


Journal ArticleDOI
TL;DR: Commercial-server energy management now focuses on conserving power in the memory and microprocessor subsystems; because commercial workloads typically comprise multiple application programs, system-wide approaches are more applicable in these servers than techniques that primarily apply to single-application environments, such as those based on compiler optimizations.
Abstract: Servers, high-end multiprocessor systems running commercial workloads, have typically included extensive cooling systems and resided in custom-built rooms for high-power delivery. Recently, as transistor density and demand for computing resources have rapidly increased, even these high-end systems face energy-use constraints. Commercial-server energy management now focuses on conserving power in the memory and microprocessor subsystems. Because their workloads are typically structured as multiple application programs, system-wide approaches are more applicable to multiprocessor environments in commercial servers than techniques that primarily apply to single-application environments, such as those based on compiler optimizations.

482 citations


Proceedings Article
04 Aug 2003
TL;DR: This work devises a timing attack against OpenSSL that can extract private keys from an OpenSSL-based web server running on a machine in the local network.
Abstract: Timing attacks are usually used to attack weak computing devices such as smartcards. We show that timing attacks apply to general software systems. Specifically, we devise a timing attack against OpenSSL. Our experiments show that we can extract private keys from an OpenSSL-based web server running on a machine in the local network. Our results demonstrate that timing attacks against network servers are practical and therefore security systems should defend against them.

474 citations


Book ChapterDOI
21 Feb 2003
TL;DR: This paper explores the space of designing load-balancing algorithms that use the notion of “virtual servers” and presents three schemes that differ primarily in the amount of information used to decide how to re-arrange load.
Abstract: Most P2P systems that provide a DHT abstraction distribute objects among “peer nodes” by choosing random identifiers for the objects. This could result in an O(log N) imbalance. Besides, P2P systems can be highly heterogeneous, i.e., they may consist of peers that range from old desktops behind modem lines to powerful servers connected to the Internet through high-bandwidth lines. In this paper, we address the problem of load balancing in such P2P systems. We explore the space of designing load-balancing algorithms that use the notion of “virtual servers”. We present three schemes that differ primarily in the amount of information used to decide how to re-arrange load. Our simulation results show that even the simplest scheme is able to balance the load within 80% of the optimal value, while the most complex scheme is able to balance the load within 95% of the optimal value.
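
As a rough illustration of the virtual-server idea, the following Python sketch greedily moves virtual servers from the most loaded node to the least loaded node while doing so narrows the load spread. It is not any of the paper's three schemes; node names and loads are illustrative.

```python
# Minimal sketch of load balancing with "virtual servers", assuming a
# simplified greedy variant: repeatedly move one virtual server from the
# most loaded node to the least loaded node whenever that narrows the gap.
def rebalance(nodes):
    """`nodes` maps node id -> list of virtual-server loads.
    Moves virtual servers greedily until no move narrows the spread."""
    def load(n):
        return sum(nodes[n])

    while True:
        heavy = max(nodes, key=load)
        light = min(nodes, key=load)
        if not nodes[heavy]:               # nothing left to move
            return nodes
        gap = load(heavy) - load(light)
        # pick the virtual server whose transfer best narrows the gap
        best = min(nodes[heavy], key=lambda vs: abs(gap - 2 * vs))
        if abs(gap - 2 * best) >= gap:     # no improving move remains
            return nodes
        nodes[heavy].remove(best)
        nodes[light].append(best)

state = {"A": [50, 30, 20], "B": [5], "C": [10, 10]}
print(rebalance(state))   # loads end up far closer than the initial 100/5/20 split
```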

473 citations


Proceedings ArticleDOI
Robert O'Callahan1, Jong-Deok Choi1
11 Jun 2003
TL;DR: A formalization of lockset-based and happens-before-based approaches in a common framework is presented, allowing us to prove a "folk theorem" that happens-before detection reports fewer false positives than lockset-based detection (but can report more false negatives), and to prove that two key optimizations are correct.
Abstract: We present a new method for dynamically detecting potential data races in multithreaded programs. Our method improves on the state of the art in accuracy, in usability, and in overhead. We improve accuracy by combining two previously known race detection techniques -- lockset-based detection and happens-before-based detection -- to obtain fewer false positives than lockset-based detection alone. We enhance usability by reporting more information about detected races than any previous dynamic detector. We reduce overhead compared to previous detectors -- particularly for large applications such as Web application servers -- by not relying on happens-before detection alone, by introducing a new optimization to discard redundant information, and by using a "two phase" approach to identify error-prone program points and then focus instrumentation on those points. We justify our claims by presenting the results of applying our tool to a range of Java programs, including the widely-used Web application servers Resin and Apache Tomcat. Our paper also presents a formalization of lockset-based and happens-before-based approaches in a common framework, allowing us to prove a "folk theorem" that happens-before detection reports fewer false positives than lockset-based detection (but can report more false negatives), and to prove that two key optimizations are correct.
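
For intuition about the lockset half of such detectors, here is a minimal Python sketch in the spirit of the classic lockset algorithm (not the paper's combined lockset/happens-before method): each shared variable keeps the intersection of the lock sets held at its accesses, and an empty intersection flags a potential race. Variable and lock names are illustrative.

```python
# Minimal sketch of lockset-based race detection: a variable is considered
# consistently protected as long as at least one lock has been held across
# every access observed so far.
class LocksetDetector:
    def __init__(self):
        self.candidate_locks = {}   # variable -> locks still consistent with all accesses

    def on_access(self, variable, locks_held):
        held = set(locks_held)
        if variable not in self.candidate_locks:
            self.candidate_locks[variable] = held
        else:
            self.candidate_locks[variable] &= held   # intersect with locks held now
        if not self.candidate_locks[variable]:
            print(f"potential data race on {variable!r}")

d = LocksetDetector()
d.on_access("counter", {"L1"})        # thread A holds L1
d.on_access("counter", {"L1", "L2"})  # thread B holds L1 and L2 -> still protected by L1
d.on_access("counter", {"L2"})        # thread C holds only L2 -> candidate set empty, race reported
```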

442 citations


Journal ArticleDOI
TL;DR: This paper offers an overview of the CDN architecture and of popular CDN service providers.
Abstract: CDNs improve network performance and offer fast and reliable applications and services by distributing content to cache servers located close to users. The Web's growth has transformed communications and business services such that speed, accuracy, and availability of network-delivered content have become absolutely critical - both on their own terms and in terms of measuring Web performance. Proxy servers partially address the need for rapid content delivery by providing multiple clients with a shared cache location. In this context, if a requested object exists in a cache (and the cached version has not expired), clients get a cached copy, which typically reduces delivery time. CDNs act as trusted overlay networks that offer high-performance delivery of common Web objects, static data, and rich multimedia content by distributing content load among servers that are close to the clients. CDN benefits include reduced origin server load, reduced latency for end users, and increased throughput. CDNs can also improve Web scalability and disperse flash-crowd events. Here we offer an overview of the CDN architecture and popular CDN service providers.

430 citations


Proceedings ArticleDOI
27 Oct 2003
TL;DR: This paper presents a simple yet robust single-server solution for remote querying of encrypted databases on untrusted servers based on the use of indexing information attached to the encrypted database which can be used by the server to select the data to be returned in response to a query without the need of disclosing the database content.
Abstract: The scope and character of today's computing environments are progressively shifting from traditional, one-on-one client-server interaction to the new cooperative paradigm. It then becomes of primary importance to provide means of protecting the secrecy of the information, while guaranteeing its availability to legitimate clients. Operating on-line querying services securely on open networks is very difficult; therefore many enterprises outsource their data center operations to external application service providers. A promising direction towards prevention of unauthorized access to outsourced data is represented by encryption. However, data encryption is often supported for the sole purpose of protecting the data in storage and assumes trust in the server, which decrypts data for query execution. In this paper, we present a simple yet robust single-server solution for remote querying of encrypted databases on untrusted servers. Our approach is based on the use of indexing information attached to the encrypted database which can be used by the server to select the data to be returned in response to a query without the need of disclosing the database content. Our indexes balance the trade-off between efficiency requirements in query execution and protection requirements due to possible inference attacks exploiting indexing information. We also investigate quantitative measures to model inference exposure and provide some related experimental results.
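
A minimal sketch of the general idea, under simplifying assumptions: each encrypted record is stored with a coarse keyed-hash index of its search key, the server selects candidate rows by index value alone, and the client decrypts and discards false positives. This is not the paper's exact indexing scheme; it uses HMAC bucketing plus Fernet from the third-party `cryptography` package purely for illustration.

```python
# Minimal sketch of querying an encrypted table via attached indexing
# information. The server only ever sees (index, ciphertext) pairs.
import hashlib, hmac
from cryptography.fernet import Fernet

SECRET_INDEX_KEY = b"client-only-index-key"
NUM_BUCKETS = 8                          # coarse buckets limit what the index reveals
fernet = Fernet(Fernet.generate_key())   # encryption key also stays on the client

def index_of(value: str) -> int:
    digest = hmac.new(SECRET_INDEX_KEY, value.encode(), hashlib.sha256).digest()
    return digest[0] % NUM_BUCKETS

# "Server" storage: index value plus opaque ciphertext, nothing else.
server_rows = []
for name, salary in [("alice", 70000), ("bob", 55000), ("carol", 91000)]:
    ciphertext = fernet.encrypt(f"{name}:{salary}".encode())
    server_rows.append((index_of(name), ciphertext))

def query(name: str):
    bucket = index_of(name)                                      # computed by the client
    candidates = [c for idx, c in server_rows if idx == bucket]  # server-side selection
    results = []
    for c in candidates:                                         # client decrypts, filters
        plaintext = fernet.decrypt(c).decode()
        if plaintext.startswith(name + ":"):
            results.append(plaintext)
    return results

print(query("bob"))   # ['bob:55000']
```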

Patent
05 Feb 2003
TL;DR: Several ways of identifying users and collecting demographic information and market information are disclosed, including branding a browser with a unique identification in each user request, identifying a user by his key strokes or mouse clicks, gathering demographic information using multiple data sets and by monitoring network traffic.
Abstract: Several ways of identifying users and collecting demographic information and market information are disclosed, including branding a browser with a unique identification in each user request, identifying a user by his key strokes or mouse clicks, gathering demographic information using multiple data sets and by monitoring network traffic. Additionally, user requested content is distinguished from other, non-user content, and the performance of a server can be monitored and analyzed from a client perspective. Further, an Internet user's Internet data is routed to a known domain on the Internet, from which it is routed on to the intended recipient. The domain includes proxy servers which proxy the user's data requests to the domain, and database servers, which filter and build a database of the user's Internet usage. Particular data concerning certain behaviors of interest, such as purchasing data, is filtered into the database, and can form the basis for numerous market measures.

Patent
14 Jan 2003
TL;DR: In this article, a communication application server is proposed for supporting converged communications in a communication system, where the communication service requests from external endpoints, applications or other requesting entities are routed through the communication application servers.
Abstract: A communication application server for supporting converged communications in a communication system. The communication application server is responsive to communication service requests from external endpoints, applications or other requesting entities, and in one embodiment comprises at least first and second components. The first component is operative: (i) to process a given one of the communication service requests to identify at least one corresponding communication service supported by the communication application server; (ii) to determine one or more executable communication tasks associated with the identified communication service; and (iii) to establish communication with one or more external servers to carry out execution of at least a subset of the one or more executable communication tasks associated with the communication service. The second component is coupled between the first component and the one or more external servers, and provides, for each of the external servers, a corresponding interface for connecting the communication application server to the external server.

Journal ArticleDOI
TL;DR: A method is proposed for improving the performance of web servers servicing static HTTP requests by giving preference to requests for small files or requests with short remaining file size, in accordance with the SRPT (Shortest Remaining Processing Time) scheduling policy.
Abstract: Is it possible to reduce the expected response time of every request at a web server, simply by changing the order in which we schedule the requests? That is the question we ask in this paper. This paper proposes a method for improving the performance of web servers servicing static HTTP requests. The idea is to give preference to requests for small files or requests with short remaining file size, in accordance with the SRPT (Shortest Remaining Processing Time) scheduling policy. The implementation is at the kernel level and involves controlling the order in which socket buffers are drained into the network. Experiments are executed both in a LAN and a WAN environment. We use the Linux operating system and the Apache and Flash web servers. Results indicate that SRPT-based scheduling of connections yields significant reductions in delay at the web server. These result in a substantial reduction in mean response time and mean slowdown for both the LAN and WAN environments. Significantly, and counter to intuition, the requests for large files are only negligibly penalized or not at all penalized as a result of SRPT-based scheduling.
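
The scheduling idea itself is easy to sketch. The toy Python model below repeatedly sends one chunk of whichever pending response has the fewest bytes remaining; the real implementation does the equivalent at the kernel level when draining socket buffers. Request names and sizes are illustrative.

```python
# Minimal sketch of SRPT-style scheduling of static responses: always
# serve the connection with the shortest remaining processing time.
import heapq

def srpt_schedule(requests, chunk=1000):
    """`requests` maps request id -> response size in bytes.
    Returns the order in which requests complete."""
    heap = [(size, rid) for rid, size in requests.items()]
    heapq.heapify(heap)
    completion_order = []
    while heap:
        remaining, rid = heapq.heappop(heap)   # smallest remaining size first
        remaining -= chunk                     # send one chunk of this response
        if remaining <= 0:
            completion_order.append(rid)
        else:
            heapq.heappush(heap, (remaining, rid))
    return completion_order

print(srpt_schedule({"big.iso": 50_000, "page.html": 3_000, "logo.png": 1_000}))
# ['logo.png', 'page.html', 'big.iso'] -- small requests finish first
```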

Proceedings ArticleDOI
23 Jun 2003
TL;DR: The results for Web and proxy servers show that the fourth approach can provide energy savings of up to 23%, in comparison to conventional servers, without any degradation in server performance.
Abstract: In this paper we study four approaches to conserving disk energy in high-performance network servers. The first approach is to leverage the extensive work on laptop disks and power disks down during periods of idleness. The second approach is to replace high-performance disks with a set of lower power disks that can achieve the same performance and reliability. The third approach is to combine high-performance and laptop disks, such that only one of these two sets of disks is powered on at a time. This approach requires the mirroring (and coherence) of all disk data on the two sets of disks. Finally, the fourth approach is to use multi-speed disks, such that each disk is slowed down for lower energy consumption during periods of light load. We demonstrate that the fourth approach is the only one that can actually provide energy savings for network servers. In fact, our results for Web and proxy servers show that the fourth approach can provide energy savings of up to 23%, in comparison to conventional servers, without any degradation in server performance.

Journal ArticleDOI
19 Oct 2003
TL;DR: Capriccio, a scalable thread package for use with high-concurrency servers, is presented; it introduces linked stack management, which minimizes the amount of wasted stack space by providing safe, small, and non-contiguous stacks that can grow or shrink at run time.
Abstract: This paper presents Capriccio, a scalable thread package for use with high-concurrency servers. While recent work has advocated event-based systems, we believe that thread-based systems can provide a simpler programming model that achieves equivalent or superior performance. By implementing Capriccio as a user-level thread package, we have decoupled the thread package implementation from the underlying operating system. As a result, we can take advantage of cooperative threading, new asynchronous I/O mechanisms, and compiler support. Using this approach, we are able to provide three key features: (1) scalability to 100,000 threads, (2) efficient stack management, and (3) resource-aware scheduling. We introduce linked stack management, which minimizes the amount of wasted stack space by providing safe, small, and non-contiguous stacks that can grow or shrink at run time. A compiler analysis makes our stack implementation efficient and sound. We also present resource-aware scheduling, which allows thread scheduling and admission control to adapt to the system's current resource usage. This technique uses a blocking graph that is automatically derived from the application to describe the flow of control between blocking points in a cooperative thread package. We have applied our techniques to the Apache 2.0.44 web server, demonstrating that we can achieve high performance and scalability despite using a simple threaded programming model.

Patent
09 Apr 2003
TL;DR: In this paper, a cache hierarchy is established in the CDN comprising a given edge server region and either (a) a single parent region, or (b) a subset of the edge server regions.
Abstract: A tiered distribution service is provided in a content delivery network (CDN) having a set of surrogate origin (namely, “edge”) servers organized into regions and that provide content delivery on behalf of participating content providers, wherein a given content provider operates an origin server. According to the invention, a cache hierarchy is established in the CDN comprising a given edge server region and either (a) a single parent region, or (b) a subset of the edge server regions. In response to a determination that a given object request cannot be serviced in the given edge region, instead of contacting the origin server, the request is provided to either the single parent region or to a given one of the subset of edge server regions for handling, preferably as a function of metadata associated with the given object request. The given object request is then serviced, if possible, by a given CDN server in either the single parent region or the given subset region. The original request is only forwarded on to the origin server if the request cannot be serviced by an intermediate node.
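
A minimal sketch of the lookup path this describes, assuming a two-level hierarchy: an edge region that misses asks its parent region, and only the top of the hierarchy falls back to the origin server. The class, names, and dict-based caches are illustrative, not the patented implementation.

```python
# Minimal sketch of tiered CDN request handling: check the local cache,
# then the parent tier, and fetch from the origin only as a last resort.
class Region:
    def __init__(self, name, parent=None, origin_fetch=None):
        self.name = name
        self.cache = {}
        self.parent = parent
        self.origin_fetch = origin_fetch      # only used when no parent can serve

    def get(self, url):
        if url in self.cache:
            return self.cache[url], f"hit at {self.name}"
        if self.parent is not None:           # tiered distribution: try the parent region
            body, where = self.parent.get(url)
        else:
            body, where = self.origin_fetch(url), "fetched from origin"
        self.cache[url] = body                # populate this tier on the way back
        return body, where

origin = lambda url: f"<content of {url}>"
parent_region = Region("parent", origin_fetch=origin)
edge_region = Region("edge-eu", parent=parent_region)

print(edge_region.get("/video/intro.mp4"))   # miss everywhere -> fetched from origin
print(edge_region.get("/video/intro.mp4"))   # now a hit at the edge
```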


Patent
18 Jun 2003
TL;DR: In this paper, a communication interface and a common communication protocol allow data transfer between gaming machines and other network nodes such as gaming service servers, despite the presence of different proprietary gaming machine functions and proprietary communication protocols.
Abstract: Open architecture communication systems and methods are provided that allow flexible data transmission between gaming machines and other devices and nodes within a gaming machine network. The gaming machine and other devices employ a communication interface that sends and receives data via a common communication protocol and via common communication hardware. The communication interface and common communication protocol allow data transfer between gaming machines and other network nodes such as gaming service servers, despite the presence of different proprietary gaming machine functions and proprietary communication protocols and despite the presence of various proprietary hardware and proprietary communication protocols relied on by the servers.

Patent
12 Dec 2003
TL;DR: In this paper, hash values are used as unique identifiers for resources distributed across a network, and each one of a set of pool servers stores the hash values for a subset of computers within a LAN, so that the information within the pool server can be used to access the required resource.
Abstract: Provided are methods, apparatus and computer programs for enhanced access to resources within a network, including for controlling use of bandwidth-sensitive connections within a network and/or for automated recovery. Hash values are used as ‘unique’ identifiers for resources distributed across a network, and each one of a set of pool servers stores the hash values for a set of computers within a LAN. When a resource is required, a hash value representing the resource can be retrieved and compared with hash values stored at a pool server to determine whether the pool server holds a matching hash value. Any such matching hash value found on the pool server represents an identification of a local copy of the required resource, because of the uniqueness property of secure hash values. The information within the pool server can be used to access the required resource. If a large resource such as a BLOB or new version of a computer program can be obtained from another computer within a LAN, a reduction in reliance on bandwidth-sensitive Internet connections and reduced load on remote servers becomes possible.
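
A minimal sketch of the lookup this enables, assuming the pool server is reduced to a table mapping a SHA-256 digest to LAN hosts known to hold a matching copy; host names and the fallback behaviour are illustrative.

```python
# Minimal sketch of resolving a resource by its hash against a pool
# server's table before falling back to a remote (Internet) fetch.
import hashlib

pool_server = {}   # hex digest -> list of LAN hosts holding the resource

def register_local_copy(host, content: bytes):
    digest = hashlib.sha256(content).hexdigest()
    pool_server.setdefault(digest, []).append(host)

def locate(required_digest: str):
    """Return a LAN host holding the resource, or None to fall back to a remote fetch."""
    hosts = pool_server.get(required_digest)
    return hosts[0] if hosts else None

# A LAN machine registers a large blob it already downloaded.
blob = b"installer-v2.bin contents..."
register_local_copy("workstation-17", blob)

needed = hashlib.sha256(blob).hexdigest()     # identifier distributed instead of the blob itself
source = locate(needed)
print(source or "fetch over the Internet")    # 'workstation-17': fetch over the LAN instead
```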

Patent
02 Sep 2003
TL;DR: In this paper, a secure streaming content delivery system provides a plurality of content servers connected to a network that host customer content that can be cached and/or stored, e.g., images, video, text, and software.
Abstract: A secure streaming content delivery system provides a plurality of content servers connected to a network that host customer content that can be cached and/or stored, e.g., images, video, text, and/or software. The content servers respond to requests for customer content from users. The invention load balances user requests for cached customer content to the appropriate content server. A user makes a request to a customer's server/authorization server for delivery of the customer's content. The authorization server checks if the user is authorized to view the requested content. If the user is authorized, then the authorization server generates a hash value using the authorization server's secret key, the current time, a time-to-live value, and any other information that the customer has configured, and embeds it into the URL which is passed to the user. A content server receives a URL request from the user for customer content cached on the content server. The request is verified by the content server creating its own hash value using the customer server's secret key, the current time, a time-to-live value, and any other related information configured for the customer. If the hash value from the URL matches the content server's generated hash value, then the user's request is valid and within the expiration time period and the content server delivers the requested content to the user.
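
The URL-authorization step can be sketched with a keyed hash over the path and an expiry time; the scheme below uses HMAC-SHA256 and is a simplified stand-in for the patent's hash construction (which also folds in other customer-configured fields). The secret, path, and parameter names are illustrative.

```python
# Minimal sketch of signed, expiring content URLs: the authorization
# server signs, the content server recomputes the signature and checks it.
import hashlib, hmac, time
from urllib.parse import urlencode, urlparse, parse_qs

SECRET = b"shared-customer-secret"

def sign_url(path: str, ttl_seconds: int) -> str:
    expires = int(time.time()) + ttl_seconds
    token = hmac.new(SECRET, f"{path}|{expires}".encode(), hashlib.sha256).hexdigest()
    return f"{path}?{urlencode({'expires': expires, 'token': token})}"

def verify(path: str, expires: int, token: str) -> bool:
    if time.time() > expires:
        return False                                   # link has expired
    expected = hmac.new(SECRET, f"{path}|{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)        # content server recomputes and compares

url = sign_url("/videos/launch.mp4", ttl_seconds=300)
q = parse_qs(urlparse(url).query)
print(url)
print(verify("/videos/launch.mp4", int(q["expires"][0]), q["token"][0]))   # True
```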

Journal ArticleDOI
TL;DR: In this article, the authors introduce a multi-level transaction model that provides the necessary independence for the participating resource managers, e.g., local database and workflow servers, of organisations engaging in business transactions that are composed of interacting web services.
Abstract: Process oriented workflow systems and e-business applications require transactional support in order to orchestrate loosely coupled services into cohesive units of work and guarantee consistent and reliable execution. In this paper we introduce a multi-level transaction model that provides the necessary independence for the participating resource managers, e.g., local database and workflow servers, of organisations engaging in business transactions that are composed of interacting web services. We also present a taxonomy of e-business transaction features such as unconventional atomicity criteria, the need for support for business conversations and the need for distinguishing between three basic business transaction phases. In addition, we review current research and standard activities and outline the main ingredients of a business transaction framework necessary for building flexible e-business applications.

Patent
19 Jun 2003
TL;DR: In this paper, a network server system includes a download manager that manages the publication, purchase and delivery of digital products from multiple suppliers to wireless services subscribers in multiple domains, where each product can also be associated with multiple different provisioning models, each corresponding to a different set of device capabilities.
Abstract: A network server system includes a download manager that manages the publication, purchase and delivery of digital products from multiple suppliers to wireless services subscribers in multiple domains. Product suppliers can publish and manage their products on the server system via a computer network and make their products available to the subscribers for purchase or licensing. The subscribers in each domain can access the server remotely to purchase rights to download and use the products on associated wireless communication devices. Multiple different implementations of any product can be maintained, where each implementation corresponds to a different set of device capabilities. Each product can also be associated with multiple different provisioning models, each corresponding to a different set of device capabilities.

Patent
19 Jun 2003
TL;DR: In this article, a network server system includes a download manager that manages the publication, purchase and delivery of digital content from multiple content suppliers to wireless services subscribers in multiple domains, such as a wireless carrier or subsidiary thereof, a business enterprise, or other defined group of subscribers.
Abstract: A network server system includes a download manager that manages the publication, purchase and delivery of digital content from multiple content suppliers to wireless services subscribers in multiple domains. Each domain is defined as a different grouping of subscribers, such as a wireless carrier or subsidiary thereof, a business enterprise, or other defined group of subscribers. The download manager maintains data defining the multiple domains and associations between the domains and wireless services subscribers. Digital content suppliers can publish and manage their products on the server system via a computer network and make their products available to the subscribers for purchase or licensing. The subscribers in each of the domains can access the server remotely to purchase rights to download and use the digital content on associated wireless communication devices.

Proceedings ArticleDOI
Vivek Sharma1, A. Thomas1, Tarek Abdelzaher1, Kevin Skadron1, Zhijian Lu1 
03 Dec 2003
TL;DR: This paper investigates adaptive algorithms for dynamic voltage scaling in QoS-enabled Web servers to minimize energy consumption subject to service delay constraints and implements these algorithms inside the Linux kernel.
Abstract: Power management in data centers has become an increasingly important concern. Large server installations are designed to handle peak load, which may be significantly larger than in off-peak conditions. The increasing cost of energy consumption and cooling incurred in farms of high-performance Web servers makes low-power operation during off-peak hours desirable. This paper investigates adaptive algorithms for dynamic voltage scaling in QoS-enabled Web servers to minimize energy consumption subject to service delay constraints. We implement these algorithms inside the Linux kernel. The instrumented kernel supports multiple client classes with per-class deadlines. Energy consumption is minimized by using a feedback loop that regulates frequency and voltage levels to keep the synthetic utilization around the aperiodic schedulability bound derived in an earlier publication. Enforcing the bound ensures that deadlines are met. Our evaluation of an Apache server running on the modified Linux kernel shows that non-trivial off-peak energy savings are possible without sacrificing timeliness.
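
A minimal sketch of the feedback idea: a proportional controller nudges the CPU frequency so that measured utilization stays near a target bound, letting the frequency (and energy use) drop off-peak. The gain, the bound, and the frequency range below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of utilization-driven frequency scaling via a
# proportional feedback loop (normalized frequencies, toy numbers).
def next_frequency(current_freq, measured_utilization,
                   bound=0.58, gain=0.5, f_min=0.6, f_max=1.0):
    """Utilization above the bound pushes the frequency up (more capacity);
    utilization below the bound lets it fall to save energy."""
    error = measured_utilization - bound
    new_freq = current_freq + gain * error * current_freq
    return max(f_min, min(f_max, new_freq))

freq = 1.0   # start at full speed
for util in [0.20, 0.25, 0.30, 0.70, 0.75]:   # off-peak samples, then a load spike
    freq = next_frequency(freq, util)
    print(f"utilization={util:.2f} -> frequency={freq:.2f}")
```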

Patent
12 Nov 2003
TL;DR: In this article, a networked, online game system and method of operation, the system including a plurality of players, each operating a game playing computer interconnected over a network with a gaming server computer.
Abstract: A networked, online gaming system and method of operation, the system including a plurality of players, each operating a game playing computer interconnected over a network with a gaming server computer. The gaming server computer generates a profile for each of the players, which may include the player's gaming proficiency, and socioeconomic and physical data of the player. The gaming server computer matches the players (as teammates or opponents) to play a game based on the profile of the players, supervises the game played by the matched players, modifies controllable parameters of the game being played, and manages a reward point account provided for each player.

Journal ArticleDOI
TL;DR: A new remote user authentication scheme is presented that does not need to maintain any verification table and allows users to choose their passwords freely; in addition, a user can be removed from the system easily when the subscription expires.

Book ChapterDOI
08 Jan 2003
TL;DR: This paper proposes future directions for research in P2P systems, highlights problems that have not yet been studied in great depth, and suggests several open and important research problems for the community to address.
Abstract: In a Peer-To-Peer (P2P) system, autonomous computers pool their resources (e.g., files, storage, compute cycles) in order to inexpensively handle tasks that would normally require large costly servers. The scale of these systems, their "open nature," and the lack of centralized control pose difficult performance and security challenges. Much research has recently focused on tackling some of these challenges; in this paper, we propose future directions for research in P2P systems, and highlight problems that have not yet been studied in great depth. We focus on two particular aspects of P2P systems - search and security - and suggest several open and important research problems for the community to address.

Patent
06 Nov 2003
TL;DR: In this paper, a method for accurately determining the geographic location of a PC or other networked device on the Internet is presented, where the client collects an array of IP address and other network information as a result of the trace-routes, and the traceroute IP information is then transmitted to the service provider that is trying to identify the geographic locations of the client.
Abstract: A method for accurately determining the geographic location of a PC or other networked device on the Internet. Client software furnished by a service provider performs trace-route or other network analysis commands to known servers (e.g., eBay, Yahoo, Amazon) or even servers at random locations. The client collects an array of IP address and other network information as a result of the trace-routes, and the trace-route IP information is then transmitted to the service provider that is trying to identify the geographic location of the client. Using the array of IP addresses thus generated, the Internet server software can analyze location information of each Internet hop within each trace-route. For example, the server might look at the first five hops from the client to the server. If four of the five routers have addresses within the geographic area of interest, the server can conclude that the client is probably within the geographic area.
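
A minimal sketch of the majority-vote step described above: map the first few trace-route hops to coarse locations and accept a region only if enough of those hops agree. The hop list and the lookup table stand in for real trace-route output and an IP-geolocation database.

```python
# Minimal sketch of inferring a client's region from the locations of the
# first few trace-route hops, using a simple quorum of agreeing hops.
from collections import Counter

def infer_region(hop_ips, ip_to_region, hops_to_consider=5, quorum=4):
    regions = [ip_to_region.get(ip) for ip in hop_ips[:hops_to_consider]]
    counts = Counter(r for r in regions if r is not None)
    if not counts:
        return None
    region, votes = counts.most_common(1)[0]
    return region if votes >= quorum else None

ip_to_region = {
    "10.0.0.1": "Austin, TX", "72.14.0.9": "Austin, TX",
    "72.14.0.77": "Austin, TX", "72.14.1.3": "Austin, TX",
    "8.8.8.8": "Dallas, TX",
}
trace = ["10.0.0.1", "72.14.0.9", "72.14.0.77", "72.14.1.3", "8.8.8.8"]
print(infer_region(trace, ip_to_region))   # 'Austin, TX' -- 4 of the first 5 hops agree
```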

Patent
20 Aug 2003
TL;DR: In this article, a matching network system including communication devices, servers and software enables the provisioning of services and execution of transactions based on a plurality of private and public personality profiles and behavior models.
Abstract: A matching network system including communication devices, servers and software which enables the provisioning of services and execution of transactions based on a plurality of private and public personality profiles and behavior models of the users, of the communication devices, of the products/services and of the servers; in combination with the software resident at the communication device level and/or the local/network server level. Matching and searching processes based on a plurality of personality profiles wherein the information, communication and transactions are enabled to be matched with the user, the communication device and/or the servers. The communication device is a stationary device or a mobile device, such as a portable computing device, wireless telephone, cellular telephone, personal digital assistant, or a multifunction communication, computing and control device.