
Showing papers by "Jose Renato Santos published in 2010"


Patent
16 Jul 2010
TL;DR: In this article, the authors present a method for sharing a memory page of a source domain executing on a first virtual machine with a destination domain executing on a second virtual machine by adding an address translation entry for the memory page in a table.
Abstract: Example methods, apparatus, and articles of manufacture to share memory spaces for access by hardware and software in a virtual machine environment are disclosed. A disclosed example method involves enabling a sharing of a memory page of a source domain executing on a first virtual machine with a destination domain executing on a second virtual machine. The example method also involves mapping the memory page to an address space of the destination domain and adding an address translation entry for the memory page in a table. In addition, the example method involves sharing the memory page with a hardware device for direct memory access of the memory page by the hardware device.
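The abstract describes three steps: enable sharing of a source domain's page, map it into the destination domain's address space with a translation entry, and expose the same page to a hardware device for DMA. A minimal sketch of those steps, with purely illustrative names and a dictionary standing in for the translation table:

```python
class SharedPageTable:
    """Toy address-translation table: (domain, guest address) -> machine frame."""
    def __init__(self):
        self.entries = {}

    def share_page(self, src_domain, dst_domain, machine_frame, dst_addr):
        # Steps 1-2: map the source domain's page into the destination
        # domain's address space and record the translation entry.
        self.entries[(dst_domain, dst_addr)] = machine_frame

    def share_with_device(self, device_table, dst_domain, dst_addr):
        # Step 3: copy the entry into the device's (IOMMU-like) table so
        # the hardware can access the shared page directly via DMA.
        frame = self.entries[(dst_domain, dst_addr)]
        device_table[dst_addr] = frame
        return frame

table = SharedPageTable()
iommu = {}
table.share_page("domA", "domB", machine_frame=0x42000, dst_addr=0x1000)
frame = table.share_with_device(iommu, "domB", 0x1000)
```

The key point the sketch captures is that the CPU-side mapping and the device-side mapping refer to the same machine frame, so software and hardware see one shared page.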

43 citations


Proceedings ArticleDOI
23 May 2010
TL;DR: This work proposes enhancements to layer-two (L2) Ethernet switches to enable multipath L2 routing in scalable datacenters to replace an expensive router with commodity switches, while exposing a powerful network management interface for multipath load balancing, QoS differentiation, and resilience to faults.
Abstract: Most datacenter networks are based on specialized edge-core topologies, which are costly to build, difficult to maintain and consume too much power. We propose enhancements to layer-two (L2) Ethernet switches to enable multipath L2 routing in scalable datacenters. This replaces an expensive router with commodity switches. Our hash-based routing approach reuses and minimally extends hardware structures in high-volume switches, while exposing a powerful network management interface for multipath load balancing, QoS differentiation, and resilience to faults. Simulation results demonstrate near-optimal load balancing for uniform and non-uniform traffic patterns, and effective management of large datacenter networks independent of the number of traffic flows.
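The core of any hash-based multipath scheme is deterministic per-flow path selection: hashing a flow's header fields picks one of the equal-cost next hops, so packets within a flow stay in order while different flows spread across paths. A minimal sketch (not the paper's actual hardware hash) under that assumption:

```python
import hashlib

def pick_next_hop(flow_tuple, next_hops):
    """Hash-based multipath selection: packets of the same flow always
    take the same path; different flows spread across all paths."""
    digest = hashlib.sha256(repr(flow_tuple).encode()).digest()
    return next_hops[digest[0] % len(next_hops)]

hops = ["port0", "port1", "port2", "port3"]
flow = ("10.0.0.1", "10.0.0.2", 6, 5000, 80)   # src, dst, proto, sport, dport
# Same flow -> same choice, so packet ordering within a flow is preserved.
choice = pick_next_hop(flow, hops)
```

A management interface of the kind the abstract mentions could, for instance, bias load balancing by weighting how hash buckets map to ports; the sketch uses a plain uniform modulo for simplicity.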

32 citations


13 Mar 2010
TL;DR: An implementation of the new grant mechanism that fully supports driver domains (but not yet IOMMUs) is developed; performance results show that the new mechanism reduces per-packet overhead by up to 31% and increases network throughput by up to 52%.
Abstract: Xen's memory sharing mechanism, called the grant mechanism, is used to share I/O buffers in guest domains' memory with a driver domain. Previous studies have identified the grant mechanism as a significant source of network I/O overhead in Xen. This paper describes a redesigned grant mechanism to significantly reduce the associated overheads. Unlike the original grant mechanism, the new mechanism allows guest domains to unilaterally issue and revoke grants. As a result, the new mechanism makes it simple for the guest OS to reduce the number of grant issue and revoke operations that are needed for I/O by taking advantage of temporal and/or spatial locality in its use of I/O buffers. Another benefit of the new mechanism is that it provides a unified interface for memory sharing, whether between guest and driver domains, or between guest domains and I/O devices using IOMMU hardware. We have developed an implementation of the new grant mechanism that fully supports driver domains, but not yet IOMMUs. The paper presents performance results using this implementation which show that the new mechanism reduces per-packet overhead by up to 31% and increases network throughput by up to 52%.
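The distinguishing property described above is that the guest issues and revokes grants unilaterally, without a round trip per operation, which lets it keep grants alive across reuses of the same I/O buffer. A toy model of such a guest-managed grant table (the names are illustrative, not Xen's actual API):

```python
class GrantTable:
    """Guest-owned grant table: grant reference -> (frame, readonly flag)."""
    def __init__(self):
        self.grants = {}
        self.next_ref = 0

    def issue(self, frame, readonly=True):
        # Unilateral issue: the guest adds an entry in its own table.
        ref = self.next_ref
        self.next_ref += 1
        self.grants[ref] = (frame, readonly)
        return ref

    def revoke(self, ref):
        # Unilateral revoke: the guest simply drops the entry. Reusing the
        # same buffer (temporal locality) can skip issue/revoke pairs.
        del self.grants[ref]

    def lookup(self, ref):
        # The driver domain (or, with IOMMU hardware, a device-side check)
        # validates a grant before mapping the frame.
        return self.grants.get(ref)

table = GrantTable()
ref = table.issue(frame=0x9000)
found = table.lookup(ref)
table.revoke(ref)
```

The unified interface the abstract mentions falls out naturally here: the same lookup path serves a driver domain in software or an IOMMU in hardware.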

29 citations


Proceedings ArticleDOI
25 Oct 2010
TL;DR: The sNICh architecture is presented, which is a combination of a network interface card and switching accelerator for modern virtualized servers that outperforms both of these existing solutions and also exhibits better scalability.
Abstract: Virtualization has fundamentally changed the data center network. The last hop of the network is no longer handled by a physical network switch, but rather is typically performed in software inside the server to switch among virtual machines hosted by that server. In this paper, we present the concept of a sNICh, which is a combination of a network interface card and switching accelerator for modern virtualized servers. The sNICh architecture exploits the proximity of the switching accelerator to the server by carefully dividing the network switching tasks between them. This division enables the sNICh to address the resource intensiveness of exclusively software-based approaches and the scalability limits of exclusively hardware-based approaches. Essentially, the sNICh hardware performs basic flow-based switching and the sNICh software handles flow setup based on packet filtering rules. The sNICh also minimizes I/O bus bandwidth utilization by transferring, whenever possible, inter-virtual machine traffic within the main memory. We also present a preliminary evaluation of this architecture using software emulation. We compare the performance of the sNICh with two existing software solutions in Xen, the Linux bridge and Open vSwitch. Our results show that the sNICh outperforms both of these existing solutions and also exhibits better scalability.
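The hardware/software split described above follows a familiar pattern: a flow table handles known flows in the fast path, and a table miss punts to software, which evaluates packet-filtering rules and installs a flow entry for subsequent packets. A minimal sketch of that pattern, with illustrative names only:

```python
class SNIChModel:
    """Toy model: 'hardware' flow table plus 'software' rule-based setup."""
    def __init__(self, rules):
        self.flow_table = {}   # hardware fast path: flow key -> output port
        self.rules = rules     # software slow path: callable over flow keys

    def switch(self, flow_key):
        if flow_key in self.flow_table:
            return self.flow_table[flow_key]   # fast path: hardware hit
        port = self.rules(flow_key)            # slow path: consult rules
        self.flow_table[flow_key] = port       # install entry for later packets
        return port

# Hypothetical rule: traffic destined to 10.0.0.1 stays local to vm1.
snich = SNIChModel(rules=lambda key: "vm1" if key[1] == "10.0.0.1" else "uplink")
first = snich.switch(("10.0.0.9", "10.0.0.1"))    # miss: software sets up flow
second = snich.switch(("10.0.0.9", "10.0.0.1"))   # hit: hardware path
```

Keeping inter-VM traffic in main memory, as the abstract notes, means a "vm1"-bound flow here would never cross the I/O bus twice.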

27 citations


Patent
29 Apr 2010
TL;DR: In this article, the authors present a system and method for identifying a memory page that is accessible via a common physical address, where the common physical address is simultaneously accessed by a hypervisor remapping the physical address to a machine address and used as part of a DMA operation generated by an I/O device that is programmed by a VM.
Abstract: Illustrated is a system and method for identifying a memory page that is accessible via a common physical address, the common physical address simultaneously accessed by a hypervisor remapping the physical address to a machine address, and the physical address used as part of a DMA operation generated by an I/O device that is programmed by a VM. It also includes transmitting data associated with the memory page as part of a memory disaggregation regime, the memory disaggregation regime to include an allocation of an additional memory page, on a remote memory device, to which the data will be written. It further includes updating a P2M translation table associated with the hypervisor, and an IOMMU translation table associated with the I/O device, to reflect a mapping from the physical address to a machine address associated with the remote memory device and used to identify the additional memory page.
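The final step described above updates two translation tables in tandem: the hypervisor's P2M table and the device's IOMMU table must both map the physical address to the new remote machine address, or CPU accesses and DMA accesses would diverge. A hedged sketch of that invariant, with dictionaries standing in for the tables and all names illustrative:

```python
def remap_to_remote(p2m, iommu, phys_addr, remote_machine_addr):
    """After the page's data is copied to the remote memory device,
    update both translation tables to point at the remote machine address."""
    p2m[phys_addr] = remote_machine_addr     # hypervisor-side translation
    iommu[phys_addr] = remote_machine_addr   # I/O-device-side translation
    # Invariant: both views of the common physical address must agree.
    return p2m[phys_addr] == iommu[phys_addr]

p2m = {0x2000: 0xAA000}     # physical address currently maps to local frame
iommu = {0x2000: 0xAA000}
ok = remap_to_remote(p2m, iommu, 0x2000, remote_machine_addr=0xBB000)
```

The sketch omits the data transfer itself and any quiescing of in-flight DMA, which a real disaggregation mechanism would have to handle.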

25 citations


Patent
02 Nov 2010
TL;DR: In this article, a network node determines whether the rule identifier included in a cache entry of a packet-filtering rule-set results cache is of higher priority than the highest-priority rule corresponding to the rule-set version identifier included in that entry.
Abstract: Example embodiments relate to selective invalidation of packet filtering cache results based on rule priority. In example embodiments, a network node determines whether a rule identifier included in a cache entry of a cache of results of a packet filtering rule set is of a higher priority than a highest priority rule corresponding to a rule set version identifier included in the cache entry. If so, the network node may apply an action included in the cache entry.
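The validity check described above can be sketched in a few lines: a cached result is still usable if its rule outranks the highest-priority rule associated with the cached rule-set version. The sketch below assumes a numeric convention where a lower number means higher priority; all names are illustrative:

```python
def cache_entry_valid(entry, highest_priority_for_version):
    """entry = (rule_priority, cached_version, action).
    Valid only if the cached rule strictly outranks the highest-priority
    rule corresponding to that rule-set version identifier."""
    rule_priority, _version, _action = entry
    return rule_priority < highest_priority_for_version

entry = (2, 7, "drop")   # cached result from a priority-2 rule, version 7
still_valid = cache_entry_valid(entry, highest_priority_for_version=5)
invalidated = cache_entry_valid(entry, highest_priority_for_version=1)
```

When the check passes, the node can apply the cached action directly; when it fails, only that entry needs re-evaluation, which is what makes the invalidation selective.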

20 citations


Patent
30 May 2010
TL;DR: A memory has a page to store code executable by a processor. As described in this paper, a management component injects the code into a virtual machine and indicates, within a memory table for the virtual machine, that the page of the memory has an injected-code type.
Abstract: A memory has a page to store code executable by a processor. A management component is to inject the code into a virtual machine. The management component is to indicate within a memory table for the virtual machine that the page of the memory has an injected code type.
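The mechanism above reduces to two actions: place the code in a page, and tag that page with an injected-code type in the VM's memory table. A toy sketch of both, with purely illustrative names:

```python
INJECTED_CODE = "injected_code"   # hypothetical page-type tag

def inject_code(memory_table, vm_pages, page_addr, code_bytes):
    """Management-component sketch: write code into a VM page and record
    the page's type in the per-VM memory table."""
    vm_pages[page_addr] = code_bytes          # place the code in the page
    memory_table[page_addr] = INJECTED_CODE   # tag the page's type

mem_table, pages = {}, {}
inject_code(mem_table, pages, 0x7000, b"\x90\xc3")   # e.g. x86 NOP; RET stub
```

Tagging the page type lets the rest of the system (e.g. the hypervisor) treat injected pages differently from ordinary guest memory.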

9 citations