
Showing papers on "Latency (engineering) published in 2007"


Proceedings ArticleDOI
A. Kumary1, P. Kunduz2, A.P. Singhx2, L.-S. Pehy1, N.K. Jhay 
01 Oct 2007
TL;DR: This paper presents a detailed design of a novel high throughput and low latency switch allocation mechanism, a non-speculative single-cycle router pipeline which uses advanced bundles to remove control setup overhead, a low-complexity virtual channel allocator and a dynamically-managed shared buffer design which uses prefetching to minimize critical path delay.
Abstract: As chip multiprocessors (CMPs) become the only viable way to scale up and utilize the abundant transistors made available in current microprocessors, the design of on-chip networks is becoming critically important. These networks face unique design constraints and are required to provide extremely fast and high bandwidth communication, yet meet tight power and area budgets. In this paper, we present a detailed design of our on-chip network router targeted at a 36-core shared-memory CMP system in 65 nm technology. Our design targets an aggressive clock frequency of 3.6 GHz, thus posing tough design challenges that led to several unique circuit and microarchitectural innovations and design choices, including a novel high throughput and low latency switch allocation mechanism, a non-speculative single-cycle router pipeline which uses advanced bundles to remove control setup overhead, a low-complexity virtual channel allocator and a dynamically-managed shared buffer design which uses prefetching to minimize critical path delay. Our router takes up 1.19 mm² of area and expends 551 mW of power at 10% activity, delivering single-cycle no-load latency at a 3.6 GHz clock frequency while achieving a peak switching data rate in excess of 4.6 Tbits/s per router node.

217 citations


Proceedings Article
23 Sep 2007
TL;DR: This paper models the distributed load shedding problem as a linear optimization problem and proposes two alternative solution approaches: a solver-based centralized approach, and a distributed approach based on metadata aggregation and propagation.
Abstract: In distributed stream processing environments, large numbers of continuous queries are distributed onto multiple servers. When one or more of these servers become overloaded due to bursty data arrival, excessive load needs to be shed in order to preserve low latency for the query results. Because of the load dependencies among the servers, load shedding decisions on these servers must be well-coordinated to achieve end-to-end control on the output quality. In this paper, we model the distributed load shedding problem as a linear optimization problem, for which we propose two alternative solution approaches: a solver-based centralized approach, and a distributed approach based on metadata aggregation and propagation, whose centralized implementation is also available. Both of our solutions are based on generating a series of load shedding plans in advance, to be used under certain input load conditions. We have implemented our techniques as part of the Borealis distributed stream processing system. We present experimental results from our prototype implementation showing the performance of these techniques under different input and query workloads.

171 citations


Patent
26 Oct 2007
TL;DR: In this paper, a thin client intelligent transportation system is proposed, where geospatial roadmaps and map matching systems are maintained in roadside nodes and are more fully exploited, and the thin client approach offers significant advantages over thick client approaches that rely on on-vehicle maps and matching systems.
Abstract: A thin client intelligent transportation system wherein geospatial roadmaps and map matching systems are maintained in roadside nodes and are more fully exploited. The thin client approach offers significant advantages over thick client approaches that rely on on-vehicle maps and map matching systems, including reduced complexity of on-board equipment and elimination of map integrity issues. The thin client approach also offers significant advantages over systems wherein the vehicle is required to access maps and map matching systems in real-time from a remote data center, including the ability to meet the low latency requirements for many vehicle safety applications. The present invention in some embodiments has added advantages in that it exploits roadside maps and map matching systems in revenue generating applications that are not directly related to passenger safety.

73 citations


Proceedings ArticleDOI
10 Oct 2007
TL;DR: This paper proposes a different approach to enhance edge computing in a wide area data replication protocol that enables the delivery of dynamic content with full consistency guarantees and with all the benefits of edge computing, such as low latency and high scalability.
Abstract: As the use of the Internet continues to grow explosively, edge computing has emerged as an important technique for delivering Web content over the Internet. Edge computing moves data and computation closer to end-users for fast local access and better load distribution. Current approaches use caching, which does not work well with highly dynamic data. In this paper, we propose a different approach to enhance edge computing. Our approach lies in a wide area data replication protocol that enables the delivery of dynamic content with full consistency guarantees and with all the benefits of edge computing, such as low latency and high scalability. What is more, the proposed solution is fully transparent to the applications that are brought to the edge. Our extensive evaluations in a real wide area network using TPC-W show promising results.

69 citations


Proceedings ArticleDOI
Amitava Ghosh1, Rapeepat Ratasuk1, Igor Filipovich1, Jun Tan1, Weimin Xiao1 
22 Apr 2007
TL;DR: A preliminary design and procedure for the random access channel used to establish a connection when the mobile is not yet time-synchronized to the network in the uplink is provided.
Abstract: Comprehensive long term evolution of the Universal Mobile Telecommunications System (UMTS) specifications is currently ongoing to provide significant improvement over the current release. Important goals for the evolved system include significantly improved system capacity and coverage, low latency, reduced operating costs, multi-antenna support, flexible bandwidth operations and seamless integration with existing systems. To ensure low latency, users must be able to establish a connection to the network quickly. This paper provides a preliminary design and procedure for the random access channel used to establish a connection when the mobile is not yet time-synchronized to the network in the uplink.

67 citations


Proceedings ArticleDOI
10 Apr 2007
TL;DR: In this paper, a remote surgery experiment between Japan and Thailand using a research and development Internet is presented and a newly developed low latency CODEC system is introduced to shorten the time-delay.
Abstract: Remote surgery is one of the most desired applications in the context of recent advanced medical technologies. For a future expansion of remote surgery, it is important to use conventional network infrastructures such as the Internet. However, with such conventional network infrastructures, we confront time-delay problems in data transmission. In this paper, a remote surgery experiment between Japan and Thailand using a research and development Internet is presented. In the experiment, the image and audio information was transmitted by a newly developed low latency CODEC system to shorten the time-delay. By introducing the low latency CODEC system, the time-delay was shortened compared with past remote surgery experiments despite the longer distance. We also conducted several network measurements, including a comparison between TCP/IP and UDP/IP for the control signal transmission.

66 citations


Journal ArticleDOI
TL;DR: Negative impedance converters inserted at regular intervals along an on-chip line are shown to reduce losses from more than 1 dB/mm to less than 0.3 dB/mm at 10 GHz, a factor-of-three improvement in power and a one-and-a-half-times improvement in latency over an optimally repeated RC line of the same wire width.
Abstract: In this paper, we describe the use of distributed loss compensation to provide nearly transmission-line behavior for long on-chip interconnects. Negative impedance converters (NICs) inserted at regular intervals along an on-chip line are shown to reduce losses from more than 1 dB/mm to less than 0.3 dB/mm at 10 GHz. Results are presented for a 14-mm 3-Gb/s on-chip double-data-rate (DDR) link in 0.18-µm CMOS technology, with a measured latency of 12.1 ps/mm and an energy consumption of less than 2 pJ/b with a BER < 10⁻¹⁴. This constitutes a factor-of-three improvement in power and a one-and-a-half-times improvement in latency over an optimally repeated RC line of the same wire width.

47 citations


Proceedings ArticleDOI
12 Nov 2007
TL;DR: An open source FPGA based NoC architecture with low area overhead, high throughput and low latency compared to other published works is described.
Abstract: Networks on chip (NoC) have long been seen as a potential solution to the problems encountered when implementing large digital hardware designs. In this paper we describe an open source FPGA-based NoC architecture with low area overhead, high throughput and low latency compared to other published works. The architecture has been optimized for Xilinx FPGAs and the NoC is capable of operating at a frequency of 260 MHz in a Virtex-4 FPGA. We have also developed a bridge so that generic Wishbone bus compatible IP blocks can be connected to the NoC.

31 citations


Proceedings ArticleDOI
18 Feb 2007
TL;DR: The generic HyperTransport (HT) core is specially optimized to achieve very low latency and has been verified in-system using a rapid prototyping methodology with FPGAs; this exhaustive verification and the generic design allow mapping to both ASICs and FPGAs.
Abstract: This paper presents the design of a generic HyperTransport (HT) core. It is specially optimized to achieve a very low latency. The core has been verified in-system using the rapid prototyping methodology with FPGAs. This exhaustive verification and the generic design allow the mapping to both ASICs and FPGAs. The implementation described in this paper supports a link width of 16 bit, as is used in Opteron based systems. On a Xilinx Virtex-4 FX60, the core supports a link frequency of 400 MHz DDR and offers a maximum bidirectional bandwidth of 3.6 GB/s. The in-system verification has been performed using a custom FPGA board that has been plugged into a HyperTransport Extension Connector (HTX) of a standard Opteron based mainboard. HTX slots in Opteron based mainboards allow very high-bandwidth, low latency communication, as the HTX device is directly connected to one of the HyperTransport links of the processor. HyperTransport is a packet-based interconnect technology for low-latency, high-bandwidth point-to-point connections. The HT core in combination with the HTX board is an ideal base for prototyping systems and FPGA coprocessors. The HT core is available as open source.

30 citations


Book ChapterDOI
28 Aug 2007
TL;DR: A novel mapping and scheduling algorithm that minimizes the latency of workflows that act on a stream of input data, while satisfying throughput requirements is developed.
Abstract: In many application domains, it is desirable to meet some user-defined performance requirement while minimizing resource usage and optimizing additional performance parameters. For example, application workflows with real-time constraints may have strict throughput requirements and desire a low latency or response-time. The structure of these workflows can be represented as directed acyclic graphs of coarse-grained application tasks with data dependences. In this paper, we develop a novel mapping and scheduling algorithm that minimizes the latency of workflows that act on a stream of input data, while satisfying throughput requirements. The algorithm employs pipelined parallelism and intelligent clustering and replication of tasks to meet throughput requirements. Latency is minimized by exploiting task parallelism and reducing communication overheads. Evaluation using synthetic benchmarks and application task graphs shows that our algorithm 1) consistently meets throughput requirements even when other existing schemes fail, 2) produces lower-latency schedules, and 3) results in lower resource usage.

29 citations


30 Jan 2007
TL;DR: This paper investigates the claim that a variant of the timing attack that does not require a global adversary can be applied to any low latency anonymous network, and draws design principles for secure low latency anonymous network systems (also secure against the above attack).
Abstract: Low latency anonymous network systems, such as Tor, were considered secure against timing attacks when the threat model does not include a global adversary. In this threat model the adversary can only see part of the links in the system. In a recent paper entitled Low-cost traffic analysis of Tor, it was shown that a variant of timing attack that does not require a global adversary can be applied to Tor. More importantly, the authors claimed that their attack would work on any low latency anonymous network system. The implication of the attack is that all low latency anonymous networks will be vulnerable to this attack even if there is no global adversary. In this paper, we investigate this claim against other low latency anonymous networks, including Tarzan and Morphmix. Our results show that, in contrast to the claim of the aforementioned paper, the attack may not be applicable in all cases. Based on our analysis, we draw design principles for secure low latency anonymous network systems (also secure against the above attack).

Proceedings ArticleDOI
01 Dec 2007
TL;DR: Through simulations, it is shown that the proposed approach can provide substantial energy saving in this class of sensor application compared to the traditional multihop approach used alone.
Abstract: We propose an energy-efficient hybrid data collection architecture based on controllably mobile infrastructure for a class of applications in which sensor networks provide both low-priority and high-priority data. High-priority data require a data delivery scheme with low latency and high fidelity, while low-priority data may tolerate high-latency delivery. Our approach exploits the design of a network that supports a hybrid data delivery scheme to enhance network performance and reduce total network energy usage. In our system design, two delivery schemes are deployed for purposes of comparison. The first is the traditional ad hoc approach, which delivers high-priority data with high fidelity and low latency. The second introduces a controllable infrastructure in the sensor field, which acts as a low-priority data collection agent. Through simulations, we show that our proposed approach can provide substantial energy savings in this class of sensor applications compared to the traditional multihop approach used alone.

Journal ArticleDOI
TL;DR: Results of fault tolerance and reliability analysis of the primary Data Vortex (DV) switch are presented, and a new Augmented Data Vortex (ADV) switch fabric is proposed to improve the fault tolerance of the primary DV switch.

Proceedings ArticleDOI
06 Nov 2007
TL;DR: This demonstration paper presents a multi-radio MAC protocol and a prototype sensor node platform which supports dual frequency bands of operation and demonstrates the improved performance metrics and spectrum agility support.
Abstract: In this demonstration paper, we present a multi-radio MAC protocol and a prototype sensor node platform which supports dual frequency bands of operation. The multi-radio MAC protocol combines the advantages of both high and low frequency bands to give energy-efficient performance with high throughput and low latency in several applications. Our prototype supports spectrum agility, which is becoming important due to the increasingly crowded spectrum. Besides demonstrating the improved performance metrics and spectrum agility support, we will also present the flexibility provided by our MAC protocol.

Proceedings Article
01 Jan 2007
TL;DR: It is shown that, despite many constraints, it is possible to get latencies as low as a few milliseconds on a standard personal computer using Java.
Abstract: This paper discusses the implementation of real-time and low latency audio processing in Java. Despite the fact that Java SE is widespread and has a large programmer base, it is clearly targeted neither at real-time nor at low-latency applications. As such, doing good audio processing with this language is a challenging task and various issues have to be taken into account: these include limitations or properties of the audio drivers, the kernel and operating system, the audio API, the Java virtual machine and the garbage collector. We present a concrete Java audio processing framework called Decklight 4 that takes up the challenge. We show that, despite many constraints, it is possible to get latencies as low as a few milliseconds on a standard personal computer using Java. We present the various elements of our implementation that allow such a result to be achieved, and we validate them through experimental measurements.
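As a rough illustration of where such latencies come from, the delay contributed by a single audio buffer is simply its length in samples divided by the sample rate. This is a generic back-of-the-envelope calculation, not code from the paper:

```python
def buffer_latency_ms(frames: int, sample_rate: int = 44100) -> float:
    """One-way latency in milliseconds added by one audio buffer of
    `frames` samples at the given sample rate."""
    return 1000.0 * frames / sample_rate

# A single 128-frame buffer at 44.1 kHz adds about 2.9 ms; the driver,
# JVM, and garbage collector then add further delay on top of this floor.
print(round(buffer_latency_ms(128), 1))   # 2.9
```

Millisecond-scale results like those reported therefore imply buffer sizes on the order of a few hundred frames at most, end to end.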

Journal ArticleDOI
TL;DR: A method for an automated synthesis of low-latency asynchronous controllers is presented and a software tool called OptiMist, which successfully interfaces conventional EDA design flow for simulation, timing analysis, and place-and-route, is developed.
Abstract: A method for an automated synthesis of low-latency asynchronous controllers is presented. It is based on a direct mapping approach and starts from an initial specification in the form of a signal transition graph (STG). This STG is split into a device and an environment, which synchronize via a communication net that models wires. The device is represented as a tracker and a bouncer. The tracker follows the state of the environment and provides reference points to the device outputs. The bouncer communicates with the environment and generates output events in response to the input events according to the state of the tracker. This two-level architecture provides an efficient interface to the environment and is convenient for subsequent mapping into a circuit netlist. A set of optimization heuristics is developed to reduce the latency and size of the circuit. As a result of this paper, a software tool called OptiMist has been developed. Its low algorithmic complexity allows large specifications to be synthesized, which is not possible for the tools based on state-space exploration. OptiMist successfully interfaces conventional EDA design flow for simulation, timing analysis, and place-and-route.

Proceedings ArticleDOI
26 Mar 2007
TL;DR: A set of modules is designed that work together to provide network fault tolerance for user-level applications leveraging the APM feature, and it is shown that APM incurs negligible overhead in the absence of faults in the system.
Abstract: High computational power of commodity PCs combined with the emergence of low latency and high bandwidth interconnects has escalated the trends of cluster computing. Clusters with InfiniBand are being deployed, as reflected in the TOP 500 Supercomputer rankings. However, increasing scale of these clusters has reduced the mean time between failures (MTBF) of components. Network component is one such component of clusters, where failure of network interface cards (NICs), cables and/or switches breaks existing path(s) of communication. InfiniBand provides a hardware mechanism, automatic path migration (APM), which allows user transparent detection and recovery from network fault(s), without application restart. In this paper, we design a set of modules; which work together for providing network fault tolerance for user level applications leveraging the APM feature. Our performance evaluation at the MPI layer shows that APM incurs negligible overhead in the absence of faults in the system. In the presence of network faults, APM incurs negligible overhead for reasonably long running applications.

Proceedings ArticleDOI
01 Oct 2007
TL;DR: An improved method for computing aggregate ETX for a path that increases end-to-end throughput is presented, along with a greedy algorithm based on directed diffusion that reinforces routes with high link quality and low latency, thus maximizing throughput and minimizing delay.
Abstract: Wireless sensor networks are distributed event-based systems with severe energy constraints, variable quality links, low data-rate and many-to-one event-to-sink flows. Communication algorithms for sensor networks, such as directed diffusion, are designed to operate efficiently under these constraints. However, directed diffusion is not efficient in more challenging domains, such as video sensor networks, because of the high throughput and low delay requirements of multimedia data. Instead, we propose EDGE - a greedy algorithm based on directed diffusion that reinforces routes with high link quality and low latency, thus maximizing throughput and minimizing delay. ETX (Expected Transmission Count) is used as the metric for measuring link quality. This paper presents an improved method for computing aggregate ETX for a path that increases end-to-end throughput. Simulation results with CBR (constant bit rate) traffic show that our proposed distributed algorithm selects routes that give better throughput than those reinforced by standard directed diffusion, while maintaining low delay.
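For reference, the standard ETX definitions that the paper builds on can be sketched as follows. This is the conventional per-link formula and sum-of-links aggregate, not the paper's improved computation; function names are illustrative:

```python
def link_etx(d_f: float, d_r: float) -> float:
    """ETX of one link: expected number of transmissions until a data
    packet and its ACK both get through, given the forward (d_f) and
    reverse (d_r) delivery ratios."""
    return 1.0 / (d_f * d_r)

def path_etx(links) -> float:
    """Conventional aggregate ETX: the sum of the per-link ETX values
    along the path, given as (d_f, d_r) pairs."""
    return sum(link_etx(d_f, d_r) for d_f, d_r in links)

# A lossless two-hop path costs exactly two expected transmissions.
assert path_etx([(1.0, 1.0), (1.0, 1.0)]) == 2.0
# Lossy links inflate the expected count, steering route reinforcement
# toward higher-quality paths.
print(round(path_etx([(0.9, 0.9), (0.8, 1.0)]), 3))   # 2.485
```

Routes with the lowest aggregate ETX are the ones a quality-aware reinforcement scheme such as EDGE would prefer.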

Proceedings ArticleDOI
01 Mar 2007
TL;DR: A fully integrated scheme of self-configuration and self-organization is proposed; it can better resolve distributed address allocation using the structure generated by itself with low message overhead, and provides significant energy savings compared with other schemes, as well as low latency.
Abstract: To enable spontaneous communications in wireless sensor networks, self-configuration and self-organization are two fundamental mechanisms. However, the majority of prior contributions in ad hoc networks suffer from the low processing capability, limited storage space, energy constraints and large number of nodes when applied to wireless sensor networks. Moreover, self-configuration and self-organization have always been considered as two independent mechanisms. The authors believe that this consideration leads to inefficient networks and redundant protocol cost. Hence, in this paper a fully integrated scheme of self-configuration and self-organization is proposed. It can better resolve distributed address allocation using the structure generated by itself, with low message overhead. It also provides significant energy savings compared with other schemes, as well as low latency.

Patent
23 Jul 2007
TL;DR: In this article, a message is stored in non-volatile, low-latency memory with an associated destination list and other metadata; the message is only removed from this low-latency non-volatile storage when an acknowledgement has been received from each destination indicating that the message has been successfully received, and if the message has been in such memory for a period exceeding a time threshold, or if memory resources are running low, the message and its associated destination list and other metadata are migrated to other persistent storage.
Abstract: A method of providing assured message delivery with low latency and high message throughput, in which a message is stored in non-volatile, low-latency memory with an associated destination list and other metadata. The message is only removed from this low-latency non-volatile storage when an acknowledgement has been received from each destination indicating that the message has been successfully received; if the message has been in such memory for a period exceeding a time threshold, or if memory resources are running low, the message and its associated destination list and other metadata are migrated to other persistent storage. The data storage engine can also be used for other high-throughput applications.

Proceedings ArticleDOI
26 Mar 2007
TL;DR: A novel link-level flow control protocol is proposed that enables high-performance scalable routers based on the increasingly popular buffered crossbar architecture to scale to higher port counts without sacrificing performance.
Abstract: High-radix switches are desirable building blocks for large computer interconnection networks, because they are more suitable to convert chip I/O bandwidth into low latency and low cost than low-radix switches. Unfortunately, most existing switch architectures do not scale well to a large number of ports. For example, the complexity of the buffered crossbar architecture scales quadratically with the number of ports. Compounded with support for long round-trip times and many virtual channels, the overall buffer requirements limit the feasibility of such switches to modest port counts. Compromising on the buffer sizing leads to a drastic increase in latency and reduction in throughput, as long as traditional credit flow control is employed at the link level. We propose a novel link-level flow control protocol that enables high-performance scalable routers based on the increasingly popular buffered crossbar architecture to scale to higher port counts without sacrificing performance. By combining credited and speculative transmission, this scheme achieves reliable delivery, low latency, and high throughput, even with crosspoint buffers that are significantly smaller than the round-trip time.
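For context, plain credit-based link-level flow control, the baseline whose buffer requirements the paper's credited-plus-speculative scheme relaxes, can be sketched as follows (class and method names are illustrative):

```python
class CreditLink:
    """Minimal credit flow control: the sender may transmit only while it
    holds credits, each credit standing for one guaranteed free slot in
    the receiver's crosspoint buffer."""

    def __init__(self, buffer_slots: int):
        self.credits = buffer_slots
        self.receiver_buffer = []

    def try_send(self, flit) -> bool:
        if self.credits == 0:
            return False                 # stall: no guaranteed buffer space
        self.credits -= 1
        self.receiver_buffer.append(flit)
        return True

    def receiver_drain(self):
        flit = self.receiver_buffer.pop(0)
        self.credits += 1                # credit returned to the sender
        return flit
```

With small buffers and a long round-trip time, `try_send` stalls often even though the receiver may have space, which is exactly the throughput loss that motivates adding speculative transmission on top of credits.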

Journal ArticleDOI
TL;DR: A new data dissemination protocol for wireless sensor networks that pulls additional knowledge about the network in order to subsequently improve data forwarding towards the sink, shown to significantly outperform well-known solutions in the state of the art.

Proceedings ArticleDOI
08 Oct 2007
TL;DR: A low-latency MAC protocol (LLMAC) is proposed, which uses asynchronous message packets for frame scheduling between neighbor nodes instead of the SYNC packets in S-MAC, and introduces a staggered active-schedule mechanism to maintain data-forwarding transmission.
Abstract: In wireless sensor networks, energy efficiency is the most important performance target for prolonging network lifetime. As idle listening of sensor nodes is the primary source of energy waste, many typical MAC protocols are designed to save power by placing the radio in a low-power sleep mode; however, such schemes can lead to sleep-latency problems in multi-hop transmissions. This paper analyses the sleep latency and proposes a low-latency MAC protocol (LLMAC), which uses asynchronous message packets for frame scheduling between neighbor nodes instead of the SYNC packets in S-MAC, and introduces a staggered active-schedule mechanism to maintain data-forwarding transmission. Simulation results show much better latency control compared with existing schemes.

Proceedings ArticleDOI
10 Sep 2007
TL;DR: The OSMOSIS project explores the role of optics in large-scale interconnection networks for high-performance computing (HPC) systems; its main objectives are solving the technical challenges to meet the stringent HPC requirements of high bandwidth, low latency, low error rates, and cost-effective scalability.
Abstract: The OSMOSIS project explores the role of optics in large-scale interconnection networks for high-performance computing (HPC) systems. Its main objectives are solving the technical challenges to meet the stringent HPC requirements of high bandwidth, low latency, low error rates, and cost-effective scalability. We discuss the technologies and architectural innovations that enabled us to build a demonstration system meeting these targets. We demonstrate the optical performance for the 64 ports @ 40 Gb/s data paths across the semiconductor optical amplifier based optical crossbar, and report on the implementation of the electronic central controller.

Proceedings ArticleDOI
15 Dec 2007
TL;DR: The relationship between the time a packet spends in a queue and the number of queues already served is decoupled, and a Quantum Varying DRR algorithm is proposed that keeps all the advantages of DRR while providing lower latency and achieving a fairness bound of Max (the longest packet size across all input links).
Abstract: At the time of this writing, all scheduling algorithms seek tradeoffs between low complexity, low latency, and fairness. Priority queuing (PQ) scheduling can meet the requirements of real-time applications but is weak on fairness; sorted-priority algorithms such as WFQ achieve better latency and fairness by calculating priorities dynamically, at the cost of work complexity up to O(log(n)) (where n is the number of queues); frame-based schemes such as DRR handle fair transmission of variable-length packets with O(1) work complexity, but sacrifice latency performance. In this paper, we decouple the time a packet stays in a queue from the number of queues that have been served. We set a priority for each queue and insert packets into different queues according to their real-time needs. Over all of them, we then run the Quantum Varying DRR algorithm we propose, which keeps all the advantages of DRR while providing lower latency. It also achieves a fairness bound of Max (where Max is the longest packet size across all input links). Analytical results and simulations verify these characteristics.
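For context, classic DRR, the O(1) baseline that the Quantum Varying scheme extends, can be sketched as follows. This is a minimal illustration of the standard deficit-counter mechanism, not the paper's algorithm:

```python
from collections import deque

def drr_schedule(queues, quantum, rounds):
    """Classic Deficit Round Robin: each round, every non-empty queue
    earns `quantum` bytes of credit and sends head-of-line packets while
    its deficit counter covers them.  Packets are (name, size) pairs."""
    qs = [deque(q) for q in queues]
    deficits = [0] * len(qs)
    sent = []
    for _ in range(rounds):
        for i, q in enumerate(qs):
            if not q:
                deficits[i] = 0          # idle queues carry no credit over
                continue
            deficits[i] += quantum
            while q and q[0][1] <= deficits[i]:
                name, size = q.popleft()
                deficits[i] -= size
                sent.append(name)
    return sent

# Queue 0 holds two 300-byte packets, queue 1 one 500-byte packet;
# with a 400-byte quantum, the 500-byte packet must wait for round two.
print(drr_schedule([[("a1", 300), ("a2", 300)], [("b1", 500)]], 400, 2))
# ['a1', 'a2', 'b1']
```

Because a large packet may sit through whole rounds before its queue's deficit covers it, latency depends on how many queues are served in between, which is the coupling the paper's scheme breaks.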

Proceedings Article
01 Aug 2007
TL;DR: The Sliding Discrete Fourier Transform is used as the engine of a phase vocoder, to create a Sliding Phase Vocoder (SPV), which allows very accurate pitch shifting and low latency, and opens a number of possible extensions.
Abstract: The Sliding Discrete Fourier Transform (Sliding DFT) is used as the engine of a phase vocoder, to create a Sliding Phase Vocoder (SPV). With a little care this allows very accurate pitch shifting and low latency, and opens a number of possible extensions. We also consider the use of vector parallel processing to make these techniques a viable option.
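The core Sliding DFT recurrence updates one frequency bin per incoming sample, rather than recomputing a full transform per hop. The sketch below is a generic illustration of that recurrence (the SPV adds phase-vocoder processing on top of it):

```python
import cmath

def sliding_dft(samples, N, k):
    """Track DFT bin k of a length-N sliding window using the SDFT
    recurrence  X_k <- (X_k + x_new - x_old) * exp(j*2*pi*k/N),
    returning the bin value after each input sample."""
    twiddle = cmath.exp(2j * cmath.pi * k / N)
    window = [0.0] * N       # the last N samples (zero-padded at start)
    Xk = 0.0
    outputs = []
    for x in samples:
        x_old = window.pop(0)
        window.append(x)
        Xk = (Xk + x - x_old) * twiddle
        outputs.append(Xk)
    return outputs

# Once N samples have arrived, the recursive value matches a direct DFT
# of the current window.
xs = [1.0, 2.0, 3.0, 4.0]
direct = sum(x * cmath.exp(-2j * cmath.pi * 1 * n / 4) for n, x in enumerate(xs))
assert abs(sliding_dft(xs, N=4, k=1)[-1] - direct) < 1e-9
```

Updating every sample is what gives the SPV its single-sample hop size, and hence its low latency and accurate pitch tracking; it also makes the per-bin work embarrassingly parallel, matching the paper's interest in vector processing.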

Proceedings ArticleDOI
14 Sep 2007
TL;DR: A combined hardware and software mechanism based on heterogeneous wireless networking is proposed that works toward supporting applications requiring low-latency interaction in energy-challenged environments.
Abstract: The importance of maintaining energy-efficient communications in low-power networks such as sensor and actuator networks is well understood. However, in recent years, a growing number of delay-sensitive and interactive applications have been discovered for such networks that are no longer purely limited to the data-gathering model of sensor networks. Providing support for applications requiring low-latency interaction in such environments without negatively affecting energy efficiency remains a challenging problem. This paper outlines the importance of this emerging class of applications, discusses the problems involved in supporting them in energy-challenged environments, proposes a combined hardware and software mechanism based on heterogeneous wireless networking that works toward solving this problem, and goes on to evaluate this mechanism through experimental analysis. The paper concludes with a discussion of the applicability of the mechanism to typical application scenarios.

Proceedings ArticleDOI
01 Nov 2007
TL;DR: Novel low-latency RC4 implementations with cell-based VLSI design flow are proposed for IEEE 802.11b wireless networks, and the proposed architectures can reduce latency considerably in comparison with the conventional single-port 256 x 8 memory design.
Abstract: In this paper, novel low-latency RC4 implementations with a cell-based VLSI design flow are proposed for IEEE 802.11i WEP/TKIP. The RC4 stream cipher is used in the security protocol WEP in IEEE 802.11b wireless networks, and is also used in the TKIP of IEEE 802.11i wireless network cryptography. The major process of the RC4 algorithm is to shuffle the memory continuously. For quick memory shuffling, we investigate two different memory-shuffling architectures to design the RC4. By using a single-port 128 x 16 memory design, the first architecture reduces shuffling latency by 25% compared with the conventional single-port 256 x 8 architecture. By using a dual-port 256 x 8 memory design, the second architecture achieves less latency and less power consumption at the same time. Both of the proposed architectures reduce latency considerably in comparison with the conventional single-port 256 x 8 memory design.
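The memory shuffling in question is the repeated swapping of entries in RC4's 256-byte state array, during both key scheduling and keystream generation. A plain software rendering of the standard algorithm (not the paper's hardware design) looks like this:

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    """Generate n RC4 keystream bytes.  Both loops repeatedly swap
    entries of the 256-byte state S -- the 'memory shuffling' that the
    hardware architectures above restructure for lower latency."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm (KSA)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for _ in range(n):                        # pseudo-random generation (PRGA)
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

# A stream cipher is its own inverse: XOR with the same keystream twice.
msg = b"Plaintext"
ks = rc4_keystream(b"Key", len(msg))
cipher = bytes(m ^ k for m, k in zip(msg, ks))
assert bytes(c ^ k for c, k in zip(cipher, ks)) == msg
```

Each PRGA output byte needs two dependent reads and a swap of S, which is why the memory organization (128 x 16 versus 256 x 8, single- versus dual-port) dominates the achievable latency per byte.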

Proceedings ArticleDOI
23 Oct 2007
TL;DR: The proposed optimization significantly reduces the control traffic for low data traffic intensity in the network and increases protocol scalability for large networks without compromising the low latency of proactive protocols.
Abstract: Many applications, such as disaster response and military applications, call for proactive maintenance of links and routes in Mobile Ad hoc NETworks (MANETs) to ensure low latency during data delivery. The goal of this paper is to minimize the wastage of energy in the network due to high control traffic, which restricts the scalability and applicability of such protocols, without trading-off the low latency. We categorize the proactive protocols based on the periodic route and link maintenance operations performed; and analytically derive the optimum periods for these operations in different protocol classes. The analysis takes into account data traffic intensity, link dynamics, application reliability requirements, and the size of the network. The proposed optimization significantly reduces the control traffic for low data traffic intensity in the network and increases protocol scalability for large networks without compromising the low latency of proactive protocols.

Journal ArticleDOI
TL;DR: A multi-agent perception and control architecture that combines a sophisticated long-range path detection method operating at high resolution and low frame rate, with a simple stereo-based obstacle detection method operating at low resolution, high frame rate, and low latency is proposed.