
Showing papers on "Latency (engineering)" published in 2011


Proceedings Article
09 May 2011
TL;DR: This paper argues that OS researchers must lead the charge in rearchitecting systems to push the boundaries of low-latency datacenter communication, and that 5-10µs remote procedure calls are possible in the short term - two orders of magnitude better than today.
Abstract: The operating systems community has ignored network latency for too long. In the past, speed-of-light delays in wide area networks and unoptimized network hardware have made sub-100µs round-trip times impossible. However, in the next few years datacenters will be deployed with low-latency Ethernet. Without the burden of propagation delays in the datacenter campus and network delays in the Ethernet devices, it will be up to us to finish the job and see this benefit through to applications. We argue that OS researchers must lead the charge in rearchitecting systems to push the boundaries of low-latency datacenter communication. 5-10µs remote procedure calls are possible in the short term - two orders of magnitude better than today. In the long term, moving the network interface onto the CPU core will make 1µs times feasible.

172 citations
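
As a back-of-the-envelope illustration of where such a 5-10µs round-trip budget could go, the sketch below sums hypothetical per-component costs; every number is an assumption for illustration, not a measurement from the paper.

    # Hypothetical round-trip latency budget for a datacenter RPC.
    # Component values are illustrative assumptions, not measurements from the paper.

    components_us = {
        "NIC (send + receive, per direction)": 1.0,
        "switch traversals (per direction)":   1.0,
        "propagation (per direction)":         0.5,   # ~100 m of fiber at ~5 ns/m
        "OS + protocol + application wakeup":  1.5,   # per direction
    }

    one_way_us = sum(components_us.values())
    round_trip_us = 2 * one_way_us
    print(f"one-way ~{one_way_us:.1f} us, round trip ~{round_trip_us:.1f} us")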


Proceedings ArticleDOI
10 Oct 2011
TL;DR: A distributed reconfiguration solution named Ariadne, targeting large, aggressively scaled, unreliable NoCs, which provides a 40%-140% latency improvement over other on-chip state-of-the-art fault tolerant solutions, while meeting the low area budget of on-chip routers with an overhead of just 1.97%.
Abstract: Extreme transistor technology scaling is causing increasing concerns in device reliability: the expected lifetime of individual transistors in complex chips is quickly decreasing, and the problem is expected to worsen at future technology nodes. With complex designs increasingly relying on Networks-on-Chip (NoCs) for on-chip data transfers, a NoC must continue to operate even in the face of many transistor failures. Specifically, it must be able to reconfigure and reroute packets around faults to enable continued operation, i.e., generate new routing paths to replace the old ones upon a failure. In addition to these reliability requirements, NoCs must maintain low latency and high throughput at very low area budget. In this work, we propose a distributed reconfiguration solution named Ariadne, targeting large, aggressively scaled, unreliable NoCs. Ariadne utilizes up*/down* for fast routing at high bandwidth, and upon any number of concurrent network failures in any location, it reconfigures to discover new resilient paths to connect the surviving nodes. Experimental results show that Ariadne provides a 40%-140% latency improvement (when subject to 50 faults in a 64-node NoC) over other on-chip state-of-the-art fault tolerant solutions, while meeting the low area budget of on-chip routers with an overhead of just 1.97%.

102 citations
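
Ariadne builds on up*/down* routing, which the abstract names explicitly. A minimal sketch of the generic up*/down* idea (not Ariadne's distributed reconfiguration itself, and with simplified tie-breaking): assign levels with a BFS over the surviving links and accept only paths that never turn "up" after going "down", which keeps routes deadlock-free around faults.

    from collections import deque

    def bfs_levels(nodes, links, root):
        """Assign BFS levels from the root over the surviving links (undirected)."""
        adj = {n: [] for n in nodes}
        for a, b in links:
            adj[a].append(b)
            adj[b].append(a)
        level = {root: 0}
        q = deque([root])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in level:
                    level[v] = level[u] + 1
                    q.append(v)
        return level

    def is_legal_updown_path(path, level):
        """A path is legal if it never goes 'up' (toward the root) after going 'down'."""
        gone_down = False
        for u, v in zip(path, path[1:]):
            going_up = level[v] < level[u]   # equal-level links count as 'down' here;
            if going_up and gone_down:       # real up*/down* breaks ties by node ID
                return False
            if not going_up:
                gone_down = True
        return True

    # Toy 2x2 mesh with one faulty link removed
    nodes = [0, 1, 2, 3]
    links = [(0, 1), (0, 2), (1, 3)]         # link (2, 3) assumed faulty
    lvl = bfs_levels(nodes, links, root=0)
    print(is_legal_updown_path([2, 0, 1, 3], lvl))   # True: up then down
    print(is_legal_updown_path([1, 3], lvl))         # True: down only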


Proceedings ArticleDOI
04 Oct 2011
TL;DR: A new fault tolerance approach based on active replication for StreamMapReduce systems is presented, which is cost effective for cloud consumers as well as cloud providers.
Abstract: MapReduce has become a popular programming paradigm in the domain of batch processing systems. Its simplicity allows applications to be highly scalable and to be easily deployed on large clusters. More recently, the MapReduce approach has been also applied to Event Stream Processing (ESP) systems. This approach, which we call StreamMapReduce, enabled many novel applications that require both scalability and low latency. Another recent trend is to move distributed applications to public clouds such as Amazon EC2 rather than running and maintaining private data centers. Most cloud providers charge their customers on an hourly basis rather than on CPU cycles consumed. However, many applications, especially those that process online data, need to limit their CPU utilization to conservative levels (often as low as 50%) to be able to accommodate natural and sudden load variations without causing unacceptable deterioration in responsiveness. In this paper, we present a new fault tolerance approach based on active replication for StreamMapReduce systems. This approach is cost effective for cloud consumers as well as cloud providers. Cost effectiveness is achieved by fully utilizing the acquired computational resources without performance degradation and by reducing the need for additional nodes dedicated to fault tolerance.

43 citations


Journal ArticleDOI
TL;DR: Iris, a CMOS-compatible high-performance low-power nanophotonic on-chip network, is introduced and offers an on-chip communication backplane that is power efficient while demonstrating low latency and high throughput.
Abstract: On-chip communication, including short, often-multicast, latency-critical coherence and synchronization messages, and long, unicast, throughput-sensitive data transfers, limits the power efficiency and performance scalability of many-core chip-multiprocessor systems. This article analyzes on-chip communication challenges and studies the characteristics of existing electrical and emerging nanophotonic interconnect. Iris, a CMOS-compatible high-performance low-power nanophotonic on-chip network, is thus introduced. Iris's circuit-switched subnetwork supports throughput-sensitive data transfer. Iris's optical-antenna-array-based broadcast-multicast subnetwork optimizes latency-critical traffic and supports the path setup of circuit-switched communication. Overall, the proposed nanophotonic network design offers an on-chip communication backplane that is power efficient while demonstrating low latency and high throughput.

43 citations


Journal ArticleDOI
TL;DR: In this article, a new superconducting digital technology, Reciprocal Quantum Logic (RQL), was developed that uses AC power carried on a transmission line, which also serves as a clock.
Abstract: We have developed a new superconducting digital technology, Reciprocal Quantum Logic, that uses AC power carried on a transmission line, which also serves as a clock. Using simple experiments we have demonstrated zero static power dissipation, thermally limited dynamic power dissipation, high clock stability, high operating margins and low BER. These features indicate that the technology is scalable to far more complex circuits at a significant level of integration. On the system level, Reciprocal Quantum Logic combines the high speed and low-power signal levels of Single-Flux-Quantum signals with the design methodology of CMOS, including low static power dissipation, low latency combinational logic, and efficient device count.

42 citations


Proceedings ArticleDOI
18 Aug 2011
TL;DR: Methods for high-speed, low latency, coherent demodulation of signals for dynamic or AC mode in Atomic Force Microscopes (AFMs) are described; this part of the paper focuses on the mixing and integration portion of the demodulator.
Abstract: This paper describes methods for doing high-speed, low latency, coherent demodulation of signals for dynamic or AC mode in Atomic Force Microscopes (AFMs) [1]. These demodulation methods allow the system to extract signal information in as little as one cycle of the fundamental oscillation frequency. By having so little latency, the demodulator minimizes the time delay in the servo loop for an AC mode AFM. This in turn minimizes the negative phase effects of the demodulation allowing for higher speed scanning. This part of the paper describes the mixing and integration portion of the demodulator. Part II [2] describes efficient methods for extracting magnitude and phase in real time.

32 citations
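
The mixing-and-integration step described here is classic coherent (lock-in style) demodulation over a single cycle of the drive frequency. A minimal numerical sketch of that generic technique, not the authors' implementation:

    import math

    def demodulate_one_cycle(samples, fs, f0):
        """Mix one cycle of `samples` (sampled at fs) with cos/sin references at f0
        and integrate, returning amplitude and phase estimates of the f0 component."""
        n = len(samples)
        i_sum = q_sum = 0.0
        for k, x in enumerate(samples):
            t = k / fs
            i_sum += x * math.cos(2 * math.pi * f0 * t)
            q_sum += x * math.sin(2 * math.pi * f0 * t)
        # Average over the window; the factor 2 recovers the sinusoid amplitude
        i_avg = 2.0 * i_sum / n
        q_avg = 2.0 * q_sum / n
        amplitude = math.hypot(i_avg, q_avg)
        phase = math.atan2(-q_avg, i_avg)
        return amplitude, phase

    # Example: 50 kHz cantilever drive sampled at 5 MHz, amplitude 1.0, phase 0.3 rad
    fs, f0 = 5e6, 50e3
    n = int(fs / f0)                      # exactly one cycle of data
    sig = [1.0 * math.cos(2 * math.pi * f0 * k / fs + 0.3) for k in range(n)]
    print(demodulate_one_cycle(sig, fs, f0))   # ~ (1.0, 0.3)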


Patent
19 Jan 2011
TL;DR: The DSCP Mirroring System as discussed by the authors enables the automatic reuse of the Differentiated Services Code Point header by the user devices that are served by a network, supporting delivery of wireless services to the individually identified user wireless devices.
Abstract: The DSCP Mirroring System enables the automatic reuse of the Differentiated Services Code Point header by the user devices that are served by a network to enable delivery of wireless services to the individually identified user wireless devices and manage the various data traffic and classes of data to optimize or guarantee performance, low latency, and/or bandwidth without the overhead of the management of the Differentiated Services Code Point header.

31 citations
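
A minimal sketch of the general DSCP-mirroring idea (hypothetical flow keys and packet fields, not the patented implementation): remember the DSCP marking observed on a device's uplink flow and reapply it to downlink packets of the same flow, so return traffic inherits the same treatment without separate DSCP management. DSCP occupies the upper 6 bits of the IP TOS/Traffic Class byte.

    # Sketch of per-flow DSCP mirroring. Flow keys and packet dicts are hypothetical.

    uplink_dscp = {}   # flow key (src, dst, proto) -> last DSCP seen from the user device

    def observe_uplink(pkt):
        key = (pkt["src"], pkt["dst"], pkt["proto"])
        uplink_dscp[key] = pkt["tos"] >> 2          # extract the 6-bit DSCP

    def mark_downlink(pkt):
        # Reverse the flow key: the downlink destination is the uplink source.
        key = (pkt["dst"], pkt["src"], pkt["proto"])
        if key in uplink_dscp:
            pkt["tos"] = (uplink_dscp[key] << 2) | (pkt["tos"] & 0x03)  # keep ECN bits
        return pkt

    observe_uplink({"src": "10.0.0.5", "dst": "198.51.100.7", "proto": 17, "tos": 0xB8})  # EF (46)
    print(hex(mark_downlink({"src": "198.51.100.7", "dst": "10.0.0.5", "proto": 17, "tos": 0x00})["tos"]))
    # -> 0xb8: the downlink packet is mirrored to the same EF marking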


Proceedings ArticleDOI
29 Dec 2011
TL;DR: The implementation and validation of a fully compliant MAC/PHY solution developed in the scope of the DRIVE-IN project, which will be used in a 500-node testbed, are presented; the results show that it is indeed possible to assure the support of critical safety services in the presence of other traffic.
Abstract: The emerging interest in vehicular networks (VANET) led to the deployment of several small and medium-scale testbeds to evaluate the characteristics of this technology in real-world scenarios. Despite this, due to the low availability and high cost of IEEE 802.11p/WAVE fully compliant hardware and software, many of these experiments have been performed with other communication standards, which generate, in many cases, misleading results that are not representative of real-world vehicular communications. As a means of solving this problem, this paper presents the implementation and validation of a fully compliant MAC/PHY solution developed in the scope of the DRIVE-IN project, which will be used in a 500-node testbed. Contrary to most of the solutions existing on the market, our system complies with the strict channel switching timings using GPS time synchronization, providing access to two different types of wireless channels (e.g. control and service channels as defined in IEEE 1609.4) in a seamless way for the end user. This feature allows safety-critical and control messages to be sent in a dedicated channel, with very low latency, while another channel may be used for less critical services (e.g. infotainment applications, advertisement). The results of the conducted tests show that it is indeed possible to assure the support of critical safety services in the presence of other traffic.

30 citations
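
The strict channel switching mentioned above follows the IEEE 1609.4 alternating-access pattern, in which UTC time obtained from GPS is divided into sync intervals split between the control channel (CCH) and a service channel (SCH), with a guard interval at the start of each. A simplified sketch, using commonly cited default timings as assumptions:

    # Simplified IEEE 1609.4-style alternating channel access.
    # Default-style timings assumed: 100 ms sync interval = 50 ms CCH + 50 ms SCH,
    # with a guard interval at the start of each half (values are assumptions).

    SYNC_INTERVAL_MS = 100
    CCH_INTERVAL_MS = 50
    GUARD_MS = 4

    def channel_state(utc_ms):
        """Return which channel the radio should be on at a given UTC time (ms)."""
        offset = utc_ms % SYNC_INTERVAL_MS
        if offset < CCH_INTERVAL_MS:
            phase, into = "CCH", offset
        else:
            phase, into = "SCH", offset - CCH_INTERVAL_MS
        in_guard = into < GUARD_MS          # no transmissions during the guard interval
        return phase, in_guard

    for t in (0, 10, 52, 80):
        print(t, channel_state(t))
    # 0  -> ('CCH', True)   guard at the start of the CCH interval
    # 10 -> ('CCH', False)  safety-critical and control messages go here with low latency
    # 52 -> ('SCH', True)
    # 80 -> ('SCH', False)  infotainment and other less critical services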


Journal ArticleDOI
TL;DR: A new protocol that combines probabilistic flooding, counter-based broadcast, and lazy gossip is developed, and it is confirmed that RAPID obtains higher reliability with low latency and good communication overhead compared with each of the individual methods.
Abstract: Reliable broadcast is a basic service for many collaborative applications as it provides reliable dissemination of the same information to many recipients. This paper studies three common approaches for achieving scalable reliable broadcast in ad hoc networks, namely probabilistic flooding, counter-based broadcast, and lazy gossip. The strengths and weaknesses of each scheme are analyzed, and a new protocol that combines these three techniques, called RAPID, is developed. Specifically, the analysis in this paper focuses on the trade-offs between reliability (percentage of nodes that receive each message), latency, and the message overhead of the protocol. Each of these methods excels in some of these parameters, but no single method wins in all of them. This motivates the need for a combined protocol that benefits from all of these methods and allows trading between them smoothly. Interestingly, since the RAPID protocol only relies on local computations and probability, it is highly resilient to mobility, failures, and even selfish behavior. By adding authentication, it can even be made tolerant to malicious behavior. Additionally, the paper includes a detailed performance evaluation by simulation. The simulations confirm that RAPID obtains higher reliability with low latency and good communication overhead compared with each of the individual methods.

26 citations
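
The combination the abstract describes (probabilistic flooding, counter-based suppression, lazy gossip) can be illustrated with the toy per-node forwarding logic below; the parameters and rules are assumptions for illustration, not RAPID's actual protocol.

    import random

    class CombinedBroadcast:
        """Toy per-node forwarding logic mixing three scalable broadcast techniques."""

        def __init__(self, p_forward=0.7, counter_threshold=3):
            self.p_forward = p_forward                  # probabilistic flooding
            self.counter_threshold = counter_threshold  # counter-based suppression
            self.copies_heard = {}                      # msg_id -> duplicates overheard
            self.known_msgs = set()                     # summarized in lazy gossip digests

        def on_receive(self, msg_id):
            self.known_msgs.add(msg_id)
            heard = self.copies_heard.get(msg_id, 0) + 1
            self.copies_heard[msg_id] = heard
            if heard > self.counter_threshold:
                return False                            # enough neighbors already have it
            return random.random() < self.p_forward     # otherwise rebroadcast with prob. p

        def gossip_digest(self):
            # Lazy gossip: periodically advertise known message IDs so neighbors with
            # gaps can request missed messages, recovering reliability at low overhead.
            return sorted(self.known_msgs)

    random.seed(42)
    node = CombinedBroadcast()
    print([node.on_receive("m1") for _ in range(5)])   # later copies are suppressed by the counter
    print(node.gossip_digest())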


Journal ArticleDOI
TL;DR: The problem of rate/delay balancing is formulated according to a Network Utility Maximization (NUM) paradigm, assuming Scalable Video Coding (SVC) feeding a cross-layer DiffServ architecture and an adaptive physical layer; the resulting solution provides a fair rate-delay trade-off.
Abstract: Trading delay and rate in upcoming satellite networks is of paramount interest. In this paper, we present a solution to optimally distribute resources across MAC, IP and APP layers to deal with the problem of efficiently and reliably delivering low latency and high rate inelastic services over such networks. In order to do so, we formulate the problem of rate/delay balancing according to a Network Utility Maximization (NUM) paradigm assuming Scalable Video Coding (SVC) feeding a cross-layer DiffServ architecture, and an adaptive physical layer. We solve not only the bit loading balancing across the IP queues but also the distribution of video layers to be sent among the adaptive physical layers, all constrained by delay requirements. The solution, which provides a fair rate-delay trade-off, turns out to be a water-filling load across the queuing architecture, and it is suitable for multicasting scenarios. Finally, we propose a full protocol design, implementation, and performance evaluation based on truly available standardized tools; hence, it is ready to use. Our solution significantly outperforms non-cross-layer approaches in terms of delay and video quality, and it dynamically adapts to channel and traffic variations.

26 citations
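
The abstract notes that the optimal allocation turns out to be a water-filling load across the queuing architecture. A generic water-filling sketch (illustrative only, not the paper's exact NUM formulation): raise a common "water level" by bisection until the available rate is used up, each queue receiving the part of the level above its own floor (e.g. a delay-related cost).

    def water_filling(floors, total_rate, iters=60):
        """Allocate total_rate across queues so that r_i = max(0, level - floor_i),
        finding the common water level by bisection."""
        lo, hi = min(floors), max(floors) + total_rate
        for _ in range(iters):
            level = (lo + hi) / 2.0
            used = sum(max(0.0, level - f) for f in floors)
            if used > total_rate:
                hi = level
            else:
                lo = level
        return [max(0.0, lo - f) for f in floors]

    # Three DiffServ-style queues; lower floor = cheaper to serve (assumed costs)
    floors = [0.2, 0.5, 1.1]
    alloc = water_filling(floors, total_rate=2.0)
    print([round(a, 3) for a in alloc], round(sum(alloc), 3))
    # Queues with low floors receive more rate; a queue stays at zero if the level
    # never rises above its floor.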


Proceedings ArticleDOI
04 Oct 2011
TL;DR: Simulation results demonstrate that this adaptive method significantly outperforms the fixed control method under a varying number of vehicles; the impact of estimation error in the number of vehicles in the network on system-level performance is also investigated.
Abstract: Congestion control is critical for the provisioning of quality of service (QoS) over dedicated short range communications (DSRC) vehicle networks for road safety applications. In this paper we propose a congestion control method for DSRC vehicle networks at road intersections, with the aims of providing high availability and low latency channels for high priority emergency safety applications while maximizing channel utilization for low priority routine safety applications. In this method an offline simulation-based approach is used to find the best possible configurations of message rate and MAC layer backoff exponent (BE) for a given number of vehicles equipped with DSRC radios. The identified best configurations are then used online by a roadside access point (AP) for system operation. Simulation results demonstrate that this adaptive method significantly outperforms the fixed control method under a varying number of vehicles. The impact of estimation error in the number of vehicles in the network on system-level performance is also investigated.
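
The method pairs offline simulation with an online table lookup at the roadside AP; the sketch below shows just that online part, with placeholder table entries rather than the paper's simulated configurations.

    # Online part of the offline/online split: the AP estimates the number of
    # DSRC-equipped vehicles and looks up the precomputed (message rate, backoff
    # exponent) pair. The table entries below are placeholders, not the paper's values.

    BEST_CONFIG = [                    # (max vehicles, message rate [Hz], MAC backoff exponent)
        (20,  10, 3),
        (50,   5, 4),
        (100,  2, 5),
        (10**9, 1, 6),                 # fallback for very dense intersections
    ]

    def pick_config(estimated_vehicles):
        for max_vehicles, rate_hz, be in BEST_CONFIG:
            if estimated_vehicles <= max_vehicles:
                return {"message_rate_hz": rate_hz, "backoff_exponent": be}

    print(pick_config(35))   # -> {'message_rate_hz': 5, 'backoff_exponent': 4}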

Book ChapterDOI
20 Mar 2011
TL;DR: The overall latency introduced by HSUPA is measured and accurately dissected into contributions of USB-modem (UE), base station (NodeB) and network controller (RNC) by combining traces recorded at each interface along the data-path of a public operational UMTS network.
Abstract: Users expect mobile Internet access via 3G technologies to be comparable to wired access in terms of throughput and latency. HSPA achieves this for throughput, whereas delay is significantly higher. In this paper we measure the overall latency introduced by HSUPA and accurately dissect it into the contributions of the USB modem (UE), base station (NodeB) and network controller (RNC). We achieve this by combining traces recorded at each interface along the data path of a public operational UMTS network. The actively generated sample traffic covers real-time applications. Results show the delay to be strongly dependent on the packet size, with random components depending on synchronization issues. We provide models for the latency of single network entities as well as for the accumulated delay. These findings make it possible to identify optimum settings in terms of low latency, both for application and network parameters.
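
The abstract reports per-entity latency models dominated by packet size plus random synchronization components; a toy accumulated-delay model of that general shape might look like the following (all coefficients are invented for illustration, not the paper's fitted values).

    import random

    # Toy per-entity uplink delay model: fixed cost + size-dependent cost + random
    # synchronization component. All coefficients are illustrative, not measured.
    ENTITIES = {
        "UE (USB modem)": {"fixed_ms": 4.0, "per_byte_ms": 0.010, "sync_max_ms": 2.0},
        "NodeB":          {"fixed_ms": 2.0, "per_byte_ms": 0.002, "sync_max_ms": 2.0},
        "RNC":            {"fixed_ms": 1.0, "per_byte_ms": 0.001, "sync_max_ms": 1.0},
    }

    def accumulated_delay(packet_bytes):
        total = 0.0
        for params in ENTITIES.values():
            total += (params["fixed_ms"]
                      + params["per_byte_ms"] * packet_bytes
                      + random.uniform(0.0, params["sync_max_ms"]))  # frame-alignment jitter
        return total

    random.seed(1)
    for size in (40, 500, 1400):
        print(size, "bytes ->", round(accumulated_delay(size), 1), "ms")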

Patent
24 Oct 2011
TL;DR: A bus arbitration apparatus is proposed that transmits requests from a specific master to a slave with low latency while ensuring the bandwidth necessary for the other masters.
Abstract: An arbitration circuit 108 receives a read/write request from a master 101, such as a CPU, in which low latency is required, at a regular interval, such that the master 101 performs memory access with low latency. The remaining bandwidth which is not used by the master 101 is allocated to masters 102 and 103, such as a DMA controller, in which a wide band is required, thereby ensuring the necessary bandwidth. When a read/write request is retained in a buffer 119 of a slave 118, the arbitration circuit 108 suppresses the acceptance of read/write requests from the masters 102 and 103 having low priority. Therefore, it is possible to provide a bus arbitration apparatus capable of transmitting a request from a specific master to a slave with low latency, and of ensuring the bandwidth necessary for another master.
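
A behavioral sketch of the arbitration policy described above (simplified, with assumed interval lengths): the low-latency master is granted on a regular slot, the wideband masters share the remaining slots round-robin, and low-priority requests are held back while the slave buffer still holds a pending request.

    from collections import deque

    class Arbiter:
        """Toy model of the described policy: master 101 (e.g. a CPU) is granted at a
        regular interval for low latency; masters 102/103 (e.g. DMA controllers) share
        the remaining bandwidth and are held back while the slave buffer is backlogged."""

        def __init__(self, cpu_interval=4):
            self.cpu_interval = cpu_interval
            self.slave_buffer = deque()      # requests accepted but not yet retired by the slave
            self.cycle = 0
            self.rr = 0                      # round-robin pointer over the DMA masters

        def grant(self, cpu_wants, dma_wants):       # dma_wants: [master102, master103]
            self.cycle += 1
            if self.slave_buffer and self.cycle % 2 == 0:
                self.slave_buffer.popleft()          # the slower slave retires a request every 2 cycles
            if cpu_wants and self.cycle % self.cpu_interval == 0:
                self.slave_buffer.append("cpu")      # regular, guaranteed low-latency slot
                return "master101"
            if self.slave_buffer:
                return None                          # suppress low-priority masters while backlogged
            for i in range(len(dma_wants)):          # round-robin over the wideband masters
                m = (self.rr + i) % len(dma_wants)
                if dma_wants[m]:
                    self.rr = m + 1
                    self.slave_buffer.append("dma")
                    return f"master10{m + 2}"
            return None

    arb = Arbiter()
    print([arb.grant(cpu_wants=True, dma_wants=[True, True]) for _ in range(8)])
    # ['master102', 'master103', None, 'master101', None, 'master102', None, 'master101']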

Journal ArticleDOI
TL;DR: Methods for doing high-speed, low latency, coherent demodulation of signals for dynamic or AC mode in Atomic Force Microscopes (AFMs) (Abramovitch, 2010), together with a phase-locked loop (PLL) based method.

Journal ArticleDOI
01 Oct 2011
TL;DR: The proposed algorithm focuses on the two performance metrics, latency and throughput, and minimizes the latency of workflows while satisfying strict throughput requirements, and is designed for a realistic bounded multi-port communication model.
Abstract: Scheduling, in many application domains, involves optimization of multiple performance metrics. For example, application workflows with real-time constraints have strict throughput requirements and also desire a low latency or response time. In this paper, we present a novel algorithm for the scheduling of workflows that act on a stream of input data. Our algorithm focuses on the two performance metrics, latency and throughput, and minimizes the latency of workflows while satisfying strict throughput requirements. We also describe steps to use the above approach to solve the problem of meeting latency requirements while maximizing throughput. We leverage pipelined, task and data parallelism in a coordinated manner to meet these objectives and investigate the benefit of task duplication in alleviating communication overheads in the pipelined schedule for different workflow characteristics. The proposed algorithm is designed for a realistic bounded multi-port communication model, where each processor can simultaneously communicate with at most k distinct processors. Experimental evaluation using synthetic benchmarks as well as those derived from real applications shows that our algorithm consistently produces lower latency schedules that meet throughput requirements, even when previously proposed schemes fail.
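
The coupling between the two metrics can be made concrete: in a pipelined schedule, throughput is bounded by the slowest stage, while latency is the end-to-end sum of stage and communication times. A small illustrative calculation with made-up stage times:

    # Illustrative latency/throughput accounting for a pipelined workflow schedule.
    # Stage compute times and inter-stage communication times (ms) are assumptions.

    stages = [("decode", 4.0), ("filter", 9.0), ("classify", 6.0)]
    comm = [1.0, 2.0]          # communication time between consecutive stages

    bottleneck = max(t for _, t in stages)
    throughput = 1000.0 / bottleneck                       # items per second
    latency = sum(t for _, t in stages) + sum(comm)        # ms per item, end to end

    print(f"throughput ~{throughput:.0f} items/s (limited by the {bottleneck} ms stage)")
    print(f"latency ~{latency:.0f} ms per item")

    # Replicating the bottleneck stage on two processors (data parallelism) roughly
    # halves its effective time, raising throughput, while task duplication can hide
    # the communication terms and thereby lower latency -- the trade-offs such a
    # scheduler navigates under a bounded multi-port communication model.
    throughput_replicated = 1000.0 / max(4.0, 9.0 / 2, 6.0)
    print(f"with the bottleneck replicated x2: ~{throughput_replicated:.0f} items/s")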

Journal ArticleDOI
TL;DR: A novel method for passive resource discovery in cluster grid environments, where resources constantly utilize internode communication is presented, offering the ability to nonintrusively identify resources that have available CPU cycles; this is critical for lowering queue wait times in large cluster grid networks.
Abstract: We present the details of a novel method for passive resource discovery in cluster grid environments, where resources constantly utilize internode communication. Our method offers the ability to nonintrusively identify resources that have available CPU cycles; this is critical for lowering queue wait times in large cluster grid networks. The benefits include: 1) low message complexity, which facilitates low latency in distributed networks, 2) scalability, which provides support for very large networks, and 3) low maintainability, since no additional software is needed on compute resources. Using a 50-node (multicore) test bed (DETERlab), we demonstrate the feasibility of our method with experiments utilizing TCP, UDP, and ICMP network traffic. We use a simple but powerful technique that monitors the frequency of network packets emitted from the Network Interface Card (NIC) of local resources. We observed the correlation between CPU load and the timely response of network traffic. A highly utilized CPU will have numerous active processes which require context switching. The latency associated with numerous context switches manifests as a delay signature within the packet transmission process. Our method detects that delay signature to determine the utilization of network resources. Results show that our method can consistently and accurately identify nodes with available CPU cycles (<70 percent CPU utilization) through analysis of existing network traffic, including network traffic that has passed through a switch (noncongested). Also, in situations where there is no existing network traffic for nodes, ICMP ping replies can be used to ascertain this resource information.
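
The delay signature described above can be sketched as a simple classifier over observed inter-packet gaps; the thresholds below are placeholders, not the paper's calibration.

    import statistics

    def has_available_cycles(packet_timestamps_ms, gap_threshold_ms=5.0, jitter_threshold_ms=2.0):
        """Classify a host as having spare CPU (roughly the paper's <70 percent class)
        when its packet emission shows no large context-switch-like delay signature.
        Thresholds here are placeholders, not the paper's calibrated values."""
        gaps = [b - a for a, b in zip(packet_timestamps_ms, packet_timestamps_ms[1:])]
        if not gaps:
            return None
        typical = statistics.median(gaps)
        jitter = statistics.pstdev(gaps)
        return typical < gap_threshold_ms and jitter < jitter_threshold_ms

    idle_host = [0, 1.1, 2.0, 3.2, 4.1, 5.0]           # steady, tightly spaced packets
    busy_host = [0, 1.0, 9.5, 10.2, 22.0, 23.1]        # long stalls from context switching
    print(has_available_cycles(idle_host))   # True  -> schedule work here
    print(has_available_cycles(busy_host))   # False -> skip, CPU is likely busy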

Journal ArticleDOI
TL;DR: The simulation results demonstrate that the proposed joint V2V/V2R (R2V) communication protocol is capable of improving the message delivery ratio and obtaining low latency, which are very important merits for highway traffic safety.
Abstract: A joint vehicle-to-vehicle/vehicle-to-roadside communication protocol is proposed for cooperative collision avoidance in Vehicular Ad Hoc Networks (VANETs). In this protocol, emergency warning messages are simultaneously transmitted via Vehicle-to-Vehicle (V2V) and Vehicle-to-Roadside (V2R) communications in order to achieve multipath diversity routing. In addition, to further improve communication reliability and achieve low latency, a Multi-Channel (MC) technique based on two nonoverlapping channels for V2V and V2R (or R2V) communications is proposed. The simulation results demonstrate that the proposed joint V2V/V2R (R2V) communication protocol is capable of improving the message delivery ratio and obtaining low latency, which are very important merits for highway traffic safety.

Proceedings ArticleDOI
11 Dec 2011
TL;DR: This paper presents an experimental methodology exploring the effect of tracking latency on object recognition after exposure to an immersive VE, in terms of both scene context and associated awareness states, and preliminary results from initial pilot studies reveal better memory performance of objects in the low latency condition.
Abstract: This paper presents an experimental methodology exploring the effect of tracking latency on object recognition after exposure to an immersive VE, in terms of both scene context and associated awareness states. System latency (time delay) and its visible consequences are fundamental Virtual Environment (VE) deficiencies that can hamper spatial awareness and memory. The immersive simulation consisted of a radiosity-rendered space divided into three zones including a kitchen/dining area, an office area and a lounge area. The space was populated by objects consistent as well as inconsistent with each zone's context. The simulation was displayed on a stereo head-tracked Head Mounted Display. Participants across two conditions of varying latency (system minimum latency vs added latency condition) were exposed to the VE and completed an object-based memory recognition task. Participants also reported one of three states of awareness following each recognition response, reflecting either the recollection of contextual detail, a sense of familiarity unaccompanied by contextual information, or an informed guess. Preliminary results from initial pilot studies reveal better memory performance for objects in the low latency condition. A disproportionately large proportion of guess responses for consistent objects viewed with high latency is also observed, along with a correspondingly low proportion of remember responses for consistent objects in the same latency condition.

Proceedings Article
01 Jan 2011
TL;DR: This work analyzes the latency present in several current multi-touch platforms, and describes a new custom system that reduces latency to an average of 30 ms while providing programmable haptic feedback to the user.
Abstract: During the past decade, multi-touch surfaces have emerged as valuable tools for collaboration, display, interaction, and musical expression. Unfortunately, they tend to be costly and often suffer from two drawbacks for music performance: (1) relatively high latency owing to their sensing mechanism, and (2) lack of haptic feedback. We analyze the latency present in several current multi-touch platforms, and we describe a new custom system that reduces latency to an average of 30 ms while providing programmable haptic feedback to the user. The paper concludes with a description of ongoing and future work.

Proceedings ArticleDOI
25 Jan 2011
TL;DR: This paper proposes a quality-of-service framework for optical network-on-chip based on frame-based arbitration and shows that the proposed approach achieves excellent differentiated bandwidth allocation with only simple hardware additions and low performance overheads.
Abstract: With the recent development in silicon photonics, researchers have developed optical network-on-chip (NoC) architectures that achieve both low latency and low power, which are beneficial for future large scale chip-multiprocessors (CMPs). However, none of the existing optical NoC architectures has quality-of-service (QoS) support, which is a desired feature of an efficient interconnection network. QoS support provides contending flows with differentiated bandwidths according to their priorities (or weights), which is crucial to account for application-specific communication patterns and provides bandwidth guarantees for real-time applications. In this paper, we propose a quality-of-service framework for optical network-on-chip based on frame-based arbitration. We show that the proposed approach achieves excellent differentiated bandwidth allocation with only simple hardware additions and low performance overheads. To the best of our knowledge, this is the first work that provides QoS support for optical network-on-chip.
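
Frame-based arbitration, as named in the abstract, can be illustrated as follows: within each frame of S slots, every contending flow is guaranteed slots in proportion to its weight. A minimal software sketch (not the paper's hardware arbiter):

    def build_frame(weights, frame_slots=16):
        """Assign the slots of one arbitration frame to flows in proportion to their
        weights (largest-remainder rounding), yielding differentiated bandwidth."""
        total = sum(weights.values())
        quotas = {f: frame_slots * w / total for f, w in weights.items()}
        slots = {f: int(q) for f, q in quotas.items()}
        leftover = frame_slots - sum(slots.values())
        for f in sorted(quotas, key=lambda f: quotas[f] - slots[f], reverse=True)[:leftover]:
            slots[f] += 1
        # Interleave grants round-robin so no flow waits a whole frame for its slots
        schedule, remaining = [], dict(slots)
        while len(schedule) < frame_slots:
            for f in weights:
                if remaining[f] > 0:
                    schedule.append(f)
                    remaining[f] -= 1
        return slots, schedule

    weights = {"A": 4, "B": 2, "C": 1}      # hypothetical flow priorities
    slots, schedule = build_frame(weights)
    print(slots)      # {'A': 9, 'B': 5, 'C': 2}  (16 slots split roughly 4:2:1)
    print(schedule)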

Proceedings ArticleDOI
01 May 2011
TL;DR: A new bi-modal asynchronous arbitration node is introduced for use as a building block in an adaptive asynchronous interconnection network, which dynamically reconfigures based on the traffic it receives, entering a special “single-channel-bias” mode when the other channel has no recent activity.
Abstract: A new bi-modal asynchronous arbitration node is introduced for use as a building block in an adaptive asynchronous interconnection network. The target network topology is a variant Mesh-of-Trees (MoT), combining a binary fan-out network (i.e. routing) and a binary fan-in network (i.e. arbitration) for each source-sink pair. The key feature of the new arbitration node is that it dynamically reconfigures based on the traffic it receives, entering a special “single-channel-bias” mode when the other channel has no recent activity. Arbitration is totally bypassed on the critical path, resulting in significantly lower node latency and, in high-traffic scenarios, improved throughput. The router nodes were implemented in IBM 90nm technology using ARM standard cells. SPICE simulations indicate that the bi-modal arbitration node provided significant reductions in latency (41.6%), and increased throughput (19.8%, in high-traffic single-channel scenarios), when in biased mode. Node reconfiguration required at most 338 ps. Simulations were then performed on two distinct MoT indirect networks, “baseline” and “adaptive” (the latter incorporating the new bi-modal node), on eight diverse synthetic benchmarks, using mixes of random and deterministic traffic. Improvements in system latency up to 19.8% and throughput up to 27.8% were obtained using the adaptive network. Overall end-to-end latencies, through 6 router nodes and 5 hops, of 1.8–2.8 ns (at 25% load) and throughputs of 0.27–1.8 Gigaflits/s (at saturation rate) were also observed.

Proceedings ArticleDOI
10 Apr 2011
TL;DR: An approximation algorithm is presented that maintains packet order and approximates the optimal scheduling within a factor of √2Nk with regard to the number of packets transmitted, in line with the best known approximation algorithm for set packing.
Abstract: Optical switches are widely considered as the most promising candidate to provide ultra-high speed interconnections. Due to the difficulty in implementing all-optical buffer, optical switches with electronic buffers have been proposed recently [1] [4] [5]. Among these switches, the Optical Cut-Through (OpCut) switch has the capability to achieve low latency and minimize optical-electronic-optical (O/E/O) conversions. In this paper, we consider packet scheduling in this switch with wavelength division multiplexing (WDM). Our goal is to maximize throughput and maintain packet order at the same time. While we prove that such an optimal scheduling problem is NP-hard and inapproximable in polynomial time within any constant factor by reducing it to the set packing problem, we present an approximation algorithm that maintains packet order and approximates the optimal scheduling within a factor of √2Nk with regard to the number of packets transmitted, where N is the switch size and k is the number of wavelengths multiplexed on each fiber. This result is in line with the best known approximation algorithm for set packing. Based on the approximation algorithm, we also give practical schedulers that can be implemented in fast optical switches. Simulation results show that the schedulers achieve close performance to the ideal WDM output-queued switch in terms of packet delay under various traffic models.


Proceedings ArticleDOI
22 Mar 2011
TL;DR: A wormhole input-queued 2-D mesh router was created to verify the capability of the router and to provide a comparative study with other implementations in FPGA technology, with different flit sizes.
Abstract: Network on Chip is an efficient on-chip communication architecture for SoC architectures. It enables the integration of a large number of computational and storage blocks on a single chip. The router is the basic element of a NoC, with multiple ports connecting to other routers and to a local IP core. This router architecture can later be used for building a NoC with a standard or arbitrary topology with low latency, high speed, and high peak performance. The low latency and high speed are achieved by giving each input port a routing function which runs in parallel with the link controller and with distributed arbiters. To evaluate our approach, a wormhole input-queued 2-D mesh router was created to verify the capability of our router. Various parameterized designs were synthesized to provide a comparative study with other implementations in FPGA technology, with different flit sizes.

Patent
01 Sep 2011
TL;DR: In this paper, the authors propose a distributed real-time tracking system that can process the signal in real time (connected via a high-speed network) with low latency, and that also stores the location estimate, device identification, and collection time for analytics and reporting.
Abstract: Services based on radio frequencies like GPS, RFID, GSM, CDMA, Wi-Fi or Bluetooth triangulation allow us to know, with some level of confidence, the position of a mobile device. However, those methods rely on special software and/or hardware. They also fail inside buildings since they cannot provide fine-grained values with a high level of accuracy. Each radio frequency device (mobile or not) emits a signal with a given strength and identification (MAC address or similar) into the air. The signal is captured by our nodes (at least 2 of the nodes), which measure the strength of the signal and therefore calculate the distance estimate. Since the signal is affected by electromagnetic noise, the number of people in the building, or even temperature, the suggested system self-corrects the values by knowing the distance between the nodes and measuring the signal strength. This gives the system a very low error margin of about 1%. Some applications rely on real-time tracking, which traditional tracking systems either cannot provide or can provide only by attaching tags to the devices being followed. The system presented here is distributed so that it can process the signal in real time (connected via a high-speed network) with low latency, and it also stores the location estimate, device identification, and collection time for analytics and reporting.
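
Distance from signal strength is commonly estimated with a log-distance path-loss model, and the self-correction described above can be sketched as recalibrating that model from nodes at known distances; the model and numbers below are generic assumptions, not the patented method itself.

    import math

    def estimate_distance(rssi_dbm, rssi_at_1m_dbm, path_loss_exp):
        """Log-distance path-loss model: RSSI(d) = RSSI(1 m) - 10*n*log10(d)."""
        return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10.0 * path_loss_exp))

    def calibrate_exponent(rssi_dbm, rssi_at_1m_dbm, known_distance_m):
        """Self-correction: two nodes at a known distance solve for the current
        path-loss exponent, absorbing noise, crowding, and temperature effects."""
        return (rssi_at_1m_dbm - rssi_dbm) / (10.0 * math.log10(known_distance_m))

    RSSI_1M = -40.0                                   # assumed reference power at 1 m
    n = calibrate_exponent(rssi_dbm=-68.0, rssi_at_1m_dbm=RSSI_1M, known_distance_m=10.0)
    print(round(n, 2))                                # 2.8: today's effective exponent

    # A device heard at -61 dBm by this node is then placed at roughly:
    print(round(estimate_distance(-61.0, RSSI_1M, n), 1), "m")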

Book ChapterDOI
18 Jul 2011
TL;DR: PaderMAC, a strobed-preamble MAC layer which supports cross-layer integration with an arbitrary opportunistic routing layer, is presented and compared to X-MAC in conjunction with path-based routing; the results show how PaderMAC reduces the preamble length, better balances the load, and further improves the end-to-end latency within the network.
Abstract: Modern medium access control (MAC) protocols for wireless sensor networks (WSN) focus on energy efficiency by switching a node's radio on only when necessary. The rendezvous problem this introduces is gracefully handled by modern asynchronous approaches to WSN MACs, e.g. X-MAC, using strobed preambles. Nevertheless, most MAC layers ignore the possible benefits in energy consumption and end-to-end latency that supporting opportunistic routing can provide. In this paper we present PaderMAC, a strobed-preamble MAC layer which supports cross-layer integration with an arbitrary opportunistic routing layer. This work specifies the PaderMAC protocol, explains its implementation using TinyOS and the MAC layer architecture (MLA), and presents the results of a testbed performance study. The study compares PaderMAC in conjunction with opportunistic routing to X-MAC in conjunction with path-based routing and shows how PaderMAC reduces the preamble length, better balances the load, and further improves the end-to-end latency within the network.
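
The strobed-preamble rendezvous PaderMAC builds on (in the style of X-MAC) can be sketched as a sender loop that alternates short preamble strobes with listening gaps; with opportunistic routing, whichever suitable forwarder wakes first may answer, cutting the preamble short. The simulation below is illustrative, not PaderMAC itself.

    import random

    def strobed_preamble_send(forwarder_wake_offsets_ms, strobe_ms=2, gap_ms=3, max_preamble_ms=500):
        """Simulate strobed-preamble rendezvous: strobes are sent until some forwarder
        is awake during a listening gap and early-acks, then the data frame follows."""
        t = 0
        while t < max_preamble_ms:
            t += strobe_ms                               # transmit one short strobe
            awake = [f for f, wake in forwarder_wake_offsets_ms.items() if wake <= t]
            if awake:                                    # an early ACK cuts the preamble short
                return {"forwarder": random.choice(awake), "preamble_ms": t}
            t += gap_ms                                  # pause and listen for an ACK
        return None                                      # no forwarder woke up in time

    random.seed(0)
    # Three candidate forwarders with independent (assumed) wake-up times:
    wakeups = {"B": 37, "C": 12, "D": 80}
    print(strobed_preamble_send(wakeups))
    # With opportunistic forwarding, the earliest awake candidate (C) answers after
    # ~12 ms of preamble, instead of strobing until one fixed path-based parent wakes.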

Book ChapterDOI
02 Jan 2011
TL;DR: An integrated solution that combines routing, clustering and medium access control operations while basing them on a common meshed tree algorithm is presented to achieve an efficient airborne surveillance network of unmanned aerial vehicles.
Abstract: In this paper we present an integrated solution that combines routing, clustering and medium access control operations while basing them on a common meshed tree algorithm. The aim is to achieve an efficient airborne surveillance network of unmanned aerial vehicles, wherein any loss of captured data is kept to the minimum while maintaining low latency in packet and data delivery. Surveillance networks of varying sizes were evaluated with varying numbers of senders, while the physical layer was maintained invariant.

Journal ArticleDOI
TL;DR: This work defines the most important constraints for practical applications, introduces an algorithm which tries to fulfill all requirements as well as possible, and compares it to state-of-the-art sound source localization approaches.
Abstract: Sound source localization algorithms determine the physical position of a sound source with respect to a listener. For practical applications, a localization algorithm design has to take into account real-world conditions like multiple active sources, reverberation, and noise. The application can impose additional constraints on the algorithm, e.g., a requirement for low latency. This work defines the most important constraints for practical applications, introduces an algorithm which tries to fulfill all requirements as well as possible, and compares it to state-of-the-art sound source localization approaches.

Patent
15 Apr 2011
TL;DR: In this article, a low-latency interface between a customer server, such as a server that employs algorithmic trading methods to generate buy and sell orders for securities, and a brokerage server that validates such securities trading orders is optimized for handling the trading orders.
Abstract: A network enables monitors, trading platforms and libraries to share information about customers' trading activities and locally recalculate customer trading limits resulting from these trading activities. A low-latency interface between a customer server, such as a server that employs algorithmic trading methods to generate buy and sell orders for securities, and a brokerage server that validates such securities trading orders is optimized for handling the securities trading orders. The interface supports a trading command set specifically designed for orders from customer trading application programs, and the interface formats received trading commands into compact messages that are sent over a high-speed communication link to the brokerage server. The interface receives order acknowledgement messages and the like from the brokerage server and invokes callback routines in the customer trading application program to report status information.

Journal ArticleDOI
TL;DR: An improved BP algorithm with pixel classification is presented that outperforms classical BP while reducing the number of memory accesses, together with an adaptive message compression technique with a low performance penalty that greatly reduces the memory traffic.