
Showing papers on "Latency (engineering) published in 2011"


Journal ArticleDOI
TL;DR: In this paper, the authors point out that bufferbloat, the existence of excessively large and frequently full buffers inside the network, is a major cause of unnecessary latency and poor system performance.
Abstract: Today’s networks are suffering from unnecessary latency and poor system performance. The culprit is bufferbloat, the existence of excessively large and frequently full buffers inside the network. Large buffers have been inserted all over the Internet without sufficient thought or testing. They damage or defeat the fundamental congestion-avoidance algorithms of the Internet’s most common transport protocol. Long delays from bufferbloat are frequently attributed incorrectly to network congestion, and this misinterpretation of the problem leads to the wrong solutions being proposed.
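The scale of the added delay is simple arithmetic: a full drop-tail buffer contributes queueing delay equal to its size divided by the link's drain rate. A minimal sketch with illustrative numbers (not figures from the article):

```python
def queueing_delay_ms(buffer_bytes: int, link_rate_bps: float) -> float:
    """Delay added by a full drop-tail buffer: buffered bits / drain rate."""
    return buffer_bytes * 8 / link_rate_bps * 1000

# A 256 KB buffer draining over a 1 Mbit/s uplink adds roughly two seconds.
delay = queueing_delay_ms(256 * 1024, 1_000_000)
print(round(delay))  # → 2097 (ms)
```

Delays of this magnitude dwarf typical propagation times, which is why they are so easily mistaken for congestion.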

421 citations


Journal ArticleDOI
TL;DR: Results of all three studies showed an inverse relation between rate and latency, indicating that latency might be a useful measure of responding when repeated occurrences of behavior are undesirable or impractical to arrange.
Abstract: Dependent variables in research on problem behavior typically are based on measures of response repetition, but these measures may be problematic when behavior poses high risk or when its occurrence terminates a session. We examined response latency as the index of behavior during assessment. In Experiment 1, we compared response rate and latency to the first response under acquisition and maintenance conditions. In Experiment 2, we compared data from existing functional analyses when graphed as rate versus latency. In Experiment 3, we compared results from pairs of independent functional analyses. Sessions in the first analysis were terminated following the first occurrence of behavior, whereas sessions in the second analysis lasted for 10 min. Results of all three studies showed an inverse relation between rate and latency, indicating that latency might be a useful measure of responding when repeated occurrences of behavior are undesirable or impractical to arrange.
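The inverse relation follows from how the two measures behave within a fixed-length session: shorter latencies to the first response tend to co-occur with higher response rates. A toy illustration with hypothetical data (not the study's):

```python
# Hypothetical session records: (latency to first response in s,
# responses emitted in a 10-minute session).
sessions = [(5, 42), (30, 20), (120, 6), (400, 1)]

latencies = [lat for lat, _ in sessions]
rates = [n / 10 for _, n in sessions]  # responses per minute

# Check the inverse relation: sorted by increasing latency,
# rates should be non-increasing.
ordered = sorted(zip(latencies, rates))
assert all(r1 >= r2 for (_, r1), (_, r2) in zip(ordered, ordered[1:]))
print("inverse relation holds for this sample")
```

This is only a consistency check on made-up numbers; the paper's contribution is showing the relation empirically across acquisition, maintenance, and functional-analysis data.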

145 citations


Journal ArticleDOI
TL;DR: The robustness of RPE signaling by these neurons suggests that actor-critic models of reinforcement learning in which the PFC and particularly the caudate are considered primarily to be “actors” rather than “critics,” should be reconsidered to include a prominent evaluative role for these structures.
Abstract: Learning can be motivated by unanticipated success or unexpected failure. The former encourages us to repeat an action or activity, whereas the latter leads us to find an alternative strategy. Understanding the neural representation of these unexpected events is therefore critical to elucidate learning-related circuits. We examined the activity of neurons in the lateral prefrontal cortex (PFC) and caudate nucleus of monkeys as they performed a trial-and-error learning task. Unexpected outcomes were widely represented in both structures, and neurons driven by unexpectedly negative outcomes were as frequent as those activated by unexpectedly positive outcomes. Moreover, both positive and negative reward prediction errors (RPEs) were represented primarily by increases in firing rate, unlike the manner in which dopamine neurons have been observed to reflect these values. Interestingly, positive RPEs tended to appear with shorter latency than negative RPEs, perhaps reflecting the mechanism of their generation. Last, in the PFC but not the caudate, trial-by-trial variations in outcome-related activity were linked to the animals' subsequent behavioral decisions. More broadly, the robustness of RPE signaling by these neurons suggests that actor-critic models of reinforcement learning in which the PFC and particularly the caudate are considered primarily to be "actors" rather than "critics," should be reconsidered to include a prominent evaluative role for these structures.

100 citations


Journal ArticleDOI
TL;DR: This work proposes an alternative approach for statistical analyses of latency outcomes that makes fewer distributional assumptions and adequately handles results of trials in which the performance measure did not occur within the trial time.

96 citations


Journal ArticleDOI
01 Jan 2011-Methods
TL;DR: This work describes in detail an experimental protocol for the generation of HIV-1 latency using human primary CD4+ T cells, and presents the salient points of other latency models in the field, along with key findings arising from each model.

93 citations


Posted Content
TL;DR: In this paper, a parallel SC polar decoder is proposed to reduce the decoding latency by 50% with pipelining and parallel processing schemes, and a sub-structure sharing approach is employed to design the merged processing element (PE).
Abstract: Polar codes have become one of the most favorable capacity achieving error correction codes (ECC) along with their simple encoding method. However, among the very few prior successive cancellation (SC) polar decoder designs, the required long code length makes the decoding latency high. In this paper, conventional decoding algorithm is transformed with look-ahead techniques. This reduces the decoding latency by 50%. With pipelining and parallel processing schemes, a parallel SC polar decoder is proposed. Sub-structure sharing approach is employed to design the merged processing element (PE). Moreover, inspired by the real FFT architecture, this paper presents a novel input generating circuit (ICG) block that can generate additional input signals for merged PEs on-the-fly. Gate-level analysis has demonstrated that the proposed design shows advantages of 50% decoding latency and twice throughput over the conventional one with similar hardware cost.
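The widely cited latency of a conventional SC decoder is 2N − 2 clock cycles for code length N; the sketch below takes N − 1 cycles for the look-ahead design as an assumption consistent with the claimed halving (the paper's exact cycle counts may differ):

```python
def sc_latency_cycles(n: int) -> int:
    """Conventional successive-cancellation decoding latency: 2N - 2 cycles."""
    return 2 * n - 2

def lookahead_latency_cycles(n: int) -> int:
    """Look-ahead pre-computes both candidate bit decisions, assumed to
    halve the latency to N - 1 cycles."""
    return n - 1

n = 1024
saving = 1 - lookahead_latency_cycles(n) / sc_latency_cycles(n)
print(f"N={n}: {sc_latency_cycles(n)} -> {lookahead_latency_cycles(n)} cycles "
      f"({saving:.0%} saved)")  # → 2046 -> 1023 cycles (50% saved)
```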

87 citations


Journal ArticleDOI
TL;DR: This paper presents a link architecture based on the high-speed SerDes transceivers embedded in Xilinx Virtex 5 and Spartan 6 Field Programmable Gate Arrays (FPGAs), discusses the latency performance of the architecture, and shows how to make it constant and predictable.
Abstract: Most of the off-the-shelf high-speed Serializer-Deserializer (SerDes) chips do not keep the same latency through the data-path after a reset, a loss of lock, or a power cycle. This implementation choice is often made because fixed-latency operation requires dedicated circuitry that is usually not needed for most telecom and datacom applications. However, timing synchronization applications and trigger systems of high-energy physics experiments would benefit from fixed-latency links. In this paper, we present a link architecture based on the high-speed SerDes transceivers embedded in Xilinx Virtex 5 and Spartan 6 Field Programmable Gate Arrays (FPGAs). We discuss the latency performance of our architecture and show how we made it constant and predictable. We also present test results showing the fixed latency of the link, and we offer some guidelines for exploiting our solution with other SerDes devices.

81 citations


Proceedings Article
27 Apr 2011
TL;DR: This paper presents an M2M system architecture based on LTE/LTE-A, highlights the delays associated with each part of the system, and describes proposals for how the latency can be further reduced.
Abstract: Machine-to-machine (M2M) communication has attracted a lot of interest in the mobile communication industry and is undergoing standardization in 3GPP. Of particular interest is LTE-Advanced support for various M2M service requirements and efficient management and handling of a huge number of machines as mobile subscribers. In addition to higher throughput, one of the main advantages of LTE/LTE-A over previous cellular networks is reduced transmission latency, which makes this type of network very attractive for real-time mobile M2M communication scenarios. This paper presents an M2M system architecture based on LTE/LTE-A and highlights the delays associated with each part of the system. Three real-time M2M applications are analyzed and the main latency bottlenecks are identified. Proposals for how the latency can be further reduced are described.

76 citations


Patent
David E. Taylor1, Scott Parsons1
09 Dec 2011
TL;DR: In this paper, an integrated order management engine is proposed that reduces the latency associated with managing multiple orders to buy or sell a plurality of financial instruments, and an integrated trading platform that provides low latency communications between various platform components.
Abstract: An integrated order management engine is disclosed that reduces the latency associated with managing multiple orders to buy or sell a plurality of financial instruments. Also disclosed is an integrated trading platform that provides low latency communications between various platform components. Such an integrated trading platform may include a trading strategy offload engine.

75 citations


Journal ArticleDOI
TL;DR: This review will focus on the telomere integration of HHV-6, the potential viral and cellular genes that mediate integration, and the clinical impact on the host.

69 citations


Patent
30 Dec 2011
TL;DR: In this article, the authors describe a method, apparatus, and system for reducing system latency caused by switching memory page permission views between programs while still protecting critical regions of memory from malware attacks.
Abstract: Various embodiments of this disclosure describe a method, apparatus, and system for reducing system latency caused by switching memory page permission views between programs while still protecting critical regions of memory from malware attacks. Other embodiments may be disclosed and claimed.

Patent
10 Aug 2011
TL;DR: In this article, a beacon time, offset from global time by the latency, is received at the sensor node from an upstream node; the latency, the global time, and a corresponding local time are then determined at the node.
Abstract: Determining time latency at a sensor node in a mesh network. A beacon time is received at the sensor node from an upstream node, the beacon time being offset from global time by the latency. The latency, the global time, and a corresponding local time are determined at the sensor node.

Patent
05 Oct 2011
TL;DR: In this article, the authors proposed a method to automatically replicate virtual machine image (VM) files on secondary VM computing devices, from a primary VM computing device, by constantly reviewing the operating parameter values (e.g., cost of resources, power consumption, etc.) of a number of secondaryVM computing devices available of storing VM image replicas.
Abstract: Systems and methods are disclosed herein to automatically replicate virtual machine (VM) image files on secondary VM computing devices from a primary VM computing device. The secondary VM computing devices are automatically selected by constantly reviewing the operating parameter values (e.g., cost of resources, power consumption, etc.) of a number of secondary VM computing devices available for storing VM image replicas. The replica of the primary VM image is stored in secondary VM computing devices in geographically disparate cloud locations. The primary VM image is automatically broken into constituent data blocks stored in an active index, which is compared against a stale index of data blocks. When an update is detected in the primary VM image, the comparison of indices will indicate that there is new data. Only the new data is used to update the secondary VM images, thereby reducing network traffic and latency issues.
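The index-comparison mechanism can be sketched in a few lines; the 4 KB block size and SHA-256 digests are assumptions for illustration, not details taken from the patent:

```python
import hashlib

BLOCK = 4096  # assumed fixed block size

def block_index(image: bytes) -> list[str]:
    """Break an image into fixed-size blocks and record one digest per block."""
    return [hashlib.sha256(image[i:i + BLOCK]).hexdigest()
            for i in range(0, len(image), BLOCK)]

def changed_blocks(stale: list[str], active: list[str]) -> list[int]:
    """Compare the stale index against the active one; only these block
    positions need to ship to the secondary replicas."""
    return [i for i, h in enumerate(active) if i >= len(stale) or stale[i] != h]

old = b"A" * BLOCK * 3
new = b"A" * BLOCK + b"B" * BLOCK + b"A" * BLOCK
print(changed_blocks(block_index(old), block_index(new)))  # → [1]
```

Only block 1 differs, so a replica already holding the old image would receive a single 4 KB block rather than the whole file.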

Proceedings ArticleDOI
02 Nov 2011
TL;DR: Examination of the performance of MPLS in Microsoft's online service network (MSN), a well-provisioned multi-continent production network connecting tens of data centers, finds that many paths experience significantly inflated latencies.
Abstract: While MPLS has been extensively deployed in recent years, little is known about its behavior in practice. We examine the performance of MPLS in Microsoft's online service network (MSN), a well-provisioned multi-continent production network connecting tens of data centers. Using detailed traces collected over a 2-month period, we find that many paths experience significantly inflated latencies. We correlate occurrences of latency inflation with routers, links, and DC-pairs. This analysis sheds light on the causes of latency inflation and suggests several avenues for alleviating the problem.

Patent
12 Jul 2011
TL;DR: In this paper, a method and processor architecture for achieving a high level of concurrency and latency hiding in an "infinite-thread processor architecture" with a limited number of hardware threads is disclosed.
Abstract: A method and processor architecture for achieving a high level of concurrency and latency hiding in an “infinite-thread processor architecture” with a limited number of hardware threads is disclosed. A preferred embodiment defines “fork” and “join” instructions for spawning new context-switched threads. Context switching is used to hide the latency of both memory-access operations (i.e., loads and stores) and arithmetic/logical operations. When an operation executing in a thread incurs a latency having the potential to delay the instruction pipeline, the latency is hidden by performing a context switch to a different thread. When the result of the operation becomes available, a context switch back to that thread is performed to allow the thread to continue.
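The latency-hiding idea can be illustrated in software with cooperative coroutines standing in for hardware threads; this is an analogy to the patent's scheme, not its hardware design:

```python
def thread(name, ops):
    """Each yield models a long-latency operation; yielding is the point
    where the scheduler performs a context switch."""
    for op in ops:
        yield f"{name}:{op}"

def scheduler(threads):
    """Round-robin context switching: when one thread stalls on a latency,
    run another instead of idling the pipeline."""
    trace, pending = [], list(threads)
    while pending:
        t = pending.pop(0)
        try:
            trace.append(next(t))
            pending.append(t)  # switch away; resume once its latency elapses
        except StopIteration:
            pass               # thread finished (the "join" point)
    return trace

trace = scheduler([thread("t0", ["load", "add"]), thread("t1", ["load", "store"])])
print(trace)  # → ['t0:load', 't1:load', 't0:add', 't1:store']
```

The interleaving shows the effect: while t0 waits on its load, t1 makes progress, so neither latency stalls the whole system.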

Proceedings ArticleDOI
01 Dec 2011
TL;DR: This work proposes a low latency HTTP streaming approach using HTTP chunked encoding, which enables the server to transmit partial fragments before the entire video fragment is published, and develops an analytical model to quantify and compare the live latencies in three HTTP streaming approaches.
Abstract: Hypertext transfer protocol (HTTP) based streaming solutions for live video and video on demand (VOD) applications have become available recently. However, the existing HTTP streaming solutions cannot provide a low latency experience due to the fact that inherently in all of them, latency is tied to the duration of the media fragments that are individually requested and obtained over HTTP. We propose a low latency HTTP streaming approach using HTTP chunked encoding, which enables the server to transmit partial fragments before the entire video fragment is published. We develop an analytical model to quantify and compare the live latencies in three HTTP streaming approaches. Then, we present the details of our experimental setup and implementation. Both the analysis and experimental results show that the chunked encoding approach is capable of reducing the live latency to one to two chunk durations and that the resulting live latency is independent of the fragment duration.
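The comparison in the analytical model can be caricatured in a few lines; the factor of two stands in for the paper's one-to-two range and is an assumption of this sketch:

```python
def live_latency_s(unit_duration_s: float, units_in_flight: float = 2.0) -> float:
    """Toy model: live latency is tied to the duration of the unit a client
    must wait for. Conventional HTTP streaming waits on whole fragments;
    chunked transfer encoding waits only on chunks."""
    return units_in_flight * unit_duration_s

fragment_s, chunk_s = 6.0, 0.5
print(live_latency_s(fragment_s))  # → 12.0 s when latency is tied to fragments
print(live_latency_s(chunk_s))     # → 1.0 s with chunked encoding
```

The key property matches the abstract's conclusion: with chunked encoding the result depends only on the chunk duration, so it is independent of how long the fragments are.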

Proceedings ArticleDOI
11 Apr 2011
TL;DR: Mace is provably equivalent to maximum system latency in a (potentially complex, multi-node) distributed event-based system, making it well suited to latency-driven cost estimation problems such as plan selection and distributed operator placement.
Abstract: A distributed event processing system consists of one or more nodes (machines), and can execute a directed acyclic graph (DAG) of operators called a dataflow (or query), over long-running high-event-rate data sources. An important component of such a system is cost estimation, which predicts or estimates the “goodness” of a given input, i.e., operator graph and/or assignment of individual operators to nodes. Cost estimation is the foundation for solving many problems: optimization (plan selection and distributed operator placement), provisioning, admission control, and user reporting of system misbehavior. Latency is a significant user metric in many commercial real-time applications. Users are usually interested in quantiles of latency, such as worst-case or 99th percentile. However, existing cost estimation techniques for event-based dataflows use metrics that, while they may have the side-effect of being correlated with latency, do not directly or provably estimate latency. In this paper, we propose a new cost estimation technique using a metric called Mace (Maximum cumulative excess). Mace is provably equivalent to maximum system latency in a (potentially complex, multi-node) distributed event-based system. The close relationship to latency makes Mace ideal for addressing the problems described earlier. Experiments with real-world datasets on Microsoft StreamInsight deployed over 1–13 nodes in a data center validate our ability to closely estimate latency (within 4%), and the use of Mace for plan selection and distributed operator placement.
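The metric's name suggests a simple backlog computation; this sketch assumes per-tick arrival and service counts, and the paper's exact definition may differ:

```python
def mace(arrivals, service):
    """Maximum cumulative excess: the largest backlog of unprocessed events
    over time, which tracks worst-case system latency in this toy model."""
    excess = peak = 0
    for a, s in zip(arrivals, service):
        excess = max(0, excess + a - s)  # backlog cannot go negative
        peak = max(peak, excess)
    return peak

# Events arriving faster than they are processed build excess, then drain.
print(mace([5, 8, 2, 0, 0], [4, 4, 4, 4, 4]))  # → 5
```

Intuitively, a backlog of 5 events at a service rate of 4 per tick bounds how long the newest event can wait, which is the link to worst-case latency.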

Journal ArticleDOI
TL;DR: It is concluded that, with advanced telepresence, sophisticated robots could be operated with high cognition throughout a lunar hemisphere by astronauts within a station at an Earth–Moon L1 or L2 venue.

Proceedings ArticleDOI
24 Jul 2011
TL;DR: The goal of this paper is to model the communication latency among distributed intelligent agents because latency can have a significant impact on the higher-level capabilities of a smart grid installation, in particular any protection or coordination functions.
Abstract: The goal of this paper is to model the communication latency among distributed intelligent agents because latency 1) is not zero, 2) is not constant, and 3) can have a significant impact on the higher-level capabilities of a smart grid installation, in particular any protection or coordination functions. Communication latency is considered an inherent parameter which affects the performance of the communication network — the backbone of the multi-agent system. Due to many stochastic factors in a communication environment, communication latency will be best modeled as a random parameter with a probability density function. The latency of sending/receiving messages among distributed intelligent agents is randomly generated based on user input data. In the numerical studies, two abnormal events occurring in the modified IEEE 34 node test feeder will be simulated to validate the proposed methodology. The simulation will measure how fast the smart grid responds to the disturbances when considering fixed latency, as well as random latency.
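Treating latency as a random variable with a probability density function can be sketched as follows; the normal distribution and its parameters are assumptions for illustration, not the paper's fitted model:

```python
import random

random.seed(7)  # reproducible sampling for the demo

def agent_latency_ms(mean=20.0, jitter=5.0):
    """Message latency drawn from an assumed Gaussian PDF rather than taken
    as a constant; clamped at zero since negative delay is meaningless."""
    return max(0.0, random.gauss(mean, jitter))

samples = [agent_latency_ms() for _ in range(1000)]
avg = sum(samples) / len(samples)
print(round(avg))  # close to the assumed 20 ms mean
```

A protection or coordination study would then feed each simulated message through a draw like this instead of a fixed delay, which is exactly the fixed-versus-random comparison the paper runs on the IEEE 34-node feeder.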

Journal ArticleDOI
TL;DR: The role of ejecting and retaining currents in determining the time‐course of neuronal responses to microelectrophoretically applied drugs was investigated and changes were observed: the response latency became progressively shorter, the plateau became higher, and the recovery time was prolonged.
Abstract: The role of ejecting and retaining currents in determining the time-course of neuronal responses to microelectrophoretically applied drugs (acetylcholine, glutamate, noradrenaline, 5-hydroxytryptamine, and mescaline) was investigated. Comparing the parameters of excitatory responses to ejecting currents of successively increasing intensity, the following changes were observed: the response latency became progressively shorter, the plateau became higher, and the recovery time was prolonged. An increase in the intensity or duration of the pre-ejection retaining current resulted in the prolongation of the response latency and the latency to plateau, but did not alter the plateau itself. An increase in the intensity of the post-ejection retaining current reduced the recovery time of the response.

Proceedings Article
25 May 2011
TL;DR: The results can be used to better quantify the effects of different factors on moving objects in interactive scenarios and aid the designers in selecting target sizes and velocities, as well as in adjusting smoothing, prediction and compensation algorithms.
Abstract: In this paper we describe how human target following performance changes in the presence of latency, latency variations, and signal dropouts. Many modern games and game systems allow for networked, remote participation. In such networks latency, variations and dropouts are commonly encountered factors. Our user study reveals that all of the investigated factors decrease tracking performance. The errors increase very quickly for latencies of over 110 ms, for latency jitters above 40 ms, and for dropout rates of more than 10%. The effects of target velocity on errors are close to linear, and transverse errors are smaller than longitudinal ones. The results can be used to better quantify the effects of different factors on moving objects in interactive scenarios. They also aid the designers in selecting target sizes and velocities, as well as in adjusting smoothing, prediction and compensation algorithms.

Proceedings ArticleDOI
01 Nov 2011
TL;DR: A novel dataset and algorithms for reducing the latency in recognizing the action are presented and a classifier based on logistic regression that uses canonical poses to identify the action is trained.
Abstract: An important aspect in interactive, action-based interfaces is the latency in recognizing the action. High latency will cause the system's feedback to lag behind user actions, reducing the overall quality of the user experience. This paper presents a novel dataset and algorithms for reducing the latency in recognizing the action. Latency in classification is minimized with a classifier based on logistic regression that uses canonical poses to identify the action. The classifier is trained from the dataset using a learning formulation that makes it possible to train the classifier to reduce latency. The classifier is compared against both a Bag of Words and a Conditional Random Field classifier and is found to be superior in both pre-segmented and on-line classification tasks.

Journal ArticleDOI
01 Jan 2011-Methods
TL;DR: The identification of CpG islands (CpGIs) within the HIV-1 provirus and the study of their differential methylation patterns in several HIV-1 latency models using bisulfite-mediated methylcytosine mapping are discussed.

Patent
06 Oct 2011
TL;DR: In this article, a method for managing the latency of memory commands is described, where each memory operation command is associated with one of a plurality of memory devices and a cumulative latency estimate is maintained for each memory device.
Abstract: Methods and apparatus for managing latency of memory commands are disclosed. An example method includes receiving memory operation commands for execution by a data storage device, each memory operation command being associated, for execution, with one of a plurality of memory devices. The example method also includes maintaining, for each memory device, a respective cumulative latency estimate. The example method also includes, for each memory operation command, when received by the memory controller, comparing the respective cumulative latency estimate of the associated memory device with a latency threshold for the received memory operation command. In the event the cumulative latency estimate is at or below the latency threshold, the received memory operation command is provided to a respective command queue operatively coupled with the respective memory device. In the event the cumulative latency estimate is above the latency threshold, the received memory operation command is returned to a host device.
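The dispatch rule the abstract describes can be sketched as follows; the class and field names are illustrative, not taken from the patent:

```python
from collections import deque

class MemoryDevice:
    def __init__(self):
        self.queue = deque()
        self.cumulative_latency = 0.0  # running estimate for queued work

def dispatch(device, command_latency, threshold):
    """Queue the command if the device's cumulative latency estimate is within
    the command's latency threshold; otherwise return it to the host."""
    if device.cumulative_latency <= threshold:
        device.queue.append(command_latency)
        device.cumulative_latency += command_latency
        return "queued"
    return "returned_to_host"

dev = MemoryDevice()
print(dispatch(dev, 2.0, threshold=5.0))  # → queued
print(dispatch(dev, 2.0, threshold=5.0))  # → queued (estimate now 4.0)
print(dispatch(dev, 2.0, threshold=3.0))  # → returned_to_host (4.0 > 3.0)
```

The host can then retry elsewhere or later, so no single command waits behind an arbitrarily deep queue.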

Proceedings ArticleDOI
11 Dec 2011
TL;DR: An immersive simulation system that improves upon current latency measurement and minimization techniques; visualization and application-level control of latency in the VE were implemented using the XVR platform.
Abstract: System latency (time delay) and its visible consequences are fundamental Virtual Environment (VE) deficiencies that can hamper user perception and performance. This paper presents an immersive simulation system which improves upon current latency measurement and minimization techniques. The hardware used for latency measurement and minimization is assembled from low-cost, portable equipment, most of it commonly found in an academic facility, without reduction in measurement accuracy. A custom-made mechanism for measuring and minimizing end-to-end head-tracking latency in an immersive VE is assembled. The mechanism is based on an oscilloscope comparing two signals. One is generated by the head-tracker movement via a shaft encoder attached to a servo motor moving the tracker. The other signal is generated by the visual consequences of this movement in the VE, using a photodiode attached to the computer monitor. Visualization and application-level control of latency in the VE were implemented using the XVR platform. The minimization process resulted in an almost 50% reduction of the initially measured latency. The description of the mechanism by which VE latency is measured and minimized will be essential to guide system countermeasures such as predictive compensation.

Proceedings ArticleDOI
18 Nov 2011
TL;DR: A novel structure for on-chip networks, named Agent-based Network-on-Chip (ANoC), is presented to diagnose the congested areas and an efficient Congestion-Aware Selection (CAS) method is proposed to reduce overall network latency.
Abstract: Congestion in on-chip networks may cause many drawbacks in multiprocessor systems including throughput reduction, increase in latency, and additional power consumption. Furthermore, conventional congestion control methods, employed for on-chip networks, cannot efficiently collect congestion information and distribute them over the on-chip network. In this paper, we present a novel structure for on-chip networks, named Agent-based Network-on-Chip (ANoC), to diagnose the congested areas. In addition to the presented structure, an efficient Congestion-Aware Selection (CAS) method is proposed to reduce overall network latency. CAS is capable of selecting an appropriate output channel to route packets along a less congested path. 29% average and 35% maximum latency reduction are achieved on SPLASH-2 and PARSEC benchmarks running on a 36-core Chip Multi-Processor.
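The selection step of CAS reduces to choosing the least-congested permitted output channel; the congestion metric and channel names here are assumed placeholders, not details from the paper:

```python
def select_output(candidates, congestion):
    """Congestion-Aware Selection sketch: among the output channels the
    routing function permits, pick the one agents report as least congested."""
    return min(candidates, key=lambda ch: congestion[ch])

# Congestion levels (0 = idle, 1 = saturated) gathered by the agents.
congestion = {"north": 0.7, "east": 0.2, "local": 0.9}
print(select_output(["north", "east"], congestion))  # → east
```

The interesting part of ANoC is how agents collect and distribute these values across the chip; the selection itself is this one comparison per packet.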

Journal ArticleDOI
TL;DR: It is concluded that only IE promoter activation can efficiently precede latency establishment and that this activation is likely to occur through a VP16-independent mechanism.
Abstract: Herpes simplex virus (HSV) type-1 establishes lifelong latency in sensory neurones and it is widely assumed that latency is the consequence of a failure to initiate virus immediate-early (IE) gene expression. However, using a Cre reporter mouse system in conjunction with Cre-expressing HSV-1 recombinants we have previously shown that activation of the IE ICP0 promoter can precede latency establishment in at least 30% of latently infected cells. During productive infection of non-neuronal cells, IE promoter activation is largely dependent on the transactivator VP16 a late structural component of the virion. Of significance, VP16 has recently been shown to exhibit altered regulation in neurones; where its de novo synthesis is necessary for IE gene expression during both lytic infection and reactivation from latency. In the current study, we utilized the Cre reporter mouse model system to characterize the full extent of viral promoter activity compatible with cell survival and latency establishment. In contrast to the high frequency activation of representative IE promoters prior to latency establishment, cell marking using a virus recombinant expressing Cre under VP16 promoter control was very inefficient. Furthermore, infection of neuronal cultures with VP16 mutants reveals a strong VP16 requirement for IE promoter activity in non-neuronal cells, but not sensory neurones. We conclude that only IE promoter activation can efficiently precede latency establishment and that this activation is likely to occur through a VP16-independent mechanism.

Patent
Andrew M. Magruder1, Lex N. Bayer1
20 Oct 2011
TL;DR: The LATENCY PAYMENT SETTLEMENT APPARATUSES, METHODS and Systems (LPS) as mentioned in this paper transforms latency payment request inputs via LPS components into latency payment requests.
Abstract: The LATENCY PAYMENT SETTLEMENT APPARATUSES, METHODS AND SYSTEMS (“LPS”) transforms latency payment request inputs via LPS components into latency payment requests. In one embodiment, a method is disclosed comprising obtaining a latency payment method request and determining a latency payment period associated with the latency payment method request. The method includes determining a consumer item currency amount by applying a currency conversion factor to the merchant item currency amount. The method also determines a latency buffer amount based on the latency payment method request, generates a latency payment request by summing the latency buffer amount and the consumer item currency amount and structuring the summed amount according to the latency payment period, and provides the latency payment request. In some embodiments, LPS may determine the latency payment request according to maximized remittance aspects associated with a consumer-specified payment method so as to optimize system-wide remittance results.

Book ChapterDOI
09 Jul 2011
TL;DR: The goal of this study was to determine the level at which touch screen latency becomes annoying for common tablet tasks and to show levels of user ratings by latency duration.
Abstract: The goal of this study was to determine the level at which touch screen latency becomes annoying for common tablet tasks. Two types of touch screen latency were manipulated for three applications: Web page browsing, photo viewing, and ebook reading. Initial latency conditions involved an initial delay in the screen’s visual response to touch inputs but with no delay after the beginning of a touch input. Continuous latency involved continuous delay for the duration of a touch input. Both types were tested from 80 to 780 ms. Touch inputs included resizing with multitouch input, panning, scrolling, zooming, and page turning. Results showed a statistically significant main effect for application, but differences were small. Continuous and initial latency showed little difference in ratings except with ebook reading. Trend graphs show levels of user ratings by latency duration.

Proceedings ArticleDOI
06 Dec 2011
TL;DR: ASAP is introduced, a new naming and transport protocol that reduces latency by shortcutting DNS requests and eliminating TCP's three-way handshake, while ensuring the key security property of verifiable provenance of client requests.
Abstract: For interactive networked applications like web browsing, every round-trip time (RTT) matters. We introduce ASAP, a new naming and transport protocol that reduces latency by shortcutting DNS requests and eliminating TCP's three-way handshake, while ensuring the key security property of verifiable provenance of client requests. ASAP eliminates between one and two RTTs, cutting the delay of small requests by up to two-thirds.
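The savings are simple RTT accounting; the 50 ms RTT and the three-RTT baseline (DNS lookup, TCP handshake, request/response) are assumptions for illustration:

```python
RTT_MS = 50  # assumed round-trip time

def request_delay_ms(rtts_eliminated: int, baseline_rtts: int = 3) -> int:
    """Small-request delay in RTTs: a cold request costs roughly one RTT each
    for DNS, the TCP handshake, and the request/response exchange; ASAP's
    shortcuts remove one to two of those."""
    return (baseline_rtts - rtts_eliminated) * RTT_MS

print(request_delay_ms(0), request_delay_ms(2))  # → 150 50
```

Going from three RTTs to one is the "up to two-thirds" reduction the abstract cites for small requests.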