
Showing papers on "Latency (engineering) published in 2005"


Proceedings ArticleDOI
13 Mar 2005
TL;DR: The paper develops two new algorithms, the global schedule algorithm (GSA) and the fast path algorithm (FPA), to control and exploit the presence of multiple schedules in order to reduce energy consumption and latency in large sensor networks.
Abstract: Recently, several MAC protocols, such as S-MAC and T-MAC, have exploited scheduled sleep/wakeup cycles to conserve energy in sensor networks. Until now, most protocols have assumed all nodes in the network were configured to follow the same schedule, or have assumed border nodes would follow multiple schedules, but those cases have not been evaluated. The paper develops two new algorithms to control and exploit the presence of multiple schedules to reduce energy consumption and latency. The first one is the global schedule algorithm (GSA). Through experiments, we demonstrate that, because of radio propagation vagaries, large sensor networks have very ragged, overlapping borders where many nodes listen to two or more schedules. GSA is a fully distributed algorithm that allows a large network to converge on a single global schedule to conserve energy. Secondly, we demonstrate that strict schedules incur a latency penalty in a multi-hop network when packets must wait for the next schedule for transmission. To reduce latency in multi-hop paths, we develop the fast path algorithm (FPA). FPA provides fast data forwarding paths by adding additional wake-up periods on the nodes along paths from sources to sinks. We evaluate both algorithms through experiments on Berkeley motes and demonstrate that the protocols accomplish their goals of reducing energy consumption and latency in large sensor networks.
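The global-schedule idea can be illustrated with a toy convergence rule. The abstract does not state GSA's actual criterion for choosing among competing schedules, so the rule below (every node adopts the smallest schedule ID it hears from itself or its neighbors) is an invented stand-in, as are the topology and schedule IDs:

```python
def gsa_round(schedules, neighbors):
    """One synchronous round: every node adopts the smallest schedule ID
    among its own schedule and those of its neighbors."""
    return {
        node: min([schedules[node]] + [schedules[n] for n in neighbors[node]])
        for node in schedules
    }

# A 4-node line topology 0-1-2-3 starting with three distinct schedules.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
schedules = {0: 2, 1: 2, 2: 1, 3: 3}

# Repeat rounds until the whole network follows a single schedule.
while len(set(schedules.values())) > 1:
    schedules = gsa_round(schedules, neighbors)

print(schedules)  # {0: 1, 1: 1, 2: 1, 3: 1}
```

Because the rule is monotone (IDs only decrease and are bounded below), convergence to one global schedule is guaranteed on any connected topology.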

171 citations


Journal ArticleDOI
TL;DR: This work designs and conducts user studies measuring the impact of latency on user performance in three of the most popular real-time strategy (RTS) games, finding modest statistical correlations between user performance and latency for exploration, but very weak correlations for building and combat.

125 citations


Journal ArticleDOI
TL;DR: This work focuses on data emerging from both in vitro and in vivo model systems, which provide a framework for a mechanistic understanding of latency and of the existence and possible significance of non-uniform latent states.

111 citations


Proceedings ArticleDOI
14 Mar 2005
TL;DR: This paper presents a novel scheduling discipline called asynchronous latency guarantee (ALG) scheduling, which provides latency and bandwidth guarantees in accessing a shared media, e.g. a physical link shared between a number of virtual channels.
Abstract: Guaranteed services (GS) are important in that they provide predictability in the complex dynamics of shared communication structures. This paper discusses the implementation of GS in an asynchronous network-on-chip. We present a novel scheduling discipline called asynchronous latency guarantee (ALG) scheduling, which provides latency and bandwidth guarantees in accessing a shared media, e.g. a physical link shared between a number of virtual channels. ALG overcomes the drawbacks of existing scheduling disciplines, in particular, the coupling between latency and bandwidth guarantees. A 0.12 µm CMOS standard cell implementation of an ALG link has been simulated. The operation speed of the design was 702 MDI/s.

109 citations


Journal ArticleDOI
TL;DR: Surgeons are able to complete tasks with a signal transmission latency of up to 500 ms, and the clinical impact of slower TCT and increased error rates encountered at higher latency needs to be established.
Abstract: Objective: It has been suggested that robotic-assisted remote telepresence surgery with a signal transmission latency of greater than 300 ms may not be possible. Methods: We evaluated the impact of four different latencies of up to 500 ms on task completion and error rate in five surgeons after completion of three different surgical tasks. Results: The surgeons were able to complete all tasks with a latency of 500 ms. However, higher latency was associated with higher error rates and longer task completion times (TCT). There were significant variations between surgeons and different tasks. Conclusion: Surgeons are able to complete tasks with a signal transmission latency of up to 500 ms. The clinical impact of slower TCT and increased error rates encountered at higher latency needs to be established.

98 citations


Patent
25 Apr 2005
TL;DR: In this article, a latency-mitigating congestion avoidance and control technique suitable for use with unreliable transport protocols is described, targeting applications that communicate using unreliable protocols and want to maximize use of available bandwidth but cannot tolerate high latencies.
Abstract: Techniques for optimizing bandwidth usage while controlling latency. A latency mitigating congestion avoidance and control technique is described that is suitable for use with unreliable transport protocols. Embodiments of the present invention facilitate communication of data for applications that communicate using unreliable communication protocols and that would like to maximize use of available bandwidth but cannot tolerate high latencies. Techniques are described for preventing latency from exceeding a certain level, without destroying the ability of an application or system to probe for additional available bandwidth and maximize bandwidth usage.
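The core idea, keeping latency below a ceiling without giving up the ability to probe for bandwidth, can be sketched with a generic controller. This is not the patent's algorithm; the latency target, probe step, and backoff factor below are invented for illustration:

```python
def adjust_rate(rate_kbps, latency_ms, target_ms, probe_step=50, backoff=0.8):
    """Additively probe for bandwidth while latency stays under the target;
    back off multiplicatively as soon as the latency budget is exceeded."""
    if latency_ms > target_ms:
        return rate_kbps * backoff
    return rate_kbps + probe_step

rate = 500.0
for latency in [40, 45, 60, 120, 80, 42]:  # hypothetical samples, target 100 ms
    rate = adjust_rate(rate, latency, target_ms=100)
print(round(rate, 1))  # 620.0: one backoff at the 120 ms spike, probing otherwise
```

The multiplicative backoff bounds how long latency can stay above the target, while the additive probe keeps searching for spare bandwidth whenever latency is healthy.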

97 citations


Proceedings ArticleDOI
13 Oct 2005
TL;DR: Two fast re-authentication methods based on the predictive authentication mechanism defined by the IEEE 802.11i security group are proposed, and it is demonstrated that they provide significant latency reductions compared to previously proposed solutions.
Abstract: Recently, user mobility in wireless data networks is increasing because of the popularity of portable devices and the desire for voice and multimedia applications. These applications, however, require fast handoffs among base stations to maintain the quality of the connections. Re-authentication during handoff procedures causes a long handoff latency which affects the flow and service quality, especially for multimedia applications. Therefore, minimizing re-authentication latency is crucial in order to support real-time multimedia applications on public wireless IP networks. In this paper, we propose two fast re-authentication methods based on the predictive authentication mechanism defined by the IEEE 802.11i security group. We have implemented these methods in an experimental test-bed using freeware and commodity 802.11 hardware, and we demonstrate that they provide significant latency reductions compared to previously proposed solutions. Conducted measurements show a very low latency not exceeding 50 ms under extremely congested network conditions.

95 citations


Journal ArticleDOI
TL;DR: A new technique that uses the timing of neuronal and behavioral responses to explore the contributions of individual neurons to specific behaviors is described; the results suggest that this technique is a valuable tool for exploring the functional organization of the neuronal circuits that underlie specific behaviors.
Abstract: We describe a new technique that uses the timing of neuronal and behavioral responses to explore the contributions of individual neurons to specific behaviors. The approach uses both the mean neuro...

81 citations


Patent
18 Aug 2005
TL;DR: In this article, the authors present a method and computer program product for performing a latency analysis to determine one or more latency statistics for network links within a distributed computing network, and for comparing those statistics to a set of benchmark latency criteria to determine whether at least one of the network links is a latency-compatible network link.
Abstract: A method and computer program product for performing a latency analysis to determine one or more latency statistics for one or more network links within a distributed computing network. The one or more latency statistics are compared to one or more benchmark latency criteria to determine if at least one of the network links is a latency-compatible network link. If at least one of the network links is a latency-compatible network link, at least one additional network analysis is performed on at least one of the latency-compatible network links.

81 citations


Proceedings ArticleDOI
30 Apr 2005
TL;DR: This work presents AppSleep, a stream-oriented power management protocol for latency-tolerant sensor network applications that demonstrates an over 3× lifetime gain over B-MAC and S-MAC, along with Adaptive AppSleep, an application-driven extension that supports varying latency requirements while still maximizing energy efficiency.
Abstract: Most power management protocols are packet-based and optimized for applications with mostly asynchronous (i.e. unexpected) traffic. We present AppSleep, a stream-oriented power management protocol for latency tolerant sensor network applications. For this class of applications, AppSleep demonstrates an over 3× lifetime gain over B-MAC and S-MAC. AppSleep leverages application characteristics in order to take advantage of periods of high latency tolerance to put the network to sleep for extended periods of time, while still facilitating low latency responses when required. AppSleep also gives applications the flexibility to efficiently and effectively trade latency for energy when desired, and enables energy efficient multi-fragment unicast communication by only keeping the active route awake. We also present Adaptive AppSleep, an application driven addition to AppSleep which supports varying latency requirements while still maximizing energy efficiency. Our evaluation demonstrates that for an overlooked class of applications, stream-oriented power management protocols such as AppSleep outperform packet-based protocols such as B-MAC and S-MAC.

80 citations


Proceedings ArticleDOI
15 Jun 2005
TL;DR: It is shown that bots experience unfairness problems similar to those of humans, and it is demonstrated that the application developed significantly improves fairness.
Abstract: Over the past few years, the prominence of multiplayer network gaming has increased dramatically in the Internet. The effect of network delay (lag) on multiplayer network gaming has been studied before. Players with higher delays (whether due to slower connections, congestion or a larger distance to the server) are at a clear disadvantage relative to players with low delay. In this paper we evaluate whether eliminating the delay differences will provide a fairer solution whilst maintaining good gameplay. We have designed and implemented an application that can be used with existing network games to equalize the delay differences. To evaluate the effectiveness of the approach we use a novel method involving computer players (bots) instead of human players. This method provides some advantages over difficult and time-consuming human usability trials. We show that bots experience unfairness problems similar to those of humans and demonstrate that the application we have developed significantly improves fairness.
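The delay-equalization mechanism reduces to a simple computation: pad every player's delay up to that of the slowest player, so all players experience the same effective latency. A minimal sketch, with hypothetical player names and latencies:

```python
def equalizing_delays(latencies_ms):
    """Extra delay to inject per player so everyone matches the worst latency."""
    worst = max(latencies_ms.values())
    return {player: worst - lat for player, lat in latencies_ms.items()}

# alice waits an extra 60 ms, bob 0 ms, carol 35 ms.
print(equalizing_delays({"alice": 30, "bob": 90, "carol": 55}))
```

The trade-off the paper evaluates follows directly from this rule: fairness improves because no player has a relative advantage, but overall responsiveness is pinned to the highest-latency player.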

Patent
31 Mar 2005
TL;DR: In this paper, the authors describe a method and apparatus for measuring true end-to-end latency for calls to Web services, where a Web service client and a Web service provider collaborate to collect timing/latency data for calls.
Abstract: Method and apparatus for measuring true end-to-end latency for calls to Web services are described. In embodiments, a Web service client and a Web service provider may collaborate to collect timing/latency data for calls to the Web service. This data may be collected, stored, and analyzed by a latency measurement service to generate displays and/or reports on true end-to-end latency measurements for Web service calls. Embodiments may collect Internet/network infrastructure latency for Web service calls up to and including the “last mile” to the Web service client and the Web service processing time. Additionally, by analyzing latency data collected from a number of Web services clients and/or Web service providers, embodiments may provide a macro-level view into overall Internet performance. In one embodiment, the latency measurement service may be a Web service.

Proceedings ArticleDOI
27 Jun 2005
TL;DR: An architecture for the computation of the double-precision floating-point multiply-add fused (MAF) operation A+(B×C) is proposed that allows the floating-point addition to be computed with lower latency than floating-point multiplication and MAF.
Abstract: In this paper we propose an architecture for the computation of the double-precision floating-point multiply-add fused (MAF) operation A+(B×C) that makes it possible to compute the floating-point addition with lower latency than floating-point multiplication and MAF. While previous MAF architectures compute the three operations with the same latency, the proposed architecture allows the first pipeline stages, those related to the multiplication B×C, to be skipped in the case of an addition. For instance, for a MAF unit pipelined into three or five stages, the latency of the floating-point addition is reduced to two or three cycles, respectively. To achieve the latency reduction for floating-point addition, the alignment shifter, which in previous organizations is in parallel with the multiplication, is moved so that the multiplication can be bypassed. To prevent this modification from increasing the critical path, a double-datapath organization is used, in which the alignment and normalization are in separate paths. Moreover, we use previously developed techniques that combine the addition and the rounding, and that perform the normalization before the addition.

Proceedings ArticleDOI
27 Sep 2005
TL;DR: This paper presents a 2-level scheduling framework that can be built on top of an existing storage utility that uses a low-level feedback-driven request scheduler that is intended to meet the latency bounds determined by the SLO.
Abstract: I/O consolidation is a growing trend in production environments due to the increasing complexity in tuning and managing storage systems. A consequence of this trend is the need to serve multiple users/workloads simultaneously. It is imperative to make sure that these users are insulated from each other by virtualization in order to meet any service level objective (SLO). This paper presents a 2-level scheduling framework that can be built on top of an existing storage utility. This framework uses a low-level feedback-driven request scheduler, called AVATAR, that is intended to meet the latency bounds determined by the SLO. The load imposed on AVATAR is regulated by a high-level rate controller, called SARC, to insulate the users from each other. In addition, SARC is work-conserving and tries to fairly distribute any spare bandwidth in the storage system to the different users. This framework naturally decouples rate and latency allocation. Using extensive I/O traces and a detailed storage simulator, we demonstrate that this 2-level framework can simultaneously meet the latency and throughput requirements imposed by an SLO, without requiring extensive knowledge of the underlying storage system.
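The abstract describes SARC's rate regulation only at a high level; a token-bucket throttle is a generic way to realize this kind of per-user admission control and is sketched below as an illustrative stand-in, with invented rates and time steps:

```python
class TokenBucket:
    """Per-user throttle: a request is admitted only if a token is available;
    tokens refill at the user's provisioned rate, up to a burst allowance."""

    def __init__(self, rate_per_ms, burst):
        self.rate, self.capacity = rate_per_ms, burst
        self.tokens, self.last = float(burst), 0

    def allow(self, now_ms):
        # Refill according to elapsed time, capped at the burst size.
        elapsed = now_ms - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now_ms
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate_per_ms=0.125, burst=5)           # 125 req/s, burst of 5
admitted = sum(bucket.allow(t) for t in range(0, 100, 2))  # 50 attempts in 100 ms
print(admitted)  # 17 of 50 attempts admitted
```

Capping each user's request rate this way bounds the queue the low-level scheduler sees, which is what lets a feedback scheduler like AVATAR reason about latency independently of rate allocation.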

Journal ArticleDOI
TL;DR: With this approach, the limitations of a single fixed size integration step, as required by EMTP-type programs, can be overcome, resulting in a decreased number of numerical operations for a given total simulation time.
Abstract: This work presents the techniques derived for an efficient and accurate latency exploitation of electric networks using time-domain transients simulation software, such as the "electromagnetic transients program" (EMTP). Latency exploitation is related to the capability of numerically solving the differential equations governing the behavior of electric networks with different integration steps. With this approach, the limitations of a single fixed size integration step, as required by EMTP-type programs, can be overcome, resulting in a decreased number of numerical operations for a given total simulation time. Using a network partitioning and recombination technique, latency exploitation is achieved using noniterative solutions. Results are shown for networks consisting exclusively of lumped elements and networks with transmission lines and are compared with those obtained from conventional EMTP simulations.

Book ChapterDOI
01 Jan 2005
TL;DR: This chapter describes an analytical method to compute latency, throughput and buffering requirements for the AEthereal NoC and shows the usefulness of the method by applying it on an MPEG-2 (Moving Picture Experts Group) codec example.
Abstract: As the complexity of Systems-on-Chip (SoC) is growing, meeting real-time requirements is becoming increasingly difficult. Predictability for computation, memory and communication components is needed to build real-time SoC. We focus on a predictable communication infrastructure called the AEthereal Network-on-Chip (NoC). The AEthereal NoC is a scalable communication infrastructure based on routers and network interfaces (NI). It provides two services: guaranteed throughput and latency (GT), and best effort (BE). Using the GT service, one can derive guaranteed bounds on latency and throughput. To achieve guaranteed throughput, buffers in NI must be dimensioned to hide round-trip latency and rate difference between computation and communication IPs (Intellectual Property). With the BE service, throughput and latency bounds cannot be derived with guarantees. In this chapter, we describe an analytical method to compute latency, throughput and buffering requirements for the AEthereal NoC. We show the usefulness of the method by applying it on an MPEG-2 (Moving Picture Experts Group) codec example.

Journal ArticleDOI
TL;DR: Compensation for visually delayed image perception occurs on several levels; initial adaptations include slower end-effector manipulation; late adaptive changes include a move-and-wait strategy.
Abstract: Telerobotic surgery is ideally suited for remote applications in which the instrument control console is stationed separately from the end-effectors at the patient’s bedside. However, if the distance between the console and the patient is great enough, a lag effect or latency between end-effector manipulation and the depicted image leads to alterations in movement patterns. The purpose of this study was to determine the effect of visual delay on surgical task performance. At an endoscopic skill station, an analogue delay device was interposed between the surgical field and monitor to delay the transmission of visual information, thus mimicking the distance effect of data transmission. Three surgeons with similar laparoscopic experience participated in the laparoscopic knot tying portion of the study, and seven residents participated in the accuracy and dexterity tasks. The time to complete a single throw was recorded in seconds after adding consecutively increasingly time delay in 50 ms increments. Similar time delay increments were added for the accuracy and dexterity tasks, which involved passing a needle through two adjacent circles and passing a small cylinder through a larger one to reproduce two-handed coordination and spatial resolution. Data were presented as the median time to complete each task. For all three tasks, an incremental increase in time delay was associated with a significant (p < 0.001) increase in the time to complete the task. For dexterity, a statistically significant (p ≤ 0.05) delay was identified at 0.25 s of delay from control values without delay. A move-and-wait strategy was gradually adopted up to 0.4 s of visual delay. Compensation for visually delayed image perception occurs on several levels. Initial adaptations include slower end-effector manipulation; late adaptive changes include a move-and-wait strategy. 
Increased time to perform surgical maneuvers as well as diminished accuracy, diminished dexterity, and increasing fatigue represent additional performance encumbrances evoked by visual time delay. The nuances of both human and digital compensatory mechanisms for visual time delay must be defined and enhanced to maximize the potential for telerobotic surgical applications.

Patent
05 Jul 2005
TL;DR: In this paper, the authors propose a scheme for dynamically adjusting the transmit data rate in response to data status feedback, which may include information regarding data errors and/or latency: a first communication node sends data to a second node at an initial rate, then selectively adjusts the rate based on feedback received from the second node.
Abstract: Aspects of the present disclosure are directed to providing flexible and efficient communication by dynamically adjusting a transmit data rate in response to data status feedback. Such feedback may include information regarding data errors and/or latency. A first communication node communicates with a second communication node and sends data at an initial data rate. The transmit data rate is then selectively adjusted based on data status feedback received from the second communication node or other destination.
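As a hedged illustration of the feedback loop described (the abstract specifies no concrete thresholds or step sizes, so those below are invented):

```python
def next_rate(rate_kbps, error_pct, latency_ms,
              max_error=1.0, max_latency=200, step=64, floor=64):
    """Halve the rate on bad feedback (errors or high latency); otherwise
    increase linearly to probe for headroom."""
    if error_pct > max_error or latency_ms > max_latency:
        return max(floor, rate_kbps // 2)
    return rate_kbps + step

rate = 1024
for err, lat in [(0.1, 50), (0.2, 80), (3.5, 90), (0.3, 60)]:  # synthetic feedback
    rate = next_rate(rate, err, lat)
print(rate)  # 640: 1024 -> 1088 -> 1152 -> 576 -> 640
```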

Journal ArticleDOI
TL;DR: It is shown that γHV68 infection leads to significant splenic B-cell proliferation as late as day 42 postinfection, which provides direct evidence that the proliferation of latently infected B cells is critical for the establishment of chronic γHV68 infection.
Abstract: Murine gammaherpesvirus 68 (gammaHV68) provides a tractable small animal model with which to study the mechanisms involved in the establishment and maintenance of latency by gammaherpesviruses. Similar to the human gammaherpesvirus Epstein-Barr virus (EBV), gammaHV68 establishes and maintains latency in the memory B-cell compartment following intranasal infection. Here we have sought to determine whether, like EBV infection, gammaHV68 infection in vivo is associated with B-cell proliferation during the establishment of chronic infection. We show that gammaHV68 infection leads to significant splenic B-cell proliferation as late as day 42 postinfection. Notably, gammaHV68 latency was found predominantly in the proliferating B-cell population in the spleen on both days 16 and 42 postinfection. Furthermore, virus reactivation upon ex vivo culture was heavily biased toward the proliferating B-cell population. DNA methyltransferase 1 (Dnmt1) is a critical maintenance methyltransferase which, during DNA replication, maintains the DNA methylation patterns of the cellular genome, a process that is essential for the survival of proliferating cells. To assess whether the establishment of gammaHV68 latency requires B-cell proliferation, we characterized infections of conditional Dnmt1 knockout mice by utilizing a recombinant gammaHV68 that expresses Cre-recombinase (gammaHV68-Cre). In C57BL/6 mice, the gammaHV68-Cre virus exhibited normal acute virus replication in the lungs as well as normal establishment and reactivation from latency. Furthermore, the gammaHV68-Cre virus also replicated normally during the acute phase of infection in the lungs of Dnmt1 conditional mice. However, deletion of the Dnmt1 alleles from gammaHV68-infected cells in vivo led to a severe ablation of viral latency, as assessed on both days 16 and 42 postinfection. 
Thus, the studies provide direct evidence that the proliferation of latently infected B cells is critical for the establishment of chronic gammaHV68 infection.

Patent
Sang-Bo Lee1, Ho-young Song1
26 Jul 2005
TL;DR: In this paper, the authors propose a memory device that includes a memory cell array, and an output buffer receiving data addressed from the memory array and outputting the data based on a latency signal.
Abstract: The memory device includes a memory cell array, and an output buffer receiving data addressed from the memory cell array and outputting the data based on a latency signal. A latency circuit selectively associates at least one transfer signal with at least one sampling signal based on CAS latency information to create a desired timing relationship between the associated sampling and transfer signals. The latency circuit stores read information in accordance with at least one of the sampling signals, and generates a latency signal based on the transfer signal associated with the sampling signal used in storing the read information.

Journal ArticleDOI
TL;DR: A series of ORF63 carboxy-terminal mutants showed that the last 70 amino acids do not affect replication in vitro or latency in rodents; however, the last 108 amino acids are important for replication and latency.
Abstract: Varicella-zoster virus (VZV) open reading frame 63 (ORF63) is one of the most abundant transcripts expressed during VZV latency in humans, and ORF63 protein has been detected in human ganglia by several laboratories. Deletion of over 90% of the ORF63 gene showed that the protein is required for efficient establishment of latency in rodents. We have constructed viruses with a series of mutations in ORF63. While prior experiments showed that transfection of cells with a plasmid expressing ORF63 but lacking the putative nuclear localization signal of the protein resulted in increased expression of the protein in the cytoplasm, we found that ORF63 protein remained in the nucleus in cells infected with a VZV ORF63 nuclear localization signal deletion mutant. This mutant was not impaired for growth in cell culture or for latency in rodents. Replacement of five serine or threonine phosphorylation sites in ORF63 with alanines resulted in a virus that was impaired for replication in vitro and for latency. A series of ORF63 carboxy-terminal mutants showed that the last 70 amino acids do not affect replication in vitro or latency in rodents; however, the last 108 amino acids are important for replication and latency. Thus, regions of ORF63 that are important for replication in vitro are also required for efficient establishment of latency.

Journal ArticleDOI
TL;DR: This paper analytically proves that LPRS can result in lookup latencies proportional to the average unicast latency of the network, provided the underlying physical topology has a power-law latency expansion, and indicates that the Internet router-level topology resembles power-law latency expansion.
Abstract: Distributed hash table (DHT) systems are an important class of peer-to-peer routing infrastructures. They enable scalable wide-area storage and retrieval of information, and will support the rapid development of a wide variety of Internet-scale applications ranging from naming systems and file systems to application-layer multicast. DHT systems essentially build an overlay network, but a path on the overlay between any two nodes can be significantly different from the unicast path between those two nodes on the underlying network. As such, the lookup latency in these systems can be quite high and can adversely impact the performance of applications built on top of such systems. In this paper, we discuss a random sampling technique that incrementally improves lookup latency in DHT systems. Our sampling can be implemented using information gleaned from lookups traversing the overlay network. For this reason, we call our approach lookup-parasitic random sampling (LPRS). LPRS converges quickly, and requires relatively few modifications to existing DHT systems. For idealized versions of DHT systems like Chord, Tapestry, and Pastry, we analytically prove that LPRS can result in lookup latencies proportional to the average unicast latency of the network, provided the underlying physical topology has a power-law latency expansion. We then validate this analysis by implementing LPRS in the Chord simulator. Our simulations reveal that LPRS-Chord exhibits a qualitatively better latency scaling behavior relative to unmodified Chord. The overhead of LPRS is one sample per lookup hop in the worst case. Finally, we provide evidence which suggests that the Internet router-level topology resembles power-law latency expansion. This finding implies that LPRS has significant practical applicability as a general latency reduction technique for many DHT systems. 
This finding is also of independent interest since it might inform the design of latency-sensitive topology models for the Internet.
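The "lookup-parasitic" sampling step can be sketched as follows. The per-node state is simplified far beyond the paper (a single best-sample shortcut rather than a routing table), and all node names and latency values are synthetic:

```python
import random

def lprs_sample(shortcut, path_latencies_ms):
    """Fold one lookup's path samples into the best-known (node, rtt) pair."""
    for node, rtt in path_latencies_ms.items():
        if shortcut is None or rtt < shortcut[1]:
            shortcut = (node, rtt)
    return shortcut

random.seed(7)
shortcut = None
for _ in range(5):  # five lookups, each traversing three random overlay hops
    path = {f"node{random.randrange(100)}": random.uniform(5, 200)
            for _ in range(3)}
    shortcut = lprs_sample(shortcut, path)

# The retained shortcut is the lowest-latency node sampled across lookups.
print(shortcut is not None and shortcut[1] < 200)  # True
```

The point of the technique is that these samples cost nothing extra: they ride along on lookups the overlay performs anyway, which is why the worst-case overhead is one sample per lookup hop.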

13 Dec 2005
TL;DR: The results show that statistical filtering of latency samples improves accuracy and stability and that a small number of neighbors is sufficient when updating coordinates, and two different APIs for accessing coordinates are presented.
Abstract: Large-scale distributed applications need latency information to make network-aware routing decisions. Collecting these measurements, however, can impose a high burden. Network coordinates are a scalable and efficient way to supply nodes with up-to-date latency estimates. We present our experience of maintaining network coordinates on PlanetLab. We present two different APIs for accessing coordinates: a per-application library, which takes advantage of application-level traffic, and a stand-alone service, which is shared across applications. Our results show that statistical filtering of latency samples improves accuracy and stability and that a small number of neighbors is sufficient when updating coordinates.
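The two techniques in the abstract, coordinate maintenance and statistical filtering of latency samples, can be sketched with a Vivaldi-style 2-D update plus a median filter. The constants and sample values below are illustrative, not taken from the paper:

```python
from statistics import median

def filtered_rtt(samples):
    """Statistical filtering: take the median of a recent sample window,
    which rejects one-off latency spikes."""
    return median(samples)

def update_coordinate(pos, neighbor_pos, rtt_ms, step=0.25):
    """Nudge our 2-D coordinate so the predicted distance to the neighbor
    moves toward the measured RTT."""
    dx, dy = pos[0] - neighbor_pos[0], pos[1] - neighbor_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
    error = rtt_ms - dist                     # positive: prediction too close
    return (pos[0] + step * error * dx / dist,
            pos[1] + step * error * dy / dist)

pos, neighbor = (0.0, 0.0), (30.0, 40.0)      # predicted distance: 50 "ms"
rtt = filtered_rtt([70, 1000, 72, 68, 71])    # the 1000 ms spike is rejected
pos = update_coordinate(pos, neighbor, rtt)
dx, dy = pos[0] - neighbor[0], pos[1] - neighbor[1]
print(round((dx * dx + dy * dy) ** 0.5, 2))   # moved a quarter of the way to 71
```

Filtering before updating is what gives the stability the abstract reports: without the median, the 1000 ms spike would yank the coordinate far off and every later estimate would pay for it.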

Book ChapterDOI
26 Oct 2005
TL;DR: An experimental testbed for telesurgery that is currently available in the laboratory is described, capable of supporting both wired and satellite connections as well as simulated network environments.
Abstract: The paper is concerned with determining the feasibility of performing telesurgery over long communication links. It describes an experimental testbed for telesurgery that is currently available in our laboratory. The testbed is capable of supporting both wired and satellite connections as well as simulated network environments. The feasibility of performing telesurgery over a satellite link with approximately 600 ms delay is shown through a number of dry and wet lab experiments. Quantitative results of these experiments are also discussed.

Patent
25 Aug 2005
TL;DR: In this paper, a list of timed events may be used to synchronize a pyrotechnic firing sequence with music or other external events over a series of embedded microprocessors.
Abstract: A method for achieving zero- or near-zero-latency timed pyrotechnic events by utilizing distributed processing is presented. A list of timed events may be used to synchronize a pyrotechnic firing sequence with music or other external events. This list is distributed over a series of embedded microprocessors. Each microprocessor is then synchronized to a master controller clock and enabled such that each processor may fire independently as required by the master list. This distributed process removes the split-second timing requirement from the main controller, enabling zero latency and allowing significantly more timing events to be processed simultaneously while alleviating problems such as wireless radio interference delays. Each module is capable of forwarding information to other modules that may be in a position preventing direct wireless communication with the master controller.

01 Jan 2005
TL;DR: There were highly significant, moderately strong, negative correlations between speed and coding performance; these techniques may have promise for user modelling and assessment, as well as for educational diagnostics.

Proceedings ArticleDOI
08 Mar 2005
TL;DR: This paper proposes the latency-minimized energy-efficient MAC protocol (LEEM), a novel hop-ahead reservation scheme for dual-frequency radios that minimizes latency in multihop data transmission by reserving the next hop's channel a priori.
Abstract: In wireless sensor networks, efficient usage of energy helps in improving the network lifetime. As the battery of a sensor node, in most cases, cannot be recharged or replaced after the deployment of the sensors, energy management becomes a critical issue in such networks. In order to detect an event, a sensor network spends the majority of its time monitoring its environment, during which a significant amount of energy can be saved by placing the radio in the low-power sleep mode. This can be achieved by using a dual-frequency radio setup. However, such energy saving protocols increase the latency encountered in setting up a multihop path. In this paper, we propose the latency-minimized energy-efficient MAC protocol (LEEM), a novel hop-ahead reservation scheme for a dual-frequency radio that minimizes the latency of multihop data transmission by reserving the next hop's channel a priori. Thus, in a multihop sensor network, a packet can be forwarded to the next hop as soon as it is received by a sensor node, which helps eliminate the delay incurred in setting up the path. Simulation results show that LEEM consumes less power and reduces end-to-end latency by around 50% compared to the existing schemes.

Patent
07 Jun 2005
TL;DR: In this article, the authors propose a method for reducing latency between two clock domains in a digital electronic device by including a delay in the time before first writing data to a First In First Out (FIFO) queue used to buffer and synchronize data.
Abstract: Disclosed is a method for reducing latency between two clock domains in a digital electronic device. The time between a write to a queue position and a corresponding read of the queue position is reduced by up to one clock cycle by including a delay in the time before first writing data to a First In First Out (FIFO) queue used to buffer and synchronize data between two clock domains. The two clock domains have the same frequency, but may be out of phase. Reducing the latency between the write and the corresponding read reduces the required size of the FIFO queue and also results in more efficient system operation.

Journal ArticleDOI
TL;DR: The authors found the relation between the latency of recognizing a message as a joke and the funniness of that joke to be primarily negative and linear; funnier material was reacted to more quickly than less funny material, providing some evidence for the expert-skill hypothesis.
Abstract: Abstract The relation between humor appreciation and comprehension difficulty has been described as an inverted U function. That is, when a joke is too easy or too hard to understand it will be less funny than a joke of intermediate difficulty. Humor appreciation might, however, be a kind of expert skill. Then the easier it is to get a joke for the experienced language user the funnier the joke will be. Two experiments found the relation between the latency of recognizing a message as a joke and the funniness of that joke to be primarily negative and linear. There was no evidence of an inverted U with this material. Funnier material was reacted to more quickly than less funny material providing some evidence for the expert skill hypothesis. Some jokes congruent with male gender stereotypes, however, resulted in higher humor ratings by females but did not affect recognition latency. This finding suggests the possibility of an implicit structural and a more explicit content factor in humor appreciation.

Journal ArticleDOI
TL;DR: In response to paired sound pulses, PLS neurons exhibited delay-dependent response suppression, confirming that high-threshold leading inhibition was responsible for PLS and its role in time-domain processing.
Abstract: A number of central auditory neurons exhibit paradoxical latency shift (PLS), a response characterized by longer response latencies at higher sound levels. PLS neurons are known to play a role in t...