
Showing papers on "Latency (engineering)" published in 2000


Proceedings ArticleDOI
12 Nov 2000
TL;DR: It is proved that if the latency of each edge is a linear function of its congestion, then the total latency of the routes chosen by selfish network users is at most 4/3 times the minimum possible total latency (subject to the condition that all traffic must be routed).
Abstract: We consider the problem of routing traffic to optimize the performance of a congested network. We are given a network, a rate of traffic between each pair of nodes, and a latency function for each edge specifying the time needed to traverse the edge given its congestion; the objective is to route traffic such that the sum of all travel times (the total latency) is minimized. In many settings, including the Internet and other large-scale communication networks, it may be expensive or impossible to regulate network traffic so as to implement an optimal assignment of routes. In the absence of regulation by some central authority, we assume that each network user routes its traffic on the minimum-latency path available to it, given the network congestion caused by the other users. In general such a "selfishly motivated" assignment of traffic to paths will not minimize the total latency; hence, this lack of regulation carries the cost of decreased network performance. We quantify the degradation in network performance due to unregulated traffic. We prove that if the latency of each edge is a linear function of its congestion, then the total latency of the routes chosen by selfish network users is at most 4/3 times the minimum possible total latency (subject to the condition that all traffic must be routed). We also consider the more general setting in which edge latency functions are assumed only to be continuous and non-decreasing in the edge congestion.

811 citations
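The 4/3 bound is tight even on a two-link network (the classic Pigou example): one link with constant latency 1, one whose latency equals its congestion. A minimal Python check of that instance (the grid search and variable names are illustrative, not from the paper):

```python
def total_latency(x2):
    """Total latency with one unit of traffic on two parallel links:
    a constant link with latency 1 and a congestible link with latency x.
    x2 is the fraction of traffic routed on the congestible link."""
    x1 = 1.0 - x2
    return x1 * 1.0 + x2 * x2

# Selfish equilibrium: the congestible link never costs more than 1,
# so every user takes it and the total latency is 1 * 1 = 1.
selfish = total_latency(1.0)

# Social optimum: search a fine grid of traffic splits (minimum at x2 = 1/2,
# giving total latency 1/2 * 1 + 1/2 * 1/2 = 3/4).
optimal = min(total_latency(i / 10000) for i in range(10001))

ratio = selfish / optimal  # 4/3, matching the paper's worst-case bound
```

The equilibrium is worse than the optimum because selfish users ignore the congestion cost they impose on others; the paper shows no linear-latency instance is worse than this one.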


Patent
21 Dec 2000
TL;DR: In this paper, the authors use TCP/IP acknowledgement packets to deduce network latency and use a plurality of remotely located network monitors (and/or monitors co-located with servers and/or clients) to derive and report on actual latency experienced throughout the network.
Abstract: A remote network monitor for monitoring transaction-based protocols such as HTTP receives and analyzes protocol requests and associated responses, and derives therefrom a parameter associated with round-trip network latency. For example, TCP/IP acknowledgement packets can be used to deduce network latency. Such network latency and total latency parameters can be used to determine which portion of total latency can be attributable to the network and which portion is attributable to node processing time (e.g., server and/or client processing). A plurality of remotely located network monitors (and/or monitors co-located with servers and/or clients) can be used to derive and report on actual latency experienced throughout the network.

198 citations


Patent
08 Jun 2000
TL;DR: In this paper, a method for reducing a variable latency associated with a buffer and at least partially resulting from at least one splice between a FROM bitstream and a TO bitstream each including data corresponding to a plurality of frames is proposed.
Abstract: In a compressed domain digital communications system, a method for reducing a variable latency associated with a buffer and at least partially resulting from at least one splice between a FROM bitstream and a TO bitstream each including data corresponding to a plurality of frames, the method including: selectively deleting data corresponding to a select at least one of the frames from the buffer based upon the variable latency so as to reduce the variable latency when an amount of data corresponding to a number of frames present in the buffer is greater than a given number of frames; and, regulating a flow of data in the system to prevent an underflow condition in the system by effecting a repeat last frame command and prevent an overflow condition in the system by slowing a rate of transmission for the data associated with at least one of the frames in the TO bitstream.

128 citations


Journal ArticleDOI
TL;DR: It is suggested that LAT enhances the establishment of latency in rabbits and that this may be one of the mechanisms by which LAT enhances spontaneous reactivation.
Abstract: The latency-associated transcript (LAT) gene, the only herpes simplex virus type 1 (HSV-1) gene abundantly transcribed during neuronal latency, is essential for efficient in vivo reactivation. Whether LAT increases reactivation by a direct effect on the reactivation process or whether it does so by increasing the establishment of latency, thereby making more latently infected neurons available for reactivation, is unclear. In mice, LAT-negative mutants appear to establish latency in fewer neurons than does wild-type HSV-1. However, this has not been confirmed in the rabbit, and the role of LAT in the establishment of latency remains controversial. To pursue this question, we inserted the gene for the enhanced green fluorescent protein (EGFP) under control of the LAT promoter in a LAT-negative virus (DeltaLAT-EGFP) and in a LAT-positive virus (LAT-EGFP). Sixty days after ocular infection, trigeminal ganglia (TG) were removed from the latently infected rabbits, sectioned, and examined by fluorescence microscopy. EGFP was detected in significantly more LAT-EGFP-infected neurons than DeltaLAT-EGFP-infected neurons (4.9% versus 2%, P < 0.0001). The percentages of EGFP-positive neurons per TG ranged from 0 to 4.6 for DeltaLAT-EGFP and from 2.5 to 11.1 for LAT-EGFP (P = 0.003). Thus, LAT appeared to increase neuronal latency in rabbit TG by an average of two- to threefold. These results suggest that LAT enhances the establishment of latency in rabbits and that this may be one of the mechanisms by which LAT enhances spontaneous reactivation. These results do not rule out additional LAT functions that may be involved in maintenance of latency and/or reactivation from latency.

112 citations


Proceedings ArticleDOI
26 Mar 2000
TL;DR: This work proposes simple techniques that address the main causes of user-perceived Web latency: pre-resolving host-names (pre-performing DNS lookup), pre-connecting (prefetching TCP connections prior to issuance of the HTTP request), and pre-warming (sending a "dummy" HTTP HEAD request to Web servers), along with scalable deployment solutions to control the potential overhead to proxies and particularly to Web servers.
Abstract: User-perceived latency is recognized as the central performance problem in the Web. We systematically measure factors contributing to this latency, across several locations. Our study reveals that DNS query times, TCP connection establishment, and start-of-session delays at HTTP servers, more so than transmission time, are major causes of long waits. Wait due to these factors also afflicts high-bandwidth users and has detrimental effect on perceived performance. We propose simple techniques that address these factors: (i) pre-resolving host-names (pre-performing DNS lookup); (ii) pre-connecting (prefetching TCP connections prior to issuance of HTTP request); and (iii) pre-warming (sending a "dummy" HTTP HEAD request to Web servers). Trace-based simulations demonstrate a potential to reduce perceived latency dramatically. Our techniques surpass document prefetching in performance improvement per bandwidth used and can be used with non-prefetchable URL. Deployment of these techniques at Web browsers or proxies does not require protocol modifications or the cooperation of other entities. Applicable servers can be identified, for example, by analyzing hyperlinks. Bandwidth overhead is minimal, and so is processing overhead at the user's browser. We propose scalable deployment solutions to control the potential overhead to proxies and particularly to Web servers.

108 citations
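The pre-resolving and pre-connecting techniques map directly onto ordinary socket calls. A hedged Python sketch (the function names and caching scheme are my own; the paper describes the techniques at browsers or proxies, not a specific API):

```python
import socket

def pre_resolve(hostnames):
    """Pre-perform DNS lookups so later requests skip the query delay.
    Returns a hostname -> IP cache; failed lookups are simply omitted."""
    cache = {}
    for name in hostnames:
        try:
            cache[name] = socket.gethostbyname(name)
        except socket.gaierror:
            pass  # unresolvable now; the real request will retry
    return cache

def pre_connect(ip, port=80, timeout=3.0):
    """Open a TCP connection before the HTTP request is issued, hiding the
    three-way-handshake latency from the user. The caller later sends the
    request (or a pre-warming HEAD) on this warm socket."""
    return socket.create_connection((ip, port), timeout=timeout)
```

Candidate hostnames would come from hyperlinks in the page being viewed, as the paper suggests.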


Patent
12 Apr 2000
TL;DR: In this paper, a network of memory and coherence controllers is provided which interconnects nodes in a cache-coherent multi-processor system, a system that supports better processor utilization and better application performance by reducing data-access latency through proactive speculative data transfers.
Abstract: A network of memory and coherence controllers is provided which interconnects the nodes in a cache-coherent multi-processor system. The nodes contain multiple processors operatively connected via respective caches to associated memory and coherence controllers. The system supports better processor utilization and better application performance by reducing the latency in accessing data by performing proactive speculative data transfers. In being proactive, the system speculates, without specific requests from the processors, as to which data transfers will reduce latency, and makes those transfers according to information derived from the system at any time that transfers could be made.

99 citations


Proceedings ArticleDOI
01 Jun 2000
TL;DR: A simple, yet rigorous, method to model the key properties of a latency insensitive system, analyze the impact of interconnect latency on the overall throughput, and optimize the performance of the final implementation is presented.
Abstract: Latency insensitive design has been recently proposed in literature as a way to design complex digital systems, whose functional behavior is robust with respect to arbitrary variations in interconnect latency. However, this approach does not guarantee the same robustness for the performance of the design, which indeed can experience big losses. This paper presents a simple, yet rigorous, method to (1) model the key properties of a latency insensitive system, (2) analyze the impact of interconnect latency on the overall throughput, and (3) optimize the performance of the final implementation.

83 citations


Patent
03 Mar 2000
TL;DR: In this paper, a synchronous DRAM with posted column access strobe (CAS) latency and a method of controlling CAS latency is presented, where the SDRAM can include a counter for controlling the CAS latency.
Abstract: A synchronous DRAM (SDRAM) having a posted column access strobe (CAS) latency and a method of controlling CAS latency are provided. In order to control the delay time from the application of a CAS command and a column address to the beginning of memory reading or writing operations, in units of clock cycles, a first method of programming the delay time as a mode register set (MRS) and a second method of detecting the delay time using an internal signal and an external signal are provided. In the second method, the SDRAM can include a counter for controlling the CAS latency. This counter controls the CAS latency of the SDRAM by generating a signal for controlling the CAS latency according to the number of clock cycles of a clock signal from the generation of a row access command to a column access command in the same memory bank, and reading the signal. It is therefore possible to appropriately perform a posted CAS latency operation and a general CAS latency operation without an additional MRS command, according to this SDRAM and the method of controlling the CAS latency.

72 citations


Proceedings ArticleDOI
27 Nov 2000
TL;DR: It is argued that, when choosing a single metric for network latency, RTT should be the metric of choice when trying to reduce clients' perceived latency, because it is the least expensive metric to measure.
Abstract: This paper investigates network latency metrics in the context of the server proximity problem. Using a combination of experimentation and statistical analysis, we study the correlation among the number of network hops, the number of autonomous system (AS) hops, and round-trip time (RTT). We ran experiments involving 601 Internet sites spanning 5 continents. Our results show reasonably strong AS hop-network hop correlations of up to 70%. We also observe an average RTT-hop count correlation close to 50%, which represents a considerable improvement over what Crovella and Clark observed in 1995. Based on our results, we argue that, when choosing a single metric for network latency, RTT should be the metric of choice when trying to reduce clients' perceived latency. However, hop counts are good indicators of network resource usage. Another factor that favors RTT is that it is the least expensive metric to measure.

61 citations
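The hop/RTT correlations reported above are ordinary Pearson coefficients. A minimal Python sketch with made-up per-site measurements (the numbers are illustrative only, not data from the paper):

```python
def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-site measurements: router hop count and round-trip time.
hops = [8, 12, 15, 17, 22, 25, 30]
rtt_ms = [20, 45, 60, 55, 110, 130, 150]
r = pearson(hops, rtt_ms)  # close to 1 for this strongly correlated toy data
```

In the paper's real measurements the RTT-hop correlation averaged only about 50%, which is why the authors recommend measuring RTT directly rather than inferring it from hop counts.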


01 Jan 2000
TL;DR: A video camera and recorder based measurement of the end-to-end latency of virtual reality systems using an ordinary video camera to record movements of the tracked wand along with its virtual representation in a CAVE or an ImmersaDesk.
Abstract: We describe an end-to-end latency measurement method for virtual environments. The method incorporates a video camera to record both a physical controller and the corresponding virtual cursor at the same time. The end-to-end latency can be deduced from analysis of the playback of the videotape. The only hardware necessary is a standard interlaced NTSC video camera and a video recorder that can display individual video fields. We describe an example of analyzing the effect of different hardware and software configurations upon the system latency. The example shows that the method is effective and easy to implement.

1. Introduction

This paper describes a simple-to-implement method for measuring end-to-end system latency in projection-based virtual environments such as CAVEs and ImmersaDesks [Cruz93]. Interactivity is an essential feature of virtual reality systems. System end-to-end latency, or lag, is one of the most important problems limiting the quality of a virtual reality system. Other technological problems, such as tracker inaccuracy and display resolution, do not seem to impact user performance as profoundly as latency [Ellis99]. In augmented reality, system latency has even more impact on the quality of the virtual experience: latency makes virtual objects appear to "swim around" and "lag behind" real objects [Azuma95]. A prerequisite to reducing system latency is a convenient method of measuring it. The system end-to-end latency is the time difference between a user input to a system and the display of the system's response to that input. It can be the delay from when the user moves the controller to when the corresponding cursor responds on the screen, or from when the user moves his or her head to when the resulting scene is displayed on the screen.

The end-to-end latency is composed of tracker delay, communication delay, application host delay, image generation delay and display system delay [Mine93]. In this paper, we describe a video camera and recorder based measurement of the end-to-end latency of virtual reality systems. This latency measurement system uses an ordinary video camera to record movements of the tracked wand along with its virtual representation in a CAVE or an ImmersaDesk. The recording is viewed on a field-by-field basis to determine total delay.

61 citations
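Because an interlaced NTSC camera delivers roughly 59.94 fields per second, the field count from the field-by-field playback converts to latency by simple arithmetic. A small Python helper, assuming standard NTSC field timing (the function name is my own):

```python
NTSC_FIELD_RATE_HZ = 59.94  # interlaced NTSC: ~59.94 fields per second

def latency_ms(field_count):
    """End-to-end latency implied by the number of video fields counted
    between the physical wand moving and its virtual cursor moving."""
    return field_count / NTSC_FIELD_RATE_HZ * 1000.0
```

For example, a six-field gap on the tape corresponds to roughly 100 ms of end-to-end latency; the field rate also sets the method's measurement resolution at about 16.7 ms.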


Proceedings ArticleDOI
10 Apr 2000
TL;DR: A context-specific prefetching technique that relies on keywords in anchor texts of URLs to characterize user access patterns and on neural networks over the keyword set to predict future requests is proposed, which features a self-learning capability and good adaptivity to the change of user surfing interest.
Abstract: With the explosive growth of WWW applications on the Internet, users are experiencing access delays more often than ever. Recent studies showed that prefetching could alleviate the WWW latency to a larger extent than caching. Existing prefetching methods are mostly based on URL graphs. They use the graphical nature of hypertext links to determine the possible paths through a hypertext system. While they have been demonstrated effective in prefetching of documents that are often accessed, they are incapable of pre-retrieving documents whose URLs had never been accessed. We propose a context-specific prefetching technique to overcome the limitation. It relies on keywords in anchor texts of URLs to characterize user access patterns and on neural networks over the keyword set to predict future requests. It features a self-learning capability and good adaptivity to the change of user surfing interest. The technique was implemented in a SmartNewsReader system and cross-examined in a daily browsing of MSNBC and CNN news sites. The experimental results showed an achievement of approximately 60% hit ratio due to prefetching. Of the prefetched documents, less than 30% was undesired.

Journal ArticleDOI
01 Jul 2000
TL;DR: Evaluating the influence of interface design configuration, control mode and latency on teleoperation performance, telepresence, and workload in a pick-and-place task demonstrated significant benefits of using VR in conjunction with video feedback to control the telerobot.
Abstract: Human-machine interfaces that facilitate telepresence are speculated to improve performance with teleoperators. Unfortunately, there is little experimental evidence to substantiate a direct link be...

Journal ArticleDOI
TL;DR: The results showed that high-probability requests were effective in reducing the latency to compliance but only minimally affected duration of engagement.
Abstract: The purpose of this study was to evaluate the effectiveness of a high-probability request sequence on the latency to and duration of compliance to a request for completion of an independent math assignment. The participant was an elementary-school student with learning disabilities who exhibited noncompliance during math instruction. The results showed that high-probability requests were effective in reducing the latency to compliance but only minimally affected duration of engagement.

Patent
09 Jun 2000
TL;DR: In this article, a method of and an electronic apparatus for determining real-time data latency are disclosed, which may include creating a plurality of outgoing data packets having an outgoing time stamp, a group identifier and validation information.
Abstract: A method of and an electronic apparatus for determining real-time data latency are disclosed. The method may include creating a plurality of outgoing data packets having an outgoing time stamp, a group identifier and validation information. The outgoing data packets may be transmitted onto a network. A plurality of incoming data packets may be received over the network. The incoming data packets may be validated. For each of the incoming data packets that is valid, a round-trip time delay for the incoming data packet may be calculated, and statistics for the incoming data packets may be updated based on the round-trip time delay and the group identifier included in the incoming data packet. The method may be implemented on an electronic apparatus.
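The timestamp/group-identifier/validation scheme can be sketched in a few lines of Python. The packet layout and CRC32 validation below are my own illustrative choices, not the patent's wire format:

```python
import struct
import time
import zlib

def make_packet(group_id, seq):
    """Build an outgoing probe: monotonic timestamp + group id + sequence
    number, followed by a CRC32 so corrupted echoes can be discarded."""
    body = struct.pack("!dII", time.monotonic(), group_id, seq)
    return body + struct.pack("!I", zlib.crc32(body))

def process_echo(packet, stats):
    """Validate an echoed packet; if valid, compute its round-trip delay and
    fold it into per-group statistics (count, total delay, peak delay)."""
    body, crc = packet[:-4], struct.unpack("!I", packet[-4:])[0]
    if zlib.crc32(body) != crc:
        return None  # invalid packet: drop it
    sent, group_id, _seq = struct.unpack("!dII", body)
    rtt = time.monotonic() - sent
    count, total, peak = stats.get(group_id, (0, 0.0, 0.0))
    stats[group_id] = (count + 1, total + rtt, max(peak, rtt))
    return rtt
```

In a real deployment the packets would be sent onto the network and echoed back; here the sketch only shows the create/validate/update-statistics cycle the claim describes.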

Journal ArticleDOI
TL;DR: Within-group analyses of target data showed that the three syndromes (determined by principal component analysis of PANSS ratings) were differentiated by ERP latency, but not amplitude (Disorganisation: delayed left hemisphere P200 and P300 latency; Reality Distortion: earlier global, midline and left hemisphere N200 latency; Psychomotor Poverty: delayed posterior N100 latency); only Disorganisation showed a divergent pattern of associations with non-target ERP data.
Abstract: Previous studies have revealed various abnormalities in late-component ERP amplitude and latency in schizophrenia, considered as a diagnostic category. The aim of this study was to investigate the within-sample associations between late-component ERPs and three primary syndromes of schizophrenia: Reality Distortion, Psychomotor Poverty and Disorganisation. Subjects included 40 schizophrenics and 40 age- and sex-matched nonpsychiatric controls. Auditory ERPs (N100, N200, P200, P300) were elicited using an auditory oddball paradigm. Between-group analyses of target data showed reduced N100, N200 and P300 amplitude, increased P200 amplitude and delayed N200 latency in schizophrenics compared to controls. For non-target data, schizophrenics showed similarly reduced N100 amplitude and delayed N200 latency. Within-group analyses of target data showed that the three syndromes (determined by principal component analysis of PANSS ratings) were differentiated by ERP latency, but not amplitude (Disorganisation: delayed left hemisphere P200 and P300 latency; Reality Distortion: earlier global, midline and left hemisphere N200 latency; Psychomotor Poverty: delayed posterior N100 latency). Notably, only Disorganisation showed a divergent pattern of associations with non-target ERP data: reduced P200 amplitude and delayed N100 latency.

Book ChapterDOI
27 Aug 2000
TL;DR: A compiler directed approach to hiding the configuration loading latency is presented and it is shown that the Chameleon CS2112 chip performance is significantly improved by leveraging such compiler and multithreading techniques.
Abstract: The Chameleon CS2112 chip is the industry's first reconfigurable communication processor. To attain high performance, the reconfiguration latency must be effectively tolerated in such a processor. In this paper, we present a compiler-directed approach to hiding the configuration loading latency. We integrate multithreading, instruction scheduling, register allocation, and prefetching techniques to tolerate the configuration loading latency. Furthermore, configuration loading is overlapped with communication to further enhance performance. By running some kernel programs on a cycle-accurate simulator, we showed that the chip performance is significantly improved by leveraging such compiler and multithreading techniques.

Patent
03 Mar 2000
TL;DR: In this article, the authors propose a method and system for packet service category requests to asymmetric digital subscriber line (ADSL) latency paths, where a data packet request from a customer premise distribution network with a desired service category and a desired latency is mapped to an ADSL device latency interface by checking a latency mapping policy.
Abstract: A method and system for mapping packet service category requests to asymmetric digital subscriber line (“ADSL”) latency paths. A data packet request from a customer premise distribution network with a desired service category (e.g., quality-of-service) and a desired latency is mapped to an ADSL device latency interface by checking a latency mapping policy. This mapping provides a virtual connection with a desired service category and a desired latency over ADSL links. The latency mapping includes an embedded service category mapping allowing differential services to be provided for user information based on a desired service category. The latency mapping mechanism may allow easier use of end-to-end packet service categories, such as type-of-service categories, for data packets such as Internet Protocol (“IP”) data packets or Voice over IP (“VoIP”) data packets over real-time asymmetric digital subscriber line links.

Proceedings ArticleDOI
02 Apr 2000
TL;DR: This paper presents several new asynchronous FIFO designs implemented as circular arrays of cells connected to common data buses, with a goal to achieve very low latency while maintaining good throughput.
Abstract: This paper presents several new asynchronous FIFO designs. While most existing FIFOs trade higher throughput for higher latency, our goal is to achieve very low latency while maintaining good throughput. The designs are implemented as circular arrays of cells connected to common data buses. Data items are not moved around the array once they are enqueued. Each cell's input and output behavior is dictated by the flow of two tokens around the ring: one that allows enqueuing data and one that allows dequeuing data. Two novel protocols are introduced with various degrees of parallelism, as well as four different implementations. The best simulation results, in a 0.6 μm process, have a latency of 1.73 ns and a throughput of 454 MegaOperations/second for a 4-place FIFO.
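The cells-stay-put, tokens-move organization can be mimicked in sequential software. A Python sketch of the ring discipline (this deliberately ignores the asynchronous handshaking and parallelism that are the paper's actual contribution):

```python
class TokenRingFIFO:
    """Software analogue of the circular-array FIFO: data items stay in the
    cell where they were enqueued; an enqueue token and a dequeue token
    advance around the ring to select which cell uses the shared bus next."""

    def __init__(self, places):
        self.cells = [None] * places
        self.full = [False] * places
        self.enq_tok = 0  # index of the cell holding the enqueue token
        self.deq_tok = 0  # index of the cell holding the dequeue token

    def enqueue(self, item):
        if self.full[self.enq_tok]:
            raise OverflowError("FIFO full")
        self.cells[self.enq_tok] = item
        self.full[self.enq_tok] = True
        self.enq_tok = (self.enq_tok + 1) % len(self.cells)

    def dequeue(self):
        if not self.full[self.deq_tok]:
            raise IndexError("FIFO empty")
        item = self.cells[self.deq_tok]
        self.full[self.deq_tok] = False
        self.deq_tok = (self.deq_tok + 1) % len(self.cells)
        return item
```

Because items never migrate between cells, the hardware version avoids the per-stage copy latency of shift-register FIFOs, which is the source of the low-latency results quoted above.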

Journal ArticleDOI
TL;DR: Regardless of patients' depression status, increased P300 latency predicts poorer performance on executive function tasks requiring speeded performance.
Abstract: The authors asked whether impaired executive functioning and long P300 latency are related dysfunctions and whether they are associated with geriatric depression. A group of 25 elderly depressed patients without dementia and 20 control subjects were assessed on tasks of fluency, initiation and perseveration, the Stroop task, the Wisconsin Card Sorting Test (WCST) perseverative error score, and P300 latency. The groups' performance differed significantly on these tasks and in P300 latency. Longer latency was associated with poorer performance in both groups on all measures except WCST perseverative errors. Regardless of patients' depression status, increased P300 latency predicts poorer performance on executive function tasks requiring speeded performance.

Patent
Ray Wang1, Paul Y. B. Shieh1
03 Mar 2000
TL;DR: In this paper, the authors propose a method and system for mapping virtual connections to asymmetric digital subscriber line (ADSL) latency paths, which includes an embedded service category mapping from a transport network to latency paths at an ADSL transmission convergence sub-layer allowing differential services to be provided for user data based on a desired service category.
Abstract: A method and system for mapping virtual connections to asymmetric digital subscriber line (“ADSL”) latency paths. A request for virtual connection from a transport network (e.g., Asynchronous Transport Mode, Frame Relay, etc.) with a desired service category (e.g., quality-of-service) and a desired latency is mapped to an ADSL device latency interface by checking a latency mapping policy. This mapping provides a virtual connection with a desired service category and a desired latency over ADSL links. The latency mapping includes an embedded service category mapping from a transport network to latency paths at an ADSL transmission convergence sub-layer allowing differential services to be provided for user data based on a desired service category. The latency mapping mechanism may help provide use of end-to-end service categories such as quality-of-service categories, over real-time ADSL links.

Proceedings ArticleDOI
05 Nov 2000
TL;DR: This paper proves an upper bound on the additional latency introduced by power management strategies, and shows that this upper bound is incurred each time the system is shut down and hence is an important system design parameter.
Abstract: A power management algorithm for an embedded system reduces system-level power dissipation by shutting off parts of the system when they are not being used and turning them back on when they are required. Algorithms for this problem are online in nature since they must operate without knowledge of the arrival time or service requirements of future requests. In this paper, we present online algorithms to manage power for embedded systems. We perform an empirical analysis of these algorithms and give theoretical justification for the empirical results. Effective power management strategies have an adverse impact on the latency of the system for which the strategy is designed. Typically, the more aggressive the power management scheme, the greater the increase in the latency of the system. In this paper, we prove an upper bound on the additional latency introduced by power management strategies. Moreover, we show that this upper bound is incurred each time the system is shut down and hence is an important system design parameter. In addition, service times and latencies have an effect on power management strategies, since they alter the length and occurrence of idle periods. We study this phenomenon experimentally by modeling the disk drive of a laptop computer as an embedded system. The results show that if service times of arriving requests are modeled, the relative performance of algorithms can change, leading to non-adaptive algorithms performing better than adaptive ones. We compare the performance of adaptive and non-adaptive power management algorithms. In particular, our experimental results show that an "immediate" shutdown strategy that shuts down the system whenever it encounters an idle period performs surprisingly better than the sophisticated adaptive algorithms suggested in the literature. We provide an analytical explanation for the effectiveness of power management strategies.
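A timeout-based shutdown policy, with timeout 0 playing the role of the "immediate" strategy the authors found competitive, can be scored on a trace of idle-period lengths. A minimal Python model (the parameter names and cost model are illustrative simplifications, not the paper's):

```python
def energy(idle_periods, p_on, e_shutdown, timeout):
    """Energy consumed over a trace of idle-period lengths (seconds) by a
    timeout policy: stay powered for `timeout` seconds of an idle period,
    then shut down and later pay a fixed shutdown/wake-up energy cost.
    timeout=0 is the 'immediate shutdown' strategy."""
    total = 0.0
    for idle in idle_periods:
        if idle <= timeout:
            total += idle * p_on  # period too short: never shut down
        else:
            total += timeout * p_on + e_shutdown  # idle wait + wake cost
    return total

# Immediate shutdown wins when idle periods are long relative to the
# break-even point e_shutdown / p_on; a timeout policy wins on short ones.
immediate = energy([1.0, 10.0], p_on=2.0, e_shutdown=3.0, timeout=0.0)
two_second = energy([1.0, 10.0], p_on=2.0, e_shutdown=3.0, timeout=2.0)
```

The paper's latency point also fits this model: each shutdown event additionally charges the wake-up delay to the next request, which is why the shutdown count is a design parameter.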

Proceedings ArticleDOI
11 Aug 2000
TL;DR: A general analytical model is defined to investigate the impact of prefetching on both latency and energy consumption in a wireless broadcast data delivery system, and to compare by simulation two policies derived from the model results with a policy proposed in the literature.
Abstract: Periodic data broadcasting on a wireless channel has been proposed as an effective data dissemination technique for mobile users. With this technique, users access data by simply monitoring the channel until the required data appear in the broadcast. Hence, this access mode may be advantageous in a wireless environment with respect to the traditional client-server access mode, since it consumes less bandwidth and avoids energy-expensive request sending. Potential drawbacks of this access mode are the latency caused by the wait for the required data in the broadcast, and the energy consumption caused by active listening on the wireless channel. Some techniques have been proposed to alleviate these two drawbacks. In particular, the use of prefetching has been proposed to reduce the latency experienced by a user. However, prefetching could have a negative effect on energy consumption, since it could actually increase the number of channel accesses. We define a general analytical model to investigate the impact of prefetching on both latency and energy consumption in a wireless broadcast data delivery system, with the goal of determining policies characterized by a good tradeoff between reducing latency and saving energy. We then compare by simulation two policies derived from the model results with a policy proposed in the literature.
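The latency side of the tradeoff has a simple first-order model for a flat periodic broadcast: a client tuning in at a uniformly random instant waits half a cycle on average, unless the item was prefetched into its cache. A toy Python sketch (far simpler than the paper's analytical model, which also accounts for listening energy):

```python
def expected_latency(cycle_items, item_time, wanted, prefetched):
    """Expected wait for `wanted` on a periodic broadcast of `cycle_items`
    items, each occupying `item_time` seconds of the cycle. A prefetched
    item is already cached and costs no wait; otherwise the client waits
    half a broadcast cycle on average before the item reappears."""
    if wanted in prefetched:
        return 0.0
    cycle = len(cycle_items) * item_time
    return cycle / 2.0

# Prefetching "c" removes its half-cycle expected wait entirely,
# at the cost of the extra channel listening needed to capture it.
no_cache = expected_latency(["a", "b", "c", "d"], 0.5, "c", set())
cached = expected_latency(["a", "b", "c", "d"], 0.5, "c", {"c"})
```

The energy cost of that extra listening is exactly what the paper's model trades off against the latency saving.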

Patent
24 Feb 2000
TL;DR: In this paper, the phases of frequency domain symbols are rotated prior to application of the IFFT so that cyclic prefix addition may be implemented as cyclic postfix addition.
Abstract: Systems and methods for appending cyclic prefixes to OFDM bursts while employing minimal additional memory and adding minimal latency are provided. This facilitates lower-cost implementations of OFDM communication systems, including systems that carry real-time traffic such as telephony and video conferencing. The phases of the frequency domain symbols are rotated prior to application of the IFFT so that cyclic prefix addition may be implemented as cyclic postfix addition. Cyclic postfix addition requires much less memory and imposes much less latency than cyclic prefix addition.
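The prefix-to-postfix trick rests on the DFT time-shift property: multiplying the frequency-domain symbols by exp(-j2πkL/N) circularly shifts the IFFT output by L samples, so copying the first L output samples to the end produces exactly the same transmitted burst as copying the last L samples to the front. A NumPy check of that identity (symbol values and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 64, 8  # IFFT size and cyclic-prefix length
X = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Conventional transmitter: IFFT, then copy the LAST L samples to the front.
x = np.fft.ifft(X)
prefix_burst = np.concatenate([x[-L:], x])

# Rotated transmitter: phase-rotate the frequency-domain symbols first,
# then copy the FIRST L output samples to the END (a cyclic postfix).
k = np.arange(N)
y = np.fft.ifft(X * np.exp(-2j * np.pi * k * L / N))  # y[n] = x[(n-L) mod N]
postfix_burst = np.concatenate([y, y[:L]])
```

Because the postfix samples are simply the first samples to leave the IFFT, they can be re-emitted as they stream out, which is why the patent's scheme needs almost no buffering and adds almost no latency.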

Patent
01 Aug 2000
TL;DR: In this article, the authors propose a fairness algorithm for destination number (DN) requests in a communication center in response to requests for destination numbers (DNs) from network-level routers.
Abstract: A method for promoting fairness in a communication center in response to requests for destination numbers (DNs) from network-level routers has steps of determining latency for requests from individual ones of the network-level routers, receiving a request from a first router for which latency is determined, assigning a fairness wait time to the request, the time determined as an inverse function of latency, and answering the request according to rules in effect only after the wait time has expired. In some cases requests arrive with priority, and priority is used as well as latency in determining wait time. In other cases a second fairness time is imposed, after which a fairness algorithm is called to award a DN according to statistical history and call priority. The system is useful for communication centers for connection-oriented telephone systems, Internet protocol systems, and for all sorts of digital messaging and mail systems.

Proceedings ArticleDOI
01 Jun 2000
TL;DR: A tool is described to analyze the performance of LDAP directories, and the importance of the factors in determining scalability, namely front-end versus back-end processes, CPU capability, and available memory, is studied.
Abstract: The Lightweight Directory Access Protocol (LDAP) is being used for an increasing number of distributed directory applications. We describe a tool to analyze the performance of LDAP directories, and study the performance of an LDAP directory under a variety of access patterns. In the experiments, we use an LDAP schema proposed for the administration of Service Level Specifications (SLSs) in a differentiated services network. Individual modules in the server and client code are instrumented to obtain a detailed profile of their contributions to the overall system latency and throughput. We first study the performance under our default experiment setup. We then study the importance of the factors in determining scalability, namely front-end versus back-end processes, CPU capability, and available memory. At high loads, the connection management latency increases sharply to dominate the response in most cases. The TCP Nagle algorithm is found to introduce a very large additional latency, and it appears beneficial to disable it in the LDAP server. The CPU capability is found to be significant in limiting the performance of the LDAP server, and for larger directories, which cannot be kept in memory, data transfer from the disk also plays a major role. The scaling of server performance with the number of directory entries is determined by the increase in back-end search latency, and scaling with directory entry size is limited by the front-end encoding of search results, and, for out-of-memory directories, by the disk access latency. We investigate different mechanisms to improve the server performance.
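Disabling the Nagle algorithm, as the study recommends for the LDAP server, is done per socket with the standard `TCP_NODELAY` option, so small response writes are sent immediately instead of being held back to be coalesced with later ones. A minimal sketch (the commented-out host and port are hypothetical):

```python
import socket

def make_no_nagle_socket():
    """Create a TCP socket with the Nagle algorithm disabled (TCP_NODELAY),
    so each small write is transmitted immediately rather than delayed to be
    coalesced with subsequent writes."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return sock

sock = make_no_nagle_socket()
assert sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
# sock.connect(("ldap.example.org", 389))   # hypothetical directory server
sock.close()
```

The same option would be set on the server's accepted connections; the tradeoff is slightly more small packets on the wire in exchange for the lower per-response latency the measurements point to.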

Journal ArticleDOI
TL;DR: An increase in the frequency of brief arousals from sleep was detected in the mask condition, and this is a possible source for the sleep‐onset latency increase perceived by the subjects, consistent with the concept of a physiological basis for sleep misperception in insomnia.
Abstract: It is well established that insomniacs overestimate sleep-onset latency. Furthermore, there is evidence that brief arousals from sleep may occur more frequently in insomnia. This study examined the hypothesis that brief arousals from sleep influence the perception of sleep-onset latency. An average of four sleep onsets was obtained from each of 20 normal subjects on each of two nonconsecutive, counterbalanced, experimental nights. The experimental nights consisted of a control night (control condition) and a condition in which a moderate respiratory load was applied to increase the frequency of microarousals during sleep onset (mask condition). Subjective estimation of sleep-onset latency and indices of sleep quality were assessed by self-report inventory. Objective measures of sleep-onset latency and microarousals were assessed using polysomnography. Results showed that sleep-onset latency estimates were longer in the mask condition than in the control condition, an effect not reflected in objective sleep-stage scoring of sleep-onset latency. Furthermore, an increase in the frequency of brief arousals from sleep was detected in the mask condition, and this is a possible source for the sleep-onset latency increase perceived by the subjects. Findings are consistent with the concept of a physiological basis for sleep misperception in insomnia.

Patent
21 Dec 2000
TL;DR: In this article, the authors use TCP/IP acknowledgement packets to deduce network latency and use a plurality of remotely located network monitors (and/or monitors co-located with servers and/or clients) to derive and report on actual latency experienced throughout the network.
Abstract: A remote network monitor for monitoring transaction-based protocols such as HTTP receives and analyzes protocol requests and associated responses, and derives therefrom a parameter associated with round-trip network latency. For example, TCP/IP acknowledgement packets can be used to deduce network latency. Such network latency and total latency parameters can be used to determine which portion of total latency can be attributable to the network and which portion is attributable to node processing time (e.g., server and/or client processing). A plurality of remotely located network monitors (and/or monitors co-located with servers and/or clients) can be used to derive and report on actual latency experienced throughout the network.
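For a monitor co-located with the server (one of the deployments the patent mentions), the decomposition of total latency into network and node components can be sketched as below. The three-timestamp model and the function itself are simplifying assumptions, not the patent's exact method:

```python
def split_latency(t_request_seen, t_response_sent, t_ack_seen):
    """Toy decomposition of total latency from a monitor at the server.
    Timestamps (seconds):
      t_request_seen  - request packet observed arriving at the server
      t_response_sent - first response packet observed leaving the server
      t_ack_seen      - client's TCP ACK of that packet observed returning
    The ACK round trip approximates pure network latency; the remainder of
    the total is attributed to server (node) processing time."""
    network = t_ack_seen - t_response_sent       # ~ one network round trip
    server = t_response_sent - t_request_seen    # node processing time
    total = t_ack_seen - t_request_seen
    return {"network": network, "server": server, "total": total}
```

For example, a request seen at t=0, a response emitted 30 ms later, and the client's ACK returning at 110 ms would be reported as 30 ms of server processing and roughly 80 ms of network round trip.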

Journal ArticleDOI
TL;DR: Exploring the scientific possibilities of new therapies targeting HIV-1 latency may hold new promise of eventual HIV-1 eradication and warrant further consideration for rational drug design.

Journal ArticleDOI
TL;DR: Minimal F-wave latency and the ratio between the amplitudes of the sural and superficial radial sensory nerve action potential are sensitive measures for the detection of nerve pathology and should be considered in electrophysiologic studies of diabetic polyneuropathy.
Abstract: This report investigated whether minimal F-wave latency and a simple ratio between the sural and superficial radial sensory response amplitudes provide a useful electrodiagnostic test in diabetic patients. To evaluate the diagnostic sensitivity of minimal F-wave latency, the Z-scores of the minimal F-wave latency, motor nerve conduction velocity (MCV), amplitude of compound muscle action potentials (CMAP), and distal latency (DL) of the median, ulnar, tibial, and peroneal nerves were compared in 37 diabetic patients. For the median, ulnar, and tibial nerves, the Z-scores of the minimal F-wave latency were significantly larger than those of the MCV. In addition, for all four motor nerves, the Z-scores of the minimal F-wave latency were significantly larger than those of the CMAP amplitude. Furthermore, 19 subjects showing abnormal results in the standard sensory nerve conduction study had a significantly lower sural/radial amplitude ratio (SRAR), and 84% of them had an SRAR of less than 0.5. In conclusion, minimal F-wave latency and the ratio between the amplitudes of the sural and superficial radial sensory nerve action potential are sensitive measures for the detection of nerve pathology and should be considered in electrophysiologic studies of diabetic polyneuropathy.
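The two quantities compared in the study are straightforward to compute; a minimal sketch follows, in which the example measurements and normative values are hypothetical and chosen purely for illustration:

```python
def z_score(measured, normal_mean, normal_sd):
    """Standardize a nerve-conduction measurement against normative data."""
    return (measured - normal_mean) / normal_sd

def sural_radial_amplitude_ratio(sural_amp_uv, radial_amp_uv):
    """SRAR; the study found 84% of abnormal subjects fell below 0.5."""
    return sural_amp_uv / radial_amp_uv

# Hypothetical values, for illustration only: an F-wave latency well above
# the normative mean, and a sural amplitude well below the radial amplitude,
# would both point toward polyneuropathy under the study's criteria.
f_wave_z = z_score(33.5, normal_mean=28.0, normal_sd=2.2)
ratio = sural_radial_amplitude_ratio(6.0, 18.0)
```

A larger Z-score marks a more abnormal latency relative to controls, and an SRAR below the study's 0.5 cutoff flags sensory nerve pathology.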

Proceedings ArticleDOI
01 Jan 2000
TL;DR: This work proposes a technique by which the buffers and cross-bars are eliminated from the critical path of the load/store execution, which results in both a low and a deterministic latency.
Abstract: One of the problems in future processors will be the resource conflicts caused by several load/store units competing to access the same cache bank. The traditional approach to handling this case is to introduce buffers combined with a cross-bar. This approach suffers from (i) the non-deterministic latency of a load/store and (ii) the extra latency caused by the cross-bar and the buffer management. A deterministic latency is of the utmost importance for the forwarding mechanism of out-of-order processors because it enables back-to-back operation of instructions. We propose a technique that eliminates the buffers and cross-bars from the critical path of load/store execution. This results in both a low and a deterministic latency. Our solution consists of predicting which bank is to be accessed; a penalty results only in the case of a wrong prediction.
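A bank predictor in the spirit of the proposal can be as simple as a PC-indexed table remembering the last bank each static load/store touched. The paper's actual predictor organization may differ; the table size, bank count, and line size below are illustrative:

```python
class BankPredictor:
    """Minimal PC-indexed, last-outcome cache-bank predictor: each static
    load/store predicts that it will access the same bank as last time,
    so the access can be routed without buffers or a cross-bar; only a
    misprediction incurs a penalty."""

    def __init__(self, table_size=256, num_banks=4, line_size=32):
        self.table = [0] * table_size      # predicted bank per (hashed) PC
        self.num_banks = num_banks
        self.line_size = line_size

    def predict(self, pc):
        return self.table[pc % len(self.table)]

    def resolve(self, pc, address):
        """Compute the actual bank, train the table, and report whether the
        earlier prediction was correct."""
        actual = (address // self.line_size) % self.num_banks
        correct = self.predict(pc) == actual
        self.table[pc % len(self.table)] = actual
        return actual, correct

bp = BankPredictor()
bp.resolve(pc=0x40, address=0x1000)                    # trains this load's entry
actual, correct = bp.resolve(pc=0x40, address=0x1004)  # same line -> same bank
assert correct
```

Loads with regular access patterns, such as sequential array walks within a cache line, repeat the same bank and predict correctly, which is what makes the common-case latency both low and deterministic.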