
Showing papers in "IEEE Journal on Selected Areas in Communications in 1995"


Journal ArticleDOI
Roy D. Yates
TL;DR: It is shown that systems in which transmitter powers are subject to maximum power limitations share these common properties, which permit a general proof of the synchronous and totally asynchronous convergence of the iteration p(t+1)=I(p(t)) to a unique fixed point at which total transmitted power is minimized.
Abstract: In cellular wireless communication systems, transmitted power is regulated to provide each user an acceptable connection by limiting the interference caused by other users. Several models have been considered including: (1) fixed base station assignment where the assignment of users to base stations is fixed, (2) minimum power assignment where a user is iteratively assigned to the base station at which its signal to interference ratio is highest, and (3) diversity reception where a user's signal is combined from several or perhaps all base stations. For the above models, the uplink power control problem can be reduced to finding a vector p of users' transmitter powers satisfying p ≥ I(p), where the jth constraint p_j ≥ I_j(p) describes the interference that user j must overcome to achieve an acceptable connection. This work unifies results found for these systems by identifying common properties of the interference constraints. It is also shown that systems in which transmitter powers are subject to maximum power limitations share these common properties. These properties permit a general proof of the synchronous and totally asynchronous convergence of the iteration p(t+1)=I(p(t)) to a unique fixed point at which total transmitted power is minimized.
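The iteration p(t+1) = I(p(t)) can be sketched for the simplest case, fixed base-station assignment. The gains, target SIRs, and noise powers below are illustrative values, not taken from the paper:

```python
import numpy as np

# Illustrative 3-user example with fixed base-station assignment.
# G[j, k] = path gain from user k's transmitter to user j's assigned base station.
G = np.array([[1.00, 0.10, 0.20],
              [0.15, 1.00, 0.10],
              [0.20, 0.05, 1.00]])
gamma = np.array([0.2, 0.2, 0.2])      # target signal-to-interference ratios
noise = np.array([0.01, 0.01, 0.01])   # receiver noise powers

def I(p):
    """Standard interference function: the minimum power user j needs so that
    its SIR at its assigned base station meets the target gamma[j]."""
    other_user_interference = G @ p - np.diag(G) * p
    return gamma * (other_user_interference + noise) / np.diag(G)

p = np.zeros(3)                         # any nonnegative starting point works
for _ in range(200):
    p_next = I(p)
    if np.max(np.abs(p_next - p)) < 1e-12:
        break
    p = p_next

print("converged powers:", p)           # the minimal-power fixed point p = I(p)
```

Under a per-user maximum power cap, the framework described above corresponds to iterating the capped update min(p_max, I(p)), which the paper shows retains the same convergence properties.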

2,526 citations


Journal ArticleDOI
TL;DR: The three key techniques employed by Vegas are described, and the results of a comprehensive experimental performance study, using both simulations and measurements on the Internet, of the Vegas and Reno implementations of TCP are presented.
Abstract: Vegas is an implementation of TCP that achieves between 37 and 71% better throughput on the Internet, with one-fifth to one-half the losses, as compared to the implementation of TCP in the Reno distribution of BSD Unix. This paper motivates and describes the three key techniques employed by Vegas, and presents the results of a comprehensive experimental performance study, using both simulations and measurements on the Internet, of the Vegas and Reno implementations of TCP. >
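The abstract does not spell out the three techniques, but the best-known element of Vegas is its congestion-avoidance rule, which compares expected and actual throughput instead of waiting for loss. Below is a minimal sketch of that rule; the alpha/beta thresholds, units, and function interface are assumptions for illustration, not the BSD implementation studied in the paper:

```python
def vegas_cwnd_update(cwnd, base_rtt, rtt, alpha=1.0, beta=3.0):
    """One Vegas-style congestion-avoidance step (sketch).

    Expected throughput uses the minimum RTT seen so far (base_rtt);
    actual throughput uses the most recent RTT sample. Their difference,
    scaled back to segments, estimates how many segments sit queued in
    the network."""
    expected = cwnd / base_rtt              # segments per second with no queueing
    actual = cwnd / rtt                     # measured rate this round trip
    diff = (expected - actual) * base_rtt   # estimated backlog in segments
    if diff < alpha:
        return cwnd + 1                     # too little data in flight: grow
    elif diff > beta:
        return cwnd - 1                     # queues building up: back off before loss
    return cwnd                             # within the target band: hold steady

# example: 20-segment window, 100 ms base RTT, 150 ms current RTT
print(vegas_cwnd_update(20, 0.100, 0.150))
```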

1,602 citations


Journal ArticleDOI
Scott Shenker
TL;DR: This work addresses some of the fundamental architectural design issues facing the future Internet, including whether the Internet should adopt a new service model, how this service model should be invoked, and whether this service model should include admission control.
Abstract: The Internet has been a startling and dramatic success. Originally designed to link together a small group of researchers, the Internet is now used by many millions of people. However, multimedia applications, with their novel traffic characteristics and service requirements, pose an interesting challenge to the technical foundations of the Internet. We address some of the fundamental architectural design issues facing the future Internet. In particular, we discuss whether the Internet should adopt a new service model, how this service model should be invoked, and whether this service model should include admission control. These architectural issues are discussed in a nonrigorous manner, through the use of a utility function formulation and some simple models. While we do advocate some design choices over others, the main purpose here is to provide a framework for discussing the various architectural alternatives. >

1,072 citations


Journal ArticleDOI
TL;DR: An abstract model for aggregated connectionless traffic, based on the fractional Brownian motion, is presented, and the notion of ideal free traffic is introduced.
Abstract: An abstract model for aggregated connectionless traffic, based on the fractional Brownian motion, is presented. Insight into the parameters is obtained by relating the model to an equivalent burst model. Results on a corresponding storage process are presented. The buffer occupancy distribution is approximated by a Weibull distribution. The model is compared with publicly available samples of real Ethernet traffic. The degree of the short-term predictability of the traffic model is studied through an exact formula for the conditional variance of a future value given the past. The applicability and interpretation of the self-similar model are discussed extensively, and the notion of ideal free traffic is introduced. >
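The Weibull approximation of the buffer occupancy mentioned above comes from evaluating the overflow probability at the most probable overflow epoch. Here is a sketch of that approximation for a queue fed by fractional Brownian traffic with mean rate m, variance coefficient a, and Hurst parameter H, served at constant rate C; the numerical values are illustrative:

```python
import numpy as np

def weibull_tail(x, m, a, H, C):
    """Approximate P(Q > x) for a queue fed by fractional Brownian traffic
    A(t) = m*t + sqrt(a*m)*Z_H(t), drained at constant rate C > m.
    This is the Weibull-like tail obtained from the most probable overflow
    epoch (a sketch of the approximation, for illustration only)."""
    kappa = H**H * (1.0 - H)**(1.0 - H)
    exponent = ((C - m) ** (2 * H)) * (x ** (2 - 2 * H)) / (2 * a * m * kappa**2)
    return np.exp(-exponent)

# illustrative numbers: mean rate 10 Mb/s, variance coefficient a = 1,
# Hurst parameter 0.8, link rate 15 Mb/s, buffer levels x in megabits
for x in (0.5, 1.0, 2.0, 4.0):
    print(x, weibull_tail(x, m=10.0, a=1.0, H=0.8, C=15.0))
```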

800 citations


Journal ArticleDOI
TL;DR: The basic economic theory of pricing a congestible resource, such as an FTP server, a router, or a Web site, is described, and the implications of "congestion pricing" as a way to encourage efficient use of network resources are examined.
Abstract: We describe the basic economic theory of pricing a congestible resource such as an FTP server, a router, a Web site, etc. In particular, we examine the implications of "congestion pricing" as a way to encourage efficient use of network resources. We explore the implications of flat pricing and congestion pricing for capacity expansion in centrally planned, competitive, and monopolistic environments. The most common form of Internet pricing is pricing by access, with no usage-sensitive prices. With a fixed set of users, we expect to see greater capacity when usage is not priced, but also greater congestion. However, with greater congestion, congestion-sensitive users might not use the resource. >

672 citations


Journal ArticleDOI
TL;DR: This work shows how current TCP implementations introduce unacceptably long pauses in communication during cellular handoffs, and proposes an end-to-end fast retransmission scheme that can reduce these pauses to levels more suitable for human interaction.
Abstract: We explore the performance of reliable data communication in mobile computing environments. Motion across wireless cell boundaries causes increased delays and packet losses while the network learns how to route data to a host's new location. Reliable transport protocols like TCP interpret these delays and losses as signs of network congestion. They consequently throttle their transmissions, further degrading performance. We quantify this degradation through measurements of protocol behavior in a wireless networking testbed. We show how current TCP implementations introduce unacceptably long pauses in communication during cellular handoffs (800 ms and longer), and propose an end-to-end fast retransmission scheme that can reduce these pauses to levels more suitable for human interaction (200 ms). Our work makes clear the need for reliable transport protocols to differentiate between motion-related and congestion-related packet losses and suggests how to adapt these protocols to perform better in mobile computing environments. >

607 citations


Journal ArticleDOI
Stephen V. Hanly
TL;DR: It is shown that the algorithm converges to an allocation of users to cells that is optimal in the sense that interference is minimized, and numerical examples demonstrate how effectively the algorithm relieves local network congestion.
Abstract: There is much current interest in spread spectrum wireless mobile communications and in particular the issue of spread spectrum wireless capacity. We characterize spread spectrum cellular capacity and provide a combined power control, cell-site selection algorithm that enables this capacity to be achieved. The algorithm adapts users' transmitter power levels and switches them between cell-sites, and it is shown that the algorithm converges to an allocation of users to cells that is optimal in the sense that interference is minimized. The algorithm is decentralized, and can be considered as a mechanism for cell-site diversity and handover. We provide numerical examples to show how effectively the algorithm relieves local network congestion, by switching users in a heavily congested cell to adjacent, less congested cells. >
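A sketch of the combined power-control and cell-site selection idea: at each step every user computes the power it would need at each base station given the current interference, then transmits at the minimum of those powers, which implicitly selects its best cell. The gain matrix, target SIR, and noise level below are illustrative assumptions, not the paper's examples:

```python
import numpy as np

# Illustrative path gains: G[b, j] = gain from user j to base station b.
G = np.array([[1.00, 0.30, 0.05],
              [0.20, 0.90, 0.40],
              [0.05, 0.25, 1.00]])
gamma = 0.15      # common target SIR (illustrative)
noise = 0.01      # receiver noise power at every base station

def step(p):
    """One combined power-control / cell-selection iteration: each user computes
    the power it would need at every base station given the current interference,
    then transmits at the minimum of those powers, implicitly choosing that cell."""
    new_p = np.empty_like(p)
    cells = np.empty(len(p), dtype=int)
    for j in range(len(p)):
        required = np.empty(G.shape[0])
        for b in range(G.shape[0]):
            interference = G[b] @ p - G[b, j] * p[j] + noise
            required[b] = gamma * interference / G[b, j]
        cells[j] = int(np.argmin(required))
        new_p[j] = required[cells[j]]
    return new_p, cells

p = np.zeros(3)
for _ in range(100):
    p, cells = step(p)

print("powers:", p, "cell assignment:", cells)
```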

507 citations


Journal ArticleDOI
TL;DR: It is found that service curves provide a convenient framework for managing the allocation of performance guarantees, and that bounds on end-to-end performance measures can be simply obtained in terms of service curves and burstiness constraints on arriving traffic.
Abstract: We review some recent results regarding the problem of providing deterministic quality of service guarantees in slot-based virtual circuit switched networks. The concept of a service curve is used to partially characterize the service that virtual circuit connections receive. We find that service curves provide a convenient framework for managing the allocation of performance guarantees. In particular, bounds on end-to-end performance measures can be simply obtained in terms of service curves and burstiness constraints on arriving traffic. Service curves can be allocated to the connections, and we consider scheduling algorithms that can support the allocated service curves. Such an approach provides the required degree of isolation between the connections in order to support performance guarantees, without precluding statistical multiplexing. Finally, we examine the problem of enforcing burstiness constraints in slot-based networks. >
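For the most common special case of this framework, a connection constrained by a leaky bucket (sigma, rho) and allocated a rate-latency service curve beta(t) = R*max(t - T, 0), the bounds take a closed form. The sketch below uses those standard network-calculus assumptions; the paper's slot-based formulation differs in detail:

```python
def delay_and_backlog_bounds(sigma, rho, R, T):
    """Deterministic bounds for a (sigma, rho) leaky-bucket constrained flow
    served with a rate-latency service curve beta(t) = R * max(t - T, 0).
    A sketch of the standard service-curve / burstiness calculus; requires rho <= R."""
    assert rho <= R, "the allocated service rate must cover the sustained rate"
    delay_bound = T + sigma / R          # max horizontal deviation between the curves
    backlog_bound = sigma + rho * T      # max vertical deviation between the curves
    return delay_bound, backlog_bound

# illustrative: 10 kb burst, 1 Mb/s sustained rate, 2 Mb/s service rate, 5 ms latency
print(delay_and_backlog_bounds(sigma=10e3, rho=1e6, R=2e6, T=0.005))
```

End-to-end bounds follow because service curves of elements in tandem compose: for rate-latency elements, the latencies add and the rate is the minimum of the per-hop rates.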

492 citations


Journal ArticleDOI
TL;DR: This work proposes algorithms to manipulate compressed video in the compressed domain using the discrete cosine transform with or without motion compensation (MC), and derives a complete set of algorithms for all aforementioned manipulation functions in the transform domain.
Abstract: Many advanced video applications require manipulations of compressed video signals. Popular video manipulation functions include overlap (opaque or semitransparent), translation, scaling, linear filtering, rotation, and pixel multiplication. We propose algorithms to manipulate compressed video in the compressed domain. Specifically, we focus on compression algorithms using the discrete cosine transform (DCT) with or without motion compensation (MC). Such compression systems include JPEG, motion JPEG, MPEG, and H.261. We derive a complete set of algorithms for all aforementioned manipulation functions in the transform domain, in which video signals are represented by quantized transform coefficients. Due to a much lower data rate and the elimination of decompression/compression conversion, the transform-domain approach has great potential in reducing the computational complexity. The actual computational speedup depends on the specific manipulation functions and the compression characteristics of the input video, such as the compression rate and the nonzero motion vector percentage. The proposed techniques can be applied to general orthogonal transforms, such as the discrete trigonometric transform. For compression systems incorporating MC (such as MPEG), we propose a new decoding algorithm to reconstruct the video in the transform domain and then perform the desired manipulations in the transform domain. The same technique can be applied to efficient video transcoding (e.g., from MPEG to JPEG) with minimal decoding. >
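The simplest compressed-domain manipulation exploits the linearity of the DCT: any pixel-domain linear combination of blocks can be computed directly on their dequantized coefficients. The snippet below is a small self-check of that identity for a semitransparent overlap of two 8x8 blocks, using SciPy's DCT; quantization and motion compensation are ignored in this sketch:

```python
import numpy as np
from scipy.fft import dctn, idctn

def blend_in_dct_domain(A_dct, B_dct, alpha):
    """Semitransparent overlap of two 8x8 blocks done directly on their
    (dequantized) DCT coefficients, using the linearity of the transform:
    DCT(alpha*A + (1-alpha)*B) = alpha*DCT(A) + (1-alpha)*DCT(B)."""
    return alpha * A_dct + (1.0 - alpha) * B_dct

rng = np.random.default_rng(0)
A = rng.uniform(0, 255, (8, 8))
B = rng.uniform(0, 255, (8, 8))

A_dct = dctn(A, norm="ortho")
B_dct = dctn(B, norm="ortho")

pixel_domain = 0.3 * A + 0.7 * B
dct_domain = idctn(blend_in_dct_domain(A_dct, B_dct, 0.3), norm="ortho")
print(np.allclose(pixel_domain, dct_domain))   # True: the two paths agree
```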

489 citations


Journal ArticleDOI
TL;DR: The authors investigate two packet-discard strategies that alleviate the effects of fragmentation, and introduce early packet discard, a strategy in which the switch drops whole packets prior to buffer overflow; this mechanism prevents fragmentation and restores throughput to maximal levels.
Abstract: The authors investigate the performance of transmission control protocol (TCP) connections over ATM networks without ATM-level congestion control and compare it to the performance of TCP over packet-based networks. For simulations of congested networks, the effective throughput of TCP over ATM can be quite low when cells are dropped at the congested ATM switch. The low throughput is due to wasted bandwidth as the congested link transmits cells from "corrupted" packets, i.e., packets in which at least one cell is dropped by the switch. The authors investigate two packet-discard strategies that alleviate the effects of fragmentation. Partial packet discard, in which remaining cells are discarded after one cell has been dropped from a packet, somewhat improves throughput. They introduce early packet discard, a strategy in which the switch drops whole packets prior to buffer overflow. This mechanism prevents fragmentation and restores throughput to maximal levels.
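A sketch of how the two discard strategies might look in a switch's cell-admission logic. The class below is illustrative pseudocode made runnable, not the switch model used in the paper's simulations:

```python
class EpdQueue:
    """Cell queue with early and partial packet discard (sketch).

    Cells of one AAL5 packet arrive in order on a virtual circuit (vc), and the
    last cell of a packet carries an end-of-packet marker. If the queue occupancy
    is above a threshold when a *new* packet starts, the whole packet is dropped
    (early packet discard). If a cell must be dropped mid-packet because the
    buffer is full, the rest of that packet is dropped too (partial packet
    discard), since the packet is already corrupted."""

    def __init__(self, capacity, epd_threshold):
        self.capacity = capacity
        self.threshold = epd_threshold
        self.queue = []
        self.dropping = {}   # vc -> True while this vc's current packet is being discarded

    def on_cell(self, vc, start_of_packet, end_of_packet):
        if start_of_packet:
            # EPD decision point: admit the new packet only if there is headroom.
            self.dropping[vc] = len(self.queue) > self.threshold
        if self.dropping.get(vc, False):
            accepted = False                       # discard the rest of the packet
        elif len(self.queue) < self.capacity:
            self.queue.append((vc, end_of_packet))
            accepted = True
        else:
            # Buffer overflow mid-packet: drop this cell and the remainder (PPD).
            self.dropping[vc] = True
            accepted = False
        if end_of_packet:
            self.dropping[vc] = False
        return accepted

q = EpdQueue(capacity=100, epd_threshold=80)
accepted = [q.on_cell(vc=1, start_of_packet=(i == 0), end_of_packet=(i == 9)) for i in range(10)]
print(accepted)
```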

432 citations


Journal ArticleDOI
TL;DR: A new approach to determining the admissibility of variable bit rate (VBR) traffic in buffered digital networks is developed, and the boundary of the set of admissible traffic sources is found to be sufficiently linear that an effective bandwidth can be meaningfully assigned to each VBR source.
Abstract: A new approach to determining the admissibility of variable bit rate (VBR) traffic in buffered digital networks is developed. In this approach all traffic presented to the network is assumed to have been subjected to leaky-bucket regulation, and extremal, periodic, on-off regulated traffic is considered; the analysis is based on fluid models. Each regulated traffic stream is allocated bandwidth and buffer resources which are independent of other traffic. Bandwidth and buffer allocations are traded off in a manner optimal for an adversarial situation involving minimal knowledge of other traffic. This leads to a single-resource statistical-multiplexing problem which is solved using techniques previously used for unbuffered traffic. VBR traffic is found to be divisible into two classes, one for which statistical multiplexing is effective and one for which statistical multiplexing is ineffective in the sense that accepting small losses provides no advantage over lossless performance. The boundary of the set of admissible traffic sources is examined, and is found to be sufficiently linear that an effective bandwidth can be meaningfully assigned to each VBR source, so long as only statistically-multiplexable sources are considered, or only nonstatistically-multiplexable sources are considered. If these two types of sources are intermixed, then nonlinear interactions occur and fewer sources can be admitted than a linear theory would predict. A qualitative characterization of the nonlinearities is presented. The complete analysis involves conservative approximations; however, admission decisions based on this work are expected to be less overly conservative than decisions based on alternative approaches. >
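The bandwidth/buffer trade-off for a single leaky-bucket regulated source has a simple closed form in the lossless case, because the extremal regulated pattern is an on-off burst at the peak rate: a source with peak P, token rate rho, and bucket sigma that is given dedicated bandwidth c needs buffer (P - c)*sigma/(P - rho). A sketch of this trade-off with illustrative numbers; the statistical-multiplexing analysis in the paper builds on top of this kind of per-source allocation:

```python
def lossless_buffer_requirement(P, rho, sigma, c):
    """Buffer needed for zero loss when a leaky-bucket (sigma, rho) regulated
    source with peak rate P is given dedicated bandwidth c (rho <= c <= P).
    The worst-case regulated pattern is an on-off burst at the peak rate
    lasting sigma / (P - rho)."""
    assert rho <= c <= P
    burst_duration = sigma / (P - rho)
    return (P - c) * burst_duration

# illustrative source: peak 10 Mb/s, sustained 2 Mb/s, 100 kb token bucket
for c in (2e6, 4e6, 6e6, 8e6, 10e6):
    b = lossless_buffer_requirement(P=10e6, rho=2e6, sigma=1e5, c=c)
    print(f"bandwidth {c / 1e6:.0f} Mb/s -> buffer {b / 1e3:.1f} kb")
```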

Journal ArticleDOI
TL;DR: The objective of this paper is to use large deviation theory and the Laplace method of integration to provide a simple intuitive overview of the theory of effective bandwidth for high-speed digital networks, especially ATM networks.
Abstract: The theory of large deviations provides a simple unified basis for statistical mechanics, information theory and queueing theory. The objective of this paper is to use large deviation theory and the Laplace method of integration to provide a simple intuitive overview of the theory of effective bandwidth for high-speed digital networks, especially ATM networks. This includes (1) identification of the appropriate energy function, entropy function and effective bandwidth function of a source, (2) the calculus of the effective bandwidth functions, (3) bandwidth allocation and buffer management, (4) traffic descriptors, and (5) envelope processes and conjugate processes for fast simulation and bounds.
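The central object of this theory is the effective bandwidth function, which interpolates between the mean and peak rates of a source as the space parameter grows. Here is a sketch of estimating it empirically from a per-slot arrival trace; the Poisson toy trace and the scaling convention are assumptions for illustration, not the paper's notation:

```python
import numpy as np

def effective_bandwidth(samples, s):
    """Empirical effective bandwidth alpha(s) = (1/s) * log E[exp(s * X)],
    estimated from per-slot arrival samples X (a sketch; space/time scaling
    conventions vary across the effective-bandwidth literature)."""
    samples = np.asarray(samples, dtype=float)
    return np.log(np.mean(np.exp(s * samples))) / s

rng = np.random.default_rng(1)
arrivals = rng.poisson(lam=5.0, size=100_000)     # toy per-slot arrival trace

for s in (1e-3, 0.1, 0.5, 1.0):
    print(f"s={s:g}  effective bandwidth = {effective_bandwidth(arrivals, s):.3f}")
print("mean =", arrivals.mean(), " peak =", arrivals.max())
```

As s tends to 0 the estimate approaches the mean rate, and as s grows it moves toward the peak, which is the interpolation property the calculus of effective bandwidths exploits.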

Journal ArticleDOI
TL;DR: This paper proposes techniques that discourage unauthorized distribution by embedding each document with a unique codeword; three coding methods are proposed, one is described in detail, and experimental results show that the identification techniques are highly reliable, even after documents have been photocopied.
Abstract: Modern computer networks make it possible to distribute documents quickly and economically by electronic means rather than by conventional paper means. However, the widespread adoption of electronic distribution of copyrighted material is currently impeded by the ease of unauthorized copying and dissemination. In this paper we propose techniques that discourage unauthorized distribution by embedding each document with a unique codeword. Our encoding techniques are indiscernible by readers, yet enable us to identify the sanctioned recipient of a document by examination of a recovered document. We propose three coding methods, describe one in detail, and present experimental results showing that our identification techniques are highly reliable, even after documents have been photocopied. >

Journal ArticleDOI
TL;DR: This methodology differs from many previous studies, which concentrated on end-point definitions of flows in terms of state derived from observing the explicit opening and closing of TCP connections; instead, flows are defined based on traffic satisfying various temporal and spatial locality conditions, as observed at internal points of the network.
Abstract: We present a parameterizable methodology for profiling Internet traffic flows at a variety of granularities. Our methodology differs from many previous studies that have concentrated on end-point definitions of flows in terms of state derived from observing the explicit opening and closing of TCP connections. Instead, our model defines flows based on traffic satisfying various temporal and spatial locality conditions, as observed at internal points of the network. This approach to flow characterization helps address some central problems in networking based on the Internet model. Among them are route caching, resource reservation at multiple service levels, usage based accounting, and the integration of IP traffic over an ATM fabric. We first define the parameter space and then concentrate on metrics characterizing both individual flows as well as the aggregate flow profile. We consider various granularities of the definition of a flow, such as by destination network, host-pair, or host and port quadruple. We include some measurements based on case studies we undertook, which yield significant insights into some aspects of Internet traffic, including demonstrating (i) the brevity of a significant fraction of IP flows at a variety of traffic aggregation granularities, (ii) that the number of host-pair IP flows is not significantly larger than the number of destination network flows, and (iii) that schemes for caching traffic information could significantly benefit from using application information. >
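A sketch of the timeout-based, locality-driven flow definition: packets sharing a key (at whatever granularity the key function encodes) belong to one flow until that key stays idle longer than a timeout. The 64-second timeout, the record format, and the field names are illustrative assumptions:

```python
def profile_flows(packets, timeout=64.0, key=lambda p: (p["src"], p["dst"])):
    """Group a packet trace into flows using a temporal-locality rule (sketch):
    a packet belongs to the active flow for its key (here the host pair, but the
    key function can express any granularity, e.g. destination network or the
    full address/port quadruple); a flow ends when its key stays idle longer
    than `timeout` seconds. Returns per-flow [first_ts, last_ts, packets, bytes]."""
    active = {}
    finished = []
    for pkt in sorted(packets, key=lambda p: p["ts"]):
        k = key(pkt)
        flow = active.get(k)
        if flow is not None and pkt["ts"] - flow[1] > timeout:
            finished.append((k, flow))       # the old flow expired; start a new one
            flow = None
        if flow is None:
            flow = [pkt["ts"], pkt["ts"], 0, 0]
            active[k] = flow
        flow[1] = pkt["ts"]
        flow[2] += 1
        flow[3] += pkt["len"]
    finished.extend(active.items())
    return finished

trace = [
    {"ts": 0.0,   "src": "a", "dst": "b", "len": 100},
    {"ts": 1.0,   "src": "a", "dst": "b", "len": 200},
    {"ts": 200.0, "src": "a", "dst": "b", "len": 50},   # idle gap > timeout: new flow
]
print(profile_flows(trace))
```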

Journal ArticleDOI
TL;DR: It is shown that, for systems of parallel links, such paradoxes cannot occur and the optimal solution coincides with the solution in the single-user case, and some extensions to general network topologies are derived.
Abstract: In noncooperative networks, users make control decisions that optimize their individual performance measure. Focusing on routing, two methodologies for architecting noncooperative networks are devised that improve the overall network performance. These methodologies are motivated by problem settings arising in the provisioning and the run time phases of the network. For either phase, Nash equilibria characterize the operating point of the network. The goal in the provisioning phase is to allocate link capacities that lead to systemwide efficient Nash equilibria. The solution of such design problems is, in general, counterintuitive, since adding link capacity might lead to degradation of user performance. For systems of parallel links, it is shown that such paradoxes cannot occur and that the optimal solution coincides with the solution in the single-user case. Extensions to general network topologies are derived. During the run time phase, a manager controls the routing of part of the network flow. The manager is aware of the noncooperative behavior of the users and makes its routing decisions based on this information while aiming at improving the overall system performance. We obtain necessary and sufficient conditions for enforcing an equilibrium that coincides with the global network optimum, and indicate that these conditions are met in many cases of interest.

Journal ArticleDOI
Anwar Elwalid, Daniel P. Heyman, T. V. Lakshman, Debasis Mitra, Alan Weiss
TL;DR: An approximation to the steady-state buffer distribution, called the Chernoff-dominant eigenvalue approximation, is effective for analyzing ATM multiplexers, even when the traffic has many, possibly heterogeneous, sources and their models are of high dimension.
Abstract: The main contributions of this paper are two-fold. First, we prove fundamental, similarly behaving lower and upper bounds, and give an approximation based on the bounds, which is effective for analyzing ATM multiplexers, even when the traffic has many, possibly heterogeneous, sources and their models are of high dimension. Second, we apply our analytic approximation to statistical models of video teleconference traffic, obtain the multiplexing system's capacity as determined by the number of admissible sources for given cell-loss probability, buffer size and trunk bandwidth, and, finally, compare with results from simulations, which are driven by actual data from coders. The results are surprisingly close. Our bounds are based on large deviations theory. The main assumption is that the sources are Markovian and time-reversible. Our approximation to the steady-state buffer distribution is called the Chernoff-dominant eigenvalue approximation, since one parameter is obtained from Chernoff's theorem and the other is the system's dominant eigenvalue. Fast, effective techniques are given for their computation. In our application we process the output of variable bit rate coders to obtain DAR(1) source models which, while of high dimension, require only knowledge of the mean, variance, and correlation. We require cell-loss probability not to exceed 10^-6, trunk bandwidth ranges from 45 to 150 Mb/s, buffer sizes are such that maximum delays range from 1 to 60 ms, and the number of coder-sources ranges from 15 to 150. Even for the largest systems, the time for analysis is a fraction of a second, while each simulation takes many hours. Thus, the real-time administration of admission control based on our analytic techniques is feasible.
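A sketch of the DAR(1) source model used for the coder outputs: the previous frame size is repeated with probability rho, otherwise a fresh value is drawn from the marginal distribution, so the model is fixed by the marginal (mean, variance) plus one correlation parameter. The gamma marginal below is an illustrative stand-in, not the fit to the actual coder data:

```python
import numpy as np

def dar1(marginal_sampler, rho, n, rng):
    """Generate n frames of a DAR(1) process (sketch): with probability rho the
    previous value is repeated, otherwise a fresh value is drawn from the
    marginal distribution. The lag-k autocorrelation is rho**k."""
    x = np.empty(n)
    x[0] = marginal_sampler(rng)
    for i in range(1, n):
        x[i] = x[i - 1] if rng.random() < rho else marginal_sampler(rng)
    return x

rng = np.random.default_rng(2)
# illustrative marginal for frame sizes: gamma with mean ~400 kb per frame
frame_bits = dar1(lambda r: r.gamma(shape=4.0, scale=1e5), rho=0.98, n=10_000, rng=rng)
print("mean:", frame_bits.mean(),
      "lag-1 correlation:", np.corrcoef(frame_bits[:-1], frame_bits[1:])[0, 1])
```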

Journal ArticleDOI
TL;DR: This paper proposes an intelligent method for locating users, the alternative strategy (AS), based on the observation that the mobility behavior of a majority of people can be foretold; exploiting this can save the signaling messages generated by mobility management procedures, leading to savings in system resources.
Abstract: Mobile radio communications raise two major problems: first, a very poor radio link quality; second, the users' mobility, whose management (tracking their position) consumes resources, especially radio bandwidth. This paper focuses on the second issue and proposes an intelligent method for locating users: the alternative strategy (AS). The proposal is based on the observation that the mobility behavior of a majority of people can be foretold. If taken into consideration by the system, this characteristic can save the signaling messages generated by mobility management procedures, leading to savings in system resources. Several versions of the AS are described: a basic version for long-term events (i.e., received calls and registrations), and versions with increased memory for short- and medium-term events. The evaluation of the basic versions, performed using analytic and simulation approaches, shows that storing mobility-related information brings great savings in system resources when users have medium or highly predictable mobility patterns. More generally, this work points out that future systems will have to integrate user-related information, first to provide customized services and second to save system resources. Current trends in mobile communications also show that adaptive and dynamic system capabilities require more information to be collected and computed.

Journal ArticleDOI
TL;DR: A multiaccess communication model over the additive Gaussian noise channel is developed and analyzed that incorporates some queueing-theoretic aspects of the problem.
Abstract: We develop and analyze a multiaccess communication model over the additive Gaussian noise channel. The framework is information-theoretic; nonetheless it also incorporates some queueing-theoretic aspects of the problem. >

Journal ArticleDOI
G.K. Kaleh
TL;DR: In this article, a nonstationary innovations representation based on Cholesky factorization is used to define a noise whitener and a maximum-likelihood block detector, and block linear equalizers and block decision-feedback equalizers are derived.
Abstract: In a block transmission system the information symbols are arranged in the form of blocks separated by known symbols. Such a system is suitable for communication over time-dispersive channels subject to fast time-variations, e.g., the HF channel. The known reliable receiver for this system is the nonlinear data-directed estimator (NDDE). This paper presents appropriate equalization methods for this system. A nonstationary innovations representation based on Cholesky factorization is used in order to define a noise whitener and a maximum-likelihood block detector. Also, block linear equalizers and block decision-feedback equalizers are derived. For each type we give the zero-forcing and the minimum-mean-squared-error versions. Performance evaluations and comparisons are given. We show that they perform better than conventional equalizers. As compared to the NDDE, the derived block decision-feedback equalizers perform better and are much less complex. Whereas the NDDE uses the Levinson algorithm to solve M/2 Toeplitz systems of decreasing order (where M is the number of symbols per block), the derived equalizers need to process only one Toeplitz system. Moreover, the Schur algorithm, proposed for the Cholesky factorization, allows us to further reduce the complexity.
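To illustrate the block-matrix viewpoint, here is a minimal MMSE block linear equalizer that solves the normal equations of one block by Cholesky factorization. This is only the simplest of the structures discussed above (no whitened ML detection or decision feedback), and the channel taps, block length, and noise level are illustrative assumptions:

```python
import numpy as np

def mmse_block_equalizer(y, H, noise_var):
    """MMSE block linear equalization sketch: for a received block y = H a + n
    with known channel convolution matrix H and unit-energy symbols, solve
        (H^H H + noise_var * I) a_hat = H^H y
    via a Cholesky factorization."""
    A = H.conj().T @ H + noise_var * np.eye(H.shape[1])
    L = np.linalg.cholesky(A)                       # A = L L^H
    z = np.linalg.solve(L, H.conj().T @ y)          # forward substitution
    return np.linalg.solve(L.conj().T, z)           # back substitution

rng = np.random.default_rng(3)
h = np.array([1.0, 0.5, 0.2])                        # illustrative channel taps
M = 16                                               # symbols per block
a = rng.choice([-1.0, 1.0], size=M)                  # BPSK block
H = np.array([[h[i - j] if 0 <= i - j < len(h) else 0.0 for j in range(M)]
              for i in range(M + len(h) - 1)])       # block convolution matrix
y = H @ a + 0.05 * rng.standard_normal(H.shape[0])
a_hat = np.sign(mmse_block_equalizer(y, H, noise_var=0.05**2))
print("symbol errors:", int(np.sum(a_hat != a)))
```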

Journal ArticleDOI
TL;DR: A simple and robust ATM call admission control is described, and the theoretical background for its analysis is developed, allowing an explicit treatment of the trade-off between cell loss and call rejection.
Abstract: This paper describes a simple and robust ATM call admission control, and develops the theoretical background for its analysis. Acceptance decisions are based on whether the current load is less than a precalculated threshold, and Bayesian decision theory provides the framework for the choice of thresholds. This methodology allows an explicit treatment of the trade-off between cell loss and call rejection, and of the consequences of estimation error. Further topics discussed include the robustness of the control to departures from model assumptions, its performance relative to a control possessing precise knowledge of all unknown parameters, the relationship between leaky bucket depths and buffer requirements, and the treatment of multiple call types. >

Journal ArticleDOI
TL;DR: Results of simulations show that the CRBFN with the stochastic-gradient algorithm can be quite effective in channel equalization.
Abstract: It is generally recognized that digital channel equalization can be interpreted as a problem of nonlinear classification. Networks capable of approximating nonlinear mappings can be quite useful in such applications. The radial basis function network (RBFN) is one such network. We consider an extension of the RBFN for complex-valued signals (the complex RBFN or CRBFN). We also propose a stochastic-gradient (SG) training algorithm that adapts all free parameters of the network. We then consider the problem of equalization of complex nonlinear channels using the CRBFN as part of an equalizer. Results of simulations we have carried out show that the CRBFN with the SG algorithm can be quite effective in channel equalization. >
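A sketch of the CRBFN idea: real-valued Gaussian kernels evaluated on complex inputs feed complex output weights, trained sample-by-sample with an LMS-style stochastic-gradient rule. Unlike the algorithm in the paper, which adapts all free parameters including centers and widths, this sketch adapts only the output weights, and the class interface is an assumption for illustration:

```python
import numpy as np

class ComplexRBF:
    """Complex radial basis function network sketch: real Gaussian kernels on
    complex-valued inputs, complex output weights trained by an LMS-style
    stochastic-gradient rule (output weights only)."""

    def __init__(self, centers, width):
        self.centers = np.asarray(centers, dtype=complex)    # shape (K, d)
        self.width = width
        self.w = np.zeros(len(self.centers), dtype=complex)

    def _phi(self, x):
        d2 = np.sum(np.abs(x - self.centers) ** 2, axis=1)   # squared complex distances
        return np.exp(-d2 / (2.0 * self.width ** 2))          # real-valued kernel outputs

    def predict(self, x):
        return self._phi(x) @ self.w

    def train_step(self, x, desired, mu=0.1):
        phi = self._phi(x)
        err = desired - phi @ self.w
        self.w += mu * err * phi                              # gradient step on |err|^2
        return err

# toy use: map noisy 4-QAM observations back to symbols (illustrative only)
net = ComplexRBF(centers=[[1 + 1j], [1 - 1j], [-1 + 1j], [-1 - 1j]], width=0.7)
rng = np.random.default_rng(5)
symbols = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
for _ in range(2000):
    s = symbols[rng.integers(4)]
    noisy = s + 0.1 * (rng.standard_normal() + 1j * rng.standard_normal())
    net.train_step(np.array([noisy]), s)
print(net.predict(np.array([0.95 + 1.05j])))   # should be near 1 + 1j
```

In an equalizer the centers would typically be placed at estimates of the noiseless channel output states for each transmitted symbol pattern, which is where the nonlinear-classification view of equalization comes from.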

Journal ArticleDOI
TL;DR: Bounds on the individual session backlog and delay distribution under the generalized processor sharing (GPS) scheduling discipline are developed, and it is shown that networks belonging to a broad class of GPS assignments, the so-called consistent relative session treatment (CRST) GPS assignments, are stable in a stochastic sense.
Abstract: We develop bounds on the individual session backlog and delay distribution under the generalized processor sharing (GPS) scheduling discipline. This work is motivated by, and is an extension of, Parekh and Gallager's (see IEEE/ACM Trans. Networking, vol.1, no.6, p.344-357, 1993, and vol. 2, no.4, p.137-150, 1994) deterministic study of the GPS scheduling discipline with leaky-bucket token controlled sessions. Using the exponentially bounded burstiness (EBB) process model introduced by Yaron and Sidi (see IEEE/ACM Trans. Networking, vol.1, p.372-385, 1993) as a source traffic characterization, we establish results that extend the deterministic study of GPS. For a single GPS server in isolation, we present statistical bounds on the distributions of backlog and delay for each session. In the network setting, we show that networks belonging to a broad class of GPS assignments, the so-called consistent relative session treatment (CRST) GPS assignments, are stable in a stochastic sense. In particular, we establish simple bounds on the distribution of backlog and delay for each session in a rate proportional processor sharing (RPPS) GPS network with arbitrary topology. >

Journal ArticleDOI
TL;DR: This paper surveys the on-line routing framework and the proposed routing and admission control strategies, and discusses some of the implementation issues.
Abstract: Classical routing and admission control strategies achieve provably good performance by relying on an assumption that the virtual circuits arrival pattern can be described by some a priori known probabilistic model. A new on-line routing framework, based on the notion of competitive analysis, was proposed. This framework is geared toward design of strategies that have provably good performance even in the case where there are no statistical assumptions on the arrival pattern and parameters of the virtual circuits. The on-line strategies motivated by this framework are quite different from the min-hop and reservation-based strategies. This paper surveys the on-line routing framework, the proposed routing and admission control strategies, and discusses some of the implementation issues. >

Journal ArticleDOI
TL;DR: The overall dynamic bandwidth-allocation scheme presented is shown to be promising and practically feasible in obtaining efficient transmission of real-time video traffic.
Abstract: This paper presents a novel approach to dynamic transmission bandwidth allocation for transport of real-time variable-bit-rate video in ATM networks. Video traffic statistics are measured in the frequency domain. The low-frequency signal captures the slow time-variation of consecutive scene changes while the high-frequency signal exhibits the feature of strong frame autocorrelation. Our queueing study indicates that the video transmission bandwidth in a finite-buffer system is essentially characterized by the low-frequency signal. We further observe in typical JPEG/MPEG video sequences that the time scale of video scene changes is in the range of a second or longer, which localizes the low-frequency video signal in a well-defined low-frequency band. Hence, in a network design it is feasible to implement dynamic allocation of video transmission bandwidth using on-line observation and prediction of scene changes. Two prediction schemes are examined: recursive least square method and time delay neural network method. A time delay neural network with low-complexity high-order architecture, called "pi-sigma network," is successfully used to predict scene changes. The overall dynamic bandwidth-allocation scheme presented is shown to be promising and practically feasible in obtaining efficient transmission of real-time video traffic. >

Journal ArticleDOI
TL;DR: A performance comparison with a classical fixed channel allocation has been carried out, and it has been shown that a higher traffic density, with respect to GEO systems, is manageable by means of LEO satellites.
Abstract: Efficient dynamic channel allocation techniques with handover queuing suitable for applications in mobile satellite cellular networks, are discussed. The channel assignment on demand is performed on the basis of the evaluation of a suitable cost function. Geostationary and low Earth orbit (LEO) satellites have been considered. In order to highlight the better performance of the dynamic techniques proposed, a performance comparison with a classical fixed channel allocation (FCA) has been carried out, as regards the probability that a newly arriving call is not completely served. It has also been shown that a higher traffic density, with respect to GEO systems, is manageable by means of LEO satellites. >

Journal ArticleDOI
TL;DR: An analytical model for teletraffic performance (including hand-off) is developed and shows the carried traffic, traffic distribution, blocking, and forced termination probabilities for users having different mobility characteristics.
Abstract: A personal communication system with multiple hierarchical cellular overlays is considered. The system can include a terrestrial segment and a space segment. The terrestrial segment, consisting of microcells and macrocells, provides high channel capacity by covering service areas with microcells. Overlaying macrocells cover spots where radio propagation is difficult for microcells and provide overflow groups of channels for clusters of microcells. At the highest hierarchical level, communications satellites comprise a space segment. The satellite beams overlay clusters of terrestrial macrocells and provide primary access for satellite-only subscribers. Call attempts from cellular/satellite dual subscribers are first directed to the terrestrial cellular network, with satellites providing necessary overlay. At each level of the hierarchy, hand-off calls are given priority access to the channels. The mathematical structure is that of a multilayer hierarchical overflow system. An analytical model for teletraffic performance (including hand-off) is developed. Theoretical performance measures are calculated for users having different mobility characteristics. These show the carried traffic, traffic distribution, blocking, and forced termination probabilities.

Journal ArticleDOI
TL;DR: It is indicated that pricing schemes may be used to control network congestion either by rescheduling time-insensitive traffic to a less expensive time of the day, or by smoothing packet transfers to reduce traffic peaks.
Abstract: This paper presents a system for billing users for their TCP traffic. This is achieved by postponing the establishment of connections while the user is contacted, verifying in a secure way that they are prepared to pay. By presenting the user with cost and price information, the system can be used for cost recovery and to encourage efficient use of network resources. The system requires no changes to existing protocols or applications and can be used to recover costs between cooperating sites. Statistics collected from a four-day trace of traffic between the University of California, Berkeley, and the rest of the Internet demonstrate that such a billing system is practical and introduces acceptable latency. An implementation based on the BayBridge prototype router is described. Our study also indicates that pricing schemes may be used to control network congestion either by rescheduling time-insensitive traffic to a less expensive time of the day, or by smoothing packet transfers to reduce traffic peaks. >

Journal ArticleDOI
TL;DR: It is shown that this proposal is soundly based on statistical sampling theory, and it is established that it is feasible to collect the required data in real time.
Abstract: For the purposes of estimating quality-of-service parameters, it is enough to know the large deviation rate-function of an ATM traffic stream; modeling procedures can be bypassed if we can estimate the rate-function directly, exploiting the analogy between the rate-function and thermodynamic entropy. We show that this proposal is soundly based on statistical sampling theory. Experiments on the Fairisle ATM network at the University of Cambridge have established that it is feasible to collect the required data in real time. >
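A sketch of the direct-estimation idea: a scaled cumulant generating function, whose Legendre transform gives the rate-function, is estimated from block sums of the measured trace, and the queue-tail decay rate at a given service rate is read off from it. The Poisson toy trace, block length, and theta grid are illustrative assumptions, not the Fairisle measurements:

```python
import numpy as np

def scgf_estimate(trace, block_len, theta):
    """Estimate the scaled cumulant generating function of an arrival stream
    from block sums of a measured trace (sketch):
        lambda(theta) ~ (1/T) * log mean( exp(theta * S_i) ),
    where S_i is the work arriving in the i-th block of length T."""
    blocks = np.asarray(trace[: len(trace) // block_len * block_len], dtype=float)
    sums = blocks.reshape(-1, block_len).sum(axis=1)
    return np.log(np.mean(np.exp(theta * sums))) / block_len

rng = np.random.default_rng(4)
cells_per_slot = rng.poisson(3.0, size=200_000)          # toy cell-arrival trace

C = 4.0                                                  # service rate, cells/slot
thetas = np.linspace(0.01, 0.8, 80)
lam = np.array([scgf_estimate(cells_per_slot, 50, t) for t in thetas])
# largest theta with lambda(theta) <= theta * C estimates the tail decay rate
delta = thetas[lam <= thetas * C].max()
print("estimated decay rate: %.3f  =>  P(Q > b) ~ exp(-%.3f * b)" % (delta, delta))
```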

Journal ArticleDOI
TL;DR: An in-depth analysis of these constellations is presented, evaluating both geometrical performance measures and cochannel interference levels caused by extensive frequency reuse, allowing a fair comparison between LEO, MEO, and GEO constellations.
Abstract: Several multisatellite and multispot systems have been recently proposed for provision of mobile and personal services with global coverage, adopting GEO or non-GEO (i.e., MEO, LEO) satellite constellations. The paper presents an in-depth analysis of these constellations, evaluating both geometrical performance measures and cochannel interference levels caused by extensive frequency reuse. The geometrical analysis yields the statistics for coverage, frequency of satellite hand-overs, and link absence periods. The interference analysis is based on a general model valid for all access techniques, which is here applied to the case of FDMA. The outage probability as a function of the specification on carrier-to-interference power ratio is evaluated for four selected constellations. Several techniques are introduced for interference reduction in non-GEO systems, in which the satellites' coverage areas may intersect: spot turnoff, intraorbital plane frequency division, and interorbital plane frequency division. The effects of Rice fading have also been analyzed by means of an approximate analytic method. The overall analysis allows a fair comparison between LEO, MEO, and GEO constellations.

Journal ArticleDOI
TL;DR: A cost function is introduced that captures the combined bandwidth and storage requirements of the network and can be used by network designers to determine optimal topology, sharing, and caching strategies for desired bandwidth versus memory costs in a particular network deployment.
Abstract: A significant driver for the consumer use of high bandwidth in the near future will be interactive video on demand (IVOD). A range of service types of differing sophistication can be deployed, which must be traded against the network costs (bandwidth) and component costs (switch complexity and memory). The potential aggregate bandwidth required is huge (O(1 Pb/s)), and thus it is essential to properly engineer the network to reduce the bandwidth required. This paper describes a variety of IVOD scenarios, and introduces a cost function that captures the combined bandwidth and storage requirements of the network. This cost function is used to compare different network engineering alternatives, particularly program caching and stream sharing. The effects of nonlinear pricing and differing weights of bandwidth and storage are also reflected by the cost function. This cost function can be used by network designers to determine optimal topology, sharing, and caching strategies for desired bandwidth versus memory costs in a particular network deployment. In addition, a simulation model is used to evaluate caching of programs or windows within programs. We show that there are some results that are widely applicable. In particular, the level in the network at which caching should take place is at approximately 80% depth in the distribution tree, above the head end switch in the network hierarchy. We also observe that the bandwidth savings in sharing streams (actually buffered windows of program content) is fairly small for user behavior based on Zipf's law. The overall intent of this work is to evaluate the effects of various server, cache, and sharing strategies on the bandwidth and storage requirements of the network and their proper placement within the network.