
Showing papers on "Upper and lower bounds published in 2000"


Journal ArticleDOI
TL;DR: An exact algorithm for filling a single bin is developed, leading to the definition of an exact branch-and-bound algorithm for the three-dimensional bin packing problem, which also incorporates original approximation algorithms.
Abstract: The problem addressed in this paper is that of orthogonally packing a given set of rectangular-shaped items into the minimum number of three-dimensional rectangular bins. The problem is strongly NP-hard and extremely difficult to solve in practice. Lower bounds are discussed, and it is proved that the asymptotic worst-case performance ratio of the continuous lower bound is 1/8. An exact algorithm for filling a single bin is developed, leading to the definition of an exact branch-and-bound algorithm for the three-dimensional bin packing problem, which also incorporates original approximation algorithms. Extensive computational results, involving instances with up to 90 items, are presented: It is shown that many instances can be solved to optimality within a reasonable time limit.
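
The continuous lower bound mentioned above is simply the total item volume divided by the bin volume, rounded up. A minimal sketch in Python (the function name and the toy instance are illustrative, not taken from the paper):

import math

def continuous_lower_bound(items, bin_dims):
    # Continuous (volume) lower bound for 3D bin packing:
    # ceil(total item volume / bin volume).
    W, H, D = bin_dims
    total = sum(w * h * d for (w, h, d) in items)
    return math.ceil(total / (W * H * D))

# Toy instance: eight 6x6x6 cubes and 10x10x10 bins. The volume bound says 2 bins,
# but no two cubes fit together in one bin, so the optimum is 8 -- the kind of gap
# behind the worst-case performance ratio discussed in the paper.
print(continuous_lower_bound([(6, 6, 6)] * 8, (10, 10, 10)))  # -> 2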

569 citations


Proceedings ArticleDOI
01 May 2000
TL;DR: Two new Ω(√N) lower bounds on computing AND of ORs and inverting a permutation and more uniform proofs for several known lower bounds which have been previously proven via a variety of different techniques are proved.
Abstract: We propose a new method for proving lower bounds on quantum query algorithms. Instead of a classical adversary that runs the algorithm with one input and then modifies the input, we use a quantum adversary that runs the algorithm with a superposition of inputs. If the algorithm works correctly, its state becomes entangled with the superposition over inputs. We bound the number of queries needed to achieve a sufficient entanglement, and this implies a lower bound on the number of queries for the computation. Using this method, we prove two new Ω(√N) lower bounds on computing AND of ORs and inverting a permutation and also provide more uniform proofs for several known lower bounds which have been previously proven via a variety of different techniques.

385 citations


Proceedings ArticleDOI
25 Jun 2000
TL;DR: The real, discrete-time Gaussian parallel relay network is introduced and upper and lower bounds to capacity are presented and explained where they coincide.
Abstract: We introduce the real, discrete-time Gaussian parallel relay network. This simple network is theoretically important in the context of network information theory. We present upper and lower bounds to capacity and explain where they coincide.

362 citations


Journal ArticleDOI
TL;DR: A new model predictive controller (MPC) is developed for polytopic linear parameter varying (LPV) systems; it allows the first move u(k|k) to be included separately from the rest of the control moves, which are governed by a feedback law, and is shown to reduce conservatism and improve feasibility characteristics with respect to input and output constraints.

325 citations


Journal ArticleDOI
TL;DR: This paper introduces interval value functions as a natural extension of traditional value functions and describes an iterative dynamic programming algorithm that computes an interval value function for a given bounded-parameter MDP and specified policy.
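
The interval value function idea can be made concrete with a small sketch: for a fixed policy, the lower (upper) value is obtained by letting an adversary pick, within the given transition-probability intervals, the distribution that minimizes (maximizes) the expected next-state value at each backup. This is only a hedged illustration of the general idea, not the paper's algorithm; the greedy construction and the tiny two-state example below are assumptions made for illustration.

def extreme_dist(lo, hi, values, minimize=True):
    # Greedy choice of a distribution p with lo <= p <= hi and sum(p) = 1
    # that pushes probability mass toward low-value (or high-value) successors.
    p = list(lo)
    rest = 1.0 - sum(lo)
    order = sorted(range(len(values)), key=lambda j: values[j], reverse=not minimize)
    for j in order:
        add = min(hi[j] - lo[j], rest)
        p[j] += add
        rest -= add
    return p

def interval_policy_eval(reward, p_lo, p_hi, gamma=0.9, iters=200):
    # Iterate pessimistic and optimistic Bellman backups for a fixed policy.
    n = len(reward)
    v_lo, v_hi = [0.0] * n, [0.0] * n
    for _ in range(iters):
        v_lo = [reward[s] + gamma * sum(p * v for p, v in
                zip(extreme_dist(p_lo[s], p_hi[s], v_lo, True), v_lo)) for s in range(n)]
        v_hi = [reward[s] + gamma * sum(p * v for p, v in
                zip(extreme_dist(p_lo[s], p_hi[s], v_hi, False), v_hi)) for s in range(n)]
    return v_lo, v_hi

# Two states with rewards (0, 1); transition probabilities under the policy are
# known only up to intervals.
reward = [0.0, 1.0]
p_lo = [[0.3, 0.3], [0.1, 0.6]]
p_hi = [[0.7, 0.7], [0.4, 0.9]]
print(interval_policy_eval(reward, p_lo, p_hi))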

314 citations


Journal ArticleDOI
TL;DR: The phase shifts are optimized offline by applying the method for computing the PAPR to the coding scenario proposed by the ETSI BRAN Standardization Committee, and most of the gain is preserved when the computed optimal phase shifts are rounded to quaternary phase-shift keying (PSK), 8-PSK, and 16-PSK type phase shifts.
Abstract: For any code C defined over an equal energy constellation, it is first shown that at any time instance, the problem of determining codewords of C with high peak-to-average power ratios (PAPR) in a multicarrier communication system is intimately related to the problem of minimum-distance decoding of C. Subsequently, a method is proposed for computing the PAPR by minimum-distance decoding of C at many points of time. Moreover, an upper bound on the error between this computed value and the true one is derived. Analogous results are established for codes defined over arbitrary signal constellations. As an application of this computational method, an approach for reducing the PAPR of C proposed by Jones and Wilkinson (1996) is revisited. This approach is based on introducing a specific phase shift to each coordinate of all the codewords, where the phase shifts are independent of the codewords and known both to the transmitter and the receiver. We optimize the phase shifts offline by applying our method for computing the PAPR to the coding scenario proposed by the ETSI BRAN Standardization Committee. Reductions of order 4.5 dB can be freely obtained using the computed phase shifts. Examples are provided showing that most of the gain is preserved when the computed optimal phase shifts are rounded to quaternary phase-shift keying (PSK), 8-PSK, and 16-PSK type phase shifts.
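
The PAPR of a codeword over an equal-energy constellation is the peak instantaneous power of the corresponding multicarrier signal divided by its average power. A minimal numerical sketch (the oversampling factor and example codeword are arbitrary choices, not from the paper):

import numpy as np

def papr(codeword, oversample=8):
    # PAPR of the multicarrier signal s(t) = sum_k c_k exp(j 2 pi k t),
    # evaluated on an oversampled time grid.
    c = np.asarray(codeword, dtype=complex)
    n = len(c)
    t = np.arange(n * oversample) / (n * oversample)
    s = np.exp(2j * np.pi * np.outer(t, np.arange(n))) @ c
    inst_power = np.abs(s) ** 2
    return inst_power.max() / inst_power.mean()

# All-ones codeword: every subcarrier adds coherently at t = 0, giving the
# worst-case PAPR of about n (here 8).
print(papr(np.ones(8)))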

310 citations


Journal ArticleDOI
01 Sep 2000
TL;DR: The first lower bound on the peak-to-average power ratio (PAPR) of a constant energy code of a given length n, minimum Euclidean distance and rate is established, and it is shown that there exist asymptotically good codes whose PAPR is at most 8 log n.
Abstract: The first lower bound on the peak-to-average power ratio (PAPR) of a constant energy code of a given length n, minimum Euclidean distance and rate is established. Conversely, using a nonconstructive Varshamov-Gilbert style argument yields a lower bound on the achievable rate of a code of a given length, minimum Euclidean distance and maximum PAPR. The derivation of these bounds relies on a geometrical analysis of the PAPR of such a code. Further analysis shows that there exist asymptotically good codes whose PAPR is at most 8 log n. These bounds motivate the explicit construction of error-correcting codes with low PAPR. Bounds for exponential sums over Galois fields and rings are applied to obtain an upper bound of order (log n)/sup 2/ on the PAPRs of a constructive class of codes, the trace codes. This class includes the binary simplex code, duals of binary, primitive Bose-Chaudhuri-Hocquenghem (BCH) codes and a variety of their nonbinary analogs. Some open problems are identified.

288 citations


Journal ArticleDOI
TL;DR: The first subexponential algorithm for this exploration problem is given, achieving an upper bound of $d^{O(\log d)} m$, and a matching lower bound of $d^{\Omega(\log d)} m$ is shown for the algorithm.
Abstract: We consider exploration problems where a robot has to construct a complete map of an unknown environment. We assume that the environment is modeled by a directed, strongly connected graph. The robot's task is to visit all nodes and edges of the graph using the minimum number R of edge traversals. Deng and Papadimitriou [ Proceedings of the 31st Symposium on the Foundations of Computer Science, 1990, pp. 356--361] showed an upper bound for R of $d^{O(d)} m$ and Koutsoupias (reported by Deng and Papadimitriou) gave a lower bound of $\Omega(d^2 m)$, where m is the number of edges in the graph and d is the minimum number of edges that have to be added to make the graph Eulerian. We give the first subexponential algorithm for this exploration problem, which achieves an upper bound of $d^{O(\log d)} m$. We also show a matching lower bound of $d^{\Omega(\log d)} m$ for our algorithm. Additionally, we give lower bounds of $2^{\Omega(d)} m$, respectively, $d^{\Omega(\log d)} m$, for various other natural exploration algorithms.
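
The parameter d above (the deficiency) is the minimum number of edges whose addition makes the strongly connected digraph Eulerian, which for such graphs equals the total in/out-degree imbalance. A small sketch under that assumption (the graph encoding and example are illustrative):

from collections import defaultdict

def deficiency(edges):
    # Minimum number of edges to add so every vertex has equal in- and out-degree
    # (assumes the digraph is already strongly connected).
    indeg, outdeg = defaultdict(int), defaultdict(int)
    vertices = set()
    for u, v in edges:
        outdeg[u] += 1
        indeg[v] += 1
        vertices.update((u, v))
    return sum(max(0, indeg[v] - outdeg[v]) for v in vertices)

# Strongly connected digraph on {0, 1, 2} with an extra edge 0 -> 2:
# vertex 2 has in-degree 2 and out-degree 1, so adding one edge (2 -> 0) suffices.
print(deficiency([(0, 1), (1, 2), (2, 0), (0, 2)]))  # -> 1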

284 citations


Journal ArticleDOI
TL;DR: The method of Kovari, Sos, and Turan can be extended to give tight lower bounds for extractors, in terms of both the number of truly random bits needed to extract one additional bit and the unavoidable entropy loss in the system.
Abstract: We show that the size of the smallest depth-two $N$-superconcentrator is $\Theta(N\log^2 N/\log\log N)$. Before this work, optimal bounds were known for all depths except two. For the upper bound, we build superconcentrators by putting together a small number of disperser graphs; these disperser graphs are obtained using a probabilistic argument. For obtaining lower bounds, we present two different methods. First, we show that superconcentrators contain several disjoint disperser graphs. When combined with the lower bound for disperser graphs of Kovari, Sos, and Turan, this gives an almost optimal lower bound of $\Omega(N (\log N/\log\log N)^2)$ on the size of $N$-superconcentrators. The second method, based on the work of Hansel, gives the optimal lower bound. The method of Kovari, Sos, and Turan can be extended to give tight lower bounds for extractors, in terms of both the number of truly random bits needed to extract one additional bit and the unavoidable entropy loss in the system. If the input is an $n$-bit source with min-entropy $k$ and the output is required to be within a distance of $\epsilon$ from uniform distribution, then to extract even one additional bit, one must invest at least $\log(n-k) + 2\log(1/\epsilon) - O(1)$ truly random bits; to obtain $m$ output bits one must invest at least $m-k+2\log(1/\epsilon)-O(1)$. Thus, there is a loss of $2\log(1/\epsilon)$ bits during the extraction. Interestingly, in the case of dispersers this loss in entropy is only about $\log\log(1/\epsilon)$.

280 citations


Journal ArticleDOI
TL;DR: It is shown that Foschini's lower bound is, in fact, the Shannon bound when the output signal-to-noise ratio (SNR) of the space-time processing in each layer is represented by the corresponding "matched filter" bound, which proves the optimality of the layered space-time concept.
Abstract: By deriving a generalized Shannon capacity formula for multiple-input, multiple-output Rayleigh fading channels, and by suggesting a layered space-time architecture concept that attains a tight lower bound on the achievable capacity, Foschini (see Wireless Pers. Commun., vol.6, no.3, p.311-35, 1998) has shown a potentially enormous increase in the information capacity of a wireless system employing multiple-element antenna arrays at both the transmitter and receiver. The layered space-time architecture allows signal processing complexity to grow linearly, rather than exponentially, with the promised capacity increase. This paper includes two important contributions. First, we show that Foschini's lower bound is, in fact, the Shannon bound when the output signal-to-noise ratio (SNR) of the space-time processing in each layer is represented by the corresponding "matched filter" bound. This proves the optimality of the layered space-time concept. Second, we present an embodiment of this concept for a coded system operating at a low average SNR and in the presence of possible intersymbol interference. This embodiment utilizes already advanced space-time filtering, coding and turbo processing techniques to provide a practical solution to the processing needed. Performance results are provided for quasi-static Rayleigh fading channels with no channel estimation errors. We see for the first time that the Shannon capacity for wireless communications can be both increased by N times (where N is the number of antenna elements at the transmitter and receiver) and achieved within about 3 dB in average SNR, about 2 dB of which is a loss due to the practical coding scheme we assume; the layered space-time processing itself is nearly information-lossless.
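
The roughly N-fold capacity growth can be checked numerically with the standard MIMO capacity expression C = log2 det(I + (SNR/N) H H*) averaged over i.i.d. Rayleigh channel matrices. A Monte Carlo sketch (the SNR value and trial count are arbitrary, and this is the generic formula rather than anything specific to the paper's layered architecture):

import numpy as np

def avg_mimo_capacity(n_antennas, snr_db=10.0, trials=2000, rng=np.random.default_rng(0)):
    # Average capacity (bits/s/Hz) of an n x n i.i.d. Rayleigh MIMO channel with
    # equal power per transmit antenna and channel knowledge at the receiver.
    snr = 10 ** (snr_db / 10)
    caps = []
    for _ in range(trials):
        h = (rng.standard_normal((n_antennas, n_antennas)) +
             1j * rng.standard_normal((n_antennas, n_antennas))) / np.sqrt(2)
        m = np.eye(n_antennas) + (snr / n_antennas) * h @ h.conj().T
        caps.append(np.real(np.linalg.slogdet(m)[1]) / np.log(2))
    return float(np.mean(caps))

# Capacity grows roughly linearly with the number of antenna elements.
for n in (1, 2, 4, 8):
    print(n, round(avg_mimo_capacity(n), 2))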

274 citations



30 Sep 2000
TL;DR: In this paper, a method for computing a lower bound for the constraint violation penalty weight of the exact penalty function is presented, which can then be used to guarantee that the soft-constrained MPC solution will be equal to the hard-constrained MPC solution for a bounded subset of initial states, control inputs and reference trajectories.
Abstract: One of the strengths of Model Predictive Control (MPC) is its ability to incorporate constraints in the control formulation. Often a disturbance drives the system into a region where the MPC problem is infeasible and hence no control action can be computed. Feasibility can be recovered by softening the constraints using slack variables. This approach alone does not guarantee that the constraints will be satisfied whenever it is possible to do so. Results from the theory of exact penalty functions can be used to guarantee constraint satisfaction. This paper describes a method for computing a lower bound for the constraint violation penalty weight of the exact penalty function. One can then guarantee that the soft-constrained MPC solution will be equal to the hard-constrained MPC solution for a bounded subset of initial states, control inputs and reference trajectories.
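
The exact-penalty property can be seen on a one-variable toy problem: minimize x^2 subject to x >= 1. With an l1 penalty rho * max(0, 1 - x), the unconstrained minimizer coincides with the constrained one exactly once rho exceeds the optimal Lagrange multiplier (here 2). This is only a toy illustration of the property the paper exploits, not its MPC formulation:

import numpy as np

def penalized_minimizer(rho, grid=np.linspace(-2.0, 3.0, 500001)):
    # Minimize x^2 + rho * max(0, 1 - x) over a fine grid (toy 1-D problem).
    vals = grid ** 2 + rho * np.maximum(0.0, 1.0 - grid)
    return grid[np.argmin(vals)]

# Hard-constrained problem: min x^2 s.t. x >= 1, optimum x* = 1, multiplier = 2.
for rho in (0.5, 1.0, 1.9, 2.0, 3.0):
    print(rho, round(float(penalized_minimizer(rho)), 3))
# For rho < 2 the penalized solution is x = rho/2 < 1 (the constraint is violated);
# for rho >= 2 it is exactly x = 1, matching the hard-constrained solution.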

Book ChapterDOI
TL;DR: In this article, the authors derive an upper bound on the queuing delay as a function of the priority traffic utilization, the maximum hop count of any flow, and the shaping parameters at the network ingress.
Abstract: A large number of products implementing aggregate buffering and scheduling mechanisms have been developed and deployed, and still more are under development. With the rapid increase in the demand for reliable end-to-end QoS solutions, it becomes increasingly important to understand the implications of aggregate scheduling on the resulting QoS capabilities. This paper studies bounds on the worst-case delay in a network implementing aggregate scheduling. We derive an upper bound on the queuing delay as a function of the priority traffic utilization, the maximum hop count of any flow, and the shaping parameters at the network ingress. Our bound explodes at a certain utilization level which is a function of the hop count. We show that for a general network configuration and larger utilization, an upper bound on delay, if it exists, must be a function of the number of nodes and/or the number of flows in the network.

Journal ArticleDOI
TL;DR: In this paper, Anderson localization for one-dimensional lattice Schroedinger operators with quasi-periodic potentials with d frequencies is studied; for d = 1 or 2, it is shown that the spectrum is pure point with exponentially decaying eigenfunctions for all potentials whose Lyapounov exponents are strictly positive for all frequencies and all energies.
Abstract: The two main results of the article are concerned with Anderson Localization for one-dimensional lattice Schroedinger operators with quasi-periodic potentials with d frequencies. First, in the case d = 1 or 2, it is proved that the spectrum is pure-point with exponentially decaying eigenfunctions for all potentials (defined in terms of a trigonometric polynomial on the d-dimensional torus) for which the Lyapounov exponents are strictly positive for all frequencies and all energies. Second, for every non-constant real-analytic potential and with a Diophantine set of d frequencies, a lower bound is given for the Lyapounov exponents for the same potential rescaled by a sufficiently large constant.

Journal ArticleDOI
TL;DR: In this article, the authors consider H(curl; Ω)-elliptic problems that have been discretized by means of Nedelec's edge elements on tetrahedral meshes.
Abstract: We consider H(curl; Ω)-elliptic problems that have been discretized by means of Nedelec's edge elements on tetrahedral meshes. Such problems occur in the numerical computation of eddy currents. From the defect equation we derive localized expressions that can be used as a posteriori error estimators to control adaptive refinement. Under certain assumptions on material parameters and computational domains, we derive local lower bounds and a global upper bound for the total error measured in the energy norm. The fundamental tool in the numerical analysis is a Helmholtz-type decomposition of the error into an irrotational part and a weakly solenoidal part.

Proceedings ArticleDOI
09 Jul 2000
TL;DR: Lower and upper bounds on the number of cache misses under work stealing are presented, together with a locality-guided work-stealing algorithm that improves the data locality of multi-threaded computations by allowing a thread to have an affinity for a processor; initial experiments on iterative data-parallel applications show that this algorithm matches the performance of static partitioning under traditional workloads but improves performance by up to 50% over static partitioning under multiprogrammed workloads.
Abstract: This paper studies the data locality of the work-stealing scheduling algorithm on hardware-controlled shared-memory machines. We present lower and upper bounds on the number of cache misses using work stealing, and introduce a locality-guided work-stealing algorithm along with experimental validation. As a lower bound, we show that there is a family of multi-threaded computations G_n, each member of which requires Θ(n) total instructions (work), for which when using work stealing the number of cache misses on one processor is constant, while even on two processors the total number of cache misses is Ω(n). This implies that for general computations there is no useful bound relating multiprocessor to uniprocessor cache misses. For nested-parallel computations, however, we show that on P processors the expected additional number of cache misses beyond those on a single processor is bounded by O(C⌈m/s⌉PT∞), where m is the execution time of an instruction incurring a cache miss, s is the steal time, C is the size of the cache, and T∞ is the number of nodes on the longest chain of dependences. Based on this we give strong bounds on the total running time of nested-parallel computations using work stealing. For the second part of our results, we present a locality-guided work-stealing algorithm that improves the data locality of multi-threaded computations by allowing a thread to have an affinity for a processor. Our initial experiments on iterative data-parallel applications show that the algorithm matches the performance of static partitioning under traditional workloads but improves the performance by up to 50% over static partitioning under multiprogrammed workloads. Furthermore, locality-guided work stealing improves the performance of work stealing by up to 80%.

Journal ArticleDOI
25 Jun 2000
TL;DR: Both upper and lower bounds on the decoding error probability of maximum-likelihood (ML) decoded low-density parity-check (LDPC) codes are derived, indicating that for various appropriately chosen ensembles of LDPC codes, reliable communication is possible up to channel capacity.
Abstract: We derive both upper and lower bounds on the decoding error probability of maximum-likelihood (ML) decoded low-density parity-check (LDPC) codes. The results hold for any binary-input symmetric-output channel. Our results indicate that for various appropriately chosen ensembles of LDPC codes, reliable communication is possible up to channel capacity. However, the ensemble averaged decoding error probability decreases polynomially, and not exponentially. The lower and upper bounds coincide asymptotically, thus showing the tightness of the bounds. However, for ensembles with suitably chosen parameters, the error probability of almost all codes is exponentially decreasing, with an error exponent that can be set arbitrarily close to the standard random coding exponent.

Book ChapterDOI
20 Aug 2000
TL;DR: In this article, the authors investigated the relationship between the nonlinearity and the order of resiliency of a Boolean function, and showed that functions achieving the best possible trade-off can be constructed by the Maiorana-McFarland like technique.
Abstract: In this paper we investigate the relationship between the nonlinearity and the order of resiliency of a Boolean function. We first prove a sharper version of the McEliece theorem for Reed-Muller codes as applied to resilient functions, which also generalizes the well known Xiao-Massey characterization. As a consequence, a nontrivial upper bound on the nonlinearity of resilient functions is obtained. This result coupled with Siegenthaler's inequality leads to the notion of best possible trade-off among the parameters: number of variables, order of resiliency, nonlinearity and algebraic degree. We further show that functions achieving the best possible trade-off can be constructed by the Maiorana-McFarland like technique. Also we provide constructions of some previously unknown functions.
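
Both quantities in the trade-off can be read off the Walsh-Hadamard spectrum of a Boolean function: nl(f) = 2^(n-1) - max|W_f|/2, and (by the Xiao-Massey characterization mentioned above) f is m-resilient when W_f vanishes on all vectors of Hamming weight at most m. A small sketch (the example function is arbitrary, not one of the paper's constructions):

from itertools import product

def walsh_spectrum(f, n):
    # Walsh-Hadamard coefficients W_f(a) = sum_x (-1)^(f(x) + a.x).
    points = list(product((0, 1), repeat=n))
    return {a: sum((-1) ** (f(x) ^ (sum(ai * xi for ai, xi in zip(a, x)) & 1))
                   for x in points)
            for a in points}

def nonlinearity(f, n):
    spec = walsh_spectrum(f, n)
    return 2 ** (n - 1) - max(abs(v) for v in spec.values()) // 2

def resiliency_order(f, n):
    # Largest m such that W_f vanishes on all vectors of weight <= m
    # (weight 0 included, i.e. f is balanced); -1 if f is not balanced.
    spec = walsh_spectrum(f, n)
    if spec[(0,) * n] != 0:
        return -1
    m = 0
    while m < n and all(v == 0 for a, v in spec.items() if 0 < sum(a) <= m + 1):
        m += 1
    return m

# Example: f(x1..x4) = x1*x2 + x3 + x4 (mod 2) has degree 2, nonlinearity 4 and is
# 1-resilient, meeting Siegenthaler's inequality deg(f) <= n - m - 1 with equality.
f = lambda x: (x[0] & x[1]) ^ x[2] ^ x[3]
print(nonlinearity(f, 4), resiliency_order(f, 4))  # -> 4 1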

Proceedings ArticleDOI
01 May 2000
TL;DR: A method is presented for estimating a tight upper bound on the statistical metric associated with any superset of an itemset, together with the novel use of the resulting upper bounds to prune unproductive supersets while traversing itemset lattices.
Abstract: We study how to efficiently compute significant association rules according to common statistical measures such as a chi-squared value or correlation coefficient. For this purpose, one might consider using the Apriori algorithm, but the algorithm needs major conversion, because none of these statistical metrics is anti-monotone, and the use of a higher support threshold for reducing the search space cannot guarantee that solutions remain in the reduced search space. We here present a method of estimating a tight upper bound on the statistical metric associated with any superset of an itemset, as well as the novel use of the resulting upper bounds to prune unproductive supersets while traversing itemset lattices. Experimental tests demonstrate the efficiency of this method.
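
The chi-squared value of a rule is computed from the 2x2 contingency table of antecedent and consequent occurrences; it is not anti-monotone in the itemset, which is what makes the upper-bound pruning above necessary. A small helper, shown only as a hedged illustration (the counts are made up):

def chi_squared(n, n_x, n_y, n_xy):
    # Chi-squared statistic of the 2x2 table for itemsets X and Y, given
    # n transactions, count(X) = n_x, count(Y) = n_y, count(X and Y) = n_xy.
    chi2 = 0.0
    for x_in in (True, False):
        for y_in in (True, False):
            observed = (n_xy if x_in and y_in else
                        n_x - n_xy if x_in else
                        n_y - n_xy if y_in else
                        n - n_x - n_y + n_xy)
            expected = ((n_x if x_in else n - n_x) * (n_y if y_in else n - n_y)) / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# 1000 transactions; X in 300, Y in 400, both in 200 -> strong positive association.
print(round(chi_squared(1000, 300, 400, 200), 2))  # -> 126.98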

Journal ArticleDOI
TL;DR: In this article, the authors considered a version of multiprocessor scheduling with the special feature that jobs may be rejected at a certain penalty, and the main result was a $1+\phi\approx 2.618$ competitive algorithm for the on-line version of the problem.
Abstract: We consider a version of multiprocessor scheduling with the special feature that jobs may be rejected at a certain penalty. An instance of the problem is given by $m$ identical parallel machines and a set of $n$ jobs, with each job characterized by a processing time and a penalty. In the on-line version the jobs become available one by one and we have to schedule or reject a job before we have any information about future jobs. The objective is to minimize the makespan of the schedule for accepted jobs plus the sum of the penalties of rejected jobs. The main result is a $1+\phi\approx 2.618$ competitive algorithm for the on-line version of the problem, where $\phi$ is the golden ratio. A matching lower bound shows that this is the best possible algorithm working for all $m$. For fixed $m$ we give improved bounds; in particular, for $m=2$ we give a $\phi\approx 1.618$ competitive algorithm, which is best possible. For the off-line problem we present a fully polynomial approximation scheme for fixed $m$ and a polynomial approximation scheme for arbitrary $m$. Moreover, we present an approximation algorithm which runs in time $O(n\log n)$ for arbitrary $m$ and guarantees a $2-\frac{1}{m}$ approximation ratio.
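
The objective combines the makespan of the accepted jobs (list-scheduled greedily below) with the total penalty of the rejected ones. The accept/reject rule in the example is a naive threshold used only to make the objective concrete; it is not the paper's competitive algorithm.

import heapq

def schedule_with_rejection(jobs, m, accept):
    # jobs: list of (processing_time, penalty); m identical machines.
    # Accepted jobs are placed on the currently least-loaded machine.
    loads = [0.0] * m
    heapq.heapify(loads)
    penalty_sum = 0.0
    for p, w in jobs:
        if accept((p, w)):
            least = heapq.heappop(loads)
            heapq.heappush(loads, least + p)
        else:
            penalty_sum += w
    return max(loads) + penalty_sum

jobs = [(4.0, 1.0), (2.0, 3.0), (6.0, 2.0), (3.0, 0.5)]
# Naive rule: reject a job whenever its penalty is smaller than its processing time.
print(schedule_with_rejection(jobs, m=2, accept=lambda j: j[1] >= j[0]))  # -> 5.5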

Posted Content
TL;DR: In this paper, an upper bound of O(log n + log log(1/epsilon)) is given on the circuit depth for computing an approximation of the quantum Fourier transform with respect to the modulus 2^n with error bounded by epsilon.
Abstract: We give new bounds on the circuit complexity of the quantum Fourier transform (QFT). We give an upper bound of O(log n + log log (1/epsilon)) on the circuit depth for computing an approximation of the QFT with respect to the modulus 2^n with error bounded by epsilon. Thus, even for exponentially small error, our circuits have depth O(log n). The best previous depth bound was O(n), even for approximations with constant error. Moreover, our circuits have size O(n log (n/epsilon)). We also give an upper bound of O(n (log n)^2 log log n) on the circuit size of the exact QFT modulo 2^n, for which the best previous bound was O(n^2). As an application of the above depth bound, we show that Shor's factoring algorithm may be based on quantum circuits with depth only O(log n) and polynomial-size, in combination with classical polynomial-time pre- and post-processing. In the language of computational complexity, this implies that factoring is in the complexity class ZPP^BQNC, where BQNC is the class of problems computable with bounded-error probability by quantum circuits with poly-logarithmic depth and polynomial size. Finally, we prove an Omega(log n) lower bound on the depth complexity of approximations of the QFT with constant error. This implies that the above upper bound is asymptotically optimal (for a reasonable range of values of epsilon).
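
The savings come from dropping the controlled rotations R_k = diag(1, exp(2*pi*i/2^k)) whose angles fall below a threshold tied to the target error; keeping only k up to roughly log2(n/epsilon) reduces the rotation count from Theta(n^2) to O(n log(n/epsilon)). A counting sketch only, using the standard approximate-QFT cutoff heuristic rather than the specific construction of this paper:

import math

def qft_rotation_counts(n, eps):
    # Controlled-rotation counts for the exact QFT on n qubits and for an
    # approximate QFT that drops rotations R_k with k above ceil(log2(n/eps)).
    exact = n * (n - 1) // 2                      # one R_k per ordered qubit pair
    cutoff = math.ceil(math.log2(n / eps))
    approx = sum(min(n - 1 - j, cutoff - 1) for j in range(n))
    return exact, approx

for n in (32, 256, 2048):
    print(n, qft_rotation_counts(n, eps=1e-6))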

Journal ArticleDOI
TL;DR: Bounds to this rate are found for the intracell TDMA protocol by incorporating information-theoretic inequalities and the Chebyshev-Markov moment theory as applied to the limiting distribution of the eigenvalues of a quadratic form of tridiagonal random matrices.
Abstract: Shannon-theoretic limits on the achievable throughput for a simple infinite cellular multiple-access channel (MAC) model (Wyner 1994) in the presence of fading are presented. In this model, which is modified to account for flat fading, the received signal, at a given cell-site's antenna, is the sum of the faded signals transmitted from all users within that cell plus an attenuation factor α ∈ [0,1] times the sum of the faded signals received from the adjacent cells, accompanied by Gaussian additive noise. This model serves as a tractable model providing considerable insight into complex and analytically intractable real-world cellular communications. Both linear and planar cellular arrays are considered with exactly K active users in each cell. We assume a hyper-receiver, jointly decoding all of the users, incorporating the received signals from all of the active cell-sites. The hyper-receiver is assumed to be aware of the codebooks and realizations of the fading processes of all the users in the system. In this work we consider the intracell time-division multiple-access (TDMA) and the wideband (WB) protocols. We focus on the maximum reliably transmitted equal rate. Bounds to this rate are found for the intracell TDMA protocol by incorporating information-theoretic inequalities and the Chebyshev-Markov moment theory as applied to the limiting distribution of the eigenvalues of a quadratic form of tridiagonal random matrices. We demonstrate our results for the special case where the amplitudes of the fading coefficients are drawn from a Rayleigh distribution, i.e., Rayleigh fading. For this special case, we observe the rather surprising result that fading may increase the maximum equal rate, for a certain range of α, as compared to the nonfaded case. In this setting, the WB strategy, which achieves the maximum reliable equal rate of the model, is proved to be superior to the TDMA scheme. An upper bound to the maximum equal rate of the WB scheme is also obtained. This bound is asymptotically tight when the number of users is large (K ≫ 1). The asymptotic bound shows that the maximum equal rate of the WB scheme in the presence of fading is higher than the rate which corresponds to the nonfaded case for any intercell interference factor α ∈ [0,1] and signal-to-noise ratio (SNR) value. This result is found to be independent of the statistics of the fading coefficients.

Journal ArticleDOI
TL;DR: This paper presents a lower bound of $\Omega(D+\sqrt n/\log n)$ on the time required for the distributed construction of a minimum-weight spanning tree (MST) in weighted n-vertex networks of diameter $D=\Omega(\log n)$, in the bounded message model.
Abstract: This paper presents a lower bound of $\Omega(D+\sqrt n/\log n)$ on the time required for the distributed construction of a minimum-weight spanning tree (MST) in weighted n-vertex networks of diameter $D=\Omega(\log n)$, in the bounded message model. This establishes the asymptotic near-optimality of existing time-efficient distributed algorithms for the problem, whose complexity is $O(D + \sqrt n \log^* n)$.

Journal ArticleDOI
TL;DR: This paper establishes the holographic principle as a universal law, rather than a property only of static systems and special spacetimes, and derives an upper bound on entropy which applies to both open and closed surfaces, independently of shape or location.
Abstract: We aim to establish the holographic principle as a universal law, rather than a property only of static systems and special spacetimes. Our covariant formalism yields an upper bound on entropy which applies to both open and closed surfaces, independently of shape or location. It reduces to the Bekenstein bound whenever the latter is expected to hold, but complements it with novel bounds when gravity dominates. In particular, it remains valid in closed FRW cosmologies and in the interior of black holes. We give an explicit construction for obtaining holographic screens in arbitrary spacetimes (which need not have a boundary). This may aid the search for non-perturbative definitions of quantum gravity in spacetimes other than AdS. More details, references and examples can be found in papers by Bousso R (1999 J. High Energy Phys. JHEP07(1999)004, JHEP06(1999)028).

Journal ArticleDOI
TL;DR: A posteriori error estimators of residual type are derived for piecewise linear finite element approximations to elliptic obstacle problems; an instrumental ingredient is a new interpolation operator which requires minimal regularity, exhibits optimal approximation properties and preserves positivity.
Abstract: A posteriori error estimators of residual type are derived for piecewise linear finite element approximations to elliptic obstacle problems. An instrumental ingredient is a new interpolation operator which requires minimal regularity, exhibits optimal approximation properties and preserves positivity. Both upper and lower bounds are proved and their optimality is explored with several examples. Sharp a priori bounds for the a posteriori estimators are given, and extensions of the results to double obstacle problems are briefly discussed.

Journal ArticleDOI
TL;DR: In this article, the upper bound for sums of dependent random variables X_1 + X_2 + ⋯ + X_n derived by using comonotonicity is sharpened for the case when there exists a random variable Z such that the distribution functions of the X_i, given Z = z, are known.
Abstract: In this contribution, the upper bounds for sums of dependent random variables X_1 + X_2 + ⋯ + X_n derived by using comonotonicity are sharpened for the case when there exists a random variable Z such that the distribution functions of the X_i, given Z = z, are known. By a similar technique, lower bounds are derived. A numerical application for the case of lognormal random variables is given.
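
Under comonotonicity the quantile of the sum is simply the sum of the marginal quantiles, which is what makes the comonotonic sum a convenient conservative bound. A numerical sketch for lognormal marginals (the parameters are arbitrary, and the Monte Carlo comparison assumes independence purely for contrast):

import numpy as np
from statistics import NormalDist

def lognormal_quantile(q, mu, sigma):
    # Inverse cdf of a lognormal(mu, sigma) variable.
    return float(np.exp(mu + sigma * NormalDist().inv_cdf(q)))

def comonotonic_sum_quantile(mus, sigmas, q):
    # Quantile of the comonotonic sum: F_S^{-1}(q) = sum_i F_i^{-1}(q).
    return sum(lognormal_quantile(q, mu, s) for mu, s in zip(mus, sigmas))

mus, sigmas = [0.0, 0.1, 0.2], [0.5, 0.4, 0.3]
for q in (0.95, 0.99):
    print(q, round(comonotonic_sum_quantile(mus, sigmas, q), 3))

# For contrast, Monte Carlo quantiles of the sum under independence come out smaller,
# consistent with the comonotonic sum being the larger one in convex order.
rng = np.random.default_rng(0)
sample = sum(rng.lognormal(mu, s, 200000) for mu, s in zip(mus, sigmas))
print(np.quantile(sample, [0.95, 0.99]).round(3))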

Journal ArticleDOI
TL;DR: A new rapidly mixing Markov chain for independent sets is defined, and a polynomial upper bound for the mixing time of the new chain is obtained for a certain range of values of the parameter λ, which is wider than the range for which the mixing time of the Luby-Vigoda chain is known to be polynomially bounded.
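
For reference, the standard single-site (insert/delete) dynamics for independent sets weighted by λ^|I| looks as follows; this is the classical Luby-Vigoda-style chain, shown only for context, not the new chain introduced in the paper.

import random

def glauber_step(adj, ind_set, lam, rng=random):
    # One heat-bath update for the hard-core model with fugacity lam: pick a vertex;
    # if it has no neighbour in the set, include it with probability lam / (1 + lam),
    # otherwise leave the set unchanged.
    v = rng.randrange(len(adj))
    if any(u in ind_set for u in adj[v]):
        return ind_set
    if rng.random() < lam / (1.0 + lam):
        ind_set.add(v)
    else:
        ind_set.discard(v)
    return ind_set

# 4-cycle 0-1-2-3-0: run the chain and read off an (approximate) sample.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
I = set()
for _ in range(10000):
    I = glauber_step(adj, I, lam=0.5)
print(sorted(I))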

Journal Article
TL;DR: A lower bound on the minimum power consumption of stations on the plane for constant h is provided, and the tightness of the upper bound implies that MIN 2D h-RANGE ASSIGNMENT restricted to well-spread instances admits a polynomial time approximation algorithm.
Abstract: Given a finite set S of points (i.e. the stations of a radio network) on a d-dimensional Euclidean space and a positive integer 1 ≤ h ≤ |S| - 1, the MIN dD h-RANGE ASSIGNMENT problem consists of assigning transmission ranges to the stations so as to minimize the total power consumption, provided that the transmission ranges of the stations ensure the communication between any pair of stations in at most h hops. Two main issues related to this problem are considered in this paper: the trade-off between the power consumption and the number of hops, and the computational complexity of the MIN dD h-RANGE ASSIGNMENT problem. As for the first question, we provide a lower bound on the minimum power consumption of stations on the plane for constant h. The lower bound is a function of |S|, h and the minimum distance over all the pairs of stations in S. Then, we derive a constructive upper bound as a function of |S|, h and the maximum distance over all pairs of stations in S (i.e. the diameter of S). It turns out that when the minimum distance between any two stations is "not too small" (i.e. well-spread instances) the upper bound matches the lower bound. Previous results for this problem were known only for very special 1-dimensional configurations (i.e., when points are arranged on a line at unitary distance) [Kirousis, Kranakis, Krizanc and Pelc, 1997]. As for the second question, we observe that the tightness of our upper bound implies that MIN 2D h-RANGE ASSIGNMENT restricted to well-spread instances admits a polynomial time approximation algorithm. Then, we also show that the same approximation result can be obtained for random instances. On the other hand, we prove that for h=|S|-1 (i.e. the unbounded case) MIN 2D h-RANGE ASSIGNMENT is NP-hard and MIN 3D h-RANGE ASSIGNMENT is APX-complete.
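
The cost and feasibility notions are easy to state in code: the power of a range assignment r is the sum of r(s)^alpha (alpha = 2 is assumed below as the planar power model), and feasibility means that the digraph with an edge s -> t whenever dist(s, t) <= r(s) connects every ordered pair within h hops. A small sketch (the exponent and the three-station instance are illustrative):

import math

def total_power(ranges, alpha=2.0):
    # Power cost of a range assignment.
    return sum(r ** alpha for r in ranges)

def feasible(points, ranges, h):
    # True if every ordered pair of stations can communicate within h hops,
    # where station u reaches v directly iff dist(u, v) <= ranges[u].
    n = len(points)
    reach = [[math.dist(points[u], points[v]) <= ranges[u] for v in range(n)]
             for u in range(n)]
    for s in range(n):
        seen, frontier = {s}, [s]
        for _ in range(h):
            nxt = []
            for u in frontier:
                for v in range(n):
                    if reach[u][v] and v not in seen:
                        seen.add(v)
                        nxt.append(v)
            frontier = nxt
        if len(seen) < n:
            return False
    return True

# Three collinear stations at x = 0, 1, 2: ranges (1, 1, 1) suffice for h = 2 at
# power 3, while single-hop connectivity needs ranges like (2, 1, 2) at power 9.
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(total_power([1, 1, 1]), feasible(pts, [1, 1, 1], h=2))   # 3 True
print(total_power([2, 1, 2]), feasible(pts, [2, 1, 2], h=1))   # 9 True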

Journal ArticleDOI
TL;DR: It is shown that the algorithm of IEEE-STD-1057 provides accurate estimates for Gaussian and quantization noise, and that in the Gaussian scenario it provides estimates with performance close to the derived lower bound.
Abstract: The IEEE Standard 1057 (IEEE-STD-1057) provides algorithms for fitting the parameters of a sine wave to noisy discrete time observations. The fit is obtained as an approximate minimizer of the sum of squared errors, i.e., the difference between observations and model output. The contributions of this paper include a comparison of the performance of the four-parameter algorithm in the standard with the Cramer-Rao lower bound on accuracy, and with the performance of a nonlinear least squares approach. It is shown that the algorithm of IEEE-STD-1057 provides accurate estimates for Gaussian and quantization noise. In the Gaussian scenario it provides estimates with performance close to the derived lower bound. In severe conditions with noisy data covering only a fraction of a period, however, it is shown to have inferior performance compared with a one-dimensional search of a concentrated cost function.
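
When the frequency is known, the fit reduces to linear least squares in A, B, C for the model A*cos(omega*t) + B*sin(omega*t) + C; the four-parameter algorithm in the standard additionally iterates on omega. A sketch of the linear (three-parameter) step only, with an arbitrary test signal:

import numpy as np

def three_param_sine_fit(t, y, omega):
    # Least-squares fit of y ~ A*cos(omega*t) + B*sin(omega*t) + C for known omega.
    design = np.column_stack([np.cos(omega * t), np.sin(omega * t), np.ones_like(t)])
    (a, b, c), *_ = np.linalg.lstsq(design, y, rcond=None)
    amplitude = np.hypot(a, b)
    phase = np.arctan2(-b, a)   # so that y ~ amplitude*cos(omega*t + phase) + c
    return amplitude, phase, c

rng = np.random.default_rng(1)
t = np.arange(1000) / 1000.0
omega = 2 * np.pi * 13.0
y = 1.7 * np.cos(omega * t + 0.4) + 0.2 + 0.05 * rng.standard_normal(t.size)
print(three_param_sine_fit(t, y, omega))   # approximately (1.7, 0.4, 0.2)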

Journal ArticleDOI
TL;DR: The shotgun cellular system, a two-dimensional interference-limited cellular radio system that places base stations randomly and assigns channels randomly, is introduced; the results indicate that cellular performance is very robust and that little is lost by making rapid, minimally planned deployments.
Abstract: This paper considers two-dimensional interference-limited cellular radio systems. It introduces the shotgun cellular system that places base stations randomly and assigns channels randomly. Such systems are shown to provide lower bounds to cellular performance that are easy to compute, independent of shadow fading, and apply to a number of design scenarios. Traditional hexagonal systems provide an upper performance bound. The difference between upper and lower bounds is small under operating conditions typical in modern TDMA and CDMA cellular systems. Furthermore, in the strong shadow fading limit, the bounds converge. To give insights into the design of practical systems, several variations are explored including mobile access methods, sectorizing, channel assignments, and placement with deviations. Together these results indicate cellular performance is very robust and little is lost in making rapid minimally planned deployments.
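
A shotgun deployment is easy to simulate directly: drop base stations uniformly at random, attach a test mobile to the strongest (nearest) one, and measure the signal-to-interference ratio under power-law path loss. A Monte Carlo sketch (path-loss exponent, density and region size are arbitrary choices, and no shadow fading is modelled):

import numpy as np

def shotgun_sir_db(n_bs=50, region=10.0, path_loss_exp=4.0, trials=2000,
                   rng=np.random.default_rng(2)):
    # Median downlink SIR (dB) for a mobile at the centre of a square region with
    # base stations dropped uniformly at random, unit transmit power everywhere.
    sirs = []
    for _ in range(trials):
        bs = rng.uniform(-region / 2, region / 2, size=(n_bs, 2))
        d = np.hypot(bs[:, 0], bs[:, 1])
        rx = d ** (-path_loss_exp)      # received power from each base station
        signal = rx.max()               # serve from the strongest base station
        interference = rx.sum() - signal
        sirs.append(10 * np.log10(signal / interference))
    return float(np.median(sirs))

print(round(shotgun_sir_db(), 2))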