Showing papers on "Convex optimization" published in 1998


Journal ArticleDOI
TL;DR: If U is an ellipsoidal uncertainty set, then for some of the most important generic convex optimization problems (linear programming, quadratically constrained programming, semidefinite programming and others) the corresponding robust convex program is either exactly, or approximately, a tractable problem which lends itself to efficient algorithms such as polynomial time interior point methods.
Abstract: We study convex optimization problems for which the data is not specified exactly and it is only known to belong to a given uncertainty set U, yet the constraints must hold for all possible values of the data from U. The ensuing optimization problem is called robust optimization. In this paper we lay the foundation of robust convex optimization. In the main part of the paper we show that if U is an ellipsoidal uncertainty set, then for some of the most important generic convex optimization problems (linear programming, quadratically constrained programming, semidefinite programming and others) the corresponding robust convex program is either exactly, or approximately, a tractable problem which lends itself to efficient algorithms such as polynomial time interior point methods.
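For the linear-programming case, the robust counterpart under ellipsoidal uncertainty is a second-order cone program. The sketch below is a rough, hedged illustration of that reduction only (invented data, not the paper's general construction): a single uncertain constraint aᵀx ≤ b, with a ranging over the ellipsoid {ā + Pu : ‖u‖₂ ≤ 1}, is written directly in its equivalent cone form using CVXPY.

```python
# Hedged sketch: robust LP constraint with ellipsoidal uncertainty, written as
# its equivalent second-order cone constraint. All data are illustrative.
import numpy as np
import cvxpy as cp

np.random.seed(0)
n = 5
c = np.random.randn(n)            # objective vector (illustrative)
a_bar = np.random.randn(n)        # nominal constraint vector
P = 0.1 * np.random.randn(n, n)   # shape matrix of the ellipsoidal uncertainty set
b = 1.0

x = cp.Variable(n)
# a^T x <= b for all a = a_bar + P u, ||u||_2 <= 1
# is equivalent to  a_bar^T x + ||P^T x||_2 <= b
robust = a_bar @ x + cp.norm(P.T @ x, 2) <= b
prob = cp.Problem(cp.Minimize(c @ x), [robust, cp.norm(x, "inf") <= 10])
prob.solve()
print(prob.status, prob.value)
```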

2,501 citations


Journal ArticleDOI
TL;DR: The search for a piecewise quadratic Lyapunov function is formulated as a convex optimization problem in terms of linear matrix inequalities and the relation to frequency domain methods such as the circle and Popov criteria is explained.
Abstract: This paper presents a computational approach to stability analysis of nonlinear and hybrid systems. The search for a piecewise quadratic Lyapunov function is formulated as a convex optimization problem in terms of linear matrix inequalities. The relation to frequency domain methods such as the circle and Popov criteria is explained. Several examples are included to demonstrate the flexibility and power of the approach.
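As a minimal sketch of the underlying computation (the global quadratic case only, not the paper's piecewise-quadratic construction; the system matrix is an arbitrary stable example), the LMI search for a Lyapunov matrix P can be posed as a semidefinite feasibility problem:

```python
# Hedged sketch: search for a quadratic Lyapunov function V(x) = x' P x for
# dx/dt = A x via an LMI feasibility problem. The piecewise-quadratic method in
# the paper adds one such LMI per region; A below is an invented stable example.
import numpy as np
import cvxpy as cp

A = np.array([[-1.0, 2.0],
              [ 0.0, -3.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                  # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]   # V decreasing along trajectories
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("Lyapunov matrix P =\n", P.value)
```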

1,186 citations


Journal ArticleDOI
TL;DR: This paper shows how to formulate sufficient conditions for a robust solution to exist as SDPs, and provides sufficient conditions which guarantee that the robust solution is unique and continuous (Hölder-stable) with respect to the unperturbed problem's data.
Abstract: In this paper we consider semidefinite programs (SDPs) whose data depend on some unknown but bounded perturbation parameters. We seek "robust" solutions to such programs, that is, solutions which minimize the (worst-case) objective while satisfying the constraints for every possible value of parameters within the given bounds. Assuming the data matrices are rational functions of the perturbation parameters, we show how to formulate sufficient conditions for a robust solution to exist as SDPs. When the perturbation is "full," our conditions are necessary and sufficient. In this case, we provide sufficient conditions which guarantee that the robust solution is unique and continuous (Hölder-stable) with respect to the unperturbed problem's data. The approach can thus be used to regularize ill-conditioned SDPs. We illustrate our results with examples taken from linear programming, maximum norm minimization, polynomial interpolation, and integer programming.

985 citations


Journal ArticleDOI
TL;DR: Two alternative design techniques for constructing gain-scheduled controllers for uncertain linear parameter-varying systems are discussed and are amenable to linear matrix inequality problems via a gridding of the parameter space and a selection of basis functions.
Abstract: This paper is concerned with the design of gain-scheduled controllers for uncertain linear parameter-varying systems. Two alternative design techniques for constructing such controllers are discussed. Both techniques are amenable to linear matrix inequality problems via a gridding of the parameter space and a selection of basis functions. These problems are then readily solvable using available tools in convex semidefinite programming. When used together, these techniques provide complementary advantages of reduced computational burden and ease of controller implementation. The problem of synthesis for robust performance is then addressed by a new scaling approach for gain-scheduled control. The validity of the theoretical results is demonstrated through a two-link flexible manipulator design example. This is a challenging problem that requires scheduling of the controller in the manipulator geometry and robustness in the face of uncertainty in the high-frequency range.

887 citations


Journal ArticleDOI
TL;DR: This paper presents efficiency estimates for several symmetric primal-dual methods that can loosely be classified as path-following methods for convex programming problems expressed in conic form when the cone and its associated barrier are self-scaled.
Abstract: In this paper we continue the development of a theoretical foundation for efficient primal-dual interior-point algorithms for convex programming problems expressed in conic form, when the cone and its associated barrier are self-scaled (see Yu. E. Nesterov and M. J. Todd, Math. Oper. Res., 22 (1997), pp. 1–42). The class of problems under consideration includes linear programming, semidefinite programming, and convex quadratically constrained quadratic programming problems. For such problems we introduce a new definition of affine-scaling and centering directions. We present efficiency estimates for several symmetric primal-dual methods that can loosely be classified as path-following methods. Because of the special properties of these cones and barriers, two of our algorithms can take steps that typically go a large fraction of the way to the boundary of the feasible region, rather than being confined to a ball of unit radius in the local norm defined by the Hessian of the barrier.

532 citations


Proceedings ArticleDOI
16 Dec 1998
TL;DR: In this article, linear matrix inequalities (LMI) are used to perform local stability and performance analysis of linear systems with saturating elements, which leads to less conservative information on stability regions, disturbance rejection, and L2-gain than standard global stability analysis.
Abstract: We show how linear matrix inequalities (LMI) can be used to perform local stability and performance analysis of linear systems with saturating elements. This leads to less conservative information on stability regions, disturbance rejection, and L2-gain than standard global stability and performance analysis. The circle and Popov criteria are used to obtain Lyapunov functions whose sublevel sets provide regions of guaranteed stability and performance within a restricted state space region. Our LMI formulation leads directly to simple convex optimization problems that can be solved efficiently as semidefinite programs. The results cover both single and multiple saturation elements and can be immediately extended to discrete time systems. An obvious application of these techniques is in the analysis of control systems with saturating control inputs.

372 citations


Journal ArticleDOI
TL;DR: In this article, a unified approach to linear controller synthesis that employs various LMI conditions to represent control specifications is proposed, where a general synthesis problem described by any LMI of the class is reduced to solving a certain LMI.
Abstract: This paper proposes a unified approach to linear controller synthesis that employs various LMI conditions to represent control specifications. We define a comprehensive class of LMIs and consider a general synthesis problem described by any LMI of the class. We show a procedure that reduces the synthesis problem, which is a BMI problem, to solving a certain LMI. The derived LMI condition is equivalent to the original BMI condition and also gives a convex parametrization of all the controllers that solve the synthesis problem. The class contains many widely known LMIs (for the H∞ norm, H2 norm, etc.), and hence the solution of this paper unifies design methods that have been proposed separately for each LMI. Further, the class also provides LMIs for multi-objective performance measures, which enable a new formulation of controller design through convex optimization. © 1998 John Wiley & Sons, Ltd.

343 citations


Proceedings ArticleDOI
21 Jun 1998
TL;DR: In this paper, the authors consider analysis and controller synthesis of piecewise-linear systems based on constructing quadratic and piecewisequadratic Lyapunov functions that prove stability and performance for the system.
Abstract: We consider analysis and controller synthesis of piecewise-linear systems. The method is based on constructing quadratic and piecewise-quadratic Lyapunov functions that prove stability and performance for the system. It is shown that proving stability and performance, or designing (state-feedback) controllers, can be cast as convex optimization problems involving linear matrix inequalities that can be solved very efficiently. A couple of simple examples are included to demonstrate applications of the methods described.

330 citations


Journal ArticleDOI
TL;DR: How the search directions for the Nesterov--Todd (NT) method can be computed efficiently and how they can be viewed as Newton directions are discussed and demonstrated.
Abstract: We study different choices of search direction for primal-dual interior-point methods for semidefinite programming problems. One particular choice we consider comes from a specialization of a class of algorithms developed by Nesterov and Todd for certain convex programming problems. We discuss how the search directions for the Nesterov--Todd (NT) method can be computed efficiently and demonstrate how they can be viewed as Newton directions. This last observation also leads to convenient computation of accelerated steps, using the Mehrotra predictor-corrector approach, in the NT framework. We also provide an analytical and numerical comparison of several methods using different search directions, and suggest that the method using the NT direction is more robust than alternative methods.

279 citations


Journal ArticleDOI
16 May 1998
TL;DR: Qualitative test of 3D frictional form-closure grasps of n robotic fingers is formalized as a problem of linear programming (LP), and the problem of minimizing the L1 norm of the grasp forces balancing an external wrench can be transformed to a ray-shooting problem.
Abstract: This paper formalizes qualitative test of 3D frictional form-closure grasps of n robotic fingers as a problem of linear programming (LP). It is well-known that a sufficient and necessary condition for form-closure grasps is that the origin of the wrench space lies inside the convex hull of primitive contact wrenches. We demonstrate that the problem of querying whether the origin lies inside the convex hull is equivalent to a ray-shooting problem, which is dual to an LP problem based on the duality between convex hulls and convex polytopes. Furthermore, this paper addresses the problem of minimizing the L1 norm of the grasp forces balancing an external wrench, which can also be transformed to a ray-shooting problem. We have implemented the algorithms and confirmed their real-time efficiency for qualitative test and grasp force optimization.
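A hedged illustration of the convex-hull condition only (not the paper's ray-shooting or force-optimization algorithms): membership of the origin in the convex hull of the primitive contact wrenches can be checked by a small LP feasibility problem. The wrench data below are invented.

```python
# Hedged sketch: is the origin in the convex hull of wrenches w_1..w_k?
# Feasibility of: lambda_i >= 0, sum(lambda_i) = 1, sum(lambda_i * w_i) = 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
d, k = 6, 12                       # wrench-space dimension, number of primitive wrenches
W = rng.standard_normal((d, k))    # columns are primitive contact wrenches (illustrative)

A_eq = np.vstack([W, np.ones((1, k))])   # W @ lam = 0  and  sum(lam) = 1
b_eq = np.append(np.zeros(d), 1.0)
res = linprog(c=np.zeros(k), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))

print("origin lies in the convex hull of the wrenches:", res.success)
```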

253 citations


Journal ArticleDOI
TL;DR: A new estimation method whereby signal subspace truncation of the DAA augmented matrix is used for initialization and is followed by a local maximum-likelihood optimization routine, and the accuracy of this method is demonstrated to be asymptotically optimal for the various superior scenarios presented.
Abstract: This paper considers the problem of direction-of-arrival (DOA) estimation for multiple uncorrelated plane waves incident on so-called "fully augmentable" sparse linear arrays. In situations where a decision is made on the number of existing signal sources (m) prior to the estimation stage, we investigate the conditions under which DOA estimation accuracy is effective (in the maximum-likelihood sense). In the case where m is less than the number of antenna sensors (M), a new approach called "MUSIC-maximum-entropy equalization" is proposed to improve DOA estimation performance in the "preasymptotic region" of finite sample size (N) and signal-to-noise ratio. A full-sized positive definite (p.d.) Toeplitz matrix is constructed from the M×M direct data covariance matrix, and then, alternating projections are applied to find a p.d. Toeplitz matrix with m-variate signal eigensubspace ("signal subspace truncations"). When m ≥ M, Cramer-Rao bound analysis suggests that the minimal useful sample size N is rather large, even for arbitrarily strong signals. It is demonstrated that the well-known direct augmentation approach (DAA) cannot approach the accuracy of the corresponding Cramer-Rao bound, even asymptotically (as N → ∞) and, therefore, needs to be improved. We present a new estimation method whereby signal subspace truncation of the DAA augmented matrix is used for initialization and is followed by a local maximum-likelihood optimization routine. The accuracy of this method is demonstrated to be asymptotically optimal for the various superior scenarios (m ≥ M) presented.

Book ChapterDOI
01 Jan 1998
TL;DR: In this article, the existence of a global error bound for convex inequality systems was studied using convex analysis, and a necessary and sufficient condition was established for a closed convex set defined by a closed proper convex function to possess a global error bound in terms of a natural residual.
Abstract: Using convex analysis, this paper gives a systematic and unified treatment for the existence of a global error bound for a convex inequality system. We establish a necessary and sufficient condition for a closed convex set defined by a closed proper convex function to possess a global error bound in terms of a natural residual. We derive many special cases of the main characterization, including the case where a Slater assumption is in place. Our results show clearly the essential conditions needed for convex inequality systems to satisfy global error bounds; they unify and extend a large number of existing results on global error bounds for such systems.

Journal ArticleDOI
TL;DR: OSC is faster than the convex algorithm, the amount of acceleration being approximately proportional to the number of subsets in OSC, and it causes only a slight increase of noise and global errors in the reconstructions.
Abstract: Iterative maximum likelihood (ML) transmission computed tomography algorithms have distinct advantages over Fourier-based reconstruction, but unfortunately require increased computation time. The convex algorithm is a relatively fast iterative ML algorithm, but it is nevertheless too slow for many applications. Therefore, an acceleration of this algorithm by using ordered subsets of projections is proposed [ordered subsets convex algorithm (OSC)]. OSC applies the convex algorithm sequentially to subsets of projections. OSC was compared with the convex algorithm using simulated and physical thorax phantom data. Reconstructions were performed for OSC using eight and 16 subsets (eight and four projections/subset, respectively). Global errors, image noise, contrast recovery, and likelihood increase were calculated. Results show that OSC is faster than the convex algorithm, the amount of acceleration being approximately proportional to the number of subsets in OSC, and that it causes only a slight increase of noise and global errors in the reconstructions. Images and image profiles of the reconstructions were in good agreement. In conclusion, OSC and the convex algorithm result in similar image quality, but OSC is more than an order of magnitude faster.

Journal ArticleDOI
TL;DR: This work considers the method for constrained convex optimization in a Hilbert space, consisting of a step in the direction opposite to an εk-subgradient of the objective at a current iterate, followed by an orthogonal projection onto the feasible set.
Abstract: We consider the method for constrained convex optimization in a Hilbert space, consisting of a step in the direction opposite to an εk-subgradient of the objective at a current iterate, followed by an orthogonal projection onto the feasible set. The normalized stepsizes αk are exogenously given, satisfying ∑k αk = ∞ and ∑k αk² < ∞, with αk > 0. We prove that the sequence generated in this way is weakly convergent to a minimizer if the problem has solutions, and is unbounded otherwise. Among the features of our convergence analysis, we mention that it covers the nonsmooth case, in the sense that we make no assumption of differentiability of f, and much less Lipschitz continuity of its gradient. Also, we prove weak convergence of the whole sequence, rather than just boundedness of the sequence and optimality of its weak accumulation points, thus improving over all previously known convergence results. We also present convergence rate results. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
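A minimal sketch of the iteration under the stated stepsize conditions, using exact (εk = 0) subgradients, an invented piecewise-linear objective, and a box feasible set whose orthogonal projection is a simple clip; none of the data come from the paper.

```python
# Hedged sketch: projected subgradient method with diminishing stepsizes
# alpha_k = 1/(k+1), so that sum(alpha_k) = inf and sum(alpha_k^2) < inf.
# Toy problem: minimize ||A x - b||_1 over the box [-1, 1]^n.
import numpy as np

rng = np.random.default_rng(0)
m, n = 30, 10
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)

x = np.zeros(n)
for k in range(5000):
    g = A.T @ np.sign(A @ x - b)              # a subgradient of ||Ax - b||_1 at x
    alpha = 1.0 / (k + 1)                     # exogenous diminishing stepsize
    x = np.clip(x - alpha * g, -1.0, 1.0)     # subgradient step, then projection onto the box

print("final objective:", np.abs(A @ x - b).sum())
```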

Proceedings ArticleDOI
01 Nov 1998
TL;DR: This work presents a method for optimizing and automating component and transistor sizing for CMOS operational amplifiers, and shows how the method can be applied to six common op-amp architectures, and gives several example designs.
Abstract: We present a method for optimizing and automating component and transistor sizing for CMOS operational amplifiers. We observe that a wide variety of performance measures can be formulated as posynomial functions of the design variables. As a result, amplifier design problems can be formulated as a geometric program, a special type of convex optimization problem for which very efficient global optimization methods have recently been developed. The synthesis method is therefore fast, and determines the globally optimal design; in particular the final solution is completely independent of the starting point (which can even be infeasible), and infeasible specifications are unambiguously detected. After briefly introducing the method, which is described in more detail by M. Hershenson et al., we show how the method can be applied to six common op-amp architectures, and give several example designs.
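A toy illustration of the geometric-programming structure the method relies on, using CVXPY's geometric-programming mode; the variables and posynomial constraints below are invented stand-ins, not the op-amp performance models from the paper.

```python
# Hedged sketch of a geometric program: monomial objective, posynomial
# constraints, solved as a convex problem after a log transformation.
import cvxpy as cp

w = cp.Variable(pos=True)   # e.g. a device width (illustrative)
l = cp.Variable(pos=True)   # e.g. a device length (illustrative)

area = w * l                            # monomial objective
constraints = [
    2 * w + 3 * l <= 10,                # posynomial <= constant
    w * l >= 1,                         # monomial lower bound (e.g. a "gain" requirement)
    l / w <= 4,                         # aspect-ratio limit
]
prob = cp.Problem(cp.Minimize(area), constraints)
prob.solve(gp=True)                     # geometric-programming mode
print(w.value, l.value, prob.value)
```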

Journal ArticleDOI
TL;DR: A convergence theorem is proved which extends existing results by relaxing the assumption of uniqueness of minimizers and derives a decomposition scheme for block angular optimization and presents computational results on a class of dual block angular problems.
Abstract: We study a generalized version of the method of alternating directions as applied to the minimization of the sum of two convex functions subject to linear constraints. The method consists of solving consecutively in each iteration two optimization problems which contain in the objective function both Lagrangian and proximal terms. The minimizers determine the new proximal terms and a simple update of the Lagrangian terms follows. We prove a convergence theorem which extends existing results by relaxing the assumption of uniqueness of minimizers. Another novelty is that we allow penalty matrices, and these may vary per iteration. This can be beneficial in applications, since it allows additional tuning of the method to the problem and can lead to faster convergence relative to fixed penalties. As an application, we derive a decomposition scheme for block angular optimization and present computational results on a class of dual block angular problems.
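For orientation, here is a sketch of the classical fixed-penalty alternating directions iteration on a toy instance of "sum of two convex functions with a linear coupling" (a lasso-type splitting with invented data); the paper's proximal terms and iteration-varying penalty matrices are not reproduced.

```python
# Hedged sketch: alternating directions on
#   minimize 0.5*||A x - b||^2 + gamma*||z||_1   subject to   x - z = 0.
import numpy as np

rng = np.random.default_rng(0)
m, n = 40, 20
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)
gamma, rho = 0.1, 1.0                       # regularization weight, fixed penalty

x = z = u = np.zeros(n)
AtA, Atb = A.T @ A, A.T @ b
for _ in range(200):
    x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * (z - u))   # x-update (least squares)
    z = np.sign(x + u) * np.maximum(np.abs(x + u) - gamma / rho, 0)   # z-update (soft threshold)
    u = u + x - z                                                     # Lagrange multiplier update

print("coupling residual ||x - z|| =", np.linalg.norm(x - z))
```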

Proceedings ArticleDOI
16 Aug 1998
TL;DR: A new method for reducing peak to average power ratio (PAR) in multicarrier systems is presented, and optimizing the time domain signal leads to a convex optimization problem that can be transformed into a linear program (LP).
Abstract: A new method for reducing peak to average power ratio (PAR) in multicarrier systems is presented. The PAR is reduced by adding to the data sequence time domain signals that lie in a frequency space disjoint from the multicarrier data symbols. With this formulation, optimizing the time domain signal leads to a convex optimization problem that can be transformed into a linear program (LP). Solving the LP exactly leads to PAR reductions of 6-10 dB, but a simple gradient algorithm can achieve most of this reduction after a few iterations. These additive signals can be easily removed from the received signal without a transmitter-receiver symbol handshake.
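A simplified, hedged illustration of the idea for a real-valued signal (the symbol length, reserved-tone indices, and data are all invented): minimizing the peak of the corrected signal over the reserved-tone coefficients is a convex, LP-representable problem.

```python
# Hedged sketch: tone reservation for PAR reduction on a real-valued signal.
# Columns of Q are tones on reserved (data-free) subcarriers; minimize the
# peak of x + Q c, an infinity-norm problem that reduces to a linear program.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N = 64                                    # samples per multicarrier symbol (illustrative)
reserved = [3, 11, 19, 27]                # reserved subcarrier indices (illustrative)
t = np.arange(N)
Q = np.column_stack([f(2 * np.pi * k * t / N)
                     for k in reserved for f in (np.cos, np.sin)])
x = rng.standard_normal(N)                # time-domain data signal (illustrative)

c = cp.Variable(Q.shape[1])
peak = cp.norm(x + Q @ c, "inf")
cp.Problem(cp.Minimize(peak)).solve()
print("peak before:", np.abs(x).max(), " peak after:", peak.value)
```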

Journal ArticleDOI
TL;DR: An algorithm for the variational inequality problem on convex sets with nonempty interior establishes full convergence to a solution with minimal conditions upon the monotone operator F, weaker than strong monotonicity or Lipschitz continuity, for instance, and including cases where the solution need not be unique.
Abstract: We present an algorithm for the variational inequality problem on convex sets with nonempty interior. The use of Bregman functions whose zone is the convex set allows for the generation of a sequence contained in the interior, without taking explicitly into account the constraints which define the convex set. We establish full convergence to a solution with minimal conditions upon the monotone operator F, weaker than strong monotonicity or Lipschitz continuity, for instance, and including cases where the solution need not be unique. We apply our algorithm to several relevant classes of convex sets, including orthants, boxes, polyhedra and balls, for which Bregman functions are presented which give rise to explicit iteration formulae, up to the determination of two scalar stepsizes, which can be found through finite search procedures. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.

Journal ArticleDOI
TL;DR: In this article, the authors present convergence results and an estimation of the rate of convergence for this method, and then apply it to variational inequalities and structured convex programming problems to get new parallel decomposition algorithms.
Abstract: Many problems of convex programming can be reduced to finding a zero of the sum of two maximal monotone operators. For solving this problem, there exists a variety of methods such as the forward–backward method, the Peaceman–Rachford method, the Douglas–Rachford method, and more recently the θ-scheme. This last method has been presented without general convergence analysis by Glowinski and Le Tallec and seems to give good numerical results. The purpose of this paper is first to present convergence results and an estimation of the rate of convergence for this recent method, and then to apply it to variational inequalities and structured convex programming problems to get new parallel decomposition algorithms.

Journal ArticleDOI
TL;DR: Two methods of volume-dependent optimization for intensity modulated beams, such as those generated by computer-controlled multileaf collimators, are described: one uses a volume-sensitive penalty function, and the other the theory of projections onto convex sets, in which the dose-volume constraint is replaced by a limit on integral dose.
Abstract: For accurate prediction of normal tissue tolerance, it is important that the volumetric information of dose distribution be considered. However, in dosimetric optimization of intensity modulated beams, the dose-volume factor is usually neglected. In this paper we describe two methods of volume-dependent optimization for intensity modulated beams such as those generated by computer-controlled multileaf collimators. The first method uses a volume sensitive penalty function in which fast simulated annealing is used for cost function minimization (CFM). The second technique is based on the theory of projections onto convex sets (POCS) in which the dose-volume constraint is replaced by a limit on integral dose. The ability of the methods to respect the dose-volume relationship was demonstrated by using a prostate example involving partial volume constraints to the bladder and the rectum. The volume sensitive penalty function used in the CFM method can be easily adopted by existing optimization programs. The convex projection method can find solutions in much shorter time with minimal user interaction.
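A toy sketch of the POCS idea used by the second method: alternate orthogonal projections onto two convex constraint sets until a point in their intersection is approached. The per-voxel dose bounds and mean-dose target below are invented stand-ins for the paper's clinical dose-volume constraints.

```python
# Hedged sketch of projections onto convex sets (POCS): alternate projections
# onto a box constraint and an affine mean-dose constraint. Data are illustrative.
import numpy as np

n = 50
lo, hi = 0.0, 70.0          # per-voxel dose bounds (box constraint, illustrative)
mean_dose = 60.0            # prescribed mean dose (affine constraint, illustrative)

def proj_box(d):
    return np.clip(d, lo, hi)

def proj_mean(d):
    # Orthogonal projection onto {d : mean(d) = mean_dose}
    return d + (mean_dose - d.mean())

d = np.random.default_rng(0).uniform(0, 100, n)
for _ in range(100):
    d = proj_mean(proj_box(d))
print("mean dose:", d.mean(), " max dose:", d.max())
```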

Journal ArticleDOI
TL;DR: The present paper considers trim-loss problems, often called cutting stock problems, arising in the paper industry: the problem is to cut out a set of product paper rolls from raw paper rolls such that the cost function, including the trim loss, is minimized.

Journal ArticleDOI
TL;DR: In this article, an extended cutting plane method is applied to non-convex mixed-integer non-linear programming problems, although the method was originally introduced for the solution of convex problems only.

Proceedings ArticleDOI
01 Apr 1998
TL;DR: This paper proposes a placement method for a mixed set of hard, soft, and pre-placed modules, based on a placement topology representation called sequence-pair, which is directly applied in simulated annealing to present the most exact placement method.
Abstract: This paper proposes a placement method for a mixed set of hard, soft, and pre-placed modules, based on a placement topology representation called sequence-pair. Under one sequence-pair, a convex optimization problem is efficiently formulated and solved to optimize the aspect ratios of the soft modules. The method is used in two ways: i) directly applied in simulated annealing to give the most exact placement method; ii) applied as a post-process in an approximate placement method for faster computation. The performance of these two methods is reported using MCNC benchmark examples.

Journal ArticleDOI
TL;DR: This article presents a primal-dual predictor-corrector interior-point method for solving quadratically constrained convex optimization problems that arise from truss design problems and illustrates the surprising efficiency of the method.
Abstract: This article presents a primal-dual predictor-corrector interior-point method for solving quadratically constrained convex optimization problems that arise from truss design problems. We investigate certain special features of the problem, discuss fundamental differences of interior-point methods for linearly and nonlinearly constrained problems, extend Mehrotra's predictor-corrector strategy to nonlinear programs, and establish convergence of a long step method. Numerical experiments on large scale problems illustrate the surprising efficiency of the method.

Journal ArticleDOI
TL;DR: This paper treats the problem of computing the collapse state in limit analysis for a solid with a quadratic yield condition, such as, for example, the von Mises condition, as a convex optimization problem in large sparse form.
Abstract: This paper treats the problem of computing the collapse state in limit analysis for a solid with a quadratic yield condition, such as, for example, the von Mises condition. After discretization with the finite element method, using divergence-free elements for the plastic flow, the kinematic formulation reduces to the problem of minimizing a sum of Euclidean vector norms, subject to a single linear constraint. This is a nonsmooth minimization problem, since many of the norms in the sum may vanish at the optimal point. Recently an efficient solution algorithm has been developed for this particular convex optimization problem in large sparse form. The approach is applied to test problems in limit analysis in two different plane models: plane strain and plates. In the first case more than 80% of the terms in the objective function are zero in the optimal solution, causing extreme ill conditioning. In the second case all terms are nonzero. In both cases the method works very well, and problems are solved which are larger by at least an order of magnitude than previously reported. The relative accuracy for the solution of the discrete problems, measured by duality gap and feasibility, is typically of the order 10^-8.
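A hedged sketch of the core optimization problem only (minimizing a sum of Euclidean norms subject to a single linear constraint), posed as a second-order cone program on random data unrelated to any finite-element model; this is not the paper's specialized large sparse solver.

```python
# Hedged sketch: minimize sum_i ||B_i u||_2  subject to  a' u = 1.
# Nonsmooth: many of the norm terms may vanish at the optimum.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, d, p = 100, 3, 5                  # number of norm terms, vector dimension, variable size
B = rng.standard_normal((m, d, p))   # each term is ||B_i @ u||_2 (illustrative data)
a = rng.standard_normal(p)

u = cp.Variable(p)
objective = sum(cp.norm(B[i] @ u, 2) for i in range(m))
prob = cp.Problem(cp.Minimize(objective), [a @ u == 1])   # single linear (normalization) constraint
prob.solve()
print("optimal value:", prob.value)
```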

Journal ArticleDOI
TL;DR: Finding an optimal point in the intersection of the fixed point sets of a family of nonexpansive mappings is a frequent problem in various areas of mathematical science and engineering.
Abstract: Finding an optimal point in the intersection of the fixed point sets of a family of nonexpansive mappings is a frequent problem in various areas of mathematical science and engineering. Let T1, ..., TN be nonexpansive mappings on a Hilbert space H, and let Θ be a quadratic function on H defined in terms of a strongly positive bounded self-adjoint linear operator A. Then, for each sequence of scalar parameters (λn) satisfying certain conditions, we propose an algorithm that generates a sequence converging strongly to a unique minimizer u* of Θ over the intersection of the fixed point sets of all the Ti's. This generalizes some results of Halpern (1967), Lions (1977), Wittmann (1992), and Bauschke (1996). In particular, the minimization of Θ over the intersection of closed convex sets Ci can be handled by taking Ti to be the metric projection onto Ci without introducing any special inner products that depend on A. We also propose an algorithm that generates a sequence converging to a unique minimizer of Θ over K, where K...

Proceedings ArticleDOI
21 Jun 1998
TL;DR: In this paper, the synthesis of non-fragile or resilient regulators for linear systems is described using state-space methodologies, and the LQ/H2 static state-feedback case is examined in detail.
Abstract: This paper describes the synthesis of non-fragile or resilient regulators for linear systems. The general framework for fragility is described using state-space methodologies, and the LQ/H2 static state-feedback case is examined in detail. We discuss the multiplicative structured uncertainties case, and propose remedies of the fragility problem using a convex programming framework as a possible solution scheme. The benchmark problem is taken as an example to show how controller gain variations can affect the performance of the closed-loop system.

Journal ArticleDOI
TL;DR: In this article, local subdivision schemes that interpolate functional univariate data and preserve convexity were constructed, and the resulting limit function of these schemes is continuous and convex for arbitrary convex data.
Abstract: We construct local subdivision schemes that interpolate functional univariate data and that preserve convexity. The resulting limit function of these schemes is continuous and convex for arbitrary convex data. Moreover this class of schemes is restricted to a subdivision scheme that generates a limit function that is convex and continuously differentiable for strictly convex data. The approximation order of this scheme is four. Some generalizations, such as tension control and piecewise convexity preservation, are briefly discussed.

Journal ArticleDOI
TL;DR: Using techniques of linear algebra, combinatorial optimization, and convex optimization, upper and lower bounds on the optimal value for the Gaussian case are developed and integrated into a branch-and-bound algorithm for the exact solution of these design problems.
Abstract: A fundamental experimental design problem is to select a most informative subset, having prespecified size, from a set of correlated random variables. Instances of this problem arise in many applied domains such as meteorology, environmental statistics, and statistical geology. In these applications, observations can be collected at different locations and, possibly, at different times. Information is measured by "entropy." Practical situations have further restrictions on the design space. For example, budgetary limits, geographical considerations, as well as legislative and political considerations may restrict the design space in a complicated manner. Using techniques of linear algebra, combinatorial optimization, and convex optimization, we develop upper and lower bounds on the optimal value for the Gaussian case. We describe how these bounds can be integrated into a branch-and-bound algorithm for the exact solution of these design problems. Finally, we describe how we have implemented this algorithm, and we present computational results for estimated covariance matrices corresponding to sets of environmental monitoring stations in the Ohio Valley of the United States.

Journal ArticleDOI
TL;DR: The smallest (best) barrier parameter of self-concordant barriers for homogeneous convex cones is characterized, and it is proved that this parameter is the same as the rank of the cone, which is the number of steps in a recursive construction of the cone.
Abstract: We characterize the smallest (best) barrier parameter of self-concordant barriers for homogeneous convex cones. In particular, we prove that this parameter is the same as the rank of the cone, which is the number of steps in a recursive construction of the cone (Siegel domain construction). We also provide lower bounds on the barrier parameter in terms of the Carathéodory number of the cone. The bounds are tight for homogeneous self-dual cones. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.