
Showing papers on "Extreme point published in 2016"


Proceedings Article
01 Jan 2016
TL;DR: A flexible convex relaxation for the phase retrieval problem that operates in the natural domain of the signal, avoiding the prohibitive computational cost associated with "lifting" and semidefinite programming while competing with recently developed non-convex techniques for phase retrieval.
Abstract: We propose a flexible convex relaxation for the phase retrieval problem that operates in the natural domain of the signal. Therefore, we avoid the prohibitive computational cost associated with "lifting" and semidefinite programming (SDP) in methods such as PhaseLift and compete with recently developed non-convex techniques for phase retrieval. We relax the quadratic equations for phaseless measurements to inequality constraints, each of which represents a symmetric "slab". Through a simple convex program, our proposed estimator finds an extreme point of the intersection of these slabs that is best aligned with a given anchor vector. We characterize geometric conditions that certify success of the proposed estimator. Furthermore, using classic results in statistical learning theory, we show that for random measurements the geometric certificates hold with high probability at an optimal sample complexity. Phase transition of our estimator is evaluated through simulations. Our numerical experiments also suggest that the proposed method can solve phase retrieval problems with coded diffraction measurements as well.
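In the real-valued case, the estimator described above reduces to a linear program: maximize alignment with the anchor vector subject to the slab constraints |⟨a_i, x⟩| ≤ b_i. A minimal sketch (the problem sizes, random seed, and anchor construction are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 20, 120                       # signal dimension, number of measurements
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))      # real Gaussian measurement vectors
b = np.abs(A @ x_true)               # signless ("phaseless") measurements

# Anchor vector roughly aligned with the true signal (illustrative choice;
# in practice the anchor is itself estimated from the data).
u = x_true + 0.5 * rng.standard_normal(n)

# maximize <u, x>  subject to  -b <= A x <= b : a plain LP in the real case
res = linprog(c=-u,
              A_ub=np.vstack([A, -A]),
              b_ub=np.concatenate([b, b]),
              bounds=[(None, None)] * n)
x_hat = res.x

# success means recovery up to a global sign
err = min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true))
print(err / np.linalg.norm(x_true))
```

With enough measurements and a well-aligned anchor, the LP's optimal extreme point coincides with the true signal up to sign, which is exactly the geometric certificate the abstract describes.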

77 citations


Journal ArticleDOI
TL;DR: A general decomposition framework to solve exactly adjustable robust linear optimization problems subject to polytope uncertainty; results show that the relative performance of the algorithms depends on whether the budget is integer or fractional.
Abstract: We present in this paper a general decomposition framework to solve exactly adjustable robust linear optimization problems subject to polytope uncertainty. Our approach is based on replacing the polytope by the set of its extreme points and generating the extreme points on the fly within row generation or column-and-row generation algorithms. The novelty of our approach lies in formulating the separation problem as a feasibility problem instead of a max-min problem as done in recent works. Applying the Farkas lemma, we can reformulate the separation problem as a bilinear program, which is then linearized to obtain a mixed-integer linear programming formulation. We compare the two algorithms on a robust telecommunications network design under demand uncertainty and budgeted uncertainty polytope. Our results show that the relative performance of the algorithms depends on whether the budget is integer or fractional.
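The integer-versus-fractional budget distinction matters because it changes the extreme points being generated. For an integer budget Γ, the vertices of the budgeted uncertainty set {z ∈ [0,1]^n : Σz ≤ Γ} are exactly the 0/1 vectors with at most Γ ones; a fractional budget adds vertices with one fractional component. A small sketch of the integer case (the function name is mine, not the paper's):

```python
from itertools import product

def budget_polytope_vertices(n, gamma):
    """Extreme points of the budgeted uncertainty set
    {z in [0,1]^n : sum(z) <= gamma} when the budget gamma is an integer:
    every vertex is a 0/1 vector with at most gamma ones."""
    return [z for z in product((0, 1), repeat=n) if sum(z) <= gamma]

verts = budget_polytope_vertices(4, 2)
print(len(verts))   # C(4,0) + C(4,1) + C(4,2) = 11
```

Enumerating these vertices explicitly is exponential in n, which is why the paper generates them on the fly inside row or column-and-row generation rather than up front.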

58 citations


Posted Content
TL;DR: In this article, a bootstrap-based procedure is proposed to build confidence intervals for single components of a partially identified parameter vector, and for smooth functions of such components, in moment (in)equality models.
Abstract: This paper proposes a bootstrap-based procedure to build confidence intervals for single components of a partially identified parameter vector, and for smooth functions of such components, in moment (in)equality models. The extreme points of our confidence interval are obtained by maximizing/minimizing the value of the component (or function) of interest subject to the sample analog of the moment (in)equality conditions properly relaxed. The novelty is that the amount of relaxation, or critical level, is computed so that the component (or function) of θ, instead of θ itself, is uniformly asymptotically covered with prespecified probability. Calibration of the critical level is based on repeatedly checking feasibility of linear programming problems, rendering it computationally attractive. Computation of the extreme points of the confidence interval is based on a novel application of the response surface method for global optimization, which may prove of independent interest also for applications of other methods of inference in the moment (in)equalities literature. The critical level is by construction smaller (in finite sample) than the one used if projecting confidence regions designed to cover the entire parameter vector. Hence, our confidence interval is weakly shorter than the projection of established confidence sets (Andrews and Soares, 2010), if one holds the choice of tuning parameters constant. We provide simple conditions under which the comparison is strict. Our inference method controls asymptotic coverage uniformly over a large class of data generating processes. Our assumptions and those used in the leading alternative approach (a profiling based method) are not nested. We explain why we employ some restrictions that are not required by other methods and provide examples of models for which our method is uniformly valid but profiling based methods are not.

46 citations


Journal ArticleDOI
TL;DR: In this paper, a bootstrap-based calibrated projection procedure is proposed to build confidence intervals for single components and for smooth functions of a partially identified parameter vector in moment (in)equality models.
Abstract: We propose a bootstrap-based calibrated projection procedure to build confidence intervals for single components and for smooth functions of a partially identified parameter vector in moment (in)equality models. The method controls asymptotic coverage uniformly over a large class of data generating processes. The extreme points of the calibrated projection confidence interval are obtained by extremizing the value of the function of interest subject to a proper relaxation of studentized sample analogs of the moment (in)equality conditions. The degree of relaxation, or critical level, is calibrated so that the function of theta, not theta itself, is uniformly asymptotically covered with prespecified probability. This calibration is based on repeatedly checking feasibility of linear programming problems, rendering it computationally attractive. Nonetheless, the program defining an extreme point of the confidence interval is generally nonlinear and potentially intricate. We provide an algorithm, based on the response surface method for global optimization, that approximates the solution rapidly and accurately, and we establish its rate of convergence. The algorithm is of independent interest for optimization problems with simple objectives and complicated constraints. An empirical application estimating an entry game illustrates the usefulness of the method. Monte Carlo simulations confirm the accuracy of the solution algorithm, the good statistical as well as computational performance of calibrated projection (including in comparison to other methods), and the algorithm's potential to greatly accelerate computation of other confidence intervals.
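The computational core of the calibration step is repeated feasibility checks of linear programs: for a candidate critical level, ask whether the relaxed moment inequalities admit any parameter value. A toy sketch of such a check (the constraints here are hypothetical, purely for illustration):

```python
import numpy as np
from scipy.optimize import linprog

def feasible(A, b, relax):
    """Is {theta : A @ theta <= b + relax} nonempty?  A zero-objective LP:
    status 0 means a feasible point was found, status 2 means infeasible."""
    res = linprog(c=np.zeros(A.shape[1]), A_ub=A, b_ub=b + relax,
                  bounds=[(None, None)] * A.shape[1])
    return res.status == 0

# Hypothetical moment inequalities on a scalar theta:
# theta <= 1 and theta >= 2 -- jointly infeasible unless relaxed by >= 0.5.
A = np.array([[1.0], [-1.0]])
b = np.array([1.0, -2.0])
print(feasible(A, b, 0.0), feasible(A, b, 0.6))   # False True
```

Because each check is a feasibility LP rather than a full optimization, the critical level can be calibrated over many bootstrap draws at modest cost, which is the computational attraction the abstract emphasizes.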

41 citations


Posted Content
TL;DR: In this paper, the quadratic equations for phaseless measurements are relaxed to symmetric "slab" constraints, and a simple convex program finds an extreme point of the intersection of these slabs that is best aligned with a given anchor vector.
Abstract: We propose a flexible convex relaxation for the phase retrieval problem that operates in the natural domain of the signal. Therefore, we avoid the prohibitive computational cost associated with "lifting" and semidefinite programming (SDP) in methods such as PhaseLift and compete with recently developed non-convex techniques for phase retrieval. We relax the quadratic equations for phaseless measurements to inequality constraints, each of which represents a symmetric "slab". Through a simple convex program, our proposed estimator finds an extreme point of the intersection of these slabs that is best aligned with a given anchor vector. We characterize geometric conditions that certify success of the proposed estimator. Furthermore, using classic results in statistical learning theory, we show that for random measurements the geometric certificates hold with high probability at an optimal sample complexity. Phase transition of our estimator is evaluated through simulations. Our numerical experiments also suggest that the proposed method can solve phase retrieval problems with coded diffraction measurements as well.

38 citations


Journal ArticleDOI
TL;DR: In this article, a family of homeomorphisms of the two-torus isotopic to the identity is constructed for which all of the rotation sets can be described explicitly, and the bifurcations and typical behavior of rotation sets in the family are analyzed.
Abstract: We construct a family $$\{\varPhi _t\}_{t\in [0,1]}$$ of homeomorphisms of the two-torus isotopic to the identity, for which all of the rotation sets $$\rho (\varPhi _t)$$ can be described explicitly. We analyze the bifurcations and typical behavior of rotation sets in the family, providing insight into the general questions of toral rotation set bifurcations and prevalence. We show that there is a full measure subset of [0, 1], consisting of infinitely many mutually disjoint non-trivial closed intervals, on each of which the rotation set mode locks to a constant polygon with rational vertices; that the generic rotation set in the Hausdorff topology has infinitely many extreme points, accumulating on a single totally irrational extreme point at which there is a unique supporting line; and that, although $$\rho (\varPhi _t)$$ varies continuously with t, the set of extreme points of $$\rho (\varPhi _t)$$ does not. The family also provides examples of rotation sets for which an extreme point is not represented by any minimal invariant set, or by any directional ergodic measure.

36 citations


Journal Article
TL;DR: A survey of the general theory of extreme point models in statistics, i.e. statistical models given as the extreme points of the convex set of probability measures satisfying (in a general sense) a symmetry condition.
Abstract: We give a survey of the general theory of extreme point models in statistics, i.e. statistical models that are given as the extreme points of the convex set of probability measures satisfying (in a general sense) a symmetry condition. Special emphasis is paid to examples, some of which are only partially solved, some are classical and some are recent.

36 citations


Journal ArticleDOI
TL;DR: The authors introduce a family of natural normalized Loewner chains in the unit ball, called "geraumig" (spacious), which allow the construction, by means of suitable variations, of other normalized Loewner chains that coincide with the given ones from a certain time on.
Abstract: We introduce a family of natural normalized Loewner chains in the unit ball, which we call “geraumig”—spacious—which allow us to construct, by means of suitable variations, other normalized Loewner chains which coincide with the given ones from a certain time on. We apply our construction to the study of support points, extreme points, and time- $$\log M$$ -reachable mappings in the class $$S^0$$ of mappings admitting parametric representation.

34 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the Lipschitz free space over an ultrametric space is not isometric to $\ell_1(\Gamma)$ for any set $\Gamma$, and a lower bound for the Banach-Mazur distance was given in the finite case.
Abstract: We characterize metric spaces whose Lipschitz free space is isometric to $\ell_1$. In particular, we show that the Lipschitz free space over an ultrametric space is not isometric to $\ell_1(\Gamma)$ for any set $\Gamma$. We give a lower bound for the Banach-Mazur distance in the finite case.

22 citations


ReportDOI
TL;DR: This paper proposes a bootstrap-based procedure to build confidence intervals for single components of a partially identified parameter vector, and for smooth functions of such components, in moment (in)equality models, and explains why some restrictions that are not required by other methods are employed.
Abstract: This paper proposes a bootstrap-based procedure to build confidence intervals for single components of a partially identified parameter vector, and for smooth functions of such components, in moment (in)equality models. The extreme points of our confidence interval are obtained by maximizing/minimizing the value of the component (or function) of interest subject to the sample analog of the moment (in)equality conditions properly relaxed. The novelty is that the amount of relaxation, or critical level, is computed so that the component (or function) of θ, instead of θ itself, is uniformly asymptotically covered with prespecified probability. Calibration of the critical level is based on repeatedly checking feasibility of linear programming problems, rendering it computationally attractive. Computation of the extreme points of the confidence interval is based on a novel application of the response surface method for global optimization, which may prove of independent interest also for applications of other methods of inference in the moment (in)equalities literature. The critical level is by construction smaller (in finite sample) than the one used if projecting confidence regions designed to cover the entire parameter vector. Hence, our confidence interval is weakly shorter than the projection of established confidence sets (Andrews and Soares, 2010), if one holds the choice of tuning parameters constant. We provide simple conditions under which the comparison is strict. Our inference method controls asymptotic coverage uniformly over a large class of data generating processes. Our assumptions and those used in the leading alternative approach (a profiling based method) are not nested. We explain why we employ some restrictions that are not required by other methods and provide examples of models for which our method is uniformly valid but profiling based methods are not.

20 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a compact depth from extreme points (CDFP) method for imaging potential field sources, which yields an image of a quantity proportional to the source distribution (magnetization or density).
Abstract: We propose a fast method for imaging potential field sources. The new method is a variant of the "Depth from Extreme Points," which yields an image of a quantity proportional to the source distribution (magnetization or density). This transformed field is here converted into source-density units by determining a constant with adequate physical dimension through a linear regression of the observed field versus the field computed from the "Depth from Extreme Points" image. Such source images are often smooth and too extended, reflecting the loss of spatial resolution at increasing altitudes; consequently, they also present too low values of the source density. We show here that this initial image can be improved and made more compact to achieve a more realistic model, which reproduces a field consistent with the observed one. The new algorithm, called "Compact Depth from Extreme Points," iteratively produces different source distribution models with an increasing degree of compactness and, correspondingly, increasing source-density values. This is done by weighting the model with a compacting function. The compacting function may be conveniently expressed as a matrix that is modified at each iteration, based on the model obtained in the previous step. At each iteration the process may be stopped when the density reaches values higher than prefixed bounds based on known or assumed geological information. As no matrix inversion is needed, the method is fast and allows analysing massive datasets. Due to the high stability of the "Depth from Extreme Points" transformation, the algorithm may also be applied to any derivatives of the measured field, thus yielding an improved resolution. The method is investigated by application to 2D and 3D synthetic gravity source distributions, and the imaged sources are a good reconstruction of the geometry and density distributions of the causative bodies.
Finally, the method is applied to microgravity data to model underground crypts in St. Venceslas Church, Tovacov, Czech Republic.

Journal ArticleDOI
TL;DR: An algorithm to search for the convex combination of extreme points representing an arbitrary given multi-stochastic tensor is developed, and a new approach for the partially filled square problem under the framework of multi-stochastic tensors is given.
Abstract: Stochastic matrices play an important role in the study of probability theory and statistics, and are often used in a variety of modeling problems in economics, biology and operations research. Recently, the study of tensors and their applications became a hot topic in numerical analysis and optimization. In this paper, we focus on studying stochastic tensors and, in particular, we study the extreme points of a set of multi-stochastic tensors. Two necessary and sufficient conditions for a multi-stochastic tensor to be an extreme point are established. These conditions characterize the "generators" of multi-stochastic tensors. An algorithm to search for the convex combination of extreme points representing an arbitrary given multi-stochastic tensor is developed. Based on our obtained results, some expression properties for third-order, n-dimensional multi-stochastic tensors (n = 3 and 4) are derived, and all extreme points of 3-dimensional and 4-dimensional triply-stochastic tensors can be produced in a simple way. As an application, a new approach for the partially filled square problem under the framework of multi-stochastic tensors is given.
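The order-2 analogue of this decomposition is classical: by the Birkhoff-von Neumann theorem, the extreme points of the doubly stochastic matrices are the permutation matrices, and any doubly stochastic matrix is a convex combination of them. A greedy sketch of that matrix case (not the paper's tensor algorithm) makes the "convex combination of extreme points" idea concrete:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def birkhoff_decompose(M, tol=1e-9):
    """Greedy Birkhoff-von Neumann decomposition of a doubly stochastic
    matrix into a convex combination of permutation matrices -- the extreme
    points of the Birkhoff polytope (the order-2 analogue of the tensor case)."""
    M = M.copy()
    terms = []
    while M.max() > tol:
        # a zero-cost assignment is a permutation supported on positive entries;
        # one exists while M keeps equal positive row and column sums
        rows, cols = linear_sum_assignment(np.where(M > tol, 0.0, 1.0))
        P = np.zeros_like(M)
        P[rows, cols] = 1.0
        w = M[rows, cols].min()
        terms.append((w, P))
        M -= w * P
    return terms

M = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.25, 0.50],
              [0.25, 0.25, 0.50]])
terms = birkhoff_decompose(M)
print(sum(w for w, _ in terms))   # weights sum to 1
```

Each iteration peels off one permutation matrix with the largest weight its support allows, zeroing at least one entry, so the loop terminates after at most as many steps as positive entries.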

Journal ArticleDOI
TL;DR: In this article, it was shown that some characterizations of polyhedral Lindenstrauss spaces, based on Zippin's result, are false, whereas some others remain unproven, and a correct proof for those characterizations is provided.
Abstract: We present a Lindenstrauss space with an extreme point that does not contain a subspace linearly isometric to c. This example disproves a result stated by Zippin in a paper published in 1969 and it shows that some classical characterizations of polyhedral Lindenstrauss spaces, based on Zippin’s result, are false, whereas some others remain unproven; then we provide a correct proof for those characterizations. Finally, we also disprove a characterization of polyhedral Lindenstrauss spaces given by Lazar, in terms of the compact norm-preserving extension of compact operators, and we give an equivalent condition for a Banach space X to satisfy this property.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the sets DF(β) of digit frequencies of β-expansions of numbers in [0, 1] and showed that the generic digit frequency set has infinitely many extreme points, accumulating on a single non-rational extreme point whose components are rationally independent.
Abstract: We study the sets DF(β) of digit frequencies of β-expansions of numbers in [0,1]. We show that DF(β) is a compact convex set with countably many extreme points which varies continuously with β; that there is a full measure collection of non-trivial closed intervals on each of which DF(β) mode locks to a constant polytope with rational vertices; and that the generic digit frequency set has infinitely many extreme points, accumulating on a single non-rational extreme point whose components are rationally independent.
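The objects DF(β) studied here are frequencies of digits produced by β-expansions. A minimal sketch of how one such frequency vector arises, using the greedy expansion (the starting point 0.37 and the digit count are arbitrary illustrative choices):

```python
import math
from collections import Counter

def beta_digit_frequencies(x, beta, n_digits=10_000):
    """Digit frequencies of the greedy beta-expansion of x in [0, 1):
    repeatedly multiply by beta and split off the integer part."""
    counts = Counter()
    for _ in range(n_digits):
        x *= beta
        d = math.floor(x)
        x -= d
        counts[d] += 1
    return {d: c / n_digits for d, c in counts.items()}

beta = (1 + math.sqrt(5)) / 2        # golden mean: digits are 0 or 1
f = beta_digit_frequencies(0.37, beta)
print(f)
```

Since 1 < β < 2 here, every digit is 0 or 1, and the frequency pair computed above is (approximately) one point of the convex set DF(β) whose extreme points the paper analyzes.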

Journal ArticleDOI
TL;DR: In this paper, the authors investigated some generalizations of classes of harmonic functions and obtained coefficients estimates distortion theorems and integral mean inequalities in these classes of functions by using the extreme points theory.

Journal ArticleDOI
TL;DR: In this article, a necessary condition for L ∈ L_{m,n} to be an extreme point was given, and it was shown that generically this condition is also sufficient.

Journal ArticleDOI
TL;DR: In this article, the dynamics of doubly stochastic quadratic operators (d.s.q.o.) on a finite-dimensional simplex is studied, and it is shown that if a d.s.q.o. has no periodic points then the trajectory of any initial point inside the simplex is convergent.
Abstract: The present paper focuses on the dynamics of doubly stochastic quadratic operators (d.s.q.o.) on a finite-dimensional simplex. We prove that if a d.s.q.o. has no periodic points then the trajectory of any initial point inside the simplex is convergent. We show that if a d.s.q.o. is not a permutation then it has no periodic points on the interior of the two-dimensional (2D) simplex. We also show that this property fails in higher dimensions. In addition, the paper discusses the dynamics classification of extreme points of d.s.q.o. on the two-dimensional simplex. We provide some examples of d.s.q.o. which have the property that the trajectory of any initial point tends to the center of the simplex. We also provide an example of a d.s.q.o. that has infinitely many fixed points and infinitely many invariant curves. Finally, we classify the dynamics of extreme points of d.s.q.o. on the 2D simplex.

Journal ArticleDOI
TL;DR: In this article, a probability measure on the set of all solutions to the Cauchy problem ẋ ∈ F(x), x(0) = 0, is constructed; with probability one, the derivatives of these random solutions take values within the set ext F(x) of extreme points for a.e. time t.
Abstract: Given a Lipschitz continuous multifunction F on \({\mathbb{R}^{n}}\), we construct a probability measure on the set of all solutions to the Cauchy problem \(\dot{x}\in F(x)\) with x(0) = 0. With probability one, the derivatives of these random solutions take values within the set extF(x) of extreme points for a.e. time t. This provides an alternative approach in the analysis of solutions to differential inclusions with non-convex right hand side.


Book ChapterDOI
01 Jan 2016
TL;DR: In this article, the dynamics of extreme doubly stochastic quadratic operators (d.s.q.o.) on the 2D simplex is studied, and it is shown that the trajectory of an extreme d.s.q.o., starting at some interior point of the simplex, is convergent.
Abstract: We study the dynamics of extreme doubly stochastic quadratic operators (d.s.q.o.) on two dimensional (2D) simplex. We provide some examples of d.s.q.o. which have infinitely many fixed points. We prove that the trajectory of extreme d.s.q.o., starting at some interior point of the simplex is convergent. Finally, we classify the dynamics of all extreme points of d.s.q.o. on 2D.

Journal ArticleDOI
TL;DR: In this paper, it was shown that every extreme measure can be represented by a linear combination of k Dirac probability measures with nonnegative coefficients, where k is the number of restrictions on moments.
Abstract: Necessary and sufficient conditions for a measure to be an extreme point of the set of measures on a given measurable space with prescribed generalized moments are given, as well as an application to extremal problems over such moment sets; these conditions are expressed in terms of atomic partitions of the measurable space. It is also shown that every such extreme measure can be adequately represented by a linear combination of k Dirac probability measures with nonnegative coefficients, where k is the number of restrictions on moments; moreover, when the measurable space has appropriate topological properties, the phrase “can be adequately represented by” here can be replaced simply by “is”. Applications to specific extremal problems are also given, including an exact lower bound on the exponential moments of truncated random variables, exact lower bounds on generalized moments of the interarrival distribution in queuing systems, and probability measures on product spaces with prescribed generalized marginal moments.

Journal ArticleDOI
TL;DR: In this paper, a large class of indecomposable positive linear maps from the matrix algebra into the matrix algebra is constructed, which generate exposed extreme rays of the convex cone of all positive maps.
Abstract: We construct a large class of indecomposable positive linear maps from the matrix algebra into the matrix algebra, which generate exposed extreme rays of the convex cone of all positive maps. We show that extreme points of the dual faces for separable states arising from these maps are parametrized by the Riemann sphere, and the convex hulls of the extreme points arising from a circle parallel to the equator have exactly the same properties with the convex hull of the trigonometric moment curve studied from combinatorial topology. Any interior points of the dual faces are boundary separable states with full ranks. We exhibit concrete examples of such states.

Journal ArticleDOI
TL;DR: In this paper, a sufficient condition for this to happen at extreme points in the optimal solution set is given, which essentially amounts to saying that the number of elements containing material at a solution must be greater than the order of the matrix.
Abstract: Many problems in structural optimization can be formulated as a minimization of the maximum eigenvalue of a symmetric matrix. In practice it is often observed that the maximum eigenvalue has multiplicity greater than one close to or at optimal solutions. In this note we give a sufficient condition for this to happen at extreme points in the optimal solution set. If, as in topology optimization, each design variable determines the amount of material in a finite element in the design domain then this condition essentially amounts to saying that the number of elements containing material at a solution must be greater than the order of the matrix.
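The coalescence phenomenon is easy to see in one parameter: the maximum eigenvalue is nonsmooth exactly where eigenvalues meet, and minimizers tend to sit at that kink. A tiny illustration (my own toy matrix family, not from the paper):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# A(t) = diag(1 + t, 1 - t): the largest eigenvalue is 1 + |t|, which is
# nonsmooth exactly where the two eigenvalues coalesce -- and that kink
# is the minimizer, illustrating multiplicity > 1 at the optimum.
def max_eig(t):
    return np.linalg.eigvalsh(np.diag([1.0 + t, 1.0 - t]))[-1]

res = minimize_scalar(max_eig, bounds=(-2.0, 2.0), method="bounded")
eigs = np.linalg.eigvalsh(np.diag([1.0 + res.x, 1.0 - res.x]))
print(res.x, eigs)   # t near 0; both eigenvalues near 1 (multiplicity two)
```

This nonsmoothness at multiple eigenvalues is precisely why specialized optimality conditions, like the sufficient condition in the note above, are needed for eigenvalue optimization.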

Proceedings ArticleDOI
01 Jan 2016
TL;DR: This study addresses the size issue by generalizing the density of S ⊆ V to the f-density w(S)/f(|S|), and introduces a flow-based combinatorial exact algorithm for unweighted graphs that runs in O(n^3) time.
Abstract: Given an edge-weighted undirected graph G = (V, E, w), the density of S ⊆ V is defined as w(S)/|S|, where w(S) is the sum of weights of the edges in the subgraph induced by S. The densest subgraph problem asks for S ⊆ V that maximizes the density w(S)/|S|. The problem has received significant attention recently because it can be solved exactly in polynomial time. However, the densest subgraph problem has a drawback: it may happen that the obtained subset is too large or too small in comparison with the desired size of the output. In this study, we address the size issue by generalizing the density of S ⊆ V. Specifically, we introduce the f-density of S ⊆ V, defined as w(S)/f(|S|), where f : Z_{≥0} → R_{≥0} is a monotonically non-decreasing function. In the f-densest subgraph problem (f-DS), we are asked to find S ⊆ V that maximizes the f-density w(S)/f(|S|). Although f-DS does not explicitly specify the size of the output subset of vertices, we can handle the above size issue using a convex size function f or a concave size function f appropriately. For f-DS with convex function f, we propose a nearly-linear-time algorithm with a provable approximation guarantee. In particular, for f-DS with f(x) = x^α (α ∈ [1, 2]), our algorithm has an approximation ratio of 2·n^{(α-1)(2-α)}. On the other hand, for f-DS with concave function f, we propose a linear-programming-based polynomial-time exact algorithm. It should be emphasized that this algorithm obtains not only an optimal solution to the problem but also subsets of vertices corresponding to the extreme points of the upper convex hull of {(|S|, w(S)) | S ⊆ V}, which we refer to as the dense frontier points. We also propose a flow-based combinatorial exact algorithm for unweighted graphs that runs in O(n^3) time. Finally, we propose a nearly-linear-time 3-approximation algorithm.
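On toy instances the effect of the size function f is easy to observe by brute force (exponential enumeration, purely illustrative, not the paper's algorithms):

```python
from itertools import combinations

def f_densest(edges, nodes, f):
    """Brute-force f-densest subgraph: maximize w(S) / f(|S|) over all
    nonempty S (unit edge weights; exponential, for toy instances only)."""
    best, best_S = float("-inf"), None
    for k in range(1, len(nodes) + 1):
        for S in combinations(nodes, k):
            Sset = set(S)
            w = sum(1 for u, v in edges if u in Sset and v in Sset)
            if w / f(k) > best:
                best, best_S = w / f(k), Sset
    return best_S, best

# Triangle {0, 1, 2} plus a pendant vertex 3
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
nodes = [0, 1, 2, 3]
print(f_densest(edges, nodes, lambda s: s))        # classic density: the triangle wins
print(f_densest(edges, nodes, lambda s: s ** 0.5)) # concave f: the whole graph wins
```

Making f grow faster (convex) penalizes large subsets, while a concave f such as the square root rewards them, which is exactly the size-control lever the abstract describes.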

Proceedings ArticleDOI
01 Jun 2016
TL;DR: This paper proves that any set of n points in general position in the plane has at least Ω(2.631^n) geometric triangulations, and provides tight lower bounds for the number of triangulations of point sets with up to 15 points, which further support the double circle conjecture.
Abstract: Upper and lower bounds for the number of geometric graphs of specific types on a given set of points in the plane have been intensively studied in recent years. For most classes of geometric graphs it is now known that point sets in convex position minimize their number. However, it is still unclear which point sets minimize the number of geometric triangulations; the so-called double circles are conjectured to be the minimizing sets. In this paper we prove that any set of n points in general position in the plane has at least Ω(2.631^n) geometric triangulations. Our result improves the previously best general lower bound of Ω(2.43^n) and also covers the previously best lower bound of Ω(2.63^n) for a fixed number of extreme points. We achieve our bound by showing and combining several new results, which are of independent interest: (1) Adding a point on the second convex layer of a given point set (of 7 or more points) at least doubles the number of triangulations. (2) Generalized configurations of points that minimize the number of triangulations have at most n/2 points on their convex hull. (3) We provide tight lower bounds for the number of triangulations of point sets with up to 15 points. These bounds further support the double circle conjecture.
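A useful reference point for these bounds: for n points in convex position, the number of triangulations is exactly the Catalan number C_{n-2}, computable by the classic recurrence (a standard fact, not a result of this paper):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def convex_triangulations(n):
    """Number of triangulations of a convex polygon with n vertices
    (the Catalan number C_{n-2}): the triangle on a fixed hull edge
    chooses its apex k, splitting the polygon into two smaller ones."""
    if n <= 3:
        return 1
    return sum(convex_triangulations(k) * convex_triangulations(n - k + 1)
               for k in range(2, n))

print(convex_triangulations(4), convex_triangulations(12))   # 2 16796
```

Since C_{n-2} grows roughly like 4^n, convex position comfortably exceeds the Ω(2.631^n) general-position lower bound, consistent with the conjecture that the minimizers are double circles rather than convex sets.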

Journal ArticleDOI
TL;DR: Experimental results show that: (1) SPA can very effectively detect and discard the interior points; and (2) CudaChain achieves 5×–6× speedups over the famous Qhull implementation for 20M points.
Abstract: This paper presents an alternative GPU-accelerated convex hull algorithm and a novel Sorting-based Preprocessing Approach (SPA) for planar point sets. The proposed convex hull algorithm, termed CudaChain, consists of two stages: (1) two rounds of preprocessing performed on the GPU and (2) the finalization of calculating the expected convex hull on the CPU. Interior points lying inside a quadrilateral formed by four extreme points are first discarded, and then the remaining points are distributed into several (typically four) subregions. For each subset of points, the points are first sorted in parallel; then the second round of discarding is performed using SPA; and finally a simple chain is formed for the current remaining points. A simple polygon can be easily generated by directly connecting all the chains in the subregions. The expected convex hull of the input points can finally be obtained by calculating the convex hull of this simple polygon. The library Thrust is utilized to realize the parallel sorting, reduction, and partitioning for better efficiency and simplicity. Experimental results show that: (1) SPA can very effectively detect and discard the interior points; and (2) CudaChain achieves 5×–6× speedups over the famous Qhull implementation for 20M points.
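The first discarding round resembles the classic Akl-Toussaint heuristic: no point strictly inside the quadrilateral of the four axis-extreme points can lie on the convex hull. A sequential CPU sketch of that step (the GPU version in the paper parallelizes this; the function name is mine):

```python
import numpy as np

def akl_toussaint_filter(pts):
    """Keep only candidate hull points: discard every point strictly inside
    the quadrilateral spanned by the min-x, min-y, max-x, max-y points
    (an Akl-Toussaint-style preprocessing step; assumes these four extreme
    points are distinct)."""
    quad = pts[[pts[:, 0].argmin(), pts[:, 1].argmin(),
                pts[:, 0].argmax(), pts[:, 1].argmax()]]   # counter-clockwise

    def strictly_inside(p):
        for i in range(4):
            a, b = quad[i], quad[(i + 1) % 4]
            cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
            if cross <= 0:          # on or right of an edge -> not interior
                return False
        return True

    return pts[[not strictly_inside(p) for p in pts]]

rng = np.random.default_rng(1)
pts = rng.random((1000, 2))
print(len(akl_toussaint_filter(pts)))   # typically far fewer than 1000 survive
```

Discarding is safe because the quadrilateral's corners are themselves input points, so its strict interior is contained in the hull's interior; the surviving points yield the same convex hull as the full set.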

Posted Content
TL;DR: In this article, a new characterization of the so-called quasi-extreme multipliers of the Drury-Arveson space was given and it was shown that every such multiplier is an extreme point of the unit ball of the multiplier algebra.
Abstract: We give a new characterization of the so-called quasi-extreme multipliers of the Drury-Arveson space $H^2_d$, and show that every quasi-extreme multiplier is an extreme point of the unit ball of the multiplier algebra of $H^2_d$.

Journal ArticleDOI
TL;DR: In this paper, the existence of extreme points in compact convex subsets of asymmetric normed spaces was investigated, and the authors gave a geometric description of all such subsets.

Journal ArticleDOI
01 Mar 2016
TL;DR: The enumerative technique developed not only finds the set of efficient solutions but also a corresponding fuzzy solution, enabling the decision maker to operate in the range obtained.
Abstract: Highlights: (1) Multiobjective fixed charge problem with fractional objective functions. (2) Imprecise nature of objectives: fuzzy coefficients and fuzzy fixed charges. (3) Systematic enumeration of the extreme points of the feasible region. (4) A ranking function is employed to deal with fuzziness, and a set of efficient solutions is obtained. (5) A numerical example is provided to illustrate the algorithm. The fixed charge problem is a special type of nonlinear programming problem which forms the basis of many industry problems wherein a charge is associated with performing an activity. In real world situations, the information provided by the decision maker regarding the coefficients of the objective functions may not be of a precise nature. This paper aims to describe a solution algorithm for solving such a fixed charge problem having multiple fractional objective functions which are all of a fuzzy nature. The enumerative technique developed not only finds the set of efficient solutions but also a corresponding fuzzy solution, enabling the decision maker to operate in the range obtained. A real life numerical example in the context of the ship routing problem is presented to illustrate the proposed method.
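A ranking function maps each fuzzy coefficient to a single real number so that fuzzy objective values can be compared at the enumerated extreme points. One common choice for triangular fuzzy numbers is the graded mean (a, b, c) ↦ (a + 2b + c)/4; the paper's specific ranking function may differ, so this is only an illustrative sketch:

```python
def rank_triangular(tfn):
    """Graded-mean ranking index of a triangular fuzzy number (a, b, c):
    R = (a + 2b + c) / 4.  One common choice of ranking function;
    the paper's specific choice may differ."""
    a, b, c = tfn
    return (a + 2 * b + c) / 4

# Two hypothetical fuzzy fixed charges, both "about 4"
print(rank_triangular((2, 4, 6)), rank_triangular((3, 4, 7)))   # 4.0 4.5
```

Note that two fuzzy numbers with the same modal value can still rank differently: the right-skewed (3, 4, 7) ranks above the symmetric (2, 4, 6), which is how the algorithm discriminates between otherwise similar fuzzy costs.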

Journal ArticleDOI
TL;DR: Two d.p. (difference of polyhedral convex functions) programming models, unconstrained and linearly constrained, are studied in a finite-dimensional setting via exact formulae for the Fréchet and Mordukhovich subdifferentials.
Abstract: This paper is concerned with two d.p. (difference of polyhedral convex functions) programming models, unconstrained and linearly constrained, in a finite-dimensional setting. We obtain exact formulae for the Fréchet and Mordukhovich subdifferentials of a d.p. function. We establish optimality conditions via subdifferentials in the sense of convex analysis, of Fréchet and of Mordukhovich, and describe their relationships. Existence and computation of descent and steepest descent directions for both the models are also studied.