Author

Dongdong Ge

Bio: Dongdong Ge is an academic researcher from Shanghai University of Finance and Economics. The author has contributed to research in Mathematics and Computer Science, has an h-index of 15, and has co-authored 33 publications receiving 948 citations. Previous affiliations of Dongdong Ge include Stony Brook University and Stanford University.

Papers
Journal ArticleDOI
TL;DR: It is proved that finding the global minimal value of the problem is strongly NP-hard, but computing a local minimizer of the problem can be done in polynomial time.
Abstract: We discuss the $L_p$ ($0 \le p < 1$) minimization problem arising from sparse solution construction and compressed sensing. For any fixed $0 < p < 1$, we prove that finding the global minimal value of the problem is strongly NP-hard, but computing a local minimizer of the problem can be done in polynomial time. We also develop an interior-point potential reduction algorithm with a provable complexity bound and present preliminary computational results demonstrating the effectiveness of the algorithm.
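The local-minimizer result motivates simple iterative schemes in practice. Below is a minimal sketch of one widely used approach, iteratively reweighted least squares (IRLS), for finding a local minimizer of the common compressed-sensing formulation $\min \|x\|_p^p$ subject to $Ax = b$. This is not the paper's interior-point potential reduction algorithm; the smoothing parameter, iteration count, and toy data are illustrative assumptions.

```python
import numpy as np

def irls_lp(A, b, p=0.5, eps=1e-6, iters=50):
    """IRLS sketch for min ||x||_p^p s.t. Ax = b, with 0 < p < 1.

    Each step solves a weighted minimum-norm problem
        min sum_i w_i * x_i^2  s.t.  Ax = b,
    with weights w_i = (x_i^2 + eps)^(p/2 - 1), whose closed form is
        x = W^{-1} A^T (A W^{-1} A^T)^{-1} b.
    This targets a local minimizer, not the (strongly NP-hard) global one.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # least-squares starting point
    for _ in range(iters):
        w_inv = (x**2 + eps) ** (1 - p / 2)    # entries of W^{-1}
        AW = A * w_inv                          # A W^{-1}
        x = w_inv * (A.T @ np.linalg.solve(AW @ A.T, b))
    return x

# Toy demo: recover a sparse vector from a few random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50); x_true[[3, 17, 41]] = [1.0, -2.0, 0.5]
x_hat = irls_lp(A, A @ x_true, p=0.5)
print(np.round(x_hat[[3, 17, 41]], 3))
```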

274 citations

Journal ArticleDOI
TL;DR: Theoretical results show that the minimizers of the $L_q$-$L_p$ minimization problem have various attractive features due to the concavity and non-Lipschitzian property of the regularization function.
Abstract: We consider the unconstrained $L_q$-$L_p$ minimization: find a minimizer of $\Vert Ax-b\Vert^q_q + \lambda \Vert x\Vert^p_p$ for given $A \in R^{m\times n}$, $b\in R^m$ and parameters $\lambda >0$, $p\in [0, 1)$ and $q\ge 1$. This problem has been studied extensively in many areas. In particular, for the case when $q=2$, this problem is known as the $L_2$-$L_p$ minimization problem and has found applications in variable selection and sparse least squares fitting for high dimensional data. Theoretical results show that the minimizers of the $L_q$-$L_p$ problem have various attractive features due to the concavity and non-Lipschitzian property of the regularization function $\Vert \cdot \Vert^p_p$. In this paper, we show that the $L_q$-$L_p$ minimization problem is strongly NP-hard for any $p\in [0,1)$ and $q\ge 1$, including its smoothed version. On the other hand, we show that, by choosing the parameters $(p,\lambda)$ carefully, a minimizer, global or local, will have certain desired sparsity. We believe that these results provide new theoretical insights into the study and applications of concave regularized optimization problems.
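Since the exact problem is strongly NP-hard, practical methods settle for stationary points. Below is a minimal sketch of plain gradient descent on a smoothed $L_2$-$L_p$ objective, $\|Ax-b\|_2^2 + \lambda \sum_i (x_i^2+\epsilon)^{p/2}$, in the spirit of the "smoothed version" the abstract mentions; the stepsize rule, the value of $\epsilon$, and the toy data are illustrative assumptions, not the paper's method.

```python
import numpy as np

def smoothed_l2lp(A, b, lam=0.05, p=0.5, eps=1e-4, iters=2000):
    """Gradient descent on f(x) = ||Ax-b||_2^2 + lam * sum_i (x_i^2+eps)^(p/2).

    The eps-smoothing makes the concave L_p regularizer differentiable,
    so plain gradient descent reaches a stationary point (a local solution).
    """
    step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)  # 1/L for the quadratic part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2 * A.T @ (A @ x - b) + lam * p * x * (x**2 + eps) ** (p / 2 - 1)
        x = x - step * grad
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60); x_true[[5, 20]] = [2.0, -1.0]
x_hat = smoothed_l2lp(A, A @ x_true)
print("largest entries at indices:", np.argsort(-np.abs(x_hat))[:4])
```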

120 citations

Journal ArticleDOI
TL;DR: In this paper, piecewise linearization methods are reviewed and their computational efficiency is analyzed; extra binary variables, continuous variables, and constraints are introduced to reformulate the original problem.
Abstract: Various optimization problems in engineering and management are formulated as nonlinear programming problems. Because of the nonconvex nature of such problems, no efficient approach is available to derive their global optimum. How to locate a global optimal solution of a nonlinear programming problem is an important issue in optimization theory. In the last few decades, piecewise linearization methods have been widely applied to convert a nonlinear programming problem into a linear programming problem or a mixed-integer convex programming problem for obtaining an approximated global optimal solution. In the transformation process, extra binary variables, continuous variables, and constraints are introduced to reformulate the original problem. These extra variables and constraints largely determine the solution efficiency of the converted problem. This study therefore reviews piecewise linearization methods and analyzes their computational efficiency.
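To make the transformation concrete, here is a minimal sketch of the classic "lambda method" (convex-combination) piecewise linearization: one continuous weight per breakpoint and one binary variable per segment, exactly the kind of extra variables and constraints the abstract refers to. It assumes the PuLP library with its bundled CBC solver; the target function and breakpoints are illustrative choices, not taken from the paper.

```python
import math
import pulp

# Nonconvex function to approximate, with breakpoints on [0, 2*pi].
f = lambda x: x * math.sin(x)
xs = [2 * math.pi * i / 10 for i in range(11)]   # 11 breakpoints, 10 segments
n = len(xs)

prob = pulp.LpProblem("pwl_min", pulp.LpMinimize)
lam = [pulp.LpVariable(f"lam_{i}", lowBound=0) for i in range(n)]    # weights
z = [pulp.LpVariable(f"z_{j}", cat="Binary") for j in range(n - 1)]  # segment choice

prob += pulp.lpSum(lam[i] * f(xs[i]) for i in range(n))  # PWL objective value
prob += pulp.lpSum(lam) == 1
prob += pulp.lpSum(z) == 1
for i in range(n):  # adjacency: lam_i > 0 only at endpoints of the chosen segment
    prob += lam[i] <= pulp.lpSum(z[j] for j in (i - 1, i) if 0 <= j < n - 1)

prob.solve(pulp.PULP_CBC_CMD(msg=0))
x_opt = sum(pulp.value(lam[i]) * xs[i] for i in range(n))
print("approx minimizer:", round(x_opt, 3),
      "value:", round(pulp.value(prob.objective), 3))
```

The adjacency constraints are what force the interpolation to stay on a single linear segment; dropping them would relax the model to the convex hull of the breakpoints.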

102 citations

Posted Content
TL;DR: In this paper, the authors consider the unconstrained $L_2$-$L_p$ minimization problem and show that it is strongly NP-hard for any $\lambda > 0$ and $p \in [0,1)$.
Abstract: We consider the unconstrained $L_2$-$L_p$ minimization: find a minimizer of $\|Ax-b\|^2_2+\lambda \|x\|^p_p$ for given $A \in R^{m\times n}$, $b\in R^m$ and parameters $\lambda>0$, $p\in [0,1)$. This problem has been studied extensively in variable selection and sparse least squares fitting for high dimensional data. Theoretical results show that the minimizers of the $L_2$-$L_p$ problem have various attractive features due to the concavity and non-Lipschitzian property of the regularization function $\|\cdot\|^p_p$. In this paper, we show that the $L_2$-$L_p$ minimization problem is strongly NP-hard for any $p\in [0,1)$, including its smoothed version. On the other hand, we show that, by choosing the parameters $(p,\lambda)$ carefully, a minimizer, global or local, will have certain desired sparsity. We believe that these results provide new theoretical insights into the study and applications of concave regularized optimization problems.

90 citations

01 Jan 2007
TL;DR: The notion of minimizing the maximal length of a tour in MDVRP is explored, and a heuristic method based on region partitioning, which is potentially useful for general network applications, is introduced.
Abstract: The Multi-Depot Vehicle Routing Problem (MDVRP) is a generalization of the Single-Depot Vehicle Routing Problem (SDVRP) in which vehicle(s) start from multiple depots and return to their depots of origin at the end of their assigned tours. The traditional objective in MDVRP is to minimize the total length of all the tours, and existing literature handles this problem with a variety of assumptions and constraints. In this paper, we explore the notion of minimizing the maximal length of a tour in MDVRP ("min-max MDVRP"). We also introduce a heuristic method based on region partitioning, which is potentially useful for general network applications. A comparison of the computational implementations for three heuristics is included. Although this model is advantageous for real-world applications, to the best of our knowledge no prior work has addressed this min-max objective.
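A minimal sketch of the region-partitioning idea for the min-max objective: partition customers by nearest depot, build each depot's tour with a nearest-neighbor heuristic, and report the longest tour. This is an illustrative simplification, not the paper's heuristics; the coordinates and the one-vehicle-per-depot assumption are made up for the demo.

```python
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nn_tour_length(depot, customers):
    """Nearest-neighbor tour depot -> customers -> depot; returns its length."""
    rest, cur, length = list(customers), depot, 0.0
    while rest:
        nxt = min(rest, key=lambda c: dist(cur, c))
        rest.remove(nxt)
        length += dist(cur, nxt)
        cur = nxt
    return length + dist(cur, depot)  # close the tour

def minmax_mdvrp(depots, customers):
    """Region partition: each customer is assigned to its nearest depot;
    the min-max cost is the longest of the resulting per-depot tours."""
    regions = {d: [] for d in depots}
    for c in customers:
        regions[min(depots, key=lambda d: dist(c, d))].append(c)
    return max(nn_tour_length(d, cs) for d, cs in regions.items() if cs)

random.seed(0)
depots = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]
customers = [(random.uniform(0, 10), random.uniform(0, 8)) for _ in range(30)]
print("min-max tour length:", round(minmax_mdvrp(depots, customers), 2))
```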

79 citations


Cited by
Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, and newer models, including those related to the WWW, are touched on.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations

Journal Article
TL;DR: In this paper, integer programming formulations for four types of discrete hub location problems are presented: the p-hub median problem, the uncapacitated hub location problem, p-hub center problems, and hub covering problems.
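As a concrete illustration of the first of these problems, here is a minimal brute-force sketch of the single-allocation p-hub median objective for tiny instances: enumerate hub sets, allocate each node to its nearest hub (a common heuristic; optimal allocation is itself combinatorial), and score the discounted inter-hub flows. The flow and distance data and the discount factor alpha are illustrative assumptions, not a formulation from the paper.

```python
import itertools
import random

def phub_median(dist, flow, p, alpha=0.6):
    """Brute-force single-allocation p-hub median on n nodes.

    Routing flow w_ij via hubs k(i), k(j) costs
        w_ij * (d[i][k(i)] + alpha * d[k(i)][k(j)] + d[k(j)][j]),
    where alpha < 1 discounts the consolidated inter-hub leg.
    Allocation rule: each node uses its nearest open hub (heuristic).
    """
    n = len(dist)
    best = (float("inf"), None)
    for hubs in itertools.combinations(range(n), p):
        alloc = [min(hubs, key=lambda k: dist[i][k]) for i in range(n)]
        cost = sum(flow[i][j] * (dist[i][alloc[i]]
                                 + alpha * dist[alloc[i]][alloc[j]]
                                 + dist[alloc[j]][j])
                   for i in range(n) for j in range(n))
        best = min(best, (cost, hubs))
    return best

random.seed(0)
n = 7
pts = [(random.random(), random.random()) for _ in range(n)]
dist = [[((a[0]-b[0])**2 + (a[1]-b[1])**2) ** 0.5 for b in pts] for a in pts]
flow = [[random.randint(0, 5) for _ in range(n)] for _ in range(n)]
cost, hubs = phub_median(dist, flow, p=2)
print("hubs:", hubs, "total cost:", round(cost, 3))
```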

727 citations

Journal ArticleDOI
TL;DR: The AG method is generalized to solve nonconvex and possibly stochastic optimization problems, and it is demonstrated that by properly specifying the stepsize policy, the AG method exhibits the best known rate of convergence for solving general nonconvex smooth optimization problems using first-order information, similarly to the gradient descent method.
Abstract: In this paper, we generalize the well-known Nesterov's accelerated gradient (AG) method, originally designed for convex smooth optimization, to solve nonconvex and possibly stochastic optimization problems. We demonstrate that by properly specifying the stepsize policy, the AG method exhibits the best known rate of convergence for solving general nonconvex smooth optimization problems by using first-order information, similarly to the gradient descent method. We then consider an important class of composite optimization problems and show that the AG method can solve them uniformly, i.e., by using the same aggressive stepsize policy as in the convex case, even if the problem turns out to be nonconvex. We demonstrate that the AG method exhibits an optimal rate of convergence if the composite problem is convex, and improves the best known rate of convergence if the problem is nonconvex. Based on the AG method, we also present new nonconvex stochastic approximation methods and show that they can improve a few existing rates of convergence for nonconvex stochastic optimization. To the best of our knowledge, this is the first time that the convergence of the AG method has been established for solving nonconvex nonlinear programming in the literature.
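Below is a minimal sketch of the accelerated gradient iteration applied to a nonconvex smooth function. It uses the textbook momentum schedule and a fixed stepsize, not the specific stepsize policy analyzed in the paper, and the double-well test function is an illustrative assumption.

```python
import numpy as np

def grad_double_well(x):
    """Gradient of the nonconvex double-well f(x) = sum_i (x_i^2 - 1)^2 / 4."""
    return x * (x**2 - 1)

def accelerated_gradient(grad, x0, step=0.05, iters=2000):
    """Nesterov-style AG: extrapolate with momentum, then take a gradient
    step at the extrapolated point. On nonconvex problems this converges
    to a stationary point under a suitably small stepsize."""
    x_prev = x = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        y = x + (k - 1) / (k + 2) * (x - x_prev)  # momentum extrapolation
        x_prev, x = x, y - step * grad(y)
    return x

# Starting near one basin, AG settles at a stationary point of each coordinate.
print(np.round(accelerated_gradient(grad_double_well, [2.0, -0.5]), 4))  # ~ [1, -1]
```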

578 citations