
Showing papers in "Optimization and Engineering in 2018"


Journal ArticleDOI
TL;DR: This paper proposes a method that first iteratively solves a set of regularized MPCCs with an off-the-shelf nonlinear solver to find a locally optimal solution, then uses this local optimal information to reduce the computational burden of solving the Fortuny-Amat reformulation of the MPCC to global optimality with off-the-shelf mixed-integer solvers.
Abstract: Many optimization models in engineering are formulated as bilevel problems. Bilevel optimization problems are mathematical programs where a subset of variables is constrained to be an optimal solution of another mathematical program. Due to the lack of optimization software that can directly handle and solve bilevel problems, most existing solution methods reformulate the bilevel problem as a mathematical program with complementarity conditions (MPCC) by replacing the lower-level problem with its necessary and sufficient optimality conditions. MPCCs are single-level non-convex optimization problems that do not satisfy the standard constraint qualifications and therefore, nonlinear solvers may fail to provide even local optimal solutions. In this paper we propose a method that first solves iteratively a set of regularized MPCCs using an off-the-shelf nonlinear solver to find a local optimal solution. Local optimal information is then used to reduce the computational burden of solving the Fortuny-Amat reformulation of the MPCC to global optimality using off-the-shelf mixed-integer solvers. This method is tested using a wide range of randomly generated examples. The results show that our method outperforms existing general-purpose methods in terms of computational burden and global optimality.
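For readers unfamiliar with the two reformulations mentioned above, here is a hedged, generic illustration (not the paper's notation): a scalar complementarity condition $$0 \le x \perp y \ge 0$$, i.e. $$x \ge 0$$, $$y \ge 0$$, $$xy = 0$$, can be relaxed by the regularization $$x \ge 0,\; y \ge 0,\; xy \le \epsilon_k$$ with $$\epsilon_k \downarrow 0$$, which is what the iteratively solved regularized MPCCs impose, or rewritten exactly in Fortuny-Amat (big-M) form as $$x \le Mz,\; y \le M(1-z),\; z \in \{0,1\}$$ for a sufficiently large constant $$M$$. The local solution obtained from the regularized problems can then, for example, serve as a warm start or a source of bounds for the resulting mixed-integer program; the precise way the authors exploit it is described in the paper.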

55 citations


Journal ArticleDOI
TL;DR: The ultimate goal of the work is the design of inversion methods that integrate complementary data, and rigorously follow mathematical and physical principles, in an attempt to support clinical decision making, which requires reliable, high-fidelity algorithms with a short time-to-solution.
Abstract: PDE-constrained optimization problems find many applications in medical image analysis, for example, neuroimaging, cardiovascular imaging, and oncologic imaging. We review the related literature and give examples of the formulation, discretization, and numerical solution of PDE-constrained optimization problems for medical imaging. We discuss three examples. The first is image registration, the second is data assimilation for brain tumor patients, and the third is data assimilation in cardiovascular imaging. The image registration problem is a classical task in medical image analysis and seeks to find pointwise correspondences between two or more images. Data assimilation problems use a PDE-constrained formulation to link a biophysical model to patient-specific data obtained from medical images. The associated optimality systems turn out to be sets of nonlinear, multicomponent PDEs that are challenging to solve in an efficient way. The ultimate goal of our work is the design of inversion methods that integrate complementary data, and rigorously follow mathematical and physical principles, in an attempt to support clinical decision making. This requires reliable, high-fidelity algorithms with a short time-to-solution. This task is complicated by model and data uncertainties, and by the fact that PDE-constrained optimization problems are ill-posed in nature, and in general yield high-dimensional, severely ill-conditioned systems after discretization. These features make regularization, effective preconditioners, and iterative solvers that, in many cases, have to be implemented on distributed-memory architectures to be practical, a prerequisite. We showcase state-of-the-art techniques in scientific computing to tackle these challenges.

34 citations


Journal ArticleDOI
TL;DR: A certified reduced basis approach is proposed for the strong- and weak-constraint four-dimensional variational (4D-Var) data assimilation problem for a parametrized PDE model, generating reduced order approximations for the state, adjoint, initial condition, and model error.
Abstract: We propose a certified reduced basis approach for the strong- and weak-constraint four-dimensional variational (4D-Var) data assimilation problem for a parametrized PDE model. While the standard strong-constraint 4D-Var approach uses the given observational data to estimate only the unknown initial condition of the model, the weak-constraint 4D-Var formulation additionally provides an estimate for the model error and thus can deal with imperfect models. Since the model error is a distributed function in both space and time, the 4D-Var formulation leads to a large-scale optimization problem for every given parameter instance of the PDE model. To solve the problem efficiently, various reduced order approaches have therefore been proposed in the recent past. Here, we employ the reduced basis method to generate reduced order approximations for the state, adjoint, initial condition, and model error. Our main contribution is the development of efficiently computable a posteriori upper bounds for the error of the reduced basis approximation with respect to the underlying high-dimensional 4D-Var problem. Numerical results are conducted to test the validity of our approach.
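As a generic sketch of the two formulations (hedged notation, not necessarily the paper's): with background state $$u_b$$, discrete model operator $$M$$, observation operator $$C$$, data $$d_k$$, and covariance-weighted norms, strong-constraint 4D-Var estimates only the initial condition,
$$\min_{u_0}\; \tfrac{1}{2}\|u_0 - u_b\|_{B^{-1}}^2 + \tfrac{1}{2}\sum_{k=1}^{K}\|C u_k - d_k\|_{R^{-1}}^2 \quad \text{s.t.}\quad u_k = M(u_{k-1}),$$
whereas weak-constraint 4D-Var adds a distributed model-error control $$\eta_k$$,
$$\min_{u_0,\,\eta}\; \tfrac{1}{2}\|u_0 - u_b\|_{B^{-1}}^2 + \tfrac{1}{2}\sum_{k}\|C u_k - d_k\|_{R^{-1}}^2 + \tfrac{1}{2}\sum_{k}\|\eta_k\|_{Q^{-1}}^2 \quad \text{s.t.}\quad u_k = M(u_{k-1}) + \eta_k,$$
which is why the weak-constraint problem is large-scale for every parameter instance and benefits from the reduced order treatment.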

32 citations


Journal ArticleDOI
TL;DR: This work focuses on large-scale linear systems with multiplicative parameter-state coupling, as they arise in the discretization of parametric linear time-dependent partial differential equations, employs a simplicial decomposition algorithm for optimal sensor placement, and sets forth formulae for the efficient evaluation of all required quantities.
Abstract: We consider large-scale dynamical systems in which both the initial state and some parameters are unknown. These unknown quantities must be estimated from partial state observations over a time window. A data assimilation framework is applied for this purpose. Specifically, we focus on large-scale linear systems with multiplicative parameter-state coupling as they arise in the discretization of parametric linear time-dependent partial differential equations. Another feature of our work is the presence of a quantity of interest different from the unknown parameters, which is to be estimated based on the available data. In this setting, we employ a simplicial decomposition algorithm for an optimal sensor placement and set forth formulae for the efficient evaluation of all required quantities. As a guiding example, we consider a thermo-mechanical PDE system with the temperature constituting the system state and the induced displacement at a certain reference point as the quantity of interest.

28 citations


Journal ArticleDOI
TL;DR: In this paper, aerodynamic shape optimization of a transonic wing using mathematically extracted modal design variables is presented. The design variables are derived via a singular value decomposition of a set of training aerofoils, yielding an efficient, reduced set of orthogonal "modes" that represent typical aerodynamic design parameters.
Abstract: Aerodynamic shape optimization of a transonic wing using mathematically-extracted modal design variables is presented. A novel approach is used for deriving design variables using a singular value decomposition of a set of training aerofoils to obtain an efficient, reduced set of orthogonal ‘modes’ that represent typical aerodynamic design parameters. These design parameters have previously been tested on geometric shape recovery problems and aerodynamic shape optimization in two dimensions, and shown to be efficient at covering a large portion of the design space; the work is extended here to consider their use in three dimensions. Wing shape optimization in transonic flow is performed using an upwind flow-solver and parallel gradient-based optimizer, and a small number of global deformation modes are compared to a section-based local application of these modes and to a previously-used section-based domain element approach to deformations. An effective geometric deformation localization method is also presented, to ensure global modes can be reconstructed exactly by superposition of local modes. The modal approach is shown to be particularly efficient, with improved convergence over the domain element method, and only 10 modal design variables result in a 28% drag reduction.
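A minimal sketch of how such modal design variables can be extracted with a singular value decomposition, assuming a hypothetical matrix of training aerofoil surface coordinates (placeholder data; the paper's pre-processing, mode count, and three-dimensional application will differ):

```python
import numpy as np

# Hypothetical training set: each row is one aerofoil, flattened surface
# coordinates (e.g. z-deflections at fixed chordwise stations).
rng = np.random.default_rng(0)
n_training, n_points = 100, 200
training_aerofoils = rng.normal(size=(n_training, n_points))  # placeholder data

# Centre the training set and extract orthogonal deformation "modes" via SVD.
mean_shape = training_aerofoils.mean(axis=0)
U, s, Vt = np.linalg.svd(training_aerofoils - mean_shape, full_matrices=False)
n_modes = 10                      # reduced set of design variables
modes = Vt[:n_modes]              # orthogonal modes, shape (n_modes, n_points)

# A candidate geometry is then parametrized by a few modal amplitudes.
amplitudes = np.zeros(n_modes)    # the optimizer's design variables
candidate_shape = mean_shape + amplitudes @ modes
```

In a gradient-based optimization, the amplitudes would be the design variables, applied either globally or section-by-section as described in the abstract.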

28 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed to generate an almost constant Kelvin (magnetic) force in a target subdomain, moving along a prescribed trajectory, by solving a minimization problem with a tracking type cost functional.
Abstract: Motivated by problems arising in magnetic drug targeting, we propose to generate an almost constant Kelvin (magnetic) force in a target subdomain, moving along a prescribed trajectory. This is carried out by solving a minimization problem with a tracking type cost functional. The magnetic sources are assumed to be dipoles and the control variables are the magnetic field intensity, the source location and the magnetic field direction. The resulting magnetic field is shown to effectively steer the drug concentration, governed by a drift-diffusion PDE, from an initial to a desired location with limited spreading.

25 citations


Journal ArticleDOI
TL;DR: In this paper, a dual-weighted residual approach for goal-oriented adaptive finite elements is presented which is based on the concept of C-stationarity, and the overall error representation depends on primal residuals weighted by approximate dual quantities and vice versa.
Abstract: This paper is concerned with the development and implementation of an adaptive solution algorithm for the optimal control of a time-discrete Cahn–Hilliard–Navier–Stokes system with variable densities. The free energy density associated with the Cahn–Hilliard system incorporates the double-obstacle potential which yields an optimal control problem for a family of coupled systems in each time instant of a variational inequality of fourth order and the Navier–Stokes equation. A dual-weighted residual approach for goal-oriented adaptive finite elements is presented which is based on the concept of C-stationarity. The overall error representation depends on primal residuals weighted by approximate dual quantities and vice versa as well as various complementarity mismatch errors. Details of the numerical realization of the adaptive concept and a report on the numerical tests are given.

24 citations


Journal ArticleDOI
TL;DR: A mathematical model and a constraint generation procedure are presented for the scheduling of an industrially relevant avionic system, and the optimisation approach is shown to create schedules for such instances within a reasonable time.
Abstract: In modern integrated modular avionic systems, applications share hardware resources on a common avionic platform. Such an architecture necessitates strict requirements on the spatial and temporal p ...

21 citations


Journal ArticleDOI
TL;DR: This work presents a robust optimization framework applicable to general nonlinear programs (NLP) with uncertain parameters; it employs quadratic models of the involved functions, which can be handled efficiently with standard NLP solvers.
Abstract: We present a robust optimization framework that is applicable to general nonlinear programs (NLP) with uncertain parameters. We focus on design problems with partial differential equations (PDE), which involve high computational cost. Our framework addresses the uncertainty with a deterministic worst-case approach. Since the resulting min–max problem is computationally intractable, we propose an approximate robust formulation that employs quadratic models of the involved functions that can be handled efficiently with standard NLP solvers. We outline numerical methods to build the quadratic models, compute their derivatives, and deal with high-dimensional uncertainties. We apply the presented approach to the parametrized shape optimization of systems that are governed by different kinds of PDE and present numerical results.
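A hedged sketch of the worst-case idea described above (schematic notation; the paper's precise model construction may differ): for nominal parameters $$\bar p$$ and an uncertainty set $$\|\delta p\| \le \Delta$$, the robust design problem reads
$$\min_{x}\ \max_{\|\delta p\|\le\Delta} f(x, \bar p + \delta p) \quad\text{s.t.}\quad c(x, \bar p + \delta p) \le 0 \ \ \forall\,\|\delta p\|\le\Delta.$$
Replacing each function by a quadratic model in the uncertain parameters,
$$f(x,\bar p+\delta p) \approx f(x,\bar p) + \nabla_p f(x,\bar p)^{\top}\delta p + \tfrac12\,\delta p^{\top}\nabla_{pp} f(x,\bar p)\,\delta p,$$
turns each inner maximization into a trust-region-type subproblem, which is what makes the approximate robust formulation tractable for standard NLP solvers.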

19 citations


Journal ArticleDOI
TL;DR: This paper considers locally weighted regression models to build the necessary surrogates and presents three ideas for the appropriate and effective use of locally weighted scatterplot smoothing (LOWESS) models for surrogate optimization.
Abstract: We consider engineering design optimization problems where the objective and/or constraint functions are evaluated by means of computationally expensive blackboxes. Our practical optimization strategy consists of solving surrogate optimization problems in the search step of the mesh adaptive direct search algorithm. In this paper, we consider locally weighted regression models to build the necessary surrogates, and present three ideas for appropriate and effective use of locally weighted scatterplot smoothing (LOWESS) models for surrogate optimization. First, a method is proposed to reduce the computational cost of LOWESS models. Second, a local scaling coefficient is introduced to adapt LOWESS models to the density of neighboring points while retaining smoothness. Finally, an appropriate order error metric is used to select the optimal shape coefficient of the LOWESS model. Our surrogate-assisted optimization approach utilizes LOWESS models to both generate and rank promising candidates found in the search and poll steps. The “real” blackbox functions that govern the original optimization problem are then evaluated at these ranked candidates with an opportunistic strategy, reducing CPU time significantly. Computational results are reported for four engineering design problems with up to six variables and six constraints. The results demonstrate the effectiveness of the LOWESS models as well as the order error metric for surrogate optimization.
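For readers unfamiliar with LOWESS surrogates, a minimal, self-contained sketch of a locally weighted (tricube-kernel) linear prediction at a single query point; this is a simplified stand-in for the paper's models, which additionally use the cost-reduction, local scaling, and order-error-metric ideas described above:

```python
import numpy as np

def lowess_predict(X, y, x0, frac=0.5):
    """Locally weighted linear regression prediction at query point x0.

    X : (n, d) sample points, y : (n,) blackbox values, frac : fraction of
    neighbours used to set the tricube bandwidth.
    """
    X, y, x0 = np.asarray(X, float), np.asarray(y, float), np.asarray(x0, float)
    dist = np.linalg.norm(X - x0, axis=1)
    bandwidth = np.sort(dist)[max(2, int(frac * len(y))) - 1] + 1e-12
    w = np.clip(1.0 - (dist / bandwidth) ** 3, 0.0, None) ** 3   # tricube weights

    # Weighted least squares for a local linear model y ~ a + b.(x - x0).
    A = np.hstack([np.ones((len(y), 1)), X - x0])
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(sw * A, sw[:, 0] * y, rcond=None)
    return coef[0]   # local intercept = prediction at x0

# Toy usage: surrogate of a noisy quadratic blackbox.
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(50, 2))
y = (X ** 2).sum(axis=1) + 0.05 * rng.normal(size=50)
print(lowess_predict(X, y, x0=[0.5, -0.5]))
```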

18 citations


Journal ArticleDOI
TL;DR: A rolling-horizon reoptimization framework is presented that allows the study of different policies that impact the quality of the implemented solution, so that the optimal set of policies can be identified.
Abstract: We study a maritime inventory routing problem, in which shipments between production and consumption nodes are carried out by a fleet of vessels. The vessels have specific capacities and can be chartered under different agreements. The inventory levels of all consumption nodes and some production nodes should be maintained within specified bounds; for the remaining production nodes, orders should be picked up within pre-defined time windows. We propose a discrete-time mixed-integer programming model. In the face of new information and uncertainty, this optimization model has to be re-solved, as the horizon is rolled forward. We discuss how to account for different sources of uncertainty. We present a rolling-horizon reoptimization framework that allows us to study different policies that impact the quality of the implemented solution, so we can identify the optimal set of policies.

Journal ArticleDOI
TL;DR: In this article, a consensus-based alternating direction method of multipliers (ADMM) approach is proposed to solve the multi-area coordinated network-constrained unit commitment (NCUC) problem in a distributed manner.
Abstract: This paper discusses a consensus-based alternating direction method of multipliers (ADMM) approach to solve the multi-area coordinated network-constrained unit commitment (NCUC) problem in a distributed manner. Due to political and technical difficulties, it is neither practical nor feasible to solve the multi-area coordination problem in a centralized fashion, which requires full access to all the data of individual areas. In comparison, in the proposed fully distributed approach, local NCUC problems of individual areas can be solved independently, and only limited information is exchanged among adjacent areas to facilitate the multi-area coordination. Furthermore, since traditional ADMM can guarantee convergence only for convex problems, this paper discusses several strategies to mitigate oscillations, enhance convergence performance, and derive good-enough feasible solutions, including: (1) a tie-line power-flow-based area coordination strategy is designed to reduce the number of global consensus variables; (2) different penalty parameters ρ are assigned to individual consensus variables and are updated via certain rules during the iterative procedure, which reduces the impact of the initial values of ρ on the convergence performance; (3) heuristic rules are adopted to fix certain unit commitment variables to avoid oscillations during the iterative procedure; and (4) an asynchronous distributed strategy is studied, which solves NCUC subproblems of small areas multiple times and exchanges information with adjacent areas more frequently within one complete run of slower NCUC subproblems of large areas. Numerical cases illustrate the effectiveness of the proposed asynchronous fully distributed NCUC approach, and we investigate key factors that would affect its convergence performance.
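A hedged, toy-scale sketch of the consensus-ADMM iteration pattern described above, with scalar quadratic "area subproblems" standing in for the actual NCUC subproblems and a single global consensus variable (all data illustrative):

```python
import numpy as np

# Toy local objectives f_i(x) = 0.5 * a_i * x**2 - b_i * x for three "areas";
# the areas must agree on a single consensus value z (e.g. a tie-line flow).
a = np.array([1.0, 2.0, 4.0])
b = np.array([3.0, 1.0, -2.0])

rho = 1.0                      # ADMM penalty parameter
x = np.zeros(3)                # local copies, one per area
u = np.zeros(3)                # scaled dual variables
z = 0.0                        # global consensus variable

for k in range(100):
    # Local updates: each area solves its own subproblem independently
    # (closed form here because the toy objectives are quadratic).
    x = (b + rho * (z - u)) / (a + rho)
    # Consensus update: only the averaged quantity is exchanged between areas.
    z = np.mean(x + u)
    # Dual updates penalizing disagreement with the consensus value.
    u = u + x - z

# At convergence all x_i agree with z, the minimizer of sum_i f_i.
print(x, z, np.sum(b) / np.sum(a))
```

The paper's contributions (tie-line-based consensus variables, per-variable penalties, heuristic fixing of commitment variables, asynchronous updates) are refinements of this basic exchange pattern.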

Journal ArticleDOI
TL;DR: The pattern search algorithm is shown to offer superior performance compared with the genetic algorithm for this class of optimization problem and to provide a viable tool for optimizing trajectories for the considered class of vehicles.
Abstract: In this work, trajectory optimization of an aerodynamically controlled hypersonic boost glide class of flight vehicle is presented. In order to meet the mission constraints such as controllability, skin temperature, and terminal conditions etc., the trajectory is optimized using a pattern search algorithm with the lift to drag (L/D) ratio as a control parameter. It is brought out that the approach offers a viable tool for optimizing trajectories for the considered class of vehicles. Further, the effects of the constraints on trajectory shape and performance are studied and the analysis is used to bring out an optimal vehicle configuration at the initial stage of the design process itself. The research also reveals that the pattern search algorithm offers superior performance in comparison with the genetic algorithm for this class of optimization problem.
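For illustration, a minimal compass-type pattern search of the kind referred to above, applied to a generic test objective; the vehicle dynamics, mission constraints, and L/D control parametrization of the actual study are not reproduced:

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Coordinate (compass) pattern search: poll +/- step along each axis,
    move to any improving point, otherwise shrink the step size."""
    x = np.asarray(x0, float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                f_trial = f(trial)
                if f_trial < fx:
                    x, fx, improved = trial, f_trial, True
        if not improved:
            step *= 0.5            # no improving poll point: refine the mesh
            if step < tol:
                break
    return x, fx

# Toy usage on a smooth test function (stand-in for the trajectory cost).
rosenbrock = lambda v: (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2
print(pattern_search(rosenbrock, x0=[-1.0, 2.0]))
```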

Journal ArticleDOI
TL;DR: This paper aims to improve the secondary control process in islanded microgrids to restrict fluctuations in both the voltage and frequency signals, using an on-line self-optimizing control approach embedded in the MG's central controller.
Abstract: Dealing with islanded microgrids (MGs), this paper aims at improving the secondary control process to restrict the fluctuations in both the voltage and frequency signals. With the aim of retrieving these parameters at the nominal values, an intelligent control scheme is devised to adjust the corresponding control parameters. To do so, an on-line self-optimizing control approach is embedded in the MG’s central controller. In the tuning process, evolutionary-based techniques such as genetic algorithms provide proper initial adjustment for the parameters. Subsequently, an artificial neural network (ANN) is triggered to provide accurate online modification of the control parameters. Specifically, the training capability of the ANN mechanism along with its extensibility feature avoids the dependency of the controller on the operating point conditions and accommodates different changes and uncertainty reflections. Detailed simulation studies are conducted to investigate the performance of the proposed approach, and the results are discussed in depth.

Journal ArticleDOI
TL;DR: This work considers optimal shape design problems for polymer spin packs which are widely used in the production of synthetic fibers and nonwoven materials and gets an elegant formulation of this state constrained optimization problem, in which geometric constraints on the boundary can also be included.
Abstract: We consider optimal shape design problems for polymer spin packs which are widely used in the production of synthetic fibers and nonwoven materials. The design goal is the minimization of the residence time of the polymer, which can be achieved by adjusting the wall shear stress along the boundary. Depending on the specific industrial setting we construct two tailored algorithms. First, we consider the design in three spatial dimensions based on a PDE constrained shape optimization problem. Here, the constraint is given by the Stokes flow. Second, we change the design goal and want to construct shapes in two spatial dimensions which allow for a lower bound on the wall shear stress. This can be incorporated as an additional state constraint. By relaxing this condition and employing the method of mapping we can pull-back the problem onto a fixed reference domain. We get an elegant formulation of this state constrained optimization problem, in which geometric constraints on the boundary can also be included. After discretization we end up with a large-scale NLP which can be handled by existing solvers. Finally, we present numerical results underlining the feasibility of our approach.

Journal ArticleDOI
TL;DR: This paper investigates the applicability of Tikhonov-type penalty regularization for computing the mean–variance Markowitz customer portfolio optimization problem and provides the parameter conditions under which the penalty regularized expected utility of a given optimal portfolio admits a unique solution.
Abstract: This paper considers the subject of penalty regularized expected utilities and investigates the applicability of the method for computing the mean–variance Markowitz customer portfolio optimization problem. We penalize the large values by introducing a penalty term expressed as least-squares in order to avoid an explosive number of solutions. This penalty term is known as the Tikhonov regularization parameter. Tikhonov’s regularization is one of the most popular approaches to solve discrete ill-posed problems and, in our case, it plays a fundamental role in order to ensure the convergence to a unique portfolio solution. In this sense, we first provide the parameter conditions under which the penalty regularized expected utility of a given optimal portfolio admits a unique solution. A crucial problem concerning Tikhonov’s regularization is the proper choice of the regularization parameter because it can modify (sometimes significantly) the shape of the original functional. The main objective of this paper is to derive a method for regularization in an optimal way. For solving the problem, the parameters of the regularized poly-linear optimization problem are balanced simultaneously. Then, we prove that the original Markowitz portfolio optimization problem converges to an exact solution (with the minimal weighted norm). We consider a projection gradient method for finding the extremal points including the proof of convergence of the method. We show how to select the parameters of the algorithm in order to guarantee the convergence of the suggested procedure. Finally, we present a numerical example to illustrate the practical implications of the theoretical issues of a penalty regularized portfolio optimization problem.
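A hedged sketch of a Tikhonov-regularized mean–variance problem solved with a projected gradient method, in the spirit of the approach described above (toy data and illustrative parameter choices; the paper's rules for balancing the regularization parameters are not reproduced):

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, len(v) + 1)
    rho = idx[u - css / idx > 0][-1]
    theta = css[rho - 1] / rho
    return np.maximum(v - theta, 0.0)

# Toy data: expected returns mu and covariance Sigma for 4 assets.
mu = np.array([0.08, 0.10, 0.05, 0.07])
Sigma = np.diag([0.04, 0.09, 0.01, 0.03])
risk_aversion, delta = 1.0, 0.05        # delta = Tikhonov parameter (illustrative)

def grad(w):
    # Objective: 0.5 w'Sigma w - risk_aversion * mu'w + 0.5 * delta * ||w||^2
    return Sigma @ w - risk_aversion * mu + delta * w

w = np.full(4, 0.25)                              # start from the uniform portfolio
step = 1.0 / (np.linalg.norm(Sigma, 2) + delta)   # step below 1/Lipschitz constant
for _ in range(500):
    w = project_to_simplex(w - step * grad(w))

print(w, w.sum())
```

The added term $$\tfrac{\delta}{2}\|w\|^2$$ is what makes the regularized problem strongly convex, so the projected gradient iteration converges to a unique portfolio.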

Journal ArticleDOI
TL;DR: In this article, an improved binary differential evolution (IBDE) algorithm is proposed to optimize PWM control laws of power inverters; the algorithm focuses on adaptive crossover and parameterless mutation strategies without imposing an additional computational burden.
Abstract: Stochastic optimization methods inspired by biological evolution have been widely employed to optimize PWM control laws of power inverters, but existing approaches impose a serious computational burden and raise difficult parameter-tuning issues. The differential evolution (DE) algorithm, in contrast, is simple to implement and has few parameters to tune. Thus, we propose an improved binary DE (IBDE) algorithm for optimizing PWM control laws of power inverters. The proposed algorithm focuses on adaptive crossover and parameterless mutation strategies without imposing an additional computational burden. In numerical experiments, a single-phase full-bridge inverter and a two-level three-phase inverter are considered, and the optimal PWM control law is calculated with the proposed algorithm to maximize the closeness of the controlled inductor current to the sinusoidal reference current. Experimental results indicate that IBDE obtains a high-quality output waveform that is a very good approximation of the sinusoidal reference waveform. Moreover, spectrum analysis of the optimal PWM control law obtained by IBDE indicates that the lower odd-order harmonics are eliminated, which the existing peer algorithms cannot achieve. We also carry out experiments on sensitivity analysis with respect to several important parameters.

Journal ArticleDOI
TL;DR: A simple heuristic based on the alternating direction method of multipliers is proposed for the compliance minimization of a truss, where the number of available nodes is limited.
Abstract: This paper addresses the compliance minimization of a truss, where the number of available nodes is limited. It is shown that this optimization problem can be recast as a second-order cone programming problem with a cardinality constraint. We propose a simple heuristic based on the alternating direction method of multipliers. The efficiency of the proposed method is compared with a global optimization approach based on mixed-integer second-order cone programming. Numerical experiments demonstrate that the proposed method often finds a solution having a good objective value with small computational cost.
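An illustrative, hedged analogue of the ADMM heuristic described above, applied to a small sparse least-squares problem with a cardinality constraint rather than to the truss/second-order cone model itself; the z-update is the non-convex projection that keeps only the k largest entries, which is exactly where the method becomes a heuristic:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(30, 10))
x_true = np.zeros(10)
x_true[[1, 4, 7]] = [2.0, -1.5, 1.0]
b = A @ x_true
k, rho = 3, 1.0                       # cardinality bound and ADMM penalty

def keep_k_largest(v, k):
    z = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]  # indices of the k largest magnitudes
    z[idx] = v[idx]
    return z

x = np.zeros(10); z = np.zeros(10); u = np.zeros(10)
lhs = A.T @ A + rho * np.eye(10)      # system matrix of the x-update (constant)
for _ in range(200):
    x = np.linalg.solve(lhs, A.T @ b + rho * (z - u))  # smooth subproblem
    z = keep_k_largest(x + u, k)                       # cardinality projection
    u = u + x - z                                      # dual update

print(np.nonzero(z)[0], np.round(z[np.nonzero(z)], 3))
```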

Journal ArticleDOI
TL;DR: A new evolutionary multi-architecture multi-objective optimization algorithm is presented to support design concept selection when faced with challenges in the design of revolutionary aerospace vehicles.
Abstract: The design of revolutionary aerospace vehicles is characterized by large design spaces, a lack of established baselines, and some uncertainty in the design and regulatory requirements that such vehicles will need to meet. A new evolutionary multi-architecture multi-objective optimization algorithm is presented to support design concept selection when faced with such challenges. The proposed approach allows designers to efficiently and exhaustively generate variable-oriented architectures that can be further optimized and compared. It provides a dynamic decision-making environment able to identify trends and trade-offs, and prioritize designs. The application of the proposed methodology to suborbital vehicles highlights key promising technological enablers, which can be leveraged to design high-performance and robust concepts.

Journal ArticleDOI
TL;DR: The computational properties of the optimal subgradient algorithm (OSGA) are studied for linear inverse problems involving high-dimensional data, and several Nesterov-type optimal methods are adapted to solve nonsmooth problems by simply passing a subgradient instead of the gradient.
Abstract: This paper studies the computational properties of the optimal subgradient algorithm (OSGA) for applications of linear inverse problems involving high-dimensional data. First, such convex problems are formulated as a class of convex problems with multi-term composite objective functions involving linear mappings. Next, an efficient procedure for computing the first-order oracle for such problems is provided and OSGA is equipped with prox-functions such that the OSGA subproblem is solved in closed form. Further, a comprehensive comparison among the most popular first-order methods is given. Then, several Nesterov-type optimal methods (originally proposed for smooth problems) are adapted to solve nonsmooth problems by simply passing a subgradient instead of the gradient; the results of these subgradient methods are competitive and of considerable interest for solving nonsmooth problems. Finally, numerical results with several inverse problems (deblurring with isotropic total variation, elastic net, and $$\ell_1$$-minimization) show the efficiency of OSGA and the adapted Nesterov-type optimal methods for large-scale problems. For the deblurring problem, improvements in the signal-to-noise ratio and the peak signal-to-noise ratio are used as efficiency measures. The software package implementing OSGA is publicly available.
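A minimal sketch of the "pass a subgradient instead of the gradient" idea mentioned above, applied to an $$\ell_1$$-regularized least-squares toy instance with a plain diminishing-step subgradient method (not OSGA itself; data and step-size rule are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.normal(size=5)
b = A @ x_true
lam = 0.1

x = np.zeros(100)
best_x, best_obj = x.copy(), np.inf
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
for k in range(1, 2001):
    # Subgradient of 0.5*||Ax-b||^2 + lam*||x||_1  (sign(0) taken as 0).
    g = A.T @ (A @ x - b) + lam * np.sign(x)
    x = x - (1.0 / (L * np.sqrt(k))) * g   # diminishing step size
    obj = 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
    if obj < best_obj:
        best_x, best_obj = x.copy(), obj   # keep the best iterate seen

print(best_obj)
```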

Journal ArticleDOI
TL;DR: It turns out that the proposed model for the network extension problem for multiple demand scenarios and the proposed scenario decomposition are able to solve these challenging instances to optimality within a reasonable amount of time.
Abstract: Today’s gas markets demand more flexibility from the network operators which in turn have to invest in their network infrastructure. As these investments are very cost-intensive and long-lasting, network extensions should not only focus on a single bottleneck scenario, but should increase the flexibility to fulfill different demand scenarios. In this work, we formulate a model for the network extension problem for multiple demand scenarios and propose a scenario decomposition in order to solve the resulting challenging optimization tasks. In fact, each subproblem consists of a mixed-integer nonlinear optimization problem. Valid bounds on the objective value are derived even without solving the subproblems to optimality. Furthermore, we develop heuristics that prove capable of improving the initial solutions substantially. The results of computational experiments on realistic network topologies are presented. It turns out that our method is able to solve these challenging instances to optimality within a reasonable amount of time.

Journal ArticleDOI
TL;DR: The Block-Simultaneous Direction Method of Multipliers (bSDMM) is introduced as a generalization of the linearized alternating direction method of multipliers that optimizes a real-valued function f of multiple arguments with potentially multiple constraints on each of them.
Abstract: We introduce a generalization of the linearized Alternating Direction Method of Multipliers to optimize a real-valued function f of multiple arguments with potentially multiple constraints $$g_\circ$$ on each of them. The function f may be nonconvex as long as it is convex in every argument, while the constraints $$g_\circ$$ need to be convex but not smooth. If f is smooth, the proposed Block-Simultaneous Direction Method of Multipliers (bSDMM) can be interpreted as a proximal analog to inexact coordinate descent methods under constraints. Unlike alternative approaches for joint solvers of multiple-constraint problems, we do not require the linear operators $$\mathsf{L}$$ of a constraint function $$g(\mathsf{L}\,\cdot)$$ to be invertible or linked to each other. bSDMM is well-suited for a range of optimization problems, in particular for data analysis, where f is the likelihood function of a model and $$\mathsf{L}$$ could be a transformation matrix describing e.g. finite differences or basis transforms. We apply bSDMM to the Non-negative Matrix Factorization task of a hyperspectral unmixing problem and demonstrate convergence and effectiveness of multiple constraints on both matrix factors. The algorithms are implemented in Python and released as an open-source package.
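For context, a hedged sketch of the linearized ADMM iteration that bSDMM generalizes, written for a single block $$x$$ and a single constraint $$g$$ composed with a linear map $$\mathsf{L}$$ (generic notation; a step-size condition such as $$0 < \mu \le 1/(\rho\,\|\mathsf{L}\|_2^2)$$ is assumed): for $$\min_x f(x) + g(\mathsf{L}x)$$,
$$x^{k+1} = \mathrm{prox}_{\mu f}\!\left(x^{k} - \mu\rho\,\mathsf{L}^{\top}\!\left(\mathsf{L}x^{k} - z^{k} + u^{k}\right)\right),\qquad z^{k+1} = \mathrm{prox}_{g/\rho}\!\left(\mathsf{L}x^{k+1} + u^{k}\right),\qquad u^{k+1} = u^{k} + \mathsf{L}x^{k+1} - z^{k+1}.$$
The linearization avoids inverting $$\mathsf{L}$$; bSDMM extends this pattern to multiple argument blocks and multiple constraints per block simultaneously.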

Journal ArticleDOI
TL;DR: A well-suited regression technique is investigated which transforms the dataset into a smaller, focused response model in each optimisation loop and delivers adequate regression accuracy, resulting in data reduction for the model to be optimised.
Abstract: An implementation of updating techniques similar to finite element updating in structural dynamics is developed for thermal material inspection using adaptive response surfaces to approximate experimental parameters. In general, thermal models contain high nonlinearities in their parameters, which influences updating accuracies. This is further investigated in this work. Several adaptive response surface regression methods are compared: interpolation, piecewise spline and polynomial regression functions. Next, the influence of the choice of optimisation parameters is discussed and compared with several global and local optimisation routines. Finally, a well-suited regression technique is investigated which transforms the dataset to a smaller, focused response model in each optimisation loop and delivers a proper regression accuracy. This results in data-reduction for the model to be optimised.

Journal ArticleDOI
TL;DR: This special journal edition, entitled "PDE-Constrained Optimization", features eight papers that demonstrate new formulations, solution strategies, and innovative algorithms for a range of applications, illustrating their impact on the engineering and science communities.
Abstract: Partial differential equation (PDE) constrained optimization is designed to solve control, design, and inverse problems with underlying physics. A distinguishing challenge of this technique is the handling of large numbers of optimization variables in combination with the complexities of discretized PDEs. Over the last several decades, advances in algorithms, numerical simulation, software design, and computer architectures have allowed for the maturation of PDE constrained optimization (PDECO) technologies with subsequent solutions to complicated control, design, and inverse problems. This special journal edition, entitled "PDE-Constrained Optimization", features eight papers that demonstrate new formulations, solution strategies, and innovative algorithms for a range of applications. In particular, these contributions demonstrate the impact of PDECO on our engineering and science communities. This paper offers brief remarks to provide some perspective and background for PDECO, in addition to summaries of the eight papers.

Journal ArticleDOI
TL;DR: This work presents a strategy for the recovery of a sparse solution of a common problem in acoustic engineering, which is the reconstruction of sound source levels and locations applying microphone array measurements, by combining popular splitting algorithms and matrix differential theory in a novel framework.
Abstract: We present a strategy for the recovery of a sparse solution of a common problem in acoustic engineering, which is the reconstruction of sound source levels and locations applying microphone array measurements. The considered task bears similarities to the basis pursuit formalism but also relies on additional model assumptions that are challenging from a mathematical point of view. Our approach reformulates the original task as a convex optimisation model. The sought solution shall be a matrix with a certain desired structure. We enforce this structure through additional constraints. By combining popular splitting algorithms and matrix differential theory in a novel framework we obtain a numerically efficient strategy. Besides a thorough theoretical consideration we also provide an experimental setup that certifies the usability of our strategy. Finally, we also address practical issues, such as the handling of inaccuracies in the measurement and corruption of the given data. We provide a post processing step that is capable of yielding an almost perfect solution in such circumstances.

Journal ArticleDOI
TL;DR: Computational results are provided that indicate the usefulness of both the model reformulation and the adapted bound tightening technique for deterministic global optimization of ideal multi-component distillation column designs.
Abstract: This paper addresses the problem of determining cost-minimal process designs for ideal multi-component distillation columns. The special case of binary distillation was considered in the former work (Ballerstein et al. in Optim Eng 16(2):409–440, 2015. https://doi.org/10.1007/s11081-014-9267-5 ). Therein, a problem-specific bound-tightening strategy based on monotonic mole fraction profiles of single components was developed to solve the corresponding mixed-integer nonlinear problems globally. In the multi-component setting, the mole fraction profiles of single components may not be monotonic. Therefore the bound-tightening strategy from the binary case cannot be applied directly. In this follow-up paper, a model reformulation for ideal multi-component distillation columns is presented. The reformulation is achieved by suitable aggregations of the involved components. Proofs are given showing that mole fraction profiles of aggregated components are monotonic. This property is then used to adapt the bound-tightening strategy from the binary case to the proposed model reformulation. Computational results are provided that indicate the usefulness of both the model reformulation and the adapted bound tightening technique for deterministic global optimization of ideal multi-component distillation column designs.

Journal ArticleDOI
TL;DR: This paper uses parametric variational inequality problems to describe entire solution sets of generalized Nash games with shared constraints, proves two theoretical results, and introduces a computational method that practitioners can implement in applied problems modeled as generalized Nash games, under assumptions present in the current literature.
Abstract: In this paper we use parametric variational inequality problems for the purpose of describing entire solution sets of generalized Nash games with shared constraints. We prove two theoretical results and we introduce a computational method that practitioners can implement in applied problems modeled as generalized Nash games, under assumptions present in the current literature. Further, we give illustrative examples of how our computational technique is used to derive solution sets of known generalized Nash games previously not solved by existing techniques. We close with the presentation of an applied problem formulated as a generalized Nash game, namely a model of a joint implementation environmental accord between countries. We discuss the possible advantages of modeling it within a generalized Nash game framework.

Journal ArticleDOI
TL;DR: A new class of generalized dimming algorithms, GDA(n), is developed; it depends on a single parameter that allows steering between power efficiency and smoothness of the solution, and the algorithms can be treated as regularizations of the sorted sector covering (SSC) algorithm.
Abstract: One of the main aspects of today’s computing, especially on mobile devices, is power consumption. It affects the lifetime of batteries and has ecological aspects. In the near future, a significant proportion of the energy of mobile devices will be spent on displays. Thus, dimming, especially local dimming of displays, increases the comfort of these mobile devices. A convenient side-effect of local dimming is contrast enhancement and a better black level. Local dimming has three main aspects: the image processing aspect, the optimization aspect of the core algorithm and real-time requirements. We deal with the optimizer part, also focusing on real-time aspects. In this article, a new class of generalized dimming algorithms GDA(n) is developed. This class depends on a single parameter, allowing us to steer between power efficiency and smoothness of the solution. The smoothness properties of the proposed algorithms allow them to be treated as regularizations of the sorted sector covering (SSC) algorithm. The SSC algorithm forms a foundation for our algorithms, and it will be described later in this article. The implementation of the proposed algorithms is quite simple, e.g. on the basis of an existing implementation of the SSC, and they are highly effective. Most important, their smoothness is an inherent part of the algorithm, thus reducing flickering effects before they are created. Numerical examples comparing GDA(n) to established algorithms are given, substantiating the efficiency and quality of the new method. By steering the parameter n, we can switch from a smooth distribution of the LED values (n small) to a volatile distribution of the LED values (n large), while preserving the required brightness of the backlight of the display. In the first case, we suppress LED flashlighting and, especially for videos, flickering. In the latter case we adapt the LED backlight better to the image brightness, obtaining dark LED values in dark areas of the image and bright LED values in bright areas of the image.

Journal ArticleDOI
TL;DR: This work implements the chance constraint formulation into the direct method for linear constraints by showing that its problem statement can be understood as a linear robust optimization problem.
Abstract: Early phase distributed system design can be accomplished using solution spaces that provide an interval of permissible values for each functional parameter. The feasibility property guarantees fulfillment of all design requirements for all possible realizations. Flexibility denotes the size measure of the intervals, with higher flexibility benefiting the design process. Two methods are available for solution space identification. The direct method solves a computationally cheap optimization problem. The indirect method employs a sampling approach that requires a relaxation of the feasibility property through re-formulation as a chance constraint. Even for high probabilities of fulfillment, $$P>0.99$$ , this results in substantial increases in flexibility, which offsets the risk of infeasibility. This work implements the chance constraint formulation into the direct method for linear constraints by showing that its problem statement can be understood as a linear robust optimization problem. Approximations of chance constraints from the literature are transferred into the context of solution spaces. From this, we derive a theoretical value for the safety parameter $$\varOmega$$ . A further modification is presented for use cases, where some intervals are already predetermined. A problem from vehicle safety is used to compare the modified direct and indirect methods and discuss suitable choices of $$\varOmega$$ . We find that the modified direct method is able to identify solution spaces with similar flexibility, while maintaining its cost advantage.
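A hedged illustration of the kind of safe approximation behind the safety parameter $$\varOmega$$ (a standard robust-optimization argument; the paper's exact derivation may differ): for a linear constraint with coefficients $$a_j = \bar a_j + \hat a_j \zeta_j$$ perturbed by independent, zero-mean $$\zeta_j \in [-1,1]$$, the robust constraint
$$\bar a^{\top} x + \varOmega \sqrt{\textstyle\sum_j \hat a_j^2 x_j^2} \;\le\; b$$
guarantees $$P(a^{\top} x \le b) \ge 1 - e^{-\varOmega^2/2}$$, so choosing $$\varOmega = \sqrt{2\ln(1/\varepsilon)}$$ enforces the chance constraint with probability at least $$1-\varepsilon$$ while keeping the direct method's optimization problem linear-conic and cheap to solve.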

Journal ArticleDOI
TL;DR: In this article, an optimal packing density problem for material flows on conveyor belts in two spatial dimensions is studied, where the control problem is concerned with the initial configuration of parts on the belt to ensure a high overall flow rate and to further reduce congestion.
Abstract: We are interested in an optimal packing density problem for material flows on conveyor belts in two spatial dimensions. The control problem is concerned with the initial configuration of parts on the belt to ensure a high overall flow rate and to further reduce congestion. An adjoint approach is used to compare the optimization results from the microscopic model based on a system of ordinary differential equations with the corresponding macroscopic model relying on a hyperbolic conservation law. Computational results highlight similarities and differences of both optimization models and emphasize the benefits of the macroscopic approach.