
Showing papers in "Optimization and Engineering in 2020"


Journal ArticleDOI
TL;DR: A novel stochastic optimization model that simultaneously optimizes the short-term extraction sequence, shovel relocation, scheduling of a heterogeneous hauling fleet, and downstream allocation of extracted materials in open-pit mining complexes is presented.
Abstract: This article presents a novel stochastic optimization model that simultaneously optimizes the short-term extraction sequence, shovel relocation, scheduling of a heterogeneous hauling fleet, and downstream allocation of extracted materials in open-pit mining complexes. The proposed stochastic optimization formulation considers geological uncertainty in addition to uncertainty related to equipment performances and truck cycle times. The method is applied at a real-world mining complex, stressing the benefits of optimizing the short-term production schedule and fleet management simultaneously. Compared to a conventional two-step approach, where the production schedule is optimized first before optimizing the allocation of the mining fleet, the costs generated by shovel movements are reduced by 56% and lost production due to shovel relocation is cut by 54%. Furthermore, the required number of trucks shows a more balanced profile, reducing total truck operational costs by 3.1% over an annual planning horizon, as well as the required haulage capacity in the most haulage-intense periods by 25%. A metaheuristic solution method is utilized to solve the large optimization problem in a reasonable timespan.

31 citations


Journal ArticleDOI
TL;DR: It is shown that the parallelized SDDiP algorithm solves, in reasonable amounts of time, multistage stochastic programming models whose extensive form has quadrillions of variables and constraints.
Abstract: We address the long-term planning of electric power infrastructure under uncertainty. We propose a Multistage Stochastic Mixed-integer Programming formulation that optimizes the generation expansion to meet the projected electricity demand over multiple years while considering detailed operational constraints, intermittency of renewable generation, power flow between regions, storage options, and multiscale representation of uncertainty (strategic and operational). To be able to solve this large-scale model, which grows exponentially with the number of stages in the scenario tree, we decompose the problem using Stochastic Dual Dynamic Integer Programming (SDDiP). The SDDiP algorithm is computationally expensive, but we take advantage of parallel processing to solve it more efficiently. The proposed formulation and algorithm are applied to a case study in the region managed by the Electric Reliability Council of Texas for scenario trees considering natural gas price and carbon tax uncertainty for the reference case, and a hypothetical case without nuclear power. We show that the parallelized SDDiP algorithm solves, in reasonable amounts of time, multistage stochastic programming models whose extensive form has quadrillions of variables and constraints.

29 citations


Journal ArticleDOI
TL;DR: In this article, a profit-maximizing mixed-integer linear program (MILP) was proposed to determine a dispatch schedule for the individual sub-systems with a sub-hourly time fidelity.
Abstract: Concentrating solar power (CSP) tower technologies capture thermal radiation from the sun utilizing a field of solar-tracking heliostats. When paired with inexpensive thermal energy storage (TES), CSP technologies can dispatch electricity during peak-market-priced hours, day or night. The cost of utility-scale photovoltaic (PV) systems has dropped significantly in the last decade, resulting in inexpensive energy production during daylight hours. The hybridization of PV and CSP with TES systems has the potential to provide continuous and stable energy production at a lower cost than a PV or CSP system alone. Hybrid systems are gaining popularity in international markets as a means to increase renewable energy portfolios across the world. Historically, CSP-PV hybrid systems have been evaluated using either monthly averages of hourly PV production or scheduling algorithms that neglect the time-of-production value of electricity in the market. To more accurately evaluate a CSP-PV-battery hybrid design, we develop a profit-maximizing mixed-integer linear program (ℋ) that determines a dispatch schedule for the individual sub-systems with a sub-hourly time fidelity. We present the mathematical formulation of such a model and show that it is computationally expensive to solve. To improve model tractability and reduce solution times, we offer techniques that: (1) reduce the problem size, (2) tighten the linear programming relaxation of (ℋ) via reformulation and the introduction of cuts, and (3) implement an optimization-based heuristic (that can yield initial feasible solutions for (ℋ) and, at any rate, yields near-optimal solutions). Applying these solution techniques results in a 79% improvement in solve time, on average, for our 48-h instances of (ℋ); corresponding solution times for an annual model run decrease by as much as 93%, where such a run consists of solving 365 instances of (ℋ), retaining only the first 24 h of the solution, and sliding the time window forward 24 h. We present annual system metrics for two locations and two markets that inform design practices for hybrid systems and lay the groundwork for a more exhaustive policy analysis. A comparison of alternative hybrid systems to the CSP-only system demonstrates that hybrid models can almost double capacity factors while resulting in a 30% improvement related to various economic metrics.
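The annual run described above is a standard rolling-horizon scheme: solve a 48-h instance of (ℋ), keep the first 24 h, slide forward one day. A minimal sketch of the driver loop, where `solve_dispatch_milp` is a hypothetical stand-in for the paper's model and is assumed to return the dispatch plus the component states (TES level, battery state of charge) after the retained 24 h:

```python
# Illustrative rolling-horizon driver; not the authors' implementation.
HORIZON_H = 48   # optimization window (hours)
STEP_H = 24      # retained portion per solve (hours)

def annual_run(prices, solar, solve_dispatch_milp, initial_state, days=365):
    """Roll a 48-h dispatch MILP through a year, one day at a time."""
    schedule = []
    state = initial_state        # component states at the start of the window
    for day in range(days):
        t0 = day * STEP_H
        window = slice(t0, t0 + HORIZON_H)
        # Solve the 48-h instance; the solver is assumed to report the
        # system state at hour STEP_H so the next solve can chain onto it.
        dispatch, state = solve_dispatch_milp(prices[window], solar[window], state)
        # Keep only the first 24 h; the remainder is re-optimized tomorrow.
        schedule.extend(dispatch[:STEP_H])
    return schedule
```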

29 citations


Journal ArticleDOI
Huan Zhu, Qianwang Deng, Like Zhang, Xiang Hu, Wenhui Lin
TL;DR: This paper takes workers into account, considers the effects of their learning abilities on processing time and energy consumption, and investigates a new low-carbon flexible job shop scheduling problem with worker learning (LFJSP-WL).
Abstract: Green low-carbon flexible job shop problems have been extensively studied in recent decades, but most studies ignore the influence of workers. In this paper, we take workers into account and consider the effects of their learning abilities on processing time and energy consumption, and we investigate a new low-carbon flexible job shop scheduling problem considering worker learning (LFJSP-WL). To reduce carbon emissions (CE), a novel CE assessment of machines is presented that incorporates production scheduling strategies based on worker learning. A memetic algorithm (MA) is tailored to solve the LFJSP-WL with the objectives of minimizing the makespan, total CE, and total worker cost. In LFJSP-WL, a three-layer chromosome encoding method is adopted, and several approaches tailored to the problem characteristics are designed for population initialization, crossover, and mutation. Besides, four effective neighborhood structures are developed to enhance the exploitation and exploration capacities, and an elite pool strategy is presented to preserve elite solutions across iterations. The Taguchi design-of-experiments method is used to obtain the best combination of the key parameters used in the MA. Computational experiments show that the MA easily obtains better solutions for most of the 22 challenging test instances compared to two other well-known algorithms, demonstrating its superior performance on the proposed LFJSP-WL.

25 citations


Journal ArticleDOI
TL;DR: This work demonstrates that semidefinite relaxations of large problem instances with on the order of 10,000 buses can be solved reliably and to reasonable accuracy within minutes, and compares different state-of-the-art solvers to calculate the optimality gap.
Abstract: Semidefinite relaxation techniques have shown great promise for nonconvex optimal power flow problems. However, a number of independent numerical experiments have led to concerns about scalability and robustness of existing SDP solvers. To address these concerns, we investigate some numerical aspects of the problem and compare different state-of-the-art solvers. Our results demonstrate that semidefinite relaxations of large problem instances with on the order of 10,000 buses can be solved reliably and to reasonable accuracy within minutes. Furthermore, the semidefinite relaxation of a test case with 25,000 buses can be solved reliably within half an hour; the largest test case with 82,000 buses is solved within 8 h. We also compare the lower bound obtained via semidefinite relaxation to locally optimal solutions obtained with nonlinear optimization methods and calculate the optimality gap.
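For context, the bus-injection SDP relaxation has the generic shape below: a PSD matrix variable W stands in for the outer product of the bus-voltage vector, and power injections become linear (trace) constraints in W. A minimal CVXPY sketch with placeholder data (the matrices C and A_k are illustrative, not a real network):

```python
import cvxpy as cp
import numpy as np

n = 3                                    # toy 3-bus example
rng = np.random.default_rng(0)
C = np.eye(n)                            # placeholder cost matrix
A = [np.diag(rng.uniform(size=n)) for _ in range(n)]  # placeholder injection matrices
p_min, p_max = 0.1, 1.0                  # illustrative injection bounds

W = cp.Variable((n, n), PSD=True)        # relaxes W = V V^T; rank-1 W is exact
cons = [cp.trace(A[k] @ W) >= p_min for k in range(n)]
cons += [cp.trace(A[k] @ W) <= p_max for k in range(n)]
prob = cp.Problem(cp.Minimize(cp.trace(C @ W)), cons)
prob.solve()                             # optimal value lower-bounds the nonconvex OPF
print(prob.value)
```

Scaling this shape to 10,000+ buses is precisely what requires the sparsity-exploiting, numerically robust solvers the paper compares.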

25 citations


Journal ArticleDOI
TL;DR: This work integrates accurate thermodynamic and transport properties via artificial neural networks (ANNs), solves the design problems with MAiNGO in a reduced-space formulation, and shows that monoaromatic hydrocarbons are a promising group of WFs.
Abstract: The performance of an organic Rankine cycle (ORC) relies on process design and operation. Simultaneous optimization of design and operation for a range of working fluids (WFs) is therefore a promising approach for WF selection. For this, deterministic global process optimization can guarantee to identify a global optimum, in contrast to local or stochastic global solution approaches. However, how to provide accurate thermodynamic models for a large number of WFs while maintaining computational tractability of the resulting optimization problems remains an open research question. We integrate accurate thermodynamic and transport properties via artificial neural networks (ANNs) and solve the design problems with MAiNGO in a reduced-space formulation. We illustrate the approach for an ORC process for waste heat recovery of a diesel truck. After an automated preselection of 122 WFs, ANNs are automatically trained for the 37 selected WFs based on data retrieved from the thermodynamic library CoolProp. Then, we perform deterministic global optimization of design and operation for every WF individually. Therein, the trade-off between net power generation and investment cost is investigated by multiobjective optimization. Further, a thermoeconomic optimization finds a compromise between both objectives. The results show that, for the given conditions, monoaromatic hydrocarbons are a promising group of WFs. In future work, the proposed method and the trained ANNs can be applied to the design of a variety of energy processes.
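The surrogate step is easy to reproduce in miniature: sample a property from CoolProp and fit a small feed-forward ANN to it. A sketch assuming scikit-learn for the network; the fluid, grid, and architecture are illustrative, not the authors' exact setup:

```python
import numpy as np
from CoolProp.CoolProp import PropsSI
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Sample specific enthalpy h(T, p) of a candidate working fluid on a grid.
T = np.linspace(300.0, 500.0, 40)             # K
p = np.linspace(1e5, 20e5, 40)                # Pa
X = np.array([(t, pp) for t in T for pp in p])
y = np.array([PropsSI("H", "T", t, "P", pp, "Toluene") for t, pp in X])

scaler = StandardScaler().fit(X)
ann = MLPRegressor(hidden_layer_sizes=(16, 16), activation="tanh",
                   max_iter=5000, random_state=0)
ann.fit(scaler.transform(X), y)               # smooth surrogate of h(T, p)
```

A smooth, cheap-to-evaluate surrogate like this is what makes the property model tractable inside a deterministic global optimizer such as MAiNGO.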

22 citations


Journal ArticleDOI
TL;DR: It is observed that handling the Type-E objective provides a clear advantage in maximizing the line efficiency, and that allowing the multiplication of workstation capacity helps improve the line efficiency enormously.
Abstract: Recovering end-of-life (EOL) products helps companies reduce the purchasing cost for goods and materials that can be removed from EOL products and reused. This also contributes to efforts aiming at reducing the environmental consequences of hazardous materials. Disassembly lines play a vital role in the disassembling process of EOL products. This research introduces the Type-E multi-manned disassembly line balancing problem and proposes efficient linear and non-linear models to solve it. The main contribution of this work is the simultaneous optimization of two conflicting objectives, i.e., cycle time and the number of workstations, to maximize the efficiency of disassembly lines. Another contribution is that the workstations may operate in a multi-manned environment (with more than one worker in a workstation) under certain conditions to maximize the line efficiency. The problem is defined and modelled mathematically. Numerical examples illustrate the solutions. A comprehensive computational study is conducted to solve the test problems using the proposed models, and the results are compared to the literature. It is observed that handling the Type-E objective provides a clear advantage in maximizing the line efficiency. Furthermore, allowing the multiplication of workstation capacity helps improve the line efficiency enormously, thanks to the increased flexibility in assigning tasks to workstations.

20 citations


Journal ArticleDOI
TL;DR: This work introduces a new double projection method for solving variational inequalities in real Hilbert spaces and presents two convergence theorems: a weak convergence result, which requires pseudomonotonicity, Lipschitz continuity, and sequential weak continuity of the associated mapping, and a strong convergence theorem with a rate of convergence, which requires only Lipschitz continuity and strong pseudomonotonicity.
Abstract: In this work we are concerned with variational inequalities in real Hilbert spaces and introduce a new double projection method for solving them. The algorithm is motivated by the Korpelevich extragradient method, the subgradient extragradient method of Gibali et al., and Popov's method. The proposed scheme combines some of the advantages of these methods: first, it requires only one orthogonal projection onto the feasible set of the problem, while the next computation has a closed formula; second, only one mapping evaluation is required per iteration, and an adaptive step size rule avoids the need to know the Lipschitz constant of the associated mapping. We present two convergence theorems for the proposed method: a weak convergence result, which requires pseudomonotonicity, Lipschitz continuity, and sequential weak continuity of the associated mapping, and a strong convergence theorem with a rate of convergence, which requires only Lipschitz continuity and strong pseudomonotonicity. Preliminary numerical experiments and comparisons demonstrate the advantages and potential applicability of the new scheme.
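For reference, the subgradient extragradient method that the scheme builds on already has the "one projection onto C, one closed-formula projection" structure; the paper's method additionally reuses mapping values Popov-style so that only one evaluation of F is needed per iteration. A NumPy sketch of the two-evaluation ancestor with an adaptive step size (illustrative, not the paper's exact scheme):

```python
import numpy as np

def subgradient_extragradient(F, proj_C, x0, mu=0.5, lam=1.0, iters=200):
    """Solve VI(F, C): find x* in C with <F(x*), x - x*> >= 0 for all x in C."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Fx = F(x)
        y = proj_C(x - lam * Fx)     # the only projection onto the feasible set
        Fy = F(y)
        # Second step projects onto a halfspace, which has a closed formula.
        a = x - lam * Fx - y         # normal of T = {z : <a, z - y> <= 0}
        z = x - lam * Fy
        t = max(0.0, a @ (z - y)) / max(a @ a, 1e-16)
        # Adaptive step size: no Lipschitz constant of F is required.
        num, den = np.linalg.norm(x - y), np.linalg.norm(Fx - Fy)
        if den > 1e-16 and num > 0:
            lam = min(lam, mu * num / den)
        x = z - t * a
    return x

# Toy example: monotone affine map on the nonnegative orthant.
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
x_star = subgradient_extragradient(lambda v: M @ v,
                                   lambda v: np.maximum(v, 0.0), np.ones(2))
```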

20 citations


Journal ArticleDOI
TL;DR: A generic simulation–optimization framework is proposed that generates short-term production schedules and improves their adherence iteratively.
Abstract: Mine operations are supported by a short-term production schedule, which defines where and when mining activities are performed. However, deviations from this short-term production schedule can be observed because of several sources of uncertainty and the inherent complexity of the operation. Therefore, schedules that are more likely to be reproduced in reality should be generated so that they will have a high adherence when executed. Unfortunately, prior estimation of schedule adherence is difficult. To overcome this problem, we propose a generic simulation–optimization framework to generate short-term production schedules with improved adherence using an iterative approach. In each iteration of this framework, a short-term schedule is generated using a mixed-integer linear programming model, which is then simulated using a discrete-event simulation model. As a case study, we apply this approach to a real bench-and-fill mine, wherein we measure the discrepancies between the level of material movement in the schedule obtained from the optimization model and the average of the simulated schedules, using the mine schedule material adherence index. The values of this index decreased over the iterations, from 13.1% in the first iteration to 4.8% in the last. This improvement is explained by the fact that the effects of operational uncertainty can be considered within the optimization model by integrating the simulation. In conclusion, the proposed framework increases the adherence of the short-term schedules generated over iterations. Moreover, these increases in schedule adherence are not obtained at the expense of the Net Present Value.
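Schematically, the framework alternates between the two models until the adherence gap stabilizes. A minimal sketch, with `optimize_schedule` (the MILP) and `simulate_schedule` (the discrete-event model) as hypothetical stand-ins:

```python
def closed_loop_scheduling(optimize_schedule, simulate_schedule,
                           adherence_index, max_iters=10, tol=0.05):
    """Iterate: optimize a schedule, simulate it, feed performance back."""
    feedback = None           # e.g., effective dig rates observed in simulation
    plan, gap = None, float("inf")
    for _ in range(max_iters):
        plan = optimize_schedule(feedback)        # MILP short-term schedule
        runs = [simulate_schedule(plan) for _ in range(50)]   # replications
        gap = adherence_index(plan, runs)         # planned vs. simulated movement
        if gap <= tol:                            # e.g., the 4.8% reached above
            break
        feedback = runs       # operational uncertainty enters the next MILP
    return plan, gap
```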

19 citations


Journal ArticleDOI
TL;DR: A dynamic programming approach for deciding the feasibility of a booking in tree-shaped networks even for nonlinear flow models is presented and it turns out that the hardness of the problem mainly depends on the combination of the chosen physics model as well as the specific network structure under consideration.
Abstract: As a consequence of the liberalisation of the European gas market in the last decades, gas trading and transport have been decoupled. At the core of this decoupling are so-called bookings and nominations. Bookings are special long-term capacity right contracts that guarantee that a specified amount of gas can be supplied or withdrawn at certain entry or exit nodes of the network. These supplies and withdrawals are nominated on a day-ahead basis. The special property of bookings is that they need to be feasible, i.e., every nomination that complies with the given bookings can be transported. While checking the feasibility of a nomination can typically be done by solving a mixed-integer nonlinear feasibility problem, verifying the feasibility of a set of bookings is much harder. The reason is the robust nature of booking feasibility: for a set of bookings to be feasible, all compliant nominations, i.e., infinitely many, need to be checked for feasibility. In this paper, we consider the question of how to verify the feasibility of given bookings for a number of special cases. For our physics model we impose a steady-state potential-based flow model and disregard controllable network elements. For this case we derive a characterisation of feasible bookings, which is then used to show that the problem is in coNP for the general case but can be solved in polynomial time for linear potential-based flow models. Moreover, we present a dynamic programming approach for deciding the feasibility of a booking in tree-shaped networks even for nonlinear flow models. It turns out that the hardness of the problem mainly depends on the combination of the chosen physics model and the specific network structure under consideration. Thus, we give an overview of all settings for which the hardness of the problem is known and finally present a list of open problems.
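The tree-shaped case is easy to see concretely for a single nomination: on a tree, flow conservation fixes every arc flow, and the potentials then follow up to one additive constant, so feasibility reduces to an interval check. A sketch for a Weymouth-type law π_u − π_v = β·q|q| with potential bounds, assuming networkx (bounds and interface are illustrative; the hard part treated in the paper is quantifying over all compliant nominations):

```python
import networkx as nx

def nomination_feasible(tree, balance, beta, pi_lo, pi_hi):
    """tree: undirected nx.Graph; balance[v]: net supply at v, summing to zero;
    beta[(u, v)]: pressure-loss coefficient; pi_lo/pi_hi: potential bounds."""
    root = next(iter(tree.nodes))
    parent = {v: u for u, v in nx.dfs_edges(tree, root)}
    # Net balance of each subtree fixes the flow on the edge above it.
    subtree = dict(balance)
    for v in nx.dfs_postorder_nodes(tree, root):
        if v != root:
            subtree[parent[v]] += subtree[v]
    # Potentials relative to the root via pi_u - pi_v = beta * q * |q|.
    rel = {root: 0.0}
    for u, v in nx.dfs_edges(tree, root):
        q = -subtree[v]              # flow from u into v's subtree
        b = beta.get((u, v), beta.get((v, u)))
        rel[v] = rel[u] - b * q * abs(q)
    # Feasible iff one constant shift puts every potential within its bounds.
    lo = max(pi_lo[v] - rel[v] for v in tree.nodes)
    hi = min(pi_hi[v] - rel[v] for v in tree.nodes)
    return lo <= hi
```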

17 citations


Journal ArticleDOI
TL;DR: A hybrid optimization approach is developed for the multi-criteria optimal design of a compliant positioning platform for a nanoindentation tester; the platform mimics the biomechanical behavior of a beetle so as to allow a linear motion.
Abstract: This paper develops a hybrid optimization approach for the multi-criteria optimal design of a compliant positioning platform for a nanoindentation tester. The platform mimics the biomechanical behavior of a beetle so as to allow a linear motion. The structure of the beetle-like mechanism consists of six legs arranged in a symmetric topology. The amplification ratio and static characteristics of the platform are analyzed by finite element analysis (FEA). To improve the performance of the platform, its main geometric parameters are optimized by an efficient hybrid approach combining the Taguchi method (TM), response surface methodology (RSM), an improved adaptive neuro-fuzzy inference system (ANFIS), and teaching-learning-based optimization (TLBO). Numerical data are collected by integrating the RSM and FEA. Signal-to-noise ratios are determined and the weight factor of each response is calculated. Suitable ANFIS parameters are optimized through the TM. The results show that trapezoidal-shaped membership functions (MFs) are the best type for the safety factor and the displacement. The optimal ANFIS parameters for the safety factor and the displacement were determined as four input MFs, trapezoidal MFs (trapmf), the hybrid learning method, and linear output MFs. Based on the improved ANFIS models, the TLBO algorithm is utilized to solve the multi-objective optimization. Analyses of variance and sensitivity are conducted to determine the significant effects of the design factors on the responses. The simulated and experimental validations are in good agreement with the predicted results.

Journal ArticleDOI
TL;DR: This paper demonstrates a generalized optimization framework with rapid design space exploration capabilities in which a multifidelity approach is directly adjusted to the emerging needs of the design; the methodology is developed to be easily applicable and efficient in computationally expensive multidisciplinary problems.
Abstract: The advantages of multidisciplinary design are well understood, but not yet fully adopted by the industry, where methods should be both fast and reliable. For such problems, minimum computational cost while providing global optimality and extensive design information at an early conceptual stage is desired. However, such a complex problem, consisting of various objectives and interacting disciplines, is associated with a challenging design space. This provides a large pool of possible designs, requiring an efficient exploration scheme with the ability to provide sufficient feedback early in the design process. This paper demonstrates a generalized optimization framework with rapid design space exploration capabilities in which a multifidelity approach is directly adjusted to the emerging needs of the design. The methodology is developed to be easily applicable and efficient in computationally expensive multidisciplinary problems. To accelerate such a demanding process, surrogate-based optimization methods in the form of both radial basis function and Kriging models are employed. In particular, a modification of the standard Kriging approach to account for multifidelity data inputs is proposed, aiming to increase its accuracy without increasing its training cost. The surrogate optimization problem is solved by a particle swarm optimization algorithm, and two constraint handling methods are implemented. The surrogate model modifications are visually demonstrated in a 1D and a 2D test case, while the Rosenbrock and Sellar functions are used to examine the scalability and adaptability of the method. Our particular multiobjective formulation is demonstrated on the common RAE2822 airfoil design problem. In this paper, the framework assessment focuses on our infill sampling approach in terms of design and objective space exploration for a given computational cost.

Journal ArticleDOI
TL;DR: The optimization approach in this paper provides a theoretical foundation for next-generation Supervisory Control and Data Acquisition in support of Dynamic Monitoring and Decision Systems (DyMonDS) for a multi-layered interactive market implementation in which the grid users follow their sub-objectives and the higher layers coordinate interconnected sub-systems and the high-level system objectives.
Abstract: The ideas in this paper are motivated by an increased need for systematic data-enabled resource management of large-scale electric energy systems. The basic control objective is to manage uncertain disturbances, power imbalances in particular, by optimizing available power resources. To that end, we start with a centralized optimal control problem formulation of a system-level performance objective subject to complex interconnection constraints and constraints representing highly heterogeneous internal dynamics of system components. To manage spatial complexity, an inherent multi-layered structure is utilized by modeling interconnection constraints in terms of unified power variables and their dynamics. Similarly, the internal dynamics of components and sub-systems (modules), including their primary automated feedback control, are modeled so that their input–output characterization is also expressed in terms of power variables. This representation is shown to be key to managing the multi-spatial complexity of the problem. In this unifying energy/power state space, the system constraints are all fundamentally convex, resulting in a convex dynamic optimization problem for the typically utilized quadratic cost functions. Based on this, an interactive multi-layered modeling and control method is introduced. While the approach is fundamentally based on the primal–dual decomposition of the centralized problem, this is formulated for the first time for the coupled real-reactive power problem. It is also proposed for the first time to utilize sensitivity functions of distributed agents for solving the primal distributed problem. The iterative communication typically required for point-wise information exchange to converge is replaced by the distributed optimization embedded in the modules when creating these functions. A theoretical proof of the convergence claim is given. Notably, the inherent multi-temporal complexity is managed by performing model predictive control (MPC)-based decision making when solving distributed primal problems. The formulation enables distributed decision-makers to value uncertainties and related risks according to their preferences. Ultimately, the distributed decision making results in creating a bid function to be used at the coordinating market-clearing level. The optimization approach in this paper provides a theoretical foundation for next-generation Supervisory Control and Data Acquisition (SCADA) in support of Dynamic Monitoring and Decision Systems (DyMonDS) for a multi-layered interactive market implementation in which the grid users follow their sub-objectives and the higher layers coordinate interconnected sub-systems and the high-level system objectives. This forms a theoretically sound basis for designing IT-enabled protocols for secure operations, planning, and markets.

Journal ArticleDOI
TL;DR: The paper presents a generative design approach, particularly for simulation-driven designs, using a genetic algorithm (GA), which is structured based on a novel offspring selection strategy, which outperforms the baseline GA selection techniques, such as tournament and ranking selections.
Abstract: The paper presents a generative design approach, particularly for simulation-driven designs, using a genetic algorithm (GA) structured around a novel offspring selection strategy. The proposed selection approach commences by enumerating the offsprings generated from the selected parents. Afterwards, a set of eminent offsprings is selected from the enumerated ones based on the following merit criteria: space-fillingness, to generate as many distinct offsprings as possible; resemblance/non-resemblance of offsprings to the good/bad individuals; non-collapsingness, to produce diverse simulation results; and constraint-handling, for the selection of offsprings satisfying design constraints. The selection problem itself is formulated as a multi-objective optimization problem. A greedy technique is employed based on non-dominated sorting, pruning, and selecting a representative solution. According to the experiments performed using three different application scenarios, namely simulation-driven product design, mechanical design and user-centred product design, the proposed selection technique outperforms baseline GA selection techniques such as tournament and ranking selections.

Journal ArticleDOI
TL;DR: In this paper, a mixed-integer programming model for solving the long-term planning problem of an underground mine is presented; it establishes the sequence of mining for a horizon of 20 years and determines which lenses of the geological model will be mined and in what order, while respecting the operational constraints.
Abstract: We present a mixed-integer programming model for solving the long-term planning problem of an underground mine. This model, which establishes the sequence of mining for a horizon of 20 years, determines which lenses of the geological model will be mined and in what order, while respecting the operational constraints. For each lens to be mined, a specific cut-off grade has to be selected to maximize the net present value. The choice of a cut-off grade affects the volume and the average grade of each lens, which increases the size of the problems to be solved. To reduce the computation time, different acceleration strategies and a Fix-and-Optimize heuristic are proposed. Computational experiments on instances of different sizes are performed to (1) assess the quality of the solution found by each method and (2) present the impact of the variable cut-off grade.
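A Fix-and-Optimize heuristic of the kind proposed here repeatedly re-optimizes a small window of binary decisions while all other binaries stay fixed at their incumbent values. A generic sketch, with `solve_mip` a hypothetical stand-in for the mine-planning MIP restricted by the fixings:

```python
def fix_and_optimize(periods, solve_mip, incumbent, incumbent_value,
                     window=3, passes=2):
    """incumbent: dict (lens, period) -> 0/1 decision; incumbent_value: its NPV."""
    best, best_val = dict(incumbent), incumbent_value
    for _ in range(passes):
        for start in range(len(periods) - window + 1):
            free = set(periods[start:start + window])
            # Fix every binary outside the window to its incumbent value.
            fixed = {k: v for k, v in best.items() if k[1] not in free}
            candidate, val = solve_mip(fixed)      # small, fast sub-MIP
            if val > best_val:                     # keep improvements (max NPV)
                best, best_val = dict(candidate), val
    return best, best_val
```

Because each sub-MIP frees only a few periods, it solves quickly, which is what makes this kind of heuristic effective on the larger variable-cut-off-grade instances.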

Journal ArticleDOI
TL;DR: An adaptive optimization scheme for multi-period production scheduling in open-pit mining under geological uncertainty that allows us to solve practical instances of the problem and produces an operational policy that reduces the risk of the production schedule.
Abstract: Mine planning optimization aims at maximizing the profit obtained from extracting valuable ore. Beyond its theoretical complexity—the open-pit mining problem with capacity constraints reduces to a knapsack problem with precedence constraints, which is NP-hard—practical instances of the problem usually involve a large to very large number of decision variables, typically of the order of millions for large mines. Additionally, any comprehensive approach to mine planning ought to consider the underlying geostatistical uncertainty as only limited information obtained from drill hole samples of the mineral is initially available. In this regard, as blocks are extracted sequentially, information about the ore grades of blocks yet to be extracted changes based on the blocks that have already been mined. Thus, the problem lies in the class of multi-period large scale stochastic optimization problems with decision-dependent information uncertainty. Such problems are exceedingly hard to solve, so approximations are required. This paper presents an adaptive optimization scheme for multi-period production scheduling in open-pit mining under geological uncertainty that allows us to solve practical instances of the problem. Our approach is based on a rolling-horizon adaptive optimization framework that learns from new information that becomes available as blocks are mined. By considering the evolution of geostatistical uncertainty, the proposed optimization framework produces an operational policy that reduces the risk of the production schedule. Our numerical tests with mines of moderate sizes show that our rolling horizon adaptive policy gives consistently better results than a non-adaptive stochastic optimization formulation, for a range of realistic problem instances.

Journal ArticleDOI
TL;DR: A novel solution strategy based on multi-horizon scenario trees is proposed; the results confirm the interest of the approach, particularly regarding more efficient management of hydro-plants, because non-convex operational regions are considered by the model.
Abstract: For an electric power mix subject to uncertainty, the stochastic unit-commitment problem finds short-term optimal generation schedules that satisfy several system-wide constraints. In regulated electricity markets, this very practical and important problem is used by the system operator to decide when each unit is to be started or stopped, and to define how to generate enough energy to meet the load. For hydro-dominated systems, an accurate description of the hydro-production function involves non-convex relations. This feature, combined with the fine time discretization needed to represent uncertainty of renewable generation, yields a large-scale mathematical optimization model that is nonlinear and has mixed-integer variables. To make the problem tractable, a novel solution strategy, based on multi-horizon scenario trees, is proposed. The approach deals in a first level with the integer decision variables representing whether units are on or off. Once units are committed, the expected operational cost is minimized by solving a continuous second-level problem, which is separable by scenarios. The coordination between the two decision levels is done by means of a bundle-like variant of Benders decomposition that proves very efficient for the considered setting. To assess the quality of the optimal commitment on out-of-sample scenarios, a new simulation technique, based on a certain sustainable pseudo-distance, is proposed. For the numerical experiments, a mix of hydro, thermal, and wind power plants extracted from the Brazilian power system is considered. The results confirm the interest of the approach, particularly regarding more efficient management of hydro-plants, because non-convex operational regions are considered by the model.

Journal ArticleDOI
TL;DR: Equivalent formulations of the QP problem are proposed with the intent of them being more amenable to the considered methods, and methods are tested and results are compared for a number of aircraft assembly simulation problems.
Abstract: A special class of quadratic programming (QP) problems is considered in this paper. This class emerges in the simulation of the assembly of large-scale compliant parts, which involves the formulation and solution of contact problems. The considered QP problems can have up to 20,000 unknowns; the Hessian matrix is fully populated and ill-conditioned, while the matrix of constraints is sparse. Variation analysis and optimization of the assembly process usually require massive computations of QP problems with slightly different input data. The following optimization methods are adapted to account for the particular features of the assembly problem: an interior point method, an active-set method, a Newton projection method, and a pivotal algorithm for linear complementarity problems. Equivalent formulations of the QP problem are proposed with the intent of making them more amenable to the considered methods. The methods are tested and the results are compared for a number of aircraft assembly simulation problems.
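Of the adapted methods, the Newton projection method is the easiest to sketch in the bound-constrained case: a projected gradient step guesses the active set, then a Newton step is taken in the free variables only. Illustrative NumPy sketch (the paper's QPs also carry sparse linear constraints, which this ignores):

```python
import numpy as np

def newton_projection(H, c, lo, hi, x0, iters=50, tol=1e-8):
    """min 0.5 x'Hx + c'x  s.t.  lo <= x <= hi,  H symmetric positive definite."""
    x = np.clip(x0, lo, hi)
    for _ in range(iters):
        g = H @ x + c
        # Active set: variables at a bound with the gradient pushing outward.
        active = ((x <= lo + 1e-12) & (g > 0)) | ((x >= hi - 1e-12) & (g < 0))
        free = ~active
        if np.linalg.norm(g[free]) < tol:
            break
        # Newton step restricted to the free variables.
        d = np.zeros_like(x)
        d[free] = -np.linalg.solve(H[np.ix_(free, free)], g[free])
        x = np.clip(x + d, lo, hi)      # project the step back into the box
    return x
```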

Journal ArticleDOI
TL;DR: An optimal control problem (OCP) governed by the NTVD system and subject to continuous state inequality constraints arising from engineering specifications is proposed, where the time-varying function is the control function to be chosen such that system cost and system robustness are optimized.
Abstract: In this paper, we consider a nonlinear time-varying dynamical (NTVD) system with uncertain system parameters assigned to their nominal values in batch culture of glycerol bioconversion to 1,3-propanediol induced by Klebsiella pneumoniae. Some important properties of the NTVD system are discussed. Our goal is to choose a time-varying function for the NTVD system. Thus, an optimal control problem (OCP) governed by the NTVD system and subject to continuous state inequality constraints arising from engineering specifications is proposed, where the time-varying function is the control function to be chosen such that system cost (the relative error between experimental data and the simulated output of the system) and system robustness (robustness of the system with respect to the uncertain system parameters) are optimized. Based on the actual fermentation process, the time-varying function is specified as a four-piece piecewise linear function with unknown kinetic parameters and switching instants. The resulting OCP is approximated as a sequence of nonlinear mathematical programming subproblems by the time-scaling transformation, the constraint transcription and the locally smoothing approximation techniques. A parallel global optimization algorithm, based on a novel combination of limited-information particle swarm optimization and a local search strategy, is then developed to solve these subproblems. Numerical results show the effectiveness and applicability of our proposed algorithm.

Journal ArticleDOI
TL;DR: This work proposes an FPT(k) approximation algorithm with a performance guarantee of 69+ε for HCKM instances, a problem known to be at least APX-hard.
Abstract: Hard-capacitated k-means (HCKM) is one of the fundamental problems remaining open in combinatorial optimization and engineering. In HCKM, one is required to partition a given n-point set into k disjoint clusters with known capacities so as to minimize the sum of within-cluster variances. It is known to be at least APX-hard, and most of the work on it has been done from a metaheuristic or bi-criteria approximation perspective. To the best of our knowledge, no constant-factor approximation algorithm, or existence proof of such an algorithm, is known. As our main contribution, we propose an FPT(k) approximation algorithm with a constant performance guarantee for HCKM in this paper.

Journal ArticleDOI
TL;DR: The surrogate management framework is extended to incorporate concurrent evaluations at the SEARCH step by comparing two different infill approaches: single search multiple error sampling and expected improvement constant liar approaches, which outperform their single-infill counterparts for a fixed computational time budget on bound constrained problems.
Abstract: The surrogate management framework (SMF) is an effective approach for derivative-free optimization of expensive objective functions. The SMF is typically comprised of surrogate-based infill methods (SEARCH step) coupled to pattern search optimization (POLL step). Although the latter is easy to parallelize, parallelization of the SEARCH step requires surrogate-based strategies that generate multiple candidates at each iteration. The impact of such SEARCH methods on SMF performance remains poorly explored. In this paper, we extend the SMF to incorporate concurrent evaluations at the SEARCH step by comparing two different infill approaches: single search multiple error sampling and expected improvement constant liar approaches. These variants are generalized to address non-linearly constrained problems by the filter method. The proposed methods are benchmarked for different infill sizes, while accounting for the variability in initialization. We then demonstrate the proposed methods on two shape optimization problems motivated by hemodynamically-driven surgical design. Surrogate-based multiple-infill strategies outperform their single-infill counterparts for a fixed computational time budget on bound constrained problems. Insights drawn from this study have implications not only on future instances of the SMF, but also for other surrogate-based and hybrid parallel infill methods for derivative-free optimization.
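The expected improvement constant liar approach is compact to state: candidates are picked one at a time by expected improvement (EI), and after each pick the surrogate is refit with a fixed "lie" (here the incumbent best value) standing in for the not-yet-available response. A sketch with a scikit-learn Gaussian process; kernel, lie value, and batch size are illustrative:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(gp, Xcand, f_best):
    mu, sd = gp.predict(Xcand, return_std=True)
    sd = np.maximum(sd, 1e-12)
    z = (f_best - mu) / sd                    # minimization convention
    return sd * (z * norm.cdf(z) + norm.pdf(z))

def constant_liar_batch(X, y, Xcand, q=4):
    """Pick q points to evaluate concurrently at the next SEARCH step."""
    X, y = X.copy(), y.copy()
    batch = []
    for _ in range(q):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        pick = int(np.argmax(expected_improvement(gp, Xcand, y.min())))
        batch.append(Xcand[pick])
        # The "lie": pretend the new point returned the incumbent best value.
        X = np.vstack([X, Xcand[pick]])
        y = np.append(y, y.min())
        Xcand = np.delete(Xcand, pick, axis=0)
    return np.array(batch)
```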

Journal ArticleDOI
TL;DR: Given an area of the ocean that should be endowed with a sonar system for surveillance, this paper formulates two natural sensor placement problems and considers two different sensor models: definite range (“cookie-cutter”) and probabilistic.
Abstract: A multistatic sonar system consists of one or more sources that are able to emit underwater sound, and receivers that listen to the reflected sound waves. Knowing the speed of sound in water, the time when the sound was sent from a source, and the arrival time of the sound at one or more receivers, it is possible to determine the location of surrounding objects. The propagation of underwater sound is a complex phenomenon that depends on various attributes of the water (density, pressure, temperature, and salinity) and the emitted sound (pulse length and volume), as well as the reflection properties of the water’s surface. These effects can be approximated by nonlinear equations. Furthermore, natural obstacles in the water, such as the coastline, need to be taken into consideration. Given an area of the ocean that should be endowed with a sonar system for surveillance, this paper formulates two natural sensor placement problems. In the first, the goal is to maximize the area covered by a fixed number of sources and receivers. In the second, the goal is to cover the entire area with a minimum-cost set of equipment. For each problem, this paper considers two different sensor models: definite range (“cookie-cutter”) and probabilistic. It thus addresses four problem variants using integer nonlinear formulations. Each variant can be reformulated as an integer linear program in one of several ways; this paper discusses these reformulations, then compares them numerically using a test bed from coastlines around the world and a state-of-the-art mixed-integer program solver (IBM ILOG CPLEX).
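The definite-range variant admits a compact integer linear formulation once the bilinear "source placed and receiver placed" condition is linearized with pair variables, which is one of the reformulation ideas the paper compares. A PuLP sketch of the area-maximization variant, where `cover[j]` lists the source/receiver position pairs that jointly cover grid point j (the interface is illustrative):

```python
import pulp

def max_coverage(sources, receivers, points, cover, n_src, n_rcv):
    m = pulp.LpProblem("sonar_coverage", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("x", sources, cat="Binary")     # place source
    y = pulp.LpVariable.dicts("y", receivers, cat="Binary")   # place receiver
    pairs = [(s, r) for s in sources for r in receivers]
    w = pulp.LpVariable.dicts("w", pairs, lowBound=0)         # x_s AND y_r
    z = pulp.LpVariable.dicts("z", points, cat="Binary")      # point covered
    m += pulp.lpSum(z[j] for j in points)                     # covered area
    m += pulp.lpSum(x[s] for s in sources) <= n_src           # equipment budget
    m += pulp.lpSum(y[r] for r in receivers) <= n_rcv
    for s, r in pairs:                                        # linearize x_s * y_r
        m += w[(s, r)] <= x[s]
        m += w[(s, r)] <= y[r]
    for j in points:                                          # coverage logic
        m += z[j] <= pulp.lpSum(w[p] for p in cover[j])
    m.solve()
    return ([s for s in sources if x[s].value() > 0.5],
            [r for r in receivers if y[r].value() > 0.5])
```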

Journal ArticleDOI
TL;DR: In this paper, the authors consider the simultaneous optimization of the reliability and the cost of a ceramic component in a bi-objective PDE constrained shape optimization problem and propose a probabilistic Weibull-type model to assess the probability of failure of the component under tensile load.
Abstract: We consider the simultaneous optimization of the reliability and the cost of a ceramic component in a biobjective PDE-constrained shape optimization problem. A probabilistic Weibull-type model is used to assess the probability of failure of the component under tensile load, while the cost is assumed to be proportional to the volume of the component. Two different gradient-based optimization methods are suggested and compared on 2D test cases. The numerical implementation is based on a first-discretize-then-optimize strategy and benefits from efficient gradient computations using adjoint equations. The resulting approximations of the Pareto front nicely exhibit the trade-off between reliability and cost and give rise to innovative shapes that compromise between these conflicting objectives.

Journal ArticleDOI
TL;DR: A risk-neutral portfolio selection model that can be cast as a multistage stochastic second-order cone program is proposed; the formulation is motivated by the impact of market impact costs on large portfolio rebalancing operations.
Abstract: We define a regularized variant of the dual dynamic programming algorithm called DDP-REG to solve nonlinear dynamic programming equations. We extend the algorithm to solve nonlinear stochastic dynamic programming equations. The corresponding algorithm, called SDDP-REG, can be seen as an extension of a recently introduced regularization of the stochastic dual dynamic programming (SDDP) algorithm, which was studied for linear problems only and with less general prox-centers. We show the convergence of DDP-REG and SDDP-REG. We assess the performance of DDP-REG and SDDP-REG on portfolio models with direct transaction and market impact costs. In particular, we propose a risk-neutral portfolio selection model that can be cast as a multistage stochastic second-order cone program. The formulation is motivated by the impact of market impact costs on large portfolio rebalancing operations. Numerical simulations show that DDP-REG is much quicker than DDP on all problem instances considered (up to 184 times quicker than DDP) and that SDDP-REG is quicker on the tested instances of portfolio selection problems with market impact costs and much faster on the implemented instance of a risk-neutral multistage stochastic linear program (8.2 times faster).

Journal ArticleDOI
TL;DR: A novel nonconvex and nonsmooth reformulation of the original NP-hard RPCA model is proposed that adds a redundant semidefinite cone constraint and solves small subproblems using a PALM algorithm.
Abstract: We introduce a novel approach for robust principal component analysis (RPCA) for a partially observed data matrix. The aim is to recover the data matrix as a sum of a low-rank matrix and a sparse matrix so as to eliminate erratic noise (outliers). This problem is known to be NP-hard in general. A classical approach to solving RPCA is to consider convex relaxations. One such heuristic involves the minimization of the (weighted) sum of a nuclear norm part, which promotes a low-rank component, and an ℓ1-norm part, which promotes a sparse component. This results in a well-structured convex problem that can be efficiently solved by modern first-order methods. However, first-order methods often yield low-accuracy solutions. Moreover, the heuristic of using a norm consisting of a weighted sum of norms may lose some of the advantages that each norm had when used separately. In this paper, we propose a novel nonconvex and nonsmooth reformulation of the original NP-hard RPCA model. The new model adds a redundant semidefinite cone constraint and solves small subproblems using a PALM algorithm. Each subproblem yields an exposing vector for a facial reduction technique that is able to reduce the problem size significantly. This makes the problem amenable to efficient algorithms that obtain high-accuracy solutions. We include numerical results that confirm the efficacy of our approach.
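The convex baseline the paper improves upon is itself only a few lines: a CVXPY sketch of the weighted nuclear-norm-plus-ℓ1 model for a partially observed matrix, with `mask` marking observed entries and the weight following the common 1/√max(m, n) convention (both choices illustrative):

```python
import cvxpy as cp
import numpy as np

def convex_rpca(M, mask, lam=None):
    """Split observed entries of M into low-rank L plus sparse S."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    L = cp.Variable((m, n))
    S = cp.Variable((m, n))
    obj = cp.normNuc(L) + lam * cp.norm1(S)          # low-rank + sparse
    cons = [cp.multiply(mask, L + S) == cp.multiply(mask, M)]
    cp.Problem(cp.Minimize(obj), cons).solve()
    return L.value, S.value
```

It is the limited accuracy of first-order solutions to exactly this model that motivates the nonconvex facial-reduction reformulation above.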

Journal ArticleDOI
TL;DR: A new approach for production optimization in the context of closed-loop reservoir management (CLRM) is introduced that considers the impact of future measurements within the optimization framework, anticipating that additional information will become available in the future.
Abstract: The exploitation of subsurface hydrocarbon reservoirs is achieved through the control of production and injection wells (i.e., by prescribing time-varying pressures and flow rates) to create conditions that make the hydrocarbons trapped in the pores of the rock formation flow to the surface. The design of production strategies to exploit these reservoirs in the most efficient way requires an optimization framework that reflects the nature of the operational decisions and geological uncertainties involved. This paper introduces a new approach for production optimization in the context of closed-loop reservoir management (CLRM) by considering the impact of future measurements within the optimization framework. CLRM enables instrumented oil fields to be operated more efficiently through the systematic use of life-cycle production optimization and computer-assisted history matching. Recently, we have proposed a methodology to assess the value of information (VOI) of measurements in such a CLRM approach a priori, i.e., during the field development planning phase, to improve the planned history matching component of CLRM. The reasoning behind the a priori VOI analysis unveils an opportunity to also improve our approach to the production optimization problem by anticipating the fact that additional information (e.g., production measurements) will become available in the future. Here, we show how the more conventional optimization approach can be combined with VOI considerations to come up with a novel workflow, which we refer to as informed production optimization. We illustrate the concept with a simple water flooding problem in a two-dimensional five-spot reservoir, and the results obtained confirm that this new approach can lead to significantly better decisions in some cases.

Journal ArticleDOI
TL;DR: A novel online portfolio strategy by aggregating expert advice using the weak aggregating algorithm that outperforms all expert strategies in the pool besides best expert strategy and performs almost as well asbest expert strategy.
Abstract: This paper concerns the online portfolio selection problem, in which no statistical assumptions are made about future asset prices. Although existing universal portfolio strategies have been shown to achieve good performance, it is difficult, indeed practically impossible, to determine upfront which strategy will achieve the maximum final cumulative wealth for online portfolio selection tasks. This paper proposes a novel online portfolio strategy that aggregates expert advice using the weak aggregating algorithm. We consider a pool of universal portfolio strategies as experts and compute the portfolio by aggregating the portfolios suggested by these expert strategies according to their previous performance. Through our analysis, we establish theoretical results and illustrate empirical performance. We theoretically prove that our strategy is universal, i.e., it asymptotically performs almost as well as the best constant rebalanced portfolio determined in hindsight. We also conduct extensive experiments to illustrate the effectiveness of the proposed strategy using daily stock data collected from the American and Chinese stock markets. Numerical results show that the proposed strategy outperforms all expert strategies in the pool other than the best expert strategy, and performs almost as well as the best expert strategy.
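One common form of the weight update, sketched below on daily price relatives: each expert's weight grows exponentially with its cumulative log-wealth, damped by √t, and the played portfolio is the weighted average of the experts' portfolios. The damping constant and the interface are illustrative, not the paper's exact scheme:

```python
import numpy as np

def waa_portfolio(expert_portfolios, price_relatives, c=1.0):
    """expert_portfolios: (T, K, n) array, K experts over n assets (simplex rows);
    price_relatives: (T, n) array of day-over-day price ratios."""
    T, K, n = expert_portfolios.shape
    log_wealth = np.zeros(K)                  # cumulative log-gain per expert
    wealth = 1.0
    for t in range(T):
        w = np.exp(c * log_wealth / np.sqrt(t + 1))
        w /= w.sum()                          # expert weights at time t
        b = w @ expert_portfolios[t]          # aggregated portfolio (on simplex)
        wealth *= b @ price_relatives[t]      # realized daily growth
        log_wealth += np.log(expert_portfolios[t] @ price_relatives[t])
    return wealth
```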

Journal ArticleDOI
TL;DR: A new reformulation of the joint rectangular chance constrained geometric programs where the random parameters are elliptically distributed and pairwise independent is presented and new convex approximations based on the variable transformation together with piecewise linear approximation methods are proposed.
Abstract: This paper discusses joint rectangular chance or probabilistic constrained geometric programs. We present a new reformulation of the joint rectangular chance constrained geometric programs where the random parameters are elliptically distributed and pairwise independent. As this reformulation is not convex, we propose new convex approximations based on the variable transformation together with piecewise linear approximation methods. For the latter, we provide a theoretical bound for the number of segments in the worst case. Our numerical results show that our approximations are asymptotically tight.

Journal ArticleDOI
TL;DR: The benefit of the relaxations is demonstrated in three case studies on the design of bottoming cycles of combined cycle power plants using the open-source deterministic global solver MAiNGO; the derived relaxations can make design problems tractable for global optimization.
Abstract: The IAPWS-IF97 (Wagner et al. (2000) J Eng Gas Turbines Power 122:150) is the state-of-the-art model for the thermodynamic properties of water and steam for industrial applications and is routinely used for simulations of steam power cycles and utility systems. Its use in optimization-based design, however, has been limited because of its complexity. In particular, deterministic global optimization of problems with the IAPWS-IF97 is challenging because general-purpose methods lead to rather weak convex and concave relaxations, thus resulting in slow convergence. Furthermore, the original domains of many functions from the IAPWS-IF97 are nonconvex, while common global solvers construct relaxations over rectangular domains. Outside the original domains, however, many of the functions take very large values that lead to even weaker relaxations. Therefore, we develop tighter relaxations of relevant functions from the IAPWS-IF97 on the basis of an analysis of their monotonicity and convexity properties. We modify the functions outside their original domains to enable tighter relaxations, while we keep them unchanged on their original domains where they have physical meaning. We discuss the benefit of the relaxations for three case studies on the design of bottoming cycles of combined cycle power plants using our open-source deterministic global solver MAiNGO. The derived relaxations result in drastic reductions in computational time compared with McCormick relaxations and can make design problems tractable for global optimization.
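For context, the McCormick relaxations used as the baseline replace each bilinear term w = x·y over a box with four linear inequalities; the tailored relaxations above aim to shrink exactly this kind of gap for the IAPWS-IF97 functions. A minimal illustration of the envelope itself:

```python
def mccormick_bounds(x, y, xl, xu, yl, yu):
    """Convex under- and concave over-estimator of x*y on [xl, xu] x [yl, yu]."""
    under = max(xl * y + yl * x - xl * yl,    # w >= both of these
                xu * y + yu * x - xu * yu)
    over = min(xu * y + yl * x - xu * yl,     # w <= both of these
               xl * y + yu * x - xl * yu)
    return under, over

# For any (x, y) in the box, under <= x*y <= over, with equality at the corners;
# tighter, monotonicity-aware relaxations shrink this gap and thus speed up
# branch-and-bound in a global solver such as MAiNGO.
```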

Journal ArticleDOI
TL;DR: A new efficient twice-parametric kernel function is introduced that combines the classic parametric function with the parametric kernel function with trigonometric barrier term given by Bouafia et al. to develop primal–dual interior-point algorithms for solving linear programming problems.
Abstract: Recently, Bouafia et al. (J Optim Theory Appl 170:528–545, 2016) investigated a new efficient kernel function that differs from self-regular kernel functions; the kernel function has a trigonometric barrier term. This paper introduces a new efficient twice-parametric kernel function that combines the classic parametric function with the parametric kernel function with trigonometric barrier term given by Bouafia et al. (J Optim Theory Appl 170:528–545, 2016) to develop primal–dual interior-point algorithms for solving linear programming problems. In addition, we obtain the best complexity bound for large- and small-update primal–dual interior-point methods. This complexity estimate improves results obtained in Li and Zhang (Oper Res Lett 43(5):471–475, 2015) and Peyghami and Hafshejani (Numer Algorithms 67:33–48, 2014) and matches the best bound obtained in Bai et al. (J Glob Optim 54:353–366) and Peng et al. (J Comput Technol 6:61–80, 2001). Finally, our numerical experiments on some test problems confirm that the new kernel function has promising applications compared to the kernel function given by Fathi-Hafshejani (Optimization 67(10):1605–1630, 2018).