
Showing papers in "Optimization and Engineering in 2021"


Journal ArticleDOI
TL;DR: In this article, the authors consider 8 different optimization formulations for computing a single sparse loading vector, obtained by combining the following factors: two norms for measuring variance (L2, L1) and two sparsity-inducing norms (L0, L1), each used in two different ways (constraint, penalty); they give a unifying reformulation which is solved via a natural alternating maximization (AM) method.
Abstract: Given a multivariate data set, sparse principal component analysis (SPCA) aims to extract several linear combinations of the variables that together explain the variance in the data as much as possible, while controlling the number of nonzero loadings in these combinations. In this paper we consider 8 different optimization formulations for computing a single sparse loading vector; these are obtained by combining the following factors: we employ two norms for measuring variance (L2, L1) and two sparsity-inducing norms (L0, L1), which are used in two different ways (constraint, penalty). Three of our formulations, notably the one with L0 constraint and L1 variance, have not been considered in the literature. We give a unifying reformulation which we propose to solve via a natural alternating maximization (AM) method.
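The L0-constrained, L2-variance member of this family admits a particularly simple AM-style iteration: a truncated power method that alternates a power step with a hard-threshold to the s largest loadings. The sketch below is an illustrative stand-in on assumed synthetic data, not the authors' exact algorithm:

```python
import numpy as np

def sparse_pc_l0(A, s, iters=200, seed=0):
    """Truncated power iteration for  max x^T A x  s.t.  ||x||_2 = 1, ||x||_0 <= s.

    An AM-style scheme for the L0-constraint / L2-variance formulation;
    the paper's unified AM method covers this and seven related variants."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = A @ x                             # power (gradient) step
        keep = np.argsort(np.abs(y))[-s:]     # hard-threshold to s largest loadings
        z = np.zeros(n)
        z[keep] = y[keep]
        x = z / np.linalg.norm(z)             # renormalize onto the unit sphere
    return x

# toy covariance whose leading direction is concentrated on variable 0
rng = np.random.default_rng(1)
B = rng.standard_normal((50, 10))
A = B.T @ B / 50
A[0, 0] += 5.0
x = sparse_pc_l0(A, s=2)
```

The returned loading vector is unit-norm with at most s nonzero entries, and here loads mostly on the dominant variable.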

35 citations


Journal ArticleDOI
TL;DR: The performance of Bayesian Optimization with Deep Gaussian Processes is assessed on analytical test cases and aerospace design optimization problems and compared to state-of-the-art stationary and non-stationary Bayesian Optimization approaches.
Abstract: Bayesian Optimization using Gaussian Processes is a popular approach to deal with optimization involving expensive black-box functions. However, because of the assumption on the stationarity of the covariance function defined in classic Gaussian Processes, this method may not be adapted for non-stationary functions involved in the optimization problem. To overcome this issue, Deep Gaussian Processes can be used as surrogate models instead of classic Gaussian Processes. This modeling technique increases the power of representation to capture the non-stationarity by considering a functional composition of stationary Gaussian Processes, providing a multiple layer structure. This paper investigates the application of Deep Gaussian Processes within Bayesian Optimization context. The specificities of this optimization method are discussed and highlighted with academic test cases. The performance of Bayesian Optimization with Deep Gaussian Processes is assessed on analytical test cases and aerospace design optimization problems and compared to the state-of-the-art stationary and non-stationary Bayesian Optimization approaches.
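As background, the stationary-GP baseline that Deep Gaussian Processes extend can be sketched in a few lines. The toy 1-D objective, squared-exponential kernel, and lower-confidence-bound acquisition below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Minimal Bayesian optimization loop with a stationary GP surrogate:
# squared-exponential kernel, grid acquisition by lower confidence bound (LCB).
def k(a, b, ls=0.3):
    """Squared-exponential (stationary) covariance between 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

f = lambda x: np.sin(3 * x) + 0.5 * x          # cheap stand-in for a black box
grid = np.linspace(-2, 2, 401)
X = np.array([-1.5, 0.0, 1.5])                 # initial design
y = f(X)
for _ in range(10):
    K = k(X, X) + 1e-6 * np.eye(len(X))        # jitter for numerical stability
    Ks = k(grid, X)
    mu = Ks @ np.linalg.solve(K, y)            # GP posterior mean on the grid
    var = 1.0 - np.einsum('ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)
    lcb = mu - 2.0 * np.sqrt(np.maximum(var, 0.0))
    xn = grid[np.argmin(lcb)]                  # acquisition: minimize the LCB
    X, y = np.append(X, xn), np.append(y, f(xn))
best = X[np.argmin(y)]
```

A Deep GP replaces the single stationary kernel above with a composition of GP layers, which is what lets the surrogate track non-stationary objectives.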

31 citations


Journal ArticleDOI
TL;DR: This paper investigates the subgradient extragradient algorithm for solving variational inequality problems in real Hilbert spaces, considers it with inertial extrapolation terms and self-adaptive step sizes, and presents a relaxed version of the method with conditions on the inertial factor and the relaxation parameter that appear easier to implement.
Abstract: Various versions of inertial subgradient extragradient methods for solving variational inequalities have been and continue to be studied extensively in the literature. In many of the versions that were proposed and studied, the inertial factor, which speeds up the convergence of the method, is assumed to be less than 1, and in many cases, stringent conditions are also required in order to obtain convergence. Several of the conditions assumed in the literature make the proposed inertial subgradient extragradient method computationally difficult to implement in some cases. In the present paper, we investigate the subgradient extragradient algorithm for solving variational inequality problems in real Hilbert spaces and consider it with inertial extrapolation terms and self-adaptive step sizes. We present a relaxed version of this method with conditions on the inertial factor and the relaxation parameter that appear easier to implement. In the method we propose, the inertial factor can be chosen in a special case to be 1, a choice which is not possible in the inertial subgradient extragradient methods proposed in the literature. We also provide some numerical examples which illustrate the effectiveness and competitiveness of our algorithm.
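A minimal sketch of the inertial subgradient extragradient iteration on a toy variational inequality may clarify the structure. The fixed step size and fixed inertial factor below are simplifying assumptions (the paper's method uses self-adaptive step sizes and a relaxation parameter on top of this):

```python
import numpy as np

def inertial_subgrad_extragrad(F, proj_C, x0, tau=0.2, theta=0.3, iters=500):
    """Inertial subgradient extragradient iteration for VI(F, C)."""
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(iters):
        w = x + theta * (x - x_prev)          # inertial extrapolation
        y = proj_C(w - tau * F(w))            # extragradient step onto C
        a = w - tau * F(w) - y                # normal of the halfspace T ⊇ C
        u = w - tau * F(y)
        if a @ a > 1e-16:                     # project u onto T = {z : a·(z−y) ≤ 0}
            u = u - max(0.0, a @ (u - y)) / (a @ a) * a
        x_prev, x = x, u
    return x

# toy VI: F(x) = x − b over C = nonnegative orthant; the solution is max(b, 0)
b = np.array([1.0, -2.0, 0.5])
sol = inertial_subgrad_extragrad(lambda x: x - b,
                                 lambda z: np.maximum(z, 0.0),
                                 x0=np.zeros(3))
```

The second projection onto the halfspace T (rather than onto C itself) is what distinguishes the subgradient extragradient method from the classical extragradient method.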

30 citations


Journal ArticleDOI
TL;DR: In this article, two evolutionary algorithms, particle swarm optimization (PSO) and genetic algorithm (GA), were used to train the ANN parameters in order to overcome the ANN drawbacks, such as slow learning speed and frequent trapping at local optimum.
Abstract: Real-time and short-term prediction of river flow is essential for efficient flood management. To obtain accurate flow predictions, a reliable rainfall-runoff model must be used. This study proposes the application of two evolutionary algorithms, particle swarm optimization (PSO) and genetic algorithm (GA), to train the artificial neural network (ANN) parameters in order to overcome the ANN drawbacks, such as slow learning speed and frequent trapping at local optimum. These hybrid ANN-PSO and ANN-GA approaches were validated to equip natural hazard decision makers with a robust tool for forecasting real-time streamflow as a function of combinations of different lagged rainfall and streamflow in a small catchment in Southeast Queensland, Australia. Different input combinations of lagged rainfall and streamflow (delays of one, two and three days) were tested to investigate the sensitivity of the model to the number of delayed days, and to identify the effective model input combinations for the accurate prediction of real-time streamflow, which has not yet been recognized in other studies. The results indicated that the ANN-PSO model significantly outperformed the ANN-GA model in terms of convergence speed, accuracy, and fitness function evaluation. Additionally, it was found that the rainfall and streamflow with 3-day lag time had less impact on the predicted streamflow of the studied basin, confirming that the flow of the studied river is significantly correlated with only 2-day lagged rainfall and streamflow.
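The hybrid ANN-PSO idea, training network weights with a swarm instead of gradient-based backpropagation, can be sketched on synthetic data. The network size, PSO parameters, and regression data below are illustrative assumptions, not the study's catchment data:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy regression data standing in for lagged rainfall/streamflow inputs
X = rng.uniform(-1, 1, (64, 2))
ytrue = np.sin(X[:, 0]) + 0.5 * X[:, 1]

def ann(w, X):
    """1-hidden-layer net (2-4-1); w packs all 17 weights and biases."""
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16], w[16]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w):
    return np.mean((ann(w, X) - ytrue) ** 2)

# plain global-best PSO over the 17 network parameters
n_part, dim = 30, 17
pos = rng.uniform(-1, 1, (n_part, dim))
vel = np.zeros((n_part, dim))
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
g = pbest[pbest_f.argmin()].copy()
for _ in range(200):
    r1, r2 = rng.random((n_part, dim)), rng.random((n_part, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    fvals = np.array([mse(p) for p in pos])
    improved = fvals < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], fvals[improved]
    g = pbest[pbest_f.argmin()].copy()
```

An ANN-GA variant would replace the velocity/position updates with selection, crossover, and mutation over the same 17-dimensional weight vectors.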

28 citations


Journal ArticleDOI
TL;DR: Simulation results show that the suggested model schedules the appliances in an optimal way, resulting in electricity-cost and peaks reductions without compromising users’ comfort, and confirm the superiority of HHO algorithm in comparison with other optimization techniques.
Abstract: With the arrival of advanced technologies, the number of automated appliances in the residential sector continues to grow. New management schemes for electricity demand therefore become necessary to ensure the safety of domestic installations. To this end, Demand Side Management (DSM) is one suggested solution, and it has played a significant role in micro-grid and Smart Grid systems. A DSM program allows end-users to communicate with the grid operator so that they can contribute to decision making and assist the utilities in reducing peak power demand during peak periods. This can be done by managing loads in a smart way while maintaining customer loyalty. Several DSM programs have been proposed in the literature, almost all of them focused on energy management systems for the domestic sector. In this work, four heuristic optimization algorithms are proposed for energy scheduling in a smart home: the bat algorithm, the grey wolf optimizer, the moth flame optimization algorithm, and the Harris hawks optimization (HHO) algorithm. The proposed model used in this experiment is based on two different electricity pricing schemes: Critical-Peak-Price and Real-Time-Price. In addition, two operational time intervals (60 min and 12 min) were considered to evaluate the consumer's demand and the behavior of the suggested scheme. Simulation results show that the suggested model schedules the appliances in an optimal way, resulting in electricity-cost and peak reductions without compromising users' comfort. The results also confirm the superiority of the HHO algorithm in comparison with other optimization techniques.

25 citations


Journal ArticleDOI
TL;DR: The third hybrid method was applied to a basin network optimization problem, where it outperformed PSO with a filter method and a genetic algorithm with implicit filtering; all three hybrid methods improved the global minima and robustness versus PSO.
Abstract: Particle swarm optimization (PSO) is one of the most commonly used stochastic optimization algorithms for many researchers and scientists of the last two decades, and the pattern search (PS) method is one of the most important local optimization algorithms. In this paper, we test three methods of hybridizing PSO and PS to improve the global minima and robustness. All methods let PSO run first followed by PS. The first method lets PSO use a large number of particles for a limited number of iterations. The second method lets PSO run normally until tolerance is reached. The third method lets PSO run normally until the average particle distance from the global best location is within a threshold. Numerical results using non-differentiable test functions reveal that all three methods improve the global minima and robustness versus PSO. The third hybrid method was also applied to a basin network optimization problem and outperformed PSO with filter method and genetic algorithm with implicit filtering.
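The third hybridization strategy, switching from PSO to pattern search once the average particle distance to the global best drops below a threshold, can be sketched as follows. The non-differentiable toy objective, swarm parameters, and thresholds are assumptions for illustration:

```python
import numpy as np

def compass_search(f, x, step=0.5, tol=1e-6):
    """Coordinate pattern search: poll ±step along each axis, shrink on failure."""
    n = len(x)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(n):
            for s in (+step, -step):
                y = x.copy(); y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
    return x, fx

def pso_then_ps(f, dim, dist_tol=0.1, seed=0):
    """Run PSO until the swarm collapses around the global best, then refine
    the incumbent with pattern search (the spirit of the paper's third method)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (25, dim))
    vel = np.zeros((25, dim))
    pbest = pos.copy(); pbest_f = np.array([f(p) for p in pos])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(1000):
        r1, r2 = rng.random((25, dim)), rng.random((25, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = pos + vel
        fvals = np.array([f(p) for p in pos])
        better = fvals < pbest_f
        pbest[better], pbest_f[better] = pos[better], fvals[better]
        g = pbest[pbest_f.argmin()].copy()
        if np.mean(np.linalg.norm(pos - g, axis=1)) < dist_tol:
            break                             # swarm has collapsed: switch to PS
    return compass_search(f, g)

f = lambda x: np.sum(np.abs(x - np.array([1.0, -2.0, 3.0])))  # non-differentiable
xstar, fstar = pso_then_ps(f, dim=3)
```

Pattern search supplies the local refinement that plain PSO lacks, which is why the hybrid improves both the attained minima and the run-to-run robustness.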

25 citations


Journal ArticleDOI
TL;DR: A reduced-space formulation for trained one-class support vector machines is developed and it is shown that the formulation outperforms common full-space formulations by a factor of over 3000, making it a viable tool for engineering applications.
Abstract: Data-driven models are becoming increasingly popular in engineering, on their own or in combination with mechanistic models. Commonly, the trained models are subsequently used in model-based optimization of design and/or operation of processes. Thus, it is critical to ensure that data-driven models are not evaluated outside their validity domain during process optimization. We propose a method to learn this validity domain and encode it as constraints in process optimization. We first perform a topological data analysis using persistent homology identifying potential holes or separated clusters in the training data. In case clusters or holes are identified, we train a one-class classifier, i.e., a one-class support vector machine, on the training data domain and encode it as constraints in the subsequent process optimization. Otherwise, we construct the convex hull of the data and encode it as constraints. We finally perform deterministic global process optimization with the data-driven models subject to their respective validity constraints. To ensure computational tractability, we develop a reduced-space formulation for trained one-class support vector machines and show that our formulation outperforms common full-space formulations by a factor of over 3000, making it a viable tool for engineering applications. The method is ready-to-use and available open-source as part of our MeLOn toolbox ( https://git.rwth-aachen.de/avt.svt/public/MeLOn ).
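The convex-hull branch of the method reduces to a linear feasibility problem: a query point x lies in the hull of the training points P iff some convex combination of the rows of P equals x. A minimal sketch (illustrative data; scipy's `linprog` stands in for the LP machinery, which is not necessarily what the MeLOn toolbox uses internally):

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(x, P):
    """Check x ∈ conv(P) by LP feasibility: find λ ≥ 0 with Σλ = 1, Pᵀλ = x.

    The paper encodes such validity constraints (the convex hull, or a trained
    one-class SVM when the data has holes or separated clusters) inside a
    deterministic global optimization of the data-driven model."""
    n = P.shape[0]
    A_eq = np.vstack([P.T, np.ones(n)])
    b_eq = np.append(x, 1.0)
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    return res.success

P = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])   # training-data domain
inside = in_convex_hull(np.array([0.5, 0.5]), P)
outside = in_convex_hull(np.array([2.0, 0.0]), P)
```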

23 citations


Journal ArticleDOI
TL;DR: This paper develops a complementarity-constrained nonlinear optimization model for the time-dependent control of district heating networks, proposes an instantaneous control approach for the discretized problem, discusses practically relevant penalty formulations, and presents preprocessing techniques used to simplify the mixing model at the nodes of the network.
Abstract: We develop a complementarity-constrained nonlinear optimization model for the time-dependent control of district heating networks. The main physical aspects of water and heat flow in these networks are governed by nonlinear and hyperbolic 1d partial differential equations. In addition, a pooling-type mixing model is required at the nodes of the network to treat the mixing of different water temperatures. This mixing model can be recast using suitable complementarity constraints. The resulting problem is a mathematical program with complementarity constraints subject to nonlinear partial differential equations describing the physics. In order to obtain a tractable problem, we apply suitable discretizations in space and time, resulting in a finite-dimensional optimization problem with complementarity constraints for which we develop a suitable reformulation with improved constraint regularity. Moreover, we propose an instantaneous control approach for the discretized problem, discuss practically relevant penalty formulations, and present preprocessing techniques that are used to simplify the mixing model at the nodes of the network. Finally, we use all these techniques to solve realistic instances. Our numerical results show the applicability of our techniques in practice.
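Schematically, the pooling-type mixing model can be recast with complementarity constraints along the following lines (an illustrative sketch; the paper's exact formulation may differ in detail). Each arc flow is split into nonnegative parts, and the complementarity condition prevents simultaneous forward and backward flow, which makes the direction-dependent mixing balance at a node well defined:

```latex
\begin{align}
  q_a &= q_a^+ - q_a^-, \qquad q_a^+,\, q_a^- \ge 0, \\
  0 &\le q_a^+ \;\perp\; q_a^- \ge 0, \\
  T_u \sum_{a \in \delta(u)} q_a^{\mathrm{in}}
      &= \sum_{a \in \delta(u)} q_a^{\mathrm{in}} \, T_a^{\mathrm{in}},
\end{align}
```

where, at node $u$, the in-flow parts $q_a^{\mathrm{in}}$ are the $q_a^+$ or $q_a^-$ of the incident arcs $\delta(u)$ depending on their orientation, and $T_u$ is the mixed temperature leaving the node.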

23 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used the Box-Behnken design for the optimization of the extraction process of plum seeds (Prunus domestica L.) using ultrasound-assisted extraction.
Abstract: This paper aimed to optimize the extraction of antioxidants from plum seeds (Prunus domestica L.) using ultrasound-assisted extraction. The Box–Behnken design was used for the optimization of the extraction process. The four extraction parameters, namely the extraction time (10–40 min), ethanol concentration (20–100%, v/v), liquid-to-solid ratio (10–30 cm3 g−1), and extraction temperature (30–70 °C), were varied to investigate their impact on the content of antioxidants. Using HPLC methods, the following phenolic compounds were identified and quantified per 100 g dry weight: rutin (6.39 mg), epigallocatechin (1.94 mg), gallic acid (0.64 mg), ferulic acid (14.30 mg), syringic acid (0.87 mg), epicatechin (0.95 mg), caffeic acid (0.30 mg), and coumaric acid (11.18 mg). The content of amygdalin was 1.5 mg g−1 of the dry extract obtained under optimal conditions. Antioxidant activity was determined using the DPPH assay and estimated based on the half-maximal inhibitory concentration (IC50). The IC50 value of the extract obtained under optimal conditions was 0.94 mg cm−3. The proposed green technique is an economical alternative for the extraction of antioxidants by application of non-toxic and eco-friendly solvents. From the ecological point of view, it is acceptable due to the reduction of waste from plum cultivation or the production of alcoholic beverages.
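For reference, a Box-Behnken design in coded units is easy to generate: every pair of factors is set to (±1, ±1) while the remaining factors sit at their midpoint, plus replicated center runs. The sketch below reproduces that standard construction for 4 factors, as in the paper (the number of center points is an assumption):

```python
import numpy as np
from itertools import combinations

def box_behnken(k, n_center=3):
    """Box-Behnken design in coded units (-1, 0, +1) for k factors."""
    rows = []
    for i, j in combinations(range(k), 2):   # every pair of factors
        for a in (-1, 1):
            for b in (-1, 1):
                r = np.zeros(k)
                r[i], r[j] = a, b            # pair at the edge midpoints
                rows.append(r)
    rows += [np.zeros(k)] * n_center         # replicated center runs
    return np.array(rows)

# 4 factors (time, ethanol %, liquid-to-solid ratio, temperature) -> 27 runs
D = box_behnken(4)
```

In practice each coded level is mapped linearly onto the factor's actual range, e.g. -1/0/+1 → 10/25/40 min for the extraction time.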

22 citations


Journal ArticleDOI
TL;DR: The results show that the control system behaviour can approach the DP benchmark if the superimposed controller bandwidth is tuned along with the allocation cost function weighting coefficients, where a fast controller tuning relates to better thermal comfort while a slow tuning results in improved efficiency.
Abstract: In order to increase the driving range of battery electric vehicles, while maintaining a high level of thermal comfort inside the passenger cabin, it is necessary to design an energy management system which optimally synthesizes multiple control actions of the heating, ventilation and air-conditioning (HVAC) system. To gain an insight into optimal control actions and set a control benchmark, the paper first proposes an algorithm of dynamic programming (DP)-based optimisation of HVAC control variables, which minimises the conflicting criteria of passenger thermal comfort and HVAC efficiency. Next, a hierarchical structure of the thermal comfort control system is proposed, which consists of optimised low-level feedback controllers, an optimisation-based control allocation algorithm that sets references for the low-level controllers, and a superimposed cabin temperature controller that commands the cooling capacity to the allocation algorithm. Finally, the overall control system is verified by simulation for a cool-down scenario, and the simulation results are compared with the DP benchmark. The results show that the control system behaviour can approach the DP benchmark if the superimposed controller bandwidth is tuned along with the allocation cost function weighting coefficients, where a fast controller tuning relates to better thermal comfort while a slow tuning results in improved efficiency.

21 citations


Journal ArticleDOI
TL;DR: The GOPS algorithm improves on earlier algorithms by (a) selecting new center points based on bivariate non-dominated sorting of previously evaluated points, with additional constraints to ensure the objective value is below a target percentile, and (b) decreasing the number of centers and increasing the number of evaluation points per center as iterations increase.
Abstract: This paper describes a new parallel global surrogate-based algorithm Global Optimization in Parallel with Surrogate (GOPS) for the minimization of continuous black-box objective functions that might have multiple local minima, are expensive to compute, and have no derivative information available. The task of picking P new evaluation points for P processors in each iteration is addressed by sampling around multiple center points at which the objective function has been previously evaluated. The GOPS algorithm improves on earlier algorithms by (a) selecting new center points based on bivariate non-dominated sorting of previously evaluated points, with additional constraints to ensure the objective value is below a target percentile, and (b) decreasing the number of centers and increasing the number of evaluation points per center as iterations increase. These strategies and the hyperparameters controlling them significantly improve GOPS's parallel performance on high dimensional problems in comparison to other global optimization algorithms, especially with a larger number of processors. GOPS is tested with up to 128 processors in parallel on 14 synthetic black-box optimization benchmarking test problems (in 10, 21, and 40 dimensions) and one 21-dimensional parameter estimation problem for an expensive real-world nonlinear lake water quality model with partial differential equations that takes 22 min for each objective function evaluation. GOPS significantly outperforms the earlier algorithms SOP and PSD-MADS-VNS, especially on high dimensional problems and with larger numbers of processors; these two algorithms have themselves outperformed other algorithms in prior publications.
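Step (a)'s bivariate non-dominated sorting reduces, for exactly two objectives, to a single sweep after sorting by the first objective. A minimal sketch on made-up points (GOPS applies this kind of sorting to previously evaluated points when picking sampling centers; the two objectives here are placeholders):

```python
import numpy as np

def nondominated_front(F):
    """Indices of the non-dominated points of a 2-objective array F (n, 2),
    minimizing both coordinates."""
    idx = np.argsort(F[:, 0], kind="stable")   # sort by the first objective
    front, best2 = [], np.inf
    for i in idx:
        if F[i, 1] < best2:                    # strictly better in the second
            front.append(int(i))
            best2 = F[i, 1]
    return front

F = np.array([[1.0, 4.0], [2.0, 3.0], [3.0, 5.0], [2.5, 1.0], [4.0, 0.5]])
front = nondominated_front(F)
```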

Journal ArticleDOI
TL;DR: This work investigates Gaussian process based Bayesian optimization, which iteratively builds up and improves a surrogate model of the objective function, at the same time accounting for uncertainties encountered during the optimization process.
Abstract: In this work, we advocate using Bayesian techniques for inversely identifying material parameters for multiscale crystal plasticity models. Multiscale approaches for modeling polycrystalline materials may significantly reduce the effort necessary for characterizing such material models experimentally, in particular when a large number of cycles is considered, as typical for fatigue applications. Even when appropriate microstructures and microscopic material models are identified, calibrating the individual parameters of the model to some experimental data is necessary for industrial use, and the task is formidable as even a single simulation run is time consuming (although less expensive than a corresponding experiment). For solving this problem, we investigate Gaussian process based Bayesian optimization, which iteratively builds up and improves a surrogate model of the objective function, at the same time accounting for uncertainties encountered during the optimization process. We describe the approach in detail, calibrating the material parameters of a high-strength steel as an application. We demonstrate that the proposed method improves upon comparable approaches based on an evolutionary algorithm and on derivative-free methods.

Journal ArticleDOI
TL;DR: In this article, two alternative Bayesian optimization-based approaches are proposed to solve mixed variable optimization problems, in which the objective and constraint functions can depend simultaneously on continuous and discrete variables.
Abstract: Within the framework of complex system design, it is often necessary to solve mixed variable optimization problems, in which the objective and constraint functions can depend simultaneously on continuous and discrete variables. Additionally, complex system design problems occasionally present a variable-size design space. This results in an optimization problem for which the search space varies dynamically (with respect to both number and type of variables) along the optimization process as a function of the values of specific discrete decision variables. Similarly, the number and type of constraints can vary as well. In this paper, two alternative Bayesian optimization-based approaches are proposed in order to solve this type of optimization problem. The first one consists of a budget allocation strategy that focuses the computational budget on the most promising design sub-spaces. The second approach is instead based on the definition of a kernel function that computes the covariance between samples characterized by partially different sets of variables. The results obtained on analytical and engineering-related test cases show a faster and more consistent convergence of both proposed methods with respect to the standard approaches.

Journal ArticleDOI
TL;DR: This work establishes the local superlinear convergence of the hybrid methodology in an infinite-dimensional setting and proves that convergence can take place in stronger norms than that of the Hilbert space if initial error and problem data permit.
Abstract: We propose a semismooth Newton-type method for nonsmooth optimal control problems. Its particular feature is the combination of a quasi-Newton method with a semismooth Newton method. This reduces the computational costs in comparison to semismooth Newton methods while maintaining local superlinear convergence. The method applies to Hilbert space problems whose objective is the sum of a smooth function, a regularization term, and a nonsmooth convex function. In the theoretical part of this work we establish the local superlinear convergence of the method in an infinite-dimensional setting and discuss its application to sparse optimal control of the heat equation subject to box constraints. We verify that the assumptions for local superlinear convergence are satisfied in this application and we prove that convergence can take place in stronger norms than that of the Hilbert space if initial error and problem data permit. In the numerical part we provide a thorough study of the hybrid approach on two optimal control problems, including an engineering problem from magnetic resonance imaging that involves bilinear control of the Bloch equations. We use this problem to demonstrate that the new method is capable of solving nonconvex, nonsmooth large-scale real-world problems. Among others, the study addresses mesh independence, globalization techniques, and limited-memory methods. We observe throughout that algorithms based on the hybrid methodology are several times faster in runtime than their semismooth Newton counterparts.

Journal ArticleDOI
TL;DR: This work describes a new logical expression system implementation for Pyomo.GDP, allowing for a more intuitive description of logical propositions in GDP, and two new logic-based global optimization solver implementations built on Pyomo.GDP.
Abstract: We present three core principles for engineering-oriented integrated modeling and optimization tool sets—intuitive modeling contexts, systematic computer-aided reformulations, and flexible solution strategies—and describe how new developments in Pyomo.GDP for Generalized Disjunctive Programming (GDP) advance this vision. We describe a new logical expression system implementation for Pyomo.GDP allowing for a more intuitive description of logical propositions. The logical expression system supports automated reformulation of these logical constraints to linear constraints. We also describe two new logic-based global optimization solver implementations built on Pyomo.GDP that exploit logical structure to avoid “zero-flow” numerical difficulties that arise in nonlinear network design problems when nodes or streams disappear. These new solvers also demonstrate the capability to link to external libraries for expanded functionality within an integrated implementation. We present these new solvers in the context of a flexible array of solution paths available to GDP models. Finally, we present results on a new library of GDP models demonstrating the value of multiple solution approaches.

Journal ArticleDOI
TL;DR: A system approach for the optimal design and operation of pumping systems in real-world high-rise buildings that is based on the usage of mixed-integer nonlinear and mixed-integer linear modeling approaches, leading to a Deterministic Equivalent of a two-stage stochastic optimization program.
Abstract: The application of mathematical optimization methods for water supply system design and operation provides the capacity to increase the energy efficiency and to lower the investment costs considerably. We present a system approach for the optimal design and operation of pumping systems in real-world high-rise buildings that is based on the usage of mixed-integer nonlinear and mixed-integer linear modeling approaches. In addition, we consider different booster station topologies, i.e. parallel and series-parallel central booster stations as well as decentral booster stations. To confirm the validity of the underlying optimization models with real-world system behavior, we additionally present validation results based on experiments conducted on a modularly constructed pumping test rig. Within the models we consider layout and control decisions for different load scenarios, leading to a Deterministic Equivalent of a two-stage stochastic optimization program. We use a piecewise linearization as well as a piecewise relaxation of the pumps’ characteristics to derive mixed-integer linear models. Besides the solution with off-the-shelf solvers, we present a problem specific exact solving algorithm to improve the computation time. Focusing on the efficient exploration of the solution space, we divide the problem into smaller subproblems, which partly can be cut off in the solution process. Furthermore, we discuss the performance and applicability of the solution approaches for real buildings and analyze the technical aspects of the solutions from an engineer’s point of view, keeping in mind the economically important trade-off between investment and operation costs.
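The piecewise linearization of a pump characteristic can be sketched as follows. The quadratic head curve and breakpoints are illustrative assumptions, and in the actual MILP the interpolation weights become SOS2-constrained decision variables rather than being computed directly as done here:

```python
import numpy as np

# Piecewise-linear approximation of a pump head curve H(q) on fixed breakpoints,
# the device used to turn the nonlinear pump characteristics into MILP
# constraints (lambda / convex-combination method; values are illustrative).
q_bp = np.array([0.0, 1.0, 2.0, 3.0])        # flow breakpoints
H_bp = 40.0 - 3.0 * q_bp ** 2                # head at the breakpoints: [40, 37, 28, 13]

def pwl_head(q):
    """Evaluate the interpolant: q = Σ λ_i q_i, H = Σ λ_i H_i, with the λ
    supported on a single segment (the SOS2 condition in the MILP)."""
    i = np.clip(np.searchsorted(q_bp, q) - 1, 0, len(q_bp) - 2)
    lam = (q - q_bp[i]) / (q_bp[i + 1] - q_bp[i])
    return (1 - lam) * H_bp[i] + lam * H_bp[i + 1]
```

A piecewise relaxation, the paper's other variant, would replace the interpolant by upper and lower envelopes per segment instead of a single line.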

Journal ArticleDOI
TL;DR: This paper presents a control system design methodology for the drill-string rotary drive and draw-works hoist system aimed at their coordinated control for the purpose of establishing a fully-automated mechatronic system suitable for borehole drilling applications.
Abstract: This paper presents a control system design methodology for the drill-string rotary drive and draw-works hoist system aimed at their coordinated control for the purpose of establishing a fully-automated mechatronic system suitable for borehole drilling applications. Both the drill-string rotary drive and the draw-works hoist drive are equipped with proportional-integral (PI) speed controllers, which are readily available within modern controlled electrical drives. Moreover, the rotary speed control system is equipped with a torsional active damping system and a drill-string back-spinning prevention scheme for the stuck drill-bit scenario, whereas the draw-works-based drill-bit normal force control system is extended with an auxiliary control system aimed at timely prevention of drill-string torsional overload. The design of the proposed control systems has been based on suitable reduced-order control-oriented process models and a practical tuning methodology based on the damping optimum criterion, aimed at achieving the desired level of closed-loop system damping. The functionality of the proposed cross-axis control system has been systematically verified, first by experimental tests of the individual rotary/vertical axis control systems on a downscaled laboratory experimental setup, followed by a thorough simulation study of the overall control system for realistic scenarios encountered in the field.

Journal ArticleDOI
TL;DR: This work proposes hierarchical decompositions to split the computations between a master problem and a set of decoupled subproblems which brings about organizational flexibility and distributed computation.
Abstract: Energy management can play a significant role in energy savings and temperature control of buildings, which consume a major share of energy resources worldwide. Model predictive control (MPC) has become a popular technique for energy management, arguably for its ability to cope with complex dynamics and system constraints. The MPC algorithms found in the literature are mostly centralized, with a single controller collecting signals and performing the computations. However, buildings are dynamic systems obtained by the interconnection of subsystems, with a distributed structure which is not necessarily explored by standard MPC. To this end, this work proposes hierarchical decompositions to split the computations between a master problem (centralized component) and a set of decoupled subproblems (distributed components) which brings about organizational flexibility and distributed computation. Three general methods are considered for hierarchical control and optimization, namely bilevel optimization, Benders and Lagrangean decomposition. Results are reported from a numerical analysis of the decompositions and a simulated application to the energy management of a building, in which a limited source of chilled water is distributed among HVAC units.
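Lagrangean decomposition, one of the three methods considered, can be sketched on a toy resource-sharing problem: dualizing the coupling constraint decouples the subproblems (one per HVAC unit), and a subgradient loop on the master updates the price. All numbers below are assumptions for illustration, not building data:

```python
import numpy as np

# Two "units" share a limited chilled-water budget Q. Primal problem:
#   max  b·x   s.t.  x1 + x2 <= Q,  0 <= x_i <= u_i.
# Dualizing the coupling constraint with price lam decouples the units.
b = np.array([4.0, 2.5])      # comfort benefit per unit of cooling (assumed)
u = np.array([3.0, 3.0])      # per-unit capacity
Q = 4.0                       # shared chilled-water budget

lam = 0.0                     # price (multiplier) on the shared resource
for _ in range(200):
    # decoupled subproblems: max (b_i - lam) x_i over 0 <= x_i <= u_i
    x = np.where(b - lam > 0, u, 0.0)
    # master: projected subgradient step on the constraint violation
    lam = max(0.0, lam + 0.05 * (x.sum() - Q))

# dual value; at the optimal price (2.5) it equals the primal optimum 14.5
dual = np.maximum(b - lam, 0.0) @ u + lam * Q
```

The price oscillates around the marginal benefit of the scarcer unit; in the paper's hierarchical schemes the master plays exactly this coordinating role while the subproblems stay local to each subsystem.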

Journal ArticleDOI
TL;DR: In this article, a model predictive control strategy was formulated as a MILP optimization problem to schedule the heat supply of the cogeneration plants, heat pump and gas boilers as a function of heat load, waste heat production and electricity price forecasts.
Abstract: District heating and cooling networks are a key infrastructure to decarbonise the heating and cooling sector. Besides the design of new networks according to the principles of the 4th and 5th generation, operational aspects may significantly contribute to improving the efficiency of existing networks from both economic and environmental standpoints. This article is the second step of a work that aims to exploit the flexibility of existing networks and improve their economic and environmental performance, using the district heating network of Verona as a case study. In particular, the first part of the research demonstrated through numerical simulations that the thermal inertia of the water contained in the pipes can be used to shift the heat production of the generators over time by acting on the flow rate circulating in the network. This article shifts the focus from the heat distribution side to the heat supply. A model predictive control strategy was formulated as a MILP optimization problem to schedule the heat supply of the cogeneration plants, heat pump and gas boilers as a function of heat load, waste heat production and electricity price forecasts. Computer simulations of the considered district heating network were carried out, executing the optimization with a rolling-horizon scheme over two typical weeks. Results show that the proposed look-ahead control achieves a reduction in operational costs of about 12.5% and 5.8% in a middle-season and a winter representative week, respectively. When the flexibility of the system is increased with a centralized heat storage tank connected to the CHP and HP units, these percentages rise to 20% and 6.3%, respectively. In the warmest periods, when the total installed power of the CHP and HP plants is sufficient to supply the entire heat demand during the peak, and the modulation of these plants has a higher impact, the cost reduction related to the additional thermal energy storage is more relevant.

Journal ArticleDOI
TL;DR: The modular open-source framework GRAMPC-D for model predictive control of distributed systems is presented and uses the concept of neighbor approximation to enhance convergence speed.
Abstract: The modular open-source framework GRAMPC-D for model predictive control of distributed systems is presented in this paper. The modular concept makes it possible to solve optimal control problems in a centralized and a distributed fashion using the same problem description. The framework is tailored to computational efficiency, with a focus on embedded hardware. The distributed solution is based on the alternating direction method of multipliers and uses the concept of neighbor approximation to enhance convergence speed. The presented framework can be accessed through C++ and Python, and it also supports plug-and-play and data exchange between agents over a network.
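The alternating direction method of multipliers underlying the distributed solution can be illustrated on a scalar consensus toy problem: each agent holds a private target and the agents must agree on a shared value minimizing the total quadratic disagreement. The targets and penalty parameter below are assumptions, and the sketch does not reflect GRAMPC-D's actual API.

```python
# Minimal consensus ADMM: N agents each hold a target a_i and agree on
# a shared z minimizing sum_i (x_i - a_i)^2, which is the mean of a.
def consensus_admm(targets, rho=1.0, iters=100):
    n = len(targets)
    x = [0.0] * n      # local copies
    u = [0.0] * n      # scaled dual variables
    z = 0.0            # consensus variable
    for _ in range(iters):
        # Local step: argmin_x (x - a_i)^2 + (rho/2)*(x - z + u_i)^2
        x = [(2.0 * a + rho * (z - ui)) / (2.0 + rho)
             for a, ui in zip(targets, u)]
        # Coordination step: average of local copies plus duals
        z = sum(xi + ui for xi, ui in zip(x, u)) / n
        # Dual step: penalize remaining disagreement
        u = [ui + xi - z for ui, xi in zip(u, x)]
    return z

z = consensus_admm([1.0, 3.0, 5.0])
print(z)  # approaches the mean of the targets, 3.0
```

In a distributed MPC setting each "local step" is itself an optimal control problem solved by one agent, and the coordination step only exchanges data between neighbors, which is what makes the approach attractive for embedded hardware.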

Journal ArticleDOI
TL;DR: This paper provides a tutorial overview of robust optimization in power systems, including robust optimization and adaptive robust optimization, and introduces distributionally robust optimization.
Abstract: This paper provides a tutorial overview of robust optimization in power systems, including robust optimization and adaptive robust optimization. We also introduce distributionally robust optimization. For illustration purposes, we describe and analyze a short-term operation problem and a long-term planning one. The operation problem allows identifying the transmission line whose failure has the highest impact on the operation of the system (worst contingency). From a planning perspective, we describe and analyze the problem of identifying the most critical transmission lines (vulnerabilities) to be protected against intentional attacks or natural disasters. We also provide a distributionally robust version of this problem. The operation problem is a robust optimization one, while the planning problem is an adaptive robust optimization one, including a distributionally robust variant.
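The worst-contingency idea behind the operation problem can be illustrated by brute-force enumeration on a tiny network: remove each line in turn, redispatch, and record the unserved load. The paper solves this as a robust (min-max) optimization problem; the parallel-corridor network below is invented purely for illustration.

```python
# Toy worst-contingency search on three parallel corridors.
LINES = {"L1": 60.0, "L2": 40.0, "L3": 30.0}  # line -> capacity (MW)
DEMAND = 100.0                                # demand served through them

def unserved_load(failed_line):
    """Load that cannot be served when `failed_line` is out."""
    capacity = sum(cap for name, cap in LINES.items() if name != failed_line)
    return max(0.0, DEMAND - capacity)

# The worst contingency maximizes the damage (inner "max" of the
# robust operation problem).
worst = max(LINES, key=unserved_load)
print(worst, unserved_load(worst))
```

Enumeration works for single-line outages on small systems; the robust formulations in the paper scale this idea to large networks and to attacker-defender (adaptive) settings where explicit enumeration is intractable.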

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the sequential quadratic programming (SQP) method for the numerical solution of an optimal control problem governed by a quasilinear parabolic partial differential equation.
Abstract: Based on the theoretical framework recently proposed by Bonifacius and Neitzel (Math Control Relat Fields 8(1):1–34, 2018. https://doi.org/10.3934/mcrf.2018001 ) we discuss the sequential quadratic programming (SQP) method for the numerical solution of an optimal control problem governed by a quasilinear parabolic partial differential equation. Following well-known techniques, convergence of the method in appropriate function spaces is proven under some common technical restrictions. Particular attention is paid to how the second-order sufficient conditions for the optimal control problem and the resulting $$L^2$$ -local quadratic growth condition influence the notion of “locality” in the SQP method. Further, a new regularity result for the adjoint state, which is required during the convergence analysis, is proven. Numerical examples illustrate the theoretical results.
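The flavor of an SQP iteration can be shown on an assumed finite-dimensional toy problem, far from the paper's PDE setting: minimize (x1-2)^2 + x2^2 subject to x1^2 + x2^2 = 1, i.e. the closest point on the unit circle to (2, 0), which is (1, 0) with multiplier 1. Each iteration solves the KKT system of a local quadratic model; here the Lagrangian Hessian is a scaled identity, so the step has a closed form.

```python
# Minimal SQP sketch for: min (x1-2)^2 + x2^2  s.t.  x1^2 + x2^2 = 1.
def sqp(x, lam, iters=10):
    for _ in range(iters):
        g = [2.0 * (x[0] - 2.0), 2.0 * x[1]]  # objective gradient
        c = x[0]**2 + x[1]**2 - 1.0           # constraint residual
        alpha = 2.0 * (1.0 + lam)             # Lagrangian Hessian = alpha*I
        xx = x[0]**2 + x[1]**2                # |x|^2  (note grad c = 2x)
        xg = x[0] * g[0] + x[1] * g[1]
        # New multiplier, obtained by eliminating the step d from the
        # KKT system  alpha*d + 2*lam_new*x = -g,  2*x.d = -c.
        lam = (alpha * c / 2.0 - xg) / (2.0 * xx)
        # SQP step and update
        d = [(-g[i] - 2.0 * lam * x[i]) / alpha for i in range(2)]
        x = [x[i] + d[i] for i in range(2)]
    return x, lam

x, lam = sqp([2.0, 0.0], 0.0)
print(x, lam)  # close to [1.0, 0.0] and multiplier 1.0
```

The quadratic growth condition the abstract refers to is what guarantees, in the function-space setting, that such iterates converge locally; the toy problem simply makes the iteration mechanics concrete.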

Journal ArticleDOI
TL;DR: A novel evolutionary interactive method called interactive K-RVEA is developed, which is suitable for computationally expensive problems and uses surrogate models to replace the original expensive objective functions to reduce the computation time.
Abstract: In this paper, we develop a novel evolutionary interactive method called interactive K-RVEA, which is suitable for computationally expensive problems. We use surrogate models to replace the original expensive objective functions to reduce the computation time. Typically, in interactive methods, a decision maker provides some preferences iteratively and the optimization algorithm narrows the search according to those preferences. However, working with surrogate models introduces some inaccuracy into the preferences, and therefore it is desirable that the decision maker can work with solutions that are evaluated with the original objective functions. We therefore propose a novel model management strategy that incorporates the decision maker’s preferences to select some of the solutions both for updating the surrogate models (to improve their accuracy) and for showing them to the decision maker. Moreover, we solve a simulation-based computationally expensive optimization problem by finding an optimal configuration for the energy system of a heterogeneous business building complex. We demonstrate how a decision maker can interact with the method and how the most preferred solution is chosen. Finally, we compare our method with another interactive method that does not have any model management strategy, and show how our model management strategy helps the algorithm to follow the decision maker’s preferences.
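The core surrogate idea, stripped of the multi-objective and interactive machinery of K-RVEA, is: fit a cheap model to a few samples of the expensive objective, minimize the model, then evaluate the candidate with the true function. The scalar sketch below is an assumption-laden simplification with an invented objective; since the "expensive" function here happens to be quadratic, the first surrogate is already exact, whereas real problems re-fit after each true evaluation.

```python
# Surrogate-assisted optimization in one dimension (illustrative).
def expensive(x):                  # stand-in for a costly simulation
    return (x - 0.7)**2 + 0.3

samples_x = [0.0, 1.0, 2.0]        # initial design, equally spaced (h = 1)
samples_y = [expensive(x) for x in samples_x]

# Quadratic surrogate a*x^2 + b*x + c through the three samples
# (closed-form fit for nodes 0, 1, 2).
a = (samples_y[0] - 2 * samples_y[1] + samples_y[2]) / 2.0
b = samples_y[1] - samples_y[0] - a

candidate = -b / (2 * a)           # minimizer of the surrogate
true_value = expensive(candidate)  # verify with the original objective
print(candidate, true_value)
```

The model management question the paper addresses is which candidates to spend true evaluations on: the ones that best improve the surrogate, or the ones the decision maker most wants to inspect.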

Journal ArticleDOI
TL;DR: A highly efficient monolithic solver is developed based on an approximated Newton scheme for the primal equation and a preconditioned Richardson iteration for the dual problem to solve the resulting ill-conditioned linear problems.
Abstract: In this paper we consider optimal control of nonlinear time-dependent fluid-structure interactions. To determine a time-dependent control variable, a BFGS algorithm is used, whereby gradient information is computed via a dual problem. To solve the resulting ill-conditioned linear problems occurring in every time step of the state and dual equations, we develop a highly efficient monolithic solver that is based on an approximated Newton scheme for the primal equation and a preconditioned Richardson iteration for the dual problem. The performance of the presented algorithms is tested numerically on one 2D and one 3D example.

Journal ArticleDOI
TL;DR: Two new rounding heuristics that exploit the value and a physical interpretation of the continuous relaxation solution are developed and a steepest-descent improvement heuristic is applied to obtain satisfactory solutions to both two- and three-dimensional inverse problems.
Abstract: We present a convection–diffusion inverse problem that aims to identify an unknown number of sources and their locations. We model the sources using a binary function, and we show that the inverse problem can be formulated as a large-scale mixed-integer nonlinear optimization problem. We show empirically that current state-of-the-art mixed-integer solvers cannot solve this problem and that applying simple rounding heuristics to solutions of the relaxed problem can fail to identify the correct number and location of the sources. We develop two new rounding heuristics that exploit the value and a physical interpretation of the continuous relaxation solution, and we apply a steepest-descent improvement heuristic to obtain satisfactory solutions to both two- and three-dimensional inverse problems. We also provide the code used in our numerical experiments in open-source format.
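The failure mode of simple rounding that the abstract mentions can be shown on invented data: when the relaxation smears one unit source over neighboring cells, thresholding at 0.5 finds no source at all, while a heuristic that preserves the total relaxed "mass" (one physical interpretation of the relaxation value) recovers a single source at the dominant cell. This sketch is illustrative and is not one of the paper's two heuristics.

```python
# Relaxed binary source field over 6 cells; values sum to ~1.0,
# suggesting one source (data are invented for illustration).
relaxed = [0.0, 0.1, 0.45, 0.35, 0.1, 0.0]

# Naive rounding: threshold at 0.5 -- every cell rounds to 0.
naive = [1 if v >= 0.5 else 0 for v in relaxed]

# Mass-preserving rounding: estimate the source count from the total
# relaxed mass, then place that many sources at the largest cells.
k = round(sum(relaxed))
order = sorted(range(len(relaxed)), key=lambda i: -relaxed[i])
mass_preserving = [0] * len(relaxed)
for i in order[:k]:
    mass_preserving[i] = 1

print(sum(naive), sum(mass_preserving))  # 0 sources vs 1 source
```

A steepest-descent improvement pass, as in the paper, would then try flipping or moving the placed sources to further reduce the data-misfit objective.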

Journal ArticleDOI
TL;DR: A new matheuristic that integrates components from exact algorithms, machine learning techniques, and heuristics (local improvement and randomized search) is proposed for integrated stochastic optimization of mineral value chains.
Abstract: Mineral value chains, also known as mining complexes, involve mining, processing, stockpiling, waste management and transportation activities. Their optimization is typically partitioned into separate stages, considered sequentially. An integrated stochastic optimization of these stages has been shown to increase the net present value of the related mining projects and operations, reduce risk in meeting production targets, and lead to more robust and coordinated schedules. However, it entails solving a larger and more complex stochastic optimization problem than separately optimizing individual components of a mineral value chain does. To tackle this complex optimization problem, a new matheuristic that integrates components from exact algorithms (relaxation and decomposition), machine learning techniques (reinforcement learning and artificial neural networks), and heuristics (local improvement and randomized search) is proposed. A general mathematical formulation that serves as the basis for the proposed methodology is also introduced, and results of computational experiments are presented.

Journal ArticleDOI
TL;DR: In this article, an efficient and compact MATLAB code for 3D stress-based sensitivity analysis is presented, which includes the finite element analysis and p-norm stress sensitivity analysis based on the adjoint method.
Abstract: This paper presents an efficient and compact MATLAB code for three-dimensional stress-based sensitivity analysis. The 146-line code includes the finite element analysis and the p-norm stress sensitivity analysis based on the adjoint method. The 3D sensitivity analysis for the p-norm global stress measure is derived and explained in detail, accompanied by the corresponding MATLAB code. The correctness of the analytical sensitivity is verified by comparison with a finite difference approximation. The Method of Moving Asymptotes (MMA) is chosen as the nonlinear optimization solver. Three typical volume-constrained stress minimization problems are presented to verify the effectiveness of the sensitivity analysis code. The MATLAB code presented in this paper can be extended to different stress-related 3D topology optimization problems. The complete program for the sensitivity analysis is given in the Appendix and is intended for educational purposes. MATLAB code is additionally provided as electronic supplementary material for a simple cantilever beam optimization.
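The verification step the abstract describes, comparing an analytic sensitivity against a finite difference approximation, can be reproduced on a toy p-norm aggregate (in Python rather than the paper's MATLAB). For s(x) = (sum_i x_i^p)^(1/p), the analytic derivative is ds/dx_j = s^(1-p) * x_j^(p-1); the test data below are arbitrary.

```python
# Verify an analytic p-norm sensitivity against central differences.
def pnorm(x, p=8):
    return sum(v**p for v in x) ** (1.0 / p)

def pnorm_grad(x, p=8):
    """Analytic gradient: ds/dx_j = s^(1-p) * x_j^(p-1)."""
    s = pnorm(x, p)
    return [s**(1 - p) * v**(p - 1) for v in x]

def fd_grad(f, x, h=1e-6):
    """Central finite-difference gradient, component by component."""
    g = []
    for j in range(len(x)):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

x = [1.0, 2.0, 3.0]
analytic = pnorm_grad(x)
numeric = fd_grad(pnorm, x)
err = max(abs(a - n) for a, n in zip(analytic, numeric))
print(err)  # should be far below the finite-difference step size
```

In the paper's setting the analytic side comes from the adjoint method, which delivers the whole gradient at the cost of one extra linear solve instead of one perturbed analysis per design variable.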

Journal ArticleDOI
TL;DR: To find an optimal tool path guaranteeing minimal production time and high quality of the workpiece, a mixed-integer linear programming problem is derived that takes thermal conduction and radiation during the process into account and aims to minimize temperature gradients inside the material.
Abstract: We consider two mathematical problems that are connected and occur in the layer-wise production of a workpiece using wire-arc additive manufacturing. As the first task, we consider the automatic construction of a honeycomb structure, given the boundary of a shape of interest. In doing this, we employ Lloyd’s algorithm in two different realizations. For computing the incorporated Voronoi tessellation, we consider the use of a Delaunay triangulation or, alternatively, the eikonal equation. We compare and modify these approaches with the aim of combining their respective advantages. In the second task, to find an optimal tool path guaranteeing minimal production time and high quality of the workpiece, a mixed-integer linear programming problem is derived. The model takes thermal conduction and radiation during the process into account and aims to minimize temperature gradients inside the material. Its solvability for standard mixed-integer solvers is demonstrated on several test instances. The results are compared with manufactured workpieces.
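Lloyd's algorithm, which the paper runs in 2D via Delaunay- or eikonal-based Voronoi diagrams, is easiest to see in one dimension: sites are repeatedly moved to the centroids of their Voronoi cells, and with uniform density this converges to evenly spaced sites. The initial sites below are arbitrary.

```python
# One-dimensional Lloyd's algorithm on the domain [0, 1].
def lloyd_1d(sites, iters=1000):
    for _ in range(iters):
        s = sorted(sites)
        # Voronoi cell boundaries: domain ends plus neighbor midpoints.
        bounds = ([0.0]
                  + [(s[i] + s[i + 1]) / 2 for i in range(len(s) - 1)]
                  + [1.0])
        # Move each site to its cell centroid (the midpoint, since the
        # density is uniform).
        sites = [(bounds[i] + bounds[i + 1]) / 2 for i in range(len(s))]
    return sites

sites = lloyd_1d([0.1, 0.2, 0.3, 0.9])
print([round(x, 3) for x in sites])  # evenly spaced: 0.125, 0.375, 0.625, 0.875
```

In 2D the cells become polygons and the centroid computation requires the tessellation itself, which is where the choice between Delaunay triangulation and the eikonal equation, discussed in the paper, comes in.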

Journal ArticleDOI
TL;DR: Two mixed integer programs are developed in order to use the methods of mathematical optimization in the context of topology optimization on the basis of a fitted ground structure method for additive manufacturing.
Abstract: One crucial advantage of additive manufacturing regarding the optimization of lattice structures is the reduction in manufacturing constraints compared to classical manufacturing methods. To make full use of these advantages and to exploit the resulting potential, lattice structures must be designed using optimization. Against this backdrop, two mixed integer programs are developed in order to apply the methods of mathematical optimization to topology optimization on the basis of a fitted ground structure method. In addition, an algorithm-driven product design process is presented to systematically combine the areas of mathematical optimization, computer aided design, finite element analysis and additive manufacturing. Our computer aided design tool serves as an interface between state-of-the-art mathematical solvers and computer aided design software and is used to generate design data based on optimization results. The first mixed integer program focuses on powder-based additive manufacturing and includes a preprocessing step that allows a multi-material topology optimization. The second mixed integer program generates support-free lattice structures for additive manufacturing processes that usually depend on support structures, by considering geometry-based design rules for inclined and support-free cylinders as well as assumptions on the location and orientation of parts within a build volume. The problem of strengthening a lattice structure by local thickening, beam addition, or both, with the objective of minimizing costs, is modeled; post-processing is excluded. A structure optimized for a static area load with a practice-oriented number of connection nodes and beams was manufactured using the powder-based additive manufacturing system EOS INT P760.

Journal ArticleDOI
TL;DR: A new framework for strengthening cutting planes of nonlinear convex constraints, to obtain tighter outer approximations and proves that both types of cuts are valid and that the second type of cut can dominate both the first type and the original cut.
Abstract: Generating polyhedral outer approximations and solving mixed-integer linear relaxations remains one of the main approaches for solving convex mixed-integer nonlinear programming (MINLP) problems. There are several algorithms based on this concept, and the efficiency is greatly affected by the tightness of the outer approximation. In this paper, we present a new framework for strengthening cutting planes of nonlinear convex constraints, to obtain tighter outer approximations. The strengthened cuts can give a tighter continuous relaxation and an overall tighter representation of the nonlinear constraints. The cuts are strengthened by analyzing disjunctive structures in the MINLP problem, and we present two types of strengthened cuts. The first type of cut is obtained by reducing the right-hand side value of the original cut, such that it forms the tightest generally valid inequality for a chosen disjunction. The second type of cut effectively uses individual right-hand side values for each term of the disjunction. We prove that both types of cuts are valid and that the second type of cut can dominate both the first type and the original cut. We use the cut strengthening in conjunction with the extended supporting hyperplane algorithm, and numerical results show that the strengthening can significantly reduce both the number of iterations and the time needed to solve convex MINLP problems.