
Showing papers in "Optimization and Engineering in 2022"


Journal ArticleDOI
TL;DR: In this paper, a simple, compact and efficient 90-line pedagogical MATLAB code for topology optimization using hexagonal elements (honeycomb tessellation) is presented.
Abstract: This paper provides a simple, compact and efficient 90-line pedagogical MATLAB code for topology optimization using hexagonal elements (honeycomb tessellation). Hexagonal elements provide nonsingular connectivity between two juxtaposed elements and thus inherently suppress checkerboard patterns and point connections in the optimized designs. A novel approach to generate the honeycomb tessellation is proposed. The element connectivity matrix and corresponding nodal coordinates array are determined in 5 (7) and 4 (6) lines, respectively. Two additional lines are required for the meshgrid generation when the number of elements in the vertical direction is even. The code takes a fraction of a second to generate meshgrid information for millions of hexagonal elements. Wachspress shape functions are employed for the finite element analysis, and compliance minimization is performed using the optimality criteria method. The provided MATLAB code and its extensions are explained in detail. Options to run the optimization with and without filtering techniques are provided. Steps to include different boundary conditions, multiple load cases, active and passive regions, and a Heaviside projection filter are also discussed. The code is provided in Appendix A, and it can also be downloaded along with supplementary materials from https://github.com/PrabhatIn/HoneyTop90.
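To make the compliance-minimization step concrete, the sketch below shows a generic optimality-criteria density update with bisection on the volume-constraint multiplier. It is an illustrative Python transcription of the standard OC scheme, not the HoneyTop90 MATLAB implementation; the variable names, tolerance, and move limit are assumptions.

```python
import numpy as np

def oc_update(x, dc, dv, volfrac, move=0.2):
    """Standard optimality-criteria update for density-based topology
    optimization (illustrative sketch, not the HoneyTop90 code).

    x       -- current element densities (1D array in [0, 1])
    dc      -- compliance sensitivities w.r.t. densities (negative values)
    dv      -- volume-constraint sensitivities (positive values)
    volfrac -- prescribed volume fraction
    """
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-4:
        lmid = 0.5 * (l1 + l2)
        # multiplicative update clipped to the box and move limits
        xnew = np.clip(x * np.sqrt(-dc / (dv * lmid)),
                       np.maximum(0.0, x - move),
                       np.minimum(1.0, x + move))
        # bisection on the Lagrange multiplier of the volume constraint
        if xnew.mean() > volfrac:
            l1 = lmid
        else:
            l2 = lmid
    return xnew
```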

8 citations


Journal ArticleDOI
TL;DR: In this article, the performance of a SWAT model in a geologically heterogeneous basin was optimized by incorporating geological properties through semi-automatic calibration strategies, and the results from the four calibration schemes were evaluated both statistically and by assessing their plausibility.
Abstract: Hydrological models are frequently used for water resources management. One of the most widely used is the Soil and Water Assessment Tool (SWAT). However, one weakness of SWAT is its simplicity in modeling groundwater, which might affect the representation of hydrological processes. Therefore, modeling strategies that are geared towards achieving more realistic simulations would increase the reliability and credibility of SWAT model predictions. In this study, the performance of a SWAT model in a geologically heterogeneous basin was optimized by incorporating geological properties through semi-automatic calibration strategies. Based on its geology, the basin was split into four regions, and a default calibration (Scheme I) was compared with three designed calibration schemes: a zonal calibration (Scheme II), obtaining a parameter set in each of the regions, a zonal calibration after introducing an impervious layer in an aquifuge region (Scheme III), and a final calibration scheme (Scheme IV) where an aquifer region was re-calibrated, changing a parameter controlling the required content of water in the aquifer for return flow to increase groundwater flow. The results from the four schemes were evaluated both statistically and by assessing their plausibility to determine which one resulted in the best model performance and the most realistic simulations. All schemes resulted in a satisfactory statistical model performance, but the sequential optimization in the final scheme realistically reproduced the heterogeneous hydrological behavior of the geological regions within the basin. To the best of our knowledge, our work addresses this issue for the first time, providing new insights into how to simulate catchments including aquifuge substrates.
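The abstract does not name the goodness-of-fit statistics used in the evaluation; a metric commonly used when calibrating SWAT discharge simulations is the Nash-Sutcliffe efficiency, sketched below purely for illustration (it is an assumption here, not taken from the paper).

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 means a perfect fit, values <= 0 mean the
    model predicts no better than the mean of the observations.
    Illustrative only; the paper does not state its evaluation metrics."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)
```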

8 citations


Journal ArticleDOI
TL;DR: In this article, the authors embellish a mixed-integer program that prescribes a set of renewable energy, conventional generation, and storage technologies to procure, along with a corresponding dispatch strategy.
Abstract: We embellish a mixed-integer program that prescribes a set of renewable energy, conventional generation, and storage technologies to procure, along with a corresponding dispatch strategy. Specifically, we add combined heat and power to this set. The model minimizes fixed and operational costs less incentives for the use of various technologies, subject to a series of component interoperability and system-wide constraints. The resulting mixed-integer linear program contains hundreds of thousands of variables and constraints. We demonstrate how to efficiently formulate and solve the corresponding instances such that we produce near-optimal solutions in minutes. A previous rendition of the model required hours of solution time for the same instances.
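As a toy illustration of the problem class (technology procurement plus dispatch), the sketch below builds a tiny mixed-integer linear program with PuLP, assuming PuLP and its bundled CBC solver are available. The data, costs, and single procurement decision are invented; the paper's model additionally covers combined heat and power, incentives, storage, and component-interoperability constraints at a vastly larger scale.

```python
# Toy technology-selection + dispatch MILP in the spirit of the model
# described above (illustrative only, not the paper's formulation).
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

hours = range(4)
load = [50, 60, 80, 70]        # kW demand per hour (illustrative)
pv_avail = [0, 30, 60, 20]     # kW available if the PV system is procured

prob = LpProblem("procure_and_dispatch", LpMinimize)
buy_pv = LpVariable("buy_pv", cat=LpBinary)                      # procurement decision
grid = {t: LpVariable(f"grid_{t}", lowBound=0) for t in hours}   # grid import (kW)
pv = {t: LpVariable(f"pv_{t}", lowBound=0) for t in hours}       # PV dispatch (kW)

# fixed (annualized) PV cost plus grid energy cost, in illustrative units
prob += 10 * buy_pv + lpSum(0.15 * grid[t] for t in hours)

for t in hours:
    prob += pv[t] <= pv_avail[t] * buy_pv   # PV usable only if procured
    prob += grid[t] + pv[t] == load[t]      # hourly energy balance

prob.solve()
print("buy PV:", buy_pv.value(), "total cost:", prob.objective.value())
```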

8 citations




Journal ArticleDOI
TL;DR: The ultimate goal is to set the standards for creating a FAIR database of chemical process flowsheets, which would be of great value for future data analysis and processing.
Abstract: SFILES are a text-based notation for chemical process flowsheets. They were originally proposed by d’Anterroches (Process flow sheet generation & design through a group contribution approach), who was inspired by the text-based SMILES notation for molecules. The text-based format has several advantages over flowsheet images regarding storage format, computational accessibility, and, eventually, data analysis and processing. However, the original SFILES version cannot describe essential flowsheet configurations unambiguously, such as the distinction between top and bottom products. Neither is it capable of describing the control structure required for the safe and reliable operation of chemical processes. Also, there is no publicly available software for decoding or encoding chemical process topologies to SFILES. We propose SFILES 2.0, with a complete description of the extended notation and naming conventions. Additionally, we provide open-source software for the automated conversion between flowsheet graphs and SFILES 2.0 strings. This way, we hope to encourage researchers and engineers to publish their flowsheet topologies as SFILES 2.0 strings. The ultimate goal is to set the standards for creating a FAIR database of chemical process flowsheets, which would be of great value for future data analysis and processing.
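To give a flavor of what a text-based flowsheet notation does, the toy sketch below linearizes a small flowsheet graph into a string by a depth-first walk. The unit names, bracket syntax, and branching rule are invented for illustration and do not follow the actual SFILES 2.0 grammar defined in the paper and its open-source converter.

```python
# Toy linearization of a small flowsheet graph into an SFILES-like string.
flowsheet = {                       # unit -> downstream units
    "feed-1": ["hex-1"],
    "hex-1": ["col-1"],
    "col-1": ["prod-1", "prod-2"],  # e.g. top and bottom products
    "prod-1": [],
    "prod-2": [],
}

def linearize(node, graph):
    """Depth-first walk emitting units in parentheses and branches in brackets."""
    out = f"({node})"
    succ = graph[node]
    if len(succ) == 1:
        out += linearize(succ[0], graph)
    elif len(succ) > 1:
        out += "".join("[" + linearize(s, graph) + "]" for s in succ)
    return out

print(linearize("feed-1", flowsheet))
# (feed-1)(hex-1)(col-1)[(prod-1)][(prod-2)]
```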

5 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compare the conventional problem formulations with less common ones (using equilibrium constraints, step functions, or multiplications of binary and continuous variables to model disjunctions) using three case studies.
Abstract: Superstructure optimization is a powerful but computationally demanding task that can be used to select the optimal structure among many alternatives within a single optimization. In chemical engineering, such problems naturally arise in process design, where different process alternatives need to be considered simultaneously to minimize a specific objective function (e.g., production costs or global warming impact). Conventionally, superstructure optimization problems are either formulated with the Big-M or the Convex Hull reformulation approach. However, for problems containing nonconvex functions, it is not clear whether these yield the most computationally efficient formulations. We therefore compare the conventional problem formulations with less common ones (using equilibrium constraints, step functions, or multiplications of binary and continuous variables to model disjunctions) using three case studies. First, a minimalist superstructure optimization problem is used to derive conjectures about their computational performance. These conjectures are then further investigated by two more complex literature benchmarks. Our analysis shows that the less common approaches tend to result in a smaller problem size, while keeping relaxations comparably tight—despite the introduction of additional nonconvexities. For the considered case studies, we demonstrate that all reformulation approaches can further benefit from eliminating optimization variables by a reduced-space formulation. For superstructure optimization problems containing nonconvex functions, we therefore encourage also considering problem formulations that introduce additional nonconvexities but reduce the number of optimization variables.
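As a minimal illustration of the two ends of this spectrum (not one of the paper's case studies), consider a single disjunction in which a binary variable y enforces either the constraint set g_1(x) <= 0 or the set g_2(x) <= 0:

$$
\text{Big-M:}\quad g_1(x) \le M\,(1-y),\qquad g_2(x) \le M\,y,\qquad y \in \{0,1\},
$$
$$
\text{Multiplication:}\quad y\,g_1(x) + (1-y)\,g_2(x) \le 0,\qquad y \in \{0,1\}.
$$

The Big-M form stays linear in y but needs a valid constant M, whose size weakens the continuous relaxation; the multiplication form avoids M at the price of bilinear, hence nonconvex, terms. This is exactly the trade-off between relaxation tightness and additional nonconvexities that the paper investigates.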

5 citations


Journal ArticleDOI
TL;DR: In this article, a multi-objective Bayesian Optimization algorithm based on Deep Gaussian Process is proposed in order to jointly model the objective functions, which makes it possible to exploit the correlations (linear and non-linear) between the objectives and speed up convergence to the Pareto front.
Abstract: Bayesian Optimization has become a widely used approach to perform optimization involving computationally intensive black-box functions, such as the design optimization of complex engineering systems. It is often based on Gaussian Process regression as a Bayesian surrogate model of the exact functions. Bayesian Optimization has been applied to single and multi-objective optimization problems. In the case of multi-objective optimization, the Bayesian models used in optimization often consider the multiple objectives separately and do not take into account the possible correlation between them near the Pareto front. In this paper, a Multi-Objective Bayesian Optimization algorithm based on Deep Gaussian Process is proposed in order to jointly model the objective functions. This makes it possible to exploit the correlations (linear and non-linear) between the objectives in order to improve the exploration of the search space and speed up convergence to the Pareto front. The proposed algorithm is compared to classical Bayesian Optimization on four analytical functions and two aerospace engineering problems.
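For context, a minimal single-objective version of the classical GP-based Bayesian optimization loop that such methods are benchmarked against might look as follows. This is an illustrative sketch assuming scikit-learn and SciPy are available, with a toy objective and an expected-improvement acquisition; the paper's algorithm instead models several objectives jointly with a Deep Gaussian Process.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(mu, sigma, best):
    """Expected improvement acquisition for minimization."""
    sigma = np.maximum(sigma, 1e-12)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def objective(x):                      # stand-in for an expensive black box
    return np.sin(3 * x) + 0.5 * x**2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(5, 1))    # initial design
y = objective(X).ravel()

for _ in range(15):                    # sequential BO loop
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = np.linspace(-2, 2, 400).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next)[0])

print("best found:", X[np.argmin(y)].item(), y.min())
```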

5 citations



Journal ArticleDOI
TL;DR: In this paper, a mathematical framework is presented that optimally adapts the treatment-length of an RT plan based on acquired mid-treatment biomarker information, while accounting for the inexact nature of this information.
Abstract: Traditionally, optimization of radiation therapy (RT) treatment plans has been done before the initiation of the RT course, using population-wide estimates for patients’ response to therapy. However, recent technological advancements have enabled monitoring individual patient response during the RT course, in the form of biomarkers. Although biomarker data remains subject to substantial uncertainties, information extracted from this data may allow the RT plan to be adapted in a biologically informative way. We present a mathematical framework that optimally adapts the treatment-length of an RT plan based on the acquired mid-treatment biomarker information, while accounting for the inexact nature of this information. We formulate the adaptive treatment-length optimization problem as a 2-stage problem, wherein the information about the model parameters gathered during the first stage influences the decisions in the second stage. Using Adjustable Robust Optimization (ARO) techniques we derive explicit optimal decision rules for the stage-2 decisions and solve the optimization problem. The problem allows for multiple worst-case optimal solutions. To discriminate between these, we introduce the concept of Pareto Adjustable Robustly Optimal solutions. In numerical experiments using lung cancer patient data, the ARO method is benchmarked against several other static and adaptive methods. In the case of exact biomarker information, there is sufficient space to adapt, and numerical results show that taking into account both robustness and adaptability is not necessary. In the case of inexact biomarker information, accounting for adaptability and inexactness of biomarker information is particularly beneficial when robustness (w.r.t. organ-at-risk (OAR) constraint violations) is of high importance. If minor OAR violations are allowed, a nominal folding horizon approach (NOM-FH) is a well-performing alternative, which can outperform ARO. Both the difference in performance and the magnitude of OAR violations of NOM-FH are highly influenced by the biomarker information quality.
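Schematically, a 2-stage adjustable robust problem of the kind described above has the generic form below. The notation is introduced here for orientation only; the paper's model works with dose, treatment-length, and biomarker-dependent parameters rather than this abstract form.

$$
\min_{x}\;\max_{u \in \mathcal{U}}\;\min_{y(u)}\; c^{\top}x + d^{\top}y(u)
\quad \text{s.t.} \quad A x + B\,y(u) \le b(u) \quad \forall\, u \in \mathcal{U},
$$

where x collects the stage-1 (pre-biomarker) decisions, u the uncertain biomarker-derived parameters in the uncertainty set U, and y(u) the stage-2 adaptation. ARO makes such problems tractable by restricting y(u) to an explicit decision rule, in the simplest case an affine one, y(u) = y_0 + Y u.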

4 citations



Journal ArticleDOI
TL;DR: In this article, a revenue-maximizing, non-convex, mixed-integer, quadratically constrained program is proposed to provide real-time decision support for a concentrating solar power (CSP) plant.
Abstract: Concentrating solar power (CSP) plants present a promising path towards utility-scale renewable energy. The power tower, or central receiver, configuration can achieve higher operating temperatures than other forms of CSP, and, like all forms of CSP, naturally pairs with comparatively inexpensive thermal energy storage, which allows CSP plants to dispatch electricity according to market price incentives and outside the hours of solar resource availability. Currently, CSP plants commonly include a steam Rankine power cycle and several heat exchange components to generate high-pressure steam using stored thermal energy. The efficiency of the steam Rankine cycle depends on the temperature of the plant’s operating fluid, and so is a main concern of plant operators. However, the variable nature of the solar resource and the conservatism with which the receiver is operated prevent perfect control over the receiver outlet temperature. Therefore, during periods of solar variability, collection occurs at lower-than-design temperature. To support operator decisions in a real-time setting, we develop a revenue-maximizing, non-convex, mixed-integer, quadratically constrained program that determines a dispatch schedule with sub-hourly time fidelity and considers temperature-dependent power cycle efficiency. The exact nonlinear formulation proves intractable for real-time decision support. We present exact and inexact techniques to improve problem tractability that include a hybrid nonlinear and linear formulation. Our approach admits solutions within approximately 3% of optimality, on average, within a five-minute time limit, demonstrating its usability for decision support in a real-time setting.

Journal ArticleDOI
TL;DR: In this paper , a semi-intrusive approach for robust design optimization is presented, which requires derivatives with respect to design variables, random variables as well as mixed derivatives, and is implemented as an add-on for commercial software.
Abstract: A semi-intrusive approach for robust design optimization is presented. The stochastic moments of the objective function and constraints are estimated using a Taylor series-based approach, which requires derivatives with respect to design variables and random variables as well as mixed derivatives. The required derivatives with respect to design variables are determined using the intrusive adjoint method available in commercial software. The partial derivatives with respect to random parameters as well as the mixed second derivatives are approximated non-intrusively using finite differences. The presented approach provides a semi-intrusive procedure for robust design optimization at reasonable computational cost while allowing an arbitrary choice of random parameters. The approach is implemented as an add-on for commercial software. The method and its limitations are demonstrated by academic test cases and industrial applications.
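For independent random parameters p_i with means \bar p_i and standard deviations \sigma_i, the standard Taylor-series moment estimates underlying this kind of approach read as follows (generic textbook form shown for orientation; the exact expansion order and the treatment of correlations are those of the paper):

$$
\mathbb{E}[f] \approx f(\bar{p}) + \frac{1}{2}\sum_i \left.\frac{\partial^2 f}{\partial p_i^2}\right|_{\bar{p}} \sigma_i^2,
\qquad
\operatorname{Var}[f] \approx \sum_i \left(\left.\frac{\partial f}{\partial p_i}\right|_{\bar{p}}\right)^{\!2} \sigma_i^2 .
$$

Differentiating these moment estimates with respect to the design variables is what brings in the mixed second derivatives: for example, the gradient of the variance estimate contains products of \( \partial f/\partial p_i \) with the mixed terms \( \partial^2 f / \partial x\,\partial p_i \).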

Journal ArticleDOI
TL;DR: In this article, the authors present a system optimization tool, HOPS, that has been adopted as a central component of the Virgin Hyperloop design process, and discuss the choice of objective function, the use of a convex optimization technique called geometric programming, and the level of modeling fidelity that has allowed them to capture the system’s many intertwined, and often recursive, design relationships.
Abstract: Hyperloop system design is a uniquely coupled problem because it involves the simultaneous design of a complex, high-performance vehicle and its accompanying infrastructure. In the clean-sheet design of this new mode of high-speed mass transportation there is an excellent opportunity for the application of rigorous system optimization techniques. This work presents a system optimization tool, HOPS, that has been adopted as a central component of the Virgin Hyperloop design process. We discuss the choice of objective function, the use of a convex optimization technique called geometric programming, and the level of modeling fidelity that has allowed us to capture the system’s many intertwined, and often recursive, design relationships. We also highlight the ways in which the tool has been used. Because organizational confidence in a model is as vital as its technical merit, we close with a discussion of the measures taken to build stakeholder trust in HOPS.
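For readers unfamiliar with the problem class, the toy geometric program below, solved with CVXPY's log-log (gp=True) mode, shows the posynomial objective/constraint structure that makes GPs tractable. The variables, coefficients, and constraints are invented and bear no relation to the actual HOPS vehicle or infrastructure models; a conic solver with exponential-cone support is assumed to be installed with CVXPY.

```python
# Toy geometric program: minimize a posynomial cost subject to
# posynomial <= monomial constraints (illustrative only).
import cvxpy as cp

mass = cp.Variable(pos=True)    # illustrative "mass" variable
power = cp.Variable(pos=True)   # illustrative "power" variable

objective = cp.Minimize(2.0 * mass + 5.0 * power)
constraints = [
    40.0 * mass**-1 + 0.5 * power**-1 <= 1.0,  # performance requirement
    mass * power <= 5000.0,                     # coupling limit
]
prob = cp.Problem(objective, constraints)
prob.solve(gp=True)   # geometric-programming (log-log convex) mode
print(mass.value, power.value, prob.value)
```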

Journal ArticleDOI
TL;DR: In this article, a refined inertial difference-of-convex (DC) algorithm (RInDCA) is proposed, which is based on InDCA with an enlarged inertial step-size.
Abstract: In this paper we consider difference-of-convex (DC) programming problems, whose objective function is the difference of two convex functions. The classical DC Algorithm (DCA) is well-known for solving this kind of problem and generally returns a critical point. Recently, an inertial DC algorithm (InDCA) equipped with a heavy-ball inertial-force procedure was proposed in de Oliveira et al. (Set-Valued and Variational Analysis 27(4):895–919, 2019), which potentially helps to improve both the convergence speed and the solution quality. Based on InDCA, we propose a refined inertial DC algorithm (RInDCA) equipped with an enlarged inertial step-size compared with InDCA. Empirically, a larger step-size accelerates convergence. We demonstrate the subsequential convergence of our refined version to a critical point. In addition, by assuming the Kurdyka-Łojasiewicz (KL) property of the objective function, we establish the sequential convergence of RInDCA. Numerical simulations on checking copositivity of matrices and on an image denoising problem show the benefit of the larger step-size.
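For orientation, with f = g - h and g, h convex, the classical DCA iteration and its heavy-ball inertial variant take roughly the following form. This is a schematic only; the admissible range of the inertial parameter, which is what RInDCA enlarges, is derived in the paper.

$$
\text{DCA:}\qquad y^{k} \in \partial h(x^{k}), \qquad x^{k+1} \in \arg\min_{x}\; g(x) - \langle y^{k},\, x\rangle ,
$$
$$
\text{inertial DCA:}\qquad y^{k} \in \partial h(x^{k}), \qquad x^{k+1} \in \arg\min_{x}\; g(x) - \langle y^{k} + \gamma\,(x^{k} - x^{k-1}),\, x\rangle ,
$$

with inertial parameter \( \gamma \ge 0 \).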


Journal ArticleDOI
TL;DR: In this article, the effects of changes in prices and in the available carbon sink are considered in the management of wood purchasing at the level of local districts, and the results indicate that the model optimizes wood purchasing in the districts in line with the CO2 emission allowance market.
Abstract: Rapid market changes in the EU’s CO2 emission allowance price have increased operational challenges in the wood supply of the forest industry. The objective of this study is to present the basics of data-driven modeling for purchasing renewable forest wood. In particular, the effects of changes in prices and in the available carbon sink are considered in the management of wood purchasing at the level of the local districts. Two scenarios described procurement situations with non-renewable carbon sinks. The results were compared to a scenario with a renewable carbon sink under carbon-neutral forestry. Time-varying emission-allowance parameters of the models affected wood purchases and deliveries in the districts. Therefore, the cost efficiency of wood-supply operations, as well as the utilization rate of renewable wood resources, can be optimized by data-driven dynamic wood-flow models in digitalized decision support. In addition, the results indicate that the model optimizes wood purchasing in the districts in line with the CO2 emission allowance market. Therefore, by using the model, wood-supply operations could be optimized toward carbon neutrality, which is an important success factor for the forest industry.


Journal ArticleDOI
TL;DR: Investigations of overall accuracy and computation time in the preliminary results reveal that SO-based approaches can provide cost-effective improvements in the predictive power of data mining models.


Journal ArticleDOI
TL;DR: In this paper, an integrated methodology to define the routes of platform supply vessels and port schedules in a three-phase framework is presented, where the first phase decomposes the problem using a clustering heuristic and then solves periodic supply vessel routing problems for each cluster.
Abstract: In this study, we deal with a real-world problem in oil and gas upstream logistics, comprising the transport of goods from ports to maritime units by vessels called Platform Supply Vessels (PSVs). We present an integrated methodology to define the routes of these vessels and port schedules in a three-phase framework. In the first phase, we decompose the problem using a clustering heuristic and then solve periodic supply vessel routing problems for each cluster. The second phase employs a mixed-integer programming model for port scheduling and berth allocation. Finally, in the third phase, given port departure times, the routes are re-sequenced to respect opening time constraints at installations, aiming to reduce waiting times and to balance the intervals between successive services. The framework was validated and evaluated considering a real scenario from an industrial partner located in Rio de Janeiro, Brazil. The experiments’ results revealed that the framework could consistently and significantly outperform the solution adopted by the company in terms of economic costs.
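The snippet below only gestures at the first (decomposition) phase: it groups installations by location so that a separate periodic routing problem could be solved per cluster. The coordinates are invented and k-means is used as a stand-in; the paper applies its own clustering heuristic and routing models, and scikit-learn availability is assumed.

```python
# Illustrative phase-1 decomposition: cluster maritime installations by
# position before solving a routing problem per cluster (stand-in only).
import numpy as np
from sklearn.cluster import KMeans

installations = np.array([       # (x, y) positions, illustrative
    [10.0, 2.0], [11.5, 2.5], [10.8, 1.7],
    [25.0, 14.0], [26.2, 15.1], [24.4, 13.2],
])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(installations)

clusters = {c: installations[labels == c] for c in set(labels)}
for c, pts in clusters.items():
    print(f"cluster {c}: {len(pts)} installations")
```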



Journal ArticleDOI
TL;DR: In this article, a new exact method to calculate worst-case parameter realizations in two-stage robust optimization problems with categorical or binary-valued uncertain data is presented; the method is specialized to problems where the binary parameters switch constraints on or off, as these are commonly encountered in applications.
Abstract: This paper presents a new exact method to calculate worst-case parameter realizations in two-stage robust optimization problems with categorical or binary-valued uncertain data. Traditional exact algorithms for these problems, notably Benders decomposition and column-and-constraint generation, compute worst-case parameter realizations by solving mixed-integer bilinear optimization subproblems. However, their numerical solution can be computationally expensive not only due to their resulting large size after reformulating the bilinear terms, but also because decision-independent bounds on their variables are typically unknown. We propose an alternative Lagrangian dual method that circumvents these difficulties and is readily integrated into either algorithm. We specialize the method to problems where the binary parameters switch constraints on or off, as such problems are commonly encountered in applications, and discuss extensions to problems that lack relatively complete recourse and to those with integer recourse. Numerical experiments provide evidence of significant computational improvements over existing methods.
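In column-and-constraint generation terms, the worst-case subproblem in question has the generic max-min form below (schematic notation introduced here, not the paper's). The bilinear products of the dual variables with the uncertain right-hand side are exactly the terms whose linearization normally requires the unknown variable bounds mentioned above:

$$
\max_{\xi \in \Xi}\; \min_{y \ge 0} \left\{ d^{\top} y \;:\; W y \ge h(\xi) - T x^{*} \right\}
\;=\;
\max_{\xi \in \Xi,\; \pi \ge 0} \left\{ \pi^{\top}\bigl(h(\xi) - T x^{*}\bigr) \;:\; W^{\top} \pi \le d \right\},
$$

where x* is the current first-stage solution, ξ collects the binary uncertain parameters, and the equality follows from LP duality of the inner minimization.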




Journal ArticleDOI
TL;DR: In this article, a robust approximation of joint chance constrained DC optimal power flow in combination with a model-based prediction of uncertain power supply via R-vine copulas is presented to optimize the discrete curtailment of solar feed-in in an electrical distribution network.
Abstract: We present a robust approximation of joint chance constrained DC optimal power flow in combination with a model-based prediction of uncertain power supply via R-vine copulas. It is applied to optimize the discrete curtailment of solar feed-in in an electrical distribution network and guarantees network stability under fluctuating feed-in. This is modeled by a two-stage mixed-integer stochastic optimization problem proposed by Aigner et al. (Eur J Oper Res, 2022, https://doi.org/10.1016/j.ejor.2021.10.051). The solution approach is based on the approximation of chance constraints via robust constraints using suitable uncertainty sets. The resulting robust optimization problem has a known equivalent tractable reformulation. To compute uncertainty sets that lead to an inner approximation of the stochastic problem, an R-vine copula model is fitted to the distribution of the multi-dimensional power forecast error, i.e., the difference between the forecasted solar power and the measured feed-in at several network nodes. The uncertainty sets are determined by encompassing a sufficient number of samples drawn from the R-vine copula model. Furthermore, an enhanced algorithm is proposed to fit R-vine copulas, which can be used to draw conditional samples for given solar radiation forecasts. The experimental results obtained for real-world weather and network data demonstrate the effectiveness of the combination of stochastic programming and model-based prediction of uncertainty via copulas. We improve the outcomes of previous work by showing that the resulting uncertainty sets are much smaller and lead to less conservative solutions while maintaining the same probabilistic guarantees.
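Schematically, the approximation replaces the joint chance constraint with a robust counterpart over a data-driven uncertainty set. The generic form is shown here for orientation; the network model and the copula-based set construction are as described above:

$$
\mathbb{P}_{\xi}\bigl[\,g(x,\xi) \le 0\,\bigr] \;\ge\; 1-\varepsilon
\qquad\Longleftarrow\qquad
g(x,\xi) \le 0 \quad \forall\, \xi \in \mathcal{U},
\quad\text{with } \mathbb{P}_{\xi}\bigl[\xi \in \mathcal{U}\bigr] \ge 1-\varepsilon ,
$$

where the uncertainty set U is constructed to contain a sufficient share of the samples drawn from the fitted R-vine copula model of the forecast-error distribution, so that satisfying the robust constraint implies the original chance constraint.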

Journal ArticleDOI
TL;DR: In this paper, the authors formulate the bilevel problem with mixed-integer decision variables in the upper and the lower level, and propose an algorithm to solve this problem based on the deterministic and global algorithm by Djelassi, Glass and Mitsos.
Abstract: Energy-intensive production sites are often supplied with energy by on-site energy systems. Commonly, the scheduling of the systems is performed sequentially, starting with the scheduling of the production system. Often, the on-site energy system is operated by a different company than the production system. In consequence, the production and the energy system schedule their operation towards misaligned objectives, leading in general to suboptimal schedules for both systems. To reflect the independent optimization with misaligned objectives, the scheduling problem of the production system can be formulated as a bilevel problem. We formulate the bilevel problem with mixed-integer decision variables in the upper and the lower level, and propose an algorithm to solve this bilevel problem based on the deterministic and global algorithm by Djelassi, Glass and Mitsos (J Glob Optim 75:341–392, 2019. https://doi.org/10.1007/s10898-019-00764-3) for bilevel problems with coupling equality constraints. The algorithm works by discretizing the independent lower-level variables. In the scheduling problem considered herein, the only coupling equality constraints are energy balances in the lower level. Since an intuitive distinction is missing between dependent and independent variables, we specialize the algorithm and add a procedure to identify independent variables to be discretized. Thereby, we preserve convergence guarantees. The performance of the algorithm is demonstrated in two case studies. In the case studies, the production system favors different technologies for the energy supply than the energy system. By solving the bilevel problem, the production system identifies an energy demand, which leads to minimal cost. Additionally, we demonstrate the benefits of solving the bilevel problem instead of solving the common integrated or sequential problem.
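In generic form, the scheduling problem described above is a bilevel program of the following type (schematic notation only; here x collects the production-system decisions, y the energy-system decisions, both mixed-integer, and the equalities h are the lower-level energy balances acting as the coupling constraints):

$$
\min_{x,\,y}\; F(x,y)
\quad\text{s.t.}\quad G(x,y) \le 0,\qquad
y \in \arg\min_{y'} \bigl\{\, f(x,y') \;:\; g(x,y') \le 0,\ h(x,y') = 0 \,\bigr\}.
$$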