
Showing papers in "A Quarterly Journal of Operations Research" in 2018



Journal ArticleDOI
TL;DR: This paper presents the convex programming problems underlying SVM, focusing on supervised binary classification, analyzes the most important and most widely used optimization methods for SVM training problems, and discusses how the properties of these problems can be incorporated in designing useful algorithms.
Abstract: Support Vector Machine (SVM) is one of the most important classes of machine learning models and algorithms, and has been successfully applied in various fields. Nonlinear optimization plays a crucial role in SVM methodology, both in defining the machine learning models and in designing convergent and efficient algorithms for large-scale training problems. In this paper we present the convex programming problems underlying SVM, focusing on supervised binary classification. We analyze the most important and most widely used optimization methods for SVM training problems, and we discuss how the properties of these problems can be incorporated in designing useful algorithms.

26 citations
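The training problem behind the linear soft-margin SVM can be illustrated with a minimal sketch: sub-gradient descent on the regularized hinge loss, one of the simplest of the nonlinear optimization approaches the paper surveys. The data and parameters below are made up for illustration; this is not one of the authors' large-scale methods.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=500):
    """Deterministic sub-gradient descent on the primal objective
    lam/2 * ||w||^2 + mean(max(0, 1 - y_i * <w, x_i>))."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        margins = y * (X @ w)
        active = margins < 1                     # points violating the margin
        grad = lam * w - (y[active] @ X[active]) / n
        w -= lr * grad
    return w

# Toy linearly separable data (illustrative only).
X = np.array([[2.0, 1.0], [1.5, -0.5], [-2.0, 0.3], [-1.0, -1.2]])
y = np.array([1, 1, -1, -1])
w = train_linear_svm(X, y)
print(np.sign(X @ w))   # should match the labels y
```

In practice SVM training is done on the convex dual (e.g. by decomposition methods such as SMO), which is where the structure-exploiting algorithms discussed in the paper come in.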


Journal ArticleDOI
TL;DR: Under suitable assumptions on the generalized convexity of objective and constraint functions, sufficient conditions for LU-optimal solutions for constrained interval-valued optimization problems involving inequality, equality and set constraints in Banach spaces are given.
Abstract: Fritz John and Karush–Kuhn–Tucker necessary conditions for local LU-optimal solutions of the constrained interval-valued optimization problems involving inequality, equality and set constraints in Banach spaces in terms of convexificators are established. Under suitable assumptions on the generalized convexity of objective and constraint functions, sufficient conditions for LU-optimal solutions are given. The dual problems of Mond–Weir and Wolfe types are studied together with weak and strong duality theorems for them.

17 citations


Journal ArticleDOI
TL;DR: The H-BFGS quasi-Newton multiobjective optimization method provides higher-order accuracy in approximating the second-order curvature of the problem functions than the BFGS and SS-BFGS methods, and has some benefits compared to the other methods, as shown in the numerical results.
Abstract: This work is an attempt to develop multiobjective versions of some well-known single objective quasi-Newton methods, including BFGS, self-scaling BFGS (SS-BFGS), and the Huang BFGS (H-BFGS). A comprehensive and comparative study of these methods is presented in this paper. The Armijo line search is used for the implementation of these methods. The numerical results show that the Armijo rule does not work the same way for the multiobjective case as for the single objective case, because, in this case, it imposes a large computational effort and significantly decreases the speed of convergence in contrast to the single objective case. Hence, we consider two cases of all multi-objective versions of quasi-Newton methods: in the presence of the Armijo line search and in the absence of any line search. Moreover, the convergence of these methods without using any line search under some mild conditions is shown. Also, by introducing a multiobjective subproblem for finding the quasi-Newton multiobjective search direction, a simple representation of the Karush–Kuhn–Tucker conditions is derived. The H-BFGS quasi-Newton multiobjective optimization method provides a higher-order accuracy in approximating the second order curvature of the problem functions than the BFGS and SS-BFGS methods. Thus, this method has some benefits compared to the other methods as shown in the numerical results. All mentioned methods proposed in this paper are evaluated and compared with each other in different aspects. To do so, some well-known test problems and performance assessment criteria are employed. Moreover, these methods are compared with each other with regard to the expended CPU time, the number of iterations, and the number of function evaluations.

12 citations
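The single-objective building block that these multiobjective variants generalize, BFGS with an Armijo backtracking line search, can be sketched as follows. This is a toy implementation on a convex quadratic, not the authors' multiobjective methods; the test function and tolerances are illustrative choices.

```python
import numpy as np

def armijo(f, x, d, g, beta=0.5, sigma=1e-4):
    """Backtracking Armijo rule: largest t = beta^k with
    f(x + t*d) <= f(x) + sigma * t * <g, d>."""
    t = 1.0
    while f(x + t * d) > f(x) + sigma * t * (g @ d):
        t *= beta
    return t

def bfgs(f, grad, x0, tol=1e-8, max_iter=100):
    """Plain single-objective BFGS, maintaining an inverse Hessian
    approximation H and updating it only when the curvature
    condition s'y > 0 holds (keeps H positive definite)."""
    x = np.asarray(x0, float)
    H = np.eye(len(x))
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                      # quasi-Newton search direction
        t = armijo(f, x, d, g)
        s = t * d
        x_new = x + s
        g_new = grad(x_new)
        yv = g_new - g
        sy = s @ yv
        if sy > 1e-12:
            rho = 1.0 / sy
            I = np.eye(len(x))
            H = (I - rho * np.outer(s, yv)) @ H @ (I - rho * np.outer(yv, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Convex quadratic with minimizer (1, -2).
f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] + 2)])
x_star = bfgs(f, grad, [0.0, 0.0])
print(x_star)   # close to [1, -2]
```

The paper's observation is that in the multiobjective setting this Armijo step becomes expensive, which motivates the line-search-free variants it analyzes.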


Journal ArticleDOI
TL;DR: Lin et al. as discussed by the authors derived analytic formulas for the overall efficacy of corneal collagen crosslinking (CXL) based on coupled kinetic equations including both non-oxygen-mediated (NOM) and oxygen-mediated (OM) type-II mechanisms.
Abstract: Aims: To derive analytic formulas for the overall efficacy of corneal collagen crosslinking (CXL) based on coupled kinetic equations including both non-oxygen-mediated (NOM) and oxygen-mediated (OM) type-II mechanisms. Study Design: Modeling the kinetics of CXL. Place and Duration of Study: Taipei, Taiwan, between June 2017 and January 2018. Methodology: Coupled kinetic equations are derived under the quasi-steady state condition for the 3-pathway mechanisms of CXL. For type-I CXL, the riboflavin triplet state [T3] may interact directly with the stroma collagen substrate [A] under NOM, or with the ground-state oxygen [O2] to form reactive oxygen species [O-] under OM. For the type-II process, [T3] interacts with [O2] to form singlet oxygen [1O2]. Both reactive oxygen species (ROS), [O-] and [1O2], can relax to [O2], or interact with the extracellular matrix (or the stroma substrate [A]) for crosslinking. Results: In the first 3 to 20 seconds, CXL efficacy is governed by both type-I and type-II mechanisms; after that period type-I NOM is the predominant contribution, while oxygen for OM only plays a limited and transient role, contrary to the conventionally believed OM-dominant mechanism. The riboflavin profile has a much slower depletion rate than the oxygen profile. The ratio between NOM type-I and OM depends on the relative initial concentrations of [A] and [1O2] and their diffusion depths in the stroma. The overall CXL efficacy is proportional to the UV light dose (or fluence) and the initial concentrations of riboflavin, C(z, t), and oxygen, [O2], where efficacy is limited by the depletion of either C(z, t) or [O2].
Conclusion: Resupplying riboflavin and/or oxygen under a controlled-concentration method (CCM) during the UV exposure may improve the overall efficacy, especially for accelerated CXL, which has lower efficacy than the standard low-power Dresden protocol (under non-controlled concentration).

12 citations


Journal ArticleDOI
TL;DR: Kowalski as discussed by the authors discusses how the notion of the Church was understood and explained by authors of sixteenth-century Polish Catholic and Evangelical catechisms and illustrates the reception of Evangelical theologies in the Kingdom of Poland and in the Grand Duchy of Lithuania.
Abstract: The article discusses how the notion of the Church was understood and explained by authors of sixteenth-century Polish Catholic and Evangelical catechisms. Teaching of the constitution of a church was a basic pastoral duty and part of the rudimentary knowledge provided to the faithful. Six Catholic catechisms of the years 1553 to 1600 and thirteen Evangelical ones, which were published between 1543 and 1609, constitute the main source base, the latter manuals being penned under the influence of Luther, Melanchthon as well as South-German and Swiss theologians. Following the Tridentine programme, the Catholic authors present their Church as unified under the Pope's authority and the only inheritor of the works of the Apostles. The veracity of its teaching is testified to with God's unnatural interventions. The Protestant authors explain the basically coherent, relevant ideas of the Reformation's protagonists. They teach about "the visible and outward Church", which is manifested by all those congregations that are fed by God's pure Word, and where the sacraments are duly administered. There is also "the inward and invisible Church", which the faithful confess in the Credo. It comprises all disciples of Christ, who is the only head of His Church. Thus, the teaching on the Church presented in the Evangelical sources that are employed belongs to the mainstream of sixteenth-century Protestantism and aptly illustrates the reception of Evangelical theologies in the Kingdom of Poland and in the Grand Duchy of Lithuania. In the analysed sources, arguments for the veracity of the Church are always supplemented with the refutation of contradictory standpoints through reference to the Bible and the Church Fathers, mostly to Augustine.
Despite the strong polemical tone, the Biblical grounds of the Church could contribute to communication and understanding between Christians of antagonistic denominations, and this could sometimes result in conversion. The explanation of ecclesiological rudiments was easier for the Catholic clergy, who referred to tradition and emotions, while Evangelical pastors could not ignore the abstract concepts of the "veracity" and "spiritual connectedness" of Christians, which were more difficult to render to the laity.

Waldemar Kowalski, professor in history at the Jan Kochanowski University in Kielce. His scholarly interests encompass interconfessional and ethnic relations, the history of the Catholic Church in Poland from the fifteenth to the eighteenth century, emigration from the British Isles to Central Europe from the sixteenth to the seventeenth century, auxiliary sciences of history, and, in particular, the history of archives and epigraphy. E-mail: kowalski@ujk.edu.pl

Trans. by Bartosz Wójcik. First published as: "'To jest owczarnia onego Dobrego Pasterza'. Pojęcie 'prawdziwego' Kościoła w polskich szesnastowiecznych katechizmach," Odrodzenie i Reformacja w Polsce 60 (2016), pp. 29–71. The publication of this English translation has received additional funding from the Ministry of Science and Higher Education of the Republic of Poland.

12 citations


Journal ArticleDOI
TL;DR: Under suitable assumptions on generalized convexity, sufficient optimality conditions for efficient solutions of the vector equilibrium problem without constraints in Banach spaces with stable functions are established.
Abstract: This article presents necessary and sufficient optimality conditions for weakly efficient, Henig efficient, globally efficient and superefficient solutions of the vector equilibrium problem without constraints, in terms of contingent derivatives in Banach spaces with stable functions. Using the steadiness and stability on a neighborhood of the optimal point, necessary optimality conditions for efficient solutions are derived. Under suitable assumptions on generalized convexity, sufficient optimality conditions are established. Without assumptions on generalized convexity, a necessary and sufficient optimality condition for efficient solutions of the unconstrained vector equilibrium problem is also given. Many examples illustrating the obtained results are provided as well.

11 citations


Book ChapterDOI
TL;DR: This work presents the optimal system found by the systematic design of a decentralized water supply system for skyscrapers and highlights the energy savings compared to a conventional system design.
Abstract: The energy-efficiency of technical systems can be improved by a systematic design approach. Technical Operations Research (TOR) employs methods known from Operations Research to find a global optimal layout and operation strategy of technical systems. We show the practical usage of this approach by the systematic design of a decentralized water supply system for skyscrapers. All possible network options and operation strategies are modeled by a Mixed-Integer Nonlinear Program. We present the optimal system found by our approach and highlight the energy savings compared to a conventional system design.

11 citations


Book ChapterDOI
TL;DR: This article describes an interdisciplinary approach that exploits the intrinsic structure of large-scale linear energy system models so they can be solved on massively parallel high-performance computers.
Abstract: Current linear energy system models (ESM), which aim to provide sufficient detail and reliability, frequently bring along problems of both high intricacy and increasing scale. Unfortunately, the size and complexity of these problems often prove to be intractable even for commercial state-of-the-art linear programming solvers. This article describes an interdisciplinary approach to exploit the intrinsic structure of these large-scale linear problems to be able to solve them on massively parallel high-performance computers. A key aspect is a set of extensions to the parallel interior-point solver PIPS-IPM originally developed for stochastic optimization problems. Furthermore, a newly developed GAMS interface to the solver as well as some GAMS language extensions to model block-structured problems will be described.

11 citations


Journal ArticleDOI
TL;DR: The block rearrangement algorithm with variance equalization (BRAVE) as discussed by the authors was proposed to find optimal blocks of columns and rearrange them using a carefully motivated heuristic.
Abstract: Several problems in operations research, such as the assembly line crew scheduling problem and the k-partitioning problem can be cast as the problem of finding the intra-column rearrangement (permutation) of a matrix such that the row sums show minimum variability. A necessary condition for optimality of the rearranged matrix is that for every block containing one or more columns it must hold that its row sums are oppositely ordered to the row sums of the remaining columns. We propose the block rearrangement algorithm with variance equalization (BRAVE) as a suitable method to achieve this situation. It uses a carefully motivated heuristic—based on an idea of variance equalization—to find optimal blocks of columns and rearranges them. When applied to the number partitioning problem, we show that BRAVE outperforms the well-known greedy algorithm and the Karmarkar–Karp differencing algorithm.

11 citations
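The two baselines the abstract compares against can be sketched directly for the number partitioning problem. Both are classical heuristics; the instance below is made up, and neither heuristic finds its optimal imbalance of 0 ({8, 7} vs. {6, 5, 4}), which is exactly the kind of gap BRAVE targets.

```python
import heapq

def greedy_partition(nums):
    """Greedy heuristic: assign each number (largest first) to the
    currently lighter side; return the final imbalance |sum_A - sum_B|."""
    a = b = 0
    for x in sorted(nums, reverse=True):
        if a <= b:
            a += x
        else:
            b += x
    return abs(a - b)

def karmarkar_karp(nums):
    """Karmarkar-Karp differencing: repeatedly replace the two largest
    numbers by their difference; the single survivor is the imbalance."""
    heap = [-x for x in nums]          # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        x = -heapq.heappop(heap)
        y = -heapq.heappop(heap)
        heapq.heappush(heap, -(x - y))
    return -heap[0]

nums = [8, 7, 6, 5, 4]
# Greedy gives 4, Karmarkar-Karp gives 2; the true optimum is 0.
print(greedy_partition(nums), karmarkar_karp(nums))
```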


Book ChapterDOI
TL;DR: This work considers a logistics company that repeatedly has to pick up goods at different sites and proposes a hybrid algorithm for this problem, which consists of a tabu search procedure for routing and several packing heuristics with different tasks.
Abstract: The capacitated vehicle routing problem with three-dimensional loading constraints (3L-CVRP) combines vehicle routing and three-dimensional loading with additional packing constraints concerning, for example, the stability of packed goods. We consider a logistics company that repeatedly has to pick up goods at different sites. Often, the load of one site exceeds the volume capacity of a vehicle. Therefore, we focus on the 3L-CVRP with split delivery and propose a hybrid algorithm for this problem. It consists of a tabu search procedure for routing and some packing heuristics with different tasks. One packing heuristic generates packing plans for shuttle tours involving special sites with large-volume sets of goods. Another heuristic cares for packing plans for tours with numerous sites. The hybrid algorithm is tested with a set of instances which differs from often used 3L-CVRP test instances and comes from real industrial data, with up to 46 sites and 1549 boxes to be transported. The algorithm yields good results within short computing times of less than 1 min.

Book ChapterDOI
TL;DR: WORHP Zen, a sensitivity analysis module for the nonlinear programming solver WORHP, is presented; it is capable of efficiently calculating parametric sensitivities using an existing factorization, storing these derivatives sparsely, and performing real-time updates to calculate an approximated solution of a perturbed optimization problem.
Abstract: Nonlinear optimization problems that arise in real-world applications usually depend on parameter data. Parametric sensitivity analysis is concerned with the effects on the optimal solution caused by changes of these. The calculated sensitivities are of high interest because they improve the understanding of the optimal solution and allow the formulation of real-time capable update algorithms. We present WORHP Zen, a sensitivity analysis module for the nonlinear programming solver WORHP that is capable of the following: (i) Efficient calculation of parametric sensitivities using an existing factorization; (ii) efficient sparse storage of these derivatives, and (iii) real-time updates to calculate an approximated solution of a perturbed optimization problem. An example application of WORHP Zen in the context of parameter identification is presented.
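The core idea, differentiating the first-order optimality conditions so that sensitivities come from one linear solve with an already factorized matrix, can be illustrated on a toy unconstrained problem. This shows the general technique only, not WORHP Zen's API; the model function is made up.

```python
import numpy as np

# Model problem: f(x; p) = (x1 - p)^2 + (x2 - 2p)^2.
# Its minimizer is x*(p) = (p, 2p), so the true sensitivity dx*/dp = (1, 2).
def sensitivity(p):
    """Since grad_x f(x*(p), p) = 0 for all p, differentiating in p gives
    H * dx*/dp = -d(grad_x f)/dp: one linear solve with the (already
    factorized, in a real solver) Hessian H. For this quadratic model
    both matrices are constant in p."""
    H = np.array([[2.0, 0.0], [0.0, 2.0]])   # Hessian of f w.r.t. x
    grad_xp = np.array([-2.0, -4.0])         # mixed derivative d(grad_x f)/dp
    return np.linalg.solve(H, -grad_xp)

dxdp = sensitivity(1.0)
print(dxdp)   # [1. 2.]
```

With the sensitivity in hand, a first-order update x*(p + dp) ≈ x*(p) + dxdp * dp gives the real-time approximate solution of the perturbed problem that the abstract refers to.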

Book ChapterDOI
TL;DR: This work focuses on three such algorithms, namely the classes of large neighborhood search and diving heuristics as well as Simplex pricing strategies, and proposes a selection strategy that is updated based on the observed runtime behavior.
Abstract: State-of-the-art solvers for mixed integer programs (MIP) govern a variety of algorithmic components. Ideally, the solver adaptively learns to concentrate its computational budget on those components that perform well on a particular problem, especially if they are time consuming. We focus on three such algorithms, namely the classes of large neighborhood search and diving heuristics as well as Simplex pricing strategies. For each class we propose a selection strategy that is updated based on the observed runtime behavior, aiming to ultimately select only the best algorithms for a given instance. We review several common strategies for such a selection scenario under uncertainty, also known as Multi Armed Bandit Problem. In order to apply those bandit strategies, we carefully design reward functions to rank and compare each individual heuristic or pricing algorithm within its respective class. Finally, we discuss the computational benefits of using the proposed adaptive selection within the SCIP Optimization Suite on publicly available MIP instances.
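One standard bandit strategy of the kind the abstract reviews, UCB1, can be sketched as follows. The three "heuristics" with fixed success probabilities are purely hypothetical stand-ins for solver components; inside a MIP solver the reward function would be the carefully designed ranking the authors describe.

```python
import math, random

def ucb1_select(counts, rewards, t):
    """UCB1 index: empirical mean + sqrt(2 ln t / n_i).
    Arms never played are tried first."""
    for i, n in enumerate(counts):
        if n == 0:
            return i
    return max(range(len(counts)),
               key=lambda i: rewards[i] / counts[i]
                             + math.sqrt(2 * math.log(t) / counts[i]))

random.seed(1)
probs = [0.2, 0.5, 0.8]        # hypothetical success rates of 3 heuristics
counts = [0] * 3
rewards = [0.0] * 3
for t in range(1, 2001):
    arm = ucb1_select(counts, rewards, t)
    counts[arm] += 1
    rewards[arm] += 1.0 if random.random() < probs[arm] else 0.0
print(counts)   # the best arm (index 2) receives most of the pulls
```

The exploration term shrinks as an arm is played more, so the budget concentrates on the best-performing component while still occasionally re-testing the others, which is exactly the adaptive behavior desired in the solver setting.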

Journal ArticleDOI
TL;DR: It is proved that the vector scheduling problem with rejection on a single machine is NP-hard and two approximation algorithms running in polynomial time are designed.
Abstract: In this paper, we study a vector scheduling problem with rejection on a single machine, in which each job is characterized by a d-dimension vector and a penalty, in the sense that, jobs can be either rejected by paying a certain penalty or assigned to the machine. The objective is to minimize the sum of the maximum load over all dimensions of the total vector of all accepted jobs, and the total penalty of rejected jobs. We prove that the problem is NP-hard and design two approximation algorithms running in polynomial time. When d is a fixed constant, we present a fully polynomial time approximation scheme.
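The objective can be clarified by brute force on a tiny made-up instance. This exponential enumeration is only to illustrate the problem statement (max load over dimensions of accepted jobs plus penalties of rejected jobs); it is not one of the paper's polynomial-time approximation algorithms.

```python
from itertools import product

def best_schedule(jobs, penalties):
    """Enumerate all accept/reject choices and return the minimum of
    max_k (sum of accepted vectors in dimension k) + total penalty
    of rejected jobs. Exponential in the number of jobs."""
    d = len(jobs[0])
    best = float("inf")
    for choice in product([0, 1], repeat=len(jobs)):
        load = [sum(job[k] for job, c in zip(jobs, choice) if c)
                for k in range(d)]
        penalty = sum(p for p, c in zip(penalties, choice) if not c)
        best = min(best, max(load, default=0) + penalty)
    return best

# Three 2-dimensional jobs with rejection penalties (illustrative data).
jobs = [(3, 1), (1, 4), (2, 2)]
penalties = [2, 3, 5]
print(best_schedule(jobs, penalties))   # 7: reject job 2 alone, or accept all
```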

Book ChapterDOI
TL;DR: The interdependencies of different energy aware scheduling approaches, and especially a dilemma between peak power minimization and demand charge reduction, are shown.
Abstract: Managing energy consumption more sustainably and efficiently has been gaining increasing importance in all industrial planning processes. Energy aware scheduling (EAS) can be seen as a part of that trend. Overall, EAS can be subdivided into three main approaches. In detail, the energy consumption can be reduced by specific planning, time-dependent electricity cost might be exploited or the peak power may be decreased. In contrast to the majority of EAS models these ideas are adopted simultaneously in the proposed new extensive MILP formulation. In order to affect peak load and energy consumption, variable discrete production rates as well as heterogeneous parallel machines with different levels of efficiency are considered. As a result, the interdependencies of different energy aware scheduling approaches and especially a dilemma between peak power minimization and demand charge reduction can be shown.

Journal ArticleDOI
TL;DR: This paper investigates the coordination problem of a supply chain composed of a manufacturer exhibiting corporate social responsibility (CSR) and a retailer faced with random demand and presents the concavity condition of EP related to the CSR effect factor in the case of uniformly distributed RDF and linear demand in price.
Abstract: This paper investigates the coordination problem of a supply chain (SC) composed of a manufacturer exhibiting corporate social responsibility (CSR) and a retailer faced with random demand. The random demand is made up of the multiplication of price-dependent demand and a random demand factor (RDF), plus the CSR-dependent demand. The centralized decision problem of the SC is an extension of the existing price setting newsvendor problem (PSNP). It is found that the sufficient condition for the quasi-concavity of expected profit (EP) on PSNP can not ensure the quasi-concavity of EP of the SC. Then, the concavity condition of EP related to the CSR effect factor is presented in the case of uniformly distributed RDF and linear demand in price, and the concavity of EP is proven under centralized decision. For decentralized decision under the manufacturer's Stackelberg game, the manufacturer determines the wholesale price and its CSR investment, and then the retailer decides the order quantity and the retail price. The standard revenue-sharing (RS) contract is found to be unable to coordinate the SC, so a modified RS (MRS) contract is proposed to coordinate the SC. Finally, numerical examples illustrate the validity of the theoretical analysis and the coordination effectiveness of the MRS contract via Matlab.
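The classical building block this model extends, the newsvendor critical fractile under uniformly distributed demand, can be sketched as follows. The numbers are illustrative; the paper adds price-dependent and CSR-dependent demand and the contracting layer on top of this.

```python
def newsvendor_quantity(price, cost, salvage, demand_lo, demand_hi):
    """Classical newsvendor with demand ~ Uniform(demand_lo, demand_hi):
    the optimal order quantity is the critical fractile
    (price - cost) / (price - salvage) = underage / (underage + overage)
    of the demand distribution."""
    fractile = (price - cost) / (price - salvage)
    return demand_lo + fractile * (demand_hi - demand_lo)

# p = 10, c = 4, s = 1, demand ~ U(50, 150): fractile = 6/9 = 2/3.
q = newsvendor_quantity(10, 4, 1, 50, 150)
print(q)   # 116.67 (approximately)
```

In the price-setting variant studied here, price is a decision variable too, which is what breaks the easy quasi-concavity of expected profit and motivates the paper's concavity conditions.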

Book ChapterDOI
TL;DR: Methods of tropical optimization are applied to handle problems of rating alternatives on the basis of the log-Chebyshev approximation of pairwise comparison matrices to derive a direct solution in a closed form.
Abstract: We apply methods of tropical optimization to handle problems of rating alternatives on the basis of the log-Chebyshev approximation of pairwise comparison matrices. We derive a direct solution in a closed form, and investigate the obtained solution when it is not unique. Provided the approximation problem yields a set of score vectors, rather than a unique (up to a constant factor) one, we find those vectors in the set, which least and most differentiate between the alternatives with the highest and lowest scores, and thus can be representative of the entire solution.

Journal ArticleDOI
TL;DR: Today's Internet economy has changed the information collection process and may make some of the assumptions of market rule implementation obsolete; this paper offers a fresh review of works on this challenge on the Internet, where new economic systems operate.
Abstract: Market makers choose and design market rules to serve certain objectives, such as to maximize revenue from the sales in the case of a single seller and multiple buyers. Given such rules, market participants play against each other to maximize their utility function values on goods acquired, possibly by hiding or misrepresenting their information needed in the implementation of market rules. Today’s Internet economy has changed the information collection process and may make some of the assumptions of market rule implementation obsolete. Here we make a fresh review of works on this challenge on the Internet where new economic systems operate.

Book ChapterDOI
TL;DR: Stochastic and robust mixed-integer programming formulations are developed to hedge against demand uncertainty and are evaluated based on adjusted real-world data sets in terms of runtime and solution quality.
Abstract: In the process industry markets are facing new challenges: while product life cycles are becoming shorter, the differentiation of products grows. This leads to varying and uncertain product demands in time and location. As a reaction, the research focus shifts to modular production, which allow for a more flexible production network. Using small-scale plants, production locations can be located in direct proximity to resources or customers. In response to short-term demand changes, capacity modifications can be made by shifting modular units between locations or numbering up. In order to benefit from the flexibility of modular production, the structure of the network requests dynamic adaptions in every period. Subsequently, once the customer demand realizes, an optimal match between disposed production capacities and customer orders has to be determined. This decision situation imposes new challenges on planning tools, since frequent adjustments of the network configuration have to be computed based on uncertain demand. We develop stochastic and robust mixed-integer programming formulations to hedge against demand uncertainty. In a computational study the novel formulations are evaluated based on adjusted real-world data sets in terms of runtime and solution quality.

Book ChapterDOI
TL;DR: The authors believe this paper presents the first implementation of Benders method to solve the USApHMP and UMApHMP, and a novel method of accelerating Benders method is applied.
Abstract: We consider the well-known uncapacitated p-hub median problem with multiple allocation (UMApHMP), and single allocation (USApHMP). These problems have received significant attention in the literature because while they are easy to state and understand, they are hard to solve. They also find practical applications in logistics and telecommunication network design. Due to the inherent complexity of these problems, we apply a modified Benders decomposition method to solve large instances of the UMApHMP and USApHMP. The Benders decomposition approach does, however, suffer from slow convergence, mainly due to the high degeneracy of subproblems. To resolve this, we apply a novel method of accelerating Benders method. We improve the performance of the accelerated Benders method by more appropriately choosing parameters for generating cuts, and by solving subproblems more efficiently using minimum cost network flow algorithms. We implement our approach on well-known benchmark data sets in the literature and compare our computational results for our implementations of existing methods and commercial solvers. The computational results confirm that our approach is efficient and enables us to solve larger single- and multiple-allocation hub median instances. We believe this paper is the first implementation of Benders method to solve the USApHMP and UMApHMP.

Book ChapterDOI
TL;DR: An integer programming model is proposed for this problem, a structural property of line plans in the static (or single period) “unimodal demand” case is derived, and approaches to the solution of the multi-period version that rely on clustering the demand into peak and off-peak service periods are considered.
Abstract: Bus rapid transit systems in developing and newly industrialized countries often consist of a trunk with a path topology. On this trunk, several overlapping lines are operated which provide direct connections. The demand varies heavily over the day, with morning and afternoon peaks typically in reverse directions. We propose an integer programming model for this problem, derive a structural property of line plans in the static (or single period) “unimodal demand” case, and consider approaches to the solution of the multi-period version that rely on clustering the demand into peak and off-peak service periods. An application to the Metrobus system of Istanbul is discussed.

Journal ArticleDOI
TL;DR: New necessary and sufficient conditions to carry out a compact linearization approach for a general class of binary quadratic problems subject to assignment constraints are introduced, and a polynomial-time combinatorial algorithm is given that is exact in a special case and can be used as a heuristic otherwise.
Abstract: We introduce and prove new necessary and sufficient conditions to carry out a compact linearization approach for a general class of binary quadratic problems subject to assignment constraints that has been proposed by Liberti (4OR 5(3):231–245, 2007, https://doi.org/10.1007/s10288-006-0015-3 ). The new conditions resolve inconsistencies that can occur when the original method is used. We also present a mixed-integer linear program to compute a minimally sized linearization. When all the assignment constraints have non-overlapping variable support, this program is shown to have a totally unimodular constraint matrix. Finally, we give a polynomial-time combinatorial algorithm that is exact in this case and can be used as a heuristic otherwise.
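The textbook linearization that compact approaches like Liberti's reduce in size replaces each product x_i * x_j of binary variables by a new variable y constrained by the McCormick inequalities; for binaries these force y to equal the product exactly. A small enumeration confirms this (an illustrative sketch of the standard building block, not the paper's compact construction).

```python
from itertools import product

def linearization_exact(n):
    """For binary x_i, x_j, check that the McCormick constraints
        y <= x_i,  y <= x_j,  y >= x_i + x_j - 1,  y >= 0
    pin y down to exactly x_i * x_j: the feasible interval
    [max(0, x_i + x_j - 1), min(x_i, x_j)] collapses to the product."""
    for x in product([0, 1], repeat=n):
        for i in range(n):
            for j in range(n):
                lo = max(0, x[i] + x[j] - 1)
                hi = min(x[i], x[j])
                if not (lo == hi == x[i] * x[j]):
                    return False
    return True

print(linearization_exact(3))   # True
```

The drawback of this standard scheme is one y variable per product; the compact linearization studied in the paper exploits the assignment constraints to get away with far fewer.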

Book ChapterDOI
TL;DR: The numerical behavior of an existing solver is discussed in order to determine whether the authors' intuitive understanding of this behavior is correct and whether predictions of it can be used to make better algorithmic choices.
Abstract: We investigate how the numerical properties of the LP relaxations evolve throughout the solution procedure in a solver employing the branch-and-cut algorithm. The long-term goal of this work is to determine whether the effect on the numerical conditioning of the LP relaxations resulting from the branching and cutting operations can be effectively predicted and whether such predictions can be used to make better algorithmic choices. In a first step towards this goal, we discuss here the numerical behavior of an existing solver in order to determine whether our intuitive understanding of this behavior is correct.

Book ChapterDOI
TL;DR: In this article, the authors focus on Robotic Mobile Fulfillment Systems in e-commerce distribution centers, which are designed to increase pick rates by employing mobile robots bringing movable storage units (so-called pods) to pick and replenishment stations as needed, and back to the storage area afterwards.
Abstract: In our work we focus on Robotic Mobile Fulfillment Systems in e-commerce distribution centers. These systems were designed to increase pick rates by employing mobile robots bringing movable storage units (so-called pods) to pick and replenishment stations as needed, and back to the storage area afterwards. One advantage of this approach is that repositioning of inventory can be done continuously, even during pick and replenishment operations. This is primarily accomplished by bringing a pod to a storage location different than the one it was fetched from, a process we call passive pod repositioning. Additionally, this can be done by explicitly bringing a pod from one storage location to another, a process we call active pod repositioning. In this work we introduce first mechanisms for the latter technique and conduct a simulation-based experiment to give first insights into their effect.

Book ChapterDOI
TL;DR: A mixed-integer linear program is discussed to describe the static load distribution in a two-dimensional space as a first starting point; the results indicate there is an opportunity to use optimization for the initial design within a defined assembly space.
Abstract: The combination of optimization tools of any kind and additive manufacturing should be able to improve lightweight construction. In this article we discuss a mixed-integer linear program to describe the static load distribution in a two-dimensional space as a first starting point. The results indicate there is an opportunity to use optimization for the initial design within a defined assembly space. A design example illustrates first results.

Journal ArticleDOI
TL;DR: It is proved that the set of all points in the payoff space of a normal form game with two players corresponding to the utilities of players in an efficient Nash equilibrium, the so-called nondominated Nash points, is finite.
Abstract: We study the connection between biobjective mixed integer linear programming and normal form games with two players. We first investigate computing Nash equilibria of normal form games with two players using single-objective mixed integer linear programming. Then, we define the concept of efficient (Pareto optimal) Nash equilibria. This concept is precisely equivalent to the concept of efficient solutions in multi-objective optimization, where the solutions are Nash equilibria. We prove that the set of all points in the payoff (or objective) space of a normal form game with two players corresponding to the utilities of players in an efficient Nash equilibrium, the so-called nondominated Nash points, is finite. We demonstrate that biobjective mixed integer linear programming, where the utility of each player is an objective function, can be used to compute the set of nondominated Nash points. Finally, we illustrate how the nondominated Nash points can be used to determine the disagreement point of a bargaining problem.
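A minimal sketch of the first step, finding equilibria of a normal form game with two players: for pure strategies, mutual best responses can be enumerated directly. The paper uses mixed-integer programming and covers mixed equilibria as well; the game below is the standard Battle of the Sexes, whose two equilibrium payoff points (3, 2) and (2, 3) are both nondominated in the sense of the abstract.

```python
import numpy as np

def pure_nash(A, B):
    """All pure-strategy Nash equilibria of the bimatrix game (A, B):
    cells (i, j) where row i is a best response to column j (in A)
    and column j is a best response to row i (in B)."""
    eq = []
    m, n = A.shape
    for i in range(m):
        for j in range(n):
            if A[i, j] == A[:, j].max() and B[i, j] == B[i, :].max():
                eq.append((i, j))
    return eq

# Battle of the Sexes payoffs (row player A, column player B).
A = np.array([[3, 0], [0, 2]])
B = np.array([[2, 0], [0, 3]])
print(pure_nash(A, B))   # [(0, 0), (1, 1)]
```

Comparing the resulting payoff points (3, 2) and (2, 3) componentwise shows neither dominates the other, which is the two-point nondominated Nash set a biobjective formulation would recover for this game.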

Book ChapterDOI
TL;DR: This paper gives deterministic online primal-dual algorithms, evaluated using the standard competitive analysis, in which an online algorithm is compared to the optimal offline algorithm that knows the entire sequence of demands in advance.
Abstract: Theoretical study of real-life leasing scenarios was initiated in 2005 with a simple leasing model defined as follows. Demands arrive with time and need to be served by leased resources. Different types of leases are available, each with a fixed duration and price, respecting economy of scale (longer leases cost less per unit time). An algorithm is to lease resources at minimum possible cost in order to serve each arriving demand, without knowing future demands. In this paper, we generalize this model and introduce the Lease-or-Decline and Lease-or-Delay leasing models. In the Lease-or-Decline model, not all demands need to be served, i.e., the algorithm may decline a demand as long as a penalty associated with it is paid. In the Lease-or-Delay model, each demand has a deadline and can be served any day before its deadline as long as a penalty is paid for each delayed day. The goal is to minimize the total cost of purchased leases and penalties paid. For each of these models we give a deterministic online primal-dual algorithm, evaluated using the standard competitive analysis, in which an online algorithm is compared to the optimal offline algorithm that knows the entire sequence of demands in advance.
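The simplest instance of such a leasing model, one resource with a one-day lease and a buy-forever option (ski rental), already shows the flavor of competitive analysis used here. The deterministic break-even strategy below is the textbook baseline, not the paper's primal-dual algorithms; prices are illustrative.

```python
def online_cost(days, B):
    """Break-even rule: rent (price 1/day) while total rent stays below
    the buy price B; buy on day B. Cost when skiing lasts `days` days."""
    return days if days < B else (B - 1) + B

def offline_cost(days, B):
    """Offline optimum, knowing `days` in advance: rent throughout
    or buy on day one, whichever is cheaper."""
    return min(days, B)

B = 10
ratios = [online_cost(d, B) / offline_cost(d, B) for d in range(1, 50)]
print(max(ratios))   # 1.9 = 2 - 1/B: the break-even rule is 2-competitive
```

The worst case occurs exactly when the season ends the day after buying; the online primal-dual algorithms in the paper generalize this break-even reasoning to multiple lease types, penalties, and deadlines.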

Book ChapterDOI
TL;DR: The modulo network simplex method, a well-established heuristic for the periodic timetabling problem, is extended by integrating a passenger (re)routing step into the pivot operations, and it is shown that timetables with much shorter total travel time can indeed be found when the passengers' travel paths are taken into consideration.
Abstract: Periodic timetabling is an important strategic planning problem in public transport. The task is to determine periodic arrival and departure times of the lines in a given network, minimizing the travel time of the passengers. We extend the modulo network simplex method (Nachtigall and Opitz, Solving periodic timetable optimisation problems by modulo simplex calculations 2008 [6]), a well-established heuristic for the periodic timetabling problem, by integrating a passenger (re)routing step into the pivot operations. Computations on real-world networks show that we can indeed find timetables with much shorter total travel time, when we take the passengers’ travel paths into consideration.