
Showing papers in "Optimization Letters in 2020"


Journal ArticleDOI
TL;DR: Decision dependent distributionally robust optimization models, where the ambiguity sets of probability distributions can depend on the decision variables, are studied; reformulations allow such problems to be solved using global optimization techniques within the framework of a cutting surface algorithm.
Abstract: We study decision dependent distributionally robust optimization models, where the ambiguity sets of probability distributions can depend on the decision variables. These models arise in situations with endogenous uncertainty. The developed framework includes two-stage decision dependent distributionally robust stochastic programming as a special case. Decision dependent generalizations of five types of ambiguity sets are considered. These sets are based on bounds on moments, covariance matrix, Wasserstein metric, Phi-divergence and Kolmogorov–Smirnov test. For the finite support case, we use linear, conic or Lagrangian duality to give reformulations of these models with a finite number of constraints. Reformulations are also given for the continuous support case for moment, covariance, Wasserstein and Kolmogorov–Smirnov based models. These reformulations allow solutions of such problems using global optimization techniques within the framework of a cutting surface algorithm. The importance of decision dependence in the ambiguity set is demonstrated with the help of a numerical example modeling simultaneous determination of order quantity and selling price for a newsvendor problem.

54 citations


Journal ArticleDOI
TL;DR: This work investigates the efficiency of subset selection regression techniques for developing surrogate functions that balance both accuracy and complexity and indicates that subset selection-based regression functions exhibit promising performance when the dimensionality is low, while interpolation performs better for higher dimensional problems.
Abstract: Optimization of simulation-based or data-driven systems is a challenging task, which has attracted significant attention in the recent literature. A very efficient approach for optimizing systems without analytical expressions is through fitting surrogate models. Due to their increased flexibility, nonlinear interpolating functions, such as radial basis functions and Kriging, have been predominantly used as surrogates for data-driven optimization; however, these methods lead to complex nonconvex formulations. Alternatively, commonly used regression-based surrogates lead to simpler formulations, but they are less flexible and inaccurate if the form is not known a priori. In this work, we investigate the efficiency of subset selection regression techniques for developing surrogate functions that balance both accuracy and complexity. Subset selection creates sparse regression models by selecting only a subset of original features, which are linearly combined to generate a diverse set of surrogate models. Five different subset selection techniques are compared with commonly used nonlinear interpolating surrogate functions with respect to optimization solution accuracy, computation time, sampling requirements, and model sparsity. Our results indicate that subset selection-based regression functions exhibit promising performance when the dimensionality is low, while interpolation performs better for higher dimensional problems.
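To make the subset-selection idea concrete (this is an illustrative sketch, not any of the paper's five specific techniques), a greedy forward-selection loop can be written in a few lines of pure Python; the toy orthogonal ±1 feature columns and the exact response y = 2·x0 − x2 are made up so that the expected selection is easy to verify:

```python
# Greedy forward subset selection for least-squares regression (pure Python).
# Toy data: four mutually orthogonal +/-1 feature columns and an exact
# response y = 2*x0 - x2, so forward selection should pick columns 0 and 2.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting for a small square system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def rss(cols, y, subset):
    # Residual sum of squares after a least-squares fit on the chosen columns,
    # solved via the normal equations.
    S = sorted(subset)
    G = [[dot(cols[i], cols[j]) for j in S] for i in S]
    beta = solve(G, [dot(cols[j], y) for j in S])
    resid = [yi - sum(b * cols[j][k] for b, j in zip(beta, S))
             for k, yi in enumerate(y)]
    return dot(resid, resid)

def forward_select(cols, y, k):
    # Add, one at a time, the feature that most reduces the residual.
    chosen = []
    for _ in range(k):
        best = min((j for j in range(len(cols)) if j not in chosen),
                   key=lambda j: rss(cols, y, chosen + [j]))
        chosen.append(best)
    return chosen

cols = [[1, -1, 1, -1, 1, -1, 1, -1],   # x0
        [1, 1, -1, -1, 1, 1, -1, -1],   # x1
        [1, 1, 1, 1, -1, -1, -1, -1],   # x2
        [1, -1, -1, 1, 1, -1, -1, 1]]   # x3
y = [2 * cols[0][i] - cols[2][i] for i in range(8)]

selected = forward_select(cols, y, 2)
print(sorted(selected), rss(cols, y, selected))  # [0, 2] with zero residual
```

The sparse linear model fitted on the selected columns is exactly the kind of simple, convex surrogate the abstract contrasts with Kriging or radial basis interpolants.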

53 citations


Journal ArticleDOI
TL;DR: In this article, two new algorithms are introduced for solving a pseudomonotone variational inequality problem with a Lipschitz condition in a Hilbert space; they are constructed around three methods: the subgradient extragradient method, the inertial method and the viscosity method.
Abstract: In this paper, two new algorithms are introduced for solving a pseudomonotone variational inequality problem with a Lipschitz condition in a Hilbert space. The algorithms are constructed around three methods: the subgradient extragradient method, the inertial method and the viscosity method. With a new stepsize rule incorporated, the algorithms work without any knowledge of the Lipschitz constant of the operator. The weak convergence of the first algorithm is established, while the second, which comes from the viscosity method, is strongly convergent. In order to show the computational effectiveness of our algorithms, some numerical results are provided.
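The subgradient extragradient scheme builds on the classical extragradient iteration: a prediction step followed by a correction step, each a projected move. A minimal sketch of that basic iteration (not the paper's inertial/viscosity variants) on the monotone rotation operator F(x) = (x₂, −x₁), whose unique solution over a box containing the origin is x* = 0; the step size and box are arbitrary illustrative choices:

```python
# Classical extragradient method for a variational inequality VI(F, C):
# find x* in C with <F(x*), x - x*> >= 0 for all x in C.
# Toy operator: F(x) = (x2, -x1) is monotone (skew) with Lipschitz constant 1;
# plain projected gradient spirals on it, but the extra prediction step
# contracts toward the solution (0, 0).

def F(x):
    return (x[1], -x[0])

def project_box(x, lo=-2.0, hi=2.0):
    # Euclidean projection onto the box C = [lo, hi]^2.
    return tuple(min(max(v, lo), hi) for v in x)

def extragradient(x0, tau=0.5, iters=200):
    # Requires tau < 1/L for Lipschitz constant L (here L = 1).
    x = x0
    for _ in range(iters):
        fx = F(x)
        y = project_box((x[0] - tau * fx[0], x[1] - tau * fx[1]))  # prediction
        fy = F(y)
        x = project_box((x[0] - tau * fy[0], x[1] - tau * fy[1]))  # correction
    return x

x = extragradient((1.0, 0.0))
print(x)  # close to the solution (0, 0)
```

On this skew operator one iteration shrinks the norm by the factor √(1 − τ² + τ⁴) ≈ 0.901 for τ = 0.5, which is why the plain method diverges but the extragradient converges.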

44 citations


Journal ArticleDOI
TL;DR: A new algorithm for solving variational inequality problems with monotone and Lipschitz-continuous mappings in real Hilbert spaces is introduced, with a strong convergence theorem proved under certain mild assumptions.
Abstract: In this paper, we introduce a new algorithm for solving variational inequality problems with monotone and Lipschitz-continuous mappings in real Hilbert spaces. Our algorithm requires computing only one projection onto the feasible set per iteration. We prove, under certain mild assumptions, a strong convergence theorem for the proposed algorithm to a solution of a variational inequality problem. Finally, we give some numerical experiments illustrating the performance of the proposed algorithm for variational inequality problems.

40 citations


Journal ArticleDOI
TL;DR: This work formalizes this new problem variant by means of a mathematical formulation and proposes a matheuristic approach (POPMUSIC) for solving it.
Abstract: The Multi-Depot Cumulative Capacitated Vehicle Routing Problem is a variation of the recently proposed Capacitated Cumulative Vehicle Routing Problem in which several depots can serve as starting points of routes. Its objective is to minimize the sum of arrival times at customers for providing service. Practical settings require addressing the delivery to customers from multiple depots, where the service quality level depends on the customer waiting time and the delivering vehicles may depart from different points. Those scenarios require theoretical models to support the decision-making process as well as to measure the quality of the solutions provided by approximate approaches. In the present work, we formalize this new problem variant by means of a mathematical formulation and propose a matheuristic approach (POPMUSIC) for solving it.

30 citations


Journal ArticleDOI
TL;DR: This work studies the split feasibility problem with multiple output sets in Hilbert spaces and proposes two new algorithms, establishing a weak convergence theorem for the first and a strong convergence theorem for the second.
Abstract: We study the split feasibility problem with multiple output sets in Hilbert spaces. In order to solve this problem, we propose two new algorithms. We establish a weak convergence theorem for the first one and a strong convergence theorem for the second.

30 citations


Journal ArticleDOI
TL;DR: It is shown that the singular value condition σ max (B) < σ min (A) leads to the unique solvability of the absolute value equation.
Abstract: In this note, we show that the singular value condition $$\sigma _{\max }(B) < \sigma _{\min }(A)$$ leads to the unique solvability of the absolute value equation $$Ax + B|x| = b$$ for any b. This result is superior to those that appeared in previously published works by Rohn (Optim Lett 3:603–606, 2009).
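The condition σ_max(B) < σ_min(A) makes the mapping x ↦ A⁻¹(b − B|x|) a contraction (‖A⁻¹B‖ < 1), so a plain Picard iteration converges to the unique solution. A minimal sketch with arbitrarily chosen diagonal data A = 3I and B = 0.5I, for which the condition clearly holds:

```python
# Picard iteration for the absolute value equation A x + B |x| = b.
# When sigma_max(B) < sigma_min(A), the map x -> A^{-1}(b - B|x|) is a
# contraction, so its fixed point is the unique solution.
# Toy choice: A = 3I, B = 0.5I (contraction factor 1/6), b = (3, -3).

a, bcoef = 3.0, 0.5            # A = a*I, B = bcoef*I
rhs = [3.0, -3.0]              # right-hand side b

x = [0.0, 0.0]
for _ in range(100):
    x = [(rhs[i] - bcoef * abs(x[i])) / a for i in range(2)]

residual = [a * x[i] + bcoef * abs(x[i]) - rhs[i] for i in range(2)]
print(x, residual)  # x near (6/7, -1.2), residual near zero
```

By hand: the first component solves 3x + 0.5x = 3 (so x = 6/7), and the second solves 3x − 0.5x = −3 since x < 0 there (so x = −1.2), matching the iteration's fixed point.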

27 citations


Journal ArticleDOI
TL;DR: A definition of chirality based on group theory is presented; it is shown to be equivalent to the usual one in the case of Euclidean spaces, and it permits defining chirality in metric spaces that are not Euclidean.
Abstract: A definition of chirality based on group theory is presented. It is shown to be equivalent to the usual one in the case of Euclidean spaces, and it permits defining chirality in metric spaces which are not Euclidean.

23 citations


Journal ArticleDOI
TL;DR: This paper first establishes necessary and sufficient optimality conditions for robust approximate optimal solutions of an uncertain convex optimization problem, then introduces a Wolfe-type robust approximate dual problem and investigates robust approximate duality relations between them.
Abstract: This paper provides some new results on robust approximate optimal solutions for convex optimization problems with data uncertainty. By using robust optimization approach (worst-case approach), we first establish necessary and sufficient optimality conditions for robust approximate optimal solutions of this uncertain convex optimization problem. Then, we introduce a Wolfe-type robust approximate dual problem and investigate robust approximate duality relations between them. Moreover, we obtain some robust approximate saddle point theorems for this uncertain convex optimization problem. We also show that our results encompass as special cases some optimization problems considered in the recent literature.

22 citations


Journal ArticleDOI
Jun Yang1, Hongwei Liu1
TL;DR: A new algorithm is introduced for solving equilibrium problems involving Lipschitz-type and pseudomonotone bifunctions in real Hilbert space, and it is proved that the iterative sequence generated by the algorithm converges strongly to a common solution of an equilibrium problem and a fixed point problem without knowledge of the Lipschitz-type constants of the bifunction.
Abstract: In this paper, we first introduce and analyze a new algorithm for solving equilibrium problems involving Lipschitz-type and pseudomonotone bifunctions in real Hilbert space. The algorithm uses a new step size, and we prove that the iterative sequence it generates converges strongly to a common solution of an equilibrium problem and a fixed point problem without knowledge of the Lipschitz-type constants of the bifunction. Finally, another similar algorithm is proposed and numerical experiments are reported to illustrate the efficiency of the proposed algorithms.

22 citations


Journal ArticleDOI
TL;DR: This article exploits the effective lower dimensionality with axis-aligned projections, optimizes on a partitioning of the input space, and overcomes issues of GP hyper-parameter learning in the presence of inconsistencies.
Abstract: Key challenges of Bayesian optimization in high dimensions are both learning the response surface and optimizing an acquisition function. The acquisition function selects a new point to evaluate the black-box function. Both challenges can be addressed by making simplifying assumptions, such as additivity or intrinsic lower dimensionality of the expensive objective. In this article, we exploit the effective lower dimensionality with axis-aligned projections and optimize on a partitioning of the input space. Axis-aligned projections introduce a multiplicity of outputs for a single input that we refer to as inconsistency. We model inconsistencies with a Gaussian process (GP) derived from quantile regression. We show that the quantile GP and the partitioning of the input space increases data-efficiency. In particular, by modeling only a quantile function, we overcome issues of GP hyper-parameter learning in the presence of inconsistencies.

Journal ArticleDOI
TL;DR: In this paper, the authors present a composition rule involving quasiconvex functions that generalizes the classical composition rule for convex functions, which complements well-known rules for the curvature of quasicovex function under increasing functions and pointwise maximums.
Abstract: We present a composition rule involving quasiconvex functions that generalizes the classical composition rule for convex functions. This rule complements well-known rules for the curvature of quasiconvex functions under increasing functions and pointwise maximums. We refer to the class of optimization problems generated by these rules, along with a base set of quasiconvex and quasiconcave functions, as disciplined quasiconvex programs. Disciplined quasiconvex programming generalizes disciplined convex programming, the class of optimization problems targeted by most modern domain-specific languages for convex optimization. We describe an implementation of disciplined quasiconvex programming that makes it possible to specify and solve quasiconvex programs in CVXPY 1.0.
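Under the hood, a DQCP solver reduces quasiconvex minimization to a bisection over convex sublevel-set feasibility problems. A hand-rolled one-dimensional sketch of that principle (the linear-fractional objective and interval are arbitrary illustrations, not taken from the paper or from CVXPY's implementation):

```python
# Quasiconvex minimization by bisection on t: for each fixed t, the
# sublevel condition f(x) <= t is a convex feasibility problem.
# Toy objective: the linear-fractional f(x) = (x + 2) / (x + 1) on [0, 3],
# which is quasilinear with minimum value 5/4 attained at x = 3.

def feasible(t, lo=0.0, hi=3.0):
    # f(x) <= t  <=>  (1 - t)*x + (2 - t) <= 0  (the denominator x + 1 is
    # positive on [0, 3]).  The left side is linear in x, so it suffices to
    # check its minimum over the interval endpoints.
    g = lambda x: (1 - t) * x + (2 - t)
    return min(g(lo), g(hi)) <= 1e-12

lo, hi = 0.0, 2.0              # bracket on the optimal value: f in [lo, hi]
for _ in range(60):
    mid = (lo + hi) / 2
    if feasible(mid):
        hi = mid               # some x with f(x) <= mid exists: shrink down
    else:
        lo = mid               # no such x: the optimum is above mid
print(hi)  # approx 1.25
```

Each bisection step solves one convex feasibility problem, which is exactly how quasiconvex programs are dispatched to convex solvers.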

Journal ArticleDOI
TL;DR: This work proposes a novel method for the derivation of generalized affine decision rules for linear mixed-integer ARO problems through multi-parametric programming, that lead to the exact and global solution of the ARO problem.
Abstract: Adjustable robust optimization (ARO) involves recourse decisions (i.e. reactive actions after the realization of the uncertainty, ‘wait-and-see’) as functions of the uncertainty, typically posed in a two-stage stochastic setting. Solving the general ARO problems is challenging, therefore ways to reduce the computational effort have been proposed, with the most popular being the affine decision rules, where ‘wait-and-see’ decisions are approximated as affine adjustments of the uncertainty. In this work we propose a novel method for the derivation of generalized affine decision rules for linear mixed-integer ARO problems through multi-parametric programming, that lead to the exact and global solution of the ARO problem. The problem is treated as a multi-level programming problem and it is then solved using a novel algorithm for the exact and global solution of multi-level mixed-integer linear programming problems. The main idea behind the proposed approach is to solve the lower optimization level of the ARO problem parametrically, by considering ‘here-and-now’ variables and uncertainties as parameters. This will result in a set of affine decision rules for the ‘wait-and-see’ variables as a function of ‘here-and-now’ variables and uncertainties for their entire feasible space. A set of illustrative numerical examples are provided to demonstrate the potential of the proposed novel approach.

Journal ArticleDOI
TL;DR: In this algorithm, a novel linear relaxation technique is presented for deriving the linear relaxation programming of problem LMP, which has separable characteristics and can be used to acquire the upper bound of the optimal value to problem LMP.
Abstract: This article presents a rectangular branch-and-bound algorithm with standard bisection rule for solving linear multiplicative problem (LMP). In this algorithm, a novel linear relaxation technique is presented for deriving the linear relaxation programming of problem LMP, which has separable characteristics and can be used to acquire the upper bound of the optimal value to problem LMP. Thus, to obtain a global optimal solution for problem LMP, the main computational work of the algorithm involves the solutions of a sequence of linear programming problems. Moreover, the proof of its convergence property and the numerical result show the feasibility and efficiency of the presented algorithm.

Journal ArticleDOI
TL;DR: Several inertial-like algorithms for solving equilibrium problems (EP) in real Hilbert spaces are introduced, constructed using the resolvent of the bifunction associated with the EP and combining the inertial and Mann-type techniques.
Abstract: In this paper, we introduce several inertial-like algorithms for solving equilibrium problems (EP) in real Hilbert spaces. The algorithms are constructed using the resolvent of the bifunction associated with the EP and combine the inertial and the Mann-type techniques. Under mild and standard conditions imposed on the cost bifunction and control parameters, strong convergence of the algorithms is established. We present several numerical examples to illustrate the behavior of our schemes and emphasize their convergence advantages compared with some related methods.

Journal ArticleDOI
TL;DR: A linear mathematical model for the resource-constrained project scheduling problem with resource transfer time is presented and a branch-and-bound embedded genetic algorithm with a new precedence-based coding method which adapts to the structure of the problem is proposed.
Abstract: Motivated by the resource transfer time between different stations in the aircraft moving assembly line, this study addresses the resource-constrained project scheduling problem with resource transfer time, which aims at minimizing the makespan of the project while respecting precedence relations and resource constraints. We assume that the resource transfer time is known and deterministic in advance. The resource transfer time and the precedence of activities are coupled with each other, which means that the transfer time of resource changes according to the precedence of activities, while the transfer time affects the decision of the precedence of activities at the same time. We present a linear mathematical model for the problem and propose a branch-and-bound embedded genetic algorithm with a new precedence-based coding method which adapts to the structure of the problem. A series of experimental tests reveal that the branch-and-bound embedded genetic algorithm outperforms the existing algorithm proposed in the literature in finding high quality solutions.

Journal ArticleDOI
TL;DR: In this model the polyhedral k-norm, intermediate between the L-1 and L-∞ norms, plays a significant role, allowing us to arrive at a DC (Difference of Convex) optimization problem that is tackled by means of the DCA algorithm.
Abstract: We treat the Feature Selection problem in the Support Vector Machine (SVM) framework by adopting an optimization model based on the L-0 pseudo-norm. The objective is to control the number of non-zero components of the normal vector to the separating hyperplane, while maintaining satisfactory classification accuracy. In our model the polyhedral k-norm, intermediate between the L-1 and L-∞ norms, plays a significant role, allowing us to arrive at a DC (Difference of Convex) optimization problem that is tackled by means of the DCA algorithm. The results of several numerical experiments on benchmark classification datasets are reported.

Journal ArticleDOI
TL;DR: A fractional differential equation (FDE) based algorithm for convex optimization is presented in this paper, which generalizes the ordinary differential equation based algorithm by providing an additional tunable parameter α ∈ (0, 1].
Abstract: A fractional differential equation (FDE) based algorithm for convex optimization is presented in this paper, which generalizes ordinary differential equation (ODE) based algorithm by providing an additional tunable parameter $$\alpha \in (0,1]$$. The convergence of the algorithm is analyzed. For the strongly convex case, the algorithm achieves at least the Mittag-Leffler convergence, while for the general case, the algorithm achieves at least an $$O(1/t^\alpha )$$ convergence rate. Numerical simulations show that the FDE based algorithm may have faster or slower convergence speed than the ODE based one, depending on specific problems.

Journal ArticleDOI
TL;DR: A hybrid algorithm combining heuristic with dynamic programming algorithm (H-DP) to obtain satisfactory solutions within reasonable time for the bounded parallel-batching scheduling problem considering job rejection, deteriorating jobs, setup time, and non-identical job sizes is proposed.
Abstract: This paper studies the bounded parallel-batching scheduling problem considering job rejection, deteriorating jobs, setup time, and non-identical job sizes. Each job will be either rejected with a certain penalty cost, or accepted and further processed in batches on a single machine. There is a setup time before processing each batch, and the objective is to minimize the sum of the makespan and the total penalty. Several useful preliminaries for arranging accepted jobs with identical sizes are proposed. Based on these preliminaries, we first investigate a special case where all the jobs are considered to have identical sizes, and develop a dynamic programming algorithm to solve it. The preliminaries help to reduce the complexity of the dynamic programming algorithm from $$O\left( n!\,n^{2} \sum \nolimits _{j = 1}^{n} w_{j} \right)$$ to $$O\left( n^{2} \sum \nolimits _{j = 1}^{n} w_{j} \right)$$. For the general problem with non-identical job sizes, we propose a hybrid algorithm combining heuristic with dynamic programming algorithm (H-DP) to obtain satisfactory solutions within reasonable time. Finally, the effectiveness and efficiency of the H-DP algorithm are illustrated by a series of computational experiments.

Journal ArticleDOI
TL;DR: A mixed-integer linear programming model is presented along with reformulations, decomposition approaches, and approximation strategies to improve tractability and show how large instances can be solved in real time to within 1% of the true optimal solution.
Abstract: Due to time-varying utility prices, peak demand charges, and variable-efficiency equipment, optimal operation of heating ventilation, and air conditioning systems in campuses or large buildings is nontrivial. Given forecasts of ambient conditions and utility prices, system energy requirements can be reduced by optimizing heating/cooling load within buildings and then choosing the best combination of large chillers, boilers, etc., to meet that load while accounting for switching constraints and equipment performance. With the presence of energy storage, utility costs can be further reduced by temporally shifting production, which adds an additional layer of complexity. Furthermore, due to changes in market and weather conditions, it is necessary to revise a given schedule regularly as updated information is received, which means the problem must be tractable in real time (e.g., solvable within 15 min). In this paper, we present a mixed-integer linear programming model for this problem along with reformulations, decomposition approaches, and approximation strategies to improve tractability. Simulations are presented to illustrate the effectiveness of these methods. By removing symmetry from identical equipment, decomposing the problem into subproblems, and approximating longer-timescale behavior, large instances can be solved in real time to within 1% of the true optimal solution.

Journal ArticleDOI
TL;DR: The present paper addresses the issue of defining a reliable algorithm for the case of p = 3, proposing a specific algorithm and reporting numerical experiments.
Abstract: In a recent paper (Birgin et al. in Math Program 163(1):359–368, 2017), it was shown that, for the smooth unconstrained optimization problem, worst-case evaluation complexity $$O(\epsilon ^{-(p+1)/p})$$ may be obtained by means of algorithms that employ sequential approximate minimizations of p-th order Taylor models plus $$(p+1)$$-th order regularization terms. The aforementioned result, which assumes Lipschitz continuity of the p-th partial derivatives, generalizes the case $$p=2$$, known since 2006, which has already motivated efficient implementations. The present paper addresses the issue of defining a reliable algorithm for the case $$p=3$$. With that purpose, we propose a specific algorithm and we show numerical experiments.

Journal ArticleDOI
TL;DR: This work studies an approximation algorithm for maximizing a non-decreasing set function under a d-knapsack constraint based on the diminishing-return ratio for set functions, and provides an effective method for maximizing set functions whether or not they are submodular.
Abstract: Maximizing constrained submodular functions lies at the core of substantial machine learning and data mining. In particular, the case where the data arrive in a streaming fashion has received increasing attention in recent decades. In this work, we study an approximation algorithm for maximizing a non-decreasing set function under a d-knapsack constraint. Based on the diminishing-return ratio for set functions, a non-trivial algorithm is devised for maximizing the set function without submodularity. Our results cover some known results and provide an effective method for maximizing set functions whether or not they are submodular. We also apply the algorithm to the problem of support selection for sparse linear regression. Numerical results show that the output quality of the algorithm is good.
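The flavor of such algorithms can be seen in a cost-benefit greedy for a single-knapsack coverage instance — a simplified, offline relative of the streaming d-knapsack method studied in the paper; the items, costs, and budget below are made up:

```python
# Cost-benefit greedy for monotone set-function maximization under one
# knapsack (budget) constraint.  The objective f(S) counts the distinct
# elements covered by the chosen items; items and budget are toy data.

items = {                      # item -> (covered elements, cost)
    "A": ({1, 2, 3}, 2),
    "B": ({3, 4}, 1),
    "C": ({5}, 1),
    "D": ({1, 2, 3, 4, 5}, 4),
}
budget = 3

chosen, covered, cost = [], set(), 0
while True:
    best, best_ratio = None, 0.0
    for name, (elems, c) in items.items():
        if name in chosen or cost + c > budget:
            continue
        gain = len(elems - covered)          # marginal gain of adding name
        if gain / c > best_ratio:            # gain per unit cost
            best, best_ratio = name, gain / c
    if best is None:
        break
    chosen.append(best)
    covered |= items[best][0]
    cost += items[best][1]

print(chosen, len(covered), cost)  # picks B then A: 4 elements at cost 3
```

Note that a pure cost-benefit greedy needs a partial-enumeration safeguard to obtain a constant-factor guarantee even in the submodular case; the sketch only illustrates the marginal-gain-per-cost rule.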

Journal ArticleDOI
TL;DR: The equivalence between nonemptiness of the solution set of the inclusion problem and the coercivity condition is discussed, and the boundedness of the solution set of the inclusion problem is studied.
Abstract: In this paper, we consider the inclusion problems for maximal monotone set-valued vector fields defined on Hadamard manifolds. We discuss the equivalence between nonemptiness of solution set of the inclusion problem and the coercivity condition. The boundedness of solution set of the inclusion problem is studied. An application of our results to optimization problems in Hadamard manifolds is also presented.

Journal ArticleDOI
TL;DR: Convergence of a single time-scale stochastic subgradient method with subgradient averaging for constrained problems with a nonsmooth and nonconvex objective function having the property of generalized differentiability is proved.
Abstract: We prove convergence of a single time-scale stochastic subgradient method with subgradient averaging for constrained problems with a nonsmooth and nonconvex objective function having the property of generalized differentiability. As a tool of our analysis, we also prove a chain rule on a path for such functions.

Journal ArticleDOI
TL;DR: A preprocessing approach for fourth degree pseudo-Boolean polynomials based on an exact integer programming model that minimizes the number of auxiliary variables and penalty magnitude is presented.
Abstract: Pseudo-Boolean functions (PBF) are closed algebraic representations of set functions that are closely related to nonlinear binary optimizations and have numerous applications. Optimizing PBFs of degree two (quadratic) is NP-hard, and third and fourth degree functions are increasingly difficult to solve. However, the higher degree terms can be reformulated to a lower degree by adding variables and corresponding penalty constraints. These additional constraints can then be transformed to the objective function via penalties to create Quadratic Unconstrained Binary Optimization problems for which there are many solution techniques, such as tabu search and quantum annealing. Shortcomings of reformulation are the possibility of large numbers of auxiliary variables and constraints along with large penalty terms. In this paper, we address these shortcomings by presenting a preprocessing approach for fourth degree pseudo-Boolean polynomials based on an exact integer programming model that minimizes the number of auxiliary variables and the penalty magnitude. Experimental results compare worst case, naive, greedy and minimal substitution methods and illustrate the efficacy of minimizing substitutions and penalty magnitude.
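The degree-reduction step being optimized here can be illustrated with the classical Rosenberg substitution: replace a product x₁x₂ by a new binary variable y and add the penalty x₁x₂ − 2x₁y − 2x₂y + 3y, which is zero exactly when y = x₁x₂ and at least one otherwise. A brute-force check on a made-up cubic confirms the minima agree:

```python
# Rosenberg quadratization: substitute y for the product x1*x2 and add
# P = x1*x2 - 2*x1*y - 2*x2*y + 3*y, which is 0 iff y = x1*x2 and >= 1
# otherwise.  Brute force over all binary assignments confirms the
# quadratic reformulation has the same minimum as the (made-up) cubic
# f(x) = -2*x1*x2*x3 + x1.

from itertools import product

def f(x1, x2, x3):
    return -2 * x1 * x2 * x3 + x1

def penalty(x1, x2, y):
    return x1 * x2 - 2 * x1 * y - 2 * x2 * y + 3 * y

M = 3  # penalty weight; must exceed the gain from violating y = x1*x2 (here 2)

def g(x1, x2, x3, y):
    # Quadratic in the binaries: the cubic term -2*x1*x2*x3 became -2*y*x3.
    return -2 * y * x3 + x1 + M * penalty(x1, x2, y)

min_f = min(f(*x) for x in product((0, 1), repeat=3))
min_g = min(g(*x) for x in product((0, 1), repeat=4))
print(min_f, min_g)  # both -1, attained at x = (1, 1, 1) with y = 1
```

The paper's contribution is choosing which substitutions to make (and how large M must be) optimally, rather than the substitution identity itself.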

Journal ArticleDOI
TL;DR: The presented mathematical model covers the aforementioned objectives by focusing on energy losses and costs of flights under the scenario of a controlled free flight and a unified airspace, minimizing the energy costs under certain assumptions and constraints.
Abstract: The growth in demand for air transport has generated new challenges for capacity and safety. In response, manufacturers develop new types of aircraft while airlines open new routes and adapt their fleet. This excessive demand for air transport also leads to the need for further investments in airport expansion and ATM modernization. The current work was focused on the ATM problem with respect to new procedures, such as free flight, for addressing the air capacity issues in an environmental approach. The study was triggered by and aligned with the following performance objectives set by EUROCONTROL and the European Commission: (1) to improve ATM safety whilst accommodating air traffic growth; (2) to increase the ATM network efficiency; (3) to strengthen ATM’s contribution to aviation security and to environmental objectives; (4) to match capacity and air transport growth. The proposed mathematical model covers the aforementioned objectives by focusing on energy losses and costs of flights under the scenario of a controlled free flight and a unified airspace. The factors enhanced in the model were chosen based on their impact on the ATM energy efficiency, such as the airborne delays and flight duration, the delays due to ground holding, the flight cancellation, the flight speed deviations and the flight level alterations. Therefore, the presented mathematical model minimizes the energy costs due to the above terms under certain assumptions and constraints. Finally, simulation case studies, used as proof tests, have been conducted under different ATM scenarios to examine the complexity and the efficiency of the developed model.

Journal ArticleDOI
TL;DR: By constructing explicitly minimal resolving sets for the folded n-cube, upper bounds on the metric dimension of this graph are obtained.
Abstract: A subset S of vertices in a graph G is called a resolving set for G if for any two distinct vertices $$u, v\in V$$, there exists a vertex x from S such that $$d(u, x) \ne d(v, x)$$. The metric dimension of G is the minimum cardinality of a resolving set of G. A minimal resolving set is a resolving set which has no proper subsets that are resolving sets. Let $$\Box _{n}$$ denote the folded n-cube. In this paper, we consider the metric dimension of $$\Box _{n}$$. By constructing explicitly minimal resolving sets for $$\Box _{n}$$, we obtain upper bounds on the metric dimension of this graph.
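Whether a given set resolves a graph can be verified directly: compute each vertex's vector of distances to S by breadth-first search and check that the vectors are pairwise distinct. A small sketch on the 5-cycle (a stand-in example, not the folded n-cube itself), whose metric dimension is 2:

```python
# Check whether S is a resolving set: every vertex must receive a distinct
# vector of distances to the vertices of S.  Toy graph: the 5-cycle C5,
# whose metric dimension is 2; adjacency is given as a dict.
from collections import deque

C5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}

def distances(graph, src):
    # Breadth-first search distances from src (unweighted graph).
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def resolves(graph, S):
    d = {s: distances(graph, s) for s in S}
    vectors = [tuple(d[s][v] for s in S) for v in graph]
    return len(set(vectors)) == len(vectors)

print(resolves(C5, [0]))     # False: vertices 1 and 4 both lie at distance 1
print(resolves(C5, [0, 1]))  # True: {0, 1} is a resolving set for C5
```

The paper's upper bounds come from exhibiting explicit sets of this kind for the folded n-cube; a checker like this confirms candidates on small instances.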

Journal ArticleDOI
TL;DR: This work proposes an application-based characterization of dynDGP instances, where the main criteria are the presence or absence of a skeletal structure, and the rigidity of such a skeletal structure.
Abstract: The dynamical Distance Geometry Problem (dynDGP) is the problem of finding a realization in a Euclidean space of a weighted undirected graph G representing an animation by relative distances, so that the distances between realized vertices are as close as possible to the edge weights. In the dynDGP, the vertex set of the graph G is the set product of V, representing certain objects, and T, representing time as a sequence of discrete steps. We suppose moreover that distance information is given together with the priority of every distance value. The dynDGP is a special class of the DGP where the dynamics of the problem comes to play an important role. In this work, we propose an application-based characterization of dynDGP instances, where the main criteria are the presence or absence of a skeletal structure, and the rigidity of such a skeletal structure. Examples of considered applications include: multi-robot coordination, crowd simulations, and human motion retargeting.

Journal ArticleDOI
TL;DR: A Variable Neighborhood Search algorithm is used to schedule product manufacturing and handling tasks with the aim of minimizing the maximum completion time of a job set, and an improved lower bound with a new calculation method is presented.
Abstract: In job-shop manufacturing systems, an efficient production schedule acts to reduce unnecessary costs and better manage resources. For the same purposes, modern manufacturing cells, in compliance with Industry 4.0 concepts, use material handling systems in order to allow more control over the transport tasks. In this paper, a job-shop scheduling problem in a vehicle-based manufacturing facility, mainly related to job assignment to resources, is addressed. The considered job-shop production cell has two types of resources: processing resources that accomplish fabrication tasks for specific products, and transporting resources that assure parts' transport to the processing area. A Variable Neighborhood Search algorithm is used to schedule product manufacturing and handling tasks with the aim of minimizing the maximum completion time of a job set, and an improved lower bound with a new calculation method is presented. Experimental tests are conducted to evaluate the efficiency of the proposed approach.

Journal ArticleDOI
TL;DR: In this paper, a flexible approach for computing the resolvent of the sum of weakly monotone operators in real Hilbert spaces is proposed, which relies on splitting methods where strong convergence is guaranteed.
Abstract: We propose a flexible approach for computing the resolvent of the sum of weakly monotone operators in real Hilbert spaces. This relies on splitting methods where strong convergence is guaranteed. We also prove linear convergence under a Lipschitz continuity assumption. The approach is then applied to computing the proximity operator of the sum of weakly convex functions, and particularly to finding the best approximation to the intersection of convex sets.