
Showing papers in "Journal of Optimization Theory and Applications in 2019"


Journal ArticleDOI
TL;DR: A method for deterministic global optimization of problems with artificial neural networks embedded, based on McCormick relaxations in a reduced space (Mitsos et al. in SIAM J Optim 20(2):573–601, 2009) employing the convex and concave envelopes of the nonlinear activation function.
Abstract: Artificial neural networks are used in various applications for data-driven black-box modeling and subsequent optimization. Herein, we present an efficient method for deterministic global optimization of optimization problems with artificial neural networks embedded. The proposed method is based on relaxations of algorithms using McCormick relaxations in a reduced space (Mitsos et al. in SIAM J Optim 20(2):573–601, 2009) employing the convex and concave envelopes of the nonlinear activation function. The optimization problem is solved using our in-house deterministic global solver. The performance of the proposed method is shown in four optimization examples: an illustrative function, a fermentation process, a compressor plant and a chemical process. The results show that computational solution time is favorable compared to a state-of-the-art global general-purpose optimization solver.
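The relaxations named in the abstract build on the classical McCormick envelopes. As a minimal illustrative sketch (not the paper's implementation), the convex underestimator and concave overestimator of a bilinear term w = x*y on a box can be written as:

```python
def mccormick(x, y, xL, xU, yL, yU):
    """McCormick convex under- and concave overestimators of w = x*y
    on the box [xL, xU] x [yL, yU] (illustrative sketch)."""
    under = max(xL * y + x * yL - xL * yL,
                xU * y + x * yU - xU * yU)
    over = min(xU * y + x * yL - xU * yL,
               xL * y + x * yU - xL * yU)
    return under, over

# The envelopes sandwich the true product and are tight at the box corners.
lo, hi = mccormick(0.5, 0.5, 0.0, 1.0, 0.0, 1.0)  # sandwiches 0.25
```

The paper propagates such relaxations through the network's activation functions; this sketch only shows the bilinear building block.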

128 citations


Journal ArticleDOI
TL;DR: The theoretical developments for the tensor complementarity problem and related models, including the nonemptiness and compactness of the solution set, global uniqueness and solvability, error bound theory, stability and continuity analysis, and so on are described.
Abstract: Tensors (hypermatrices) are multidimensional analogs of matrices. The tensor complementarity problem is a class of nonlinear complementarity problems with the involved function being defined by a tensor, which is also a direct and natural extension of the linear complementarity problem. In the last few years, the tensor complementarity problem has attracted a lot of attention, and has been studied extensively, from theory to solution methods and applications. This work, with its three parts, aims to review the state of the art of studies of the tensor complementarity problem and related models. In this part, we describe the theoretical developments for the tensor complementarity problem and related models, including the nonemptiness and compactness of the solution set, global uniqueness and solvability, error bound theory, stability and continuity analysis, and so on. The developments of solution methods and applications for the tensor complementarity problem are given in the second part and the third part, respectively. Some further issues are proposed in all the parts.

67 citations


Journal ArticleDOI
TL;DR: This work shows that in the limit the iterates of the alternating direction method of multipliers either satisfy a set of first-order optimality conditions or produce a certificate of either primal or dual infeasibility for optimization problems with linear or quadratic objective functions and conic constraints.
Abstract: The alternating direction method of multipliers is a powerful operator splitting technique for solving structured optimization problems. For convex optimization problems, it is well known that the algorithm generates iterates that converge to a solution, provided that it exists. If a solution does not exist, then the iterates diverge. Nevertheless, we show that they yield conclusive information regarding problem infeasibility for optimization problems with linear or quadratic objective functions and conic constraints, which includes quadratic, second-order cone, and semidefinite programs. In particular, we show that in the limit the iterates either satisfy a set of first-order optimality conditions or produce a certificate of either primal or dual infeasibility. Based on these results, we propose termination criteria for detecting primal and dual infeasibility.
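The paper's setting is conic programs; the phenomenon it exploits can be seen on a toy feasibility instance of the closely related Douglas–Rachford splitting (a hedged illustration, not the paper's algorithm): for two disjoint intervals the iterates diverge, but their successive differences settle at a vector whose norm equals the distance between the sets, certifying infeasibility.

```python
def proj(x, lo, hi):
    # Euclidean projection of a scalar onto the interval [lo, hi]
    return min(max(x, lo), hi)

def dr_step(x):
    # One Douglas-Rachford step for the disjoint sets A = [2, 3], B = [0, 1]
    pA = proj(x, 2.0, 3.0)
    pB = proj(2.0 * pA - x, 0.0, 1.0)
    return x + pB - pA

x = 0.0
diffs = []
for _ in range(50):
    x_new = dr_step(x)
    diffs.append(x_new - x)
    x = x_new
# The iterates diverge, but the successive differences stabilize at -1.0,
# whose magnitude equals dist(A, B): a certificate that A and B do not meet.
```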

59 citations


Journal ArticleDOI
TL;DR: A robust nonsmooth multiobjective optimization problem related to a multiobjective optimization problem with data uncertainty is investigated, and the weak, strong and converse robust duality results between the primal problem and its dual problems are obtained under generalized convexity assumptions.
Abstract: In this paper, we investigate a robust nonsmooth multiobjective optimization problem related to a multiobjective optimization with data uncertainty. We first introduce two kinds of generalized convex functions, which are not necessarily convex. Robust necessary optimality conditions for weakly robust efficient solutions and properly robust efficient solutions of the problem are established by a generalized alternative theorem and the robust constraint qualification. Further, robust sufficient optimality conditions for weakly robust efficient solutions and properly robust efficient solutions of the problem are also derived. The Mond–Weir-type dual problem and Wolfe-type dual problem are formulated. Finally, we obtain the weak, strong and converse robust duality results between the primal problem and its dual problems under the generalized convexity assumptions.

58 citations


Journal ArticleDOI
TL;DR: The first practical application is about a class of multi-person noncooperative games, which is modeled as a tensor complementarity problem, and particularly, an explicit relationship between the solutions to these two classes of problems is presented.
Abstract: We have reviewed some theoretical and algorithmic developments for tensor complementarity problems and related models in the first part and the second part of this paper, respectively. In this part, we present a survey for some applications of tensor complementarity problems and polynomial complementarity problems. We first describe some equivalent classes of tensor complementarity problems and polynomial complementarity problems, since many practical problems can be modeled as forms of those equivalent problems; and then, we review three practical applications of tensor complementarity problems and polynomial complementarity problems. The first practical application is about a class of multi-person noncooperative games, which is modeled as a tensor complementarity problem, and particularly, an explicit relationship between the solutions to these two classes of problems is presented. The second practical problem is about the hypergraph clustering problem, which can be solved by a tensor complementarity problem. The third practical problem is about a class of traffic equilibrium problems, which is modeled as a polynomial complementarity problem. Some further issues are given.

57 citations


Journal ArticleDOI
TL;DR: A semidefinite relaxation method based on a polynomial optimization model is presented so that all solutions of the tensor complementarity problem can be found under the assumption that the solution set of the problem is finite.
Abstract: This work, with its three parts, reviews the state-of-the-art of studies for the tensor complementarity problem and some related models. In the first part of this paper, we have reviewed the theoretical developments of the tensor complementarity problem and related models. In this second part, we review the developments of solution methods for the tensor complementarity problem. It has been shown that the tensor complementarity problem is equivalent to some known optimization problems, or related problems such as systems of tensor equations, systems of nonlinear equations, and nonlinear programming problems, under suitable assumptions. By solving these reformulated problems with the help of structures of the involved tensors, several numerical methods have been proposed so that a solution of the tensor complementarity problem can be found. Moreover, based on a polynomial optimization model, a semidefinite relaxation method is presented so that all solutions of the tensor complementarity problem can be found under the assumption that the solution set of the problem is finite. Further applications of the tensor complementarity problem will be given and discussed in the third part of this paper.

52 citations


Journal ArticleDOI
TL;DR: A flexible algorithm for non-smooth non-convex optimization is proposed for which (subsequential) convergence to a stationary point under weak assumptions on the growth of the model function error is proved.
Abstract: We propose a unifying algorithm for non-smooth non-convex optimization. The algorithm approximates the objective function by a convex model function and finds an approximate (Bregman) proximal point of the convex model. This approximate minimizer of the model function yields a descent direction, along which the next iterate is found. Complemented with an Armijo-like line search strategy, we obtain a flexible algorithm for which we prove (subsequential) convergence to a stationary point under weak assumptions on the growth of the model function error. Special instances of the algorithm with a Euclidean distance function are, for example, Gradient Descent, Forward-Backward Splitting, ProxDescent, without the common requirement of a "Lipschitz continuous gradient". In addition, we consider a broad class of Bregman distance functions (generated by Legendre functions), replacing the Euclidean distance. The algorithm has a wide range of applications including many linear and non-linear inverse problems in signal/image processing and machine learning.
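Forward-Backward Splitting, named in the abstract as a Euclidean special instance of the model-function framework, can be sketched on a one-dimensional lasso problem (the data values here are illustrative, not from the paper):

```python
def soft_threshold(v, t):
    # proximal operator of t*|x|: the "backward" (shrinkage) step
    return max(abs(v) - t, 0.0) * (1.0 if v > 0 else -1.0)

# minimize 0.5*(a*x - b)^2 + lam*|x|   (1-D lasso, illustrative data)
a, b, lam, step = 1.0, 2.0, 0.5, 1.0
x = 0.0
for _ in range(100):
    grad = a * (a * x - b)                            # forward (gradient) step
    x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step
# the closed-form solution of this instance is soft_threshold(b, lam) = 1.5
```

In the paper's terms, the quadratic-plus-ℓ1 model is minimized exactly at each step; the general algorithm replaces the Euclidean proximal step by a Bregman one.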

33 citations


Journal ArticleDOI
TL;DR: The article is focused on the investigation of the necessary optimality conditions in the form of Pontryagin’s maximum principle for optimal control problems with state constraints and a number of results are presented.
Abstract: The article is focused on the investigation of the necessary optimality conditions in the form of Pontryagin’s maximum principle for optimal control problems with state constraints. A number of results on this topic, which refine the existing ones, are presented. These results concern the nondegenerate maximum principle under weakened controllability assumptions and also the continuity of the measure Lagrange multiplier.

33 citations


Journal ArticleDOI
TL;DR: A new approach is proposed to characterize robust optimal solution sets of this class of uncertain optimization problems via its dual problem, and several characterizations of robust optimal solution sets obtained in the recent literature can be recovered using this approach.
Abstract: In this paper, we deal with robust optimal solution sets for a class of optimization problems with data uncertainty in both the objective and constraints. We first introduce a mixed-type robust dual problem of this class of uncertain optimization problems and explore robust strong duality relations between them. Then, we propose a new approach to characterize robust optimal solution sets of this class of uncertain optimization problems via its dual problem. Moreover, we show that several results on characterizations of robust optimal solution sets of uncertain optimization problems obtained in recent literature can be obtained using our approach.

33 citations


Journal ArticleDOI
TL;DR: By separating the differential and non-differential parts of the generalized absolute value equations, a class of modified Newton-type iteration methods is proposed, which includes the well-known Picard iteration method as a special case.
Abstract: In this paper, by separating the differential and the non-differential parts of the generalized absolute value equations, a class of modified Newton-type iteration methods are proposed. The modified Newton-type iteration method involves the well-known Picard iteration method as the special case. Convergence properties of the new iteration schemes are analyzed in detail. In particular, some specific sufficient conditions are presented for two special coefficient matrices. Finally, two numerical examples are given to illustrate the effectiveness of the proposed modified Newton-type iteration methods.
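The Picard special case mentioned in the abstract iterates x^{k+1} = A^{-1}(b - B|x^k|) for the generalized absolute value equation Ax + B|x| = b. A minimal sketch on a hand-built 2×2 instance (A, B and b below are illustrative choices, with B = -I and solution x* = (1, -1)):

```python
def solve_A(r1, r2):
    # exact solve with A = [[4, 1], [1, 4]]  (det = 15)
    return (4.0 * r1 - r2) / 15.0, (-r1 + 4.0 * r2) / 15.0

# Generalized absolute value equation A x - |x| = b, with b chosen so
# that x* = (1, -1); the Picard map is a contraction since ||A^{-1}|| < 1.
b1, b2 = 2.0, -4.0
x1, x2 = 0.0, 0.0
for _ in range(60):
    # Picard step: x^{k+1} = A^{-1} (b + |x^k|)
    x1, x2 = solve_A(b1 + abs(x1), b2 + abs(x2))
```

The modified Newton-type methods of the paper replace this fixed splitting by one that keeps the differentiable part inside the solve; the sketch only shows the Picard limit case.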

32 citations


Journal ArticleDOI
TL;DR: A class of non-Euclidean gradient-like inequalities is introduced, allowing linear convergence of a Bregman gradient method to be proved for nonconvex minimization, even when neither strong convexity nor Lipschitz gradient continuity holds.
Abstract: The gradient method is well known to globally converge linearly when the objective function is strongly convex and admits a Lipschitz continuous gradient. In many applications, both assumptions are often too stringent, precluding the use of gradient methods. In the early 1960s, after the amazing breakthrough of Łojasiewicz on gradient inequalities, it was observed that uniform convexity assumptions could be relaxed and replaced by these inequalities. On the other hand, very recently, it has been shown that the Lipschitz gradient continuity can be lifted and replaced by a class of functions satisfying a non-Euclidean descent property expressed in terms of a Bregman distance. In this note, we combine these two ideas to introduce a class of non-Euclidean gradient-like inequalities, which allow us to prove linear convergence of a Bregman gradient method for nonconvex minimization, even when neither strong convexity nor Lipschitz gradient continuity holds.

Journal ArticleDOI
TL;DR: A sixth-kind Chebyshev collocation method is proposed to solve this inverse problem numerically and to obtain the unknown boundary function; a regularization method based on the mollification technique with the generalized cross-validation criterion is utilized.
Abstract: In this paper, we consider an inverse reaction–diffusion–convection problem in which one of the boundary conditions is unknown. A sixth-kind Chebyshev collocation method will be proposed to solve this problem numerically and to obtain the unknown boundary function. Since this inverse problem is generally ill-posed, to find an optimal stable solution, we will utilize a regularization method based on the mollification technique with the generalized cross-validation criterion. The error estimate of the numerical solution is investigated. Finally, to authenticate the validity and effectiveness of the proposed algorithm, some numerical test problems are presented.

Journal ArticleDOI
TL;DR: In this paper, a Kojima-Megiddo-Mizuno type continuation method for solving tensor complementarity problems is introduced, and it is shown that there exists a bounded continuation trajectory when the tensor is strictly semi-positive and any limit point tracing the trajectory gives a solution.
Abstract: We introduce a Kojima–Megiddo–Mizuno type continuation method for solving tensor complementarity problems. We show that there exists a bounded continuation trajectory when the tensor is strictly semi-positive and any limit point tracing the trajectory gives a solution of the tensor complementarity problem. Moreover, when the tensor is strongly strictly semi-positive, tracing the trajectory will converge to the unique solution. Some numerical results are given to illustrate the effectiveness of the method.

Journal ArticleDOI
Marcus Carlsson
TL;DR: For optimization problems where the $\ell^2$-term contains a singular matrix, it is proved that the regularizations never move the global minima.
Abstract: We provide theory for the computation of convex envelopes of non-convex functionals including an $\ell^2$-term and use these to suggest a method for regularizing a more general set of problems. The applications are particularly aimed at compressed sensing and low-rank recovery problems, but the theory relies on results which potentially could be useful also for other types of non-convex problems. For optimization problems where the $\ell^2$-term contains a singular matrix, we prove that the regularizations never move the global minima. This result in turn relies on a theorem concerning the structure of convex envelopes, which is interesting in its own right. It says that at any point where the convex envelope does not touch the non-convex functional, we necessarily have a direction in which the convex envelope is affine.

Journal ArticleDOI
TL;DR: Two splitting methods are proposed for solving horizontal linear complementarity problems characterized by matrices with positive diagonal elements, and convergence of the methods is proved under assumptions on the diagonal dominance of the matrices of the problem.
Abstract: In this paper, we propose two splitting methods for solving horizontal linear complementarity problems characterized by matrices with positive diagonal elements. The proposed procedures are based on the Jacobi and on the Gauss–Seidel iterations and differ from existing techniques in that they act directly and simultaneously on both matrices of the problem. We prove the convergence of the methods under some assumptions on the diagonal dominance of the matrices of the problem. Several numerical experiments, including large-scale problems of practical interest, demonstrate the capabilities of the proposed methods in various situations.
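The paper's methods act on both matrices of the horizontal problem; a simpler relative that conveys the flavor is the projected Gauss–Seidel sweep for a standard LCP(M, q) (M and q below are illustrative, not from the paper):

```python
# Standard LCP(M, q): find z >= 0 with w = M z + q >= 0 and z . w = 0.
# Projected Gauss-Seidel sweep; M is diagonally dominant with positive
# diagonal, so the iteration converges.
M = [[4.0, 1.0], [1.0, 4.0]]
q = [-3.0, -6.0]
z = [0.0, 0.0]
for _ in range(100):
    for i in range(2):
        residual = q[i] + sum(M[i][j] * z[j] for j in range(2))
        z[i] = max(0.0, z[i] - residual / M[i][i])
# for this instance M z + q = 0 has a nonnegative root, z* = (0.4, 1.4)
```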

Journal ArticleDOI
TL;DR: The case in which one operator is Lipschitz continuous, but not necessarily a subdifferential operator, and the other operator is strongly monotone arises in optimization methods based on primal–dual approaches; new linear convergence results are provided for this setting.
Abstract: The Douglas–Rachford method is a popular splitting technique for finding a zero of the sum of two subdifferential operators of proper, closed, and convex functions and, more generally, two maximally monotone operators. Recent results concerned with linear rates of convergence of the method require additional properties of the underlying monotone operators, such as strong monotonicity and cocoercivity. In this paper, we study the case in which one operator is Lipschitz continuous, but not necessarily a subdifferential operator, and the other operator is strongly monotone. This situation arises in optimization methods based on primal–dual approaches. We provide new linear convergence results in this setting.

Journal ArticleDOI
TL;DR: A robust second-order numerical integration scheme is proposed for a class of nonlinear optimal control problems subject to a system of fractional differential equations, and a gradient-based optimization method is applied to the discretized problem.
Abstract: This paper presents a numerical algorithm for solving a class of nonlinear optimal control problems subject to a system of fractional differential equations. We first propose a robust second-order numerical integration scheme for the system. The objective is approximated by the trapezoidal rule. We then apply a gradient-based optimization method to the discretized problem. Formulas for calculating the gradients are derived. Computational results demonstrate that our method is able to generate accurate numerical approximations for problems with multiple states and controls. It is also robust with respect to the fractional orders of derivatives.

Journal ArticleDOI
TL;DR: An efficient numerical method is proposed based on a new class of basis functions with control parameters, called generalized polynomials, and the Lagrange multipliers method; the necessary optimality conditions yield a system of algebraic equations in the unknown coefficients and control parameters that can be solved simply.
Abstract: This paper deals with an efficient numerical method for solving two-dimensional variable-order fractional optimal control problems. The dynamic constraint of the two-dimensional variable-order fractional optimal control problem is given by classical partial differential equations such as the convection–diffusion, diffusion-wave and Burgers' equations. The presented numerical approach is essentially based on a new class of basis functions with control parameters, called generalized polynomials, and the Lagrange multipliers method. First, generalized polynomials are introduced and an explicit formulation for their variable-order fractional operational matrix is obtained. Then, the state and control functions are expanded in terms of generalized polynomials with unknown coefficients and control parameters. By using the residual function and its 2-norm, the problem under consideration is transformed into an optimization problem. Finally, the necessary optimality conditions yield a system of algebraic equations in the unknown coefficients and control parameters, which can be solved simply. Some illustrative examples are given to demonstrate the accuracy and efficiency of the proposed method.

Journal ArticleDOI
TL;DR: A simple and direct approach is proposed for solving linear-quadratic mean-field stochastic control problems, based on a suitable version of the martingale formulation for verification theorems in control theory.
Abstract: We propose a simple and direct approach for solving linear-quadratic mean-field stochastic control problems. We study both finite-horizon and infinite-horizon problems and allow notably some coefficients to be stochastic. Extension to the common noise case is also addressed. Our method is based on a suitable version of the martingale formulation for verification theorems in control theory. The optimal control involves the solution to a system of Riccati ordinary differential equations and to a linear mean-field backward stochastic differential equation; existence and uniqueness conditions are provided for such a system. Finally, we illustrate our results through an application to the production of an exhaustible resource.

Journal ArticleDOI
TL;DR: A new search direction satisfying the sufficient descent condition is derived from a quadratic model in a two-dimensional subspace, a new strategy for the choice of initial stepsize is designed, and the global convergence and R-linear convergence of the proposed method are established.
Abstract: The Barzilai–Borwein conjugate gradient methods, which were first proposed by Dai and Kou (Sci China Math 59(8):1511–1524, 2016), are very interesting and very efficient for strictly convex quadratic minimization. In this paper, we present an efficient Barzilai–Borwein conjugate gradient method for unconstrained optimization. Motivated by the Barzilai–Borwein method and the linear conjugate gradient method, we derive a new search direction satisfying the sufficient descent condition based on a quadratic model in a two-dimensional subspace, and design a new strategy for the choice of initial stepsize. A generalized Wolfe line search is also proposed, which is nonmonotone and can avoid a numerical drawback of the original Wolfe line search. Under mild conditions, we establish the global convergence and the R-linear convergence of the proposed method. In particular, we also analyze the convergence for convex functions. Numerical results show that, for the CUTEr library and the test problem collection given by Andrei, the proposed method is superior to two famous conjugate gradient methods, which were proposed by Dai and Kou (SIAM J Optim 23(1):296–320, 2013) and Hager and Zhang (SIAM J Optim 16(1):170–192, 2005), respectively.
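The Barzilai–Borwein stepsize underlying these methods is α_k = sᵀs / sᵀy with s = x_k − x_{k−1} and y = g_k − g_{k−1}. A minimal sketch on a toy strictly convex quadratic (the data and the first fixed stepsize are illustrative choices, not from the paper):

```python
# BB1 gradient steps on f(x) = 0.5 * x^T A x - b^T x, A = diag(1, 10),
# b = (1, 10), so the minimizer is x* = (1, 1).
def grad(x):
    return [1.0 * x[0] - 1.0, 10.0 * x[1] - 10.0]   # A x - b

x_prev, g_prev = [0.0, 0.0], grad([0.0, 0.0])
# first step: plain gradient descent with a small fixed stepsize
x = [x_prev[0] - 0.05 * g_prev[0], x_prev[1] - 0.05 * g_prev[1]]
for _ in range(100):
    g = grad(x)
    s = [x[0] - x_prev[0], x[1] - x_prev[1]]
    y = [g[0] - g_prev[0], g[1] - g_prev[1]]
    sy = s[0] * y[0] + s[1] * y[1]
    if sy == 0.0:            # iterates have converged to machine precision
        break
    alpha = (s[0] * s[0] + s[1] * s[1]) / sy        # BB1 stepsize
    x_prev, g_prev = x, g
    x = [x[0] - alpha * g[0], x[1] - alpha * g[1]]
```

The steps are nonmonotone, which is why the paper pairs the direction with a generalized (nonmonotone) Wolfe line search in the general case.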

Journal ArticleDOI
TL;DR: A general concept of well-posedness in the sense of Tykhonov is introduced for abstract problems formulated on metric spaces and characterized in terms of properties of a family of approximating sets; the results are then illustrated on particular problems in the framework of real normed spaces.
Abstract: We introduce a general concept of well-posedness in the sense of Tykhonov for abstract problems formulated on metric spaces and characterize it in terms of properties for a family of approximating sets. Then, we illustrate these results in the study of some relevant particular problems with history-dependent operators: a fixed point problem, a nonlinear operator equation, a variational inequality and a hemivariational inequality, both formulated in the framework of real normed spaces. For each problem, we clearly indicate the approximating sets, characterize its well-posedness by using our abstract results, then we state and prove specific results which guarantee the well-posedness under appropriate assumptions on the data. For part of the problems, we provide the continuous dependence of the solution with respect to the data and/or present specific examples.

Journal ArticleDOI
TL;DR: The main results are established via compactness of the fractional resolvent operator family, and the optimal control results are derived without uniqueness of solutions of the controlled evolution equations.
Abstract: This paper is mainly concerned with controlled stochastic evolution equations of Sobolev type for the Caputo and Riemann–Liouville fractional derivatives. Some sufficient conditions are established for the existence of mild solutions and optimal state-control pairs of the limited Lagrange optimal systems. The main results are established via compactness of the fractional resolvent operator family, and the optimal control results are derived without uniqueness of solutions of the controlled evolution equations.

Journal ArticleDOI
TL;DR: Local convergence properties of the Levenberg–Marquardt method are considered, when applied to nonzero-residue nonlinear least-squares problems under an error bound condition, which is weaker than requiring full rank of the Jacobian in a neighborhood of a stationary point.
Abstract: The Levenberg–Marquardt method is widely used for solving nonlinear systems of equations, as well as nonlinear least-squares problems. In this paper, we consider local convergence properties of the method, when applied to nonzero-residue nonlinear least-squares problems under an error bound condition, which is weaker than requiring full rank of the Jacobian in a neighborhood of a stationary point. Differently from the zero-residue case, the choice of the Levenberg–Marquardt parameter is shown to be dictated by (i) the behavior of the rank of the Jacobian and (ii) a combined measure of nonlinearity and residue size in a neighborhood of the set of (possibly non-isolated) stationary points of the sum of squares function.
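A minimal Levenberg–Marquardt sketch on a tiny nonzero-residue least-squares problem illustrates the damped step the abstract discusses (the residual function and the simple halve/double damping rule here are illustrative choices, not the parameter rule analyzed in the paper):

```python
# residuals r(x) = (x^2 - 2, x - 2), objective f(x) = 0.5 * ||r(x)||^2;
# the two residuals cannot vanish simultaneously, so the residue is nonzero.
def residuals(x):
    return (x * x - 2.0, x - 2.0)

def f(x):
    r1, r2 = residuals(x)
    return 0.5 * (r1 * r1 + r2 * r2)

x, lam = 0.0, 1.0
for _ in range(100):
    r1, r2 = residuals(x)
    g = 2.0 * x * r1 + r2             # gradient J^T r, with J = (2x, 1)
    h = 4.0 * x * x + 1.0             # Gauss-Newton Hessian J^T J
    step = -g / (h + lam)             # damped (Levenberg-Marquardt) step
    if f(x + step) < f(x):
        x, lam = x + step, lam * 0.5  # accept step, relax damping
    else:
        lam *= 2.0                    # reject step, increase damping
# x approaches the stationary point of f, where g = 2x^3 - 3x - 2 = 0
```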

Journal ArticleDOI
TL;DR: A new methodology to cope with the mathematical difficulties arising from the presence of stochastic coefficients and random jumps is developed and an explicit equilibrium investment strategy in a deterministic coefficients case is obtained and proved to be unique.
Abstract: This paper studies a kind of time-inconsistent linear–quadratic control problem in a more general framework with stochastic coefficients and random jumps. The time inconsistency comes from the dependence of the terminal cost on the current state as well as the presence of a quadratic term of the expected terminal state in the objective functional. Instead of finding a global optimal control, we look for a time-consistent locally optimal equilibrium solution within the class of open-loop controls. A general sufficient and necessary condition for equilibrium controls via a flow of forward–backward stochastic differential equations is derived. This paper further develops a new methodology to cope with the mathematical difficulties arising from the presence of stochastic coefficients and random jumps. As an application, we study a mean-variance portfolio selection problem in a jump-diffusion financial market; an explicit equilibrium investment strategy in a deterministic coefficients case is obtained and proved to be unique.

Journal ArticleDOI
TL;DR: Using characterizations of several set relations via the oriented distance function, some suitable subsets of the scalarization image space are constructed to obtain equivalent characterizations for various robust solutions for uncertain multiobjective optimization problems based on a set approach.
Abstract: In this paper, we characterize different kinds of multiobjective robustness concepts via the well-known oriented distance function. By using characterizations of several set relations via the oriented distance function, together with the help of image space analysis, we construct some suitable subsets of the scalarization image space to obtain equivalent characterizations for various robust solutions for uncertain multiobjective optimization problems based on a set approach.

Journal ArticleDOI
TL;DR: In this article, a diffusion equation with fractional time derivative with nonsingular Mittag-Leffler kernel in Hilbert spaces is considered; for a distributed controlled fractional diffusion problem, it is shown that there exists a unique optimal control, which can act on the system in order to steer the state toward a given target state at minimal cost.
Abstract: In this paper, we consider a diffusion equation with fractional time derivative with nonsingular Mittag-Leffler kernel in Hilbert spaces. We first prove the existence and uniqueness of the solution by means of a spectral argument. Then, we consider a distributed controlled fractional diffusion problem. We show that there exists a unique optimal control, which can act on the system in order to steer the state of the system toward a given target state at minimal cost. Finally, using the Euler–Lagrange first-order optimality condition, we obtain an optimality system, which characterizes the optimal control.

Journal ArticleDOI
TL;DR: Describing these models as controlled sweeping processes with pointwise/hard control and state constraints and applying new necessary optimality conditions for such systems allow us to develop efficient procedures to solve naturally formulated optimal control problems.
Abstract: The paper is mostly devoted to applications of a novel optimal control theory for perturbed sweeping/Moreau processes to two practical dynamical models. The first model addresses mobile robot dynamics with obstacles, and the second one concerns control and optimization of traffic flows. Describing these models as controlled sweeping processes with pointwise/hard control and state constraints and applying new necessary optimality conditions for such systems allow us to develop efficient procedures to solve naturally formulated optimal control problems for the models under consideration and completely calculate optimal solutions in particular situations.

Journal ArticleDOI
TL;DR: In order to solve the time optimal control problem for the original system, a sequence of Meyer approximations is constructed, from which the desired optimal control and optimal time are derived.
Abstract: We investigate time optimal control of a system governed by a class of non-instantaneous impulsive differential equations in Banach spaces. We use an appropriate linear transformation technique to transfer the original impulsive system into an approximate one, and then we prove the existence and uniqueness of their mild solutions. Moreover, we show the existence of optimal controls for Meyer problems of the approximate system. Further, in order to solve the time optimal control problem for the original system, we construct a sequence of Meyer approximations for which the desired optimal control and optimal time are well derived.

Journal ArticleDOI
TL;DR: It is proved that the new method is globally convergent for general nonlinear functions, under some standard assumptions, and it is shown that the search direction satisfies the sufficient descent property independent of the line search.
Abstract: In this paper, a modified version of the spectral conjugate gradient algorithm suggested by Jian, Chen, Jiang, Zeng and Yin is proposed. It is proved that the new method is globally convergent for general nonlinear functions, under some standard assumptions. Based on the modified secant condition and quasi-Newton directions, some new spectral parameters are introduced. It is shown that the search direction satisfies the sufficient descent property independent of the line search. Numerical experiments indicate a promising behavior of the new algorithm, especially for large-scale problems.

Journal ArticleDOI
TL;DR: This paper presents a dynamic non-diagonal regularization for interior point methods and proposes a rule for tuning the regularization dynamically based on the properties of the problem, such that sufficiently large eigenvalues of the non-regularized system are perturbed insignificantly.
Abstract: In this paper, we present a dynamic non-diagonal regularization for interior point methods. The non-diagonal aspect of this regularization is implicit, since all the off-diagonal elements of the regularization matrices are cancelled out by those elements present in the Newton system, which do not contribute important information in the computation of the Newton direction. Such a regularization has multiple goals. The obvious one is to improve the spectral properties of the Newton system solved at each iteration of the interior point method. On the other hand, the regularization matrices introduce sparsity to the aforementioned linear system, allowing for more efficient factorizations. We also propose a rule for tuning the regularization dynamically based on the properties of the problem, such that sufficiently large eigenvalues of the non-regularized system are perturbed insignificantly. This alleviates the need of finding specific regularization values through experimentation, which is the most common approach in the literature. We provide perturbation bounds for the eigenvalues of the non-regularized system matrix and then discuss the spectral properties of the regularized matrix. Finally, we demonstrate the efficiency of the method applied to solve standard small- and medium-scale linear and convex quadratic programming test problems.