
Showing papers in "Siam Journal on Control and Optimization in 1997"


Journal ArticleDOI
TL;DR: It is shown that some basic linear control design problems are NP-hard, implying that, unless P=NP, they cannot be solved by polynomial time algorithms.
Abstract: We show that some basic linear control design problems are NP-hard, implying that, unless P=NP, they cannot be solved by polynomial time algorithms. The problems that we consider include simultaneous stabilization by output feedback, stabilization by state or output feedback in the presence of bounds on the elements of the gain matrix, and decentralized control. These results are obtained by first showing that checking the existence of a stable matrix in an interval family of matrices is NP-hard.

411 citations


Journal ArticleDOI
TL;DR: In this article, the equivalence between exact controllability and exponential stabilizability for an abstract conservative system with bounded control is established, yielding a frequency-domain characterization for second-order elastic systems with (locally) distributed control/damping.
Abstract: In this paper we note the equivalence between exact controllability and exponential stabilizability for an abstract conservative system with bounded control. This enables us to establish a frequency domain characterization for the exact controllability/uniform exponential decay property of second-order elastic systems, such as the wave equation and the Petrovsky equation, with (locally) distributed control/damping. A piecewise multiplier method for frequency domain is introduced. For several classes of PDEs on regions which are not necessarily smooth, we obtain a sufficient condition for the subregion on which the application of control/damping will yield the exact controllability/uniform exponential decay property. This result provides useful information for designing the location of controllers/dampers for distributed systems with a law of conservation.

278 citations


Journal ArticleDOI
TL;DR: In this paper, state-constrained optimal control problems governed by semilinear parabolic equations are studied, a minimum principle of Pontryagin's type is established, and conditions for normality of the optimality conditions are given.
Abstract: This paper deals with state-constrained optimal control problems governed by semilinear parabolic equations. We establish a minimum principle of Pontryagin's type. To deal with the state constraints, we introduce a penalty problem by using Ekeland's principle. The key tool for the proof is the use of a special kind of spike perturbations distributed in the domain where the controls are defined. Conditions for normality of optimality conditions are given.

257 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider methods for minimizing a convex function $f$ that generate a sequence $\{x_k\}$ by taking $x_{k+1}$ to be an approximate minimizer of $f(x)+D_h(x,x_k)/c_k$, where $c_k > 0$ and $D_h$ is the D-function of a Bregman function $h$.
Abstract: We consider methods for minimizing a convex function $f$ that generate a sequence $\{x_k\}$ by taking $x_{k+1}$ to be an approximate minimizer of $f(x)+D_h(x,x_k)/c_k$, where $c_k > 0$ and $D_h$ is the D-function of a Bregman function $h$. Extensions are made to B-functions that generalize Bregman functions and cover more applications. Convergence is established under criteria amenable to implementation. Applications are made to nonquadratic multiplier methods for nonlinear programs.
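The proximal iteration described in the abstract can be sketched concretely. The following is a minimal illustration with hypothetical data, not the paper's method: it uses the negative-entropy Bregman function h(x) = Σ x_i log x_i (whose D-function is the Kullback–Leibler divergence), a simple quadratic f, a constant c_k, and a coordinate-wise Newton solve for each proximal subproblem.

```python
import numpy as np

def prox_step(xk, a, c, newton_iters=30):
    # Solve argmin_x 0.5*||x - a||^2 + D_h(x, xk)/c coordinate-wise, where
    # h(x) = sum_i x_i log x_i, so D_h is the Kullback--Leibler divergence.
    # Stationarity per coordinate: (x - a) + (log x - log xk)/c = 0.
    x = xk.copy()
    for _ in range(newton_iters):
        g = (x - a) + (np.log(x) - np.log(xk)) / c
        gp = 1.0 + 1.0 / (c * x)
        x = np.maximum(x - g / gp, 1e-12)   # stay in the positive orthant
    return x

a = np.array([2.0, 0.5, 1.5])   # minimizer of f(x) = 0.5*||x - a||^2
x = np.ones(3)                  # start strictly positive
for k in range(50):
    x = prox_step(x, a, c=1.0)  # constant c_k > 0 for simplicity
print(x)                        # approaches a
```

With the quadratic h(x) = ||x||²/2 the same iteration reduces to the classical proximal point method; the entropy choice keeps iterates in the positive orthant without an explicit constraint.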

251 citations


Journal ArticleDOI
TL;DR: In this paper, the authors study the ergodic control problem of switching diffusions representing a typical hybrid system that arises in numerous applications such as fault-tolerant control systems, flexible manufacturing systems, etc.
Abstract: We study the ergodic control problem of switching diffusions representing a typical hybrid system that arises in numerous applications such as fault-tolerant control systems, flexible manufacturing systems, etc. Under fairly general conditions, we establish the existence of a stable, nonrandomized Markov policy which almost surely minimizes the pathwise long-run average cost. We then study the corresponding Hamilton--Jacobi--Bellman (HJB) equation and establish the existence of a unique solution in a certain class. Using this, we characterize the optimal policy as a minimizing selector of the Hamiltonian associated with the HJB equations. As an example we apply the results to a failure-prone manufacturing system and obtain closed form solutions for the optimal policy.

233 citations


Journal ArticleDOI
TL;DR: The paper addresses the problem of computing state variables for systems of linear differential-algebraic equations of various forms, from the behavioral point of view.
Abstract: Modeling of physical systems consists of writing the equations describing a phenomenon and yields as a result a set of differential-algebraic equations. As such, state-space models are not a natural starting point for modeling, while they have utmost importance in the simulation and control phase. The paper addresses the problem of computing state variables for systems of linear differential-algebraic equations of various forms. The point of view from which the problem is considered is the behavioral one, as put forward in [J. C. Willems, Automatica J. IFAC, 22 (1986), pp. 561--580; Dynamics Reported, 2 (1989), pp. 171--269; IEEE Trans. Automat. Control, 36 (1991), pp. 259--294].

166 citations


Journal ArticleDOI
TL;DR: In this article, a new definition of configuration controllability for mechanical systems whose Lagrangian is kinetic energy with respect to a Riemannian metric minus potential energy is presented.
Abstract: In this paper we present a definition of "configuration controllability" for mechanical systems whose Lagrangian is kinetic energy with respect to a Riemannian metric minus potential energy. A computable test for this new version of controllability is derived. This condition involves an object which we call the symmetric product. Of particular interest is a definition of "equilibrium controllability" for which we are able to derive computable sufficient conditions. Examples illustrate the theory.

160 citations


Journal ArticleDOI
TL;DR: In this article, Lagrangian reduction under a symmetry group is used to derive necessary conditions for optimal control of mechanical systems with symmetry, including nonholonomic systems with a nontrivial momentum equation such as the snakeboard, as well as systems on Lie groups and principal bundles.
Abstract: In this paper we establish necessary conditions for optimal control using the ideas of Lagrangian reduction in the sense of reduction under a symmetry group. The techniques developed here are designed for Lagrangian mechanical control systems with symmetry. The benefit of such an approach is that it makes use of the special structure of the system, especially its symmetry structure, and thus it leads rather directly to the desired conclusions for such systems. Lagrangian reduction can do in one step what one can alternatively do by applying the Pontryagin maximum principle followed by an application of Poisson reduction. The idea of using Lagrangian reduction in the sense of symmetry reduction was also obtained by Bloch and Crouch [Proc. 33rd CDC, IEEE, 1994, pp. 2584--2590] in a somewhat different context, and the general idea is closely related to those in Montgomery [Comm. Math. Phys., 128 (1990), pp. 565--592] and Vershik and Gershkovich [Dynamical Systems VII, V. Arnold and S. P. Novikov, eds., Springer-Verlag, 1994]. Here we develop this idea further and apply it to some known examples, such as optimal control on Lie groups and principal bundles (such as the ball and plate problem) and reorientation examples with zero angular momentum (such as the satellite with moveable masses). However, one of our main goals is to extend the method to the case of nonholonomic systems with a nontrivial momentum equation in the context of the work of Bloch, Krishnaprasad, Marsden, and Murray [Arch. Rational Mech. Anal., (1996), to appear]. The snakeboard is used to illustrate the method.

150 citations


Journal ArticleDOI
TL;DR: In this article, an approach to nonlinear filtering based on the Cameron-Martin version of the Wiener chaos expansion is proposed, which allows one to separate the computations involving the observations from those dealing only with the system parameters and to shift the latter off-line.
Abstract: The objective of this paper is to develop an approach to nonlinear filtering based on the Cameron--Martin version of Wiener chaos expansion. This approach gives rise to a new numerical scheme for nonlinear filtering. The main feature of this algorithm is that it allows one to separate the computations involving the observations from those dealing only with the system parameters and to shift the latter off-line.

143 citations


Journal ArticleDOI
TL;DR: In this article, a new nonsmooth-equations approach based on Fischer's function is presented for solving nonlinear complementarity problems, and local Q-quadratic convergence is established for the generalized Newton method.
Abstract: Based on Fischer's function, a new nonsmooth equations approach is presented for solving nonlinear complementarity problems. Under some suitable assumptions, a local and Q-quadratic convergence result is established for the generalized Newton method applied to the system of nonsmooth equations, which is a reformulation of nonlinear complementarity problems. To globalize the generalized Newton method, a hybrid method combining the generalized Newton method with the steepest descent method is proposed. Global and Q-quadratic convergence is established for this hybrid method. Some numerical results are also reported.
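The reformulation at the heart of this approach can be sketched as follows. Fischer's function φ(a,b) = √(a²+b²) − a − b vanishes exactly when a ≥ 0, b ≥ 0, and ab = 0, so the complementarity problem x ≥ 0, F(x) ≥ 0, xᵀF(x) = 0 becomes the nonsmooth system φ(x_i, F_i(x)) = 0. The sketch below applies a generalized Newton iteration to a small linear complementarity example with hypothetical data; it omits the paper's hybrid globalization with steepest descent.

```python
import numpy as np

def fischer(a, b):
    # Fischer's function: zero iff a >= 0, b >= 0, a*b = 0
    return np.sqrt(a * a + b * b) - a - b

def ncp_newton(F, JF, x0, iters=25):
    # Generalized (semismooth) Newton on Phi(x) = fischer(x, F(x)) = 0
    x = x0.astype(float)
    for _ in range(iters):
        Fx = F(x)
        r = np.sqrt(x ** 2 + Fx ** 2)
        r = np.where(r < 1e-12, 1.0, r)           # guard the kink at (0, 0)
        Da = x / r - 1.0                          # elementwise d(phi)/da
        Db = Fx / r - 1.0                         # elementwise d(phi)/db
        JPhi = np.diag(Da) + np.diag(Db) @ JF(x)  # one generalized Jacobian
        x = x - np.linalg.solve(JPhi, fischer(x, Fx))
    return x

# Small linear complementarity example with hypothetical data:
# find x >= 0 with F(x) = M x + q >= 0 and x^T F(x) = 0.
M = np.array([[2.0, 1.0], [1.0, 3.0]])
q = np.array([-4.0, 1.0])
x = ncp_newton(lambda v: M @ v + q, lambda v: M, np.ones(2))
print(x)   # the solution here is (2, 0)
```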

135 citations


Journal ArticleDOI
TL;DR: In this paper, the authors derived necessary conditions of optimality in the form of a maximum principle for a class of optimal control problems, certain of whose controls are represented by measures and whose state trajectories are functions of bounded variation.
Abstract: Necessary conditions of optimality, in the form of a maximum principle, are derived for a class of optimal control problems, certain of whose controls are represented by measures and whose state trajectories are functions of bounded variation. State trajectories are interpreted as robust solutions of the dynamic equations, a concept of solutions which takes account of the interaction between the measure control and the state variables during the jumps. The maximum principle which is derived improves on earlier optimality conditions for problems of this nature, by allowing nonsmooth data, measurable time dependence, and a possibly time-varying constraint set for the conventional controls.

Journal ArticleDOI
TL;DR: In this article, the authors consider a controlled Markov chain on a general state space whose transition probabilities are parameterized by an unknown parameter belonging to a compact metric space, and construct an adaptive control rule which uses the optimal control law(s) at a relative frequency of 1 - O(n^{-1} log n), showing that this relative frequency gives an asymptotically optimal balance between the control objective and the amount of information needed to learn about the unknown parameter.
Abstract: We consider a controlled Markov chain on a general state space whose transition probabilities are parameterized by an unknown parameter belonging to a compact metric space. There is a one-step reward associated with each pair of control and the following state of the process. Given a finite set of stationary control laws, under each of which the Markov chain is uniformly recurrent, an optimal control law in this set is one that maximizes the long-run average reward. In ignorance of the parameter value, we construct an adaptive control rule which uses the optimal control law(s) at a relative frequency of 1 - O(n^{-1} log n) and show that this relative frequency gives an asymptotically optimal balance between the control objective and the amount of information needed to learn about the unknown parameter. The basic idea underlying this construction is to introduce suitable "uncertainty adjustments" via sequential testing theory into the certainty-equivalence rule, thus resolving the apparent dilemma between control and information.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the limiting behavior of trajectories of control affine systems with continuous inputs, and showed that under very general conditions the trajectories generated by the trajectory of an extended system of the form of a sequence of Lie brackets converged to trajectories in an affine system.
Abstract: In [SIAM J. Control Optim., 37 (1997), to appear], [Limiting process of control-affine systems with Holder continuous inputs, submitted], we have studied the limiting behavior of trajectories of control-affine systems $\Sigma\,:\, \dot{x}=\sum_{k=1}^m u_k f_k(x)$ generated by a sequence $\{u^j\}\subseteq L^1([0,T],{\Bbb R}^m)$, where the $f_k$ are smooth vector fields on a smooth manifold $M$. We have shown that under very general conditions the trajectories of $\Sigma$ generated by the $u^j$ converge to trajectories of an extended system of $\Sigma$ of the form $\Sigma_{ext}\,:\,\dot{x}=\sum_{k=1}^r v_kf_k(x)$, where $f_k, k=1,\ldots,m$, are the same as in $\Sigma$ and $f_{m+1},\ldots,f_r$ are Lie brackets of $f_1,\ldots,f_m$. In this paper, we will apply these convergence results to solve the inverse problem; i.e., given any trajectory $\gamma$ of an extended system $\Sigma_{ext}$, find trajectories of $\Sigma$ that converge to $\gamma$ uniformly. This is done by means of a universal construction that only involves the knowledge of the $v_k, k=1,\ldots,r$, and the structure of the Lie brackets in $\Sigma_{ext}$ but does not depend on the manifold $M$ and the vector fields $f_1,\ldots,f_m$. These results can be applied to approximately track an arbitrary smooth path in $M$ for controllable systems $\Sigma$, which in particular gives an alternative approach to the motion planning problem for nonholonomic systems.
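The role of the Lie bracket can be seen on the standard Heisenberg (Brockett integrator) example, with dynamics x' = u1, y' = u2, z' = x·u2 − y·u1, where [f1, f2] = 2·∂/∂z: a fast loop in the controls produces essentially no net (x, y) motion but a definite net drift in the bracket direction z. This is only an illustrative sketch with hypothetical parameters, not the paper's universal construction.

```python
import numpy as np

def rhs(t, s, A, omega):
    # Heisenberg system: x' = u1, y' = u2, z' = x*u2 - y*u1, driven by the
    # fast loop u1 = -A*omega*sin(omega*t), u2 = A*omega*cos(omega*t)
    u1 = -A * omega * np.sin(omega * t)
    u2 = A * omega * np.cos(omega * t)
    x, y, _ = s
    return np.array([u1, u2, x * u2 - y * u1])

def simulate(A=0.1, omega=20.0, periods=3, n=6000):
    # Classical RK4 over a whole number of control periods
    T = periods * 2.0 * np.pi / omega
    h = T / n
    s = np.array([A, 0.0, 0.0])   # start on the control loop
    for k in range(n):
        t = k * h
        k1 = rhs(t, s, A, omega)
        k2 = rhs(t + h / 2, s + h / 2 * k1, A, omega)
        k3 = rhs(t + h / 2, s + h / 2 * k2, A, omega)
        k4 = rhs(t + h, s + h * k3, A, omega)
        s = s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return s

s = simulate()
# (x, y) return to the start, while z drifts by periods * 2*pi*A^2,
# i.e. net motion purely in the bracket direction
print(s)
```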

Journal ArticleDOI
TL;DR: In this paper, the degeneracy phenomenon in optimal control problems with state constraints was studied and it was shown that this phenomenon occurs because of the incompleteness of the standard variants of the Pontryagin's maximum principle for problems of state constraints.
Abstract: In this paper we study the degeneracy phenomenon in optimal control problems with state constraints. It is shown that this phenomenon occurs because of the incompleteness of the standard variants of Pontryagin's maximum principle for problems with state constraints. A new maximum principle containing additional information about the behavior of the Hamiltonian at the endtimes is developed. We also obtain some sufficient and necessary conditions for nondegeneracy and pointwise nontriviality of the maximum principle. The results obtained pertain to optimal control problems with systems described by differential inclusions and ordinary differential equations.

Journal ArticleDOI
TL;DR: In this paper, an estimator in the form of an infinite-dimensional linear evolution system having the state and parameter estimates as its states is defined, and convergence of the state estimator is established via a Lyapunov estimate.
Abstract: The on-line or adaptive identification of parameters in abstract linear and nonlinear infinite-dimensional dynamical systems is considered. An estimator in the form of an infinite-dimensional linear evolution system having the state and parameter estimates as its states is defined. Convergence of the state estimator is established via a Lyapunov estimate. The finite-dimensional notion of a plant being sufficiently rich or persistently excited is extended to infinite dimensions. Convergence of the parameter estimates is established under the additional assumption that the plant is persistently excited. A finite-dimensional approximation theory is developed, and convergence results are established. Numerical results for examples involving the estimation of both constant and functional parameters in one-dimensional linear and nonlinear heat or diffusion equations and the estimation of stiffness and damping parameters in a one-dimensional wave equation with Kelvin--Voigt viscoelastic damping are presented.

Journal ArticleDOI
TL;DR: In this paper, the authors considered robust and risk-sensitive control of discrete time finite state systems on an infinite horizon and characterized the solution of the state feedback robust control problem in terms of the value of an average cost dynamic game.
Abstract: In this paper we consider robust and risk-sensitive control of discrete time finite state systems on an infinite horizon. The solution of the state feedback robust control problem is characterized in terms of the value of an average cost dynamic game. The risk-sensitive stochastic optimal control problem is solved using the policy iteration algorithm, and the optimal rate is expressed in terms of the value of a stochastic dynamic game with average cost per unit time criterion. By taking a small noise limit, a deterministic dynamic game which is closely related to the robust control problem is obtained.

Journal ArticleDOI
TL;DR: Weighted averages of Kiefer--Wolfowitz-type procedures, driven by larger step lengths than usual, can achieve the optimal rate of convergence while avoiding a priori knowledge of a lower bound on the smallest eigenvalue of the Hessian matrix.
Abstract: Weighted averages of Kiefer--Wolfowitz-type procedures, which are driven by larger step lengths than usual, can achieve the optimal rate of convergence. A priori knowledge of a lower bound on the smallest eigenvalue of the Hessian matrix is avoided. The asymptotic mean squared error of the weighted averaging algorithm is the same as would emerge using a Newton-type adaptive algorithm. Several different gradient estimates are considered; one of them leads to a vanishing asymptotic bias. This gradient estimate applied with the weighted averaging algorithm usually yields a better asymptotic mean squared error than applied with the standard algorithm.
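The idea of larger-than-usual steps combined with averaging can be sketched roughly as follows, on a hypothetical quadratic test function and with hypothetical tuning (not the paper's exact algorithm or weighting): a Kiefer–Wolfowitz iteration with two-sided finite-difference gradient estimates, a step length k^{-0.7} larger than the classical 1/k, and a running average of the iterates.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([1.0, -2.0])          # unknown minimizer (hypothetical test case)

def noisy_f(x):
    # noisy observations of f(x) = ||x - theta||^2
    return float(np.sum((x - theta) ** 2)) + rng.normal(scale=0.1)

def kw_averaged(x0, n=4000):
    x = x0.astype(float)
    xbar = np.zeros_like(x)
    d = len(x)
    for k in range(1, n + 1):
        a = k ** -0.7                  # step length larger than the classical 1/k
        c = k ** -0.25                 # finite-difference span
        g = np.empty(d)
        for i in range(d):             # two-sided finite-difference gradient estimate
            e = np.zeros(d); e[i] = c
            g[i] = (noisy_f(x + e) - noisy_f(x - e)) / (2.0 * c)
        x = x - a * g
        xbar += (x - xbar) / k         # running average of the iterates
    return xbar

est = kw_averaged(np.zeros(2))
print(est)   # approaches theta = (1, -2)
```

The averaged estimate smooths out the larger steps; the abstract's point is that this recovers the optimal asymptotic mean squared error without Newton-type adaptation.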

Journal ArticleDOI
TL;DR: In this article, the authors extend a classical low-gain integral control result to the class of exponentially stable regular systems and show how the parameters $k$ and $\Gamma_0$ can be tuned adaptively.
Abstract: It is well known that closing the loop around an exponentially stable, finite-dimensional, linear, time-invariant plant with square transfer-function matrix $G(s)$ compensated by a controller of the form $(k/s)\Gamma_0$, where $k\in {\Bbb R}$ and $\Gamma_0\in {\Bbb R}^{m\times m}$, will result in an exponentially stable closed-loop system which achieves tracking of arbitrary constant reference signals, provided that (i) all the eigenvalues of $G(0)\Gamma_0$ have positive real parts and (ii) the gain parameter $k$ is positive and sufficiently small. In this paper we consider a rather general class of infinite-dimensional linear systems, called regular systems, for which convenient representations are known to exist, both in time and in frequency domain. The purpose of the paper is twofold: (i) we extend the above result to the class of exponentially stable regular systems and (ii) we show how the parameters $k$ and $\Gamma_0$ can be tuned adaptively. The resulting adaptive tracking controllers are not based on system identification or parameter estimation algorithms, nor is the injection of probing signals required.

Journal ArticleDOI
TL;DR: In this paper, the authors studied linear time-invariant delay-differential systems from the behavioral point of view and characterized the controllability of the behaviors in terms of the rank of associated matrices.
Abstract: We will study linear time-invariant delay-differential systems from the behavioral point of view as it was introduced for dynamical systems by Willems [Dynam. Report., 2 (1989), pp. 171--269]. A ring ${\cal H}$ which lies between ${\Bbb R}[s,z,z^{-1}]$ and ${\Bbb R}(s)[z,z^{-1}]$ will be presented, whose elements can be interpreted as a generalized version of delay-differential operators on ${\cal C}^{\infty}({\Bbb R},{\Bbb R})$. In this framework, a behavior is the kernel of such an operator. Using the ring ${\cal H}$, an algebraic characterization of inclusion, respectively, equality of the behaviors under consideration, is given. Finally, controllability of the behaviors is characterized in terms of the rank of the associated matrices. In the case of time-delay state-space systems this criterion becomes the known Hautus criterion for spectral controllability.

Journal ArticleDOI
TL;DR: In this article, it is shown that the drag of a body traveling at uniform velocity in a fluid governed by the stationary Navier--Stokes equations depends smoothly ($C^{\infty}$, in a ball centered at 0) on Lipschitz displacements of the reference domain, at least when the velocity of the body is sufficiently small.
Abstract: This paper is concerned with the computation of the drag $T$ associated with a body traveling at uniform velocity in a fluid governed by the stationary Navier--Stokes equations. It is assumed that the fluid fills a domain of the form $\Omega+u$, where $\Omega\subset{\Bbb R}^3$ is a reference domain and $u$ is a displacement field. We assume only that $\Omega$ is a Lipschitz domain and that $u$ is Lipschitz-continuous. We prove that, at least when the velocity of the body is sufficiently small, $u\mapsto T(\Omega+u)$ is a $C^{\infty}$ mapping (in a ball centered at $0$). We also compute the derivative at $0$.

Journal ArticleDOI
TL;DR: In this paper, it was shown that under rather general assumptions an exactly controllable problem is uniformly stabilizable with arbitrarily prescribed decay rates, under the Riccati equation and other general assumptions.
Abstract: We prove that under rather general assumptions an exactly controllable problem is uniformly stabilizable with arbitrarily prescribed decay rates. Our approach is direct and constructive and avoids many of the technical difficulties associated with the usual methods based on Riccati equations. We give several applications for the wave equation and for Petrovsky systems.

Journal ArticleDOI
TL;DR: In this paper, the authors deal with characterizations of stability and regularity properties of set-valued mappings in infinite dimensions, which are of great importance for applications to many aspects in optimization and control.
Abstract: This paper deals with effective characterizations of stability and regularity properties of set-valued mappings in infinite dimensions, which are of great importance for applications to many aspects in optimization and control. The main purpose is to obtain verifiable necessary and sufficient conditions for these properties that are expressed in terms of constructive generalized differential structures at reference points and are convenient for applications. Based on advanced techniques in nonsmooth analysis, new dual criteria are proven in this direction under minimal assumptions. Applications of such point conditions are given to sensitivity analysis for parametric constraint and variational systems which describe sets of feasible and optimal solutions to various optimization and related problems.

Journal ArticleDOI
TL;DR: In this article, the authors studied the stability of affine in control stochastic differential systems and provided sufficient conditions for the existence of control Lyapunov functions leading to stabilizing feedback laws.
Abstract: The purpose of this paper is to study the asymptotic stability in probability of affine in the control stochastic differential systems. Sufficient conditions for the existence of control Lyapunov functions leading to the existence of stabilizing feedback laws which are smooth, except possibly at the equilibrium point of the system, are provided.

Journal ArticleDOI
TL;DR: This paper develops an approach for obtaining discrete approximations to nonlinear (affine) control systems that are of higher than first order of accuracy with respect to the discretization step h; accuracy O(h^2) is proven for appropriate Runge--Kutta-type discretization methods.
Abstract: This paper develops an approach for obtaining discrete approximations to nonlinear (affine) control systems that are of higher than first order of accuracy with respect to the discretization step h. The approach consists of two parts: first the set ${\cal U}$ of measurable admissible controls is replaced by an appropriate finite-dimensional subset ${\cal U}_N$; then the differential equations corresponding to controls from ${\cal U}_N$ (which are in a reasonable sense "regular") are discretized by single step discretization methods. The main result estimates the accuracy in the first part, measured in terms of a prescribed collection of performance indexes. The result can be interpreted both in the context of approximation of optimal control problems and in the context of approximation of the reachable set. In the first case, accuracy O(h^2) is proven for appropriate Runge--Kutta-type discretization methods, without explicitly or implicitly requiring any regularity of the optimal solutions. In the case of a convex reachable set we obtain O(h^2) approximation with respect to the Hausdorff distance and O(h^{1.5}) accuracy in the nonconvex case. An application to the time-aggregation of discrete-time control systems is also presented.
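The second-order accuracy claim can be illustrated on a hypothetical scalar example (a sketch, not the paper's construction, and with a smooth rather than merely "regular" control): Heun's method, a second-order Runge–Kutta scheme, applied to a control-affine system x' = f(x) + g(x)·u(t) exhibits the expected O(h^2) error decay.

```python
import numpy as np

def heun_affine(f, g, u, x0, T, N):
    # Heun's method (second-order Runge--Kutta) for x' = f(x) + g(x) * u(t)
    h = T / N
    x = float(x0)
    for k in range(N):
        t = k * h
        k1 = f(x) + g(x) * u(t)
        xp = x + h * k1                     # Euler predictor
        k2 = f(xp) + g(xp) * u(t + h)
        x = x + 0.5 * h * (k1 + k2)         # trapezoidal corrector
    return x

# Hypothetical scalar test system: x' = u(t) * x with u = cos,
# so the exact solution is x(T) = exp(sin T) for x(0) = 1.
f = lambda x: 0.0
g = lambda x: x
exact = np.exp(np.sin(1.0))
e1 = abs(heun_affine(f, g, np.cos, 1.0, 1.0, 100) - exact)
e2 = abs(heun_affine(f, g, np.cos, 1.0, 1.0, 200) - exact)
print(e1 / e2)   # roughly 4: halving h cuts the error by about 2^2
```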

Journal ArticleDOI
TL;DR: In this paper, necessary and sufficient conditions are given for the existence of estimators and controllers that achieve the desired performance criterion when a measurement delay is present, in terms of the familiar pair of algebraic Riccati equations.
Abstract: Most physical processes exhibit transport delay in the measured output, and it is well known that this can have disastrous effects on system stability and performance if it is not accounted for. In this paper, we give necessary and sufficient conditions for existence of estimators and controllers that achieve the desired ${\cal H}_\infty$ performance criterion when such a measurement delay is present. We also give the complete characterization of all controllers and estimators that achieve the desired performance criterion. The necessary and sufficient conditions are easy to check and are given in terms of the familiar pair of algebraic Riccati equations that appear in the nondelay versions of the corresponding ${\cal H}_\infty$ problems, along with an additional Riccati differential equation. Explicit state-space formulas for the controllers and estimators are also obtained. They have a linear periodic structure and are easily implementable. To obtain these results, we first obtain state-space results for a "modified" Nehari problem, which may be of independent interest (see Problem 5 in section 2).

Journal ArticleDOI
TL;DR: It is proven that the approximate controllability condition is necessary and the complete controllability condition is sufficient for the partially observed linear Gaussian control system to attain an arbitrarily small neighborhood of each point in the state space with probability arbitrarily close to one.
Abstract: The controllability notions for partially observed stochastic systems are defined. Their relation with complete and approximate controllabilities is shown. In particular, it is proven that the approximate controllability condition is necessary and the complete controllability condition is sufficient for the partially observed linear Gaussian control system to attain an arbitrarily small neighborhood of each point in the state space with probability arbitrarily close to one.

Journal ArticleDOI
TL;DR: It is shown that, under suitable assumptions on the problem data, the iterative algorithm converges to a solution of the optimality conditions, provided a user-specified parameter is properly chosen.
Abstract: This paper considers the optimal control problem of minimizing control effort subject to multiple performance constraints on output covariance matrices $Y_i$ of the form $Y_i \leq \overline{Y}_i$, where $\overline{Y}_i$ is given. The contributions of this paper are a set of conditions that characterize global optimality, and an iterative algorithm for finding a solution to the optimality conditions. This iterative algorithm is completely described up to a user-specified parameter. We show that, under suitable assumptions on problem data, the iterative algorithm converges to a solution of the optimality conditions, provided that this parameter is properly chosen. Both discrete- and continuous-time problems are considered.

Journal ArticleDOI
TL;DR: The notion of observation-compatible systems is introduced and it is shown that prioritized synchronous composition (PSC) of observation-compatible systems can be used as a mechanism of control of nondeterministic systems under partial observation in the presence of driven events.
Abstract: In this paper we extend our earlier work on supervisory control of nondeterministic systems using prioritized synchronization as the mechanism of control and trajectory model as the modeling formalism by considering design of supervisors under partial observation. We introduce the notion of observation-compatible systems and show that prioritized synchronous composition (PSC) of observation-compatible systems can be used as a mechanism of control of nondeterministic systems under partial observation in the presence of driven events. Necessary and sufficient conditions that depend on the trajectory model as opposed to the language model of the plant are obtained for the existence of centralized as well as decentralized supervision. Our work on centralized control shows that the results of the traditional supervisory control can be ``extended" to the above setting, provided that the supervisor is deterministic and the observation mask is projection type. On the other hand, our work on decentralized control is based on a new relation between controllability, observability, co-observability, and PSC that we derive in this paper.

Journal ArticleDOI
TL;DR: In this article, controlled coupled slow and fast motions are examined in a singular perturbations setting, with the objective of minimizing a cost functional that takes into account both the fast motion, supposing, say, tracking a fast target, and the slow dynamics.
Abstract: Controlled coupled slow and fast motions are examined in a singular perturbations setting. The objective is to minimize a cost functional that takes into account both the fast motion, supposing, say, tracking a fast target, and the slow dynamics. A method is offered to cope with the possibility that the fast flow has nonstationary limits. Invariant measures of the fast motion are then the controlled objects on the infinitesimal scale. Optimal amalgamation of them on the slow scale induces the variational limit, whose solutions are near optimal solutions of the perturbed system.

Journal ArticleDOI
TL;DR: In this paper, augmented Lagrangian methods are proposed to solve state and control constrained optimal control problems, based on the Lagrangian formulation of nonsmooth convex optimization in Hilbert spaces developed in [K. Ito and K. Kunisch, 1994].
Abstract: We propose augmented Lagrangian methods to solve state and control constrained optimal control problems. The approach is based on the Lagrangian formulation of nonsmooth convex optimization in Hilbert spaces developed in [K. Ito and K. Kunisch, Augmented Lagrangian Methods for Nonsmooth Convex Optimization in Hilbert Spaces, preprint, 1994]. We investigate a linear optimal control problem with a boundary control function as in [M. Bergounioux, Numer. Funct. Anal. Optim., 14 (1993), pp. 515--543]. Both the equation and the constraints are augmented. The proposed methods are general and can be adapted to a much wider class of problems.