Abstract: In many areas of science one often has a given matrix, representing for example a measured data set, and is required to find a matrix that is closest in a suitable norm to this matrix and additionally possesses a structure inherited from the model used or coming from the application. We call these problems structured matrix nearness problems. We look at three different groups of these problems that come from real applications, analyze the properties of the corresponding matrix structure, and propose algorithms to solve them efficiently.

The first part of this thesis concerns the nearness problem of finding the nearest $k$-factor correlation matrix $C(X) = \diag(I_n - XX^T) + XX^T$ to a given symmetric matrix, subject to natural nonlinear constraints on the elements of the $n\times k$ matrix $X$, where distance is measured in the Frobenius norm. Such problems arise, for example, when one is investigating factor models of collateralized debt obligations (CDOs) or multivariate time series. We examine several algorithms for solving the nearness problem that differ in whether or not they can take account of the nonlinear constraints and in their convergence properties. Our numerical experiments show that the performance of the methods depends strongly on the problem, but that, among the tested methods, the spectral projected gradient method is the clear winner.

In the second part we look at two two-sided optimization problems where the matrix of unknowns $Y\in\R^{n\times p}$ lies in the Stiefel manifold. Both problems come from an application in atomic chemistry where one is looking for atomic orbitals with prescribed occupation numbers. We analyze these two problems, propose an analytic optimal solution of the first, and show that an optimal solution of the second can be found by solving a convex quadratic programming problem with box constraints and $p$ unknowns.
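To make the structure of the first problem concrete, the following minimal sketch (illustrative only, not the thesis code; function names are our own) builds the $k$-factor correlation matrix $C(X)$ and evaluates the Frobenius-norm objective against a given symmetric matrix $A$:

```python
import numpy as np

def k_factor_corr(X):
    """Build C(X) = diag(I_n - X X^T) + X X^T for an n-by-k factor matrix X.

    Replacing the diagonal of X X^T by ones gives a matrix with unit
    diagonal, as required of a correlation matrix.
    """
    XXt = X @ X.T
    return XXt + np.diag(1.0 - np.diag(XXt))

def frobenius_distance(A, X):
    """Objective of the nearness problem: ||A - C(X)||_F."""
    return np.linalg.norm(A - k_factor_corr(X), ord="fro")

rng = np.random.default_rng(0)
n, k = 5, 2
X = rng.standard_normal((n, k))
C = k_factor_corr(X)
assert np.allclose(np.diag(C), 1.0)  # unit diagonal by construction
```

For $C(X)$ to be positive semidefinite the rows of $X$ must have 2-norm at most one (so that $\diag(I_n - XX^T)$ is nonnegative); this is the kind of nonlinear constraint on the elements of $X$ referred to above.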
We prove that the latter problem can be solved by the active-set method in at most $2p$ iterations. Subsequently, we analyze the set of optimal solutions $\mathcal{C}=\{Y\in\R^{n\times p}: Y^TY=I_p,\ Y^TNY=D\}$ of the first problem for $N$ symmetric and $D$ diagonal and find that a slight modification of it is a Riemannian manifold. We derive the geometric objects required to make optimization over this manifold possible. We propose an augmented Lagrangian-based algorithm that uses these geometric tools and allows us to optimize an arbitrary smooth function over $\mathcal{C}$. This algorithm can be used to select a particular solution from the set $\mathcal{C}$ by posing a new optimization problem. We compare it numerically with a similar algorithm that does not apply these geometric tools and find that our algorithm yields better performance.

The third part is devoted to low-rank nearness problems in the $Q$-norm, where the matrix of interest additionally has linear structure, meaning it lies in the set spanned by $s$ predefined matrices $U_1,\ldots, U_s\in\{0,1\}^{n\times p}$. These problems are often associated with model reduction, for example in speech encoding, filter design, or latent semantic indexing. We investigate three approaches that support any linear structure and examine further the geometric reformulation by Schuermans et al.\ (2003). We improve their algorithm in terms of reliability by applying the augmented Lagrangian method and show in our numerical tests that the resulting algorithm yields better performance than other existing methods.
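The two defining constraints of the solution set $\mathcal{C}$ are easy to check numerically. The sketch below (an illustration under our own naming, not the thesis implementation) tests membership in $\mathcal{C}$ and constructs one simple member from $p$ eigenvectors of the symmetric matrix $N$, for which $Y^TNY$ is diagonal by construction:

```python
import numpy as np

def in_solution_set(Y, N, D, tol=1e-8):
    """Check Y against the two constraints defining
    C = { Y : Y^T Y = I_p, Y^T N Y = D }."""
    p = Y.shape[1]
    orthonormal = np.linalg.norm(Y.T @ Y - np.eye(p)) < tol
    diagonal_match = np.linalg.norm(Y.T @ N @ Y - D) < tol
    return orthonormal and diagonal_match

rng = np.random.default_rng(1)
n, p = 6, 3
M = rng.standard_normal((n, n))
N = (M + M.T) / 2                 # symmetric N
w, Q = np.linalg.eigh(N)          # orthonormal eigenvectors of N
Y = Q[:, :p]                      # p eigenvectors: Y^T Y = I_p
D = np.diag(w[:p])                # and Y^T N Y = D exactly
assert in_solution_set(Y, N, D)
```

Since many $Y$ satisfy both constraints (e.g., sign flips or rotations within eigenspaces), $\mathcal{C}$ is genuinely a set, which is why selecting a particular solution requires posing the additional optimization problem described above.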