
Showing papers in "Siam Journal on Control and Optimization in 2017"


Journal Article
TL;DR: The Julia programming language and its design are introduced---a dance between specialization and abstraction, recognizing what remains the same after computation and what is best left untouched because it has been built by experts.

1,730 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider continuously differentiable functions with min-max saddle points and study the asymptotic convergence properties of the associated saddle-point dynamics (gradient descent in the first variable and gradient ascent in the second one).
Abstract: This paper considers continuously differentiable functions of two vector variables that have (possibly a continuum of) min-max saddle points. We study the asymptotic convergence properties of the associated saddle-point dynamics (gradient descent in the first variable and gradient ascent in the second one). We identify a suite of complementary conditions under which the set of saddle points is asymptotically stable under the saddle-point dynamics. Our first set of results is based on the convexity-concavity of the function defining the saddle-point dynamics to establish the convergence guarantees. For functions that do not enjoy this feature, our second set of results relies on properties of the linearization of the dynamics, the function along the proximal normals to the saddle set, and the linearity of the function in one variable. We also provide global versions of the asymptotic convergence results. Various examples illustrate our discussion.

143 citations
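The saddle-point dynamics described above (gradient descent in one variable, gradient ascent in the other) can be sketched with a minimal numerical example. The function below is hypothetical, chosen only because it is strictly convex in $x$ and strictly concave in $y$ with a saddle at the origin; it is not taken from the paper.

```python
# Euler discretization of saddle-point dynamics on the hypothetical function
# f(x, y) = x^2 - y^2 + x*y  (convex in x, concave in y, saddle at (0, 0)).
def grad_x(x, y):
    return 2 * x + y          # df/dx

def grad_y(x, y):
    return -2 * y + x         # df/dy

x, y = 1.0, -1.0              # arbitrary initial condition
dt = 0.01                     # Euler step size
for _ in range(5000):
    # descent in x, ascent in y, updated simultaneously
    x, y = x - dt * grad_x(x, y), y + dt * grad_y(x, y)

print(abs(x) < 1e-3 and abs(y) < 1e-3)   # trajectory approaches the saddle
```

For this strictly convex-concave choice the linearization has eigenvalues with negative real part, so the trajectory spirals into the saddle set, consistent with the paper's first set of convergence results.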


Journal ArticleDOI
TL;DR: In this paper, the optimal control of the general stochastic McKean-Vlasov equation under common noise is studied. A dynamic programming principle is established for the value function in the Wasserstein space of probability measures, proved from a flow property of the conditional law of the controlled state process.
Abstract: We study the optimal control of general stochastic McKean-Vlasov equation. Such problem is motivated originally from the asymptotic formulation of cooperative equilibrium for a large population of particles (players) in mean-field interaction under common noise. Our first main result is to state a dynamic programming principle for the value function in the Wasserstein space of probability measures, which is proved from a flow property of the conditional law of the controlled state process. Next, by relying on the notion of differentiability with respect to probability measures due to P.L. Lions [32], and Ito's formula along a flow of conditional measures, we derive the dynamic programming Hamilton-Jacobi-Bellman equation, and prove the viscosity property together with a uniqueness result for the value function. Finally, we solve explicitly the linear-quadratic stochastic McKean-Vlasov control problem and give an application to an interbank systemic risk model with common noise.

117 citations


Journal ArticleDOI
TL;DR: A necessary and sufficient condition for equilibrium controls is derived via a flow of forward-backward stochastic differential equations, and the explicit equilibrium control constructed in [Y. Hu, H. Jin, and X. Y. Zhou, SIAM J. Control Optim., 50 (2012), pp. 1548--1572] is proved to be unique.
Abstract: In this paper, we continue our study on a general time-inconsistent stochastic linear--quadratic control problem originally formulated in [Y. Hu, H. Jin, and X. Y. Zhou, SIAM J. Control. Optim., 50 (2012), pp. 1548--1572]. We derive a necessary and sufficient condition for equilibrium controls via a flow of forward--backward stochastic differential equations. When the state is one dimensional and the coefficients in the problem are all deterministic, we prove that the explicit equilibrium control constructed in [Y. Hu, H. Jin, and X. Y. Zhou, SIAM J. Control. Optim., 50 (2012), pp. 1548--1572] is indeed unique. Our proof is based on the derived equivalent condition for equilibria as well as a stochastic version of the Lebesgue differentiation theorem. Finally, we show that the equilibrium strategy is unique for a mean-variance portfolio selection model in a complete financial market where the risk-free rate is a deterministic function of time but all the other market parameters are possibly stochastic pro...

98 citations


Journal ArticleDOI
TL;DR: It is shown that a pseudo-Boolean function in the proper form can play the role of Lyapunov functions for BNs, and a converse Lyapunov theorem as well as a necessary and sufficient condition are obtained for the asymptotical stability.
Abstract: This paper investigates the Lyapunov-based stability analysis and the construction of Lyapunov functions for Boolean networks (BNs) and establishes a new framework of Lyapunov theory for BNs via the semitensor product of matrices. First, we study how to define a Lyapunov function for BNs. A proper form of pseudo-Boolean functions is found, and the concept of (strict-) Lyapunov functions is thus given. It is shown that a pseudo-Boolean function in the proper form can play the role of Lyapunov functions for BNs, based on which several Lyapunov-based stability results are obtained. Second, we study how to construct a Lyapunov function for BNs and propose two methods for this problem: one is a definition-based method, and the other is a structure-based one. Third, the existence of strict-Lyapunov functions is studied, and a converse Lyapunov theorem as well as a necessary and sufficient condition are obtained for the asymptotical stability. Finally, as an application, the obtained results are applied to the s...

96 citations
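The idea of certifying stability of a Boolean network with a decreasing pseudo-Boolean function can be illustrated by brute force on a toy network. The update rules and candidate function below are hypothetical and do not use the paper's semitensor-product machinery; for two nodes the state space is small enough to check every transition.

```python
from itertools import product

# Toy Boolean network (hypothetical update rules, not from the paper).
def step(x1, x2):
    return (x1 and x2, x1 and not x2)

# Candidate pseudo-Boolean function, linear in the node states.
def V(x1, x2):
    return 2 * int(x1) + int(x2)

fixed_point = (False, False)
ok = True
for state in product([False, True], repeat=2):
    if state == fixed_point:
        continue                      # V need not decrease at the equilibrium
    nxt = step(*state)
    ok = ok and V(*nxt) < V(*state)   # strict decrease along every transition
print(ok)                             # V certifies asymptotic stability of (0, 0)
```

Every non-equilibrium state strictly decreases $V$, so all trajectories reach $(0,0)$; this is the finite-state analogue of the Lyapunov-based stability results the paper develops systematically.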


Journal ArticleDOI
TL;DR: For one-dimensional parabolic partial differential equations with disturbances at both boundaries and distributed disturbances, input-to-state stability (ISS) estimates in various norms are provided.
Abstract: For one-dimensional parabolic partial differential equations with disturbances at both boundaries and distributed disturbances we provide input-to-state stability (ISS) estimates in various norms. Due to the lack of an ISS Lyapunov functional for boundary disturbances, the proof methodology uses (i) an eigenfunction expansion of the solution, and (ii) a finite-difference scheme. The ISS estimate for the sup-norm leads to a refinement of the well-known maximum principle for the heat equation. Finally, the obtained results are applied to quasi-static thermoelasticity models that involve nonlocal boundary conditions. Small-gain conditions that guarantee the global exponential stability of the zero solution for such models are derived.

94 citations
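The sup-norm ISS estimate and its connection to the maximum principle can be seen in a small finite-difference experiment (the paper itself uses a finite-difference scheme in its proofs). The grid, step sizes, and disturbance signal below are illustrative assumptions; the paper's norms and constants are not reproduced.

```python
import numpy as np

# Explicit finite differences for u_t = u_xx on (0, 1) with boundary
# disturbances u(0, t) = u(1, t) = d(t), zero initial data.
N = 50                        # interior grid points
dx = 1.0 / (N + 1)
dt = 0.4 * dx**2              # r = 0.4 <= 1/2 keeps the scheme monotone
u = np.zeros(N)
d_max = 0.1                   # sup-norm bound on the boundary disturbance
for k in range(20000):
    d = d_max * np.sin(0.01 * k)   # hypothetical bounded disturbance
    lap = np.empty(N)
    lap[0] = (d - 2 * u[0] + u[1]) / dx**2
    lap[-1] = (u[-2] - 2 * u[-1] + d) / dx**2
    lap[1:-1] = (u[:-2] - 2 * u[1:-1] + u[2:]) / dx**2
    u = u + dt * lap
# discrete maximum principle: the state stays below the disturbance bound
print(np.max(np.abs(u)) <= d_max)
```

Because each update is a convex combination of neighboring values and the boundary data, the solution's sup-norm never exceeds the sup-norm of the disturbance, a discrete shadow of the ISS estimate.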


Journal ArticleDOI
TL;DR: In this paper, the authors rigorously connect the problem of optimal control of McKean-Vlasov dynamics with large systems of interacting controlled state processes, and show that the empirical distributions of near-optimal control-state pairs for the $n$-state systems, as $n$ tends to infinity, admit limit points in distribution (if the objective functions are suitably coercive).
Abstract: This paper rigorously connects the problem of optimal control of McKean--Vlasov dynamics with large systems of interacting controlled state processes. Precisely, the empirical distributions of near-optimal control-state pairs for the $n$-state systems, as $n$ tends to infinity, admit limit points in distribution (if the objective functions are suitably coercive), and every such limit is supported on the set of optimal control-state pairs for the McKean--Vlasov problem. Conversely, any distribution on the set of optimal control-state pairs for the McKean--Vlasov problem can be realized as a limit in this manner. Arguments are based on controlled martingale problems, which lend themselves naturally to existence proofs; along the way it is shown that a large class of McKean--Vlasov control problems admit optimal Markovian controls.

93 citations


Journal ArticleDOI
TL;DR: A protocol for the average consensus problem on any fixed undirected graph whose convergence time scales linearly in the total number of nodes $n$ and whose error is $O(L \sqrt{n/T})$.
Abstract: We describe a protocol for the average consensus problem on any fixed undirected graph whose convergence time scales linearly in the total number of nodes $n$. The protocol relies only on nearest-neig...

75 citations
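For context, the baseline that this paper improves on is plain nearest-neighbor averaging, sketched below on a path graph with Metropolis-style weights (an assumption for illustration; the paper's linear-time protocol adds extra memory per node and is not reproduced here).

```python
import numpy as np

# Nearest-neighbor averaging consensus on a path graph of n nodes.
n = 20
x = np.arange(n, dtype=float)        # initial values; their average is the target
target = x.mean()

# Symmetric (hence doubly stochastic) weight matrix: 1/3 to each neighbor,
# remainder on the diagonal, so the average is preserved at every step.
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

for _ in range(5000):
    x = W @ x                        # each node averages with its neighbors

print(np.allclose(x, target, atol=1e-6))   # all nodes agree on the average
```

On a path graph this baseline needs on the order of $n^2$ iterations, which is precisely the gap the paper's linearly convergent protocol closes.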


Journal ArticleDOI
TL;DR: It is shown that the set of distributed strategies is asymptotically team-optimal, and the asymptotically optimal social cost value can be obtained explicitly.
Abstract: This paper investigates social optima of mean field linear-quadratic-Gaussian (LQG) control models with Markov jump parameters. The common objective of the agents is to minimize a social cost---the cost average of the whole society. In the cost functions there are coupled mean field terms. First, we consider the centralized case and get a parameterized equation of mean field effect. Then, we design a set of distributed strategies by solving a limiting optimal control problem in an augmented state space subject to the consistency requirement for mean field approximation. It is shown that the set of distributed strategies is asymptotically team-optimal, and the asymptotically optimal social cost value can be obtained explicitly. The optimal social average cost is compared with the optimal individual cost in mean field games by virtue of the explicit expressions, and the difference is further illustrated by a numerical example.

72 citations


Journal ArticleDOI
TL;DR: In this article, a time-inconsistent stochastic optimal control problem with a recursive cost functional is studied, and an approximate equilibrium strategy is introduced, which is time-consistent and locally approximately optimal.
Abstract: A time-inconsistent stochastic optimal control problem with a recursive cost functional is studied. Equilibrium strategy is introduced, which is time-consistent and locally approximately optimal. By means of multiperson hierarchical differential games associated with partitions of the time interval, a family of approximate equilibrium strategy is constructed, and by sending the mesh size of the time interval partition to zero, an equilibrium Hamilton--Jacobi--Bellman (HJB) equation is derived through which the equilibrium value function can be identified and the equilibrium strategy can be obtained. Moreover, a well-posedness result of the equilibrium HJB equation is established under certain conditions, and a verification theorem is proved. Finally, an illustrative example is presented, and some comparisons of different definitions of equilibrium strategy are put in order.

66 citations


Journal ArticleDOI
TL;DR: In this paper, the null controllability of evolution equations with memory terms is studied, where the problem is reduced to obtaining suitable observability estimates for the adjoint system.
Abstract: This article is devoted to studying the null controllability of evolution equations with memory terms. The problem is challenging not only because the state equation contains memory terms but also because the classical controllability requirement at the final time has to be reinforced, involving the contribution of the memory term, to ensure that the solution reaches the equilibrium. Using duality arguments, the problem is reduced to obtaining suitable observability estimates for the adjoint system. We first consider finite-dimensional dynamical systems involving memory terms and derive rank conditions for controllability. Then the null controllability property is established for some parabolic equations with memory terms, by means of Carleman estimates.

Journal ArticleDOI
TL;DR: In this article, the design of saturated control in the context of partial differential equations is studied. But the authors focus on a Kortewegde Vries equation, which is a nonlinear mathematical model of waves on shallow water surfaces.
Abstract: This article deals with the design of saturated controls in the context of partial differential equations. It focuses on a Korteweg–de Vries equation, which is a nonlinear mathematical model of waves on shallow water surfaces. Two different types of saturated controls are considered. The well-posedness is proven applying a Banach fixed-point theorem, using some estimates of this equation and some properties of the saturation function. The proof of the asymptotic stability of the closed-loop system is separated in two cases: (i) when the control acts on all the domain, a Lyapunov function together with a sector condition describing the saturating input is used to conclude on the stability; (ii) when the control is localized, we argue by contradiction. Some numerical simulations illustrate the stability of the closed-loop nonlinear partial differential equation.

Journal ArticleDOI
TL;DR: First, a state feedback law is designed to exponentially stabilize the closed-loop system with an arbitrarily fast convergence rate, and collocated and anticollocated observers are designed, using a single boundary measurement for each plant.
Abstract: The problem of output feedback boundary stabilization is considered for $n$ coupled plants, distributed over the one-dimensional spatial domain [0,1] where they are governed by linear reaction-diffusion partial differential equations (PDEs). All plants have constant parameters and each is equipped with its own scalar boundary control input, acting at one end of the domain. First, a state feedback law is designed to exponentially stabilize the closed-loop system with an arbitrarily fast convergence rate. Then, collocated and anticollocated observers are designed, using a single boundary measurement for each plant. The exponential convergence of the observed state towards the actual one is demonstrated for both observers, with a convergence rate that can be made as fast as desired. Finally, the state feedback controller and the preselected, either collocated or anticollocated, observer are coupled together to yield an output feedback stabilizing controller. The distinct treatments are proposed separately fo...

Journal ArticleDOI
TL;DR: In this paper, the authors consider a diffusion type problem in which the diffusion operator is the $s$th power of a positive definite operator having a discrete spectrum in ${\mathbb R}^+$, and prove existence, uniqueness, and differentiability properties with respect to the fractional parameter.
Abstract: In this paper, we consider a rather general linear evolution equation of fractional type, namely, a diffusion type problem in which the diffusion operator is the $s$th power of a positive definite operator having a discrete spectrum in ${\mathbb R}^+$. We prove existence, uniqueness, and differentiability properties with respect to the fractional parameter $s$. These results are then employed to derive existence as well as first-order necessary and second-order sufficient optimality conditions for a minimization problem, which is inspired by considerations in mathematical biology. In this problem, the fractional parameter $s$ serves as the “control parameter” that needs to be chosen in such a way as to minimize a given cost functional. This problem constitutes a new class of identification problems: while usually in identification problems the type of the differential operator is prescribed and one or several of its coefficient functions need to be identified, in the present case one has to determine the ...

Journal ArticleDOI
TL;DR: By employing the limited differentiability properties of the control-to-state map, first-order necessary optimality conditions in qualified form are established, which are equivalent to the purely primal condition saying that the directional derivative of the reduced objective in feasible directions is nonnegative.
Abstract: This paper is concerned with an optimal control problem governed by a semilinear, nonsmooth operator differential equation. The nonlinearity is locally Lipschitz-continuous and directionally differentiable but not Gâteaux-differentiable. By employing the limited differentiability properties of the control-to-state map, first-order necessary optimality conditions in qualified form are established, which are equivalent to the purely primal condition saying that the directional derivative of the reduced objective in feasible directions is nonnegative. The paper ends with the application of the general results to a semilinear heat equation.

Journal ArticleDOI
TL;DR: The delay, considered as a design parameter, is intentionally induced using a class of delay-based controllers, aiming at improving closed-loop system performance by deploying a pole-placement technique to attain the fastest velocity of response to control commands.
Abstract: This paper explores the deliberate introduction of delays in feedback loops and their positive impact on linear time-invariant, single-input, single-output (LTI-SISO) systems. The delay, considered as a design parameter, is intentionally induced using a class of delay-based controllers, aiming at improving closed-loop system performance by deploying a pole-placement technique to attain the fastest velocity of response to control commands. The technique, based on an algebraic geometric analysis, is able to extract exact tuning formulas for the controller gains and the delay.

Journal ArticleDOI
TL;DR: In this paper, error feedback controllers for robust output tracking and disturbance rejection of a regular linear system with nonsmooth reference and disturbance signals are presented. For sufficiently smooth signals, the output converges to the reference at a rate that depends on the behavior of the plant's transfer function on the imaginary axis.
Abstract: We construct two error feedback controllers for robust output tracking and disturbance rejection of a regular linear system with nonsmooth reference and disturbance signals. We show that for sufficiently smooth signals the output converges to the reference at a rate that depends on the behavior of the transfer function of the plant on the imaginary axis. In addition, we construct a controller that can be designed to achieve robustness with respect to a given class of uncertainties in the system, and we present a novel controller structure for output tracking and disturbance rejection without the robustness requirement. We also generalize the internal model principle for regular linear systems with boundary disturbance and for controllers with unbounded input and output operators. The construction of controllers is illustrated with an example where we consider output tracking of a nonsmooth periodic reference signal for a two-dimensional heat equation with boundary control and observation, and with periodi...

Journal ArticleDOI
TL;DR: This article considers a linear-quadratic mean field game between a leader (dominating player) and a group of followers (agents) under the Stackelberg game setting as proposed in [A. Bensoussan, M. Chau, and S. Yam, Appl. Math. Optim., 74 (2016), pp. 91-128].
Abstract: In this article, we consider a linear-quadratic mean field game between a leader (dominating player) and a group of followers (agents) under the Stackelberg game setting as proposed in [A. Bensoussan, M. Chau, and S. Yam, Appl. Math. Optim., 74 (2016), pp. 91-128], so that the evolution of each individual follower is now also subjected to delay effects from both his/her state and control variables, as well as those of the leader. The overall Stackelberg game is solved by tackling three subproblems hierarchically. Their resolution corresponds to the establishment of the existence and uniqueness of the solutions of three different forward-backward stochastic functional differential equations, which we manage by applying the unified continuation method as first developed in, for example, [Y. Hu and S. Peng, Probab. Theory Related Fields, 103 (1995), pp. 273-283] and [X. Xu, Fully Coupled Forward-Backward Stochastic Functional Differential Equations and Applications to Quadratic Optimal Control, preprint, arX...

Journal ArticleDOI
TL;DR: The results contribute an efficient framework for solving time-inconsistent CVaR-based dynamic optimization and extend the applicability of the proposed algorithm to a more general class of risk metrics, which includes mean-variance and median-deviation.
Abstract: We consider continuous-time stochastic optimal control problems featuring conditional value-at-risk (CVaR) in the objective. The major difficulty in these problems arises from time inconsistency, which prevents us from directly using dynamic programming. To resolve this challenge, we convert to an equivalent bilevel optimization problem in which the inner optimization problem is standard stochastic control. Furthermore, we provide conditions under which the outer objective function is convex and differentiable. We compute the outer objective's value via a Hamilton--Jacobi--Bellman equation and its gradient via the viscosity solution of a linear parabolic equation, which allows us to perform gradient descent. The significance of this result is that we provide an efficient dynamic-programming-based algorithm for optimal control of CVaR without lifting the state space. To broaden the applicability of the proposed algorithm, we propose convergent approximation schemes in cases where our key assumptions do not...
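The bilevel structure the abstract describes rests on the standard Rockafellar--Uryasev representation, $\mathrm{CVaR}_\alpha(X) = \min_t \{\, t + \mathbb{E}[(X - t)^+]/(1-\alpha) \,\}$, whose outer minimization over $t$ plays the role of the outer problem. A sample-based sketch (hypothetical Gaussian losses, not the paper's control setting):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=200_000)       # hypothetical loss samples
a = 0.95                            # confidence level

def ru_objective(t):
    # Rockafellar--Uryasev outer objective for CVaR_a
    return t + np.mean(np.maximum(X - t, 0.0)) / (1.0 - a)

ts = np.linspace(0.0, 3.0, 601)     # grid search over the outer variable t
vals = [ru_objective(t) for t in ts]
t_star = ts[int(np.argmin(vals))]
cvar = min(vals)

var = np.quantile(X, a)             # empirical VaR for comparison
print(abs(t_star - var) < 0.05)            # the minimizer is VaR_a
print(abs(cvar - X[X >= var].mean()) < 0.05)  # the minimum is the tail mean
```

In the paper, the inner evaluation of the objective for fixed $t$ becomes a standard stochastic control problem, and gradient descent replaces this grid search over the outer variable.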

Journal ArticleDOI
TL;DR: In this paper, infinite horizon optimal control problems for nonlinear high-dimensional dynamical systems are studied, and a reduced-order model is derived for the dynamical system using the method of proper orthogonal decomposition (POD).
Abstract: In this paper, infinite horizon optimal control problems for nonlinear high-dimensional dynamical systems are studied. Nonlinear feedback laws can be computed via the value function characterized as the unique viscosity solution to the corresponding Hamilton--Jacobi--Bellman (HJB) equation which stems from the dynamic programming approach. However, the bottleneck is mainly due to the curse of dimensionality, and HJB equations are solvable only in a relatively small dimension. Therefore, a reduced-order model is derived for the dynamical system, using the method of proper orthogonal decomposition (POD). The resulting errors in the HJB equations are estimated by an a priori error analysis, which is utilized in the numerical approximation to ensure a desired accuracy for the POD method. Numerical experiments illustrate the theoretical findings.
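The POD step can be sketched on a toy linear system whose trajectory is engineered to live in a low-dimensional subspace (the dimensions, operator, and snapshot scheme below are illustrative assumptions, not the paper's setup): collect snapshots, take the leading left singular vectors, and project the dynamics.

```python
import numpy as np

n, r = 60, 4
rng = np.random.default_rng(0)
U0, _ = np.linalg.qr(rng.normal(size=(n, r)))   # hidden r-dimensional subspace
M = -np.diag([0.5, 2.0, 4.0, 8.0])              # stable dynamics on the subspace
A = U0 @ M @ U0.T - (np.eye(n) - U0 @ U0.T)     # full-order operator
x0 = U0 @ np.ones(r)                            # initial state inside the subspace

# Collect snapshots of x' = A x with explicit Euler.
dt, steps = 0.01, 200
X = np.empty((n, steps))
x = x0.copy()
for k in range(steps):
    X[:, k] = x
    x = x + dt * (A @ x)

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(X, full_matrices=False)
Ur = U[:, :r]
Ar = Ur.T @ A @ Ur                              # reduced-order operator

# The r-dimensional reduced model reproduces the full trajectory.
z = Ur.T @ x0
x = x0.copy()
err = 0.0
for k in range(steps):
    err = max(err, np.linalg.norm(x - Ur @ z))
    x = x + dt * (A @ x)
    z = z + dt * (Ar @ z)
print(err < 1e-8)
```

In the paper the same projection is applied before solving the HJB equation, so that dynamic programming runs in the reduced coordinates $z$ rather than in dimension $n$.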

Journal ArticleDOI
TL;DR: This work proves existence and uniqueness of a solution to the BSDE system and characterize both the value function and the optimal strategy in terms of the unique solution to that system.
Abstract: We study an optimal execution problem in illiquid markets with both instantaneous and persistent price impact and stochastic resilience when only absolutely continuous trading strategies are admissible. In our model the value function can be described by a three-dimensional system of backward stochastic differential equations (BSDE) with a singular terminal condition in one component. We prove existence and uniqueness of a solution to the BSDE system and characterize both the value function and the optimal strategy in terms of the unique solution to the BSDE system. Our existence proof is based on an asymptotic expansion of the BSDE system at the terminal time that allows us to express the system in terms of an equivalent system with finite terminal value but singular driver.

Journal ArticleDOI
TL;DR: This paper studies boundary controllability of the Korteweg--de Vries equation posed on a finite interval, in which, because of the third-order character of the equation, three boundary conditions are required to secure the well-posedness of the system.
Abstract: This paper studies boundary controllability of the Korteweg--de Vries equation posed on a finite interval, in which, because of the third-order character of the equation, three boundary conditions are required to secure the well-posedness of the system. We consider the cases where one, two, or all three of those boundary data are employed as boundary control inputs. The system is first linearized around the origin and the corresponding linear system is proved to be exactly boundary controllable if using two or three boundary control inputs. In the case where only one control input is allowed to be used, the linearized system is known to be only null controllable if the single control input acts on the left end of the spatial domain. By contrast, if the single control input acts on the right end of the spatial domain, the linearized system is shown to be exactly controllable if and only if the length of the spatial domain does not belong to a set of critical values. Moreover, the nonlinear system is shown ...

Journal ArticleDOI
TL;DR: It is shown that the new probabilistic similarity relations, inspired by a notion of simulation developed for finite-state models, can be effectively employed over general Markov decision processes for verification purposes, and specifically for control refinement from abstract models.
Abstract: In this work we introduce new approximate similarity relations that are shown to be key for policy (or control) synthesis over general Markov decision processes. The models of interest are discrete-time Markov decision processes, endowed with uncountably infinite state spaces and metric output (or observation) spaces. The new relations, underpinned by the use of metrics, allow, in particular, for a useful trade-off between deviations over probability distributions on states, and distances between model outputs. We show that the new probabilistic similarity relations, inspired by a notion of simulation developed for finite-state models, can be effectively employed over general Markov decision processes for verification purposes, and specifically for control refinement from abstract models.

Journal ArticleDOI
TL;DR: A new point of view on formation control is provided, and a partial solution is proposed by exhibiting a class of rigid graphs and control laws for which all stable equilibria are target configurations.
Abstract: Formation control deals with the design of decentralized control laws that stabilize mobile, autonomous agents at prescribed distances from each other. We call any configuration of the agents a tar...

Journal ArticleDOI
TL;DR: A data-driven order reduction method for nonlinear control systems, drawing on recent progress in machine learning and statistical dimensionality reduction, which leads to a closed, reduced order dynamical system which captures the essential input-output characteristics of the original model.
Abstract: We introduce a data-driven model approximation method for nonlinear control systems, drawing on recent progress in machine learning and statistical dimensionality reduction. The method is based on embedding the nonlinear system in a high- (or infinite-) dimensional reproducing kernel Hilbert space (RKHS) where linear balanced truncation may be carried out implicitly. This leads to a nonlinear reduction map which can be combined with a representation of the system belonging to an RKHS to give a closed, reduced order dynamical system which captures the essential input-output characteristics of the original model. Working in RKHS provides a convenient, general functional-analytical framework for theoretical understanding. Empirical simulations illustrating the approach are also provided.

Journal ArticleDOI
TL;DR: This paper extends the analysis of the DeGroot--Friedkin model to two general scenarios where the interpersonal influence network is not necessarily strongly connected and where the individuals form opinions with reducible relative interactions.
Abstract: Our recent work [Jia et al., SIAM Rev., 57 (2015), pp. 367--397] proposes the DeGroot--Friedkin dynamical model for the analysis of social influence networks. This dynamical model describes the evolution of self-appraisals in a group of individuals forming opinions in a sequence of issues. Under a strong connectivity assumption, the model predicts the existence and semiglobal attractivity of equilibrium configurations for self-appraisals and social power in the group. In this paper, we extend the analysis of the DeGroot--Friedkin model to two general scenarios where the interpersonal influence network is not necessarily strongly connected and where the individuals form opinions with reducible relative interactions. In the first scenario, the relative interaction digraph is reducible with globally reachable nodes; in the second scenario, the condensation of the relative interaction digraph has multiple aperiodic sinks. For both scenarios, we provide the explicit mathematical formulations of the DeGroot--Fr...
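The underlying DeGroot--Friedkin update for the strongly connected case (from the Jia et al. reference cited above) maps self-appraisals by $x_i \mapsto c_i/(1-x_i)$, normalized to the simplex, where $c$ is the dominant left eigenvector of the relative interaction matrix. A small hypothetical network illustrates the convergence to equilibrium that this paper extends to reducible networks:

```python
import numpy as np

# Relative interaction matrix: row-stochastic with zero diagonal (hypothetical).
C = np.array([[0.0, 0.6, 0.4],
              [0.5, 0.0, 0.5],
              [0.3, 0.7, 0.0]])

# Eigenvector centrality: dominant left eigenvector of C, normalized.
w, V = np.linalg.eig(C.T)
c = np.real(V[:, np.argmax(np.real(w))])
c = c / c.sum()

def F(x):
    # DeGroot--Friedkin self-appraisal map for the strongly connected case.
    y = c / (1.0 - x)
    return y / y.sum()

x = np.full(3, 1.0 / 3.0)       # uniform initial self-appraisals
for _ in range(200):
    x = F(x)

print(np.allclose(x, F(x), atol=1e-6))   # equilibrium self-appraisals reached
print(np.isclose(x.sum(), 1.0))          # x stays on the simplex
```

The scenarios treated in this paper change the structure of $C$ (reducible, multiple sinks), which alters $c$ and hence the equilibrium social power distribution.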

Journal ArticleDOI
TL;DR: This paper investigates the exact controllability of first order one-dimensional quasi-linear hyperbolic systems by internal controls that are localized in space in some part of the domain by using the notion of algebraic solvability due to Gromov.
Abstract: In this paper we investigate the exact controllability of $n \times n$ first order one-dimensional quasi-linear hyperbolic systems by $m

Journal ArticleDOI
TL;DR: A variational formula is derived for the optimal growth rate of reward in the infinite horizon risk-sensitive control problem for discrete time Markov decision processes with compact metric state and action spaces by extending a formula of Donsker and Varadhan for the Perron-Frobenius eigenvalue of a positive operator.
Abstract: We derive a variational formula for the optimal growth rate of reward in the infinite horizon risk-sensitive control problem for discrete time Markov decision processes with compact metric state and action spaces, extending a formula of Donsker and Varadhan for the Perron--Frobenius eigenvalue of a positive operator. This leads to a concave maximization formulation of the problem of determining this optimal growth rate.

Journal ArticleDOI
TL;DR: In this paper, a distributed optimal control of a time-discrete Cahn-Hilliard-Navier-Stokes system with variable densities is studied, and the existence of solutions to the primal system and of optimal controls is established for the original problem as well as for a family of regularized problems.
Abstract: This paper is concerned with the distributed optimal control of a time-discrete Cahn--Hilliard--Navier--Stokes system with variable densities. It focuses on the double-obstacle potential which yields an optimal control problem for a family of coupled systems in each time instant of a variational inequality of fourth order and the Navier--Stokes equation. By proposing a suitable time-discretization, energy estimates are proved, and the existence of solutions to the primal system and of optimal controls is established for the original problem as well as for a family of regularized problems. The latter correspond to Moreau--Yosida-type approximations of the double-obstacle potential. The consistency of these approximations is shown, and first-order optimality conditions for the regularized problems are derived. Through a limit process with respect to the regularization parameter, a stationarity system for the original problem is established. The resulting system corresponds to a function space version of C-s...

Journal ArticleDOI
TL;DR: The Kalman--Bucy filter, studied extensively in the literature since the seminal 1961 paper of Kalman and Bucy, is generally the only finite-dimensional exact instance of the Bayes filter for continuous state-space models.
Abstract: The Kalman--Bucy filter is the optimal state estimator for an Ornstein--Uhlenbeck diffusion given that the system is partially observed via a linear diffusion-type (noisy) sensor. Under Gaussian assumptions, it provides a finite-dimensional exact implementation of the optimal Bayes filter. It is generally the only such finite-dimensional exact instance of the Bayes filter for continuous state-space models. Consequently, this filter has been studied extensively in the literature since the seminal 1961 paper of Kalman and Bucy. The purpose of this work is to review, re-prove and refine existing results concerning the dynamical properties of the Kalman--Bucy filter so far as they pertain to filter stability and convergence. The associated differential matrix Riccati equation is a focal point of this study with a number of bounds, convergence, and eigenvalue inequalities rigorously proven. New results are also given in the form of exponential and comparison inequalities for both the filter and the Riccati flow.
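The Riccati flow at the center of this study can be sketched in the scalar case. For $dX = aX\,dt + s\,dW$ observed through $dY = hX\,dt + r\,dB$, the error covariance solves $\dot P = 2aP + s^2 - (hP/r)^2$ and converges to the positive root of the algebraic Riccati equation from any nonnegative initialization (parameters below are hypothetical):

```python
import numpy as np

a, s, h, r = 0.5, 1.0, 1.0, 1.0     # hypothetical scalar model parameters

def riccati_rhs(P):
    # dP/dt for the scalar Kalman--Bucy error covariance
    return 2 * a * P + s**2 - (h * P / r) ** 2

# Positive root of the algebraic Riccati equation 2aP + s^2 - (hP/r)^2 = 0.
P_inf = (a + np.sqrt(a**2 + (s * h / r) ** 2)) * (r / h) ** 2

# Integrate the Riccati flow from two different initial covariances.
P, Q, dt = 10.0, 0.0, 1e-3
for _ in range(20000):
    P += dt * riccati_rhs(P)
    Q += dt * riccati_rhs(Q)

print(abs(P - P_inf) < 1e-6 and abs(Q - P_inf) < 1e-6)
```

Both flows collapse onto the same stabilizing solution, the scalar shadow of the matrix convergence and comparison inequalities proven in the paper.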