Convergence in Nonlinear Filtering for Stochastic Delay Systems
Citations
On the Moments of the Modulus of Continuity of Itô Processes
Conditional Ergodicity in Infinite Dimension
Approximation of Nonlinear Filters for Markov Systems with Delayed Observations
Time Discretisation and Rate of Convergence for the Optimal Control of Continuous-Time Stochastic Systems with Delay
References
Statistics of Random Processes
Feynman-Kac Formulae: Genealogical and Interacting Particle Systems with Applications
Stochastic Filtering Theory
On the Optimal Filtering of Diffusion Processes
Frequently Asked Questions (12)
Q2. What is the last statement of the Theorem?
Since the filter π is continuous in time, the last statement of the Theorem is an immediate consequence of the convergence in probability of π̃^n to π.
Q3. What is the approximation X^n of the state process?
The approximation X^n of the state process is the linear interpolation of the Euler discretization scheme with step δ = δ_n = T/n and τ = mδ (as in Chang [4]; for the sake of simplicity, the authors assume that T/τ is rational):
X^n(ℓδ) = η(ℓδ), −m ≤ ℓ ≤ 0,
X^n((ℓ+1)δ) = X^n(ℓδ) + a(ℓδ, Π_{ℓδ}X^n)δ + b(ℓδ, Π_{ℓδ}X^n)[W̃((ℓ+1)δ) − W̃(ℓδ)], 0 ≤ ℓ ≤ n−1.
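The Euler scheme above can be sketched in Python. This is an illustrative stand-in, not the authors' code: the drift a, diffusion b, initial segment η, and the representation of the segment Π_{ℓδ}X^n as the last m+1 grid values are all assumptions.

```python
import numpy as np

def euler_delay(a, b, eta, T=1.0, n=100, m=10, seed=None):
    """Euler scheme for a delay SDE on the grid 0, d, ..., T with d = T/n.

    eta gives the initial segment on the grid points -m, ..., 0; the
    'segment' passed to a and b is the array of the last m+1 grid values,
    a finite-dimensional stand-in for the segment Pi_{l d} X^n.
    """
    rng = np.random.default_rng(seed)
    d = T / n
    X = np.empty(n + m + 1)
    X[: m + 1] = [eta(l * d) for l in range(-m, 1)]  # X(ld) = eta(ld), -m <= l <= 0
    for l in range(n):
        k = m + l                      # array index of grid point ld
        seg = X[k - m : k + 1]         # values on [ld - tau, ld]
        dW = np.sqrt(d) * rng.standard_normal()
        X[k + 1] = X[k] + a(l * d, seg) * d + b(l * d, seg) * dW
    return X[m:]                       # X(0), X(d), ..., X(T)

# drift pulling toward the delayed value X(t - tau), constant diffusion
Xn = euler_delay(a=lambda t, s: -0.5 * s[0], b=lambda t, s: 1.0,
                 eta=lambda t: 1.0, seed=0)
```

Linear interpolation between the returned grid values then gives the continuous-time approximation described in the answer.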
Q4. What is the optimal filtering problem for a partially observable model?
In [4], Chang considers the optimal filtering problem for a partially observable model in which the state process and the observation process are real valued and satisfy the stochastic nonlinear differential equations
dX(t) = a(t, Π_t X)dt + dW̃(t), Π_0 X = η,
dY(t) = h(t, Π_t X)dt + dW(t), Y(0) = 0,
where W̃ and W are two independent Brownian motions.
Q5. What is the law of the approximation of the state process?
Under P^{0,n}, the processes X^n and Y^n are independent and the law of the approximated state process is the same under P and P^{0,n}; hence, for t ∈ [ℓδ, (ℓ+1)δ), 0 ≤ ℓ ≤ n,
σ^n_t(φ) = E[φ(Π_t X^n) L^n_t(X^n(·), y_0, y_1, …, y_ℓ, y)], evaluated at y_0 = Y^n_0, y_1 = Y^n_δ, …, y_ℓ = Y^n_{ℓδ}, y = Y^n_t.
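This is a Kallianpur-Striebel type representation: the unnormalized filter is an expectation of φ applied to the signal, weighted by the likelihood functional, with the observed path frozen at its realized values. A minimal Monte Carlo sketch, assuming a Gaussian discrete Girsanov exponent as the likelihood (a stand-in for the paper's L^n_t, not its exact form):

```python
import numpy as np

def sigma_mc(phi, h, paths, Y, d):
    """Monte Carlo sketch of an unnormalized filter sigma_T(phi) = E[phi(X_T) L_T].

    paths : (N, n+1) independent signal paths on the grid 0, d, ..., nd
    Y     : (n+1,)   observed path on the same grid (frozen, as in the formula)
    L_T is a discrete Girsanov exponent built from the observation
    increments; this Gaussian form is an assumption.
    """
    H = h(paths[:, :-1])                       # h along each path
    dY = np.diff(Y)                            # observation increments
    logL = (H * dY).sum(axis=1) - 0.5 * (H**2).sum(axis=1) * d
    w = np.exp(logL)
    return (phi(paths[:, -1]) * w).mean()      # divide by sigma_T(1) for pi_T(phi)

rng = np.random.default_rng(1)
d, n, N = 0.01, 100, 5000
steps = np.sqrt(d) * rng.standard_normal((N, n))
paths = np.concatenate([np.zeros((N, 1)), np.cumsum(steps, axis=1)], axis=1)
Y = np.zeros(n + 1)                            # a dummy observed path
est = sigma_mc(phi=lambda x: x**2, h=lambda x: x, paths=paths, Y=Y, d=d)
```

Normalizing by the same estimate with φ ≡ 1 yields an approximation of the filter itself.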
Q6. What is the typical situation when dealing with approximation problems?
When dealing with approximation problems the following situation typically arises: the true model is (X, Y), so the authors observe Y, while the models (X^n, Y^n) are merely more tractable approximations; indeed, it may be impossible to observe the approximating process Y^n at all.
Q7. What is the probability density of an (n+1)-dimensional random vector?
For τ_0 < ⋯ < τ_n = 0, the (n+1)-dimensional random vector (η(τ_i); 0 ≤ i ≤ n) has a probability density w.r.t. the Lebesgue measure on R^{n+1}.
Q8. What is the simplest way to prove the weak convergence of the observation process?
With the approximations (14) or (15) for the observation process it is natural to take
𝒳^n_t = (t, Π_t X^n), (22)
or
𝒳^n_t = (t, Π_{δ⌊t/δ⌋} X^n), (23)
as an approximation for the signal process.
Q9. What is the condition for (B4) to hold?
To conclude this section, observe that a sufficient condition for (B4') to hold is that
(B4'') lim_{n→∞} E(∫_0^T |h_n(X^n_s) − h(X_s)|² ds) = 0.
Q10. What is the way to describe the optimal filter?
Y^n((ℓ+1)δ) = Y^n(ℓδ) + h(ℓδ, Π_{ℓδ}X^n)δ + [W((ℓ+1)δ) − W(ℓδ)], 0 ≤ ℓ ≤ n−1. (17)
Then Chang shows that the optimal filter for the discrete-time approximation can be computed by an explicit procedure, and he verifies the weak convergence of the approximating process and the approximating filter to the original ones.
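Chang's explicit procedure is not reproduced here, but the generic recursive structure of a filter for such a discrete-time model can be sketched with a bootstrap particle filter. This is a standard stand-in, not the paper's method: propagating without the delayed segment and using a Gaussian increment likelihood are simplifying assumptions.

```python
import numpy as np

def pf_step(particles, y_inc, a, h, d, rng):
    """One bootstrap particle-filter step for a discretized model:
    propagate with an Euler step, weight by the Gaussian likelihood of the
    observation increment Y((l+1)d) - Y(ld) ~ N(h(x) d, d), then resample."""
    x = particles + a(particles) * d + np.sqrt(d) * rng.standard_normal(particles.shape)
    logw = -0.5 * (y_inc - h(x) * d) ** 2 / d      # increment log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(len(x), size=len(x), p=w)     # multinomial resampling
    return x[idx]

rng = np.random.default_rng(2)
p = np.zeros(1000)                                 # particles at time 0
for _ in range(50):                                # observation increments all 0
    p = pf_step(p, y_inc=0.0, a=lambda x: -x, h=lambda x: x, d=0.02, rng=rng)
filter_mean = p.mean()
```

The empirical distribution of the particles after each step plays the role of the approximating filter at that grid time.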
Q11. What is the main result of the theorem 4.2?
The authors will prove the convergence of X^n to X, in a sense stronger than weak convergence, in Proposition 4.5 below (see also Remark 4.6).
Q12. What is the bound for φ^n_{σ,ℓ}(T)?
Taking into account (29) and (31), and invoking Gronwall's inequality, the authors obtain a bound for φ^n_{σ,ℓ}(T) = φ^n_{σ^n_N,ℓ}(T) that is uniform in n and N.