62. Classical Filtering With Linear Algebra#
62.1. Overview#
This is a sequel to the earlier lecture Classical Control with Linear Algebra.
That lecture used linear algebra – in particular, the LU decomposition – to formulate and solve a class of linear-quadratic optimal control problems.
In this lecture, we’ll be using a closely related decomposition, the Cholesky decomposition, to solve linear prediction and filtering problems.
We exploit the useful fact that there is an intimate connection between two superficially different classes of problems:
deterministic linear-quadratic (LQ) optimal control problems
linear least squares prediction and filtering problems
The first class of problems involves no randomness, while the second is all about randomness.
Nevertheless, essentially the same mathematics solves both types of problems.
This connection, which is often termed “duality,” is present whether one uses “classical” or “recursive” solution procedures.
In fact we saw duality at work earlier when we formulated control and prediction problems recursively in lectures LQ dynamic programming problems, A first look at the Kalman filter, and The permanent income model.
A useful consequence of duality is that
With every LQ control problem there is implicitly affiliated a linear least squares prediction or filtering problem.
With every linear least squares prediction or filtering problem there is implicitly affiliated an LQ control problem.
An understanding of these connections has repeatedly proved useful in cracking interesting applied problems.
For example, Sargent [Sar87] [chs. IX, XIV] and Hansen and Sargent [HS80] formulated and solved control and filtering problems using z-transform methods.
In this lecture we investigate these ideas using mostly elementary linear algebra.
62.1.1. References#
Useful references include [Whi63], [HS80], [Orf88], [AP91], and [Mut60].
62.2. Infinite Horizon Prediction and Filtering Problems#
We pose two related prediction and filtering problems.
We let
where
We impose no conditions on the zeros of
A second covariance stationary process is
where
We also assume that
The linear least squares prediction problem is to find the
That is, the problem is to find a
The linear least squares filtering problem is to find a
Interesting versions of these problems related to the permanent income theory were studied by [Mut60].
62.2.1. Problem formulation#
These problems are solved as follows.
The covariograms of
The covariance and cross covariance generating functions are defined as
The generating functions can be computed by using the following facts.
Let
That is,
Let
Then, as shown for example in [Sar87] [ch. XI], it is true that
Applying these formulas to (62.1) – (62.4), we have
The key step in obtaining solutions to our problems is to factor the covariance generating function
The solutions of our problems are given by formulas due to Wiener and Kolmogorov.
These formulas utilize the Wold moving average representation of the
where
Here
Equation (62.9) is the condition that
Condition (62.9) requires that
This will be true if and only if the zeros of
It is an implication of (62.9) that
Consequently, an implication of (62.8) is
that the covariance generating function of
It remains to discuss how
Combining (62.6) and (62.10) gives
Therefore, we have already shown constructively how to factor the covariance generating function
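To see numerically what this factorization accomplishes, here is a small illustrative check (it anticipates Example 1 below and is not part of the lecture’s code): the polynomial d(L) = 1 - 2L has its zero inside the unit circle, the factor c(L) = 2 - L has its zero outside, and with h = 0 both generate the same covariance generating function on the unit circle.
# illustrative check: on the unit circle, d(z) d(z^{-1}) + h equals c(z) c(z^{-1})
d_poly(z) = 1 - 2z
c_poly(z) = 2 - z
h = 0.0
for omega in range(0, 2pi, length = 5)
z = exp(im * omega)
println(abs2(d_poly(z)) + h, "  ", abs2(c_poly(z)))
end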
We now introduce the annihilation operator:
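For completeness, the conventional definition of this operator, stated in the notation used below, is

$$
\left[ \sum_{j=-\infty}^{\infty} f_j L^j \right]_+ \equiv \sum_{j=0}^{\infty} f_j L^j .
$$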
In words, the annihilation operator instructs us to ignore negative powers of L.
We have defined the solution of the prediction problem as
Assuming that the roots of
We have defined the solution of the filtering problem as
The Wiener-Kolmogorov formula for
or
Formulas (62.13) and (62.14) are discussed in detail in [Whi83] and [Sar87].
There the interested reader can find several examples of the use of these formulas in economics. Some classic examples using these formulas are due to [Mut60].
As an example of the usefulness of formula (62.14), we let
where
Suppose that at time
given knowledge of
We shall use (62.14) to obtain the answer.
Using the standard formulas (62.6), we have that
Then (62.14) becomes
In order to evaluate the term in the annihilation operator, we use the following result from [HS80].
Proposition Let
where , where , for
Then
and, alternatively,
where
Applying formula (62.17) of the proposition to evaluating (62.15) with
or
Thus, we have
This formula is useful in solving stochastic versions of problem 1 of lecture Classical Control with Linear Algebra in which the randomness emerges because
The problem is to maximize
where
where
and
The problem is to maximize (62.19) with respect to a contingency plan
expressing
The solution of this problem can be achieved in two steps.
First, ignoring the uncertainty, we can solve the problem assuming that
The solution is, from above,
or
Second, the solution of the problem under uncertainty is obtained by replacing the terms on the right-hand side of the above expressions with their linear least squares predictors.
Using (62.18) and (62.20), we have the following solution
62.3. Finite Dimensional Prediction#
Let
Here
We shall regard the random variables as being
ordered in time, so that
For example,
In this case,
We shall be interested in constructing
where
The solution of this problem can be exhibited by first constructing an
orthonormal basis of random variables
Since
or
where
Form the random variable
Then
It is convenient to write out the equations
or
We also have
Notice from (62.23) that
Therefore, we have that for
For
Representation (62.25) is an orthogonal decomposition of
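To make the orthogonalization step concrete, here is a minimal sketch; the matrix V below is an arbitrary positive definite covariance matrix (not one derived from the processes above), and the sketch verifies that L = inv(Li), with Li the lower triangular Cholesky factor satisfying V = Li Li', maps the random vector into an orthonormal set, since the implied covariance matrix L V L' is the identity.
using LinearAlgebra
V = [4.0 2.0 1.0; 2.0 3.0 0.5; 1.0 0.5 2.0] # an arbitrary positive definite covariance matrix
Li = cholesky(V).L # lower triangular factor, so that V = Li * Li'
L = inv(Li) # eps = L * x has covariance L * V * L'
round.(L * V * L', digits = 12) # ≈ identity matrix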
62.3.1. Implementation#
Code that computes solutions to LQ control and filtering problems using the methods described here and in Classical Control with Linear Algebra can be found in the file control_and_filter.jl.
Here’s how it looks
using LinearAlgebra, Statistics
using Polynomials.PolyCompat
using QuantEcon # MVNSampler (used in simulate_a below) comes from QuantEcon.jl
import Polynomials.PolyCompat: roots, coeffs
function LQFilter(d, h, y_m;
r = nothing,
beta = nothing,
h_eps = nothing)
m = length(d) - 1
m == length(y_m) ||
throw(ArgumentError("y_m and d must be of same length = $m"))
# define the coefficients of phi up front
phi = zeros(2m + 1)
for i in (-m):m
phi[m - i + 1] = sum(diag(d * d', -i))
end
phi[m + 1] = phi[m + 1] + h
# if r is given calculate the vector phi_r
if isnothing(r)
k = nothing
phi_r = nothing
else
k = size(r, 1) - 1
phi_r = zeros(2k + 1)
for i in (-k):k
phi_r[k - i + 1] = sum(diag(r * r', -i))
end
if isnothing(h_eps) == false
phi_r[k + 1] = phi_r[k + 1] + h_eps
end
end
# if beta is given, define the transformed variables
if isnothing(beta)
beta = 1.0
else
d = beta .^ (collect(0:m) / 2) .* d
y_m = y_m .* beta .^ (-collect(1:m) / 2)
end
return (; d, h, y_m, m, phi, beta, phi_r, k)
end
function construct_W_and_Wm(lqf, N)
(; d, m) = lqf
W = zeros(N + 1, N + 1)
W_m = zeros(N + 1, m)
# terminal conditions
D_m1 = zeros(m + 1, m + 1)
M = zeros(m + 1, m)
# (1) construct the D_{m+1} matrix using the formula
for j in 1:(m + 1)
for k in j:(m + 1)
D_m1[j, k] = dot(d[1:j, 1], d[(k - j + 1):k, 1])
end
end
# Make the matrix symmetric
D_m1 = D_m1 + D_m1' - Diagonal(diag(D_m1))
# (2) Construct the M matrix using the entries of D_m1
for j in 1:m
for i in (j + 1):(m + 1)
M[i, j] = D_m1[i - j, m + 1]
end
end
M
# Euler equations for t = 0, 1, ..., N-(m+1)
phi, h = lqf.phi, lqf.h
W[1:(m + 1), 1:(m + 1)] = D_m1 + h * I
W[1:(m + 1), (m + 2):(2m + 1)] = M
for (i, row) in enumerate((m + 2):(N + 1 - m))
W[row, (i + 1):(2m + 1 + i)] = phi'
end
for i in 1:m
W[N - m + i + 1, (end - (2m + 1 - i) + 1):end] = phi[1:(end - i)]
end
for i in 1:m
W_m[N - i + 2, 1:((m - i) + 1)] = phi[(m + 1 + i):end]
end
return W, W_m
end
function roots_of_characteristic(lqf)
(; m, phi) = lqf
# Calculate the roots of the 2m-polynomial
phi_poly = Poly(phi[end:-1:1])
proots = roots(phi_poly)
# sort the roots according to their length (in descending order)
roots_sorted = sort(proots, by = abs)[end:-1:1]
z_0 = sum(phi) / polyval(poly(proots), 1.0)
z_1_to_m = roots_sorted[1:m] # we need only those outside the unit circle
lambda = 1 ./ z_1_to_m
return z_1_to_m, z_0, lambda
end
function coeffs_of_c(lqf)
(; m) = lqf
z_1_to_m, z_0, lambda = roots_of_characteristic(lqf)
c_0 = (z_0 * prod(z_1_to_m) * (-1.0)^m)^(0.5)
c_coeffs = coeffs(poly(z_1_to_m)) * z_0 / c_0
return c_coeffs
end
function solution(lqf)
(; m) = lqf
z_1_to_m, z_0, lambda = roots_of_characteristic(lqf)
c_0 = coeffs_of_c(lqf)[end]
A = zeros(m)
for j in 1:m
denom = 1 .- lambda ./ lambda[j]
A[j] = c_0^(-2) / prod(denom[(1:m) .!= j])
end
return lambda, A
end
function construct_V(lqf; N = nothing)
if isnothing(N)
error("N must be provided!!")
end
if !(N isa Integer)
throw(ArgumentError("N must be Integer!"))
end
(; phi_r, k) = lqf
V = zeros(N, N)
for i in 1:N
for j in 1:N
if abs(i - j) <= k
V[i, j] = phi_r[k + abs(i - j) + 1]
end
end
end
return V
end
function simulate_a(lqf, N)
V = construct_V(lqf, N = N + 1)
d = MVNSampler(zeros(N + 1), V)
return rand(d)
end
function predict(lqf, a_hist, t)
N = length(a_hist) - 1
V = construct_V(lqf, N = N + 1)
aux_matrix = zeros(N + 1, N + 1)
aux_matrix[1:(t + 1), 1:(t + 1)] = Matrix(I, t + 1, t + 1)
L = cholesky(V).U'
Ea_hist = inv(L) * aux_matrix * L * a_hist
return Ea_hist
end
function optimal_y(lqf, a_hist, t = nothing)
(; beta, y_m, m) = lqf
N = length(a_hist) - 1
W, W_m = construct_W_and_Wm(lqf, N)
F = lu(W, Val(true))
L, U = F
D = diagm(0 => 1.0 ./ diag(U))
U = D * U
L = L * diagm(0 => 1.0 ./ diag(D))
J = reverse(Matrix(I, N + 1, N + 1), dims = 2)
if isnothing(t) # if the problem is deterministic
a_hist = J * a_hist
# transform the a sequence if beta is given
if beta != 1
a_hist = reshape(a_hist .* beta .^ (collect(N:-1:0) / 2), N + 1, 1)
end
a_bar = a_hist - W_m * y_m # a_bar from the lecture
Uy = \(L, a_bar) # U @ y_bar = L^{-1}a_bar from the lecture
y_bar = \(U, Uy) # y_bar = U^{-1}L^{-1}a_bar
# Reverse the order of y_bar with the matrix J
J = reverse(Matrix(I, N + m + 1, N + m + 1), dims = 2)
y_hist = J * vcat(y_bar, y_m) # y_hist : concatenated y_m and y_bar
# transform the optimal sequence back if beta is given
if beta != 1
y_hist = y_hist .* beta .^ (-collect((-m):N) / 2)
end
else # if the problem is stochastic and we look at it
Ea_hist = reshape(predict(lqf, a_hist, t), N + 1, 1)
Ea_hist = J * Ea_hist
a_bar = Ea_hist - W_m * y_m # a_bar from the lecture
Uy = \(L, a_bar) # U @ y_bar = L^{-1}a_bar from the lecture
y_bar = \(U, Uy) # y_bar = U^{-1}L^{-1}a_bar
# Reverse the order of y_bar with the matrix J
J = reverse(Matrix(I, N + m + 1, N + m + 1), dims = 2)
y_hist = J * vcat(y_bar, y_m) # y_hist : concatenated y_m and y_bar
end
return y_hist, L, U, y_bar
end
optimal_y (generic function with 2 methods)
Let’s use this code to tackle two interesting examples.
62.3.2. Example 1#
Consider a stochastic process with moving average representation
where
We want to use the Wiener-Kolmogorov formula (62.13) to compute the linear least squares forecasts
We can do everything we want by setting
m = 1
y_m = zeros(m)
d = [1.0, -2.0]
r = [1.0, -2.0]
h = 0.0
example = LQFilter(d, h, y_m, r = d)
(d = [1.0, -2.0], h = 0.0, y_m = [0.0], m = 1, phi = [-2.0, 5.0, -2.0], beta = 1.0, phi_r = [-2.0, 5.0, -2.0], k = 1)
The Wold representation is computed by coeffs_of_c(example).
Let’s check that it “flips roots” as required
coeffs_of_c(example)
2-element Vector{Float64}:
2.0
-1.0
roots_of_characteristic(example)
([2.0], -2.0, [0.5])
Now let’s form the covariance matrix of a time series vector of length N = 5.
Then we’ll take a Cholesky decomposition of this covariance matrix.
V = construct_V(example, N = 5)
5×5 Matrix{Float64}:
5.0 -2.0 0.0 0.0 0.0
-2.0 5.0 -2.0 0.0 0.0
0.0 -2.0 5.0 -2.0 0.0
0.0 0.0 -2.0 5.0 -2.0
0.0 0.0 0.0 -2.0 5.0
Notice how the lower rows of the “moving average representations” are converging to the appropriate infinite history Wold representation
F = cholesky(V)
Li = F.L
5×5 LowerTriangular{Float64, Matrix{Float64}}:
2.23607 ⋅ ⋅ ⋅ ⋅
-0.894427 2.04939 ⋅ ⋅ ⋅
0.0 -0.9759 2.01187 ⋅ ⋅
0.0 0.0 -0.9941 2.00294 ⋅
0.0 0.0 0.0 -0.998533 2.00073
Notice how the lower rows of the “autoregressive representations” are converging to the appropriate infinite history autoregressive representation
L = inv(Li)
5×5 LowerTriangular{Float64, Matrix{Float64}}:
0.447214 ⋅ ⋅ ⋅ ⋅
0.19518 0.48795 ⋅ ⋅ ⋅
0.0946762 0.236691 0.49705 ⋅ ⋅
0.0469898 0.117474 0.246696 0.499266 ⋅
0.0234518 0.0586295 0.123122 0.249176 0.499817
Remark Let
Then define
The term multiplying
Then it can be proved directly that
and that the zeros of
62.3.3. Example 2#
Consider a stochastic process
where
Let’s find a Wold moving average representation for
Let’s use the Wiener-Kolmogorov formula (62.13) to compute the linear least squares forecasts
We proceed in the same way as example 1
m = 2
y_m = [0.0, 0.0]
d = [1, 0, -sqrt(2)]
r = [1, 0, -sqrt(2)]
h = 0.0
example = LQFilter(d, h, y_m, r = d)
(d = [1.0, 0.0, -1.4142135623730951], h = 0.0, y_m = [0.0, 0.0], m = 2, phi = [-1.4142135623730951, 0.0, 3.0000000000000004, 0.0, -1.4142135623730951], beta = 1.0, phi_r = [-1.4142135623730951, 0.0, 3.0000000000000004, 0.0, -1.4142135623730951], k = 2)
coeffs_of_c(example)
3-element Vector{Float64}:
1.4142135623731025
-0.0
-1.0000000000000078
roots_of_characteristic(example)
([1.1892071150027195, -1.1892071150027195], -1.4142135623731136, [0.8408964152537157, -0.8408964152537157])
V = construct_V(example, N = 8)
8×8 Matrix{Float64}:
3.0 0.0 -1.41421 0.0 … 0.0 0.0 0.0
0.0 3.0 0.0 -1.41421 0.0 0.0 0.0
-1.41421 0.0 3.0 0.0 0.0 0.0 0.0
0.0 -1.41421 0.0 3.0 -1.41421 0.0 0.0
0.0 0.0 -1.41421 0.0 0.0 -1.41421 0.0
0.0 0.0 0.0 -1.41421 … 3.0 0.0 -1.41421
0.0 0.0 0.0 0.0 0.0 3.0 0.0
0.0 0.0 0.0 0.0 -1.41421 0.0 3.0
F = cholesky(V)
Li = F.L
Li[(end - 2):end, :]
3×8 Matrix{Float64}:
0.0 0.0 0.0 -0.92582 0.0 1.46385 0.0 0.0
0.0 0.0 0.0 0.0 -0.966092 0.0 1.43759 0.0
0.0 0.0 0.0 0.0 0.0 -0.966092 0.0 1.43759
L = inv(Li)
8×8 LowerTriangular{Float64, Matrix{Float64}}:
0.57735 ⋅ ⋅ ⋅ … ⋅ ⋅ ⋅
0.0 0.57735 ⋅ ⋅ ⋅ ⋅ ⋅
0.308607 0.0 0.654654 ⋅ ⋅ ⋅ ⋅
0.0 0.308607 0.0 0.654654 ⋅ ⋅ ⋅
0.19518 0.0 0.414039 0.0 ⋅ ⋅ ⋅
0.0 0.19518 0.0 0.414039 … 0.68313 ⋅ ⋅
0.131165 0.0 0.278243 0.0 0.0 0.695608 ⋅
0.0 0.131165 0.0 0.278243 0.459078 0.0 0.695608
62.3.4. Prediction#
It immediately follows from the “orthogonality principle” of least squares (see [AP91] or [Sar87] [ch. X]) that
This can be interpreted as a finite-dimensional version of the Wiener-Kolmogorov
We can use (62.26) to represent the linear least squares projection of
the vector
We have
This formula will be convenient in representing the solution of control problems under uncertainty.
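As a usage sketch of this formula, we can call the predict function from the implementation above on the Example 1 process; the record a_hist below is hypothetical and serves only to illustrate the call.
lqf_ex = LQFilter([1.0, -2.0], 0.0, zeros(1), r = [1.0, -2.0])
a_hist = [1.0, 1.5, 0.5, 2.0, 1.0] # hypothetical record, N = 4
# linear least squares projections given the information available at t = 2
Ea_hist = predict(lqf_ex, a_hist, 2)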
Equation (62.23) can be recognized as a finite-dimensional version of a moving average representation.
Equation (62.22) can be viewed as a finite-dimensional version of an autoregressive representation.
Notice that even
if the
If
Further, if
That is,
the “bottom” rows of
This last observation gives one simple and widely-used practical way of
forming a finite
First, form the covariance matrix
The last row of
This method can readily be generalized to multivariate systems.
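Here is a sketch of that recipe applied to the Example 1 process, reusing the functions defined above: the bottom row of the Cholesky factor of a long finite covariance matrix, read backwards from the diagonal, approximates the Wold moving average coefficients returned by coeffs_of_c.
ex1 = LQFilter([1.0, -2.0], 0.0, zeros(1), r = [1.0, -2.0])
V_big = construct_V(ex1, N = 20)
Li_big = cholesky(V_big).L
wold_approx = reverse(Li_big[end, :])[1:3] # ≈ [2.0, -1.0, 0.0]
wold_exact = coeffs_of_c(ex1) # [2.0, -1.0]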
62.4. Combined Finite Dimensional Control and Prediction#
Consider the finite-dimensional control problem, maximize
where
The variables
Maximization is over choices of
We saw in the lecture Classical Control with Linear Algebra that the solution of this problem under certainty could be represented in feedback-feedforward form
for some
Using a version of formula (62.26), we can express
where
(We have reversed the time axis in dating the
The time axis can be reversed in representation (62.27) by replacing
The optimal decision rule to use at time
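A minimal usage sketch of this combined problem can be built from the optimal_y function in the implementation above: when it is passed a time index t, it first replaces the target sequence by its linear least squares projection (via predict) and then solves the resulting certainty problem. The parameters and the sequence a_hist below are hypothetical.
lqf_c = LQFilter([1.0, -2.0], 1.0, zeros(1), r = [1.0, -2.0])
a_hist = [1.0, 1.5, 0.5, 2.0, 1.0, 0.5] # hypothetical sequence, N = 5
# decision rule that conditions on the information available at t = 2
y_hist, L, U, y_bar = optimal_y(lqf_c, a_hist, 2)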
62.5. Exercises#
62.5.1. Exercise 1#
Let
where
Find the Wold moving average representation for
Find a formula for the
Find a formula for the
62.5.2. Exercise 2#
(Multivariable Prediction) Let
where
Let
Let
Define the covariograms as
Then define the matrix covariance generating function, as in (61.21), only interpret all the objects in (61.21) as matrices.
Show that the covariance generating functions are given by
A factorization of
where the zeros of
A vector Wold moving average representation of
where
That is,
The optimum predictor of
If