Discrete Fractional Calculus (2015)
1. Basic Difference Calculus
1.9. Vector Difference Equations
In this section, we will examine the properties of the linear vector difference equation with variable coefficients

Δy(t) = A(t)y(t) + f(t), t ∈ ℕ_a,    (1.40)

and the corresponding homogeneous system

Δu(t) = A(t)u(t), t ∈ ℕ_a,    (1.41)

where f: ℕ_a → ℝ^n and the real n × n matrix function A(t) will be assumed to be a regressive matrix function on ℕ_a (that is, I + A(t) is nonsingular for all t ∈ ℕ_a). With these assumptions, it is easy to show that for any t_0 ∈ ℕ_a the initial value problem

Δy(t) = A(t)y(t) + f(t), y(t_0) = y_0,

where y_0 is a given n × 1 constant vector, has a unique solution on ℕ_a. To solve the nonhomogeneous difference equation (1.40) we will see that we first want to be able to solve the corresponding homogeneous difference equation (1.41). The matrix equation analogue of the homogeneous vector difference equation (1.41) is

ΔU(t) = A(t)U(t), t ∈ ℕ_a,    (1.42)

where U(t) is an n × n matrix function. Note that U(t) is a solution of (1.42) if and only if each of its column vectors is a solution of (1.41). From the uniqueness of solutions to IVPs for the vector equation (1.40) we have that the matrix IVP

ΔU(t) = A(t)U(t), U(t_0) = U_0,

where t_0 ∈ ℕ_a and U_0 is a given n × n constant matrix, has a unique solution on ℕ_a.
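The existence and uniqueness claim can be seen constructively: solving (1.40) for y(t + 1) gives y(t + 1) = [I + A(t)]y(t) + f(t), so the solution is produced by forward iteration (and, by regressivity, backward iteration). A minimal numerical sketch, assuming numpy; the matrices A(t) and f(t) below are illustrative, not from the text:

```python
import numpy as np

def solve_ivp(A, f, t0, y0, t_end):
    """Iterate y(t+1) = (I + A(t)) y(t) + f(t) forward from t0 to t_end."""
    y = np.array(y0, dtype=float)
    I = np.eye(len(y))
    for t in range(t0, t_end):
        y = (I + A(t)) @ y + f(t)
    return y

# Illustrative data (not from the text): constant A(t), constant forcing.
A = lambda t: np.array([[0.0, 1.0], [0.0, 0.0]])
f = lambda t: np.array([1.0, 0.0])
y = solve_ivp(A, f, 0, [0.0, 0.0], 3)   # y(3) starting from y(0) = (0, 0)
# y == array([3., 0.])
```

Each step is a single matrix–vector multiply plus the forcing term, which is why uniqueness is immediate: every step is forced.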
Theorem 1.78.
Assume A(t) is a regressive matrix function on ℕ_a. If U(t) is a solution of (1.42), then either det U(t) ≠ 0 for all t ∈ ℕ_a or det U(t) = 0 for all t ∈ ℕ_a.
Proof.
Since U(t) is a solution of (1.42) on ℕ_a,

U(t + 1) = [I + A(t)]U(t), t ∈ ℕ_a.

Therefore,

det U(t + 1) = det[I + A(t)] det U(t)    (1.43)

for all t ∈ ℕ_a. Now either det U(a) = 0 or det U(a) ≠ 0. Since det [I + A(t)] ≠ 0 for all t ∈ ℕ_a, we have by (1.43) that if det U(a) = 0, then det U(t) = 0 for all t ∈ ℕ_a, while if det U(a) ≠ 0, then det U(t) ≠ 0 for all t ∈ ℕ_a. □
Definition 1.79.
We say that Φ(t) is a fundamental matrix of the vector difference equation (1.41) provided Φ(t) is a solution of the matrix equation (1.42) and det Φ(t) ≠ 0 for t ∈ ℕ_a.
Definition 1.80.
If A is a regressive matrix function on ℕ_a, then we define the matrix exponential function, e_A(t, t_0), based at t_0 ∈ ℕ_a, to be the unique solution of the matrix IVP

ΔU(t) = A(t)U(t), U(t_0) = I.
From Exercise 1.71 we have that Φ(t) is a fundamental matrix of Δu(t) = A(t)u(t) if and only if its columns are n linearly independent solutions of the vector equation (1.41) on ℕ_a. To find a formula for the matrix exponential function, e_A(t, t_0), we want to solve the IVP

U(t + 1) = [I + A(t)]U(t), U(t_0) = I.

Iterating this equation we get

e_A(t, t_0) = ∏_{τ=t_0}^{t−1} [I + A(τ)] = [I + A(t − 1)] ⋯ [I + A(t_0)], t ∈ ℕ_{t_0},

where it is understood that ∏_{τ=t_0}^{t_0−1} [I + A(τ)] := I and, for a ≤ t < t_0, e_A(t, t_0) = [I + A(t)]^{−1} ⋯ [I + A(t_0 − 1)]^{−1}.
Example 1.81.
If A is an n × n constant matrix and I + A is invertible, then

e_A(t, t_0) = (I + A)^{t−t_0}.
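The product formula and the constant-coefficient case of Example 1.81 are easy to check numerically. A sketch assuming numpy; the sample matrix is hypothetical:

```python
import numpy as np

def e_A(A, t, t0):
    """Matrix exponential e_A(t, t0) = prod_{tau=t0}^{t-1} (I + A(tau)), t >= t0."""
    n = A(t0).shape[0]
    U = np.eye(n)
    for tau in range(t0, t):
        U = (np.eye(n) + A(tau)) @ U   # left-multiply: U(tau+1) = (I + A(tau)) U(tau)
    return U

# Constant regressive A: e_A(t, t0) = (I + A)^(t - t0).
A_const = np.array([[0.5, 1.0], [0.0, -0.5]])   # I + A is nonsingular, so A is regressive
exp_prod = e_A(lambda t: A_const, 5, 0)
exp_power = np.linalg.matrix_power(np.eye(2) + A_const, 5)
assert np.allclose(exp_prod, exp_power)
```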
Similar to the proof of Theorem 1.16 one can prove (Exercise 1.73) the following theorem.
Theorem 1.82.
The set of all n × n regressive matrix functions on ℕ_a, with the addition ⊕ defined by

(A ⊕ B)(t) := A(t) + B(t) + A(t)B(t), t ∈ ℕ_a,

is a group. Furthermore, the additive inverse of a regressive matrix function A defined on ℕ_a is given by

(⊖A)(t) := −A(t)[I + A(t)]^{−1}, t ∈ ℕ_a.
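The group laws can be verified pointwise at a fixed t. The identity I + (A ⊕ B) = (I + A)(I + B) also shows why A ⊕ B is again regressive. A sketch assuming numpy, with illustrative matrices:

```python
import numpy as np

def oplus(A, B):
    """Regressive addition at a fixed t: (A ⊕ B) = A + B + AB."""
    return A + B + A @ B

def ominus(A):
    """Additive inverse at a fixed t: ⊖A = -A (I + A)^{-1}."""
    I = np.eye(A.shape[0])
    return -A @ np.linalg.inv(I + A)

# Illustrative regressive values (I + A and I + B nonsingular).
A = np.array([[0.5, 2.0], [1.0, 1.0]])
B = np.array([[1.0, 0.0], [3.0, -0.5]])
I = np.eye(2)

# I + (A ⊕ B) = (I + A)(I + B), so A ⊕ B is regressive when A and B are.
assert np.allclose(I + oplus(A, B), (I + A) @ (I + B))
# A ⊕ (⊖A) = 0, the identity element of the group.
assert np.allclose(oplus(A, ominus(A)), np.zeros((2, 2)))
```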
In the next theorem we give several properties of the matrix exponential. To prove part (vii) of this theorem we will use the following lemma.
Lemma 1.83.
Assume Y(t) and Y(t + 1) are invertible matrices. Then

ΔY^{−1}(t) = −Y^{−1}(t + 1)[ΔY(t)]Y^{−1}(t) = −Y^{−1}(t)[ΔY(t)]Y^{−1}(t + 1).
Proof.
Taking the difference of both sides of Y(t)Y^{−1}(t) = I we get, by the product rule, that

Y(t + 1)ΔY^{−1}(t) + [ΔY(t)]Y^{−1}(t) = 0.

Solving this last equation for ΔY^{−1}(t) we get that

ΔY^{−1}(t) = −Y^{−1}(t + 1)[ΔY(t)]Y^{−1}(t).

Similarly, one can use Y^{−1}(t)Y(t) = I to get that

ΔY^{−1}(t) = −Y^{−1}(t)[ΔY(t)]Y^{−1}(t + 1).
□
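The lemma is easy to test numerically. A sketch assuming numpy; Y(t) below is a hypothetical invertible matrix sequence:

```python
import numpy as np

# Hypothetical invertible matrix sequence Y(t), invertible for all t >= 0.
def Y(t):
    return np.array([[1.0 + t, 2.0], [0.0, 1.0]])

inv = np.linalg.inv
t = 3
dY = Y(t + 1) - Y(t)                 # ΔY(t)
dYinv = inv(Y(t + 1)) - inv(Y(t))    # ΔY^{-1}(t)

# Both forms of the lemma:
assert np.allclose(dYinv, -inv(Y(t + 1)) @ dY @ inv(Y(t)))
assert np.allclose(dYinv, -inv(Y(t)) @ dY @ inv(Y(t + 1)))
```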
Theorem 1.84.
Assume A and B are regressive matrix functions on and Then the following hold:
(i)
(ii)
e A (s,s) = I;
(iii)
for
(iv)
e A (t,s) is a fundamental matrix of (1.41) ;
(v)
(vi)
(semigroup property) e A (t,r)e A (r,s) = e A (t,s) holds for
(vii)
(viii)
where A ∗ denotes the conjugate transpose of the matrix A;
(ix)
B(t)e A (t,t 0 ) = e A (t,t 0 )B(t), if A(t) and B(τ) commute for all
(x)
e A (t,s)e B (t,s) = e A⊕B (t,s), if A(t) and B(τ) commute for all .
Proof.
Note that (i) and (ii) follow from the definition of the matrix exponential. Part (iii) follows from Theorem 1.78 and part (ii). Parts (i) and (iii) imply part (iv) holds. Since Δe_A(t, s) = A(t)e_A(t, s), we have that

e_A(t + 1, s) = [I + A(t)]e_A(t, s),

and hence (v) holds. To see that the semigroup property (vi) holds, fix r, s ∈ ℕ_a and set U(t) := e_A(t, r)e_A(r, s). Then

ΔU(t) = [Δe_A(t, r)]e_A(r, s) = A(t)e_A(t, r)e_A(r, s) = A(t)U(t).

Next we show that U(s) = e_A(s, r)e_A(r, s) = I. First note that if r = s, then U(s) = e_A(s, s)e_A(s, s) = I. Hence we can assume that s ≠ r. For the case r > s ≥ a, we have that

e_A(s, r)e_A(r, s) = (∏_{τ=s}^{r−1} [I + A(τ)])^{−1} (∏_{τ=s}^{r−1} [I + A(τ)]) = I.

Similarly, for the case s > r ≥ a one can show that e_A(s, r)e_A(r, s) = I. Hence, by the uniqueness of solutions for IVPs we get that e_A(t, r)e_A(r, s) = e_A(t, s). To see that (vii) holds, fix s ∈ ℕ_a and let Y(t) := e_A(t, s). Then, by Lemma 1.83 and Y^{−1}(t + 1) = Y^{−1}(t)[I + A(t)]^{−1},

ΔY^{−1}(t) = −Y^{−1}(t)[ΔY(t)]Y^{−1}(t + 1) = −Y^{−1}(t)A(t)[I + A(t)]^{−1},

and taking conjugate transposes gives

Δ[Y^{−1}(t)]* = −A*(t)[I + A*(t)]^{−1}[Y^{−1}(t)]* = (⊖A*)(t)[Y^{−1}(t)]*.

Since [Y^{−1}(s)]* = I, we have that [Y^{−1}(t)]* and e_{⊖A*}(t, s) satisfy the same matrix IVP, and we get

[Y^{−1}(t)]* = e_{⊖A*}(t, s).

Taking the conjugate transpose of both sides of this last equation we get that part (vii) holds. The proof of (viii) is Exercise 1.78 and the proof of (ix) is Exercise 1.79.

To see that (x) holds, let U(t) := e_A(t, s)e_B(t, s), t ∈ ℕ_a. Then by the product rule

ΔU(t) = e_A(t + 1, s)Δe_B(t, s) + [Δe_A(t, s)]e_B(t, s)
= [I + A(t)]e_A(t, s)B(t)e_B(t, s) + A(t)e_A(t, s)e_B(t, s)
= [A(t) + B(t) + A(t)B(t)]e_A(t, s)e_B(t, s)
= (A ⊕ B)(t)U(t)

for t ∈ ℕ_a, where we used (ix) to commute B(t) past e_A(t, s). Since U(s) = I, we have that U(t) and e_{A⊕B}(t, s) satisfy the same matrix IVP. Hence, by the uniqueness theorem for solutions of matrix IVPs we get the desired result e_A(t, s)e_B(t, s) = e_{A⊕B}(t, s). □
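Properties (vi) and (x) can be spot-checked for constant matrices, where e_A(t, s) = (I + A)^{t−s}. A sketch assuming numpy; B is built as a polynomial in A so that A and B commute, and all matrices are illustrative:

```python
import numpy as np

def e_A(A, t, s):
    """e_A(t, s) for a constant regressive matrix A: (I + A)^(t - s)."""
    M = np.eye(A.shape[0]) + A
    if t < s:
        M, t, s = np.linalg.inv(M), s, t
    return np.linalg.matrix_power(M, t - s)

A = np.array([[0.0, 1.0], [-0.5, 1.0]])   # I + A is nonsingular
B = 2 * A + A @ A                          # a polynomial in A, so B commutes with A
AoB = A + B + A @ B                        # A ⊕ B

# (vi) semigroup property: e_A(t, r) e_A(r, s) = e_A(t, s)
assert np.allclose(e_A(A, 7, 4) @ e_A(A, 4, 2), e_A(A, 7, 2))
# (x): e_A(t, s) e_B(t, s) = e_{A⊕B}(t, s) when A and B commute
assert np.allclose(e_A(A, 6, 0) @ e_A(B, 6, 0), e_A(AoB, 6, 0))
```

The check of (x) works because, for commuting constant matrices, (I + A)^t (I + B)^t = [(I + A)(I + B)]^t and (I + A)(I + B) = I + (A ⊕ B).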
Now for any nonsingular matrix U_0, the solution U(t) of (1.42) with U(t_0) = U_0 is a fundamental matrix of (1.41), so there are always infinitely many fundamental matrices of (1.41). In particular, if A is a regressive matrix function on ℕ_a, then e_A(t, t_0) is a fundamental matrix of the vector equation Δu(t) = A(t)u(t).
The following theorem characterizes fundamental matrices for (1.41).
Theorem 1.85.
If Φ(t) is a fundamental matrix for (1.41), then Ψ(t) is another fundamental matrix if and only if there is a nonsingular constant matrix C such that

Ψ(t) = Φ(t)C

for t ∈ ℕ_a.
Proof.
Let Ψ(t) := Φ(t)C, where Φ(t) is a fundamental matrix of (1.41) and C is a nonsingular constant matrix. Then Ψ(t) is nonsingular for all t ∈ ℕ_a, and

ΔΨ(t) = [ΔΦ(t)]C = A(t)Φ(t)C = A(t)Ψ(t).

Therefore Ψ(t) is a fundamental matrix of (1.41).

Conversely, assume Φ(t) and Ψ(t) are fundamental matrices of (1.41). For some t_0 ∈ ℕ_a, let

C := Φ^{−1}(t_0)Ψ(t_0).

Then C is nonsingular, and Ψ(t) and Φ(t)C are both solutions of (1.42) satisfying the same initial condition at t_0. By uniqueness,

Ψ(t) = Φ(t)C

for all t ∈ ℕ_a. □
The proof of the following theorem is similar to that of Theorem 1.85 and is left as an exercise (Exercise 1.68).
Theorem 1.86.
If Φ(t) is a fundamental matrix of (1.41), then the general solution of (1.41) is given by

u(t) = Φ(t)c, t ∈ ℕ_a,

where c is an arbitrary constant column vector.
Hence we see that to solve the vector equation (1.41) we just need to find a fundamental matrix Φ(t). We will set off to prove the Putzer algorithm (Theorem 1.88), which will give us a nice formula for e_A(t, 0) when A is a constant n × n matrix. In the proof of this theorem we will use the Cayley–Hamilton Theorem, which states that every square constant matrix satisfies its own characteristic equation. We now give an example to illustrate this important theorem.
Example 1.87.
Show directly that the matrix
satisfies its own characteristic equation. The characteristic equation of A is
Then
and so A does satisfy its own characteristic equation.
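For a 2 × 2 matrix the characteristic equation is λ² − (tr A)λ + det A = 0, so the Cayley–Hamilton Theorem asserts A² − (tr A)A + (det A)I = 0. A numerical sketch assuming numpy; the matrix is illustrative (the example's own matrix is not reproduced here):

```python
import numpy as np

# Hypothetical 2x2 matrix (not the matrix from Example 1.87).
A = np.array([[2.0, 1.0], [3.0, 4.0]])

# Characteristic equation of a 2x2 matrix: λ² - (tr A)λ + (det A) = 0.
tr, det = np.trace(A), np.linalg.det(A)

# Cayley–Hamilton: A² - (tr A) A + (det A) I = 0.
residual = A @ A - tr * A + det * np.eye(2)
assert np.allclose(residual, np.zeros((2, 2)))
```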
The proof of the next theorem is motivated by the fact that, by the Cayley–Hamilton Theorem, an n × n constant matrix A can be written as a linear combination of the matrices I, A, A², …, A^{n−1}, and therefore every nonnegative integer power A^t of A can also be written as a linear combination of I, A, A², …, A^{n−1}.
Theorem 1.88 (Putzer’s Algorithm).
Let λ_1, λ_2, …, λ_n be the (not necessarily distinct) eigenvalues of the constant n × n matrix A, with each eigenvalue repeated as many times as its multiplicity. Define the matrices M_k, 0 ≤ k ≤ n, recursively by

M_0 := I, M_k := (A − λ_k I)M_{k−1}, 1 ≤ k ≤ n.

Then

A^t = ∑_{k=1}^{n} p_k(t)M_{k−1}, t ∈ ℕ_0,

where the p_k(t), 1 ≤ k ≤ n, are chosen so that

Δp_1(t) = (λ_1 − 1)p_1(t), Δp_k(t) = (λ_k − 1)p_k(t) + p_{k−1}(t), 2 ≤ k ≤ n,    (1.44)

and

p_1(0) = 1, p_k(0) = 0, 2 ≤ k ≤ n.    (1.45)
Proof.
Let the matrices M_k, 0 ≤ k ≤ n, be defined as in the statement of this theorem. Since for each fixed t ≥ 0, A^t is a linear combination of I, A, A², …, A^{n−1}, we also have that for each fixed t, A^t is a linear combination of M_0, M_1, M_2, …, M_{n−1}, that is,

A^t = ∑_{k=1}^{n} p_k(t)M_{k−1}

for t ≥ 0. It remains to show that the p_k's are as in the statement of this theorem. Since A^{t+1} = A · A^t and AM_{k−1} = M_k + λ_k M_{k−1}, we have that

∑_{k=1}^{n} p_k(t + 1)M_{k−1} = ∑_{k=1}^{n} p_k(t)AM_{k−1}
= ∑_{k=1}^{n} p_k(t)M_k + ∑_{k=1}^{n} λ_k p_k(t)M_{k−1}
= λ_1 p_1(t)M_0 + ∑_{k=2}^{n} [λ_k p_k(t) + p_{k−1}(t)]M_{k−1},    (1.46)

where in the last step we have replaced k by k − 1 in the first sum and used the fact that (by the Cayley–Hamilton Theorem) M_n = 0. Note that equation (1.46) is satisfied if the p_k(t), 1 ≤ k ≤ n, are chosen to satisfy the system (1.44). Since A^0 = I = p_1(0)M_0 + ∑_{k=2}^{n} p_k(0)M_{k−1}, we must have that (1.45) is satisfied. □
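Putzer's algorithm translates directly into code: build the M_k, step the scalar recursions (1.44)–(1.45) forward (equivalently, p_1(t+1) = λ_1 p_1(t) and p_k(t+1) = λ_k p_k(t) + p_{k−1}(t)), and assemble A^t. A sketch assuming numpy; the eigenvalues are obtained numerically, and any ordering of them works:

```python
import numpy as np

def putzer_power(A, t):
    """Compute A^t via Putzer's algorithm (sketch; eigenvalue order is arbitrary)."""
    n = A.shape[0]
    lam = np.linalg.eigvals(A)
    # M_0 = I, M_k = (A - λ_k I) M_{k-1}; by Cayley-Hamilton, M_n = 0.
    M = [np.eye(n, dtype=complex)]
    for k in range(n):
        M.append((A - lam[k] * np.eye(n)) @ M[-1])
    # Step p_1(t+1) = λ_1 p_1(t), p_k(t+1) = λ_k p_k(t) + p_{k-1}(t),
    # with p_1(0) = 1 and p_k(0) = 0 for k >= 2 (conditions (1.45)).
    p = np.zeros(n, dtype=complex)
    p[0] = 1.0
    for _ in range(t):
        new = np.empty_like(p)
        new[0] = lam[0] * p[0]
        for k in range(1, n):
            new[k] = lam[k] * p[k] + p[k - 1]
        p = new
    # A^t = Σ_{k=1}^{n} p_k(t) M_{k-1}
    return sum(p[k] * M[k] for k in range(n))

A = np.array([[1.0, 2.0], [0.0, 3.0]])
assert np.allclose(putzer_power(A, 5), np.linalg.matrix_power(A, 5))
```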
The following example shows how we can use the Putzer algorithm to find the exponential function e_A(t, 0) when A is a constant matrix. This method is called finding the matrix exponential e_A(t, 0) using the Putzer algorithm.
Example 1.89.
Use the Putzer algorithm (Theorem 1.88) to find e_A(t, 0), t ∈ ℕ_0, where
Note that, by Example 1.81, e_A(t, 0) = (I + A)^t. So to find e_A(t, 0) we just need to find B^t, where B := I + A.
We now apply Putzer’s algorithm (Theorem 1.88) to find B^t. The characteristic equation for B is given by . Hence the eigenvalues of B are given by . It follows that M_0 = I and
To find p_1(t) we now solve the IVP
It follows that p_1(t) = 1. Next, to find p_2(t) we solve the IVP
This gives us the IVP
Using the variation of constants formula in Theorem 1.68 we get
It follows that
It follows from this that
is a general solution of
Example 1.90.
Use Putzer’s algorithm for finding the matrix exponential e_A(t, 0) to solve the vector equation
where A is the regressive matrix given by
Let B := I + A; then e_A(t, 0) = (I + A)^t = B^t, where
The characteristic equation of the constant matrix B is given by
and so the characteristic values of B can be read off. It follows that
Next we solve the IVP
Hence p_1(t) solves the IVP
Since p_1(0) = 1, we have that p_1(t) = e_2(t, 0) = 3^t. Also p_2(t) solves the IVP
It follows that p_2(t) solves the IVP
Using the variation of constants formula in Theorem 1.68 we get that
Hence by Putzer’s algorithm
Since e_A(t, 0) is a fundamental matrix of the given vector equation, we have by Theorem 1.86 that
is a general solution.
Fundamental matrices can be used to solve the nonhomogeneous equation (1.40).
Theorem 1.91 (Variation of Parameters (Constants)).
Assume Φ(t) is a fundamental matrix of (1.41). Then the unique solution of (1.40) that satisfies the initial condition y(a) = y_0 is given by the variation of parameters formula

y(t) = Φ(t)Φ^{−1}(a)y_0 + Φ(t) ∑_{s=a}^{t−1} Φ^{−1}(s + 1)f(s)    (1.47)

for t ∈ ℕ_a.
Proof.
Let y(t) be given by (1.47) for t ∈ ℕ_a. Using the vector version of the Leibniz formula (1.29), we have

Δy(t) = [ΔΦ(t)]Φ^{−1}(a)y_0 + Φ(t + 1)Φ^{−1}(t + 1)f(t) + [ΔΦ(t)] ∑_{s=a}^{t−1} Φ^{−1}(s + 1)f(s)
= A(t)Φ(t)Φ^{−1}(a)y_0 + f(t) + A(t)Φ(t) ∑_{s=a}^{t−1} Φ^{−1}(s + 1)f(s)
= A(t)y(t) + f(t).

Consequently, y(t) defined by (1.47) is a solution of the nonhomogeneous equation (1.40), and also we have that y(a) = Φ(a)Φ^{−1}(a)y_0 = y_0. □
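The variation of parameters formula (1.47) can be checked against direct forward iteration of (1.40). A sketch assuming numpy, with Φ(t) = e_A(t, a) built by the product formula; A(t), f(t), and the initial data below are illustrative:

```python
import numpy as np

def vop_solution(A, f, a, y0, t):
    """Evaluate the variation of parameters formula (1.47) with Φ(t) = e_A(t, a), t >= a."""
    n = len(y0)
    I = np.eye(n)
    # Build Φ(s) = e_A(s, a) for s = a, ..., t via Φ(s+1) = (I + A(s)) Φ(s), Φ(a) = I.
    Phi = {a: np.eye(n)}
    for s in range(a, t):
        Phi[s + 1] = (I + A(s)) @ Phi[s]
    y = Phi[t] @ np.array(y0, dtype=float)      # Φ(t) Φ^{-1}(a) y0, since Φ(a) = I
    for s in range(a, t):
        y = y + Phi[t] @ np.linalg.inv(Phi[s + 1]) @ f(s)
    return y

# Check against direct forward iteration of Δy(t) = A(t) y(t) + f(t).
A = lambda t: np.array([[0.0, 1.0], [0.2, 0.1 * t]])   # I + A(t) nonsingular for t >= 0
f = lambda t: np.array([1.0, float(t)])
t0, y0 = 0, np.array([1.0, -1.0])
direct = y0.copy()
for s in range(t0, 6):
    direct = (np.eye(2) + A(s)) @ direct + f(s)
assert np.allclose(vop_solution(A, f, t0, [1.0, -1.0], 6), direct)
```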
A special case of the above theorem is the following result.
Theorem 1.92.
Assume A(t) is a regressive matrix function on ℕ_a and assume f: ℕ_a → ℝ^n. Then the unique solution of the IVP

Δy(t) = A(t)y(t) + f(t), y(a) = y_0,

is given by the variation of constants formula

y(t) = e_A(t, a)y_0 + ∑_{s=a}^{t−1} e_A(t, s + 1)f(s)

for t ∈ ℕ_a.
Proof.
Since e_A(t, a) is a fundamental matrix of Δu(t) = A(t)u(t), we have by Theorem 1.91 that the solution of our IVP in the statement of this theorem is given by

y(t) = e_A(t, a)e_A^{−1}(a, a)y_0 + e_A(t, a) ∑_{s=a}^{t−1} e_A^{−1}(s + 1, a)f(s)
= e_A(t, a)y_0 + e_A(t, a) ∑_{s=a}^{t−1} e_A(a, s + 1)f(s)
= e_A(t, a)y_0 + ∑_{s=a}^{t−1} e_A(t, s + 1)f(s),

where in the last two steps we used properties (viii) and (vi) in Theorem 1.84. □
Example 1.93.
Solve the system
From Exercise 1.72, we can choose
Then
From (1.47), we have for t ≥ 0,