
1. Basic Difference Calculus

1.9. Vector Difference Equations

In this section, we will examine the properties of the linear vector difference equation with variable coefficients

 $$\displaystyle{ \Delta y(t)\ =\ A(t)y(t) + f(t),\quad t \in \mathbb{N}_{a}, }$$

(1.40)

and the corresponding homogeneous system

 $$\displaystyle{ \Delta u(t)\ =\ A(t)u(t),\quad t \in \mathbb{N}_{a}, }$$

(1.41)

where  $$f: \mathbb{N}_{a} \rightarrow \mathbb{R}^{n}$$ and the real n × n matrix function A(t) will be assumed to be a regressive matrix function on  $$\mathbb{N}_{a}$$ (that is, I + A(t) is nonsingular for all  $$t \in \mathbb{N}_{a}$$ ). With these assumptions, it is easy to show that for any  $$t_{0} \in \mathbb{N}_{a}$$ the initial value problem

 $$\displaystyle\begin{array}{rcl} \Delta x(t)& =& A(t)x(t) + f(t) {}\\ x(t_{0})& =& x_{0}, {}\\ \end{array}$$

where  $$x_{0} \in \mathbb{R}^{n}$$ is a given n × 1 constant vector, has a unique solution on  $$\mathbb{N}_{a}.$$ To solve the nonhomogeneous difference equation (1.40), we will see that we first need to be able to solve the corresponding homogeneous difference equation (1.41). The matrix equation analogue of the homogeneous vector difference equation (1.41) is

 $$\displaystyle{ \Delta U(t)\ =\ A(t)U(t),\quad t \in \mathbb{N}_{a} }$$

(1.42)

where U(t) is an n × n matrix function. Note that U(t) is a solution of (1.42) if and only if each of its column vectors is a solution of (1.41). From the uniqueness of solutions to IVPs for the vector equation (1.40) we have that the matrix IVP

 $$\displaystyle{\Delta U(t) = A(t)U(t),\quad U(t_{0}) = U_{0},}$$

where  $$t_{0} \in \mathbb{N}_{a}$$ and  $$U_{0}$$ is a given n × n constant matrix, has a unique solution on  $$\mathbb{N}_{a}.$$
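
Both uniqueness statements reflect the fact that, since  $$\Delta x(t) = x(t + 1) - x(t)$$ , each IVP is equivalent to the forward recursion  $$x(t + 1) = [I + A(t)]x(t) + f(t)$$ (with f ≡ 0 for the matrix problem), which determines the solution step by step; regressivity lets one also run the recursion backwards. The following minimal numerical sketch iterates this recursion; the function name solve_ivp_forward and the use of NumPy are our own choices, not part of the text.

```python
import numpy as np

def solve_ivp_forward(A, f, t0, x0, steps):
    """Iterate x(t+1) = [I + A(t)] x(t) + f(t) forward from x(t0) = x0.

    A(t) returns an n x n array and f(t) an n-vector; the returned list
    holds x(t0), x(t0+1), ..., x(t0+steps).
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    values = [x]
    for t in range(t0, t0 + steps):
        x = (np.eye(n) + A(t)) @ x + f(t)
        values.append(x)
    return values

# example usage with an arbitrary 2 x 2 system
A = lambda t: np.array([[0.0, 1.0], [-0.5, 0.1 * t]])
f = lambda t: np.array([1.0, 0.0])
print(solve_ivp_forward(A, f, 0, [1.0, 0.0], 3))
```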

Theorem 1.78.

Assume A(t) is a regressive matrix function on  $$\mathbb{N}_{a}$$ . If  $$\Phi (t)$$ is a solution of (1.42) , then either det  $$\Phi (t)\neq 0$$ for all  $$t \in \mathbb{N}_{a}$$ or det  $$\Phi (t) = 0$$ for all  $$t \in \mathbb{N}_{a}$$ .

Proof.

Since  $$\Phi (t)$$ is a solution of (1.42) on  $$\mathbb{N}_{a}$$ ,

 $$\displaystyle{\Phi (t + 1)\ = [I + A(t)]\Phi (t),\quad t \in \mathbb{N}_{a}.}$$

Therefore,

 $$\displaystyle{ \text{det}\ \Phi (t + 1) = \text{det}\ [I + A(t)]\ \text{det}\ \Phi (t), }$$

(1.43)

for all  $$t \in \mathbb{N}_{a}$$ . Now either  $$\text{det}\ \Phi (a)\neq 0$$ or  $$\text{det}\ \Phi (a) = 0$$ . Since det [I + A(t)] ≠ 0 for all  $$t \in \mathbb{N}_{a}$$ , we have by (1.43) that if  $$\text{det}\ \Phi (a)\neq 0$$ , then  $$\text{det}\ \Phi (t)\neq 0$$ for all  $$t \in \mathbb{N}_{a}$$ , while if  $$\text{det}\ \Phi (a) = 0$$ , then  $$\text{det}\ \Phi (t) = 0$$ for all  $$t \in \mathbb{N}_{a}$$ . □ 

Definition 1.79.

We say that  $$\Phi (t)$$ is a fundamental matrix of the vector difference equation (1.41) provided  $$\Phi (t)$$ is a solution of the matrix equation (1.42) and  $$\det \Phi (t)\neq 0$$ for  $$t \in \mathbb{N}_{a}$$ .

Definition 1.80.

If A is a regressive matrix function on  $$\mathbb{N}_{a}$$ , then we define the matrix exponential function,  $$e_{A}(t,t_{0})$$ , based at  $$t_{0} \in \mathbb{N}_{a}$$ , to be the unique solution of the matrix IVP

 $$\displaystyle{\Delta U(t) = A(t)U(t),\quad U(t_{0}) = I.}$$

From Exercise 1.71 we have that  $$\Phi (t)$$ is a fundamental matrix of  $$\Delta u(t) = A(t)u(t)$$ if and only if its columns are n linearly independent solutions of the vector equation  $$\Delta u(t) = A(t)u(t)$$ on  $$\mathbb{N}_{a}$$ . To find a formula for the matrix exponential function,  $$e_{A}(t,t_{0})$$ , we want to solve the IVP

 $$\displaystyle{U(t + 1) = [I + A(t)]U(t),\quad t \in \mathbb{N}_{a},\quad U(t_{0}) = I.}$$

Iterating this equation we get

 $$\displaystyle{e_{A}(t,t_{0}) = \left \{\begin{array}{@{}l@{\quad }l@{}} \;^{{\ast}}\prod _{ s=t_{0}}^{t-1}[I + A(s)],\quad t \in \mathbb{N}_{ t_{0}} \quad \\ \quad \prod _{s=t}^{t_{0}-1}[I + A(s)]^{-1},\quad t \in \mathbb{N}_{a}^{t_{0}-1},\quad \end{array} \right.}$$

where it is understood that  $$\prod _{s=t_{0}}^{t_{0}-1}[I + A(s)] = e_{A}(t_{0},t_{0}) = I$$ and for  $$t \in \mathbb{N}_{t_{0}+1}$$

 $$\displaystyle{\;^{{\ast}}\prod _{ s=t_{0}}^{t-1}[I + A(s)]:= [I + A(t - 1)][I + A(t - 2)]\cdots [I + A(t_{ 0})].}$$
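
The product formula above translates directly into a short computation. The following is a minimal numerical sketch (the function name discrete_matrix_exp is our own); it assumes A(s) is regressive at each point so that the inverses in the second case exist. For a constant matrix A it reproduces the powers of I + A appearing in Example 1.81 below.

```python
import numpy as np

def discrete_matrix_exp(A, t, t0, n):
    """e_A(t, t0) via the product formula; A(s) returns an n x n array."""
    E = np.eye(n)
    if t >= t0:
        for s in range(t0, t):
            E = (np.eye(n) + A(s)) @ E                 # *-product: new factors multiply on the left
    else:
        for s in range(t, t0):
            E = E @ np.linalg.inv(np.eye(n) + A(s))    # product of inverses for t < t0
    return E
```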

Example 1.81.

If A is an n × n constant matrix and I + A is invertible, then

 $$\displaystyle{e_{A}(t,t_{0}) = (I + A)^{t-t_{0} },\quad t \in \mathbb{N}_{a}.}$$

Similar to the proof of Theorem 1.16 one can prove (Exercise 1.73) the following theorem.

Theorem 1.82.

The set of all n × n regressive matrix functions on  $$\mathbb{N}_{a}$$ , with the addition ⊕ defined by

 $$\displaystyle{(A \oplus B)(t):= A(t) + B(t) + A(t)B(t),\quad t \in \mathbb{N}_{a}}$$

is a group. Furthermore, the additive inverse of a regressive matrix function A defined on  $$\mathbb{N}_{a}$$ is given by

 $$\displaystyle{(\ominus A)(t):= -[I + A(t)]^{-1}A(t),\quad t \in \mathbb{N}_{ a}.}$$
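
A short sketch of these two operations (again in NumPy; the names circle_plus and circle_minus are ours). Note that the definition gives  $$I + (A \oplus B)(t) = [I + A(t)][I + B(t)]$$ , which explains both why A ⊕ B is again regressive and why ⊖A acts as the additive inverse.

```python
import numpy as np

def circle_plus(A, B, t):
    """(A ⊕ B)(t) = A(t) + B(t) + A(t) B(t)."""
    return A(t) + B(t) + A(t) @ B(t)

def circle_minus(A, t):
    """(⊖A)(t) = -[I + A(t)]^{-1} A(t)."""
    n = A(t).shape[0]
    return -np.linalg.inv(np.eye(n) + A(t)) @ A(t)

# spot check: (A ⊕ (⊖A))(t) is the zero matrix (the group identity)
A = lambda t: np.array([[0.0, 1.0], [-0.5, 0.1 * t]])
print(np.allclose(circle_plus(A, lambda s: circle_minus(A, s), 3), np.zeros((2, 2))))  # True
```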

In the next theorem we give several properties of the matrix exponential. To prove part (vii) of this theorem we will use the following lemma.

Lemma 1.83.

Assume Y (t) and  $$Y (\sigma (t))$$ are invertible matrices. Then

 $$\displaystyle{\Delta Y ^{-1}(t) = -Y ^{-1}(\sigma (t))\Delta Y (t)Y ^{-1}(t) = -Y ^{-1}(t)\Delta Y (t)Y ^{-1}(\sigma (t)).}$$

Proof.

Taking the difference of both sides of  $$Y(t)Y^{-1}(t) = I$$ we get that

 $$\displaystyle{Y (\sigma (t))\Delta Y ^{-1}(t) + \Delta Y (t)Y ^{-1}(t) = 0.}$$

Solving this last equation for  $$\Delta Y ^{-1}(t)$$ we get that

 $$\displaystyle{\Delta Y ^{-1}(t) = -Y ^{-1}(\sigma (t))\Delta Y (t)Y ^{-1}(t).}$$

Similarly, one can use  $$Y^{-1}(t)Y(t) = I$$ to get that

 $$\displaystyle{\Delta Y ^{-1}(t) = -Y ^{-1}(t)\Delta Y (t)Y ^{-1}(\sigma (t)).}$$

 □ 
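
Since  $$\Delta Y^{-1}(t) = Y^{-1}(t + 1) - Y^{-1}(t)$$ , the lemma is a purely algebraic identity between two invertible matrices, so it is easy to spot-check numerically; the sketch below is only such a check, not part of the development.

```python
import numpy as np

rng = np.random.default_rng(0)
Y0 = np.eye(3) + 0.1 * rng.standard_normal((3, 3))   # Y(t): a small random perturbation of I
Y1 = np.eye(3) + 0.1 * rng.standard_normal((3, 3))   # Y(sigma(t)) = Y(t + 1)
dY = Y1 - Y0                                          # Delta Y(t)
dYinv = np.linalg.inv(Y1) - np.linalg.inv(Y0)         # Delta Y^{-1}(t)
print(np.allclose(dYinv, -np.linalg.inv(Y1) @ dY @ np.linalg.inv(Y0)))  # True
print(np.allclose(dYinv, -np.linalg.inv(Y0) @ dY @ np.linalg.inv(Y1)))  # True
```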

Theorem 1.84.

Assume A and B are regressive matrix functions on  $$\mathbb{N}_{a}$$ and  $$s,r \in \mathbb{N}_{a}.$$ Then the following hold:

(i)

 $$\Delta e_{A}(t,s) = A(t)e_{A}(t,s);$$

(ii)

 $$e_{A}(s,s) = I;$$

(iii)

 $$\det e_{A}(t,s)\neq 0$$ for  $$t \in \mathbb{N}_{a};$$

(iv)

 $$e_{A}(t,s)$$ is a fundamental matrix of (1.41);

(v)

 $$e_{A}(\sigma (t),s) = [I + A(t)]e_{A}(t,s);$$

(vi)

(semigroup property)  $$e_{A}(t,r)e_{A}(r,s) = e_{A}(t,s)$$ holds for  $$t,r,s \in \mathbb{N}_{a};$$

(vii)

 $$e_{A}^{-1}(t,s) = e_{\ominus A^{{\ast}}}^{{\ast}}(t,s);$$

(viii)

 $$e_{A}(t,s) = e_{A}^{-1}(s,t) = e_{\ominus A^{{\ast}}}^{{\ast}}(s,t),$$ where  $$A^{{\ast}}$$ denotes the conjugate transpose of the matrix A;

(ix)

 $$B(t)e_{A}(t,t_{0}) = e_{A}(t,t_{0})B(t),$$ if A(t) and B(τ) commute for all  $$t,\tau \in \mathbb{N}_{a};$$

(x)

 $$e_{A}(t,s)e_{B}(t,s) = e_{A\oplus B}(t,s),$$ if A(t) and B(τ) commute for all  $$t,\tau \in \mathbb{N}_{a}$$.

Proof.

Note that (i) and (ii) follow from the definition of the matrix exponential. Part (iii) follows from Theorem 1.78 and part (ii). Parts (i) and (iii) imply part (iv) holds. Since  $$e_{A}(\sigma (t),s) = e_{A}(t,s) + \Delta e_{A}(t,s)$$ , we have that

 $$\displaystyle{e_{A}(\sigma (t),s) = e_{A}(t,s) + \Delta e_{A}(t,s) = [I + A(t)]e_{A}(t,s)}$$

and hence (v) holds. To see that the semigroup property (vi) holds, fix  $$r,s \in \mathbb{N}_{a}$$ and set  $$\Phi (t) = e_{A}(t,r)e_{A}(r,s).$$ Then

 $$\displaystyle\begin{array}{rcl} \Delta \Phi (t)& =& \Delta e_{A}(t,r)e_{A}(r,s) {}\\ & =& A(t)e_{A}(t,r)e_{A}(r,s) {}\\ & =& A(t)\Phi (t). {}\\ \end{array}$$

Next we show that  $$\Phi (s) = e_{A}(s,r)e_{A}(r,s) = I.$$ First note that if r = s, then  $$\Phi (s) = e_{A}(s,s)e_{A}(s,s) = I$$ . Hence we can assume that  $$s\neq r$$ . For the case r > s ≥ a, we have that

 $$\displaystyle\begin{array}{rcl} \Phi (s)& =& e_{A}(s,r)e_{A}(r,s) = \left (\prod _{\tau =s}^{r-1}[I + A(\tau )]^{-1}\right )\left (\;^{{\ast}}\prod _{ \tau =s}^{r-1}[I + A(\tau )]\right ) {}\\ & =& [I + A(s)]^{-1}\cdots [I + A(r - 1)]^{-1}[I + A(r - 1)]\cdots [I + A(s)] = I. {}\\ \end{array}$$

Similarly, for the case s > r ≥ a one can show that  $$\Phi (s) = I.$$ Hence, by the uniqueness of solutions for IVPs we get that  $$e_{A}(t,r)e_{A}(r,s) = e_{A}(t,s)$$ . To see that (vii) holds, fix  $$s \in \mathbb{N}_{a}$$ and let

 $$\displaystyle{Y (t):= \left [e_{A}^{-1}(t,s)\right ]^{{\ast}},\quad t \in \mathbb{N}_{ a}.}$$

Then

 $$\displaystyle\begin{array}{rcl} \Delta Y (t)& =& \left [\Delta e_{A}^{-1}(t,s)\right ]^{{\ast}} {}\\ & =& -\left [e_{A}^{-1}(\sigma (t),s)\Delta e_{ A}(t,s)e_{A}^{-1}(t,s)\right ]^{{\ast}} {}\\ & =& -\left [e_{A}^{-1}(\sigma (t),s)A(t)\right ]^{{\ast}} {}\\ & =& -\left [\left ([I + A(t)]e_{A}(t,s)\right )^{-1}A(t)\right ]^{{\ast}} {}\\ & =& -\left [e_{A}^{-1}(t,s)[I + A(t)]^{-1}A(t)\right ]^{{\ast}} {}\\ & =& -A^{{\ast}}(t)[I + A^{{\ast}}(t)]^{-1}\left [e_{ A}^{-1}(t,s)\right ]^{{\ast}} {}\\ & =& (\ominus A^{{\ast}})(t)Y (t). {}\\ \end{array}$$

Since  $$\left [e_{A}^{-1}(t,s)\right ]^{{\ast}}$$ and  $$e_{\ominus A^{{\ast}}}(t,s)$$ satisfy the same matrix IVP we get

 $$\displaystyle{\left (e_{A}^{-1}(t,s)\right )^{{\ast}} = e_{ \ominus A^{{\ast}}}(t,s).}$$

Taking the conjugate transpose of both sides of this last equation we get that part (vii) holds. The proof of (viii) is Exercise 1.78 and the proof of (ix) is Exercise 1.79.

To see that (x) holds, let  $$\Phi (t) = e_{A}(t,s)e_{B}(t,s)$$ ,  $$t \in \mathbb{N}_{a}$$ . Then by the product rule

 $$\displaystyle\begin{array}{rcl} \Delta \Phi (t)& =& \Delta \left [e_{A}(t,s)e_{B}(t,s)\right ] {}\\ & =& e_{A}(\sigma (t),s)\Delta e_{B}(t,s) + \Delta e_{A}(t,s)e_{B}(t,s) {}\\ & =& [I + A(t)]e_{A}(t,s)B(t)e_{B}(t,s) + A(t)e_{A}(t,s)e_{B}(t,s) {}\\ & =& [A(t) + B(t) + A(t)B(t)]e_{A}(t,s)e_{B}(t,s) {}\\ & =& [(A \oplus B)(t)]\Phi (t) {}\\ \end{array}$$

for  $$t \in \mathbb{N}_{a}$$ , where we used part (ix) to commute B(t) past  $$e_{A}(t,s)$$ . Since  $$\Phi (s) = I$$ , we have that  $$\Phi (t)$$ and  $$e_{A\oplus B}(t,s)$$ satisfy the same matrix IVP. Hence, by the uniqueness theorem for solutions of matrix IVPs we get the desired result  $$e_{A}(t,s)e_{B}(t,s) = e_{A\oplus B}(t,s)$$ .  □
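
For a constant regressive matrix A, Example 1.81 gives  $$e_{A}(t,s) = (I + A)^{t-s}$$ , so several of these properties can be spot-checked numerically. The sketch below checks (vi) and (vii) for one real matrix; it is only a sanity check, not part of the proof.

```python
import numpy as np

A = np.array([[1.0, 2.0], [1.0, 2.0]])                     # constant and regressive: det(I + A) = 4
eA = lambda t, s: np.linalg.matrix_power(np.eye(2) + A, t - s)

t, r, s = 7, 4, 2
print(np.allclose(eA(t, r) @ eA(r, s), eA(t, s)))           # (vi) semigroup property

ominus_Astar = -np.linalg.inv(np.eye(2) + A.T) @ A.T        # (circle-minus A*)(t); * is transpose here
e_ominus = np.linalg.matrix_power(np.eye(2) + ominus_Astar, t - s)
print(np.allclose(np.linalg.inv(eA(t, s)), e_ominus.T))     # (vii): e_A^{-1}(t,s) = e_{circle-minus A*}^*(t,s)
```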

Now for any nonsingular matrix  $$U_{0}$$ , the solution U(t) of (1.42) with  $$U(t_{0}) = U_{0}$$ is a fundamental matrix of (1.41), so there are always infinitely many fundamental matrices of (1.41). In particular, if A is a regressive matrix function on  $$\mathbb{N}_{a}$$ , then  $$\Phi (t) = e_{A}(t,t_{0})$$ is a fundamental matrix of the vector equation  $$\Delta u(t) = A(t)u(t).$$

The following theorem characterizes fundamental matrices for (1.41).

Theorem 1.85.

If  $$\Phi (t)$$ is a fundamental matrix for (1.41) , then  $$\Psi (t)$$ is another fundamental matrix if and only if there is a nonsingular constant matrix C such that

 $$\displaystyle{\Psi (t)\ =\ \Phi (t)C,}$$

for  $$t \in \mathbb{N}_{a}$$ .

Proof.

Let  $$\Psi (t) = \Phi (t)C$$ , where  $$\Phi (t)$$ is a fundamental matrix of (1.41) and C is a nonsingular constant matrix. Then  $$\Psi (t)$$ is nonsingular for all  $$t \in \mathbb{N}_{a}$$ , and

 $$\displaystyle\begin{array}{rcl} \Delta \Psi (t)\ & =& \Delta \Phi (t)C {}\\ & =& \ A(t)\Phi (t)C {}\\ & =& \ A(t)\Psi (t). {}\\ \end{array}$$

Therefore  $$\Psi (t)$$ is a fundamental matrix of (1.41).

Conversely, assume  $$\Phi (t)$$ and  $$\Psi (t)$$ are fundamental matrices of (1.41). For some  $$t_{0} \in \mathbb{N}_{a}$$ , let

 $$\displaystyle{C:=\ \Phi ^{-1}(t_{0})\Psi (t_{0}).}$$

Then  $$\Psi (t)$$ and  $$\Phi (t)C$$ are both solutions of (1.42) satisfying the same initial condition at  $$t_{0}$$ . By uniqueness,

 $$\displaystyle{\Psi (t)\ =\ \Phi (t)C,}$$

for all  $$t \in \mathbb{N}_{a}$$ . □ 

The proof of the following theorem is similar to that of Theorem 1.85 and is left as an exercise (Exercise 1.68).

Theorem 1.86.

If  $$\Phi (t)$$ is a fundamental matrix of (1.41) , then the general solution of (1.41) is given by

 $$\displaystyle{u(t)\ =\ \Phi (t)c,}$$

where c is an arbitrary constant column vector.

Hence we see that to solve the vector equation (1.41) we just need to find the fundamental matrix  $$\Phi (t) = e_{A}(t,t_{0}).$$ We now set out to prove the Putzer algorithm (Theorem 1.88), which gives a convenient formula for  $$e_{A}(t,0)$$ , for  $$t \in \mathbb{N}_{0}$$ , when A is a constant n × n matrix. In the proof of this theorem we will use the Cayley–Hamilton Theorem, which states that every square constant matrix satisfies its own characteristic equation. We now give an example to illustrate this important theorem.

Example 1.87.

Show directly that the matrix

 $$\displaystyle{A = \left [\begin{array}{*{10}c} 2&-1\\ 3 &-4 \end{array} \right ]}$$

satisfies its own characteristic equation. The characteristic equation of A is

 $$\displaystyle{\lambda ^{2} + 2\lambda - 5 = 0.}$$

Then

 $$\displaystyle\begin{array}{rcl} A^{2} + 2A - 5I& =& \left [\begin{array}{*{10}c} \;\;\;1 & 2 \\ -6&13 \end{array} \right ] + \left [\begin{array}{*{10}c} 4&-2\\ 6 &-8 \end{array} \right ] -\left [\begin{array}{*{10}c} 5&0\\ 0 &5 \end{array} \right ] {}\\ & =& \left [\begin{array}{*{10}c} 0&0\\ 0 &0 \end{array} \right ] {}\\ \end{array}$$

and so A does satisfy its own characteristic equation.
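
As a quick numerical confirmation of this computation (purely a sanity check, not part of the example):

```python
import numpy as np

A = np.array([[2, -1], [3, -4]])
print(A @ A + 2 * A - 5 * np.eye(2))   # prints the 2 x 2 zero matrix
```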

The proof of the next theorem is motivated by the fact that, by the Cayley–Hamilton Theorem,  $$A^{n}$$ can be written as a linear combination of the matrices I, A,  $$A^{2}$$ ,  $$\ldots$$ ,  $$A^{n-1}$$ , and therefore every nonnegative integer power  $$A^{t}$$ of A can also be written as a linear combination of I, A,  $$A^{2}$$ ,  $$\ldots$$ ,  $$A^{n-1}$$ .

Theorem 1.88 (Putzer’s Algorithm).

Let  $$\lambda _{1},\lambda _{2},\ldots,\lambda _{n}$$ be the (not necessarily distinct) eigenvalues of the constant n by n matrix A, with each eigenvalue repeated as many times as its multiplicity. Define the matrices  $$M_{k}$$ , 0 ≤ k ≤ n, recursively by

 $$\displaystyle\begin{array}{rcl} M_{0}& =& I {}\\ M_{k}& =& (A -\lambda _{k}I)M_{k-1},\quad 1 \leq k \leq n. {}\\ \end{array}$$

Then

 $$\displaystyle{A^{t} =\sum _{ k=0}^{n-1}p_{ k+1}(t)M_{k},\quad t \in \mathbb{N}_{0},}$$

where the  $$p_{k}(t)$$ , 1 ≤ k ≤ n, are chosen so that

 $$\displaystyle{ \left [\begin{array}{*{10}c} p_{1}(t + 1) \\ p_{2}(t + 1)\\ \vdots \\ p_{n}(t + 1) \end{array} \right ]\ \ = \left [\begin{array}{*{10}c} \lambda _{1} & 0 &0&\cdots & 0 \\ 1& \lambda _{2} & 0&\cdots & 0 \\ 0& 1 & \lambda _{3} & \cdots & 0\\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0&\cdots &0& 1 &\lambda _{n} \end{array} \right ]\left [\begin{array}{*{10}c} p_{1}(t) \\ p_{2}(t)\\ \vdots \\ p_{n}(t) \end{array} \right ] }$$

(1.44)

and

 $$\displaystyle{ \left [\begin{array}{*{10}c} p_{1}(0) \\ p_{2}(0)\\ \vdots \\ p_{n}(0) \end{array} \right ] = \left [\begin{array}{*{10}c} 1\\ 0\\ \vdots \\ 0 \end{array} \right ]. }$$

(1.45)

Proof.

Let the matrices  $$M_{k}$$ , 0 ≤ k ≤ n, be defined as in the statement of this theorem. Since for each fixed t ≥ 0,  $$A^{t}$$ is a linear combination of I, A,  $$A^{2}$$ ,  $$\ldots$$ ,  $$A^{n-1}$$ , we also have that for each fixed t,  $$A^{t}$$ is a linear combination of  $$M_{0},M_{1},M_{2},\ldots,M_{n-1}$$ , that is

 $$\displaystyle{A^{t}\ =\ \sum _{k=0}^{n-1}p_{k+1}(t)M_{k}}$$

for t ≥ 0. It remains to show that the  $$p_{k}$$ 's are as in the statement of this theorem. Since  $$A^{t+1} = A \cdot A^{t}$$ , we have that

 $$\displaystyle\begin{array}{rcl} \sum _{k=0}^{n-1}p_{ k+1}(t + 1)M_{k}& =& A\sum _{k=0}^{n-1}p_{ k+1}(t)M_{k} \\ & =& \sum _{k=0}^{n-1}p_{ k+1}(t)\left [AM_{k}\right ] \\ & =& \sum _{k=0}^{n-1}p_{ k+1}(t)\left [M_{k+1} +\lambda _{k+1}M_{k}\right ] \\ & =& \sum _{k=0}^{n-1}p_{ k+1}(t)M_{k+1} +\sum _{ k=0}^{n-1}\lambda _{ k+1}p_{k+1}(t)M_{k} \\ & =& \sum _{k=1}^{n-1}p_{ k}(t)M_{k} +\sum _{ k=0}^{n-1}p_{ k+1}(t)\lambda _{k+1}M_{k}, \\ & =& \lambda _{1}p_{1}(t)M_{0} +\sum _{ k=1}^{n-1}\left [p_{ k}(t) +\lambda _{k+1}p_{k+1}(t)\right ]M_{k},{}\end{array}$$

(1.46)

where in the second to last step we have replaced k by k − 1 in the first sum and used the fact that (by the Cayley–Hamilton Theorem)  $$M_{n} = 0$$ . Note that equation (1.46) is satisfied if the  $$p_{k}(t)$$ ,  $$k = 1,2,\ldots,n,$$ are chosen to satisfy the system (1.44). Since  $$A^{0} = I = p_{1}(0)I +\ldots +p_{n}(0)M_{n-1}$$ , we must have that (1.45) is satisfied. □
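
The proof is constructive and translates directly into a short routine. The following sketch (in NumPy; the name putzer_power is our own) builds the matrices  $$M_{k}$$ , iterates the triangular system (1.44)–(1.45), and assembles  $$A^{t}$$ ; it only assumes that the eigenvalue routine returns the eigenvalues repeated according to multiplicity, in some fixed order.

```python
import numpy as np

def putzer_power(A, t):
    """Compute A^t (t a nonnegative integer) via Putzer's algorithm (Theorem 1.88)."""
    n = A.shape[0]
    lam = np.linalg.eigvals(A)                 # eigenvalues, repeated with multiplicity
    M = [np.eye(n)]                            # M_0 = I
    for k in range(1, n):
        M.append((A - lam[k - 1] * np.eye(n)) @ M[-1])   # M_k = (A - lambda_k I) M_{k-1}
    p = np.zeros(n, dtype=lam.dtype)           # p(0) = (1, 0, ..., 0)^T, condition (1.45)
    p[0] = 1.0
    for _ in range(t):                         # step the triangular system (1.44) forward
        q = np.empty_like(p)
        q[0] = lam[0] * p[0]
        for k in range(1, n):
            q[k] = p[k - 1] + lam[k] * p[k]
        p = q
    return sum(p[k] * M[k] for k in range(n))

A = np.array([[2.0, -1.0], [3.0, -4.0]])       # the matrix from Example 1.87
print(np.allclose(putzer_power(A, 6), np.linalg.matrix_power(A, 6)))   # True
```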

The following example shows how the Putzer algorithm can be used to find the exponential function  $$e_{A}(t,0)$$ when A is a constant matrix; we refer to this as finding the matrix exponential  $$e_{A}(t,0)$$ using the Putzer algorithm.

Example 1.89.

Use the Putzer algorithm (Theorem 1.88) to find e A (t, 0),  $$t \in \mathbb{N}_{0}$$ , where

 $$\displaystyle{A:= \left [\begin{array}{*{10}c} 1&2\\ 1 &\;2 \end{array} \right ].}$$

Note  $$e_{A}(t,0) =\prod _{ \tau =0}^{t-1}\left [I + A\right ] = (I + A)^{t}$$ . So to find  $$e_{A}(t,0)$$ we just need to find  $$B^{t}$$ where

 $$\displaystyle{B:= I+A = \left [\begin{array}{*{10}c} 2&2\\ 1 &3 \end{array} \right ].}$$

We now apply Putzer’s algorithm (Theorem 1.88) to find  $$B^{t}$$ . The characteristic equation for B is given by  $$\lambda ^{2} - 5\lambda + 4 = (\lambda -1)(\lambda -4) = 0$$ . Hence the eigenvalues of B are given by  $$\lambda _{1} = 1,$$  $$\lambda _{2} = 4$$ . It follows that  $$M_{0} = I$$ and

 $$\displaystyle{M_{1} = B-\lambda _{1}I = \left [\begin{array}{*{10}c} 1&2\\ 1 &2 \end{array} \right ].}$$

To find  $$p_{1}(t)$$ we now solve the IVP

 $$\displaystyle{p_{1}(t + 1) =\lambda _{1}p_{1}(t) = p_{1}(t),\quad p_{1}(0) = 1.}$$

It follows that  $$p_{1}(t) = 1$$ . Next, to find  $$p_{2}(t)$$ we solve the IVP

 $$\displaystyle{p_{2}(t + 1) = p_{1}(t) +\lambda _{2}p_{2}(t) = 1 + 4p_{2}(t),\quad p_{2}(0) = 0.}$$

This gives us the IVP

 $$\displaystyle{\Delta p_{2}(t) = 3p_{2}(t) + 1,\quad p_{2}(0) = 0.}$$

Using the variation of constants formula in Theorem 1.68 we get

 $$\displaystyle\begin{array}{rcl} p_{2}(t)& =& \int _{0}^{t}e_{ 3}(t,\sigma (s))\Delta s {}\\ & =& \int _{0}^{t}e_{ \ominus 3}(\sigma (s),t)\Delta s {}\\ & =& \int _{0}^{t}[1 + \ominus 3]e_{ \ominus 3}(s,t)\Delta s {}\\ & =& \frac{1} {4}\int _{0}^{t}e_{ \ominus 3}(s,t)\Delta s {}\\ & =& -\frac{1} {3}e_{\ominus 3}(s,t)\vert _{s=0}^{s=t} {}\\ & =& -\frac{1} {3}e_{\ominus 3}(t,t) + \frac{1} {3}e_{\ominus 3}(0,t) {}\\ & =& -\frac{1} {3} + \frac{1} {3}e_{3}(t,0) {}\\ & =& -\frac{1} {3} + \frac{1} {3}4^{t}. {}\\ \end{array}$$

It follows that

 $$\displaystyle\begin{array}{rcl} e_{A}(t,0)& =& p_{1}(t)M_{0} + p_{2}(t)M_{1} {}\\ & =& \left [\begin{array}{*{10}c} 1&0\\ 0 &1 \end{array} \right ] +\bigg (-\frac{1} {3} + \frac{1} {3}4^{t}\bigg)\left [\begin{array}{*{10}c} 1&2 \\ 1&2 \end{array} \right ] {}\\ & =& \frac{1} {3}\left [\begin{array}{*{10}c} \;\;2 + 4^{t} &-2 + 2 \cdot 4^{t} \\ -1 + 4^{t}& \;\;1 + 2 \cdot 4^{t} \end{array} \right ]. {}\\ \end{array}$$

It follows from this that

 $$\displaystyle{y(t) = c_{1}\left [\begin{array}{*{10}c} \;\;2 + 4^{t} \\ -1 + 4^{t} \end{array} \right ]+c_{2}\left [\begin{array}{*{10}c} -2 + 2 \cdot 4^{t} \\ \;\;1 + 2 \cdot 4^{t} \end{array} \right ]}$$

is a general solution of

 $$\displaystyle{\Delta y(t) = \left [\begin{array}{*{10}c} 1&2\\ 1 &\;2 \end{array} \right ]y(t),\quad t \in \mathbb{N}_{0}.}$$
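
As a quick check of the computation in this example, the closed form for  $$e_{A}(t,0)$$ can be compared with direct powers of I + A (a sanity check only):

```python
import numpy as np

A = np.array([[1.0, 2.0], [1.0, 2.0]])
for t in range(6):
    closed = (np.array([[2.0, -2.0], [-1.0, 1.0]]) + 4.0 ** t * np.array([[1.0, 2.0], [1.0, 2.0]])) / 3.0
    assert np.allclose(closed, np.linalg.matrix_power(np.eye(2) + A, t))
print("closed form for e_A(t, 0) agrees with (I + A)^t")
```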

Example 1.90.

Use Putzer’s algorithm for finding the matrix exponential e A (t, 0) to solve the vector equation

 $$\displaystyle{\Delta u(t) = Au(t),\quad t \in \mathbb{N}_{0},}$$

where A is the regressive matrix given by

 $$\displaystyle{A = \left [\begin{array}{*{10}c} \;\;1 &1\\ -1 &3 \end{array} \right ].}$$

Let B := I + A; then  $$e_{A}(t,0) = [I + A]^{t} = B^{t}$$ , where

 $$\displaystyle{B = \left [\begin{array}{*{10}c} \;\;2 &1\\ -1 &4 \end{array} \right ].}$$

The characteristic equation of the constant matrix B is given by

 $$\displaystyle{\lambda ^{2} - 6\lambda + 9 = 0}$$

and so the characteristic values are  $$\lambda _{1} =\lambda _{2} = 3.$$ It follows that

 $$\displaystyle{M_{0} = I = \left [\begin{array}{*{10}c} 1&0\\ 0 &1 \end{array} \right ]\quad \mbox{ and}\quad M_{1} = \left (B - 3I\right )M_{0} = \left [\begin{array}{*{10}c} -1&1\\ -1 &1 \end{array} \right ].}$$

Next we solve the IVP

 $$\displaystyle{p(t+1) = \left [\begin{array}{*{10}c} \;3&0\\ 1 &3 \end{array} \right ]p(t),\quad p(0) = \left [\begin{array}{*{10}c} 1\\ 0 \end{array} \right ].}$$

Hence  $$p_{1}(t)$$ solves the IVP

 $$\displaystyle{p_{1}(t + 1) = 3p_{1}(t),\quad p_{1}(0) = 1.}$$

Since  $$\Delta p_{1}(t) = 2p_{1}(t),$$  $$p_{1}(0) = 1$$ , we have that  $$p_{1}(t) = e_{2}(t,0) = 3^{t}$$ . Also  $$p_{2}(t)$$ solves the IVP

 $$\displaystyle{p_{2}(t + 1) = p_{1}(t) + 3p_{2}(t),\quad p_{2}(0) = 0.}$$

It follows that  $$p_{2}(t)$$ solves the IVP

 $$\displaystyle{\Delta p_{2}(t) = 2p_{2}(t) + e_{2}(t,0),\quad p_{2}(0) = 0.}$$

Using the variation of constants formula in Theorem 1.68 we get that

 $$\displaystyle\begin{array}{rcl} p_{2}(t)& =& \int _{0}^{t}e_{2}(t,\sigma (\tau ))e_{2}(\tau,0)\Delta \tau {}\\ & =& \int _{0}^{t}e_{\ominus 2}(\sigma (\tau ),t)e_{2}(\tau,0)\Delta \tau {}\\ & =& \frac{1} {3}\int _{0}^{t}e_{\ominus 2}(\tau,t)e_{2}(\tau,0)\Delta \tau {}\\ & =& \frac{1} {3}\int _{0}^{t}e_{2}(t,\tau )e_{2}(\tau,0)\Delta \tau {}\\ & =& \frac{1} {3}e_{2}(t,0)\int _{0}^{t}1\Delta \tau {}\\ & =& \frac{1} {3}te_{2}(t,0) {}\\ & =& \frac{1} {3}t3^{t}. {}\\ \end{array}$$

Hence by Putzer’s algorithm

 $$\displaystyle\begin{array}{rcl} e_{A}(t,0)& =& B^{t} {}\\ & =& p_{1}(t)M_{0} + p_{2}(t)M_{1} {}\\ & =& 3^{t}\left [\begin{array}{*{10}c} 1&0 \\ 0&1 \end{array} \right ] + \frac{1} {3}t3^{t}\left [\begin{array}{*{10}c} -1&1 \\ -1&1 \end{array} \right ] {}\\ & =& 3^{t}\left [\begin{array}{*{10}c} 1 -\frac{1} {3}t& \frac{1} {3}t \\ -\frac{1} {3}t &1 + \frac{1} {3}t \end{array} \right ]. {}\\ \end{array}$$

Since  $$e_{A}(t,0)$$ is a fundamental matrix of  $$\Delta u(t) = Au(t)$$ , we have by Theorem 1.86 that

 $$\displaystyle\begin{array}{rcl} u(t)& =& 3^{t}\left [\begin{array}{*{10}c} 1 -\frac{1} {3}t& \frac{1} {3}t \\ -\frac{1} {3}t &1 + \frac{1} {3}t \end{array} \right ]c {}\\ & =& c_{1}3^{t}\left [\begin{array}{*{10}c} 1 -\frac{1} {3}t \\ -\frac{1} {3}t \end{array} \right ] + c_{2}3^{t}\left [\begin{array}{*{10}c} \frac{1} {3}t \\ 1 + \frac{1} {3}t \end{array} \right ]{}\\ \end{array}$$

is a general solution.

Fundamental matrices can be used to solve the nonhomogeneous equation (1.40).

Theorem 1.91 (Variation of Parameters (Constants)).

Assume  $$\Phi (t)$$ is a fundamental matrix of (1.41) . Then the unique solution of (1.40) that satisfies the initial condition y(a) = y 0 is given by the variation of parameters formula

 $$\displaystyle{ y(t)\ = \Phi (t)\Phi ^{-1}(a)y_{ 0} + \Phi (t)\int _{a}^{t}\Phi ^{-1}(s + 1)f(s)\Delta s, }$$

(1.47)

for  $$t \in \mathbb{N}_{a}$$ .

Proof.

Let y(t) be given by (1.47) for  $$t \in \mathbb{N}_{a}$$ . Using the vector version of the Leibniz formula (1.29), we have

 $$\displaystyle\begin{array}{rcl} \Delta y(t)\ & =& \Delta \Phi (t)\Phi ^{-1}(a)y_{ 0} + \Delta \Phi (t)\int _{a}^{t}\Phi ^{-1}(s + 1)f(s)\Delta s {}\\ & & +\Phi (t + 1)\Phi ^{-1}(t + 1)f(t) {}\\ & =& A(t)\Phi (t)\Phi ^{-1}(a)y_{ 0} + A(t)\Phi (t)\int _{a}^{t}\Phi ^{-1}(s + 1)f(s)\Delta s + f(t) {}\\ & =& A(t)\left [\Phi (t)\Phi ^{-1}(a)y_{ 0} + \Phi (t)\int _{a}^{t}\Phi ^{-1}(s + 1)f(s)\Delta s\right ] + f(t) {}\\ & =& A(t)y(t) + f(t). {}\\ \end{array}$$

Consequently, y(t) defined by (1.47) is a solution of the nonhomogeneous equation, and also we have that  $$y(a) = y_{0}$$ .  □

A special case of the above theorem is the following result.

Theorem 1.92.

Assume A(t) is a regressive matrix function on  $$\mathbb{N}_{a}$$ and assume  $$f: \mathbb{N}_{a} \rightarrow \mathbb{R}^{n}$$ . Then the unique solution of the IVP

 $$\displaystyle{\Delta y(t) = A(t)y(t) + f(t),\quad t \in \mathbb{N}_{a}}$$

 $$\displaystyle{y(a) = y_{0}}$$

is given by the variation of constants formula

 $$\displaystyle{y(t) = e_{A}(t,a)y_{0} +\int _{ a}^{t}e_{ A}(t,\sigma (s))f(s)\Delta s,}$$

for  $$t \in \mathbb{N}_{a}$$ .

Proof.

Since  $$\Phi (t) = e_{A}(t,a)$$ is a fundamental matrix of  $$\Delta y(t) = A(t)y(t)$$ , we have by Theorem 1.91 that the solution of our IVP in the statement of this theorem is given by

 $$\displaystyle\begin{array}{rcl} y(t)& =& e_{A}(t,a)e_{A}^{-1}(a,a)y_{ 0} + e_{A}(t,a)\int _{a}^{t}e_{ A}^{-1}(\sigma (s),a)f(s)\Delta s {}\\ & =& e_{A}(t,a)y_{0} +\int _{ a}^{t}e_{ A}(t,a)e_{A}(a,\sigma (s))f(s)\Delta s {}\\ & =& e_{A}(t,a)y_{0} +\int _{ a}^{t}e_{ A}(t,\sigma (s))f(s)\Delta s, {}\\ \end{array}$$

where in the last two steps we used properties (viii) and (vi) in Theorem 1.84. □ 
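
The formula in Theorem 1.92 can be evaluated numerically as stated. The sketch below (names ours) computes  $$e_{A}(t,\sigma (s))$$ by the product formula from earlier in the section and checks the result against the forward recursion; it is a sketch under the assumption that A is regressive, not a definitive implementation.

```python
import numpy as np

def e_A(A, t, s, n):
    """e_A(t, s) for t >= s by the product formula (assumes each I + A(tau) is invertible)."""
    E = np.eye(n)
    for tau in range(s, t):
        E = (np.eye(n) + A(tau)) @ E
    return E

def variation_of_constants(A, f, a, y0, t):
    """y(t) = e_A(t, a) y0 + sum_{s=a}^{t-1} e_A(t, sigma(s)) f(s), with sigma(s) = s + 1."""
    n = len(y0)
    y = e_A(A, t, a, n) @ np.asarray(y0, dtype=float)
    for s in range(a, t):
        y = y + e_A(A, t, s + 1, n) @ f(s)
    return y

# consistency check against the forward recursion y(t+1) = [I + A(t)] y(t) + f(t)
A = lambda t: np.array([[0.0, 1.0], [-0.5, 0.1 * t]])
f = lambda t: np.array([1.0, (-1.0) ** t])
y0, a = np.array([1.0, 0.0]), 0
y = y0
for t in range(6):
    assert np.allclose(y, variation_of_constants(A, f, a, y0, t))
    y = (np.eye(2) + A(t)) @ y + f(t)
print("variation of constants formula matches direct iteration")
```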

Example 1.93.

Solve the system

 $$\displaystyle\begin{array}{rcl} u(t + 1)& =& \left [\begin{array}{ll} \;\;\;0 &\;\;\;1\\ - 2 & - 3 \end{array} \right ]u(t) + \left (\frac{2} {3}\right )^{t}\left [\begin{array}{l} \;\;\;1 \\ - 2 \end{array} \right ],\quad t \in \mathbb{N}_{0}, {}\\ u(0)& =& \left [\begin{array}{l} 1\\ 1 \end{array} \right ]. {}\\ \end{array}$$

From Exercise 1.72, we can choose

 $$\displaystyle\begin{array}{rcl} \Phi (t)& =& \left [\begin{array}{ll} (-2)^{t} &(-1)^{t} \\ (-2)^{t+1} & (-1)^{t+1} \end{array} \right ] {}\\ & =& (-1)^{t}\left [\begin{array}{ll} \;\;\;2^{t} &\;\;\;1 \\ - 2^{t+1} & - 1 \end{array} \right ].{}\\ \end{array}$$

Then

 $$\displaystyle{\Phi ^{-1}(t) = \frac{(-1)^{t}} {2^{t}} \left [\begin{array}{ll} - 1 & - 1\\ \;\;2^{t+1 } & \;\;2^{t} \end{array} \right ].}$$

From (1.47), we have for t ≥ 0,

 $$\displaystyle\begin{array}{rcl} u(t)& =& (-1)^{t}\left [\begin{array}{ll} \;\;\;2^{t} &\;\;\;1 \\ - 2^{t+1} & - 1 \end{array} \right ]\left (\left [\begin{array}{l} - 2\\ \;\;3 \end{array} \right ] +\sum _{ s=0}^{t-1}\left [\begin{array}{l} -.5(-3)^{-s} \\ \;\;\;0 \end{array} \right ]\right ) {}\\ & =& (-1)^{t}\left [\begin{array}{ll} \;\;\;2^{t} &\;\;\;1 \\ - 2^{t+1} & - 1 \end{array} \right ]\left (\left [\begin{array}{l} - 2\\ \;\;3 \end{array} \right ] + \left [\begin{array}{l}.375((-3)^{-t} - 1) \\ \;\;\;\;\;\quad \quad 0 \end{array} \right ]\right ) {}\\ & =& (-1)^{t}\left [\begin{array}{ll} \;\;\;2^{t} &\;\;\;1 \\ - 2^{t+1} & - 1 \end{array} \right ]\left [\begin{array}{l} -.125((-3)^{1-t} + 19) \\ \quad \quad \quad \quad 3 \end{array} \right ]. {}\\ \end{array}$$
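
As a final sanity check (not part of the example), the last expression above can be compared with the values obtained by stepping the given recursion directly:

```python
import numpy as np

M = np.array([[0.0, 1.0], [-2.0, -3.0]])        # coefficient matrix of u(t+1) = M u(t) + f(t)
f = lambda t: (2.0 / 3.0) ** t * np.array([1.0, -2.0])
u = np.array([1.0, 1.0])                        # u(0)

for t in range(8):
    Phi = (-1.0) ** t * np.array([[2.0 ** t, 1.0], [-(2.0 ** (t + 1)), -1.0]])
    closed = Phi @ np.array([-0.125 * ((-3.0) ** (1 - t) + 19.0), 3.0])
    assert np.allclose(u, closed)               # the closed form above equals u(t)
    u = M @ u + f(t)                            # step the recursion forward
print("closed-form solution matches direct iteration")
```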