
Discrete Fractional Calculus (2015)

1. Basic Difference Calculus

1.11. Floquet Systems

In this section we consider the so-called Floquet system

 $$\displaystyle{ u(t + 1) = A(t)u(t),\quad t \in \mathbb{Z}_{\alpha }, }$$

(1.55)

where  $$\alpha \in \mathbb{R}$$ and

 $$\displaystyle{\mathbb{Z}_{\alpha }:=\{\ldots,\alpha -2,\alpha -1,\alpha,\alpha +1,\alpha +2,\ldots \},}$$

and we assume that A(t) is an n × n matrix function with minimum positive period p, where p is a positive integer; we call p the prime period of A(t). Here are two simple scalar examples that are indicative of the behavior of general Floquet systems.

Example 1.102.

Since  $$a(t):= 2 + (-1)^{t}$$ ,  $$t \in \mathbb{Z}_{0}$$ , is periodic with prime period p = 2, the equation

 $$\displaystyle{ u(t + 1) = [2 + (-1)^{t}]u(t),\quad t \in \mathbb{Z}_{ 0} }$$

(1.56)

is a scalar Floquet system with prime period p = 2. Equation (1.56) can be written in the form

 $$\displaystyle{\Delta u(t) = [1 + (-1)^{t}]u(t).}$$

By Theorem 1.14 the general solution of (1.56) is given by

 $$\displaystyle{u(t)\ = ce_{h}(t,0),}$$

where  $$h(t):= 1 + (-1)^{t}$$ ,  $$t \in \mathbb{Z}_{0}$$ . It follows from (1.8) that for  $$t \in \mathbb{N}_{0}$$

 $$\displaystyle\begin{array}{rcl} u(t)& =& c\prod _{\tau =0}^{t-1}[1 + h(\tau )] {}\\ & =& c\prod _{\tau =0}^{t-1}[2 + (-1)^{\tau }] {}\\ & =& \left \{\begin{array}{@{}l@{\quad }l@{}} c(\sqrt{3})^{t},\quad \quad \mbox{ if $t$ is even} \quad \\ c(\sqrt{3})^{t+1},\quad \mbox{ if $t$ is odd}.\quad \end{array} \right.{}\\ \end{array}$$

It is easy to check that the last expression above is also true for negative integers. Define the p = 2 periodic function r by

 $$\displaystyle{ r(t):= \left \{\begin{array}{@{}l@{\quad }l@{}} c, \quad &\mbox{ if $t$ is even} \\ c\sqrt{3},\quad &\mbox{ if $t$ is odd}, \end{array} \right. }$$

for  $$t \in \mathbb{Z}_{0}$$ and put  $$b:= \sqrt{3}$$ . Then every solution of (1.56) is of the form

 $$\displaystyle{u(t) = r(t)b^{t},\quad t \in \mathbb{Z}_{ 0}.}$$

Compare this with formula (1.59) in Floquet’s Theorem 1.105.
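The closed form above is easy to verify numerically. The following short Python sketch (our own illustration, not part of the text; the function names are ours) iterates (1.56) and compares the result with  $$r(t)b^{t}$$ for small nonnegative t.

```python
import math

# Iterate u(t+1) = [2 + (-1)**t] u(t) on t = 0, 1, 2, ... with u(0) = c.
def iterate_solution(c, t_max):
    u = [c]
    for t in range(t_max):
        u.append((2 + (-1) ** t) * u[-1])
    return u

# Closed form u(t) = r(t) * b**t with b = sqrt(3) and the 2-periodic r above.
def closed_form(c, t):
    r = c if t % 2 == 0 else c * math.sqrt(3)
    return r * math.sqrt(3) ** t

c = 1.0
u = iterate_solution(c, 10)
print(all(math.isclose(u[t], closed_form(c, t)) for t in range(11)))  # True
```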

In some cases we need b to be a complex number to write the general solution in the form  $$u(t) = r(t)b^{t}$$ , as the next example shows.

Example 1.103.

Since  $$a(t):= (-1)^{t}$$ is periodic with prime period p = 2, the equation

 $$\displaystyle{u(t + 1) = (-1)^{t}u(t),\quad t \in \mathbb{Z}_{ 0}}$$

is a scalar Floquet system. The general solution is given by

 $$\displaystyle{u(t) =\alpha (-1)^{\frac{t(t-1)} {2} },\quad t \in \mathbb{Z}_{0}.}$$

We can write this solution in the form

 $$\displaystyle{u(t) = r(t)b^{t},\quad t \in \mathbb{Z}_{ 0},}$$

where

 $$\displaystyle{r(t) =\alpha (-1)^{\frac{t^{2}} {2} },\quad t \in \mathbb{Z}_{0}}$$

is periodic with period 2 (here  $$(-1)^{t^{2}/2}$$ is interpreted as  $$e^{i\pi t^{2}/2}$$ ) and b = −i. Compare this with formula (1.59) in Floquet’s Theorem 1.105.
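As a quick sanity check, the following Python sketch (our own, with  $$(-1)^{t^{2}/2}$$ read as  $$e^{i\pi t^{2}/2}$$ ) compares the iterates of the equation with  $$r(t)b^{t}$$ for b = −i.

```python
import cmath

alpha = 1.0
u = [alpha]
for t in range(10):                      # iterate u(t+1) = (-1)**t u(t)
    u.append((-1) ** t * u[-1])

b = -1j
ok = all(
    cmath.isclose(u[t],
                  alpha * cmath.exp(1j * cmath.pi * t ** 2 / 2) * b ** t,
                  abs_tol=1e-12)
    for t in range(11)
)
print(ok)  # True
```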

In preparation for the proof of Floquet’s Theorem, we need the following result concerning logarithms of nonsingular matrices.

Lemma 1.104.

Assume C is a nonsingular matrix and p is a positive integer. Then there is a nonsingular matrix B such that

 $$\displaystyle{B^{p}\ =\ C.}$$

Proof.

We will prove this lemma only for 2 × 2 matrices. First consider the case where C has two linearly independent eigenvectors. In this case, by the Jordan canonical form theorem, there is a nonsingular matrix Q so that

 $$\displaystyle{C\ =\ Q^{-1}JQ,}$$

where

 $$\displaystyle{J\ =\ \left [\begin{array}{ll} \lambda _{1} & 0 \\ 0&\lambda _{2}\\ \end{array} \right ],}$$

and  $$\lambda _{1}$$ ,  $$\lambda _{2}$$ ( $$\lambda _{1} =\lambda _{2}$$ is possible) are the eigenvalues of C. Now we want to find a matrix B such that

 $$\displaystyle{B^{p} = C = Q^{-1}JQ.}$$

Equivalently, we want to pick B so that

 $$\displaystyle{QB^{p}Q^{-1} = J,}$$

or

 $$\displaystyle{\left (QBQ^{-1}\right )^{p} = J.}$$

Thus we may choose B so that

 $$\displaystyle{QBQ^{-1}\ =\ \left [\begin{array}{ll} (\lambda _{1})^{\frac{1} {p} } & \;\;0 \\ \;\;0 &(\lambda _{2})^{\frac{1} {p} } \end{array} \right ],}$$

so

 $$\displaystyle{B\ =\ Q^{-1}\left [\begin{array}{ll} (\lambda _{1})^{\frac{1} {p} } & \;\;0 \\ \;\;0 &(\lambda _{2})^{\frac{1} {p} } \end{array} \right ]Q.}$$

Finally, we consider the case where C has only one linearly independent eigenvector. In this case, by the Jordan canonical form theorem, there is a nonsingular matrix Q so that

 $$\displaystyle{C\ =\ Q^{-1}JQ,}$$

where

 $$\displaystyle{J\ =\ \left [\begin{array}{ll} \lambda _{1} & 1 \\ 0&\lambda _{1}\\ \end{array} \right ],}$$

and  $$\lambda _{1}$$ is the eigenvalue of C.

Let’s try to find a matrix B of the form

 $$\displaystyle{ B\ =\ Q^{-1}\left [\begin{array}{ll} a&b \\ 0 &a \end{array} \right ]Q }$$

(1.57)

so that  $$B^{p} = C$$ .

Then

 $$\displaystyle\begin{array}{rcl} B^{p}& =& \left (Q^{-1}\left [\begin{array}{ll} a&b \\ 0 &a \end{array} \right ]Q\right )^{p} {}\\ & =& Q^{-1}\left [\begin{array}{ll} a&b \\ 0 &a \end{array} \right ]^{p}Q {}\\ & =& Q^{-1}\left \{aI + \left [\begin{array}{ll} 0&b \\ 0&0 \end{array} \right ]\right \}^{p}Q,{}\\ \end{array}$$

where I is the 2 × 2 identity matrix. Since aI commutes with the nilpotent matrix above, whose square is the zero matrix, only the first two terms of the binomial expansion survive, and we get

 $$\displaystyle\begin{array}{rcl} B^{p}& =& Q^{-1}\left \{a^{p}I + pa^{p-1}\left [\begin{array}{ll} 0&b \\ 0&0 \end{array} \right ]\right \}Q {}\\ & =& Q^{-1}\left [\begin{array}{ll} a^{p}&pa^{p-1}b \\ 0 &\;\;\;a^{p} \end{array} \right ]Q {}\\ & =& C = Q^{-1}\left [\begin{array}{ll} \lambda _{1} & \;\;1 \\ 0&\lambda _{1} \end{array} \right ]Q, {}\\ \end{array}$$

if a and b are picked to satisfy

 $$\displaystyle{a^{p} =\lambda _{ 1}\quad \mbox{ and}\quad pa^{p-1}b = 1.}$$

Solving for a and b we get from (1.57) that

 $$\displaystyle{B = Q^{-1}\left [\begin{array}{ll} \lambda _{1}^{ \frac{1} {p} } & \frac{1} {p}\lambda _{1}^{ \frac{1} {p}-1} \\ 0 &\;\;\;\lambda _{1}^{ \frac{1} {p} } \end{array} \right ]Q}$$

is the desired expression for B; note that  $$\lambda _{1}\neq 0$$ , since C is nonsingular. □ 
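For readers who want to see the Jordan-block case of this lemma in action, here is a minimal numerical sketch (our own, using numpy) that builds B from the formula at the end of the proof and checks that  $$B^{p} = C$$ .

```python
import numpy as np

# p-th root of a 2x2 Jordan block C = [[lam, 1], [0, lam]], lam != 0,
# via a**p = lam and p * a**(p-1) * b = 1 as in the proof.
def jordan_block_root(lam, p):
    a = lam ** (1.0 / p)
    b = 1.0 / (p * a ** (p - 1))
    return np.array([[a, b], [0.0, a]])

lam, p = 2.0, 3
C = np.array([[lam, 1.0], [0.0, lam]])
B = jordan_block_root(lam, p)
print(np.allclose(np.linalg.matrix_power(B, p), C))  # True
```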

Theorem 1.105 (Discrete Floquet’s Theorem).

If  $$\Phi (t)$$ is a fundamental matrix for the Floquet system (1.55) , then  $$\Phi (t + p)$$ is also a fundamental matrix and  $$\Phi (t + p) = \Phi (t)C,$$  $$t \in \mathbb{Z}_{\alpha }$$ , where

 $$\displaystyle{ C = \Phi ^{-1}(\alpha )\Phi (\alpha +p). }$$

(1.58)

Furthermore, there is a nonsingular matrix function P(t) and a nonsingular constant matrix B such that

 $$\displaystyle{ \Phi (t) = P(t)B^{t-\alpha },\quad t \in \mathbb{Z}_{\alpha }, }$$

(1.59)

where P(t) is periodic on  $$\mathbb{Z}_{\alpha }$$ with period p.

Proof.

Assume  $$\Phi (t)$$ is a fundamental matrix for the Floquet system (1.55). If  $$\Psi (t):= \Phi (t + p)$$ , then  $$\Psi (t)$$ is nonsingular for all  $$t \in \mathbb{Z}_{\alpha }$$ , and

 $$\displaystyle\begin{array}{rcl} \Psi (t + 1)& =& \Phi (t + p + 1) {}\\ & =& A(t + p)\Phi (t + p) {}\\ & =& A(t)\Psi (t), {}\\ \end{array}$$

for  $$t \in \mathbb{Z}_{\alpha }$$ . Hence  $$\Psi (t) = \Phi (t + p)$$ is a fundamental matrix for the vector difference equation (1.55). Since  $$\Psi (t)$$ and  $$\Phi (t)$$ are both fundamental matrices of (1.55), it follows from Theorem 1.85 that there is a nonsingular constant matrix C such that

 $$\displaystyle{\Psi (t) = \Phi (t + p) = \Phi (t)C,\quad t \in \mathbb{Z}_{\alpha }.}$$

Letting t = α and solving for C, we get that equation (1.58) holds. By Lemma 1.104, there is a nonsingular matrix B so that B p  = C. Let

 $$\displaystyle{ P(t):= \Phi (t)B^{-(t-\alpha )},\quad t \in \mathbb{Z}_{\alpha }. }$$

(1.60)

Note that P(t) is nonsingular for all  $$t \in \mathbb{Z}_{\alpha }$$ and since

 $$\displaystyle\begin{array}{rcl} P(t + p)& =& \Phi (t + p)B^{-(t-\alpha +p)} {}\\ & =& \Phi (t)CB^{-p}B^{-(t-\alpha )} {}\\ & =& \Phi (t)B^{-(t-\alpha )} {}\\ & =& P(t), {}\\ \end{array}$$

P(t) is periodic with period p. Solving equation (1.60) for  $$\Phi (t)$$ , we get equation (1.59). □ 
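The decomposition (1.59) can be computed explicitly for a concrete system. The sketch below (our own, using numpy) does this for the 2-periodic system that appears later in Example 1.112, where α = 0 and the matrix  $$C = \Phi (p)$$ happens to be diagonal, so a real matrix square root B is immediate.

```python
import numpy as np

def A(t):  # the 2-periodic coefficient matrix from Example 1.112
    return np.array([[0.0, (2 + (-1) ** t) / 2],
                     [(2 - (-1) ** t) / 2, 0.0]])

p, T = 2, 12
Phi = [np.eye(2)]                     # fundamental matrix with Phi(0) = I
for t in range(T):
    Phi.append(A(t) @ Phi[-1])

C = Phi[p]                            # C = Phi(0)^{-1} Phi(p) = Phi(p)
B = np.diag(np.sqrt(np.diag(C)))      # C is diagonal here, so B = C^{1/2}
Binv = np.linalg.inv(B)
P = [Phi[t] @ np.linalg.matrix_power(Binv, t) for t in range(T + 1)]

print(all(np.allclose(P[t + p], P[t]) for t in range(T - p + 1)))   # P has period p
print(all(np.allclose(Phi[t], P[t] @ np.linalg.matrix_power(B, t))
          for t in range(T + 1)))                                   # Phi(t) = P(t) B^t
```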

Definition 1.106.

Let  $$\Phi (t)$$ and C be as in Floquet’s theorem (Theorem 1.105). Then the eigenvalues μ of the matrix C are called the Floquet multipliers of the Floquet system (1.55).

Since fundamental matrices of a linear system are not unique (see Theorem 1.85), we must show that the Floquet multipliers are well defined. Let  $$\Phi (t)$$ and  $$\Psi (t)$$ be fundamental matrices for the Floquet system (1.55) and let

 $$\displaystyle{C_{1} = \Phi ^{-1}(\alpha )\Phi (\alpha +p)\quad \mbox{ and}\quad C_{ 2} = \Psi ^{-1}(\alpha )\Psi (\alpha +p).}$$

It remains to show that  $$C_{1}$$ and  $$C_{2}$$ have the same eigenvalues. By Theorem 1.85 there is a nonsingular constant matrix F so that

 $$\displaystyle{\Psi (t)\ = \Phi (t)F,\quad t \in \mathbb{Z}_{\alpha }.}$$

Hence,

 $$\displaystyle\begin{array}{rcl} C_{2}& =& \Psi ^{-1}(\alpha )\Psi (\alpha +p) {}\\ & =& [\Phi (\alpha )F]^{-1}[\Phi (\alpha +p)F] {}\\ & =& F^{-1}\Phi ^{-1}(\alpha )\Phi (\alpha +p)F {}\\ & =& F^{-1}C_{ 1}F. {}\\ \end{array}$$

Since

 $$\displaystyle\begin{array}{rcl} \det (C_{2} -\lambda I)& =& \det (F^{-1}C_{ 1}F -\lambda I) {}\\ & =& \det F^{-1}(C_{ 1} -\lambda I)F {}\\ & =& \det (C_{1} -\lambda I), {}\\ \end{array}$$

 $$C_{1}$$ and  $$C_{2}$$ have the same characteristic polynomial and therefore the same eigenvalues. Hence the Floquet multipliers are well defined.

Theorem 1.107.

The Floquet multipliers of the Floquet system (1.55) are the eigenvalues of the matrix

 $$\displaystyle{D:= A(\alpha +p - 1)A(\alpha +p - 2)\cdots A(\alpha ).}$$

Proof.

To see this let  $$\Phi (t)$$ be the fundamental matrix of the Floquet system (1.55) satisfying  $$\Phi (\alpha ) = I$$ . Then the Floquet multipliers are the eigenvalues of

 $$\displaystyle{D = \Phi ^{-1}(\alpha )\Phi (\alpha +p) = \Phi (\alpha +p).}$$

Iterating the equation

 $$\displaystyle{\Phi (t + 1) = A(t)\Phi (t),}$$

we get that

 $$\displaystyle\begin{array}{rcl} D& =& \Phi (\alpha +p) = [A(\alpha +p - 1)A(\alpha +p - 2)\cdots A(\alpha )]\Phi (\alpha ) {}\\ & =& A(\alpha +p - 1)A(\alpha +p - 2)\cdots A(\alpha ), {}\\ \end{array}$$

which is the desired result. □ 

Here are some simple examples of Floquet multipliers. Note that in the scalar case we use d instead of D.

Example 1.108.

For the scalar equation

 $$\displaystyle{u(t + 1)\ =\ (-1)^{t}u(t),\quad t \in \mathbb{Z}_{ 0},}$$

the coefficient function  $$a(t) = (-1)^{t}$$ has prime period p = 2, and  $$d = a(1)a(0) = -1$$ , so  $$\mu = -1$$ is the Floquet multiplier.

Example 1.109.

Find the Floquet multipliers for the Floquet system

 $$\displaystyle{u(t+1)\ = \left [\begin{array}{*{10}c} 0 &1\\ (-1)^{t } &0 \end{array} \right ]u(t),\quad t \in \mathbb{Z}_{0}.}$$

The coefficient matrix A(t) is periodic with prime period p = 2, so

 $$\displaystyle\begin{array}{rcl} D& =& \ A(1)A(0) {}\\ & =& \left [\begin{array}{ll} \;\;0 &1\\ - 1 &0 \end{array} \right ]\left [\begin{array}{ll} 0&1\\ 1 &0 \end{array} \right ] {}\\ & =& \left [\begin{array}{ll} 1&\;\;0\\ 0 & - 1 \end{array} \right ]. {}\\ \end{array}$$

Consequently,  $$\mu _{1} = 1$$ and  $$\mu _{2} = -1$$ are the Floquet multipliers.
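This computation is easy to reproduce numerically. The following sketch (our own) forms D = A(1)A(0) with numpy and reads off the Floquet multipliers as its eigenvalues, as Theorem 1.107 prescribes.

```python
import numpy as np

def A(t):  # coefficient matrix of Example 1.109
    return np.array([[0.0, 1.0], [(-1.0) ** t, 0.0]])

D = A(1) @ A(0)
print(D)                      # [[ 1.  0.]
                              #  [ 0. -1.]]
print(np.linalg.eigvals(D))   # Floquet multipliers 1 and -1
```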

The following theorem demonstrates why the term multiplier is appropriate.

Theorem 1.110.

The number μ is a Floquet multiplier for the Floquet system (1.55) , if and only if there is a nontrivial solution u(t) of (1.55) such that

 $$\displaystyle{u(t + p) =\mu u(t),\quad t \in \mathbb{Z}_{\alpha }.}$$

Furthermore, if  $$\mu _{1},\mu _{2},\ldots,\mu _{k}$$ are distinct Floquet multipliers, then there are k linearly independent solutions  $$u_{i}(t)$$ , 1 ≤ i ≤ k, of the Floquet system (1.55) on  $$\mathbb{Z}_{\alpha }$$ satisfying

 $$\displaystyle{u_{i}(t + p) =\mu _{i}u_{i}(t),\quad t \in \mathbb{Z}_{\alpha },\;\;1 \leq i \leq k.}$$

Proof.

Assume  $$\mu _{0}$$ is a Floquet multiplier of (1.55). Then  $$\mu _{0}$$ is an eigenvalue of the matrix C given by equation (1.58). Let  $$u_{0}$$ be an eigenvector of C corresponding to  $$\mu _{0}$$ , and let  $$\Phi (t)$$ be a fundamental matrix for (1.55). Define

 $$\displaystyle{u(t):= \Phi (t)u_{0},\quad t \in \mathbb{Z}_{\alpha }.}$$

Then u(t) is a nontrivial solution of (1.55), and from Floquet’s theorem  $$\Phi (t + p) = \Phi (t)C$$ . Hence, we have

 $$\displaystyle\begin{array}{rcl} u(t + p)\ & =& \ \Phi (t + p)u_{0} {}\\ & =& \Phi (t)Cu_{0} {}\\ & =& \Phi (t)\mu _{0}u_{0} {}\\ & =& \mu _{0}u(t), {}\\ \end{array}$$

for  $$t \in \mathbb{Z}_{\alpha }.$$ The proof of the converse consists essentially of reversing the above steps. The proof of the last statement in this theorem is Exercise 1.89. □ 
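The multiplier property in Theorem 1.110 can be observed directly. The following sketch (our own) builds the fundamental matrix with  $$\Phi (0) = I$$ for the system of Example 1.109 and checks  $$u(t + p) =\mu u(t)$$ for each eigenpair of  $$C = \Phi (p)$$ .

```python
import numpy as np

def A(t):  # coefficient matrix of Example 1.109
    return np.array([[0.0, 1.0], [(-1.0) ** t, 0.0]])

p, T = 2, 10
Phi = [np.eye(2)]
for t in range(T + p):
    Phi.append(A(t) @ Phi[-1])

C = Phi[p]                                  # Phi(0) = I, so C = Phi(p)
mus, vecs = np.linalg.eig(C)
for mu, u0 in zip(mus, vecs.T):             # eigenpairs (mu, u0) of C
    ok = all(np.allclose(Phi[t + p] @ u0, mu * (Phi[t] @ u0)) for t in range(T))
    print(mu, ok)                           # each multiplier satisfies u(t+p) = mu u(t)
```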

In Example 1.109 we saw that 1 and − 1 were Floquet multipliers. By Theorem 1.110 there are linearly independent solutions  $$u_{1}(t)$$ and  $$u_{2}(t)$$ with  $$u_{1}(t + 2) = u_{1}(t)$$ and  $$u_{2}(t + 2) = -u_{2}(t)$$ , that is, solutions that are periodic with periods 2 and 4, respectively. The next theorem shows how a Floquet system can be transformed into an autonomous system.

Theorem 1.111.

Let  $$\Phi (t) = P(t)B^{t-\alpha }$$ ,  $$t \in \mathbb{Z}_{\alpha }$$ , be as in Floquet’s theorem. Then y(t) is a solution of the Floquet system (1.55) if and only if

 $$\displaystyle{z(t) = P^{-1}(t)y(t),\quad t \in \mathbb{Z}_{\alpha }}$$

is a solution of the autonomous system

 $$\displaystyle{z(t + 1) = Bz(t),\quad t \in \mathbb{Z}_{\alpha }.}$$

Proof.

Assume y(t) is a solution of the Floquet system (1.55). Then there is a column vector w so that

 $$\displaystyle{y(t) = \Phi (t)w = P(t)B^{t-\alpha }w,\quad t \in \mathbb{Z}_{\alpha }.}$$

Let  $$z(t):= P^{-1}(t)y(t)$$ ,  $$t \in \mathbb{Z}_{\alpha }.$$ Then  $$z(t) = P^{-1}(t)y(t) = B^{t-\alpha }w$$ . It follows that z(t) is a solution of z(t + 1) = Bz(t),  $$t \in \mathbb{Z}_{\alpha }.$$ The converse can be proved by reversing the above steps. □ 
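A numerical check of this change of variables, for the same concrete system used in the sketch after Theorem 1.105, looks as follows (again our own illustration).

```python
import numpy as np

def A(t):  # 2-periodic coefficient matrix from Example 1.112
    return np.array([[0.0, (2 + (-1) ** t) / 2],
                     [(2 - (-1) ** t) / 2, 0.0]])

T = 10
Phi = [np.eye(2)]
for t in range(T):
    Phi.append(A(t) @ Phi[-1])
B = np.diag(np.sqrt(np.diag(Phi[2])))         # Phi(2) = C is diagonal here
P = [Phi[t] @ np.linalg.matrix_power(np.linalg.inv(B), t) for t in range(T + 1)]

y0 = np.array([1.0, -2.0])                    # any initial vector
y = [Phi[t] @ y0 for t in range(T + 1)]       # a solution of (1.55)
z = [np.linalg.inv(P[t]) @ y[t] for t in range(T + 1)]
print(all(np.allclose(z[t + 1], B @ z[t]) for t in range(T)))   # True
```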

Example 1.112.

In this example we determine the asymptotic behavior of two linearly independent solutions of the Floquet system

 $$\displaystyle{u(t+1)\ =\ \left [\begin{array}{ll} \;\;\;\;\;0 &\frac{2+(-1)^{t}} {2} \\ \frac{2-(-1)^{t}} {2} & \;\;\;\;\;0 \end{array} \right ]u(t),\quad t \in \mathbb{Z}_{0}.}$$

First we find the Floquet multipliers for this Floquet system. For this system, p = 2 and thus

 $$\displaystyle\begin{array}{rcl} D& =& A(1)A(0) {}\\ & =& \left [\begin{array}{ll} 0 &\frac{1} {2} \\ \frac{3} {2} & 0 \end{array} \right ]\left [\begin{array}{ll} 0 &\frac{3} {2} \\ \frac{1} {2} & 0 \end{array} \right ] {}\\ & =& \ \left [\begin{array}{ll} \frac{1} {4} & 0 \\ 0 &\frac{9} {4} \end{array} \right ]. {}\\ \end{array}$$

Hence the Floquet multipliers are  $$\mu _{1} = \frac{1} {4}$$ and  $$\mu _{2} = \frac{9} {4}$$ . Since  $$\vert \mu _{1}\vert = \frac{1} {4} < 1$$ and  $$\vert \mu _{2}\vert = \frac{9} {4} > 1$$ , it follows that there are two linearly independent solutions  $$u_{1}(t)$$ ,  $$u_{2}(t)$$ on  $$\mathbb{Z}_{0}$$ satisfying

 $$\displaystyle{\lim _{t\rightarrow \infty }\|u_{1}(t)\| = 0,\quad \lim _{t\rightarrow \infty }\|u_{2}(t)\| = \infty.}$$
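The two behaviors can be seen by iterating the system from the eigenvectors of D, as in the following sketch (our own).

```python
import numpy as np

def A(t):  # coefficient matrix of Example 1.112
    return np.array([[0.0, (2 + (-1) ** t) / 2],
                     [(2 - (-1) ** t) / 2, 0.0]])

u1 = np.array([1.0, 0.0])   # eigenvector of D for mu_1 = 1/4
u2 = np.array([0.0, 1.0])   # eigenvector of D for mu_2 = 9/4
for t in range(20):         # ten full periods
    u1, u2 = A(t) @ u1, A(t) @ u2
print(np.linalg.norm(u1))   # (1/4)**10, essentially 0
print(np.linalg.norm(u2))   # (9/4)**10, large
```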

Using Theorem 1.111 one can prove (see Exercise 1.92) the following stability theorem for Floquet systems.

Theorem 1.113.

Let  $$\mu _{1},\mu _{2},\ldots,\mu _{n}$$ be the Floquet multipliers of the Floquet system (1.55). Then the trivial solution is

(i)

globally asymptotically stable iff  $$\vert \mu _{i}\vert < 1$$ , 1 ≤ i ≤ n;

(ii)

stable provided  $$\vert \mu _{i}\vert \leq 1$$ , 1 ≤ i ≤ n, and whenever  $$\vert \mu _{i}\vert = 1$$ , then  $$\mu _{i}$$ is a simple eigenvalue;

(iii)

unstable provided there is an  $$i_{0}$$ , 1 ≤ i 0 ≤ n, such that  $$\vert \mu _{i_{0}}\vert > 1$$ .

Example 1.114.

By Theorem 1.113 (iii) the trivial solution of the Floquet system in Example 1.112 is unstable, since the Floquet multiplier  $$\mu _{2} = \frac{9} {4}$$ satisfies  $$\vert \mu _{2}\vert > 1$$ .
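A rough classifier based on Theorem 1.113 can be written in a few lines; the sketch below (our own helper, which checks only the moduli and not simplicity of multipliers on the unit circle) reproduces this conclusion for Example 1.112.

```python
import numpy as np

def classify(multipliers):
    mods = np.abs(multipliers)
    if np.all(mods < 1):
        return "globally asymptotically stable"
    if np.any(mods > 1):
        return "unstable"
    return "stable, provided each multiplier of modulus 1 is simple"

def A(t):  # coefficient matrix of Example 1.112
    return np.array([[0.0, (2 + (-1) ** t) / 2],
                     [(2 - (-1) ** t) / 2, 0.0]])

D = A(1) @ A(0)
print(classify(np.linalg.eigvals(D)))   # unstable, since |9/4| > 1
```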

We conclude this section by mentioning that, although this chapter has provided a substantial introduction to the classical difference calculus, there are naturally many stones left unturned. The reader who is interested in more advanced and specialized techniques from the classical theory of difference equations is encouraged to consult the book by Kelley and Peterson [135] for a multitude of related results.