
Discrete Fractional Calculus (2015)

2. Discrete Delta Fractional Calculus and Laplace Transforms

2.2. The Delta Laplace Transform

In this section we develop properties of the (delta) Laplace transform. First we give an abstract definition of this transform.

Definition 2.1 (Bohner–Peterson [62]).

Assume  $$f: \mathbb{N}_{a} \rightarrow \mathbb{R}.$$ Then we define the (delta) Laplace transform of f based at a by

 $$\displaystyle{\mathcal{L}_{a}\{f\}(s) =\int _{ a}^{\infty }e_{ \ominus s}(\sigma (t),a)f(t)\Delta t}$$

for all complex numbers s ≠ − 1 such that this improper integral converges.

The following theorem gives two useful expressions for the Laplace transform of f.

Theorem 2.2.

Assume  $$f: \mathbb{N}_{a} \rightarrow \mathbb{R}$$ . Then

 $$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\left \{f\right \}(s) = F_{a}(s)&:=& \int _{0}^{\infty } \frac{f(a + k)} {(s + 1)^{k+1}}\Delta k{}\end{array}$$

(2.1)

 $$\displaystyle\begin{array}{rcl} & =& \sum \limits _{k=0}^{\ \infty } \frac{f(a + k)} {(s + 1)^{k+1}},{}\end{array}$$

(2.2)

for all complex numbers s ≠ − 1 such that this improper integral (infinite series) converges.

Proof.

To see that (2.1) holds note that

 $$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\left \{f\right \}(s)& =& \int _{a}^{\infty }e_{ \ominus s}(\sigma (t),a)f(t)\Delta t {}\\ & =& \sum _{t=a}^{\infty }e_{ \ominus s}(\sigma (t),a)f(t) {}\\ & =& \sum _{t=a}^{\infty }[1 + \ominus s]^{\sigma (t)-a}f(t) {}\\ & =& \sum _{t=a}^{\infty } \frac{f(t)} {(1 + s)^{t-a+1}} {}\\ & =& \sum _{k=0}^{\infty } \frac{f(a + k)} {(1 + s)^{k+1}}. {}\\ \end{array}$$

This also gives us that

 $$\displaystyle{\mathcal{L}_{a}\left \{f\right \}(s) =\int _{ 0}^{\infty } \frac{f(a + k)} {(1 + s)^{k+1}}\Delta k.}$$

 □ 
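
The series form (2.2) also lends itself to a quick numerical check. The following Python sketch (an illustration with assumed test data, not a statement from the text) truncates the series for f(t) = 2^t based at a = 0 and compares the result with the value 1∕(s − 1) obtained by summing the geometric series directly, anticipating Example 2.5.

```python
# Truncated evaluation of the series form (2.2) of the delta Laplace transform.
# Assumed test data: f(t) = 2^t on N_0 (base a = 0) and s = 3, so |s + 1| = 4 > 2.

def delta_laplace(f, a, s, N=200):
    """Truncated series (2.2): sum_{k=0}^{N-1} f(a + k) / (s + 1)^(k + 1)."""
    return sum(f(a + k) / (s + 1) ** (k + 1) for k in range(N))

f = lambda t: 2.0 ** t
s = 3.0

approx = delta_laplace(f, 0, s)
exact = 1.0 / (s - 1.0)   # geometric series: sum_k 2^k/(s + 1)^(k + 1) = 1/(s - 1)

print(approx, exact, abs(approx - exact))   # the difference is negligible for N = 200
```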

To find functions such that the Laplace transform exists on a nonempty set we make the following definition.

Definition 2.3.

We say that a function  $$f: \mathbb{N}_{a} \rightarrow \mathbb{R}$$ is of exponential order r > 0 (at  $$\infty $$ ) if there exists a constant A > 0 such that

 $$\displaystyle{ \left \vert f(t)\right \vert \leq Ar^{t},\quad \text{for all sufficiently large }t \in \mathbb{N}_{a}. }$$

Now we can prove the following existence theorem.

Theorem 2.4 (Existence Theorem).

Suppose  $$f: \mathbb{N}_{a} \rightarrow \mathbb{R}$$ is of exponential order r > 0. Then  $$\mathcal{L}_{a}\left \{f\right \}(s)$$ converges absolutely for |s + 1| > r.

Proof.

Assume  $$f: \mathbb{N}_{a} \rightarrow \mathbb{R}$$ is of exponential order r > 0. Then there is a constant A > 0 and an  $$m \in \mathbb{N}_{0}$$ such that for each  $$t \in \mathbb{N}_{a+m},$$  $$\left \vert f(t)\right \vert \leq Ar^{t}$$ . Hence for | s + 1 |  > r,

 $$\displaystyle\begin{array}{rcl} \sum \limits _{k=m}^{\infty }\left \vert \frac{f(k + a)} {(s + 1)^{k+1}}\right \vert & =& \sum \limits _{k=m}^{\infty }\frac{\left \vert f(k + a)\right \vert } {\left \vert s + 1\right \vert ^{k+1}} {}\\ & \leq & \sum \limits _{k=m}^{\infty } \frac{Ar^{k+a}} {\left \vert s + 1\right \vert ^{k+1}} {}\\ & =& \frac{Ar^{a}} {\left \vert s + 1\right \vert }\sum \limits _{k=m}^{\infty }\left ( \frac{r} {\left \vert s + 1\right \vert }\right )^{k} {}\\ & =& \frac{Ar^{a}} {\left \vert s + 1\right \vert } \frac{\left ( \frac{r} {\left \vert s+1\right \vert }\right )^{m}} {1 -\left ( \frac{r} {\left \vert s+1\right \vert }\right )} {}\\ & =& \frac{A} {\left \vert s + 1\right \vert ^{m}} \frac{r^{a+m}} {\left \vert s + 1\right \vert - r} {}\\ & <& \infty. {}\\ \end{array}$$

Hence, the Laplace transform of f converges absolutely for | s + 1 |  > r. □ 
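
The geometric bound appearing in this proof can also be checked numerically. The sketch below (our illustration, with assumed test data) takes f(t) = 3^t sin t on  $$\mathbb{N}_{0}$$ , which satisfies |f(t)| ≤ 3^t, so that A = 1, r = 3, and m = 0, and compares the absolute series with the bound from the last line of the computation at a point with |s + 1| > 3.

```python
import math

# Check the tail bound from the proof of Theorem 2.4.
# Assumed test data: f(t) = 3^t * sin(t) on N_0, so A = 1, r = 3, m = 0, a = 0.

a, A, r, m = 0, 1.0, 3.0, 0
s = 4.0                                   # |s + 1| = 5 > r

f = lambda t: 3.0 ** t * math.sin(t)

abs_series = sum(abs(f(a + k)) / abs(s + 1) ** (k + 1) for k in range(m, 400))
bound = (A / abs(s + 1) ** m) * r ** (a + m) / (abs(s + 1) - r)

print(abs_series, bound, abs_series <= bound)   # the series stays below the bound
```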

We will see later (see Remark 2.57) that the converse of Theorem 2.4 does not hold in general.

In this chapter, we will usually consider functions f of some exponential order r > 0, ensuring that the Laplace transform of f does in fact converge somewhere in the complex plane—specifically, it converges for all complex numbers outside the closed ball of radius r centered at negative one, that is, for | s + 1 |  > r. We will abuse the notation by sometimes writing  $$\mathcal{L}_{a}\{f(t)\}(s)$$ instead of the preferred notation  $$\mathcal{L}_{a}\{f\}(s).$$

Example 2.5.

Clearly,  $$e_{p}\left (t,a\right ),$$ p ≠ − 1, a constant, is of exponential order  $$r = \vert 1 + p\vert > 0.$$ Therefore, we have for  $$\vert s + 1\vert > r = \vert 1 + p\vert,$$

 $$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\left \{e_{p}(t,a)\right \}(s)& =& \mathcal{L}_{a}\left \{\left (1 + p\right )^{t-a}\right \}(s) {}\\ & =& \sum \limits _{k=0}^{\infty } \frac{\left (1 + p\right )^{k}} {\left (s + 1\right )^{k+1}} {}\\ & =& \frac{1} {s + 1}\sum \limits _{k=0}^{\infty }\left (\frac{p + 1} {s + 1}\right )^{k} {}\\ & =& \frac{1} {s + 1}\left ( \frac{1} {1 -\frac{p+1} {s+1} }\right ) {}\\ & =& \frac{1} {s - p}. {}\\ \end{array}$$

Hence

 $$\displaystyle{\mathcal{L}_{a}\{e_{p}(t,a)\}(s) = \frac{1} {s - p},\quad \vert s + 1\vert > \vert 1 + p\vert.}$$

An important special case (p = 0) of the above formula is

 $$\displaystyle{ \mathcal{L}_{a}\left \{1\right \}(s) = \frac{1} {s},\quad \text{for }\quad \vert s + 1\vert > 1. }$$
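
These formulas are easy to spot-check numerically. The sketch below (an illustration; the choices of p, s, and the truncation level are assumptions) truncates the defining series for e_p(t,a) = (1 + p)^(t−a) and compares it with 1∕(s − p) at real and complex test points satisfying |s + 1| > |1 + p|.

```python
# Spot-check L_a{e_p(t,a)}(s) = 1/(s - p) at a few assumed test points.

def delta_laplace(f, a, s, N=300):
    """Truncated series (2.2): sum_{k=0}^{N-1} f(a + k) / (s + 1)^(k + 1)."""
    return sum(f(a + k) / (s + 1) ** (k + 1) for k in range(N))

a = 0
for p, s in [(2.0, 4.0), (-0.5, 1.0), (2.0, 3.0 + 2.0j)]:
    e_p = lambda t, p=p: (1 + p) ** (t - a)   # e_p(t, a) = (1 + p)^(t - a)
    approx = delta_laplace(e_p, a, s)
    exact = 1.0 / (s - p)
    print(p, s, abs(approx - exact))          # each difference should be tiny
```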

In the next theorem we see that the Laplace transform operator  $$\mathcal{L}_{a}$$ is a linear operator.

Theorem 2.6 (Linearity).

Suppose  $$f,g: \mathbb{N}_{a} \rightarrow \mathbb{R}$$ and the Laplace transforms of f and g converge for |s + 1| > r, where r > 0, and let  $$c_{1},c_{2} \in \mathbb{C}$$ . Then the Laplace transform of c 1 f + c 2 g converges for |s + 1| > r and

 $$\displaystyle{ \mathcal{L}_{a}\left \{c_{1}f + c_{2}g\right \}\left (s\right ) = c_{1}\mathcal{L}_{a}\left \{f\right \}\left (s\right ) + c_{2}\mathcal{L}_{a}\left \{g\right \}\left (s\right ), }$$

(2.3)

for |s + 1| > r.

Proof.

Since  $$f,g: \mathbb{N}_{a} \rightarrow \mathbb{R}$$ and the Laplace transforms of f and g converge for | s + 1 |  > r, where r > 0, we have that for | s + 1 |  > r

 $$\displaystyle\begin{array}{rcl} & & c_{1}\mathcal{L}_{a}\left \{f\right \}\left (s\right ) + c_{2}\mathcal{L}_{a}\left \{g\right \}\left (s\right ) {}\\ & & \quad \quad \quad = c_{1}\sum _{k=0}^{\infty } \frac{f(a + k)} {(s + 1)^{k+1}} + c_{2}\sum _{k=0}^{\infty } \frac{g(a + k)} {(s + 1)^{k+1}} {}\\ & & \quad \quad \quad =\sum _{ k=0}^{\infty }\frac{(c_{1}f + c_{2}g)(a + k)} {(s + 1)^{k+1}} {}\\ & & \quad \quad \quad = \mathcal{L}_{a}\{c_{1}f + c_{2}g\}(s). {}\\ \end{array}$$

This completes the proof. □ 

The following uniqueness theorem is very useful.

Theorem 2.7 (Uniqueness).

Assume  $$f,g: \mathbb{N}_{a} \rightarrow \mathbb{R}$$ and there is an r > 0 such that

 $$\displaystyle{\mathcal{L}_{a}\left \{f\right \}\left (s\right ) = \mathcal{L}_{a}\left \{g\right \}\left (s\right )}$$

for |s + 1| > r. Then

 $$\displaystyle{f(t) = g(t),\quad \mbox{ for all}\quad t \in \mathbb{N}_{a}.}$$

Proof.

By hypothesis we have that

 $$\displaystyle{\mathcal{L}_{a}\left \{f\right \}\left (s\right ) = \mathcal{L}_{a}\left \{g\right \}\left (s\right )}$$

for | s + 1 |  > r. This implies that

 $$\displaystyle{\sum _{k=0}^{\infty } \frac{f(a + k)} {(s + 1)^{k+1}} =\sum _{ k=0}^{\infty } \frac{g(a + k)} {(s + 1)^{k+1}}}$$

for | s + 1 |  > r. It follows from this that

 $$\displaystyle{f(a + k) = g(a + k),\quad k \in \mathbb{N}_{0},}$$

and this completes the proof. □ 

Next we give the Laplace transforms of the (delta) hyperbolic sine and cosine functions.

Theorem 2.8.

Assume p ≠ ± 1 is a constant. Then

(i)

 $$\mathcal{L}_{a}\{\cosh _{p}(t,a)\}(s) = \frac{s} {s^{2}-p^{2}};$$

(ii)

 $$\mathcal{L}_{a}\{\sinh _{p}(t,a)\}(s) = \frac{p} {s^{2}-p^{2}},$$

for  $$\vert s + 1\vert >\max \{ \vert 1 + p\vert,\vert 1 - p\vert \}.$$

Proof.

To see that (ii) holds, consider

 $$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\{\sinh _{p}(t,a)\}(s)& =& \frac{1} {2}\left [\mathcal{L}_{a}\{e_{p}(t,a)\}(s) -\mathcal{L}_{a}\{e_{-p}(t,a)\}(s)\right ] {}\\ & =& \frac{1} {2} \frac{1} {s - p} -\frac{1} {2} \frac{1} {s + p} {}\\ & =& \frac{p} {s^{2} - p^{2}} {}\\ \end{array}$$

for  $$\vert s + 1\vert >\max \{ \vert 1 + p\vert,\vert 1 - p\vert \}.$$ The proof of (i) is similar (see Exercise 2.5). □ 
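
Theorem 2.8 can likewise be spot-checked numerically using cosh_p(t,a) = [e_p(t,a) + e_{−p}(t,a)]∕2 and sinh_p(t,a) = [e_p(t,a) − e_{−p}(t,a)]∕2. In the sketch below (our illustration), the assumed values p = 2, a = 0, s = 4 satisfy |s + 1| = 5 > max{|1 + p|, |1 − p|} = 3.

```python
# Spot-check Theorem 2.8 with assumed test values p = 2, a = 0, s = 4.

def delta_laplace(f, a, s, N=300):
    return sum(f(a + k) / (s + 1) ** (k + 1) for k in range(N))

a, p, s = 0, 2.0, 4.0
e = lambda q: (lambda t: (1 + q) ** (t - a))      # e_q(t, a) = (1 + q)^(t - a)
cosh_p = lambda t: (e(p)(t) + e(-p)(t)) / 2
sinh_p = lambda t: (e(p)(t) - e(-p)(t)) / 2

print(delta_laplace(cosh_p, a, s), s / (s**2 - p**2))   # both are about 1/3
print(delta_laplace(sinh_p, a, s), p / (s**2 - p**2))   # both are about 1/6
```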

Next, we give the Laplace transforms of the (discrete) sine and cosine functions.

Theorem 2.9.

Assume p ≠ ± i. Then

(i)

 $$\mathcal{L}_{a}\{\cos _{p}(t,a)\}(s) = \frac{s} {s^{2}+p^{2}};$$

(ii)

 $$\mathcal{L}_{a}\{\sin _{p}(t,a)\}(s) = \frac{p} {s^{2}+p^{2}},$$

for  $$\vert s + 1\vert >\max \{ \vert 1 + ip\vert,\vert 1 - ip\vert \}.$$

Proof.

To see that (i) holds, note that

 $$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\{\cos _{p}(t,a)\}(s)& =& \mathcal{L}_{a}\{\cosh _{ip}(t,a)\}(s) {}\\ & =& \frac{1} {2}\left [\mathcal{L}_{a}\{e_{ip}(t,a)\}(s) + \mathcal{L}_{a}\{e_{-ip}(t,a)\}(s)\right ] {}\\ & =& \frac{1} {2} \frac{1} {s - ip} + \frac{1} {2} \frac{1} {s + ip} {}\\ & =& \frac{s} {s^{2} + p^{2}}, {}\\ \end{array}$$

for  $$\vert s + 1\vert >\max \{ \vert 1 + ip\vert,\vert 1 - ip\vert \}.$$ For the proof of part (ii) see Exercise 2.6. □ 

Theorem 2.10.

Assume α ≠ − 1 and  $$\frac{\beta }{1+\alpha }\neq \pm 1.$$ Then

(i)

 $$\mathcal{L}_{a}\{e_{\alpha }(t,a)\cosh _{ \frac{\beta }{ 1+\alpha } }(t,a)\}(s) = \frac{s-\alpha } {(s-\alpha )^{2}-\beta ^{2}};$$

(ii)

 $$\mathcal{L}_{a}\{e_{\alpha }(t,a)\sinh _{ \frac{\beta }{ 1+\alpha } }(t,a)\}(s) = \frac{\beta } {(s-\alpha )^{2 } -\beta ^{2}},$$

for  $$\vert s + 1\vert >\max \{ \vert 1 +\alpha +\beta \vert,\vert 1 +\alpha -\beta \vert \}.$$

Proof.

To see that (i) holds, for  $$\vert s + 1\vert >\max \{ \vert 1 +\alpha +\beta \vert,\vert 1 +\alpha -\beta \vert \},$$ consider

 $$\displaystyle\begin{array}{rcl} & & \mathcal{L}_{a}\{e_{\alpha }(t,a)\cosh _{ \frac{\beta }{ 1+\alpha } }(t,a)\}(s) {}\\ & & \quad \quad = \frac{1} {2}\mathcal{L}_{a}\{e_{\alpha }(t,a)e_{ \frac{\beta }{ 1+\alpha } }(t,a)\}(s) + \frac{1} {2}\mathcal{L}_{a}\{e_{\alpha }(t,a)e_{ \frac{-\beta } {1+\alpha } }(t,a)\}(s) {}\\ & & \quad \quad = \frac{1} {2}\mathcal{L}_{a}\{e_{\alpha \oplus \frac{\beta }{ 1+\alpha } }(t,a)\}(s) + \frac{1} {2}\mathcal{L}_{a}\{e_{\alpha \oplus \frac{-\beta } {1+\alpha } }(t,a)\}(s) {}\\ & & \quad \quad = \frac{1} {2}\mathcal{L}_{a}\{e_{\alpha +\beta }(t,a)\}(s) + \frac{1} {2}\mathcal{L}_{a}\{e_{\alpha -\beta }(t,a)\}(s) {}\\ & & \quad \quad = \frac{1} {2} \frac{1} {s -\alpha -\beta } + \frac{1} {2} \frac{1} {s -\alpha +\beta } {}\\ & & \quad \quad = \frac{s-\alpha } {(s-\alpha )^{2} -\beta ^{2}}. {}\\ \end{array}$$

The proof of (ii) is Exercise 2.7. □ 

By an argument similar to the proof of Theorem 2.10, one can prove the following theorem.

Theorem 2.11.

Assume α ≠ − 1 and  $$\frac{\beta }{1+\alpha }\neq \pm i.$$ Then

(i)

 $$\mathcal{L}_{a}\{e_{\alpha }(t,a)\cos _{ \frac{\beta }{ 1+\alpha } }(t,a)\}(s) = \frac{s-\alpha } {(s-\alpha )^{2}+\beta ^{2}};$$

(ii)

 $$\mathcal{L}_{a}\{e_{\alpha }(t,a)\sin _{ \frac{\beta }{ 1+\alpha } }(t,a)\}(s) = \frac{\beta } {(s-\alpha )^{2 } +\beta ^{2}},$$

for  $$\vert s + 1\vert >\max \{ \vert 1 +\alpha +i\beta \vert,\vert 1 +\alpha -i\beta \vert \}.$$
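
Both theorems can be spot-checked numerically by using the identities from the proof of Theorem 2.10, namely e_α(t,a) cosh_{β∕(1+α)}(t,a) = [e_{α+β}(t,a) + e_{α−β}(t,a)]∕2 and, analogously, e_α(t,a) sin_{β∕(1+α)}(t,a) = [e_{α+iβ}(t,a) − e_{α−iβ}(t,a)]∕(2i). The sketch below (an illustration with assumed test values α = 1, β = 1, a = 0, s = 4) checks Theorem 2.10 (i) and Theorem 2.11 (ii).

```python
# Spot-check Theorem 2.10(i) and Theorem 2.11(ii) with assumed values
# alpha = 1, beta = 1, a = 0, s = 4 (|s + 1| = 5 exceeds the required radii).

def delta_laplace(f, a, s, N=300):
    return sum(f(a + k) / (s + 1) ** (k + 1) for k in range(N))

a, alpha, beta, s = 0, 1.0, 1.0, 4.0
e = lambda q: (lambda t: (1 + q) ** (t - a))          # e_q(t, a) = (1 + q)^(t - a)

prod_cosh = lambda t: (e(alpha + beta)(t) + e(alpha - beta)(t)) / 2
prod_sin = lambda t: (e(alpha + 1j * beta)(t) - e(alpha - 1j * beta)(t)) / (2j)

print(abs(delta_laplace(prod_cosh, a, s) - (s - alpha) / ((s - alpha)**2 - beta**2)))
print(abs(delta_laplace(prod_sin, a, s) - beta / ((s - alpha)**2 + beta**2)))
```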

When solving certain difference equations one frequently uses the following theorem.

Theorem 2.12.

Assume that f is of exponential order r > 0. Then for any positive integer N

 $$\displaystyle{ \mathcal{L}_{a}\left \{\Delta ^{N}f\right \}(s) = s^{N}F_{ a}(s) -\sum \limits _{j=0}^{N-1}s^{j}\Delta ^{N-1-j}f(a), }$$

(2.4)

for |s + 1| > r.

Proof.

By Exercise 2.2, for each positive integer N the function  $$\Delta ^{N}f$$ is of exponential order r. Hence, by Theorem 2.4, the Laplace transform of  $$\Delta ^{N}f$$ exists for | s + 1 |  > r for each N ≥ 1. Now integrating by parts we get

 $$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\{\Delta f\}(s)& =& \int _{a}^{\infty }e_{ \ominus s}(\sigma (t),a)\Delta f(t)\Delta t {}\\ & =& e_{\ominus s}(t,a)f(t)\vert _{a}^{b\rightarrow \infty }-\int _{ a}^{\infty }\ominus se_{ \ominus s}(t,a)f(t)\Delta t {}\\ & =& -f(a) + s\int _{a}^{\infty }e_{ \ominus s}(\sigma (t),a)f(t)\Delta t {}\\ & =& sF_{a}(s) - f(a) {}\\ \end{array}$$

for | s + 1 |  > r, where the boundary term vanishes because  $$\lim _{b\rightarrow \infty }e_{\ominus s}(b,a)f(b) =\lim _{b\rightarrow \infty } \frac{f(b)} {(1 + s)^{b-a}} = 0$$ for | s + 1 |  > r, since f is of exponential order r. Hence (2.4) holds for N = 1. Now assume N ≥ 1 and (2.4) holds. Then

 $$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\{\Delta ^{N+1}f\}(s)& =& \mathcal{L}_{ a}\{\Delta \left (\Delta ^{N}f\right )\}(s) {}\\ & =& s\mathcal{L}_{a}\{\Delta ^{N}f\}(s) - \Delta ^{N}f(a) {}\\ & =& s\left [s^{N}F_{ a}(s) -\sum \limits _{j=0}^{N-1}s^{j}\Delta ^{N-1-j}f(a)\right ] - \Delta ^{N}f(a) {}\\ & =& s^{N+1}F_{ a}(s) -\sum \limits _{j=0}^{(N+1)-1}s^{j}\Delta ^{(N+1)-1-j}f(a). {}\\ \end{array}$$

Hence (2.4) holds for each positive integer by mathematical induction. □ 
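
Formula (2.4) can be checked numerically for a concrete function and a concrete order. The sketch below (our illustration; the test data are assumptions) takes f(t) = t² on  $$\mathbb{N}_{0}$$ and N = 2 and compares the two sides of (2.4) at s = 1.

```python
# Numerical check of (2.4) with N = 2 and assumed test data f(t) = t^2 on N_0, s = 1.

def delta_laplace(f, a, s, N=400):
    return sum(f(a + k) / (s + 1) ** (k + 1) for k in range(N))

a, s = 0, 1.0
f = lambda t: float(t) ** 2
df = lambda t: f(t + 1) - f(t)            # (delta f)(t)
d2f = lambda t: df(t + 1) - df(t)         # (delta^2 f)(t)

lhs = delta_laplace(d2f, a, s)
rhs = s**2 * delta_laplace(f, a, s) - s * f(a) - df(a)   # right side of (2.4), N = 2

print(lhs, rhs, abs(lhs - rhs))           # both equal 2/s = 2 up to truncation error
```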

The following example is an application of formula (2.4).

Example 2.13.

Use Laplace transforms to solve the IVP

 $$\displaystyle\begin{array}{rcl} & \Delta ^{2}y(t) - 3\Delta y(t) + 2y(t) = 2 \cdot 4^{t},\quad t \in \mathbb{N}_{0},& {}\\ & y(0) = 2,\quad \Delta y(0) = 4. & {}\\ \end{array}$$

Assume y(t) is the solution of the above IVP. We have, by taking the Laplace transform of both sides of the difference equation in this example,

 $$\displaystyle{[s^{2}Y _{ 0}(s) - sy(0) - \Delta y(0)] - 3[sY _{0}(s) - y(0)] + 2Y _{0}(s) = \frac{2} {s - 3}.}$$

Applying the initial conditions and simplifying we get

 $$\displaystyle{(s^{2} - 3s + 2)Y _{ 0}(s) = 2s - 2 + \frac{2} {s - 3}.}$$

Further simplification leads to

 $$\displaystyle{(s - 1)(s - 2)Y _{0}(s) = \frac{2(s - 2)^{2}} {s - 3}.}$$

Hence

 $$\displaystyle\begin{array}{rcl} Y _{0}(s)& =& \frac{2(s - 2)} {(s - 1)(s - 3)} {}\\ & =& \frac{1} {s - 1} + \frac{1} {s - 3}. {}\\ \end{array}$$

It follows that the solution of our IVP is given by

 $$\displaystyle\begin{array}{rcl} y(t)& =& e_{1}(t,0) + e_{3}(t,0) {}\\ & =& 2^{t} + 4^{t},\quad t \in \mathbb{N}_{ 0}. {}\\ \end{array}$$

Since the solution we obtained is of exponential order, the steps taken above are valid.
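
A direct check (our own sketch, in exact integer arithmetic) confirms that the computed solution satisfies the difference equation and both initial conditions.

```python
# Verify the solution of Example 2.13: y(t) = 2^t + 4^t on N_0.

y = lambda t: 2 ** t + 4 ** t
dy = lambda t: y(t + 1) - y(t)            # (delta y)(t)
d2y = lambda t: dy(t + 1) - dy(t)         # (delta^2 y)(t)

assert y(0) == 2 and dy(0) == 4           # initial conditions

# Difference equation: delta^2 y - 3 delta y + 2 y = 2 * 4^t on N_0.
assert all(d2y(t) - 3 * dy(t) + 2 * y(t) == 2 * 4 ** t for t in range(50))
print("Example 2.13 checks out for t = 0, ..., 49")
```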

The following corollary gives us a useful formula for solving certain summation (delta integral) equations.

Corollary 2.14.

Assume  $$f: \mathbb{N}_{a} \rightarrow \mathbb{R}$$ is of exponential order r > 1. Then

 $$\displaystyle{\mathcal{L}_{a}\left \{\int _{a}^{t}f(\tau )\Delta \tau \right \}(s) = \frac{1} {s}\mathcal{L}_{a}\{f\}(s) = \frac{F_{a}(s)} {s} }$$

for |s + 1| > r.

Proof.

Since  $$f: \mathbb{N}_{a} \rightarrow \mathbb{R}$$ is of exponential order r > 1, we have by Exercise 2.3 that the function h defined by

 $$\displaystyle{h(t):=\int _{ a}^{t}f(\tau )\Delta \tau,\quad t \in \mathbb{N}_{ a}}$$

is also of exponential order r > 1. Hence the Laplace transform of h exists for | s + 1 |  > r. Then

 $$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\{f\}(s)& =& \mathcal{L}_{a}\{\Delta h\}(s) {}\\ & =& s\mathcal{L}_{a}\{h\}(s) - h(a) {}\\ & =& s\mathcal{L}_{a}\left \{\int _{a}^{t}f(\tau )\Delta \tau \right \}(s). {}\\ \end{array}$$

It follows that

 $$\displaystyle{\mathcal{L}_{a}\left \{\int _{a}^{t}f(\tau )\Delta \tau \right \}(s) = \frac{1} {s}\mathcal{L}_{a}\{f\}(s) = \frac{F_{a}(s)} {s} }$$

for | s + 1 |  > r.  □ 
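
As an illustration (with assumed test data), Corollary 2.14 can be spot-checked numerically: for f(t) = e_2(t,0) = 3^t, the delta integral is h(t) = Σ_{τ=0}^{t−1} 3^τ, and its transform should agree with F_0(s)∕s.

```python
# Spot-check Corollary 2.14 with assumed test data f(t) = 3^t on N_0 and s = 4.

def delta_laplace(f, a, s, N=300):
    return sum(f(a + k) / (s + 1) ** (k + 1) for k in range(N))

a, s = 0, 4.0
f = lambda t: 3.0 ** t
h = lambda t: sum(f(tau) for tau in range(a, t))   # h(t) = int_a^t f(tau) delta tau

lhs = delta_laplace(h, a, s)
rhs = delta_laplace(f, a, s) / s                   # F_a(s) / s

print(lhs, rhs, abs(lhs - rhs))                    # both are about 1/8
```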

Example 2.15.

Solve the summation equation

 $$\displaystyle{ y(t) = 2 \cdot 4^{t} + 2\sum _{ k=0}^{t-1}y(k),\quad t \in \mathbb{N}_{ 0}. }$$

(2.5)

Equation (2.5) can be written in the equivalent form

 $$\displaystyle{ y(t) = 2 \cdot e_{3}(t,0) + 2\int _{0}^{t}y(k)\Delta k,\quad t \in \mathbb{N}_{ 0}. }$$

(2.6)

Taking the Laplace transform of both sides of (2.6) we get, using Corollary 2.14,

 $$\displaystyle\begin{array}{rcl} Y _{0}(s)& =& \frac{2} {s - 3} + \frac{2} {s}\;Y _{0}(s). {}\\ \end{array}$$

Solving for Y 0(s) we get

 $$\displaystyle\begin{array}{rcl} Y _{0}(s)& =& \frac{2s} {(s - 2)(s - 3)} {}\\ & =& \frac{6} {s - 3} - \frac{4} {s - 2}. {}\\ \end{array}$$

It follows that the solution of (2.5) is given by

 $$\displaystyle\begin{array}{rcl} y(t)& =& 6e_{3}(t,0) - 4e_{2}(t,0) {}\\ & =& 6 \cdot 4^{t} - 4 \cdot 3^{t},\quad t \in \mathbb{N}_{ 0}. {}\\ \end{array}$$
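
A direct check (our own sketch, in exact integer arithmetic) confirms that this function satisfies the original summation equation (2.5).

```python
# Verify the solution of Example 2.15: y(t) = 6 * 4^t - 4 * 3^t on N_0.

y = lambda t: 6 * 4 ** t - 4 * 3 ** t

# Summation equation (2.5): y(t) = 2 * 4^t + 2 * sum_{k=0}^{t-1} y(k).
assert all(y(t) == 2 * 4 ** t + 2 * sum(y(k) for k in range(t)) for t in range(60))
print("Example 2.15 checks out for t = 0, ..., 59")
```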

Next we introduce the Dirac delta function and find its Laplace transform.

Definition 2.16.

Let  $$c \in \mathbb{N}_{a}$$ . We define the Dirac delta function at c on  $$\mathbb{N}_{a}$$ by

 $$\displaystyle{\delta _{c}(t) = \left \{\begin{array}{@{}l@{\quad }l@{}} 1,\quad &\quad t = c\\ 0,\quad &\quad t\neq c. \end{array} \right.}$$

Theorem 2.17.

Assume  $$c \in \mathbb{N}_{a}$$ . Then

 $$\displaystyle{\mathcal{L}_{a}\{\delta _{c}\}(s) = \frac{1} {(s + 1)^{c-a+1}}\quad \mbox{ for}\quad \vert s + 1\vert > 0.}$$

Proof.

For | s + 1 |  > 0, 

 $$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\{\delta _{c}\}(s)& =& \sum _{k=0}^{\infty } \frac{\delta _{c}(a + k)} {(s + 1)^{k+1}} {}\\ & =& \frac{1} {(s + 1)^{c-a+1}}. {}\\ \end{array}$$

This completes the proof. □ 

Next we define the unit step function and later find its Laplace transform.

Definition 2.18.

Let  $$c \in \mathbb{N}_{a}$$ . We define the unit step function on  $$\mathbb{N}_{a}$$ by

 $$\displaystyle{u_{c}(t) = \left \{\begin{array}{@{}l@{\quad }l@{}} 0,\quad &\quad t \in \mathbb{N}_{a}^{c-1} \\ 1,\quad &\quad t \in \mathbb{N}_{c}. \end{array} \right.}$$

We now prove the following shifting theorem.

Theorem 2.19 (Shifting Theorem).

Let  $$c \in \mathbb{N}_{a}$$ and assume the Laplace transform of  $$f: \mathbb{N}_{a} \rightarrow \mathbb{R}$$ exists for |s + 1| > r. Then the following hold:

(i)

 $$\mathcal{L}_{a}\{f(t - (c - a))u_{c}(t)\}(s) = \frac{1} {(s+1)^{c-a}}\mathcal{L}_{a}\{f\}(s);$$

(ii)

 $$\mathcal{L}_{a}\{f(t + (c - a))\}(s) = (s + 1)^{c-a}\left [\mathcal{L}_{a}\{f\}(s) -\sum _{k=0}^{c-a-1} \frac{f(a+k)} {(s+1)^{k+1}} \right ],$$

for |s + 1| > r. (In (i) we have the convention that  $$f(t - (c - a))u_{c}(t) = 0$$ for  $$t \in \mathbb{N}_{a}^{c-1}$$ if c ≥ a + 1.)

Proof.

To see that (i) holds, consider

 $$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\{f(t + a - c)u_{c}(t)\}(s)& =& \sum _{k=0}^{\infty }\frac{f(2a + k - c)u_{c}(a + k)} {(s + 1)^{k+1}} {}\\ & =& \sum _{k=c-a}^{\infty }\frac{f(2a + k - c)} {(s + 1)^{k+1}} {}\\ & =& \sum _{k=0}^{\infty }\frac{f(2a + k + c - a - c)} {(s + 1)^{k+c-a+1}} {}\\ & =& \sum _{k=0}^{\infty } \frac{f(a + k)} {(s + 1)^{k+c-a+1}} {}\\ & =& \frac{1} {(s + 1)^{c-a}}\sum _{k=0}^{\infty } \frac{f(a + k)} {(s + 1)^{k+1}} {}\\ & =& \frac{1} {(s + 1)^{c-a}}\mathcal{L}_{a}\{f\}(s) {}\\ \end{array}$$

for | s + 1 |  > r. 

Part (ii) holds since

 $$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\{f(t + (c - a))\}(s)& =& \sum _{k=0}^{\infty }\frac{f(a + k + c - a)} {(s + 1)^{k+1}} {}\\ & =& \sum _{k=0}^{\infty } \frac{f(k + c)} {(s + 1)^{k+1}} {}\\ & =& \sum _{k=c-a}^{\infty } \frac{f(a + k)} {(s + 1)^{k-c+a+1}} {}\\ & =& (s + 1)^{c-a}\sum _{ k=c-a}^{\infty } \frac{f(a + k)} {(s + 1)^{k+1}} {}\\ & =& (s + 1)^{c-a}\left [\sum _{ k=0}^{\infty } \frac{f(a + k)} {(s + 1)^{k+1}} -\sum _{k=0}^{c-a-1} \frac{f(a + k)} {(s + 1)^{k+1}}\right ] {}\\ & =& (s + 1)^{c-a}\left [\mathcal{L}_{ a}\{f\}(s) -\sum _{k=0}^{c-a-1} \frac{f(a + k)} {(s + 1)^{k+1}}\right ] {}\\ \end{array}$$

for | s + 1 |  > r.  □ 
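
Both parts of the Shifting Theorem can be spot-checked numerically. In the sketch below (an illustration; the test data f(t) = 2^t, a = 0, c = 3, s = 2 are assumptions), truncated series for the shifted functions are compared with the right-hand sides of (i) and (ii).

```python
# Spot-check Theorem 2.19 with assumed test data f(t) = 2^t on N_0, c = 3, s = 2.

def delta_laplace(f, a, s, N=300):
    return sum(f(a + k) / (s + 1) ** (k + 1) for k in range(N))

a, c, s = 0, 3, 2.0
f = lambda t: 2.0 ** t
u_c = lambda t: 1.0 if t >= c else 0.0             # unit step based at c
F = delta_laplace(f, a, s)

# Part (i): L_a{f(t - (c - a)) u_c(t)}(s) = F_a(s) / (s + 1)^(c - a).
lhs_i = delta_laplace(lambda t: f(t - (c - a)) * u_c(t), a, s)
rhs_i = F / (s + 1) ** (c - a)

# Part (ii): L_a{f(t + (c - a))}(s)
#          = (s + 1)^(c - a) * [F_a(s) - sum_{k=0}^{c-a-1} f(a + k)/(s + 1)^(k + 1)].
lhs_ii = delta_laplace(lambda t: f(t + (c - a)), a, s)
rhs_ii = (s + 1) ** (c - a) * (F - sum(f(a + k) / (s + 1) ** (k + 1)
                                       for k in range(c - a)))

print(abs(lhs_i - rhs_i), abs(lhs_ii - rhs_ii))    # both differences are tiny
```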

In the following example we will use part (i) of Theorem 2.19 to solve an IVP.

Example 2.20.

Solve the IVP

 $$\displaystyle\begin{array}{rcl} & & \Delta y(t) - 3y(t) = 2\delta _{50}(t),\quad t \in \mathbb{N}_{0}, {}\\ & & y(0) = 5. {}\\ \end{array}$$

Taking the Laplace transform of both sides, we get

 $$\displaystyle{sY _{0}(s) - y(0) - 3Y _{0}(s) = \frac{2} {(s + 1)^{51}}.}$$

Using the initial condition and solving for Y 0(s) we have that

 $$\displaystyle{Y _{0}(s) = \frac{5} {s - 3} + \frac{2} {s - 3} \frac{1} {(s + 1)^{51}}.}$$

Taking the inverse transform of both sides we get the desired solution

 $$\displaystyle\begin{array}{rcl} y(t)& =& 5e_{3}(t,0) + 2e_{3}(t - 51,0)u_{51}(t) {}\\ & =& 5(4^{t}) + 2(4)^{t-51}u_{ 51}(t),\quad t \in \mathbb{N}_{0}. {}\\ \end{array}$$
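
A direct check (our own sketch, in exact integer arithmetic) shows that this function satisfies the impulse-driven equation and the initial condition.

```python
# Verify the solution of Example 2.20: y(t) = 5 * 4^t + 2 * 4^(t - 51) * u_51(t) on N_0.

def y(t):
    return 5 * 4 ** t + (2 * 4 ** (t - 51) if t >= 51 else 0)

delta_50 = lambda t: 1 if t == 50 else 0

assert y(0) == 5                          # initial condition
# Difference equation: delta y(t) - 3 y(t) = 2 * delta_50(t) on N_0.
assert all(y(t + 1) - y(t) - 3 * y(t) == 2 * delta_50(t) for t in range(120))
print("Example 2.20 checks out for t = 0, ..., 119")
```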

In the following example we will use part (ii) of Theorem 2.19 to solve an IVP.

Example 2.21.

Use Laplace transforms to solve the IVP

 $$\displaystyle\begin{array}{rcl} & & y(t + 2) + y(t + 1) - 6y(t) = 0,\quad t \in \mathbb{N}_{0} {}\\ & & y(0) = 5,\quad y(1) = 2. {}\\ \end{array}$$

Assume y(t) is the solution of this IVP and take the Laplace transform of both sides of the given difference equation to get (using part (ii) of Theorem 2.19) that

 $$\displaystyle{(s + 1)^{2}\left [Y _{ 0}(s) - \frac{5} {s + 1} - \frac{2} {(s + 1)^{2}}\right ] + (s + 1)\left [Y _{0}(s) - \frac{5} {s + 1}\right ] - 6Y _{0}(s) = 0.}$$

Solving for Y 0(s) we get

 $$\displaystyle\begin{array}{rcl} Y _{0}(s)& =& \frac{5s + 12} {(s - 1)(s + 4)} {}\\ & =& \frac{17} {5} \frac{1} {s - 1} + \frac{8} {5} \frac{1} {s + 4}. {}\\ \end{array}$$

Taking the inverse transform of both sides we get

 $$\displaystyle\begin{array}{rcl} y(t)& =& \frac{17} {5} e_{1}(t,0) + \frac{8} {5}e_{-4}(t,0) {}\\ & =& \frac{17} {5} 2^{t} + \frac{8} {5}(-3)^{t},\quad t \in \mathbb{N}_{ 0}. {}\\ \end{array}$$
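
A direct check (our own sketch, in exact rational arithmetic) confirms that this function satisfies the recurrence and both initial conditions.

```python
from fractions import Fraction

# Verify the solution of Example 2.21: y(t) = (17/5) * 2^t + (8/5) * (-3)^t on N_0.

y = lambda t: Fraction(17, 5) * 2 ** t + Fraction(8, 5) * (-3) ** t

assert y(0) == 5 and y(1) == 2            # initial conditions
# Recurrence: y(t + 2) + y(t + 1) - 6 y(t) = 0 on N_0.
assert all(y(t + 2) + y(t + 1) - 6 * y(t) == 0 for t in range(80))
print("Example 2.21 checks out for t = 0, ..., 79")
```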

Theorem 2.22.

The following hold for n ≥ 0:

(i)

 $$\mathcal{L}_{a}\{h_{n}(t,a)\}(s) = \frac{1} {s^{n+1}}$$ for |s + 1| > 1;

(ii)

 $$\mathcal{L}_{a}\{(t - a)^{\underline{n}}\}(s) = \frac{n!} {s^{n+1}}$$ for |s + 1| > 1.

Proof.

The proof of part (i) follows by induction on n from Corollary 2.14, using the relation  $$h_{n+1}(t,a) =\int _{ a}^{t}h_{n}(\tau,a)\Delta \tau $$ together with the fact that  $$\mathcal{L}_{a}\{1\}(s) = \frac{1} {s}$$ for | s + 1 |  > 1. Part (ii) then follows from part (i) and the identity  $$(t - a)^{\underline{n}} = n!\,h_{n}(t,a).$$  □
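
As a final illustration (our own sketch, using  $$h_{n}(t,a) = \frac{(t-a)^{\underline{n}}} {n!}$$ and assumed test values), both parts of Theorem 2.22 can be spot-checked numerically.

```python
import math

# Spot-check Theorem 2.22 for n = 0, ..., 4 at the assumed point s = 1 (|s + 1| = 2 > 1).

def delta_laplace(f, a, s, N=400):
    return sum(f(a + k) / (s + 1) ** (k + 1) for k in range(N))

def falling(t, n):
    """Falling factorial t^(n underbar) = t (t - 1) ... (t - n + 1)."""
    out = 1
    for j in range(n):
        out *= t - j
    return out

a, s = 0, 1.0
for n in range(5):
    err_i = abs(delta_laplace(lambda t: falling(t - a, n) / math.factorial(n), a, s)
                - 1 / s ** (n + 1))
    err_ii = abs(delta_laplace(lambda t: falling(t - a, n), a, s)
                 - math.factorial(n) / s ** (n + 1))
    print(n, err_i, err_ii)               # both errors are tiny for each n
```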