
Discrete Fractional Calculus (2015)

1. Basic Difference Calculus

1.2. Delta Exponential Function

In this section we study the delta exponential function, which plays the same role in the delta calculus on  $$\mathbb{N}_{a}$$ that the exponential function  $$e^{pt}$$ ,  $$p \in \mathbb{R}$$ , plays in the continuous calculus. Keep in mind that when p is a constant,  $$x(t) = e^{pt}$$ is the unique solution of the initial value problem

 $$\displaystyle{x' = px,\quad x(0) = 1.}$$

For the delta exponential function we would like to consider functions in the set of regressive functions defined by

 $$\displaystyle{\mathcal{R} =\{ p: \mathbb{N}_{a} \rightarrow \mathbb{R}\;\;\mbox{ such that}\;\;1 + p(t)\neq 0\;\;\mbox{ for}\;\;t \in \mathbb{N}_{a}\}.}$$

Some of the results that we give will be true if in the definition of regressive functions we consider complex-valued functions instead of real-valued functions. We leave it to the reader to note when this is true.

We then define the delta exponential function corresponding to a function  $$p \in \mathcal{R}$$ , based at  $$s \in \mathbb{N}_{a}$$ , to be the unique solution (why does  $$p \in \mathcal{R}$$ guarantee uniqueness?),  $$e_{p}(t,s)$$ , of the initial value problem

 $$\displaystyle{ \Delta x(t) = p(t)x(t), }$$

(1.4)

 $$\displaystyle{ \quad x(s) = 1. }$$

(1.5)

Theorem 1.11.

Assume  $$p \in \mathcal{R}$$ and  $$s \in \mathbb{N}_{a}.$$ Then

 $$\displaystyle{ e_{p}(t,s) = \left \{\begin{array}{@{}l@{\quad }l@{}} \prod _{\tau =s}^{t-1}[1 + p(\tau )],\quad t \in \mathbb{N}_{s} \quad \\ \prod _{\tau =t}^{s-1}[1 + p(\tau )]^{-1},\quad t \in \mathbb{N}_{a}^{s-1}.\quad \end{array} \right. }$$

(1.6)

Here, by a standard convention on products, it is understood that for any function h that

 $$\displaystyle{\prod _{\tau =s}^{s-1}h(\tau ):= 1.}$$

Proof.

We solve the IVP (1.4), (1.5) to get a formula for  $$e_{p}(t,s)$$ . Solving (1.4) for x(t + 1) we get

 $$\displaystyle{ x(t + 1) = [1 + p(t)]x(t),\quad t \in \mathbb{N}_{a}. }$$

(1.7)

Letting t = s in (1.7) and using the initial condition (1.5) we get

 $$\displaystyle{x(s + 1) = [1 + p(s)]x(s) = [1 + p(s)].}$$

Next, letting t = s + 1 in (1.7) we get

 $$\displaystyle{x(s + 2) = [1 + p(s + 1)]x(s + 1) = [1 + p(s)][1 + p(s + 1)].}$$

Proceeding in this fashion we get

 $$\displaystyle{ e_{p}(t,s) =\prod _{ \tau =s}^{t-1}[1 + p(\tau )] }$$

(1.8)

for  $$t \in \mathbb{N}_{s+1}$$ . In the product in (1.8), it is understood that the index τ takes on the values  $$s,s + 1,s + 2,\ldots,t - 1.$$ By convention  $$e_{p}(s,s) =\prod _{ \tau =s}^{s-1}[1 + p(\tau )] = 1$$ . Next assume  $$t \in \mathbb{N}_{a}^{s-1}$$ . Solving (1.7) for x(t) we get

 $$\displaystyle{ x(t) = \frac{1} {1 + p(t)}x(t + 1). }$$

(1.9)

Letting t = s − 1 in (1.9), we get

 $$\displaystyle{x(s - 1) = \frac{1} {1 + p(s - 1)}x(s) = \frac{1} {1 + p(s - 1)}.}$$

Next, letting t = s − 2 in (1.9), we get

 $$\displaystyle{x(s - 2) = \frac{1} {1 + p(s - 2)}x(s - 1) = \frac{1} {\left [1 + p(s - 2)\right ]\left [1 + p(s - 1)\right ]}.}$$

Continuing in this manner we get

 $$\displaystyle{x(t) =\prod _{ \tau =t}^{s-1}[1 + p(\tau )]^{-1},\quad t \in \mathbb{N}_{ a}^{s-1}.}$$

 □ 
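Formula (1.6) translates directly into a short computation. The following Python sketch (the name `delta_exp` is mine, not from the text) evaluates  $$e_{p}(t,s)$$ by the two-sided product, with the empty-product convention built in, and then checks the defining recurrence x(t + 1) = [1 + p(t)]x(t) at a few points:

```python
def delta_exp(p, t, s):
    """Delta exponential e_p(t, s) from Theorem 1.11, formula (1.6).

    p: regressive coefficient function, i.e. 1 + p(tau) != 0 for all tau.
    For t >= s the forward product is used; for t < s the reciprocal one.
    The empty product (t == s) is 1 by convention.
    """
    if t >= s:
        result = 1.0
        for tau in range(s, t):        # tau = s, s+1, ..., t-1
            result *= 1 + p(tau)
        return result
    result = 1.0
    for tau in range(t, s):            # tau = t, t+1, ..., s-1
        result /= 1 + p(tau)
    return result

# The defining recurrence x(t+1) = [1 + p(t)] x(t) holds at every step:
p = lambda tau: 1 / (tau + 1)
for t in range(6):
    lhs = delta_exp(p, t + 1, 0)
    rhs = (1 + p(t)) * delta_exp(p, t, 0)
    assert abs(lhs - rhs) < 1e-12
```

Note that the backward branch is exactly the reciprocal of the forward one, which is what makes property (x) of Theorem 1.18 below hold.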

Theorem 1.11 gives us the following examples.

Example 1.12.

If p(t) = p is a constant with p ≠ − 1 (note this constant function is in  $$\mathcal{R}$$ ), then from (1.6)

 $$\displaystyle{e_{p}(t,s) = (1 + p)^{t-s},\quad t \in \mathbb{N}_{a}.}$$

Example 1.13.

Find  $$e_{p}(t,1)$$ if p(t) = t − 1,  $$t \in \mathbb{N}_{1}.$$ First note that 1 + p(t) = t ≠ 0 for  $$t \in \mathbb{N}_{1}$$ , so  $$p \in \mathcal{R}$$ . From (1.6) we get

 $$\displaystyle{e_{p}(t,1) =\prod _{ \tau =1}^{t-1}\tau = (t - 1)!}$$

for  $$t \in \mathbb{N}_{1}$$ .
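Example 1.13 is easy to confirm by brute force. In the sketch below (the helper name is mine), each factor 1 + p(τ) equals τ, so the product over τ = 1, …, t − 1 collapses to (t − 1)!:

```python
import math

# Example 1.13: with p(tau) = tau - 1, each factor 1 + p(tau) equals tau,
# so e_p(t, 1) is the product of tau = 1, ..., t - 1, i.e. (t - 1)!.
def e_p_based_at_1(t):
    result = 1
    for tau in range(1, t):
        result *= 1 + (tau - 1)      # = tau
    return result

for t in range(1, 9):
    assert e_p_based_at_1(t) == math.factorial(t - 1)
```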

It is easy to prove the following theorem.

Theorem 1.14.

If  $$p \in \mathcal{R}$$ , then a general solution of

 $$\displaystyle{\Delta y(t) = p(t)y(t),\quad t \in \mathbb{N}_{a}}$$

is given by

 $$\displaystyle{y(t) = ce_{p}(t,a),\quad t \in \mathbb{N}_{a},}$$

where c is an arbitrary constant.

The following example is an interesting application using an exponential function.

Example 1.15.

According to folklore, Peter Minuit in 1626 purchased Manhattan Island for goods worth $24. If at the beginning of 1626 the $24 could have been invested at an annual interest rate of 7% compounded quarterly, what would it have been worth at the end of the year 2014? Let y(t) be the value of the investment after t quarters of a year. Then y(t) satisfies the equation

 $$\displaystyle\begin{array}{rcl} y(t + 1)& =& y(t) + \frac{.07} {4} \;y(t) {}\\ & =& y(t) +.0175\;y(t). {}\\ \end{array}$$

Thus y is a solution of the IVP

 $$\displaystyle{\Delta y(t) =.0175\;y(t),\quad y(0) = 24.}$$

Using Theorem 1.14 and the initial condition we get that

 $$\displaystyle{y(t) = 24\;e_{.0175}(t,0) = 24(1.0175)^{t}.}$$

It follows that

 $$\displaystyle{y(1552) = 24(1.0175)^{1552} \approx 1.18 \times 10^{13}}$$

(about 11.8 trillion dollars!).
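The arithmetic in Example 1.15 can be checked in one line; 1552 is the number of quarters used in the text:

```python
# Example 1.15: y(t) = 24 * (1.0175)**t dollars after t quarters.
value = 24 * (1.0175) ** 1552
# value is roughly 1.18e13, i.e. about 11.8 trillion dollars
assert 1.1e13 < value < 1.25e13
```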

We now develop some properties of the (delta) exponential function  $$e_{p}(t,a)$$ . To motivate our later results, consider, for  $$p,q \in \mathcal{R}$$ , the product

 $$\displaystyle\begin{array}{rcl} e_{p}(t,a)e_{q}(t,a)& =& \prod _{\tau =a}^{t-1}[1 + p(\tau )]\prod _{\tau =a}^{t-1}[1 + q(\tau )] {}\\ & =& \prod _{\tau =a}^{t-1}[1 + p(\tau )][1 + q(\tau )] {}\\ & =& \prod _{\tau =a}^{t-1}[1 + (p(\tau ) + q(\tau ) + p(\tau )q(\tau ))] {}\\ & =& \prod _{\tau =a}^{t-1}[1 + (p \oplus q)(\tau )],\quad \mbox{ if $(p \oplus q)(t):= p(t) + q(t) + p(t)q(t)$} {}\\ & =& e_{p\oplus q}(t,a). {}\\ \end{array}$$

Hence the law of exponents

 $$\displaystyle{e_{p}(t,a)e_{q}(t,a) = e_{p\oplus q}(t,a)}$$

holds for  $$p,q \in \mathcal{R},$$ provided

 $$\displaystyle{p \oplus q:= p + q + pq.}$$
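This law of exponents is easy to spot-check numerically. The sketch below (the names `e` and `oplus` are mine) compares  $$e_{p}(t,a)e_{q}(t,a)$$ with  $$e_{p\oplus q}(t,a)$$ for a = 0:

```python
# Law of exponents e_p * e_q = e_{p (+) q}, checked with base point a = 0.
def e(p, t):
    """Forward product e_p(t, 0) = prod over tau = 0, ..., t-1 of [1 + p(tau)]."""
    result = 1.0
    for tau in range(t):
        result *= 1 + p(tau)
    return result

p = lambda tau: 0.3
q = lambda tau: tau / 10
oplus = lambda tau: p(tau) + q(tau) + p(tau) * q(tau)   # (p (+) q)(tau)

for t in range(8):
    assert abs(e(p, t) * e(q, t) - e(oplus, t)) < 1e-9
```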

Theorem 1.16.

If we define the circle plus addition , ⊕, on  $$\mathcal{R}$$ by

 $$\displaystyle{p \oplus q:= p + q + pq,}$$

then  $$(\mathcal{R},\oplus )$$ is an Abelian group.

Proof.

First to see the closure property is satisfied, note that if  $$p,q \in \mathcal{R}$$ , then 1 + p(t) ≠ 0 and 1 + q(t) ≠ 0 for  $$t \in \mathbb{N}_{a}.$$ It follows that

 $$\displaystyle\begin{array}{rcl} 1 + (p \oplus q)(t) = 1 + [p(t) + q(t) + p(t)q(t)] = [1 + p(t)][1 + q(t)]\neq 0& & {}\\ \end{array}$$

for  $$t \in \mathbb{N}_{a},$$ and hence  $$p \oplus q \in \mathcal{R}$$ .

Next, the zero function  $$0 \in \mathcal{R}$$ , since 1 + 0 = 1 ≠ 0. Also

 $$\displaystyle{0 \oplus p = 0 + p + 0 \cdot p = p,\quad \mbox{ for all $p \in \mathcal{R}$},}$$

so the zero function 0 is the additive identity element in  $$\mathcal{R}.$$

To show that every element in  $$\mathcal{R}$$ has an additive inverse, let  $$p \in \mathcal{R}$$ and set  $$q = \frac{-p} {1+p}$$ . Since

 $$\displaystyle{1 + q(t) = 1 + \frac{-p(t)} {1 + p(t)} = \frac{1} {1 + p(t)}\neq 0}$$

for  $$t \in \mathbb{N}_{a}$$ , we have  $$q \in \mathcal{R}$$ . We also have that

 $$\displaystyle{p \oplus q = p \oplus \frac{-p} {1 + p} = p + \frac{-p} {1 + p} + \frac{-p^{2}} {1 + p} = p - p = 0}$$

so q is the additive inverse of p. For  $$p \in \mathcal{R},$$ we use the following notation for the additive inverse of p:

 $$\displaystyle{ \ominus p:= \frac{-p} {1 + p}. }$$

(1.10)

The fact that the addition ⊕ is associative and commutative is Exercise 1.19. □ 
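Commutativity is immediate from the symmetry of p + q + pq, and associativity follows since 1 + (p ⊕ q) = (1 + p)(1 + q); both are left to Exercise 1.19, but a numeric spot-check is straightforward (function names are mine):

```python
# Spot-checking commutativity and associativity of the circle plus addition.
def oplus(p, q):
    return lambda t: p(t) + q(t) + p(t) * q(t)

p = lambda t: 0.2
q = lambda t: t / 4
r = lambda t: -0.5 + t / 10     # regressive: 1 + r(t) = 0.5 + t/10 > 0

for t in range(5):
    assert abs(oplus(p, q)(t) - oplus(q, p)(t)) < 1e-12
    assert abs(oplus(oplus(p, q), r)(t) - oplus(p, oplus(q, r))(t)) < 1e-12
```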

We can now define circle minus subtraction on  $$\mathcal{R}$$ in the standard way that subtraction is defined in terms of addition.

Definition 1.17.

We define circle minus subtraction on  $$\mathcal{R}$$ by

 $$\displaystyle{p \ominus q:= p \oplus [\ominus q].}$$

It can be shown (Exercise 1.18) that if  $$p,q \in \mathcal{R}$$ then

 $$\displaystyle{(p \ominus q)(t) = \frac{p(t) - q(t)} {1 + q(t)},\quad t \in \mathbb{N}_{a}.}$$
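This closed form agrees with the definition p ⊕ [⊖q], since p + (−q)∕(1 + q) + p(−q)∕(1 + q) = (p − q)∕(1 + q). The sketch below (names are mine) verifies the identity numerically:

```python
# (p (-) q)(t) = (p(t) - q(t)) / (1 + q(t)) versus the definition p (+) [(-) q].
def ominus(q):
    return lambda t: -q(t) / (1 + q(t))         # circle minus of q

def oplus(p, q):
    return lambda t: p(t) + q(t) + p(t) * q(t)  # circle plus addition

p = lambda t: 0.4
q = lambda t: t / 5

via_definition = oplus(p, ominus(q))
closed_form = lambda t: (p(t) - q(t)) / (1 + q(t))

for t in range(6):
    assert abs(via_definition(t) - closed_form(t)) < 1e-12
```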

The next theorem gives us several properties of the exponential function  $$e_{p}(t,s)$$ , based at  $$s \in \mathbb{N}_{a}$$ .

Theorem 1.18.

Assume  $$p,q \in \mathcal{R}$$ and  $$t,s,r \in \mathbb{N}_{a}$$ . Then

(i)

 $$e_{0}(t,s) = 1$$ and  $$e_{p}(t,t) = 1$$ ;

(ii)

 $$e_{p}(t,s)\neq 0,\quad t \in \mathbb{N}_{a};$$

(iii)

if 1 + p > 0, then  $$e_{p}(t,s) > 0$$ ;

(iv)

 $$\Delta e_{p}(t,s) = p(t)\;e_{p}(t,s);$$

(v)

 $$e_{p}^{\sigma }(t,s) = e_{p}(\sigma (t),s) = [1 + p(t)]e_{p}(t,s);$$

(vi)

 $$e_{p}(t,s)e_{p}(s,r) = e_{p}(t,r)$$ ;

(vii)

 $$e_{p}(t,s)e_{q}(t,s) = e_{p\oplus q}(t,s)$$ ;

(viii)

 $$e_{\ominus p}(t,s) = \frac{1} {e_{p}(t,s)};$$

(ix)

 $$\frac{e_{p}(t,s)} {e_{q}(t,s)} = e_{p\ominus q}(t,s);$$

(x)

 $$e_{p}(t,s) = \frac{1} {e_{p}(s,t)}.$$

Proof.

We prove many of these properties when s = a and leave it to the reader to show that the same results hold for any  $$s \in \mathbb{N}_{a}$$ . By the definition of the exponential we have that (i) and (iv) hold. To see that (ii) holds when s = a note that since  $$p \in \mathcal{R}$$ , 1 + p(t) ≠ 0 for  $$t \in \mathbb{N}_{a}$$ and hence we have that

 $$\displaystyle{e_{p}(t,a) =\prod _{ \tau =a}^{t-1}[1 + p(\tau )]\neq 0,}$$

for  $$t \in \mathbb{N}_{a}$$ . The proof of (iii) is similar to the proof of (ii).

Since

 $$\displaystyle\begin{array}{rcl} e_{p}(\sigma (t),a)& =& \prod _{\tau =a}^{\sigma (t)-1}[1 + p(\tau )] {}\\ & =& \prod _{\tau =a}^{t}[1 + p(\tau )] {}\\ & =& [1 + p(t)]e_{p}(t,a), {}\\ \end{array}$$

we have that (v) holds when s = a.

We show that (vi) holds only when t ≥ s ≥ r, leaving the other cases to the reader. Observe that

 $$\displaystyle\begin{array}{rcl} e_{p}(t,s)e_{p}(s,r)& =& \prod _{\tau =s}^{t-1}[1 + p(\tau )]\prod _{\tau =r}^{s-1}[1 + p(\tau )] {}\\ & =& \prod _{\tau =r}^{t-1}[1 + p(\tau )] {}\\ & =& e_{p}(t,r). {}\\ \end{array}$$

We proved that (vii) holds with s = a earlier, to motivate the definition of the circle plus addition. To see that (viii) holds with s = a note that

 $$\displaystyle\begin{array}{rcl} e_{\ominus p}(t,a)& =& \prod _{\tau =a}^{t-1}[1 + (\ominus p)(\tau )] {}\\ & =& \prod _{\tau =a}^{t-1} \frac{1} {1 + p(\tau )} {}\\ & =& \frac{1} {\prod _{\tau =a}^{t-1}[1 + p(\tau )]} {}\\ & =& \frac{1} {e_{p}(t,a)}. {}\\ \end{array}$$

Since

 $$\displaystyle{ \frac{e_{p}(t,a)} {e_{q}(t,a)} = e_{p}(t,a)e_{\ominus q}(t,a) = e_{p\oplus [\ominus q]}(t,a) = e_{p\ominus q}(t,a), }$$

we have (ix) holds when s = a. Since

 $$\displaystyle{e_{p}(t,a) =\prod _{ s=a}^{t-1}[1 + p(s)] = \frac{1} {\prod _{s=a}^{t-1}[1 + p(s)]^{-1}} = \frac{1} {e_{p}(a,t)},}$$

we have that (x) holds. □ 
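Several parts of Theorem 1.18 are easy to spot-check numerically. The sketch below (names are mine) verifies parts (viii) and (x) at a few points, using the two-sided product from (1.6):

```python
# Spot checks for Theorem 1.18, parts (viii) and (x).
def e(p, t, s):
    """Two-sided product from (1.6): e_p(t, s)."""
    if t >= s:
        r = 1.0
        for tau in range(s, t):
            r *= 1 + p(tau)
        return r
    r = 1.0
    for tau in range(t, s):
        r /= 1 + p(tau)
    return r

p = lambda tau: 0.2 + tau / 10
om_p = lambda tau: -p(tau) / (1 + p(tau))      # the circle minus of p

for t in range(8):
    assert abs(e(om_p, t, 0) - 1 / e(p, t, 0)) < 1e-9     # part (viii)
    assert abs(e(p, t, 3) - 1 / e(p, 3, t)) < 1e-9        # part (x)
```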

Before we derive some other properties of the exponential function we give another example where we use an exponential function.

Example 1.19.

Assume initially that the number of bacteria in a culture is  $$P_{0}$$ and after one hour the number of bacteria present is  $$\frac{3} {2}P_{0}$$ . Find the number of bacteria, P(t), present after t hours. How long does it take for the number of bacteria to triple? Experiments show that P(t) satisfies the IVP (why is this plausible?)

 $$\displaystyle{\Delta P(t) = kP(t),\quad P(0) = P_{0}.}$$

Solving this IVP we get from Theorem 1.14 that

 $$\displaystyle{P(t) = P_{0}e_{k}(t,0) = P_{0}(1 + k)^{t}.}$$

Using the fact that  $$P(1) = \frac{3} {2}P_{0}$$ we get  $$1 + k = \frac{3} {2}$$ . It follows that

 $$\displaystyle{P(t) = P_{0}\left (\frac{3} {2}\right )^{t},\quad t \in \mathbb{N}_{ 0}.}$$

Let  $$t_{0}$$ be the amount of time it takes for the population of the bacteria to triple. Then

 $$\displaystyle{P(t_{0}) = P_{0}\left (\frac{3} {2}\right )^{t_{0} } = 3P_{0},}$$

which implies that

 $$\displaystyle{t_{0} = \frac{\ln (3)} {\ln (1.5)} \approx 2.71\;\;\mbox{ hours}.}$$
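The tripling time in Example 1.19 is a one-line computation:

```python
import math

# t0 solves (3/2)**t0 = 3, i.e. t0 = ln(3) / ln(1.5)
t0 = math.log(3) / math.log(1.5)
assert abs(1.5 ** t0 - 3) < 1e-9
assert abs(t0 - 2.7095) < 1e-3     # about 2.71 hours
```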

The set of positively regressive functions,  $$\mathcal{R}^{+}$$ , is defined by

 $$\displaystyle{\mathcal{R}^{+}:=\{ p \in \mathcal{R}: 1 + p(t) > 0,\;\;t \in \mathbb{N}_{ a}\}.}$$

Note that by Theorem 1.18, part (iii), we have that if  $$p \in \mathcal{R}^{+}$$ , then  $$e_{p}(t,a) > 0$$ for  $$t \in \mathbb{N}_{a}$$ . It is easy to see (Exercise 1.20) that  $$(\mathcal{R}^{+},\oplus )$$ is a subgroup of  $$(\mathcal{R},\oplus )$$ .

We next define the circle dot scalar multiplication ⊙ on  $$\mathcal{R}^{+}.$$

Definition 1.20.

The circle dot scalar multiplication, ⊙, is defined on  $$\mathcal{R}^{+}$$ by

 $$\displaystyle{\alpha \odot p = (1 + p)^{\alpha } - 1.}$$

Theorem 1.21.

If  $$\alpha \in \mathbb{R}$$ and  $$p \in \mathcal{R}^{+}$$ , then

 $$\displaystyle{e_{p}^{\alpha }(t,a) = e_{\alpha \odot p}(t,a)}$$

for  $$t \in \mathbb{N}_{a}.$$

Proof.

Consider

 $$\displaystyle\begin{array}{rcl} e_{p}^{\alpha }(t,a)& =& \left \{\prod _{\tau =a}^{t-1}[1 + p(\tau )]\right \}^{\alpha } {}\\ & =& \prod _{\tau =a}^{t-1}[1 + p(\tau )]^{\alpha } {}\\ & =& \prod _{\tau =a}^{t-1}\{1 + [(1 + p(\tau ))^{\alpha } - 1]\} {}\\ & =& \prod _{\tau =a}^{t-1}[1 + (\alpha \odot p)(\tau )] {}\\ & =& e_{\alpha \odot p}(t,a). {}\\ \end{array}$$

This completes the proof. □ 
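Theorem 1.21 can also be checked numerically; the sketch below (names are mine) compares  $$e_{p}^{\alpha }(t,a)$$ with  $$e_{\alpha \odot p}(t,a)$$ for a = 0 and a positively regressive p:

```python
# Theorem 1.21: e_p^alpha(t, a) = e_{alpha (.) p}(t, a), with
# (alpha (.) p)(t) = (1 + p(t))**alpha - 1 and p positively regressive.
def e(p, t):
    """Forward product e_p(t, 0)."""
    r = 1.0
    for tau in range(t):
        r *= 1 + p(tau)
    return r

alpha = 2.5
p = lambda tau: 0.1 * (tau + 1)                  # 1 + p(tau) > 0
odot = lambda tau: (1 + p(tau)) ** alpha - 1     # alpha (.) p

for t in range(7):
    assert abs(e(p, t) ** alpha - e(odot, t)) < 1e-9
```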

The following lemma will be used in the proof of the next theorem.

Lemma 1.22.

If  $$p,q \in \mathcal{R}$$ and

 $$\displaystyle{e_{p}(t,a) = e_{q}(t,a),\quad t \in \mathbb{N}_{a},}$$

then p = q.

Proof.

Assume  $$p,q \in \mathcal{R}$$ and  $$e_{p}(t,a) = e_{q}(t,a)$$ for  $$t \in \mathbb{N}_{a}.$$ Applying Δ to both sides and using Theorem 1.18, part (iv), it follows that

 $$\displaystyle{p(t)\;e_{p}(t,a) = q(t)\;e_{q}(t,a),\quad t \in \mathbb{N}_{a}.}$$

Dividing by  $$e_{p}(t,a) = e_{q}(t,a)$$ , which is never zero by Theorem 1.18, part (ii), we get that p = q.  □ 

Theorem 1.23.

The set of positively regressive functions  $$\mathcal{R}^{+}$$ , with the addition ⊕ and the scalar multiplication ⊙, is a vector space.

Proof.

We just prove two of the properties of a vector space and leave the rest of the proof (see Exercise 1.25) to the reader. First we show that the distributive law

 $$\displaystyle{(\alpha +\beta ) \odot p = (\alpha \odot p) \oplus (\beta \odot p)}$$

holds for  $$\alpha,\beta \in \mathbb{R}$$ ,  $$p \in \mathcal{R}^{+}.$$ This follows from

 $$\displaystyle\begin{array}{rcl} e_{(\alpha +\beta )\odot p}(t,a)& =& e_{p}^{\alpha +\beta }(t,a) {}\\ & =& e_{p}^{\alpha }(t,a)e_{ p}^{\beta }(t,a) {}\\ & =& e_{\alpha \odot p}(t,a)e_{\beta \odot p}(t,a) {}\\ & =& e_{(\alpha \odot p)\oplus (\beta \odot p)}(t,a) {}\\ \end{array}$$

and an application of Lemma 1.22. Next we show that 1 ⊙ p = p for all  $$p \in \mathcal{R}^{+}.$$ This follows from

 $$\displaystyle{e_{1\odot p}(t,a) = e_{p}^{1}(t,a) = e_{ p}(t,a)}$$

and an application of Lemma 1.22. □