
The history of mathematics: A brief course (2013)

Part VII. Special Topics

Chapter 42. Real Numbers, Series, and Integrals

In complex analysis attention is restricted from the outset to functions that have a complex derivative. That very strong assumption automatically ensures that the functions studied will have convergent Taylor series. If only mathematical physics could manage with just such smooth functions, the abstruse concepts that fill up courses in real analysis would not be needed. But the physical world is full of boundaries, where the density of matter is discontinuous, temperatures undergo abrupt changes, light rays reflect and refract, and vibrating membranes are clamped. For these situations the imaginary part of the variable, which often has no physical interpretation anyway, might as well be dropped, since its only mathematical role was to complete the analytic function. From that point on, analysis proceeds on the basis of real variables only. Real analysis, which represents another extension of calculus, has to deal with very general, “rough” functions. All of the logical difficulties about calculus poured into this area of analysis, including the important questions of convergence of series, existence of maxima and minima, allowable ways of defining functions, continuity, and the meaning of integration. As a result, real analysis is so much less unified than complex analysis that it hardly appears to be a single subject. Its basic theorems do not follow from one another in any canonical order, and their proofs tend to be a bag of special tricks, rarely remembered for long except by professors who lecture on the subject constantly.

The subject arose in the attempts to solve the partial differential equations of mathematical physics, the wave equation, the Laplace equation, and, later on, the heat equation. Thus, to speak paradoxically, its roots are in the branches of the subject. At the same time, real analysis techniques forced mathematicians to confront the issues of what is meant by an integral, in what sense a series converges, and what, in the final analysis, a real number actually is. Thus, the branches of real analysis extended into the roots of analysis in general.

The free range of intuition suffered only minor checks in complex analysis. In that subject, what one wanted to believe very often turned out to be true. But real analysis almost seemed to be trapped in a hall of mirrors at times, as it struggled to gain the freedom to operate while avoiding paradoxes and contradictions. The generality of operations allowed in real analysis has fluctuated considerably over the centuries. While Descartes had imposed rather strict criteria for allowable curves (functions), Daniel Bernoulli attempted to represent very arbitrary functions as trigonometric series, and the mathematical physicist André-Marie Ampère (1775–1836) attempted to prove that a continuous function (in the modern sense, but influenced by preconceptions based on the earlier sense) would have a derivative at most points. The critique of this proof was followed by several decades of backtracking, as more and more exceptions were found for operations with series and integrals that appeared to be formally all right. Eventually, when a level of rigor was reached that eradicated the known paradoxes, the time came to reach for more generality. Georg Cantor's set theory played a large role in this increasing generality, while developing paradoxes of its own. In the twentieth century, the theories of generalized functions and distributions restored some of the earlier freedom by inventing a new object to represent the derivative of functions that have no derivative in the ordinary sense.

42.1 Fourier Series, Functions, and Integrals

There is a symmetry in the development of real and complex analysis. Broadly speaking, both arose from differential equations, and complex analysis grew out of power series, while real analysis grew out of trigonometric series. These two techniques, closely connected with each other through the relation $z^n = r^n(\cos n\theta + i\sin n\theta)$, led down divergent paths that nevertheless crossed frequently in their meanderings. The real and complex viewpoints in analysis began to diverge with the study of the vibrating string problem in the 1740s by d'Alembert, Euler, and Daniel Bernoulli.

For a string fastened at two points, say (0, 0) and (L, 0) and vibrating so that its displacement above or below the point (x, 0) at time t is y(x, t), mathematicians agreed that the best compromise between realism and comprehensibility to describe this motion was the one-dimensional wave equation, which d'Alembert studied in 1747,1 publishing the results in 1749:

$$\frac{\partial^2 y}{\partial t^2} = c^2\,\frac{\partial^2 y}{\partial x^2}.$$

D'Alembert exhibited a very general solution of this problem in the form

$$y(x, t) = \Psi(x + t) + \Gamma(x - t),$$

where for simplicity he assumed that c = 1. The equation alone does not determine the function, of course, since the vibrations depend on the initial position and velocity of the string.
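A direct check by the chain rule (restoring a general c) confirms that any such combination of twice-differentiable functions satisfies the wave equation:

$$\frac{\partial^2}{\partial t^2}\big(\Psi(x+ct) + \Gamma(x-ct)\big) = c^2\big(\Psi''(x+ct) + \Gamma''(x-ct)\big) = c^2\,\frac{\partial^2}{\partial x^2}\big(\Psi(x+ct) + \Gamma(x-ct)\big).$$

The requirement that Ψ and Γ possess two derivatives is exactly the point on which d'Alembert would insist in the dispute described below.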

The following year, Euler took up this problem and commented on d’Alembert's solution. He observed that the initial position could be any shape at all, “either regular or irregular and mechanical.” D'Alembert found that claim hard to accept. After all, the functions Ψ and Γ had to have periodicity and parity properties. How else could they be defined except as power series containing only odd or only even powers? Euler and d'Alembert were not interpreting the word “function” in the same way. Euler was even willing to consider initial positions f(x) with corners (a “plucked” string), whereas d'Alembert insisted that f(x) must have two derivatives in order to satisfy the equation.

Three years later, Daniel Bernoulli tried to straighten this matter out, giving a solution in the form

$$y(x, t) = \sum_{n=1}^{\infty} a_n \sin\Big(\frac{n\pi x}{L}\Big)\cos\Big(\frac{n\pi c t}{L}\Big),$$

which he did not actually write out. Here the coefficients $a_n$ were to be chosen so that the initial condition was satisfied, that is,

$$f(x) = y(x, 0) = \sum_{n=1}^{\infty} a_n \sin\Big(\frac{n\pi x}{L}\Big).$$

Observing that he had an infinite set of coefficients at his disposal for “fitting” the function, Bernoulli claimed that “any” function f(x) had such a representation. Bernoulli's solution was the first of many instances in which the classical partial differential equations of mathematical physics—the wave, heat, and potential equations—were studied by separating variables and superposing the resulting solutions. The technique was ultimately to lead to what are called Sturm–Liouville problems.
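A short computation using the orthogonality of the sines, standard today though not spelled out in Bernoulli's work, shows how the coefficients are forced once the representation is granted:

$$\int_0^L \sin\Big(\frac{m\pi x}{L}\Big)\sin\Big(\frac{n\pi x}{L}\Big)\,dx = \begin{cases} 0, & m \neq n,\\ L/2, & m = n,\end{cases} \qquad\text{hence}\qquad a_n = \frac{2}{L}\int_0^L f(x)\sin\Big(\frac{n\pi x}{L}\Big)\,dx.$$

These are the formulas that Fourier would later write down and exploit systematically (Section 42.2).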

Before leaving the wave equation, we must mention one more important intersection between real and complex analysis in connection with it. In studying the action of gravity, Pierre-Simon Laplace (1749–1827) was led to what is now known as Laplace's equation in three variables. The two-variable version of this equation in rectangular coordinates—Laplace was using polar coordinates—is

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0.$$

The operator on the left-hand side of this equation is known as the Laplacian. Since Laplace's equation can be thought of as the wave equation with velocity $c = \sqrt{-1}$, complex numbers again enter into a physical problem. Recalling d’Alembert's solution of the wave equation, Laplace suggested that the solutions of his equation might be sought in the form $f(x + y\sqrt{-1}) + g(x - y\sqrt{-1})$. Once again a problem that started out as a real-variable problem led naturally to the need to study functions of a complex variable.
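Formally, the chain rule bears this out: for any twice-differentiable f,

$$\frac{\partial^2}{\partial x^2}\,f(x + iy) = f''(x + iy), \qquad \frac{\partial^2}{\partial y^2}\,f(x + iy) = i^2 f''(x + iy) = -f''(x + iy),$$

so the two terms cancel and Laplace's equation is satisfied; the same computation applies to g(x − iy).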

42.1.1 The Definition of a Function

Daniel Bernoulli accepted his father's definition of a function as “an expression formed in some manner from variables and constants,” as did Euler and d'Alembert. But those words seemed to have different meanings for each of them. Daniel Bernoulli thought that his solution met the criterion of being “an expression formed from variables and constants.” His former colleague in the Russian Academy of Sciences,2 Euler, saw the matter differently. This time it was Euler who argued that the concept of function was being used too loosely. According to him, since the right-hand side of Bernoulli's formula consisted of odd functions of period 2L, it could represent only an odd function of period 2L. Therefore, he said, it did not have the generality of the solution he and d'Alembert had given. Bottazzini (1986, p. 29) describes the situation concisely:

We are here facing a misunderstanding that reveals one aspect of the contradictions between the old and new theory of functions, even though they are both present in the same man, Euler, the protagonist of this transformation.

The difference between the old and new concepts is seen in the simplest example, the function |x|, which equals x when x ≥ 0 and −x for x ≤ 0. We have no difficulty thinking of this function as one function. It appeared otherwise to nineteenth-century mathematicians. Fourier described what he called a “discontinuous function represented by a definite integral” in 1822: the function

$$|x| = \frac{2}{\pi}\int_0^{\infty}\frac{1 - \cos(xz)}{z^2}\,dz.$$

Fifty years later, Gaston Darboux (1844–1918) gave the modern point of view, that this function is not truly discontinuous but merely a function expressed by two different analytic expressions in different parts of its domain.

The change in point of view came about gradually, but an important step was Cauchy's refinement of the definition in the first chapter of his 1821 Cours d'analyse:

When variable quantities are related so that, given the value of one of them, one can infer those of the others, we normally consider that the quantities are all expressed in terms of one of them, which is called the independent variable, while the others are called dependent variables.

Cauchy's definition still does not specify what ways of expressing one variable in terms of another are legitimate, but this definition was a step toward the basic idea that the value of the independent variable determines (uniquely) the value of the dependent variable or variables.

42.2 Fourier Series

Daniel Bernoulli's work introduced trigonometric series as an alternative to power series. In a classic work of 1811, a revised version of which was published in 1822,3 Théorie analytique de la chaleur (Analytic Theory of Heat), Fourier established the standard formulas for the Fourier coefficients of a function. For an even function of period 2π, these formulas are

$$f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos(nx), \qquad a_n = \frac{2}{\pi}\int_0^{\pi} f(x)\cos(nx)\,dx, \quad n = 0, 1, 2, \ldots.$$

A trigonometric series whose coefficients are obtained from an integrable function f(x) in this way is called a Fourier series.
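As a concrete illustration, a standard example involving the function |x| met above, these formulas applied to f(x) = |x| on [−π, π] give

$$|x| = \frac{\pi}{2} - \frac{4}{\pi}\Big(\cos x + \frac{\cos 3x}{3^2} + \frac{\cos 5x}{5^2} + \cdots\Big), \qquad -\pi \le x \le \pi,$$

a series that converges absolutely and uniformly, so that in this particular case no subtleties about the mode of convergence arise.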

42.2.1 Sturm–Liouville Problems

After trigonometric series had become a familiar technique, mathematicians were encouraged to look for other simple functions in terms of which solutions of more general differential equations than Laplace's equation could be expressed. Between 1836 and 1838 this problem was attacked by Charles Sturm (1803–1855) and Joseph Liouville, who considered general second-order differential equations of the form

$$\frac{d}{dx}\Big(p(x)\,\frac{dy}{dx}\Big) + \big(q(x) + \lambda r(x)\big)\,y = 0.$$

When a solution of Laplace's equation is sought in the form of a product of functions of one variable (the separation of variables technique), the result is an equation of this type for the one-variable functions. It often happens that only isolated values of λ yield solutions satisfying given boundary conditions. Sturm and Liouville found that in general there will be an infinite set of values $\lambda = \lambda_n$, $n = 1, 2, \ldots$, for which the equation has solutions satisfying a given pair of conditions at the endpoints of an interval [a, b], and that these values increase to infinity. The values can be arranged so that the corresponding solutions $y_n(x)$ have exactly n zeros in [a, b], and a very general function f(x) on [a, b] can be expanded in a series of these solutions

$$f(x) = \sum_{n=1}^{\infty} c_n\,y_n(x).$$

The sense in which such series converge was still not clear, but it continued to be studied by other mathematicians. It required some decades for all these ideas to be sorted out.
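A minimal example, taking p(x) ≡ 1, q(x) ≡ 0, r(x) ≡ 1 on [0, L] with the boundary conditions y(0) = 0 = y(L) (a cousin of the setup in Problem 42.3 below), already exhibits the whole pattern:

$$y'' + \lambda y = 0,\quad y(0) = y(L) = 0 \quad\Longrightarrow\quad \lambda_n = \frac{n^2\pi^2}{L^2},\quad y_n(x) = \sin\Big(\frac{n\pi x}{L}\Big),\quad n = 1, 2, \ldots,$$

and the expansion in the functions $y_n$ is precisely Daniel Bernoulli's sine series for the vibrating string.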

Proving that a Fourier series actually did converge to the function that generated it was one of the first places where real analysis encountered greater difficulties than complex analysis. In 1829 Peter Lejeune Dirichlet (1805–1859) proved that the Fourier series of f(x) converged to f(x) for a bounded periodic function f(x) having only a finite number of discontinuities and a finite number of maxima and minima in each period.4 Dirichlet tried to get necessary and sufficient conditions for convergence, but that is a problem that has never been solved. He showed that some kind of continuity would be required by giving the famous example of the function whose value at x is one of two different values according as x is rational or irrational. This function is called the Dirichlet function. For such a function, he thought, no integral could be defined, and therefore no Fourier series could be defined.5

42.3 Fourier Integrals

The convergence of the Fourier series of f(x) can be expressed as the equation

$$f(x) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(y)\,dy + \frac{1}{\pi}\sum_{n=1}^{\infty}\int_{-\pi}^{\pi} f(y)\cos\big(n(x - y)\big)\,dy.$$

That equation may have led to an analogous formula for Fourier integrals, which appeared during the early nineteenth century in papers on the wave and heat equations written by Poisson, Laplace, Fourier, and Cauchy. The central discovery in this area was the Fourier inversion formula, which we now write as

$$f(x) = \frac{1}{\pi}\int_0^{\infty}\!\left(\int_{-\infty}^{\infty} f(y)\cos\big(z(x - y)\big)\,dy\right)dz.$$

The analogy with the formula for series is clear: The continuous variable z replaces the discrete index n, and the integral on z replaces the sum over n. Once again, the validity of the representation is much more questionable than the validity of the formulas of complex analysis, such as the Cauchy integral formula for an analytic function. The Fourier inversion formula has to be interpreted very carefully, since the order of integration cannot be reversed. If the integrals make sense in the order indicated, that happy outcome can only be the result of some special properties of the function f(x). But what are those properties?

The difficulty was that the integral extended over an infinite interval so that convergence required the function to have two properties: It needed to be continuous, and it needed to decrease sufficiently rapidly at infinity to make the integral converge. These properties turned out to be, in a sense, dual to each other. Considering just the inner integral as a function of z:

$$g(z) = \int_{-\infty}^{\infty} f(y)\cos\big(z(x - y)\big)\,dy,$$

it turns out that the more rapidly f(y) decreases at infinity, the more derivatives $g(z)$ has, and the more derivatives f(y) has, the more rapidly $g(z)$ decreases at infinity. The converses are also, broadly speaking, true. Could one insist on having both conditions, so that the representation would be valid? Would these assumptions impair the usefulness of these techniques in mathematical physics? Alfred Pringsheim (1850–1941, father-in-law of the writer Thomas Mann) studied the Fourier integral formula (Pringsheim, 1910), noting especially the two kinds of conditions that f(x) needed to satisfy, which he called “conditions in the finite region” (“im Endlichen”) and “conditions at infinity” (“im Unendlichen”). Nowadays, they are called local and global conditions. Pringsheim noted that the local conditions could be traced all the way back to Dirichlet's work of 1829, but said that “a rather obvious backwardness reveals itself” in regard to the global conditions.

[They] seem in general to be limited to a relatively narrow condition, one which is insufficient for even the simplest type of application, namely that of absolute integrability of f(x) over an infinite interval. There are, as far as I know, only a few exceptions.

Thus, to the question as to whether physics could get by with sufficiently smooth functions f(x) that decay sufficiently rapidly, the answer turned out to be, in general, no. Physics needs to deal with discontinuous integrable functions f(y), and for these $g(z)$ cannot decay rapidly enough at infinity to make its integral converge absolutely. What was to be done?
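For instance, if f(y) is taken to be the simple discontinuous function equal to 1 for −1 ≤ y ≤ 1 and 0 elsewhere, the inner integral can be computed explicitly:

$$g(z) = \int_{-1}^{1}\cos\big(z(x - y)\big)\,dy = \frac{2\cos(zx)\sin z}{z},$$

which decays only like 1/z as z → ∞, so that its integral over 0 ≤ z < ∞ does not converge absolutely, even though f itself is about as tame as a discontinuous integrable function can be.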

One solution involved the introduction of convergence factors, leading to a more general sense of convergence, called Abel–Poisson convergence. In a paper on wave motion published in 1818, Siméon-Denis Poisson (1781–1840) used the representation

$$\frac{1}{\pi}\int_0^{\infty}\!\int_{-\infty}^{\infty} e^{-ka} f(y)\cos\big(a(x - y)\big)\,dy\,da, \qquad k > 0.$$

The exponential factor provided enough decrease at infinity to make the integral converge. Poisson claimed that the resulting integral tended toward f(x) as k decreased to 0. (He was right.)
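Indeed, carrying out the integration over a explicitly (a standard computation, not in Poisson's notation) turns the convergence factor into what is now called the Poisson kernel:

$$\int_0^{\infty} e^{-ka}\cos\big(a(x - y)\big)\,da = \frac{k}{k^2 + (x - y)^2},\qquad\text{so that the representation becomes}\qquad \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{k\,f(y)}{k^2 + (x - y)^2}\,dy,$$

which does tend to f(x) as k decreases to 0 at every point where f is continuous.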

Abel used an analogous technique with infinite series, multiplying the nth term by $r^n$, where 0 < r < 1, then letting r increase to 1. In this way, he was able to justify the natural value assigned to some nonabsolutely convergent series such as

$$1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \ln 2 \qquad\text{and}\qquad 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots = \frac{\pi}{4},$$

which can be obtained by expanding the integrands of the following integrals as geometric series and integrating termwise:

$$\int_0^1 \frac{dx}{1 + x} = \ln 2 \qquad\text{and}\qquad \int_0^1 \frac{dx}{1 + x^2} = \frac{\pi}{4}.$$
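In modern terms the procedure is easy to carry out numerically. The following Python sketch, an illustration only and not anything Abel computed, applies the factor $r^n$ to the second series and lets r approach 1:

```python
import math

def abel_sum(r, terms=200000):
    """Partial Abel sum: sum of (-1)**n * r**n / (2*n + 1) for n < terms."""
    return sum((-1) ** n * r ** n / (2 * n + 1) for n in range(terms))

# As r increases toward 1, the Abel sums approach pi/4.
for r in (0.9, 0.99, 0.999, 0.9999):
    print(f"r = {r:<7} Abel sum = {abel_sum(r):.6f}")
print(f"target pi/4      = {math.pi / 4:.6f}")
```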

In Abel's case, the motive for making a careful study of continuity was his having noticed that a trigonometric series could represent a discontinuous function. From Paris in 1826 he wrote to a friend that the expansion

$$\frac{x}{2} = \sin x - \frac{\sin 2x}{2} + \frac{\sin 3x}{3} - \frac{\sin 4x}{4} + \cdots$$

was provable for 0 ≤ x < π, although obviously it could not hold at x = π. Thus, while the representation might be a good thing, it meant, on the other hand, that the sum of a series of continuous functions could be discontinuous. Abel also believed that many of the difficulties mathematicians were encountering were traceable to the use of divergent series. He gave, accordingly, a thorough discussion of the convergence of the binomial series, the most difficult of the elementary Taylor series to analyze.6

For the two conditionally convergent series shown above and the general Fourier integral, continuity of the sum was needed. In both cases, what appeared to be a necessary evil—the introduction of the convergence factor $e^{-ka}$ or $r^n$—turned out to have positive value. For the functions $r^n\cos n\theta$ and $r^n\sin n\theta$ are harmonic functions if r and θ are regarded as polar coordinates, while $e^{-ay}\cos(ax)$ and $e^{-ay}\sin(ax)$ are harmonic if x and y are regarded as rectangular coordinates. The factors used to ensure convergence provided harmonic functions, at no extra cost.
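A direct check confirms the claim in the rectangular case:

$$\frac{\partial^2}{\partial x^2}\big(e^{-ay}\cos(ax)\big) = -a^2 e^{-ay}\cos(ax), \qquad \frac{\partial^2}{\partial y^2}\big(e^{-ay}\cos(ax)\big) = a^2 e^{-ay}\cos(ax),$$

so the two terms cancel and Laplace's equation is satisfied. In the polar case, $r^n\cos n\theta$ and $r^n\sin n\theta$ are the real and imaginary parts of the analytic function $z^n$, hence harmonic by the observation recalled in Problem 42.8 below.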

42.4 General Trigonometric Series

The study of trigonometric series advanced real analysis once again in 1854, when Riemann was required to give a lecture to qualify for the position of Privatdocent (roughly what would be an assistant professor nowadays). As the rules required, he was to propose three topics and the faculty would choose the one he lectured on. One of the three, based on conversations he had had with Dirichlet over the preceding year, was the representation of functions by trigonometric series.7 Dirichlet was no doubt hoping for more progress toward necessary and sufficient conditions for convergence of a Fourier series, the topic he had begun so promisingly a quarter-century earlier. Riemann concentrated on one question in particular: Can a function be represented by more than one trigonometric series? That is, can two trigonometric series with different coefficients have the same sum at every point? The importance of this problem seems to come from the possibility of starting with a general trigonometric series and summing it. One then has a periodic function which, if it is sufficiently smooth, is the sum of its Fourier series. The natural question arises: Is that Fourier series the trigonometric series that generated the function in the first place?

In the course of his study, Riemann was driven to examine the fundamental concept of integration. Cauchy had defined the integral

$$\int_a^b f(x)\,dx$$

as the number approximated by the sums

$$\sum_{n=1}^{N} f(x_{n-1})\,(x_n - x_{n-1})$$

as N becomes large, where $a = x_0 < x_1 < \cdots < x_{N-1} < x_N = b$. Riemann refined the definition slightly, allowing $f(x_{n-1})$ to be replaced by $f(\xi_n)$ for any point $\xi_n$ between $x_{n-1}$ and $x_n$. The resulting integral is known as the Riemann integral today. Riemann sought necessary and sufficient conditions for such an integral to exist. The condition that he formulated led ultimately to the concept of a set of measure zero,8 half a century later: For each σ > 0 the total length of the intervals on which the function f(x) oscillates by more than σ must become arbitrarily small if the partition is sufficiently fine.
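In present-day terms Riemann's definition is easy to state as an algorithm. The short Python sketch below, a modern illustration with tag points chosen at random in each subinterval, computes such sums for f(x) = x² on [0, 1]; for a Riemann-integrable function the choice of tags ceases to matter as the partition is refined.

```python
import random

def riemann_sum(f, a, b, n):
    """Riemann sum of f over [a, b] with n equal subintervals.

    Each subinterval [x_{k-1}, x_k] contributes f(xi_k) * (x_k - x_{k-1}),
    where xi_k is an arbitrarily chosen tag point in that subinterval.
    """
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        x_left = a + k * h
        xi = random.uniform(x_left, x_left + h)  # arbitrary tag point
        total += f(xi) * h
    return total

f = lambda x: x * x
for n in (10, 100, 1000, 10000):
    print(n, riemann_sum(f, 0.0, 1.0, n))  # tends to 1/3 as n grows
```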

Problems and Questions

Mathematical Problems

42.1. Show that if y(x, t) = (f(x + ct) + f(x − ct))/2 is a solution of the one-dimensional wave equation that is valid for all x and t, and y(0, t) = 0 = y(L, t) for all t, then f(x) must be an odd function of period 2L.

42.2. Show that the problem X″(x) − λX(x) = 0, Y″(y) + λY(y) = 0, with boundary conditions Y(0) = Y(2π), Y′(0) = Y′(2π), implies that λ = n², where n is an integer, and that the function X(x)Y(y) must be of the form $(c_n e^{nx} + d_n e^{-nx})(a_n\cos(ny) + b_n\sin(ny))$ if n ≠ 0.

42.3. Show that Fourier series can be obtained as the solutions to a Sturm–Liouville problem on [0, 2π] with p(x) = r(x) ≡ 1, q(x) = 0, with the boundary conditions y(0) = y(2π), y′ (0) = y′ (2π). What are the possible values of λ?

Historical Questions

42.4. Why did the problem of the vibrating string force the consideration of nonanalytic solutions of differential equations?

42.5. What problems arose in the use of trigonometric series that had not arisen in the use of power series?

42.6. How did Sturm–Liouville problems come to be an area of particular interest in analysis?

Questions for Reflection

42.7. What is the value of harmonic functions, which are solutions of Laplace's equation

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0\,?$$

Consider that the classical linearized heat equation, which describes the temperature u(t; x, y) at the point (x, y) of a plate at time t, is

$$\frac{\partial u}{\partial t} = k\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right), \qquad k > 0\ \text{a constant},$$

and that the classical linearized wave equation, which describes the vertical displacement u(t; x, y) of a membrane above the point (x, y) at time t, is

$$\frac{\partial^2 u}{\partial t^2} = c^2\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right),$$

where c is the velocity of wave propagation in the membrane. (What does it mean for the first or second derivative with respect to time to be zero?)

42.8. The Cauchy–Riemann equations (see Chapter 41) and the equality of mixed partial derivatives ($\partial^2 u/\partial x\,\partial y = \partial^2 u/\partial y\,\partial x$) easily imply that the real and imaginary parts of an analytic function of a complex variable are harmonic functions. Does it follow that if u(x, y) and v(x, y) are real-valued harmonic functions, then u(x, y) + iv(x, y) is an analytic function? What further property is needed?

42.9. Why is it of interest to know whether two different trigonometric series can converge to the same function?

Notes

1. Thirty years earlier, Brook Taylor (1685–1731) had analyzed the problem geometrically and concluded that the normal acceleration at each point would be proportional to the normal curvature at that point. That statement is effectively the same as this equation, and it was quoted by d'Alembert.

2. Bernoulli had left St. Petersburg in 1733, Euler in 1741.

3. The original version remained unpublished until 1972, when Grattan-Guinness published an annotated version of it.

4. We would call such a function piecewise monotonic.

5. The increasing latitude allowed in analysis, mentioned above, is illustrated very well by this example. When the Lebesgue integral is used, this function is regarded as identical with the constant value it assumes on the irrational numbers.

6. Unknown to Abel, Bolzano had discussed the binomial series in 1816, considering integer, rational, and irrational (real) exponents, admitting that he could not cover all possible cases, due to the incomplete state of the theory of complex numbers at the time (Bottazzini, 1986, pp. 96–97). He performed a further analysis of series in general in 1817, with a view to proving the intermediate value property.

7. As the reader will recall from Chapter 40, this topic was not the one Riemann did lecture on. Gauss preferred the topic of foundations of geometry, and so Riemann's paper on trigonometric series was not published until 1867, after his death.

8. A set of points on the line has measure zero if for every ε > 0 it can be covered by a sequence of intervals (ak, bk) whose total length is less than ε.