The Calculus Wars: Newton, Leibniz, and the Greatest Mathematical Clash of All Time (2006)

Chapter 1. For Once It’s Safe to Dream in Color

1704

Meticulous, miraculous, ridiculous, fabulous, nebulous, populace, populous, scrupulous, stimulus, tremulous, unscrupulous.

—Word(s) rhyming with “calculus” (pronounced KAL-kyuh-lus) with a maximum number of phoneme matches. Taken from www.websters-online-dictionary.org/definition/calculus.

Three hundred years ago, history was made when a forgotten English printing press pounded out a few hundred copies of a 348-page work written by a minor government administrator, the retired Cambridge University professor Isaac Newton. Newton was a fairly old man, over sixty, and was already quite famous in England and abroad. But he was not quite the superfamous older scientist he would become in just a few years’ time, the venerable elder statesman of British science. In England, Newton’s image would approach that of a living god, and, in many ways, Opticks helped to create the persona he would become.

The book described Newton’s experiments and conclusions about the basic physical behavior of light and optics derived through years of independent experimentation. It described such phenomena as how light is bent by lenses and prisms, and how those physical observations led to a new theory of light and colors: that light was composed of emissions of particles and that white light was a mixture of different rays of distinct colors.

Opticks had a huge impact, and it was well received at home and abroad. It was written in the kind of clear language that only comes from an authoritative and comprehensive understanding of the subject—an understanding that Newton had cultivated over the course of a couple of decades. Because it was written in this less formal style, Opticks was widely accessible to the reader, and it became a primary text in physics for the next century. The book was subsequently expanded, reprinted, translated into Latin, carried to France and other points on the continent, and sometimes copied out by hand. Albert Einstein once wrote that the world would have to wait for more than a century before the next major theoretical advance in the field after Opticks, and the book is still regarded as a classic of physics, still in print, and still read by students of physics today.

A year after the book appeared in 1704, Newton would be knighted by Britain’s Queen Anne, and this marked the beginning of the glorious final chapter in his life. He would be celebrated for the rest of his days, admired by intellectuals, kings, and commoners alike. Abroad, he would be a man of celebrity status, recognized by many as one of Europe’s premier natural philosophers, a living legend whose company would be sought after by many who traveled to London from elsewhere in Europe and as far away as the American colonies. The nineteen-year-old Benjamin Franklin tried unsuccessfully to meet Newton in 1725. Forty years later, Franklin had a portrait painted of himself with Newton in the background.

As much of a new beginning as Opticks was, it was also the end of an era. Newton was well past his prime as an experimental scientist when it appeared. He was no longer the lonely young genius of half a lifetime before, the silent, sober-thinking lad, as one of his friends described him, who would work day and night, forget to eat, forget to wash, and neglect everything around him except his books, notes, and experiments. He was no longer the man who contemplated the world and figured out how it worked—from gravity and planetary orbits to fluids and tides, revolutionary mathematics, and the nature of light and color. A significant portion of this work was described in Newton’s Principia, published in 1687, and now, as he was bringing out this second helping, he was much older and busier with professional and social obligations.

In 1704, Newton was no longer a professor at Cambridge and now lived in London, where he would spend the last thirty years of his life as a government administrator in charge of the British mint. His day-to-day business was now overseeing the coining of the English currency, and he threw himself into the mint with all the vigor he had formerly applied to his scientific research. He studied all the parts of the coining process—the machines, the men, and the methods—and became an expert in everything from assaying gold and silver to prosecuting counterfeiters. It was in this role as master of the mint—in a way master of his own universe—that Newton brought forth his book Opticks in 1704.

Opticks had been a long time in coming, and publishing it was a catharsis of sorts for Newton. Almost nothing new was published in the book. Much of the material had existed in one form or another among Newton’s notes and papers for nearly forty years. Some parts were from lectures he had made as a young professor at Cambridge University and others were taken from letters Newton had written to his acquaintances through the Royal Society in London. Still, before 1704 few people had seen Newton’s work on optics.

One of those who had, a mathematician named John Wallis, had tried to get him to publish this material for years, saying that Newton was doing himself and his country a disservice by not publishing it. Wallis wrote to Newton on April 30, 1695, thanking him for a letter and chastising him for not publishing his optical work. “I can by no means admit your excuse for not publishing your treatise of light and colors,” Wallis wrote. “You say you dare not yet publish it. And why not yet? Or if not now, when then?”

Ironically, Wallis was dead by the time Newton finally had bound copies of Opticks under his arm. Why had Newton waited so long to publish? There were numerous reasons, though none perhaps larger than the bad taste his first attempts at publishing left in his mouth. In the early 1670s, while he was a young professor at Cambridge University, Newton had written a letter on his theory of colors that he sent to be read before the members of the Royal Society in London. His “New Theory about Light and Colors” was published in the Philosophical Transactions on February 19, 1672, and it is a letter that reads like one you would expect from the pen of a self-confident young man putting forth a bold new theory to his contemporaries.

For Newton, “New Theory about Light and Colors” was meant to be a third act—a culmination of work already completed. In 1672, he had already been working on his new theories for several years, perfecting his optical outlook on the universe into well-founded science. He had long since gotten over the initial conjectures from which he started, and he was ready to close the book on the work by presenting his conclusions. But Newton was oblivious to the impact that it would have. Writing this letter was something that he would almost immediately regret, because controversy swirled around him after he wrote it. Newton failed to account for the fact that his contemporaries would have to wrestle with the ideas as much as he had for the previous several years. Nor did he suspect how much the people whose theories his was to replace would resist him.

Newton’s new way of looking at light threatened the ideas of a number of his contemporaries, including men who were older and more famous than he—for example, his fellow British scientist Robert Hooke. Instead of a third-act curtain call, Newton’s letter opened up a whole new dialogue, and he became embroiled in bitter fights with Hooke and others over his new theories—so much so that he swore off publishing for decades. He once even told one of his colleagues that he would rather wait until he died for his works to be published.

Half a lifetime later, after Hooke died in March 1703, Newton was elected president of the Royal Society on November 30, 1703, and it was in this newly appointed role that he published Opticks.

The book would be the last original scientific work Newton would ever publish. Nevertheless, it was also a first of sorts because, in it, he staked his claim to the invention of calculus. At the time, most of his contemporaries were attributing calculus to court counselor for the Dukes of Hanover, the German mathematician and philosopher Gottfried Wilhelm Leibniz.

The main body of the book was not about mathematics; it had only a small section in the back on calculus, a treatise Newton had written a dozen years before, entitled Tractatus de Quadratura Curvarum (On the Quadrature of Curves). He had written this in 1691, and even then only after the Scottish mathematician David Gregory had sent Newton his own method, which he was about to publish. The essay had started as a letter to Gregory but quickly grew into a text that by 1692 was extensive enough to impress one of Newton’s close friends and fellow mathematicians. He revised and shortened this material for publication in Opticks. As strange as it may seem for a mathematician as famous as Newton, this appendix was his first actual publication of a purely mathematical treatise.

Newton had discovered calculus during his most creative years of 1665 and 1666, when as a Cambridge University student he had retreated to his family’s country estate to escape a particularly bad outbreak of bubonic plague. He had intended to publish his calculus works at the same time as his optical works but, when he published his theory of colors in 1672, he took such a beating from his contemporaries that he swore off publishing in general. Newton was an old man before he published any of his work in calculus, although he wrote letters, sent private, unpublished copies of papers he had written to friends, and wrote page after page in his journals that he never sent to anyone. For most of his life, the heart of his mathematical work was not published.

It might seem strange compared to today’s publication-enamored academic world that anyone would sit on an intellectual development as huge as calculus for a period of months, let alone years or decades. Stranger still for someone like Newton, who displayed almost absurd self-confidence at times in his life. And even stranger for a work as important as calculus, which is one of the greatest intellectual legacies of the seventeenth century.

What is calculus? As a body of knowledge, it is a type of mathematical analysis that can be used to study changing quantities—bodies in motion, for instance. Given almost any physical motion today (e.g., the movement of clouds, the orbit of GPS satellites around the earth, or the interaction of an HIV drug with its target enzyme), scientists can apply the equations of calculus to predict, track, or model the phenomenon.

Differentials are small momentary increments or decrements in changing quantities, and integrals are sums of infinitesimal intervals of geometrical curves or shapes. What does that all mean? A nice contemporary way to describe this is to think of the way a baseball curves as it goes from the pitcher’s hand to the catcher’s mitt. In calculus you express one variable in terms of another. A baseball player throws a perfect fastball, and the radar gun records its maximum speed, but geometry describes much more—for instance, the changing position of the ball with time. And physics can add another dimension to that, such as accounting for the resistance the ball feels in the air, the effect of gravity on how high the ball is when it crosses the plate, or how the spin of the ball will affect the curvature of the pitch. But calculus is about the ability to analyze moving and changing objects mathematically; in other words, using calculus you could calculate all of the above without having to throw the ball at all.

Being able to analyze such motion is the domain of calculus. The position, speed, and trajectory of the baseball are changing at every instant as the baseball makes its way to the plate. If you were to take a snapshot of the baseball every hundredth of a second, you could represent the ball’s position in terms of time. At time zero, the pitch is on the pitcher’s fingertips. A tenth of a second later, it is a few feet in front of the pitcher’s hand; a few tenths of a second after that, the ball reaches its zenith and begins to descend, landing in the catcher’s glove in the bottom right-hand corner of the strike zone a tenth of a second later—a perfect slider. Newton would have thought of a baseball pitch in terms of these changing quantities as the ball moves.
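To put the snapshot idea in modern symbols, here is a minimal sketch of the kind of changing quantities involved; the formula for a ball's height under gravity is a standard textbook illustration, and the symbols \(s_0\), \(v_0\), and \(g\) (initial height, initial speed, and the acceleration of gravity) are modern conveniences, not anything Newton wrote down for a baseball:

\[
s(t) = s_0 + v_0 t - \tfrac{1}{2} g t^2, \qquad
v(t) = \frac{ds}{dt} = v_0 - g t, \qquad
a(t) = \frac{dv}{dt} = -g .
\]

Here \(s(t)\) is the ball's height at time \(t\), the velocity \(v(t)\) is the rate of change (the derivative) of the position, and the acceleration \(a(t)\) is the rate of change of the velocity. Differentiating takes you down the chain from position to velocity to acceleration; integrating takes you back up. That, in miniature, is the machinery Newton called fluxions and Leibniz called the calculus.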

In the seventeenth century, of course, nobody had heard of or cared anything about baseball. But understanding how the position, speed, and trajectory of a thrown baseball are in a constant state of change is the basis for understanding the physics of all bodies in motion. As such, calculus was the greatest mathematical advance since the time of the Greeks, who had a difficult time getting a handle on such questions. Changing acceleration, for instance, would have been a difficult concept for an ancient Greek mathematician, since it is the measure of the change of velocity over time, and velocity itself is a measure of a change of position with time.

Calculus allowed some of the great problems of geometry to be solved. Newton was not the first to conceptualize such problems. Nor was he the first to successfully tackle the mathematics that could allow him to solve them. The ancients had calculated the area of geometric shapes through what we now call the method of exhaustion—by filling an area with triangles, rectangles, or some other geometrical shapes with easy-to-calculate areas and then adding them up. Using this method, Archimedes determined the areas of parabolic segments and the volumes of spherical segments.

In the seventeenth century, Johannes Kepler repeated Archimedes’ work by thinking of the circle as made up of an infinite number of infinitely small triangles, and then he applied the same reasoning to determine the areas and volumes of other geometric shapes Archimedes never considered. (Interestingly enough, Kepler was inspired in part by the fact that 1612 was a great year for wine but there were not great methods for estimating the volumes of barrels.) Another man, Bonaventura Cavalieri, a friend of Galileo’s and professor of mathematics at Bologna, considered a line to be an infinity of points; an area, an infinity of lines; and a solid, an infinity of surfaces.
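Kepler's trick with the circle is easy to reconstruct in modern notation; what follows is a sketch of the idea rather than Kepler's own working. Slice a circle of radius \(r\) into \(n\) thin triangles with their apexes at the center: each triangle has height \(r\), and their bases together make up the circumference, so

\[
A \approx \sum_{k=1}^{n} \tfrac{1}{2}\, b_k\, r
  = \tfrac{1}{2}\, r \sum_{k=1}^{n} b_k
  \;\longrightarrow\; \tfrac{1}{2}\, r\,(2\pi r) = \pi r^2
\]

as the triangles are made infinitely thin, which recovers the familiar area of the circle. The same habit of summing infinitely many infinitely small pieces, Cavalieri's lines and surfaces included, is what would eventually become the integral.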

René Descartes made perhaps the greatest contribution to mathematics since the time of the Greeks when he invented analytical geometry (the next great breakthrough after it would be calculus). Basically, Descartes showed that geometric lines, surfaces, and shapes can be reduced to algebraic equations and that such equations can be graphed geometrically. This was a huge discovery, because it allowed the analysis of geometrical shapes through mathematical equations.
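A one-line example, in modern notation rather than Descartes' own, shows what that reduction means: a circle of radius \(r\) centered at the origin becomes the algebraic equation

\[
x^2 + y^2 = r^2,
\]

and a straight line becomes \(y = mx + b\). Asking where the line cuts the circle is then a matter of solving equations rather than constructing figures, and a curve described by an equation is exactly the kind of object that calculus would later learn to differentiate and integrate.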

Several mathematicians contemporary to and following Descartes also made contributions. Pierre Fermat, the counselor of the parliament of Toulouse who is most remembered today for his famous last theorem, devised a method for finding maxima and minima and for drawing tangents to curves that was so similar to differential calculus that in the eighteenth century some would declare him the inventor of calculus.

Blaise Pascal was a boy wonder in Paris who also worked and wrote on such considerations, publishing his important paper on conics when he was sixteen. Gilles Personne de Roberval worked on geometrical shapes and volumes, and developed a general method for drawing tangents to curves. Evangelista Torricelli, a pupil of Galileo who was unaware of Roberval’s work, published similar results using the infinitesimal method. The Scottish mathematician James Gregory worked out the integration of trigonometric functions in 1668. John Wallis’s book Arithmetica Infinitorum amplified and extended Cavalieri’s work and presented a number of new results. Johann Hudde in Holland described a method for finding maxima and minima. Christiaan Huygens also found ways of determining maxima, minima, and points of inflection of curves. Isaac Barrow published a method of drawing tangents in 1670, and René François de Sluse published one in 1673.

All these works have been called “isolated instances of differentiation and integration,” and the mathematicians who accomplished them—along with several more whom I did not mention—were trailblazers. But Newton was the first to figure out a general system that enabled him to analyze these sorts of problems—calculus or, as Newton called it, the method of fluxions and fluents. Unfortunately for him, Newton was not the only one to hit upon this.

Leibniz discovered calculus during the prolific time he spent in Paris between 1672 and 1676. Though he was a lawyer and had no formal training in mathematics, he nevertheless showed an incredible aptitude for it. In just a few years he managed to pull together all the mathematical discoveries of his contemporaries to devise calculus. And since Leibniz believed in simple explanations rather than jargon, he invented a completely original and ingenious system of notation to go along with it.

Over the next ten years, he refined his discovery and developed his system of symbols and notations, then published his results in two scholarly papers that appeared in 1684 and 1686. With these two papers, Leibniz could claim intellectual ownership of calculus. He then spent the two decades between those publications and the publication of Newton’s Opticks refining his ideas, corresponding with his contemporaries, mentoring other mathematicians, reviewing the published work of others, and otherwise extending the techniques of calculus. Even the name calculus comes from Leibniz—a calculus being the Latin word for a small stone the Romans used for counting.

Calculus was such a promising invention that by the time Newton published “On the Quadrature of Curves” in the back of Opticks in 1704, Leibniz was ahead of him by almost two decades. Newton was fighting an uphill battle to wrest credit away from Leibniz, who for over a decade had been basking in the glow of his own invention and was widely recognized throughout Europe as its sole discoverer. Some even thought Newton was plagiarizing Leibniz.

The one place where Leibniz’s mathematics had not yet caught on was in England. Part of the problem, apparently, was that the English lacked interest in foreign journals. But this lack of attention in England did nothing to detract from Leibniz’s reputation on the continent. Across the English Channel and in the heartland of Germany, he was at the height of his fame—not only for his mathematical genius but also for his philosophical works.

This short treatise in the back of Opticks marked the quiet beginning of the calculus wars because it was the light that revealed the long-hidden feelings of jealousy and resentment between Leibniz and Newton. Newton had suffered in quiet humiliation for years with the knowledge that he was the first inventor of calculus, and he was a smoldering fire ready to burst into flames.

On the other hand, “On the Quadrature of Curves” was not the first time someone had made the claim that Newton was calculus’s true inventor, but it was the first time that Newton himself had published something to this effect. So Leibniz simply could not ignore it.

IN 1705, AN anonymous review of Newton’s essay appeared in a European journal with which Leibniz was closely associated, and it was this review that really fanned the flames. The review made a comment that Newton and his supporters interpreted as a suggestion that the Englishman had borrowed ideas from Leibniz. The German mathematician consistently denied authorship of this review throughout his life but, in the nineteenth century, one of Leibniz’s biographers proved that he had indeed written it. This was not really a revelation, however, because few people ever really doubted that Leibniz wrote the review—least of all Newton.

From the time that Newton read that review and continuing even after Leibniz died in 1716, the Englishman would wage war to stake his claim to the glory of calculus. He would take two approaches. One, quite simply, was to suggest that perhaps Leibniz’s own invention was tainted with plagiarism. The other was to assert that in any case he, Newton, had invented calculus first. “Whether Mr. Leibniz invented it after me, or had it from me, is a question of no consequence,” Newton would write, “for second inventors have no rights.”

Leibniz was not one to take such a threat lightly. He worked the community of intellectuals in Europe by writing letter after letter in support of his own cause. He also wrote multiple anonymous attacks on Newton and published them alongside papers of his own that reviewed those same anonymous attacks.

A little more than a decade after Opticks appeared, the calculus wars reached their height, and, when Leibniz passed away in 1716, he and Newton were old men fighting openly about which of them deserved credit and whether one had plagiarized the other. Their letters and their private writings are bitter testimonials to their own brilliance and their rival’s deceit.

Though it was not until after 1704 that they argued publicly, the foundation of their battle had actually been laid slowly over the previous quarter of a century, when Newton and Leibniz were much younger. The times in which the two lived played a major role in the dispute that would eventually break out between them. It was a time not just of people coming into conflict but of ideas coming into conflict as well. Europe of the second half of the seventeenth century was a world where worldviews were no longer solely the subject of dogma but of debate. Accepted beliefs that had stood for centuries were suddenly felled by the measurements and controlled experiments of the scientific revolution—the birth of the modern in the ashes of the Middle Ages.

In the 1600s, medieval Europe was fading fast, but the continent was still more supernatural than natural. Science and the use of mathematical reasoning to describe the world were emerging against a backdrop that most people of the time still saw as a battlefield inhabited by supernatural spirits—angels and devils that would subject humans to their capricious whims. Dark magic was real. People in the 1600s paid attention to horoscopes, sought omens to predict their fate, interpreted dreams, and believed in miracles. Criminals were detected through divination rather than through investigation. Alchemists tried to transmute lead into gold. Astrologers stood beside astronomers in the palaces of kings. People were accused of witchcraft and hung by their thumbs, whipped, tortured, and treated to grisly deaths. In total perhaps some 100,000 people throughout Europe were accused of witchcraft in the seventeenth century.

The century was also a time that witnessed major political changes, as national identity and nationalism arose alongside the powerful state. In many places, the state became, in effect, the personal property of the ruler. As Louis XIV famously said, “L’état, c’est moi”—I am the state. Spoils naturally arose from this point of view; in the 1690s the French crown sold blank patents of nobility to anyone with a bagful of cash. This was, in fact, a common practice throughout Europe in the seventeenth century. Titles and positions were commodities to be bought, sold, and traded as much as they were attainments to be earned. In fact, King James I of England sold so many knighthoods in the early 1600s that their value decreased—much as you would expect for any commodity that suddenly becomes freely available.

Against this backdrop of occult beliefs, cronyism, and political upheaval, the seventeenth century also saw some of the greatest scientific and mathematical advances made by some of the greatest minds who ever lived. Those hundred years witnessed an explosion of knowledge perhaps unrivaled in the history of civilization. The nature of light and sound was investigated. The size of the Earth was measured with unprecedented accuracy, and the speed of light was estimated for the first time. The orbits of planets and comets were tracked by telescopes, and moons were discovered around Saturn and Jupiter. A sophisticated modern view of the solar system evolved, thanks largely to Newton, and it was faithfully described by mathematics. The circulation of blood through the body was carefully charted, and microscopes led to the discovery of cells and a world of tiny organisms too small to be seen with the naked eye.

Because of the wonderment these achievements inspire, there is a temptation to focus only on the intellectual triumphs of the seventeenth century. As one historian put it, “During few periods of his history has western man ever really possessed the confidence to believe that by his reasoning alone he could fathom all the questions about himself and his existence.”

Nevertheless, we must never forget that calculus and all other significant intellectual developments occurred against a backdrop of horror. If the seventeenth century proved anything about history, it is that it doesn’t always unfold gradually.

It was a century of fits and starts; of incredible advances and terrible setbacks; of the most sublime genius and of the cruelest clamoring despotism; and of creative possibility and cruel persecution. For me, the seventeenth century represents a cross between a box of chocolates and a commuter train wreck—an era that delivered to the world a number of remarkable tastes of smooth, sweet, and stimulating hard science and at the same time subjected those living then to the horrors of plague, religious and political persecution, starvation, and war.