The Well-Educated Mind: A Guide to the Classical Education You Never Had (2016)


Chapter 10. The Cosmic Story: Understanding the Earth, the Skies, and Ourselves

SCIENCE BEGAN LONG before the first science text, just as storytelling came before the novel, poetic performances before the written poem. Science, says historian-of-science George Sarton, began as soon as humans “tried to solve the innumerable problems of life.” Mapping out a journey by the skies, balancing a wheel, building an irrigation canal, mixing herbs to relieve pain, designing a pyramid: this was science.1

And it went on for quite a long time before anyone decided to write about it.

Compared with the other genres we’ve investigated, science books have a much more distant relationship to the actual practice of the craft. Made-up stories; stories about the past; stories about ourselves; lyrical outpourings about God, or love, or depression; acting out imagined scenes: all of these existed before novels, histories, autobiographies, poems, and plays took the forms we now know. But all of them had to migrate into written form before they could survive, develop, evolve.

Science is different. Scientific discoveries don’t require the written word. Many of the most essential insights into the natural world (right triangles exist; electrical current can be channeled through a wire; the atoms of elements can be charted onto a periodic table; antibiotics kill bacteria) have not led to books about them. Science is perfectly capable of continuing independent of written narratives.

But side by side with the actual doing of science, a tradition of science writing slowly evolved: starting, as history did, with the Greeks.


The Natural Philosophers

In the fifth century B.C., the physician Hippocrates was struggling with the nature of disease.

He had been trained to practice medicine in a world where the divine suffused everything. Doctors were also priests, and they treated the sick by sending them for a night’s vigil in one of the temples of Aesculapius, god of healing. Perhaps the sacred serpents that lived in the temple would lick the patient’s wounds and miraculously heal them; or maybe the god would send a dream explaining how the illness should be treated; or, Aesculapius himself might even appear to carry out the cure.2

In this world, Hippocrates was an outlier.

He did not think that diseases were caused by angry deities, nor that they needed to be cured by a benevolent one. “I do not believe,” he wrote in his treatise about epilepsy, long thought to be a holy affliction sent directly from the gods, “that the ‘Sacred Disease’ is any more divine or sacred than any other disease, but, on the contrary, has specific characteristics and a definite cause . . . It is my opinion that those who first called this disease ‘sacred’ were the sort of people we now call witch-doctors, faith-healers, quacks and charlatans . . . By invoking a divine element they were able to screen their own failure to give suitable treatments.”3

Instead of invoking the gods, Hippocrates looked to the visible world, searching for both “definite causes” and “suitable treatments” in nature itself.

His investigations led him to formulate an entirely secular theory of disease. Four fluids course through the human body, Hippocrates claimed: yellow bile, black bile, phlegm, and blood. When these four fluids (“humors”) exist in their proper proportions, we are healthy. But any number of natural factors might throw the fluids out of whack. Hot winds, for example, cause the body to produce too much phlegm; drinking stagnant water can lead to an overabundance of black bile. The recommended treatment? Restore the body’s balance. Use purges and bleeds to get rid of excess humors; send sick men and women to different climates, away from the winds and waters that are deranging their harmonies.4

The theory was ingenious, convincing, and completely wrong.

It could hardly be otherwise. Hippocrates had no access to the body’s secrets; no way to discern what was really happening inside the skin. Twenty-three centuries later, Albert Einstein and the physicist Leopold Infeld jointly offered an analogy for Hippocrates’s plight. The ancient investigator of the natural world, they wrote, was like

a man trying to understand the mechanism of a closed watch. He sees the face and the moving hands, even hears its ticking, but he has no way of opening the case. If he is ingenious he may form some picture of a mechanism which could be responsible for all the things he observes, but he . . . will never be able to compare his picture with the real mechanism and he cannot even imagine the possibility or the meaning of such a comparison.5

Hippocrates was no more able to peer inside the watch-case than his priest-physician contemporaries. He was not doing science as we would understand it; he was philosophizing about nature, attempting to reason his way into a closed system that he could not observe. But Hippocrates and his followers were at least attempting to find natural factors that would help explain the natural world. So the Hippocratic Corpus—some sixty medical texts, collected by his students and followers, that neither blame nor invoke the gods—is the first written record of a scientific endeavor.

In the centuries after Hippocrates, other Greek philosophers expanded his way of thinking to encompass phusis: not just man, but the whole of the ordered universe.

Their theories were varied. The monists believed that the ordered universe all began with a single underlying element, one sort of stuff (water, or fire, or some still-unknown material); the pluralists were in favor of multiple underlying elements, most often a four-way assembly of earth, air, fire, and water. And the atomists suggested that all of reality was made up of minuscule elements called atomoi, the “indivisibles”—incomprehensibly small particles that clump together to form the “visible and perceptible masses” that make up our world.6

This last theory, as we now know, was within striking distance of the truth. But the atomists, like the monists and pluralists, were still doing philosophy. None of these explanations were susceptible to proof. These early “scientists” were theorizing with no way to check their results; the watch case was still firmly closed.

For the philosopher Aristotle—born some seventy-five years after Hippocrates, on the opposite side of the Aegean Sea—the greatest flaw in all of these speculations was their failure to account for change. Searching for a quality that all natural things shared, Aristotle pinpointed the principle of development. An animal, a plant, fire, water—none of these things remains the same indefinitely. Each, Aristotle wrote in his great work Physics, “has within itself a source of change . . . in respect of either movement or increase and decrease or alteration.” A bed, a cloak, a stone building—all created by man’s artifice—have no such “intrinsic impulse for change.”7

Watching a sprout grow into a tree, a cub into a lion, an infant into a man, Aristotle wanted an explanation. How do these changes happen? In what stages does one entity, one being, assume more than one form? What impels the change, and what determines its ending point? And even more, he wanted a reason. Why does a kitten become a cat, a seed a flower? What sends it on the long journey of transformation?

The monists and atomists had no answers for him; nor did the pluralists, although he found their theory of multiple elements more convincing. So he began to work his way toward a new set of explanations. To the pluralist sketch of four elements that combine to make up all natural things, Aristotle added a fifth, an imperishable heavenly substance called aether that carries the stars. He also proposed that each element has a particular pair of qualities (air is hot and wet, water is cold and wet) that interact with each other and produce change (for example, the “hot” in air can expel the “cold” from water, making it hot and wet, and thus converting water to air). Earth is the heaviest of the elements, and so is drawn toward the center of the universe; fire, the lightest, always tends to fly away from the cosmic core.8

Most important of all, natural things have within themselves a principle of motion: an internal potential for change. Each object and being in the natural world must move from its present state into a future, more perfect one. Built into the very fabric of the seed, the kitten, the infant, is the impulse to develop toward a more fully realized end.

The Physics, widely read in the Greek world, provided a model of the universe that would influence the practice of science for two thousand years. But Aristotle, too, was philosophizing. He could offer no solid proofs of his elements, nor pinpoint the principle of motion within them. And his vision of a driven and purpose-filled world did not go unchallenged.

The atomists were his most vocal opponents; particularly Epicurus, a generation Aristotle’s junior, who argued vehemently that there was no purposeful movement in the universe. There were only randomly moving atoms and “the empty”—the place in which atoms rushed about, collided, and intertwined by chance. The world that we see has come into being only because atoms, spinning through the void, occasionally give an unpredictable hop, a random jump sideways, slam into each other, and fortuitously join up to create new objects.9

Two hundred years after Epicurus’s death, his disciple Lucretius—a Roman educated in Greek philosophy—recast his teachings in a long poem called De Rerum Natura (On the Nature of the Universe or, more literally, On the Nature of Things). The atoms that make up everything, Lucretius writes, are in “ceaseless motion” and vary in size and shape. They created the earth and the human race; there is no design, either natural or supernatural. The soul is not transcendent; like our bodies, it is made up of material particles, of atoms “most minute.” Too tiny to comprehend, they disperse into air when the body dies, and so the soul too ceases to exist.

But the most central truth of atomism, as Lucretius explains in Book II, is that all things come to an end. All natural bodies—sun, moon, sea, our own—age and decay. They do not mature into greater and truer versions of themselves. Rather, they are struck again and again by “hostile atoms” and slowly melt away. And what is true of the physical bodies within the universe is true of the universe itself: “So likewise,” he concludes, “the walls of the great world . . . shall suffer decay and fall into moldering ruins . . . [I]t is vain to expect that the frame of the world will last forever.” Aristotle’s teleology was a delusion. The universe will perish, as surely as our own bodies, and come not to fulfillment, but only to dust.10

Even more firmly than Hippocrates and Aristotle, Lucretius insisted that phusis, the ordered universe, could be explained in purely natural terms: the most central principle of modern science. But like them, he had no way of proving his theories. He could not observe his atoms at work, any more than Hippocrates could view his humors, or Aristotle examine the aether. The “watch case” of nature was still firmly closed; and for the next fifteen hundred years, no one would succeed in popping the lock.

The Observers

In 1491, Nicolaus Copernicus began a new search for the key.

He was eighteen years old, a student of astronomy at the University of Cracow, grappling with his introductory astronomy textbook. The Epitome of the Almagest, a standard handbook for beginners, was an abridgment of a much more complex manual: the Almagest, assembled by the Greek astronomer Ptolemy in the second century. The Almagest assumed that the universe was exactly as Aristotle had described it: spherical, made up of five elements, with the earth sitting at its center. (That was logical enough. The earth is “heavy matter,” constantly drawn toward the center of the universe, and it’s clearly not falling through space: Q.E.D., it must already be at the center.) The stars above the earth, along with the seven independently moving celestial bodies known as the aster planetes (the wandering stars), moved around the earth.

But this movement around the earth was far from simple.

According to the Epitome of the Almagest, each planet came to a regular stop in its orbit (a “station”) and then backtracked for a predictable, calculable distance (“retrogradation”). They also performed additional small loops (“epicycles”) while traveling along the larger circles (“deferents”); and the center of the deferents was not the earth itself, but a point slightly offset from the theoretical core of the universe (the “eccentric”). Furthermore, the speed of planetary movement was measured from yet a third point, an imaginary standing place called the equant. (The equant was self-defining—it was the place from which measurement had to be made in order to make the planet’s path along the deferent proceed at a completely uniform rate.)11


This was a complicated and ugly system—but by measuring from the equant and the eccentric and by building epicycle upon epicycle, students of astronomy could accurately predict the future position of any given star or planet.
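The deferent-and-epicycle construction can be sketched numerically. Everything below is illustrative, not historical: the radii and speeds are invented, and the eccentric and equant are omitted. But the sketch shows how a small circle riding a large one makes a planet’s apparent path halt and backtrack, exactly the “stations” and “retrogradations” the Epitome tabulated:

```python
import math

def geocentric_position(t, R=10.0, r=3.0, w_def=0.5, w_epi=8.0):
    """Apparent position of a planet in a stripped-down Ptolemaic model:
    the planet rides a small circle (the epicycle, radius r) whose center
    travels a large circle (the deferent, radius R) around the observer.
    All parameter values are illustrative, not historical."""
    x = R * math.cos(w_def * t) + r * math.cos(w_epi * t)
    y = R * math.sin(w_def * t) + r * math.sin(w_epi * t)
    return x, y

def sky_angle(t):
    """Direction of the planet as seen from the central observer."""
    x, y = geocentric_position(t)
    return math.atan2(y, x)

# Sample the sky angle over a stretch where it stays within (-pi, pi);
# a sign change between successive increments marks a "station", the
# point where the planet halts and begins retrograde motion.
ts = [i * 0.01 for i in range(300)]
angles = [sky_angle(t) for t in ts]
deltas = [b - a for a, b in zip(angles, angles[1:])]
stations = sum(1 for a, b in zip(deltas, deltas[1:]) if a * b < 0)

print(stations > 0)  # → True: the apparent motion does reverse
```

Because the epicycle here turns much faster than the deferent, the planet’s apparent direction periodically reverses; stacking more epicycles, and measuring from the eccentric and equant, let the real system match the observed paths to any desired accuracy.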

In all likelihood, none of them believed that the Epitome of the Almagest provided an actual picture of the universe. Ptolemy himself probably did not think that, should he suddenly be transported into the heavens, he would see Jupiter charge backward into retrograde and then swing around into an epicycle. The mathematical strategies were just that: gimmicks and tricks that yielded the correct results, not realistic sketches.

This was called “saving the phenomena”—proposing geometrical patterns that matched up with observational data. The patterns were reliable enough for the use of navigators and time-keepers, and allowed astronomers to (more or less) accurately chart the heavens. And, since no one had the ability to look into the heavens and see what Jupiter was actually up to, the earth-centered orbits were generally accepted.12

But from his first introduction to the Almagest, Copernicus questioned those elaborate and unwieldy paths. Why, he wondered, should each planet require its own individual set of movements, its own particular laws? It was as if, he later wrote, an artist decided to draw the figure of a man, but gathered “the hands, feet, head and other members for his images from diverse models, each part excellently drawn, but not related to a single body . . . the result would be monster rather than man.”13

The earth-centered universe of the Almagest was, he thought, monstrous: an unwieldy set of awkward mathematical contortions.

Copernicus spent a decade and a half studying the Almagest and making his own records of planetary positions. By 1514, he had formulated a more graceful theory. He wrote it out in a simple and readable form, eliminating all of the mathematics involved, and circulated it to his friends. This informal proposal was known as the Commentariolus.

“I often considered,” it began, “whether there could perhaps be found a more reasonable arrangement of circles.” This more reasonable arrangement began with a simple assumption: “All the spheres revolve about the sun as their mid-point, and therefore the sun is the center of the universe.” The earth was merely the center of the “lunar sphere,” not the entire universe. Furthermore, the earth did not remain motionless; instead, it “performs a complete rotation on its fixed poles in a daily motion.” This earthly rotation actually caused the apparent movement of the sun, and accounted for what seemed to be retrograde motion in the planetary paths.14

Copernicus spent the next quarter of a century working the Commentariolus up into the full-fledged astronomical manual On the Revolutions of the Heavenly Spheres, complete with mathematical calculations. “The harmony of the whole world teaches us their truth,” Copernicus wrote, “if only—as they say—we would look at the thing with both eyes.” That truth was simple: Only the mobility of the Earth, and the sun’s position at “the centre of the world,” can explain the motion of the stars.15

In other words, the heliocentric model was intended to be a true picture of the universe, not just a mathematical strategy. Unlike the Greek astronomers, Copernicus clearly believed that, should he suddenly be transported into the heavens, he would see the earth tracking faithfully around the sun.

He was moving away from philosophy, toward what we would now think of as a more scientific endeavor: a careful explanation of the physical world based on phenomena, not on a priori assumptions. But it was an incomplete journey. Copernicus had no telescope capable of providing visual confirmation of his model; he was a theoretical eyewitness, not an actual one.

And heliocentrism had its problems. For one thing, Copernicus couldn’t explain why, if the earth was whirling on its axis and sailing around the sun, both of those motions were imperceptible to people standing on its surface. And for another, heliocentrism seemed to contradict the literal interpretation of biblical passages such as Joshua 10:12–13, in which the sun and moon “stand still” rather than continuing to move around the earth.

So when On the Revolutions was first printed in 1542, an unsigned introduction was appended to it, explaining that the heliocentric model was merely another mathematical trick, not a real description. “For these hypotheses,” the introduction explained, “are not put forward to convince anyone that they are true, but merely to provide a reliable basis for computation.”

Copernicus may not even have seen this disclaimer; it is generally thought to have been written by a friend who was overseeing the printing process. Yet it was accepted as genuine by most readers, and for a century, Copernicus’s scheme remained merely one among many.

Its ultimate triumph was due to the work of two men: the English philosopher Francis Bacon, and the Italian astronomer and physicist Galileo.

Bacon, born nineteen years after On the Revolutions was published, was an ambitious politician and an even more ambitious thinker. While he was busy climbing up the ladder of preferment at the English court, he was also planning out his masterwork. This would be a definitive study of human knowledge called the Great Instauration: the Great Establishment, a complete system of philosophy in six volumes that would shape the minds of men and guide them into new truths.

By 1620, he had completed only the first two books, and his time was running out; his position at court had been undermined by his enemies, he was about to be confined in the Tower of London, and he would die of pneumonia in 1626 without returning to his magnum opus. But the Novum Organum (“New Tools”), published that year, laid the foundations for the modern scientific method.

Novum Organum (a play on the title of Aristotle’s six books on logic, known collectively as the Organon) challenged the reliability of deductive reasoning, the Aristotelian way of thinking generally followed by natural philosophers. Deductive reasoning begins with generally accepted truths, or premises, and works its way toward more specific conclusions:

MAJOR PREMISE: All heavy matter falls toward the center of the universe.

MINOR PREMISE: The earth is made of heavy matter.

MINOR PREMISE: The earth is not falling.

CONCLUSION: The earth must already be at the center of the universe.

Bacon had come to believe that deductive reasoning was a dead end that distorted physical evidence, and made observation secondary to preconceived ideas. “Having first determined the question according to his will,” he complained, “man then resorts to experience, and bending her to conformity . . . leads her about like a captive in a procession.”

Instead, he argued, the careful thinker must reason the other way around: starting from specific observations, and building from them toward general conclusions. This new way of thinking—inductive reasoning—had three steps to it. The “true method,” Bacon explained, “first lights the candle, and then by means of the candle shows the way; commencing as it does with experience duly ordered and digested, not bungling or erratic, and from it deducing axioms, and from established axioms again new experiments.” In other words, the natural philosopher must first come up with an idea about how the world works: “lighting the candle.” Second, he must test the idea against physical reality, against “experience duly ordered”—both observations of the world around him, and carefully designed experiments. These experiments should be carried out with the use of instruments that magnify, intensify, and make clearer the process of nature: “Neither the naked hand nor the understanding left to itself can effect much,” Bacon wrote. “It is by instruments and helps that the work is done.”16

Only then, as a last step, should the natural philosopher “deduce axioms,” or come up with a theory that could be claimed to carry truth.
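Bacon’s ordering can be caricatured in a few lines of code. The “observations” and candidate rules below are invented stand-ins, nothing Bacon himself proposed; the point is only the sequence, with the axiom drawn last, after the trials:

```python
# Toy sketch of Bacon's inductive loop (illustrative only: the data
# and candidate rules are invented stand-ins).
observations = [(1, 2), (2, 4), (3, 6), (4, 8)]  # (input, measured output)

# Step 1: "light the candle" -- propose hypotheses.
candidate_rules = {
    "doubling": lambda x: 2 * x,
    "squaring": lambda x: x * x,
    "add_one": lambda x: x + 1,
}

# Step 2: test each against experience "duly ordered and digested".
surviving = {
    name: rule
    for name, rule in candidate_rules.items()
    if all(rule(x) == y for x, y in observations)
}

# Step 3: only now "deduce axioms" from what survived the trials.
print(sorted(surviving))  # → ['doubling']
```

The deductive thinker starts at step 3 and bends the observations to fit; Bacon insists the axiom come last, and that each new axiom then suggest “again new experiments.”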

Hypothesis, experiment, conclusion: Bacon had just traced out the outlines of the scientific method. It was not, of course, fully developed. But the Novum Organum continued to shape the seventeenth-century practice of science. Finally, a method was in place that would allow natural philosophers to “look with both eyes,” as Copernicus had asked, and to come to conclusions based on their observations.

Chief among the “instruments and helps” that made these observations more useful was the telescope: brand new, and under steady improvement even as Bacon was writing. Ten years before the publication of the Novum Organum, the Italian mathematician and astronomer Galileo Galilei had first encountered a telescope on a visit to Venice. This arrangement of convex and concave lenses had been invented the year before by a Low Country spectacle-maker; immediately on returning home, Galileo set to work grinding his own lenses and improving the instrument’s magnification.

The original telescope had been only slightly more useful than the naked eye, but Galileo managed to refine the magnification to around 20X. Through his instrument, he saw mountains and valleys on the moon, and many more stars than were visible to the eye alone. He also saw four objects near Jupiter, never before observed. When Galileo first viewed them, he thought they were fixed stars.

But when he looked at them again on the following day, they had moved.

And they kept moving, in and out of sight, to the left and to the right of Jupiter itself. Over the course of a week, Galileo was able to sketch out their progression and come to the inevitable conclusion: they were moons, and all four “perform[ed] their revolutions about this planet . . . in unequal circles.”

This provided unequivocal proof that not all heavenly bodies revolved around the earth—proof that Galileo published in 1610, in a short work known as The Sidereal Messenger (“The Starry Messenger”). A few months later, he used his telescope to observe the changing phases of Venus: inexplicable in the Ptolemaic system, they made sense only if Venus was, in fact, traveling around the sun.

These observations did not convince everyone. In fact, the chief philosopher at Padua, an Aristotelian named Cesare Cremonini, simply refused to look through Galileo’s telescope. “To such people,” Galileo wrote bitterly to the astronomer Johannes Kepler, “truth is to be sought, not in the universe or in nature, but (I use their own words) by comparing texts!” In Galileo’s opinion, Aristotle himself would have been happy both to look, and to adjust his physics in response: “We do have in our age new events and observations,” he later remarked, “such that, if Aristotle were now alive, I have no doubt he would change his opinion.”17

An epic battle was shaping up: between ancient authority and present observation, Aristotelian thought and Baconian method, text and eye. Galileo himself had not yet written anything that explicitly supported Copernicus. But his observations in The Sidereal Messenger certainly implied that he accepted heliocentrism, and he had already offered (in an unpublished collection of essays known as De motu) a mathematical explanation for why the earth’s motion through space was imperceptible from its surface.

In 1616, the cardinal Robert Bellarmine (under orders from Pope Paul V) recommended that On the Revolutions be placed on the church’s list of condemned texts. He also warned Galileo, in a private but official meeting, to abandon public agreement with Copernicus. Instead, Galileo spent the next sixteen years tackling the remaining problems with the heliocentric model, one at a time.

In 1632, he put all of his conclusions into a major work: the Dialogue on the Two Chief World Systems, Ptolemaic and Copernican. In order to sidestep Bellarmine’s dictate, Galileo framed the Dialogue as a hypothetical discussion, an argument among three friends as to what might be the best possible model for the universe. Two of his characters, charmingly intelligent and sympathetic, agree that the Copernican theory is superior; the third, a plodding simpleton named Simplicio, insists on Aristotle’s earth-centered system.

The first print run of a thousand copies sold out almost immediately. It didn’t take long for churchmen to notice that Galileo was violating Bellarmine’s warning; and in 1633, Galileo—now nearly seventy, and unwell—was forced to travel to Rome to defend himself against the Inquisition. Threatened with “greater rigor of procedure,” a code phrase for torture, Galileo agreed to “abandon the false opinion which maintains that the Sun is the center and immovable.” The Dialogue was banned in Italy, and Galileo was sentenced to house arrest. He died in 1642, his condemnation still in place.18

But outside the reach of the Inquisition, the Dialogue continued to circulate: reprinted, read throughout Europe, translated into English in 1661, consulted by astronomers who used ever more powerful telescopes to confirm Galileo’s conclusions.

At the same time, the English scientist Robert Hooke took Bacon’s recommendations in the other direction; instead of using instruments to examine the distant skies, he looked more closely at earthly objects.

Hooke was an excellent mathematician, an expert at grinding and using lenses, the inventor of a barometer, a competent geologist, biologist, meteorologist, architect, and physicist. In 1662, he was appointed to the post of Curator of Experiments for the fledgling Royal Society of London for Improving Natural Knowledge. The Society was a “research club,” a regular gathering of natural philosophers who were committed to the experimental method of science; they were all students of the Novum Organum, and the Royal Society’s dedicatory epistle (written by the poet Abraham Cowley, himself an enthusiastic amateur scientist) was all in praise of Francis Bacon.

  From words, which are but pictures of the thought,

Cowley enthused,

  Though we our thoughts from them perversely drew

  To things, the mind’s right object, he it brought . . .

  Who to the life an exact piece would make,

  Must not from other’s work a copy take . . .

  No, he before his sight must place

  The natural and living face;

  The real object must command

  Each judgment of his eye, and motion of his hand.

This examination of “real objects,” when carried out with instruments and helps, was known as “elaborate,” and such experiments were done in well-equipped “elaboratories”; Hooke himself had worked, as a young man, in the elaboratory of the chemist Robert Boyle. His training there, along with his wide-ranging skills and interests, made him the perfect choice as Curator of Experiments. He was paid a full-time stipend to do two things: to present a variety of weekly experiments to the gathered Society, explaining and demonstrating as he went; and to assist the Fellows with their own experiments, as needed.

This made Robert Hooke (probably) the first full-time salaried scientist in history. The Royal Society was made up of astronomers, geographers, physicians, philosophers, mathematicians, opticians, and even a few chemists, so Hooke was called on to experiment and research across the entire field of natural philosophy. He conducted demonstrations with pendulums, distilled urine, insects placed in pressurized containers, colored and plain glass, and much more.

But, increasingly, his experimental demonstrations involved the microscope.

Microscopes had improved as telescopes had grown more powerful. In 1663, the minutes of the Royal Society note, Hooke demonstrated the microscopic structures of moss, cork, bark, mold, leeches, spiders, and “a curious piece of petrified wood.” The petrified wood puzzled him greatly, but he suggested that perhaps it had “lain in some place where it was well soaked with water . . . well impregnated with stony and earthy particles,” and that the stone and earth had “intruded” into it.19

Hooke had described, for the first time, the process of fossilization. And he had gone beyond observation with instruments to something new: the reconstruction of a physical process that he had not (and could not) see, but that he was able to deduce.

In 1664, the Royal Society formally requested that Hooke print his micrographical observations. On top of his other competencies, Hooke was a skilled draughtsman and artist. Rather than merely describing his discoveries in words, or commissioning nonscientists to produce his drawings, he made his own: large, exquisitely detailed, and perfectly clear. The resulting work, Micrographia, was published in 1665.

The eye-grabbing pictures attracted the most attention. But even more notable is that, throughout, Hooke uses his newly extended senses to build new theories. After carefully examining the colors and layers of muscovite (“Moscovy-glass”) he goes beyond his observations to suggest nothing less than a theory of how light works: it is, he speculates, a “very short vibrating motion” propagated “through an Homogeneous medium by direct or straight lines.” It was not enough merely to extend the senses by way of instruments; the reason must follow the path laid by these observations, interpret them, and then check itself again.

And again, and again, and again. Hooke and the members of the Royal Society were committed to Baconian thinking, but they were also cautious, reluctant to draw conclusions without exhaustive proof—an attitude that soon drove a wedge between the Society and its newest member, one “Mr. Isaac Newton, professor of mathematics in the university of Cambridge.”

Isaac Newton, twenty-nine years old when he joined the Society in 1672, was a student of the experimental method and an enthusiastic user of artificial helps (his latest work was with prisms). But when he shared his most recent “philosophical discovery”—that all light is made up of a spectrum of rays, and that “whiteness is nothing but a mixture of all sorts of colours, or that it is produced by all sorts of colours blended together”—the Society greeted him with skepticism. Hooke objected that he could think of at least two other “various hypotheses” that could equally well explain Newton’s results, and the other members of the Society recommended that many more experiments should be made before any universal conclusions were drawn.20

These experiments dragged on for the next three years, with much correspondence flying back and forth between Newton’s Cambridge elaboratory and the Society’s London headquarters. Newton became increasingly frustrated. “It is not number of experiments, but weight, to be regarded,” he complained in 1676, “and where one will do, what need many?” Gradually, he withdrew from participation in the Royal Society, and devoted himself instead to his own research: not only light and optics, but also the orbits of the planets, and the celestial mechanics that might explain them.

In 1687, he published his first major work: Philosophiae Naturalis Principia Mathematica, or “Mathematical Principles of Natural Philosophy.” It was intended to solve the biggest problems that still plagued the heliocentric model. For one thing, calculations based on perfectly circular orbits didn’t match up with the exact position of the planets. Galileo’s friend and colleague Johannes Kepler had proposed laws for elliptical orbits; this yielded much better results, but neither Kepler nor Galileo had been able to explain why the orbits should be elliptical rather than circular.

Newton had a possible solution. Planets circled the sun, he suggested, not because they were mounted on some sphere, but because the sun was exerting a force on them. Planets exerted the same force on moons that surrounded them. This force he called gravitas.

Galileo, like Aristotle, had believed that objects fell because of an inherent quality within them, an intrinsic “weightiness.” Newton argued that objects fell because the earth’s gravitas drew them toward it. But the strength of this force did not remain the same over distance. It changed. As the planets moved further from the sun, the force that pulled on them weakened: thus, the ellipse.
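
The quantitative form Newton eventually gave this weakening is the inverse-square law: double the distance, and the pull falls to a quarter. A minimal numeric sketch, in modern SI units (the constant G was only measured long after Newton's time):

```python
# Newton's inverse-square law of gravitation: the pull between two
# bodies weakens with the square of the distance between them.
# A minimal numeric sketch in modern SI units; the constant G was
# only measured long after Newton.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity(m1, m2, r):
    """Force in newtons between masses m1 and m2 (kg) separated by r (m)."""
    return G * m1 * m2 / r**2

# Doubling the distance cuts the force to a quarter:
near = gravity(1.0, 1.0, 1.0)
far = gravity(1.0, 1.0, 2.0)
print(near / far)  # → 4.0
```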

In order to fully explain the laws governing this new force, Newton had to come up with an improved mathematics, capable of accounting for continual small changes. This new math was a “mathematics of change,” able to predict results in a setting where the conditions were constantly shifting, forces altering, factors appearing and receding.21
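
The germ of that "mathematics of change" can be suggested with a present-day example: the instantaneous speed of a falling body, recovered as the limit of average speeds over ever-smaller intervals (an illustrative sketch in modern notation, not Newton's own):

```python
# The core idea of Newton's "mathematics of change": the instantaneous
# rate of change of a quantity is the limit of its average change over
# ever-smaller intervals. Here, the speed of a body falling under
# gravity, whose position is s(t) = 0.5 * g * t^2.

g = 9.8  # acceleration due to gravity, m/s^2

def position(t):
    """Distance fallen (m) after t seconds."""
    return 0.5 * g * t**2

def average_speed(t, dt):
    """Average speed (m/s) over the interval from t to t + dt."""
    return (position(t + dt) - position(t)) / dt

# Shrinking the interval closes in on the instantaneous speed,
# g * t = 9.8 m/s at t = 1:
for dt in (1.0, 0.1, 0.001):
    print(average_speed(1.0, dt))
```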

So the Principia performed two groundbreaking tasks simultaneously. It explained the why behind the ellipses of the planets—and in doing so, revealed for the first time a new force in the universe, the force of gravity. And it introduced an entirely new branch of mathematics, which became known as calculus (from the Latin word for “pebble,” the tiny stones used as arithmetical counters).

None of this was easy going. The Principia is, deliberately, composed of impenetrable mathematical explanations; William Derham, Newton’s longtime friend and colleague, later explained that since Newton “abhorred all Contests,” he “designedly made his Principia abstruse” in order to “avoid being baited by little Smatterers in Mathematicks.” (This drove off quite a few academics as well; a frustrated Cambridge student famously remarked, as Newton passed him on the street, “There goes the man who has writt a book that neither he nor any one else understands.”)22

But at the beginning of Book 3, Newton abandoned his dense formulaic prose in order to write clearly.

The “Rules for the Study of Natural Philosophy” that begin Book 3 are, in a way, his final response to the Royal Society’s unending demands for proof. Newton was aware that the conclusions of the Principia could be dismissed by the literal-minded as “ingenious Romance”—mere guesses. After all, he had not actually spun planets at different distances from the sun to observe the rate of their orbit. Instead, he had taken the results of experiments with weighted objects carried out on earth, and had extrapolated their results into the heavens.23

The Rules explain why Newton’s conclusions about the planetary orbits, while not experimentally proven in the way that would make the Society happy, are nevertheless reliable—as the first three Rules make very clear.

1. Simpler causes are more likely to be true than complex ones.

2. Phenomena of the same kind (e.g., falling stones in Europe and falling stones in America) are likely to have the same causes.

3. If a property can be demonstrated to belong to all bodies on which experiments can be made, it can be assumed to belong to all bodies in the universe.

This is Bacon’s inductive reasoning, always progressing from specifics to generalities, based on observation—but now extended by Newton to breathtaking lengths.

Across, in fact, the entire face of the universe.

The Historians

For nearly two centuries, the universe would remain Newtonian.

His laws always seemed to work, in every place. Gravity functioned in the same way in every corner of the universe. Time passed everywhere at the same rate. The universe was static and infinite, and it went on forever.

But this did not mean that the earth was static and unchanging, or that the living things upon it had always been the same. And Newton’s Rules made it possible for observations about the present to be extrapolated back into educated guesses about the past.

For one thing, how long had the earth been around?

Newton himself speculated that the earth might originally have been a molten sphere. In that case, it would have taken at least 50,000 years for it to cool to its present temperature—although he refused to offer this as an actual theory, since he didn’t feel he had the experimental proof to back it up. His colleague and sometimes competitor, the German mathematician Gottfried Wilhelm von Leibniz, offered a similar speculation—that the earth had once been liquified, like metal, and had cooled and hardened over time. This had produced large bubbles; some of them calcified into mountains, others shattered and disintegrated, producing valleys.24

Questions about the age of the earth and its past history became suddenly more fraught in 1701, when the Bishop of Worcester, William Lloyd, inserted a creation date of 4004 B.C. into the marginal notes of the newest edition of the 1611 Authorized Version of the Bible. This date had first been proposed by the Irish bishop and astronomer James Ussher a half century before; Ussher had combined the study of biblical chronology with his own astronomical observations, and had concluded that the earth could not be more than six thousand years old.

The Authorized Version of the Bible was the most widely read and influential English translation in print. From this point on, proposing an age of more than six thousand years for the earth would carry with it the slur of denying Scripture—and not just in English-speaking countries. In 1749, the French naturalist Georges-Louis Leclerc (usually known by his title, the Comte de Buffon) estimated the age of the earth at 74,832 years, and privately thought that an even longer time frame was probable—perhaps as long as three billion years (not so far off from the contemporary estimate of 4.57 billion). His theories drew the attention of the Faculty of Theology in Paris, which carried on a long and suspicious correspondence with him over his understanding of Genesis. But Buffon dug in his heels, refusing to yield the point.25

He was not alone in his insistence. Despite theological opposition, a growing cadre of scientists was coming to the conclusion that the scientific method and Newton’s Rules, exercised together, yielded a long, long history for the earth. In 1785’s Theory of the Earth, the Scottish-born James Hutton argued that the continents had been formed, over vast amounts of time, by the exact same cycles of erosion and buildup, ebb and flow, that still operate today. And the measurement of those present processes suggested that change happened very, very slowly.

So slowly, in fact, that Hutton could not wrap his head around the amount of time needed. “[T]he production of our present continents must have required a time which is indefinite,” he wrote. “. . . The result, therefore, of this physical inquiry, is that we find no vestige of a beginning, no prospect of an end.” Geological time—what John McPhee would later label “deep time”—was so different from the time of human experience that Hutton could barely even use the measure of years to express it.26

In 1809, the French zoologist Jean-Baptiste de Monet—better known as the Chevalier de Lamarck—suggested that the living creatures on the earth’s surface had a history almost as long. Before Lamarck, most natural historians had treated animals and plants as coming late to the surface of the globe, arriving more or less already in their present forms. But Lamarck’s Zoological Philosophy married the history of life to the history of the globe: As it altered, so did the creatures on its surface. “With regard to living bodies . . . nature has done everything little by little,” he wrote. “[S]he acts everywhere slowly and by successive stages.”27

Unfortunately, Lamarck couldn’t really come up with a defensible theory as to how living creatures altered. The best he could do was to offer a “principle of use and disuse,” which suggested that when the environment changed, living creatures found themselves using some organs more (leading to greater “vigour and size” in those parts) and other organs less (causing them to “deteriorate and ultimately disappear”). This was impossible to demonstrate experimentally and the principle was widely scorned by other scientists: “Something that might amuse the imagination of a poet,” sniffed Lamarck’s contemporary, the naturalist Georges Cuvier.28

But despite Lamarck’s shortcomings, he and his predecessors had managed to establish a firm working principle: Both the earth, and the living creatures who occupied it, had an unimaginably long history. It was a principle that gave birth to the foundational works of modern biology and geology.

The first among these was written by Georges Cuvier himself. In his twenties, Cuvier had been given the job of organizing and cataloguing the massive collection of fossil bones piled haphazardly in the storage rooms of Paris’s National Museum of Natural History. It seemed clear to Cuvier that some of these fossil skeletons—particularly two that he labeled as “mammoth” and “mastodon”—were not simply variations on present-day animals; they were something else, species that no longer existed.

Eventually, Cuvier identified, in the museum stockpiles, twenty-three species that appeared to be extinct. Trying to figure out why they had disappeared, Cuvier turned to the rock layers in which the fossils had been found. He and his colleague, the mineralogist Alexandre Brongniart, identified six distinct layers in the rock strata around Paris: six different eras in the earth’s past, each with its own population of plants and animals, some now extinct. Before long, Cuvier extrapolated these discoveries into an earth-wide theory. In 1812, he published this theory as the preface, or “Preliminary Discourse,” to his collected papers on fossils (Recherches sur les ossemens fossiles de quadrupèdes, an assembly of all of the different studies he had presented and published since 1804).

The earth, Cuvier argued, had undergone six separate catastrophic changes. Its layers changed suddenly and distinctly, not gradually and by degrees; therefore, it seemed clear that a series of nearly worldwide disasters had wiped out various populations of flora and fauna. “Thus, life on earth has often been disturbed by terrible events,” Cuvier concluded. “These great and terrible events are clearly imprinted everywhere, for the eye that knows how to read.”29

For a time, Cuvier’s catastrophism was the most widely accepted model for the past—until the geologist Charles Lyell proposed a different version.

Catastrophe, Lyell argued, wasn’t necessarily the cause of past phenomena. “It appears premature,” he wrote in the London journal Quarterly Review, “to assume that existing agents could not, in the lapse of ages, produce such effects.” Extraordinary, earth-wrecking disasters could have produced the specimens in Cuvier’s collections. But it was equally possible that the “existing agents” still at work in the world—plain old erosion, the common rise and fall of temperatures, the regular wash of the tides—might be responsible instead.30

Which was Lyell’s distinct preference. He was convinced that catastrophism was a dead end for science. If one-time past events were responsible for the current form of the earth, there was no way that the past could be understood through the exercise of reason. The natural philosopher could always haul in a disastrous flood, or a passing giant comet, or some other event that could never be experimentally reproduced, to explain the planet.

Instead, Lyell argued, every force that has operated in the past can be observed, still acting, with the same intensity, in the present: a principle now known as uniformitarianism. The title of his 1830 natural history made this commitment perfectly clear: Principles of Geology, Being an Attempt to Explain the Former Changes of the Earth’s Surface, by Reference to Causes Now in Operation.

Uniformitarianism made catastrophes unfashionable, global floods and divine intervention unnecessary. Uniformitarianism also made the unimaginably long time frame first proposed by Hutton completely necessary. “Existing agents” such as tides and erosion could have shaped the world into its present form, but it would take them a really, really long time.

The year after the Principles of Geology was published, a young Charles Darwin put it into his luggage before setting off on the HMS Beagle for what would become a five-year journey of exploration: from Plymouth Sound to the South American coast, then to the Galápagos Islands, Tahiti, and Australia, circling the globe before returning home. “[Lyell’s] book was of the highest service to me in many ways,” he later wrote. He was struggling with the problem of species (where did they come from? what accounted for the differences between them?) and he found Lyell’s long-and-slow philosophy of change entirely convincing. “Natura non facit saltum,” Darwin concluded: Nature does not make sudden jumps. Whatever mechanism had produced the difference between species, it had taken a very long time to work.

He also read Lamarck, but disagreed vigorously with the principle of use and disuse. “It is absurd!” he scribbled in the margin of the Zoological Philosophy. Instead, Darwin found the key to the species question in Thomas Malthus’s bestselling An Essay on the Principle of Population, which had been first published in 1798. The future of the human race, Malthus argued, was shaped by two factors: Humanity has an innate drive to reproduce, which means that the population constantly increases. But because the food supply does not increase as rapidly as the population, a large percentage of those born will always die of starvation.

“It at once struck me,” Darwin later wrote, “that under these circumstances favourable variations would tend to be preserved and unfavourable ones to be destroyed. The result of this would be the formation of new species.” He had found, he believed, the key to the species problem; but he drafted and redrafted his thoughts for over a decade before finally publishing On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life in 1859.31

The book laid out a series of arguments in support of Darwin’s main conclusion: Life, like the earth itself, is changing constantly, and natural causes alone account for that change. And different species of animals have not always existed; new species appear when previous species develop variations, and those variations prove helpful in the fight for survival. In 1864, the well-known biologist and philosopher Herbert Spencer used the phrase “survival of the fittest” to describe Darwin’s theory; although it never appears in The Origin of Species itself, the phrase soon became inextricably entwined with Darwin’s own work.

A major stumbling block remained. Although Charles Darwin was quite sure that variations were passed from parent to child, he had no idea how this worked.

“The laws governing inheritance are quite unknown,” he lamented, in the second chapter of The Origin of Species. “No one can say why a peculiarity . . . is sometimes inherited and sometimes not so.” Nine years after Origin of Species was first published, he suggested that inheritance could be explained through the existence of “minute particles” called gemmules, which are thrown off by every part of an organism, accumulate in the sex organs, and are then passed on to offspring. The strongest argument for this theory was simply that he couldn’t think of anything better. “It is a very rash and crude hypothesis,” he wrote to his friend T. H. Huxley, “yet it has been a considerable relief to my mind, and I can hang on it a good many groups of facts.”32

He never came up with a better explanation, although the key to the truth was literally under his own roof.

At Darwin’s death in 1882, his library contained unopened copies of a short paper in German by the Austrian botanist (and Augustinian friar) Gregor Mendel, describing Mendel’s nine-year experiments with pea plants. Interbreeding thirty-four different varieties, Mendel had discovered a series of laws that seemed to govern how their characteristics (shape and color of the seeds and pods, position of flowers, length of stem) were passed on.

Clearly, the characteristics were carried from parent pea to offspring pea by the egg and pollen cells, so (Mendel proposed) those cells must contain discrete units, or elements, with each element carrying a particular characteristic within it. The proper manipulation of those elements could change the characteristics of the next generation—and, Mendel speculated, might be able to eventually mutate one species into another.33
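
Mendel's discrete "elements" behave, in modern terms, like paired factors, one inherited from each parent; crossing two hybrids yields his famous three-to-one ratio of dominant to recessive traits. A small sketch (the Aa allele notation is modern shorthand, not Mendel's own):

```python
# Mendel's "elements" in modern dress: each parent carries two copies
# of a factor and passes one, at random, to each offspring. Crossing
# two hybrids (Aa x Aa) gives the famous 3:1 ratio of dominant to
# recessive traits.

from itertools import product

def cross(parent1, parent2):
    """All equally likely offspring of two parents' factor pairs."""
    return [a + b for a, b in product(parent1, parent2)]

offspring = cross("Aa", "Aa")     # ['AA', 'Aa', 'aA', 'aa']
dominant = [o for o in offspring if "A" in o]

print(offspring)                  # four equally likely combinations
print(len(dominant), ":", len(offspring) - len(dominant))  # 3 : 1
```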

Mendel wasn’t able to identify exactly what the elements of heredity were, or where in the cell they might be. But a series of biological experiments to pinpoint them was already underway.

The German biologist Ernst Haeckel, a generation younger than Darwin (and originator of the catchy phrase “ontogeny recapitulates phylogeny”)34 proposed that inheritance might be controlled by something in the nucleus of a cell. He didn’t have the equipment to prove it, but in the early 1880s, Haeckel’s countryman Walther Flemming made use of much-improved microscopic lenses and better staining techniques to observe minuscule, thread-like structures in cells that had begun to divide (mitosis). His colleague Wilhelm Waldeyer suggested that these should be named chromosomes, a name which simply described their ability to soak up dye (chrom, color; soma, body).

In 1902, the German biologist Theodor Boveri discovered that sea urchin embryos need exactly 36 chromosomes to develop normally—which strongly suggested that each chromosome carried a unique and necessary piece of information from parent to child. Simultaneously, an American graduate student named Walter Sutton realized from his experiments with grasshoppers that chromosomes carry the “physical basis of a certain definite set of qualities.” The Danish botanist Wilhelm Johannsen gave this unit of heredity, the carrier of information from one generation to the next, its name: the gene. This was Darwin’s missing puzzle piece, the mechanism that transformed organic life from one form into another.35

A decade and a half later, a German astronomer named Alfred Wegener stumbled across the other major missing mechanism: the one that had transformed the inorganic surface of the globe.

“Anyone who compares, on a globe, the opposite coasts of South America and Africa,” Wegener wrote, in his 1915 book The Origin of Continents and Oceans, “cannot fail to be struck by the similar configuration of the two coast lines.” The jigsaw match suggested to him that the continents had once been a single mass, a giant supercontinent that he labeled Pangea; long, long ago, Pangea had broken up and drifted apart. This required him to provide an explanation for how solid earth could “drift.” So he proposed that the earth was not actually solid. Instead, it consisted of a liquid core, surrounded by a series of shells that decreased in density as they got closer to the surface.36

It was a simple, elegant explanation, and accounted for almost all the factors that puzzled geologists: odd similarities between fossils found in far distant places, the apparent interlocking fit of the continental coastlines, the origin of mountains (which, according to Wegener, sprang up where the drifting pieces collided and overlapped). The problem was the absolute absence of any physical evidence. Wegener could not demonstrate the existence of a liquid core; nor could he supply a reason why Pangea didn’t simply remain in one supercontinent.

But Wegener believed that the explanatory power of his theory trumped his lack of explicit proof. He argued that, after all, the earth “supplies no direct information” about any part of its history: “We are like a judge confronted by a defendant who declines to answer,” he wrote, “and we must determine the truth from the circumstantial evidence. . . . The theory offers solutions for . . . many apparently insoluble problems.”37

Thirteen years after the original publication of The Origin of Continents and Oceans, the naval astronomers F. B. Littell and J. C. Hammond compared the longitudes of Washington and Paris in 1913 and in 1927. Their readings revealed that the distance between the two cities had increased by 4.35 meters—an average creep of roughly 0.31 meters per year.

Given that Paris is some six thousand kilometers from Washington, it would have taken over 18 million years for the two cities to move that far apart. But the drift was measurable, beyond a doubt. The continents were indeed drifting—and had been doing so for a very long time. They, like the living creatures on them, had a history; and the basic time line of that history had now been put into place for both.
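
The arithmetic behind that time line is simple enough to check with a short sketch, using the chapter's own figures (4.35 meters of drift over the fourteen years between measurements, and roughly six thousand kilometers of separation):

```python
# Checking the chapter's figures: 4.35 meters of drift between the 1913
# and 1927 longitude measurements, and roughly 6,000 kilometers between
# Washington and Paris.

drift_m = 4.35                     # measured increase in separation, meters
interval_years = 1927 - 1913       # fourteen years between measurements
separation_m = 6_000 * 1_000       # rough Washington-Paris distance, meters

rate = drift_m / interval_years    # meters of drift per year
years_to_separate = separation_m / rate

print(round(rate, 2))              # → 0.31
print(years_to_separate > 18_000_000)  # → True: over 18 million years
```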

The Physicists

While historians of life were working out a narrative for the past, physicists were puzzling out the present—and discovering that time, space, and matter itself were not nearly as straightforward as Newton, Bacon, and their heirs had once thought.

Ten years before the publication of The Origin of Continents and Oceans, the patent examiner and physicist Albert Einstein had completed five papers in a single year, all dealing with problems in electricity, magnetism, and related issues of space, time, and motion. One of the papers proposed that the equivalence of mass and energy could be expressed as

E = mc²

which became the most familiar formula of the twentieth century.

But Einstein thought another of the papers, “On the Electrodynamics of Moving Bodies,” even more important. It was, he told a friend, an out-and-out “modification of the theory of space and time”: his first exploration of what would later be known as the special theory of relativity.

The paper set out to reconcile two apparently contradictory principles of physics. The first concerned the speed of light. Since the early 1880s, physicists had agreed that light traveling through a vacuum always has the exact same velocity (“c = 300000 km/sec.”).

The second was the principle of relativity, a cornerstone of the Newtonian universe, which decrees that a law of physics must work in the same way in all frames of reference that move uniformly relative to one another.

Imagine, Einstein later wrote, that a railway car is traveling along next to an embankment at a regular rate of speed. At the same time, a raven is flying through the air, also in a straight line relative to the embankment, and also at a steady rate of speed. An observer standing on the embankment sees the raven flying at a certain rate of speed. An observer standing on the moving railway car sees the raven flying at a different rate of speed. But although the speed changes, relative to the observer, both watchers still see the raven flying at a constant rate of speed, and in a straight line. The principle of relativity dictates that the raven cannot suddenly appear to be accelerating, or traveling in zigzags.


Now, imagine that a vacuum exists above the railway tracks, and that a ray of light travels through it, in the same direction as the raven. The principle of relativity says that light too will travel at a constant rate, and in a straight line. But it also implies that an observer on the embankment and an observer on the railway car will see the light traveling at two different speeds—which means that the speed of light is not constant.

Most physicists dealt with this problem by abandoning the principle of relativity. But Einstein argued that neither law needed to be given up—as long as we are willing to adjust our ideas about time and space.38

Both observers measure the speed of light per second; perhaps, Einstein suggested, what was changing was not the speed per second, but the second itself. Time itself was slowing down as the observer moved faster. For the observer who was moving, a second was actually . . . longer. Time was not, as had always been thought, a constant.

Instead, Einstein concluded, time was a fourth dimension that we move through—a dimension that changes as we travel in it. The “special theory of relativity” had redefined the nature of time.
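
The stretching of the moving observer's second has an exact form, the Lorentz factor of special relativity; the chapter states the idea only qualitatively, but the standard formula can be sketched numerically:

```python
# The slowing of a moving clock has an exact form in special relativity:
# a clock moving at speed v runs slow by the Lorentz factor
# 1 / sqrt(1 - v^2 / c^2). (The formula is the standard textbook result;
# the chapter states the idea only qualitatively.)
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s

def time_dilation(v):
    """Stationary-clock seconds elapsed per second on a clock moving at v m/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# At everyday speeds the stretching is invisibly small;
# near the speed of light it is enormous:
print(time_dilation(30.0))     # barely distinguishable from 1
print(time_dilation(0.9 * C))  # → about 2.29
```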

In 1916, Einstein redefined space as well.

Building on the work of the nineteenth-century mathematician Bernhard Riemann, Einstein proposed that space is just as relative to the observer as time (the “general theory of relativity”). The presence of massive objects, Einstein argued, actually bends space. Since we (the observers) are within space, we cannot see the curves—but objects traveling through space are affected by the bends.

This theory could be checked against effects caused by the sun, the most massive object nearby. If Einstein was correct, light from stars, traveling through space, would move along curved space as it neared the sun. The starlight would then appear to be “pulled” toward the mass of the sun; starlight would be, observably, bent by the sun’s mass.

This could only be observed during a total solar eclipse, and it was another three years before the British astronomer Arthur Eddington was able to take the necessary measurements. His calculations, made during a solar eclipse in 1919, showed that the starlight passing by the sun had shifted, to the exact degree that Einstein had foreseen.

In Relativity: The Special and General Theory, Einstein laid out his conclusions about time and space for general readers. Neither, it turned out, was what it seemed. Baconian observation had its limits; common sense can lead the observer astray.

Meanwhile, a small handful of Einstein’s colleagues were doing equally revolutionary work on a much smaller scale: on the atom itself. By the end of the nineteenth century, physicists had come to believe that atoms—Lucretius’s “indivisible” particles—were, in fact, made up of smaller particles carrying a negative electrical charge; these were labeled electrons by the Irish physicists George Stoney and George Fitzgerald. Early in the twentieth century, the young German physicist Hans Geiger and his elder colleague Ernest Rutherford theorized that these electrons were orbiting a central mass, a “nucleus.” It was an elegant, intuitive model; electrons spun around the nucleus like planets around the sun, the smallest particles in the universe mirroring the heavens.39

But the orbits of those electrons posed a problem.

The “Rutherford model” imagined electrons to be something like satellites circling the earth. If a satellite orbiting the earth lost some of its energy, it would spiral down and crash. But when an atom emitted energy (as, for example, hydrogen atoms did, giving off light particles that some physicists had labeled photons), it remained stable. The orbits of the electrons did not seem to decay.

In 1913, the Danish physicist Niels Bohr proposed a solution. Electrons, he suggested, don’t orbit in continuous smooth circles, like planets or satellites. Instead, they jump from discrete spot to discrete spot. When a hydrogen atom emits a photon, the electron loses energy, but it doesn’t spiral down; it “leaps” to a lower orbital path, one which is stable but takes less energy to maintain.

These jumps were known as quantum jumps. A few years earlier, the physicist Max Planck had discovered that he could only predict the behavior of certain kinds of radiation if he treated energy, not as a wave (radiating out smoothly and evenly, as was the accepted model) but as a series of chunks: separate particles, pulsing out at intervals. Planck called these hypothetical energy particles “quanta,” and he wasn’t happy with them. They were, he told a friend, a “formal assumption,” a mathematical sleight of hand, a way of “saving the phenomenon.” “What I did can be described as simply an act of desperation,” he explained. “It was clear to me that classical physics could offer no solution to this problem . . . [so] I was ready to sacrifice every one of my previous convictions about physical laws.”40

But then Einstein himself found that treating light as if it were made up of quanta, rather than waves, helped explain some previously perplexing properties. And now, Bohr had solved an atomic-level problem by proposing that an electron’s path was quantized. Quantum theory, announced Max Planck in his Nobel Prize Address of 1922—a clear and interesting summary of the field’s development—had the potential “to transform completely our physical concepts” of the universe.41

Yet its implications became increasingly odd. For example: in the new “Bohr-Rutherford model” of the atom, an electron performed a “quantum leap” between orbits, rather than gliding smoothly through consecutive space. This implied that, while making the leap, the electron was . . . nowhere.

It was also impossible to predict, with certainty, where the electron would reappear at the end of its jump. The best physicists could do was predict its probable place of reappearance. The theoretical physicist Werner Heisenberg, who worked extensively on this problem, pointed out (reasonably enough) that the uncertainty is infinitesimal once physics moves into the realm of objects larger than a molecule; an electron orbiting the nucleus of a hydrogen atom might make an unexpected leap, but a goat grazing on a hillside isn’t going anywhere unpredictable at all.

But other scientists found it maddening to be pushed into the realm of probabilities, rather than measurable certainties. “If we are going to have to put up with these damn quantum jumps,” the Austrian physicist Erwin Schrödinger complained to Niels Bohr, “I am sorry that I ever had anything to do with quantum theory.” Even Einstein, who had a high capacity for startling new ideas, objected that quantum theory was “spookish.” (“I cannot seriously believe in it,” he wrote to his friend Max Born, not long before Born won the Nobel Prize for his work in quantum mechanics.)42

Yet quantum theory continued to solve problems, despite the massive disturbance it had caused in the world of physics.

The Synthesists

Meanwhile, Darwinian evolution had begun to lose its grip on the scientific imagination.

Since Darwin had created the grand narrative of evolution, individual researchers in widely separated fields had been slotting new details into place: the existence of chromosomes, the laws of heredity, the presence of deoxyribonucleic acid (DNA) within the nucleus of cells. Better instruments, more data, and improved research techniques were yielding discoveries thick and fast, many of them (in new fields of study: cytology, biometry, embryology, genetics) filling in the empty fretwork of Darwin’s overarching structure.

But these studies were clogged with technical language, published in narrowly focused professional journals with tiny specialist audiences. There was, in Ernst Mayr’s words, “an extraordinary communication gap” between the sciences. Genetics had nothing to do with anthropology, or paleontology with biochemistry. Each researcher, viewing his (rarely her) own brick in the wall, had lost sight of the whole building. “The theory of evolution,” concluded the director of the National Museum of Natural History in Paris, in 1937, “will very soon be abandoned.”43

Yet individual discoveries in the life sciences were confirming, again and again, that natural selection did explain the present form of organic life. A defense of Darwin was needed: a defense which would connect the highly meaningful dots, explaining the ways in which the grand theory and specific discoveries acted together.

In 1937, the Russian entomologist Theodosius Dobzhansky published the first attempt to do just that: Genetics and the Origin of Species. The book was a synthesis of his laboratory experiments in genetics, his field observations on fruit fly inheritance, and his work in the mathematical field of population genetics. In the next decade, a handful of well-regarded biologists followed his lead. George Gaylord Simpson’s Tempo and Mode in Evolution, Bernhard Rensch’s Evolution above the Species Level, and Ernst Mayr’s Systematics and the Origin of Species, from the Viewpoint of a Zoologist all made the same argument: Darwinian natural selection did, indeed, account for the existence of species.

In 1942, yet another work on the topic appeared: Evolution: The Modern Synthesis, by the English biologist Julian Huxley (grandson, as it happened, of one of Darwin’s most ardent contemporary supporters, Thomas Huxley). Julian Huxley was not only a well-regarded biologist, but a skilled popular writer; a decade before, he had collaborated with the novelist H. G. Wells on a best-selling popular history of biology.

Evolution: The Modern Synthesis was a sprawling, multifaceted book. It covered, in turn, paleontology, genetics, geographical differentiation, ecology, taxonomy, and adaptation—but clearly, readably, without jargon. It was an instant success: “The outstanding evolutionary treatise of the decade, perhaps of the century,” exclaimed one of the most important journals of the field. From 1942 on, the ongoing attempt to connect specialized laboratory discoveries with the larger world of natural history, all in support of the Darwinian scheme, would take its name from Huxley’s book: the modern synthesis.44

Two years later, Erwin Schrödinger—still struggling with those damn quantum jumps—published another kind of synthesis. What Is Life? dealt with the overlap between quantum physics and biology, the common ground between the study of ourselves and the study of the cosmos. Using quantum theory to account for the behavior of orbiting electrons, Schrödinger showed how this behavior affected the formation of chemical bonds, and how those chemical bonds then affected cell behavior, genetics, evolutionary biology itself.

The success of What Is Life? as a synthesis can be measured by the number of physicists who were inspired, after reading it, to migrate over into biological research. “No doubt molecular biology would have developed without What Is Life?” writes Schrödinger’s biographer, Walter Moore, “but it would have been at a slower pace, and without some of its brightest stars. There is no other instance in the history of science in which a short semipopular book catalyzed the future development of a great field of research.”45

Semipopular: the word is a signpost, pointing out a shift in scientific writing.

What Is Life? was, first and foremost, written for other scientists. A biologist had once been able to glance over an entire kingdom. Now, it was a full-time job to keep up with discoveries in a single subspecialty: epigenetics, population genetics, genomics, phytochemistry, phylogenetics, and many more. The study of physics—the behavior of the universe—had increasingly focused itself on smaller and smaller segments of the cosmos, each requiring more and more specialized instrumentation: optics, photonics, particle physics, radio astronomy, quantum chemistry. New theories were written up for academic journals with very narrow audiences. The articles made use of technical vocabulary and arcane mathematical notation, inaccessible to nonspecialists—and even more so to the general public.

As discoveries multiplied, audiences shrank. Yet translating those discoveries for the wider reading public—the interested, intelligent layperson—turned out to be a fraught activity.

The Popularizers

A faint line had already been traced between professional and popular science writing.

In 1894, Julian Huxley’s grandfather had complained about the unwillingness of scientists to write plainly for the lay reader, for fear of lowering their prestige in their own fields: “[They] keep their fame as scientific hierophants,” T. H. Huxley grumbled, “unsullied by attempts—at least of the successful sort—to be understood of the people.” As the twentieth century wore on, the line between popularizers and academic scientists darkened. Best-selling books on science were widely scorned by professional researchers, and to be labeled a “mere popularizer” was death to an academic career.46

Simultaneously, the public thirst for science was growing greater and greater. The first daily science feature to run in a newspaper (“What’s What with Science,” by journalist Watson Davis) appeared in the Washington Herald in the 1920s; in the 1930s, the National Association of Science Writers (journalists, not professors) took shape. The end of World War II whetted interest in atomic science, and the startling launch of Sputnik by the Soviet Union in 1957 sparked a general demand for information about space.

Yet scientists were slow to feed this public appetite. “For better or worse, whether scientists like it or not,” mourned the Bulletin of the Atomic Scientists in 1963, “the public today gets its image of science, its information about science, and its understanding of scientific concepts largely from these nonscientists, the science writers.” Why not join the ranks of science writers themselves? Because most scientists believed themselves to be objective, unbiased, clearsighted hunters of truth. The “science writer,” on the other hand, “works in the world of journalism and is subject to its pressures, its traditions and conventions, and its biases.”47

Given this deepening hostility toward “popular” science, it is hardly surprising that the next influential science book to hit the shelves was written by a (female) outsider: Rachel Carson, a talented biologist who ran out of money after completing her M.A. in 1932, and was never able to complete her Ph.D. or gain an academic position. Instead, she wrote about science: first, for the Baltimore Sun, and then for the U.S. Fish and Wildlife Service. Her second book, 1951’s The Sea Around Us, was a best seller and National Book Award winner. But sales of her third book, Silent Spring, left it in the dust.

“There are very few books that can be said to have changed the course of history,” writes Carson’s biographer, Linda Lear, “but this was one of them.” Silent Spring began with a dreadful warning: “For the first time in the history of the world, every human being is now subjected to contact with dangerous chemicals, from the moment of conception until death.” The book went on to attack Western governments, the chemical industry, and the farming industry for the indiscriminate use of pesticides.

Silent Spring was not only a massive work of synthesis (between chemistry and biology, laboratory science and public policy, academic research and citizen activism, the study of man and the study of man’s entire world) but popular science at its best: well-informed and dramatic, a gripping blend of statistic and story, affecting every human being. Carson had demonstrated just how powerful popular science could be; and in the next two decades, an unprecedented raft of academic scientists defected to the popular fold.48

Life scientists led the pack. In 1967, the zoologist Desmond Morris teased out the full implications of Darwinian evolution for human behavior in The Naked Ape, an interpretation of man’s cultural behavior through the lens of biology: one of the first works of sociobiology. The following year, James Watson published an account of his work with Francis Crick on DNA. That odd little substance in the nucleus of the cell had been identified as the carrier of genetic information from one generation to the next, and in 1953, Crick and Watson together had proposed a double helix structure for DNA that made sense of the mechanism. Their model, which would not actually be observed for some decades, was chemically sound, tested worldwide, and soon accepted by biologists everywhere. Watson’s 1968 bestseller, The Double Helix: A Personal Account of the Discovery of the Structure of DNA, mixed science with memoir, and made DNA a household word.

In 1976, Oxford biologist Richard Dawkins took the story of DNA further in The Selfish Gene, which offered a comprehensive explanation for all organic life, including ours. “Intelligent life on a planet comes of age when it first works out the reason for its own existence,” Dawkins begins, and the reason he has worked out is a simple one: we eat, sleep, have sex, think, write, build space vehicles and war machines, sacrifice ourselves or others, all in order to preserve our DNA. Natural selection happens at the most basic level, the molecular; our bodies have evolved to do nothing more than protect and propagate our genes, which are ruthlessly selfish molecules working to ensure their own survival.49

This was not a comforting view of human nature, but popular science was proving a perfect vehicle for scientists to make the sort of sweeping conclusions (about human existence, all of culture, the cosmos itself) that scientific papers and journal articles rarely contained.

In 1977, Steven Weinberg’s smash hit The First Three Minutes leapt directly from physics to metaphysics. Weinberg explained the so-called “Big Bang,” the expansion of the entire universe from an original super-dense point known as a singularity—and then went further:

It is almost irresistible for humans to believe that we have some special relation to the universe . . . that human life is not just a more-or-less farcical outcome of a chain of accidents reaching back to the first three minutes. . . . The more the universe seems comprehensible, the more it also seems pointless.

That conclusion (one that certainly reaches beyond the Baconian project) leads him to an even broader statement about the purpose of human existence. “If there is no solace in the fruits of our research,” Weinberg concludes, at the very end of the book, “there is at least some consolation in the research itself. . . . The effort to understand the universe is one of the very few things that lifts human life a little above the level of farce, and gives it some of the grace of tragedy.”50

Popular science was itself evolving. It was more than information, more than entertainment, more than a call to activism. It offered scientists a chance to make broader conclusions about human life: to explain not just what, but who and why we are.

In some ways, popular science did succumb to the “traditions and conventions” of the marketplace, as the Bulletin of the Atomic Scientists had gloomily foretold. Scientists were forced to write in ways that would grab, and keep, their readers; witness the fairy-tale opening of Silent Spring (“Some evil spell had settled on the community . . . Everywhere was a shadow of death”), the vivid analogies of The First Three Minutes (“If some ill-advised giant were to wiggle the sun back and forth, we on earth would not feel the effect for eight minutes, the time required for a wave to travel at the speed of light from the sun to the earth”), and the epic first chapter of Walter Alvarez’s T. rex and the Crater of Doom, which is titled “Armageddon” and begins with an epigraph from the Lord of the Rings.

The hostility between popular and academic science grew more nuanced and complex, but didn’t go away. “Popularization,” concluded a 1985 study of the relationship, “is traditionally seen as a low-status activity . . . something external to research which can be left to non-scientists, failed scientists, or ex-scientists.” Among scientists, the Oprah Effect became known as the Sagan Effect, “whereby one’s popularity and celebrity with the general public were thought to be inversely proportional to the quantity and quality of real science being done.”51

Science writing, increasingly, traveled down two different paths: one broad and well-trodden, the other narrow and high-walled. New discoveries and groundbreaking theories were first floated in journals, articles, and conference talks, and slowly disseminated through the scientific community. Only then did they take book form and enter the general consciousness. James Gleick’s best-selling Chaos: Making a New Science came out in 1987, twelve years after the mathematicians Tien-Yien Li and James A. Yorke used the term chaos theory in their technical paper about nonlinear equations, and twenty-four years after Edward Lorenz had first described the phenomenon. And Stephen Hawking’s cosmology overview A Brief History of Time, published in 1988, sold over 10 million copies—but contained nothing revolutionary at all.

T. rex and the Crater of Doom, Walter Alvarez’s widely read account of his detective work in finding tracks of the asteroid that (theoretically) wiped out the dinosaurs, came out in 1997, seventeen years after Alvarez and his colleagues first published their theory as an academic paper (“Extraterrestrial Cause for the Cretaceous-Tertiary Extinction”). Alvarez’s dramatic scenarios (“Doom was coming out of the sky . . . Entire forests were ignited, and continent-sized wildfires swept across the lands . . . [A] wall of water . . . towered above the shorelines”) were immediately incorporated into the movies Deep Impact and Armageddon, sparking an entire subgenre of films about the end of the earth—and also gave rise to academic conferences (such as 2009’s “Near-Earth Objects: Risks, Responses and Opportunities,” hosted by the University of Nebraska–Lincoln) and at least one multinational committee tasked with “establishing global frameworks to respond to NEO threats.” Popular science writing had not only grasped the public imagination; it had altered public policy—and even turned back to shape the academy.52


All of the books on the annotated list can be read by nonspecialists, but be prepared to take some time. As you’ll see from the steps listed below, science should be approached with a slightly different attitude than the other books we’ve discussed. Your first read-through is where the really hard work happens; understanding the context and content of the text is the greatest challenge (which is why this chapter has a much longer “history of” section, and much shorter “how to read” assignments). Don’t rush the first read-through, and make use of any reference works or guides necessary.

Keep your purpose in mind, though. You aren’t trying to master physics, or genetics, or biochemistry. You are attempting to learn something about the development of human understanding, the ways in which we have used our reason and our senses to comprehend the world. As Mortimer Adler wrote, over forty years ago, “As a layman, you do not read the classical scientific books to become knowledgeable in their subject matters in a contemporary sense. Instead, you read them to understand the history and philosophy of science.” That task is well within the ability of any serious reader—even if you don’t remember anything about your college survey course in cosmology.53

The First Level of Inquiry: Grammar-Stage Reading

Read a synopsis.    Before this point, you’ve always started with the book itself. But when you’re reading science—particularly the pre–twentieth century works—your chances of understanding the book on your first read-through will be much improved if you have some idea of what it’s about before you crack it open. Unlike history, which is about human experience (something you have firsthand knowledge of), science is about a construct: an interrelated set of ideas and theories that you might not be at all familiar with. Reading a summary of Aristotle’s Physics or the Commentariolus of Copernicus will introduce you to the construct and give you some sense of the book’s structure.

If the book contains an introduction written by an expert in the field, that introduction probably contains a brief summary of the book’s content. If the book itself doesn’t contain a synopsis, look online. Searching for “aristotle physics synopsis,” for example, brings up summaries at Sparknotes and the Stanford Encyclopedia of Philosophy (both reliable sources), as well as multiple summaries written by university instructors and posted on course websites. A search for “stephen hawking brief history of time summary” brings up several reviews from reputable papers that include a survey of the book’s content, as well as a number of reader-generated guides and a Wikipedia entry. These are perfectly acceptable—you’re going to read the book yourself, after all, so you’ll discover any inaccuracies as you go. Your goal with this step is simply to put yourself into the same frame as the book: to acquaint yourself with the context in which the author was writing, the primary arguments made, and any concepts central to the book’s development.

Look at the title, cover, and table of contents.    As you did with your histories, note down the title, the author’s name, and the original publication date. Read through the table of contents to get a sense of the topics the author will cover.

Define the audience and its relationship to the author.    Who is the author, and for whom is he or she writing? A scientist writing primarily for other scientists, as Julian Huxley did? A scientist writing for laypeople? A nonscientist digesting technical information for other nonscientists? The cover copy, back cover summary, and introduction, preface, or foreword of the book can point you toward the answers.

Keep a list of terms and definitions.

Now, start reading.

As you read, look for technical terms and their statements of definition. Write them in your journal for reference.

For example, in the first chapter of Steven Weinberg’s The First Three Minutes, you will encounter the “electron, the negatively charged particle that flows through wires in electric currents and makes up the outer parts of all atoms and molecules” and the “positron, a positively charged particle with precisely the same mass as the electron.” The beginning of James Lovelock’s Gaia offers, “An aeon represents 1,000 million years,” “A supernova is the explosion of a large star.”

These are fairly straightforward (and if you already understand a technical term, you don’t need to write it down). But a statement of definition can also be a little more complex. On the first page of Galileo’s Dialogue Concerning the Two Chief World Systems, for example, the character Salviati observes that there are in nature “two substances which differ essentially. These are the celestial and the elemental, the former being invariant and eternal; the latter, temporary and destructible.” This is a statement of definition; the terms “celestial” and “elemental” will be important as Galileo’s argument develops, so you will want to write these terms in your notebook as

two substances in nature
     celestial: unvarying and eternal
     elemental: temporary and destructible

If you’re having trouble locating the statements of definition, keep a look out for sentences which take the form noun [the term being defined], state of being verb/linking verb, and then description OR predicate nominative.

“The second motion, which is peculiar to the earth, is the daily rotation on the poles . . . from west to east.” (Nicolaus Copernicus, Commentariolus)

second motion of the earth: daily rotation on poles from west to east

“The pair-formation stage . . . is characterized by tentative, ambivalent behaviour involving conflicts between fear, aggression and sexual attraction.” (Desmond Morris, The Naked Ape)

pair-formation stage: tentative ambivalent behavior, conflict of fear, aggression, attraction

Whenever you run across an italicized or bold word or phrase, be sure to find its definition. In many cases, these have been set off because they come at the end of a longer, somewhat complicated paragraph (or paragraphs) of definition. For example, in Chapter Five of The Origin of Species, Darwin writes:

Hence, when an organ, however abnormal it may be, has been transmitted in approximately the same condition to many modified descendants, as in the case of the wing of the bat, it must have existed, according to our theory, for an immense period in nearly the same state; and thus it has come not to be more variable than any other structure. It is only in those cases in which the modification has been comparatively recent and extraordinarily great that we ought to find the generative variability, as it may be called, still present in a high degree.

“Generative variability,” it turns out, is the term he has decided to assign to a type of modification that he’s been describing over the previous two pages. Looking back, I can paraphrase the (somewhat convoluted) explanation as:

generative variability: when very rapid and recent changes in a species mean that not all members of the species have a particular variation

It’s absolutely OK to “cheat” in order to find definitions. Science writers do not always provide the clearest possible definitions for their terms, and even after rereading the text I’m not entirely sure that I understand what Darwin means. If I do an online search for “generative variability + Darwin,” I mostly end up with reprints of the Origin of Species text, but if I search for “generative variability is,” I find the following explanation:

Generative variability is variation manifest in structures that have recently experienced rapid and considerable evolutionary change. Darwin envisions this as a dynamic process. Given enough time—after the structure has reached its maximum extent of development—selection weeds out most of the deviations and the trait ends up fixed. (James T. Costa, The Annotated Origin, Harvard University Press, 2009, p. 154)

Any time you can’t quite figure out the meaning of a term, use reference tools to sharpen your understanding.

The more unfamiliar terms a book contains, the longer it will take for you to do your first reading. Don’t lose heart. For science books, the first reading is the hardest; your second and third levels of inquiry will go much more quickly (and smoothly) if you take the time now to understand exactly what the book is saying.

Mark anything that still confuses you and keep reading.    You will probably find that some pages, sections, or even entire chapters of these books still confuse you. Don’t get stalled. Take a reasonable amount of time to look up definitions, and then, if you remain puzzled, bookmark or turn down the page and keep going.

Your primary goal, on this first read-through, is to get through to the end. In most great books of science, the last chapter is the clearest and most straightforward, because the author—having done the difficult and painstaking work of laying out the evidence and drawing conclusions from it—is free to explain what it all means. Not only is the conclusion (usually) easier to read, but it tends to illuminate everything that came before: once you know where the book is heading, it’s much simpler to make sense of the details that line the path.

The Second Level of Inquiry: Logic-Stage Reading

Go back to your marked sections and figure out what they mean. Once you’ve reached the last page, you’re ready to go back and reread those confusing sections.

Are they technically confusing? If you simply don’t understand the concepts, bring in some other experts to help. Do an online search for explanations; look for university websites and excerpts from published books, as these tend to be more reliable than personal websites or blogs. Or consult an encyclopedia of science such as James Trefil’s The Encyclopedia of Science and Technology (Routledge, 2014), the McGraw-Hill Concise Encyclopedia of Science and Technology (6th ed., McGraw-Hill, 2009), or the hyperbolically titled Science Desk Reference: Everything You Need to Know about Science, From the Origins of Life to the Ends of the Universe, ed. John Rennie (Scientific American/John Wiley, 1999).

Are they linguistically confusing? Try rewriting the section in your own words. Begin with a sentence-by-sentence paraphrase, and then attempt to summarize your paraphrase in a single paragraph.

A related method that some readers find helpful is to outline the text in question instead. Try to identify the main topic of each paragraph; assign that topic a Roman numeral (I, II, III . . .). Then, ask yourself: What are the most important pieces of information about this idea? Assign capital letters (A, B, C . . .) to those ideas. If necessary, you can then identify details about each idea and list them with Arabic numerals (1, 2, 3 . . .).

Define the field of inquiry.    What set of phenomena, exactly, is the writer studying? And to what field of science do they belong? Aristotle’s Physics is an attempt at a unified theory of the universe, encompassing astronomy, cosmology, physics, biology, and mathematics; Galileo’s Dialogues brings physics as well as astronomy to the table. Walter Alvarez’s T. rex, the latest book on the annotated list, is rooted in Alvarez’s training as a geologist, but paleontology plays a large role in Alvarez’s investigations, and Alvarez himself now teaches a course in cosmology (“Big History”).

First, locate the work within one of the major divisions of science: earth science, astronomy, biology, chemistry, physics. Then, spend some time investigating the sub-branches of each. For this purpose, Wikipedia can be very useful, as it offers multiple charts and ways of connecting the sciences; you can also make use of one of the science encyclopedias listed above, or do an online search for “branches of science.”

Now try to identify the subfields of science that the work in question encompasses. You can spend as much or as little time on this project as you find helpful; draw diagrams or branch charts of your own, if useful; read up a little on the kinds of work done in the fields; or simply identify them and move on. Each scientific field has its own conventions; each has its own history, rooted at a particular point in time; each prioritizes a certain kind of evidence, which leads to the next step . . .

What sort of evidence does the writer cite?    Are the writer’s conclusions based on observations, such as Hooke’s microscopic studies, or Darwin’s notes on species seen in the Galápagos Islands? If so, how were those observations made? In person? Gathered from the works of others? What helps and instruments were used? Did those instruments introduce any distortion into the observation? What kind of distortion?

Are the conclusions experimental, set up in a laboratory and carried out in order to test a particular hypothesis? Where were the experiments done, and by whom? How many times were they repeated? Have they been confirmed by other scientists? (You might have to do a little external research to answer that question.)

What part does anecdote play? Rachel Carson’s Silent Spring offers both observational and experimental evidence to demonstrate the destruction caused by pesticides, but she also relies on a series of stories, such as those told by the residents of southeastern Michigan about a 1959 spraying for Japanese beetles. (“A woman . . . reported that coming home from church she saw an alarming number of dead and dying birds . . . A local veterinarian reported that his office was full of clients with dogs and cats that had suddenly sickened.”)

Identify the places in which the work is deductive, and the areas where it is inductive.    Does the writer begin with a “big idea” and then work down to specifics, as Aristotle and Alfred Wegener do? This is the deductive method: beginning with a large concept or overall theory, and then looking for pieces of evidence to support it. Or, does the writer start out with individual observations, inconvenient facts, experimental results that can’t be explained under current theories, and then generalize to a larger hypothesis? If so, the work is primarily inductive in nature.

Despite the elevation of inductive thinking in modern science, almost all researchers also make use of deductive thinking, and the relationship between the two is complex. Walter Alvarez found iridium where it should not have been; this led him to theorize that perhaps a comet or asteroid had struck the earth (inductive). If the comet struck the earth, there should be an impact crater; so he then spent years searching for the impact crater. This search led him to interpret the sediment layers on the Yucatán peninsula in reference to impact, which in turn led him to the conclusion that he had discovered the crater. This is deduction: beginning with the assumption that the crater existed, and then looking for the evidence to support it.

Flag anything that sounds like a statement of conclusion.    “I believe,” writes Darwin, as he rejects Lamarck’s theory of use and disuse in favor of his own variation by natural selection, “that the effects of habit are of quite subordinate importance to the effects of . . . natural selection.”

Darwin helpfully precedes many of his statements of conclusion with “I believe,” but the statement of conclusion can take a number of forms. “The universe will certainly go on expanding for a while,” writes Steven Weinberg. “It obviously follows that if we are to gain scientific knowledge of nature,” Aristotle concludes, “we should begin by trying to decide about its principles.” And James Lovelock tells us, “The theory of Gaia has developed to the stage where it can now be demonstrated, with the aid of numerical models and computers, that a diverse chain of predators and prey is a more stable and stronger ecosystem than a single self-contained species, or a small group of very limited mix.”

Look for the following markers:

Therefore . . . [or thus, or other related words; Darwin is fond of “hence”]

It is clear that . . .

I believe . . .

We now know . . .

It can be demonstrated that . . .

Certainly . . .

It is obvious that . . .

It follows that . . .

Scientists now agree that . . .

Once you’ve located the conclusions, jot them down (in your own words, if you prefer) in your journal.

Now, you’re ready to move on to the final level of inquiry.

The Third Level of Inquiry: Rhetoric-Stage Reading

For nonscientists, it isn’t easy to answer the most basic question of the rhetoric stage: Do you agree?

You can certainly attempt to evaluate the connection between evidence and conclusion, making use of the techniques suggested on pages 198–206 of my chapter on history. But science writing, particularly in the twentieth century and beyond, often cites evidence that is impossible for the lay reader to evaluate. If you’re determined to test Galileo’s conclusions, you can drop two different weights off your second-floor deck and watch them strike the ground; but most of us are not going to have much luck reproducing the quantum leap of a decaying atom, or the nonlinear equations of a chaotic system.

So the final stage of reaction to each text needs to be slightly more philosophical. When Steven Weinberg tells us that the present universe “faces a future extinction of endless or intolerable heat,” non-physicists are obliged to take him at face value. But when he adds that “working out the meaning of the data” accumulated by science is “one of the very few things that lifts human life a little above the level of farce,” we should feel free to argue back.

Consider asking two large questions of each work.

What metaphors, analogies, stories, and other literary techniques appear, and why are they there?   The first chapter of The First Three Minutes begins, rather unexpectedly, with the Viking origin myth found in the Edda: the universe emerged as a cosmic cow began to devour a salt-lick. This is more than an engaging, reader-friendly opener—as Weinberg’s conclusion (“Men and women are [no longer] content to comfort themselves with tales of gods and giants”) makes clear. Weinberg isn’t just writing about the first three minutes; he’s providing an alternative origin story, one that can take the place of religious explanation.

Metaphors and narratives, in other words, give clues to the science writer’s basic argument. The opening scene of Rachel Carson’s Silent Spring immediately sets up a contrast between the good life of the rural past and the world of the chemical companies—a commercial, industrial, unnatural society. Even Albert Einstein’s opening metaphor in Relativity: The Special and General Theory points the reader toward Einstein’s underlying theory of knowledge: “In your schooldays,” he writes, “most of you who read this book made acquaintance with the noble building of Euclid’s geometry, and you remember—perhaps with more respect than love—the magnificent structure, on the lofty staircase of which you were chased about for uncounted hours by conscientious teachers.” Staircases lead to upper levels, magnificent ones: mathematics is the staircase that we climb to find truth.

Find the metaphors, or stories, or narratives. Ask yourself: Why this metaphor? Why this particular story? What does it tell me about the writer’s assumptions?

Are there broader conclusions?    Isaac Newton famously remarked that, while he could describe how gravity acted, he felt no need to explain why. He did not intend to explain the nature of the cosmos. He simply wanted to discover its laws.

He was in the minority. Many of the books on the annotated list go well beyond the boundary Newton erected—from Lucretius’s insistence that all religious belief darkens the mind, to Stephen Hawking’s speculation that a unified theory of physics might actually answer “the question of why it is that we and the universe exist.”

Which texts make sweeping statements about the nature of man, the ultimate purpose of our existence, the why of the cosmos? What are those statements? Do you agree with them? If so, is it because the writer has convinced you that those broader statements arise logically out of the evidence presented? And if you disagree, why?


The following books are chosen, not to give you a comprehensive overview of the greatest discoveries in science (that would require a much longer list) but to highlight the ways in which we think about science. It is a reader’s list for nonspecialists, so important books that are highly technical and equation-heavy (Euclid’s Elements, for example) are not on it.

It isn’t necessary to read every word of the older texts. Dipping into Hippocrates will give you a good sense of his method; Aristotle’s Physics certainly doesn’t have to be mastered in every detail before you move on; and if you examine a few of the illustrations in Micrographia, you’ll be perfectly well equipped to understand Robert Hooke’s revolutionary ideas.

From Silent Spring on, many of the books are available as unabridged audios. But almost all of these books contain graphs, illustrations, and charts that will help your understanding—so consider the audio versions supplemental.


On Airs, Waters, and Places

(460–370 B.C.)

Best translations: The nineteenth-century Francis Adams translation, still readable, is widely available online. It is included in several printed collections simply titled The Corpus; editions include paperback reprints by Kessinger Legacy (2004) and Kaplan Classics of Medicine (2008). A more modern translation is included in the Penguin Classics paperback, Hippocratic Writings, trans. G. E. R. Lloyd, John Chadwick, and W. N. Mann (1983). The sentence structure is slightly easier to follow, but the two translations are very similar.

The neuroscientist Charles Gross once characterized Hippocratic medicine as combining “absence of superstition, accurate clinical description, ignorance of anatomy, and a physiology that is largely an absurd mixture of false analogy, speculation, and humoral theory.”54 All four of those characteristics are on full display in “On Airs, Waters, and Places.”

“Whoever wishes to investigate medicine properly,” the essay begins, “should proceed thus: in the first place to consider . . . the winds . . . the qualities of the waters . . . and the grounds.” The cures for mankind’s various bodily ills will not be found in prayer, but in a better understanding of the natural world.

So the physician must understand his patients’ surroundings: winds, waters, temperatures, and elevations of particular cities shape the health of their inhabitants. Each place has its own peculiar kind of air and water, so each also has its own kind of diseases. A city that is exposed to hot southern winds, for example, will be filled with flabby men and women who don’t eat and drink much and suffer from too much phlegm; babies are subject to convulsions and asthma, and the most common diseases are dysentery, diarrhea, chronic winter fevers, and hemorrhoids. By contrast, cities which are sheltered from hot southern breezes but open to northern winds have hard, cold water. Their inhabitants suffer from a lack of correct bodily fluids; the men are prone to constipation, the women often have trouble nursing their babies, and everyone is subject to nosebleed and stroke. To treat his patients, the physician must first analyze the natural surroundings, and then shift the sick from one climate to another in order to encourage production and balance of the appropriate humors.

Salted into this theorizing are some perfectly valid observations; for example, that “marshy, stagnant” waters with “a strong smell” are unwholesome and will cause illness. Hippocratic medicine chalked this unwholesomeness up to humoral imbalance: bad-smelling waters produce too much bile, which makes those who drink them sick. This was, of course, the wrong explanation. But the Hippocratic physician could at least see the connection between foul water and the subsequent stomach upset in his patient. In seeking to connect natural causes to natural effects, the Hippocratic approach took the first huge step away from magical thinking.



Physics

(c. 330 B.C.)

Best translations: Robin Waterfield’s translation for Oxford World’s Classics (1999) is clear and fluid. In addition, the R. P. Hardie and R. K. Gaye translation, done as part of a forty-year effort to translate Aristotle into a standard English version (the “Oxford Translation”), is widely available and is still very readable; a good edition is the Clarendon Press Physics (1930), available as a free ebook.

The Physics is divided into eight books, but the first two are the most important. Book 1 establishes Aristotle’s scientific method: He recommends beginning with our general understanding of the universe (“the things which are more knowable and obvious to us”) and proceeding from these general ideas to the specific examination (always shaped by our previous understanding) of specific things, or phenomena (“clearer and more knowable by nature”). This is deductive reasoning (starting with a general truth and reasoning your way to logically necessary conclusions) rather than inductive reasoning (beginning with individual observations and reasoning your way toward a general explanation that accounts for them). Modern science relies on inductive reasoning, but not until the seventeenth century would deductive reasoning give way to its rival.

Book 2 defines “nature” in terms of the principle of internal change: Natural things contain within themselves a principle of motion, while things constructed by men (“art”) do not. A sapling grows into a tree because of its intrinsic principle of motion; a house or a bed, although made of wood, never grows into anything else; it is a work of art, and remains a house or a bed. The principle of motion is purposeful: motion propels natural things, inexorably, toward an end which is predetermined.

Throughout the Physics, Aristotle assumes that the world is evolving toward something better. This is, of course, not exactly what we mean by evolution today: modern biological evolution has no predetermined goal, no overall design. Aristotle’s science, on the other hand, is teleological, firmly convinced that nature is developing, purposefully, toward a more fully realized end. But this end was not (as medieval science, baptized into Christianity, would assume) set into place by a Creator. A sprout becomes a tree because its treeness is already inherent in it. For Aristotle, teleology is not an external guiding force, but an internal potentiality.


On the Nature of Things (De Rerum Natura)

(c. 60 B.C.)

Best translation: Lucretius wrote in Latin verse, the scientific medium of the ancient world. Ronald Melville’s translation On the Nature of the Universe (Oxford University Press, 2009) is a clear and elegant poetic version; if you’d rather have Lucretius in prose, try Ronald E. Latham’s translation for Penguin Classics (rev. ed., 1994).

Lucretius lays out three key positions. First, religion is mere superstition: “We start then from [Nature’s] first great principle,” he writes, “that nothing ever by divine power comes from nothing” (1.148–149). Belief in the gods darkens the mind, making thinkers unable to reach any true or accurate understanding of the world. Book One opens with a paean to Epicurus, the first man who dared to teach that the gods did not control daily life, and continues to develop a philosophy of complete materialism. Doing away with belief in the divine, Lucretius argues, opens the mind’s eye: “The terrors of the mind flee all away,” he explains, in Book Three, “the walls of heaven open, and through the void/ Immeasurable, the truth of things I see” (3.16–17).

Second, a degenerative principle is at work in the universe. All things are continually struck by an ongoing hail of atoms, which wears away at them; eventually, everything in the cosmos will decay (“So shall the ramparts of the mighty world / Themselves be stormed and into crumbling ruin / Collapse”) (2.1145–1147). Book Two is one of the earliest written attempts to lay out a philosophy of entropy.

Third: there is no plan in the universe. All that is has come from a chance collision of the atomic particles which make up the world. Book Five explains all of human history as the result of randomness: “For sure,” Lucretius sums up, “not by design or intelligence / Did primal atoms place themselves in order” (5.419–420). No other explanation accounts for the random aspects that Lucretius sees in the world around him: a place of inhospitality, ill fortune, and death.



Commentariolus

(c. 1514)

Best translation: The Commentariolus is included, along with a summary of Copernicus’s work written by his champion Rheticus (the Narratio Prima) and a letter written by Copernicus disproving the calculations of the astronomer Johannes Werner (the Letter against Werner), in the paperback Three Copernican Treatises, translated by Edward Rosen (2nd rev. ed., Dover Publications, 2004). If you feel adventurous, you can tackle On the Revolutions of the Heavenly Spheres itself. The early twentieth-century translation by Charles Glenn Wallis has been reprinted in paperback by Prometheus Books (1995) and by Running Press (2002, with notes by Stephen Hawking).

The Commentariolus begins with a brief statement of the presenting problem: even with the employment of eccentrics, epicycles, and equants, planets do not move with “uniform velocity.” The problem can be partially solved, Copernicus explains, if the sun is at the center of the universe.

Much of the Commentariolus is devoted to explaining this new universe, but Copernicus also tackles the movement of the earth, which is threefold: it “revolves annually in a great circle about the sun,” it rotates on its own axis, and it also tilts from side to side, over the course of the seasons. These movements cause “the entire universe” to appear to “revolve with enormous speed” around the earth, but this, Copernicus concludes, is merely illusion: “The motion of the earth can explain all these changes in a less surprising way.”

Throughout, the Commentariolus is dedicated to finding the simplest explanation. Yet, as Copernicus goes on to investigate the motion of each planet, he finds himself building more and more shells around the sun, an increasingly complex interlocking series of spheres. His simple explanation eventually wraps him into a ridiculously complicated final statement: “Altogether,” he concludes, “thirty-four circles suffice to explain the entire structure of the universe and the entire ballet of the planets.”


Novum Organum

(1620)

Best translations: The nineteenth-century translation by James Spedding and Robert Ellis remains readable. It is still the most commonly reprinted and can be read in multiple free ebook versions, such as The Philosophical Works of Francis Bacon, trans. and ed. James Spedding and Robert Ellis, Vol. IV (Longman & Co., 1861). A more recent translation with introduction, outline, and explanatory notes is The New Organon, ed. Lisa Jardine and Michael Silverthorn (Cambridge University Press, 2000). The notes are useful, but the translation, while more contemporary, is not always clearer.

Since Aristotle, deductive reasoning had ruled the practice of science; Bacon sets out to overthrow it. On the cover of the first edition of the Novum Organum, Bacon placed a ship—his new inductive method—sailing triumphantly past the Pillars of Hercules: the mythological pillars that marked the furthest reach of Hercules’s journey to the “far west,” the outermost boundaries of the ancient world, the greatest extent of the old way of knowledge.

Book I begins with “Aphorisms,” brief independent statements that lay out Bacon’s objections to the current methods in use in natural science. Deductive reasoning, Bacon objects, tends to reinforce four inaccurate ways of thinking. He calls these the “Idols of the Tribe” (general assumptions that all of society accepts as common sense and no longer questions), the “Idols of the Cave” (assumptions that seem natural to individual thinkers because of their own peculiar education, or experience, or inborn tendencies), the “Idols of the Marketplace” (the careless assumption that words and definitions carry the same meaning to every listener), and the “Idols of the Theatre” (assumptions based on philosophical systems handed down from ancient times). In Section 82, he lays out his alternative proposal for finding knowledge, the three steps that (eventually) developed into the modern scientific method.

Book II expands on Bacon’s central theme: if men could only “lay aside received opinions” (all those idols), and “refrain the mind for a time from the highest generalizations,” the “native and genuine force of the mind” will impart understanding. It isn’t necessary to read all of Book II, which dissects various physical processes in order to prove Bacon’s point and ends with Bacon’s attempt to divide the study of natural history into categories.


Dialogue Concerning the Two Chief World Systems

(1632)

Best translation: The most readable is Stillman Drake’s, originally published in 1953 and now available in a nicely revised and annotated edition from the Modern Library Science Series: Dialogue Concerning the Two Chief World Systems: Ptolemaic and Copernican (2001).

By the time Galileo published the Dialogue, Cardinal Bellarmine was dead; but the Inquisition was still alive and active, so the Dialogue is framed as a hypothetical discussion among three friends as to whether the heliocentric, geokinetic model could, theoretically, prove to be the best possible picture of the universe.

The Copernican model is defended by the thoughtful and intelligent characters Salviati and Sagredo; all Inquisition-approved opinions are voiced by the least sympathetic character, the clearly ignorant and incompetent Simplicio, blindly loyal to Aristotle, willing to check his reason at the door. The ruse was sufficient to get the Dialogue past the initial censor, the Dominican theologian Niccolo Riccardi, although Riccardi insisted on a preface that recognized the Church’s objections to heliocentrism as perfectly valid. He also wanted a disclaimer at the end, cautioning that the tides could be understood without recourse to a moving earth.

Galileo promptly supplied a highly sarcastic preface (“Several years ago there was published in Rome a salutary edict which, in order to obviate the dangerous tendencies of our present age, imposed a seasonable silence upon the . . . opinion that the earth moves”), and placed in Simplicio’s mouth an ending assertion that God, “in His infinite power and wisdom,” was probably causing the tides to move “in many ways which are unthinkable to our minds.” This temporarily satisfied the censor, but didn’t fool any of Galileo’s scientific colleagues.

The Dialogue is divided into four books of discussion, each taking place over the course of a day. The discussions of the First and Second Days are the most central; the Third and Fourth Days expand on the problems of motion laid out in the first two parts.



Micrographia

(1665)

Best editions: Although multiple reprints of the Micrographia are available, few of them reproduce Hooke’s groundbreaking illustrations at the original size or with decent detail. The best way to view the illustrations is in the Octavo CD, which offers clear scans of the actual pages of the original book, in PDFs that can be magnified, rotated, and viewed in color or black and white (Octavo Digital Rare Books, CD-ROM, 1998). However, the text itself, complete with unmodernized spelling, is extremely difficult to make out in the Octavo scans. Consider turning to one of the free ebook versions (such as that found at Project Gutenberg) or a paperback reprint (Cosimo Classics, 2007) in order to read Hooke’s accompanying essays.

First, read the Preface, in which Hooke explains the relationship between the senses and the faculty of reason. Then, take some time to examine Hooke’s prints. The first fifty-seven illustrations and observations are microscopic; the last three, of refracted light, stars, and the moon, are telescopic.

Throughout Micrographia, Hooke uses his close observations—the extension of the senses through artificial means—as the launching place for new ways of thinking. Ultimately, his instruments augment human reason, not just human senses. Close observation leads to new theories; new theories lead to new paradigms.

Using William Harvey’s circulatory system as his analogy, Hooke explains in the Preface that true natural philosophy

is to begin with the Hands and Eyes, and to proceed on through the Memory, to be continued by the Reason; nor is it to stop there, but to come about to the Hands and Eyes again, and so, by a continual passage round from one Faculty to another, it is to be maintained in life and strength, as much as the body of man is by the circulation of the blood through the several parts of the body, the Arms, the Feet, the Lungs, the Heart, and the Head. If once this method were followed with diligence and attention, there is nothing that lies within the power of human Wit . . . Talking and contention of Arguments would soon be turned into labours; all the fine dreams of Opinions, and universal metaphysical natures, which the luxury of subtle brains has devised, would quickly vanish, and give place to solid Histories, Experiments and Works. And as at first, mankind fell by tasting of the forbidden Tree of Knowledge, so we, their posterity, may be in part restored by the same way, not only by beholding and contemplating, but by tasting too those fruits of natural knowledge, that were never yet forbidden.

Instruments and helps are no longer merely extensions of the senses; they become, for Hooke, the Tree of Knowledge, the path to perfection.


“Rules” and “General Scholium” from Philosophiae Naturalis Principia Mathematica

(1687)

Best translations: Selected excerpts from the Principia (including the “Rules” and “General Scholium”) can be found in the Norton Critical Edition of Newton’s work: Newton: Texts, Backgrounds, Commentaries, ed. and trans. I. Bernard Cohen and Richard S. Westfall (W. W. Norton, 1995). The entire Principia has been translated by I. Bernard Cohen and Anne Whitman in the massive (950-page) paperback The Principia: Mathematical Principles of Natural Philosophy: A New Translation (University of California Press, 1999). A simpler way to read the entire book is to search for the public domain 1729 translation by Andrew Motte, which is not significantly more difficult to read.

The four books of the Principia lay out the rules by which gravity functions. Throughout, Newton establishes and makes use of three principles (“Newton’s Laws of Motion”). The Law of Inertia states that objects in motion remain in motion, and objects at rest remain at rest (unless an outside force is applied). The Law of Acceleration states that, when a force is applied to a mass, acceleration results; the greater the mass, the greater the force needed to produce acceleration. And the Law of Action and Reaction states that, for every action, there is an equal and opposite reaction. Books I and II establish these laws of motion, both in the abstract (without any friction present) and in the presence of resistance; the remainder of the Principia deals with gravity as a universal force.

The Rules of Reasoning explain why Newton can be sure that these laws function everywhere in the universe. He was concerned that his critics might accuse him of offering a mere “ingenious Romance,” rather than a reliable hypothesis. So, in the Rules, Newton sets out to show that experimental conclusions can be generalized to reach beyond the scope of individual experiments.

Then, in the General Scholium (which also contains a famous discussion of the place of God in natural philosophy) Newton places limits on the method. Gravity, Newton explains, is a force

that penetrates as far as the centers of the sun and planets without any diminution of its power to act, and that acts not in proportion to the quantity of the surfaces of the particles on which it acts . . . but in proportion to the quantity of solid matter, and whose action is extended everywhere to immense distances, always decreasing as the squares of the distances.

But, he cautions, “I have not yet assigned a cause to gravity.” He could deduce the laws of gravity from his experiments on earth, but the reason for gravity lay beyond his grasp. Nor did he feel it was necessary for him to explain why it existed: “It is enough,” he concludes, “that gravity really exists and acts according to the laws that we have set forth and is sufficient to explain all the motions of the heavenly bodies and of our sea.” In extending the reach of the experimental method across the universe, he had also been careful to erect a boundary fence on the other side: Science can tell what, but it has no responsibility to tell why.


“Preliminary Discourse”

(1812)

Best translations: Martin J. S. Rudwick’s, in Georges Cuvier, Fossil Bones, and Geological Catastrophes: New Translations & Interpretations of the Primary Texts (University of Chicago Press, 1998). The “Preliminary Discourse,” titled “The Revolutions of the Globe” (the title often given the discourse when published separately), is found in Chapter 15. There is no need to read all of Rudwick’s preface, which is almost as long as the discourse but less elegantly written. You can also search for Robert Jameson’s 1818 translation, published under the title Essay on the Theory of the Earth, which is archaic in spots but still perfectly readable.

The “Preliminary Discourse” arose out of Cuvier’s commitment to the Baconian method. Sorting through the National Museum’s “charnel house” of fossils, he found species that no longer existed. He had no explanation as to why they had died out, no grand overarching theory of life; instead, he examined each specific fossil and the strata in which it was found. Increasingly, these led him to believe that “the globe has not always been as it is at present.” The strata were a book of the earth’s past that could be read by the perceptive, and Cuvier’s reading led him to a series of propositions:

Life has not always existed.

There have been several successive changes in state, from sea into land, from land into sea.

Several of the revolutions that have changed the state of the globe have been sudden.

Using only the evidence before him, Cuvier had moved from observation to hypothesis: The past is punctuated by a series of catastrophic disasters.


Principles of Geology

(1830)

Best editions: The original 1830 text, published by John Murray, can be read online or downloaded as a PDF from multiple sources. Penguin has also produced a high-quality paperback, edited by James A. Secord (1997).

Most available editions of the Principles of Geology contain all three volumes, written between 1830 and 1832. Originally, Lyell had planned to write just two volumes, one dealing with his overall principles (Volume 1), and the second marshaling more specific geological proofs (now Volume 3). Eventually, though, he realized that he had to give some accounting for the fossil record, so he interposed a new volume (the current Volume 2) between. You only need to read Volume 1, which lays out Lyell’s basic principles; the specific observations in Volumes 2 and 3 have been thoroughly superseded.

In the twenty-six short chapters of Volume 1, Lyell lays out three interlocking principles for geology, now generally known by the names actualism, anti-catastrophism, and (more awkwardly) the earth as a steady-state system.

Actualism: Every force that has acted in the past is still acting (and can be observed) in the present.

Anti-catastrophism: Those forces did not act with more intensity in the past; their degree has not changed.

The earth as a steady-state system: The history of the earth has no direction or progression; all periods are essentially the same.

Lyell refused to entertain the idea that any extraordinary events played a part in the history of the earth—not flood, or comet, or asteroid, or even heating or cooling beyond what can be observed in the present day. “No causes whatever,” he wrote, “have, from the earliest time to which we can look back, to the present, ever acted, but those now acting; and . . . they never acted with different degrees of energy from that which they now exert.”

Two years later, the English natural philosopher and clergyman William Whewell gave Lyell’s principles the label by which they have been known ever since: uniformitarianism.


On the Origin of Species

(1859)

Best editions: The Origin of Species is widely available in many different editions and formats. Check the textual notes; the original 1859 text is the clearest, most succinct, and most easily grasped by the general reader. The Wordsworth Editions Ltd. reprint (1998) reproduces both the 1859 text and the essay that Darwin added to the third (1861) edition, “Historical Sketch of the Progress of Opinion on the Origin of Species,” which lays out his intellectual debt to Lyell, Lamarck, and others.

Charles Darwin’s five-year journey on the HMS Beagle began in certainty: “When I was on board the Beagle,” he later wrote, “I believed in the permanence of species.” Different kinds of animals, he assumed, had always existed. But as he took notes on the vast variations of living creatures he now encountered, his puzzlement grew. What was a species? Where did they come from? Why did different species arise? As he prepared his notes for publication (1839’s Journal and Remarks, now generally known as The Voyage of the Beagle), he became convinced that “many facts indicated the common descent of species.”

He was still working on the problem in 1858, when he received a letter from the British explorer Alfred Russel Wallace, fourteen years his junior. Wallace had collected his own observations on tens of thousands of different species and had come to the conclusion that species change, or evolve, because of environmental pressures. “On the whole,” Wallace wrote, “the best fitted live.”

From the effects of disease the most healthy escaped; from enemies, the strongest, the swiftest, or the most cunning; from famine, the best hunters or those with the best digestion; and so on. Then it suddenly flashed upon me that this self-acting process would necessarily improve the race, because in every generation the inferior would inevitably be killed off and the superior would remain—that is, the fittest would survive.55

Wallace had enclosed his essay, “On the Tendency of Varieties to Depart Indefinitely From the Original Type,” in his letter to Darwin, asking him to pass it on to any natural philosophers who might find it interesting.

Darwin had independently come to exactly the same conclusion. He sent Wallace’s letter on to the Linnean Society of London, a century-old club for the discussion of natural history, along with an abstract of his own conclusions; in August of 1858, Wallace’s and Darwin’s theories were published side by side in the Linnean Society’s printed proceedings.

The following year, Darwin, energized by Wallace’s co-discovery of the principle of natural selection, finally published his entire argument. This first edition—On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life—immediately sold out. Over the next two decades, he revised The Origin of Species six times. Even in his final revision, he did not take his theory to its logical end; but he had already privately concluded that his principles of natural selection applied to the human race as well. “As soon as I had become . . . convinced that species were mutable productions,” he wrote in his later Autobiography, “I could not avoid the belief that man must come under the same law.”


Experiments in Plant Hybridization

(1865)

Best translation: Mendel’s paper was translated into English by the Royal Horticultural Society of London in 1901; this clear and succinct translation remains the standard. W. P. Bateson’s republication of the entire English-language paper in his 1909 book Mendel’s Principles of Heredity is widely available online; Cosimo has also republished it in a high-quality paperback with all formulae and diagrams included (2008).

Gregor Mendel spent nearly a decade interbreeding pea plants, in an effort to confirm—or disprove—the most widely accepted nineteenth-century model of inheritance. This model, called “blending,” proposed that the characteristics of both parents somehow passed into their offspring and melded together to create a happy medium: a black stallion and a white mare should have a gray foal, a six-foot father and a five-foot mother should produce a child who would mature at five foot six.

There were two problems with this. First, it was (often) demonstrably untrue. And second, blending was completely incompatible with the theory of natural selection: blending tended to remove all variations, not preserve the most favorable ones.

Mendel discovered that some of the characteristics of the peas were always passed on to the next generation; he called these “dominant” characteristics. Other aspects seemed to disappear in the offspring, but then would sometimes reappear several generations on; these, Mendel termed “recessive.” The painstaking cross-fertilization of generation after generation of pea plants allowed Mendel to work out a series of formulas for the passing on of these dominant and recessive characteristics. And as he did so, he realized that blending did not explain the variations in his peas. Rather, there must be separate units of inheritance that pass from one plant to the next.

Over time, this could indeed transform one species into another:

If a species A is to be transformed into a species B, both must be united by fertilisation and the resulting hybrids then be fertilised with the pollen of B; then, out of the various offspring resulting, that form would be selected which stood in nearest relation to B and once more be fertilised with B pollen, and so continuously until finally a form is arrived at which is like B and constant in its progeny. By this process the species A would change into the species B.


The Origin of Continents and Oceans


Best translation: John Biram’s translation, made from the fourth German edition of 1929, has been reprinted by Dover Publications (1966).

Alfred Wegener arrived at his theory of continental drift not because new evidence demanded it, but because the most widely accepted explanation for the presence of ocean basins and continental masses had been cast into doubt.

Following a theory of Isaac Newton’s, many geologists believed that the earth had once been molten. As it cooled, it contracted and its crust wrinkled, sinking in some places, rising up into continents and mountains in others. In that case, the earth must still be cooling. But discoveries in radioactivity at the turn of the century made it clear that certain atoms continually generate heat as they decay. This didn’t fit at all with the idea that a uniformly hot earth was now cooling; or, as Wegener himself put it in The Origin, “The apparently obvious basic assumption of contraction theory, namely that the earth is continuously cooling, is in full retreat before the discovery of radium.”

Instead, Wegener came up with his theory of continental drift, laid out in The Origin of Continents and Oceans. Don’t look for proofs; this was a grand theory in the Aristotelian tradition. Wegener put forward the huge overarching explanation first, and defended it entirely on its internal consistency. “The theory offers solutions for . . . many apparently insoluble problems,” he concludes.

Most geologists disagreed. The hypothesis gained very slow acceptance over time; the measurements of Littell and Hammond in 1929 helped, but not until the discovery of mantle convection currents in the 1960s was the mechanism for continental drift finally understood.


The General Theory of Relativity


Best translation: Robert W. Lawson’s 1920 translation into English is widely available; most editions include Einstein’s summary of his findings on the special theory first. Read both, since the general theory builds on the special. An excellent edition is Relativity: The Special and the General Theory, trans. Robert W. Lawson, with introduction by Roger Penrose, commentary by Robert Geroch, and historical essay by David C. Cassidy (Pi Press, 2005).

“The present book is intended, as far as possible,” Einstein’s 1916 preface begins, “to give an exact insight into the theory of Relativity to those readers who, from a general scientific and philosophical point of view, are interested in the theory, but who are not conversant with the mathematical apparatus of theoretical physics.” In other words, with a little persistence, you too can follow Einstein’s arguments. Einstein worked at the end of an era; he was one of the last great scientists to bring his most groundbreaking discoveries directly to the general public.


“The Origin and Development of the Quantum Theory”


Best translation: The original English translation by H. T. Clarke and L. Silberstein is widely available online (The Clarendon Press, 1922), as well as in a paperback reprint by Forgotten Books (2013).

Planck’s brief essay, the written version of his Nobel Prize address, provides a fascinating glimpse into the development and early direction of quantum theory. By 1922, the contradictions inherent in quantum mechanics were already clear. Don’t try to follow all the details of Planck’s address; instead, pay special attention to pages 10–11. Look for the promise that he believes quantum theory will fulfill—and the possible consequences that Planck fears.


Evolution: The Modern Synthesis


Best edition: The MIT Press version, The Modern Synthesis: The Definitive Edition (2010).

“The death of Darwinism has been proclaimed not only from the pulpit, but from the biological laboratory,” Huxley begins, “but, as in the case of Mark Twain, the reports seem to have been greatly exaggerated, since to-day Darwinism is very much alive.” And his first chapter lays out his intentions:

Biology in the last twenty years, after a period in which new disciplines were taken up in turn and worked out in comparative isolation, has become a more unified science. It has embarked upon a period of synthesis, until to-day it no longer presents the spectacle of a number of semi-independent and largely contradictory sub-sciences, but is coming to rival the unity of older sciences like physics, in which advance in any one branch leads almost at once to advance in all other fields, and theory and experiment march hand-in-hand. As one chief result, there has been a rebirth of Darwinism. . . . The Darwinism thus reborn is a modified Darwinism, since it must operate with facts unknown to Darwin; but it is still Darwinism in the sense that it aims at giving a naturalistic interpretation of evolution. . . . It is with this reborn Darwinism, this mutated phoenix risen from the ashes of the pyre . . . that I propose to deal in succeeding chapters.

It was a sprawling, multifaceted task, but the clarity of Huxley’s style and the down-to-earth, jargon-free presentation of technical ideas made Evolution: The Modern Synthesis both readable and popular. The book went through five printings and three editions; the latest, in 1973, included a new introduction co-authored by nine prominent scientists, affirming the overall truth of the synthesis and updating its data.


What Is Life?


Best edition: The standard edition is published by Cambridge University Press as What Is Life? The Physical Aspect of the Living Cell with Mind & Matter and Autobiographical Sketches (1992).

What Is Life? begins with an introduction to classical, Newtonian physics; continues, in the second and third chapters, to sum up advances in genetics; and then brings quantum mechanics into the picture. Schrödinger’s goal is to offer a single coherent explanation, drawing on physics, chemistry, and biology, for the ways in which life is sustained and passed on: “The obvious inability of present-day physics and chemistry to account for such events,” he begins, “is no reason . . . for doubting that they can be accounted for by those sciences.” Schrödinger was the first to propose that chemistry could explain how inheritance functioned. There must be, he argued, a “code-script” that could be chemically analyzed and passed on; life was not a mysterious “vital force,” but an orderly series of chemical and physical reactions.

A young James Watson happened on What Is Life? and was immediately hooked: “Schrödinger argued that life could be thought of in terms of storing and passing on biological information,” Watson later wrote. “Chromosomes were thus simply information bearers . . . To understand life . . . we would have to identify molecules, and crack their code.” What Is Life? helped create the new field of molecular biology, and led directly to the discovery of the structure of DNA.


Silent Spring


Best edition: Houghton Mifflin (1994), with a new introduction by Al Gore.

From its first lines, Silent Spring shows itself to be a new kind of science book: one that is intended to grasp the imagination as well as the brain, emotion as well as reason. “There was once a town in the heart of America, where all life seemed to live in harmony with its surroundings,” Carson begins, and goes on to sketch an idyllic portrait of white-blooming orchards in spring, scarlet and gold leaves in the fall, wildflowers, birds soaring in a blue sky, fish leaping in clear ponds, herds of deer “half hidden in the mists.” And then a “strange blight” creeps in, an “evil spell” that sickens livestock, kills birds, strikes down children at play and causes them to “die within a few hours.”

This morality tale is a prediction: what will happen to organic life if the use of chemicals is not regulated. Silent Spring is a story of failure on the part of government, blind greed on the part of corporations, silence on the part of science: Pesticides, unregulated and unexamined, have the power to wipe out the complex ecosystem around us. Man, Carson says, has “written a depressing record of destruction, directed not only against the earth he inhabits, but against the life that shares it with him.”

Silent Spring was brilliantly successful. Called to testify before Congress about the dangers of unregulated pesticides, Carson was greeted by one senator with the words, “Miss Carson, you are the lady who started all this.” All this: the regulation of pesticides, the creation of the EPA, and the beginning of the modern environmental movement.56


The Naked Ape


Best edition: The Naked Ape: The Controversial Classic of Man’s Origins (Delta, 1999).

Both Charles Darwin and Erwin Schrödinger had edged up to the implications of their discoveries, and then sidled away. Darwin had declined to tease out the full implications of his theory of origins, even though (as he later wrote) he “could not avoid the belief that man must come under the same law” as every other species: man, too, was mutable. What Is Life? had concluded that life is chemical, but ended with a final epilogue, “On Determinism and Free Will,” in which Schrödinger attempted to hold on to the uniqueness of the human experience.

“I am a zoologist,” Desmond Morris begins, in The Naked Ape’s Introduction, “and the naked ape is an animal. He is therefore fair game for my pen and I refuse to avoid him any longer because some of his behavior patterns are rather complex and impressive.” In the chapters that follow, Morris attempts to explain almost every aspect of human existence, from origin to romantic love, from feeding patterns to maternal and paternal love, as survival mechanisms. Everything we do, from getting our hair styled to laughing at a joke, has a biological and chemical explanation.

It was, at the time, shocking: “Zoologist Dr Desmond Morris has stunned the world by writing about humans in the same way scientists describe animals,” marveled the BBC. But Morris’s study, boosted by an approachable prose style and a canny amount of space devoted to sex, was translated into twenty-three languages and sold over ten million copies. It was the first popular work in a field which would become known as sociobiology: the investigation of human culture as, no less than human inheritance, shaped and determined by physical and chemical factors.


The Double Helix: A Personal Account of the Discovery of the Structure of DNA


Best editions: Watson’s original text is available as both a paperback reprint and an ebook from Touchstone (2001). A more elaborate edition, containing editorial annotations, historical background, excerpts from personal letters, and additional illustrations, is The Annotated and Illustrated Double Helix, ed. Alexander Gann and Jan Witkowski (Simon & Schuster, 2012).

“Science seldom proceeds in the straightforward logical manner imagined by outsiders,” remarks Watson, near the beginning of The Double Helix; and his account of the “discovery” of DNA by himself and his British colleague Francis Crick is filled with false starts, stolen research, territorial jousts between scientists, and misogyny (“The best home for a feminist,” Watson remarks, in one of his less charming moments, “was in another person’s lab”).

Despite its title, Watson’s memoir isn’t about a “discovery”: it is about the construction of a theoretical structure. Crick and Watson, determined to come up with a model that would (1) be consistent with the chemical and structural properties of the nucleic acid known as deoxyribonucleic acid, and (2) allow it to pass information along, came up with the idea of a double helix. In April of 1953, Watson and Crick proposed this model in a short article published in the journal Nature, concluding with a brief sentence (composed by Crick) suggesting that the specific base pairing that held the double helix together would allow DNA to copy itself. “It has not escaped our notice,” Crick wrote, in the paper’s conclusion, “that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.”
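The copying mechanism Crick alluded to follows directly from the pairing rules (adenine with thymine, cytosine with guanine): either strand of the helix fully determines the other. A minimal sketch, using the modern one-letter base notation; the real replication machinery, with its antiparallel strands and enzymes, is of course far more elaborate.

```python
# Watson-Crick base pairing: adenine (A) bonds with thymine (T),
# cytosine (C) with guanine (G).
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Return the strand that would pair with the given one."""
    return "".join(PAIRS[base] for base in strand)

strand = "ATGCCGTA"
copy = complement(strand)
print(copy)                        # TACGGCAT
print(complement(copy) == strand)  # True: copying the copy restores the original
```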

The model was convincing: consistent with observed properties of DNA, and clearly able to replicate itself. It was elaborated upon by such biochemical luminaries as Frederick Sanger, George Gamow, Marshall Nirenberg, and Heinrich Matthaei. By the time James Watson published The Double Helix: A Personal Account of the Discovery of the Structure of DNA in 1968, the double-helical structure of DNA and its role in reproducing life had been accepted as gospel (although Crick objected to the memoir, pointing out a number of places where his recollection didn’t match up with Watson’s story).

But not until the late 1970s would scientists have the technical tools to produce a truly detailed map of DNA. Neither Watson nor Crick had “discovered” DNA. Like Copernicus, they had instead built a convincing theory that accounted, very neatly, for decades of observable phenomena.


The Selfish Gene


Best editions: The first edition (1976) can be easily located secondhand, but the third edition, The Selfish Gene: 30th Anniversary Edition (Oxford University Press, 2006), contains an updated bibliography and a new introduction.

The Selfish Gene took Desmond Morris’s conclusions to the molecular level. Morris had explained human culture in terms of the organism’s will to survive; Dawkins argued that the will of the organism itself (animal or human) had nothing to do with it. The gene, he concluded, will preserve itself at all costs.

Dawkins had not “invented the notion . . . that the body is merely an evolutionary vehicle for the gene” (as one science book claims), any more than Watson and Crick had “discovered” DNA. In fact, in 1975, the year before The Selfish Gene was published, the biologist E. O. Wilson had concluded (in the first chapter of his text Sociobiology) that “the organism is only DNA’s way of making more DNA.” But Dawkins was a good writer and a capable rhetorician, and The Selfish Gene managed to spell the implications of this idea out with particular clarity, accessible both to lay readers and to students of the life sciences. In the words of evolutionary biologist Andrew Read, a doctoral candidate when the book came out, “[T]he intellectual framework had already been in the air, but The Selfish Gene crystallized it and made it impossible to ignore.”57

Read the whole book, but note especially Chapter Eleven, where Dawkins discusses the ways in which cultural as well as biochemical information is transmitted from generation to generation. Looking for a name for a “unit of cultural transmission” (Dawkins offers, as examples, “tunes, ideas, catch-phrases, clothes fashions, ways of making pots or of building arches”), Dawkins abbreviated the Greek word mimeme to meme, thus contributing a brand-new (and now common) word to the English language.


The First Three Minutes:
A Modern View of the Origin of the Universe


Best edition: The 1977 text, which has never gone out of print, has been published in a second updated edition, with a new foreword and an even more recent afterword, by Basic Books (1993).

By 1977, physicists were largely in agreement: the universe had once been a singularity, a super-dense, molten, “primeval atom” that somehow contained all matter now in the universe, and had expanded outward. It was, in fact, still expanding steadily outward; the expansion, seen as the steady distancing of distant nebulae from our vantage point, had been measured. Originally a theoretical construct proposed by the Belgian astronomer Georges Lemaître, the so-called “Big Bang” (a name given by the theory’s opponents) was not an explosion, but a steady expansion outward over inconceivable amounts of time. Its supporters suggested that the enormous heat of this original super-dense starting point would still be radiating around the universe as residual microwave radiation; when this radiation was first measured, in 1965, even skeptical physicists began to agree that, yes, the singularity had once indeed existed at the center, or beginning (the two were identical), of the cosmos.

It took the general public a few more years to sign on. The expansion of the universe from a singularity was both technical and counterintuitive. It needed a popularizer, and Steven Weinberg—a theoretical physicist from New York who won the Nobel Prize two years after publishing The First Three Minutes—was able to convey highly technical content in a clear and simplified way. The First Three Minutes clearly lays out background information about the expansion of the universe, runs through the historical development of various explanations (including steady-state theory), and shows the necessity of cosmic microwave radiation; it was the first widely read explanation of the Big Bang, and the catalyst for an explosion of books for lay readers on cosmology and theoretical physics over the next decade.

Yet, as groundbreaking as it was, The First Three Minutes shares the drawbacks of all origin stories. It demands a leap of faith about the beginning of the universe: “There is an embarrassing vagueness about the very beginning,” Weinberg wrote, in his introduction, “the first hundredth of a second or so . . . we may have to get used to the idea of an absolute zero of time—a moment in the past beyond which it is in principle impossible to trace any chain of cause and effect.” And Weinberg also is unable to avoid speculating about the end. The universe, Weinberg writes, must, ultimately, stop expanding; it will either simply cease, fading away into cold and darkness, or else “experience a kind of cosmic ‘bounce,’ ” and begin to re-expand, in “an endless cycle of expansion and contraction stretching into the infinite past, with no beginning whatever.”


On Human Nature


Best edition: Hardcover copies of the first edition (Harvard University Press) are widely available. The 2004 revision, On Human Nature: With a New Preface (rev. ed., Harvard University Press), contains a useful foreword by Wilson, reflecting on the public reception of the original book.

On Human Nature, Wilson’s most widely read work, assumes that human behavior rests on chemistry. Wilson’s philosophy is one of disciplinary reductionism; insights from physics and chemistry, demonstrable through experimentation, able to be confirmed by calculation, are the bedrock of all human knowledge. Biology rests on this bedrock; biological laws are directly derived from physical and chemical principles. And the social sciences—psychology, anthropology, ethology (natural animal behavior), sociology—float above, entirely dependent upon the “hard” sciences beneath.

Wilson’s first work was done on ant societies. His 1975 text Sociobiology: The New Synthesis argued that human behavior, no less than ant action, results from nothing more transcendent than physical necessity. Even seemingly intangible feelings and motivations (hate, love, guilt, fear) are

constrained and shaped by the emotional control centers in the hypothalamus and limbic system of the brain. . . . What, we are then compelled to ask, made the hypothalamus and limbic system? They evolved by natural selection . . . [T]he hypothalamus and limbic system are engineered to perpetuate DNA. We are flooded with remorse, or the impulse to altruism, or despair, only because our brains (independent of our conscious knowledge) are reacting to our environment in the way that will best preserve our genes.

“Sociobiology,” then, was the attempt to understand human society solely as a product of biological impulse.

All of Sociobiology except for the last chapter was based on animal research; On Human Nature, published three years later, focuses in more closely on human data. “The human mind,” Wilson argues, “is a device for survival and reproduction, and reason is just one of its various techniques.” He then explains how each of our most treasured attributes arises from our genes (so, for example, “The highest forms of religious practice . . . can be seen to confer biological advantage,” not to mention “Genetic diversification, the ultimate function of sex, is served by the physical pleasure of the sex act”).

Like James Watson and Richard Dawkins, Wilson proved to be a talented writer, with a knack for powerful metaphors. On Human Nature was praised, excoriated, and read; it was an instant best seller, and in 1979 won a Pulitzer Prize.


Gaia

Best edition: The Oxford University Press reprint, Gaia: A New Look at Life on Earth (2000).

James Lovelock picks up Rachel Carson’s themes, exploring the interrelationship between human beings and the planet by envisioning the entire related system as a single symbiotic “being.” This is not, he hastens to explain, a literal being, a sentient creature of some kind: rather, “the entire surface of the Earth, including life, is a self-regulating entity, and this is what I mean by Gaia.” (The name was suggested by his neighbor William Golding, author of Lord of the Flies.)

With this as his central construct, Lovelock—an environmentalist and inventor who did his graduate work in medicine—explores the interrelationship of the biosphere (the “region of the Earth where living organisms” exist) and the surface rocks, air, and ocean. It is, he argues, a tightly organized interlocking system, with pollution or sickness in one part forcing the entire “super-organism” to adapt.

Like his fellow popularizers, Lovelock then progresses to conclusions about human existence. He explains the human sense of beauty (“complex feelings of pleasure, recognition, and fulfillment, of wonder, excitement, and yearning, which fill us”) as a biological response that has been “programmed to recognize instinctively our optimal role” in relationship to the earth. “It does not seem inconsistent with the Darwinian forces of evolutionary selection,” he concludes, “for a sense of pleasure to reward us by encouraging us to achieve a balanced relationship between ourselves and other forms of life.”


The Mismeasure of Man


Best edition: Paperback copies of the original 1981 edition can easily be located secondhand. The original publisher, W. W. Norton, put out a revised and expanded edition of the title in 1996; it includes Gould’s updated defense of his argument and his interaction with biological determinism in the years since original publication.

Stephen Jay Gould believed that Morris and Wilson were oversimplifying. In The Mismeasure of Man, he argues against what he calls “Darwinian fundamentalism”—the use of natural selection to explain the totality of human experience. Instead, Gould writes, there are multiple overlapping factors (all of them natural, but the sum total too complex to be reduced to DNA) that determine human behavior.

The Mismeasure of Man was (like Wilson’s own book) aimed at a general readership. It was a focused and powerful refutation of one specific instance of what Gould saw as this “fundamentalism”: the “abstraction of intelligence” as a biochemically determined quality, its “quantification” as a number (thanks to the increasing popularity of IQ tests), and “the use of these numbers to rank people” in a biologically determined “series of worthiness.”

The argument was intended to play a much larger role than simply debunking IQ tests: Gould hoped to refute the disciplinary reductionism so prominent in Wilson’s works. “The Mismeasure of Man is not fundamentally about the general moral turpitude of fallacious biological arguments in social settings,” he wrote, in his introduction. “It is not even about the full range of phony arguments for the genetic basis of human inequalities” (a clear shot at Sociobiology). Rather, “The Mismeasure of Man treats one particular form of quantified claim about the ranking of human groups: the argument that intelligence can be meaningfully abstracted as a single number capable of ranking all people on a linear scale of intrinsic and unalterable mental worth.”

Like Wilson, Gould was assailed by some (“More factual errors per page than any book I have ever read,” snapped the prominent psychologist Hans Eysenck, himself a believer in the genetic basis of intelligence) and praised by others (the book won the National Book Critics Circle award in 1982).58


Chaos: Making a New Science


Best edition: Gleick’s original 1987 text (Viking) is still available secondhand; a slightly revised and updated second edition was published by Penguin Books in 2008.

Unlike the other authors on this list, James Gleick is not a scientist; he is a journalist (and English major). But in Chaos, he was able to digest and re-present a series of highly technical research articles so clearly that chaos theory became a household name (and ended up in the movies).

Chaos theory was born in 1961, when the American mathematician and meteorologist Edward Lorenz was tinkering with weather prediction. Lorenz had written computer code that should have taken various factors (wind direction and speed, air pressure, temperature, etc.) and used them to predict weather patterns. He discovered, accidentally, that tiny variations in the factors entered—changes in wind speed, or temperature, so small that they should have been completely insignificant—sharply changed the predicted patterns.

In 1963, he published a paper suggesting that, in some systems, minuscule changes could actually produce massively different results. In 1972, he followed up with another, called “Predictability: Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas?” It was the first time that a butterfly’s wing was used as an analogy for one of those tiny starting changes: the first use of the “butterfly effect.” In 1975, two other mathematicians, Tien-Yien Li and James A. Yorke, published a paper that first gave this phenomenon a name. They called it chaos: an immensely powerful word for most English-speaking readers who, even by 1975, knew something of its biblical use: utter formlessness, confusion, disorder.

Chaos theory was still in its early adolescence when Gleick—a New York Times Magazine columnist and freelance essayist—chose it as the subject of his first book. Peppered with vivid metaphors, Chaos gripped the popular imagination. The “Butterfly Effect” became a household phrase, especially once Jeff Goldblum’s rock-star scientist character in Jurassic Park gave worldwide audiences the shorthand version (“A butterfly can flap its wings in Peking, and in Central Park, you get rain instead of sunshine . . . Tiny variations . . . never repeat, and vastly affect the outcome”).

But the word chaos is misleading. Chaos here means “unpredictability”—and not even ultimate, intrinsic unpredictability (as in, “No matter how much we know, we will not be able to predict the end result”) but, instead, a contingent, practical unpredictability (“This system is so sensitive to microscopic changes in initial conditions that we are not, at the moment, capable of analyzing those initial conditions with the accuracy necessary to predict all possible outcomes”).
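That sensitive dependence on initial conditions can be demonstrated in a few lines. The sketch below uses the logistic map, a standard textbook example of a chaotic system (my choice of illustration, not one of Lorenz’s weather equations): two starting values differing by one part in a billion track each other closely for a while, then diverge completely.

```python
def logistic(x, steps, r=4.0):
    # Iterate the logistic map x -> r * x * (1 - x), fully chaotic at r = 4.
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic(0.4, 50)
b = logistic(0.4 + 1e-9, 50)  # starting value differs by one part in a billion

# After a few steps the two trajectories are still nearly identical ...
print(abs(logistic(0.4, 3) - logistic(0.4 + 1e-9, 3)))
# ... but after fifty, they bear no predictable relation to one another.
print(abs(a - b))
```

The perturbation grows roughly exponentially with each iteration, which is exactly the contingent, practical unpredictability described above: the rule is perfectly deterministic, but no measurement of the starting value is precise enough to forecast the fiftieth step.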


A Brief History of Time


Best edition: A Brief History of Time: Updated and Expanded Tenth Anniversary Edition (Bantam Books, 1998).

A Brief History of Time was not the first popular physics best seller (“Surely not another book on the Big Bang and all that stuff,” physicist Paul Davies remembers thinking when he first saw Hawking’s tome), but it outdid all the rest. Hawking’s modest goal is to use physics to answer a series of questions: “What do we know about the universe, and how do we know it? Where did the universe come from, and where is it going? Did the universe have a beginning, and if so, what happened before then? What is the nature of time? Will it ever come to an end?” The answers garnered over ten million readers in thirty-five languages—making A Brief History of Time one of the most popular science books ever written.


T. rex and the Crater of Doom


Best edition: The Princeton University Press paperback (2008).

Finding a strange abundance of the element iridium in a layer of Italian rock where it had no business being, Walter Alvarez—taught, by his scientific training, to prefer uniformitarianism over catastrophe—began to suspect that a huge catastrophe had, in fact, once struck the earth. The rock in question was at the so-called “K-T” boundary, a stratum of rock where geologists had long noted a discontinuity in the fossil record. Before the K-T boundary, dinosaurs and ammonites abounded; after it, they disappeared.

Together, Alvarez and his father, physicist (and Nobel Prize winner) Luis Alvarez, theorized that the iridium might have come from an asteroid collision with the earth. In 1980, Alvarez proposed in the journal Science (in a paper co-authored by his father, along with fellow scientists Frank Asaro and Helen Michel) that the “KT boundary iridium anomaly” might well be due to an asteroid strike. Furthermore, this impact might explain the fossil discontinuity:

Impact of a large earth-crossing asteroid would inject about 60 times the object’s mass into the atmosphere as pulverized rock; a fraction of this dust would stay in the stratosphere for several years and be distributed worldwide. The resulting darkness would suppress photosynthesis, and the expected biological consequences match quite closely the extinctions observed in the paleontological record.59

What was missing was the impact crater. Eleven years later, Alvarez and his colleagues found traces of a crater 125 miles across, concealed by millennia of accumulated sediment, on the Yucatán coast. An object large enough to make such a crater would have vaporized crust, set forests on fire, sent tsunamis ripping through the oceans, and thrown enough debris into the atmosphere to block the sun’s rays and create storms of poisonous acid rain. The impact, Alvarez concluded, changed the face of the planet—and wiped out the dinosaurs.

In 1997, Alvarez published his account of the hypothesis’s formation in T. rex and the Crater of Doom. For the most part a carefully written, precise account of the clues that led Alvarez and his team to their conclusions, the book begins with a first chapter called “Armageddon,” a quote from The Lord of the Rings, and a dramatic account of what the impact must have looked like. (“Doom was coming out of the sky . . .”). Popular science writing had hit its zenith: “Suddenly,” says science writer Carl Zimmer of Alvarez’s book, “the history of life was more cinematic than any science fiction movie.”

1George Sarton, A History of Science: Ancient Science Through the Golden Age of Greece (Cambridge: Harvard University Press, 1952), p. 3.

2Plinio Prioreschi, A History of Medicine, Vol. I: Primitive and Ancient Medicine, 2nd ed. (Omaha, Neb.: Horatius Press, 1996), p. 42.

3Hippocrates, “On the Sacred Disease,” qtd. in Steven H. Miles, The Hippocratic Oath and the Ethics of Medicine (New York: Oxford University Press, 2005), p. 20.

4Lawrence I. Conrad et al., The Western Medical Tradition: 800 BC–AD 1800 (New York: Cambridge University Press, 1995), pp. 23–25; Pausanias, Pausanias’s Description of Greece, Vol. III, trans. J. G. Frazer (New York: Macmillan & Co., 1898), p. 250; “On Airs, Waters, and Places,” in The Corpus, p. 117.

5Albert Einstein and Leopold Infeld, The Evolution of Physics (New York: Cambridge University Press, 1938), p. 33.

6Simplicius, Commentary on the Physics 28.4–15, qtd. in Jonathan Barnes, Early Greek Philosophy, rev. ed. (New York: Penguin, 2002), p. 202; Aristotle, On Democritus fr. 203, qtd. in Barnes, pp. 206–207.

7Aristotle, Physics, trans. Robin Waterfield (New York: Oxford University Press, 2008), II.1.

8Edward Craig, ed., Routledge Encyclopedia of Philosophy (Oxford, U.K.: Taylor & Francis, 1998), pp. 193–194; David Bolotin, An Approach to Aristotle’s Physics, With Particular Attention to the Role of His Manner of Writing (Albany: SUNY Press, 1998), p. 127; J. Den Boeft, ed., Calcidius on Demons (Commentarius Ch. 127–136) (Leiden: E. J. Brill, 1977), pp. 19–20.

9C. C. W. Taylor, The Atomists: Leucippus and Democritus, Fragments (Toronto: University of Toronto Press, 1999), pp. 60, 214–215; Epicurus, “Letter to Herodotus,” in Letters and Sayings of Epicurus, trans. Odysseus Makridis (New York: Barnes & Noble, 2005), pp. 3–6; Anthony Gottlieb, The Dream of Reason: A History of Philosophy from the Greeks to the Renaissance (New York: W. W. Norton, 2000), pp. 290, 303.

10Titus Lucretius Carus, Lucretius on The Nature of Things, trans. John Selby Watson (London: Henry G. Bohn, 1851), p. 96.

11Margaret J. Osler, Reconfiguring the World: Nature, God, and Human Understanding from the Middle Ages to Early Modern Europe (Baltimore: Johns Hopkins University Press, 2010), p. 15; C. M. Linton, From Eudoxus to Einstein: A History of Mathematical Astronomy (New York: Cambridge University Press, 2008), p. 48.

12Norris S. Hetherington, Cosmology: Historical, Literary, Philosophical, Religious, and Scientific Perspectives (London: CRC Press, 1993), pp. 74–76.

13Nicolaus Copernicus, Preface, De Revolutionibus, qtd. in Thomas S. Kuhn, The Copernican Revolution: Planetary Astronomy in the Development of Western Thought (Cambridge: Harvard University Press, 1957), p. 137.

14Nicolaus Copernicus, Three Copernican Treatises, trans. Edward Rosen (Mineola, N.Y.: Dover Publications, 1959), pp. 57–59.

15Copernicus, Preface, p. 18.

16Francis Bacon, Selected Philosophical Works, ed. Rose-Mary Sargent (Cambridge: Hackett Publishing Co., 1999), pp. 118–119.

17David Deming, Science and Technology in World History, Vol. 3 (Jefferson, N.C.: McFarland & Co., 2010), p. 165; Galileo Galilei, Dialogue Concerning the Two Chief World Systems: Ptolemaic and Copernican, trans. Stillman Drake, ed. Stephen Jay Gould (New York: Modern Library, 2001), pp. 130–131.

18Deming, Science and Technology, pp. 177–178.

19Robert Hooke, Micrographia (1664), Preface; David Freedberg, The Eye of the Lynx: Galileo, His Friends and the Beginnings of Natural History (Chicago: University of Chicago Press, 2002), p. 180; Thomas Birch, The History of the Royal Society of London, Vol. 1 (London: A. Millar, 1756), pp. 215ff.

20Thomas Birch, The History of the Royal Society of London, Vol. 3 (London: A. Millar, 1757), pp. 1, 10.

21Ron Larson and Bruce Edwards, Calculus (Independence, Ky.: Cengage Learning, 2013), p. 42.

22James L. Axtell, “Locke, Newton and the Two Cultures.” In John W. Yolton, ed., John Locke: Problems and Perspectives (New York: Cambridge University Press, 1969), pp. 166–168.

23Barry Gower, Scientific Method: A Historical and Philosophical Introduction (New York: Routledge, 1997), p. 69.

24Isaac Newton, Mathematical Principles of Natural Philosophy, trans. Andrew Motte (New York: Daniel Adee, 1848), p. 486; G. Brent Dalrymple, The Age of the Earth (Stanford, Calif.: Stanford University Press, 1991), pp. 28–29.

25Dalrymple, The Age of the Earth, pp. 29–30; Jacques Roger, Buffon: A Life in Natural History, trans. Sarah Lucille Bonnefoi (Ithaca: Cornell University Press, 1997), pp. 187–193.

26Dennis R. Dean, James Hutton and the History of Geology (Ithaca: Cornell University Press, 1992), pp. 17, 24–25; James Hutton, “Theory of the Earth,” in Transactions of the Royal Society of Edinburgh, Vol. I (Edinburgh: J. Dickson, 1788), pp. 301, 304.

27M. J. S. Hodge, “Lamarck’s Science of Living Bodies,” in The British Journal for the History of Science 5:4 (December 1971), p. 325; Martin Rudwick, Bursting the Limits of Time: The Reconstruction of Geohistory in the Age of Revolution (Chicago: University of Chicago Press, 2005), p. 390; J. B. Lamarck, Zoological Philosophy, trans. Hugh Elliot (London: Macmillan & Co., 1914), pp. 12, 41, 46.

28Robert J. Richards, Darwin and the Emergence of Evolutionary Theories of Mind and Behavior (Chicago: University of Chicago Press, 1987), p. 63.

29Martin Rudwick, Georges Cuvier, Fossil Bones, and Geological Catastrophes (Chicago: University of Chicago Press, 1997), p. 190.

30Charles Lyell, Principles of Geology (New York: Penguin, 1998), p. 6.

31Charles Darwin, Charles Darwin: His Life Told in an Autobiographical Chapter (London: John Murray, 1908), p. 82.

32Charles Darwin, The Variation of Animals and Plants Under Domestication, Vol. II (New York: D. Appleton & Co., 1897), p. 371; P. Kyle Stanford, Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives (New York: Oxford University Press, 2006), p. 65; Darwin, The Origin of Species, p. 13.

33Gregor Mendel, Experiments in Plant Hybridisation (New York: Cosimo Classics, 2008), pp. 15, 21ff., 47.

34The development of a living creature, from egg/embryo to adult (“ontogeny”), goes through the same series of steps as the evolution of a living creature from a primitive to a modern state (“phylogeny”). The theory was wildly popular in the late nineteenth and early twentieth centuries, but has now been thoroughly discarded by biologists.

35J. A. Moore, Heredity and Development, 2nd ed. (New York: Oxford University Press, 1972), p. 74.

36Alfred Wegener, “The Origin of Continents and Oceans,” in The Living Age, 8th Series, Vol. XXVI (April, May, June 1922), pp. 657–658.

37Alfred Wegener, The Origin of Continents and Oceans, trans. John Biram (New York: Dover Publications, 1966), p. viii.

38Albert Einstein, Relativity: The Special and General Theory, trans. Robert W. Lawson (New York: Pi Press, 2005), pp. 25, 28; Galison et al., p. 223; Jay M. Pasachoff and Alex Filippenko, The Cosmos: Astronomy in the New Millennium, 4th ed. (New York: Cambridge University Press, 2014), pp. 239–240, 271–272.

39Ernest Rutherford, The Collected Papers of Lord Rutherford of Nelson, Vol. 2 (New York: Interscience Publishers, 1963), p. 212.

40Bruce Rosenblum and Fred Kuttner, Quantum Enigma: Physics Encounters Consciousness, 2nd ed. (New York: Oxford University Press, 2011), pp. 59–60; M. S. Longair, Theoretical Concepts in Physics: An Alternative View of Theoretical Reasoning in Physics, 2nd ed. (New York: Cambridge University Press, 2003), p. 339.

41Max Planck, The Origin and Development of the Quantum Theory, trans. H. T. Clarke and L. Silberstein (New York: Clarendon Press, 1922), p. 12.

42Qtd. in Franco Selleri, Quantum Paradoxes and Physical Reality: Fundamental Theories of Physics (Dordrecht: Kluwer Academic Publishers, 1990), p. 363.

43Ernst Mayr and William B. Provine, The Evolutionary Synthesis: Perspectives on the Unification of Biology (Cambridge: Harvard University Press, 1998), pp. 8, 282, 315, 316.

44Julian Huxley, Evolution: The Modern Synthesis: The Definitive Edition (Cambridge: MIT Press, 2010), pp. 3, 6–7.

45Walter J. Moore, Schrödinger: Life and Thought (New York: Cambridge University Press, 1992), p. 404.

46Peter J. Bowler, Science for All: The Popularization of Science in Early Twentieth-Century Britain (Chicago: University of Chicago Press, 2009), pp. 5–6; William Jay Youmans, ed., Popular Science Monthly XLVI (New York: D. Appleton & Co., November 1894–April 1895), p. 127.

47Pierre C. Fraley and Earl Ubell, “Science Writing: A Growing Profession,” Bulletin of the Atomic Scientists (December 1963), pp. 19–20.

48Rachel Carson, Silent Spring, anniversary edition (Boston: Houghton Mifflin, 2002), pp. xii–xiv, 15; Linda J. Lear, “Rachel Carson’s ‘Silent Spring,’ ” in Environmental History Review 17:2 (Summer, 1993), p. 28.

49Richard Dawkins, The Selfish Gene (New York: Oxford University Press, 1976), p. 1.

50Steven Weinberg, The First Three Minutes: A Modern View of the Origin of the Universe, 2nd ed. (New York: Basic Books, 1993), p. 153.

51Carson, Silent Spring, p. 2; Weinberg, The First Three Minutes, p. 8; Michael B. Shermer, “This View of Science: Stephen Jay Gould as Historian of Science and Scientific Historian, Popular Scientist and Scientific Popularizer,” in Social Studies of Science 32:4 (August 2002), pp. 490, 494.

52“Apollo 9 astronaut to kick off conference on ‘Near-Earth Object’ risks.” Released April 9, 2009, by UN-L. Accessed September 29, 2014, at ‘Near-Earth+Object’+risks.

53Mortimer J. Adler and Charles Van Doren, How to Read a Book: The Classic Guide to Intelligent Reading (New York: Simon & Schuster, 1972), p. 251.

54Charles G. Gross, Brain, Vision, Memory: Tales in the History of Neuroscience (Cambridge: MIT Press, 1999), p. 13.

55Alfred Russel Wallace, Infinite Tropics: An Alfred Russel Wallace Anthology, ed. Andrew Berry (New York: Verso, 2002), p. 51.

56Carson, Silent Spring, p. xix.

57Matt Ridley, The Red Queen: Sex and the Evolution of Human Nature (New York: Harper Perennial, 2003), p. 9; Alan Grafen and Mark Ridley, eds., Richard Dawkins: How a Scientist Changed the Way We Think (New York: Oxford University Press, 2007), p. 7.

58Hans J. Eysenck, Intelligence: A New Look (New Brunswick, N.J.: Transaction Publishers, 2000), p. 10.

59Alvarez et al., “Extraterrestrial Cause for the Cretaceous-Tertiary Extinction,” p. 1095.