
The Handy Math Answer Book, Second Edition (2012)

MATH IN THE NATURAL SCIENCES

MATH IN GEOLOGY

What is geology?

The word “geology” comes from the Greek geo, meaning “the Earth,” and the suffix -ology comes from logos, meaning “discussion.” Overall, geology is considered to be the study of the Earth. In modern times, thanks to space probes reaching into the solar system, geology now also encompasses the surface features of other planets and satellites.

Who made some of the first accurate measurements of the Earth?

Hellenic geographer, librarian, and astronomer Eratosthenes of Cyrene (276-194 B.C.E.) made several accurate measurements of the Earth, which is why he is often known as the “father of geodesy” (the science of Earth measurement). Although he was not the very first to deduce the Earth’s circumference, Eratosthenes is thought by most historians to be the first to accurately measure it.

Eratosthenes knew the Sun’s light at noon reached the bottom of a well in Syene (now Aswan on the Nile in Egypt) on the summer solstice (which meant the Sun was directly overhead). Thanks to a helper, he compared it to a well’s shadow at the same time in Alexandria. He determined that the zenith distance—or the angle from the zenith (point directly overhead) to the point where the Sun was at noon—was 0 degrees at Syene, and at Alexandria it was about 7 degrees. By measuring these angles and the distance between the two cities, Eratosthenes used geometry to deduce that the Earth’s circumference was 250,000 stadia. The number was later revised to 252,000 stadia, or 25,054 miles (40,320 kilometers).
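To see how the geometry works, here is a minimal sketch in Python. The 7.2-degree zenith angle and the roughly 5,000-stadia distance between Alexandria and Syene are the figures commonly attributed to Eratosthenes; they are assumptions here, not values stated in the passage above.

# Assumed inputs commonly attributed to Eratosthenes
zenith_angle_deg = 7.2        # the Sun's angle from vertical at Alexandria at noon
distance_stadia = 5000        # assumed distance from Alexandria to Syene
# The angle is the same fraction of a full circle (360 degrees) as the
# Alexandria-Syene distance is of the Earth's full circumference.
fraction_of_circle = zenith_angle_deg / 360.0
circumference_stadia = distance_stadia / fraction_of_circle
print(circumference_stadia)   # 250000.0 stadia, matching the figure in the text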

The actual circumference of the planet is 24,857 miles (40,009 kilometers) around the poles and 24,900 miles (40,079 kilometers) around the equator, because the Earth is not completely round. From his data Eratosthenes also determined another accurate measurement: the Earth’s diameter. He deduced the Earth was 7,850 miles (12,631 kilometers) in diameter, which is close to the modern mean value of 7,918 miles (12,740 kilometers).

Image

Eratosthenes brilliantly used his knowledge of angles and mathematics to be the first to determine the Earth’s circumference accurately.

What is the difference between the Earth’s sidereal and solar days?

The difference between the Earth’s sidereal and solar days has to do with angles and the Earth’s rotation. The mean solar day is equal to 24 hours, or the average of all the solar days in an orbital year. The mean sidereal day is 23 hours, 56 minutes, and 04.09053 seconds. It is not exactly equal to a solar day because by the time the Earth has rotated once, it has moved a little in its orbit around the Sun. Thus, it rotates for about another four minutes before the Sun is considered to be back in exactly the same place in the sky as it was the day before.
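A quick back-of-the-envelope check of that difference, sketched in Python:

solar_day = 24 * 3600                              # 86,400 seconds
sidereal_day = 23 * 3600 + 56 * 60 + 4.09053       # 86,164.09 seconds
difference_seconds = solar_day - sidereal_day      # about 235.9 seconds
print(difference_seconds / 60)                     # roughly 3.9 minutes, i.e., "about four minutes"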

How do scientists measure the Earth’s rotational speed?

The Earth’s rotational speed is based on the sidereal period of the Earth’s rotation, but it differs depending on where the observer is located. By dividing the distance traveled once around the Earth by the time it takes to travel that distance, the speed can be determined.

For example, a person on the Earth’s equator will travel once around the Earth’s circumference—or 24,900 miles (40,079 kilometers)—in one day. To get the speed, divide that distance by the time it takes to return to the same place (about 24 hours), which gives just over 1,000 miles (1,609 kilometers) per hour. A person at one of the poles, by contrast, is hardly moving at all, because so little distance is traveled in a day (a stick stuck vertically in the ice exactly at the North or South Pole will only travel about 0.394 inch [1 centimeter] per day).

What about other places on Earth? Traveling north or south from the equator toward the poles decreases one’s tangential rotational speed. Thus, the rotational speed at any point on the Earth can be calculated by multiplying the speed at the equator by the cosine of the point’s latitude.
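A short Python sketch of that rule, using the equatorial figures given above (24,900 miles in about 24 hours):

import math
equatorial_speed_mph = 24900 / 24      # just over 1,000 miles per hour
def rotational_speed(latitude_deg):
    # Tangential speed of a point on the Earth's surface at the given latitude
    return equatorial_speed_mph * math.cos(math.radians(latitude_deg))
print(rotational_speed(0))     # about 1,037 mph at the equator
print(rotational_speed(40))    # about 795 mph at 40 degrees latitude
print(rotational_speed(90))    # essentially 0 mph at the poles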

What is the geologic time scale?

The geologic time scale is a way of managing large amounts of time in a convenient chart. The scale is actually a measurement encompassing the entire history of the Earth—from its beginnings some 4.55 billion years ago to the present day. The largest divisions include eons, eras, and periods; the smaller time divisions include epochs, ages, and subages.

What natural phenomena can change the Earth’s rotational speed?

In general, the length of a normal Earth day is about 86,400 seconds; over the course of a year, the length varies by 1 millisecond, or 1,000 microseconds, because of a redistribution in the planet’s mass. This is because Earth’s rotation is not always consistent from year to year or even season to season. Scientists know there are other factors in the rotational equation, including the differences caused by widespread climatic conditions or natural events—and all of them are determined using geometry and mathematics.

For example, during El Niño years (the periodic upwelling of warmer waters around the equator in the Pacific Ocean off South America), the “drag” of the upwelling ocean water can slow down the Earth’s rotation. This happened during the 1982-1983 El Niño, when Earth’s rotation slowed by 1/5,000th of a second, prolonging the day.

Even a giant earthquake can speed up the planet’s rotation by redistributing the Earth’s mass. For example, on March 11, 2011, the 8.9-magnitude earthquake that struck northeast Japan shortened the length of Earth’s day by 1.6 microseconds; the 2010 quake measuring magnitude 8.8 in Chile shortened it by 1.26 microseconds; and a 9.1-magnitude quake in 2004 that struck off Sumatra sped up the planet’s rotation, shortening the day by 6.8 microseconds.

The actual divisions of geologic time are neither arbitrary nor uniform. The larger divisions are based on major events that occurred sporadically over the Earth’s long history. For example, the end of the Permian Period, about 252 million years ago, was marked by a major catastrophe. Some scientists estimate that close to 90 percent of all species on the Earth died at that time, resulting in a major extinction event that may have been caused by huge volcanic eruptions or even a space object striking the Earth. The smaller divisions are usually based on specific local structures or fossils found within the rock. Most often they are named after local towns, people, and sundry other nearby associations.

What is the longest span of time measured on the geologic time scale?

The longest span of time measured on the geologic time scale is the Precambrian Era (also called the Precambrian Eon). It represents the time from 4.55 billion years ago to about 544 million years ago, or about seven-eighths of the Earth’s history. This time period includes the beginning of the Earth’s formation, its cool-down, its crust’s formation, and, within the last billion years of the time period, the evolution of the first single-celled to multi-celled organisms. The demarcation of 544 million years ago represents a burst in the evolution of multi-celled organisms, including the first plant and animal species.

Image

The geologic time scale is divided into epochs, ages, and other periods based on important historical events that radically changed life on Earth.

Image

A “dip” is the angle at which a layer of rock or vein is inclined, while a “strike” is the angle made between the direction of true north and the direction of the planar feature, such as an incline or fault.

How do geologists use angles to understand rock layers?

Mathematics—especially geometry—is instrumental in understanding rock layers. In a branch of geology called stratigraphy, scientists measure angles and planes in rock in order to know the location of certain rock layers and the possible geologic events that affected the layers over time. In particular, geologists measure strike and dip. Strike is the angle between true north and a horizontal line contained in any planar feature, such as a fault (usually caused by an earthquake) or inclined bed (often caused by the uplift of hot molten rock around a volcano). Dip is the angle at which a bed or rock vein is inclined to the horizontal; it is measured perpendicular to the strike and in the vertical plane (as opposed to the strike’s horizontal line).

How are the shapes of crystals classified?

Geometry plays an important part in the study of minerals. This is because certain minerals exhibit specific shapes called crystals, with specific crystalline forms occurring when a mineral’s atoms join in a particular pattern or internal structure. This arrangement is determined by several factors, including the chemistry and structure of the mineral’s atoms, or even the environment in which the crystal grew.

Overall, there are specific angles between corresponding faces of all crystals. Mineralogists (scientists who study minerals) divide these crystalline forms into 32 geometric classes of symmetry; they use this information to identify and classify certain minerals.

The crystals are also subdivided into seven systems on the basis of an imaginary straight line that passes through a crystal’s center (or axis). The seven groups include cubic (or isometric), tetragonal, orthorhombic, monoclinic, triclinic, hexagonal, and trigonal (or rhombohedral). For example, a crystal in the cubic system has three axes that intersect at right angles; the axes are also of equal lengths. The best way to envision this crystal is to think of a box with equal sides—or a cube.

Image

1. Halite is an example of a cubic (or isometric) system in which three equal axes are all at right angles. 2. Calcite is formed with a trigonal (or rhombohedral) crystal system in which the three axes are set obliquely at equal angles to each other. 3. Rhodonite is a crystal formed by a triclinic system in which the three axes are unequal and set obliquely at unequal angles. 4. In a tetragonal system minerals such as zircon are formed by crystals with three axes, all at right angles, in which one axis is longer than the other two. 5. Quartz is an example of a hexagonal system in which three axes at 60° angles to each other are positioned around a vertical axis that can be longer or shorter than the other three. 6. In an orthorhombic system (such as orthorhombic sulfur) the three axes are of unequal length and set at right angles to each other. 7. Finally, with a monoclinic system, two of three axes are at right angles with the third set obliquely, such as in the example of feldspar.

What is a carat?

A carat is a unit of measurement representing the weight of precious stones, pearls, and certain metals (such as gold). It was originally a unit of mass based on the carob seed or bean used by ancient merchants in the Middle East. In terms of weight measurement, a carat equals three and one-fifth grains troy, and it is also divided into four grains (sometimes referred to as carat grains). Diamonds and other precious stones are estimated by carats and fractions of carats; pearls are usually measured by carat grains (for more about grains and measurement, see “Mathematics throughout History”).

Carats of gold are measured based on the number of twenty-fourths of pure gold. For example, 24-carat gold is pure gold (but for a goldsmith’s standard, it is actually 22 parts gold, 1 part copper, and 1 part silver, as real gold is too malleable to hold its shape), 18-carat gold is 75 percent pure, 14-carat gold is 58.33 percent pure, and 10-carat gold is 41.67 percent pure gold.
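The purity arithmetic is simply the number of carats divided by 24; a minimal Python sketch:

def gold_purity_percent(carats):
    # Percent of pure gold in an alloy rated at the given number of carats
    return carats / 24 * 100
for carats in (24, 18, 14, 10):
    print(carats, round(gold_purity_percent(carats), 2))
# 24 -> 100.0, 18 -> 75.0, 14 -> 58.33, 10 -> 41.67, matching the figures above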

What is Mohs’ Scale of hardness?

Mohs’ Scale of hardness (also seen as Mohs Hardness Scale, Mohs Scale, or even erroneously as Moh’s Scale) was invented by German mineralogist Friedrich Mohs (1773–1839). This arbitrary scale measures hardness or the scratch resistance of minerals and is often used as a quick way to help identify minerals in the field and laboratory. But the numbers assigned to the various minerals are not proportional to their actual scratch resistance. Thus, the main reason for using the scale is to know that a mineral with a lower number can be scratched by a mineral with a higher number.

Mineral      Hardness
talc         1
gypsum       2
calcite      3
fluorite     4
apatite      5
orthoclase   6
quartz       7
topaz        8
corundum     9
diamond      10

How is modeling and simulation used in geology?

Like so many other fields of science, mathematical modeling and simulation are used in geology to understand the intricacies of physical events in the past, present, and future. For example, hydrologists (geologists who study water flow above and below the Earth’s surface) often use models to simulate the effects of increased groundwater pumping from wells. They may also use a simulation to determine how much water can presently be pumped out of a well, or how much can be pumped out in the future without harm to the environment. Other hydrologists may use modeling to understand the flow of water in a river, bay, or estuary, for example, to determine how the water erodes a shoreline. Still other researchers may model how snow on a volcanic mountain melts, gathers debris, and potentially flows toward populated areas during an eruption event. (For more about modeling and simulation, see “Math in Computing.”)

How do geologists measure the intensity of earthquakes?

Geologists measure the intensity of earthquakes in order to compare and judge potential damage. One of the first standard ways to measure intensity was developed in 1902 by Italian seismologist Giuseppe Mercalli (1850–1914) and is called the Mercalli Intensity Scale (it was later modified and renamed the Modified Mercalli Intensity Scale). The numbers, in Roman numerals from I to XII, represent the subjective measurement of an earthquake’s strength based on its effects on local populations and structures. For example, Roman numeral V on the scale represents a quake felt by nearly everyone, with some dishes and windows broken, unstable objects overturned, and disturbances of trees, poles, and other tall objects sometimes noticed.

Image

A devastating tsunami struck Indonesia in 2004. Scientists are using mathematics to study tsunamis and develop a more effective advance-warning system that could save lives.

But scientists wanted a more solid, less subjective scale. One of the first scales developed to measure the true magnitude was invented by American seismologist Charles Francis Richter (1900–1985) and German-born seismologist Beno Gutenberg (1889–1960). In 1935 these scientists borrowed the idea of magnitude from astronomers (stellar brightness is measured by magnitude), defining earthquake magnitude as how fast the ground moved as measured on a particular seismograph a specific distance from the quake’s epicenter.

The Richter Scale is not a physical scale like a ruler, but rather a mathematical construct—it is not linear, but logarithmic. Thus, each whole-number increase on the scale represents a ten-fold increase in measured wave amplitude (and roughly a thirty-fold increase in released energy). Its numbers represent the maximum amplitude of seismic waves that occur 62 miles (100 kilometers) from the epicenter of an earthquake. Because seismographs are usually not located at this exact interval, the magnitudes are deduced using the arrival of specific waves of energy given off when an earthquake occurs.
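Because the scale is logarithmic, the ratio of ground motions between two quakes follows directly from the difference in their magnitudes. A short Python sketch (the factor of roughly 31.6 per whole number for released energy is a standard seismological relationship, not a figure given in the text):

def amplitude_ratio(m1, m2):
    # How many times larger the measured wave amplitude of quake m2 is than that of quake m1
    return 10 ** (m2 - m1)
def energy_ratio(m1, m2):
    # Approximate ratio of released energy (about 31.6 times per whole magnitude step)
    return 10 ** (1.5 * (m2 - m1))
print(amplitude_ratio(5.0, 7.0))   # 100.0: a magnitude 7 moves the ground 100 times more
print(energy_ratio(5.0, 7.0))      # about 1,000 times the energy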

How does mathematics help scientists understand tsunamis?

Depending on where it occurs, a powerful earthquake in the oceans can shift rock, displacing the water above and generating seismic oceanic waves called tsunamis. Erroneously called “tidal waves” (they do not form by tidal action), tsunamis are long surface waves—often described as “like dropping a pebble in a pond”—that can attain tens of feet in height once they reach a shoreline. They happen mostly in the Pacific and Indian Oceans, and more rarely in the Atlantic. Because of the Pacific’s propensity toward tsunamis, there are several warning systems in effect; once an earthquake is detected, a network of early warning stations (with automatic sensors) sends out signals that a tsunami may be imminent. If the warning comes in time, coastal residents in the tsunami’s path may be able to escape to higher ground.

Mathematicians are also trying to help understand the physics and processes of tsunamis, mainly because landforms, coastlines, and the depths of the ocean basins differ. For example, in one study seeking to understand the mathematics of the deadly waves of the 2004 Indian Ocean tsunami, mathematical models showed that the waves were a “classic wave packet”—that is, the waves behaved in a way close to that predicted by mathematical theory, traveling together as well as evolving in form as they crossed the ocean. Mathematics also showed that, contrary to popular belief, the first tsunami to strike a shoreline is often not the largest; in the Indian Ocean tsunami, for example, the third and fourth waves were larger than the first and second.

Another mathematical model developed in 2010 finds the best spots to install tsunami detection buoys and sea-level monitors in the Indian Ocean, and it can also be used for the less-tsunami-prone areas of the Atlantic Ocean, Mediterranean, Caribbean, and Black Seas. Yet another study used a mathematical model to show that the number and height of the waves hitting the shore depends on the shape of the initial surface wave in deep water.

Although the Richter Scale is mentioned most often in the media when a quake occurs, there is a more precise scale in use today that is based on the mathematics of motions caused by the earthquake. Called moment magnitude, this method uses a physical quantity related to the total energy released in the quake, called the seismic moment. Seismologists can also deduce moment magnitude from a fault’s geometry in the field or from a seismogram reading. Scientists occasionally use moment magnitude when describing an earthquake event to the public, but because the concept is so difficult to explain, the number is often translated into the Richter Scale.
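The passage does not give the moment magnitude formula, but one widely used definition is the Hanks-Kanamori relation, which converts a seismic moment M0 (in dyne-centimeters) into a magnitude. A hedged Python sketch using that outside relation:

import math
def moment_magnitude(seismic_moment_dyne_cm):
    # Hanks-Kanamori relation: Mw = (2/3) * log10(M0) - 10.7 (an outside assumption, not from the text)
    return (2.0 / 3.0) * math.log10(seismic_moment_dyne_cm) - 10.7
print(round(moment_magnitude(3.5e29), 1))   # a seismic moment of 3.5e29 dyne-cm gives roughly magnitude 9.0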

What is sea level?

Sea level is the height of the ocean’s surface at a certain spot and depends on changing conditions. It is also the basis for most Earth surface measurements, because sea levels are used as a reference point in determining land elevations and ocean depths.

Scientists have averaged out the highest and lowest altitudes and depths from sea level locations: The highest is Mount Everest (Nepal-Tibet), which has been measured at 29,022 feet, 7 inches (8,846 meters) above sea level, although China and Nepal have their own numbers, both agreeing on 29,029 feet (8,848 meters) above sea level. (The differences are because no one knows where to measure the height from—the ground or snow height; it may eventually be a moot point, since the ice melts and the ground is rising as the Indian continent is being pushed beneath China and Nepal.) The lowest point on land is the Dead Sea (Israel-Jordan), which measures 1,299 feet (396 meters) below sea level. The greatest depth below sea level is the Mariana Trench in the Pacific Ocean—a deep chasm measuring 36,201 feet (11,033 meters) below sea level.

What is mean sea level?

Mean sea level (MSL) is the average water level (height of the sea) for all stages of a tide. Locally, MSL is measured by tide gauges at one or more points over a given period of time. The resulting numbers average out wind waves and other periodic changes in sea level. The overall values of MSL are measured with respect to level marks on land called benchmarks. Thus, scientists can tell whether a change in MSL reflects a true change in sea level (from, for example, possible global warming effects) or a change in the gauge’s height on land, as in the case of local uplift.

There is also a more mathematically intensive way to determine the MSL. To a geodesist (a person who studies the shape of the Earth), MSL is determined by comparing measured heights of the global Mean Sea Surface (MSS) above a level reference surface called a geoid—a mathematical model of an ellipsoid shape that approximates Earth’s mean sea level. This comparison is done because the Earth does not have a geometrically perfect shape (for example, the Atlantic Ocean north of the Gulf Stream is about 3.3 feet (1 meter) lower than it is farther south). The MSS is not a “level” surface, thanks to such factors as currents created by wind, as well as atmospheric cooling and heating that cause differences in sea levels around the world. But interestingly enough, it never differs from the global geoid by more than about 6.56 feet (2 meters).

How do scientists use mean sea level in connection with global climate change?

Many scientists are interested in the long-term mean sea level change, especially in connection with global climate change. By taking such long-term measurements, these scientists are hoping to confirm the predictions of several climate models, including the idea that global warming is a result of the “greenhouse” gases from either human or natural sources.

Image

Mt. Everest (right peak, with Mt. Nuptse at left) in Nepal, rising to a height of 29,022 feet, is the tallest mountain on Earth when measuring height compared to sea level.

There are two major ways to determine such sea level variations. The first estimates sea level changes using tide gauge measurements, mathematically averaging the numbers. Graphs of the most recent estimates using this method show a 0.067 to 0.096 inch (1.7 to 2.44 millimeter) rise in sea level per year. The second method uses global positioning system (GPS) devices and satellite altimeter measurements, both of which accurately pinpoint global ocean heights quickly and more efficiently. For example, from 1994 to 2004, scientists mathematically constructed graphs from satellite altimeter measurements showing that global mean sea levels have risen between 0.110 and 0.118 inch (2.8 and 3.0 millimeters) per year.

No matter what the method, scientists do know that global mean sea levels are slowly rising. Many believe that about one quarter of the rise is caused by thermal expansion as the oceans warm, and another one quarter by small glaciers melting around the world. Some rise may also be caused by such human activities as burning trees, pumping ground water, and draining wetlands. Currently, scientists are not quite certain about the true rate of sea level rise, mainly because of the intensity of working with the data: ocean-tide gauge records must be averaged over many decades and corrected for variable ocean dynamics and distortions of Earth’s crust.

MATH IN METEOROLOGY

What is meteorology?

Meteorology is the study of atmospheric phenomena, their interactions, and processes. It is often considered part of the Earth sciences and is most commonly associated with weather and weather forecasting.

What is the composition of the air?

Meteorologists determine the composition of air by analyzing its various constituents; these are mainly displayed in terms of percent of the atmosphere. The first 40 to 50 miles (64 to 80 kilometers) above the surface contains 99 percent of the total mass of the Earth’s atmosphere. It is generally uniform in composition, except for a high concentration of ozone, known as the ozone layer, at 12 to 30 miles (19 to 50 kilometers) in altitude.

In the lowest part of the atmosphere—the area in which humans, other animals, and plants live—the most common gases are nitrogen (78.09 percent), oxygen (20.95 percent), argon (0.93 percent), carbon dioxide (0.03 percent), and minute traces of such gases as neon, helium, methane, krypton, hydrogen, xenon, and ozone. Water vapor is also present in the lower atmosphere, although variable and at a very low percent. Higher in the atmosphere, the composition and percentages change as the atmosphere thins. (For more about percents, see “Math Basics.”)

How is air temperature measured?

Simply put, there are two ways to look at air temperature: On the micro-scale, it is the small scale measure of gas molecules’ average kinetic energy; on a larger scale, it is the action of the atmospheric gases as a whole. In physics, an entire branch is devoted to objects’ temperatures and the transfer of heat between objects of differing temperatures. Called thermodynamics, it is a study that entails a great deal of mathematical knowledge.

No matter what the type of temperature discussed, the most common apparatus for measurement is the thermometer. The most familiar thermometers are thin, long, closed glass tubes containing some type of liquid—most often alcohol or mercury. When the temperature increases—or the air around the tube heats up—it causes the liquid to expand, moving it up the tube. Air temperature measurements are most commonly read in Celsius or Fahrenheit (for more about Celsius and Fahrenheit, see “Mathematics throughout History”).

How are snowflakes defined using mathematics?

Thanks to some mathematical research in 2009, the symmetrical details of snowflakes have been revealed. Mathematicians have managed to build “snowflakes” using an elaborate computer model designed to replicate the complex growth of the flakes in three dimensions. In reality, a snowflake begins when a piece of dust, a pollution particle, or even a bacterium is surrounded by water; the water freezes to form a small crystal of ice. Dictated by the temperature, local conditions, and humidity, the flakes form, each one containing around one million million million (a quintillion) molecules. The results show the minute details seen in snowflakes, including long needles, dendritic tendrils, and star-like features, based on theory and the computations of the mathematical model. The researchers hope that this model will help meteorologists understand not only how the various snowflakes form in the clouds, but also how the flakes affect the amount of water reaching the Earth’s surface.

Why do weather reports sometimes say the humidity is 100 percent when there is no rain or snow falling?

A reading of 100 percent humidity usually means there is a high probability that rain is occurring or will occur, but not always. The relative humidity (RH) might be 100 percent because clouds are forming. If the RH near the ground is much less—for example, if a relatively dry air mass is in place—there will be no rain at the surface. This is why Doppler radar sometimes shows rain or snow in an area when none is actually reaching the ground.

How are absolute and relative humidity determined?

Like many other facets of meteorology, mathematics comes in handy when determining absolute and relative humidities. The absolute humidity is the mass of water vapor contained in a given volume of air at a specific temperature. In this instance, the warmer the air, the more water vapor it can contain.

On the other hand, relative humidity (RH) is the ratio of the absolute humidity to the highest possible absolute humidity, which in turn depends on the current air temperature. Mathematically, RH is often defined as the ratio of the water vapor density (mass per unit volume) to the saturation water vapor density, usually expressed as a percent. The equation for relative humidity is: RH = actual water vapor density / vapor saturation density × 100 percent.

More commonly, RH is thought of as the amount of water vapor in the air at a given temperature in comparison to the amount that the air could contain at the same temperature. For example, if an area is experiencing 100 percent relative humidity, that usually means the air is saturated with (cannot hold any more) water vapor.
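A minimal Python sketch of that ratio. The saturation value of about 17.3 grams of water vapor per cubic meter for air at 68 degrees Fahrenheit (20 degrees Celsius) is an approximate outside figure, and the example density is assumed for illustration:

def relative_humidity(actual_vapor_density, saturation_vapor_density):
    # Relative humidity as a percent, per the ratio given in the text
    return actual_vapor_density / saturation_vapor_density * 100
print(round(relative_humidity(10.0, 17.3), 1))   # about 57.8 percent for 10.0 g/m^3 of vapor at 20 deg C
print(round(relative_humidity(17.3, 17.3), 1))   # 100.0 percent: saturated air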

What is the heat index?

Our bodies dissipate heat by varying the rate of blood circulation, losing water through the skin and sweat glands—and, as a last resort, by panting—when the blood is heated above 98.6 degrees Fahrenheit (37 degrees Celsius), the average body temperature. Sweating cools the body through evaporation. You can get the same feeling when you put alcohol on your skin, because as the alcohol evaporates, the skin is cooled.

The Heat Index (HI) is an index that combines air temperature and relative humidity to estimate how hot it actually feels. It is based on a mathematical concept called the heat index equation, a long equation that includes the dry air temperature, relative humidity (in percent form), and many biometeorological factors too long to list here. The resulting heat index table represents the apparent, or “feels like,” temperature. For example, if the air temperature is 90 degrees Fahrenheit, with the relative humidity at 60 percent, it will feel like 100 degrees Fahrenheit.
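The full heat index equation is not reproduced in the text, but the U.S. National Weather Service publishes a commonly used approximation (the Rothfusz regression) that reproduces the example above. A Python sketch, with the coefficients taken from that published approximation rather than from this book:

def heat_index_f(T, RH):
    # Approximate heat index (deg F) from air temperature T (deg F) and relative humidity RH (percent).
    # Coefficients are the NWS Rothfusz regression, an outside assumption.
    return (-42.379 + 2.04901523 * T + 10.14333127 * RH
            - 0.22475541 * T * RH - 6.83783e-3 * T ** 2
            - 5.481717e-2 * RH ** 2 + 1.22874e-3 * T ** 2 * RH
            + 8.5282e-4 * T * RH ** 2 - 1.99e-6 * T ** 2 * RH ** 2)
print(round(heat_index_f(90, 60)))   # about 100, matching the "feels like" example above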

Why do meteorologists want people to pay attention to the heat index? The major reason involves how the body responds to high heat-index values: If the relative humidity is high, it curtails evaporation from the skin, and the body is unable to cool itself effectively (and a person will perceive that the air is warmer). When heat index values grow higher, conditions can exceed the level at which the body can remove heat, causing the body temperature to rise. This can cause heat-related illnesses, such as sunstroke or heat exhaustion. For example, according to the United States National Weather Service, exposure to direct sunlight can increase the HI by up to 15 degrees Fahrenheit (9.4 degrees Celsius). And when a heat index between a mere 90 degrees Fahrenheit (32.2 degrees Celsius) and 105 degrees Fahrenheit (40.6 degrees Celsius) can cause possible sunstroke, heat exhaustion, and heat cramps, it is easy to see the meteorologists’ concern.

The following table shows how the heat we actually experience changes with temperature and humidity (humidity is expressed as a percentage; temperatures are in degrees Fahrenheit).

Image

According to the National Weather Service, sunstroke, heat cramps, and heat exhaustion are possible above 90 degrees Fahrenheit; temperatures above 105 degrees can also lead to heat stroke; and above 130 degrees heat stroke is likely if exposure to such temperatures is prolonged.

How is barometric (or air) pressure measured?

Barometric (or air) pressure, named after the instrument used to measure this pressure, is caused by the weight of the atmosphere pressing down on the land, ocean, and air below, with gravity creating the downward force. Because pressure depends on the amount of air above a certain point, pressures are greatest at the surface and lower at higher altitudes. On average, at sea level, the air has a pressure of 14.7 pounds per square inch (every one-inch square of surface has 14.7 pounds of air pressing on it).

Image

A barometer is a type of gauge that measures atmospheric pressure, which makes it useful in detecting high and low pressure systems that can predict changes in the weather.

The United States National Weather Service does not measure pressure in pounds per square inch, but in terms of inches of mercury—or how high the pressure pushes mercury in a sealed tube. Air pressure aloft is reported in millibars (or hectopascals [hPa], a term most often used by scientists to measure air pressure).

Most of us are very familiar with air pressure as it changes with the weather. For example, the terms “high pressure system” and “low pressure system” are often indicators of the types of weather fronts traveling through a region. In general, falling air pressure (seen on a barometer) means that clouds and precipitation are more likely; rising air pressure means that clear weather is more likely. In addition, many people experience “personal” changes in air pressure. For example, the discomfort or even pain felt in a person’s ears when ascending or descending in an airplane, driving up or down a large hill, or even riding in an elevator is evidence of changing air pressure.

How are millibars converted to inches of mercury?

Mathematically, the conversion is simple. The air pressure at sea level is 29.92 inches of mercury, or 1,013.2 millibars. For example, if you see an air pressure of 1,016 millibars on a weather map, convert it to inches of mercury by multiplying by 29.92 and then dividing by 1,013.2. The result is 30.00 inches of mercury.
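The same conversion written as a small Python function:

def millibars_to_inches_hg(mb):
    # Scale by the sea level reference values: 29.92 inches of mercury = 1,013.2 millibars
    return mb * 29.92 / 1013.2
print(round(millibars_to_inches_hg(1016), 2))   # 30.0 inches of mercury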

How much does air pressure decrease with altitude?

It takes mathematics to figure out how much air pressure decreases with altitude. Close to the surface, and due to the pull of gravity, the air pressure exerted by air molecules is greatest (around 1,000 millibars at sea level). From there, it declines quickly with altitude to 500 millibars at around 18,000 feet (5,500 meters). At 40 miles (64.37 kilometers), it will be 1/10,000th of the surface air pressure.

This can also be interpreted another way: For altitudes of less than about 3,000 feet (914.4 meters), the barometric air pressure decreases about 0.01 inch of mercury for each 10 feet (3 meters) of altitude (or a decrease of 1 inch of mercury for each 1,000-foot [304.8-meter] gain in altitude). If millibars are used, it is 1 millibar for every 26.25-foot (8-meter) altitude gain. That means if a person takes a ride in an elevator, hits the button for the 50th floor—and coincidentally has a barometer in his or her pocket—the pressure would fall by approximately 0.5 inch (1.27 centimeters) of mercury during the ascent. This also means that higher-altitude cities have major differences in barometric readings. For example, the air pressure in almost mile-high Denver, Colorado, is only about 85 percent of that in cities at sea level.
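A sketch of the low-altitude rule of thumb in Python, assuming about 10 feet per story for the elevator example:

def pressure_drop_inches_hg(altitude_gain_feet):
    # Rule of thumb below about 3,000 feet: pressure falls 0.01 inch of mercury per 10 feet of altitude
    return 0.01 * altitude_gain_feet / 10
print(pressure_drop_inches_hg(50 * 10))   # about 0.5 inch of mercury for a ride to the 50th floor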

How is wind measured?

Wind speed is the measurable motion of air with respect to the surface of the Earth. It is measured in terms of a unit of distance over a unit of time, such as miles per hour. Wind direction indicates the wind’s source. For example, a southerly wind is coming from a southerly direction and blowing toward the north.

What is the new formula used for calculating wind chill?

Most people know about wind chill: the temperature your body feels when it is exposed to a certain air temperature combined with a particular wind speed. The higher the wind speed, the “colder” the wind chill temperature and the faster the exposed areas of a person’s body will lose heat (a process known as transpo-evaporation; when moisture evaporates, the surface from which it evaporates loses some heat).

The newest wind chill chart—called the Wind Chill Temperature Index—took over from the old chart (developed in 1945) in the 2001-2002 winter season. The reason for the change was simple: The original wind chill index revolved around heat loss, with a standard set at the chill experienced while standing outside in air moving 4 miles (6.4 kilometers) per hour. Based purely on temperature and wind—and on how water freezes in plastic containers—the charts were developed in Antarctica by Paul Siple and his fellow explorer, P. F. Passel, back in 1939, partly with the intention of being used in World War II battlefield planning.

Not everyone was thrilled with this simplistic, two-factor interpretation, however. There were pieces missing from the wind chill puzzle, ranging from the fact that humans constantly generate heat to the lack of wind measurements above 40 and below 5 miles per hour (64 and 8 kilometers per hour). Passel and Siple’s wind speeds were also taken about 33 feet (10 meters) above the ground, making the chart more valuable for a third-floor office than for ground level. But the biggest problem overall was that the old wind chill chart could not accurately predict how humans perceive temperature.

How is air density related to air pressure?

The expression “thin air” is actually a reference to the atmosphere’s density—or how “thick” the air molecules are near the Earth’s surface. In chemistry terms, density is merely the mass of anything (including air) divided by the volume the mass occupies. For example, the density of dry air at sea level is high, mainly due to the pull of gravity. In metric terms, sea level density is about 1.2929 kilograms per cubic meter, or about 1/800th the density of water. But as altitude increases, the density drops dramatically. Mathematically speaking, the density of air is proportional to the air pressure and inversely proportional to the temperature. Thus, the higher up one is in the atmosphere, the lower the air pressure and the lower the air density.
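The proportionality described above is just the ideal gas law applied to dry air. A Python sketch, using the standard specific gas constant for dry air (about 287.05 joules per kilogram per kelvin, an outside value not given in the text):

R_DRY_AIR = 287.05   # J/(kg*K), specific gas constant for dry air (assumed standard value)
def air_density(pressure_pa, temperature_k):
    # Density of dry air (kg per cubic meter) from pressure and temperature via the ideal gas law
    return pressure_pa / (R_DRY_AIR * temperature_k)
print(round(air_density(101325, 273.15), 4))          # about 1.292 kg/m^3 at sea level and 0 deg C
print(round(air_density(0.85 * 101325, 283.15), 4))   # noticeably thinner air at Denver-like pressure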

There is somewhat of an athletic advantage to higher elevations—at least for players of such sports as football and baseball. Because the air density is lower, a ball thrown in high-elevation places like Denver, Colorado, will travel even farther than a ball thrown in a close-to-sea-level city, such as Miami, Florida. In fact, the air at the Denver stadium allows balls to travel almost 10 percent farther.

Thus, the new wind chill index was created. This chart includes such changes as wind speeds calculated at the average height of a human head (about 5 feet [1.52 meters] above the ground); it is based on a human face model and sundry other more “modern” considerations. The general formula for wind chill is now the following: Wind chill in degrees Fahrenheit = 35.74 + 0.6215T - 35.75(V^0.16) + 0.4275T(V^0.16), in which T is the air temperature (in degrees Fahrenheit) and V is the wind speed (in miles per hour).
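That formula translates directly into code; a short Python sketch:

def wind_chill_f(T, V):
    # Wind Chill Temperature Index (deg F) from air temperature T (deg F) and wind speed V (mph),
    # using the formula given in the text (meant for cold temperatures and winds above a few mph)
    return 35.74 + 0.6215 * T - 35.75 * V ** 0.16 + 0.4275 * T * V ** 0.16
print(round(wind_chill_f(20, 20)))   # about 4: a 20 deg F day with a 20 mph wind feels close to zero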

The biggest difference between the old and new indexes is that the new index usually registers warmer temperatures than the old index. Still, no matter what the equation or chart, when temperatures are icy cold and winds are high, everyone should be careful and bundle up.

What are the major scales used in interpreting hurricanes and tornadoes?

There are two major scales used to interpret hurricane and tornado intensity—and thus potential damage. The Saffir-Simpson Hurricane Damage-Potential Scale is a hurricane force scale using the numbers 1 through 5 to rate a hurricane’s intensity. The scale was developed by engineer Herbert Saffir (1917–2007) and pioneer hurricane expert Robert Simpson (1912-) in 1971. A number on the scale is assigned to a hurricane based on its peak wind speed; it is also used to give an estimate of the potential property damage and flooding expected along the coast from a hurricane landfall.

Calculating the Wind Chill

Image

Saffir-Simpson Hurricane Scale

Image

The Fujita-Pearson Tornado Intensity Scale (or “F-Scale”) is used to measure tornado wind speeds. It was developed in 1971 and named after Tetsuya Theodore Fujita (1920–1998) of the University of Chicago and Allan Pearson, who was then head of the National Severe Storms Forecast Center in Kansas City. It was Fujita who came up with a system to rank tornadoes according to how much damage they cause. He developed his categories by connecting the twelve forces of the Beaufort wind scale (knots based on what the sea surface looks like—from smooth to waves over 45 feet) with the speed of sound (Mach 1). Then, for each category he estimated how strong the wind must be to cause certain observed damages. Fujita’s scale was later combined with Pearson’s scale, which measures the length and width of a tornado’s path, or its contact with the ground.

Fujita Tornado Scale

Image

What is the Enhanced Fujita Scale?

Introduced by the National Weather Service in February 2006, and first put into use on February 1, 2007, the Enhanced Fujita Scale (EFS) was created to better reflect actual damages recorded since the original Fujita Scale was developed. Meteorologists have more recently concluded that structures can be damaged by tornadic winds that are slower than previously thought. The original scale, which was felt to be too general, did not take careful enough account of the different types of construction, and it was hard to evaluate tornadoes that struck in sparsely populated areas where few structures were present. The new scale also offers more detailed descriptions of potential damages by using 28 Damage Indicators that describe building types, structures, and vegetation, accompanied by a Degrees of Damage scale. Otherwise, the EFS uses the same categories, ranking tornadoes from 0 up to 5.

Image

The strength of a tornado is rated on the Fujita-Pearson Tornado Intensity Scale, which takes into account wind speeds and damage created by the twister.

Enhanced Fujita Scale

EF0, 65-85 mph (105-137 kph): Tree branches break off, trees with shallow roots fall over; house siding and gutters damaged; some roof shingles peel off or other minor roof damage.

EF1, 86-110 mph (138-177 kph): Mobile homes overturned; doors, windows, and glass broken; severe damage to roofs.

EF2, 111-135 mph (178-217 kph): Large tree trunks split and big trees fall over; mobile homes destroyed, and homes on foundations are shifted; cars lifted off the ground; roofs torn off; some lighter objects thrown at the speed of missiles.

EF3, 136-165 mph (218-265 kph): Trees broken and debarked; mobile homes completely destroyed, and houses on foundations lose stories; buildings with weaker foundations are lifted and blown distances; commercial buildings such as shopping malls are severely damaged; heavy cars are thrown and trains are tipped over.

EF4, 166-200 mph (266-322 kph): Frame houses leveled; cars thrown long distances; larger objects become dangerous projectiles.

EF5, over 200 mph (over 322 kph): Homes are completely destroyed and even steel-reinforced buildings are severely damaged; objects the size of cars are thrown distances of 300 feet (90 meters) or more. Total devastation.

When was mathematics first used to predict the weather?

One of the first people to use mathematics to predict the weather was English meteorologist Lewis Fry Richardson (1881–1953). In 1922 he proposed the use of differential equations to forecast the weather, an idea published in his book Weather Prediction by Numerical Process. He believed that observations from weather stations would provide data for the initial conditions; from that information, predictions of the weather could be made for several days ahead.

But Richardson’s methods were extremely tedious and time consuming, mainly because they had to be done by hand in the pre-computer age. Thus, most of his calculations came too late to be of any predictive value. Richardson determined that 60,000 people would have to do the calculations in order to predict the next day’s weather. But his ideas did lay the foundation for modern weather forecasting.

What is numerical weather prediction?

Numerical weather prediction is forecasting the weather using numerical models. Because of the complexity of the mathematics involved—not to mention the number of variables needed to predict the weather—all numerical model studies are run on high-speed computers. The computer solves a set of equations, resulting in a computer model of the atmosphere showing how weather conditions will change over time.
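As a drastically simplified illustration of the idea (stepping a governing equation forward in time on a grid), here is a toy one-dimensional advection model in Python. It is only a sketch of the numerical approach, not one of the operational forecast models described below.

# Toy numerical "forecast": move a warm pocket of air along one dimension
# with a simple upwind finite-difference scheme.
nx, dx, dt = 50, 1.0, 0.5        # number of grid points, grid spacing, time step
wind = 1.0                       # constant "wind" carrying the pattern to the right
temps = [20.0] * nx
temps[10:15] = [30.0] * 5        # a warm pocket as the initial condition
for step in range(40):           # step the governing equation forward in time
    new = temps[:]
    for i in range(1, nx):
        new[i] = temps[i] - wind * dt / dx * (temps[i] - temps[i - 1])
    temps = new
print(max(range(nx), key=lambda i: temps[i]))   # the warm pocket has drifted downwind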

How do computer models attempt to predict the weather?

In general, computer models used to predict the weather use around seven equations that govern how the basic parameters—temperature, pressure, and so on—change over time in the atmosphere. Scientists call the study of how to physically and mathematically represent all the processes in the atmosphere “dynamics.”

In reality, everyone knows computer models can’t perfectly predict the weather at this time. This is because of several factors, including errors in the initial conditions (or the observations the model gets to begin making its forecast) and errors inherent in the model (a computer model can’t take into consideration all the factors controlling the weather). Long-term forecasts are even more inaccurate because these two errors are compounded mathematically over time.

What are some examples of weather prediction models?

Because there is more than one group carrying out weather predictions, there are many computer models used by meteorologists around the world. For example, the United States National Weather Service’s weather predictions are carried out at the National Centers for Environmental Prediction (NCEP). The NCEP runs several different computer models each day to determine the best weather forecasts. Some are used for short-term forecasting, others for the longer term; and some are used for global or hemispherical predictions, while others are only regional. They include the following mathematically intensive computer models (for more information about computer modeling, see “Math in Computing”):

NGM—NGM, or the Nested Grid Model, is one in which observations are converted to values at various points that are evenly spaced, making it easy for computer programs to plug them into equations. This model is now considered to be obsolete.

WRF—The WRF (Weather Research and Forecasting) computer program is used for forecasting and research, created through the efforts of the National Oceanic and Atmospheric Administration (NOAA) and the National Center for Atmospheric Research (NCAR), along with more than 150 international organizations. A special version of the program is called the Hurricane Weather Research and Forecasting (HWRF) model.

AVN, MRF, and GSM—The AVN (Aviation Model), MRF (Medium Range Forecast), and the GSM (Global Spectral Model) convert data into a large number of mathematical waves; they then return the waves in a manner that will produce a forecast map.

GFS—The GFS (Global Forecast System) is a global numerical weather prediction computer model run by the National Oceanic and Atmospheric Administration (NOAA); it was upgraded in 2010 by the National Centers for Environmental Prediction (NCEP). It produces high resolution forecasts—the first part of the model representing up to 180 hours (seven days), the second part of the model up to 16 days in advance (beyond seven days is often thought to be the limit of weather prognostication). It depicts several types of weather phenomena, such as precipitation types, precipitation movements, temperature, and winds.

RR—The RR (Rapid Refresh) is the next-generation, hourly updated replacement for the RUC (Rapid Update Cycle) model. It was developed for users who need frequently updated, short-range weather forecasts, especially those working in the aviation field and in severe weather forecasting facilities.

ECMWF—The ECMWF (European Center for Medium-Range Weather Forecasts) is considered to be one of the most advanced weather forecast models in the world; it is mostly used for the Northern Hemisphere.

UKMET—The UKMET (United Kingdom meteorology offices) model also gives forecasts for the entire Northern Hemisphere.

MM5—The MM5 (Mesoscale Model #5) and WRF (Weather Research Forecast model) are actually the same models. The MM5 has long been a research computer model for smaller geographic forecast regions (such as Antarctica); the WRF is the name for MM5 as an operational model—not just for research.

What is ensemble forecasting?

Ensemble forecasting is not all the meteorologists of the world getting together to predict the weather; it’s a numerical prediction method used to generate the possible outcomes of an event—in this case, weather events. This type of forecasting is actually a form of Monte Carlo analysis, in which the predictions are based on a set of different but relatively feasible initial conditions, all drawn from observations and measurements made in the past (for more about Monte Carlo analysis, see “Applied Mathematics”). In fact, during hurricane season, many of us who watch the weather channels have seen the results of ensemble forecasting: for example, the model simulations of the many possible tracks of a hurricane as it heads out of the Caribbean Sea and toward the U.S. coastline.
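A minimal Python sketch of the Monte Carlo idea behind ensemble forecasting: run the same model many times from slightly perturbed initial conditions and look at the spread of outcomes. The one-dimensional “track” model here is purely illustrative, not a real hurricane model.

import random
def toy_track_forecast(start_position, days=5):
    # A made-up one-dimensional storm track: steady drift plus random day-to-day wobble
    position = start_position
    for _ in range(days):
        position += 1.0 + random.gauss(0.0, 0.3)
    return position
# Each ensemble member starts from a slightly different (uncertain) initial position
members = [toy_track_forecast(random.gauss(0.0, 0.2)) for _ in range(100)]
mean = sum(members) / len(members)
spread = (sum((m - mean) ** 2 for m in members) / len(members)) ** 0.5
print(round(mean, 2), round(spread, 2))   # most likely final position and its uncertainty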

What are some of the mathematics behind global warming?

There is plenty of mathematics and data behind the idea of global warming (or global climate change). In particular, forecasts of climate change are based on the mathematical interpretation of weather models—not just one, but many weather/climate models (more than 30 at this writing) that all seem to disagree with each other. In other words, in a way, mathematics has caused the global warming problem to become extremely controversial.

One mathematical field in particular that has caused an overall problem in the interpretation of global climate change—creating both believers and deniers—is statistics. Some scientists claim that global warming advocates have skewed the data in their favor, and vice versa. The main problem is that the collection of sampled data has to be analyzed to show a real phenomenon—not one inferred from a relatively small window of data collection or from very few data points. The real questions many scientists ask are: Do climate changes occur over the average lifetime of a human, or even every few hundred years? Do changes in weather and climate come in cycles that are much longer than the period over which humans have collected data? For example, reliable weather data is considered to have only started within the last century—and thus, scientists can’t truly determine warming from human-related causes versus a natural cycle.

Image

A growing number of scientists believe that our planet is experiencing global warming, which, among other effects, is causing ice at the poles to melt, leading to rising sea levels.

Another fly in the ointment in determining global climate change is the use of computers and mathematics to determine changes (present and future) in the world’s atmosphere. Even if the physical models are correct, computers and mathematicians can’t solve the complex weather equations with complete confidence. This is because computers, no matter how advanced, cannot perform all the computations that weather and climate predictions need. In addition, there are, at this time, too many variables and too much data involved in determining the weather and climate change. Thus, in a strange way, both camps—the global climate change believers and naysayers—are correct, at least until computers can take in more data and humans can collect enough data to make the predictions and definitive statements on global warming more accurate.

MATH IN BIOLOGY

What is biology?

Biology is the science of life. It includes the study of the characteristics and behaviors of organisms; how a population, species, or individual comes into existence and evolves; and the interaction of organisms with the environment and each other.

What is mathematical biology?

Mathematical biology is another word for biomathematics, the interdisciplinary field that includes the modeling of natural biological processes using mathematical techniques. Mathematical biology is carried out by mathematicians, physicists, and biologists from various disciplines within their fields. These scientists work on such problems as modeling blood vessel formation, with possible applications to drug therapies; modeling the electrophysiology of the heart; exploring enzyme reaction within the body; and even developing models that track the spread of disease.

What is population dynamics?

One major area of interest in mathematical biology is population dynamics. A population is the number of individuals of a particular species in a certain area; population dynamics deals with the study of short- and long-term changes in certain biological variables in one or several populations.

Population dynamic studies have actually been around for centuries. For example, weight or age comparisons of human or other animal populations—or even how such populations grow and shrink over time—have long been areas of study. With regard to human populations, the two simplest kinds of input in a population study are birth and immigration rates, and the two basic outputs are death and emigration rates. If the inputs are greater than the outputs, the population will grow; if the outputs are greater than the inputs, the population will shrink.

Image

Gregor Mendel was an Austrian monk who, in the 19th century, developed matrices of characteristics in pea plants he was breeding, which led to his founding the science of genetics.

How does population dynamics use mathematics?

Population dynamics combines observations and mathematics, especially the use of differential equations (for more about differential equations, see “Mathematical Analysis”). For example, to determine what the population of a certain country will be in ten years, scientists use a mathematical model commonly called the exponential model, in which the rate of change of a population is proportional to the existing population. (For more about population growth mathematics and the environment, see below.)
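A Python sketch of the exponential model: the differential equation dP/dt = rP has the solution P(t) = P0 * e^(rt), so a projection needs only the current population and its growth rate. The population and rate below are assumed for illustration.

import math
def exponential_population(P0, r, t_years):
    # Exponential model dP/dt = r * P, whose solution is P(t) = P0 * e^(r * t)
    return P0 * math.exp(r * t_years)
# Assumed example: a country of 50 million people growing 1.2 percent per year
print(round(exponential_population(50_000_000, 0.012, 10)))   # about 56.4 million after ten years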

Who was Gregor Mendel?

Austrian monk Gregor Johann Mendel (1822–1884) performed experiments with pea plants from 1857 to 1865 that eventually led to his discovery of the laws of heredity. Gathering 34 different kinds of peas of the genus Pisum (all tested for their purity), he attempted to determine the possibility of producing new variants by cross-breeding. By self-pollinating the plants—and covering them over so there was no unplanned cross-pollination—he determined the detailed characteristics of their offspring, such as height and color.

Image

A simple Mendelian matrix.

Prior to Mendel, scientists believed that heredity characteristics of a species were the result of a blending process, and that over time various parental characteristics were diluted. Mendel showed that characteristics actually followed a set of specific hereditary laws. He worked out what can be described as a mathematical matrix of the characteristics, thus determining what characteristics were dominant and recessive in the plants. (For more about matrices, see “Algebra.”)
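A small Python sketch of such a matrix for a single trait: crossing two plants that each carry one dominant (A) and one recessive (a) version of a gene and tallying the offspring. The 3-to-1 ratio of dominant to recessive traits is the classic Mendelian result.

from itertools import product
from collections import Counter
parent1 = ("A", "a")   # one dominant and one recessive version of the gene
parent2 = ("A", "a")
# Every combination of one version from each parent: the cells of the matrix
offspring = ["".join(sorted(pair)) for pair in product(parent1, parent2)]
print(Counter(offspring))                          # Counter({'Aa': 2, 'AA': 1, 'aa': 1})
dominant = sum("A" in genes for genes in offspring)
print(dominant, ":", len(offspring) - dominant)    # 3 : 1 dominant to recessive phenotypes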

But Mendel had a hard time getting his results published. Even after publication by a local natural history society, his work was ignored. Mendel gave up both gardening and science when he was promoted to abbot. Coincidentally, and amazingly, by 1900 three different biologists working in three different countries—Hugo de Vries in the Netherlands, Erich Tschermak von Seysenegg in Austria, and Karl Correns in Germany—determined the hereditary laws independently. But they all knew about Mendel’s work, graciously giving the credit for the findings to him. Mendel, rightfully, is now commonly considered the “father of genetics.”

What is Fisher’s fundamental theorem of natural selection?

Evolutionary biologist, geneticist, and statistician Sir Ronald Aylmer Fisher (1890–1962) first proposed Fisher’s fundamental theorem of natural selection in 1930. A mathematical concept, it states that the rate of evolutionary change in a population is proportional to the amount of genetic diversity available. He is also often credited with creating the foundations for modern statistical science.

What were some contributions John Haldane made to genetics?

Scottish geneticist John Burdon Sanderson Haldane (1892–1964), along with Sir Ronald Aylmer Fisher (1890–1962) and Sewall Green Wright (1889–1988), developed population genetics. Among other contributions, Haldane’s famous book The Causes of Evolution (1932) was the first major work of what came to be known as the modern evolutionary synthesis. It made use of Charles Darwin’s theory of the evolution of species by natural selection, presented in terms of the mathematical consequences of Gregor Mendel’s theory of genetics, to form the basis for biological inheritance.

Image

Charles Darwin’s theory of evolution by natural selection was later combined with Mendel’s ideas about genetics to form the modern understanding of biological inheritance.

What is computational biology?

Computational biology refers to biological studies that include computation, mainly with computers. Many biologists study computational biology to develop algorithms and software to manipulate and analyze biological data; they also use computers to develop and apply certain mathematical methods to analyze and simulate molecular biological processes.

Another good reason for this marriage of biology and computers is obvious in today’s world: the genome—human and otherwise. To take on the giant task of mapping genomes (the entire collection of genes in a species), scientists have turned to the computer, using it for such studies as genomic sequencing, computational genome analysis, and protein structure analysis.

Computational power is needed for a plethora of other tasks, too. For example, it is being used to develop methods to predict the structure and function of newly discovered proteins and structural RNA sequences in humans and other organisms, to group protein sequences into families of related sequences, and to generate phylogenetic trees (or lineage trees, such as the human relationship to apes) to examine evolutionary connections.

What is bioinformatics?

Bioinformatics is a field that evolved by joining biology and information science. In the past few decades, advances in molecular biology and the increase in computer power have allowed biologists to accomplish tasks such as mapping large portions of genomes of several species. For example, a baker’s yeast called Saccharomyces cerevisiae has been sequenced in full.

Humans have not been exempt, either: The Human Genome Project was completed in 2003. It determined the complete sequence of the three billion DNA subunits (bases) for humans, identified all human genes, and made all the associated information accessible for further biological study. Since that time, other universities and agencies have taken on the task of analyzing the results, such as determining gene numbers, exact locations, and functions. Such a deluge of information has also made it necessary to store, organize, and index all the sequence data, which is where information science, the discipline of storing and working with very large amounts of data, comes in; the combination of the two fields is bioinformatics. The computer experts who deal with such information are known as bioinformatics specialists.

How does the adult brain process fractions?

For those who found it challenging to work out fractions in elementary math class, researchers have discovered that the adult brain actually encodes fractions automatically, without conscious thought. According to one study, cells in the brain’s prefrontal cortex and intraparietal sulcus seem to respond to particular fractions. And since these sections of the brain are also responsible for processing whole numbers, it may mean that adults develop an intuitive understanding of fractions, whether they are presented as numbers (½) or words (one half). If this is true, can adults “speak in fractions”? Three-quarters, please.

How many bases are in a human’s genome sequence?

There’s a good reason why computers are so important to biologists working on the human genome. The amount of data is staggering, and would take scientists generations to analyze without the benefit of computers. For example, it would take about 9.5 years to read out loud (without stopping) the three billion bases in a person’s genome sequence. This is calculated on a reading rate of 10 bases per second, equaling 600 bases per minute, 36,000 bases per hour, 864,000 bases per day, and 315,360,000 bases per year.
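
Those figures are easy to check. The short Python calculation below (a sketch, using the same assumed reading rate of 10 bases per second) redoes the arithmetic:

```python
# Verify the reading-time arithmetic for a 3-billion-base genome,
# assuming a steady reading rate of 10 bases per second.
bases = 3_000_000_000
rate_per_second = 10

per_minute = rate_per_second * 60          # 600 bases
per_hour = per_minute * 60                 # 36,000 bases
per_day = per_hour * 24                    # 864,000 bases
per_year = per_day * 365                   # 315,360,000 bases

print(f"{per_minute:,} per minute, {per_hour:,} per hour, "
      f"{per_day:,} per day, {per_year:,} per year")
print(f"Years to read the genome aloud: {bases / per_year:.1f}")   # about 9.5
```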

Image

Human DNA contains about three billion base pairs. Each base pair is either a G-C (guanine-cytosine) or an A-T (adenine-thymine) pairing.

One million bases (called a megabase and abbreviated Mb) of DNA sequence data is roughly equivalent to one megabyte of computer data storage space. Because the human genome is three billion base pairs long, three gigabytes of computer data storage space are needed to store the entire genome. This estimate covers the nucleotide sequence data only; it does not include the other information that can be associated with sequence data. Because of such numbers, scientists working on the human genome are grateful they have computers on their side!
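
The storage estimate follows from a similarly simple assumption, namely that each base is stored as roughly one character (about one byte) of raw text:

```python
# Rough storage estimate, assuming each base is stored as one character
# (about one byte) of raw sequence text, with no compression or annotation.
bases = 3_000_000_000
bytes_needed = bases * 1                 # ~1 byte per base
gigabytes = bytes_needed / 1_000_000_000
print(f"Raw nucleotide sequence: about {gigabytes:.0f} GB")   # about 3 GB
```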

MATH AND THE ENVIRONMENT

What is ecology?

Ecology (also known as bionomics) is a branch of biology that deals with the abundance and distribution of organisms in nature, as well as the relations between organisms and their environment. It is an inherently quantitative science, with ecologists using sophisticated mathematics and statistics to describe and predict patterns and processes in nature.

How is mathematics used to describe population growth of organisms in a certain environment?

In general, a certain population of organisms—from rabbits to humans—will grow exponentially if it is left unchecked. That means that in a “perfect world” each individual contributes new individuals at a constant rate r, so the population multiplies by the same factor, (1 + r), every year. This can be seen in the following equations:

· after 1 year = P0(1 + r)

· after 2 years = P0(1 + r)^2

· after 3 years = P0(1 + r)^3

· after n years = P0(1 + r)^n

in which P0 is the population today and r is the rate of increase. These equations are further modified for population growth statistics, but such intricate calculations are not within the scope of this book. (For more about population growth and biology, see above.)
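
To see the formula in action, the brief Python sketch below applies it to a hypothetical starting population and rate of increase (both values are made up for illustration):

```python
# Unchecked ("perfect world") exponential growth: P_n = P0 * (1 + r)**n.
# The starting population and growth rate below are illustrative values only.
P0 = 1_000          # population today
r = 0.10            # 10 percent increase per year

for n in range(1, 6):
    population = P0 * (1 + r) ** n
    print(f"After {n} year(s): {population:,.0f}")
```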

Who were John Graunt and Sir William Petty?

English statistician John Graunt (1620–1674) is generally considered to be the founder of the science of demography, which is the statistical study of human populations. In 1662, after analyzing some major statistics of the London populace, he published what is considered the first book on statistics, Natural and Political Observations upon the Bills of Mortality.

The “Bills of Mortality” refers to the collections of mortality figures in London, a city that had suffered greatly from the outbreak of several plagues. Because the king wanted an early warning system of new outbreaks, weekly mortality records were kept, along with the causes of death. Based on this information, Graunt made an estimate of London’s population that is thought to be the first time anyone interpreted such data; it is therefore considered by some to mark the beginnings of population statistics.

Image

Sir William Petty was a practical mathematician who wanted to establish a national statistics office in England that, among other studies, could calculate economic losses and benefits due to the plague.

Graunt’s work influenced his friend, Sir William Petty (1623–1687). (He also influenced Edmond Halley, the astronomer for whom Comet Halley is named; for more about Halley, see “Mathematics in the Physical Sciences.”) Petty’s work was a bit more practical (and political): he wanted to set up a central statistical office for the English crown in order to make estimates of England’s overall wealth. His unusual approach was to assume that the national income was the same as the total national consumption. He didn’t forget about the plague, but added estimates of the losses the plague caused to the national economy. From there, he suggested that a modest investment by the state to prevent deaths from plague would produce abundant economic benefits.

What is a logistic equation?

A logistic equation (resulting in an S-shaped curve on a graph) represents the growth in numbers of a species, which is nearly exponential at first and then slows as the population approaches the carrying capacity of its specific environment. This carrying capacity, usually referred to by the letter K, is the maximum population size that can be regularly sustained by an environment. Change the environment and K changes, for example, by such events as adding a predator, removing a competitor, or adding a parasite. The notation that follows (in the form of a differential equation) represents a rate of population increase that is limited by intraspecific competition, the competition among members of the same species for the environment’s limited resources:

dN/dt = rN(1 - N/K)

in which N is the population size, t is time, K is the carrying capacity, and r is the intrinsic rate of increase.
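
One simple way to see how the logistic equation behaves is to step it forward in small time increments. The Python sketch below uses illustrative values of r and K and a crude numerical method (Euler’s method), and shows the population rising quickly at first and then leveling off near the carrying capacity:

```python
# Crude numerical solution of the logistic equation dN/dt = r*N*(1 - N/K),
# stepped forward in small time increments (Euler's method).
# The values of r, K, and the starting population are illustrative only.
r = 0.5          # intrinsic rate of increase (per year)
K = 1_000        # carrying capacity
N = 10           # starting population
dt = 0.1         # time step in years

for step in range(1, 301):
    N += r * N * (1 - N / K) * dt
    if step % 50 == 0:
        print(f"t = {step * dt:5.1f} years, N = {N:7.1f}")
# N climbs rapidly at first, then flattens out just below K = 1,000.
```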

What are survivorship curves?

Survivorship curves record and plot the fate of the young of a species, showing their chances of surviving through key age categories. Significant factors affecting all populations are birth rates, death rates, and longevity. By recording the numbers of births and deaths over a period of time, researchers can determine the average longevity of organisms in each age class; these numbers tell a great deal about a population.
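
A tiny, made-up life table shows the kind of calculation involved. Starting from the number of individuals in one cohort still alive at each age, the fraction of the original cohort surviving (and the deaths in each age class) can be tallied directly, as in the hypothetical Python sketch below:

```python
# Tiny made-up life table: number of individuals from one cohort still
# alive at the start of each age class. Survivorship is the fraction
# of the original cohort surviving to a given age.
alive_at_age = [1000, 800, 620, 400, 150, 20, 0]   # illustrative counts

cohort = alive_at_age[0]
for age, n in enumerate(alive_at_age):
    survivorship = n / cohort
    deaths = n - alive_at_age[age + 1] if age + 1 < len(alive_at_age) else n
    print(f"age {age}: alive {n:5d}, survivorship {survivorship:.2f}, deaths {deaths}")
# Plotting survivorship against age gives a survivorship curve like the
# ones described below.
```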

What are the Lotka-Volterra Interspecific Competition Logistic Equations?

The Lotka-Volterra Interspecific Competition Logistic Equations describe the relationships between interacting species in the environment, and are based on differential equations (for more on differential equations, see “Mathematical Analysis”). Such theories of interacting species, including the closely related predator-prey equations, were developed independently in the mid-1920s by the American chemist, demographer, ecologist, and mathematician Alfred James Lotka (1880–1949), who was born in what was then Austria-Hungary (now part of Ukraine), and the Italian mathematician Vito Volterra (1860–1940). The competition equations refer to interspecific competition, or the competition between two or more species for some limiting resource, such as food, nutrients, space, mates, nesting sites, or anything for which the demand is greater than the supply.
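
In their competition form, the equations extend the logistic equation so that each species’ growth is slowed both by its own numbers and by its competitor’s. The Python sketch below uses the standard textbook form of the competition equations with made-up coefficients; it is only an illustration, not a reproduction of Lotka’s or Volterra’s original calculations:

```python
# Standard textbook form of the Lotka-Volterra competition equations:
#   dN1/dt = r1*N1*(K1 - N1 - a12*N2)/K1
#   dN2/dt = r2*N2*(K2 - N2 - a21*N1)/K2
# where a12 and a21 measure how strongly each species depresses the other.
# All parameter values below are made up for illustration; the equations
# are stepped forward crudely with Euler's method.
r1, K1, a12 = 0.8, 1_000, 0.6
r2, K2, a21 = 0.6, 800, 0.7
N1, N2 = 50.0, 50.0
dt = 0.05

for step in range(1, 2001):
    dN1 = r1 * N1 * (K1 - N1 - a12 * N2) / K1
    dN2 = r2 * N2 * (K2 - N2 - a21 * N1) / K2
    N1 += dN1 * dt
    N2 += dN2 * dt
    if step % 500 == 0:
        print(f"t = {step * dt:6.1f}: species 1 = {N1:7.1f}, species 2 = {N2:7.1f}")
# With these made-up coefficients the two species settle into coexistence,
# each at a level below its own carrying capacity.
```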

There are three basic survivorship curves. Type I curves represent species whose offspring have a high survival rate, with most individuals living to a certain age and then dying; humans are an example. Type II curves represent organisms with a fairly steady death rate from the time they are born or hatch until they die; this group includes such species as deer, large birds, and some fish. Type III curves include organisms that have low survivorship shortly after being born but high longevity for the individuals that do survive; maple and oak trees can be included in this category.

What is the air quality index?

Mathematics plays an important part in the air quality index (AQI), a scale developed by the U.S. government to measure how much pollution is in the air. The AQI measures five specific pollutants: ozone, particulate matter, carbon monoxide, sulfur dioxide, and nitrogen dioxide. The levels range from 0 (good air quality) to 500 (hazardous air quality); the higher the index, the higher the level of pollutants and the greater the likelihood of detrimental health effects.
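
Behind the scenes, each pollutant’s measured concentration is converted to an index value by linear interpolation between a pair of breakpoints, and the reported AQI is the highest of the individual pollutant indexes. The Python sketch below shows the general interpolation formula; the breakpoint numbers in it are rough illustrative values, not the official government tables:

```python
# Linear interpolation used to turn a pollutant concentration into an index
# value: I = (I_hi - I_lo) / (C_hi - C_lo) * (C - C_lo) + I_lo.
# The breakpoint table below is a rough illustration only, not the official
# U.S. EPA breakpoint tables.
def sub_index(concentration, breakpoints):
    for c_lo, c_hi, i_lo, i_hi in breakpoints:
        if c_lo <= concentration <= c_hi:
            return (i_hi - i_lo) / (c_hi - c_lo) * (concentration - c_lo) + i_lo
    raise ValueError("concentration outside the table")

# (C_lo, C_hi, I_lo, I_hi) -- illustrative breakpoints for one pollutant
example_breakpoints = [
    (0.0, 50.0, 0, 50),        # "good"
    (50.1, 100.0, 51, 100),    # "moderate"
    (100.1, 200.0, 101, 150),  # "unhealthy for sensitive groups"
]

print(round(sub_index(75.0, example_breakpoints)))   # index of about 75
# The overall AQI is the largest sub-index among the pollutants measured.
```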

Most people think about the AQI in terms of being outdoors—and most weather broadcasts include air quality listings, especially in larger cities. When the readings are high, people are warned not to participate in strenuous activities like sports or hard work outside; people with asthma or other lung problems are urged to stay inside.

What is environmental modeling?

As with most of the sciences, mathematical modeling and computer simulations also come in handy for environmental applications on a local, regional, and global scale. For example, scientists model environmental landscape changes, global climate change and the impacts on ecosystems, watershed and reservoir interactions, and forest management and sustainability.

Image

In this sample graph of a survivorship curve, it is easy to see how the survival rates of maple trees, deer, and people vary greatly over time.

What is computational ecology?

Computational ecology can be considered a subset of environmental modeling, as it uses mathematics to address practical questions arising from environmental problems; in particular, researchers in computational ecology develop new ways to predict, and to help mitigate, rapid changes in the Earth’s life-support systems. For example, in the field of ecotoxicology, mathematical models are used to predict the effects of environmental pollutants on populations. Natural resource management uses mathematics to set quotas for fish and game. And conservation ecologists use mathematical models to determine the effects of various recovery plans for threatened species, and even to design nature preserves.
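
As one concrete, textbook-style example of the quota-setting idea (an illustration only, not the procedure any particular agency follows), a fish stock that grows logistically can in principle sustain a yearly harvest of up to rK/4, the maximum of its growth curve, which occurs when the stock sits at half its carrying capacity:

```python
# Textbook "maximum sustainable yield" from a logistically growing stock:
# growth is r*N*(1 - N/K), which is largest at N = K/2, where it equals r*K/4.
# Parameter values are illustrative only.
r = 0.4          # intrinsic growth rate of the stock (per year)
K = 100_000      # carrying capacity (number of fish)

def growth(N):
    return r * N * (1 - N / K)

msy_population = K / 2
msy = growth(msy_population)          # equals r * K / 4
print(f"Maximum sustainable yield: {msy:,.0f} fish per year "
      f"at a stock size of {msy_population:,.0f}")
# 0.4 * 100,000 / 4 = 10,000 fish per year in this made-up example.
```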

The most challenging efforts in computational ecology have to do with the current problems of global climate change. In particular, scientists who work on climate change are concentrating on such models as carbon-climate feedback models (to forecast future climates under various policy scenarios), forest VOC emissions (how forests release volatile organic compounds, or VOCs, and how those compounds affect global atmospheric chemistry), models of tropical deforestation (the loss of trees means less stored carbon, not to mention the effects of burning the vegetation cleared during deforestation), and so on.