MATHEMATICS THROUGHOUT HISTORY - The Handy Math Answer Book

The Handy Math Answer Book, Second Edition (2012)

MATHEMATICS THROUGHOUT HISTORY

THE CREATION OF ZERO AND PI

How did the concept of zero evolve over time?

The concept of zero developed because it was necessary to have a placeholder—or a number that holds a place—to make it easier to designate numbers in the tens, hundreds, thousands, etc. For example, the number 4,000 implies that the three places to the right of the 4 are “empty”—with only the thousands column containing any value. Because zero technically means nothing, at first few people accepted the concept of “nothing” between numbers. That is not to say all cultures ignored the possibility of such an idea. For example, Hindu mathematicians, who wrote their math in verse, used words similar to “nothing,” such as sunya (“void”) and akasa (“space”).

It is thought that the Babylonians were the first to use a placeholder in their numbering system, as far back as 400 B.C.E. They had no symbol for zero itself; instead, it appears they used other symbols, such as a double hash-mark (also called two wedges), as a placeholder. And on a clay tablet found at Kish, an ancient Mesopotamian city east of Babylon, three hooks were used as placeholders; the tablet has been dated to around 700 B.C.E.

Archeologists believe an actual symbol for zero probably started in Indochina or India about the 7th century, but the evidence is scarce. Some scientists believe a crude symbol for zero—resembling a shell—may have been developed by the Mayans independently about a hundred years earlier. While the isolated Mayans could not spread the idea of zero, the Indians had no such problem. Around 650 C.E., zero became a mathematically important number in Indian mathematics—although the symbol was a bit different from today’s zero.

As for the familiar Hindu-Arabic symbol for zero—the open circle—it would take several more centuries to become more readily accepted. For example, by 1200, the Chinese began to use a sign for zero in their mathematical calculations. And by 1202, Liber abaci (Book of the Abacus) by Leonardo of Pisa, also known as Fibonacci (c. 1170-c. 1250), introduced “0” to Europe; the same book contains the famous Fibonacci sequence (which he actually used in a problem about the reproduction of rabbits). (For more about zero and Hindu-Arabic symbols, see “History of Mathematics” and “Math Basics”; for more about the Fibonacci sequence, see “Math Basics.”)

What are some special properties of zero?

There are many special properties of zero. For instance, you cannot divide by zero (or have zero as the denominator [bottom number] of a fraction). This is because, simply put, something cannot be divided by nothing. Thus, if some equation has a unit (usually a number) divided by zero, the answer is considered to be “undefined.” But it is possible to have zero in the numerator (top number) of a fraction; as long as the denominator is not zero (such a fraction is called a legal fraction), the fraction will always be equal to zero. Other special properties of zero include: Zero is considered an even number; any number ending in zero is considered an even number; when zero is added to a number, the sum is the original number; and when zero is subtracted from a number, the difference is the original number.
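
For readers who like to experiment, here is a minimal Python sketch (the number 7 and the variable names are arbitrary choices, not part of the original text) that checks a few of these properties of zero:

    # A small demonstration of zero's special properties.
    n = 7

    assert n + 0 == n        # adding zero leaves a number unchanged
    assert n - 0 == n        # subtracting zero leaves a number unchanged
    assert 0 / n == 0        # zero over a nonzero denominator is zero
    assert 0 % 2 == 0        # zero is even: no remainder when divided by 2

    # Dividing by zero is undefined; Python refuses and raises an error.
    try:
        n / 0
    except ZeroDivisionError:
        print("Division by zero is undefined.")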

What modern problem is there with zero?

If you want to start an argument, just ask someone about years and zero. For example, the new millennium started on January 1, 2000. But only 1,999 years had passed since the calendar was set up, because no year called “zero” was established. Thus, in reality, the millennium and the 21st century really began on January 1, 2001—not on January 1, 2000. But don’t worry about your age: If you were born on July 5, 1980, then by July 5, 2020, you would be 40. After all, when you were born, you started at “zero,” unlike the human-made calendar.

What is pi and why is it important?

Pi (pronounced “pie”; the symbol is π) is the ratio of the circumference to the diameter of a circle. Another way of looking at pi is through the area of a circle: the area equals pi times the square of the radius, or as it is often phrased, “pi r squared.” There are more ways to consider the value of pi: 2 pi (2π) in radians is 360 degrees; thus, pi radians is 180 degrees and ½ pi (½π) radians is 90 degrees. (For more about pi and radians, see “Geometry and Trigonometry.”)

What is the importance of pi? It was used in calculations to build the huge cathedrals of the Renaissance, to find basic Earth measurements, and it has been used to solve a plethora of other mathematical problems throughout the ages. Even today it is used in the calculations of items that surround everyone. To give just a few examples, it is used in geometric problems, such as machining parts for aircraft, spacecraft, and automobiles; in interpreting sine wave signals for radio, television, radar, telephones, and other such equipment; in all areas of engineering, including simulations and modeling of a building’s structural loads; and even to determine global paths of aircraft (airlines actually fly on an arc of a circle as they travel above the Earth).

Image

Advances in architecture during the European Renaissance would not have been possible without similar advances in mathematics and a knowledge of the value of π (pi). This cathedral in York, England, is a prime example of what can be accomplished with mathematics.

What is the value of pi?

Pi is a number, a constant, and to twenty places it is equal to 3.14159265358979323846. But it doesn’t end there: Pi is an infinite decimal. In other words, it has an infinite number of digits to the right of the decimal point. Thus, no one will ever know the “end” number for pi. Not that mathematicians will stop trying any time soon. Today’s supercomputers and networks of computers continue to work out the value of pi. In October 2011, Japanese and American computer experts Shigeru Kondo and Alexander Yee said they had calculated the value of π to ten trillion decimal places on a personal computer, which is double the previous record. (For more about pi and computers, see “Math in Computing.”)

Who first determined the value of pi?

People have been fascinated by pi throughout history. It was used by the Babylonians and Egyptians; the Chinese thought it stood for one thousand years. Some even give the Bible credit for mentioning the concept of pi (in which it apparently equaled 3). In one Biblical version of I Kings 7: 23-26, it states “And he made a molten sea, ten cubits from the one brim to the other: it was round all about, and his height was five cubits: and a line of thirty cubits did compass it about.” The same verse is found in II Chronicles 4: 2-5, in reference to a vessel (“sea”) made in the temple of Solomon, which was built around 950 B.C.E.

No one truly knows the origins of finding pi, although most historians believe it was probably figured out long ago. There are some clues as to its discovery, though. For example, some people claim the Egyptian Rhind papyrus (also called Ahmes papyrus)—transcribed about 1650 B.C.E. by Ahmes, an Egyptian scribe who claimed he was copying a 200-year-old document—contains a notation that pi equals 3.16, which is close to the real value of pi. (For more about the Rhind papyrus, see “History of Mathematics.”)

But it was the Greeks who promoted the idea of pi the most: They were very interested in the properties of circles, especially the ratio of a circle’s circumference to its diameter. In particular, Greek mathematician Archimedes (c. 287-212 B.C.E., Hellenic) computed close limits of pi by comparing polygons inscribed in and circumscribed about a circle. He applied the method of exhaustion to approximate the area of a circle, which in turn led to a better approximation of pi (π). Through his iterations, he determined that 223/71 < π < 22/7; the average of his two bounds equals 3.141851 (and so on). (For more about Archimedes, see “History of Mathematics” and “Geometry and Trigonometry.”)
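
As a rough modern illustration of this approach (a reconstruction in Python, not Archimedes’ actual procedure), the following sketch traps pi between the half-perimeters of inscribed and circumscribed polygons, doubling the number of sides from 6 up to 96 just as he did:

    import math

    # Half-perimeters of polygons around a circle of radius 1:
    # a = circumscribed (too large), b = inscribed (too small).
    sides = 6
    a = 2 * math.sqrt(3)   # circumscribed hexagon
    b = 3.0                # inscribed hexagon

    while sides < 96:
        a = 2 * a * b / (a + b)   # circumscribed polygon with twice the sides
        b = math.sqrt(a * b)      # inscribed polygon with twice the sides
        sides *= 2
        print(f"{sides:3d} sides: {b:.6f} < pi < {a:.6f}")

    # At 96 sides the bounds are close to Archimedes' 223/71 < pi < 22/7.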

Image

In 1647 English mathematician William Oughtred assigned the Greek letter pi to represent the ratio of the circumference to the diameter of a circle.

Who first suggested the symbol for pi?

There are many claims for the first use of the symbol for pi (π). English mathematician William Oughtred (1575–1660) wrote it as “p.d.” in 1647. But the symbol also meant a number of other things in other texts, including a point, a positive number, and sundry other representations. The modern use of π didn’t occur until 1706, when Welsh mathematician William Jones (1675–1749) described it as “3.14159 &c. = π.” Even then, not everyone used it as a standard symbol for pi. By 1737, Swiss mathematician Leonhard Euler (1707–1783), one of the most prolific mathematicians who ever lived, adopted the symbol in his work, and π has been a standard notation since that time.

What is the Feynman Point?

The Feynman Point is the string of six consecutive 9s that begins at the 762nd decimal place of pi. It was named after Nobel Prize-winning physicist Richard Feynman (1918–1988), who once joked that he wanted to memorize pi up to that point, “999999 … and so on and so on,” as if to say pi continues from the Feynman Point with nines forever. This is not true, of course, and the sequence of nines is a mere coincidence—thus, the joke of the Feynman Point. If pi were that predictable, then it would be a rational number, which it is not.

How was the value of pi determined arithmetically?

One of the earliest mathematical formulas for pi was determined by English mathematician John Wallis (1616–1703), who wrote the notation as:

π/2 = (2/1) × (2/3) × (4/3) × (4/5) × (6/5) × (6/7) × (8/7) × (8/9) × …

Another, more commonly recognized, notation for pi is often attributed to German philosopher and mathematician Baron Gottfried Wilhelm Leibniz (1646–1716), but is more likely the work of Scottish mathematician James Gregory (1638–1675):

π/4 = 1 − 1/3 + 1/5 − 1/7 + 1/9 − …

Both are amazing examples of the value of pi being worked out not only by geometric methods, but by arithmetic methods as well.
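
To see how these formulas behave, here is a minimal Python sketch (the choice of 100,000 terms is arbitrary) that adds up both:

    # Approximate pi with the Wallis product and the Gregory-Leibniz series.
    terms = 100000

    # Wallis: pi/2 = (2/1)(2/3) * (4/3)(4/5) * (6/5)(6/7) * ...
    wallis = 1.0
    for n in range(1, terms + 1):
        wallis *= (2 * n) / (2 * n - 1) * (2 * n) / (2 * n + 1)

    # Gregory-Leibniz: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
    leibniz = sum((-1) ** k / (2 * k + 1) for k in range(terms))

    print("Wallis estimate:         ", 2 * wallis)   # about 3.14158
    print("Gregory-Leibniz estimate:", 4 * leibniz)  # about 3.14158

Both converge very slowly, which is one reason neither formula is used for record-setting calculations of pi today.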

What are the measurements of a circle using pi?

There are many measurements of a circle. The perimeter of a circle is called the circumference; to calculate it, multiply pi (π) times the diameter (c = πd), or pi (π) times twice the radius (c = 2πr). The area (a) of a circle is calculated by multiplying pi (π) times the radius squared, or a = πr².
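
These formulas translate directly into a few lines of Python (a minimal sketch; the radius of 3 is just an example value):

    import math

    radius = 3.0                          # any radius, in any unit
    diameter = 2 * radius

    circumference = math.pi * diameter    # c = pi × d (or 2 × pi × r)
    area = math.pi * radius ** 2          # a = pi × r squared

    print(f"circumference = {circumference:.4f}")
    print(f"area          = {area:.4f}")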

DEVELOPMENT OF
WEIGHTS AND MEASURES

What is measurement?

Measurement refers to the methods used to determine length, volume, distance, mass (weight), or some other quantity or dimension. Each measurement is defined by specific units, such as inches and centimeters for length, or pounds and kilograms for weight. Such measurements are an integral part of our world, from their importance in travel and trade to weather forecasting and engineering a bridge.

When did people first start using measurement?

No one knows definitely who, where, or when the first measurements began. No doubt people developed the first crude measurement systems out of necessity. For example, knowing the height of a human, versus the height of a lion, versus the height of the grass in which a human hid were probably some of the first (intuitive and necessary) measurements.

The first indications of measurements date back to around 6000 B.C.E. in what today encompasses the area from Syria to Iran. As populations grew, and the main source of food became farmland rather than wild game, new ways of calculating crops for growing and storage became a necessity. In addition, in certain cultures during times of plenty, each person—depending on their status (from adult men, who received the most, to women, children, and slaves, who were given less)—received a specific measurement of food. During a famine, in order to stretch supplies, a certain minimal measurement of food was allotted to each person. It is thought that the first true measuring was done by hand—in particular, measuring grains by the handful. In fact, the half-pint, or the contents of two hands cupped together, may be the only volume unit with a natural explanation.

Is measurement tied to mathematics?

Yes, measurement is definitely tied to mathematics. In particular, the first steps toward mathematics used units (and eventually numbers) to describe physical quantities. There had to be a way to add and subtract the quantities, and most of those crude “calculations” were based on fundamental mathematics. For example, in order to trade horses for gold, merchants had to agree on how much a certain amount of gold (usually as weight) was worth, then translate that weight measurement into their barter system. In other words, “x” amount of gold would equal “y” amount of horses.

What are some standard measurement units and their definitions?

For a helpful list of standard measurement units and systems for converting them to other types of units, see Appendix 1 in the back of this book.

Upon what were ancient measurements based?

Initially, people used different measurement systems and methods, depending on where they lived. Most towns had their own measurement system, which was based on the materials the residents had at hand. This made it difficult to trade from region to region.

Image

The advent of agriculture in human civilization necessitated the development of mathematical concepts so that farmers could better predict times to plant and harvest.

Measurements eventually became based on common and familiar items. But that did not mean they were accurate. For example, length measurements were often based on parts of the human body, such as the length of a foot or width of the middle finger; longer lengths would be determined by strides or distances between outstretched arms. Because people were of different heights and body types, this meant the measurements changed depending on who did the measuring. Even longer lengths were based on familiar sights. For example, an acre was the amount of land that two oxen could plow in a day.

What is the historical significance of the barleycorn in measurement?

The barleycorn (just a grain of barley) definitely had a significant historical role in determining the length of an inch and the English foot measurement (for more about the inch and foot, see below). In addition, in traditional English law, the various pound weights all referred to multiples of the “grain”: A single barleycorn’s weight equaled a grain, and multiples of a grain were important in weight measurement. Thus, some researchers believe the lowly barleycorn was actually at the origin of both weight and distance units in the English system.

What were some early units used for calculating length?

The earliest length measurements reach back into ancient times, and theirs is a convoluted history. Some of the earliest measurements of length are the cubit, digit, inch, yard, mile, furlong, and pace. One of the earliest recorded length units is the cubit. It was invented by the Egyptians around 3000 B.C.E. and was represented by the length of a man’s arm from his elbow to his extended fingertips. Of course, not every person has the same proportions, so a cubit could be off by a few inches. This was something the more precision-oriented Egyptians fixed by developing a standard royal cubit, maintained on a black granite rod accessible to all, enabling the citizenry to make their own measuring rods fit the royal standard.

The Egyptian cubit was not the only one. By 1700 B.C.E., the Babylonians had changed the measurement of a cubit, making it slightly longer. In our measurement standards today, the Egyptian cubit would be equal to 524 millimeters (20.63 inches) and the Babylonian cubit would be equal to 530 millimeters (20.87 inches; the metric unit millimeters is used here, as it is an easier way to see the difference between these two cubits).

As the name implies, a digit was measured as the width of a person’s middle finger, and was considered the smallest basic unit of length. The Egyptians divided the digit into other units. For example, 28 digits equaled a cubit, four digits equaled a palm, and five digits equaled a hand. They further divided three palms (or 12 digits) into a small span, 14 digits (or a half cubit) into a large span, and 24 digits into a small cubit. To get smaller measurements than a digit, the Egyptians used fractions.

Over time, the measurement of an inch was all over the measurement map. For example, one inch was once defined as the distance from the tip to the first joint on a man’s finger. The ancient civilization of the Harappan in the Punjab used the “Indus inch”; based on ruler markings found at excavation sites, it measured, in modern terms, about 1.32 inches (3.35 centimeters; see below for more about the Harappan). The inch was defined as 1/36th of King Henry I of England’s arm in the 11th century; and by the 14th century, King Edward II of England ruled that one inch equaled three grains of barleycorn placed end to end lengthwise. (See below for more about both kings.)

Longer measurements were often measured by such units as yards, furlongs, and miles in Europe. At first, the yard was the length of a man’s belt (also called a girdle). The yard became more “standard” for a while, as determined to be the distance from King Henry I’s nose to the thumb of his outstretched arm. The term mile is derived from the Roman mille passus, or “1,000 double steps” (also called paces). The mile was determined by measuring 1,000 double steps, with each double step by a Roman soldier measuring five feet. Thus, 1,000 double steps equaled a mile, or 5,000 feet (1,524 meters). The current measurement of feet in a mile came in 1595, when, during the reign of England’s Queen Elizabeth I, it was agreed that 5,280 feet (1,609 meters) would equal one mile. This was mainly chosen because of the popularity of the furlong—eight furlongs equaled 5,280 feet.

What was the first civilization to use a decimal system of weights and measures?

Between 2500 and 1700 B.C.E., the Harappa (or Harappan) civilization of the Punjab—now a province in Pakistan—developed the earliest known decimal system of weights and measures (for more about decimals, see “Math Basics”). The proof was first found in the modern Punjab region, where cubical (some say hexahedral) weights in graduated sizes were uncovered at Harappa excavations.

Archeologists believe that these weights were used as a standard Harappan weight system, represented by the ratio 1 : 2 : 4 : 8 : 16 : 32 : 64. The small weights have been found in many of the regional settlements, and were probably used for trade and/or collecting taxes. The smallest weight is 0.8375 grams (0.00185 pounds), or as measured by the Harappa, 0.8525; the most common weight is approximately 13.4 grams (0.02954 pounds), or in Harappa, 13.64, the 16th ratio. Some larger weights represent a decimal increase, or 100 times the most common weight (the 16th ratio). Other weights correspond to ratios of 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, 100, 200, and 500.

There is also evidence that the Harappan civilization had some of the most advanced length measurements of the time. For example, a bronze rod found at an excavation was marked in units of precisely 0.367 inch (0.93 centimeter). Such a measuring stick was perfect to plan roads, to construct drains for the cities, and even to build homes. An ivory scale found at Lothal, once occupied by the Harappan civilization, is the smallest division ever recorded on any measuring stick yet found from the Bronze Age, with each division approximately 0.06709 inch (0.1704 centimeter) apart.

Finally, the pace was once attached to the Roman mile (see above). Today, a pace is a general measurement, defined as the length of one average step by an adult human, or about 2.5 to 3 feet (0.76 to 0.91 meters).

What were the ancient definitions of a foot?

Not all feet (or the foot) are created equal. The term foot in measurement has had a long history, with many stories claiming the origin-of-the-first-foot status. In fact, it seems as if the foot has ranged in size over the years—from 9.84 to 13.39 inches (25 to 34 centimeters)—depending on the time period and/or civilization.

For example, the ancient Harappan civilization of the Punjab (from around 2500 to 1700 B.C.E.) used a measurement interpreted by many to represent a foot—a very large foot, at about 13.2 inches (33.5 centimeters; see above for more about the Harappan). Around 1700 B.C.E., the Babylonians put their foot forward: A Babylonian foot was two-thirds of a Babylonian cubit. There are even records from Mesopotamia and Egypt showing yet another measurement system that included a foot of 11.0238 inches (300 millimeters). This is also known as the Egyptian foot, and it was standard in Egypt from predynastic times to the first millennium B.C.E. The Greek foot came close to today’s foot, measuring about 12.1 inches (30.8 centimeters); a Roman foot measured in at 11.7 inches (29.6 centimeters). The list goes on, depending on the country and time period.

How was the standard foot determined?

Whatever the true story, the foot we know today is equal to 12 inches (30.48 centimeters). The true standardization of the foot came late in the 19th century, after the United States and Britain signed the “Treaty of the Meter.” In this treaty, the foot was officially defined in terms of the new metric standards being adopted overseas. In the United States, the Metric Act of 1866 further defined the foot as equal to exactly 1,200/3,937 meter, or about 30.48006096 centimeters; this unit of measurement is still used for geodetic surveying purposes in the states, and is called the survey foot. By 1959, the United States National Bureau of Standards redefined the foot to equal exactly 30.48 centimeters—or about 0.999998 survey foot. This definition was also adopted in Britain by the Weights and Measures Act of 1963; thus, a foot, or 30.48 centimeters, is also called the international foot.
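
A quick check of those two definitions, written as a minimal Python sketch (the numbers come straight from the paragraph above):

    # Compare the U.S. survey foot with the international foot.
    survey_foot_m = 1200 / 3937        # exactly 1,200/3,937 meter
    international_foot_m = 0.3048      # exactly 30.48 centimeters

    print(f"survey foot        = {survey_foot_m * 100:.8f} cm")                             # 30.48006096
    print(f"international foot = {international_foot_m / survey_foot_m:.6f} survey foot")   # 0.999998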

Image

King Charlemagne was ruler of the Franks and Emperor of the Romans in the ninth century. He encouraged education during his reign, and some credit him with creating the standard for the foot, based on his own foot size.

What were some early measurements of weight?

Some of the early measurements of weight included the grain, pound, and ton. Ancient peoples used stones, seeds, and beans to measure weight, but grain (such as wheat or barleycorn) was a favorite. In fact, the grain (abbreviated “gr”) is still one of the smallest units of weight used today (to compare, one pound equals 7,000 grains).

The traditional pound as a unit of weight was used throughout the Roman Empire. But like many other measurements over time, the number of ounces in a pound seemed to shift and change. For example, the number of ounces in the Roman pound was 12; European merchants used 16 ounces to the pound. Eventually, 16 ounces in a pound became standard (for more about ounces, see below).

Who developed the idea of a foot in measurement?

There is quite a mystery about who first developed the idea of the foot as a measurement unit. One story, which most scholars believe is a legend, is that a foot was the length of Charlemagne’s (742–814) foot. Charlemagne (also known as Charles the Great) was King of the Franks and Emperor of the Holy Roman Empire. Standing at six feet four inches tall, he probably had a really big foot.

Still another story involves England’s King Henry I (1068–1135), in which an arm became important. The king ruled that the standard “foot” would be one-third of his 36-inch-long arm. This became the origin of our standardized unit of 12 inches to a foot, the inch being 1/36th of a yard. According to the Oxford English Dictionary, the first confirmed usage of the word “foot” as a unit of measurement also occurred during the reign of Henry I. In honor of his arm, he ordered that an “Iron Ulna” (the ulna being the longer, inner bone in the forearm) be made. This iron stick represented the master standard yard for the entire kingdom.

But around 1324, in response to his subjects’ cries for an even more standard measurement, England’s King Edward II (1284–1327) changed things again. Recognizing the “Iron Ulna” was not universally available, he declared that “3 barleycorns, round and dry make an inch,” and 12 inches (or 36 barleycorns) would equal one foot.

It’s interesting to note: Even shoe sizes were tied to King Edward II and barleycorns. He declared that the difference from one shoe size to the next was the length of one barleycorn.

Back in the 19th century, the Americans—who did not like the larger British weights—decided that a hundredweight would equal 100 pounds (the British hundredweight was 112). This meant the American ton was equal to 20 hundredweight, or 2,000 pounds (the “short ton”), while the British long ton of 20 hundredweight was equal to 2,240 pounds. There were, of course, debates, but not everyone disagreed with the American units: The 100-pound hundredweight became a favorite of British grain merchants, who called it a cental. Eventually, the ton on the international market “went metric,” and today a metric ton is close to the original British long ton. It is equal to 1,000 kilograms, or approximately 2,204 pounds, and is officially called the tonne. Although the International System (SI; see below) standard uses tonne, the United States government recommends using the term metric ton.

Where did the pound (and its abbreviation, “lb.”) originate?

The word “pound” comes from the Latin libra pondo, or “pound of weight.” The common abbreviation for pound (lb.) comes from letters in the Latin word libra, or balanced scales.

Why was the troy pound so historically important to weight measurement?

One of the oldest English weight systems was based on the 12-ounce troy pound—the basis by which coins were minted, and gold and silver weighed for trade and commerce. (The troy pound equaled 5,760 grains; the troy ounce was thus 5,760/12, or 480 grains. Twenty pennies weighed an ounce, so a pennyweight equaled 480/20, or 24 grains.) The troy pound—and the entire system of connected weights—was used until the nineteenth century, mostly by jewelers and druggists. One holdover of the troy ounce (a portion of the troy pound) is found in today’s pharmaceutical market to measure certain drugs—and even in the financial market, as the measurement used to interpret gold and silver prices.

What is the difference between the various pounds and ounces?

The story behind the ounce is long and convoluted because people have long been dissatisfied with the unit. For example, in medieval times, English merchants were not happy with the troy pound, as it was less than the commercial pound used in most of Europe. In response, the merchants developed an even larger pound, called the libra mercatoria, or mercantile pound. But by 1300, complaints about the mercantile pound grew: its 15 troy ounces (or 7,200 grains) divided evenly by 15 and its divisors, but that was not as convenient as dividing a pound into 12 troy ounces.

Soon, another type of pound was born in English commerce: the 16-ounce avoirdupois (roughly translated from the Old French as “goods of weight”). Modeled on a common Italian pound unit of the late 13th century, the avoirdupois pound weighed exactly 7,000 grains, which is easily divided for use in sales and trade. But because it was difficult to convert between the troy and avoirdupois units (the avoirdupois ounce is 7,000/16, or 437.5 grains, and 1 grain equals 1/7,000 avoirdupois pound, or 1/5,760 troy or apothecaries’ pound; the troy ounce is 5,760/12, or 480 grains, which is 31.1035 grams in metric), the standard soon shifted to using mostly the avoirdupois unit.

The avoirdupois ounce is currently used in the United States and Britain. It is equal to 1/16th of a pound, or 28.3495 grams (in metric); the avoirdupois ounce is further divided into 16 drams (or drachms). The troy ounce hasn’t been totally forgotten, though. Today, it is used mainly as a unit for precious metals and drugs, where it is often called the apothecaries’ ounce (with its subdivisions of the scruple, or 20 grains, and the drachm, or 60 grains). In turn, the avoirdupois—our “ounce” for short—is used for almost everything else.
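
To keep the two ounces straight, here is a minimal Python sketch; it uses only the grain counts given above plus the modern definition of the grain (64.79891 milligrams), which is not stated in this passage:

    GRAIN_G = 0.06479891                  # one grain in grams (modern definition)

    avoirdupois_oz_grains = 7000 / 16     # 437.5 grains
    troy_oz_grains = 5760 / 12            # 480 grains

    print(f"avoirdupois ounce = {avoirdupois_oz_grains * GRAIN_G:.4f} g")  # about 28.3495
    print(f"troy ounce        = {troy_oz_grains * GRAIN_G:.4f} g")         # about 31.1035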

What are significant digits?

Significant digits, also called significant figures, are not just for measurement. They are used to define certain calculations and follow certain rules. For instance, the digits one through nine are always considered significant digits; the number 66 has two significant digits in a calculation, while 66.6 has three significant digits. In division, multiplication, trigonometric functions, and so on, the number of significant digits in an answer should equal the least number of significant digits in the numbers being divided, multiplied, and so on. With zeros, it’s a bit more complicated. For example, zeros before other numbers are not significant digits: 0.066 has only two significant digits. Zeros within other numbers are significant digits: 6006 has four significant digits. For larger numbers, it’s easier to see significant digits if you use scientific notation. For example, 2.2 × 10³ has two significant digits; 2.20 × 10³ has three significant digits (zeros placed after the decimal point become significant digits).
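
As a quick illustration (a small sketch, not a full significant-figures calculator), Python’s scientific-notation formatting makes the count easy to see; the helper function name below is just an illustrative choice:

    # Round a value to a chosen number of significant digits
    # by way of scientific-notation formatting.
    def round_sig(value, digits):
        return float(f"{value:.{digits - 1}e}")

    print(round_sig(0.066, 2))   # 0.066   -> two significant digits
    print(round_sig(6006, 4))    # 6006.0  -> four significant digits
    print(f"{2200:.1e}")         # '2.2e+03'  (two significant digits)
    print(f"{2200:.2e}")         # '2.20e+03' (three significant digits)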

What were some early measurements of volume in terms of the gallon?

The names of the traditional volume units are the names of standard containers. Until the 18th century, the capacity of a container was difficult to accurately measure in cubic units. Thus, the standard containers were defined by the weight of a particular substance—such as wheat or beer—that the container could carry. For example, the basic English unit of volume, or the gallon, was originally defined as the volume of eight pounds of wheat. Other volumes were measured based on this gallon, depending on the different standard sizes of the containers.

But like most measurements over time, not all gallons were alike. During the American colonial period, the gallons from British commerce were based on dry and liquid commodities. For dry, the gallon was 1/8th of a Winchester bushel (defined by the English Parliament in 1696 as a cylindrical container 18.5 inches in diameter by 8 inches deep), holding 268.8 cubic inches of material; it was also called a “corn gallon” in England. For liquid, the gallon measurement was based on England’s Queen Anne’s wine gallon (also called the traditional British wine gallon), measuring exactly 231 cubic inches. This is why volume measurements in the United States include both the dry and liquid units, the dry units being about one-sixth larger than the corresponding liquid units.
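
Those numbers can be checked with a little geometry, as in this minimal Python sketch (the bushel dimensions are the ones quoted above):

    import math

    # The Winchester bushel: a cylinder 18.5 inches across and 8 inches deep.
    bushel_cu_in = math.pi * (18.5 / 2) ** 2 * 8
    corn_gallon = bushel_cu_in / 8        # the dry ("corn") gallon
    wine_gallon = 231                     # Queen Anne's wine gallon

    print(f"Winchester bushel = {bushel_cu_in:.1f} cubic inches")  # about 2150.4
    print(f"dry gallon        = {corn_gallon:.1f} cubic inches")   # about 268.8
    print(f"dry/liquid ratio  = {corn_gallon / wine_gallon:.3f}")  # about 1.16, or one-sixth larger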

By 1824, the British weren’t as satisfied with the gallon divisions as the Americans. In response, the British Parliament abolished all the traditional gallons and established a system based on the Imperial gallon. It is still in use today, measuring 277.42 cubic inches, with the container holding exactly 10 pounds of water under specific conditions (such as temperature and pressure).

What is accuracy in measurement?

Accuracy in measurement is based on relative error and number of significant digits. Relative error is the absolute error divided by the calculated (or estimated) value. For example, if a person expects to spend $10 per week at the local espresso bar, but actually spends $12.50, the absolute error is 12.50 - 10.00 = 2.50; the relative error then becomes (2.50 / 10) = 0.25 (to find out the percent, multiply by 100, or 0.25 × 100 = 25 percent of the original estimate). Significant digits refers to a certain decimal place that determines the amount of rounding off to take place in the measurement; these numbers carry meaning to the figure’s precision. But beware—accuracy in measurement does not mean the actual measurement taken was accurate. It only means that if there are a large number of significant digits, or if the relative error is low, the measurement is more accurate.
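
The espresso-bar example works out like this, written as a minimal Python sketch:

    # Relative error of an estimate compared with the actual amount.
    estimated = 10.00
    actual = 12.50

    absolute_error = abs(actual - estimated)       # 2.50
    relative_error = absolute_error / estimated    # 0.25

    print(f"absolute error: {absolute_error:.2f}")
    print(f"relative error: {relative_error:.2f} ({relative_error * 100:.0f} percent)")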

Who was Adrien-Marie Legendre?

Adrien-Marie Legendre (1752–1833) was a brilliant French mathematician and physicist. He is known for his studies of ellipsoids (leading to what we now call the Legendre functions) and celestial mechanics, and he worked on the orbits of comets. In 1787 he helped measure the Earth using a triangulation survey between the Paris and Greenwich observatories. In 1794 Legendre published Eléments de géométrie, an elementary text on geometry that would essentially replace Euclid’s Elements and would remain the leading text on the topic for close to a century. Finally, Legendre also had a connection to measurement: In 1791 he was appointed to the committee of the Académie des Sciences, which was assigned the task of standardizing weights and measures.

Image

Rulers in the United States commonly show length measurements in both metric and English units.

What are some common measurement systems in use today?

There are several measurement systems in use today. The English customary system is also known as the standard system, U.S. customary system (or units), or English units. It actually consists of two related systems: the U.S. customary units and the British Imperial System. The background of the units of measurement is historically rich and includes modern familiar terms, such as foot, inch, mile, and pound, as well as less well-known units, such as span, cubit, and rod. The official policy of the United States government is to designate the metric system as the preferred system for trade and commerce; but customary units are still widely used on consumer products and in industrial manufacturing.

What countries have not officially adopted the metric system?

To date, there are only three countries that have not officially adopted the metric system: the United States, Liberia (in western Africa, although some sources say the country does do some marketing and trade in metric), and Myanmar (formerly Burma, in Southeast Asia). All other countries—and the scientific world as a whole—have either used the metric system for many years, or adopted the measurement system in the past several decades. It’s a bit of historical irony to note that the United States has hung on to such measurements as the foot—the standard measurement originated by the English, who now use the metric system.

In order to link all systems of weights and measures, both metric and non-metric, there is a network of international agreements supporting what is known as the International System (SI). It is abbreviated as SI (but not S.I.), in reference to the initials of its French name, Système International d’Unités. It was developed from an agreement signed in Paris on May 20, 1875, known as the Treaty of the Meter (Convention du Mètre). To date, 48 nations have signed the treaty. The SI is maintained by a small agency in Paris, the International Bureau of Weights and Measures (BIPM, or Bureau International des Poids et Mesures). Because there is a need to change or update the precision of measurements over time, the SI is updated every few years by the international General Conference on Weights and Measures (CGPM, or Conférence Générale des Poids et Mesures), the two most recent meetings being 2007 and 2011. SI is also referred to as the metric system, which is based on the meter. The word “metric” can also be used in mathematics (for example, metric space) or even computing (font metric file). It is often referred to incorrectly as “metrical.” (See below for more about the metric system.)

What are the base SI units?

There are several base units at the heart of the International System. The following lists the seven base units:

· meter (distance)

· kilogram (mass; related to weight)

· second (time)

· ampere (electric current)

· Kelvin (temperature)

· mole (amount of substance)

· candela (intensity of light)

Still other SI units—called SI derived units—are defined algebraically in terms of the above fundamental units. All the base units are consistent with the metric system called the MKS, or mks system, which stands for meter, kilogram, and second. Another metric system is the CGS, or cgs system, which stands for centimeter, gram, and second.

What are some of the common metric/SI prefixes?

The common metric and SI prefixes have been around for a while, but some were only recently added. In 1991, in order to apply standard units (SI units; see above) to a wide range of phenomena (especially in the scientific world), the Nineteenth General Conference on Weights and Measures lengthened the list to accommodate larger (and smaller) metric numbers—with the list now reaching from yotta- to yocto-. The following lists the American system (the name for large numbers) and the corresponding metric prefix and numerical equivalent (for comparison with prefixes and the power of ten, see “Math Basics”):

Common Metric/SI Prefixes

American system        metric prefix/symbol        number

1 septillion           yotta- / Y-                 10²⁴
1 sextillion           zetta- / Z-                 10²¹
1 quintillion          exa- / E-                   10¹⁸
1 quadrillion          peta- / P-                  10¹⁵
1 trillion             tera- / T-                  10¹²
1 billion              giga- / G-                  10⁹
1 million              mega- / M-                  10⁶
1 thousand             kilo- / k-                  10³
1 hundred              hecto- / h-                 10²
1 ten                  deka- / da-                 10
1 tenth                deci- / d-                  10⁻¹
1 hundredth            centi- / c-                 10⁻²
1 thousandth           milli- / m-                 10⁻³
1 millionth            micro- / µ-                 10⁻⁶
1 billionth            nano- / n-                  10⁻⁹
1 trillionth           pico- / p-                  10⁻¹²
1 quadrillionth        femto- / f-                 10⁻¹⁵
1 quintillionth        atto- / a-                  10⁻¹⁸
1 sextillionth         zepto- / z-                 10⁻²¹
1 septillionth         yocto- / y-                 10⁻²⁴
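
For anyone who wants to put the table to work, here is a minimal Python sketch (the dictionary and function names are illustrative only) that converts a prefixed value into base units:

    # Metric prefixes and their powers of ten, from the table above.
    PREFIXES = {
        "yotta": 24, "zetta": 21, "exa": 18, "peta": 15, "tera": 12,
        "giga": 9, "mega": 6, "kilo": 3, "hecto": 2, "deka": 1,
        "deci": -1, "centi": -2, "milli": -3, "micro": -6, "nano": -9,
        "pico": -12, "femto": -15, "atto": -18, "zepto": -21, "yocto": -24,
    }

    def to_base_units(value, prefix):
        """Convert a prefixed value (say, 5 kilometers) to base units (meters)."""
        return value * 10 ** PREFIXES[prefix]

    print(to_base_units(5, "kilo"))      # 5 kilometers   -> 5000 meters
    print(to_base_units(250, "milli"))   # 250 millimeters -> 0.25 meter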

Why is the word “centimillion” incorrect?

Centimillion is a word sometimes incorrectly used to mean 100 million (10⁸). But the metric prefix “centi-” means 1/100, not 100. There are ways to name this number: 100 million could be called a hectomillion; in the United States, it could be called a decibillion.

It is interesting to note that “deca-” is the recommended spelling by the International System (SI), but the United States National Institute of Standards and Technology spells the prefix “deka-.” Thus, either one is considered by most references to be correct. There are also spelling variations between countries; for example, in Italy, hecto- is spelled etto- and kilo- is spelled chilo-. But the symbols remain standard through all languages. And as for other numbers in the metric system—such as 10⁵ or 10⁻⁵—there are no set names or prefixes.

Why are some prefix names different in measurements?

The main reason why a prefix name would differ has to do with pronunciation and vowels: If the first letter of the unit name is a vowel and the pronunciation is difficult, the last letter of the prefix is omitted. For example, a metric measurement of 100 ares (2.471 acres) is a hectare (not hectoare) and 1 million ohms is a megohm (not megaohm). There are exceptions, though, especially if the resulting prefix and unit sound fine, such as a milliampere. There are even times that another letter is added to make it easier to roll off the tongue. For example, an “l” is added to the term for 1 million ergs, making it a megalerg, not a megaerg or megerg.

How did the metric system originate?

In 1791, the French Revolution was in full swing when the metric system was proposed as a much needed plan to bring order to the many conflicting systems of weights and measures used throughout Europe. It would eventually replace all the traditional units (except those for time and angle measurements).

The system was adopted by the French revolutionary assembly in 1795; and the standard meter (the first metric standard) was adopted in 1799. But not everyone agreed with the metric system’s use, and it took several decades before many European governments adopted the system. By 1820, Belgium, the Netherlands, and Luxembourg all required the use of the metric system; France, the originators of the system and its standards, took longer, finally making metric mandatory in 1837. Other countries like Sweden were even slower: They accepted the system by 1878 and took another ten years to change from the old method to the metric.

Is it possible to convert international units seen on such items as vitamin bottles to milligrams or micrograms?

No, there is no direct way to convert international units (IU) to mass units, such as milligrams. Most familiar to people who read vitamin and mineral bottles, an IU has nothing to do with weight; it is merely a measure of a drug or vitamin’s potency or effect. Although it is possible to convert some items’ IUs to a weight measurement, there is no consistent number. This is because not all materials weigh the same and preparations of substances vary, making the total weight of one preparation differ from another.

But there are some substances that can be converted, because for each substance, there is an international agreement specifying the biological effect expected with a dose of 1 IU. For example, for vitamins, 1 IU of vitamin E equals 0.667 milligram (mg); 1 IU of Vitamin C is equal to 0.05 mg. In terms of drugs, 1 IU of standard preparation insulin represents 45.5 micrograms; 1 IU of standard preparation penicillin equals 0.6 microgram.
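
Because each substance has its own agreed-upon factor, any conversion has to be a simple lookup. Here is a minimal Python sketch using only the factors quoted above (the dictionary and function names are illustrative):

    # Milligrams per IU for the substances mentioned above.
    MG_PER_IU = {
        "vitamin E": 0.667,
        "vitamin C": 0.05,
        "insulin": 0.0455,      # 45.5 micrograms
        "penicillin": 0.0006,   # 0.6 microgram
    }

    def iu_to_mg(substance, iu):
        return iu * MG_PER_IU[substance]

    print(iu_to_mg("vitamin E", 30))   # a 30 IU capsule is about 20 mg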

How did the first standard metric measurements evolve over time?

The first standard metric units were developed by 1799: The meter was defined as one ten-millionth of the distance from the equator to the North Pole; the liter was defined as the volume of one cubic decimeter; and the kilogram, as the weight of a liter of pure water.

The standards metamorphosed over the years. For example, the first physical standard meter was in the form of a bar defined as a meter in length. By 1889, the International Bureau of Weights and Measures (BIPM, or Bureau International des Poids et Mesures) replaced the original meter bar. The new bar not only became a standard in France, but copies of the newest bar were distributed to the seventeen countries that signed the Convention of the Meter (Convention du Mètre) in Paris in 1875. The accepted distance became two lines marked on a bar measuring 2 centimeters by 2 centimeters in cross-section and slightly longer than one meter; the bar itself was composed of 90 percent platinum and 10 percent iridium. But it was only a “standard meter” when it was at the temperature of melting ice.

By 1960, the BIPM decided to make a more accurate standard; mostly this was done to satisfy the scientific community’s need for precision. The new standard meter was based on the wavelength of light emitted by the krypton-86 atom (or 1,650,763.73 wavelengths of the atom’s orange-red line in a vacuum). An even more precise measurement of the meter came about in 1983, when it became defined as the distance light travels in a vacuum in 1/299,792,458 second. This is currently the accepted standard.

How is temperature measured?

Temperature is measured using a thermometer (thermo meaning “heat” and meter meaning “to measure”). The inventor of the thermometer was probably Galileo Galilei (1564–1642), who used a device called the thermoscope to measure hot and cold.

Temperatures are determined using various scales, the most popular being Celsius, Fahrenheit, and Kelvin. Invented by Swedish astronomer, mathematician, and physicist Anders Celsius (1701–1744) in 1742, Celsius used to be called the Centigrade scale (it can be capitalized or not; centigrade means “divided into 100 degrees”). He used 0 degrees Celsius as the freezing point of water; the point where water boils was marked 100 degrees Celsius. Because of its ease of use (mainly because it is based on an even 100 degrees), it is the scale most used by scientists. It is also the scale most associated with the metric system.

Image

A thermometer showing degrees Fahrenheit on the left and Celsius on the right.

Fahrenheit is the scale invented by Polish-born German physicist Daniel Gabriel Fahrenheit (1686–1736) in 1724. His thermometer contained mercury in a long, thin tube, which responded to changes in temperatures. He arbitrarily decided that the difference between water freezing and boiling—32 degrees Fahrenheit and 212 degrees Fahrenheit, respectively—would be 180 degrees.

The Kelvin scale was invented in 1848 by Lord Kelvin (1824–1907), who was also known as Sir William Thomson, Baron Kelvin of Largs. His scale starts at 0 degrees Kelvin, a point that is called absolute zero, the temperature at which all molecular activity ceases and the coldest temperature possible. His idea was that there was no limit to how hot things can get, but there was a limit to how cold. Kelvin’s absolute zero is equal to -273.15 degrees Celsius or -459.67 degrees Fahrenheit. So far, scientists believe nothing in the universe can get that cold.

How do you convert temperatures between the various scales?

The following lists how you convert from one temperature scale to another using, of course, simple mathematics:

Fahrenheit to Celsius —C° = (F° - 32) / 1.8; also seen as (5/9)(F° - 32)

Celsius to Fahrenheit —F° = (C° × 1.8) + 32; also seen as ((9/5)C°) + 32

Fahrenheit to Kelvin —K° = (F° - 32) / 1.8 + 273.15

Kelvin to Fahrenheit —F° = (K° - 273.15) × 1.8 + 32

Celsius to Kelvin —K° = C° + 273.15

Kelvin to Celsius—C° = K° - 273.15
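
The same conversions, written as a minimal Python sketch:

    # Temperature conversions among Fahrenheit, Celsius, and Kelvin.
    def f_to_c(f):
        return (f - 32) / 1.8

    def c_to_f(c):
        return c * 1.8 + 32

    def c_to_k(c):
        return c + 273.15

    def f_to_k(f):
        return f_to_c(f) + 273.15

    print(f_to_c(212))       # 100.0 (water boils)
    print(c_to_f(-273.15))   # -459.67 (absolute zero)
    print(f_to_k(32))        # 273.15 (water freezes)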

TIME AND MATH IN HISTORY

What is some of the earliest evidence of keeping time?

No one truly agrees as to what culture(s) first invented timekeeping. Some historians believe that marks on sticks and bones made by Ice Age hunters in Europe around 20,000 years ago recorded days between successive new moons. Another hypothesis states that the measurement of time dates back some 10,000 years, which coincides with the development of agriculture—especially in terms of when to best plant crops. Still others point to timekeeping evidence dating back 5,000 to 6,000 years ago around the Middle East and North Africa. Whatever the true beginnings, most researchers agree that timekeeping is one of those subjects whose history will never truly be accurately known.

What culture took the first steps toward timekeeping?

Around 5,000 years ago, the Sumerians in the Tigris-Euphrates valley (today’s Iraq) appear to have had a calendar, but it is unknown if they truly had a timekeeping device. The Sumerians divided the year into months of 30 days; the day was divided into 12 periods (each corresponding to two of our modern hours) and the periods into 30 parts (each corresponding to four of our minutes).

Overall, most researchers agree that the Egyptians were the first serious timekeepers. Around 3500 B.C.E., they erected obelisks (tall, four-sided monuments), placing them in specific places in order to cast shadows as the Sun moved overhead. This thus created a large, crude form of a sundial. This sundial time was broken into two parts: before and after noon. Eventually, more divisions would be added, breaking down the time units even more into hours. Based on the length of the obelisks’ shadows, the huge sundials could also be used to determine the longest and shortest days of the year.

What was one of the first devices to measure time?

One of the first devices—smaller than the obelisks mentioned above—to measure time was a crude sundial. By about 1500 B.C.E., the true, small sundial (or shadow clock) was developed in Egypt. It was divided into ten parts, with two “twilight” hours marked. But it could only tell time for half a day; after noon, the sundial had to be turned 180 degrees to measure the afternoon hours.

How did our present day become divided into hours, minutes, and seconds?

Divisions into hours, minutes, and seconds probably began with the Sumerians around 3000 B.C.E., as they divided the day into 12 periods, and the periods into 30 sections. About one thousand years later, the Babylonian civilization, which was then in the same area as the Sumerians, broke the day into 24 hours, with each hour composed of 60 minutes, and each minute having 60 seconds.

It is unknown why the Babylonians chose to divide by 60 (that is, to use 60 as a base number). Theories range from connections to the number of days in a year, to weights and measurements, to the idea that the base-60 system was simply easier to use. Whatever the explanation, their methods proved to be important to us centuries later. We still use 60 as the basis of our timekeeping system (hours, minutes, seconds) and in our definitions of circular measurements (degrees, minutes, seconds). (For more information about the Sumerian counting system, see “History of Mathematics.”)

More refinements of measuring time occurred later. In order to correct for the Sun’s changing path over the sky throughout the year, the gnomon—or object that creates the shadow on the sundial—had to be set at the correct angle (what we call latitude). Eventually, the sundial was perfected using multiple designs. For example, shortly before 27 B.C.E., the Roman architect Marcus Vitruvius Pollio (c. 90-20 B.C.E.) described thirteen different designs of sundials in his De architectura.

How does a sundial work?

The sundial tracks the apparent movement of the Sun across the sky. It does this by casting a shadow on the surface of a usually-circular dial marked by hour and minute lines. The gnomon—or the shadow-casting, angular object on the dial—becomes the “axis” about which the Sun appears to rotate. To work correctly, it must point to the north celestial pole (near the star Polaris, also called the North Star); thus, the gnomon’s angle is determined by the latitude of the user. For example, New York City is located at about 40.5 degrees north latitude, so a gnomon on a sundial in that city would be set at a 40.5-degree angle.

The sharper the shadow line, the greater the accuracy; in addition, larger sundials are more accurate, as the hour line can be divided into smaller units of time. But the sundial can’t be too large. Eventually, diffraction of the sunlight around the gnomon causes the shadow to soften, making the time more difficult to read.

What is the definition of a clock?

A clock (from the Latin cloca, or “bell”), is an instrument we use for measuring time. There are actually two main qualities that define a clock: First, it must have a regular, constant, or repetitive action (or process) that will effectively mark off equal increments of time. For example, in the old days before our battery-driven, analog and digital clocks and watches, “clocks” included marking candles in even increments, or using a specific amount of sand in an hourglass to measure time.

Image

A sundial, which uses shadows from the sun to mark the passage of time, is one of the oldest timekeeping devices.

Second, there has to be way to keep track of the time increments and easily display the results. This eventually led to the development of watches, large clocks such as Big Ben in London, England, and even the clocks that count down the New Year. The most accurate clocks today are the atomic clocks, which use an atomic frequency standard as the counter.

How does your clock automatically change?

If you ever wondered how your more recent clocks—analog and digital—change without any twisting of the watch stem on your part, it’s merely a matter of radio control. Inside your timepiece are an antenna and a radio receiver, which allow the watch to synchronize with the atomic clock in Boulder, Colorado, through the National Institute of Standards and Technology. The NIST operates the radio station WWVB, and with its high-power transmitter (50,000 watts) it broadcasts a timing signal 24 hours a day, 7 days a week. The station broadcasts at a low frequency (60 kilohertz), and the time is kept to within less than 0.0001 milliseconds of Coordinated Universal Time (UTC). Your watch or clock catches the signal and decodes the time code bits into the time, day of the year, daylight saving time status, and even leap year and leap second changes. All you have to do is select the time zone so the clock can convert the signal’s UTC to your time.

When the signal gets to your watch, it may be a bit “inaccurate,” as the signal sometimes depends on atmospheric conditions or even distance from the NIST—and we mean a bit. In fact, it may vary as much as a millisecond if the signal is bouncing around between the Earth and the ionosphere. In general, the accuracy is usually off by less than 10 milliseconds. That means you really don’t need to complain—that translates to 1/100th of a second. No doubt you’ll still make the meeting, meet up with your friends after work, and even make it to the baseball game on time!

How was (and is) one second defined?

A second was once defined as 1/86,400 of a mean solar day. By 1956, this definition was changed by the International Bureau of Weights and Measures to 1/31,556,925.9747 of the length of the tropical year 1900. But like most measurements, the definition of the second changed again in 1964, when it was assigned to be the equivalent of 9,192,631,770 cycles of radiation associated with a particular change in state of a cesium-133 atom (also seen as caesium atom; at its “ground state” at a temperature of zero degrees Kelvin).

Interestingly enough, by 1983, the second became the “definer” of the meter: Scientists defined a meter as 1/299,792,458 of the distance light travels in one second. This was done because a definition based on the distance light travels in one second was more accurate than the former definition of the standard meter.

Image

One of the most famous clocks on Earth is Big Ben in London, England. Although today’s digital and atomic clocks are much more accurate, the charm of an old-fashioned analog clock still has its appeal.

Where was the mechanical clock first invented?

It is thought that the first mechanical clock was invented in medieval Europe, and used most extensively by churches and monasteries (mainly to tell when to ring the bells for church attendance). The clocks had an arrangement of gears and wheels, which were all turned by attached weights. As gravity pulled on the weights, the wheels would turn in a slow, regular manner; the turning wheels also drove a pointer, a way of marking the hours, but not yet minutes.

The precursor to accurate timekeeping came around 1500, with the advent of the “spring-powered clock,” an invention by German locksmith Peter Henlein (1480–1542). It still had problems, though, especially with the slowing down of the clock as the spring unwound. But it became a favorite of the rich because of its small size, easily fitting on a mantel or shelf.

MATH AND CALENDARS
IN HISTORY

What is the connection between calendars and math?

A calendar is essentially a numbering system that represents a systematic way of organizing days into weeks, months, years, and millennia, especially in terms of a human lifespan. It was the necessity to count, keep track of, and organize days, months, and so on that gave rise to calendars, a task that also entails a knowledge of mathematics to make such calculations.

When were the first calendars invented?

Although the first crude types of calendars may have appeared some 30,000 years ago—based on the movements of the Moon and found as marks on bones—the Egyptians are given credit for having the first true calendars. Scientists believe that around 4500 B.C.E., the Egyptians needed such a tool to keep track of the Nile River’s flooding. From about 4236 B.C.E., the beginning of the year was chosen as the heliacal rising (when a star is finally seen after being blocked by the Sun’s light) of the star Sirius, the brightest star in the sky located in the constellation of Canis Major. This occurred (and still occurs) in July, with the Nile flooding shortly after, which made it a perfect starting point for the Egyptian calendar. The Egyptians divided the calendar into 365 days, but it was not the only calendar they used. There was also one used for planting and growing crops that was dependent on the lunar month.

What is a lunar-based calendar?

A lunar calendar is based on the orbit of our Moon. The new moon (when you can’t see the Moon because it is aligned with the Sun) is usually the starting point to a lunar calendar. From there, the various phases seen from Earth include crescent, first quarter, and gibbous (these phases after a new moon are also labeled waxing, such as waxing crescent). When the entire face is seen, it is called a full moon; from there, the phases are seen “in reverse,” and are labeled waning, such as waning crescent. Overall, the entire moon cycle takes about 29.530589 days. This cycle was used by many early cultures as a natural calendar.

What was the problem with lunar-based calendars?

Nothing is perfect, especially a lunar month. The biggest drawback with using a lunar calendar was the fractional number of days, which makes a calendar quickly go out of synch with the actual phases of the Moon. The first month would be off by about half a day; the next month, a day; the next month, a day and a half; and so on. One way to help solve the problem was to alternate 30- and 29-day months, but this, too, eventually made the calendars go out of synch.

To compensate, certain cultures added days (intercalations) or subtracted days (extracalations) from their calendars. For example, for more than a thousand years, the Muslims’ lunar calendar has had an intercalation of 11 extra days over a period of 30 years, with each year being 12 lunar months. This calendar is only out of sync by about one day every 2,500 years. Mathematically speaking, the average length of a month over a 30-year period works out as follows: ((29.5 × 360) + 11) / 360 = 29.530556 days, in which 11 is the number of intercalated days, 360 is the number of months in a 30-year cycle (12 months × 30 years), and 29.5 is the average number of days in a calendar month, or (29 + 30) / 2.
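As a quick check of that arithmetic, the same calculation can be written out in a few lines of Python (a sketch added here for illustration only; all of the figures are the ones given above):

```python
# Average calendar-month length over the 30-year cycle described above.

AVERAGE_CALENDAR_MONTH = (29 + 30) / 2   # alternating 29- and 30-day months
MONTHS_PER_CYCLE = 12 * 30               # 12 lunar months a year for 30 years
INTERCALATED_DAYS = 11                   # extra days spread over the 30 years

average = (AVERAGE_CALENDAR_MONTH * MONTHS_PER_CYCLE + INTERCALATED_DAYS) / MONTHS_PER_CYCLE
print(average)                           # 29.53055..., versus the true 29.530589 days

# The leftover error of roughly 0.000033 days per month amounts to
# about one day every 2,500 years, as the text notes.
```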

What is a solar-based calendar?

A solar-based calendar is one based on the apparent movement of the Sun across the sky as we orbit around our star. More than 2,500 years ago, various mathematicians and astronomers were basing a solar year on the equinoxes (when the Sun’s direct rays are on the equator—or beginning of fall and spring) and solstices (when the Sun’s direct rays are on the latitudes marked Tropic of Capricorn [winter in the Northern Hemisphere; summer in the Southern Hemisphere] or Tropic of Cancer [winter in the Southern Hemisphere; summer in the Northern Hemisphere]).

As the measurement of the solar (and lunar) cycle became more accurate, calendars became increasingly sophisticated. But no single calendar dominated until the last few centuries, with many cultures deriving their own calendars—some even combining lunar and solar cycles in a type of moon-sun, or lunisolar, calendar. This is why, although there is one “standard” calendar used by most countries around the world, certain cultures still use their traditional calendars, including the Chinese, Jewish, and Muslim calendars.

Why was the Mayan calendar so different?

The Mayan calendar was different because the culture didn’t have just one calendar, but many. The Calendar Round was based on what we would call a 52-year span, thought to be the average life span of an individual; it consisted of 52 Haabs (years of 365 days), or 52 × 365 = 18,980 days. Mayan astronomers kept meticulous track of the cosmos, even keeping track of, and basing a calendar on, the movements of the planet Venus.

One of the most interesting Mayan calendars was the Long Count Calendar, which represents their longest periods of time. With such a calendar, the Mayans not only had enough “time” to record historic events reaching back more than 52 years, but future events, too. Mayan scholars calculate that the first great cycle of the Long Count Calendar spans about 5,126 years; by coordinating our Gregorian calendar with the Mayan Long Count Calendar, that cycle ends in 2012. Doomsayers claim that the Earth gets trashed at the end of the Long Count Calendar cycle—as if the Mayans had some mystical advice and abilities no one else did (or does) on the planet. Suffice it to say, coordinating our modern calendar with the Mayans’ is not an exact science—which probably means doomsday won’t be here any time soon. (For more information about the Mayan calendar, doomsday predictions, and the year 2012, see “History of Mathematics.”)

How did some ancient cultures refine their calendars?

There were many different ways that various ancient cultures refined their calendars, all of them entailing some type of mathematical calculation. One way to measure the length of a year was by using a gnomon, or a structure that casts a shadow (for more about gnomons and sundials, see above). This was based on the apparent motion of the Sun across the sky, with the shadow not only used to tell daily time, but also to determine the summer solstice, when the shadow created by the gnomon would be at its shortest at noon. By measuring two successive summer solstices, and counting the days in between, various ancient cultures such as the Egyptians developed a more detailed calendar—and as a bonus, determined the exact times of the solstice.

Around 135 B.C.E., Greek astronomer and mathematician Hipparchus of Rhodes (c. 170-c. 125 B.C.E.) compared his estimate of the date of the vernal equinox (spring in the Northern Hemisphere, occurring in March) with one made by another astronomer about 150 years earlier. By dividing the days elapsed between the two observations by the number of years between them, he estimated that a year was equal to 365.24667 days, a figure off by only about 6 minutes and 16 seconds.
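Written out explicitly, the idea looks like the short Python sketch below. The numbers are illustrative stand-ins (Hipparchus’s actual records are not given here), chosen only to be consistent with the result quoted above:

```python
# Estimate the length of the year from two observations of the same
# seasonal event (here, the vernal equinox) made many years apart.

def year_length(days_between, years_between):
    """Elapsed days divided by elapsed years gives days per year."""
    return days_between / years_between

# Hypothetical example: two equinox observations 150 years and 54,787 days apart.
print(year_length(54_787, 150))   # about 365.2467 days, close to Hipparchus's value
```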

What was the Roman calendar?

According to legend, the first Roman calendar appeared when Rome was founded, about 750 B.C.E. When it actually started is still up for discussion, and it apparently changed many times. The calendar was loosely based on both solar and lunar cycles. At first, it had ten months, starting in March and ending in December; January and February were added as the calendar was modified. Politics entered into this seemingly ever-changing calendar, too, with certain officials deciding to add days whenever they desired, and even choosing what to name certain months.

What was the Julian calendar?

By the time of Julius Caesar (100-44 B.C.E.), Roman calendar-keeping was a mess. Caesar decided to reform the Roman calendar, asking for help from astronomer and mathematician Sosigenes of Alexandria (first century B.C.E.; not to be confused with Sosigenes the Peripatetic [c. 2nd century], an Egyptian philosopher). To bring the calendar back in line with the seasons, the year 46 B.C.E. was given 445 days, a time appropriately called “the year of confusion.”

Why does the western calendar start with the birth of Christ?

The story behind the western calendar—the one that developed into the calendar most often used today—started in the middle of the 6th century. Pope St. John I asked Dacian monk and scholar Dionysius Exiguus (“Dennis the Small,” c. 470-c. 540; born in what is now Romania) to calculate the dates on which Easter would fall in future years. Dionysius, often called the inventor of the Christian calendar, decided to abandon the calendar numbering system that counted years from the beginning of Roman Emperor Diocletian’s reign. Instead, being of Christian persuasion, he replaced it with a system that started with the birth of Christ. He labeled that year “1,” mainly because there was no concept of zero in Roman numerals.

Sosigenes began the reformed year on January 1, 45 B.C.E., a year with 365 days, and proposed an additional day in February every fourth year (the leap day). The months January, March, May, July, August, October, and December had 31 days; the other months had 30 days, except February, which had 28 or 29 days depending on whether it was a leap year. The Julian calendar had only one leap-year rule: every year divisible by 4 was a leap year.

The vain heir to Caesar, Augustus Caesar (63 B.C.E.-14 C.E.; a.k.a. Gaius Octavius, Octavian, Julius Caesar Octavianus, and Caesar Augustus), would change the Julian calendar in several ways. Not only did he name the eighth month (formerly Sextilis) after himself, but he also changed the number of days in many months to their present lengths, adding more confusion to the calendar.

The Julian calendar would govern Caesar’s part of the world until 1582. Not that the Julian year was perfect: at 365.25 days, it was too long by about 11 minutes 12 seconds. Although the difference between today’s measurement of the year and the Julian year was not great, it adds up to about 7.8 days over 1,000 years. But as with many decrees and mandates, Caesar, Sosigenes, and Octavian left it up to future generations to fix the problem.
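The leap-year rule and the resulting drift can be checked with a small Python sketch (added here for illustration; the 365.2422-day modern year length is the figure quoted later in this chapter):

```python
# The single Julian leap-year rule, plus the drift it produces.

def is_julian_leap_year(year):
    """Julian rule: every year divisible by 4 is a leap year."""
    return year % 4 == 0

JULIAN_YEAR = 365.25      # average length of a Julian calendar year
TROPICAL_YEAR = 365.2422  # modern measurement of the year (see later in this chapter)

drift_per_year = JULIAN_YEAR - TROPICAL_YEAR
print(drift_per_year * 24 * 60)   # about 11 minutes too long each year
print(drift_per_year * 1000)      # about 7.8 days of drift per 1,000 years
```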

What is the Gregorian calendar?

By 1582, the discrepancies in the Julian calendar were not interfering with everyday timekeeping, but they were beginning to affect the dates of the church’s ecclesiastical holidays. The powerful Catholic Church was not amused: Pope Gregory XIII, on the advice of several of his astronomers, decided to realign the calendar, striking out the excess ten days that had accumulated in the calendar then in use. Thus, October 4, 1582 was followed by October 15, 1582.

What are some interesting facts about the Julian and Gregorian calendars?

An interesting fact about the Julian calendar is that it designates every fourth year as a leap year, a practice that was first introduced by King Ptolemy III of Egypt in 238 B.C.E. A quirk about the Gregorian calendar is that the longest time between two leap years is eight years. The last time such a stretch was seen was between 1896 and 1904; it will happen again between 2096 and 2104.

To keep the extra days from accumulating again, the pope made sure that the last year of each century would be a leap year only when it was exactly divisible by 400. That means three leap years are dropped every four centuries; for example, 1900 was not a leap year, but 2000 was. (Today, the Gregorian calendar “rules” state that every year divisible by four is a leap year, except for years that are divisible by 100 but not by 400.)
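Those rules translate directly into a short piece of Python (a sketch added for illustration):

```python
# The Gregorian leap-year rules as stated above.

def is_gregorian_leap_year(year):
    """Leap if divisible by 4, unless divisible by 100 and not by 400."""
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0

print(is_gregorian_leap_year(1900))   # False: divisible by 100 but not by 400
print(is_gregorian_leap_year(2000))   # True: divisible by 400
print(is_gregorian_leap_year(2012))   # True: divisible by 4 only

# Skipping 1900 and 2100 is also why the leap years 1896 and 1904,
# and 2096 and 2104, are eight years apart.
```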

Some countries eliminated the ten extra days right away, starting “fresh” with the Gregorian calendar. But not everyone agreed with the new calendar, especially those who distrusted and disliked the Catholic Church. By 1700, those who had not changed their calendars had accumulated yet another extra day. In 1752, the English Parliament decreed that eleven days would be omitted from the month of September, and England and its American colonies began to follow the Gregorian calendar, with many other countries eventually following. It is now the standard calendar used around the world.

What is a problem with our modern calendar?

The modern calendar could use some small changes, such as making sure we don’t have to keep changing calendars each year (see below). But the real problem with the modern calendar isn’t the “human factor”; it’s nature. As the Earth spins on its axis, it wobbles like a spinning top in a process called precession. Because scientists can measure the planet’s movements more accurately now than in the past, they know that the wobble is increasing. This is because the tides caused by the pull of the Sun and Moon are slowing the Earth’s spin. And like a top, as the spinning slows, the wobble increases and the length of the year decreases.

What does this mean for our calendar? It is already known that the calendar and the true length of a year were off by only 24 seconds (0.00028 days) in 1582, a very small discrepancy but one that eventually adds up. Add in the slowing of the Earth’s rotation and the year becomes shorter still: since 1582, the year has decreased from 365.24222 days to 365.24219 days, a decline of about 2.5 seconds.

The 2012 world calendar. Each year, the calendar is slightly different because there are 365 days in a year, which is not evenly divisible by the seven-day week. Also, every four years—leap years—there is an extra day in February.

Can we change the calendars now in use?

The present calendar is an annual one, changing every year, much to the delight of calendar publishers. This is because 365 days in a year is not evenly divisible by the seven days in a week: 365 / 7 = 52, with a remainder of 1 (or 52.142857…). This means that a given common year begins and ends on the same weekday, and it also means that the next year bumps January 1 (and every date after it) to the next weekday, so a new calendar is born each year. But because the calendar we now have is so ingrained in everything we do, it is doubtful that there will be any changes soon.
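The remainder arithmetic is easy to verify in Python (a sketch for illustration only):

```python
# Why dates shift each year: a week has 7 days, and 365 is one more
# than a whole number of weeks (and 366 is two more).

print(divmod(365, 7))   # (52, 1): a common year is 52 weeks plus 1 day
print(divmod(366, 7))   # (52, 2): a leap year pushes dates ahead 2 weekdays

# So if January 1 falls on a Monday one year, it falls on a Tuesday the
# next (or on a Wednesday, if the first year was a leap year).
```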

Not that there haven’t been suggestions. One is called the World (or Worldsday) Calendar, in which each date always falls on the same day of the week, so holidays occur on the same weekday every year. With this calendar, each year begins on Sunday, January 1, and each working year begins on Monday, January 2. The calendar is called “perpetual” or “perennial” because the 365th day, following December 30 (our current “December 31”) and marked with a “W” for “Worldsday,” stands outside the seven-day week; the remaining 364 days divide evenly into 52 weeks. In leap years an extra day would still have to be added, probably at the end of June (some suggest adding a June 31). Both extra days could act as world holidays.

The drawbacks? Besides the obvious—no one wanting to change an already entrenched system—the superstitious would revolt. After all, on the World Calendar, there are four Friday the 13ths every year.