MATH IN COMPUTING - The Handy Math Answer Book

The Handy Math Answer Book, Second Edition (2012)

MATH IN COMPUTING

EARLY COUNTING AND CALCULATING DEVICES

Why were counting devices developed?

Early counting devices were developed for a logical reason: to allow people to count items in order to trade or to keep track of stock, such as cattle. They also used simple counting devices to keep track of the seasons (mostly for agriculture—in other words, to know when to plant), and for religious reasons, such as marking days for certain feasts. (For more about counting in ancient times, see “History of Mathematics.”)

What were some early counting devices?

The very earliest counting devices were human hands, with the fingers used as digits. There were limitations to this device, though, especially since each hand has only five fingers. To count more items, some cultures assigned even larger counts to other parts of the body. Such counting methods became tedious, so merchants and others who needed to keep track of assorted items turned to nature, using sticks, stones, and bones to count.

Eventually, devices called counting boards were developed. At first, the counting “boards” were simple, usually entailing drawing lines with fingers or a stylus in the sand or dirt. After all, merchants at outdoor markets needed to count items and calculate the cost of the goods in order to sell, and there was always plenty of sand and dirt at hand.

Portable boards made of wood, stone, or metal soon became more popular, with carved (or even painted) grooves or lines indicating units. These counting boards soon became more sophisticated, with beads, pebbles, or metal discs moved between the grooves or lines, allowing for an even larger number of items to be counted. Over even more time, they grew into what is called an abacus, a device with a frame holding rods with free-moving beads attached.

What is an abacus?

An abacus (the plural being either abacuses or abaci) is one of the earliest counting devices. The term comes from a Latin word with origins in the Greek words abax or abakon, meaning “tablet” or “table”; these words probably originated with the Semitic word abq, or “sand.” The devices—originally made from wood but now usually including plastic—perform arithmetic functions by manually sliding counters (usually beads or discs) on rods or wires.

Image

The Salamis tablet, discovered in 1846, is the oldest surviving counting board. It was used by the Babylonians around 300 B.C.E.

Contrary to popular belief, abaci were not truly calculators in the sense of the word today. They were used only as mechanical aids for counting. The calculations were done inside the user’s head, with the abacus helping the person keep track of sums, subtractions, and carrying and borrowing numbers. (For more about carrying and borrowing in arithmetic, see “Math Basics.”)

Have there been different types of abaci over the centuries?

Yes, there have been many different types of abaci over the centuries, including the Roman abaci discussed below. The first type of abacus came into use in China about 1300 and was called a suanpan. Historians do not agree as to whether it was a Chinese invention or not; some say it came from Japan via Korea. Although merchants used this type of abacus for standard addition and subtraction operations, it could also be used to determine square and cube roots of numbers.

The Japanese abacus, or soroban, was similar to the Chinese abacus, but it eliminated one bead each from the upper and lower deck in each column. Thus, it is more similar to the Roman abacus. The Russians also have their own version of an abacus; it uses ten beads on each wire and a single deck, with a separation between groups of wires marked by one wire that carries fewer beads.

What are the oldest surviving counting boards?

To date, the oldest surviving counting board is the Salamis tablet. Discovered in 1846 on the island of Salamis, it was once thought to be a gaming board, but historians have since determined that the white marble slab was actually used to count items. The tablet, which measures 59 inches (149 centimeters) in length, 30 inches (75 centimeters) in width, and is 1.8 inches (4.5 centimeters) thick, was used by the Babylonians around 300 B.C.E. It contains five groups of markings, with a set of five parallel lines equally divided by a vertical line in the center; below that is a group of 11 parallel lines, all divided by a perpendicular line.

But this was not the only counting board of that time. After the Salamis tablet was developed, the Romans brought out the Calculi and the hand-abacus around 300 B.C.E. to 500 C.E. These counting boards were made of stone and metal. One example of a Roman abacus had eight long and eight short grooves arranged in a row; beads would slide into the grooves, indicating the counted units. The longer grooves were marked I to indicate single units, X to indicate tens, and so on up to millions; the shorter grooves were used to indicate multiples of five (five units, five tens, and so on). There were also shorter grooves on the right side of the abacus, which were probably used to indicate Roman ounces and for certain weight measurements.

How are modern abaci used?

Today’s standard abacus is typically constructed of wood or plastic and varies in size. Most are about the size of a small laptop computer. The frame of the device has a series of vertical rods or wires on which a number of wooden beads slide freely. A horizontal beam separates the frame into two sections called the lower and upper decks.

For example, in a Chinese abacus, the lower and upper decks each have 13 columns; the lower deck has five beads per column, while the upper deck has two beads. Each bead on the upper deck has a value of five, while each bead on the lower deck has a value of one (thus, it is called a 2/5 abacus). To use the abacus, users place the abacus flat on a table or their laps; they then push all the beads on the upper and lower decks away from the horizontal beam. From there, the beads are manipulated, usually with the index finger or thumb of one hand, to calculate a problem. For example, if you wanted to express the number 7, you would move two beads in the lower deck and one bead in the upper deck: (1 + 1) + 5 = 7.
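For readers who like to see the arithmetic spelled out, here is a minimal sketch in the Python programming language—the function name and layout are invented for illustration, not part of any standard abacus notation—showing how one column of a 2/5 abacus encodes a single decimal digit:

# A minimal sketch (illustrative only): how one column of a 2/5 abacus
# can encode a single decimal digit as beads pushed toward the beam.
def abacus_column(digit):
    """Return (upper_beads, lower_beads) for a digit from 0 to 9."""
    if not 0 <= digit <= 9:
        raise ValueError("each column holds a single decimal digit")
    upper = digit // 5        # each upper-deck bead counts as five
    lower = digit % 5         # each lower-deck bead counts as one
    return upper, lower

# The example from the text: 7 = one upper bead (5) + two lower beads (1 + 1)
print(abacus_column(7))       # prints (1, 2)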

This modern abacus is still used by shopkeepers in Asia and many so-called “Chinatowns” in North America. Students continue to be taught how to use the abacus in Asian schools, especially to teach children simple mathematics and multiplication. In fact, it is an excellent way to remember multiplication tables and is useful for teaching other base numbering systems, because it can adapt itself to any base. (For more about base numbers, see “History of Mathematics” and “Math Basics.”)

Image

The beads are arranged in this illustration of an abacus to represent the number 38,704.

What are the world’s smallest and largest abaci to date?

In 1996 scientists in Zurich, Switzerland, built an abacus with individual molecules as beads that all had diameters of less than one nanometer, or one millionth of a millimeter. The beads of the world’s smallest abacus were not moved by a mere finger, but by the ultrafine, conical shaped needle in a scanning tunneling microscope (STM). The scientists succeeded in forming stable rows of ten molecules along steps just one atom high on a copper surface. These steps acted like the earliest form of the abacus (grooves instead of rods to keep the beads in line). Individual molecules were then pushed back and forth in a controlled way by the STM tip, allowing the scientist to manipulate the molecules and “count” from 0 to 10.

There have also been contenders for the world’s largest abacus. In 2001 the Science Museum in London, England, claimed to have the largest at 15.4 feet (4.7 meters) long by 7.2 feet (2.2 meters) wide. Also in 2001, a new contender from Thailand appeared: an 18-foot (5.5 meter) long abacus claiming to be both the world’s biggest abacus and biggest non-electric calculator. It resides in the resort town of Rayong, southeast of Bangkok, and was made by a pharmacist who wanted to calculate drug bills faster than an electronic calculator; the latter claim has been shown to be true, at least by those who can work the abacus with high proficiency, though this was back in 2008, before today’s many gigabyte personal computers and supercomputers (see below).

Image

The people of the ancient Inca civilization of South America used knotted strings called khipus to make mathematical calculations.

What is a khipu?

Khipus (or quipus, in the Spanish spelling) were used by the Incas of South America. A khipu is a collection of knotted strings that record certain information. The approximately 600 surviving khipus use an arrangement of knotted strings hanging from horizontal cords. But these knots are nothing like those made by other cultures: They include long knots with four turns, single knots, figure-eight knots, and a whole host of other knot types. Historians believe these strings and knots represent numbers once used for accounting, inventory, and population census purposes.

There are also researchers who believe the khipus may contain certain messages in some sort of code—a kind of language used by the Incas—based on the strings, knots, and even a khipu string’s type (usually alpaca wool or cotton) and color. But it may turn out that historians will never know the real story behind the khipus. When the Spanish conquered the Inca Empire starting in 1532, they destroyed most of the strings, believing they might be idolatrous items containing accounts of Incan history and religion.

What are Napier’s Bones?

A tool called Napier’s Bones (also called Napier’s Rods) was invented by Scottish mathematician John Napier (1550–1617). These were multiplication tables inscribed on strips (also called rods) of bone (not Napier’s, but animal bone), ivory, or wood. He published the idea in his book Rabdologia, which contained a description of the rods that aided in multiplication, division, and the extraction of square roots. (For more about Napier, see “Algebra.”)

Each bone is a multiplication table for a single digit, with the digit appearing at the top of its bone. As seen below, consecutive, non-zero products of this digit are carved in the rod, with each product occupying a single cell. For example, to multiply 63 by 6, the two bones or rods corresponding to 6 and 3 would be put alongside each other and would look like the following illustration.

Image

In this example of how to use Napier’s Bones, 63 is multiplied by 6 to get the correct result of 378.

The answer is read off diagonally at the sixth row: the first digit is the 3 standing alone in its diagonal; the next digit comes from adding the two numbers that share a diagonal, or 7 (6 + 1); and the last digit is the 8 in the final diagonal. In other words, 63 × 6 = 378.
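The same diagonal bookkeeping—each rod cell holding a partial product, with the "carry" passed up the diagonal—can be sketched in a few lines of Python. This is only an illustration of the idea, not a claim about how the rods were historically manipulated:

# A minimal sketch (illustrative only) of the idea behind Napier's Bones:
# multiply each digit of the number by the single digit, then combine the
# partial products place by place, carrying just as the rods are read.
def napier_multiply(number, digit):
    """Multiply a multi-digit number by a single digit, Napier-style."""
    result = 0
    carry = 0
    place = 1
    for d in reversed(str(number)):          # work right to left
        product = int(d) * digit + carry     # one cell of the rod, plus carry
        result += (product % 10) * place     # the digit kept in this place
        carry = product // 10                # the digit passed up the diagonal
        place *= 10
    return carry * place + result

print(napier_multiply(63, 6))   # prints 378, as in the illustration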

Initially, the tables were used by merchants to speed up calculations. German astronomer and mathematician Wilhelm Schickard (1592–1635) would eventually build the first calculating machine based on Napier’s Bones in 1623. His device could add, subtract, and, with help, multiply or divide. This is why he is often called the “father of the computing era” (see below).

MECHANICAL AND ELECTRONIC CALCULATING DEVICES

Who built the first known adding machine?

No one knows who built the first adding machine, although many historians believe it was German mathematician Wilhelm Schickard (1592–1635) who first invented a mechanical calculator in 1623 based on Napier’s Bones (see above). Schickard and his family perished from the bubonic plague, and it was not until the mid-20th century that his notes and letters—which included diagrams showing how to construct his machine—were discovered. Schickard apparently built two prototypes: One was destroyed in a fire and the other one’s location is unknown, if it survived at all. His device, which he called the “calculating clock,” was able to add and subtract up to six-digit numbers using a mechanism of gears and wheels.

What did Blaise Pascal invent
that eventually caused his interest in math to wane?

French mathematician and philosopher Blaise Pascal (1623–1662) devised the Pascaline in 1642, when he was only 18 years old; he had it built by 1643. This device was possibly the first mechanical adding machine used for a practical purpose. He built it with his father (a tax collector) in mind to help him with the tedious task of adding and subtracting large sequences of numbers.

But the device was not very helpful for a variety of reasons, especially since it used base 10 and did not match up with the divisions of the French currency. Other reasons for its rejection are familiar to every century: The device was much too expensive and unreliable, along with being too difficult to use and manufacture. Eventually, Pascal’s interest in science and mathematics waned. In 1655 he entered a Jansenist convent, studying philosophy until his death.

But not all historians credit Schickard. Some believe that there was an even earlier attempt at mechanical computing by Leonardo da Vinci (1452–1519), who also apparently designed an adding machine. Some of his notes were found in the National Museum of Spain in 1967 and describe a machine bearing a certain resemblance to Pascal’s machine (see above).

How did Gottfried Wilhelm von Leibniz advance calculating devices?

German mathematician and philosopher Gottfried Wilhelm von Leibniz (1646–1716) not only described the binary number system—a major concept of all modern computers—but also co-invented differential calculus and designed a machine that would perform the four basic arithmetic functions. By 1674 he had completed his design and commissioned the building of the Leibniz Stepped Drum, or the Stepped Reckoner, as he called his machine.

The device used a special type of gear named the Leibniz wheel (or stepped drum), a cylinder with nine bar-shaped teeth of increasing length set parallel to the cylinder’s axis. As the cylinder was rotated with a crank, a ten-toothed wheel would rotate from zero to nine positions, depending on where it sat along the drum. The movements of the various mechanisms would be translated into multiplication or division, depending on the direction in which the stepped drum was rotated.

Although there were apparently only two prototypes of the device (both still exist), Leibniz’s design—along with Pascal’s—was the basis for most mechanical calculators in the 18th century. As with most such machines that could not be mass produced—much less understood by the masses—they were more curiosities for display than machines put to actual use.

How did Joseph-Marie Jacquard’s invention benefit calculating devices?

In the late-18th century, French weaver and inventor Joseph-Marie Jacquard (1752–1834) developed a practical, automatic loom that wove patterns into fabric; it was controlled by a linked sequence of punched cards. This in itself was a major advance in the production of textiles, but it would also prove to be a boon to calculating devices. Borrowing Jacquard’s idea, both Charles Babbage and Herman Hollerith (see below) would use such cards on their own computing machines. The company that Hollerith formed eventually became International Business Machines (IBM), a company that for 30 years promoted and benefited from mechanical punched card processing.

What was the difference engine?

Because of its automatic sequential approach, the difference engine is thought of by most mathematical historians to be the precursor to modern computers. Johann H. Müller, an engineer in the Hessian army, first developed the concept in 1786. His idea was to have a special machine that would evaluate and print mathematical tables by sequentially adding the differences between successive values of a polynomial. But he could not get the funds to build the machine.

Müller’s idea was soon lost before it was resurrected in 1822, when Charles Babbage obtained government funds to build a programmable, steam-powered prototype of Müller’s device (for more information about Babbage, see below). Because of technical limitations, funding cuts, and Babbage’s interest in a more advanced device of his own design, Müller’s difference engine was only partially completed. Eventually, Swedish inventors George Scheutz (1785–1873) and his son Edvard (1821–1881) would in 1853 build the difference engine, the first calculator with the ability to print.
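The principle behind the difference engine—producing an entire table of polynomial values from a starting value and its finite differences, using nothing but repeated addition—can be illustrated with a short Python sketch. The function name and the sample polynomial are invented for illustration:

# A minimal sketch of the principle behind the difference engine: once the
# starting values and their finite differences are known, every further value
# of a polynomial can be produced by additions alone -- no multiplication.
def difference_table(values, order):
    """Extend a table of polynomial values using constant n-th differences."""
    # Build the initial differences from the first few tabulated values.
    diffs = [values[:]]
    for _ in range(order):
        prev = diffs[-1]
        diffs.append([b - a for a, b in zip(prev, prev[1:])])

    # The engine's "turn of the crank": add each difference into the one above.
    results = values[:]
    current = [d[-1] for d in diffs]
    for _ in range(10):                      # produce ten more table entries
        for i in range(order - 1, -1, -1):
            current[i] += current[i + 1]
        results.append(current[0])
    return results

# Tabulating x**2 + x + 1 from its first three values (order-2 differences).
print(difference_table([3, 7, 13], 2))       # 3, 7, 13, 21, 31, 43, ...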

Image

Recognized for his connection to the famous difference engine, English mathematician Charles Babbage had to abandon his elaborate plans for a mechanical computer because the device was simply too expensive.

Who was Charles Babbage?

English inventor and mathematician Charles Babbage (1792–1871) is considered by some historians to be the “father of computing.” The main reason was his Analytical Engine, which is thought to be the true precursor of the modern computer.

When was the calculating machine first mass produced?

Between the 1642 invention of Blaise Pascal’s calculating machine and 1820, there were about 25 manufacturers of such devices. Because most of these ventures had little funding and involved only one person, very few machines were actually manufactured in any quantity.

By 1820 the first calculating machine to be commercially successful and produced in large numbers was the “Arithmometer.” Invented by Frenchman Charles Xavier Thomas de Colmar (1785–1870) while he was serving in the French army, it was based on Leibniz’s “stepped drum” mechanism (see above). Colmar’s machine used a simple system of counting gears and an automatic carry (automatically shifting a 1 to the column to the left when the sum of a certain column was greater than 9). The technology of the times also helped catapult Colmar’s success. Because it included springs and other machinery that offset the momentum of moving parts, the Arithmometer stopped at a specific, intended point, unlike what often happened with the older calculating machines.

One of Babbage’s first ventures into calculating machines was the difference engine, which was based on Johann Müller’s design and some of Thomas de Colmar’s Arithmometer features (for more information about both, see above). The idea was sound, but the execution eventually lacked government funding, not to mention suffering from disputes with the artisan who was making the parts for the machine. Not only that, but Babbage’s ambitions may have caused the difference engine prototype to come to a halt. Initially, he wanted the device to go to six decimal places and a second-order difference; then he began planning for 20 decimal places and a sixth-order difference. This much-larger machine was an overwhelming concept for its time.

The abandonment of the difference engine did not stop Babbage, however. Again approaching the government for funding, he promised to build what he called the Analytical Engine, an improved device capable of any mathematical operation, effectively making it a general purpose, programmable computer that used punch cards for input. This new device would use a steam engine for power, and its gears would function like the beads of an abacus, with the main tasks of calculating and printing mathematical tables. For eight years, he attempted to get more money from the government, but to no avail. He would never build his Analytical Engine.

Although the Analytical Engine was never completed in Babbage’s lifetime, his son Henry Prevost Babbage built the “mill” portion of the machine from his father’s drawings, and in 1888 he computed multiples of pi (π) to prove the acceptability of the design. This is often thought to represent the first successful test of a “modern” computer part.

Who is sometimes called the “first programmer”?

One of the first “programmers”—in this case, of a calculating machine—was Ada Augusta Byron (1815-1852; also known as Ada King, Countess of Lovelace), the daughter of Lord George Gordon Noel Byron (1788–1824), the famous English poet. Inventor and mathematician Charles Babbage met Ada Byron around 1833, while still working on his difference engine. Her interest was reportedly more in his mathematical genius, not his machines.

Besides her admiration for him, Ada Byron also put Babbage’s name on the computing map, writing up most of the information about his work, which was something Babbage supposedly could not do as well. For example, she translated an 1842 account of his Analytical Engine (written by French-born Italian engineer and mathematician Luigi Federico Menabrea [1809-1896]) from French into English. Babbage was so impressed that he suggested she add her own notes and interpretations of the machine. With his encouragement, she added copious notes, describing how the Analytical Engine could be programmed, and wrote what many consider to be the first-ever computer program. Her account was published in 1843. She was also responsible for the term “do loop” in computer language (a part of a program she called “a snake biting its tail”) and for developing the “MNEMONIC” technique that eventually helped simplify assembler commands.

Ada Byron’s life deteriorated after writing her notes because of family difficulties, gambling debts (though not her own), the lack of a scientific project to work on, and probably the fact that none of her friends were as deeply—and intuitively—involved in mathematics or the sciences as she was. Babbage was no help, either, having his own difficulties, including his ongoing attempts to obtain governmental funding for his Analytical Engine. In 1852, at only 37 years of age, Ada Byron died of cancer, but she was not forgotten. She was remembered and honored in 1980 when the ADA programming language was named after her.

What is a troncet?

The troncet (or addiator) is credited to J. L. Troncet of France, who invented the device in 1889. He called it his Arithmographe. (In actuality, his work was based on earlier designs first begun by Claude Perrault [1613–1688].) It was used principally for addition and subtraction.

A troncet’s flat, mechanical, palm-held calculator had three main components: the part for the calculation, a stylus, and a handle to reset the addiator. By inserting the tip of the stylus into notches along a metal plate, numbers could be added by sliding either up or down strips of metal with numbers marked on them. No gears or inter-linked parts were involved. To “carry one” when the sum of two digits was greater than nine, the stylus was moved up to and around the top of the device.

What is a slide rule?

The slide rule is a ruler-like device with logarithmic scales that allows the user to do mathematical calculations. It is portable, with the most common slide rules using three interlocking calibrated strips; the central strip can be moved back and forth relative to the other two. Calculations are performed by aligning marks on the central strip with marks on the fixed strips, then reading marks on the strips. There is also a “see through” sliding cursor with a hairline mark perpendicular to the scales, allowing the user to line up numbers on all the scales.

Sadly for mathematical traditionalists, the use of the slide rule was eventually overtaken by the pocket calculator by the mid-1970s. But in other ways, this development was welcome. The slide rule had two major drawbacks, especially for calculations in mathematics, engineering, and the sciences: It was not easy to add with the device and it was only accurate to three digits.
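A minimal Python sketch, assuming nothing more than the logarithmic principle just described, shows why sliding two log scales past each other multiplies numbers—and why the answer was only good to about three significant digits. The function name and the rounding step are illustrative choices, not a description of any particular slide rule:

# A minimal sketch of the slide rule's principle: sliding one logarithmic
# scale along another adds the lengths log(a) + log(b), which corresponds
# to the product a * b. Reading the scales by eye limited answers to about
# three significant digits, which the rounding below imitates.
import math

def slide_rule_multiply(a, b):
    length = math.log10(a) + math.log10(b)   # the two distances laid end to end
    product = 10 ** length                   # reading the answer off the scale
    digits = 3                               # roughly what the eye could resolve
    return round(product, digits - 1 - int(math.floor(math.log10(abs(product)))))

print(slide_rule_multiply(2.34, 5.67))       # about 13.3 (exact value 13.2678)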

How did the slide rule evolve?

In 1620 English astronomer Edmund Gunter (1581–1626) was responsible for constructing a scale rule that could be used to multiply. He divided his scale according to Napier’s principle of logarithms, meaning that multiplication could be done by measuring and adding lengths on the scale. (It is also often considered the first analog computer.)

But there is disagreement as to the true inventor of the slide rule. Many historians give the credit to English reverend William Oughtred (c. 1574-1660), who improved upon Gunter’s idea. About 1630 (although that date is highly debated), Oughtred placed two of Gunter’s scales directly opposite each other and demonstrated that one could do calculations by simply sliding them back and forth.

The slide rule was not immediately embraced by scientists, mathematicians, or the public. It took until about 1850, when French artillery officer Victor Mayer Amédée Mannheim (1831–1906) standardized the modern version of the slide rule, adding the movable double-sided cursor that gives the slide rule its familiar appearance. Slide rules were used for many decades as the major calculator for the sciences and mathematics, and ranged in shapes from straight rules to rounded.

When was a mechanical calculating device first used for the American census?

When government officials estimated that the 1890 census would have to handle the data from more than 62 million Americans, there was a slight panic. After all, the existing system was slow and expensive, using tally marks in small squares on rolls of paper, which were then added together by hand. One estimate determined that such an endeavor would take about a decade to complete, which would be just in time to start the process all over again for the 1900 census. In desperation, a competition was set up to invent a device that could easily count the 1890 U.S. census.

What is the Millionaire Calculator?

The Millionaire Calculator, invented in 1892 by Otto Steiger, solved many of the problems associated with other devices’ multiplication. While earlier machines required several turns of their calculating handle to multiply, the Millionaire multiplied a number by a single digit with only one turn of its handle. Its mechanism included a series of brass rods varying in length; these rods executed functions based on the same concept as Napier’s Bones (for more about Napier’s Bones, see above). The calculator was a hit, and around 4,700 machines were manufactured between 1899 and 1935.

Thus, in the 1880s, American inventor Herman Hollerith (1860–1929), who is also known as the father of modern automatic computation, presented his competition-winning idea. He used Jacquard’s punched cards to represent the population data, then read and collated the information with an automatic machine. With his Automatic Tabulating Machine—an automatic electrical tabulating device—Hollerith would put each individual’s data on a card. With a large number of clocklike counters, he would then accumulate the results. From there, he would use switches so the operators could instruct the machine to examine each card based on a certain characteristic, such as marital status, number of children, profession, and so on. It became the first such machine to read, process, and store information.

The machine’s usefulness did not end there, though. Eventually, Hollerith’s device became useful for a wide variety of statistical applications. Certain techniques used in the Automatic Tabulating Machine were also significant, helping in the eventual development of the digital computer. Hollerith’s company would also eventually become well-known, becoming International Business Machines, or IBM, in 1924.

What were some of the first motor-driven calculating devices?

Many historians believe that the first motor-driven calculating machine was the Autarigh, a device designed by Czechoslovakian inventor Alexander Rechnitzer (1879–1922) in 1902. The next step occurred in 1907, when Samuel Jacob Herzstark (1867–1937) produced a motor-driven version of his Thomas-based calculators in Vienna. In 1920 a prolific Spanish inventor named Leonardo Torres Quevedo (1852–1936) presented an electromechanical machine wired to a typewriter at the Paris Calculating Machine Exhibition. His invention performed addition, subtraction, multiplication, and division, and then used typewriters as input/output devices. Interestingly enough, even though the machine made a hit at the exhibition, it was never produced commercially.

Has there ever been a competition
between a calculator and an abacus?

Yes, there was once a competition between someone using a calculator and another person using an abacus. Although the abacus is often considered a “crude” device to do simple calculations, in expert hands it can work just about as fast as a calculator.

The contest took place in Tokyo, Japan, on November 12, 1946, between the Japanese abacus and an electric calculating machine. The event was sponsored by the U.S. Army newspaper Stars and Stripes. The American working the calculating machine was Private Thomas Nathan Wood of the 20th Finance Disbursing Section (from General MacArthur’s headquarters), who was considered an expert calculator operator. The Japanese chose Kiyoshi Matsuzaki, himself an expert operator of the abacus, from the Savings Bureau of the Ministry of Postal Administration. In the end, the 2,000-year-old abacus beat the electric calculating machine in adding, subtracting, dividing, and a problem including all three with multiplication thrown in. The machine only won when it came to problems in multiplication.

More and more such calculating devices with electric motors were invented. By the 1940s, the electric-motor-driven mechanical calculator had become a common desktop tool in business, science, and engineering.

What is an electronic calculator?

Most people nowadays are familiar with electronic calculators: small, battery-powered digital electronic devices that perform simple arithmetic operations and are limited to handling numerical data. Data are entered using a small keypad on the face of the calculator; the output (or result) is most commonly a single number on an LCD (Liquid Crystal Display) or other display. It took a long time to go from the electric-motor-driven mechanical calculator to the electronic calculator. In 1961, the company Sumlock Comptometer of England introduced the ANITA (A New Inspiration To Arithmetic), the first electronic calculator.

MODERN COMPUTERS AND MATHEMATICS

What is a computer in today’s sense of the word?

In general, and simply put, a computer is a machine that performs a series of mathematical calculations or logic operations automatically. Computer specialists often divide computers into two types: An analog computer operates on continuously varying data; a digital computer performs operations on discrete data.

The majority of today’s computers easily process information much faster than any human. The response (output) of a computer depends on the data (input) of the user, usually controlled by a computer program. A computer also can perform a large number of complex operations, and can process, store, and retrieve data without human interference.

There is a great deal of overlap in the word “computer.” In a more archaic sense, a computer was called an electronic computer, computing machine, or computing device; more common expressions today include data processor and information processing system. But remember, there is a definite difference between a computer and a calculating machine: a computer is able to store a computer program that allows the machine to repeat operations and make logic decisions.

What problems were computers originally invented to solve?

Originally, computers were invented to solve numerical problems. Now, very few mathematicians, scientists, computer scientists, and engineers—and even the general public—can imagine a world without computers. Advances in technology have led to an increase in accuracy, the number of problems that can be solved, data points that can be entered into the computers, and speed in solving problems. All this is a far cry from the earliest computers and computational devices.

What type of number system is used by modern computers?

Modern computers use the binary system, a system that represents information using sequences of 0s and 1s. It is based on powers of 2, unlike our decimal system, which is based on powers of 10. In the binary system, another number place is added every time another power of two is reached—2, 4, 8, and so on; in the decimal system, another place is added every time a power of ten is reached—10, 100, 1,000, and so on.

Computers use this simple number system primarily because binary information is easy to store. A computer’s CPU (Central Processing Unit) and memory are made up of millions of “switches” that are either off or on—the symbols 0 and 1 represent those switches, respectively—and are used in the calculations and programs. The two numbers are simple to work with mathematically within the computer. When a person enters a calculation in decimal form, the computer converts it to binary, solves it, and then translates that answer back to decimal form. This conversion is easy to see in the following table:

Decimal   Binary
0         0
1         1
2         10
3         11
4         100
5         101
6         110
7         111
8         1000
9         1001
10        1010
11        1011
12        1100
13        1101
14        1110
15        1111
16        10000
17        10001
18        10010
19        10011
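For the curious, here is a minimal Python sketch—not how a real CPU performs the conversion internally—of turning a decimal number into its binary equivalent by repeated division by two; the function name is invented for illustration:

# A minimal sketch of decimal-to-binary conversion by repeated division by two.
def to_binary(n):
    """Return the binary digits of a non-negative decimal integer."""
    if n == 0:
        return "0"
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits   # the remainder is the next binary digit
        n //= 2
    return bits

for number in (7, 19):
    print(number, to_binary(number))   # 7 -> 111, 19 -> 10011 (as in the table)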

What was the Turing machine?

In 1937, while working at Cambridge University, English mathematician Alan Mathison Turing (1912–1954) proposed the idea of a universal machine that could perform mathematical operations and solve equations. This machine would use a combination of symbolic logic, numerical analysis, electrical engineering, and a mechanical version of human thought processes.

His idea became known as the Turing machine, a simple computer that performed one small, deterministic step at a time. It is often thought of as the precursor to the modern electronic digital computer, and its principles have been used for application in the study of artificial intelligence, the structure of languages, and pattern recognition. (For more on Alan Turing, see “History of Mathematics.”)
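A tiny, purely illustrative simulator—with a made-up rule table, not any particular historical machine—shows how little machinery a Turing machine actually needs: a tape, a read/write head, a current state, and a table of rules. The Python function and rule names below are invented for this sketch:

# A minimal sketch of a Turing machine: a tape, a head, a state, and a rule
# table that says what to write, where to move, and which state comes next.
def run_turing_machine(tape, rules, state="start", steps=100):
    tape = dict(enumerate(tape))             # the (conceptually endless) tape
    head = 0
    for _ in range(steps):
        symbol = tape.get(head, "_")         # "_" stands for a blank cell
        if (state, symbol) not in rules:
            break                            # no matching rule: the machine halts
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# A made-up machine that simply flips every 0 to 1 and every 1 to 0.
flip = {("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start")}
print(run_turing_machine("0110", flip))      # prints 1001: every bit inverted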

Why was Alan Turing so important
to the development of computers?

Living in his native England during World War II, Alan Turing was instrumental in deciphering German messages encrypted by the Enigma cipher machine. Shortly after the war, he designed computers—first for the British government (1945 to 1948), then for the University of Manchester (1948 to 1954). He also wrote several works on the field of artificial intelligence, a study in its infancy at the time, and developed the theory of the Turing test, in which a computer is tested to see if it is capable of humanlike thought. Tragically, Turing, who is often considered the founder of computer science, committed suicide in 1954.

Who built the first mechanical binary computer?

German civil engineer Konrad Zuse (1910–1995) built the Z1—often thought of as the first mechanical binary computer—in his parents’ living room around 1938. His goal was to build a machine that would perform the lengthy and tedious calculations needed to design building structures. His computer’s design stored intermediate results in its memory and performed sequences of arithmetic operations that he programmed on punched paper tape (he initially used old movie film). This machine led to the Z3 in 1941. Since the machine used a binary number system, it is considered by some to be the first large-scale, fully functional, automatic digital computer.

What were some highlights in the development of modern computers?

The first general-purpose analog computer was designed in 1930 by American scientist Vannevar Bush (1890–1974), who built a mechanically operated device called a differential analyzer. The first semi-electronic digital computing device was built by mathematician and physicist John Vincent Atanasoff (1903–1995) and one of his graduate students, Clifford E. Berry (1918–1963), between 1937 and 1942. It was created primarily to solve large systems of simultaneous linear equations. It is interesting to note that Atanasoff’s computer was overshadowed by the Electronic Numerical Integrator and Computer (ENIAC; see below), which was once credited as the first computer. In 1973, however, a federal judge recognized Atanasoff’s work and voided Sperry Rand’s patent on the ENIAC, saying it had been derived from Atanasoff’s invention. Today, Atanasoff and Berry get the credit.

The Harvard Mark 1, or the Automatic Sequence Controlled Calculator, was built between 1939 and 1944 by American computer scientist Howard H. Aiken (1900–1973) and his team. It is thought of as the first large-scale automatic digital computer. But there are disagreements about this, with some historians believing that German engineer Konrad Zuse’s Z3 (see above) was the first such machine.

Other early computers were the ENIAC and UNIVAC. The ENIAC (Electronic Numerical Integrator and Computer) was completed in 1946 at the University of Pennsylvania; it used thousands of vacuum tubes. Until 1973, it was thought of as the first semi-electronic digital computer. That credit was subsequently given to Atanasoff and Berry (see above). The UNIVAC (UNIVersal Automatic Computer) was built in 1951 and was the first computer to handle both numeric and alphabetic data. It also was the first commercially available computer.

Image

Vannevar Bush was an American scientist who invented the differential analyzer, the first general-purpose analog computer.

The third-generation integrated-circuit machines were used primarily during the mid-1960s and 1970s, making the computers smaller, faster (close to a million operations per second), and far more reliable. The first commercial microprocessor was the Intel 4004, which appeared in 1971. It could only add and subtract, and it was used to power one of the first portable electronic calculators. The real push in microprocessors came during the late 1970s to 1990s, allowing for increasingly smaller and more powerful computers. For example, in 1974 the Intel 8080 processor had a clock speed of 2 megahertz (MHz); by 2004 the Pentium 4 (“Prescott”) had a clock speed of 3.6 gigahertz (GHz).

What is a microprocessor?

A microprocessor is a silicon chip that contains a CPU, or central processing unit, which is normally located on the main circuit board in a computer. (In the world of personal computers, the terms microprocessor and CPU are often used interchangeably.) These chips, or integrated circuits, are small, thin pieces of silicon onto which the transistors making up the microprocessor have been etched. The computer industry has continued rapid growth, mainly thanks to the increased performance and speed of advanced microprocessors. The microprocessor is the heart of the “average” computer, from personal computers (desktops and laptop machines) and tablets to larger servers. Microprocessors also have many other uses; for example, they control the logic of almost all familiar digital devices—from microwaves and clock radios to fuel-injection systems for automobiles.

How fast are some of the more recent microprocessors?

Most of us who own computers realize that microprocessors have increased the speed and performance of our machines in the last few decades, which is why many of us say that once we buy a machine it almost seems immediately obsolete. In other words, a computer we had a decade ago seems to most of us to be very, very slow compared to the processing speed of today’s computers; it’s all thanks to the development of better microprocessors.

Image

Mainframe computers like these can fill a room in a company’s office, but smaller businesses that did not have the money or space for mainframes once used smaller mainframe units called minicomputers. By the late 1980s, microcomputers had become powerful enough to replace minicomputers.

The clock speed, also called the clock rate, is the speed at which the microprocessor of a computer executes instructions. In every computer, an internal clock is responsible for maintaining the rate of the instructions, even synchronizing the other computer components, such as the internal digital clock and date. Clock speed is measured in kilohertz (kHz), megahertz (MHz), or gigahertz (GHz). To understand this measurement: 200 MHz is ten times the speed of 20 MHz, 1 GHz is simply equal to 1,000 MHz, and so on.

Overall, the faster the clock speed, the more instructions can be carried out by the computer’s CPU, or the central processing unit. For example, in 1974, the Intel 8080 processor had a clock speed of 2 megahertz (MHz). The original IBM PC (International Business Machines Personal Computer) around 1981 had a clock rate of 4.77 MHz (that translates to 4,772,727 cycles per second). In 2004, the Pentium 4 (“Prescott”) had a clock speed of 3.6 GHz (some say 3.4); and as of this writing, the highest clock speed microprocessor ever sold commercially is IBM’s zEnterprise 196 mainframe, which, as of 2010, ran cores (“many-core” chips use an array of many processors) continuously at 5.2 GHz.

What is the difference between a minicomputer and a microcomputer?

The term minicomputer is not used much today. It is considered to be the type of computer built mainly from about 1963 to 1987, and refers to the “mini” mainframe computers that were not large enough to be called mainframes but were large enough to take up the space of a small closet. These computers were once popular in small businesses that could not afford the money or space for a mainframe computer. They were much less powerful than a mainframe and were limited in hardware and software, and they were built using what was called low-integration logic integrated circuits. Eventually, they were overtaken by microcomputers built around the microprocessor.

The microcomputer was a later development in computing. It was developed as a general-purpose computer designed to be operated by one person at a time. The single-chip microcomputer (complete with microprocessor) was, in many respects, a landmark development in computer technology, resulting in the commercialization of the personal computer. This is because computers became smaller and less expensive, and the design made parts easier to replace.

What are the more common parts of a computer in use today?

There are many basic parts of today’s computers; the biggest differences between the various types of computers are the amount of memory and speed of the machines. The following lists the common parts of a computer:

Central Processing Unit (CPU)—The CPU is the heart of the computer. It is the component that executes the instructions contained in the computer software; in other words, it tells the entire computer what to do.

Memory—The memory of the computer allows the machine to temporarily (in most cases) store data, programs, and various results from using certain programs (such as remembering the words that are being typed right now).

Mass storage devices—These devices hold the “long-term” memory, permanent data, and programs that one needs to retain. They include the computer’s disk drives.

Input devices—These are just what the words say: devices that allow you to enter information into the computer, such as a keyboard, a mouse, or optional devices such as a scanner.

Output devices—Output devices allow you to see the information processed by the computer, and include the computer screen, a printer, and audio speakers.

What is computer science?

Computer science is, of course, the science of studying computers. It is the study of computation and information processing, involving hardware, software, and even mathematics. More specifically, it is the systematic study of computing systems and the computations that go behind making the computer function. Computer scientists need to know computing systems and methods; how to design computer programs, including the use of algorithms, programming languages, and other tools; and how software and hardware work together. They also need to understand the analysis and verification of the input and output.

What are computer codes and programs?

Mathematics is an important part of computers, because math is used to write computer codes and programs. The codes are the symbolic arrangement of data (or the instructions) in a computer program, a term often used interchangeably with “software.” The code (also called the source code, or just source) is any series of statements written in some programming language understandable to the user. This source code within a software program is usually contained in several text files.

The program is the sequence of instructions (or computations) that a computer can interpret and execute. In other words, most programs consist of a loadable set of instructions that will determine how the computer reacts to user input when the program is running. The connection between codes and programs is often heard by students studying computer science or working professionals—and even in action movies and television programs, as in, “I need to add more lines of code to the program!”
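As a tiny, purely illustrative example—written in the Python programming language, one of many that could be used—here are three lines of source code that together form a complete, runnable program:

# A tiny, purely illustrative program: three lines of source code that the
# computer interprets and executes, reacting to whatever the user types in.
name = input("What is your name? ")
greeting = "Hello, " + name + "!"
print(greeting)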

What are the definitions of computers in use today?

It’s easy to see why the word “computer” has so many connotations. Definitions vary greatly, but they include the following basic types, which are based mostly on the size of the machines and the number of people who can use them simultaneously.

The first type of computer is called the mainframe. Mainframes are considered the largest and most powerful general-purpose computer systems. A mainframe is usually used to fill the needs of a large agency, company, or organization because it can serve hundreds of computer terminals at the same time; for example, mainframes are used by statistical institutes and for meteorological surveys. Supercomputers are sophisticated machines designed to perform complex calculations at maximum speed. Because of their speed and the great amount of data they can process—they can perform hundreds of millions of instructions per second—they are most often used to model huge dynamic systems with many variables, such as complex weather patterns and groundwater flow.

Microcomputers are usually subdivided into personal computers (or desktop computers) and workstations. Oftentimes, microcomputers are linked together in a local area network (LAN) or by joining the microprocessors in a parallel-processing system. This allows smaller computers to work in tandem, giving them comparable power and computational abilities to mainframes.

What makes personal computers, laptops, cellphones,
and even iPods and iPads so popular today?

When it comes to private use outside the business and scientific communities, computers and other high-tech devices are primarily used for communications—and most of that is for entertainment purposes. These devices have a variety of functions and activities to participate in, all of which, in an indirect way, are controlled by mathematics (in other words, they all need computer programs and codes to work). Although there are too many to list here, the following lists a few examples of some popular devices and their “output” to the user.

Modern personal computers and laptops often have enough processing power to allow several functions never thought of before. For example, a user can keep what is called a blog (a shortened version of “web log”)—a website, or part of a site, used to relay information about, for instance, the user’s personal feelings about an issue or issues. Such blogs are updated whenever a person deems it is needed—daily, weekly, or even monthly or more—and are usually displayed in chronological order.

A personal computer or laptop user can participate in or listen to a podcast, or non-streamed webcast. Podcasts (more commonly called webcasts until the iPod became popular) are downloadable media files—such as a concert, an interview, or a lecture on a specific topic—put together by someone (or a group) interested in offering such information. These series of audio or video digital media files can be released periodically on a special website, often through web syndication, or on a person’s own website.

There are also social networking venues. For example, Twitter is a website owned and operated by Twitter, Inc., which offers social networking (essentially “meeting and chatting” with others on the Internet); it also allows a person to partake in what is called microblogging, in which people send short messages, called “tweets,” about, for example, what is happening in their lives at that moment. These text-based listings, which can contain no more than 140 characters per tweet as of this writing, can be displayed not only on a person’s cellphone (if it is capable), but also on the person’s user profile page on Twitter.

Devices called iPods and iPads are some of the latest in technology, and are mostly used for entertainment. For example, iPods allow the user to pick and download favorite tunes or webcasts from various Internet sites (many times for a price). iPads are used mostly to browse the Internet and check email, but they also excel at gaming, video (especially through places such as Netflix, which offers downloadable, sometimes streaming movies over computers or iPads), and book reading.

Some of the more familiar, smaller microcomputers today are notebooks and laptops, which are very similar. Laptops are small enough to fit on a person’s lap; notebooks are usually a bit smaller and lighter than a laptop. In recent years, the newest laptops (and sometimes notebooks) have the same capabilities as the “more powerful” desktop computers.

Still other, even smaller microcomputers are the hand-held computers, which fit in your hand, and the palmtops, which, literally, fit in your palm. These computers are limited in their capacity, but are good for such functions as phone books, calendars, and short notes.

Also considered a form of microcomputer is the tablet—such as the iPad, a tablet computer developed by the company Apple (see above). It is easy to carry, and is a flat rectangle with a 9.7-inch touchscreen that allows the user to maneuver through phone numbers, drawings, word processing, and sundry other tasks with ease. It is only about a half inch thick and weighs a mere one and a half pounds. It is based on the operating system of a phone (a device called an iPhone), and it can connect to the Internet through what is called wireless fidelity, or Wi-Fi—the same technology that lets you take a recent laptop or other communication device with a Wi-Fi card inside to some popular bookstores and connect to the Internet.

What is an “app” in computer-terminology?

In January 2011, the American Dialect Society named “app” the word of the year for 2010, which shows how popular the word has become in the world of computers. In particular, the word app is a noun, and is short for “application”—in this case, to a software application program. According to computer scientists, such an app usually refers to software used on a mobile device such as the iPhone, BlackBerry, or iPad (see above), and is often specifically called a “mobile app” or “iPhone app.” There are also “web apps” or “online apps,” which are used mostly by businesses in which the user accesses the app via an online browser.

There are several reasons for using apps. For the general public, apps are found on the iPhone or other devices so you can access one online feature or another. They are also used to find out information concerning business offerings. For example, apps are now found in magazines: you can “scan” the app into your iPhone to take you to an extra feature the magazine is offering, such as a cooking magazine offering more recipes, or as a way to find out more information about an advertiser. For businesses, web or online apps are a more efficient way to share software among employees in a business; and a mobile app can be used by a company’s workers to perform all sorts of business tasks while out on the road.

What are some common types of computer software programs?

There are many types of computer software—too many to mention here. Simply put, software is a set of programs, procedures, algorithms, and their documentation. The following lists the most common ones most of us are familiar with if we own any type of computer technology:

What are some popular programming languages used to run today’s computers and devices?

As stated above, programming languages are the software that keep your computer running smoothly. Some of the first programming languages included COBOL (COmmon Business-Oriented Language) and FORTRAN (derived from IBM’s Mathematical Formula Translating System). More modern programming languages have been developed for newer applications and devices; they include Java, which is used to run many applications, including one of the more popular mobile operating systems, called Android; C#, developed by one of the largest computer companies, Microsoft; and C/C++, used for a multitude of applications, such as systems software, device drivers, high-performance servers, and even video gaming software.

Application software—Application software includes the programs the user works with directly—mainly programs that help with word processing; it also includes software important for video games.

Programming language—These software offerings keep your computer going and are the heart of all computer programs.

Testware—Testware is used to test hardware or other software programs.

Device drivers—These are the software programs that control devices such as your disk drive, printers, and CD/DVD drives.

Firmware—Firmware is considered “low-level” software stored on electrically programmable memory devices; it is usually treated like hardware, hence the name.

System software—This is the software that runs your computer operating system.

APPLICATIONS

How have computers been used to factor large composite numbers?

Computers have often been used to factor large numbers—and not just by number theorists having some fun. In fact, factoring such numbers has helped to test the world’s most powerful computer systems, to promote designs of new algorithms, and in cryptography used by people who need to protect sensitive information on their computers. For example, in 1978 several computer experts proposed an encryption technique based on how difficult it is to recover two large prime numbers from their product. This method of encrypting sensitive data soon blossomed, especially because of the needs of the military and the banking industry. The public also reaped the benefit of this idea, as it eventually led to encryption methods such as public-key encryption for banking and personal pages on the Internet.
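A toy Python sketch illustrates the asymmetry that such encryption relies on: multiplying two primes takes a single, fast operation, while recovering them from the product by trial division takes many, many steps. The small primes and the function name here are only for illustration—real systems use primes hundreds of digits long and far cleverer factoring methods:

# A toy sketch: the "easy direction" is one multiplication, while the "hard
# direction" is a long search for a divisor. The primes are tiny on purpose.
def factor_by_trial_division(n):
    """Return two factors of n = p * q (assumes n is a product of two primes)."""
    divisor = 2
    while divisor * divisor <= n:
        if n % divisor == 0:
            return divisor, n // divisor
        divisor += 1
    return n, 1                      # n itself is prime

p, q = 104723, 104729                # two smallish prime numbers
product = p * q                      # easy direction: one multiplication
print(factor_by_trial_division(product))   # hard direction: many divisions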

Have computers been used to determine the value of pi (π)?

Yes, computers have been used to determine the value of pi, but no computer has yet found a “final” digit in the long progression of numbers. None ever will, because pi is an irrational number whose decimal expansion continues forever without repeating. But for the sake of just trying, larger and faster computers are often used for this task; as of 2011, pi had been computed to around 10 trillion digits. (For more about pi, see “History of Mathematics.”)
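For a sense of how such calculations are programmed—though nothing like the record-setting algorithms—here is a short Python sketch that computes dozens of digits of pi using Machin’s 1706 formula and ordinary whole-number arithmetic; the function names are invented for this illustration:

# A minimal sketch: digits of pi from Machin's formula,
# pi = 16*arctan(1/5) - 4*arctan(1/239), using scaled integer arithmetic.
def arctan_inv(x, digits):
    """Return arctan(1/x) scaled by 10**(digits + 10), via the Gregory series."""
    scale = 10 ** (digits + 10)          # ten extra guard digits
    power = scale // x                   # scale / x**(2n + 1), starting at n = 0
    total = power
    n = 0
    while power:
        n += 1
        power //= x * x
        term = power // (2 * n + 1)
        total = total - term if n % 2 else total + term
    return total

def pi_digits(digits):
    scaled = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    text = str(scaled // 10 ** 10)       # drop the guard digits
    return text[0] + "." + text[1:digits + 1]

print(pi_digits(50))    # 3.14159265358979323846...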

Have computers been used to solve mathematical proofs?

Yes, there have been many mathematical proofs solved with the help of computers. One example is the four color theorem, which stated that it is possible to have a geographic map colored with only four colors so that no adjacent regions will have the same color. Another way of looking at the problem is: What is the smallest number of colors needed to color any flat map so that any two neighboring regions always have different colors? This idea was first presented in 1852, when Francis Guthrie (1831–1899) colored a map of English counties using only four colors. The idea of only four colors took on a mathematical bent and ended up being a theorem to be proved. It took until 1976, with the help of modern computers, before the four-color conjecture was finally proven to be true. Wolfgang Haken (1928–) and Kenneth Appel (1932–) of the University of Illinois took four years to write the computer program for the Cray computer, which took 1,200 hours to check 1,476 configurations. And in 2005, it was proven by Georges Gonthier with general-purpose theorem-proving software.

Even though the theorem-proving software allows calculations to be checked along the way (by the computer), not all mathematicians are content with the result. They are troubled by the fact that the theorem was still proven by a computer, feeling that if it’s so easy to understand it should have been proven by hand. Thus, anyone who can truly prove the theorem without using a computer may win the Fields Medal, the math equivalent of the Nobel Prize.

Another proof solved with computers is the double bubble. The double bubble refers to a pair of bubbles that intersect and are separated by a membrane bounded by the intersection of the two bubbles; it looks much like two bubbles stuck together when a child blows bubbles with a water-and-soap mixture. Since the time of the ancient Greeks, mathematicians have worked on proving mathematically that a single round bubble is the most efficient shape for enclosing a given volume. The problem became even more difficult when two bubbles, or two separate volumes, were considered. The problem was solved around 1995 by mathematicians Joel Hass, Michael Hutchings, and Roger Schlafly. They used a computer to calculate the surface areas of the bubbles and found that the double bubble has a smaller surface area than any other shape enclosing the same two volumes. But this isn’t the last word: scientists are currently working on triple bubbles.

Image

In the double bubble problem, mathematicians sought to prove that two joined bubbles enclose their volumes using the least possible surface area.

How are algorithms connected to computers?

Algorithms are essentially the way computers process information. In particular, a computer program is actually an algorithm that tells the computer what particular steps to perform—and in what order—so a specific task is carried out. This can include anything from working out a company’s payroll to determining the grades of students in a certain class. (For more about algorithms, see “Foundations of Mathematics.”)
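
For example, a very small “algorithm as program” might look like the following Python sketch, which carries out fixed steps in order (average each student’s scores, then compare the average to grade cutoffs). The names, scores, and cutoffs are invented purely for illustration.

```python
# A minimal sketch of the idea that a program is an ordered list of steps.
# The "algorithm" here averages each student's scores and assigns a letter grade.

students = {"Ada": [92, 88, 95], "Blaise": [75, 80, 68], "Carl": [60, 59, 72]}

def letter_grade(scores):
    average = sum(scores) / len(scores)        # step 1: compute the average
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if average >= cutoff:                  # step 2: compare to cutoffs in order
            return letter
    return "F"                                 # step 3: anything below 60 fails

for name, scores in students.items():
    print(name, letter_grade(scores))
```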

What is a teraflop and why is it important in computing?

A teraflop is a unit of measure of a computer’s performance: one teraflop is 10^12, or one trillion, floating point operations per second. The term combines “tera-” (trillion) with “FLOPS,” short for “floating point operations per second.” So far, teraflops are not found on your everyday desktop computer; such power is currently reserved for servers and supercomputers, but probably not for long.
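
The unit itself can be made concrete with a rough Python sketch that times a loop of floating point operations and divides the count by the elapsed time. Interpreted Python runs far below the hardware’s true capability, so the number it prints is only illustrative; the function name estimate_flops is our own.

```python
# A back-of-the-envelope sketch of what "floating point operations per second"
# means: count the operations performed in a timed loop.

import time

def estimate_flops(n=5_000_000):
    x = 1.000001
    start = time.perf_counter()
    for _ in range(n):
        x = x * 1.000001 + 1e-9   # two floating point operations per pass
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed, x   # x is returned so the work is not optimized away

ops_per_second, _ = estimate_flops()
print(f"Roughly {ops_per_second:,.0f} floating point operations per second")
print("A one-teraflop machine performs 1,000,000,000,000 every second.")
```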

Teraflops are used whenever scientists and computer specialists need a great deal of power, such as in a server that needs to be fast and capable of doing many tasks at once in the foreground and background of an application. For example, they are used a great deal in video editing, especially with the advent of high-definition television. They are used in the music industry for editing and digital sound capturing. And they are used in servers that require a great deal of online storage.

But businesses and certain industries may not be the only ones that have teraflop capabilities in the future. They are currently being used in the video gaming industry, and because of the need for more power in more realistic gaming, the demand for teraflops in the home computer is increasing. (There are currently some gaming consoles, such as the Xbox 360, that already have a combined computational power of 1 teraflop.)

What are some modern computer games?

There is a plethora of computer games on the market, falling into two broad categories: console video games and personal computer games. Video games are electronic games that allow the user to interact with an interface, most often through visual feedback on a video device such as a screen. They are played on a platform, such as a personal computer or a video game console, and most use an input device, such as a handheld controller, joystick, or mouse. One of the most popular video game consoles is the PlayStation, a brand offered by Sony Computer Entertainment. Sony also runs the PlayStation Network, an online service with over 69 million users around the globe that includes a store and a social gaming network.

A personal computer game, or PC game, is a video game played on a single personal computer. Older personal computer games were slower and less visually sophisticated. More advanced PC games are distributed on DVDs and CDs, through online downloads, or even via streaming services, and are then installed on a person’s computer. Because of their greater processing demands, they also typically require specialized hardware to play, such as a dedicated graphics processing unit for 3-D games.

What is cryptography?

Because of the extensive connections between computers across the Internet, it has become necessary to find ways to protect data and messages from tampering or unauthorized reading. This includes protecting people’s personal information when they buy things over the Internet and keeping banking data secure. One of the major techniques for ensuring such privacy of files and communications is called cryptography.

Cryptography is a mathematical science used to secure the confidentiality (and authentication) of data sent by a user to a certain site. It secures the data by replacing it with a transformed version that can then be reconverted to show the original data, but only by someone with the correct cryptographic algorithm and key. This is why, when ordering over the Internet, it is important to see the “lock” icon at the bottom of the screen. This is your way of knowing that cryptography is working and the data is secure, thus preventing the data’s unauthorized use.
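
A toy example can show the transform-and-reconvert idea, though real websites rely on vetted algorithms such as AES and RSA rather than anything this simple. The function xor_cipher and the sample message below are invented for illustration only.

```python
# A toy symmetric cipher: the same secret key scrambles the data and, applied
# again, unscrambles it. Never use a scheme like this for real secrets.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the repeating key; applying it twice restores the original."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"Credit card: 4111 1111 1111 1111"
key = b"not-a-real-key"

ciphertext = xor_cipher(message, key)    # what travels over the network
recovered = xor_cipher(ciphertext, key)  # only the key holder can do this

print(ciphertext)                        # unreadable without the key
assert recovered == message
```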

Have computers had an effect on the field of statistics?

Yes, computers have had a definite effect on the field of statistics. In particular, personal computers, software such as spreadsheets and professional statistical packages, and other information technologies are now an integral part of statistical data analysis. These tools have enabled statisticians to perform realistic statistical data analysis on large amounts of data faster and cheaper than ever before.

Statistical software systems are most often used to explore data sets, illustrate statistical concepts, and uncover new trends in the data. Most of the packages make it easy to enter the data into the program, but the emphasis then shifts to the statistician’s ability to interpret the results.

Two well-known programs often used in statistics are SAS and SPSS, both commercial statistical packages. The SAS system is a statistics, graphics, and data-management software package available for personal computers. It allows the desktop computer user to get the quality of results once reserved for users of mainframe computers. The SPSS (Statistical Package for the Social Sciences) is also a popular software package for performing statistical analyses. It enables the user to summarize data, determine whether there are significant differences between groups, examine relationships among variables, and even graph the results.
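
As a free stand-in for the kind of work such packages do, the following Python sketch uses the built-in statistics module to summarize two invented groups of exam scores and compare their means; the group names and numbers are made up for illustration.

```python
# Summarizing and comparing two groups, the way a statistical package would.

import statistics as stats

group_a = [72, 85, 91, 78, 88, 95, 83]   # exam scores, teaching method A (invented)
group_b = [65, 74, 70, 80, 68, 77, 72]   # exam scores, teaching method B (invented)

for name, data in [("A", group_a), ("B", group_b)]:
    print(f"Group {name}: mean = {stats.mean(data):.1f}, "
          f"standard deviation = {stats.stdev(data):.1f}, n = {len(data)}")

difference = stats.mean(group_a) - stats.mean(group_b)
print(f"Difference between the group means: {difference:.1f} points")
# A full statistical package would follow up with a significance test
# (such as a t-test) to judge whether a gap this size could be due to chance.
```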

To date, what computer has the top speed in operations per second?

The TOP500 project ranks and details the 500 most powerful known computer systems in the world, an idea that began in 1993. The results are published twice a year: the first coinciding with the International Supercomputing Conference in June, the second in November at the ACM/IEEE Supercomputing Conference. The following chart lists the top ten as of November 2011:

Fastest 10 Computer Systems as of November 2011

Image

In actuality, the fastest computer in the world is the human brain, an amazing computing device with the best processor. To compare, the fastest computers measure speed in trillions of operations per second, but scientists speculate the brain can handle 10 quadrillion operations per second. The actual numbers are probably even higher than that.

Can robots help with daily household chores?

After decades of promises that robots will one day take away those pesky household chores, researchers at Cornell University’s Personal Robotics Lab may have the programming skills to make that dream a reality. The lab develops software for complex, high-level robotics, and one of its goals is to produce robots that can clean up a messy room, arrange books in a bookcase, or even pull the dishes out of your dishwasher and put them away.

And they are not the only ones. Researchers at the MIT Humanoid Robotics Group are developing Domo, the latest robot helper, or “human assistant,” in a series of robots. Domo has 29 motors, each with its own computer chip, running off a dozen computers that continuously update the robot’s information. In this way, Domo is able to almost mimic the human ability to adapt to “his” surroundings.

One of the most difficult tasks in developing such a mobile robot is enabling the machine to perceive information in a cluttered or unknown environment. Another challenge is enabling a robot to estimate depth. To do these tasks, the researchers have been developing fast, efficient algorithms that allow a robot to “know” its location and orientation when, for example, picking up an object or finding its place in a room. This involves not only detection algorithms but also a knowledge of geometry and spatial orientation in three dimensions.

The list of robotics groups attempting to make better robots grows each year, but don’t count on seeing any of these robots very soon. Researchers estimate that it will take another decade or so before we see anything resembling Rosie, the robot maid from the 1960s animated television program about the future, The Jetsons, in our kitchens. You might want to start saving now, too: one researcher estimates that such a robot may cost as much as a car.

How is mathematics used by search engines on the World Wide Web?

A search engine is a program that searches the World Wide Web (or WWW, which is why website addresses often begin with www.) for documents containing certain keywords; it then returns a list of documents in which those keywords are found. Search engines like Google use proprietary ranking algorithms that rely on probability, linear algebra, and graph theory to produce a list that, ideally, shows the most relevant websites first.
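
One celebrated example of that mathematics is the PageRank idea, in which a page’s importance is computed from the importance of the pages linking to it. The Python sketch below runs the basic calculation on a tiny, made-up four-page web; the page names, damping factor, and iteration count are illustrative assumptions, not Google’s actual system.

```python
# A small PageRank-style calculation by power iteration: rank flows along links,
# so pages that are linked to by important pages become important themselves.

links = {            # page -> pages it links to (a made-up four-page web)
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about", "shop"],
    "shop": ["home"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}           # start with equal ranks
    for _ in range(iterations):                         # power iteration
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share               # pass rank along each link
        rank = new_rank
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```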