For over four decades, the Computer History Museum (CHM) has been recording the words and thoughts of the most important people in computing. The first lecture was in 1979 with Cambridge University professor and computer pioneer Maurice Wilkes, creator of the EDSAC computer (1949). Since then, the Museum has recorded dozens of new lectures, but many of the earliest recordings were not available to the public. Now, thanks to a Museum initiative to bring these earliest lectures to a wide audience, CHM is releasing lectures from the earliest days of the Museum—when it was known as The Computer Museum (TCM) and located in Boston. Over the next year, CHM will be posting these lectures to its YouTube channel in groups of three or four lectures at a time. We hope you enjoy these unique windows into the minds of computing’s most important innovators.
The story of electronic digital computing begins in the mid-1930s. In this first release from “First Steps: Lectures from the Dawn of Computing,” we hear from three of the most important contributors to computing from that era: German civil engineer Konrad Zuse, George Stibitz from Bell Laboratories, and physics professor John Vincent Atanasoff. Their stories are unique, as each struggled to adapt existing technologies in new ways to achieve a breakthrough: the automatic digital computer. All three men were motivated by easing the burden of calculation they faced in their ‘real’ jobs—aircraft design (Zuse), theoretical physics (Atanasoff) and electrical circuit analysis (Stibitz). In this first generation of computer designers and builders, there were no experts. Innovators came mostly from engineering, mathematics and physics and did the best they could. Their solutions were amazingly creative—Zuse, for example, built his first computer (the Z1) in 1938 using thousands of meticulously cut metal plates, which slid past each other back and forth performing logical functions in a symphony of sliding metal. The Z1 was mechanical, the movement of the plates driven by an electrical motor which produced a clock rate of 1 Hz. Zuse built it in his parents’ living room in Berlin where, in a 1943 bombing raid, it was destroyed along with all its plans. Amazingly, Zuse built a second copy of the machine from 1986 to 1989, and it is now on permanent display at the German Museum of Technology (Deutsches Technikmuseum Berlin). Zuse built a series of follow-on Z-machines during World War II, founded Zuse KG in 1948, and developed a successful business based on his previous work—a daring idea given the devastation which surrounded him after the War. In 1950–51, his Z4, for example, was the only working digital computer in continental Europe.
Mention should also be made of Harvard professor Howard Aiken’s work since it was contemporaneous with that of Stibitz, Zuse and Atanasoff. Working for the US Navy, Aiken conceived of a large-scale, relay-based computer—ultimately called the Mark I—that could be used for military calculations and the production of mathematical tables. It was built by IBM and became operational in May of 1944. Some of the earliest calculations for the nuclear weapons being designed as part of the Manhattan Project were run on the Mark I.
This lecture, as Dr. Zuse reminds us at the beginning, is partly based on a lecture he gave at the International Conference on the History of Computing at Los Alamos National Laboratory in 1976. For Zuse, the motivation was the tedium and exasperation of spending most of his time as an engineer performing calculations. These were “big awful calculations” involving static and aerodynamic structures. Zuse’s early “Z-series” of computers came in rapid succession between 1938 and 1944. The Z1 used mechanical plates while the Z2 and Z3 used electrical relays. Throughout these painful years of development, Zuse scrounged parts from old telephone company inventories and tried hard to convince his employers and others in the aircraft and defense industries that the computer was an important new invention.
Zuse was very methodical in thinking about his computer designs. As he discovered after the war, many of his American and English counterparts built computers without a lot of theory about computation, a situation Zuse found puzzling and a little disappointing. His own machines were based on a switching algebra he had developed from the propositional calculus, and for them he designed Plankalkül, probably the world’s first high-level programming language.
Like Zuse, Bell Labs “mathematical engineer” George Stibitz confirms that “there was little communication among [computer] pioneers” in the late 1930s and early 1940s. Stibitz made the connection between the bi-stable (two-state) nature of the electrical relays used throughout the technical infrastructure of the Bell Telephone System and the two-state nature of the binary numbering system. In 1937, Stibitz built a simple binary addition circuit out of relays, which he called the “Model K” since he built it on his kitchen table. The circuit was primitive but demonstrated the principles of binary arithmetic to initially skeptical colleagues.
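Purely as an illustration of the principle Stibitz was demonstrating—and not a reconstruction of his actual Model K circuit—the short Python sketch below models each relay as a two-state switch and builds binary addition from series and parallel combinations of those switches.

```python
# Illustrative sketch: a "relay" is just a two-state (boolean) switch.
# Series contacts behave like AND, parallel contacts like OR.

def relay_and(a: bool, b: bool) -> bool:
    """Two relays in series: current flows only if both are energized."""
    return a and b

def relay_or(a: bool, b: bool) -> bool:
    """Two relays in parallel: current flows if either is energized."""
    return a or b

def relay_xor(a: bool, b: bool) -> bool:
    """Exclusive-or composed from series/parallel contact combinations."""
    return relay_and(relay_or(a, b), not relay_and(a, b))

def full_adder(a: bool, b: bool, carry_in: bool):
    """Add two binary digits plus a carry, returning (sum, carry_out)."""
    s = relay_xor(relay_xor(a, b), carry_in)
    carry_out = relay_or(relay_and(a, b), relay_and(carry_in, relay_xor(a, b)))
    return s, carry_out

def add_binary(x: int, y: int, bits: int = 8) -> int:
    """Ripple-carry addition: one full adder per bit position."""
    carry = False
    result = 0
    for i in range(bits):
        a = bool((x >> i) & 1)
        b = bool((y >> i) & 1)
        s, carry = full_adder(a, b, carry)
        result |= int(s) << i
    return result

print(add_binary(11, 7))  # 18, computed entirely from two-state "relay" logic
```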
Bell Labs management gave Stibitz the go-ahead in 1938, and in November of the following year he demonstrated the new machine, called the Complex Number Calculator (CNC). At an American Mathematical Society meeting in September 1940 on the Dartmouth College campus, Stibitz had the CNC perform calculations in real time over a teletype connection to the machine back in New York City. It was the first remote use of a computer in history.
Dire need was the motivation for professor John Vincent Atanasoff of Iowa State University to think about automating computation. A theoretical physicist by training, he “spent eight weeks on the calculations for [his] thesis” and was facing a career in which the difficulty of computation would be a major stumbling block.
Between 1936 and 1937, after exploring analog methods, which he quickly discounted because of the complexity of building such large systems, he considered using (and modifying) some of the IBM punched card equipment the university had. This avenue was closed when an IBM representative heard of his plans and reminded him and the university that modifying the leased IBM equipment was forbidden.
On a long drive late one night, Atanasoff decided he would try to build his own machine and that it should have four characteristics: it would be electronic (vacuum tube-based) for speed; it would be based on the binary number system; it would use a regenerative capacitor arrangement for memory; and it would use direct computation (not simple counting) to perform calculations.
He studied electronics and by 1938 had the basic design worked out. Graduate student Clifford Berry became Atanasoff’s co-developer, and the proposed machine was called the “ABC”—the Atanasoff-Berry Computer. A prototype was demonstrated in 1939 and worked as planned. The machine could solve systems of up to 29 equations in 29 unknowns, a remarkable amount of computing power, optimized for solving scientific and engineering problems in which partial differential equations were involved—a vast domain.
A seemingly innocuous visit to see the ABC in action by ENIAC co-designer John Mauchly in June of 1941 turned into the linchpin of a high-stakes lawsuit (Honeywell v. Sperry Rand) more than 30 years later. In overturning the ENIAC patent on the computer, presiding judge Earl Larson ruled at the conclusion of the trial in 1973 that Mauchly had obtained ideas he applied to the ENIAC from that one visit to Atanasoff. As a result, Larson declared that, in legal terms, John Vincent Atanasoff should be considered the inventor of the electronic digital computer. It should be noted, however, that despite this legal decision the invention of the computer is still a topic of debate among historians.