Programming the ENIAC: an example of why computer history is hard

By Leonard J. Shustek | May 18, 2016

We take history seriously at the Computer History Museum. It’s our middle name, after all. But it’s not easy history to do, for several reasons.

  • It’s contemporary. If you’re a Civil War historian, no one who was alive at the time can object to what you write. But we get feedback all the time like “That’s not right – and I know because I was there”, or “You didn’t mention XYZ, which I invented.” On the whole it’s wonderful to be able to interact with pioneers, but we always need to be mindful that personal memories, especially after many decades, can be unreliable, and that participants are not always the best judges of historical significance. We depend heavily on contemporary written documents; as the ancient Chinese proverb says, “The palest ink is better than the best memory.”
  • There are no simple answers. The most fundamental unanswerable question is “Who invented the computer?” A 20-year-old might say “Steve Jobs”. My favorite answer is “Charles Babbage, except he never built it.” But all the possible answers have problems. It’s like the automobile: there was no single inventor.
  • Any “first” is problematical. What “first” means depends on precise definitions of fuzzy concepts. Historian of computing and former CHM chief curator Michael Williams famously said that anything can be a first if you put enough adjectives before the noun. Does the “first computer” need to be electronic? Have a program stored in memory? Have been built? Be general-purpose? You get the idea. Our senior curator Dag Spicer wisely advises us to avoid the “F-word” entirely because it’s simplistic and ignores the deeper questions of influence and impact.
  • It isn’t static. Even though the past doesn’t change, the history of it often does because of new discoveries. For example, historians believed for centuries that the first mechanical calculators, like Blaise Pascal’s Pascaline, were built in the 1600s. But in 1900 Greek sponge divers discovered a shipwreck with the remains of a device that, after much study, we now know was an extremely complicated astronomical calculator, the Antikythera mechanism, built at least 1,500 years earlier. That’s history that had to be rewritten.

ENIAC and the Origins of Software

The history of software, into which CHM is placing increasing effort, is particularly hard. I’d like to give one example in some detail: the relationship of ENIAC to the origins of software.

There has long been a conventional wisdom about when what we now call “software” first began to run. Many textbooks and websites, including Wikipedia and the University of Manchester’s, record June 21, 1948 as “the birth of the stored-program digital computer” because the “Manchester Baby” ran a 17-line program on that day.

The cover of ENIAC in Action by Thomas Haigh, Mark Priestley, and Crispin Rope.

But it’s not that simple. A recent book by computer historian Tom Haigh and colleagues Mark Priestley and Crispin Rope explores the conversion of ENIAC into what they prefer to call a “modern code paradigm” computer. Based on machine logs and handwritten notes, they have discovered that a complex program began running on ENIAC on April 12, 1948.

ENIAC – the Electronic Numerical Integrator and Computer – was a room-sized machine with over 17,000 vacuum tubes. It started running at the end of 1945, and for five years it was the only fully electronic computer running in the US. Estimates are that by the time it was retired in 1955, it had done more calculations than all human beings in all of history.

The initial design of the ENIAC did not use anything like the software we know today. It was basically an assembly of “functional units” that were wired together in a particular way for each new problem. If you wanted to do a multiplication after an addition, you would run a wire from the adder to the multiplier. Control was very distributed, and the machine could do many things in parallel. But designing and setting up new calculations was difficult and time-consuming.

Even before ENIAC was finished, engineers realized that there was a much better centralized way to control such a complex machine, using coded instructions stored in memory and “executed” in sequence. Control operations, such as looping and branching, could be accomplished simply by “jumping” out of order to fetch the next instruction from a different memory address. We now call this a “computer program.”
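
To make that scheme concrete, here is a minimal sketch, in modern Python, of a fetch-execute loop driven by a program counter, with a conditional jump implementing a loop. It is purely illustrative: the instruction names, the single accumulator, and the sample program are invented for this sketch and are not ENIAC’s (or anyone’s) actual instruction set.

```python
# A toy illustration of the stored-program idea: instructions live in memory,
# a counter fetches them in sequence, and a "jump" changes that counter to
# implement looping. (Purely illustrative; not ENIAC's real instruction set.)

def run(program):
    acc = 0                 # a single accumulator register
    pc = 0                  # program counter: address of the next instruction
    while True:
        op, arg = program[pc]        # fetch the instruction at address pc
        pc += 1                      # normally, execution proceeds in sequence
        if op == "ADD":              # add a constant to the accumulator
            acc += arg
        elif op == "JUMP_IF_LESS":   # branch: jump back if acc < limit
            target, limit = arg
            if acc < limit:
                pc = target          # "jumping" out of order is all a loop needs
        elif op == "HALT":
            return acc

# Add 5 repeatedly until the accumulator reaches at least 20 (four iterations).
program = [
    ("ADD", 5),                  # address 0
    ("JUMP_IF_LESS", (0, 20)),   # address 1: loop back to address 0
    ("HALT", None),              # address 2
]
print(run(program))   # -> 20
```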

The origin of this “lightbulb over the head” idea is hotly contested. Physicist and mathematician John von Neumann was the first to describe it, in an incomplete document (the “First Draft of a Report on the EDVAC”) that was widely distributed in June of 1945. But he had been discussing related ideas with many people, including ENIAC designers John Mauchly and Presper Eckert. No one knows whose idea it was, how many people thought of it “first,” or what prior work might have influenced them. For a discussion of how it was different from earlier ideas, see Tom Haigh’s recent article “Where Code Comes From”.

Regardless of who deserves the credit, it was quickly adopted as the right way to build computers. Even the ENIAC, starting in July of 1947, was converted to use this scheme. Because ENIAC had very little writeable electronic memory, the coded instructions were stored in “function tables,” banks of 10-position switches that had previously been used to store pre-computed numerical constants. It was the modified ENIAC that ran a computer program stored in switches in April of 1948.

ENIAC vs The Baby

So, what’s the importance of this new historical discovery? That it ran 10 weeks before the Manchester Baby? That it happened in the US, not the UK? No. Those are interesting but relatively insignificant facts. Look instead at the substantial differences between the two events.

1. What the programs were:

  • Manchester’s program was a 17-line demonstration to prove that the computer worked.

The original demo program of the Manchester Baby.

  • ENIAC’s was an 840-instruction program that used a subroutine, nested loops, and indirect addressing for both data locations and jump destinations.

A diagram of the ENIAC program.

A table of the ENIAC's addresses.

2. What the programs did:

  • Manchester’s program computed the highest proper factor of 2¹⁸ (262,144). It took 52 minutes to give an answer we already knew, 131,072, on a machine that never did any useful computations of new results and was soon taken apart. (A toy re-creation of that calculation appears after this list.)

John and Klara Dan von Neumann.

  • ENIAC’s program was a Monte Carlo simulation of neutron diffusion during nuclear fission, designed by John and Klara von Neumann. It gave useful and previously unknown answers. It also broke new ground by using a digital computer as a real-world simulator for problems that cannot be solved using traditional mathematics. ENIAC continued running programs like that, including weather simulations, for seven more years.
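
For a sense of what the Baby’s 52-minute run actually computed, here is a toy re-creation, in Python, of the same calculation: finding the highest proper factor of 2¹⁸ by testing candidate divisors downward from n − 1. The sketch uses the modern remainder operator as a shortcut; the Baby itself had no divide instruction and reportedly tested each candidate by repeated subtraction, which is part of why the run took so long.

```python
# Find the highest proper factor of 2**18 = 262,144, the task the
# Manchester Baby's first program performed in June 1948.

def highest_proper_factor(n):
    for candidate in range(n - 1, 0, -1):  # try divisors from n - 1 downward
        if n % candidate == 0:             # modern shortcut; the Baby used repeated subtraction
            return candidate

print(highest_proper_factor(2 ** 18))      # -> 131072, i.e. 2**17
```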

ENIAC’s April feat was an accomplishment, but there’s at least one good reason why calling it a “first” is problematical: the program was stored in what we now call “read-only memory”, or ROM. Manchester’s, on the other hand, was stored in the same memory used for data. That was the design that von Neumann had described, and is a characteristic of what is often called the “von Neumann architecture”.

Does that matter? Was the modified ENIAC less of a computer than the Manchester Baby because its program was in ROM and could not be changed by the computer? Historian Doron Swade has asked many computer experts about the importance of programs being in memory. He observes that “no one challenged the status of the stored program as the defining feature of the modern digital electronic computer”, but “we struggle when required to articulate its significance in simple terms, and the apparent mix of principle and practice frustrates clarity.”1

Look at it this way: many modern microprocessors, especially small ones for embedded control, have their programs in ROM. If they are modern-style computers, then so was the modified ENIAC. That’s my opinion, anyway; you are free to add or subtract your own adjectives and reach a different conclusion.

Changing history

Haigh’s new book is a refreshing change in the academic treatment of computing history. In the 1970s and 1980s, historical accounts were frequently written by practitioners “who were there”, and tended to focus on technical details. In the 1990s and 2000s, professional historians shifted the discourse primarily to business, political, and social aspects. The pendulum may now be swinging back to a welcome midpoint; the authors describe their book as “an experiment in the re-integration of technical detail into history.”

This is great new history. It’s the kind we encourage, and the kind we do ourselves: history that’s complex, nuanced, and not static. If this is what it means to rewrite history, let’s keep doing it.

About The Author

Len Shustek is the founding chairman emeritus of the board of trustees of the Computer History Museum.
