Over the course of this programme, Dr Luca Di Mare has taken us through the history of computing from the very earliest ‘computers’ of the ancient and medieval world, through to the ‘human computers’ of the twentieth century. This final article brings the story up to the present day, exploring how computers have developed over the last 70 years to become the vastly powerful supercomputers which power our modern world.
John Vincent Atanasoff and Clifford E. Berry of Iowa State University built the first electronic computer in 1942 in the US. The Atanasoff-Berry Computer (ABC) used electronic devices to represent the state of the computation. Electronic devices can change state much more quickly than mechanical devices, and this made electronic computers potentially much faster than their mechanical counterparts. The first programmable electronic computer was the Colossus, built in the UK in 1943 and used to decrypt German military communications during World War II. Colossus was followed quickly by the ENIAC, also programmable but much faster and more flexible than Colossus. ENIAC was deployed in support of the war effort, and its first calculation was a test problem for the hydrogen bomb. ENIAC was also instrumental in establishing the use of statistical methods in mathematical physics. The people responsible for setting up problems on the ENIAC were human computers, so the world’s first professional computer programmers were women: Kay McNulty, Betty Snyder, Marlyn Wescoff, Ruth Lichterman, Betty Jean Jennings, and Fran Bilas.
Electronic computers became smaller, faster and cheaper as vacuum valves were replaced by solid-state devices printed on silicon chips. It was not until 1971 that Intel introduced the first single-chip processor, the 4004, followed a year later by the 8-bit 8008. Many of the old CPU architectures, such as the 8008, no longer appear in computing equipment as CPUs, but continue to be produced as embedded devices and controllers: there are about 2 billion 8080 chips (a derivative of the 8008) around the world right now.
Modern chips look very different from the old 8008s and contain an incredible number of devices performing different tasks. A modern AMD Epyc processor contains 39 billion devices on a single chip. The old Intel 8008 had only 3500. Many modern processors are actually composed of large numbers of individual CPUs that share access to memory and input/output devices, thereby multiplying the amount of work the processor can perform in a given time.
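The throughput gain from multiple cores can be sketched with Python's standard multiprocessing module. This is a toy workload invented for illustration, not a model of any particular chip: one large task is split into chunks that several worker processes handle at the same time.

```python
# Toy illustration of multi-core throughput: several worker processes
# share one large task, multiplying the work done per unit time.
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately naive)."""
    lo, hi = bounds
    total = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            total += 1
    return total

if __name__ == "__main__":
    # Split the range 0..100,000 into four chunks, one per worker.
    chunks = [(i * 25_000, (i + 1) * 25_000) for i in range(4)]
    with Pool(processes=4) as pool:
        print(sum(pool.map(count_primes, chunks)))  # total primes below 100,000
```

On a four-core machine the chunks are counted roughly four times faster than they would be in sequence, which is the same principle a multi-CPU processor exploits.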
As computers evolved and became more and more powerful, smarter and smarter techniques were devised to apply their capability to practical problems. In the UK, as elsewhere, part of the impetus to apply the power of new computers to engineering problems came from research on nuclear energy. The Harwell national laboratory, near Oxford, has for many years been at the cutting edge of developing efficient numerical techniques for large-scale problems, and still is. One of the most successful libraries for sparse linear algebra bears the name of the Harwell laboratory.
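The kind of efficiency such libraries deliver can be sketched with a classic example (a toy in pure Python, not the Harwell library itself): when a matrix is tridiagonal, the Thomas algorithm solves the system in time proportional to n, rather than the n³ a dense solver would need.

```python
# Sketch of a sparse-matrix technique: the Thomas algorithm solves a
# tridiagonal system in O(n) operations by exploiting the zeros.
def solve_tridiagonal(a, b, c, d):
    """Solve Ax = d where A has sub-diagonal a, diagonal b, super-diagonal c.

    a[0] and c[-1] are ignored (those entries lie outside the matrix).
    """
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# A 5x5 system with 2 on the diagonal and -1 off it (a discrete
# diffusion problem), right-hand side of all ones:
x = solve_tridiagonal([-1.0] * 5, [2.0] * 5, [-1.0] * 5, [1.0] * 5)
print([round(v, 3) for v in x])  # → [2.5, 4.0, 4.5, 4.0, 2.5]
```

Real sparse solvers generalise this idea to arbitrary patterns of zeros, which is what makes million-unknown engineering problems tractable.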
Nowadays computers have sufficient power to model essentially any aspect of the life of even the most complex engineering products, such as aircraft and their engines, cars, power plants, railways, industrial plants and spacecraft. Engineers and academics still vie to develop ever more efficient methods to learn more about their designs before a single component is built.
Computers in the contemporary world
Modern computers are the scientific equivalent of the telescope in the 19th century and the particle accelerator in the 20th century: they are one of the most powerful instruments of discovery available to the scientific community. Most of the fastest computers on Earth are in governmental laboratories, where they are used for research on problems such as nuclear physics, computational chemistry, materials science, climate modelling and disaster mitigation.
Supercomputers are quite good with practical problems too: the fifth most powerful computer on the planet, called HPC5, belongs to the Italian energy company Eni S.p.A. HPC5 is used to process seismic and geophysical data to model the structure of rocks deep below the surface of the Earth, and it found the largest gas field ever discovered in the Mediterranean Sea.
Supercomputers have played an essential role in the recent COVID pandemic: the molecular interactions that led to the development of vaccines were first understood through simulations. Simulations also underpinned the analyses of community spread that guided the responses of many governments around the world, as well as studies of how the virus travels in the droplets emitted by infected individuals when they cough.
The RIKEN Center for Computational Science in Japan summarised the results of its COVID research with the phrase: “COVID-19: Wear Mask & Ventilate said Fugaku”. Fugaku is the name of their supercomputer: it comprises 7.6 million cores with 5 million gigabytes (5 petabytes) of memory. It can perform 537,000 million million floating-point operations per second and consumes 30 MW of electricity. Fugaku is the fastest computer on Earth… for the time being.
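Those headline figures can be put into more familiar units with a little arithmetic (simple unit conversions of the numbers quoted above):

```python
# Fugaku's quoted figures, converted to more familiar units.
fugaku_flops = 537_000 * 10**12   # 537,000 million million operations/second
fugaku_cores = 7_600_000
fugaku_power_w = 30e6             # 30 MW

print(f"{fugaku_flops / 1e15:.0f} petaflops")             # 537 petaflops
print(f"{fugaku_flops / fugaku_cores:.2e} flops per core")
print(f"{fugaku_flops / fugaku_power_w:.1e} flops per watt")
```

In other words, each of the 7.6 million cores contributes tens of billions of operations per second, and every watt of those 30 MW buys roughly 18 billion operations per second.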
Dr Luca di Mare is Tutorial Fellow in Engineering Science at St John’s College, Oxford