Towards a Third Age of Computing

June 22nd, 2015

On Friday, Tony Hey gave a fascinating lecture at the Computer History Museum about the evolution of computing. With a background in particle physics and, later, data science, Hey coauthored The Computing Universe: A Journey Through a Revolution (2014). It’s a colorful layperson’s introduction to some of the most important ideas and historical events in computing. According to Hey, computing has moved through three major ages, each marked by the emergence of new technological breakthroughs and new purposes for computing.

Age 1: The Beginning of a Revolution

The first age of computing was marked by the emergence of the von Neumann architecture (a.k.a. stored-program computers) and, following that, an explosion in the transistor density of microchips. Early electronic computers, such as the ENIAC, were basically gigantic calculators with no ability to store their own programs. One of the first stored-program computers, the EDSAC, appeared in 1949. It read programs from paper tape, with information represented by the patterns of punched holes in the tape. IBM followed suit and started mass-producing mainframe computers that read in programs through punched cards. Hey called this period the “IBM Mainframe Era.”

In the 1950s and 1960s, computers were not yet personal household items. They were simply too big and expensive to justify home use. Eventually, that started to change, thanks to a trend now known as Moore’s Law. First articulated by Gordon Moore in 1965, Moore’s Law predicts that the number of transistors in computer processor chips will double approximately every two years, and the prediction has held up remarkably well ever since.
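To get a feel for how quickly that kind of doubling compounds, here is a quick back-of-the-envelope sketch in Python. The 1971 starting point (roughly 2,300 transistors, the count of Intel’s 4004) is just an illustrative anchor I chose, not a figure from Hey’s talk.

```python
# Back-of-the-envelope illustration of Moore's Law: one doubling in
# transistor count roughly every two years. The 1971 starting point
# (~2,300 transistors, Intel 4004) is an illustrative assumption.

def projected_transistors(start_year: int, start_count: float, year: int,
                          doubling_period: float = 2.0) -> float:
    """Project a transistor count assuming one doubling per period."""
    doublings = (year - start_year) / doubling_period
    return start_count * 2 ** doublings

if __name__ == "__main__":
    for year in (1971, 1981, 1991, 2001, 2011):
        print(f"{year}: ~{projected_transistors(1971, 2300, year):,.0f} transistors")
```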

Consequently, as these computer chips became increasingly cheap and compact, computers became smaller, more powerful, and more affordable. In 1973, Xerox PARC built the Alto, one of the first desktop personal computers. In 1979, Dan Bricklin created VisiCalc, the first spreadsheet program. As computers became more ubiquitous and versatile, a new purpose started to emerge.

Age 2: Computers for Communication

The second age of computing is defined by the emergence of computer networks and the World Wide Web. The ARPANET, a tiny prototype of the modern Internet, initially spanned only a handful of computers in the United States. Researchers used this network to share documents. Later, Tim Berners-Lee proposed linking documents together with hypertext, so that people could not only share pages but also connect them to one another. With that idea, the World Wide Web was born in the early 1990s.

Just as transistor density exploded during the birth of the computer revolution, the Internet and the Web grew prolifically. The founders of Google pioneered an algorithm for ranking web pages in search results based on how other pages link to them, making the web far easier to search. Now a great deal of commerce and collaboration happens across the Internet, making it the lifeblood of modern civilization. Ideas can spread faster than ever before.
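To give a rough sense of how that kind of link-based ranking works, here is a toy, PageRank-style sketch in Python. The four-page link graph, the damping factor, and the iteration count are illustrative assumptions of mine, not details from the talk.

```python
# A toy sketch of link-based ranking in the spirit of Google's PageRank.
# The link graph, damping factor, and iteration count are illustrative
# assumptions, not details from Hey's lecture.

def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iterations: int = 50) -> dict[str, float]:
    """Estimate page importance by repeatedly redistributing rank along links."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

if __name__ == "__main__":
    toy_web = {
        "home": ["about", "blog"],
        "about": ["home"],
        "blog": ["home", "about"],
        "archive": ["blog"],
    }
    for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")
```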

Age 3: Computers for Embodiment

Tony Hey argues that we are entering an age where computers are starting to take over tasks that once required the presence of human beings. This emergence is largely the result of improved artificial intelligence and “the Internet of things.” Smart devices such as phones, cars, and refrigerators can connect to the Internet to communicate with the outside world and respond accordingly. For example, smart cars combine map data from the Internet with real-time readings from onboard motion sensors to predict collisions and reduce the probability and impact of accidents.
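As a crude illustration of the collision-prediction piece, the sketch below estimates a time to collision from a single range sensor. Real systems fuse many sensors with map data; the single-obstacle model, the sample readings, and the warning threshold here are illustrative assumptions, not anything described in the talk.

```python
# A crude sketch of collision prediction from range-sensor readings.
# The single-obstacle model, sample numbers, and 3-second warning
# threshold are illustrative assumptions.

def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if the closing speed stays constant."""
    if closing_speed_mps <= 0:  # not closing in: no predicted collision
        return float("inf")
    return distance_m / closing_speed_mps

if __name__ == "__main__":
    distance = 30.0       # metres to the car ahead (illustrative)
    closing_speed = 12.0  # metres per second we are gaining on it (illustrative)
    ttc = time_to_collision(distance, closing_speed)
    if ttc < 3.0:         # warn when impact is predicted within 3 seconds
        print(f"Brake! Predicted impact in {ttc:.1f} s")
    else:
        print(f"No immediate risk (time to collision {ttc:.1f} s)")
```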

Of course, with these advances in smarter devices, users are increasingly vulnerable to the consequences of hacking. Hey’s presentation included a video clip of a reporter riding in a smart car that had been hijacked by a programmer. The programmer took full control of the car, causing it to run through a stop sign and barrel over traffic cones. Of course, it was just a demonstration. In real life, it would have been absolutely terrifying!

Another consequence of computers taking over human jobs is unemployment. We are entering an age where people have to compete with machines to stay relevant in the workforce. This has an immediate negative effect on people whose jobs disappear and whose skill sets lose relevance. In the long term, however, I’m hoping that we can outsource dangerous, boring jobs to machines and enable people to do the fun, creative stuff.

Age 4: Conscious Computers

As we look into the future, we can predict that computers will probably one day possess human-level intelligence. As of now, our technology still has a long way to go. Recent breakthroughs show that computers have become remarkably good at playing games like chess and Jeopardy!, but even so, these supercomputers lack awareness.

When computers do gain apparent human-level consciousness, how will we know whether they truly have human-like experiences or whether they are just imitating humans? Perhaps then, we will have the means to look inside the subjective experience of brains to see what others are experiencing; but that is a topic for another day!

Book Signing

At the end of the talk, I bought a copy of The Computing Universe and got it signed by Tony Hey himself. I told him I wanted to give the book to my dad as a Father’s Day gift. He inquired, “You’re going to read it too, I hope?” Of course! I hope to do much more than read this book. I want to be a part of its story.


Categories: Reviews, technology