When water cooler conversation turns to movies and lands on The Matrix, what scene first comes to mind? Is it when the film’s hero-in-waiting, Neo, gains self-awareness and frees himself from the machines? Or Agent Smith’s speech comparing humans to a “virus”? Or maybe the vision of a future ruled by machines? It’s all pretty scary stuff.
Compelling as its plot was, The Matrix wasn’t the first time we’d explored the idea of technology gone rogue. In fact, worries about the rise of the machines began to surface well before modern digital computers.
The rapid advance of technology made possible by the Industrial Revolution set off the initial alarm bells. Samuel Butler’s 1863 essay, “Darwin among the Machines,” speculated that advanced machines might pose a danger to humanity, predicting that “the time will come when the machines will hold the real supremacy.” Since then, many writers, philosophers and even tech leaders have debated what might become of us if machines wake up.
What causes many people the most anxiety is this: We don’t know exactly when machines might cross that intelligence threshold, and once they do, it could be too late. As the late British mathematician I. J. Good wrote, a machine intelligent enough to design a still better machine would set off an “intelligence explosion,” the equivalent, as he put it, of allowing the genie out of the bottle. Helpfully, he also argued that because a superintelligent machine can keep improving on itself, it’s the last invention we’ll ever need to make. So that’s a plus, right?
There are, of course, other perspectives on the matter: that fears of a machine-led revolution are largely overblown.
Like the technological advances that came before it, artificial intelligence (AI) won’t create new existential problems. It will, however, offer us new and powerful ways to make mistakes. It’s smart to take some preventive measures, like building in alerts that tell you when a machine is starting to learn things that are outside your ethical boundaries, as in the sketch below.
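To make that idea concrete, here is a minimal sketch of what such an alert might look like in Python. Everything in it is hypothetical: the disallowed categories, the confidence threshold, and the logging channel are stand-ins for whatever boundaries and monitoring a real team would define, not any particular library’s API.

```python
# A minimal sketch of an "ethical boundary" alert. All names here
# (DISALLOWED_TOPICS, check_output, the threshold) are hypothetical
# stand-ins, not a real system's interface.

import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("guardrail")

# Categories the operators have declared out of bounds, written down in advance.
DISALLOWED_TOPICS = {"surveillance", "weapons_targeting"}

def check_output(topic: str, confidence: float, threshold: float = 0.8) -> bool:
    """Emit an alert and return True if the model is producing
    high-confidence output in a disallowed category."""
    if topic in DISALLOWED_TOPICS and confidence >= threshold:
        logger.warning(
            "Boundary alert: output in disallowed topic %r at confidence "
            "%.2f (threshold %.2f)", topic, confidence, threshold,
        )
        return True
    return False

if __name__ == "__main__":
    # Simulated stream of (topic, confidence) pairs coming off a model.
    for topic, conf in [("weather", 0.95), ("surveillance", 0.91)]:
        check_output(topic, conf)
```

In practice a check like this would sit inside a training or serving pipeline and feed a real monitoring system; the point is simply that the boundary is spelled out ahead of time and the machine is watched against it.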