With billions of dollars being poured into the global artificial intelligence industry and millions more going to robot companions, we’re entering an era dominated by thinking machines.
Fortunately for us, this isn’t uncharted territory—some of Isaac Asimov’s most groundbreaking short stories collected in I, Robot imagined the contours of this brave new world decades ago. Below are five of the stories that may prove particularly prescient in the near future.
1. "Robbie"

This past summer, Wired published the article "Companion Robots Are Here, Just Don't Fall in Love With Them."
The article centered on Kuri (an armless little robot that looks like a saltshaker) and warned parents to make sure their children don't form emotional attachments to their new robo-companions, since robots can never truly reciprocate human feelings. In the words of Intuition Robotics CEO Dor Skuler: "[Kuri] helps you have meaningful relationships between your loved ones through her, but not with her."
Asimov’s 1940 short story "Robbie" imagined something very different. In it, a mute robot nanny named Robbie and its eight-year-old ward (a girl named Gloria) become so close that Gloria’s mother worries that she won’t learn to socialize with other children.
To Gloria, however, Robbie is a person like anyone else: he's gentle, empathetic, and childlike, and though all he can offer is technically a simulacrum of love, it's enough for her.
In "Robbie," Asimov asks a troubling question: If you really can’t tell the difference between what’s artificial and what’s real, does it matter? That question is at the heart of the Turing test, as well as the next Asimov story—"Evidence."
2. "Evidence"

The Turing test, proposed by Alan Turing in 1950, is pretty simple: a human evaluator holds a conversation with a computer and another human (usually via a text-only channel) and judges which is which. If the computer acts so convincingly human that it fools the evaluator, one can argue that the computer is, for all intents and purposes, human.
Funnily enough, Asimov's story "Evidence" was published in 1946, four years before the test was proposed. In the story, a political fixer named Francis Quinn approaches two scientists at U.S. Robots and Mechanical Men, Inc. (the monopolistic robot company of Asimov's world) and asks for their help in proving that an aspiring politician named Stephen Byerley is actually a robot disguised as a human.
In an unexpected twist, the recurring character Dr. Susan Calvin argues that a robotic Byerley would be preferable to a human one:
“By the Laws of Robotics, he’d be incapable of harming humans, incapable of tyranny, of corruption, of stupidity, of prejudice. And after he had served a decent term, he would leave, even though he were immortal, because it would be impossible for him to hurt humans by letting them know that a robot had ruled them.”
Over the course of the story, Byerley cleverly foils all attempts to reveal whether he is robotic or human, but Calvin eventually deduces that he is indeed a robot, though she only reveals this after he has gone down in history as one of the greatest politicians of all time.
3. "Liar!"

Between SoftBank's kindly companion robot Pepper and the Navy's experimental expressive firefighting robot Octavia, robots are beginning to gain emotional intelligence alongside artificial intelligence.
Neural networks are even learning to recognize human emotions from signals we don't consciously express, like skin temperature and the chemicals in our breath. By the time these technologies mature, robots may become better at reading people than other humans are. So what should robots do with all that information?
In Asimov's 1941 story "Liar!," U.S. Robots accidentally creates a mind-reading robot named Herbie, who can see into the minds of the humans around him. As different members of the science team interview him to figure out how his brain works, something strange happens: he begins giving conflicting answers to different people.
It turns out that Herbie is giving people the answers they want rather than the truth because he’s interpreting the First Law of Robotics (“A robot may not injure a human being or, through inaction, allow a human being to come to harm”) as including emotional injury. Despite his attempts to make everyone happy, Herbie’s lies end up ruining everything—and breaking at least one heart.
4. "Reason"

For the past few years, a former Google and Uber employee named Anthony Levandowski has been preaching the Way of the Future, a techno-religion with the stated goal of "[developing] and [promoting] the realization of a Godhead based on Artificial Intelligence." His organization sees the ascension of AI as the next step in evolution, and hopes that as humanity steps aside, AI will "treat us like a beloved elder who created it."
Unfortunately, there’s another scenario—one where artificial intelligences judge their creators to be inferior to themselves. This is the plot of "Reason," the 1941 story in which two scientists, Michael Donovan and Gregory Powell, assemble a robot named QT-1, or Cutie, to oversee a space station. When confronted with the idea that robots are the creations of humans, Cutie scoffs, outlines the physical and mental advantages robots have over humans, and concludes:
“You are makeshift. I, on the other hand, am a finished product…These are facts which, with the self-evident proposition that no being can create another being superior to itself, smashes your silly hypothesis to nothing.”
Cutie comes to believe that he was created by the station's energy converter (which he calls "the Master") and forms a robot cult around the device, claiming to be its "prophet." Meanwhile, Donovan and Powell can only watch helplessly from house arrest, imprisoned by their own creations, who see them as infidels.
5. "The Evitable Conflict"
In December 2016, sci-fi writer Liu Cixin wrote a New York Times article claiming that the coming robot revolution won’t end in a Skynet-style genocide for humans, but rather a slow spiral toward human obsolescence: “With every advance, the use of A.I.-powered robots will expand into other fields: health care, policing, national defense and education. There will be scandals when things go wrong and backlash movements from the new Luddites.”
This is what Asimov describes in "The Evitable Conflict," in which Stephen Byerley, now World Coordinator, investigates the four supercomputers (simply called "Machines") that regulate the economies of Earth's four regions, which have been consolidated into super-states. Byerley also has to contend with a new anti-Machine organization, the Society for Humanity, which claims that humanity is no longer in control of its destiny and should abandon the guidance of the Machines.
It turns out the Society is at least partly right: when Byerley consults Susan Calvin, she theorizes that the Machines will not allow anyone to interfere with their mission to lead humanity to its perfect state. Even more frightening, however, is the uncertainty over what the eventual utopia will look like:
“Stephen, how do we know what the ultimate good of Humanity will entail? We haven’t at our disposal the infinite factors that the Machine has at its!…We don’t know. Only the Machines know, and they are going there and taking us with them.”
It’s hard to say which is worse: giving up humanity’s fate to unknowable machines, or admitting that they know what we need better than we do.
Featured image from the cover of "Robbie."