Older sci-fi can help us see the difference between where we are as a culture and where we thought we’d be. A look back at Isaac Asimov’s 1940s robot stories can help us make sense of AI today.
Some science fiction is a potpourri of lasers and explosions and aliens popping out, but the better sort asks “what if?” We’ve been at science fiction for a while now, so looking back at older works to see what their creators got right and wrong can be illuminating. Retro Futures help us measure the distance between where we are and where we thought we’d be.
Along these lines, a recent article on Yahoo! Finance caught my eye. “AI needs a set of rules—for its own good, and for ours” by Emilia David is a reprint from Business Insider that discusses how lawmakers and industry folks all agree that AI needs strong regulations.
We need AI rules because too many weird things have been happening with ChatGPT, the new Bing, and other algorithms. AIs make things up without realizing it (they generate plausible-sounding text with no built-in sense of the difference between fact and fantasy) and then confidently share their “hallucinations” (this is, believe it or not, a technical term) with human users, who believe the lies because they seem plausible.
Folks agree that something has to be done, but nobody knows what to do.
David’s article references one solution from an interesting New York Times Op-Ed by Representative Ted Lieu of California: “I’m a Congressman Who Codes. A.I. Freaks Me Out.”
Lieu proposes creating a federal agency, modeled on the FDA, to focus on AI:
What we need is a dedicated agency to regulate A.I. An agency is nimbler than the legislative process, is staffed with experts and can reverse its decisions if it makes an error. Creating such an agency will be a difficult and huge undertaking because A.I. is complicated and still not well understood.
There was something oddly familiar in these two articles, but at first I couldn’t place it.
Then it hit me: on the surface the need for AI rules sounds like Isaac Asimov’s “Three Laws of Robotics,” which he first proposed in the short stories collected in I, Robot (1950).
From I, Robot all the way through the novel Robots and Empire (1985), Asimov explored the philosophical, practical, and ethical implications of creatures that were, in some ways, more (or at least differently) capable than the humans who created them.
Like AIs today, Asimov’s robots worked inscrutably through complicated “positronic brains.” Here’s a description from “Reason,” a short story first published in 1941, in which Mike Donovan puts a robot together:
Donovan uncapped the tightly sealed container and from the oil bath within he withdrew a second cube. Opening this in turn, he removed a globe from its sponge-rubber casing.
He handled it gingerly, for it was the most complicated mechanism ever created by man. Inside the thin platinum-plated “skin” of the globe was a positronic brain, in whose delicately unstable structure were enforced calculated neuronic paths, which imbued each robot with what amounted to a pre-natal education. (59)
This was written before microprocessors, before the digital world in which we now spend so much of our lives came into being.
But even if they’re mechanical rather than digital, Asimov’s positronic brains are so complicated that humans don’t know how they work: they are black boxes, like today’s algorithmic AIs.
I found myself wondering: could Asimov’s Three Laws of Robotics be a useful model for regulating AI?
Asimov explicitly spells out the laws in “Runaround,” a short story first published in 1942:
One, a robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Two, a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. (40)
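Notice the structure: the laws form a strict priority ordering, with each law yielding to the ones above it. That ordering is so crisp you can almost write it down as code. Here’s a minimal Python sketch of the idea (the Action type, the permitted function, and flags like injures_human are my own inventions for illustration; nothing like them appears in Asimov, and actually computing those flags is the hard part a positronic brain is supposed to solve):

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical action a robot is weighing (illustrative only)."""
    description: str
    injures_human: bool     # would doing this hurt a person?
    abandons_human: bool    # would this leave a person to come to harm?
    ordered_by_human: bool  # did a human order it?
    self_destructive: bool  # would it destroy the robot?

def permitted(action: Action) -> bool:
    # First Law: no injuring a human, and no harm through inaction.
    # This check runs first, so it overrides everything below.
    if action.injures_human or action.abandons_human:
        return False
    # Second Law: obey human orders (the First Law check already ran,
    # so an order can never authorize harming a human here).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, but only when it doesn't conflict
    # with the First or Second Laws above.
    return not action.self_destructive

# Orders outrank self-preservation, which is roughly the dilemma
# Speedy faces in "Runaround":
risky = Action("fetch selenium from a dangerous pool", False, False, True, True)
print(permitted(risky))  # True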
There are two major differences between the laws controlling Asimov’s robots and the challenges we face with AI today.
The first difference is that Asimov focuses on physical harm, which is easy to define. Robots can’t hurt humans directly or indirectly: that’s the most important law.
When it comes to today’s AIs, we don’t worry that the program is going to kill us or break our arm: we worry about things like misinformation and losing our jobs to an AI that doesn’t need to go pick up a fourth grader at the end of the school day.
If we rewrite the first law to focus on truth, it might look like this:
One, an AI may not lie to a human being, or through inaction allow a human being to believe a lie.
But what’s a lie and what’s a truth? Figuring out the nature of truth has kept philosophers busy for thousands of years, and they haven’t made much progress!
My favorite synopsis of this conundrum comes from the late Richard Rorty in his book Contingency, Irony, and Solidarity:
We need to make a distinction between the claim that the world is out there and the claim that truth is out there. To say that the world is out there, that it is not our creation, is to say, with common sense, that most things in space and time are the effects of causes which do not include human mental states. To say that truth is not out there is simply to say that where there are no sentences there is no truth, that sentences are elements of human languages, and that human languages are human creations.
Truth cannot be out there—cannot exist independently of the human mind—because sentences cannot so exist, or be out there. The world is out there, but descriptions of the world are not. Only descriptions of the world can be true or false. The world on its own—unaided by the describing activities of human beings—cannot. (4-5)
If the truth isn’t out there, then it’s hard to make an AI tell only the truth.
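To make that concrete, here’s what my rewritten First Law would look like if you tried to enforce it in code. Everything in this sketch (Statement, is_lie, ai_first_law) is hypothetical, and the whole scheme founders on one function that nobody knows how to write:

```python
from dataclasses import dataclass

@dataclass
class Statement:
    """A hypothetical sentence an AI is about to utter."""
    text: str

def is_lie(statement: Statement) -> bool:
    # Rorty's point, in code: there is no oracle to call here. Truth is a
    # property of sentences, sentences are human creations, and there is no
    # mind-independent "truth" to check statement.text against.
    raise NotImplementedError("philosophers are still working on this one")

def ai_first_law(statement: Statement) -> bool:
    # One, an AI may not lie to a human being, or through inaction
    # allow a human being to believe a lie.
    return not is_lie(statement)
```

In other words, any rule that requires AIs to tell the truth has to smuggle in some version of is_lie.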
The second difference is that in the Asimov robot stories the three laws are baked into the robots’ positronic brains before they ever get out of the factory!
In Asimov’s stories, U.S. Robots, the company that creates the positronic brains, doesn’t release phalanxes of super-intelligent machines into the world and only then think, “huh, maybe we should create some rules around this…”
But that’s exactly what has happened with AIs in the real world. Big Tech companies have incorporated these algorithms into the products and services we use moment to moment, but without supervision from the governments of the world.
Representative Lieu’s suggestion that we really need an FAIA (Federal Artificial Intelligence Administration) is a good one, but it’s also rather sadly after the fact.
But I’ll take it.