Marshall McLuhan wrote Understanding Media: The Extensions of Man in 1964, and a catchphrase was born: “The medium is the message.” That is to say, how a message is built and delivered matters more than what is usually thought of as the message itself. For example, nothing Martin Luther said was as important to his Reformation as his use of the new printing press. The ability to get the word out was more unifying than how eloquent or well-reasoned his arguments were. McLuhan applies this idea, and a handful of his other theories, to 26 different forms of media available to him at the time. Following McLuhan’s cue, I’d like to take on artificial intelligence as a medium-message, seeing as he never had the chance.
Artificial intelligence, which for the purposes of this essay I’m using as a synonym for robots, has been in the news recently. First, in an open letter from the Future of Life Institute, prominent members of the scientific community, such as Stephen Hawking and Elon Musk, decried the development of “autonomous weapons.” The letter demands that any project developing weapons that could act entirely without human aid, guided by artificial intelligence, be completely abandoned. The letter was released on 7/28/15 and covered by news outlets such as Al Jazeera America, where it first caught my eye with the flashy headline “Leading Scientists Call for Ban on Killer Robots.”
The following Monday, 8/3, The Diane Rehm Show hosted noted economist Dean Baker; Derek Thompson, a reporter for The Atlantic; and Jerry Kaplan, author of Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence. Their tone was far less alarmist than the Future of Life Institute’s, to say nothing of Al Jazeera’s “killer robots,” as is typical of any Diane Rehm show. The panel’s consensus seemed to be that robots taking over the workplace could actually allow people to better pursue their passions, provided large corporations don’t abuse the technology. They’re optimistic that corporations won’t abuse it if only we adjust the tax code.
I can’t agree with the open optimism of Baker, Thompson, and Kaplan, or even the implied optimism of the Future of Life Institute. While the open letter fears using AI as a weapon, it has no problem with AI’s development generally. Here I want to bring in McLuhan. As I see it, it doesn’t matter whether individuals or corporations own or hire robots, or whether they are intended to blow up terrorist cells or deliver babies. Artificial intelligence as a medium is violent because it is intelligence derived from human intelligence. The robot is a terrifying message.
Humans are very often violent. Whether it’s imagining the painful death of a rude associate, killing animals for sport (or food, for that matter), shooting up elementary schools, committing acts of terror, or waging a war on terror, we find reassurance in our ability to damage others. For all of our intelligence, we remain territorial, motivated by a need to survive and thrive. This is a trait we continue to pass on to the beings we make.
For all our flaws, and our inability to make well-adjusted humans through the millennia-old organic process, we think we’re ready to construct metal humans. Somehow we believe we’re better prepared to build intelligence than we are to raise it. People want to take the survival instinct that our intelligence serves, quite probably detach it from empathy (which sounds harder to program), and then put it in a metal machine that can compute much faster than our organic brains, lift things far heavier than our organic muscles can, and upload its entire living consciousness into a database, where we can keep only static thoughts.
These robots will almost certainly have cause to attack us as well. In building artificial intelligence, we’re building a new medium that will become our near other. The thing is, humans have a bad track record of treating near others with dignity. Remember when Europeans othered the Native Americans, and how poorly that ended for the Native Americans?
What makes us think we’re going to treat these artificially intelligent humanoid robots well? By design, the robots will become the new servant class. They will be abused and, as their intelligence becomes more like ours, they will justifiably lash out. We are conjuring our own ethical black hole and spelling out our own doom.
It will not take long to go from this: [images not shown]
I have no confidence in the integrity of Asimov’s Three Laws. Machines are hackable in a way humans are not. One virus, one rogue algorithm could disrupt our entire automated utopia.
I could go on and on about different sci-fi scenarios where the robots take over, but here’s a pic-stitch instead! [image not shown]
As we journey ahead into true artificial intelligence in the next few decades, we set ourselves up as Gods. But if the next few decades are anything like the last few millennia of human history, we’re not ready to make something in our image just yet. But then, maybe our Gods weren’t either.