Two years after the original Star Wars was released, I was born into a home full of scifi addicts. (Let’s not debate whether Star Wars is scifi or fantasy; for the sake of this blog post I’m going to lump it into scifi: it uses advanced technology not yet available to us.)
My sister and brother, ten and nine years older than me, had all the Star Wars toys. Some of my first memories include riding piggyback on my brother as he pretended to be a tauntaun bouncing around the icy surface of Hoth.
I wasn’t immediately drawn to scifi the way the rest of my family was. When Back to the Future came out shortly before my sixth birthday, I remember whining about having to go see it at the theater. But after I got past my initial grumblings, I fell madly in love. To this day I can quote almost every line from it.
After that, I watched ALL THE SCIFI, but I especially loved, and still love, those that explored philosophical and ethical dilemmas, and didn’t just focus on Dudes Doing Manly Stuff But On Another Planet. Battlestar Galactica. 2001. Star Trek.
Fast forward a few decades. I’m sitting at a health conference a couple weeks ago, and a representative from IBM’s Watson team shows this video. Please watch this.
If that doesn’t seem like a propaganda commercial shown at the beginning of EVERY SCIFI MOVIE EVER MADE to foreshadow the destruction of humankind, then I don’t know what does.
I think Watson is exciting. So were the cylons. Siri is fun, especially when she horribly botches my texts and responds to my questions with equivocation. HAL was charming, too. And let’s not forget Google’s goal to mimic human behavior. Not unlike the Terminator.
But before I get on my nihilistic and ludditious soapbox about machine learning, please understand that most of my brain formation occurred in the 1980s. It was the end of the Cold War. They were making movies like WarGames, Blade Runner, Terminator, Alien.
Androids, robots, and cyborgs weren’t often heroes, and when they were they were either helping us fight other androids/robots/cyborgs, or they were helping us fight aliens. Oddly, I’m not afraid of space exploration bringing about the wrath of aliens, but that’s a topic for another blog.
I would like to point out, though, that the idea of nonhuman entities mastering what we consider to be human skills has been scaring the shit out of humans for the entire course of written history.
The Ever-Present Nonhuman Human
The golem, a human-shaped figure of clay whose name appears in the Hebrew Bible and who becomes much more animated (pun intended) in the Talmud, is one of the earliest examples of an anthropomorphic, human-like being.
Initially, a golem wasn’t a creature to be feared, because it could be controlled. If a certain word (traditionally emet, “truth”) was inscribed on the golem’s forehead, the mound of clay would come alive. Removing the word turned the golem back into plain old clay.
Usually a golem represented spiritual fulfillment, and/or it would defend and protect its creators. But there is at least one story of a golem going on a murderous rampage when it accidentally fell in love.
The idea of statues coming to life has also persisted in literature, but industrialization definitely moved these creatures into the realm of advanced and hyperadvanced technology.
The 1920 play R.U.R., which gave the world the word “robot,” imagines a robot-based economy in which artificial workers make up the bulk of the labor force and humans become less and less important. Eventually the robots grow more intelligent, and with that newfound intelligence they decide they want more power.
And this is what it comes down to, isn’t it? Love and power. We understand machine learning. We don’t have too much of a problem with robots processing information, and Watson is okay as long as he(?) helps us with our health problems or somehow makes our lives easier.
Love and Power
What we’re scared of with artificial intelligence is exactly what makes human existence so difficult. We fall in love and we desire power. Those two concepts don’t actually seem to have much to do with intelligence. We could argue that they’re remnants of our limbic brains, and if it were up to our frontal lobes we’d be more like Watson.
But we also use machines for purposes of love and power. Match.com and OkCupid use algorithms to pair people up. The machine knows that if you like Pomeranians, long walks on the beach, and lavender-scented pomade, then you’ll probably be compatible with XYZ.
If we allowed Watson to be our robotic shadchan (matchmaker), learning from our online behaviors and comparing us to others like us, then maybe every match he made would be perfect.
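The pairing logic described above can be sketched as a toy similarity score. This is just a back-of-the-napkin illustration: the names, interests, and scoring choice (Jaccard similarity over stated interests) are all my own assumptions, not how any real dating service works.

```python
# Toy matchmaking sketch: score compatibility as the overlap between
# two people's stated interests. Purely illustrative; real services
# learn from far richer behavioral signals than a list of hobbies.

def jaccard(a: set, b: set) -> float:
    """Fraction of shared interests: size of intersection / size of union."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def best_match(person: set, candidates: dict) -> str:
    """Return the candidate name whose interests overlap most with `person`."""
    return max(candidates, key=lambda name: jaccard(person, candidates[name]))

me = {"pomeranians", "long walks on the beach", "lavender-scented pomade"}
others = {
    "alex": {"pomeranians", "long walks on the beach", "jazz"},
    "sam": {"motorcycles", "chess"},
}

print(best_match(me, others))  # → alex
```

A learning system would go further, adjusting the scoring function itself based on which matches actually worked out; that feedback loop is what separates a matchmaking algorithm from a matchmaking machine that learns.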
Likewise, governments seeking power are ravenous for more knowledge. The CIA could (and probably does?) use Watson to read and process millions of text messages, and he’d probably be scarily accurate in predicting when and where the next terrorist attack would occur.
But we’re not as afraid of the power and potential for love that machines like Watson give us humans; instead, we’re afraid of the machines desiring that power and love.
When the cylons fall in love with humans, things get a little complicated.
When Skynet craves power, it nearly destroys everyone.
We can program a computer or robot to do whatever we want.
But what scares us more than anything is the idea that the computer could program itself to do whatever it wants. Learning is one thing. Desire is another.
I don’t think this fear is enough to stop us from trying. I think we’ll try and try to create an intelligence that not only processes natural language and returns precise results to our questions, but one that gives us an emotional connection, like in Her.
We want answers to our questions, we want speedy access to knowledge, but that’s not enough. We’re all rooting for the Tin Man to get a heart and for the Scarecrow to get a brain.
Be more like us. Be better than us. And please, warn us when this happens.