Wednesday, May 11, 2011

The Myth of the Three Laws of Robotics – Why We Can’t Control Intelligence

Like many of you, I grew up reading science fiction, and to me Isaac Asimov was a god of the genre. From 1939 until the mid 90s, the author created many lasting tropes and philosophies that would define scifi for generations, but perhaps his most famous creation was the Three Laws of Robotics. Conceived as a means of evolving robot stories beyond mere retellings of Frankenstein, the Three Laws were a fail-safe built into robots in Asimov's fiction. These laws, which robots had to obey, protected humans from harm and made robots obedient. This concept helped form the real-world belief among robotics engineers that they could create intelligent machines that would coexist peacefully with humanity. Even today, as we play with our Aibos and Pleos, set our Roombas to cleaning our carpets, and marvel at advanced robots like ASIMO and Rollin' Justin, there's an underlying belief that, with the proper planning and programming, we can ensure that intelligent robots will never hurt us. I wish I could share that belief, but I don't. Dumb machines like cars and dishwashers can be controlled. Intelligent machines like science fiction robots or AI computers cannot. The Three Laws of Robotics are a myth, and a dangerous one.

(Image: Asimov's Three Laws of Robotics. 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law. 3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.)

Let's get something out of the way: I'm not worried about a robot apocalypse. I don't think Skynet is going to launch nuclear missiles in a surprise attack against humanity. I don't think the Matrix's machines will turn us all into batteries, nor will Cylons kill us and replace us. HAL's not going to plan our 'accidental deaths,' and Megatron's not lurking behind the moon, ready to raid our planet for energon cubes. The 'robo-pocalypse' is a joke. A joke I like to use quite often in my writing, but a joke nonetheless. And none of the scifi examples I've quoted here is really about the rise of machine intelligence. Skynet, with its nuclear strikes and endless humanoid Terminators, is an allegory for Cold War Communism. The Matrix's machine villains are half existential crisis, half commentary on environmental disaster. In the recent re-imagining of the Battlestar Galactica series, Cylons are a stand-in for terrorism and terrorist regimes. HAL is about how fear of the unknown drives us crazy, and Megatron (when he was first popularized nearly 30 years ago) was basically a reminder of the looming global energy crisis. Asimov's robots explored the consequences of the rise of machine intelligence; all these other villains were just modern human worries wrapped up in a shiny metal shell.

(Clockwise) Meet the Terminator, the Matrix 'squid', Megatron, a Cylon centurion, and HAL... a.k.a. Communism, Existentialism, the Energy Crisis, Terrorism, and Xenophobia. This post will not be about red-eyed robots.

Asimov's robots are where the concern really lies. In his fictional world, experts like Dr. Susan Calvin help create machines that are like humans, only better. As much as these creations are respected and loved by some, and no matter how much they are made to resemble humanity, they are in many ways a slave race. Because these slaves are stronger, faster, and smarter than humanity, they are fitted with very strong shackles: the Three Laws of Robotics. What better restraint could there be than making your master's safety your top concern, and obedience your next? Early in Asimov's timeline, humanity largely feels comfortable with robots, and does not fear being replaced by them, because of the safety the Three Laws provide.
