11 Aug 2000
There was a silly fluff piece on the ABC evening news tonight about “thinking” robots with the ability to self-replicate. It was full of sound bites from old guys talking about science’s responsibility to consider the implications of increasingly relinquishing control of our lives to these conscienceless beings. (Of course, they also snuck in snippets from Bill Joy, a man I’d greatly respect if it weren’t for his loudly-voiced rants on this subject). After the segment, the anchorman commented, “[It’s] something we should all think about. The machines are.”
Of course, that’s ridiculous; “the machines” are definitely not thinking about anything. I can understand why someone might make the mistake the anchorman did, though. When you watch the B-roll of robots that look like C3PO, it’s only natural to want to project human qualities onto them. But you’d be just as accurate projecting human qualities onto a box of springs.
Now, I’m not suggesting that science should unquestioningly march forward and never pause to consider the social ramifications of technology. Those who design transportation technology ought to think about safety, and how their inventions will affect social structures and the environment. But they needn’t ask “What if the cars get out of control? What if they TAKE OVER OUR LIVES?!!!” Because they won’t. And though it’s more tempting to think otherwise, neither will robots, for exactly the same reasons.