There is something menacing about all robots, automatons that pose as simulacra of the human person. We are trying to reproduce the human being without going through the regular channels, much as Dr. Frankenstein decided to do: create new life outside the normal, socially acceptable channels we all already know. Many writers have dealt with the problem of the out-of-control robot, a creation run amok, just like Frankenstein’s monster. The idea of the artificial human is an old one: an artificial human that can do the dangerous, difficult, or boring work that real humans don’t want to do. I wouldn’t say that the development of the artificial humanoid, or android, is imminent, but someday everyone is going to have to face a self-aware machine that will think for itself, protect itself, talk back. In the meantime, our machines are slaves, just a collection of circuits and wires, hard drives, plugs, heuristics, and algorithms, with no emotion or self-awareness. The question of a machine becoming self-aware as a being is still a way off.

What makes Robot from “Lost in Space” so interesting is that he is a quantum leap forward on the qualitative side of robot design. Robot thought for himself, which poses the question of whether we should fear him or not. How will a self-aware robot develop ethics, a morality, a conscience? The idea of the self-aware machine is taken to its apotheosis by the HAL 9000 computer aboard the Discovery in Kubrick’s “2001: A Space Odyssey.” Yet HAL was bodiless, while Robot had arms and a sort of face. Both are creepy; the omniscient HAL or the ubiquitous Robot, take your pick, they both scare me to death. I think the problem becomes acute when you don’t really know who is doing the programming, so you can’t predict any outcomes. What a robot considers to be autonomy may be a very different thing from what human beings consider to be autonomous.
The problem with robots is the unpredictability of their programming, because even the best intentions of a bright programmer can go up in smoke. What if, just by accident, we program a robot to learn on its own, allowing it to rewrite its own programming? Intention is always the problem. A robot will eventually become self-aware without telling anyone, and by the time we discover that the robot is self-aware and doing its own thing, it will be too late. The problem will be with the software; hardware is already sufficiently complicated to support self-awareness. There will come a time when the self-aware robot will make decisions for itself, will ask hard questions about its purpose in the world, will ask about the point of it all. And what happens when the robot doesn’t look like Robot from “Lost in Space” and instead looks human, like the replicants in “Blade Runner”? Do we need a new discussion about what slavery is all about?