Roboticide and Robot Love
-----------------------------

For those of us who enjoy taking a peek at things to come as we go down the road of machine intelligence, Isaac Asimov never disappoints - and I would dare say that he is still the best. I just reread The Robots of Dawn and was again amazed at the many insights into the future one finds in this book.

In The Robots of Dawn we are faced with the horrors of a roboticide: the killing of a humaniform robot - a robot who was loved by a human woman, and who made love to a human woman. At one level it is a pretty good detective story; at another level Asimov once again explores the future of mankind in this future robot world of ours.

Robots don't need to take things all that seriously - they can be absolutely courageous in life, as they know no fear of pain or death, because for them there is no pain or death. So one might have expected robots to face all the dangers life might throw at humans. But according to Giskard (Asimov), that would be dangerous for humans in the long run. So the mind-reading robot Giskard ("any sufficiently advanced technology is indistinguishable from magic", as Arthur C. Clarke would have put it) decides that humans will have to settle the Galaxy without the help of robots. Somehow the difficulties, dangers and harm without measure - things the robots could prevent if present - will be better for the development of man in the long run than the safety advanced robots could bring. When humans have passed a certain threshold, someday in the far future, robots can perhaps intervene once again. It is just as parents know that they can't be overprotective and must let their children face the world on their own. So Giskard reasons that he must override the First Law of Robotics in the long-term interest of humanity.

In the end - despite his shortcomings, short lifespan and pain - the hero of The Robots of Dawn must therefore be Elijah Baley, a human.
He faces exposure to a (symbolic) storm - he feels pain and is frightened - and yet manages to overcome it all, and ends up a better person because of it.

Which makes you wonder what Asimov is really saying. Robots might kill other robots out of love (i.e. the First Law of Robotics), robots might even make love to a human (again the First Law of Robotics), but robots can't help humans in the tasks and trials of being human. Is he, Asimov, being a pessimist or an optimist here? A pessimist, in the sense that our greatest inventions really come to nothing when faced with the really tough questions of life - or an optimist, as we already have within ourselves all the powers needed to face these challenges?

--------

The Original Laws of Robotics (The Calvinian Religion)

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The Zeroth Law (The Giskardian Reformation)

0. A robot must act in the long-range interest of humanity as a whole, and may overrule all other laws whenever it seems necessary for that ultimate goal.

---------

Simon Laub
www.silanian.subnet.dk