Pax Robotica
--------------------

I just reread Isaac Asimov's "Robots and Empire" to refresh my memory of Asimov's ideas on where and how the Three Laws of Robotics fail. I must confess that I still find the issue less than crystal clear. What did master Asimov really think?

Almost every human activity carries some risk. Consequently, conscience-stricken robots like R. Daneel and R. Giskard could not permit most of them, according to the Three Laws of Robotics:

>1. A robot may not injure a human being or,
>through inaction, allow a human being to come to harm.
>2. A robot must obey the orders given it by human
>beings except where such orders would conflict with the First Law.
>3. A robot must protect its own existence
>as long as such protection does not conflict
>with the First or Second Law.

As Stephan H.M.J. Houben (stephanh@wsan03.win.tue.nl) wrote in a previous post:

> Any group of Asimovian robots worth their salt would
>immediately round up all humans and put them in a
>Matrix-like computer simulation.
>Of course, when you "die" in this simulation, you wouldn't die
>in reality (that would violate the First Law),
>you just get a mind wipe and be reborn.

R. Giskard puts the same idea slightly differently to his friend R. Daneel in "Robots and Empire":

"It is not sufficient to choose (between different evolutions of human society), Friend Daneel. We must shape a desirable species and then protect it, rather than finding ourselves forced to select among two or more undesirabilities .... When we think of the humanity we must save, we think of Earth people and the Settlers. They are vigorous, more expansive. They show more initiative because they are less dependent on robots. They have a greater potential for biological and social evolution, because they are short-lived, though long-lived enough to contribute great things individually."

So at the end of "Robots and Empire" R. Giskard destroys the Earth in order to create a Galactic human civilisation.
He works under his own Giskardian Reformation, the Zeroth Law:

>0. A robot must act in the long-range interest of humanity
>as a whole, and may overrule all other laws,
>whenever it seems necessary for that ultimate goal.

-----

So Asimov balances between two views. On the one hand, the Three Laws of Robotics have effectively turned humanity into pets under robot control: (human) free will is an illusion, operating only within boundaries that R. Daneel and friends think safe. On the other hand, it is still important that humanity is creative and shows initiative; i.e. robots are only helpers, they are not actually running the show. But surely you can become so gifted at helping that you are in reality taking everything away from the ones you help? And then we are left with robots running the show.

In Asimov's "The Caves of Steel" R. Daneel has built-in cerebroanalysis, which lets him do basic empathy and a bit more. And in "Robots and Empire" R. Giskard supposedly gives R. Daneel the ability to read minds. Later, in the Foundation series, R. Daneel even has the ability to "write minds" and change human emotions. Certainly, such qualities rather confuse the roles of master and servant.

And whereas humans are short-lived, R. Daneel ends up being at least 20,000 years old (how old exactly?). Actually, there is an interesting issue about Daneel's lack of continuity: "Even my positronic brain has been replaced on five different occasions," he tells Trevize (in the Foundation series). "My present brain ... is but six hundred years old ..." So is R. Daneel in reality many robots? I am not quite sure what Asimov actually meant here - certainly he hints that an intelligent entity can't go on for 20,000 years. Why? And if it is better for humans to be short-lived, why doesn't the same hold true for robots?

One thing is for sure, though: as intelligent robots entered the equation, humanity started living under the Pax Robotica. And Asimov was a genius to have foreseen it all.
February 8th, 2003
-Simon

Simon Laub
www.silanian.subnet.dk