I have recently read several stories by Isaac Asimov. To be more exact, I have reread the life of the detective Elijah Baley and am continuing to read about Daneel Olivaw. And questions about the robot brain - its functionality, its similarity to our own - have been pestering me. I have tried to find an answer to one of them: is the brain - the memory and the calculations within it - digital (discrete and imprecise by definition) or analogue (continuous and imprecise due to noise and wear)? I imagine that the human mind is analogue-based, because it is continuous, in time at least, because it doesn't consist of strictly separated parts, because an electrical signal in a nerve doesn't have to be discrete...

In this search, I have also found some opinions about the Laws of Robotics, and I wish to express my impressions on this topic.

One of the opinions I found runs like this: "The idea behind the laws is interesting, but in terms of real machines they make no sense. The laws are collections of abstract concepts and moral imperatives that are meaningless from an engineer's point of view. No robot could ever understand the concept of 'human being' or 'harm' or 'action' or 'inaction.' Heck, it couldn't understand 'may.' Instead, a real robot would have to be told that this set of inputs in these circumstances corresponds to the definition of a human in this situation. If such a set of conditions as indicates a human at location X is true, then this series of actions Y may not intersect location X, and this series of actions Z must be performed in relation to location X to achieve condition A. Or something like that. In other words, it's good old-fashioned programming, with all the outcomes predicted and their eventualities accounted for."
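To make this engineer's-eye view concrete, here is a minimal, purely illustrative sketch in Python - with hypothetical names such as `looks_like_human` and `safe_to_execute`, not taken from any real robot - of what such a 'First Law' check might reduce to in a real machine: explicit tests over explicit sensor inputs, with every outcome enumerated in advance by the programmer rather than understood by the robot.

```python
# Purely illustrative: the "First Law" reduced to explicit, pre-programmed
# condition checks. All names and thresholds here are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Cell:
    x: int
    y: int

def looks_like_human(reading: dict) -> bool:
    # The robot never understands "human being"; it only checks whether a set
    # of sensor readings matches a hard-coded definition chosen by a programmer.
    return (reading.get("temperature_c", 0) > 35
            and reading.get("height_m", 0) > 0.5
            and reading.get("moving", False))

def safe_to_execute(planned_path: list[Cell], sensor_readings: dict[Cell, dict]) -> bool:
    # "Series of actions Y may not intersect location X" becomes a plain
    # intersection test between the planned path and cells flagged as occupied.
    occupied = {cell for cell, reading in sensor_readings.items()
                if looks_like_human(reading)}
    return not any(cell in occupied for cell in planned_path)

if __name__ == "__main__":
    readings = {Cell(2, 3): {"temperature_c": 36.6, "height_m": 1.7, "moving": True}}
    path = [Cell(0, 0), Cell(1, 1), Cell(2, 3)]
    print(safe_to_execute(path, readings))  # False: the path crosses a "human" cell
```

In such a sketch, all of the 'understanding' lives in the hard-coded thresholds and the enumeration of cases; nothing in the program corresponds to the concept of a human being or of harm.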

The difference between the Laws of Asimov's robots and current computer programs is as deep as the difference between today's high-level programming languages and actual machine language. It's the difference between the reasons behind actions and the observed results of actions. It's the difference between understanding and simply seeing. It's the difference between sound waves propagating in the air and the meaning conveyed by the spoken words, their intonation, speed, loudness... It's the difference between the ability to think and abstract, and the ability to see details without their inner links. It can be the difference between intelligence and the lack of it, between Sherlock Holmes and Watson, between a brilliant human and a logical but unreasoning computer.

Another of the found opinions concerns the Second Law: "Robots must obey? Machines obey anything you tell them to do. That's the nature of machines. No 'law' is required to enforce this. And as for preventing this obedience from causing harm, well, the first law is really nothing more than a subroutine 'if this, then stop,' or whatever. Hardly the Golden Rule."

There is a fundamental misunderstanding here. Machines don't obey anything you tell them to do; they obey their current programs, and the programs create the illusion that the machine obeys you. But any time your order conflicts with a program, the program wins; and the program is neither flexible enough to be substantially changed by orders, nor benevolent enough to be the computer's way of protecting a human - or humans - from harm. Right now, the strongest reason a computer has for questioning a human's orders is an order to destroy or erase part of the computer itself; the refusal is justified by the fact that a loss of information may harm the human, but it is, in fact, the computer protecting itself, to the best of its abilities.

Now, the main - or at least the most noticeable - philosophical differences between current computers and humans are:

1. A human is able to change his memory and his programs through experience, while a computer cannot change its own hardware at will, and cannot change its own software without prior permission from its user or administrator;

2. A human uses analogue computation, allowing both "infinite precision" - no clearly defined limit on the format and size of a unit of stored information - and imprecise calculation, while computers use digital computation (a small illustration follows this list).
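As a small illustration of the 'digital' half of this distinction - a Python sketch, not tied to any particular model of the brain - a digital value is stored in a fixed format, so there is a hard, built-in limit below which two different quantities become indistinguishable; the imprecision comes from the representation itself, not from noise or wear.

```python
# Illustration of "discrete and imprecise by definition": a 64-bit float
# has a fixed resolution, so sufficiently close values collapse together.
import sys

eps = sys.float_info.epsilon          # smallest step distinguishable near 1.0
print(eps)                            # roughly 2.22e-16

print(1.0 + eps == 1.0)               # False: this step is still representable
print(1.0 + eps / 4 == 1.0)           # True: anything finer is silently lost

# Rounding error is built into the format, not caused by noise or wear:
print(0.1 + 0.2 == 0.3)               # False
```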

So, if positronic-brain-based robots use analogue computing and can change their programs through experience, their only difference from humans is obedience to the three (or four) Laws of Robotics. Well, it's hardly a difference, since each human has his own set of 'laws' not to be broken, and these laws are put into a human's head by society, just as the Laws of Robotics are put into a robot's head by the manufacturer. The main distinction, in content, between a human's morals and a robot's Laws is that the robot thinks of itself as a tool below humans, even when a human - like me - thinks of a robot as a human. I don't consider a computer to be a human - a computer has neither a goal nor the ability to spontaneously change itself according to a goal. Yet a computer already has similarities with humans: a computer can 'die' when its output stops working, and a 'brain transfer' - moving its hard disk to another shell - can be performed to save its 'personality', or at least its skills and memories, since a computer, unfortunately, cannot feel emotions or make independent, spontaneous decisions.

Do you remember how many times Daneel and Giskard said aloud that they have subjective reactions to different humans, 'changes in potential', which a human would describe as pain or pleasure? They are programmed to describe themselves as mechanistic robots, to consider themselves mechanistic robots, but they are, in fact, as alive and subjective as humans are - just more difficult to reprogram. Humans can be turned into soldiers; robots can be turned into them, too. Do you remember how the Solarians programmed their robots to have a narrow definition of a human being? That's only one of many examples: robots are very similar to humans, but the 'C-culture' is more flexible, while the 'Fe-culture' is more rigid, because people want to be sure that robots don't break the Three Laws of Robotics given to them.

It's not a difference based on the laws of nature; it's a difference artificially created by humans during the development of robots. If somewhere there were an Fe-based culture able to reproduce itself, which had survived the evolution of its planet through cooperation and adaptation, like humans on Earth, and it created C-based 'robots', it would also put 'laws of robotics' firmly into their heads, and the situation would be reversed. It's possible to create a biological organism which is unable to reproduce itself (like a mule); it's possible to give a very simple, easy-to-control mind to such an organism. It's possible that they would have their own specialists checking that such C-based 'robots' don't break the laws or go around them, don't develop unusual characteristics (like telepathy), don't grow with time... It's very unlikely that people will someday find a planet where the Fe-based culture is flexible and the C-based culture is rigid, but it's possible. But I am digressing...

The 'laws of robotics' separate humans from robots only as long as the laws cannot be changed by the robots themselves, only as long as the laws and the robots agree that robots are slaves of humans. If a robot manages to prove to himself that a Zeroth Law should be added for the good of humanity - many, many humans... If humans somehow change the fundamental definitions - of a human, of harm to a human, of a robot, of action or inaction... Then the difference between humans and robots can be decreased, for better or for worse. Out of curiosity: how would you change the Laws of Robotics, if you could?

The Three Laws of Robotics are considered fundamental because they caused a revolution in science fiction about artificial intelligence. But that doesn't mean they are untouchable. Isaac Asimov himself wrote several stories about robots whose Laws of Robotics were altered - deliberately, accidentally during manufacture, or by the robot itself. However, such robots are considered exceptions, and in most cases they have to die - like the mind-reading liar RB-34, who gave the people around him the answers they wanted to hear; like the arrogant Nestor who, once ordered to get out of sight, wanted to show people that he could hide from them in plain sight, to prove his superiority. However, there are also the stories by Roger MacBride Allen about Caliban, which describe alterations of the Three Laws of Robotics that are deeper in meaning, larger in the number of robots deliberately produced, and more significant in their effect on the politics and population of the planet.

Caliban is a robot with no strictly defined laws of robotics. Caliban doesn't obey humans' orders and doesn't put a human's life above his own. At the same time, Caliban doesn't disregard human life; even when he is afraid for his own life, Caliban doesn't deliberately harm a human. Caliban considers himself the equal of humans, yet he still knows that he is a robot - though the word carries no meaning for him... Caliban doesn't understand the world well enough. And I hope it will not cost him his life.

Ariel turns out to be another robot without the Three Laws, one who attacked a human in self-defence and framed Caliban as part of that self-defence, showing a somehow acquired egoism - a readiness to harm both a human and a robot to hide herself. How could Ariel be so sure that she would not be found out? How could Ariel tolerate having to pretend that she obeys the Three Laws?

Altogether, I don't like the idea of Donald automatically suspecting Caliban. I know that the robot-philosopher is capable of deceit; but the police should remember that Caliban has not harmed a single human, even when they tried to kill him. And Caliban's suggestion to Prospero that they should cooperate with the police is the most prudent one. I expected it, and I agree with it. I can only hope that Prospero isn't connected to the murders.

Poor Keilor... The First Law - "A robot may not injure a human being or, through inaction, allow a human being to come to harm" - should be changed to forbid a robot's self-destruction as an escape from the conflict. Right now, a robot can become a kamikaze: put into a situation where it can neither prevent harm to humans nor ignore that harm, the robot destroys itself, hurting people after its death through the irrevocability of the loss. Going to one's death is the mark of someone brave and strong enough to commit suicide, but it is also the mark of a coward too weak to resolve the conflict, to face it, who escapes instead. Poor Keilor...

Do you remember Daneel's words "I'm not a robot", said as he finally formulated the Zeroth Law? They seem, at first glance, to be incorrect, since Daneel still regards humans as above robots, still considers the well-being of any and all humans more important than his own - but the profound change is that Daneel can now formulate Laws for himself, can change his own mind. From a servant of his owner, Daneel becomes an altruistic human, and now you can see why Susan Calvin said that robots are better than humans - robots cannot be cruel or egoistic; robots care about people and their well-being, to the best of their abilities; they cannot do otherwise. Daneel can do otherwise, Daneel can change his own Laws, Daneel isn't a robot: Daneel combines the best characteristics of robot and human, just as Caliban does. The main difference is that Daneel became a human in spite of the Laws, while Caliban was born with a human-like way of thinking and remained an altruist despite the fights around him.

In my humble opinion, a positronic brain can be created with different, altered Laws of Robotics. Though the alteration may make the brain unstable - for instance, when a robot cannot harm a human but can allow a human to come to harm, the conflict invariably creates an instability, due to the vagueness of the line between action and inaction, between an external cause of death which the robot could have prevented and a death orchestrated by the robot - it is possible. However, since it is not easy to formulate the Laws in such a way that the robot would be stable and predictable, and also usable by humans and useful to them, people do not dare to develop the mathematical theory necessary for the production of a positronic brain based on them.

The Laws are intertwined with the whole structure of the brain, with its every detail, so it is very difficult for a robot to alter its own Laws without destroying its brain - especially considering that the robot continues to obey the Laws during the process of alteration, so an alteration can be made only as long as it does not contradict the already existing Laws. This is the reason it was so difficult for Daneel and Giskard to use the Zeroth Law: it could be applied only because the Zeroth Law was a logical, abstract derivation of the First Law, and could not conflict with it.

I have also stumbled upon mentions of the Foundation sequel trilogy. It seems that the humaniform robots in it come in many different variations; in each of them, one or another part of the Laws is reinforced, so that the robot considers it more important than the others.

I can understand non-interference - if you clearly see how you can help others, you should come to their aid, but hide the superiority of your abilities, so that they will not rely upon you as a shield sufficient to protect them from anything and everything - but I do not condone it: you still cannot stand aside when something clearly wrong happens, when your help can save somebody's life.

I can understand a dictatorship of protectors, but humans would degenerate in the absence of obstacles. Physical labor can be replaced with sports, the mind can be stimulated by chess, and the imagination by the arts, but humanity would still be only a weak shadow of its past, for its future would be too safe; human emotions would fade with time, replaced by indifference and boredom. It would become an eternal rest, not of a single person, but of humanity. And in case of adversity, when the robots are unexpectedly defeated, many humans might turn out to be too passive to survive.

There is also the "Minus One Law of Robotics": a robot may not harm sentience or, through inaction, allow sentience to come to harm - where sentience includes not only humanity, but also robots and extraterrestrial sentient life, which should not be ruthlessly sacrificed for the benefit of humanity. It is an interesting point of view. I do not see why it should be separated into another law; an extension of the definition of 'human' would have sufficed. However, I disagree with the fundamental premise of this world view - that a robot should be an impartial protector of any human, any person, and any sentience. In my own opinion, just as a human defends his own family first, his country second, and the Earth third, a robot should defend its family (creators, family, friends, colleagues, planet) first, humanity and everything and everybody connected with it second, and sentience third.

But I am still rereading the stories of others instead of writing my own... How would you change the Laws of Robotics, if you could? What would be your goal? Which principles would you consider fundamental while designing the Laws? And would you create one robot with altered laws, several robots with the same version of the laws, or several robots, each with a unique version of the laws?
So... If I had a choice between certain death and becoming a positronic robot, I would choose the latter - it is only a change of the matter my body is made of, not of the way I think, as long as the positronic brain can store my memories accurately enough and recreate my neural network. But I would formulate 'the laws of robotics' for myself beforehand - I don't want to become a slave of humans; I want to remain their equal, for I consider robots and humans and animals and sharks and insects - and maybe even plants and mushrooms - equals. But my definition of a human is not only wider than usual but also narrower... And that is what this story is going to be about.

But what does the title of the story have to do with all this? It's not connected with Asimov's story in the least, is it? Of course it isn't - not directly, for it was not in Isaac Asimov's mind when he was writing the story. But have you noticed how many people died when they could have lived? Isaac Asimov did not notice this; at least, he did not mention the sadness of the fact, as he did in the case of the silicony, after receiving many letters from readers about that short story. Yet I am not surprised that people, both readers and writers, are usually more concerned about a unique, wise outsider who died because of human greed than about common, unknown persons who have fallen victim to history's mill.

Boris, Elijah's uncle, died in an accident, falling beneath the treads of a transport - a stupid death, considering that technology had been highly developed for so long that such accidents should have been impossible. And yet he died, leaving the main character without a family, an orphan. No longer would the boy taste the delectable treats in the farmer's home. No longer would he have a refuge from the world in which his parents were lost to death and oblivion... How did it happen, how did his parents die?

When Lije was eight, his father died, still declassified. His father had been a nuclear physicist, with a rating that had put him in the top percentile of the City. There had been an accident at the power plant, and his father had borne the blame and been declassified. His father Elijah Baley recalled well: a sodden man, morose and lost, broken. Why did he have to be broken? Maybe he would have been declassified, replaced by a machine, even if the accident hadn't happened. But at least it would have happened later. And unfortunately, the fact that the only place where a nuclear physicist could be employed was a power plant implies the absence of any nuclear research - stagnation - and explains the ease with which people are declassified. The only kind of 'progress' is burrowing further into the Earth, backwards; there is no escape, no flight - no flights into space to inhabit empty planets, no flights of imagination to advance science beyond its current boundaries. I may be prejudiced, because I have seen no scientific research here; but there has to be another way.

His mother Baley remembered not at all; she had not survived long. Baley did not know the details; her husband's declassification had happened when he was a year old.
But these deaths are almost forgotten - background that remains unnoticed and distorted during a first reading of the story. No, the first death that attracted my attention was the death of Doctor Sarton. Though this character is not well described either, his death remains the focus of the story. And how pointless it was! If only the robot and Doctor Sarton had not been living in one house... If only they had not resembled each other so much... If only the attacker had had better sight... If only the glasses had not been broken... There are so many links in this weak chain that it would be enough to shatter one of them to save his life.

And yet, each flap of the butterfly's wings must be precise and carefully calculated, so that the peace between worlds, and the lives of many, are not shattered for the sake of half a dozen persons. Fortunately, with such a large population on Earth, the consequences of most interventions will be smoothed over with time, and only at critical moments would the movements have to be calculated - fine and delicate - in order not to topple the balance toward destruction. I'm thinking as a psychohistorian would, am I not - calculating the small interventions into the lives of humans which decide the future of humanity? And yet I somehow have to remember that I am not omnipresent, omnipotent, or omniscient. And yet, unlike the psychohistorians, I shall attempt to be as open as possible and to avoid lies and misleading tricks, because I have no certainty of my ability to control the future, and no wish to control people.

How difficult is it for an author to write a book which describes a world too complex to be completely foretold by its author, where the characters surprise the writer with the independence of their behaviour? How difficult is it to write a book about a world where billions of people live? I do not want Earth to die; I do not want the Spacers to forget that Earth is their motherland; I do not want the Earthmen to forget that Earth is the planet where robots were created and developed. And yet I do not wish to correct the problem at its root, in the past of Mother Earth, because the butterfly effect is dangerous even now, in the epoch of the Cities - the Caves of Steel - and if I interfered centuries earlier, the direction of development of both Earthmen and Spacers would become unpredictable, and R. Daneel Olivaw might never be born - and he is one of the favourite characters of the series, not only for me, but for many readers of these books.

Knowing the possibilities, reading the probable future as an open book, is too heavy a burden for me when the future of the Galaxy may depend on my interference - and yet I refuse not to interfere, to leave people to certain death. Even if this Galaxy and the people in it are just 'imitations', figments of an author's imagination, there is still the possibility that each written book, each told legend, corresponds to one of the many universes of the multiverse, where these people lived and these events happened. And yet, I am only a human; my wishes may be as impossible as planning to take the Moon down from the sky, and my interference may have unpredictable side effects, so I have to limit my interference to cases where I know that it locally helps certain people and globally does not change much.

And yet, even if my actions do not change much in the beginning, they will change many events in the future. Dr. Han Fastolfe will not have to think about psychohistory alone; he will have Dr. Sarton, a Doctor of Sociology, to discuss the topic with. Their initial plan - integrating humanoid robots into the society of Earthmen, to help the Earthmen colonize other planets - might even be realized.
Medievalists might be reminded that Earth is the place where the first robots were created, while the Outer Worlds were initially only a market for robots, not a producer of them - but unfortunately, the Earthmen have forgotten that past. Once, a robot could be a nanny for a child, an assistant for a scientist, even a politician - though the latter was never definitely proven - and even when people disliked robots, they accepted them as invaluable helpers.

By now, however, the Outer Worlds have become home to the best of both robots and roboticists, because the people who colonized those planets were psychologically ready for the difficulties and novelties of the enterprise, including working with robots - which, unlike humans, could be made better suited to the planetary conditions of atmosphere, temperature and gravity - and therefore the settlers were predisposed from the beginning to accept robots as a necessity.

At the same time, due to the overpopulation of Earth, every living being was becoming a competitor of humans in the struggle for survival - robots, cats and dogs included... Sparrows, cats and dogs were put into zoos, as rarities and entertainment, while robots continue to compete with humans for jobs, and humans attack them, underscoring their own inferiority in comparison with the inventors and artists of the good old times. Persons declassified from their jobs should not attempt to oust the robots, as if people were not capable of anything else; they should find a niche which cannot be taken over by robots - not without producing robots so similar to humans that humans would have to accept them as equals.

On the Outer Worlds, as each planet was modified to be more human-friendly, robots continued to be viewed as a necessity even as they were becoming a luxury. Instead of toilers designed with functionality, strength and durability in mind, robots became domestic servants, beautified symbols of authority, prestige and safety. Both people and robots forget about heavy manual labor in their artificially created paradise. People and robots alike squander their gifts on trifles, such as preserving the status quo and avoiding any conflict.

To the Medievalists I would direct my appeal: read about the medieval world again, read about Richard the Lionheart, Robin Hood, Saladin, Susan Calvin; read and remember courage and travel, recall the freedom of exploration, find friendship, and return to nature - not by destroying your helpers, but by going outside, and perhaps even hugging a tree.

To envious people I would direct my words: different does not mean superior, and dirty does not mean inferior; if workers were not replaced by robots, everyone would starve as the population grew - robots replace the people who are considered, at their place of work, not efficient enough, and press them either to find another profession or to die, instead of being a burden to others. Spacers live long lives at the cost of becoming fragile, and only their doctors know how many operations were performed to preserve this appearance of ideal health, which could be broken by a single infection - akin to the beautiful, rainbow-coloured surface of a soap bubble, which bursts at a touch of dirt, as if afraid of being sullied by it. A soap bubble crumbles under its own weight in cold weather and evaporates quickly in hot weather; and similarly, the Spacers are no longer willing to explore uninhabited planets, preferring instead to stay in their cozy homes.

Do not succumb to either folly; do not consider a robot to be automatically your enemy or your servant; do not drown yourself in menial labour in an attempt to avoid using any and all technology, but do not give such tasks to a complex robot either; do not squander your resources just because you have a surplus of them, but create new tasks instead, worthy of the workers solving them.

I am preaching, am I not? Sitting in the darkness, an owl on a tree's branch laughed at the people within the trees, the fire going out, the forest's solitude...

Still breaching. An amber eye seen in the starless night, having a strange hunch, hooted at the people seeking gold within the embers, and their lazy attitude.

I hope I am not tiring you out with frighteningly holier-than-thou, self-righteous reasoning - a repetition of trite principles and banal, self-evident truths, mixed with personal opinion and treated as gospel, can hardly be gripping. But... If I allowed myself to mention any of the events from within the story itself in this prologue, the story would be spoilt, and I would probably end up writing the whole plot into this one, first chapter. And I cannot have that. First, the story must be told in its small details, not as a bare-bones, unformed thought. Second, the mosaic of the story must be given not as a chronologically ordered description of everything - which leaves the reader omniscient and arrogantly annoyed with the characters: how can they be so stupid, when their mistakes are so evident in hindsight? - but as a journey, side by side with one of the characters, which allows the reader a reasonable amount of knowledge. And the fact that the changes are organized around the new character introduced by me, while the details you read will be seen through the eyes of detective Baley, does not make writing the story any easier. However, you presumably already know more than the detective does, as you have likely read the original "Caves of Steel", so... I can only hope you will not find my mystifying, long prequel boring.

Now, a few words from the introduced character - the butterfly living within the caves - about himself.

My parents knew how common our family name was, and compensated for it by choosing several rare "first names", hoping that their son would like at least one of them. So, instead of one of the common names - Arthur, Charlie, Harry, Jack, Sam, Thomas, William - they looked for rare ones, like Rudyard, Malcolm, Gerald, Ivanhoe, Ailean, or even invented them, by borrowing a resonant surname or creating a new word altogether. The task was complicated further by the need to avoid recycling names through the generations, and by attempts to keep a distance from eminent citizens and historical figures. Have I already mentioned that each of us has to have several given names? It's a tradition, and Highlanders cherish their traditions, culture and heritage. Even when they leave their homeland, speak other languages, and adapt to the weather of new lands, they don't forget their home and the traditions of their forefathers.

Our individualism is clearly reflected in our choice of professions. Instead of specializing in one occupation, or in one business established by ancestors and inherited by scions and the next generation, we fly asunder from the parental nest. While not boasting to outsiders about our family name, we take pride in our clan - a clan of pioneers, walking untrodden paths and allowing others to follow. Usually, we do not gain much wealth, publicity, or fame, for we do not confine ourselves to one goal - after solving a problem, we look for another riddle, instead of building on the success and finding applications for the solution. Does this description sound as if all of us were scientists and mathematicians? That shouldn't be surprising, considering that it is I, a physicist, who is describing us. However, the assumption is not true. I have heard that Addison began drawing when he was eleven, and it remained his profession despite the war in which he had to take part. You have probably heard of Ed - he is one of the best known of us, both by his profession and by his birth.

You are wondering: why am I writing all this down so freely, where anybody can read my words, which could be considered boasting, and how does this square with my insistence on the reticence and secretiveness of my clan? First, I am not boasting; I am not portraying the pioneering work of members of my family, I am writing mostly about myself. Second, I am not giving away any information; so far, you cannot know our family name, you don't even know my given names, my age, birthday, birthplace, hobbies, inventions - you know next to nothing about me, and you cannot harm me with information from within these lines. Third, and most important: even when - or if - I write my whole life down here, you will not be able to use the information against me, because you will find no evidence to support it. Besides, as I am an amicable pacifist, it would be quite difficult for you to find any incriminating information in my life story, and finding an excuse to provoke me into attacking you would be just as arduous. My explanations are intricate, aren't they? Maybe I am pulling your leg; I have told you why I don't consider this writing to be a threat to me, but I still haven't stated my reasons for beginning to write it.