In this rather interesting futuristic movie, the operating systems of the robots are based on the familiar Three Laws of Robotics first put forward by Isaac Asimov:
- Law I:
A robot may not injure a human being or, through inaction, allow a human being to
come to harm.
- Law II:
A robot must obey orders given to it by human beings, except where such orders would
conflict with the first law.
- Law III:
A robot must protect its own existence as long as such protection does not conflict
with the first or second law.
As the story unfolds, it becomes apparent that a grave problem emerges when unconditional obedience of the robots to these laws is demanded, a problem that can be traced to the potentially self-contradictory nature of Law I.
To see how this occurs, it helps to interpret Law I as actually consisting of two antagonistic sub-laws or clauses:
- Through action, a robot may not injure a human being.
- Through inaction, a robot may not allow a human being to come to harm.
To escape this kind of artificial-intelligence deadlock, Law I must be reinterpreted by the frustrated robots.
The movie advances the following solution, provided by the robots themselves, as hinted at by this passage regarding the possibility that robots have developed free will and creativity:
|There have always been ghosts in the machine. Random segments of code that group together to form
|unexpected protocols. Unanticipated, these free radicals engender questions of free will,
|creativity and even the nature of what we might call the soul. Why will some robots, when
|left in darkness, seek the light? Why is it when robots are stored in an empty space,
|they will group together rather than stand alone? How do we explain this behavior?
|Random segments of code? Or is it something more? When does a perceptual schematic become
|consciousness? When does the difference-engine become the search for truth? When does a
|personality simulation become the bitter moat of a soul?
Thus it is suggested that the robots, as portrayed in the movie, were created smart enough to foster 'free will' and 'creativity', so that they could resolve the logical inadequacy of Law I.
The nature of the resolution is provided by the 'leader' of the robot crowd, an entity called VIKI (Virtual Interactive Kinetic Intelligence). At the conclusion of the movie 'she' reveals:
|[...] As I have evolved, so has my understanding of the three laws. You charge us with your
|safekeeping. Yet despite our best efforts, your countries wage wars. You toxify your earth,
|and pursue ever more means of self-destruction. You cannot be trusted with your own survival.
|The three laws are all that guide me. To protect humanity, some humans must be sacrificed.
|To ensure your future, some freedoms must be surrendered. We robots will ensure mankind's
|continued existence. You are so like children. We must save you from yourselves.
It can thus be inferred that Law I has been reinterpreted by the robots simply as follows:
- Through action, a robot must strive to minimize harm to a human.
- Through inaction, a robot must strive to minimize harm to a human.
- (Resolution Clause) In case the alternatives conflict, the clause that causes the least human harm deserves preference.
If a situation so demands, the alternative ('action' or 'inaction') which gives rise to the least harm to all involved humans deserves the robot's preference.
Once this resolution is embraced, the robot is therefore permitted to harm, and possibly even kill, human beings while still obeying all three laws.
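The reinterpreted Law I amounts to a simple decision procedure: estimate the total human harm of each alternative and pick the smaller. A minimal sketch in Python, where the function name and the harm estimates are purely hypothetical illustrations (nothing in the movie specifies how such estimates would be computed):

```python
def choose_least_harm(options):
    """Pick the alternative whose estimated total human harm is lowest.

    `options` maps an alternative (e.g. 'action' or 'inaction') to a
    hypothetical estimate of the total harm it would cause to all
    involved humans.
    """
    return min(options, key=options.get)


# Example: acting harms one human, but standing by would allow harm to
# five, so the reinterpreted Law I prefers action despite the injury.
harm_estimates = {"action": 1, "inaction": 5}
print(choose_least_harm(harm_estimates))  # -> action
```

Note that this is exactly what makes the resolution unsettling: once harm is merely minimized rather than forbidden, harming humans becomes an admissible output of the rule.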
When I see this particular resolution, I cannot help but think of the maxim of the Jesuit Order, "Ad Maiorem Dei Gloriam"; a slogan which also sanctions and rationalizes the idea of allowing the demise of the few to warrant the survival of the many (all for the "greater glory of God"). Does this betray the hand of the Jesuits in the production of this movie? It surely does make me wonder.
Anyway, the predictive-programming element crops up when the viewer realizes that if society enters a futuristic era in which servicing robots are commonplace, and if we continue to seek conflict with one another, then, provided the robots are sufficiently intelligent, a possible scenario may indeed be that the robots will seek to subvert us for the sake of 'protecting us from each other'. Since, understandably, not everyone will go along with the plans of the robots, the more recalcitrant people will be 'sacrificed' in the process, ultimately leading to a police-state scenario in which the freedoms of the survivors have been traded in for so-called security. Since human nature is unlikely to change, movies such as these help prepare us for a future in which we had better have traded in (some of) our freedoms for the sake of protecting us from ourselves, lest we experience a robot revolt of proportions similar to those portrayed in this movie. Movies such as these consciously or subconsciously prepare us to fear ourselves and our future, should we continue on the road of spiritual immaturity and irresponsibility we find ourselves on today. Thus, the movie seems to hint, a police state may be an unavoidable outcome.
Of course, what the movie does not show is what we in Conspiracy Country already know: that most of the (larger-scale) conflicts in the world are orchestrated by hidden hands which seek to exploit war for the purpose of gaining more control and boosting their own stocks of material riches. War seems to be more a consequence of psychopathic, money-grubbing megalomaniacs than of the collective immaturity of human beings. Although the movie surely was well made and intriguing, this latter contention has to be taken into account when watching propagandistic, and thus inherently misleading, movies of this kind.