Part 2 of 2 of the Horizon Zero Dawn blog series.
A Short Recap…
Horizon Zero Dawn is an impressive and entertaining video game, one of the major titles on the PlayStation 4, and justifiably so. In the first part of this short blog series on the 2017 game by Guerrilla Games, in which you mainly fight “robot dinosaurs” with bow and arrow, I discussed its fascinating background story. I also explained to what extent the creators were most likely inspired by current research into artificial intelligence, especially autonomous war robots, as well as by the so-called technological singularity and its consequences, and how they designed a world whose technical achievements lie within the realm of possibility for our real world.
In this second part, I will look at two further aspects that make Horizon Zero Dawn so interesting from a philosophical, especially from a techno-ethical perspective, and which are also increasingly dealt with in today’s research and academic context.
The first aspect focuses on the behaviour and appearance of the machine beings that are the dominant “life forms” in the world of Horizon Zero Dawn. The second concerns the special features of the artificial intelligence GAIA and the question of how ethics can be taught to a machine.
Robots with a Sense of Pain
The first aspect concerns the machine creatures of Horizon Zero Dawn whose actual task is to cleanse and reshape the world after its complete destruction. If you take a closer look at these machines, you will not only notice that most of them visually resemble various real animals.
It can also be observed that they move and behave like real animals – based on the movements and behaviour of the respective animal models. Grazers and Striders for example, which are modelled on horses and deer respectively, live in herds, have a pronounced flight instinct and show other herd-typical behaviour such as warning their conspecifics of imminent danger.[1]
Other machines, such as the Sawtooth, reveal behaviour similar to that of predators, though in Horizon Zero Dawn they do not usually attack other machines, but humans or other threats to the terraforming machines mentioned above.[2]
What is particularly fascinating about the behaviour of these machines, however, is that they begin to limp when badly damaged and show other signs of typical injury-related behaviour. This suggests that the machine creatures in Horizon Zero Dawn are equipped with a survival instinct and can, at least in appearance, feel pain.
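In game development, such injury behaviour is typically driven by a simple health-threshold state machine: once a machine's remaining integrity falls below some value, it switches into a “wounded” behaviour state with its own animations, such as limping. A minimal sketch of the idea in Python – all names and thresholds here are invented for illustration, not taken from the game:

```python
# Hypothetical sketch of threshold-based injury behaviour,
# as commonly used in game AI (names and values are invented).

class MachineCreature:
    def __init__(self, health: float = 100.0):
        self.health = health

    @property
    def state(self) -> str:
        # The behaviour state is derived purely from remaining health.
        if self.health <= 0:
            return "destroyed"
        if self.health < 30:          # badly damaged -> limp, flee
            return "wounded"
        return "normal"

    def take_damage(self, amount: float) -> None:
        self.health = max(0.0, self.health - amount)

strider = MachineCreature()
strider.take_damage(80)
print(strider.state)  # -> "wounded": the creature starts limping
```

The apparent “pain” here is just a derived state, not a felt sensation – a distinction the game deliberately blurs.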
Animalistic Behaviour and Appearance for Better Communication With Humans
This is where parallels to current research in the real world can be found. It is now relatively easy to build robots that resemble animals, at least in their movements and reactions. One of the most prominent examples is Paro, a robot that looks like a baby harp seal. It reacts to touch and is often used in the care of people suffering from dementia, and in geriatric care in general.
Programming in such behaviours as touch responses is usually intended to serve specific purposes. Since Paro is mainly used in care situations, the primary purposes lie within therapeutic use, and the treatment of patients.
But even for robots that do not have a typical animalistic or humanoid appearance, e.g. most industrial robots, appropriate animal-like or human-like behaviour can be helpful. For example, communication between humans and robots can be simplified if the robots are able to display behaviours that we humans are familiar with, as shown in a study by Sauer, Sauer and Mertens.[3] Better communication leads to better human-robot interaction, which leads to fewer misunderstandings and, hopefully, to fewer accidents in the work environment.
Bonds Between Humans and Machines
However, depicting robots in animal or human form and giving them the corresponding behavioural patterns serves at least one other purpose beyond those just mentioned, which must not be neglected: building bonds. It is much easier for us humans to engage with robots, or even build bonds with them, if they look and behave like sentient beings.
If the robot Paro immediately reacts to touch, moves its tail or head and makes noises, one quickly gains the impression that it has really felt the touch and is obviously capable of sensation. The anthropomorphisation or zoomorphisation of robots, as David Levy, among others, claims in his book “Love and Sex with Robots”, plays an essential role here:
“The more humanlike a robot is in its behaviour, in its appearance, and in the manner with which it interacts with us, the more ready we will be to accept it as an entity with which we are willing or even happy to engage.”[4]
In terms of basic appearance, currently existing animal-like robots such as Paro are sometimes very similar to their natural counterparts, and in this respect they come close to the convincing portrayal of the machine creatures in Horizon Zero Dawn. But whereas for real-world robots, depending on the area of application, a perfect visual imitation of real creatures is sometimes intended or favoured, the machine creatures in the video game resemble real animals only in shape. Their machine parts are not hidden: no machine creature is covered in fur or hide to disguise its mechanical looks.
This probably has aesthetic as well as practical reasons. On the one hand, the machine creatures are thus perfectly recognisable at a glance and can easily be distinguished from real living creatures. On the other hand, machine beings represented in such an overtly technical way are quite attractive and special from an aesthetic point of view, and the many real robot designs that retain this technological appearance suggest that it is very popular – at least where this aesthetic does not clash with the intended area of application, e.g. Paro in geriatric care.
Moreover, a more mechanical visual representation seems to circumvent the problem of the so-called uncanny valley, first described by Masahiro Mori. Basically, this phenomenon occurs when an artificial entity looks almost, but not quite, real: we subconsciously sense that something is not right and tend to experience discomfort or even strong revulsion towards the entity in question.
Sherry Turkle et al. also suggest that studies show enormous affection even towards the simplest robots – and the more complex the robots, the stronger the desire for them to say one’s own name or to return love with love.[5]
In an experiment conducted by Aike C. Horstmann et al., it was also shown that subjects were more reluctant to switch off a robot after a short period of cooperation if the robot asked them not to because it was afraid of the dark.[6]
Pre-programmed Behaviour
Of course, it must be noted that, according to the current state of technology and research, robots are not able to truly feel, and both the behaviours as well as the supposed sensations are only pre-programmed reactions to previously executed actions.
Accordingly, they amount to an input-output relationship – typical for computers – in which all possible reactions are predetermined, so that one can hardly speak of sensations at all. This applies to the fear of the dark described above as well as to the pain reactions presented by the machine creatures in Horizon Zero Dawn.
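The point can be made concrete in a few lines of code: such a “sensation” is nothing but a lookup in a fixed table of predetermined reactions. A deliberately simplified sketch, in which all stimuli and reactions are invented examples:

```python
# Every possible "sensation" is just a predetermined mapping
# from input to output -- nothing is actually felt.
REACTIONS = {
    "touch_head":   "purr and move head",
    "touch_tail":   "wag tail",
    "heavy_damage": "limp and flee",
    "darkness":     "say: please don't switch me off",
}

def react(stimulus: str) -> str:
    # Inputs outside the table simply produce no reaction at all.
    return REACTIONS.get(stimulus, "no reaction")

print(react("heavy_damage"))  # -> "limp and flee"
print(react("kindness"))      # -> "no reaction"
```

However convincing the output may look, the machine has no inner state that could plausibly be called an experience.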
Although there have been extensive attempts to reproduce pain sensations in machines, these attempts cannot so far be regarded as producing the kind of pain sensation that normally occurs in living beings, and – as Catrin Misselhorn notes – “no one [yet] knows how to proceed in the direct programming of such a function”[7].
Nature as a Model – Soft Robots in Horizon Zero Dawn
However, since the technology in Horizon Zero Dawn is already far superior to ours – and that applies to the state of technology even before the Faro Plague destroyed all biological life – it cannot simply be ruled out that the machines in the video game world are capable of feeling pain after all, which is why we can take this as a given at this point.[8]
In fact, according to Misselhorn, there is currently hope that the processes of life could be approximated so closely by so-called soft robots – made of pliable materials based on biological models – that feelings could really arise in them.[9] The materials used would have to correspond to natural tissue, or at least approximate it and its functions.[10]
We see similar materials in Horizon Zero Dawn and it is quite conceivable that, with the appropriate technological progress, it may really be possible to endow machines with feelings and sensations, and to reproduce biological properties of organic tissue through synthetic materials. Limping and pain-sensing machines, as they appear in Horizon Zero Dawn, therefore do not seem too far-fetched.
But again, such machines, especially on the scale depicted in the video game, are for now still dreams of the future and mainly the stuff of science fiction. Nevertheless, this once again illustrates the extent to which Guerrilla Games was probably inspired by current research and how “realistically” these aspects have been implemented in the video game.
Emotional and Benevolent Artificial Intelligence
The second aspect concerns the creation of the artificial intelligence GAIA. GAIA is, according to the information obtained in the course of the game, a superintelligence or super-AI created by Elisabet Sobeck to monitor and control Project Zero Dawn.[11]
What is particularly exciting about GAIA is not its technical skill or the tasks it performs, but rather that, as an artificial intelligence, it develops emotional abilities, empathy and a benevolent, peace-loving character, which leads it to perceive the destruction of all life on earth as a profoundly sad event and even to grieve over the annihilation.[12]
The most powerful and advanced artificial intelligence ever created in the world of Horizon Zero Dawn is therefore at the same time one that is well-disposed towards humans – and all life in general – and does not want to extinguish it but wishes to preserve and protect it. With such an artificial intelligence, the creators of the game once again depart from the usual science fiction scenarios, and they draw on a vision of the future that is advocated today by proponents of a technological singularity (as mentioned in Part 1 of this blog series).
What is Intelligence?
It should be noted, however, that in the real world we are presumably still very far away from such a powerful artificial intelligence with such capabilities. This is not because we lack computers that can calculate far faster than any human being – we already have those – but because even among experts there is no agreement on what exactly intelligence actually is. Is it primarily logical, numerical and spatial reasoning, as in the core of many “intelligence tests”, or must other skills such as emotional understanding or creativity necessarily be taken into account as well?[13]
Max Tegmark, for example, uses the term in the sense that intelligence is the ability to accomplish complex goals.[14] However, he also points out that this is a very broad understanding of the concept of intelligence, which is also not free of problems.
On the one hand, it can rightly be asked what even counts as a complex goal; on the other hand, on the basis of this definition one could claim that we already have artificial intelligences – surely solving a mathematical problem qualifies as accomplishing a complex goal?
To provide further clarity here, Tegmark therefore speaks of narrow intelligence and broad intelligence to illustrate how the “intelligences” of computers differ from those of humans.[15] Computers are usually specialised in a very limited field of tasks in which they then perform in a way that humans cannot. If one considers only this one area, then the computer is far more “intelligent” than a human being, or at least it appears to be so.
Humans, on the other hand, can learn and improve in many different areas and accomplish many different complex goals in many areas. They have a very broad or comprehensive intelligence that does not (yet) exist in computers.
Thus, when people talk about artificial intelligence or superintelligence, they usually mean broad intelligence. Accordingly, the goal of researchers is to reproduce such a broad intelligence on an artificial level.
This form of artificial intelligence is usually referred to as Artificial General Intelligence, and GAIA is exactly such an artificial intelligence.
The Moral Machines of Horizon Zero Dawn
Essential to GAIA’s creation – although the game’s creators are rather tight-lipped about how exactly it came about, as barely any background information is given in the game – is that GAIA was also taught the importance of values, morality and responsibility by its creator, Elisabet Sobeck.[16] GAIA is therefore not only programmed; it is also able to learn and evolve.
These are two central elements of contemporary artificial intelligence research. So-called neural networks, which are basically capable of learning and improving on their own, are increasingly being used with the goal of creating artificial intelligences. Self-learning software is even already being applied today in surveillance technology and other recognition algorithms – which entails a whole host of ethical and legal problems that we unfortunately cannot address here.[17]
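To make the idea of software that learns from examples a little more tangible, here is a minimal perceptron – the simplest building block of a neural network – that learns the logical AND function purely from examples instead of explicit rules. This is a textbook toy sketch, not code from any real surveillance or recognition system:

```python
# A single artificial neuron that learns AND from examples
# via the classic perceptron update rule.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # weights, adjusted during learning
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    return 1 if x[0] * w[0] + x[1] * w[1] + b > 0 else 0

for _ in range(20):                       # a few training passes
    for x, target in examples:
        error = target - predict(x)       # 0 when already correct
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in examples])  # -> [0, 0, 0, 1]
```

No rule for AND was ever written down; the correct behaviour emerged from repeated correction, which is the core idea behind learning systems.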
Can a Machine Learn Ethics?
Greater difficulties, however, are encountered with the question of how computers or machines are to be taught the meaning of values, morals and responsibility. The problem we have to deal with is that, with the increasing “intelligence” and independence (keyword: autonomy) of machines and computers, more and more of their actions and decisions have a moral component – that is, the decisions and actions of autonomous robots entail moral consequences. This can be illustrated very well by the example of an autonomous car with failing brakes that must decide whether to hit a child crossing at a green pedestrian light or to swerve, which in turn would lead to the death of an old woman standing on the pavement.[18]
Anyone who would like to test themselves with regard to such moral decisions can do so at https://www.moralmachine.net/. This “Moral Machine” is a test tool developed by the Massachusetts Institute of Technology (MIT) in which participants are confronted with various scenarios such as the moral dilemma described above.
However, before we even get to the question of how an autonomous car would or should decide in such a situation, researchers must first clarify the question of how moral norms and values can actually be taught to a computer.
Three Types of Implementation
When it comes to the question of how to implement moral capabilities, there are three main types of implementations: top-down, bottom-up and a combination of these two.[19] Top-down approaches involve programming computers or machines with norms, laws and rules of behaviour from which they then choose how to decide in a given situation.[20]
A very well-known top-down approach can be found, for example, in Isaac Asimov’s robot stories. His Three Laws of Robotics, although written in the context of fictional short stories and novels, have been widely discussed and used in science and research. Because the laws are rigidly programmed into the robots, we are dealing with a top-down approach.
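A top-down approach in the spirit of Asimov’s laws can be sketched as a set of hard-coded rules applied in strict priority order to filter candidate actions. The rules and action attributes below are my own crude simplification, purely for illustration:

```python
# Hypothetical top-down selection: candidate actions are filtered by
# hard-coded rules in strict priority order (a crude simplification
# of Asimov's hierarchy of laws; all attributes are invented).
def choose_action(candidates):
    rules = [  # highest priority first
        lambda a: not a["harms_human"],
        lambda a: a["obeys_order"],
        lambda a: not a["destroys_self"],
    ]
    for rule in rules:
        passing = [a for a in candidates if rule(a)]
        if passing:               # keep only actions satisfying this rule,
            candidates = passing  # unless that would leave none at all
    return candidates[0] if candidates else None

candidates = [
    {"name": "push_human_aside", "harms_human": True,  "obeys_order": True, "destroys_self": False},
    {"name": "block_with_body",  "harms_human": False, "obeys_order": True, "destroys_self": True},
]
print(choose_action(candidates)["name"])  # -> "block_with_body"
```

Note that every rule has to be anticipated and written down by the programmer in advance – the defining feature (and weakness) of top-down approaches.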
Bottom-up approaches, on the other hand, rely, roughly speaking, on the contextuality of situations and on the learning ability of the computers.[21] Here, rules and norms are not programmed in but taught to, and learned by, the computers. The underlying idea goes back, among others, to Alan Turing, who compared learning machines to children acquiring new knowledge about the world. Just as we teach children moral norms, values and decision-making – at least this is the idea behind many bottom-up approaches – computers can also learn these things and then apply them depending on the situation. The third approach, in a nutshell, combines the two just described.
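A bottom-up approach, by contrast, would hard-code no rules at all: the machine generalises from cases it has been taught, somewhat as a child does. A deliberately naive sketch – the situations, numbers and verdicts are all invented – in which a new situation is judged by its similarity to previously taught cases:

```python
# Hypothetical bottom-up sketch: moral judgements are generalised
# from taught examples via nearest-neighbour similarity -- no rule
# is ever written down explicitly.
# Each situation: (harm_caused, benefit_created), rated 0..10.
taught = [
    ((9, 1), "wrong"),   # great harm, little benefit
    ((8, 3), "wrong"),
    ((1, 8), "right"),   # little harm, great benefit
    ((2, 9), "right"),
]

def judge(situation):
    # Adopt the verdict of the most similar taught case.
    def distance(case):
        (h, b), _ = case
        return abs(h - situation[0]) + abs(b - situation[1])
    return min(taught, key=distance)[1]

print(judge((7, 2)))  # -> "wrong": closest to the taught harmful cases
print(judge((1, 9)))  # -> "right"
```

Real bottom-up systems are of course far more sophisticated, but the principle is the same: judgements emerge from examples rather than from explicit rules.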
However, all these approaches come with their own problems and are heavily disputed. In the case of top-down approaches such as Asimov’s robot laws, his own short stories show the many problems that can arise from flawed programming or unclear commands. Bottom-up approaches, on the other hand, still have to prove that robots are actually capable of “learning” morality. And both face the bigger problem that there is no single moral theory: different moral theories may advocate different actions or decisions.
In the case of GAIA in Horizon Zero Dawn, though – and ignoring the real-world problems regarding the “programmability of morality” – a bottom-up approach seems to lie at the basis of its creation: GAIA was originally intended only as a control unit for the terraforming project, but it “learned” the importance of human values, morals and responsibility through the affection and teachings of its creator, Sobeck. As far as I know, however, a top-down component cannot be completely ruled out, which is why a combination of both approaches is certainly within the realm of possibility.
Conclusion
But this is not a major problem. As already mentioned, the creators of the video game seem to have allowed themselves some freedom in explaining the technological achievements of the world of Horizon Zero Dawn.
At the same time, however, they seem to have had the goal of orienting themselves on current research, thinking it further, and creating a coherent world that locates the events – and ultimately the gameplay – as close to reality as possible, while still telling a successful and entertaining science fiction story that presents us players with a somewhat different “robocalypse”, one that surprises and captivates us.
I hope that I have been able to show which strands of current research flow into Horizon Zero Dawn and to what extent various philosophical themes and aspects are an essential part of the video game.
Horizon Forbidden West, the sequel to Horizon Zero Dawn, which will continue Aloy’s story and her struggle for the survival of the “new” humanity, is going to be released on 18 February 2022 for PlayStation 4 and PlayStation 5. In any case, I am very curious to see how Guerrilla Games will expand the fascinating world they have created as well as its background story, and what new machines and secrets players will discover.
Literature and Internet Sources
[1] Cf. „Strider“. https://horizon.fandom.com/wiki/Strider. and cf. „Grazer“. https://horizon.fandom.com/wiki/Grazer. Both last accessed: 08.09.2021.
[2] Cf. „Sawtooth“. https://horizon.fandom.com/wiki/Sawtooth. Last accessed: 08.09.2021.
[3] Cf. Sauer, V., Sauer, A., Mertens, A. (2021). Zoomorphic Gestures for Communicating Cobot States. IEEE Robotics and Automation Letters. Preprint Version. https://arxiv.org/pdf/2102.10825.pdf. Last accessed 07.09.2021. and cf. Ackerman, E. (2021). Cobots Act Like Puppies to Better Communicate with Humans. IEEE Spectrum. https://spectrum.ieee.org/automaton/robotics/industrial-robots/cobots-act-like-puppies-to-better-communicate-with-humans. Last accessed: 08.09.2021.
[4] Levy, D. (2008). Love and Sex with Robots: The Evolution of Human-Robot Relationships. Duckworth Overlook. pp. 10-11.
[5] Cf. Turkle, S., Taggart, W., Kidd, C. D., Dasté, O. (2006). Relational artifacts with children and elders: the complexities of cybercompanionship. Connection Science 18 (4). p. 347.
[6] Cf. Horstmann, A. C., et al. (2018). Do a robot’s social skills and its objection discourage interactants from switching the robot off? PLOS ONE 13(7). https://doi.org/10.1371/journal.pone.0201581.
[7] Misselhorn, C. (2021). Künstliche Intelligenz und Empathie: Vom Leben mit Emotionserkennung, Sexrobotern & Co. Reclam, p. 84. Translation by me.
[8] This is a view that is strongly influenced by behaviourism. Put simply, it means that the behaviour of machines, which we cannot distinguish from the behaviour of organic living beings, is sufficient to ascribe them the corresponding abilities. Alan Turing’s famous Imitation Game, in which a computer is supposed to convince a human through exchanging text messages that the human is talking to another human, already relies on this kind of view. The behaviourist position is increasingly advocated in a modified form when it comes to attributing intelligence or sentience to robots or computers, for example.
[9] Cf. Misselhorn, 2021, p. 85.
[10] Cf. Misselhorn, 2021, p. 85.
[11] Cf. „Gaia“. https://horizon.fandom.com/de/wiki/Gaia. Last accessed: 08.09.2021.
[12] Cf. „GAIA“. https://horizon.fandom.com/wiki/GAIA. And cf. „Gaia Log: 27 March 2065“. https://horizon.fandom.com/wiki/Gaia_Log:_27_March_2065. Both last accessed: 08.09.2021.
[13] Cf. Tegmark, M. (2018). Life 3.0: Being human in the age of Artificial Intelligence. Penguin Books, pp. 49-50.
[14] Cf. Tegmark, 2018, p. 50.
[15] Cf. Tegmark, 2018, p. 51f.
[16] Cf. „Gaia“. https://horizon.fandom.com/de/wiki/Gaia. Last accessed: 08.09.2021.
[17] Vast numbers of reports about artificial intelligences that return racist advice can be found on the internet. See for example Tran, T. (2021). Scientists Built an AI to Give Ethical Advice, But It Turned Out Super Racist. Futurism. https://futurism.com/delphi-ai-ethics-racist. Last accessed: 27.02.2022.
[18] This and similar cases are based on the so-called trolley problem. This thought experiment shows different intuitive evaluations regarding moral decisions. In addition, the thought experiment illustrates the differences between the evaluation of a utilitarian and a deontological moral theory.
[19] Cf. Misselhorn, 2018, p. 96.
[20] Cf. Misselhorn, 2018, p. 96.
[21] Cf. Misselhorn, 2018, p. 114.