The Ethics of Anthropomorphic Technology: The Problem of Human-Likeness in Humanoid Robots

Alicja Halbryt

Introduction

As technological developments in robotics approach extreme human-likeness, it is especially important to consider the ethical implications of introducing such machines into society. The problem this essay analyses is anthropomorphism in human–machine interaction, and more specifically the way humans respond to humanness in humanoid robots. First, it explains the notion of anthropomorphism itself. Afterwards, the essay dives into the concept of Anthropomorphic Technology (AT), discussing arguments both in favour of and against such technologies. It will be argued that AT holds potential for the development of conscious AI, and that anthropomorphising technology positively influences the user experience of a robot. On the other hand, it will be argued that the deception arising from interactions with AT can have detrimental effects on human psychology.

What is anthropomorphism?

As Cornelius and Leidner (2021) note, anthropomorphism is an innate human tendency, even a chronic feature: the tendency to attribute human traits to non-living objects and nonhuman entities, including animals. Anthropomorphism can be characterised as treating nonhuman behaviour as motivated by human mental states and feelings (Damiano & Dumouchel, 2018). Given this view, Damiano and Dumouchel (2018) state that anthropomorphism has traditionally been regarded as a category mistake, a bias, an obstacle to the advancement of knowledge, and even a psychological disposition typical of the unenlightened and immature (meaning ‘primitive people’ and young children). However, these negative traditional connotations can be given a positive role, especially in social robotics (Damiano & Dumouchel, 2018), that is, in robots designed to follow social behaviours and fulfil a designated social role (e.g. in healthcare). Precisely because the tendency to anthropomorphise is so pervasive among humans, it can be seen as a useful tool to facilitate social exchange between people and robots. This predisposition could help engage users in the presence of robots and in their social performances: robots can be designed to stimulate people to assign mental states and human feelings to them, which should promote social interaction and enhance familiarity. Nonetheless, Damiano and Dumouchel (2018) ask whether, if anthropomorphism is a primitive and infantile character trait, it is legitimate for robots to exploit a feature that must essentially be seen as a human failing. In other words, is it right to build robots that take advantage of our defect or weakness?

Damiano and Dumouchel (2018), however, refer to many views that oppose the negative reading of anthropomorphism as an early-childhood feature or a cognitive mistake. One perspective takes an evolutionary-anthropology approach and describes anthropomorphism as a cognitive tool that boosted human fitness. The tendency to recognise human faces or bodies in complex shapes provided an important fitness advantage to early humans: it helped them distinguish enemies from friends, efficiently recognise predators, and build alliances with other tribes (Damiano & Dumouchel, 2018). A more recent conception rejects the treatment of anthropomorphism as an infantile trait, regarding it instead as a fundamental and permanent feature of the human mind.

It is worth noting that anthropomorphism is to some extent a personal disposition: people differ in how strongly they tend to anthropomorphise. For instance, preoccupied and lonely people tend to anthropomorphise more than others (Cornelius & Leidner, 2021). What is more, personality traits such as agreeableness and extraversion influence the interpretation of human-like behaviour, and therefore the degree of anthropomorphism.

What is Anthropomorphic Technology?

It takes as little as a head tilt of a robot to make a human ascribe humanness to it (Mara & Appel, 2015). Following Cornelius and Leidner (2021), human-like design in technology acts as a situational catalyst: the perceived similarity between the design and human beings allows users to access human-like knowledge representations in their minds. To put it differently, anthropomorphism is facilitated by the human-likeness of technology, which can therefore be called Anthropomorphic Technology (AT).

In their paper on the acceptance of Anthropomorphic Technology, Cornelius and Leidner (2021) claim that AT can be human-like in form or in function. Human-like form is the physical embodiment of the machine and tends to be consistent and static. It is a fundamental, basic integration of humanness into design, perceivable as human-like through observation, and encompasses characteristics such as shape, gestures, movement, body and appearance. Human-like function, on the other hand, concerns a machine’s behavioural traits; it mirrors how humans think and behave with other people, visible in the way the machine ‘thinks’ and interacts. Human-like function refers to intelligence, conversational ability and natural language processing, interactivity, and the purpose of the technology. This two-dimensional distinction can be summarised in a simple data structure, as sketched below.
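
As a purely illustrative aid (not anything from Cornelius and Leidner’s paper), the form/function distinction can be modelled as a small data structure; the class and attribute names below are hypothetical and merely restate the categories listed above.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the two dimensions of Anthropomorphic Technology
# described by Cornelius and Leidner (2021); all names are illustrative only.
@dataclass
class AnthropomorphicTechnology:
    name: str
    # Human-like form: physical, static, observable characteristics.
    form: List[str] = field(default_factory=lambda: [
        "shape", "gestures", "movement", "body", "appearance",
    ])
    # Human-like function: behavioural traits shown in how the machine
    # 'thinks' and interacts.
    function: List[str] = field(default_factory=lambda: [
        "intelligence", "conversational ability",
        "natural language processing", "interactivity", "purpose",
    ])

care_robot = AnthropomorphicTechnology(name="social care robot")
print(care_robot.form)      # form attributes
print(care_robot.function)  # function attributes
```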

Leong and Selinger (2019) explicitly claim that humans are more prone to anthropomorphise a robot that talks and walks than a simple device or appliance not made to look like a human, act like one, or resemble any living being, such as an animal. The researchers give the example of the Roomba, a robotic vacuum cleaner. People intellectually comprehend that it is not alive, yet the mere fact that it moves around as if of its own accord is enough to trigger attachment, naming the robot, or even pre-cleaning for the appliance. In other instances, people in experimental settings have objected to torturing robots that simulate pain, and soldiers on real battlefields have objected to the inhumane treatment of robots that defuse landmines (Leong & Selinger, 2019). The phenomenon also arises when Amazon Alexa interacts with children. Kids who ‘bark’ orders at the device are thought of as behaving rudely, even though the machine has no capacity to be offended. Nevertheless, Amazon introduced a feature that makes Alexa reward children for saying ‘please’ and ‘thank you’ (Leong & Selinger, 2019).
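
To make the mechanism concrete, here is a minimal sketch of how a politeness-reward feature of the kind described above might work. The phrase list, function names and replies are hypothetical illustrations, not Amazon’s actual implementation.

```python
# Hypothetical sketch of a politeness-reward feature like the one Amazon
# is described as adding to Alexa; all names and replies are illustrative.
POLITE_PHRASES = ("please", "thank you", "thanks")

def handle_request(utterance: str) -> str:
    # Placeholder for the assistant's actual request handling.
    return "Okay, playing your song."

def respond(utterance: str) -> str:
    """Answer a request, prefixing praise when the child is polite."""
    reply = handle_request(utterance)
    if any(phrase in utterance.lower() for phrase in POLITE_PHRASES):
        reply = "Thanks for asking so nicely! " + reply
    return reply

print(respond("Please play my song"))
# -> Thanks for asking so nicely! Okay, playing your song.
```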

Debate for AT and human-likeness

The arguments in favour of anthropomorphic machines mainly concern their positive impact on user experience. Cornelius and Leidner (2021) argue that it is not clear whether human-like form, human-like function, or both are needed to produce positive outcomes, which in their paper means acceptance of the technology. What they found in general is that human-like design has a positive impact on anthropomorphism, as well as on related concepts. Humanness in a technology’s design drives an increase in anthropomorphism, which in turn produces favourable user responses such as perceived social presence and credibility judgments (Cornelius & Leidner, 2021). In other words, these responses are an effect of interacting with Anthropomorphic Technology: the more human-like a machine’s design, the stronger the user’s anthropomorphism, and the more favourable they become towards the machine. The authors also point out that studies which do not directly measure anthropomorphism show positive effects linked to usage and usefulness, competency and efficiency. Research in this area finds that users of AT develop emotional connections, altruistic behaviour, engagement, trust and interactivity towards it (Cornelius & Leidner, 2021). Anthropomorphic Technology, then, is considered a valuable factor in improving user experience.

A strong argument in favour of AT, and more specifically in favour of building humanoid robots, is presented by Robin L. Zebrowski (2020). She argues that humanoid robotics is a worthwhile research project for endeavours aimed at developing machine consciousness. On her view, if people really want to construct a genuine mind through AI research, the only way to achieve this is to build a machine that, apart from looking like us, is embedded in cultural, social and physical environments with humans from the very start (Zebrowski, 2020). That is, robots have to look like us and operate amongst us in order to develop into conscious machines. Even more, they have to have the humanoid form in order for us to recognise them as conscious (or ‘minded’ in, as she points out, ‘less baggage-laden language’ [Zebrowski, 2020:121]). Of course, the human form necessarily brings about anthropomorphism.

Zebrowski (2020) emphasises that anthropomorphisation is not a clearly defined concept, and that it happens automatically and with little or no cognitive control, much as perception does. Humans read humanness from faces, voices and humanlike movements, all of which shape how we anthropomorphise. Interestingly, while we cannot provide a clear definition of anthropomorphism, designers are able to use this human tendency to manipulate how people respond to robots, for instance through simple personifications: giving the machine a name, a description of its character, or just a backstory. On Zebrowski’s (2020) view, we cannot help anthropomorphising, which in a way makes regulating the development of humanoid robotics, and answering ethicists’ worries about human-like robots, futile. Interpreting her views, it is clear that anthropomorphism is an inevitable part of human–humanoid interaction. What is more, the tendency to anthropomorphise is not only a reaction we have to a human-like machine; it is also a necessary element for succeeding in the project of developing conscious AI.

One could object here that Zebrowski uncritically assumes that the development of conscious AI is a goal worth pursuing, and from that assumption concludes that robots must look like humans in order to achieve it. Further, her argument implies that anthropomorphism is a positive, useful feature of human beings. However, if we take a different perspective and allow that it is not clear whether pursuing conscious AI is the right direction, one could arrive at very different, perhaps negative, conclusions about Anthropomorphic Technologies. The following section focuses on arguments against the development of Anthropomorphic Technology.

Debate against AT and human-likeness

Research on human-like technology indicates numerous issues arising from interaction with AT. Cornelius and Leidner (2021) mention that the adverse outcomes of interacting with an anthropomorphic form generally include deception (being deceived) and a decrease in collaboration. Anthropomorphic function, on the other hand, is usually associated with dominance, intimidation, strain, and preoccupied, lonely behaviours. Both anthropomorphic form and function bring about feelings of eeriness and discomfort, as well as difficulty in classifying humans versus machines (Cornelius & Leidner, 2021).

The main ethical issue in the discussion of anthropomorphism in social robotics is the deception of humans. Van Wynsberghe (2021) characterises this deception by, among other things, a unidirectional bond between a human and a machine: the robot cannot bond with the human in the way the human bonds with the robot, even though the human believes it can. This issue of unidirectional social relationships strikes many robot ethicists as disturbing, especially when such a notion is used to steer the design of (social) robots. One of the potential problems raised is that companies might exploit these unidirectional bonds for commercial gain (van Wynsberghe, 2021).

Unidirectional bonds can result in manipulability and even psychological damage. Bartneck and colleagues (2021), authors of An Introduction to Ethics in Robotics and AI, claim that friendships between humans and autonomous machines can develop even though the relation works one way only, with the human providing all the emotion. They call this ‘misplaced feelings’ towards a machine. This kind of anthropomorphism, and this level of deception, can be dangerous or detrimental to oneself and one’s emotional well-being. Humanoid robots in particular have the power to convince their users that they can provide genuine, reciprocal affection and real social relations, which in fact they cannot.

While deception can have benefits (pet robots, for example, have been shown to increase well-being), Sharkey and Sharkey (2020) refer to the view that deception arising from an imaginary relationship with a robot is deeply wrong and violates the right and duty to see the world as it is. Hence, those who design and manufacture such deceptive robots act unethically (Sharkey & Sharkey, 2020). The researchers raise a number of perspectives on the potential risks and negative consequences of emotional deception in social robotics. Starting with the youngest AT users: children and babies exposed to prolonged ‘child care’ interaction with a robot that offers no real bonding opportunities are at risk of developing attachment disorders. While this example is quite extreme, the growing interest in internet-connected toys and robot companions for kids poses similar threats. Children absorbed in interaction with a machine miss out on the chance to learn the natural give and take of human relationships. This can cause trouble with peers and family, and might even lead the child to prefer the predictable robot, which always agrees, listens, and puts up with their selfish behaviour (Sharkey & Sharkey, 2020).

Following Sharkey and Sharkey (2020), emotional attachments to AT formed through anthropomorphism and deception can also have significantly negative consequences for other vulnerable groups, such as people with dementia or other cognitive limitations. They might end up neglecting their relationships with humans in order to centre their attention and emotions on the machine, while becoming concerned and anxious about their robotic ‘friend’. Interestingly, observing this, the family and friends of the deceived person may conclude that the person’s social and emotional needs are fulfilled by the robot, and so reduce the time they spend with them (Sharkey & Sharkey, 2020). This also brings to mind the differences in how humans anthropomorphise. As mentioned earlier, people have different tendencies to anthropomorphise: everyone responds to humanness in an object by anthropomorphising it to a different degree. For example, loneliness and extraversion make one anthropomorphise more. Responses to AT are also shaped by an individual’s level of rationality and their propensity to trust (Cornelius & Leidner, 2021). This analysis can indicate other groups of people potentially vulnerable to the negative consequences of interacting with AT.

Zebrowski (2020), by contrast, argues that in AI research aimed at developing conscious AI it is perfectly foreseeable that people will be deceived by humanoid robots. However, the goal of this research is not deception but consciousness research, so the deception is not intended. And even if deception were always present, it is not clear that this is a reason to deem humanoid robotics an ethically impermissible project (Zebrowski, 2020).

Conclusion

This essay focused on the problem of anthropomorphism in human–machine interaction, specifically the way humans respond to humanness in technology. First, the text explained the notion of anthropomorphism, then dived into the concept of Anthropomorphic Technology (AT), discussed through arguments both in favour of and against these technologies. It was argued that AT holds potential for the project of developing conscious AI, and that anthropomorphising technology positively influences the user experience of a robot. However, it was also argued that deception resulting from interaction with AT can have harmful effects on human psychology. All in all, the debate over the permissibility of AT and the ethical implications of human-likeness in technology lies between AI developers striving to achieve consciousness in a machine and ethicists pointing out the human weakness of seeing humanity where there is none.

References

Bartneck, C., Lütge, C., Wagner, A. and Welsh, S., 2021. An Introduction to Ethics in Robotics and AI. SpringerBriefs in Ethics. Springer.

Cornelius, S. and Leidner, D., 2021. Acceptance of Anthropomorphic Technology: A Literature Review. Proceedings of the Annual Hawaii International Conference on System Sciences.

Damiano, L. and Dumouchel, P., 2018. Anthropomorphism in Human–Robot Co-evolution. Frontiers in Psychology, 9.

Leong, B. and Selinger, E., 2019. Robot Eyes Wide Shut: Understanding Dishonest Anthropomorphism.

Mara, M. and Appel, M., 2015. Effects of lateral head tilt on user perceptions of humanoid and android robots. Computers in Human Behavior, 44, pp.326–334.

Sharkey, A. and Sharkey, N., 2020. We need to talk about deception in social robotics! Ethics and Information Technology, 23(3), pp.309–316.

van Wynsberghe, A., 2021. Social robots and the risks to reciprocity. AI & SOCIETY, 37(2), pp.479–485.

Zebrowski, R., 2020. Fear of a Bot Planet: Anthropomorphism, Humanoid Embodiment, and Machine Consciousness. Journal of Artificial Intelligence and Consciousness, 7(1), pp.119–132.
