‘Artificial Intelligence is like a human brain’: AI metaphors, their origins and impact.

Alicja Halbryt
13 min read · Aug 13, 2023
Lexica AI generated image

Introduction

People have used metaphors to link machines and minds for centuries. We have conceptualised the brain by explaining it in ‘machine terms’, comparing it to telegraphs, clocks, hydraulic devices, and other key technologies in human history. Nowadays, we tend to reverse this metaphor and explain technologies in human terms, for example through phrases such as machine ‘vision’ or ‘recognition’. This essay will focus on the metaphors we use to define artificial intelligence, and will analyse the comparison we make between AI and the human brain. The aim of the essay is to approach the AI metaphors we use critically and reflect on the impact they have on society. First, the notion of metaphor will be explained with regard to how and why we use metaphors, and what impact this has on everyday life. After this, the essay will move on to analysing the metaphors which compare humans and machines. Major attention will be given to the metaphor ‘AI is a brain’. Lastly, the essay will present the problematic nature of referring to the technology of AI metaphorically, and the issues that arise from comparing AI to the brain.

What are metaphors and how do they influence our perceptions of the world?

It is curious to realise the extent to which metaphors are present in our everyday life. Most people consider metaphors to be “a device of the poetic imagination” (Lakoff and Johnson, 1980: 3), a tool in our language we deliberately use in order to make our statements more ‘colourful’ and interesting. For this reason, it is generally thought that we can get along quite well without metaphors. But in fact, on closer inspection, metaphors are prevalent in our lives. We think with metaphors; we compare and relate things to each other all the time.

It can be argued that people use metaphors as elements of speech which help translate observations and experiences that are difficult to understand, because they are new, emergent, or complex, into something more understandable and familiar (Ganesh, 2022). Following Maya Ganesh’s thoughts, it can be said that metaphors slip into our everyday speech so easily, and often so unnoticeably, that people tend to forget that these are indeed just metaphors. It is also interesting to notice that, apart from being potentially misleading, metaphors can turn into self-fulfilling prophecies. When a term (a metaphor) is coined to describe a new, emergent and unfamiliar phenomenon, it ‘sticks’ and stays even after the phenomenon has become more familiar with time.

Argument and war
Lakoff and Johnson (1980) give numerous examples of how a concept can be metaphorical, and how such a concept structures everyday activities. One of these metaphors stuck in our everyday language, communicated by a vast variety of expressions, is ‘argument is war’. It is reflected in phrases such as “Your claims are indefensible”, “I demolished his argument” or “I’ve never won an argument with him” (Lakoff and Johnson, 1980: 4). As one can notice, we do not merely talk about arguments as if they were war. We actually see one another as being on a (verbal) battlefield: we see the other person as an opponent, we defend our position, we plan argument strategies, and so on. This means that the things we do while arguing are structured by the concept of war. ‘Argument is war’ is a metaphor we live by in Western culture. It structures the actions we perform while arguing (Lakoff and Johnson, 1980). It can be understood from Lakoff and Johnson (1980) that culture plays a very significant role in the kinds of metaphors we use and in how they influence our perception of the world. If one imagines a culture where argument is not like war but, for instance, like dance, then people would view arguments differently, talk about them differently, and approach and carry them out differently. This shows that while metaphors enable certain ways of thinking, they restrict others at the same time.

Human-computer metaphors

It comes as no surprise that language is also full of metaphors connected to technology. Lakoff and Johnson (1980) point out an example of an ontological metaphor: ‘the mind is a machine’. Sentences such as “I’m a little rusty today”, “My mind just isn’t operating today” or “He broke down” (Lakoff and Johnson, 1980: 27–28) are very frequently used in Western cultures to express one’s state of mind. This comparison of mind and machine suggests a conception of the mind as an entity having on and off states, a productive capacity, an internal mechanism, a level of efficiency, and an operating condition. It can be concluded from Lakoff and Johnson (1980) that what makes this potentially problematic is the fact that the metaphor feels so natural and pervasive. This makes us believe, or unconsciously assume, that it is a direct description of mental phenomena. As a result, most people (at least in Western cultures) actually do see the mind as a kind of machine; the metaphor is ‘stuck’. This human tendency to metaphorise can become questionable when applied to emerging global fields and technologies, such as artificial intelligence.

Before turning the focus to AI, though, it is first worth discussing how metaphors shape our thinking about the computer world in general. Since computing is still a relatively young field, metaphors have a big influence on it and on how engineers structure code or design algorithms. Metaphors can even affect the way problems are solved (Videla, 2017). This is not surprising if we consider that even the word ‘computer’ we currently use is itself a metaphor. Until around the 1950s, ‘computer’ was a word for a person who did calculations for engineers. The machines which were developed to do the same work were called ‘electronic computers’. Eventually, the ‘electronic’ component was dropped, giving us the word ‘computer’ in the sense we understand it nowadays (Videla, 2017).

Moreover, the goal of computer programming is to solve a problem in a manner understandable to other programmers. In order for programs and algorithms to be comprehensible, the metaphors used to explain them have to convey meaning using concepts we already know and understand. For instance, it works well to call a program that makes a computer distribute tasks to other computers, respecting the order of their arrival, a ‘queue server’ (Videla, 2017). A queue is a very familiar concept from everyday life, seen at airports, at supermarkets, etc. People are familiar with how queues work; they know the ‘first come, first served’ rule. Therefore, it can be argued that for a computer program expected to treat tasks in the order of their arrival, ‘queue’ is a fitting metaphor. This makes it simple for a programmer to explain their solution to a problem to other programmers. In other words, by selecting the right metaphor, the program can reach a level of abstraction which requires the least effort for someone unfamiliar with the problem to comprehend the solution (Videla, 2017).
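To make the metaphor concrete, a minimal sketch of such a first-come-first-served task queue might look like the following (an illustrative example only; the class and method names are hypothetical, not taken from Videla’s text):

```python
from collections import deque

# A minimal 'queue server' sketch: tasks are processed strictly
# in the order of their arrival ('first come, first served').
class QueueServer:
    def __init__(self):
        self.tasks = deque()  # FIFO queue of pending tasks

    def submit(self, task):
        # New tasks join the back of the queue.
        self.tasks.append(task)

    def dispatch(self):
        # The task that arrived first is handed out first.
        return self.tasks.popleft() if self.tasks else None

server = QueueServer()
server.submit("render report")
server.submit("send email")
print(server.dispatch())  # prints 'render report': the earliest task comes out first
```

Precisely because the everyday queue metaphor carries the ordering rule with it, a reader can predict this program’s behaviour before reading a single line of its body.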

There is also another way in which metaphors are applied in the field of computing. For instance, computer science researchers have drawn an analogy between the Drosophila fruit fly and chess in artificial intelligence research (Ensmenger, 2011). This is of larger significance than simply treating chess, as the fruit fly once was treated, as a convenient experimental tool for gaining knowledge in specific areas. The decision to use Drosophila in genetics research had an enormous influence on the field of biology. Comparing that kind of impactful scientific development to the use of chess as a tool for exploring human and computer intelligence placed expectations and unforeseen consequences on the development of AI (Ensmenger, 2011). Perhaps it reinforced the pressure to understand AI and make it as similar to human intelligence as possible. Looking at it from a different perspective, the ability to play chess has long been thought to demonstrate general intelligence. When AI was taught how to play chess, and eventually exceeded our abilities, the essence of human intelligence was suddenly ‘threatened’ by a machine. However, the average person is not able to play chess at a high level, nor is everyone able even to learn to play it well at all. Therefore, it can be argued, playing chess should not be used to examine AI intelligence because it is not, after all, a sign of distinctly human intelligence. What is more, a chess-playing AI is ‘fed’ an algorithm which the human is not given. One could say that this is like giving the AI a ready-made answer sheet and telling it how to use it. It is questionable whether this can be called ‘intelligence’, that is, whether the metaphor of ‘intelligence’ should be applied in this case.

‘AI is a brain’ metaphor
Ascribing human features to artificial intelligence systems is a widespread metaphorical concept in contemporary discussions of AI. It is enough to take terms such as artificial ‘intelligence’, machine ‘vision’, machine ‘learning’ or artificial ‘neural networks’ to see the extent to which human characteristics are attributed to AI. What is more, this metaphor is also depicted in images: after typing ‘AI’ into a search engine, most images one sees show a machine with a human brain, for some reason full of loud blue colours (Wallenborn, 2022).

Building on the above analysis of Lakoff and Johnson (1980), one could argue that the reason for metaphorically relating AI to a human brain is to give the technology a feeling of familiarity, and hence increase social acceptance. Arguably, this metaphor arose naturally from the human tendency to anthropomorphise, that is, the tendency to assign human features to non-human things. It can be said that what lies underneath this likening of humans and AI is the belief, and hope, that at some point it will be possible for human minds to be technically simulated (Wallenborn, 2022). Nevertheless, inherently human qualities such as learning and thinking make little or no sense when applied to AI. They are, however, usually dismissed as “innocent metaphors” (Dippel, 2019: 34). This can be seen as problematic given that words do have power: they not only describe the reality surrounding us, but also shape the way we act and think. It is especially significant when we assign ‘intelligence’ to a machine, which encourages us to perceive it as having human qualities. And yet it is no longer a question whether or not machines can learn; the common assumption is that they can (Dippel, 2019).

An interesting perspective can be taken on the concepts of machine learning and neural networks in AI. Christoph von der Malsburg, considered a pioneer of artificial intelligence and originally trained as a particle physicist, focused mainly on visual cognition and memory in his neurobiological research (Dippel, 2019). One can wonder whether the brain-related metaphors perhaps stem from von der Malsburg’s neurobiological interests. Von der Malsburg, however, claimed that human brains are far less efficient than machines when it comes to memory capacity. To Dippel (2019), and possibly to many other anthropologists, this is a spine-chilling approach, since human brains are not fed with raw data sets. Instead, our brains are fed with years of experiences; they are not containers for data storage (Dippel, 2019). What can be drawn from this, therefore, is that the human brain and AI are neither comparable nor compatible.

This claim can be supported by an argument regarding human emotions versus rationality. A common view within the hierarchy of human values ties intelligence to superiority over emotion. Put differently, “being emotional is considered inferior to being rational” (Baria and Cross, 2021: 6). This means there is a distinction, or a spectrum, from less rational to more rational, and it has been applied to comparing the intelligence of animals and humans, women and men, one human race and another. If AI is ‘intelligent’, then it is also thought to be unemotional and more rational. With rationality apparently a valued trait, AI paradoxically succeeds as the entity of rational thought, and in a way as a more trustworthy form of intelligence (Baria and Cross, 2021).

Why, if at all, is it problematic to use AI-related metaphors?

It is widely questioned whether the metaphors we use with regard to AI suit the technology and society. It is even debated whether new metaphors should be created which would encompass ambitious visions for AI and describe the technology better. As emphasised in an article published by the European Parliamentary Research Service (Boucher, 2021) on scientific foresight and the possibility of choosing new metaphors for AI, metaphors are an integral element of language and communication. This means that specific selections of metaphors deserve reflection and care in the way they are used. Some researchers steer the discussion towards the issue of power distribution and ask how using language which anthropomorphises machine intelligence shifts the division of power, that is, who gains power and who loses it in the AI discourse (Baria and Cross, 2021).

It is interesting to note that the way AI is generally understood defies common sense. The ‘artificial’ in AI should suggest something other than a powerful technology. Artificial flavours, for instance, are usually thought of as fake flavours, inferior substitutes for authentic ones, a chemical ‘cheat’ to fool the end user and their taste buds. For some reason ‘artificial intelligence’ reverses this understanding of artificiality and makes one believe in AI’s superiority and rational thought (Baria and Cross, 2021).

Moreover, it is useful to discuss the problematic nature of the concept of ‘intelligence’. The term brings enduring difficulties when it comes to defining the technology. In other words, without a clear definition of intelligence, AI risks remaining forever ‘under-defined’. Human intelligence is already a subjective and contested notion, which makes the concept of artificial intelligence equally prone to constant debate, reinterpretation and contestation (Boucher, 2021). What also emerges as problematic is the fact that instead of naming AI after what it does (application) or how it does it (technique), we name it after its apparent performance (intelligence) (Boucher, 2021). Because of this, the term AI can be applied to nearly all technologies, even regardless of whether they exist or not. Using the term to refer to such a vast number of applications can intensify disagreements over what AI is and make discourse less efficient.

Comparing AI to the human brain leaves neuroscientists concerned that these metaphors reduce the brain to the condition of a computer. This makes it challenging to envision other conceptualisations of how a technology does something, and of what this ‘something’ exactly is. Similarly, this metaphorical thinking grants our software the status of a human mind. This issue is regarded by some as risky for the development of AI (Boucher, 2021), but arguably also for society. First of all, if we think of AI as a brain, we might end up overestimating its capabilities and mistakenly trust it to perform tasks it has no competence to perform. Further, attributing human traits to AI (such as the ability to operate like a brain) might lead some to assign fault to the machine when things go wrong, instead of searching for the responsible people in charge of the given AI system. A third risk, and perhaps the most alarming one, is that comparing AI to the human brain builds a certain kind of relationship between people and machines. If we believe there is an equivalence between how and what people and machines do, we create a competition over who or what can perform a task better, instead of a cooperation in which each complements the other’s capabilities (Boucher, 2021). This sparks a major debate over whether it is desirable to develop a ‘human-like’ AI at all.

The acceptability of the ‘artificial intelligence’ label has been widely discussed in a fascinating paper by Baria and Cross (2021). They use the term “computational metaphor” to refer to the existing comparison of the brain to a computer, and vice versa. The main conclusion which can be drawn from this paper is that it is acceptable for experts to use the metaphor ‘artificial intelligence’ because they understand what they are actually referring to. It is more than problematic, however, when this term is presented to and used by laypeople. In other words, what a computer scientist means by ‘intelligence’ in AI is not the same as what an ordinary person means by it (Baria and Cross, 2021). The authors strongly emphasise the problematic nature of this divergence between non-experts and experts, and draw attention to the fact that framing even simple concepts with different terms is likely to lead to drastically different perceptions. What comes to mind here is again Lakoff and Johnson’s (1980) metaphor ‘argument is war’, and specifically their example of switching the metaphor to ‘argument is dance’. This switch would completely change our understanding of and approach to arguments. Baria and Cross (2021) mention other examples showing how different metaphors change how we perceive matters, such as the climate crisis being a ‘war’ instead of a ‘race’, or genetically altered food being ‘engineered’ rather than ‘modified’. Using different phrases can have a significant impact on social issues and can shift people’s attitudes one way or another (Baria and Cross, 2021).

A few new metaphors have been suggested to define AI better. One of them is ‘AI is an atlas’, which captures AI’s purpose of finding and gathering insights through collective knowledge, and also highlights its tendency to become the dominant way of seeing things (Baria and Cross, 2021). Another suggested metaphor is ‘AI is a viewing tool’, which one might understand as conceptualising AI as a telescope or microscope whose purpose is to find patterns in data that are generally difficult to see. Interestingly, this metaphor also expresses AI’s imperfections, just as a lens can diffract or distort light (Baria and Cross, 2021).

Conclusions

This essay took up the difficult and abstract topic of using technological metaphors in everyday life, and discussed the impact AI-related metaphors have on society and science. The ‘AI is a brain’ metaphor was examined in detail, as this is probably the most prevalent, and potentially most harmful, comparison in today’s AI discourse. The essay tried to break down the metaphor of artificial intelligence and point out certain inaccuracies and doubts regarding the specific use of words. All in all, the aim of this essay was to suggest that the metaphors we use to describe AI and related tools can be misleading, and might in turn have an unintended impact on society and on the development of the technology.

References

Baria, A. and Cross, K. (2021) “The brain is a computer is a brain: neuroscience’s internal debate and the social significance of the Computational Metaphor.”

Boucher, P. (2021) “What if we chose new metaphors for artificial intelligence?,” AT A GLANCE. Scientific Foresight: What if?. EPRS | European Parliamentary Research Service

Dippel, A. (2019) “Metaphors We Live By. Three Commentaries on Artificial Intelligence and the Human Condition,” The Democratization of Artificial Intelligence, pp. 33–42. Available at: https://doi.org/10.1515/9783839447192-002.

Ensmenger, N. (2011) “Is chess the drosophila of artificial intelligence? A social history of an algorithm,” Social Studies of Science, 42(1), pp. 5–30. Available at: https://doi.org/10.1177/0306312711424596.

Ganesh, M. (2022) Between Metaphor and Meaning: AI and Being Human, ACM Interactions. Available at: https://interactions.acm.org/archive/view/september-october-2022/between-metaphor-and-meaning?doi=10.1145%2F3551669 (Accessed: November 6, 2022).

Lakoff, G. and Johnson, M. (1980) Metaphors we live by. Chicago and London: University of Chicago press.

Videla, A. (2017) “Metaphors we compute by,” Communications of the ACM, 60(10), pp. 42–45. Available at: https://doi.org/10.1145/3106625.

Wallenborn, J. (2022) AI as a flying blue brain? How metaphors influence our visions about AI, HIIG. Available at: https://www.hiig.de/en/ai-metaphors/#:~:text=By%20depicting%20AI%20as%20a,%2C%20Evers%20%26%20Farisco%202020 (Accessed: November 7, 2022).


Alicja Halbryt

Writing about Technology Ethics and Design. MSc student of Philosophy of Technology (NL), MA Service Design graduate (UK)