This blog post is a final assignment for the Global Design Futures module within the Service Design Master's degree at the London College of Communication. It discusses drivers, trends and signals in the very broad realm of AI, then focuses on algorithmic bias and presents a possible future scenario built around speculative design practice.
Human reality — the drivers of change
Ever wonder what our world would look like to someone, or something, coming from a different world, dimension or planet? Probably the first thing they would notice is the clear dominance of humans over all other species. Humans have used their highly developed minds to make themselves as safe and comfortable as possible, from hunting and protection tools to devices and machines that replace people and simplify our lives. This constant technological advancement is a result of human evolution, of an ambitious need to discover, and of the desire to control more and more functions and processes, be they natural, biological or mechanical. The guest from an outer world would probably also notice the cost nature has had to bear for humans to live the lives they want.
They would also see that our world is built on divisions — race, gender, religion, language, culture, the rich and the poor. The whole planet is divided into land and sea, into countries with clearly defined borders, where populations and the contrasts within societies keep growing. Moreover, on closer inspection, the visitor would learn that power in human society has always belonged to a handful of people who make decisions for the whole population. Because of that, some people oppose mistreatment and form groups and movements fighting for equality, ethics and human rights.
With humans having a tendency to constantly develop and push the boundaries of what's possible, here we are on the verge of a technological breakthrough: the Singularity (you can read more about it in my previous blog post: link), the moment when a hyper-advanced computer creates a better version of itself. Would it surprise the outer-world visitor that humans are working on a super-intelligent artificial system which would exceed their own abilities? Or would it seem just another natural step in human evolution? Is our world even ready for the boom of advanced AI? Given that in April 2019 only 4.4 billion people, or 58% of the global population, were active internet users (Statista, 2019), we are not. That means just over half of the world cannot live without the internet and connected devices, while the rest (with or without a choice) lives offline. One half of the world experiences digital and technological growth in areas such as education, banking and transportation; the other half doesn't. Shouldn't the world first 'equalise' technological advancement around the globe before implementing life-changing AI? Shouldn't we first deal with inequalities, human rights and ethical issues before we let an artificial system take over?
What is AI
Singularity University, an educational corporation founded by futurists Peter Diamandis and Ray Kurzweil, created a report on artificial intelligence, The Exponential Guide to Artificial Intelligence (Singularity University, 2019). According to the guide, AI is already here and has merged with our everyday lives. AI is an umbrella term for the branch of computer science concerned with creating computer systems that can think and learn. These abilities mimic the way humans understand the world and allow machines to do things only humans were once capable of, e.g. complex problem solving, visual interpretation and speech recognition. So, disappointingly, and contrary to science fiction novels and films, AI is not a perfect, human-like robot. Rather, it is an algorithm based on mathematics and logic, running in software. One thing that differentiates an AI system from a human mind, and gives it a powerful advantage, is that it can run on many hardware types, such as smartphones, tablets or self-driving cars. Moreover, in most cases it can run more or less endlessly. It doesn't get tired or hungry, has no personal needs and feels no discomfort. It stays focused for as long as we want it to (Singularity University, 2019). The advantage over a human mind is clear.
AI can be broken down into three main terms, which have become omnipresent buzzwords — Machine Learning (ML), Deep Learning (DL) and Big Data. These fields of computer science are closely related. Big Data is the food of AI: the massive data sets and records of user activity from which an AI system learns. ML is a data analysis method that allows computers to learn from experience without being explicitly programmed. Lastly, DL is a subfield of ML that uses artificial neural networks, computer simulations loosely imitating the operation of the human brain (Singularity University, 2019).
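To make the 'learning from data, not rules' idea concrete, here is a toy sketch of my own (it is not from the Singularity University guide, and real ML systems are far more sophisticated). Instead of hand-writing rules for classifying something, the program simply copies the label of the most similar example it has already seen — a minimal nearest-neighbour classifier:

```python
# A toy illustration of machine learning: behaviour is derived from
# labelled examples rather than hand-written rules.
# (A hypothetical 1-nearest-neighbour sketch, invented for this post.)

def predict(examples, point):
    """Classify `point` by copying the label of the closest known example."""
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(examples, key=lambda ex: distance(ex[0], point))
    return nearest[1]

# "Training data": made-up (height_cm, weight_kg) pairs with labels.
examples = [
    ((150, 45), "small"),
    ((160, 55), "small"),
    ((180, 85), "large"),
    ((190, 95), "large"),
]

print(predict(examples, (156, 50)))  # → "small"
print(predict(examples, (184, 88)))  # → "large"
```

Nobody told the program what 'small' or 'large' means; it inferred the answer from the examples. Feed it different, or skewed, examples and it will behave differently — which is exactly why the data sets discussed below matter so much.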
Trends and future scenarios
The Singularity University guide emphasises that we are unable to predict what the future of AI will be. All we can be certain of is that it will bring huge changes to our lives, and they are not all necessarily threatening to humanity. We are still at the stage where human intent and decisions are the key factors determining the 'character' of AI (Singularity University, 2019). Therefore, many researchers encourage people not to look for dystopian scenarios only. AI already contributes to solving humanity's problems, including making a very positive impact on healthcare (e.g. quickly and accurately examining samples, speeding up drug discovery). AI also offers efficiency and rational analysis of data. With AI taking care of the technical side of the world, we will be able to focus on things important to and typical of humans, such as collaborating and building meaningful relationships. It may also foster the development of human qualities such as empathy and kindness (Singularity University, 2019).
However, if we don't know what the future of AI holds, and we persist in its accelerated development, it is likely to get out of control. In his fascinating book Life 3.0: Being Human in the Age of Artificial Intelligence, Max Tegmark outlines twelve possible scenarios for artificial intelligence (Tegmark, 2018). Not all of them are dark, and most of them describe a future that seems a hundred years away. One scenario assumes that AI will be in charge and enforce strict rules on society, and that the majority of people will see this as a good thing. In another of Tegmark's visions, AI replaces humans, but not forcefully: we grow to see AI as our worthy descendants. The scenario that Extinction Rebellion would undoubtedly consider the most probable is the one in which people never reach the point of advanced AI development, because humanity disappears from the planet before the breakthrough due to the climate crisis or nuclear war. Another interesting vision presents a world where, to prevent super-intelligent systems from ever existing, society reverts to a pre-tech culture (Future of Life Institute, 2018). All these scenarios sound insane and yet seem equally possible, leaving me upset that I won't live long enough to see which one comes true! But who knows.
The bias problem
Tegmark's scenarios don't seem to describe a world in which race or gender inequality is an issue for artificial intelligence. Singularity University does not mention the problem in its report either. In fact, the latter claims that AI will support organisations in treating members of society more justly than they are treated now (Singularity University, 2019). It is a beautiful vision, but right now it sounds almost impossible.
Bias in AI is a huge problem. In my previous blog post I discussed examples of seriously discriminatory AI systems (read here: link). To explain what biased AI is, let's use a very simple example. An autonomous vehicle algorithm learns from data sets in which 80% of the faces belong to white people and 20% to black people. As a result, the AI system becomes much better at recognising white faces than black faces. It also means that, when put on the road, the vehicle is considerably more likely to fail to recognise, and to run into, a black person than a white one. This has actually already been shown to happen (Miley, 2019). There are many more examples like this, and even more we will never know about. It seems that the unequal data sets used to teach AI are the issue. Unfortunately, the problem goes much deeper.
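The mechanism behind the 80/20 example can be shown with a deliberately oversimplified sketch of my own (all names and numbers here are invented, and real face recognition does not work on one number): faces are reduced to a single 'appearance' value, and the detector flags anything close enough to a face it saw during training. Because one group's examples cover its range densely and the other group's examples are sparse, new faces from the underrepresented group fall into gaps and go undetected:

```python
# A toy illustration of how data imbalance produces biased detection.
# Faces are reduced to one made-up "appearance" number; the detector
# recognises anything within TOLERANCE of a training example.

TOLERANCE = 1.0  # how close a new face must be to a known example

def trained_detector(training_faces):
    """Return a detector that only recognises faces near its training data."""
    def detects(face):
        return any(abs(face - known) <= TOLERANCE for known in training_faces)
    return detects

# Imbalanced training set: dense coverage of group A, sparse of group B.
group_a_faces = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]  # 8 examples
group_b_faces = [20.0, 24.0]                               # only 2 examples
detect = trained_detector(group_a_faces + group_b_faces)

# A new group A face almost always lands near some training example...
print(detect(3.5))   # → True
# ...while a new group B face can fall in a coverage gap and be missed.
print(detect(22.0))  # → False
```

The detector is not 'prejudiced' in any human sense; it simply never learned what the full range of group B faces looks like, which is exactly the failure mode the autonomous vehicle example describes.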
A brilliant report by AI Now Institute researchers, Discriminating Systems: Gender, Race and Power in AI (West et al., 2019), sets out to find the causes of the bias. The main problem the authors identify is a 'diversity crisis in the AI industry'. They gathered statistics on the make-up of leading global tech companies. For example, women make up 18% of Facebook's AI research staff and only 10% of Google's. Also at Google, just 2.5% of employees are black; at Facebook and Microsoft the figure is 4%. In academia, 80% of AI professors are men, and at leading AI conferences only 18% of authors are female (West et al., 2019). The data is alarming, and, as the paper notes, reality might be even worse than the official company reports suggest. West, Whittaker and Crawford (2019) also underline that the problem is not only gender inequality. It is about gender and race, but above all about the power that is unequally distributed along those two lines. All this affects what products are created, for whom they are made, and who benefits from their production.
It turns out that biased AI may be rooted in the fact that very few women enter the tech industry, especially computer science. This, in turn, is driven by girls having less access to computers both at school and at home, by IT being viewed as a masculine, 'geek' discipline in which women face discrimination, and by a lack of confidence among women interested in computer science, which leads many to drop the subject. The same probably applies to people of colour, yet there are not enough studies and data to prove it (West et al., 2019).
Lastly, the report emphasises that, beyond companies expressing a desire to improve diversity, there has been no meaningful action to remedy inequality in the tech industry (West et al., 2019).
Signals — the society intervenes
There are certain signals and emerging trends indicating a rising awareness of the biased-tech problem. The AI Now Institute report described above is an important indicator that something bad is going on and needs concrete action. There are signals in society, such as conferences and meet-ups organised around the threats and good uses of AI (e.g. AI for Good, Impactful AI, and Ethics, Bias & Algorithmic Fairness in the world of Artificial Intelligence, among many others). There are activist groups formed to draw attention to ethics and inequality in tech, such as AI Ethics Lab, the Algorithmic Justice League, the Future of Humanity Institute and Women in Tech, to mention just a few. All these groups and movements send a message to the rest of society: our technological development can go wrong. Companies receive the message and try to improve the diversity of their workforce.
Speculative design — a future scenario
Sometimes, to imagine a possible future and plan a strategy for a service or product, it is wise first to provoke some thoughts and find out the desired direction. Speculative design is a perfect tool for determining what future we do and don't want, based on future forecasting, trends research and signal observation. It results in the creation of an artefact, usually an object that very concretely represents the future reality and society under discussion (Future Cities Catapult, 2017). The discussion and reactions of experts and the public then reveal fears or excitement about the presented future (usually fears). To spark some thoughts, I have created the following scenario and artefact for a future in which our society takes serious action to prevent bias in AI.
Let's try to imagine a time in the near future when British society starts to notice and experience biased AI more than it does now, and pushes the government to develop new regulations to mitigate the problem. Riots and protests are successful, and the government creates a set of regulations forcing technology companies to close the statistical chasm between employees: between men and women, and between white people and people of colour. Companies will therefore employ more diverse teams, which will in turn have a positive impact on the technology they develop, including AI. Organisations are free to choose how they resolve the issue, with the goal of reaching roughly equal numbers of employees across gender and race by 2050. Amazon, which in 2017 had only one woman among its 18 most powerful executives (74% of whom were white men) (Del Rey, 2017), decides to solve the issue at its core: in childhood.
Amazon creates its own program for parents of girls and children of colour — Techy Bear. The toy aims to develop a child's interest in STEM subjects, to support children other than white boys in achieving success in the technology industry and, ideally, to bring up a new, diverse generation of Amazon employees. The toy has an embedded AI system which learns to understand the child in order to communicate with them in the right, efficient way. The teddy bear encourages the child to use a computer, sings technology-related songs and tells stories about tech role models, especially Amazon contributors. It aims to create a strong bond with the child and to teach them persistence in following their dreams. It is a gender-neutral toy and is available in different colours and with personalised look adjustments.
To sign up to the program, it is enough to purchase the Techy Bear on Amazon; it comes with free Amazon Prime for life. The teddy bear has an embedded camera and voice recorder, which help keep a record of whether the child actually uses the toy. If it is not used, the parents lose the right to the free Amazon Prime subscription and all the benefits that come with it, such as major discounts on technology and family products, a free trip to Silicon Valley, a reference letter for the child when applying to a tech school or university, and free algorithm-writing courses for the whole family. In case the toy stops being used because the child no longer likes it, Amazon has prepared a range of toys for older kids as well, including a small robot, a doll and figures from films. After the first year of school, the child takes a predispositions test, which shows whether they express enough interest in, and are likely to succeed at, STEM subjects. If the results are not satisfactory, the program is suspended and the flow of benefits is cancelled. If, throughout their years of education, the child shows promising skills and tech abilities, they are offered a job in one of Amazon's offices.
In speculative design and design fiction practices, the scenario and the artefact are created to provoke and prompt a specific discussion about possible futures. In this case, the website artefact aims to give the whole scenario a real feel. It helps the viewer imagine the possible future reality and sparks a debate. It is about presenting neither a positive nor a negative vision of the future. It is an interpretation of current trends and signals which asks whether we want that kind of future, and what we can do to either achieve it or avoid it. The main topics for discussion in this scenario would be data privacy, surveillance and manipulation, the child's freedom to choose their own career and happiness, the future of consumerism, and the lengths to which people (parents) would go to get discounts on Amazon. These would be followed by dozens of 'what if' questions, such as:
What if the child finds out that they are part of this program and that their parents gave up their freedom of choice for Amazon benefits?
Is this product available to purchase in other countries than the UK? If not, what if it reaches a child in another country?
What if the Techy Bear is hacked? What if it breaks?
What if children from Techy Bear program are mistreated by those not signed up to the program, at schools, universities and at work? Why would that happen, or not happen?
What if white boys start feeling excluded?
What if this program turns out to be successful and the tech industry is perfectly diverse one day? How would it affect AI technology exactly?
Artificial intelligence is a major technological achievement that is already part of our daily routine. AI bias is one of the most unsettling and ethically difficult problems in tech. However, the signals coming from society, such as movements and institutes guarding human rights, give hope that people will not allow the technology to take over, at least not in a way that would cost us all control. This blog post aimed to raise awareness of this issue and to trace its origins, which reach back to early school age and the way girls and children of colour are brought up. It also used speculative design to present a possible future scenario for AI and to provoke thought through the website artefact. The scenario was followed by 'what if' questions which have no definitive answers, and which trigger even more debate.
I believe there are many people who want to make the world a better place, even if it means overcoming bias, which is (to some extent) natural for every human. However, I also believe company cultures need to improve, set a higher bar for diversity and escape the traps of bias. Companies already claim to be willing; let's hope they manage to transform before it's too late.
Del Rey, J. (2017). It’s 2017 and Amazon only has one woman among its 18 most powerful executives. [online] Vox. Available at: https://www.vox.com/2017/10/21/16512448/amazon-gender-diversity-leadership-executives-jeff-bezos [Accessed 14 May 2019].
Future Cities Catapult. (2017). What is ‘Speculative Design’ and how can it impact upon the future of cities? — Future Cities Catapult. [online] Available at: https://futurecities.catapult.org.uk/2017/05/26/speculative-design-can-impact-upon-future-cities/ [Accessed 15 May 2019].
Future of Life Institute. (2018). AI Aftermath Scenarios — Future of Life Institute. [online] Available at: https://futureoflife.org/ai-aftermath-scenarios/ [Accessed 11 May 2019].
Miley, J. (2019). Autonomous Cars Can’t Recognise Pedestrians with Darker Skin Tones. [online] Interestingengineering.com. Available at: https://interestingengineering.com/autonomous-cars-cant-recognise-pedestrians-with-darker-skin-tones [Accessed 12 May 2019].
Singularity University. (2019). Artificial Intelligence — The Exponential Guide to Artificial Intelligence. [online] Available at: https://su.org/resources/exponential-guides/the-exponential-guide-to-artificial-intelligence/ [Accessed 10 May 2019].
Statista. (2019). Global digital population 2019 | Statistic. [online] Available at: https://www.statista.com/statistics/617136/digital-population-worldwide/ [Accessed 8 May 2019].
Tegmark, M. (2018). Life 3.0: Being Human in the Age of Artificial Intelligence. London: Penguin Books.
West, S.M., Whittaker, M. and Crawford, K. (2019). Discriminating Systems: Gender, Race and Power in AI. [online] AI Now Institute. Available at: https://ainowinstitute.org/discriminatingsystems.html.