Biased AI — is it really?

Alicja Halbryt
Apr 4, 2019

Artificial Intelligence is like a child. It learns from people, from what it is shown. And people? People are racist. And sexist. Even though some societies now try to fight discrimination against different groups, most people still tend to be prejudiced or biased towards certain types of people, to some extent and often unconsciously. That prejudice is passed between generations or absorbed from the surroundings where one grows up, studies and works. Humans create technology, and so AI learns human prejudice. Or rather, it only expresses what it has learned, and what it has learned is biased, because what it learns from is simply a set of data picked by humans. Therefore, AI is not really prejudiced, is it? For now, it can't really think for itself and express its own beliefs, anger or approval (yes, for now). It is just an algorithm learning the patterns that exist in data sets prepared by humans.
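None of this requires anything sinister inside the algorithm. Here is a deliberately oversimplified sketch in Python (the hiring numbers are made up, purely for illustration): a "model" that only learns the most common past outcome for each group will faithfully replay whatever skew those past decisions contained.

```python
from collections import Counter

# Fabricated historical hiring records: (gender, was_hired).
# The skew below is invented to illustrate the point.
history = [("m", True)] * 80 + [("m", False)] * 20 \
        + [("f", True)] * 30 + [("f", False)] * 70

# "Training": count past outcomes per group.
counts = {}
for gender, hired in history:
    counts.setdefault(gender, Counter())[hired] += 1

def predict(gender):
    # Predict the most common past outcome for this group.
    return counts[gender].most_common(1)[0][0]

print(predict("m"))  # True  -- just the pattern in the data, not a judgement
print(predict("f"))  # False -- the old bias, replayed as a "prediction"
```

Real machine-learning models are far more sophisticated than this majority count, but the underlying dependence on the data they are handed is the same.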

Examples of bias? An algorithm created to predict when and where crimes would take place repeatedly sent police officers to American neighbourhoods with a high proportion of people from racial minorities, regardless of the real crime rates in those areas. Google's ad system showed far more adverts for high-income jobs to men than to women. In Israel, a Palestinian worker posted a picture on Facebook of himself next to a bulldozer with a Hebrew caption meaning 'good morning', which Facebook translated as 'attack them' (Cossins, 2018). Even autonomous cars have trouble detecting people with darker skin (Pinkstone, 2019).

The problem is serious.

The Future Today Institute created a trends report for 2019 (Future Today Institute, 2019). The report includes a chapter about AI and the issues it poses. Among other things, it offers a list of questions for company owners who have created AI bots, to help determine whether a bot has learned to be biased. For example:

Does the corpus (the initial, base set of questions and answers) you’ve created reflect only one gender, race or ethnicity? If so, was that intentional?

What if your bot interacts with someone (or another bot) whose values run counter to yours and your organization’s?

Did you assign your bot a traditional gender, ethnic or racial identity? If so, does it reference any stereotypes?

The fact that there are organisations thinking about regulating these issues in one way or another is comforting. There is definitely a need for action against discrimination. That is, if we don't want to end up in a dramatically unequal society out of a dystopian science-fiction tale, where machines use their algorithms to judge who is more likely to survive, get a job, buy a train ticket… oh wait. China (Kuo, 2019).

Although it is put in a pretty bad light in this blog post, AI does bring a lot of good into industry and everyday life. Most of all, it makes that life easier. And as many researchers say, a human and an AI together are the best combination; one is much less without the other. Interestingly, though, we are prejudiced towards AI systems too. Will those systems learn our prejudice towards them? Will this result in AI hating people, or in AI with issues of its own? Or maybe in AI dividing humans from robots, the way we now divide people by race?

We will find out.

References

Cossins, D. (2018). Discriminating algorithms: 5 times AI showed prejudice. [online] New Scientist. Available at: https://www.newscientist.com/article/2166207-discriminating-algorithms-5-times-ai-showed-prejudice/ [Accessed 4 Apr. 2019].

Future Today Institute (2019). 2019 Tech Trends Report.

Kuo, L. (2019). China bans 23m from buying travel tickets as part of ‘social credit’ system. [online] The Guardian. Available at: https://www.theguardian.com/world/2019/mar/01/china-bans-23m-discredited-citizens-from-buying-travel-tickets-social-credit-system [Accessed 4 Apr. 2019].


Alicja Halbryt

Writing about Technology Ethics and Design. MSc student of Philosophy of Technology (NL), MA Service Design graduate (UK)