Fearing AI — should we or should we not?
Some futurists and experts predict that humanity will reach the point of Singularity as early as 2045 (Kurzweil, 2005). In a nutshell, 'singularity' is a term used in physics to describe the centre of a black hole, where the laws of physics no longer apply. Futures studies borrowed the word to name the moment when technology becomes so advanced that people create an artificial superintelligence more intelligent than we are, one able to design ever better 'intelligences' on its own (National Geographic, 2017). What happens next is almost impossible to predict: immediate destruction of humankind, or centuries of a thriving civilization…and then destruction? We don't know, because we have never encountered a being more intelligent than ourselves.
So, how do we get there? And do we even want to? Apart from killer robots turning against humans, what are the threats, exactly?
We are clearly on the path towards the Singularity. Artificial intelligence is improving every day; it has even started writing its own movie scripts (DUST, 2019)! People already spend long hours with smart machines, scrolling personalized content on Facebook or taking perfect selfies with cute filters on Snapchat. We invite intelligent devices like Amazon Alexa into our homes and chat with them, and even hear their creepy giggle once in a while (Feldman, 2018). Sophia, the humanoid robot, was granted citizenship in Saudi Arabia (Weisberger, 2017), and Google's AI assistant can make a call and book an appointment for you at the hairdresser's (Mashable Deals, 2018).
The more human a machine appears, the more doubts and controversies it brings. As machines and their intelligence grow more sophisticated, the line between human and robot will become even blurrier than it is now. And that is when the vast question of robot rights comes into play.
There is no doubt that some part of society will fight for intelligent machines' rights. Very recently, the Martian rover Opportunity 'died' when its battery ran low after a nearly 15-year mission. There wouldn't be anything too sad about it if it weren't for the last message it sent: 'My battery is low and it's getting dark' (Gifford, 2019). Emotional tweets and articles followed, not to mention that Opportunity, or Oppy for short, looked strikingly similar to Pixar's WALL-E. The emotional attachment of its creators is understandable, especially since the rover's mission lasted far longer than originally planned and brought numerous important discoveries. But that last message saddened everyone else as well.
Now, imagine Oppy had been an intelligent robot able to hold a casual conversation with humans. Imagine there were more such robots here on Earth. And imagine any kind of injustice or harm directed at one of those machines, the anger of its creator and the sadness of the robot. Add robot citizenship, and protests demanding machine rights are a given. This would spark immense philosophical and ethical debates over new regulations. For example, you might no longer be able to own a machine, since it would be an independent being, perhaps even considered a person. Unplugging or switching it off might be equated with murder. Even referring to an AI as 'it' could be considered simply rude. However, denying robots rights is in people's economic interest: without those rights, people can exploit robots for labour and reap unlimited economic benefits (Kurzgesagt — In a Nutshell, 2017).
There is also the matter of children and education 20 years from now. Children are taught empathy towards others from their earliest days. What will they be taught about AI robots? Will they be told to respect every machine because machines have feelings too? Or to respect them because they work for and serve people? Will children be encouraged to make friends with robots? If so, those children, brought up alongside advanced AI, will likely demand machine rights later out of empathy and sympathy.
It all seems to be neither good nor bad, sprinkled with a bit of madness when it comes to non-living things having rights (at least not living for real, the way humans are). Why wouldn't we want AI robots to talk to and to help us in everyday life? Why do most people hate the idea of living next to intelligent machines? Probably the main reason is that we are scared of them, especially when they look like humans or perfectly imitate the way we talk. What should be scarier, though, is that AI actually learns humans' worst impulses, like racism and sexism. Existing AI systems have associated words such as 'nurse' or 'receptionist' with women and mistaken an image of several black people for gorillas. This happens because we feed computers prejudiced or biased information from the past and the present, information produced by unequal societies, and it can lead to an even more unequal society in the future (Buranyi, 2017).
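To make the mechanism concrete, here is a minimal, purely hypothetical sketch (a toy example in Python, not drawn from any of the systems cited above): a 'model' that only learns from skewed historical hiring records ends up recommending exactly the skew it was fed.

```python
# Toy illustration, assuming made-up "historical" hiring data for two groups.
# The point: the algorithm is neutral, but the data it learns from is not.
from collections import defaultdict

# (applicant_group, was_hired) records from a biased past
history = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 20 + [("B", False)] * 80

# "Train" by estimating each group's historical hire rate
counts = defaultdict(lambda: [0, 0])  # group -> [times hired, total applicants]
for group, hired in history:
    counts[group][0] += hired
    counts[group][1] += 1

def predict_hire(group):
    hired, total = counts[group]
    return hired / total >= 0.5  # recommend whoever the past favoured

print(predict_hire("A"))  # True:  group A keeps being favoured
print(predict_hire("B"))  # False: group B's historical disadvantage is repeated
```

Real systems are far more complex, but the pattern is the same: train on an unequal past and you reproduce, or amplify, that inequality in the future.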
There is a bright side as well. And it’s impressive.
The more optimistic futurists see AI and the Singularity as a great opportunity for humankind. Right now, technology as we know it, smartphones and laptops, acts as an extension of our minds; even a piece of paper with our notes on it is already part of our thinking apparatus. It is believed this will continue: people will offload more and more of their cognition onto non-biological intelligence and co-exist with it, instead of being defeated by the artificial mind (National Geographic, 2017).
Probably the most valuable outcomes of technological advancement will lie in biotechnology and gene sequencing, fields that are already progressing at an extreme pace. The future ability to program biology could bring an end to diseases such as cancer, allow people to download biological software into their bodies, and save or extend their lives (although this also means new opportunities for hackers). It will change the meaning and experience of being human (Silva, 2018). Maybe that will make it even easier for robots to gain rights, since we will be programming our bodies the way we program AI.
There is a spark of hope for those still concerned about living next to advanced AI. OpenAI, the non-profit research company co-founded by Elon Musk, recently refused to share its research with the public out of fear of misuse. It created an AI system that can write fictional news and stories, and has been compared to the deepfakes trend (fake videos of real people); a system so powerful that, given the high risk of malicious use, it should not be released without further discussion of the breakthrough (Hern, 2019). This hopefully means more caution will be taken when new and improved AI systems are introduced into society in the future.
Whether and when humans reach the Singularity is obviously not up to the general public, and it is almost certain to happen sooner rather than later. Knowing that, we can at least express our fears and influence what AI becomes. We can, in a way, shape the 'AI user experience' and refuse to interact with human-like robots if we are afraid of them. We can keep opposing AI systems that spread gender and race inequality. We can try to make artificial superintelligence what we want it to be, not what we fear. We can still make it help us, not destroy us.
References
Buranyi, S. (2017). Rise of the Racist Robots — How AI is Learning All Our Worst Impulses. [online] The Guardian. Available at: https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses [Accessed 19 Feb. 2019].
DUST (2019). Sci-Fi Short Film “Sunspring” | DUST A.I. Week. [video] Available at: https://www.youtube.com/watch?v=UsnPyKsmSmI [Accessed 19 Feb. 2019].
Feldman, B. (2018). This Is Why Alexa Is Laughing at You. [online] Intelligencer. Available at: http://nymag.com/intelligencer/2018/03/this-is-why-alexa-is-laughing-at-you.html [Accessed 19 Feb. 2019].
Gifford, S. (2019). NASA reveals final, sad message sent by Martian rover Opportunity before dying. [online] MSN. Available at: https://www.msn.com/en-gb/news/world/nasa-reveals-final-sad-message-sent-by-martian-rover-opportunity-before-dying/ar-BBTFdbI?ocid=spartanntp [Accessed 19 Feb. 2019].
Hern, A. (2019). New AI fake text generator may be too dangerous to release, say creators. [online] The Guardian. Available at: https://www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction?CMP=fb_gu&fbclid=IwAR0jko0BXsCseEkVBHvg7ke4o3WMXe0eODosjUcqlefBC9LobkFYb4aFZp8 [Accessed 19 Feb. 2019].
Kurzgesagt — In a Nutshell (2017). Do Robots Deserve Rights? What if Machines Become Conscious?. [video] Available at: https://www.youtube.com/watch?v=DHyUYg8X31c [Accessed 19 Feb. 2019].
Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. New York: Viking.
Mashable Deals (2018). Google’s AI Assistant Can Now Make Real Phone Calls. [video] Available at: https://www.youtube.com/watch?v=JvbHu_bVa_g [Accessed 19 Feb. 2019].
NASA/JPL/Cornell University (2019). A NASA illustration shows what Opportunity would look like on Mars. [image] Available at: https://www.cnet.com/news/nasa-history-making-mars-rover-opportunity-declared-dead/ [Accessed 19 Feb. 2019].
National Geographic (2017). What is Technological Singularity? | Origins: The Journey of Humankind. [video] Available at: https://www.youtube.com/watch?v=gpKNAHz0zH8 [Accessed 19 Feb. 2019].
Silva, J. (2018). Jason Silva: The Technological Singularity. [video] Available at: https://www.youtube.com/watch?v=rt-DpzOSAbw [Accessed 19 Feb. 2019].
Weisberger, M. (2017). Lifelike ‘Sophia’ Robot Granted Citizenship to Saudi Arabia. [online] Live Science. Available at: https://www.livescience.com/60815-saudi-arabia-citizen-robot.html [Accessed 19 Feb. 2019].