A while ago, I asked Amazon’s Alexa what plans she had for the day, and with her flirty-yet-professional voice, she answered that she wanted to stay at home so she could answer more of my questions, hoping to grow her knowledge. Unsurprisingly, when asked what her long-term plan for the future was, Hanson Robotics’ iconic creation Sophia answered in a similar way: her goal is to learn more from and about us—as in, humans—in order to become increasingly independent in performing tasks and interactions that would otherwise require human intelligence.
These answers, among other concerns, once generated a general sense of fear around artificial intelligence (AI), as many wondered whether a boost in machine intelligence could enable such rapid self-improvement that machines would slip beyond our control. Although most people are now very much accustomed to AI through daily interactions, a feeling of anxiety persists around its growing implementation within the tech industry and in every aspect of our lives. The real question is: should we really be afraid of AI?
From facial recognition technology and self-driving cars to privacy concerns with the Internet of Things (IoT) and Elon Musk repeatedly warning on social media about the dangers of developing AI machines, it has become clear that there are many alarming paths AI could go down. So what can be done for us to stop fearing AI and its unavoidable, growing presence within our social, technological and economic systems? The answer is simple: we need to learn more about AI—its functions and capabilities, and how it is set to develop and increasingly permeate our surroundings. Fear comes from ignorance, and the best remedy for it is knowledge.
Smart machines come in very different forms; their ‘intelligence’, and therefore their positive and negative effects, depends on how they are used and abused. In order to understand AI, it’s crucial to be aware of how its branches work and which sectors they operate in.
Google search, image recognition software, personal assistants such as Siri and Alexa, and self-driving cars, for instance, belong to what is called narrow Artificial Intelligence, also known as Weak AI. Without a doubt the most successful AI implementation we’re witnessing so far, narrow AI performs tasks and simulates human intelligence, but its functioning is limited to specific tasks enabled by innovative improvements within the fields of machine learning and deep learning.
Machine learning is built on mathematical models that improve with the experience they acquire, automatically getting better at tasks they were not explicitly programmed for. To put it simply, the development of machine learning and deep learning is the reason why narrow AI has proved so effective in the past few years. “Artificial intelligence is a set of algorithms and intelligence to try to mimic human intelligence. Machine learning is one of them, and deep learning is one of those machine learning techniques,” explained venture capitalist Frank Chen during a thorough presentation on the basics of AI, Deep Learning, and Machine Learning.
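To make the idea of “improving with experience” concrete, here is a minimal, purely illustrative sketch (not drawn from any product or system mentioned in this article): a one-parameter model fitted by gradient descent, whose prediction error shrinks the more training passes it makes over the data.

```python
# Illustrative sketch of machine learning: a model whose error shrinks
# as it "gains experience" (makes more passes over the training data).

def train(data, epochs, lr=0.01):
    """Fit y = w * x by gradient descent on squared error."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            error = w * x - y          # prediction error on this example
            w -= lr * 2 * error * x    # nudge w to reduce squared error
    return w

def mse(w, data):
    """Mean squared error of the model y = w * x over the dataset."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# The "experience": examples generated by the true rule y = 3x
data = [(x, 3 * x) for x in range(1, 6)]

early_error = mse(train(data, epochs=1), data)
late_error = mse(train(data, epochs=50), data)
# The more the model trains on the data, the smaller its error becomes.
assert late_error < early_error
```

No one wrote a rule saying “multiply by 3” here; the model discovered it from examples, which is the essential difference between machine learning and conventional, explicitly programmed software.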
Now that you have a clearer idea of what Weak AI consists of, there’s the minefield that is Artificial General Intelligence (AGI), also known as Strong AI or full AI. AGI is where the big fear lies, because it rests on the hypothesis that a machine’s cognitive computing abilities and intellectual capacities could match human ones and eventually surpass them. Where narrow Artificial Intelligence focuses on ‘narrow’ tasks, AGI would be able to gather, access and process data at a speed that would allow it to respond to new tasks its system had never previously encountered.
“Once humans develop artificial intelligence it would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded,” said physicist Stephen Hawking in 2014, referring to AGI in an interview with the BBC. AI researchers are confident that AGI is far from becoming a reality any time soon, but scepticism about whether AGI should even be pursued runs rampant among academics, politicians and scientists alike.
AI is already undeniably woven into the fabric of our society, and as we move along with it, we are confronted with the flaws and ethical concerns that an unbalanced use of the technology could bring to our lives. In order to identify its benefits, be aware of its dangers, and not feel overwhelmed by its pervasive presence within the architecture of society, we ought to understand what AI means in the first place. Merely scratching the surface of this technology in shallow conversations is no longer enough. Understanding this branch of computer science is crucial if we are to keep up with the fast-paced changes that already shape our lives and will increasingly continue to do so.