2021 could bring us the first human-like artificial intelligence – Screen Shot



2021 is bringing an acceleration in the evolution of artificial intelligence (AI), one that will undoubtedly change every single aspect of our lives in some way or another. Let’s just say, AI isn’t going anywhere, and hopefully, neither are we. Here are the most significant changes so far.


GPT-3

OpenAI’s GPT-3 is the largest language model ever created; it generates human-like text on demand. OpenAI first described GPT-3 in a research paper published in May 2020, and the software is currently being drip-fed to a select few who have requested access to a private beta version. The tool will probably be turned into a commercial product later in 2021. So what is it exactly, and how does it work?

In short, it’s a very powerful language tool that can churn out convincing streams of text when prompted with an opening sentence. What sets it apart from past language generators is its sheer scale: the model has 175 billion parameters (the values that a neural network optimises during training).
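To make the idea of ‘parameters’ concrete, here is a deliberately tiny sketch (it has nothing to do with GPT-3’s actual architecture): a model with a single parameter, nudged by gradient descent until it fits some data. GPT-3 performs the same kind of optimisation, only over 175 billion such values.

```python
# Illustrative only: what a "parameter" is.
# A model with one parameter w, trained by gradient descent to fit y = 2x.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs with y = 2x

w = 0.0    # the single parameter, initialised arbitrarily
lr = 0.05  # learning rate: how far each update nudges w

for step in range(200):
    # Mean-squared-error gradient with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # nudge the parameter to reduce the error

print(round(w, 3))  # converges to 2.0, the slope of the data
```

Training a language model is this same loop at an absurd scale: billions of parameters, adjusted over and over until the model’s predictions fit the text it was fed.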

The tool generates short stories, songs, press releases, technical manuals… you name it. As reported by the MIT Technology Review, Mario Klingemann, an artist who works with machine learning, shared a short story called The importance of being on Twitter, written in the style of Jerome K. Jerome, which starts: “It is a curious fact that the last remaining form of social life in which the people of London are still interested is Twitter. I was struck with this curious fact when I went on one of my periodical holidays to the sea-side, and found the whole place twittering like a starling-cage.” Klingemann says all he gave the AI was the title, the author’s name and the initial “It.” Pretty deep for a machine, wouldn’t you think?

Writing poetically isn’t the only thing GPT-3 can do, though: it can generate any kind of text, including code, which might be the most important thing to consider here. The tool can be tweaked so that it produces HTML rather than natural language, and web developer Sharif Shameem demonstrated that he could get it to create web-page layouts simply by giving it prompts like ‘a button that looks like a watermelon’. This might have web developers a little unnerved.
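Demos like Shameem’s rely on so-called few-shot prompting: show the model a couple of description-to-code pairs, then a new description, and let it continue the pattern. The sketch below assembles such a prompt as plain text (the example pairs and formatting are invented for illustration, and no real API is called):

```python
# Hypothetical few-shot prompt for description -> HTML generation.
# The model would be asked to continue this text after the final "code:".

examples = [
    ("a red button that says stop",
     '<button style="background: red;">Stop</button>'),
    ("a blue link to google",
     '<a style="color: blue;" href="https://google.com">Google</a>'),
]

def build_prompt(description):
    """Assemble the plain-text prompt the model would be asked to complete."""
    lines = []
    for desc, html in examples:
        lines.append(f"description: {desc}")
        lines.append(f"code: {html}")
    lines.append(f"description: {description}")
    lines.append("code:")  # the model's continuation becomes the HTML
    return "\n".join(lines)

print(build_prompt("a button that looks like a watermelon"))
```

The model is never told the rules of HTML; it simply continues the pattern it sees, which is exactly why a vague description like ‘looks like a watermelon’ can still produce plausible markup.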

That all being said, it is just a tool, and it still needs some fine-tuning. It’s prone to spewing sexist and racist language, which is a rather large problemo if you ask me. GPT-3 mainly seems to be good at synthesising text found elsewhere on the internet, and it lacks much common sense. A tool like this has enormous potential, however, and will be very useful once developed further.

Multi-skilled AI

Language models like GPT-3 are trained purely on text input, which is part of why AI and robotics still lack common sense. Now, that limitation is being flipped on its head. To hold GPT-3’s hand, a group of researchers from the University of North Carolina, Chapel Hill, have designed something they call ‘vokenisation’, which gives language models like GPT-3 the ability to ‘see’.

Vokenisation, in AI lingo, is named as such because the words used to train language models like GPT-3 are known as ‘tokens’, so the researchers decided to call the image associated with each token in their visual-language model a ‘voken’. The vokeniser is the algorithm that finds a voken for each token, and vokenisation is the process as a whole.
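As a rough illustration of the idea (all of the embeddings, words and file names below are invented, and real systems use learned vectors with thousands of dimensions), a vokeniser can be thought of as matching each token’s embedding to the most similar image embedding:

```python
# Toy vokeniser: pair each text token with its best-matching image ("voken")
# by comparing embedding vectors. All values here are made up.

token_embeddings = {
    "cat":   [0.9, 0.1, 0.0],
    "grass": [0.0, 0.2, 0.9],
}

voken_embeddings = {
    "img_cat.jpg":   [0.8, 0.2, 0.1],
    "img_field.jpg": [0.1, 0.1, 0.9],
}

def dot(u, v):
    """Similarity score: higher when two embeddings point the same way."""
    return sum(a * b for a, b in zip(u, v))

def vokenise(tokens):
    """Map each token to the voken whose embedding matches it best."""
    return {
        t: max(voken_embeddings,
               key=lambda img: dot(token_embeddings[t], voken_embeddings[img]))
        for t in tokens
    }

print(vokenise(["cat", "grass"]))
# → {'cat': 'img_cat.jpg', 'grass': 'img_field.jpg'}
```

The real research problem is producing embeddings good enough that this matching works across billions of tokens, but the matching step itself is conceptually this simple.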

Combining language models with computer vision is a rapidly growing area of AI research. GPT-3 is trained through unsupervised learning, requiring no manual data labelling, while image models learn directly from visual reality rather than from the world of text: a vision model can, for example, label a sheep as white because it sees that the sheep is white.

However, combining these two kinds of model is complicated. You can’t just mush the two AIs together in a robotic form; a combined model needs to be built and trained from scratch on a visual-language data set. By pairing images with descriptive captions, such a model can learn to recognise objects and also to see how they relate to each other, through verbs and prepositions.

In basic terms, AI’s senses are expanding through the overlap of text and image. This will undoubtedly require an obscene amount of data, but it is a first step towards ‘human-like’ intelligence, or more realistically, a flexible intelligence. It’s a pretty big deal.


Singularity explained simply: will our technological growth soon become uncontrollable?

In a new essay published in The International Journal of Astrobiology, Joseph Gale from The Hebrew University of Jerusalem and co-authors raised awareness of what recent advances in artificial intelligence (AI) could mean for the future of humanity and robots. The study focuses specifically on pattern recognition and self-learning, and on how superintelligence’s relationship with humans could fundamentally shift. The futurist Ray Kurzweil predicted that the singularity would occur in 2045, but Gale believes the event may be more imminent, especially with the advent of quantum computing. What exactly is the singularity, and what does it mean for humanity?

What is the singularity?

The term ‘the singularity’ has different definitions depending on who you ask, and it often overlaps with ideas like transhumanism. However, broadly speaking, the singularity is the hypothetical future creation of superintelligent machines. Superintelligence is defined as a technologically-created cognitive capacity far beyond what is currently possible for humans, and should the singularity occur, technology will in turn advance beyond our ability to foresee or control its outcomes. Basically, the singularity will be the time when the abilities of a computer overtake the abilities of the human brain—it’s a little concerning, I know.

As we know, a human brain is ‘wired’ differently to a computer, and this may be why certain tasks are simple for us but challenging for today’s AI. Nor does the size of a brain, or the number of neurons it contains, equate to higher intelligence: whales and elephants have double the number of neurons in their brains compared to humans, and yet they are not more intelligent than us.

When the singularity occurs (and whether it does should come down to if and when we let it, since for now we still hold the power), the human race may very well begin its decline. As theoretical physicist Stephen Hawking once told the BBC, “The development of full artificial intelligence could spell the end of the human race.”

Hawking’s warning was informed by the technology he used to communicate, which relied on a basic form of AI, after the motor neurone disease he lived with took away his speech. According to Kurzweil’s book The Singularity Is Near, humans may soon be fully replaced by AI, or by some hybrid of humans and machines.

American writer Lev Grossman explained this prospect in Time magazine by saying that “Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a superintelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn’t even take breaks…”

Future posed an interesting thought experiment on ‘supercomputers to superintelligence’ by proposing that we ask our elders whether they ever dared to think that one day in the future (meaning now), everyone would be posting and sharing images and information about one another on a social network called Facebook. Or whether they ever imagined that they would soon be able to receive answers to any and every question from a mysterious entity called Google. Chances are they would answer negatively, and who would blame them?

The thing is that very few would have imagined the future that is now, even among those who assumed technologies would become widespread or fundamentally change society. But here we are, and whatever we now imagine of our own futures may turn out to be an exaggerated version of those ideas, or nothing like them at all.

In hindsight, changes of any kind always appear dramatic, and this is most definitely the case with technology. These dramatic shifts in thinking are what we call the singularity. The term originally derives from mathematics, where it describes a point whose exact properties we are incapable of deciphering, a point where the equations break down and make no sense. Now it denotes a point in time that could completely change how we view ourselves, and function, as human beings.

Singularity and AI regeneration

If the singularity does approach, AI will essentially improve itself once it learns how to, and will do so over and over again without our help. Humans will remain biological machines. But if this superintelligent AI were kept on a tight leash, we could still use it to our advantage, harnessing its advances to expose and discover the wonders we haven’t yet been able to reach in our world, and beyond.

Truthfully, a singularity on some spectrum is most definitely due to arrive; in some sense it already has, in the gaming world and in professional fields like health care. That being said, some people may struggle with the reality of such a time arriving, and some may ignore it altogether (while still using a mobile phone or calculator, ignorantly). Both of those approaches will most definitely fall disastrously behind, while others will realise that the path ahead relies on increasing collaboration between humans and computers. I would argue that the dawn of the singularity is here, that it possibly arrived decades ago, and that only in hindsight will we recognise this point in time as dramatic.