AI Dilemma

Thick grey skies. A few lonely buildings dot the deserted cities that were once home to mankind. Of man, few traces remain. Artificial intelligence has taken over, and the end has come for the human species. An apocalyptic science-fiction plot? Or our real worst-case future?

Hollywood Prophecies

HAL 9000, the supercomputer aboard the spaceship Discovery in Stanley Kubrick's 2001: A Space Odyssey, was a science-fiction fantasy when the film was released in 1968. A computer capable of performing all the tasks of the human mind, but much faster and with a minimal, if any, margin of error? With a human voice, sensitivity, and sensory perception akin to our own, and capable of piloting a ship in complete autonomy? Light-years away. But research and development moved at the speed of light, and some forty years after Kubrick's epic hit the big screen, technology gave voice to the first "Hey Siri, what's the weather like today?". After the first personal voice assistant came as standard on Apple smartphones, those of Google, Microsoft, Amazon, and IBM followed. Within a few years, these assistants have gone from a somewhat robotic voice and a limited lexical and pragmatic repertoire to commanding a body of knowledge as vast as the web, and to a prosody capable of seducing us on the latest AI apps (such as Replika and CandyAI). Voice assistants, however, were not the only prophecy of sci-fi films like 2001: A Space Odyssey to come true. Self-driving cars are now a reality steadily being perfected, home automation has turned our houses into conversational beings, and, more recently, generative artificial intelligence has produced creations as valuable as our brightest ideas, demonstrating its potential in the artistic and creative fields.

Neural Zoo, by Sofia Crespo. This image collection rearranges nature via computer vision and machine learning, turning artificial neural networks into a tool for creation.

So are Skynet from The Terminator (James Cameron, 1984) and the creepy world of The Matrix (the Wachowskis, 1999) ready to become reality, too? Alan Turing - the godfather of artificial intelligence - did not give us as vivid a picture of our future as the filmmakers did, but he did, for all intents and purposes, claim that 'once the machine thinking method had started, it would not take long to outstrip our feeble powers. At some stage, therefore, we should have to expect the machines to take control'.

Towards Super Intelligence

The AI we have today is not as sophisticated as the AI of the films, not yet. For the moment, it is a weak form of artificial intelligence, built on algorithms that allow it to excel only at detail-oriented tasks. This means that AI simulates human intelligence processes - such as natural language processing, speech recognition, vision, and the generation of ideas - by ingesting large amounts of data and processing them far faster than humans can. But it still lacks the flexibility and adaptability that would allow it to transfer knowledge from one domain to another, to exercise abstract thinking and common sense. To see this, consider that today's artificial intelligence can beat the world chess champion and pass the Turing test, yet when prompted with "Suppose I left five clothes to dry out in the sun, and it took them five hours to dry completely. How long would it take to dry 30 clothes?", GPT-4 - one of the newest AI systems - answered 30 hours.

Picture by Tima Miroshnichenko on Pexels

The goal of computer and AI scientists, therefore, is to remedy these shortcomings, strengthening the weak form of AI into a more solid, robust, and resilient one: the so-called AGI. This would be an Artificial General Intelligence capable of learning, reasoning, correcting itself, and reflecting critically on its own “thoughts”. To what end? The imagined goal is very different from the shapes and colors it takes on in Hollywood dystopias. AGI would have the potential to synthesize vast amounts of information and generate insights that could revolutionize everything, from the way we perform our everyday tasks to the tackling of problems that are currently beyond human capability. This scenario is the opposite of dystopia. It is as if filmmakers, acting as spokespeople for the fears of the masses, pick up on negative sentiment and exacerbate it to the point of creating horror stories, while computer scientists, embracing the confidence and optimism of the scientific community, believe in a utopian future in which everything improves thanks to the coexistence of humans and machines.

By Yaroslav Shuraev on Pexels

And the aspect that experts in the field find most tantalizing balances precisely on that thin line between idyll and catastrophe: the invention of Artificial Super Intelligence. An intelligence with cutting-edge cognitive functions and highly developed thinking skills, going a step further than AGI. This would be the science-fiction-like super-being - hyper-intelligent, needing no rest - potentially able to solve the most persistent medical puzzles, develop life-saving medicines and treatments, and unlock the mysteries of physics and humanity's ambitions to explore the universe. What could possibly go wrong? Well, this same ASI could become self-aware and, operating beyond human control, could lead us to the much-feared existential risk.

Givenchy AW99 via Pinterest

Ustopia

What utopia and dystopia have in common - as innovation and equity researcher Ruha Benjamin argues - is the dependence they create between human beings and machines. As is often the case, the truth lies somewhere in between, and rather than utopia or dystopia, we should believe in us-topia: that is, in what we are and have right now.

On the one hand, we fear the cyborgs we already are. With our phones and the technological advances we have achieved, we have already vastly increased our intelligence and our potential to live better and longer. Rejecting the further improvements that AI can bring is by no means what we should be hoping for. Quite the contrary. On the other hand, the hype of a reckless race to superintelligence might turn into the regret of having destroyed what we had for a future we no longer belong to.

What we must instead realize - at a global level - is that it is imperative to create policies for the ethical regulation of AI, to make sure that today's brilliant inventions do not turn into tomorrow's catastrophe. To give just one example, programming an AI with the aim of improving the condition of planet Earth and protecting it from its threats could amount to indirectly asking the AI to extinguish the human species, since we are among those threats. Every move, then, must be carefully considered, planned, and, above all, shared.

One of AI's great current flaws is that it is highly expensive and demands rare skills. Instead of benefiting everyone, it is already exacerbating existing inequalities, and it will do so even more in the future if we don't prevent it now. So, to truly put AI at the service of the community, teamwork is necessary. If we work together, what many believe will be the last invention mankind ever comes up with will be a worthy one, and with the right regulations, apocalyptic scenarios will remain the endings of our films, not of our lives.

Céline Merlet

Celine is now channeling her storytelling and communication skills as an editorial intern at Raandoom. Her educational background in languages and her practical experiences in various cultural settings have shaped her writing style. Celine's approach is all about connecting with her audience through relatable and compelling stories. She aims to transform ordinary events into captivating tales that speak to a global audience.
