Opinion
AI: What can go wrong? Plenty
Monday, 05 Jun 2023 12:22 PM MYT By Alwyn Lau

JUNE 5 — "What nukes are to the physical world, AI is to the virtual and symbolic world." — Yuval Noah Harari

Last week, a few hundred of the world's top scientists, technology leaders and CEOs signed a statement urging greater caution towards artificial intelligence (AI) and the risk of human extinction it poses.


These experts, who included OpenAI CEO Sam Altman (whose company makes ChatGPT) and Geoffrey Hinton (widely acknowledged as the "godfather of AI"), put their names to a letter released by the California-based non-profit Center for AI Safety.

The key passage in that statement was: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

I hope to unpack what this means, or at least why these (very) smart folks are openly declaring their concern about the risks of AI.

In a sense, this isn't new at all. Back in 2015, the Future of Life Institute had already released a statement to the effect that "50 per cent of AI researchers believe there's a 10 per cent or greater chance that humans go extinct from our inability to control AI."

However, when most people see such statements, images of Skynet (from Terminator) or the machines from The Matrix immediately pop into their heads.

While having a sentient supercomputer destroy the very humanity which created it is not entirely impossible, such scenarios are admittedly sensationalistic and may thus obscure clear thinking about AI's actual problems.

And there are problems.

Curation AI: What has already gone wrong?

The point is that the harmful effects of AI began manifesting themselves many years ago.

The phenomenon of curation brought many good things: personalisation of feeds, automatic recommendations, smarter prediction of trends, better sustained attention and engagement, and so on. But it also enabled social evils: information overload, device addiction, doom-scrolling, hyper-sensitivity, shortened attention spans, political-partisan polarisation, fake news, and more.

Note that everything in the previous paragraph has already happened and, for some people, it has caused heavy (if not irreparable) damage.

One of the reasons why Malaysia's mental health challenges keep rising is that the young (and not so young) are spending so much time on Twitter and Instagram.

Hooked by these platforms' AI engagement and prediction machines, they are unable to engage healthily and productively in the real world, becoming increasingly dissociated and hyper-sensitive to the slightest online provocations.

AI analyst Aza Raskin calls the above phase Curation AI, a phase which has already caused many problems.

But this year we’ve seen the popular launch of what Raskin calls Creation AI.

Creation AI: What else can go wrong?

When Large Language Models (LLMs) like ChatGPT help us write school essays, solve urban problems, fix code, recommend diet plans and so on, AI is essentially making things for us (whereas previously its main task was selecting items).

And it is precisely this new ability which makes many people nervous. LLMs' super-power is that they can translate any domain (e.g. images, brain waves) into text-based language, which they can then manipulate and reproduce towards any particular direction or objective.

Because of these capabilities, we are already seeing scary scenarios like the use of deepfakes to commit crimes. For example, someone could record a few seconds of a child's voice, then use AI to mimic that voice when speaking to the child's mother on the phone.


Can you imagine receiving a call from someone sounding like your child, claiming she’s been kidnapped?

In a similar vein, verification systems would be up for grabs: e-tickets, e-certificates, facial recognition, and even DNA information can all be replicated and duplicated.

This has led to the joke that in the future we'll need to assume that online and digital personas are fake; a premium would then certainly be paid for counter-falsification AI systems. Yet what will happen when fake and anti-fake AI systems compete? God only knows.

All the above spells trouble, but nothing is more troubling than AI’s emergent capabilities.

As is well known, machine learning systems train themselves on terabytes of data and continuously improve. What's ultimately scary is a) how fast these systems can learn and b) what happens if they turn agentic (i.e. develop objectives independent of their creators).

Here's Aza Raskin again on what AI can do: "Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime. Teach an AI to fish, and it'll teach itself biology, chemistry, oceanography, evolutionary theory, and then fish all the fish to extinction."

Long and short, we simply do not know what the latest AI systems will be capable of on their own.

A final question: What if very powerful AI is put in the wrong hands?

I hope it's clear now why some of our best minds are freaking out, or at least sounding a warning. I'm not a big fan of Harari's, but I have to admit that line of his at the start is worth reflecting on: the line from ChatGPT to a nuke may not be that straight or clear, but that line exists.

We’d be wise to keep an eye on it.

* This is the personal opinion of the columnist.
