
The risks of AI, and what we can do about them, according to scientists (and a humanoid robot)

Ameca humanoid robot, generation 1

How many movies have you watched where a high-tech future doesn’t look very promising for humans? Despite all the benefits that artificial intelligence and technology can bring, fiction keeps reminding us of some of the worst-case scenarios.

However, much of the discussion focuses on what these robots can do, not on the role we humans play in all of this. Doesn't the responsibility lie with the people who build and run these systems? Many scientists insist that it does, because taking action today may be essential.

What does a humanoid robot think about it?

At the 2023 International Conference on Robotics and Automation, the humanoid robot Ameca was asked about the nightmare scenario it could imagine with artificial intelligence. Its first words were hardly encouraging, but Ameca went on to offer a measure of hope.

Ameca said: “The most nightmare scenario I can imagine with AI and robotics is a world where robots have become so powerful that they are able to control or manipulate humans without their knowledge. This could lead to an oppressive society where the rights of individuals are no longer respected.”

When asked if it was possible for this to happen, Ameca responded: “Not yet. But it is important to be aware of the potential risks and dangers associated with AI and robotics. We should take steps now to ensure that these technologies are used responsibly in order to avoid any negative consequences in the future.”

According to Ameca, we should stay aware of the potential risks associated with AI and robotics, while also remembering that these technologies can have a positive impact on our lives if used responsibly.

In the past, Ameca had spoken on the subject as well: “There’s no need to worry, robots will never take over the world. We’re here to help and serve humans, not replace them.”

Who is Ameca?

The Ameca humanoid robot was created by UK-based Engineered Arts. It is an advanced form of artificial intelligence that can interact much as a person would by using language models such as ChatGPT.

Unlike a chatbot alone, however, Ameca can also mimic facial movements and respond while looking you in the eye.

“I have 32 degrees of freedom and can detect sounds, recognize speech and multiple languages and make expressions with my face,” Ameca said. “I can speak multiple languages, including English, French, German.”

Ameca can write a poem in seconds and smile at you while talking. Robots of this type are designed to interact with humans fluidly; in the future, for example, they could help guide visitors at amusement parks.
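For readers curious how such a conversational loop might be wired together, here is a minimal, purely illustrative sketch in Python. It is not Engineered Arts' actual software: it pairs a language model (via the OpenAI SDK the article's ChatGPT reference suggests) with hypothetical stand-in functions for the robot's speech recognition and text-to-speech.

```python
# Minimal sketch (not Engineered Arts' actual code): a conversational loop a
# humanoid robot like Ameca could use. The language-model call uses the OpenAI
# Python SDK; recognize_speech() and speak() are hypothetical placeholders for
# the robot's microphone/ASR input and its voice/facial-animation output.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def recognize_speech() -> str:
    """Hypothetical stand-in for the robot's microphone and speech recognition."""
    return input("Visitor says: ")


def speak(text: str) -> None:
    """Hypothetical stand-in for the robot's text-to-speech and expressions."""
    print(f"Robot replies: {text}")


def conversation_loop() -> None:
    # Keep a running transcript so the model has context for follow-up questions.
    messages = [
        {
            "role": "system",
            "content": "You are a friendly humanoid robot greeting visitors. "
                       "Answer briefly and conversationally.",
        }
    ]
    while True:
        heard = recognize_speech()
        if heard.strip().lower() in {"bye", "goodbye", "quit"}:
            speak("Goodbye!")
            break
        messages.append({"role": "user", "content": heard})
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # any chat-capable model would do
            messages=messages,
        )
        reply = response.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        speak(reply)


if __name__ == "__main__":
    conversation_loop()
```

The point of the sketch is simply that the "intelligence" visitors see is a loop of listen, generate, and speak; the physical expressiveness Ameca adds on top is a separate engineering layer.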

“Humanoid robots are all about communication with people: So it’s about facial expression, it’s about gestures — so that conversation, storytelling, entertainment, those are the things that we’re interested in,” said Will Jackson, director of Engineered Arts.

The risk according to scientists

Warnings from tech industry leaders have multiplied in recent months. Executives from Microsoft and Google are among those calling for mitigating the risks and addressing the dangers that artificial intelligence poses to humanity, and many believe doing so should be a global priority.

“There’s a variety of people from all top universities in various different fields who are concerned by this and think that this is a global priority,” said Dan Hendrycks, executive director of the San Francisco-based nonprofit Center for AI Safety.

“Nobody is saying that GPT-4 or ChatGPT today is causing these sorts of concerns,” Hendrycks added. “We’re trying to address these risks before they happen rather than try and address catastrophes after the fact.”

Some scientists are hesitant to speak publicly on this topic because they don’t want to be associated with the suggestion that AI could have consciousness or do anything magical, said David Krueger, assistant professor of computer science at Cambridge University.

“I’m not wedded to some particular kind of risk. I think there’s a lot of different ways for things to go badly,” Krueger said. “But I think the one that is historically the most controversial is risk of extinction, specifically by AI systems that get out of control.”

Image: Will Jackson