Will guardrails save us from a bad AI trip?

ARTHUR GOLDSTUCK Goldstuck is founder of World Wide Worx and editor-in-chief of Gadget.co.za

As the artificial intelligence (AI) revolution goes mainstream, we will all have to learn a new vocabulary. One of my favourite terms emerging from the propensity of AI chatbots to make stuff up is “hallucination”. The idea that AI is hallucinating is far more entertaining and descriptive than suggesting it is inventing facts or getting things wrong.

That was one of the quirky language usages that cropped up during a hearing on Tuesday by the US Senate judiciary subcommittee on privacy, technology & the law, designed to reset the AI agenda for the US — and, by implication, the planet.

Another term bandied about, which has far more serious implications, was “guardrails”. The term is defined by the Merriam-Webster dictionary as “a railing guarding usually against danger”.

The AI search chatbot Google Bard, on the other hand, tells me that guardrails in AI are “a set of rules, regulations and best practices that are designed to ensure that AI systems are developed and used responsibly”.

That, ultimately, was the intention of the Senate hearing.

I asked Google Bard to name all the people who commented on guardrails during the hearing. In three seconds, it named no fewer than 20 people, including five senators, three professors, 10 executives and an author.

It added, helpfully: “It is clear that there is a lot of concern about the potential risks of AI, and that there is a need for guardrails to help ensure that AI is used for good and not for harm.”

The process of discovering this kind of information was highly entertaining, especially for those of us easily amused by horror stories. But Bard’s comment does sum up what is wrong with technology in general.

It is initially built for its own sake, without ensuring safety and security are part of the fabric of its design. This, in turn, means trust is not baked into the design, but bolted on at the end.

Little wonder that AI has ended up so deeply mistrusted that a bunch of ageing legislators see fit to call its arbiters to order. To their credit, almost all the executives agreed with the concerns, and outlined the necessary approach to creating guardrails.

The most eloquent and concise response was probably that of Gary Marcus, a psychologist, cognitive scientist and author. He was asked by Republican senator John Kennedy: “Talk in plain English and tell me what, if any, rules we ought to implement.”

Marcus replied: “Number one, a safety review like we use with the FDA [Food and Drug Administration] prior to widespread deployment. If you’re going to introduce something to 100-million people, somebody has to have their eyeballs on it.

“Number two, a nimble monitoring agency to follow what’s going on … with authority to call things back, which we’ve discussed today.

“And number three would be funding geared towards things like an AI constitution. I would not leave things entirely to current technology, which I think is poor at behaving in ethical fashion and behaving in honest fashion. And so I would have funding to try to basically focus on AI safety research ... We need to fund models to be more trustworthy.”

Sam Altman, CEO of ChatGPT creator OpenAI, was asked the same, and stuck to the same script. He called for a new agency that licenses any effort above a certain scale of capabilities, a set of safety standards focused on dangerous capability evaluations, and independent audits.

He may have got that from his own chatbot but, if it is a hallucination, at least it aligns with reality.
