Pressure mounts on OpenAI as more lobby groups pile on

Written by Nick Wood for Telecoms.com


There have been more calls for governments to intervene in the development of AI amid growing fears that it could spell certain doom for humanity.

A week after an open letter signed by various tech heavyweights called for a six-month pause on training new AI systems, more lobbyists in both the US and Europe have decided to have their say as well.

On this side of the pond, the European Union is currently working on the AI Act, which aims to protect consumers from potential harms and ensure the technology is a force for good. However, the European Consumer Organisation (BEUC) late last week warned that it will be years before the AI Act takes effect. By then, it could be too late to prevent harm from being done. Ergo, it has called on the EU and national authorities to investigate OpenAI’s ChatGPT – powered by its GPT-4 large language model – and similar chatbots.

“In only a few months, we have seen a massive take-up of ChatGPT and this is only the beginning. Waiting for the AI Act to be passed and to take effect, which will happen years from now, is not good enough as there are serious concerns growing about how ChatGPT and similar chatbots might deceive and manipulate people,” said Ursula Pachl, deputy director general of the BEUC, in a statement last Thursday.

On the same day in the US, the Center for AI and Digital Policy (CAIDP) filed a complaint with the US Federal Trade Commission (FTC) accusing GPT-4 – the flagship model of the heavily Microsoft-backed OpenAI – of being biased, deceptive and a risk to privacy and public safety.

In its filing, CAIDP notes that OpenAI acknowledges the risk that its technology could be used for disinformation and influence operations, and for reinforcing ideologies, worldviews, truths, untruths and so on, but disclaims liability for any of these potential negative consequences.

“It is time for the FTC to act. There should be independent oversight and evaluation of commercial AI products offered in the United States. CAIDP urges the FTC to open an investigation into OpenAI, enjoin further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses and the commercial marketplace,” CAIDP said.

“The Centre for AI and Digital Policy has raised critical issues about the impact of GPT-4 on consumer protection, data protection and privacy, and public safety,” said Pachl. “EU and national authorities in these fields should launch an investigation immediately into the risks of ChatGPT and similar chatbots for European consumers. These AI systems need greater public scrutiny, and public authorities must reassert control over them.”

Italy has done just that.

On Friday, the Guarantor for the Protection of Personal Data (GPDP) imposed a temporary ban on ChatGPT on grounds that it doesn’t provide information to users about the data it collects and how that is processed. It also noted that ChatGPT doesn’t verify the age of users, and therefore risks exposing children to harmful content.

If OpenAI doesn’t address the Italian watchdog’s concerns within 20 days, the regulator can impose a fine of up to €20 million or 4% of OpenAI’s global annual turnover.

Such is the commercial interest in AI that OpenAI and its rivals are unlikely to cave in to pressure and pause development of their various technologies. The likelihood is that it will fall to the authorities to regulate for their desired outcomes, steering the development of future versions of ChatGPT, Bard and the like in their preferred direction. With the technological singularity – the point at which machine intelligence matches and then surpasses that of humans – not expected until the end of this decade at the earliest, there is still a bit of time to play with before the machines take over.


