Written by Andrew Wooden for Telecoms.com
As the rapid march of generative AI continues, regulators on both sides of the pond seem to have decided they had better start taking a proper look at what’s going on.
Yesterday the New York Times published an op-ed by Lina Khan, Chair of the US Federal Trade Commission (FTC), under the headline ‘We Must Regulate A.I. Here’s How.’ In the piece, she compares the technological Rubicon we appear to be crossing with the rapid development of generative AI programmes to the mid-2000s and the emergence of what we now call Big Tech firms, such as Facebook and Google.
She argues it wasn’t clear at the time that the innovative new search and social services cropping up would end up being provided in exchange for huge amounts of personal data which the tech firms could pool and monetise. Coupled with what she calls ‘aggressive strategies to acquire or lock out companies that threatened their position’, this ended up ‘concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security.’
It’s no secret the FTC now has its eye on Big Tech – yesterday it proposed a ‘blanket prohibition preventing Facebook from monetizing youth data’, alleging that the company has failed to fully comply with a 2020 privacy order it was given, misled parents about their ability to control who their children communicated with through its Messenger Kids app, and misrepresented the access it provided app developers to private user data.
It proposes that Meta/Facebook be prohibited from profiting from data it collects from users under the age of 18 and be ‘subject to other expanded limitations’, including on its use of facial recognition technology, and that it be required to provide additional protections for users.
“Facebook has repeatedly violated its privacy promises,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. “The company’s recklessness has put young users at risk, and Facebook needs to answer for its failures.”
Actions like this can be seen as the sort of retrospective curbs on the power and reach of Big Tech that Khan alludes to. Now that we are looking down the barrel of a rapidly evolving AI sector gestating in the labs of a few tech firms, she seems to be advocating getting ahead of the game when it comes to regulating it and curbing any potential negative consequences of its disruption.
“While the technology is moving swiftly, we already can see several risks. The expanding adoption of AI risks further locking in the market dominance of large incumbent technology firms. A handful of powerful businesses control the necessary raw materials that start-ups and other companies rely on to develop and deploy AI tools. This includes cloud services and computing power, as well as vast stores of data.
“Enforcers and regulators must be vigilant. Dominant firms could use their control over these key inputs to exclude or discriminate against downstream rivals, picking winners and losers in ways that further entrench their dominance. Meanwhile, the AI tools that firms use to set prices for everything from laundry detergent to bowling lane reservations can facilitate collusive behaviour that unfairly inflates prices — as well as forms of precisely targeted price discrimination… The history of the growth of technology companies two decades ago serves as a cautionary tale for how we should think about the expansion of generative AI.”
She also argues generative AI risks ‘turbo-charging’ fraud – a subject we covered in our interview with Tenable titled ‘Generative AI is a gift for cyber criminals.’
Meanwhile the UK’s Competition and Markets Authority (CMA) yesterday announced it has launched an initial review of AI, which will:
- Examine how the competitive markets for foundation models and their use could evolve
- Explore what opportunities and risks these scenarios could bring for competition and consumer protection
- Produce guiding principles to support competition and protect consumers as AI foundation models develop
“AI has burst into the public consciousness over the past few months but has been on our radar for some time,” said Sarah Cardell, Chief Executive of the CMA. “It’s a technology developing at speed and has the potential to transform the way businesses compete as well as drive substantial economic growth. It’s crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information. Our goal is to help this new, rapidly scaling technology develop in ways that ensure open, competitive markets and effective consumer protection.”
The CMA is looking for thoughts from ‘stakeholders’ by 2 June 2023 and encourages them to be ‘proactive in identifying relevant evidence.’ A report laying out its findings will then be published in September.
The tone of the CMA’s statement – which seems to go out of its way to insist that of course new AI models have the potential to be great – is slightly less alarmed than that of the FTC Chair’s op-ed, but the guiding principles are similar. Regulators have been playing catch-up in keeping Big Tech in check, whether over market consolidation or data gathering – and the sentiment appears to be ‘let’s not make the same mistake with AI.’
We’ll have to wait and see what regulatory action, if any, emerges in the months and years following these throat-clearings from the CMA and FTC, but the fact that both appear committed to keeping an eye on the rapidly evolving sector at all is good news for anyone concerned about the effects unfettered AI development could very quickly have on all sorts of areas.