The challenge of AI

Lawmakers and the media are racing to catch up with the rapid development of artificial intelligence, an enormous challenge when billions are being invested in the technology.

On October 28 the AEJ UK hosted a panel that discussed how the European Union is facing up to the huge challenges of shaping laws that address the risks surrounding AI, how it is being used by journalists and the media, and how to mitigate the most obvious risks.

Report from AEJ UK Treasurer David Worsfold, freelance business journalist and author

Artificial intelligence: beware of the slop

Failure to robustly face up to these challenges, and to face down powerful big tech interests, could quickly see the internet populated by “slop”: unverified, untrustworthy, endlessly regurgitated and often wildly inaccurate content. That was one of the key messages to emerge from the discussions at Europe House, held by courtesy of the European Parliament Liaison Office in the UK.

Michael McNamara, an independent Irish MEP and co-chair of the European Parliament’s Working Group on the Implementation and Enforcement of the AI Act, and Graham Lovelace, a strategist and consultant on AI, especially its impact on media, led the discussion with some forthright views on the threats and risks posed by AI, but both were far from dismissive of the potential value to society of well-regulated, ethically responsible use of AI:
“Artificial intelligence has already become part of our daily lives, whether we’re unlocking our phones with facial recognition technology, getting search results shaped by generative AI, or relying on AI for translation, navigation, and medical diagnostics”, said Mr McNamara, who sits with the centrist Renew group in the European Parliament.
“The potential benefits are enormous. Productivity gains, economic growth, and innovation in science, healthcare, and education, with some even comparing it to transformative developments like the Industrial Revolution, the advent of electricity, and the advent of the internet. But the risks too are significant, from discrimination and bias, to privacy violations, misinformation, threats to fundamental rights, and ultimately, threats to democracy and democratic values themselves.”

Mr McNamara explained how the EU’s regulatory approach to AI has developed since the Parliament set out some clear objectives in 2020, calling for “an ethical, human-centric, innovation-friendly framework”. This led to the AI Act, which was formally adopted in March 2024, and is the first comprehensive legislative framework governing AI globally.
Getting the AI Act passed required compromise with powerful big tech interests, but there were some clear red lines: “Parliament strengthened the list of prohibited AI practices. In addition to banning real-time biometric surveillance in public spaces, it pushed to ban biometric categorisation based on sensitive characteristics like race, gender, political opinions or religion, predictive policing based on profiling or passive behaviour, emotional recognition in law enforcement, workplaces, schools and border control. And it also banned the indiscriminate scraping of facial images from the internet, or CCTV, to build facial recognition databases.”

Alongside the Act, the EU has launched an AI code of practice, “a non-binding instrument to guide obligations under the Act for general purpose AI providers, particularly those with systemic risk potential”.
This was published in August and has four key themes: transparency, copyright, systemic risk and governance.
“Most of the large AI providers, with the notable exception of Meta, have signed up to this. X signed up in part; they didn’t sign up to the copyright part of it”, reported Mr McNamara.
He said copyright remained a highly contentious area, with AI systems being trained on vast datasets that include copyrighted material. The AI Act requires transparency about training data, particularly whether copyrighted material has been used, but he acknowledged that the current approach is a compromise that has met with a lukewarm reception from copyright holders. A key focus now is whether Europe needs a dedicated licensing framework for AI training. This has exposed a sharp divide: AI developers argue that existing text and data mining (TDM) exemptions suffice, while rights holders contend these rules were never designed for generative AI models that ingest copyrighted material on an industrial scale.

He reserved some of his starkest warnings for the threats to journalism, especially how people receive news:
“Another particularly pressing issue is how we currently receive our news, and news publishers across the world are seeing that traffic is declining on their websites, and with that, of course, their ability to monetise their content as AI systems provide direct answers to people’s search queries … They are building a business model on journalism that they don’t pay for, while undermining trust in that journalism through what could be legitimately, I think, called systemic misrepresentation.”

This provided a neat segue to media commentator Graham Lovelace, who wasted no time spelling out his basic premise when it comes to generative AI based on large language models (LLMs):
“We’ve had AI or the notion of artificial intelligence for about 70 years, but this is different. This is not necessarily going to find a cure for cancer or find a cure for anything. This is content. AI generates text, images, video, sound and music even in response to a prompt … It’s been trained by scraping the web, scraping the web illegally. This is theft. There’s no other word for it. This is the theft of intellectual property on a grand scale.”
He explained that large language models analyse patterns of words and produce outputs based on probability, literally the likelihood of one word following the previous words. This is what makes them highly unreliable: biased training data skews their outputs, and they are prone to hallucinations, the tendency to make things up when the probability-based model cannot find data that directly answers a question.
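That mechanism is easy to illustrate. The short Python sketch below is purely illustrative, with a handful of made-up word probabilities standing in for the billions of statistics a real model learns from scraped text; the point is that plausibility, not truth, decides each word:

    import random

    # Hand-made next-word probabilities, for illustration only. A real LLM
    # learns billions of such statistics from text scraped off the web.
    NEXT_WORD_PROBS = {
        "the": {"minister": 0.5, "report": 0.3, "verdict": 0.2},
        "minister": {"resigned": 0.6, "denied": 0.4},
        "report": {"found": 0.7, "claimed": 0.3},
        "verdict": {"was": 1.0},
    }

    def generate(start, max_words=5):
        """Build a phrase by repeatedly sampling a likely next word."""
        words = [start]
        while len(words) < max_words:
            choices = NEXT_WORD_PROBS.get(words[-1])
            if not choices:
                break  # no statistics for this word, so the toy model stops
            # Plausibility, not truth, picks the next word; this is the gap
            # through which "hallucinations" enter.
            words.append(random.choices(list(choices), weights=list(choices.values()))[0])
        return " ".join(words)

    print(generate("the"))  # e.g. "the minister resigned"

Run it a few times and it produces fluent fragments such as “the minister resigned”, none of which has been checked against anything in the real world.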

He echoed Mr McNamara’s warnings about the threats generative AI poses to journalism. He described it as “probably going to be the biggest disruptive threat” because it provides AI summaries rather than directing users to original sources, causing traffic to news websites to fall “like a stone.” He warned that this threatens the advertising-based business model of many news organisations as well as eroding trust through the proliferation of low-quality, inaccurate content, damaging brand integrity.
He wasn’t entirely negative about the potential uses of AI in the media.
“Despite everything I’ve been saying, AI models and chatbots are great at generating ideas. They can analyse data. An entirely new branch of journalism has essentially been grown out of this called data journalism. It’s looking for stories in the data, stories that are missing inside all of those numbers. They summarise brilliantly long and complicated documents. They can spot and remove jargon very, very quickly and efficiently. They can check facts, but again, they can generate facts themselves or alternative facts. They can translate into multiple languages faster than any human can do. They can transcribe audio, video to text, and they can turn text into audio.”

But it is the creeping use of AI that we need to be wary of as it works its way into the editing and publishing process.
“Lots of publishers are now using generative AI to suggest a headline. I was at The Times earlier this year, and they’re using that to suggest a headline based on the story that the sub-editor is actually sub-editing at that moment. That places a human in the loop, a very important phrase, who then says, yes, that’s pretty good, but I’m going to humanly tweak it a little bit more. But that speeds them up. It’s suggesting picture captions. It’s writing the metadata that is associated with images and the story itself, so those stories can be found within archives. No one wants to write metadata. It’s a really painstaking piece of drudgery.”
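That “human in the loop” workflow can be sketched in a few lines. The outline below is purely illustrative, assuming a placeholder suggest_headline() function standing in for whichever model a publisher actually calls; it is not a description of The Times’s system:

    def suggest_headline(story_text):
        """Placeholder for a call to a generative model; any vendor API could sit here."""
        first_sentence = story_text.split(".")[0]
        return first_sentence.strip()[:60]  # crude stand-in for an AI suggestion

    def headline_with_human_in_the_loop(story_text):
        draft = suggest_headline(story_text)
        print(f"AI suggestion: {draft}")
        # The human always has the final say; the model only speeds up the first draft.
        edited = input("Press Enter to accept, or type your own headline: ").strip()
        return edited or draft

The design point is that the model only produces a first draft; a person makes the final decision before anything is published.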
Going further is when alarm bells should start ringing.
Start using it to produce summaries of longer content and you expose yourself to the risk of poor-quality, even inaccurate, material undermining trust in your brand. Before you realise it, you are using it to write whole stories, raising complex issues of labelling and transparency. He looked at examples of what can happen once AI is used indiscriminately, including making up quotes when a reporter is too lazy to get the real thing.

Knowing what to trust among the vast outputs of generative AI is a serious challenge and unless responsible media brands maintain integrity and trust everyone will be the poorer:
“The World Wide Web is drowning in what’s called AI slop: low-quality, often inaccurate, generally wrong text and images. And that’s causing a problem among consumers, among readers, of what to believe. Because there are bad actors out there, they’re actually creating this, AI deep fakes. And the more that they’re created and then shared on social media by bad actors – let’s include in that Donald Trump, Elon Musk, many other people of that ilk – the credibility, according to the machines, goes up.”
He said social media and AI were “toxic twins”.

The wide-ranging Q&A session raised questions about digital literacy in education, the impact on employment, and the effectiveness of EU regulation, given the global reach of AI, especially whether EU regulation could make a difference without American co-operation.
Mr McNamara defended the importance of EU regulation, noting that large corporations spend significant resources lobbying in Brussels because “it matters” to their operations and profit margins in Europe.
The debate is not confined to Europe: the Australian government this week ruled out a copyright exception that would have allowed AI developers to train their generative models on creative content without the consent of rightsholders. Announcing the decision, Attorney-General Michelle Rowland made it clear that any shakeup wouldn’t include a text and data mining (TDM) exception favoured by AI developers.

Michael McNamara and Graham Lovelace at the AEJ UK 28 October 2025

Audio recording of Michael McNamara and Graham Lovelace at AEJ UK


Reports on AEJ UK meetings are available on this website.

Michael McNamara
Graham Lovelace
Is AI dividing us politically? – BBC The Artificial Human 15 October 2025
AI impacts on consuming news and trust – BBC Radio 4 Media Show 22 October
Journalism is not dead – Llewellyn King AEJ 17 October 2025