Report by David Worsfold, AEJ-UK Treasurer, on an AEJ meeting with Professor Ciaran Martin at Regent’s University on 21 February 2024 on the theme of ‘Artificial Intelligence and Democracy: Friend or Foe?’
Artificial intelligence is here to stay. It will be disruptive but people should not be sucked into believing all the doomsday scenarios. This was the core message of Professor Ciaran Martin’s wide-ranging review of the current and potential future impact of AI on political and electoral processes.
He cautioned against “infantilising” the potential threats from AI by over-hyping them, citing several examples where people – including respected media outlets – had fallen into this trap.
“We sometimes suspend credulity when it comes to technology. We were just getting past the cyber Pearl Harbour scenarios and then AI comes along”. Turning to the journalists in the room he had a simple plea: “Do not suspend your critical faculties just because it is about AI”.
There has been a tendency to forget that many of the sins we attribute to AI are not new, especially the deployment of fake information. The obsession today might be with deepfake audio – which Prof Martin said can be very credible – and deepfake video – about which he was rather more sceptical – but they are just new iterations of a very old phenomenon.
“A lot of this is not completely new. It is just that the technology has made it easier and enables it to spread faster … electoral interference in its current form has been going on for some time. It pre-dates AI by a hundred years.”
He cited the example of the notorious Zinoviev Letter, published by the Daily Mail during the 1924 General Election campaign. It purported to prove connections between the Soviet Union and the British Communist Party and, by implication, the Labour government. It was later proved to be a forgery, but its publication just four days before the election contributed to the Labour government’s substantial defeat.
He examined various recent examples from the UK, Europe and the United States of attempts to interfere in elections, highlighting where several interventions had failed, although cautioning that this did not mean there was any room for complacency.
Looking beyond elections, he said the AI Safety Summit hosted by the UK government last November showed we are still a long way from an agreed definition of what constitutes harm from AI. This does not mean we are left exposed to some of the disaster scenarios beloved of the “AI doomers”, however.
Planes would not fall out of the sky if air traffic control systems had to be taken offline after a cyber attack, because human pilots would still be in place to take over, as was demonstrated during the failure of the UK air traffic control system last August: “Planes still landed safely”.
Doubts about the safety of driverless cars when systems fail are the main reason why the vision, prevalent just a few years ago, of roads populated by driverless cars has not come to fruition: “We have choices to make and to let driverless cars on our roads before we know they can be safe when systems fail would be really stupid”.
He felt the corporate world was getting deflected into worrying about things it did not need to fear, partly influenced by the hype surrounding AI.
“Economic disruption caused by technological advances is nothing new. It has been going on since the invention of the automated spinning loom. We are used to managing it”.
This did not mean that it would not be economically and socially disruptive, something society would have to prepare itself for.
He used the question and answer session to remind people of some of the potential limitations of AI.
“The internet is a gargantuan physical construct. It is not ‘virtual’ [a word he said he discouraged his students from using]. It requires huge amounts of energy. That could limit the development of AI.”
Asked about China’s attitude, and fears that it, along with Russia, could deploy AI to disrupt elections and more in the West, Prof Martin said China had many concerns about the uncontrolled development of AI: “The idea that China welcomes easy malevolence in the world of AI is nonsense”.
It had made some attempts to influence the data used in the development of generative AI systems, seeking to build in bias in favour of the Communist system, but had found this difficult and was now hesitant about extending that approach.
Challenged about the potential for interference in UK elections, he said the biggest vulnerability was the political parties themselves as they are small organisations, often not well resourced. He acknowledged that deepfakes will become more common: “It is difficult to stop people making deepfakes but just because they can … does not mean we are powerless”.
The social media platforms need to be more rigorous in policing deepfakes, and responsible politicians need to ensure that this “poison does not enter our body politic”. Facing up to this threat, he said, was not a party political issue.
Among the biggest challenges will be developing a system of regulation, a topic that exercised the global leaders who gathered at Bletchley Park for the AI Safety Summit. It raised many questions that will be very hard to answer, including how to reconcile the European approach of prescriptive regulation with the free-market approach favoured by the United States, and where authoritarian regimes fit into any system of global regulation.
To answer these questions will require a new level of competence among decision makers: “The next generation of government and business leaders must be fluent in technology … they do not have to be experts but they need to be confident in their understanding of it”.
• Ciaran Martin was the founding chief executive of the UK National Cyber Security Centre, part of GCHQ, and is currently professor of practice in the management of public organisations at the Blavatnik School of Government.
• The meeting was attended by AEJ-UK members, other journalists, and staff and students from Regent’s University, and was facilitated by AEJ-UK chair William Horsley.