Discussing Deepfakes: how could we respond to the threat of new deepfake technologies and seize the potential of synthetic media?

From: techUK
Published: Fri Jun 07 2024


In 2024 more than four billion people around the world are eligible to vote in elections.

techUK shares a new discussion paper on deepfakes and synthetic media technology. Read the full report here.

As well as being a major moment for democracy, this year is also likely to see unprecedented demand for deepfakes, misinformation and disinformation from malign actors.

With the UK heading to the ballot box on July 4, political parties, the tech sector and the public will need to think about the risks of deepfakes.

What are synthetic media and deepfakes?

Deepfakes are a subset of synthetic media in which visual or audio information is manipulated or altered to create convincing fake content, ranging from images to audio and video.

A common example of deepfake use is creating realistic sounding but wholly artificial recordings of someone's voice.

Deepfakes introduce a challenge to the democratic process through two principal methods:

  • Intentional deception: by creating realistic fake audio and/or video aimed at discrediting or falsely enhancing a political candidate, or by using such artificial material to stoke social unrest with the intent of affecting the outcome of an election.

  • The liar's dividend: where the widespread use of deepfakes and misinformation engenders an atmosphere of mistrust in all reporting and campaigning. This can undermine voters' access to facts and high-quality information ahead of voting.

Deepfakes have already made an impact on European elections. In Slovakia's recent Parliamentary election, a high profile deepfake was used to spread disinformation ahead of polling day.

In this case an audio recording was posted that included two voices. The deepfake audio aimed to show the leader of the liberal Progressive party, and a senior journalist from a major publication, discussing how to rig the election, partly by buying votes from a minority group.

While the recordings were fake, Slovakia, like the UK, has a pre-election moratorium restricting media coverage and political campaigning. This made it difficult for the affected party to discredit the deepfake during a crucial period before voters went to the polls.

While the impact of the deepfake on the election has not been measured, the liberal candidate who it targeted lost the election.

How can we respond to the threat of deepfakes during elections?

During the UK election we are likely to see a number of threats from deepfakes. There will be deepfakes produced by lone wolf actors and organised groups, as well as the risk of hostile states sponsoring the use of the technology.

The UK has an evolving legal framework to tackle deepfakes and misinformation, including the recently passed Online Safety Act and agreed plans to tackle online fraud. However, these new frameworks are early in their implementation and will not cover all eventualities.

Taking best practice from the cybersecurity industry, actors across the journey of a deepfake could collaborate to reduce the risk they pose.

How can we provide a coordinated response to deepfakes during the UK general election?

Tech companies have already developed sophisticated tools to counter deepfakes and disinformation. However, responding to the risk of deepfakes will likely require coordinated action between technology companies, political parties, the media and Government agencies.

  • Sharing a common understanding of a deepfake: a shared understanding of deepfakes could be agreed across the political and media ecosystem. For example, a deepfake could be understood as content created or altered with the intention of meaningfully deceiving its audience, with the effect of causing reputational damage or subverting the usual course of the democratic process.

    Such a shared understanding would focus efforts on deepfakes designed to cause harm while keeping satirical material and material created for artistic purposes out of scope.

  • Authentic content and labelling: political parties, campaign organisations and the media could commit to not creating intentionally deceptive content, and to clearly labelling any altered materials shared during the election campaign using content provenance tools such as the C2PA standard.

    The think tank Demos has recently called for political parties to use generative AI technology responsibly during the campaign. You can read their open letter here.

  • Detection and identification: techUK members are developing advanced detection tools and technologies to identify deepfakes across platforms, bolstering digital authenticity and credibility. A large consortium of tech companies has also run the Deepfake Detection Challenge (DFDC) to find better ways to identify manipulated content and build better detection tools (a minimal sketch of one common detection approach appears after this list). You can read more in our report.

  • Information sharing: organisations could establish a named point of contact and a process for sharing confidential information, to aid in tackling suspected unlabelled third-party deepfakes. Once a deepfake has been discovered, a take-down or ‘kill' notice, similar to those used in the publishing industry, could be issued to aid its removal from all major media and news platforms.

  • A coordinated response: once a deepfake has been identified, key actors could take action to suppress its sharing and confirm its inauthenticity. This could be done via removal from social media channels, encouraging supporters and affiliates not to share the deepfake, not reporting on the content of the deepfake in the media, or sharing a coordinated message confirming the deepfake's inauthenticity in the event of a widespread incident.
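To make the detection approach above more concrete, here is a minimal sketch of a frame-scoring pipeline of the kind popularised by DFDC-style systems: a classifier assigns each video frame a probability of being fake, and the per-frame scores are aggregated into a single video-level verdict. The score_frame function is a hypothetical stand-in for a trained detection model, which this sketch assumes rather than implements.

```python
from statistics import mean
from typing import Callable, Sequence

# Hypothetical stand-in for a trained frame-level deepfake classifier.
# A real pipeline would run a neural network over each decoded frame;
# here we only assume it returns a probability that the frame is fake.
FrameScorer = Callable[[bytes], float]

def classify_video(frames: Sequence[bytes],
                   score_frame: FrameScorer,
                   threshold: float = 0.5) -> tuple[float, bool]:
    """Aggregate per-frame fake probabilities into a video-level verdict."""
    if not frames:
        raise ValueError("no frames to score")
    scores = [score_frame(frame) for frame in frames]
    video_score = mean(scores)  # simple mean; robust aggregates also common
    return video_score, video_score >= threshold

# Usage with a dummy scorer that flags frames containing a marker byte:
if __name__ == "__main__":
    dummy_scorer: FrameScorer = lambda f: 0.9 if b"\xff" in f else 0.1
    frames = [b"\x00\x01", b"\xff\x02", b"\xff\x03"]
    score, is_fake = classify_video(frames, dummy_scorer)
    print(f"video score={score:.2f}, flagged={is_fake}")
```

Averaging frame scores is the simplest aggregation strategy; production systems often use more robust statistics so that a handful of badly scored frames cannot flip the verdict.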

techUK report: Discussing Deepfakes - the opportunities and challenges of synthetic media technology

In a new report techUK explores the growing threat of deepfakes and the positive opportunities presented by synthetic media.

The report outlines a strategy for a collaborative response from government, industry, and society. Effective measures require a multi-pronged approach encompassing legal frameworks, technological advancements, and public education.

The current legal framework attempts to tackle the negative impacts of deepfakes through a range of legislation, from recent measures in the Online Safety Act to amendments in the Criminal Justice Bill, among others. But real-time action is crucial both to mitigate harm from deepfakes and to allow innovation in synthetic media to continue.

Synthetic media is rapidly transforming how we interact with information, unleashing a new wave of creativity and innovation. It offers immense potential to revolutionise education, enhance accessibility, and empower innovative storytelling, from text to image generators and synthetic voice generation to AI avatars and personal assistants.

As deepfakes become harder to detect with the naked eye, a combination of cross-sector tools is necessary to combat them. Solutions such as detection and mitigation technologies play a key role. Cross-sector initiatives, such as the AI Elections Accord, also focus on collaborative efforts to prevent deceptive AI content. Watermarking and labelling techniques help ensure media authenticity by embedding cryptographic data into images. Fact-checking tools play a crucial role in disproving deepfakes, while AI-driven identity verification enhances security in digital interactions. Finally, increasing media literacy through education is also essential if the public is to discern authentic from manipulated content.
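As a rough illustration of the cryptographic idea behind such labelling, the sketch below binds a media file's hash into a signed manifest so that later edits can be detected. Note this is a simplified, hypothetical scheme: real provenance standards such as C2PA sign manifests with X.509 certificates rather than the symmetric key assumed here.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; C2PA and similar standards use
# certificate-based signatures, not a shared symmetric key.
SIGNING_KEY = b"example-signing-key"

def make_manifest(media: bytes, creator: str) -> dict:
    """Create a simplified provenance manifest for a media asset."""
    digest = hashlib.sha256(media).hexdigest()
    payload = json.dumps({"creator": creator, "sha256": digest},
                         sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"creator": creator, "sha256": digest, "signature": signature}

def verify_manifest(media: bytes, manifest: dict) -> bool:
    """Check the asset is unmodified and the manifest is authentic."""
    payload = json.dumps({"creator": manifest["creator"],
                          "sha256": manifest["sha256"]},
                         sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and hashlib.sha256(media).hexdigest() == manifest["sha256"])

if __name__ == "__main__":
    image = b"...image bytes..."
    manifest = make_manifest(image, creator="Example Campaign")
    print(verify_manifest(image, manifest))         # True: untouched
    print(verify_manifest(image + b"x", manifest))  # False: asset altered
```

Any edit to the asset changes its hash and breaks verification, which is what makes signed manifests useful for flagging altered campaign material.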

As we move forward, it becomes crucial for governments, industries, and society to collaboratively shape a comprehensive legal and policy framework and scale out solutions.

Any future steps should focus on three key areas:

1. Innovation

Online Safety Sandbox: An online safety sandbox focused on disinformation, misinformation and deepfakes could be used to create a vast library of media, including both real and manipulated content, and to train AI models to better identify deepfakes in the real world.
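A library of this kind would pair each asset with a ground-truth label so that detection models can be trained and then scored against held-out examples. As a minimal, hypothetical sketch of the evaluation side, the code below computes precision and recall for a detector over a labelled set; detector stands in for whatever model the sandbox has trained.

```python
from typing import Callable, Iterable

def evaluate_detector(detector: Callable[[bytes], bool],
                      labelled_media: Iterable[tuple[bytes, bool]]) -> dict:
    """Score a deepfake detector against ground-truth labels.

    Each item pairs raw media bytes with True if the asset is a
    known fake. Precision and recall are the usual headline
    metrics for detection challenges.
    """
    tp = fp = fn = tn = 0
    for media, is_fake in labelled_media:
        predicted = detector(media)
        if predicted and is_fake:
            tp += 1
        elif predicted and not is_fake:
            fp += 1
        elif not predicted and is_fake:
            fn += 1
        else:
            tn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"precision": precision, "recall": recall,
            "total": tp + fp + fn + tn}
```

In a real sandbox, the labelled evaluation set would be held out from training so that these metrics reflect generalisation rather than memorisation.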

Encouraging innovation for synthetic media: Synthetic media should be included as a focus area within existing AI technology challenge funds to enable research and development (R&D) in applications across various sectors.

2. Standards and best practice

Accelerate Adoption of Content Provenance: C2PA should be supported across industry as current best practice in content provenance. This would aid the identification of authentic media and address rising threats of disinformation.

Support Deepfake Detection, Labelling and Fact Checking Tools: Ofcom should promote the existing best practice that companies could use to tackle the malicious uses of deepfakes.

3. Media Literacy and Collaboration

Promote National Media Literacy: The Government, relevant regulators and industry should collaborate to facilitate training sessions for journalists and other professionals likely to be targeted by deepfakes. Deepfake- and disinformation-focused media literacy should also form an integral part of children's education in schools.

International Cohesion: Given the global and multifaceted nature of the deepfake problem, addressing it requires a collaborative effort. Other states have begun to invest heavily in labelling and media literacy around deepfakes, and it will be crucial to learn from best practice as it emerges.
