Meta unveils team to combat disinformation and AI harms in EU elections
Tech giant’s head of EU affairs says team will bring together experts from across the company.

Facebook owner Meta has unveiled plans to launch a dedicated team to combat disinformation and harms generated by artificial intelligence (AI) ahead of the upcoming European Parliament elections.

Marco Pancini, Meta’s head of EU affairs, said the “EU-specific Elections Operations Center” would bring together experts from across the company to focus on tackling misinformation, influence operations and risks related to the abuse of AI.

“Ahead of the elections period, we will make it easier for all our fact-checking partners across the EU to find and rate content related to the elections because we recognize that speed is especially important during breaking news events,” Pancini said in a blog post on Sunday.

“We’ll use keyword detection to group related content in one place, making it easy for fact-checkers to find.”

Pancini said Meta’s efforts to address the risks posed by AI would include a feature allowing people to disclose when they share AI-generated video or audio, with possible penalties for those who fail to do so.

“We already label photorealistic images created using Meta AI, and we are building tools to label AI-generated images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock that users post to Facebook, Instagram and Threads,” he said.

The launch of AI platforms such as OpenAI’s GPT-4 and Google’s Gemini has raised concerns about the possibility of false information, images and videos influencing voters in elections.

The European Parliament elections, which take place from June 6 to 9, are among a raft of major polls being held in 2024, which has been dubbed the biggest election year in history.

Voters in more than 80 countries, including the United States, India, Mexico and South Africa, are set to go to the polls in elections representing about half the world’s population.

Meta earlier this month joined 19 other tech companies, including Google, Microsoft, X, Amazon and TikTok, in signing a pledge to clamp down on AI content designed to mislead voters.

Under the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, the companies agreed to take eight steps to address election risks, including developing tools to identify AI-generated content and enhancing transparency about efforts to address potentially harmful material.

The influence of AI on voters has already come under scrutiny in a number of elections.

Pakistan’s jailed former Prime Minister Imran Khan used AI-generated speeches to rally supporters in the run-up to the country’s parliamentary elections earlier this month.

In January, a fake robocall claiming to be from United States President Joe Biden urged voters not to cast their ballots in the New Hampshire primary.

