Analysis | Fake celebrity statements and videos on Israel-Hamas war — and other news literacy lessons


Here’s the latest installment of a feature I’ve been running for several years: lessons from the nonprofit, nonpartisan News Literacy Project (NLP) that aim to teach students and the public how to sort fact from fiction in our digital and contentious age. With rumors, baseless accusations, conspiracy theories and disinformation spreading on social media and partisan sites, there has not been a time in recent U.S. history when this skill has been as important as it is now.

The material in this post comes from the Sift, the organization’s newsletter for educators, which has more than 10,000 readers. Published weekly during the school year, it explores timely examples of misinformation, addresses media and press freedom topics, looks at social media trends and issues, and includes discussion prompts and activities for the classroom.

Get Smart About News, modeled on the Sift, is a free weekly newsletter for the public. The NLP also has a free e-learning platform, Checkology, that helps educators teach middle and high school students how to identify credible information, seek reliable sources and know what to trust, what to dismiss and what to debunk. It also gives students an appreciation of the importance of the First Amendment and a free press.

Checkology and all of the NLP’s resources and programs are free. Since 2017, more than 475,000 students have used the platform. The organization has worked with more than 60,000 educators in all 50 states, the District of Columbia, Puerto Rico and more than 120 countries.

Here’s material from this week’s issue of the Sift:

Dig deeper: Don’t miss this week’s classroom-ready resource.

Top picks

1. Should public officials be allowed to block constituents on social media? That’s the First Amendment question behind two Supreme Court cases, including one brought by California parents who sued their public school board members for blocking them on social media. A lawyer representing the school officials argued last week that their social media pages were personal — not government pages. However, a lawyer representing the parents told the justices that the board members’ individual accounts include school information the parents couldn’t obtain elsewhere, so being blocked from the pages violated their First Amendment rights as constituents.

Discuss: Do you use social media to follow school updates? If so, do you think being blocked from viewing or commenting on any of those social media pages would violate your First Amendment rights? Why or why not? If you were a school board member and used your account to post about schools, would you want to be able to block specific parents? If you were a Supreme Court justice, what questions or thoughts would you consider before making a ruling in this case?

Resources:

— “The First Amendment” (NLP’s Checkology virtual classroom).

— “News Lit Quiz: So, what’s the First Amendment?” (NLP’s Resource Library).

Related:

— “School Board Members’ Use of Social Media Faces Key First Amendment Test in Supreme Court” (Mark Walsh, Education Week).

— Opinion: “What Matters Most in the Supreme Court’s Upcoming Social Media Cases” (Jameel Jaffer, the New York Times).

— “Opinion | Those guys yelling on sports shows? Yeah, the First Amendment protects them, too” (Tom Jones, Poynter).

Dig Deeper: Use this think sheet to take notes on the First Amendment issues in this case (meets NLP Standard 2).

2. After Microsoft began relying on artificial intelligence technology to curate news for MSN.com, inaccurate and sometimes bizarre stories and headlines began appearing for its millions of readers. In one troubling example, a Guardian story about a woman who was killed was recently shown on MSN alongside an AI-generated reader poll that asked, “What do you think is the reason behind the woman’s death?” and listed three options: murder, suicide or accident. The poll prompted swift criticism from readers and from the Guardian for potentially distressing the victim’s family and tarnishing the paper’s reputation. A Microsoft spokesperson said in a statement that the company was “committed to addressing the recent issue of low quality news.” MSN is the default homepage for Microsoft’s web browser, Microsoft Edge.

Note: Microsoft is one of the News Literacy Project’s funders.

Discuss: How does AI-generated misinformation affect people in real situations? How should AI technology be used in newsrooms? Can you tell if news has been curated by humans instead of AI?

Resources:

— “News literacy in the age of AI” (NLP’s AI page).

— “Practicing Quality Journalism” (Checkology virtual classroom).

Related:

— “News Group Says A.I. Chatbots Heavily Rely on News Content” (Katie Robertson, the New York Times).

— “The Problems Biden’s AI Order Must Address” (The Markup).

3. Out-of-context photos and videos of other conflicts are being passed off as images of the Israel-Hamas war on social media. Examples include a 2013 photo of dead Syrian children that was falsely described in a post as showing recently killed Palestinians, and a gruesome 2015 video of a girl being attacked in Guatemala that was falsely described as a Hamas attack. Photographer Hosam Katan, whose work is among the visuals taken out of context, said in a New York Times interview that while these kinds of posts may be intended to gain empathy, “such fake videos or photos will have the opposite impact, losing the credibility of the main story.”

Discuss: What roles do influencers or online creators play in your media diet? What are the pros and cons of following influencers online? Do you follow any creators specifically for updates on news or current events? How can you verify the information they share?

Idea: Ask students to do a reverse image search on images they’ve seen on social media to check whether the posts framed them in the correct context.

Resources:

— “Navigating misinformation about the Israel-Hamas war” (News Literacy Project).

— “Breaking News Consumer’s Handbook: Israel and Gaza Edition” (WNYC Studios).

Related:

— “How to deal with visual misinformation circulating in the Israel-Hamas war and other conflicts” (Paul Morrow, the Conversation).

— “Israel-Gaza war sparks debate over TikTok’s role in setting public opinion” (Drew Harwell and Taylor Lorenz, The Washington Post).

— “Using Israel-Hamas war to help students sort through online misinformation” (Hannah Gross, N.J. Spotlight News).

— “The Israel-Hamas War Is Taking an Unprecedented and Deadly Toll on Journalists” (Astha Rajvanshi, Time).

RumorGuard Rundown

You can find this week’s rumor examples to use with students in these slides.

False ‘crisis actor’ claims about Israel-Hamas war spread via out-of-context images

NO: The videos and photos in these posts do not show “crisis actors” (people hired to play a role) staging incidents purported to have occurred in the Israel-Hamas war.

YES: The video supposedly showing a dead person texting from inside a body bag was taken in Thailand in 2022 and depicts a participant in a Halloween costume contest.

YES: The video provided as “evidence” that a person was a “crisis actor” because he was “miraculously” healed in one day shows two different people at two different times.

YES: The photos apparently showing the same child surviving three separate Israeli attacks in October 2023 are of a girl being helped after Aleppo, Syria, was bombed in 2016.

YES: False crisis-actor claims often circulate after mass shootings and armed conflicts.

NewsLit takeaway: False and conspiratorial claims about “crisis actors” being used to stage or exaggerate mass-casualty events are frequently spread by bad actors seeking to sow doubt about the authenticity and severity of tragedies, and to create distrust in institutions such as government and legacy news outlets. While these claims evoke sensational notions of nefarious plots and global conspiracies, they rely on the same old tricks of false context that are used to spread a lot of online disinformation. The images included in the above screenshots are all authentic, for example, but none of them involve crisis actors. Conspiratorial claims about crisis actors gain traction by exploiting highly emotional incidents and offering a temptingly simple explanation for otherwise incomprehensible events. While these claims may be emotionally appealing, checking them against credible sources and employing some basic fact-checking techniques provides evidence that they are based on falsehoods.

Fake celebrity political endorsements spread over Israel-Hamas war

NO: This is not a genuine photograph of the soccer star Lionel Messi holding an Israeli flag.

NO: This is not an authentic video showing a Palestinian flag on the actor Jason Statham’s car.

NO: This is not an authentic video of model Bella Hadid giving a speech in support of Israel.

YES: These are fake, digitally manipulated, or out-of-context videos and images.

NewsLit takeaway: Falsely claiming that a celebrity has endorsed a certain political opinion is a common tactic used by purveyors of disinformation attempting to disrupt or distort the cultural conversation. They manipulate an image, as in the Messi rumor; present media out of context, so that Statham appears to be in a viral video when he is not; or use artificial intelligence to create deepfake videos, like the one featuring Hadid. These false rumors all rely on the popularity and appeal of celebrities to influence public opinion. Getting in the habit of examining sources, both the account sharing a rumor and the origins of the post, is a good way to determine whether something spreading on social media is authentic. And remember, breaking news and current events are ripe for exploitation.

Kickers

• More people are turning to online creators and influencers for updates on current events because their coverage is “more accessible, informal” and “feels more relevant,” according to a Reuters Institute report. While some of these influencers have training in journalism, others are activists and partisan commentators who spread misleading information.

• Are news and social media just not meant for each other? The Atlantic’s Charlie Warzel has some thoughts.

• Will AI contribute to election misinformation next year? Most American adults (58 percent) believe it will, and an even bigger majority (83 percent) say it would be a “somewhat or very bad thing” for presidential candidates to use AI to create false or misleading content for political ads, according to a new poll.

• It’s been almost a year since ChatGPT debuted, and educators have adopted a wide range of outlooks on how students can or should use the generative AI text tool.

• Women and teenagers are the most common targets for AI-generated nude images and videos, and there are hardly any regulations in place to protect those victims.

• Black Americans over age 65 are twice as likely (46 percent) as Black Americans under age 30 (23 percent) to see local news coverage about their community “extremely or fairly often,” according to new Pew Research Center findings.

• What kind of news stories are most useful for voters? Should election coverage focus on poll numbers and treat the subject as a competitive race, or should stories be more focused on the issues and the positions and policies of the candidates? This professor thinks the answer is clear.

• An Alabama newspaper reporter and a publisher were recently arrested on charges of revealing grand jury secrets after publishing an investigative report on the improper use of federal coronavirus pandemic funds by local officials. The Committee to Protect Journalists has demanded that the charges be dropped because the reporter and publisher “should not be prosecuted for simply doing their jobs and covering a matter of local interest.”
