AI’s Latest Hazard for School Employees: Deepfakes

The potential for artificial intelligence to fabricate convincing audio and video of real people, which a disgruntled ex-employee in Baltimore recently did to smear a principal, is raising alarms about regulation.


(TNS) — A recent, AI-generated “deepfake” audio recording of a principal making hateful comments has laid bare an uncertain landscape for educators — a bleak one that could consist of costly investigations, ruined reputations, and potential defamation cases.

On April 25, the Baltimore County Police Department charged Dazhon Darien, 31, the athletic director at Pikesville High School in Baltimore, Md., with theft, stalking, and disrupting school operations. Darien had created and circulated a faked audio clip of Pikesville’s principal, Eric Eiswert, making racist and antisemitic remarks about students and colleagues. The clip, which surfaced in January, quickly went viral and divided the school’s community over its veracity.

For more than a year, U.S. schools and districts have grappled with the wide-reaching implications of AI technology for teaching and learning. What happened to Eiswert, who has since been absolved of wrongdoing, shows that AI can also be weaponized against school officials — and that most districts are ill-equipped to handle that threat.

School leaders have noticed — and they believe something similar could just as easily happen to them or their staff.

“I was very alarmed to see that AI could be used in this way. As someone who is considered by law to be a public figure, you are always open to criticism of this nature,” said Kimberly Winterbottom, principal of Marley Middle School in Glen Burnie, Md. “Someone could snap a picture across a parking lot and put it up on social media. But this is a whole new level.”

A LACK OF POLICY

After the deepfake audio recording came out, Eiswert was put on administrative leave between January and April while the county police and the school district investigated. He isn’t coming back to Pikesville High this school year, said Myriam Rogers, the superintendent of Baltimore County Public Schools, in a statement. The district is “taking appropriate action regarding the arrested employee’s conduct, up to and including a recommendation for termination.”

Rogers did not clarify whether Eiswert will return for the new school year.

The faked audio clip, shared more than 27,000 times, roiled the Baltimore County school community, prompting demands that Eiswert be removed as principal. Eiswert received phone calls and messages threatening his physical safety and that of his family.

Long before the incident, though, there were growing undercurrents in schools of AI tools being misused to target students and educators alike.

Male students have used apps to generate fake pornographic images and videos of female students; in March 2023, a group of high school students in Carmel, N.Y., created a deepfake video of a middle school principal in the district shouting angry, racist slurs and threatening violence against his Black students.

Such cases have shone a light on the yawning gap between a rapidly evolving technology and the lack of policies to govern it.

“We definitely need some adaptation to bring the laws up to date with the technology being used. For instance, the charge of disrupting school activities only carries a maximum sentence of six months,” said Scott Shellenberger, the Baltimore County state’s attorney, at a press conference held after Darien’s arrest.

PRINCIPALS ARE VULNERABLE BECAUSE OF THEIR POSITIONS

It’s still unclear what specific tool Darien used to create the deepfake. A report by the Baltimore Banner said that Darien had used the school’s internet to search for OpenAI’s tools and large language models that could process data to produce conversational results.

As authority figures who must take disciplinary action from time to time, principals contend that they are more susceptible to backlash and vengeful reactions, which can now easily take the form of believable-yet-fake video and audio clips. They fear that the technology will progress to a point where it will be difficult to distinguish between real and fake.

The relative ease with which Darien faked the audio has principals thinking closely about how they communicate with their staff, students, and the parent community.

For one thing, it doesn’t take a lot of data for an AI tool to be able to replicate a voice from an audio clip.

“I have a colleague who sends out a voice message to her student community. I told her she should stop that,” said Melissa Shindel, the principal of Mayfield Woods Middle School in Elkridge, Md. “It could need less than a two-minute audio clip. And you can’t always trace the origin.”

Shindel said she’s been cautioning other school leaders about their unbridled support of AI in their schools.

“People are in denial about the harm it could do,” she said. “Deepfakes are more damaging than negative social media posts. You believe what you see or hear, over what you read.”

A troubling notion, exemplified in the Eiswert case, is that a grievance could spin into AI-fueled revenge — a parent unhappy about how a child was disciplined, a student or staff member angry about a decision. Shortly before Darien created and spread the fake audio clip, Eiswert had been investigating Darien’s alleged misuse of school funds.

“We are vulnerable. Everything that happens in the school funnels to me. Credit and blame,” Shindel added.

Winterbottom, the principal from Glen Burnie, said Darien’s extreme actions made her revisit the impact she has on people, especially when disciplinary issues are involved. But Darien’s arrest has made Winterbottom hopeful that the case sends the right message.

“I’m ecstatic that they were able to trace it [the audio]. The precedent is that you’re going to get caught,” she said.

She hopes it will make people think twice before jumping to conclusions when they encounter a potentially faked recording.

DISTRICTS CAN BE PROACTIVE ON AI USE, BUT CAN’T PREVENT MISUSE

Eiswert, though pilloried on social media and put on administrative leave by his district, had the support of the Council of Advisory and Supervisory Employees, which represents school administrators. CASE’s executive director, William Burke, said in an email that CASE has maintained the audio was AI-generated from the time it surfaced.

CASE engaged AI experts to assess if the audio was real, and put Eiswert through a polygraph test, the results of which, Burke said, showed conclusively that Eiswert had not “made the statements on the audio.” The evidence from the AI experts and the results of the polygraph were shared with the police, Burke said.

Eiswert did not respond to several requests for comment sent via CASE, which is handling media inquiries related to the incident.

Such investigations can prove expensive and time-consuming for unions, schools, and district administrations, especially if they have to investigate multiple cases.

“School and district administrators will be given what looks like real evidence [of wrongdoing]. If they don’t know the risks associated with a tool like AI, they may believe the evidence even if it’s falsified,” said Adam Blaylock, a lawyer who works with school districts in Michigan.

Blaylock fears that the lack of awareness about AI could put districts at risk of lawsuits. “If you end a young administrator’s career based on something that’s not true, it opens up the district to a huge risk of litigation,” said Blaylock.

As for victims of AI fraud, states’ defamation laws typically put the burden of proof on the victim, who would have to engage experts to prove it’s not them on the audio or video — an expensive proposition.

“A principal, like in Pikesville’s case, may feel personally harmed. But if there was no firing or demotion, they will be hard pressed to show damages,” Blaylock said.

Blaylock is keen to help school districts avoid the pitfalls of AI-generated deepfakes. His advice is to build defenses that help identify AI-generated content.

Updating student and employee handbooks with specific clauses about AI is one idea.

“We have one districtwide policy about technology misuse, which covers cellphones. It should now have language specific to AI,” said Winterbottom.

School and district teams should have at least one member who is either an AI expert or who constantly updates their knowledge of the technology’s rapid developments. “There are some telltale signs of AI-generated content. People in AI videos will sometimes have six fingers. The expert on the team will be familiar with these indicators,” said Blaylock.

Ultimately, there is no Turnitin for AI deepfakes yet, Blaylock said, referring to the popular tool used to detect plagiarism in student work. District administrators can only hedge the risk — maintain a list of approved generative-AI tools, train individuals on the appropriate use of AI, and comply with data-protection standards when dealing with any contractor who will use the school’s data.

Blaylock encourages individual school leaders to maintain a healthy skepticism about media that strains credulity, while not backing away from muscular leadership.

“The risk is going to be focused on how we review information. But I can’t ask school leaders not to do what’s best for their kids … because they’re scared of a deepfake.”

©2024 Education Week (Bethesda, Md.). Distributed by Tribune Content Agency, LLC.
