Ron DeSantis or ChatGPT? When is it cheating? UCSC chancellor on artificial intelligence in education


UC Santa Cruz Chancellor Cynthia Larive said she would sooner give ChatGPT the task of developing a curriculum than Republican Florida Gov. Ron DeSantis. Of course, when tech guru Guy Kawasaki posed the leading question to Larive during a local artificial intelligence conference last week at Santa Cruz’s Pasatiempo Golf Club, he did so with a playful, impish smile. Larive hardly hesitated, and the answer drew the kind of laugh you’d expect from a crowd of largely California- and Central Coast-based tech leaders, thinkers and hobbyists.

A year ago, Kawasaki’s tongue-in-cheek question and Larive’s answer wouldn’t have landed the same. Sure, it can be fun to roast a political party, but the notion of an artificial intelligence model that could even understand the concept of a curriculum, let alone build one, was an absurdity confined to the pages of science fiction. How quickly things have changed.

The emergence of ChatGPT and other generative AI applications has marked an inflection point in education, technology and beyond. When prompted, these programs can quickly draw on vast amounts of text and imagery gathered from across the internet, synthesize that information and spit out relatively cogent writing and images resembling human creation. AI is here to stay, and leaders such as Larive will play a major role in determining whether it will be a tool or a torment.

Unprecedented change has been the only constant in Larive’s tenure as UC Santa Cruz chancellor since she rose to the post in 2019. She had barely shaken off the pandemic and the shift to remote learning before ChatGPT and other generative AI systems came around the corner. She will now have to help the university navigate a technological revolution that many describe as a seminal moment in human history.

“We know students are going to use ChatGPT and, probably, we want to encourage them and teach them how to deal with digital literacy but also ethical application,” Larive said during the event. “I think we have to embrace the opportunity to figure out how we incorporate it into our curriculum.”

Larive said she is “not too frightened” about artificial intelligence, ChatGPT and other large language models like it. She said the technology is already relieving professors at UCSC of more “rudimentary tasks” such as helping to develop study guides and coming up with example problems for students to solve. She also sees great potential in helping students improve their own writing. “Not just to write for them,” Larive said, but to help them translate their ideas into clearly communicated writing.

She has her concerns, though. Impersonation and attribution worry her, as does the question of who gets to decide which uses of AI are appropriate. Still, she doesn’t necessarily consider it cheating to have ChatGPT help write a term paper or come up with ideas for what to write about. And while she remains steadfast in her belief in the value of education, she admits that education will need to adapt to this new world of AI.

Larive had a lot to say during the panel discussion with Kawasaki at Santa Cruz Works’ AI Horizons conference last week. Here are some of the most interesting exchanges.


The questions and answers have been edited for length and clarity.

Guy Kawasaki: The fundamental question is, how do you define education? Because, 200 years ago, education was, “Does the earth revolve around the sun?” We’re beyond that. Does a person need to know calculus, or where a comma goes in a sentence? What is an education right now?

Cynthia Larive: I think the value of education is teaching people to think critically and to use knowledge to solve problems, because we’ve got plenty of problems that need to be solved. When I was teaching chemistry, people worried about Google. Google seems to know everything … but the value isn’t what Google knows, or the value isn’t what ChatGPT can scrape from the internet. The value is in, how can you use it to accomplish something, to do something creative, to solve a problem? We have to do that now, not just at scale but at speed, because the world has got problems like climate change that we don’t have a lot of time to be able to solve.

Kawasaki: I’m writing a book right now about people who have made major career changes. I asked ChatGPT to name some famous people who have made mid-career changes and ChatGPT named Julia Child, who, before French cooking, worked for the CIA. I checked to make sure ChatGPT just wasn’t making s–t up, and then went on to do more research, but I would have never thought of Julia Child on my own. Did I cheat?

Larive: No. You did say something that was really important: You checked, right? I think that’s really important. It used to be that you go to the library and you try to look something up in an abstract system and try to find an article and try to find the thing that would lead you to your conclusion about Julia Child. But that took a long time. So, what you were able to do was take a shortcut and do it faster. But, it’s then what you were able to do with that information that makes it creative.

Kawasaki: How would you define sentient?

UC Santa Cruz Chancellor Cynthia Larive. (Carolyn Lagattuta / UC Santa Cruz)

Larive: I used to think self-aware, but now I’m not sure I’m going to be able to use that for much longer, because maybe the robots won’t take over but they’ll know what they’re doing. For me, maybe it’s feelings. If we’re sentient, we feel, we have emotions, we feel things. I wonder if ChatGPT or AI will have feelings. They will be able to describe them, but will they have them?

Kawasaki: In education, what does AI enable you to do that you could never do before?

Larive: On our campus, there is a history course where students are asked to create a prompt for ChatGPT using a primary source about some event in history. An example might be the Cuban missile crisis. Then ChatGPT will create dialogues that [the students] can have with people about what the experience was at that time, which helps put students in the context of history.

There is also an art course where students are using AI to look at art that is in the style of a famous artist. They prompt AI to create art in the style of, say, Gauguin or Monet. Then they compare it to the original artists and critique it — this teaches critical thinking. This way, we can understand what’s special about the different artists they might be studying.

Kawasaki: You could have never done that before.

Larive: I think it would have been impossible.

Question from the audience: How will AI change us as human beings in the next 10 years?

Larive: I remember when we all got calculators, but we couldn’t use our calculators in class because the teachers thought we’d forget how to add and multiply. With these tools, I come back to the idea that the real goals are creation, innovation and solving problems. So, maybe I’ll be different because I can use these things. Right now, I can get around better in my car — I don’t have a very good sense of direction — because of the tools that tell me where to drive or that there is an accident and I should take an alternate route. I kind of like that. So, I may be different now because of those tools, but maybe I’m just better at accomplishing what I want to accomplish. Maybe with AI it will be the same.


