Anything ChatGPT Can Do, My Students Can Do Better

This is why a media history professor is welcoming AI into her classroom.


CSUN media history professor Elizabeth Blakey draws inspiration from UCLA behavioral ecology professor Peter Nonacs’ 2013 essay, “Why I Let My Students Cheat on Their Exams.”

Comedian Steve Martin once said that teaching is like show business. Keeping this metaphor in mind, I try to approach each of my lectures like a live set. The idea is to keep my students present and engaged so that we can learn together in real time.

But what happens when the entertaining professor gets upstaged by a chatbot that can produce the lecture as well as write student papers and take the final exam? Does the college class become a meaningless joke?

Well, no.

There are people who fear that ChatGPT, Bard, and other generative AI bots will let students outsource their own learning. But I teach media history. I know that new media technologies do not make people obsolete. Video did not kill the radio star.

So rather than slip some language about ChatGPT into the plagiarism policy section of my syllabus (which won't stop students who know about the apps that can rewrite papers to evade detection), my plan this fall is to focus on creating interactive lessons that incorporate chatbots directly into my teaching.

Instead of letting chatbots change the learning process, I’ll show my students that anything that chatbots can do, they can do better.

Many of my students were already trying out ChatGPT last year. Chatbots can be especially useful for performing routine tasks: one student explained that she had started using ChatGPT at her customer service job to generate quick responses to complaints, which she would then rewrite to improve.

While chatbots do that kind of task well, their attempts at more complicated assignments, such as historical essays, can be a disaster. But these limitations also open the door to teaching exercises that show students how to use this technology in their work.

Professors teaching writing skills can have chatbots generate outlines, drafts, and lists of ideas. Then, the professor can direct students to work in small groups to rewrite the text for greater originality.

Chatbots also offer an opportunity to teach critical thinking and media literacy skills. ChatGPT is prone to making up false information out of the data-driven cloud—a phenomenon its handlers euphemistically call “hallucinations.” This means that students have to learn how to check facts and verify information, using citable sources and databases.

Professors can also teach students to be alert to the systemic racism and sexism that AI bots can perpetuate and amplify because of the source texts they're drawing from. I once asked ChatGPT to write a list of some of the leading scholars of the U.S. Constitution and the First Amendment. Its response included only white men, as if no person of another background, ethnicity, or gender had ever studied the U.S. Constitution.

A solution to this problem? Show students how they can give the chatbot follow-up prompts that generate more complete answers—say, specifically to include persons of color, different genders, and diverse backgrounds. When I did this, ChatGPT readily listed Kimberlé Crenshaw, Ange-Marie Hancock, and other prominent constitutional scholars.

For my classes this fall, I’m also creating “AI Moments,” where my students will get a chance to see who does it better: the robot or the professor.

After I present a new lesson and talk about it with my students, I’ll prompt ChatGPT to give a lecture on the very same subject.

To test out this idea over the summer, I asked ChatGPT to rewrite my short lecture on the history of broadcast media. Unsurprisingly, the text it generated was horrible. Just one cliché after another. It was as cold and dull as that slice of ham still relaxing in my refrigerator from the Fourth of July. Now there's an unexpected image for you: the kind of surprise turn that ChatGPT will never accomplish. The AI-generated draft also made bad word choices, replacing the word "media" with "platform" (not all media are platforms). It also changed my question, "Did the emergence of broadcast TV mean the end of going to the movies?" to ask instead "whether the emergence of broadcast TV resembled the demise of cinema attendance caused by the rise of radio." That rewording altered the meaning of my point, which is that new media do not replace the old.

When I recreate this exercise in my classroom, I plan to have my students search ChatGPT’s lecture for bad writing that they will rewrite, turning each cliché into original imagery and poor word choices into something more precise. I’ll also ask them to find and eliminate bias and fact-check for inaccuracies.

What I learned from my practice matches with ChatGPT is that I know more about teaching journalism, writing, and media history than it does, even though the chatbot can draw from vast amounts of information on the internet. More importantly, it cannot share ideas accurately or in a creative and engaging way.

This is the kind of realization I want my students to have this fall when we engage with AI-generated text openly and transparently. My hope is that they will learn to use AI effectively, since these tools will become ever more common, and maybe even indispensable, in workplaces and in education. But I also hope that, through this, they will realize that in the contest of students versus robots, they will always come out on top.

Elizabeth Blakey is an associate professor at California State University, Northridge, where she teaches media history. In her free time, she does stand-up comedy.

Originally published on Zócalo Public Square. Primary Editor: Jackie Mansky | Secondary Editor: Sarah Rothbard

