USD students and faculty discuss AI usage in the classroom

Riti Dey/Asst. News Editor 

After months of petitioning for compensation from OpenAI, the Authors Guild and 17 fiction writers filed a class action lawsuit in New York on Sept. 20 against public Artificial Intelligence (AI) platforms, namely OpenAI, for copyright infringement of their works. According to the New York Times, the suit highlights the economic and intellectual ramifications of the theft and exploitation of content created by writers and novelists.

ChatGPT is a popular AI chatbot developed by OpenAI.
Photo courtesy of @emilianovittoriosi/Unsplash

The Authors Guild classified the rise of OpenAI as an “existential threat to the author profession wrought by the unlicensed use of books to create large language models that generate texts.” The complaint explained that the profitability of OpenAI depends on mass copyright infringement, without any regard for copyright owners. The company scans copyrighted material — though it is unknown to what extent — which allows AI platforms to produce works in the style of established authors automatically and for free to consumers. OpenAI then obtains the profits of what these authors could have written for pay. This poses a threat to authors’ livelihoods, as the Authors Guild’s recent income survey reported that the median writing-related income in 2022 for full-time writers in the U.S. was around $23,330.

While a petition signed by over 8,000 authors has been circulating in the writing community for several months, the most recent lawsuit includes 17 prominent plaintiffs, such as George R. R. Martin and Jodi Picoult, setting off a high-profile legal conflict. For this reason, legal analysts told ABC News that “the case could fundamentally shape the direction and capabilities of generative AI.” Such a ruling could regulate the technology on a larger scale than it is now. A spokesperson for OpenAI argued in a statement to ABC News that the company has had discussions with content creators, who can use the platform to fuel their creative pursuits.

“Creative professionals around the world use ChatGPT as a part of their creative process. We respect the rights of writers and authors, and believe they should benefit from AI technology,” the spokesperson said.

The suit highlights the harms to both the fiction and nonfiction markets. In fiction, AI models are able to mimic authors’ characters and stories, allowing users to ‘enter worlds’ that are the intellectual property of the original creators.

For nonfiction writers such as author Jane Friedman, AI platforms have been used to misappropriate authors’ names to sell scam books. 

Dr. Dennis Clausen, a professor of American literature at USD, spoke to several publications this month about his opinion on the influence of AI in the writing process. In his letter to the editor published in the New York Times in September, titled “ChatGPT is Plagiarism,” Clausen explained that AI steals the intellectual property of authors, repackaging their works as its own property to be sold for profit.

“In the process, the AI chatbots are depriving authors of the fruits of their labor,” Clausen wrote.

From an academic standpoint, Clausen also noted in his article for Psychology Today that AI should not be used for writing in the classroom because “writing is thinking, and thinking is writing.”

He explained that if AI is used in place of developing an idea on one’s own, complex thinking is not possible, hindering one’s intellectual growth. Clausen also acknowledged the issue of using handwritten, in-class writing as an alternative to AI-generated work, because of the importance of writing several drafts in order to produce a fully formulated thought. He sees both AI-assisted writing and in-class writing as denying students the “full writing experience.”

“Essays that are written over several days or weeks, often while students are engaged in other activities and their brains are mulling over an essay assignment, will inevitably strengthen more complex thinking patterns,” Clausen said. “As Ernest Hemingway argued, writing several drafts leads to increasingly more sophisticated forms of cognitive growth and development; however, students who use AI-generated writing throughout the various drafts of their essays are denying themselves the necessary academic experiences they need to develop these more sophisticated thinking skills.”

USD sophomore Maddie Cantu explained her fears about the rise of AI in writing, specifically for her English major. 

“I feel scared about AI,” Cantu said. “Having students rely on something other than their mind is extremely dangerous to human life. I almost feel my major is being devalued but at the same time, it makes me that much more motivated to continue to pursue my degree because this is such a shaky time.” 

Cantu, who is a USD scholastic assistant (SA) for a first-year English class, experiences the ramifications of AI from both a student and SA perspective and continues to question the ethics of AI in an academic setting.

At USD, AI technology is sometimes used in classrooms for science, technology, engineering and mathematics (STEM) classes. 

Dr. Charisse Winston, a biology professor at USD, shared her insight on using AI as a studying tool for her class. She highlighted the effectiveness of ChatGPT and AI for study guides and other administrative tasks, but acknowledged that AI is harmful for aspects of her class that involve critical thinking.

“In my profession there is a lot of scientific writing that occurs… you generate an idea, you execute an experiment and you write about what happened,” Winston explained. 

“Initially, I thought the purpose of AI was to write people’s lab reports for them. I’m completely against it in that context, but if it can save me time on administrative things, and study guides, or how to put something together, I think it can be useful.”

USD first-year Marta Ruiz shared her opinion about using AI as a tool in and out of classwork.

“I usually use AI to explain hard concepts that I don’t understand, the same way people Google things and use Wikipedia,” Ruiz described. “Especially when AI was newer, I would kind of play around with it and give it really hard questions, and I was impressed. I feel like in general it’s a good tool to use, obviously if you’re responsible with it.”

Ruiz, as well as Winston, stressed the importance of handling AI thoughtfully and responsibly with regard to schoolwork.

Artificial intelligence is contentious in and out of the classroom, and the Authors Guild’s lawsuit reinforced the ongoing debate about the potential harms of such rapidly developing technology. 

USD is a microcosm of this nationwide divide on the use of AI in intellectual pursuits, as the university has no standard policy regulating its use.

While several professors state in their syllabuses that AI use is prohibited in their classes, others teach their students how to use these platforms as a tool.

As the judgment of the suit remains undecided, generative AI companies are under scrutiny, and the ethical debate surrounding the issue persists.
