Staff Editorial: ChatGPT: The face of innovation and the end of creativity

GRAPHIC COURTESY OF WIKIMEDIA COMMONS

This semester, the plagiarism clause in USF’s syllabi might need an update. With the recent rise of ChatGPT, an artificial intelligence (AI) that auto-generates text, students now have access to a robot that will do their homework.

Produced by OpenAI, an artificial intelligence laboratory, and released in November 2022, ChatGPT is trained on data collected from various internet databases. The technology can create text for social media posts, online journals, essays, poems, love letters, and even travel itineraries, in any language.

OpenAI does not shy away from noting its imperfections, disclosing on its main page that “the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.”

This means that although students can input a command like “write an essay about Shakespeare’s ‘Twelfth Night,’” the quality of the output may not be very high — but that may not stop students from copying and pasting. What’s more, the AI rarely produces the same answer twice, making any plagiarism hard to trace.

The USF Honor Code defines plagiarism as “the act of presenting, as one’s own, the ideas or writings of another.” However, the ChatGPT FAQ page states that “you own the output you create with ChatGPT, including the right to reprint, sell, and merchandise.” This “ownership” is dangerous because it gives the user the impression that they are responsible for ideas they did not generate. It discourages human originality and diminishes the critical thought component of assignments.

However, ChatGPT’s threat to academia is not the most pressing concern. As mentioned by Boston University Professor Josh Pederson in the SF Chronicle, “The reality is that plagiarism has long been a fact of life at American universities. ChatGPT just gives students a new tool to accomplish this very old task.”

Additionally, as Annie Lowrey wrote for the Atlantic, ChatGPT “creates content out of what is already out there, with no authority, no understanding, no ability to correct itself, no way to identify genuinely new or interesting ideas.”

Instead of fearing ChatGPT, we should be wondering why, according to data collected by Rutgers, “65-70% of undergrads admit to cheating at least once” in surveys conducted from 2002 to 2015. This pattern could be the result of our societal values of constant productivity and high workload expectations that trickle down through the education system.

Many college students today grew up with technology and education woven together. We have accepted technology and automated teaching in classrooms. Children using iPads in classrooms, teachers using YouTube to teach about the human body, and now online textbooks and exams — these were all once unfathomable new ways of educating that have become incorporated, regulated, and guided by traditionally trained teachers.

So what is driving our desire to automate our creative capability? What continues to push more automation in education? In other words, why create ChatGPT at all?

In a world where our careers must be prioritized above all else to achieve economic stability, people are pushed to their limits. We are forced to trade creativity for productivity, and to sacrifice our health and happiness in order to keep up with society’s demands.

The temptation to resort to ChatGPT and other AI in times of burnout is not surprising. These tools are advertised as making our lives easier, but they are a product of our mistakes. We should not rely on automation to keep us afloat — instead, we should embrace the brain farts, the writer’s blocks, and all the other challenges of thinking for ourselves.

We define ourselves by our ability to think critically and constantly redefine ourselves. Now is the time to demonstrate those characteristics in order to preserve them.
