Staff Editorial: Navigating AI in Education

Academia is constantly adapting to the changing landscape of information access. The use of generative artificial intelligence (AI) is no different. Generative AI produces original text or images based on user prompts after being trained on vast data sets. Likely the most famous example of generative AI is OpenAI’s ChatGPT, on which the Foghorn’s editorial board has previously published.

Universities worldwide are creating new policies around generative AI, and USF is in the process of crafting its own approach. The Foghorn believes that AI can be useful, and its place in our society is not going away — students need to learn how to use this technology for future endeavors. Using AI to check grammar or brainstorm ideas could be helpful in ensuring quality or bridging accessibility gaps. The Foghorn is calling on the departments of this university to create their own unique responses to AI.

The Rhetoric and Language Department “held a professional development meeting before the semester began focused on establishing a university AI use policy,” Rhetoric Professor Robert Boller told the Foghorn. At this meeting, the rhetoric department consulted the generative AI policies of other universities, like Harvard and Texas Tech, for ideas.

USF currently permits professors to set the boundaries of allowed AI use. The full policy can be found on the University’s Educational Technology Services’ new generative AI page.

According to Boller, individual departments should determine which AI uses undermine learning and which prepare students to use technology that will soon be normalized. The Foghorn similarly believes that there is a wide spectrum of helpful generative AI applications, and departments should be trusted to set boundaries for their students. 

The ability to practice effective writing and critical thought should be foundational to the University’s policy on AI. Simply asking ChatGPT to complete an assignment does nothing to further a student’s abilities. AI’s use in higher education should be that of a tool, not a ghostwriter.

AI’s advanced nature has caused controversy within academia. Educators have concerns that students may use generative AI to complete whole assignments. Because of its generative model, this type of copying may be harder for educators to detect than traditional plagiarism. 

The rhetoric department does not advocate for a complete ban on generative AI. Many professors are embracing AI with classroom tools like Perusall, which analyzes and grades student annotations with professor oversight.

USF’s AI policy should embrace it as a tool and allow departments to set their own guidelines, but should take measures to prevent students from curtailing their own education. For example, professors can use Turnitin, a tool that can detect the use of generative AI in student work. 

AI is powerful, but it can’t compare to a fully actualized human mind. At USF, education expands the potential of one’s own cognitive abilities, and using generative AI as a crutch defeats the purpose of the education students pay thousands for. USF’s policy should be aimed at producing professionals who can use technology, but still be effective advocates for positive change.
