The Illusion of Control: Is Banning ChatGPT from Classrooms the Answer?

As end-of-year exams loomed for students across Australia, concerns about the use of ChatGPT fuelled motions to ban the platform in schools. But will banning generative AI address the underlying problems in our classrooms?

Generative AI platforms, like ChatGPT, are large language models trained on vast collections of publicly available text, which they use to generate written responses to prompts. Many of these platforms are publicly available and often free to use, making them accessible to a large demographic.
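To make this concrete, here is a minimal sketch of what “generating a written response to a prompt” looks like from a programmer's point of view, using OpenAI's Python client; the model name and prompt are illustrative assumptions, not details drawn from this article.

```python
from openai import OpenAI

# Assumes the openai package is installed and an OPENAI_API_KEY
# environment variable is set. The model name is illustrative.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user",
         "content": "Write a short paragraph on the causes of World War I."}
    ],
)

# The model returns generated text: a few lines of code are all it
# takes to produce an essay-style answer to almost any prompt.
print(response.choices[0].message.content)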

While the first GPT model was released in 2018, ChatGPT itself launched in late 2022 and quickly gained popularity as people – including students – started to discover the extent of its capabilities.

ChatGPT's capabilities, combined with its accessibility, have made it a powerful tool that students across the country are increasingly using – and sometimes misusing.

A recent survey across American universities found that 48% of students had used ChatGPT for an at-home test or quiz, 53% had used it to write an essay, and 22% had used it to outline a paper.

Concerns around the misuse of the platform led to public schools across Western Australia, New South Wales, Queensland, and Tasmania banning the use of ChatGPT in schools.

However, ChatGPT is not the cause of plagiarism in education. Plagiarism is the result of an existing problem with educational pedagogy – how we teach students, and how we expect them to learn.

How has plagiarism changed over the years?

Plagiarism has been an issue in schooling since schooling began. 

Prior to the internet, students used books to replicate information. Prior to ChatGPT, they sourced essays and information from the internet. And now, in the age of generative AI, they’re using tools like ChatGPT to generate what they believe are bespoke responses to prompts.

Because large language models draw their training data from publicly available sources, like books or the internet, it’s likely that responses to prompts on the same topic will share elements of commonality.

This isn’t the first time a technological advancement has raised concerns for schools. 

When autocorrect was first introduced in 1993, many wanted to ban the use of this tool because they believed students wouldn’t learn to spell if an automated tool was constantly correcting their errors. 

However, when a study compared the spelling in 3,000 student papers from 1988 and 2008, the results had not substantially changed: the 1988 papers contained 2.26 errors per 100 words, and the 2008 papers contained 2.45 errors per 100 words.

When considering how to combat plagiarism in schools, the question we should be asking is not “how do we prevent students from cheating?” but rather “why do students cheat?”.

Why do students cheat?

A 2020 study found fear of failing, poor time management, and disinterest in the assignment to be among the most frequently stated reasons students choose to plagiarise or cheat.

It has been argued that traditional educational pedagogies, in which students memorise and regurgitate information, lead to superficial learning, where students never achieve a deep understanding of their subjects. This traditional approach doesn’t equip students with advanced problem-solving and critical-thinking skills – skills that are, arguably, harder to plagiarise.

Scandinavia has employed the “Improving School Leadership” program, which aims to improve school outcomes. Students are given assessments that capture real-world problems. Rather than following a binary right-or-wrong approach, these assessments give students an opportunity to demonstrate knowledge and skills.

The broken calculator problem is a salient example of an assessment that requires the demonstration of knowledge and skills, rather than the regurgitation of one correct answer: “How would you make a calculator display the number 75 if the 5 key was broken?” This open-ended problem invites multiple strategies and answers, encouraging students to demonstrate problem-solving and critical-thinking skills.
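To see just how open-ended the problem is, a short Python sketch (purely illustrative, not part of any curriculum mentioned here) can brute-force every single-operation answer that avoids the broken key:

```python
# Brute-force single-operation solutions to the broken calculator
# problem: display 75 without ever pressing the 5 key.

def avoids_broken_key(n: int) -> bool:
    """True if the number can be typed without the broken 5 key."""
    return "5" not in str(n)

target = 75
solutions = []

for a in range(1, 301):
    for b in range(1, 301):
        if not (avoids_broken_key(a) and avoids_broken_key(b)):
            continue
        if a + b == target:
            solutions.append(f"{a} + {b}")
        if a - b == target:
            solutions.append(f"{a} - {b}")
        if a * b == target:
            solutions.append(f"{a} * {b}")
        if a / b == target:
            solutions.append(f"{a} / {b}")

print(f"{len(solutions)} single-step solutions, e.g. {solutions[:5]}")
```

Even with operands capped at 300 and only one operation allowed, the search turns up many valid answers (74 + 1, 149 - 74, 300 / 4, and so on), which is exactly what makes the question resistant to copy-and-paste cheating: there is no single correct answer to look up.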

A different approach to education and assessment can create an environment where students are less incentivised to plagiarise. 

Banning ChatGPT will not reduce student incentives to plagiarise. And if we shift the lens slightly, generative AI tools like ChatGPT have the potential to be used in productive and constructive ways.

Using generative AI for good

A few years after autocorrect was released, a group of programmers developed word-prediction software to help people with disabilities use technology more easily, as typing on small keyboards was challenging for those with limited motor control. What some saw as a threat to spelling skills became a creative foundation for more advanced and productive tools.

The positive potential of generative AI does not mean that all concerns around these platforms should be ignored. Narratives around generative AI, as with many other issues, often present an extreme dichotomy: largely negative or largely positive. The reality is that the most effective path forward is a pragmatic middle ground that addresses concerns while also allowing room for productivity and creativity.

Regulating how generative AI tools, such as ChatGPT, are used, instead of banning them, is a more productive exercise. Regulation can provide a benchmark of acceptable standards for the use of ChatGPT and a form of intervention to minimise the harms that come with the misuse of generative AI tools.

Banning ChatGPT will not address the problem of plagiarism. It will simply give students one less tool to cheat with.

Instead of banning these tools, we should first seek to understand and address the root cause of plagiarism: ineffective educational pedagogies. We should also consider how generative AI tools can be regulated rather than banned.

