
In today’s digital landscape, it’s easy to assume that all students are resorting to artificial intelligence (AI) tools, like ChatGPT, to cheat their way through assignments and exams. From sensational headlines in major publications such as the Wall Street Journal to candid confessions from students in various media, the narrative around AI and academic dishonesty is alarming. For instance, a recent feature in New York magazine highlighted a college student who admitted to using generative AI to “cheat on nearly every assignment.”
With a plethora of reports indicating a surge in AI-related cheating, the educational community is understandably on edge. The traditional methods of assessment—exams, readings, and essays—appear threatened as students increasingly lean on technology to produce their work. In extreme cases, some students are even submitting complete essays generated by AI software.
However, this prevailing narrative of widespread academic dishonesty is not the complete picture.
Cheating has been a constant in education, evolving with the times. As an education researcher focusing on AI cheating, I’ve found that while the methods of cheating may have shifted, the overall prevalence of cheating behaviors has not seen a dramatic increase. Our early research indicates that AI may have transformed the landscape of cheating, but it has not necessarily amplified the volume of cheating that has existed for decades.
This isn’t to downplay the new challenges AI introduces into the educational sphere. There are pressing questions that educators, parents, and students must consider: Will AI lead to an uptick in cheating in the future? Does all AI usage in academic settings constitute cheating? How can we best equip students for success in an increasingly tech-driven world?
These inquiries underscore the need to explore the nuances of cheating and how they intersect with students’ use of AI in their academic lives.
Cheating is not a new phenomenon. Research dating back to the 1990s and early 2000s by Don McCabe, a business professor at Rutgers University, revealed disturbingly high levels of cheating among university students. One pivotal study from the ’90s found that a staggering 96 percent of business majors admitted to some form of cheating behavior.
How did McCabe arrive at such surprising statistics? He employed anonymous surveys that encouraged students to self-report their behaviors without fear of judgment. Compared with more direct questioning methods, these carefully worded inquiries elicited higher rates of reported cheating among students.
This methodology persists in contemporary research. Recent studies from McCabe’s team indicate that, as of 2020, over 60 percent of students acknowledged engaging in cheating behaviors.
The motivations for cheating are varied. Many students, particularly those anxious about subjects like math, may resort to dishonest practices simply to pass. Others may cheat on assignments they perceive as trivial or excessive, viewing it as a pragmatic time-saving strategy. If students believe that their peers are also cheating, they may rationalize their actions, feeling that such behaviors are acceptable in a culture that often prioritizes results over integrity.
The situation is similar among high school students, where studies have reported cheating rates exceeding 80 percent. This trend existed long before AI tools like ChatGPT emerged. High school students often feel intense pressure to excel academically, leading them to view cheating as a necessary means to achieve their goals.
The definitions of “cheating” can be broad and may include various behaviors, such as using online services to obtain answers or completing assignments under dubious circumstances. As educators, it’s vital to clarify these definitions and set clear expectations for academic integrity.
So, what about AI-specific cheating?
In research conducted from the 2018–2019 to the 2021–2022 academic years, my colleagues and I examined survey data from over 1,900 students across three high schools. This study aimed to understand the impact of various factors, including the pandemic, on cheating behaviors. Following the introduction of ChatGPT in the 2022–2023 school year, we returned to these schools to assess any changes in cheating practices.
Our findings revealed that cheating rates remained relatively stable before and after the release of ChatGPT, mirroring pre-pandemic numbers. While a notable percentage of students—ranging from 59 to 64.4 percent—reported cheating behaviors after the introduction of AI, this did not indicate a significant increase.
Moreover, behaviors related to plagiarism or copying from peers showed little change, with around 24.8 to 31.2 percent of students admitting to such actions after AI came onto the scene.
Interestingly, while the overall cheating numbers remained consistent, this does not imply that students abstained from using AI. For instance, approximately 30 percent of students continued to report using some form of online sourcing for their work, and around 11 percent specifically utilized AI tools to generate entire papers or projects.
Our research suggests that while AI has certainly found a place in academic dishonesty, much of that use appears to be substitution: many students may have turned to AI instead of the traditional online services or methods they were already using.
As we delve deeper into this issue, we acknowledge the limitations of our findings. Not all students were familiar with ChatGPT during our initial studies, as its widespread recognition had not yet taken hold. We are currently analyzing more extensive data from the 2023–2024 and 2024–2025 school years, aiming to gauge the evolving landscape of AI use among students across a larger number of schools.
Preliminary results indicate a gradual increase in AI usage, with 11 percent of students employing AI for complete assignments in 2024, rising to 15 percent in 2025. Additionally, more than half of students reported using AI to brainstorm and generate ideas, while around 40 percent utilized AI to refine their existing work.
Conversations with students reveal a complex relationship with AI. They express a desire to use AI for assistance rather than outright completion of tasks. One student explained how they rely on AI for help late at night when they struggle to finish assignments on time.
Yet, students also fear the repercussions of being falsely accused of cheating. Instances of students facing disciplinary actions for alleged plagiarism, despite their innocence, underscore the precarious trust between students and educators.
As we navigate these challenges, it’s crucial to understand that educators and students often have differing perspectives on what constitutes cheating. Some students believe that their teachers encourage AI use for specific tasks, leading to confusion about acceptable boundaries.
Moreover, a significant number of educators have yet to establish clear policies regarding AI use in their classrooms. With many districts grappling with the implications of AI, ambiguity remains prevalent, leaving students unsure of what is deemed acceptable.
Despite the alarm surrounding AI-generated submissions, which account for roughly 10 to 15 percent of student work in our data, it’s essential to contextualize these numbers within the broader landscape of academic dishonesty, where self-reported cheating rates have long exceeded 60 percent.
As we consider the future of education in light of AI, we must confront several key questions: Why are students cheating? Are educators modeling the behaviors they expect from students? Have we clearly communicated acceptable academic conduct? What skills will students need to thrive in an AI-infused world?
Education systems must evolve to address these challenges. Rather than opting for restrictive measures, we should embrace the reality of AI and teach students how to navigate this landscape responsibly. This requires a fundamental rethinking of educational practices, assessment methods, and the skills we prioritize.
Ultimately, we cannot ignore the permanence of AI in our lives. As we adapt, let’s move beyond sensational headlines and engage in meaningful conversations about the implications of AI in education, fostering an environment where integrity and innovation can coexist.