94% Of AI-Generated College Writing Is Undetected By Teachers

by admin

It’s been two years since ChatGPT made its public debut, and in no sector has its impact been more dramatic and detrimental than in education. Increasingly, homework and exams are being written by generative AI instead of students, then turned in and passed off as authentic work for grades, credit, and degrees.

It is a serious problem that devalues the high school diploma and college degree. It is also sending an untold number of supposedly qualified people into careers such as nursing, engineering, and firefighting, where their lack of actual learning could have dramatic and dangerous consequences.

But by and large, stopping AI academic fraud has not been a priority for most schools or educational institutions. Unbelievably, a few schools have actively made it easier and less risky to use AI to shortcut academic attainment, by allowing the use of AI but disallowing the reliable technology that can detect it.

Turning off those early warning systems is a profound miscalculation because, as new research from the U.K. points out once again, teachers simply cannot, or do not, spot academic work that has been spit out by a chatbot.

The paper, by Peter Scarfe and others at the University of Reading in the U.K., examined what would happen when researchers created fake student profiles and submitted the most basic AI-generated work for those fake students without teachers knowing. The research team found that, “Overall, AI submissions verged on being undetectable, with 94% not being detected. If we adopt a stricter criterion for ‘detection’ with a need for the flag to mention AI specifically, 97% of AI submissions were undetected.”

You read that right – 97% of AI work in university courses was not flagged as possible AI by teachers. But it’s actually worse than that, as the report also says, “Overall, our 6% detection rate likely overestimates our ability to detect real-world use of AI to cheat in exams.”

This is not the first time we’ve been warned that humans cannot find AI work on their own. Last year, a study from the University of South Florida concluded that linguists could not tell the difference between text created by AI and text written by humans.

A different study last year – this one from American universities in Vietnam – found that AI detectors were far better at picking out AI text than human teachers were. The team in Vietnam wrote, “Although [AI detection system] Turnitin correctly identified 91% of the papers as containing AI-generated content, faculty members formally reported only 54.5% of the papers as potential cases of academic misconduct.”

In that study, the teachers were told in advance that papers using AI would be submitted in their courses, and they found only a little more than half of them. That study also used “prompt engineering” to make the papers more difficult for AI detectors to spot. Still, the machines caught 91%. The humans, 55%. And schools are actually turning these machines off.

Again, the humans were told to look for them and still whiffed. When humans are not put on alert and AI detectors are not used, as was the case in the more recent U.K. study, AI work is missed nearly every time.

Worse, the U.K. study also found that, on average, the work created by AI was scored better than actual human work. “We found that in 83.4% of instances the grades achieved by AI submissions were higher than a random selection of the same number of student submissions,” the report said.

In other words, a student using the most basic AI prompt, with no editing or revision at all, was 83% likely to outscore a peer who actually did the work – all while facing a mere 6% chance of being flagged if the teachers did not use any AI detection software. Keep in mind that, in real classrooms, a flag of suspected AI use doesn’t mean much, since professors are very reluctant to pursue cases of academic integrity, and even when they do, schools are often even more reluctant to issue any sanctions.

Recently, the BBC covered the case of a university student who was caught using AI on an academic essay, caught by an AI detector by the way. The student admitted to using AI in violation of class and school rules. But, the BBC reported, “She was cleared as a panel ruled there wasn’t enough evidence against her, despite her having admitted using AI.”

Right now, if a school or teacher is not getting assistance from AI detection technology, using AI to cheat is very likely to boost your grade while carrying nearly zero chance anything bad will happen. In other words, if your school is not using AI detection systems, you’d have to be an idiot to not use AI on your coursework and exams, especially if your classmates are. Spoiler alert – they are.

Not to be overlooked in the results of the new U.K. study is that the fake students were in online classes and their fake coursework was submitted online – where teachers could not possibly know anything about their students, including whether they were even actually human. It highlights the reality that online courses are more vulnerable to cheating, and AI cheating in particular, because teachers do not know their students and cannot observe their work.

Naturally, schools could use technology to solve this problem too, either by proctoring assignments and exams, or by using writing environments that track changes and revisions. But just like using AI detectors in the first place, many schools don’t want to do those things either because they take work and cost money.

Instead, they prefer to run online classes without AI detectors, without test proctoring, and without the ability to verify the work – leaving it to teachers to be AI detectives. It’s a solution that is not working, does not work, and will not work.

As a result, fraud is rampant, in large part because schools aren’t interested in detection and consequences. Two years in, AI is here. Any willingness to limit its corrosive power has yet to arrive.
