
Schools are desperate to weed out AI from students’ work – but what happens when they falsely accuse someone of cheating?

As artificial intelligence becomes more ingrained in our daily lives, schools are increasingly concerned about students harnessing AI to help cut corners in their work.

Nearly half (43 percent) of U.S. teachers of grades six through twelve said they used AI detection tools in the 2024/2025 academic year, according to a recent poll by the Center for Democracy and Technology.

However, while accusations of cheating with AI can lead to docked grades, probation, or even expulsion, they can also have a serious impact on the students themselves, experts say.

Especially when they are wrong.

“One of the most common feelings that students describe to me is anxiety and stress from even going through the process, even if they’re saying I’m innocent,” Lucie Vágnerová, a New York-based education consultant with over 10 years of experience, tells The Independent.

“A lot of them tell me they are not sleeping well – a lot of them have to seek out counseling, and the misconduct process at U.S. colleges and universities often takes at least several weeks, if not months, sometimes, so this is really like a long-range situation, really deeply affecting their mental health.”


Marley Stevens, a student from the University of North Georgia, lost her scholarship after being flagged for using AI on a paper in October 2023, according to USA Today. She had used Grammarly, an online spell-checker recommended by the university, but was still awarded a zero.

Stevens was put on academic probation and subjected to a six-month misconduct and appeals process. Despite her protestations, the mark on her paper impacted her GPA, resulting in the loss of the scholarship. “I couldn’t sleep or focus on anything,” she told the outlet at the time. “I felt helpless.”

Experiences such as these erode trust in the educational process and, ultimately, the core relationship between students and their teachers, says Vágnerová.

“To me, that’s a huge problem,” she tells The Independent. “Institutions are investing in all this surveillance, and they are not investing in instructors’ ability to build deep relationships with students and build that trust and that vulnerability.”

Ailsa Ostovitz, a 17-year-old high-schooler from the Washington, D.C., area, said she had been falsely accused of using AI on three separate assignments in two different classes during this academic year alone.

“It’s mentally exhausting because it’s like I know this is my work,” Ostovitz told NPR. “I know that this is my brain putting words and concepts onto paper for other people to comprehend.”

Research has also found that such detection systems are, in the best-case scenarios, limited, and in the worst-case, totally unreliable.

“Detection tools for AI-generated text do fail, they are neither accurate nor reliable,” a study by members of the European Network for Academic Integrity found, noting that all the tools they evaluated scored below 80 percent accuracy.


“In general, they have been found to diagnose human-written documents as AI-generated (false positives) and often diagnose AI-generated texts as human-written (false negatives).”

The study found “serious limitations” in even state-of-the-art AI-generated text detection tools, concluding that they are unsuitable for use as evidence of academic misconduct and that it was “too easy to game the systems.”

“Therefore, our conclusion is that the systems we tested should not be used in academic settings,” the authors wrote.

Despite this, school districts are still looking to embrace AI technology, though often with strict guidelines on its use.

A bulletin put out by the Los Angeles Unified School District said the district was committed to utilizing AI technologies “in an ethical, transparent, and responsible manner, while recognizing the importance of protecting student and employee privacy and ensuring that the use of these technologies is consistent with ethical and equitable considerations.”

In September, New York City Public Schools Chancellor Melissa Aviles-Ramos announced a four-part framework to ensure responsible engagement with artificial intelligence tools in the classroom.

The framework included preparing students for AI-powered lives and careers, teaching students and staff to use AI responsibly, mitigating bias and ensuring cultural responsiveness when using AI, and leveraging AI to advance operational and instructional efficiencies.

Ironically, while warning students of the dangers of trusting AI without question, many educators may take the judgment of a detection system at face value. If institutions are going to continue using such systems, greater AI literacy is needed, Vágnerová says.


“I think a lot of people would imagine that the solution is more accurate surveillance,” she tells The Independent. “To me, that’s sort of the opposite.”

“I think there is a role for AI detection in the education space, but it’s a much, much smaller role than it has now. Ultimately, I think institutions and governments need to invest in compensating educators so that they have the space to create assessments that evaluate student growth meaningfully.”

The ENAI study authors agree.

“Our findings strongly suggest that the ‘easy solution’ for detection of AI-generated text does not (and maybe even could not) exist,” the authors wrote.

“Therefore, rather than focusing on detection strategies, educators continue to need to focus on preventive measures and continue to rethink academic assessment strategies. Written assessment should focus on the process of development of student skills rather than the final product.”
