Generative AI tools are reshaping education, offering new avenues for creativity and exploration. Yet they have also sparked fears of academic dishonesty, leading many educators to turn to AI detection tools as a primary defense. While these tools may seem to offer a straightforward solution, the reality is that AI use can often be suspected but rarely detected reliably. As such, AI detection tools carry significant drawbacks that educators must carefully consider.
Despite the availability of AI detection tools, it is virtually impossible to definitively identify AI-generated content unless the work contains clear and apparent identifiers of AI use. Such identifiers might include: 1) a statement somewhere in the written work to the effect of “As I am an AI, I cannot…”; 2) obvious use of unusual or nonexistent citations; or 3) a clear digression or off-topic section that does not make sense within the work.
This article comes from the ITC AI Affinity Group, a panel formed by the Instructional Technology Council to share best practices for distance education instruction and AI. ITC is an affiliated council of the American Association of Community Colleges.
Limitations of AI detection tools
AI detection tools are far from flawless, and over-reliance on them can undermine educational equity and trust. Some key limitations include:
False positives: These tools often flag legitimate student work as AI-generated, especially if the writing style deviates from conventional norms. This is particularly problematic for:
- Non-native speakers, whose linguistic nuances may trigger false positives.
- Neurodivergent students, whose unique communication styles might be misclassified.
Exclusionary impact: Detection tools can disproportionately affect marginalized groups, creating barriers for students who are already navigating systemic inequities in education.
Ethical concerns:
- Privacy violations: Many AI detection tools store and analyze student data without clear consent, raising questions about surveillance and data security.
- Erosion of trust: The use of detection tools fosters a punitive atmosphere, where students may feel distrusted by default.
Inaccuracy over time: As generative AI tools evolve, they outpace detection algorithms, rendering these tools less effective and more prone to errors.
Focus on punishment over learning: Reliance on AI detection shifts attention away from teaching and learning, reinforcing a compliance-based approach to education rather than fostering curiosity and ethical decision-making.
Lack of transparency: Students and educators often lack insight into how these tools operate, leading to questions about fairness and accountability in their use.
Proactive strategies to counteract AI cheating
Instead of leaning on imperfect detection tools, educators can adopt a proactive, inclusive approach to foster academic integrity and responsible AI use. Below are strategies to counteract, or at least counterbalance, the ease of AI-assisted cheating:
- Redesign assessments to promote authentic learning. Create assignments that require personal engagement or local context, such as:
- Reflection papers on personal experiences or class discussions.
- Case studies tied to current or regional events.
- Creative projects that demand originality and critical thinking.
- Implement scaffolded assignments. Break larger projects into manageable pieces with regular checkpoints. This not only supports learning but also allows educators to observe progress and catch inconsistencies. For example:
- Submit a research proposal before a full draft.
- Develop an outline or concept map as an early step.
- Use peer feedback to refine ideas.
- Emphasize process-oriented grading. Shift emphasis from final outcomes to the steps taken to achieve them. Consider grading:
- Draft submissions and revisions.
- Peer review contributions.
- Reflections on feedback received and applied.
- Foster collaborative learning. Encourage group work to reduce reliance on individual AI use. Examples include:
- Group debates or discussions requiring consensus-building.
- Collaborative research presentations.
- Co-creating artifacts like digital portfolios.
- Teach responsible AI use. Educators should actively guide students in using AI as a tool for growth, not shortcuts. Consider partnering with your college’s librarians or student support help desk for AI training and research support. Key elements to teach include:
- How to leverage AI for brainstorming or early drafts while maintaining originality.
- The importance of citing AI-generated content and understanding its limitations.
- Discussions on ethical AI use in professional and academic settings.
- Build relationships to strengthen integrity. Students are less likely to cheat when they feel supported. Foster trust through:
- Regular one-on-one check-ins to discuss progress.
- Transparent communication about academic integrity policies.
- Showing interest in their goals and challenges.
- Utilize AI for feedback, not policing. Integrate AI into teaching to enhance learning, such as:
- Encouraging students to use AI for spelling, grammar or brainstorming.
- Demonstrating how to critically evaluate AI-generated outputs.
- Modeling how AI tools can complement — not replace — critical thinking.
- Incorporate oral assessments. Oral defenses or presentations can verify student understanding and discourage misuse. Ideas include:
- Requiring students to explain their thought process during a Q&A.
- Having students defend their project decisions in a short interview.
- Asking students to teach a concept from their assignment to their peers.
- Collaborate with the college’s Writing Center. Leverage the expertise of your college’s Writing Center to support students throughout their writing journey. Establish a partnership by integrating Writing Center resources into your course, such as workshops, one-on-one consultations or asynchronous feedback services.
- Encourage students to visit the Writing Center at various stages of their writing process, from brainstorming and drafting to revising and finalizing their work. Consider inviting Writing Center staff to present in class or hosting joint sessions to familiarize students with the available support.
- Consider process tracking. Process tracking is most effective when students are actively informed and engaged in the process. Rather than being used as a tool to “catch” misconduct, it serves as a developmental resource.
- Explore strategies to document the various stages students navigate while completing a writing assignment.
- By engaging in process tracking, students gain valuable insights into their own writing process, fostering learning and growth. At the same time, it helps build their confidence in the integrity of their work and that of their peers.
- Create an AI-friendly course zone. An AI-friendly course zone is a learning environment where students are encouraged to use generative AI tools responsibly as part of their educational journey. Key components might include:
- Setting clear expectations of AI use.
- Teaching AI literacy.
- Encouraging transparency in disclosing AI use.
- Fostering reflection on AI use.
Toward an inclusive and proactive approach
AI detection tools are not the ultimate solution to academic dishonesty. Their limitations — particularly their potential to harm marginalized students and erode trust — underscore the need for more inclusive, proactive strategies. By redesigning assessments, building supportive relationships, and teaching responsible AI use, educators can create learning environments that emphasize growth, integrity and equity.
Through these efforts, we can harness the transformative potential of AI in education while safeguarding the values that matter most.
* * *
Kate Grovergrys, MA, is a full-time faculty member at Madison College in Wisconsin. She designs professional development on topics such as inclusive teaching practices and artificial intelligence.
Tina Rettler-Pagel, Ed.D., is also a full-time faculty member at Madison College. She spends most of her time on projects and initiatives focused on digital learning, but also supports faculty in exploring and planning for the pedagogical opportunities of generative AI.
Grovergrys and Rettler-Pagel are members of the ITC AI Affinity Group. In addition, they are both participating in a research project for Madison College’s Institute for Equity and Transformational Change focused on leveraging AI for inclusive teaching and learning.