November 4, 2025
12pm – 1pm
Saint Paul University – Amphitheatre
Free event. No registration required.
Catherine Potter – PhD Candidate – Student Perspective
Dr. Laura Armstrong – Full Professor
Shawn Quinn – IT Perspective
Despite a growing body of literature on fraud detection in online research, many researchers, particularly students, remain unaware of the subtle and ever-evolving ways that bots and AI-generated responses can compromise data integrity. This is especially concerning where research findings inform community programs, mental health protocols, and best practices (Pinzon et al., 2024). Methods that were once effective at detecting fraudulent qualitative or quantitative data may no longer be valid, given how quickly AI technologies are adapting and improving.
In one recent study, out of 9,163 individuals who completed the eligibility survey, only 197 were found to be legitimate participants (Agans, 2024). For graduate researchers and professors navigating this landscape without adequate training or support, the potential impact can be overwhelming, both in terms of the quality of their data and the burden of navigating this territory. The responsibility for fraud detection often falls squarely on students, many of whom are unprepared for the amount of time, stress, and technical knowledge required to address suspect data. Further, many university-level graduate programs are not equipped to train students in this area, creating a need for it to be more adequately addressed in our academic institutions.
Our workshop brings together three perspectives: a doctoral student who has firsthand experience with AI-related data fraud, a professor who has supported students through similar challenges, and a university IT professional with foundational knowledge of AI systems. University IT departments and libraries may be core foundations for training staff and students about mitigating the challenges associated with AI-generated survey data. Together, we will present a practical and accessible framework—COMPASS—designed to help students and institutions navigate AI-related threats to research integrity. Through interactive and hands-on activities, participants will learn the importance of both automated and manual strategies for fraud detection, applied across different stages of the research process.
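To make the idea of "automated strategies" concrete, here is a minimal, illustrative sketch of the kind of rule-based screening a researcher might run before manual review. The column names (`completion_seconds`, `ip_address`) and the two-minute threshold are assumptions for this example only; they are not part of the COMPASS framework presented in the workshop, and real screening should combine several signals with careful manual follow-up.

```python
def flag_suspicious(responses, min_seconds=120):
    """Return sorted indices of responses that trip simple fraud heuristics.

    `responses` is a list of dicts with (assumed) keys
    'completion_seconds' and 'ip_address'.
    """
    flagged = set()
    seen_ips = {}
    for i, r in enumerate(responses):
        # Heuristic 1: implausibly fast completion time,
        # a common signature of bots or inattentive responders.
        if r["completion_seconds"] < min_seconds:
            flagged.add(i)
        # Heuristic 2: multiple submissions from the same IP address.
        # Flag both the earlier and the later submission for review.
        ip = r["ip_address"]
        if ip in seen_ips:
            flagged.add(i)
            flagged.add(seen_ips[ip])
        else:
            seen_ips[ip] = i
    return sorted(flagged)


responses = [
    {"completion_seconds": 45, "ip_address": "203.0.113.1"},   # too fast
    {"completion_seconds": 600, "ip_address": "198.51.100.2"}, # looks fine
    {"completion_seconds": 540, "ip_address": "203.0.113.1"},  # repeat IP
]
print(flag_suspicious(responses))  # [0, 2]
```

Flags like these identify responses for manual review rather than automatic exclusion; that division of labour between automated triage and human judgment is exactly the pairing the workshop emphasizes.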
We argue that conversations and supports around AI-generated quantitative survey response data must begin at the student level to create a ripple effect: raising the standard for data quality at the outset, supporting journals in ensuring that submissions implement fraud-detection measures, and encouraging survey platforms to evolve and improve their own standards to protect survey data. If such frameworks were integrated into graduate training, students would be better equipped to plan for, detect, and mitigate threats to data integrity, saving time and resources, reducing stress, and improving their understanding of such issues.
This workshop is not just a toolkit, but a call to action for institutions to adopt supportive structures and embed this critical training into research curricula. We all need a COMPASS and a unified approach to navigating research integrity in the age of AI: students, professors, and IT departments alike.