Kyle Wagner

3 articles
University of Minnesota System
ORCID: 0000-0002-3201-5325
  1. When collaborating turns into dishonesty: A data-driven heuristic comparing human and AI collaborators
    Abstract

With respect to AI writing technologies (AIWT), we pose three foundational questions about academic dishonesty. First, do writing instructors and students perceive differences between AI agents and human agents in classroom scenarios? Second, to what extent are writing instructor and student perceptions aligned? Third, what types of writing scenarios are perceived as academic dishonesty? Answering these questions not only provides a baseline of comparison for future studies of AIWT collaboration but also contextualizes perceptions of human-to-human collaboration. We report on a large-scale experimental survey study that answers these questions using item response theory (IRT). Our findings demonstrate that while there are differences between AI and human collaborating agents, writing instructors and students are generally aligned in their perceptions. Using a Rasch model, we find that academic dishonesty operates along a spectrum of textual production. Regardless of whether the collaborating agent is human or AI, the more text an agent produces, the more the collaboration is perceived as academically dishonest; conversely, the less text produced, the less dishonest the scenario is perceived to be. In our discussion, we provide a data-driven heuristic to guide instructors and administrators.

    doi:10.1016/j.compcom.2025.102947
  2. Comparing Student and Writing Instructor Perceptions of Academic Dishonesty When Collaborators Are Artificial Intelligence or Human
    Abstract

It remains unclear whether perceptions of academic dishonesty concerning artificial intelligence writing technologies (AIWTs) present new challenges or reflect prior, non-AI concerns. To structure this problem, we used a randomized controlled survey experiment. We compared student (n = 603) and instructor (n = 312) attitudes toward dishonesty in collaborations involving humans versus AIWTs across 10 writing-related scenarios. Results suggest similar perception patterns among students and instructors, with both populations expressing significant differences in perceived dishonesty between AI and human collaborators in some scenarios. This experiment structures the problem of AI writing and academic dishonesty for future research in this emerging field.

    doi:10.1177/10506519241239937
  3. Peering into the Internet Abyss: Using Big Data Audience Analysis to Understand Online Comments
    Abstract

This article offers a methodology for conducting large-scale audience analysis called "big data audience analysis" (BDAA). BDAA uses distant reading and thin description to examine a large corpus of text data from online audiences. In this article, that corpus is approximately 450,000 online reader comments. We analyze this corpus through sentiment analysis, statistical analysis, and geolocation to identify trends and patterns in large datasets. BDAA can better prepare technical and professional communication (TPC) researchers for large-scale audience studies.

    doi:10.1080/10572252.2019.1634766