All Journals

287 articles
Topic: artificial intelligence

July 2026

  1. LAWE-CL2: Multi-agent LLM-based automated writing evaluation system integrating linguistic features with fine-tuning for Chinese L2 writing assessment
    doi:10.1016/j.asw.2026.101051
  2. Anchor is the key: Toward accessible automated essay scoring with large language model through prompting
    doi:10.1016/j.asw.2026.101053
  3. Educator perspectives on automated writing scoring and feedback for young language learners: Applying a fairness and justice lens
    doi:10.1016/j.asw.2026.101050
  4. Accuracy and fairness of generative AI in automated essay scoring: Comparing GPT-4o, feature-based models, and human raters
    doi:10.1016/j.asw.2026.101047

June 2026

  1. Computers & composition research at the dawn of generative AI: Threats, opportunities & future directions
    doi:10.1016/j.compcom.2026.103001
  2. Integrating generative AI in first-year writing: Lessons from a pilot initiative
    doi:10.1016/j.compcom.2026.102982
  3. “Article laundry” or “tutor in pocket?”: Multilingual writers’ generative AI-assisted writing in professional settings
    Abstract

    • Generative AI can help multilingual communicators in professional writing.
    • Generative AI supports email/report writing and meeting summaries.
    • Practical, ethical, and legal concerns remain.
    • Students’ AI use in the workplace informs academic writing teaching and learning.

    Because multilingual students’ languaging practices are not limited to academic settings, it is important to explore their lived experiences communicating in real-world situations to shed light on how to prepare them in college classrooms in the era of generative AI. Drawing upon writing samples, artifacts, and interview data, this case study brings attention to the potential and challenges a multilingual international student faces in implementing generative AI-assisted written communication during her five-month internship in the workplace. The findings indicate that generative AI tools, especially ChatGPT, have the potential to help multilingual communicators meet their written linguistic demands in professional contexts, especially in email writing, report drafting, and meeting summarization. Generative AI-assisted writing tools could assist multilingual students with idea expression and boost their confidence and agency in communication. Yet, despite these advantages, practical, ethical, and legal concerns remain. This study contributes to the scarce yet budding literature exploring multilingual international students’ AI engagement in professional settings and offers concrete pedagogical implications and directions for future research.

    doi:10.1016/j.compcom.2026.102983

May 2026

  1. Leveraging Human-Centered Design and Artificial Intelligence to Improve Rural Healthcare: Wicked Problems, Design Thinking, and Mutable Methodologies
    Abstract

    This study explores how a human-centered design (HCD) approach encourages written communication researchers to rethink methodologies when studying wicked problems, particularly in healthcare communication contexts. We argue for “methodological mutability” as a strategy to address complex and evolving challenges in rural healthcare communication. Using design thinking principles, we investigated how generative AI (GenAI) and machine learning can enhance medical communication, streamline documentation, and improve telemedicine usability. Our research revealed that rural healthcare providers view effective patient-provider communication as their primary challenge. This finding led us to pivot toward exploring how AI applications can structure and enhance patient narratives. We advocate for researchers to adopt a designer mindset, integrating methodological flexibility to move beyond problem analysis and instead develop solutions. By embedding HCD, design thinking, and methodological mutability into research design, researchers can prioritize practical interventions when working in spaces beset by wicked problems.

    doi:10.1177/07410883261440256

April 2026

  1. Selections From the ABC 2025 Annual International Conference, Long Beach, California, USA: Classroom Activities for Teaching Artificial Intelligence (AI) and Social Media Skills in the Business Communication Classroom
    Abstract

    This article presents a curated collection of six teaching innovations presented at the Association for Business Communication 90th conference in Long Beach, California, as well as online, in October 2025. These My Favorite Assignment (MFA) presenters demonstrated activities for helping students understand the use of artificial intelligence (AI) and social media in business communication. This 34th edition of My Favorite Assignment introduces readers to a variety of classroom-ready ideas that integrate tasks involving social media and AI. Teaching support materials—instructions to students, stimulus materials, slides, rubrics, frequently asked questions, links, and sample student projects—are downloadable from the Association for Business Communication website.

    doi:10.1177/23294906261432116
  2. ChatGPT feedback and emotional engagement in L2 writing: A control-value theory perspective using Q-methodology
    doi:10.1016/j.asw.2026.101045
  3. Generative artificial intelligence for automated writing evaluation: A systematic review of trends, efficacy, and challenges
    doi:10.1016/j.asw.2026.101041
  4. Associations of adolescents’ argumentative writing scores and growth when evaluated by different human raters and artificial intelligence models
    doi:10.1016/j.asw.2026.101015
  5. Developing students’ feedback literacy in disciplinary academic writing through generative artificial intelligence
    doi:10.1016/j.asw.2026.101030
  6. Assessing fairness in finetuned scoring models with demographically restricted training data
    Abstract

    The increasing adoption of automated essay scoring (AES) in high-stakes educational contexts necessitates careful examination of potential biases within these systems. This study investigates how the demographic composition of training data influences fairness in AES systems developed from finetuned large language models (LLMs). Using the PERSUADE corpus of 26,000 student essays, we conducted a systematic analysis using demographically restricted training sets to isolate the impact of training data demographics on LLM-AES performance. Each demographically restricted training set comprised essays written by one racial/ethnic group. Four variants of a Longformer-based AES were developed: one trained on demographically balanced data and three trained on demographically restricted datasets. An initial analysis of the human ratings indicated that demographic factors significantly predict human essay scores (marginal R² = 0.125), a pattern that is paralleled in national writing assessment data. LLM-AES systems trained on demographically restricted data exhibited small systematic biases (marginal R² = 0.043). However, the LLM trained on balanced data showed minimal demographic bias, suggesting that representative training data can effectively prevent amplification of demographic disparities beyond those present in human ratings. These results highlight both the importance and limitations of training data diversity in achieving fair assessment outcomes.

    • 12.5% of variance in human essay ratings was explained by demographics.
    • We construct demographically restricted training sets to isolate bias.
    • Balanced training data minimized LLM-AES bias across demographic groups.
    • LLM-AES trained on demographically restricted data showed more bias.

    doi:10.1016/j.asw.2026.101032
  7. The impact of ChatGPT’s feedback on L2 Chinese learners’ writing outcome, confidence, and emotions: A mixed-method quasi-experimental study
    doi:10.1016/j.asw.2026.101027
  8. How to Write With GenAI: A Framework for Using Generative AI to Automate Writing Tasks in Technical Communication
    Abstract

    Generative artificial intelligence (AI) is reshaping technical communication, necessitating strategies to assess its impact. This article introduces a framework combining human-in-the-loop automation with a task-based approach for communication roles. Effective AI integration requires identifying and organizing key writing tasks to fit into automated workflows. The framework underscores the value of writing expertise and offers practical guidance for practitioners, scholars, and educators. By aligning AI tools with technical communication tasks, professionals can produce accurate and complex communication products. This approach highlights the essential role of human expertise in effective, AI-assisted writing.

    doi:10.1177/00472816251332208

March 2026

  1. Canon to Code: Rhetorical Rulemaking for Generative AI Content Audits and Governance
    Abstract

    This article proposes the Canon to Code (C2C) Auditing Framework for evaluating generative artificial intelligence (AI) output through classical rhetoric, arguing that AI's characteristic failures—guessing instead of knowing, politeness instead of credibility, and confidence instead of judgment—revisit problems that rhetoric has addressed since antiquity. Developed using a rulemaking methodology and drawing on classical rhetorical theory, this framework presents 10 auditing rules that operationalize rhetorical principles into evaluation criteria for AI-generated content, focusing on accuracy, transparency, and accountability. It offers content auditors, technical communicators, and compliance professionals a theoretically grounded method for distinguishing AI output that meets audience needs from output that simulates credibility through pattern matching.

    doi:10.1177/00472816261429907
  2. Human-Centered, Tool-Assisted: Engaging Critically with Generative Artificial Intelligence in the Technical Editing Classroom
    doi:10.1080/10572252.2026.2646515
  3. Integrating Human and Artificial Intelligence: Software in the Age of AI, by Steven K. Reed [Book Review]
    Abstract

    Presents a review of the book Integrating Human and Artificial Intelligence: Software in the Age of AI.

    doi:10.1109/tpc.2026.3659967
  4. Chinese EFL learners’ engagement with ChatGPT feedback on academic writing: A case study in Malaysia
    Abstract

    • Postgraduates engaged behaviorally, affectively, and cognitively with GenAI feedback.
    • Postgraduates dealt with ChatGPT primarily as a tool for refining their proposals, not for generating content.
    • Postgraduates demonstrated agency by actively questioning, annotating, and negotiating feedback.
    • Postgraduates engaged in diverse affective responses, ranging from appreciation to frustration.

    As generative artificial intelligence (GenAI) tools such as ChatGPT become increasingly integrated into English as a Foreign Language (EFL) academic writing contexts, learners’ engagement with AI-generated feedback remains insufficiently examined. This case study investigated how four Chinese EFL postgraduates enrolled in a course at a Malaysian university engaged with ChatGPT feedback while revising their academic research proposals. The study triangulated screen recordings, pre- and post-revision drafts, and stimulated recall interviews. Participants displayed a range of behavioural strategies, including accepting, questioning, and rejecting suggestions, annotating visually, and seeking external validation. Affective responses ranged from appreciation and curiosity to doubt and frustration, particularly when feedback appeared conflicting or imprecise. Cognitively, learners applied various strategies such as evaluating, comparing, and negotiating feedback and regulating its use. Yet they showed differing levels of engagement, shaped by individual perceptions and writing intentions. Importantly, participants regarded ChatGPT as a tool for linguistic refinement rather than content generation. Overall, the findings revealed that learners did not passively receive feedback but interacted with it in agentive and critical ways. The study highlights the interplay among these three dimensions of engagement and the importance of individual differences when evaluating the pedagogical potential of GenAI-generated feedback in academic writing.

    doi:10.1016/j.compcom.2025.102976
  5. Wicked modes in UX: Pedagogical considerations for data détournement
    Abstract

    User experience (UX) as both a vocation and a skillset currently sits at the center of a wicked knot: emerging technologies such as generative artificial intelligence (GenAI) and large language models (LLMs) are (for the moment) widely accessible in unprecedented ways and are already heavily integrated into modern workplace practices and educational spaces. Further, workplace demands have led to a change in perception of the function and value of UX, and the field is facing new obstacles to hiring and research funding. Our article argues that a resituation of UX is needed: we, as instructors and administrators, need to focus on UX as an act of slow, embodied, and multimodal UX composition. To do this work, we offer the strategy of détournement as central to UX curriculum and to preparing students for design work in a variety of rhetorical situations, expressed through our example assignments for instructors to implement within the college classroom.

    doi:10.1016/j.compcom.2025.102977

February 2026

  1. Book Review: Sutherland, K. E. (2025). Artificial Intelligence for Strategic Communication. Palgrave Macmillan, 483 pp.
    doi:10.1177/23294906261423373
  2. Feedback-Only AI for Writing Instruction: A Constrained-Generative Tool That Preserves Authorship
    Abstract

    This study evaluates a “feedback-only,” constrained-generative AI tool designed to support revision without generating or rewriting student text. StoryCoach was developed for a business communication elective and grounded in cognitive apprenticeship with principles of feedback literacy. The tool generated structured feedback: one strength, one opportunity, and one reflective question per submission. Analysis of 57 paired drafts showed significant gains in feature-specific rhetorical execution, with vividness as the primary quantitative indicator (Cohen’s d = 1.39), supported by independent reader judgments and student reflections. Findings demonstrate that constrained-generative AI can function as a pedagogical partner that strengthens rhetorical awareness and preserves authorship integrity.

    doi:10.1177/23294906251414835
  3. Generative AI use in college writing classes: An analysis of student chat logs and writing projects
    Abstract

    This study contributes to the emerging research on generative AI and writing pedagogy by exploring how college writing students make use of GAI when offered instruction in a range of responsible uses and latitude to integrate it into their writing process as they see fit. We analyzed chat log data and papers from participants recruited from six sections in which students were guided in experimenting with ChatGPT Plus and permitted to use it to produce up to 50% of submitted work. Through a combination of AI and human thematic content analysis of student chat logs, we found that in 18.6% of prompts, students asked ChatGPT to write for them. The rest of the prompts involved work leading up to or in support of the writing process. Human thematic content analysis of papers showed that students used ChatGPT to generate 8.2% of the writing they submitted. The most common rhetorical purpose of the AI-generated text they included was discussion/analysis/synthesis. English as a foreign language students (EFLs) in the sample prompted ChatGPT to clarify understanding less often than non-EFLs and integrated less AI-generated text into their papers, with a particularly notable difference in their use of AI-generated summaries. This unexpected finding merits further research, but it suggests that EFLs may use GAI for somewhat different purposes than non-EFL peers.

    doi:10.17239/jowr-2026.17.03.05
  4. Empirical studies of writing and generative AI: Introduction to the special issue
    Abstract

    This special issue of the Journal of Writing Research brings together seven empirical studies of the relationship between writing and generative AI, examining what can be systematically observed and measured about the functioning of generative AI in educational and professional writing contexts. Collectively, the studies demonstrate the necessity and value of methodological pluralism for investigating a complex, rapidly evolving phenomenon. In their contributions, the researchers use experimental comparisons, mixed-methods intervention designs, corpus-based analyses, computational linguistic techniques, and qualitative interpretive approaches. Taken together, these methods enable lines of inquiry that no single approach could sustain: comparisons of AI and human performance in professional writing tasks; analyses of how writers at different ages and levels of expertise engage AI tools; examinations of how assessment systems register and respond to AI-generated prose; and investigations of how human readers interpret texts with ambiguous authorship. By foregrounding both the affordances and limitations of different methodological traditions, the articles present a multifaceted approach to the study of writing and generative AI.

    doi:10.17239/jowr-2026.17.03.01
  5. Can ChatGPT do the same? ChatGPT and professional editors compared
    Abstract

    Since the launch of ChatGPT, the use of and debate around generative AI has grown rapidly. Professionals whose work depends on writing have expressed concern about the potential impact of such tools on their roles. But are these concerns justified? Can ChatGPT truly take on the responsibilities of a professional writer? This study investigates that question by comparing the performance of ChatGPT with that of professional editors tasked with optimizing business communication. We conducted two studies, using both qualitative and quantitative methods. In the first, three experienced editors were asked to rewrite four business letters. Their editing processes were recorded using the Microsoft Snipping Tool, and immediately afterward, we conducted retrospective interviews using stimulated recall. These interviews were transcribed and analyzed. Insights from the observations and interviews informed the design of the prompt instructions used in the second study. In the second study, we asked ChatGPT to revise the same four letters using three different prompt types. The Simple prompt instructed the model to “make this text reader-focused.” The B1 prompt referred explicitly to the CEFR B1 language level, requiring ChatGPT to tailor the text for intermediate readers. Finally, the Process prompt simulated the editing steps observed in the professional editors’ workflows. To evaluate outcomes, we conducted both a qualitative comparison of the revised texts and a quantitative readability analysis using LiNT, a validated tool developed for Dutch texts. Our results show that the human editors substantially improved the readability of the original letters, reducing the use of unfamiliar words, shortening complex sentences, and increasing personal engagement through pronoun use. Among the AI outputs, ChatGPT B1 achieved results most comparable to the editors, both in readability and accuracy. In contrast, ChatGPT Simple fell short in terms of clarity and introduced errors through faulty inferences. Surprisingly, ChatGPT Process also underperformed compared to ChatGPT B1 and the human editors. Only the editors’ and ChatGPT B1 versions were free from errors. In the discussion, we reflect on how generative AI is reshaping the concept of writing within organizations, the skills required to produce effective written communication, and the impact on writing pedagogy. Rather than replacing human editors, we argue that generative AI can play a valuable role as a collaborative tool in the organizational writing process.

    doi:10.17239/jowr-2026.17.03.02
  6. Using AI to understand students’ self-assessments of their writing
    Abstract

    This study focuses on a generative AI approach to facilitate qualitative analysis in Writing Studies research. We gathered 13,336 one-sentence to one-paragraph responses written by 3,334 incoming students in a directed self-placement program administered at a large R1 U.S. university. In these responses, students describe their high school writing experience and college writing expectations. In stage one of the project, we pilot the use of Retrieval-Augmented Generation to expedite the selection of relevant responses for a topic—in this case, students’ positive self-assessments as writers. The selected responses were then compared to a random sample and rated by three faculty with writing expertise. In stage two, these faculty generated codes and themes from a subset of the responses, incorporating ChatGPT-4 through the stages of thematic analysis. Results show that the use of AI expedites and enhances qualitative analysis, but human participation in the process is still essential. We suggest a machine-in-the-loop framework with which Writing Studies researchers can more readily integrate generative AI to study large corpora of student writing.

    doi:10.17239/jowr-2026.17.03.07
  7. Prompting for scaffolding: A thematic analysis of K-12 students’ use of educational chatbots for writing support
    Abstract

    With the emergence of generative artificial intelligence, dialogue systems like chatbots are redefining traditional concepts of authorship and impacting critical aspects of writing. In educational contexts, previous research has pointed out new opportunities associated with using chatbots for writing instruction and support. This study involved 108 students across 10 classes in Norwegian K-12 education, examining how they employed educational chatbots as a support tool in L1 writing assignments. Through an inductive, data-driven thematic analysis of 895 student prompts, six recurring patterns emerged: information requests, structural guidance, example requests, content creation, feedback on text, and follow-up clarification. Aggregated results show that information requests were the most common pattern, particularly among younger students, whereas content creation and feedback on text were more prevalent among secondary and upper secondary students. Illustrative examples from the conversations revealed that generative AI extensively produced content on students’ behalf, even when students primarily sought scaffolding. The study proposes that effective scaffolding of writing through educational chatbots requires not only refining students’ prompting strategies but also enhancing system designs that better support pedagogical use of generative AI.

    doi:10.17239/jowr-2026.17.03.04
  8. Augmenting AI scoring of essays with GPT-generated responses
    Abstract

    In this study, we examine the feasibility of augmenting student-written essays with those generated by large language models (LLMs) for scoring essays. We found that with appropriate instructions, generative AI systems such as GPT-4 and GPT-4o can generate essays similar to those written by students in terms of surface-level linguistic features, although material differences may still exist. Systematic analyses revealed that scoring models trained with synthetic data perform comparably to models trained using student essays, but the performance varies across prompts and the sizes of the model training sample. The augmented models could alleviate large discrepancies between human and AI scores at the subgroup level that may be introduced by a lack of training samples for a particular subgroup or by inherent biases in LLMs. We also explored DecompX, an established token-importance method, to identify and explain AI predictions. Future research directions and limitations of this study are also discussed.

    doi:10.17239/jowr-2026.17.03.06
  9. Enhancing elementary students' writing habits with generative AI: A study of handwritten diary and AI companions
    Abstract

    This empirical exploration investigates how integrating a handwritten diary with a generative AI writing companion can strengthen elementary school students' writing habits and interests in a naturalistic classroom setting. The AI companion serves as a personalized assistant, offering real-time ideas, suggestions, and feedback. By encouraging students to handwrite daily experiences and emotions, then digitize their entries, the approach fosters both reflection and skill development. Over 18 weeks, 32 students from grades three to five (average age 10.5 years old) recorded their diary in Chinese and interacted with the AI companion. This exploratory study employed a pre-post, single-group design, analyzing diary entries, interaction logs, and questionnaire data to assess changes in writing participation and interest. The findings indicate three major outcomes: a notable increase in writing participation, reflected by a rise in the number of ideas and entry length; an enhanced level of writing interest, demonstrating the effectiveness of merging traditional handwriting with AI tools; and improved writing behavior through more frequent and diverse writing activities. When students encountered challenges—such as topic selection or content organization—the AI companion supplied up to three suggestions, preventing information overload and preserving independent thinking. Overall, this interactive, AI-supported environment transformed writing from a solitary task into a dynamic, collaborative process, boosting motivation and quality. The study thus illustrates how strategically blending handwritten diary with innovative AI systems can enrich writing education and sustain students' long-term engagement, while acknowledging its exploratory nature and the need for further research to establish causal links.

    doi:10.17239/jowr-2026.17.03.03
  10. LLMs in Composition: Theory, Ethics, and Implementation in the Workplace and Classroom
    Abstract

    Large Language Models (LLMs) have ignited discourse within the Technical and Professional Communications (TPC) community in relation to authorship and accountability. This article employs a qualitative synthesis of current and theoretical scholarship regarding authorship theory and LLMs. This analysis argues that while LLMs provide assistance to improve human-generated text, LLMs are unable to participate in authorship, as they cannot be held accountable for their outputs, participate in reciprocity, or demonstrate rhetorical awareness regarding audience and context. The analysis urges professors and professionals to consider concrete guidelines surrounding LLM usage to create transparency in the classroom and workplace.

    doi:10.1177/23294906261415597

January 2026

  1. Expanding Human-in-the-Loop: Critical Sensemaking for Technical and Professional Communication With Generative AI
    Abstract

    This article proposes a sensemaking methodology to enhance human-in-the-loop technical and professional communication (TPC) practices when working with generative artificial intelligence (GenAI) output, which is often ambiguous and not always accurate. Sensemaking describes the actions and cognitive strategies humans use to make sense of new or ambiguous information. We argue that sensemaking can help TPC students exercise better judgment in evaluating GenAI output. In particular, we leverage sensemaking’s Situation-Gap-Bridge-Outcome framework as a heuristic for identifying situational contexts outside of GenAI, locating gaps in knowledge, creating bridges for those gaps, and evaluating outcomes; we connect this framework to extant TPC literature and discuss its implications.

    doi:10.1177/00472816251405787
  2. Generative artificial intelligence for automated essay scoring: Exploring teacher agency through an ecological perspective
    Abstract

    Generative artificial intelligence (AI) is increasingly used in writing assessment, particularly for automated essay scoring (AES) and for generating formative feedback within automated writing evaluation (AWE). While AI-driven AES enhances efficiency and consistency, concerns regarding accuracy, bias, and ethical implications raise critical questions about its role in assessment. This paper examines the impact of generative AI on teacher agency through an ecological perspective, which considers agency as shaped by personal, institutional, and sociocultural factors. The analysis highlights the need for teachers to critically mediate AI-generated scores and feedback to align them with pedagogical goals, ensuring AI functions as an assistive tool rather than a determinant of assessment outcomes. Although AI can streamline assessment, over-reliance risks diminishing teachers’ evaluative expertise and reinforcing biases embedded in AI systems. Ethical concerns, including transparency, data privacy, and fairness, further complicate its adoption. To address these challenges, this paper proposes a framework for responsible AI integration that prioritizes bias mitigation, data security, and teacher-driven decision-making. The discussion concludes with pedagogical implications and directions for future research on AI-assisted writing assessment.

    • Teachers can actively mediate AI-generated scores to maintain agency.
    • Dependence on AES may weaken teachers’ evaluative skills.
    • Bias, data privacy, and AI opacity can undermine teachers’ decision-making.
    • AI literacy and hybrid assessment models can promote teacher autonomy.
    • A framework for protecting teacher agency in generative AI–based AWE is presented.

    doi:10.1016/j.asw.2025.100990
  3. Extracting interpretable writing traits from a large language model
    Abstract

    Large language models (LLMs) are increasingly used to support automated writing evaluation (AWE), both for purposes of scoring and feedback. However, LLMs present challenges to interpretability, making it hard to evaluate the construct validity of scoring and feedback models. BIOT (best interpretable orthogonal transformations) is a new method of analysis that makes dimensions of an embedding interpretable by aligning them with external predictors. It was originally developed to improve the interpretability of multidimensional scaling models. However, this paper shows that BIOT can be used to align LLM embeddings with an interpretable writing trait model developed using multidimensional analysis of classical NLP features to measure latent dimensions of writing style and writing quality. This makes it possible to determine whether an AWE model built using an LLM is aligned with known (and construct-relevant) dimensions of textual variation, supporting construct validity. Specifically, we examine the alignment between the hidden layers of DeBERTa, a small LLM that has been shown to be useful for a variety of natural language processing applications, and a writing trait model developed through factor analysis of classical features used in existing AWE models. Specific dimensions of transformed DeBERTa layers are strongly correlated with these classical factors. When the transformation matrix derived using BIOT is applied to token vectors, it is also possible to visualize which tokens in the original text contributed to high or low scores on a specific dimension.

    • Large language models (LLMs) are increasingly used to support automated writing evaluation (AWE).
    • LLMs present challenges to interpretability, making it hard to evaluate the construct validity of scoring and feedback models.
    • BIOT is a new interpretation method that aligns embedding dimensions with external predictors.
    • Specifically, BIOT can be used to align LLM embeddings with classical NLP measures of aspects of style and writing quality.
    • This demonstrates a general method to determine whether an LLM latently represents construct-relevant dimensions.

    doi:10.1016/j.asw.2025.101011
  4. Generative Artificial Intelligence, Writing Placement, and Principled Decision Making in U.S. Postsecondary Contexts: A White Paper
    doi:10.37514/jwa-j.2026.9.1.01
  5. STEM Gets Personal: The Medical School Personal Statement as Developmental Writing Opportunity Amid Generative AI
    doi:10.37514/atd-j.2026.22.3-4.03
  6. Increasing Literacy on the Scams Targeting Latines: Generative Artificial Intelligence, Digital Technologies, and the Latine Community
    Abstract

    This article builds a heuristic that raises the artificial intelligence (AI) literacy of Latine students. Nefarious people are exploiting marginalized Latine communities by using AI in creative partnerships, similar to those described in technical communication research, to build social profiles of Latines. These people are rhetorically using AI in passive-income and voice-over scams that target Latines who are insecure about their financial and citizenship situations. The heuristic offered here guides instructors on how to increase Latine students’ AI literacy by making these students aware of the rhetorical relationships between nefarious individuals and AI.

    doi:10.1177/10506519251372578
  7. Book Review: Artificial Intelligence for Strategic Communication by Karen E. Sutherland (2025). Palgrave Macmillan Singapore. 486 pp. $109.99 hardcover, $89.99 eBook. ISBN: 978-981-96-2574-1. https://doi.org/10.1007/978-981-96-2575-8
    doi:10.1177/10506519251372588
  8. Generative Artificial Intelligence, Interdisciplinarity, and the Global English-Medium Knowledge Economy
    Abstract

    This State of the Inquiry (SotI) critically investigates the implications of generative artificial intelligence (GAI) for interdisciplinary research and scholarly communication within the global English-medium knowledge economy (GEMKE). Anchored in three guiding questions, the article interrogates (1) the extent to which GAI facilitates genuine interdisciplinary knowledge production versus reinforcing entrenched disciplinary silos; (2) how GAI’s dependence on established academic infrastructures influences the visibility and legitimacy of particular interdisciplinary fields; and (3) the impact of automated cross-disciplinary synthesis on the epistemic agency and intellectual labor of human scholars. While GAI holds potential to enhance research efficiency and foster new forms of interdisciplinarity, the outcomes of its integration depend largely on how scholars employ these tools; without critical and contextually informed use, it may contribute to epistemic homogenization and the marginalization of nondominant knowledge systems. The SotI advocates for a critically reflexive and contextually informed approach to the integration of GAI in academic practice, while also recognizing the capacity of scholars—particularly those on the (semi)periphery—to actively shape, adapt, and resist these tools in ways that foster inclusive and transformative interdisciplinary scholarship.

    doi:10.1177/07410883251372206
  9. AI Admin: Provocations through Generated Play
    Abstract

    This piece juxtaposes two games created with generative AI: a commentary on the challenges of being an administrator handling competing demands regarding the use of generative AI, and a similar game structure centered on the digital humanities. Together, these two works offer a commentary on the conversations around generative AI in the humanities and a demonstration of the increasing value of these tools as part of multimodal composition.

December 2025

  1. Student Evaluative Judgements of Writing and Artificial Intelligence: The Disconnect between Structural and Conceptual Knowledge
    Abstract

    This paper reports on how undergraduate students evaluated writing outputs created with and without generative artificial intelligence (AI). The paper focuses specifically on two aspects of writing and AI: how prior writing knowledge influenced students’ thinking about AI tools, and how the writing skills to which they were exposed in the writing classroom helped them work with AI-generated materials. This research builds upon Bearman et al.’s (2024) work on evaluative judgement as a pedagogical tool to support learners as they work with AI-mediated texts. The paper uses this lens to identify challenges that learners have in applying writing knowledge to AI-mediated situations and to devise pedagogical means to support student learning in these contexts. We found that, while students could typically evaluate structural components of writing, they struggled to evaluate conceptual ideas in both AI-generated and human-generated texts. The findings speak more generally to the need for students to develop their evaluative abilities, as well as ways that AI may reveal and amplify existing challenges that learners have with evaluating the quality of writing, engaging with source materials, and applying genre knowledge to create meaning.

    doi:10.18552/joaw.v15i2.1346
  2. From Chatbot to Classroom: Developing Critical Thinking and Evaluative Judgment With AI
    Abstract

    A customized chatbot and structured interactions with ChatGPT were integrated into professional business communication pedagogy to foster critical reading, evaluative judgment and independent writing skills. The iterative-experiential learning feature of AI was utilized. AI (the chatbot and ChatGPT) was conceptualized as an assistant, coach, and provocateur in learning rather than a shortcut to bypass effort. The effectiveness of the intervention was explored through students’ reflections and learning experiences. The findings suggest that AI interventions for developing critical reading and writing skills can enhance traditional pedagogies and the learning curve. Implications and limitations of the study were also discussed.

    doi:10.1177/23294906251399552
  3. Complexity of Purpose Revisited: AI-Assisted Cognition in Professional Communication
    Abstract

    With ChatGPT’s public release, artificial intelligence (AI) has had a profound effect on professional communication. Although clearly beneficial in manipulating large volumes of information, AI cannot provide the insights into each company’s uniqueness—its culture, organizational dynamics, and operational controls—factors defining the character, precision, and tailoring demanded in professional communications. Those attributes depend on the creativity, reasoning, and theory-based causal logic of human cognition. By reexamining the process of developing professional communications, from discovering embedded purposes through final product, we can demonstrate to students how AI can be applied to encourage creativity and promote the powers of human intellect.

    doi:10.1177/23294906251399540
  4. A Well-Trained Eye: Artificial Intelligence and the Epistechnics of Wonder
    doi:10.1080/02773945.2025.2598736
  5. Translation Studies in the Age of Artificial Intelligence, Sanjun Sun, Kanglong Liu, and Riccardo Moratto, Eds. [Book Review]
    doi:10.1109/tpc.2025.3612129
  6. Questions as Elements of Argumentation in Political Debates
    Abstract

    The role of interrogative sentences in political argumentation remains largely unexplored. This study addresses this gap by introducing a new Polish-language dataset featuring diverse examples of interrogative sentences in political discourse (election debates). The dataset serves as a unique resource for theoretical research in Argumentation Mining and Natural Language Inference through the annotation of ⟨IS, C⟩ and ⟨IS, P⟩ pairs, where IS denotes an interrogative sentence, C represents its corresponding conclusion, and P indicates a premise. The annotations primarily capture implicitly expressed argumentative structures and can serve as a benchmark for large language models (LLMs), particularly those trained on Polish-language data. Furthermore, this is the first study in Argumentation Mining where annotators independently verbalize the content of conclusions and premises conveyed through speech acts constructed with interrogative sentences. Our findings reveal that interrogative sentences in political debates most frequently function as implicature (approx. 45%), normative propositions (approx. 31%), statements expressing epistemic states (approx. 20%), and presuppositions (approx. 4%). Semantic similarity analysis confirms that annotators achieve a high level of consistency in identifying and verbalizing the content implied by interrogative sentences. The dataset provides a robust foundation for developing advanced language models and for further research into the role of interrogative sentences in political discourse.
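    The ⟨IS, C⟩ and ⟨IS, P⟩ annotation scheme described above can be pictured as a simple record pairing an interrogative sentence with annotator-verbalized content. This is a hypothetical sketch: the field names, labels, and example sentence are ours, not taken from the dataset:

    ```python
    from dataclasses import dataclass

    # Hypothetical schema for one annotated pair; all names are illustrative.
    @dataclass
    class ArgPair:
        interrogative: str  # IS: the interrogative sentence as uttered
        content: str        # C (conclusion) or P (premise), verbalized by annotator
        role: str           # "conclusion" or "premise"
        function: str       # e.g. "implicature", "normative", "epistemic", "presupposition"

    # Invented English example in the spirit of the described debate data.
    pair = ArgPair(
        interrogative="How many times must citizens pay for your broken promises?",
        content="The government has repeatedly broken its promises.",
        role="premise",
        function="presupposition",
    )
    print(pair.role, pair.function)
    ```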

    doi:10.1007/s10503-025-09674-z
  7. Trusting Each Other, Trusting Machines: Undergraduate Students’ Perceptions of Copresence Afforded by Writing Technologies, Networked Platforms, and Generative AI in Their Academic Writing Practices
    Abstract

    This article examines how students use and perceive digital writing tools, including chat platforms and generative AI, within academic writing environments. It describes a qualitative study of 15 undergraduate students in guided focus group discussions. In a grounded theory analysis of focus group transcripts, the researchers explored undergraduates’ sense of copresence—their perception of support through human interaction with both peers and instructors and through AI technologies during their writing processes. Findings reveal that students’ trust in both peer feedback and AI assistance plays a crucial role in their writing, shaping their decisions about which tools to use and how they integrate human and AI feedback in the development and revision of their writing. The study sheds light on students’ nuanced understanding of the affordances and limitations of multimodal chat platforms and generative AI technologies. We conclude by highlighting the need for pedagogical practices that support students’ choice of tools when collaborating in digital spaces. We suggest future research directions that will enable us to better understand how copresence and trust influence students’ writing in these contexts.

    doi:10.3138/wap-2025-0004
  8. Composing with AI
    Abstract

    Composing with AI provides research about the rise of generative AI in composition studies, focusing on histories, policies, reports of classroom and student use, multimodal composing and teaching AI literacies.

  9. What We Already Know: Generative AI and the History of Writing With and Through Digital Technologies
  10. The Black-Boxed Ideology of Automated Writing Evaluation Software