All Journals
172 articles
June 2026
-
“Article laundry” or “tutor in pocket?”: Multilingual writers’ generative AI-assisted writing in professional settings ↗
Abstract
• Generative AI can help multilingual communicators in professional writing.
• Generative AI supports email/report writing and meeting summaries.
• Practical, ethical, and legal concerns remain.
• Students’ AI use in the workplace informs academic writing teaching and learning.
Because multilingual students’ languaging practices are not limited to academic settings, it is important to explore their lived experiences communicating in real-world situations to shed light on how to prepare them in college classrooms in the era of generative AI. Drawing upon writing samples, artifacts, and interview data, this case study brings attention to the potential and challenges a multilingual international student faces in implementing generative AI-assisted written communication during her 5-month workplace internship. The findings indicate that generative AI tools, especially ChatGPT, have the potential to help multilingual communicators meet their written linguistic demands in professional contexts, especially in email writing, report drafting, and meeting summaries. Generative AI-assisted writing tools could assist multilingual students with idea expression and boost their confidence and agency in communication. Yet, despite these many advantages, practical, ethical, and legal concerns remain. This study contributes to the scarce yet budding literature exploring multilingual international students’ AI engagement in professional settings and offers concrete pedagogical implications and directions for future research.
April 2026
-
How to Write With GenAI: A Framework for Using Generative AI to Automate Writing Tasks in Technical Communication ↗
Abstract
Generative artificial intelligence (AI) is reshaping technical communication, necessitating strategies to assess its impact. This article introduces a framework combining human-in-the-loop automation with a task-based approach for communication roles. Effective AI integration requires identifying and organizing key writing tasks to fit into automated workflows. The framework underscores the value of writing expertise and offers practical guidance for practitioners, scholars, and educators. By aligning AI tools with technical communication tasks, professionals can produce accurate and complex communication products. This approach highlights the essential role of human expertise in effective, AI-assisted writing.
March 2026
-
Chinese EFL learners’ engagement with ChatGPT feedback on academic writing: A case study in Malaysia ↗
Abstract
• Postgraduates engaged behaviorally, affectively, and cognitively with GenAI feedback.
• Postgraduates treated ChatGPT primarily as a tool for refining their proposals, not for generating content.
• Postgraduates demonstrated agency by actively questioning, annotating, and negotiating feedback.
• Postgraduates showed diverse affective responses, ranging from appreciation to frustration.
As generative artificial intelligence (GenAI) tools such as ChatGPT become increasingly integrated into English as a Foreign Language (EFL) academic writing contexts, learners’ engagement with AI-generated feedback remains insufficiently examined. This case study investigated how four Chinese EFL postgraduates enrolled in a course at a Malaysian university engaged with ChatGPT feedback while revising their academic research proposals. The study triangulated screen recordings, pre- and post-revision drafts, and stimulated recall interviews. Participants displayed a range of behavioural strategies, including accepting, questioning, and rejecting suggestions, annotating visually, and seeking external validation. Affective responses ranged from appreciation and curiosity to doubt and frustration, particularly when feedback appeared conflicting or imprecise. Cognitively, learners applied various strategies such as evaluating, comparing, and negotiating feedback and regulating its use. Yet they showed differing levels of engagement, shaped by individual perceptions and writing intentions. Importantly, participants regarded ChatGPT as a tool for linguistic refinement rather than content generation. Overall, the findings revealed that learners did not passively receive feedback but interacted with it in agentive and critical ways. The study highlights the interplay among these three dimensions of engagement and the importance of individual differences when evaluating the pedagogical potential of GenAI-generated feedback in academic writing.
January 2026
-
Expanding Human-in-the-Loop: Critical Sensemaking for Technical and Professional Communication With Generative AI ↗
Abstract
This article proposes a sensemaking methodology to enhance human-in-the-loop technical and professional communication (TPC) practices when working with generative artificial intelligence (GenAI) output, which is often ambiguous and not always accurate. Sensemaking describes the actions and cognitive strategies humans use to make sense of new or ambiguous information. We argue that sensemaking can help TPC students navigate ambiguous GenAI output and exercise better judgment in evaluating it. In particular, we leverage sensemaking's Situation-Gap-Bridge-Outcome framework as a heuristic to identify situational contexts outside of GenAI, identify gaps in knowledge, create bridges for those gaps, and evaluate outcomes; we then connect this framework to extant TPC literature and discuss its implications.
-
Increasing Literacy on the Scams Targeting Latines: Generative Artificial Intelligence, Digital Technologies, and the Latine Community ↗
Abstract
This article builds a heuristic that raises the artificial intelligence (AI) literacy of Latine students. Nefarious people are exploiting marginalized Latine communities by using AI in creative partnerships, similar to those described in technical communication research, to build social profiles of Latines. These people are rhetorically using AI in passive-income and voice-over scams that target Latines who are insecure about their financial and citizenship situations. The heuristic offered here guides instructors on how to increase Latine students’ AI literacy by making these students aware of the rhetorical relationships between nefarious individuals and AI.
-
Book Review: Artificial Intelligence for Strategic Communication by Karen E. Sutherland ↗
Sutherland, Karen E. (2025). Artificial Intelligence for Strategic Communication. Palgrave Macmillan Singapore. 486 pp. $109.99 hardcover, $89.99 eBook. ISBN: 978-981-96-2574-1. https://doi.org/10.1007/978-981-96-2575-8
-
Generative Artificial Intelligence, Interdisciplinarity, and the Global English-Medium Knowledge Economy ↗
Abstract
This State of the Inquiry (SotI) critically investigates the implications of generative artificial intelligence (GAI) for interdisciplinary research and scholarly communication within the global English-medium knowledge economy (GEMKE). Anchored in three guiding questions, the article interrogates (1) the extent to which GAI facilitates genuine interdisciplinary knowledge production versus reinforcing entrenched disciplinary silos; (2) how GAI’s dependence on established academic infrastructures influences the visibility and legitimacy of particular interdisciplinary fields; and (3) the impact of automated cross-disciplinary synthesis on the epistemic agency and intellectual labor of human scholars. While GAI holds potential to enhance research efficiency and foster new forms of interdisciplinarity, the outcomes of its integration depend largely on how scholars employ these tools; without critical and contextually informed use, it may contribute to epistemic homogenization and the marginalization of nondominant knowledge systems. The SotI advocates for a critically reflexive and contextually informed approach to the integration of GAI in academic practice, while also recognizing the capacity of scholars—particularly those on the (semi)periphery—to actively shape, adapt, and resist these tools in ways that foster inclusive and transformative interdisciplinary scholarship.
December 2025
-
Abstract
Taking stock of the diminishing material conditions faced by contemporary writers broadly conceived, this article (re)frames writing as a site and a practice of exploited labor. Arguing that writing scholars have often avoided interrogating writing’s links to labor, particularly with respect to declining working conditions and the appropriation of value from workers, I draw attention to the pervasive crisis of writing’s devaluation under late capitalism. To evidence this assessment, I apply political economist Harry Braverman’s conception of the “progressive alienation of the process of production”—the notion that labor is increasingly eroded through capitalism’s advancement—to the scene of contemporary gig writing, specifically Amazon’s microtask platform Mechanical Turk (MTurk). MTurk, I maintain, offers a paradigmatic illustration of contemporary writers’ material exploitation, both for its efforts to de-skill writers and for its conscription of writers to advance their own exploitation by employing them to train generative AI.
November 2025
October 2025
-
Abstract
Mainstream artificial intelligence (AI) is an extractive industry that exploits both humans and nonhumans. The extractive underpinning of mainstream AI systems means that technical communicators must be careful when advocating for accessibility and inclusivity in AI because those efforts may expose marginalized groups to further exploitation. Extractive AI also necessitates that technical communicators reconsider how their own discipline may be complicit in the damaging logics and practices of extraction.
-
Abstract
Generative artificial intelligence (GenAI) has brought into question how much ownership college students feel for “their” writing when it is AI-generated. This study recruited 88 college writers at one midwestern state university in the United States. In a within-subjects design, participants composed poems about a meaningful, challenging life experience, then prompted GenAI to compose a poem about that same event. Results showed significantly greater ownership for human-made poems; additionally, human-made poems were rated as more accurately reflective of selected lived experiences. Aesthetic merit, however, was rated higher for AI-generated poems for imagery, language, and form—but not for originality. Half the students preferred GenAI poems, mainly because of their textual features, while less than half preferred human poems, mainly for personal connections to the events presented. Implications for GenAI as a tool to support creative writing and meaningful literacy are explored.
September 2025
-
Syntactic Complexity of AI-Generated Argumentative and Narrative Texts: Implications for Teaching and Learning Writing ↗
Abstract
The integration of generative artificial intelligence (AI) into academic writing has raised questions about the syntactic complexity of AI-generated texts compared to human-authored essays. While studies have explored syntactic complexity in human writing, limited research has compared AI-generated argumentative and narrative texts, particularly in isolating cognitive overload and proficiency factors. This study addressed this gap by examining genre-specific syntactic patterns in AI-generated essays. Using the L2 Syntactic Complexity Analyzer, the study analyzed four hundred AI-generated essays (two hundred argumentative and two hundred narrative) and employed paired t-tests and Pearson correlation coefficients to identify differences and relationships among syntactic measures. Results showed that argumentative essays demonstrated higher syntactic complexity than narrative essays, especially in production unit length, coordination, and phrasal sophistication, while subordination measures remained similar. Correlation analysis revealed that argumentative essays compartmentalized ideas through coordinated and nominally complex structures, while narrative essays integrated descriptive richness through longer sentences and embedded clauses. The findings suggest that genre-specific rhetorical demands shape syntactic complexity in AI-generated writing. Implications for teaching and learning writing and future studies are discussed.
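The study's two statistical comparisons (paired t-tests across genres and Pearson correlations among syntactic measures) can be sketched in a few lines. The essay scores below are hypothetical placeholders, not data from the study; this is a minimal stdlib-only sketch of the general technique:

```python
import math
from statistics import mean, stdev

def paired_t(xs, ys):
    """Paired t statistic for two matched samples, e.g., a syntactic
    measure scored on argumentative vs. narrative essays from the
    same prompts."""
    diffs = [x - y for x, y in zip(xs, ys)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

def pearson_r(xs, ys):
    """Pearson correlation between two syntactic measures."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs) *
                           sum((y - my) ** 2 for y in ys))

# Hypothetical per-essay scores (e.g., mean length of T-unit) for
# six matched prompts; these numbers are illustrative only.
argumentative = [18.2, 20.1, 17.5, 21.3, 19.0, 18.8]
narrative = [15.4, 16.0, 14.9, 17.1, 15.8, 16.2]

print(paired_t(argumentative, narrative))
print(pearson_r(argumentative, narrative))
```

A positive t statistic here would indicate the argumentative scores exceed the narrative ones on average; in practice a library such as SciPy would also supply the p-value.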
-
Using the AI Life Cycle to Unblackbox AI Tools: Teaching Résumé 2.0 with Résumé Analytics and Computational Job-Résumé Matching ↗
Abstract
In response to disruptions introduced to the job market by AI resume screeners, this article introduces a novel theoretical framework for the life cycle of artificial intelligence systems to help unblackbox resume screening AI systems. It then applies the AI life cycle framework to a digital case study of RChilli’s job-resume matching algorithm. The article introduces an eleven-step computational job-resume matching assignment that writing instructors can use in their classrooms to explore the pedagogical implications offered by the AI life cycle framework. The assignment helps students simulate important phases in AI production and development while highlighting biases and ethical concerns in AI screening of resumes. By exploring job-resume analytics, this study helps to teach critical AI and data literacy, make job-resume matching algorithms more explainable, and transform how professional writing can be taught in the age of automated hiring.
-
Abstract
From an unsettled, ambivalent middle between discourses of generative AI integration and refusal, we offer a critical-ethical stance for AI-engaged writing assignments. We apply a critical thinking framework to these assignments, assert critical AI literacy as a kind of critical thinking, and discuss how critical thinking and critical AI literacy can facilitate ethical discernment about generative AI use. This unsettled, critical-ethical stance positions scholars in our field to support context-sensitive pedagogical responses to generative AI across first-year writing, Writing Across the Curriculum, writing centers, and beyond.
-
Abstract
In a relatively short time, market and political forces have intensified the reach of artificial intelligence (AI). AI has become, in a word, climatic—not only a discrete technological system but also a creeping assemblage of ideological, material, and political forces. This article tracks these forces by developing rhetorical climates of AI as a conceptual framework. In doing so, I aim to (1) link the harms of climate change with the rapid buildout of AI infrastructure and (2) shift the frame of the conversation by emphasizing the extractive, exploitative, enclosed, and knotted supremacist conditions that have been prerequisites for building AI systems at scale. While these pervading rhetorical climates may seem unchangeable, I track how microclimates of resistance have developed, in the past and in the present. In particular, I emphasize the importance of bodily intelligence in navigating asymmetrical conditions of power felt in the AI industry. The article concludes by discussing how rhetoric and writing studies can weather the unfolding rhetorical climates of AI by diagnosing conditions, seizing moments, and plotting futures to imagine a less extractive and less harmful world.
-
Abstract
In gaming, cheat codes change how players engage a system by inviting exploration and reducing the fear of failure. Drawing on writing center pedagogy, this article proposes a similar framework for navigating generative AI in writing instruction and positions play as a method for developing critical AI literacy. Writing centers have long served as spaces where students engage collaboratively with new technologies and construct meaning through dialogue. This article extends that tradition by positioning writing center pedagogy as a framework for helping students examine AI’s ethical implications through treating it as a rhetorical situation to be unpacked, which demands principled, human-centered engagement rooted in values such as collaborative exploration. By weaving together writing center praxis and game-informed pedagogy, this article contributes to ongoing conversations in writing studies about how to integrate AI in ways that support critical thinking and ethical reflection. It demonstrates how playful, classroom-tested activities can animate discussions of bias and representation while helping students build rhetorical discernment through experience. Ultimately, the article argues that ethical literacy must be practiced through relational, iterative work. As writing classrooms become one of the few remaining spaces where students encounter generative AI with support and critical context, writing instructors have a vital opportunity to help students learn to write with, against, and around powerful technologies.
-
Abstract
Over the past year, Antonio Byrd, Ira Allen, Sherry Rankins-Robertson, and John Gallagher developed researched recommendations for a Generative AI policy for CCC. From these recommendations, the CCC editorial team wrote an official policy, which is available on our website at https://cccc.ncte.org/cccc/ccc-generative-ai-policy/. We, the editorial team, are grateful for the thoughtful, generous work of these scholars on this project, which is the foundation of the following symposium.
-
AI Writing Is Always Embodied: Building a Critical Awareness of the Invisible Labor of Humans-in-the-Loop in AI Products ↗
Abstract
I argue that composition studies must build critical awareness about how humans from the Global South train AI with their writing embodiments. To draw our attention to how those working in the Global South train AI in harmful conditions, even though AI companies use algorithms and terms of service to smooth away these embodiments, I adapt the concept of humans-in-the-loop. Critical awareness of humans-in-the-loop moves scholarship in writing studies from a focus on AI-human collaboration that begins after an AI produces a text to one that requires us to see how AI products are always already human authored. Through a case study of Google Translate, I demonstrate how a critical awareness of the ways AI can erase the writing embodiment of humans-in-the-loop affords me opportunities to ask generative questions: How does language translation play a role in the erasure of embodied writing? Why does AI produce output biased against marginalized populations when marginalized populations are the ones who moderate AI? Overall, I ask compositionists to see AI products as already human authored so that writing studies can consider the invisible labor of humans-in-the-loop as the field moves forward in researching AI.
-
Abstract
This Research Brief discusses transformers—the core engine for most artificial intelligence applications. The brief situates transformer technology within the field of rhetoric and composition by surveying recent studies; highlights the innovative aspects of transformers; and, finally, thinks through (Majdik and Graham) the operations of transformers and generative AI through Miller’s theory of topoi, illustrating one way in which rhetoric and composition scholars and teachers can critically engage with generative AI in instruction and research.
July 2025
-
Abstract
ChatGPT has created considerable anxiety among teachers concerned that students might turn to large language models (LLMs) to write their assignments. Many of these models are able to create grammatically accurate and coherent texts, thus potentially enabling cheating and undermining literacy and critical thinking skills. This study seeks to explore the extent to which LLMs can mimic human-produced texts by comparing essays by ChatGPT and student writers. By analyzing 145 essays from each group, we focus on the way writers relate to their readers with respect to the positions they advance in their texts by examining the frequency and types of engagement markers. The findings reveal that student essays are significantly richer in the quantity and variety of engagement features, producing a more interactive and persuasive discourse. The ChatGPT-generated essays exhibited fewer engagement markers, particularly questions and personal asides, indicating its limitations in building interactional arguments. We attribute the patterns in ChatGPT's output to the language data used to train the model and its underlying statistical algorithms. The study suggests a number of pedagogical implications for incorporating ChatGPT in writing instruction.
June 2025
-
Abstract
This case study investigates how two ESL graduate students, Ian and Sam, use ChatGPT in their research writing after receiving a comprehensive tutorial based on Warschauer et al.'s (2023) AI literacy framework. We analyzed their engagement with ChatGPT across prompt categories including genre, content, language use, documentation, coherence, and clarity. Data were collected from research paper drafts, ChatGPT chat histories, and interviews. Data analyses included coding ChatGPT prompts, textual analysis of drafts, and thematic analysis of interview transcripts. Results show that while both participants utilized ChatGPT for understanding genre conventions and content development, they developed distinct approaches reflecting their individual backgrounds. Ian selectively used ChatGPT for specific assistance needs, while Sam engaged more systematically, particularly for APA style and coherence checks. Both approaches maintained academic integrity and scholarly voice, demonstrating that generative AI tools can be effectively tailored to individual needs without compromising ethical standards. This study highlights how advanced ESL writers can adapt GenAI tools to their unique writing processes, offering insights into the diverse ways AI can enhance academic writing while preserving individual agency. The findings suggest that AI integration in academic writing can be customized to support diverse writing goals and backgrounds.
April 2025
-
Review of Annette Vee, Tim Laquintano, and Carly Schnitzler’s TextGenEd: Teaching with Text Generation Technologies ↗
Abstract
Hua Wang
Vee, Annette, Tim Laquintano, and Carly Schnitzler, editors. TextGenEd: Teaching with Text Generation Technologies. The WAC Clearinghouse, 2023. https://doi.org/10.37514/TWR-J.2023.1.1.02. The rapid rise of AI, especially since the launch of ChatGPT in November 2022, has intensified debates about the role of AI tools in higher education. While some educators reject AI’s use—particularly in writing […]
-
Recognizing and Articulating Relationships: The Program for Writing Across Campus at the University of Washington, Seattle ↗
Abstract
Megan Callow
Discipline-linked writing programs can pose challenges for administration and enrollment, but they can also offer valuable opportunities for students to learn more deeply about writing and communication in particular disciplinary contexts. This program profile features one enduring discipline-linked writing program at the University of Washington; to describe the program’s history, organization, and […]
-
Automating Media Accessibility: An Approach for Analyzing Audio Description Across Generative Artificial Intelligence Algorithms ↗
Abstract
A surge in the public availability of emerging generative AI audio description (GenAI-AD) engines has brought back the promises of automated accessibility for people who cannot see or see well. This article tests those promises through a double-rendering method that asks GenAI-AD engines to describe a simple portrait of a person and then returns these generated texts to GenAI-AD engines for visualizations of what they earlier had described, revealing insights about GenAI efficacies, ethics, and biases.
-
Synthetic Genres: Expert Genres, Non-Specialist Audiences, and Misinformation in the Artificial Intelligence Age ↗
Abstract
Drawing on rhetorical genre studies, we explore research article abstracts created by generative artificial intelligence (AI). These synthetic genres—genre-ing activities shaped by the recursive nature of large language models in AI-driven text generation—are of interest as they could influence informational quality, leading to various forms of disordered information such as misinformation. We conduct a two-part study generating abstracts about (a) genre scholarship and (b) polarized topics subject to misinformation. We conclude with considerations about this speculative domain of AI text generation and dis/misinformation spread and how genre approaches may be instructive in its identification.
March 2025
-
Abstract
This article explores teaching writing with generative AI as critical play, where students and teachers engage in an ethically dialectical and aleatory game with generative AI. I qualitatively surveyed 24 writing teachers about how they teach writing with generative AI as well as its advantages and disadvantages. I found that teachers used generative AI to teach about the ethics of generative AI's design and its rhetorical use to avoid plagiarism. Teachers also critically played with generative AI to teach the writing process of invention, drafting, revision, and editing. Specifically, the critical, dialectical interplay of human and machine invents in aleatory and emergent ways, creating moments of epiphany for students and teachers within the writing process, while the real-time pace of generative AI democratizes education, making writing and teaching more accessible for them.
-
Multimodal composing with generative AI: Examining preservice teachers’ processes and perspectives ↗
Abstract
The question of how generative artificial intelligence (GenAI) will reshape communication is raising questions and concerns across the field of education, particularly in literacy and writing classrooms. Although important questions have surfaced surrounding the varied effects of AI on writing instruction and its ethical implications in the classroom, there are calls for deeper investigations into how these tools might shape multimodal composing processes. This study builds upon this developing field by exploring how 21 university students in literacy education courses composed multimodally with generative AI, along with their perspectives on the use of AI in the classroom. Data sources included screen captures and video observations, design interviews, pre- and post-surveys, and multimodal products. Through qualitative and multimodal analysis, four main themes emerged for understanding preservice teachers’ multimodal composing processes: (1) composing was an iterative process of prompting guided by the AI tools, (2) composers exhibited two distinct processes when designing their projects, (3) AI shaped creative possibilities, and (4) play, humor, and surprise served a key function while composing. Preservice teachers’ perspectives also revealed insights into how AI shaped engagement with content, the importance of scaffolding AI use in the classroom, and how ethics were intertwined with technical function and teaching beliefs.
January 2025
-
Revisiting Four Conversations in Technical and Professional Writing Scholarship to Frame Conversations About Artificial Intelligence ↗
Abstract
This article explores four different topics of conversation in technical and professional communication (TPC) scholarship that overlap and connect with contemporary issues in generative artificial intelligence (AI): process and iteration, theory and power, actors and activity, and the social justice turn. The authors offer four nonexhaustive reviews of these conversations, offering insight into key issues and texts that have animated discourse in the field and can directly or indirectly address the complex relationship between TPC work and generative AI.
-
Beyond Academic Integrity: Navigating Institutional and Disciplinary Anxieties About AI-Assisted Authorship in Technical and Professional Communication ↗
Abstract
Generative artificial intelligence (GenAI) tools are already being implemented for a variety of writing tasks in workplaces, where individual (human) authorship is valued less than the efficient production of text. But policies regarding AI use in higher education continue to prioritize academic integrity, focusing on narrowly defined notions of authorship that do not reflect the realities of workplace writing. Through an analysis of 100 university policies on AI, this article shows how AI tools create a tension for faculty in technical and professional communication who must operate within institutional or departmental policies for AI use but must also prepare writers for workplace authorship.
-
Abstract
This article examines issues of authenticity involved in using generative AI to compose technical and professional communication (TPC) documents. Authenticity is defined through an Aristotelian understanding of ethos, which includes goodwill (eunoia), practical wisdom (phronesis), and virtuousness (arete), and Fromm's concepts of true self and pseudo self. The authors conducted an initial analysis of AI affordances that align with TPC concerns—genre, plain language, and grammatical/mechanical correctness. The preliminary results show that these affordances may be limited by issues of inauthenticity. The authors suggest that in order to address AI's limitations, writers should adopt a rhetoric of authenticity via real-world engagement, human centeredness, and personal style.
-
Constructing Websites with Generative AI Tools: The Accessibility of Their Workflows and Products for Users With Disabilities ↗
Abstract
Generative AI tools allow anyone without web-design experience to have a business website created when the user provides a few specifications about the business, such as its name, type, and location. But the resulting websites not only fall short of the business's basic needs but also raise major concerns about their accessibility for disabled users. This study specifically examines whether these AI-generated websites are accessible to screen-reader users with visual disabilities. It presents data about the usability and accessibility of the products of three generative AI website builders, highlights the specific problems found by an expert screen-reader test along with an automated machine scan of these sites, and discusses some causes of and recommendations for solving these problems.
-
Abstract
This article focuses on the unique ways that technical and professional communication (TPC) researchers can study artificial intelligence (AI) models that challenge the idea that humans and machines are separate yet equal entities. The authors present a brief definition of AI, a recap of HCI research paradigms, and a description of how AI models challenge traditional HCI research and how TPC researchers might respond to these challenges in their studies. Rather than presenting clear-cut methods for studying AI, the article highlights questions that researchers need to consider as they develop approaches for studying AI.
-
Abstract
This article considers the rhetorical risks of using generative AI to compose organizational communication during crises or in the aftermath of tragedies. It focuses on a case study in which representatives of Vanderbilt University’s Peabody College of Education and Human Development disclosed their use of ChatGPT to write a response to a school shooting at another university. The author argues that although generative AI can often be useful in technical and professional communication, it can also undermine perceptions of “rhetorical humanity” if its use is disclosed or discovered, making it rhetorically risky in certain contexts. Thus, knowing when not to utilize AI is an important aspect of AI literacy for practitioners.
-
Technical Communication's Fight Against Extractive Large Language Modeling by Applying FAIR and CARE Principles of Data ↗
Abstract
This article assesses the data practices of Grammarly, the prominent AI-assisted writing technology, by applying data principles that advocate for empowering Indigenous data sovereignty. The assessment is informed by the authors' work with an Inuit tribal organization from rural Arctic Alaska that generated data and metadata about potentially sacred tribal activities. Their analysis of Grammarly's large language modeling practices demonstrates how technical communication can hold businesses to principled data practices created by Indigenous nations and communities that understand how to create more just futures.
-
Abstract
The concept of a public—a group of strangers drawn together through their mutual attention to a text—has historically been tied to the notion of human intentionality. The recent popularization of artificial intelligence (AI) large language models (such as ChatGPT) destabilizes this connection. When large language models generate text, they may inadvertently form stochastic publics—groups pulled together through the randomization of biased data patterns drawn from AI training material. This exploratory study draws on a three-phase dialogue with OpenAI's ChatGPT 4 to identify the risks of stochastic publics and suggest human-originated interventions grounded in feminist care ethics.