All Journals
172 articles
2025
December 2024
-
Abstract
A recent surge among scholars of rhetoric seeking to refine and redefine approaches to the study of demagoguery and its rhetorical contours supplies an opportunity to raise a related yet more fundamental question: What is rhetoric’s relationship to democracy, demagoguery’s presupposed injured party? Inspired by Jacques Rancière and a rereading of ancient Greek sources, this article seeks to complicate the relationship between rhetoric and democracy by homing in on the activity of the dēmos, a political entity undergirding both democracy and demagoguery. In so doing, this article argues that demagoguery appears not as a violation of democratic activity but as a rhetorical phenomenon associated with democratic fulfillment. This article showcases the implications of rethinking demagoguery as a sign of an active and energetic dēmos by revisiting the rhetorical work of the farm workers movement. Rhetoric and democracy, the article concludes, support demagoguery, and demagoguery uplifts democracy and rhetoric.
-
When generative artificial intelligence meets multimodal composition: Rethinking the composition process through an AI-assisted design project ↗
Abstract
• This study explores GenAI's role in multimodal composition, including Adobe Firefly and DALL·E.
• GenAI reshapes the composition stages of invention, designing, and revising.
• Despite its limitations, GenAI offers alternative solutions to wicked problems.
• Post-GenAI use, students critically revise and iterate their compositions.
• The study contributes to future research and teaching of AI-assisted composition.
This study explores the integration of generative artificial intelligence (GenAI) design technologies, including Adobe Firefly and DALL·E, into the teaching and learning of multimodal composition. Through focus group discussions and case studies, this paper demonstrates the potential of GenAI in reshaping the various stages of the composition process, including invention, designing, and revising. The findings reveal that GenAI technologies have the potential to enhance students’ multimodal composition practices and offer alternative solutions to the wicked problems encountered during the design process. Specifically, GenAI facilitates invention by offering design inspirations and enriches designing by expanding, removing, and editing the student-produced design content. The students in this study also shared their critical stance on the revision process by modifying and iterating their designs after their use of GenAI. Through showcasing both the opportunities and challenges of GenAI technologies, this paper contributes to the ongoing scholarly conversations on multimodal composition and pedagogy. Moreover, the paper offers implications for the future research and teaching of GenAI-assisted multimodal composition projects, with the aim of encouraging thoughtful integration of GenAI technologies to foster critical AI literacy among college composition students.
-
“Wayfinding” through the AI wilderness: Mapping rhetorics of ChatGPT prompt writing on X (formerly Twitter) to promote critical AI literacies ↗
Abstract
In this paper, we demonstrate how studying the rhetorics of ChatGPT prompt writing on social media can promote critical AI literacies. Prompt writing is the process of writing instructions for generative AI tools like ChatGPT to elicit desired outputs, and there has been an upsurge of conversations about it on social media. To study this rhetorical activity, we build on four overlapping traditions of digital writing research in computers and composition that inform how we frame literacies, how we study social media rhetorics, how we engage iteratively and reflexively with methodologies and technologies, and how we blend computational methods with qualitative methods. Drawing on these four traditions, our paper shows our iterative research process through which we gathered and analyzed a dataset of 32,000 posts (formerly known as tweets) from X (formerly Twitter) about prompt writing posted between November 2022 and May 2023. We present five themes about these emerging AI literacy practices: (1) areas of communication impacted by prompt writing, (2) micro-literacy resources shared for prompt writing, (3) market rhetoric shaping prompt writing, (4) rhetorical characteristics of prompts, and (5) definitions of prompt writing. In discussing these themes and our methodologies, we highlight takeaways for digital writing teachers and researchers who are teaching and analyzing critical AI literacies.
October 2024
-
Abstract
Since its release in late 2022, ChatGPT and subsequent generative artificial intelligence (GAI) tools have raised a wide variety of questions and concerns for the field of technical communication: How will these tools be incorporated into professional settings? How might we appropriately integrate these tools into our research and teaching? In this review, we examine research published in 2023–2024 addressing these questions (N = 28). Overall, we find preliminary evidence that GAI tools can positively impact student writing and assessment; they also have the potential to assist with some aspects of academic and medical research and writing. However, there are concerns about their reliability and the ethical conundrums raised when they are used inappropriately or when their outputs cannot be distinguished from human writing. More research is needed for evidence-based teaching and research strategies as well as policies guiding ethical use. We offer suggestions for new research avenues and methods.
-
Role Play: Conversational Roles as a Framework for Reflexive Practice in AI-Assisted Qualitative Research ↗
Abstract
Previous literature has shown that generative artificial intelligence (GAI) software, including large language model (LLM) chatbots, might contribute to qualitative research studies. However, there is still a need to examine the relationships between researchers, GAI technologies, data, and findings. To address this need, our team conducted a thematic analysis of our reflexive journals from an LLM chatbot-assisted research project. We identified four roles that researchers adopted: managers closely monitored the LLM's work, teachers instructed the LLM on theories and methods, colleagues openly discussed the data with the LLM, and advocates worked with the LLM to improve user experiences. Planning for and playing with multiple roles also helped to enrich the research process. This study underscores the potential for using conversational roles as a framework to support reflexivity when working with GAI technologies on qualitative research.
-
Abstract
This introductory article examines the evolving landscape of generative artificial intelligence (GAI) tools, contextualizing their impact through historical tropes of automation as both helper and threat. The authors argue that GAI tools are neither sentient helpers nor existential threats but complex systems that require careful integration into educational and research settings. The article underscores the importance of nuanced, evidence-based approaches, advocating for a balanced understanding of GAI's potential and limitations. It emphasizes ethical considerations and promotes reflective adoption over reactionary measures.
-
Abstract
The use of generative artificial intelligence (GAI) large language models has increased in both professional and classroom technical writing settings. One common response to student use of GAI is to increase surveillance, incorporating plagiarism detection services or banning certain composing activities from the classroom. This paper argues such measures are harmful and instead proposes a “CARE” framework: critical, authorial, rhetorical, and educational—a nuanced approach emphasizing ethical and contextual AI use in technical writing classrooms. This framework aligns with plagiarism best practices initially devised when rhetoric and composition scholars first considered the pedagogical implications of the Internet.
-
Improving ChatGPT's Competency in Generating Effective Business Communication Messages: Integrating Rhetorical Genre Analysis into Prompting Techniques ↗
Abstract
This study explores how prompting techniques, especially those integrated with rhetorical analysis results, may improve the effectiveness of artificial intelligence (AI)-generated business communication messages. I conducted an experiment to assess the effectiveness of these prompting techniques in the context of crafting a negative message generated with ChatGPT 3.5 (n = 85). A multiple regression was calculated to explore the prompting techniques’ impact on the negative message grades and how each technique influences the message grade. The results (F(4, 80) = 31.84, p < .001, adjusted R² = .595) indicate a positive relationship between prompting techniques and the effectiveness of AI-generated messages. This study also identified challenges related to students’ AI literacy. I conclude the study by recommending practical measures for incorporating AI into business and professional writing classrooms.
-
Abstract
ChatGPT and other LLMs are at the forefront of pedagogical considerations in classrooms across the academy. Many studies have spoken to the technology’s capacity to generate one-off texts in a variety of genres. This study complements those by inquiring into its capacity to generate compelling texts at scale. In this study, we quantitatively and qualitatively analyze a small corpus of generated texts in two genres and gauge it against novice and published academic writers along known dimensions of linguistic variation. Theoretically, we position and historicize ChatGPT as a writing technology and consider the ways in which generated text may not be congruent with established trajectories of writing development in higher education. Our study found that generated texts are more informationally dense than authored texts and often read as dialogically closed, “empty,” and “fluffy.” We close with a discussion of potentially explanatory linguistic features, as well as relevant pedagogical implications.
September 2024
-
Abstract
The growing capabilities of large language models (LLMs) pose important questions for rhetorical theory and pedagogy. This article offers an overview of how LLMs like GPT work and a consideration of whether they should be considered rhetorical agents. To answer this question, the article considers structural and argumentative similarities in classical theorizations of rhetoric and the philosophy of Wilfrid Sellars. GPT’s particular method of encoding statistical patterns in language gives it some rudimentary semantics and reliably generates acceptable natural language output, so it should be considered to have a degree of rhetorical agency. But it is also badly limited by its restriction to written text, and an analysis of its interface shows that much of its rhetorical savvy is caused by the highly restricted rhetorical situation created by the ChatGPT interface.
-
Abstract
Despite seemingly broad acceptance within rhetorical theory, the category of the unconscious has remained understudied and misunderstood ever since Kenneth Burke first appropriated the concept from psychoanalysis, and his unquestioned commitment to conventional anthropocentric binaries continues to obscure the role and function of the unconscious within communication into this century. Offering a corrective reanalysis of the Freudian apparatus for contemporary rhetoricians, this article shows where Burke went wrong in his early encounter with psychoanalysis and suggests a vital alternative approach in the cybernetic recasting of Jacques Lacan, which suggests the possibility of an unconscious without Dramatism’s traditional humanist assumptions. In a lateral turn bringing this imagined dialogue between Burke and Lacan into our era, the article demonstrates how a Lacan-inflected posthumanist revision of rhetoric’s unconscious is better suited to address contemporary issues of mediated communication, such as the pedagogical import of AI and ChatGPT.
-
Abstract
This paper examines ChatGPT's use of evaluative language and engagement strategies while addressing information-seeking queries. It assesses the chatbot's role as a virtual teaching assistant (VTA) across various educational settings. Employing Appraisal theory, the analysis contrasts responses generated by ChatGPT with those provided by humans, focusing on the interactants’ attitudes, deployment of interpersonal metaphors, and evaluations of entities, revealing their views on Australian cultural practice. Two datasets were analysed: the first sample (15,909 words) was retrieved from the subreddit r/AskAnAustralian and the second (10,696 words) was obtained by prompting ChatGPT with the same questions. The findings show that, while human experts mainly opt for subjective explicit formulations to express personal viewpoints, the chatbot prefers incongruent ‘it is’ constructions to share pre-programmed perspectives, which may reflect ideological bias. Even though ChatGPT displays promising socio-communicative capabilities (SCs), its lack of the contextual awareness required to function cross-culturally as a VTA may lead to considerable ethical issues. The study's novel contribution lies in its in-depth investigation of how the chatbot's SCs and lexicogrammatical selections may impact its role as a VTA, highlighting the need to develop students’ critical digital literacy skills while using AI learning tools.
July 2024
-
Abstract
The generative AI chatbot, as an artificial rhetorical agent participating in the invention and circulation of public discourse, shakes the foundations of rhetorical tenets such as agency, ethos, circulation, and justice; in doing so, it further isolates rhetoric as an amoral, ateleological technē concerned with mere calculated effects and consequences, and may ultimately contribute to a post-rhetoric condition. This article depicts a rhetorical profile of the generative AI chatbot characterized by stochastic rhetoric, which is distinguished from the conventional understanding of rhetoric as (human) conscious and purposeful use of language to induce change. Making a case for the possibility of a post-rhetoric condition, the article considers what it might mean for our conceptualization of ethos, circulation, and justice, and suggests ways of adapting to it.
-
Automating Research in Business and Technical Communication: Large Language Models as Qualitative Coders ↗
Abstract
The emergence of large language models (LLMs) has disrupted approaches to writing in academic and professional contexts. While much interest has revolved around the ability of LLMs to generate coherent and generically responsible texts with minimal effort and the impact that this will have on writing careers and pedagogy, less attention has been paid to how LLMs can aid writing research. Building from previous research, this study explores the utility of AI text generators to facilitate the qualitative coding research of linguistic data. This study benchmarks five LLM prompting strategies to determine the viability of using LLMs as qualitative coding, not writing, assistants, demonstrating that LLMs can be an effective tool for classifying complex rhetorical expressions and can help business and technical communication researchers quickly produce and test their research designs, enabling them to return insights more quickly and with less initial overhead.
-
Using Generative AI to Facilitate Data Analysis and Visualization: A Case Study of Olympic Athletes ↗
Abstract
The ability to work with data is an important skill for students enrolled in technical and professional communication programs, but students with limited mathematical and computer programming literacies might find it difficult to do basic data analysis or customize data visualizations. This article examines the extent to which ChatGPT can make data analysis and visualization more accessible for students with limited technical proficiency. The results suggest that although the tool is poised to have a substantial impact in helping students create effective data visualizations, its efficacy as a data analysis tool is more limited.
-
Comparing Student and Writing Instructor Perceptions of Academic Dishonesty When Collaborators Are Artificial Intelligence or Human ↗
Abstract
It remains unclear if perceptions of academic dishonesty concerning artificial intelligence writing technologies (AIWTs) present new challenges or if they reflect prior, non-AI concerns. To structure this problem, we used a randomized control survey experiment. We compared student (n = 603) and instructor (n = 312) attitudes toward dishonesty in collaborations involving humans versus AIWT in 10 writing-related scenarios. Results suggest similar perception patterns among students and instructors, with both populations expressing significant differences in perceived dishonesty between AI and human collaborators in some scenarios. This experiment structures the problem of AI writing and academic dishonesty for future research in this emerging field.
-
Abstract
The authors analyze the ability of ChatGPT to generate effective instructions for a consequential task: taking a COVID-19 test. They compare the output from a commercial prompt for generating these instructions to those provided by the test manufacturer. They also analyze the input, the prompt itself, to address prompt-engineering issues. The results show that although the output from ChatGPT exhibits certain conventions for documentation, the human-authored instructions from the manufacturer are superior in most ways. The authors conclude that when it comes to creating high-quality, consequential instructions, ChatGPT might be better seen as a collaborator with, rather than a competitor to, human technical communicators.
-
Abstract
How should instructors adapt technical editing courses to account for generative artificial intelligence (AI)? This article addresses what generative AI means for technical editing pedagogy. While AI tools may be able to address rote editing tasks, expert editors are still needed to provide accessible, ethical, and justice-oriented edits. After reviewing impacts of generative AI on editing praxis, the author focuses on the microcredentials that she built into an editing course in order to address these impacts pedagogically. The goal was to enable students to understand AI, argue for their expertise, and edit from ethical and social justice perspectives.
-
Abstract
This case study offers examples of the use of artificial intelligence (AI) writing tools at a small nonprofit workplace dispute resolution center. It explores the limits and strengths of these AI tools, as well as the mediation field's concerns around using AI as a replacement for mediation work. Further, it explores the implications of AI tool use for the ethos of the writer and the AI tool itself as well as for the current pedagogy deliberations occurring in the technical writing field at large.
-
Content Analysis, Construct Validity, and Artificial Intelligence: Implications for Technical and Professional Communication and Graduate Research Preparation ↗
Abstract
Artificial intelligence tools are being increasingly used to do content analysis in technical and professional communication (TPC). The authors consider some of the affordances and constraints of these tools and suggest that construct validity is an underdiscussed form of validity within TPC research that will become more important as artificial intelligence research tools become increasingly prevalent. But construct validity is an important idea for graduate programming on research methods regardless of the type of method, technique, or tool used—whether qualitative or computational. Thus, training in construct validity is important for strengthening graduate research preparation in TPC.
May 2024
-
Abstract
OpenAI's ChatGPT is a large language model (LLM) that excels at generating text and public controversy. Upon its release, many marveled at its ability to author intelligible and generically responsible texts (Herman). Writing about his students' experiences using artificial intelligence (AI) writing assistants, S. Scott Graham remarks that the results were "consistently mediocre—and usually quite obvious in their fabrication." Why might this be true? How can an LLM succeed in some respects and fail in others? We argue that the discrepant reactions to human and AI rhetoric are a question of genre, specifically that AI rhetoric is only generic; AI rhetoric represents a new enactment of "writing degree zero" (Barthes) that is disengaged from immediate rhetorical situations and knowledge bases. AI text generators (currently) have a more difficult time simulating the positioned perspectives that human writers bring to situations and communicate to audiences through their genre usage. Drawing on the work of Bakhtin, we treat this problem as a question of generic form and audience addressivity. We describe the interplay of form and addressivity as genre signaling and offer it as a construct for the analysis of AI rhetoric and genre as a cultural form (Miller). Genre signaling (Hart-Davidson and Omizo) describes a feature of communicative behavior as it occurs over time that can help both humans and machines evaluate written discourse as it exhibits certain stabilized formal features. When texts contain specific genre signals at expected frequencies and intensities, they may be recognized as generally accurate, reliable, and trustworthy. Without these signals, a text with a similar topical focus might fail to be taken as credible or useful. In this essay we propose to quantify genre signaling based on three measures: (1) stability, (2) frequency, and (3) periodicity.
-
Abstract
Rhetoric is a trace retained in and by artificial intelligence (AI) technologies. This concept illuminates how rhetoric and AI have faced issues related to information abundance, entrenched social inequalities, discriminatory biases, and the reproduction of repressive ideologies. Drawing on their shared root terminology (stochastic/artifice), common logic (zero-agency), and similar forms of organization (trope+algorithm), this essay urges readers to consider the etymological, ontological, and formal dimensions of rhetoric as inherent features of contemporary AI.
April 2024
-
Abstract
This article addresses a pervasive but undertheorized literacy practice: ghostwriting. Drawing on a five-year interview study with undergraduate students, I describe the many ghostwriting tasks that participants were asked to perform for their co-op jobs and how they perceived those tasks. Overall, students were bewildered by ghostwriting and found it very different from, and in some ways at odds with, their academic writing. Given the ubiquity of ghostwriting and the likelihood that much of it will be offloaded to artificial intelligence in coming years, I call for and begin to outline a critical pedagogical approach to ghostwriting grounded in critical language awareness.
March 2024
-
Generative AI in first-year writing: An early analysis of affordances, limitations, and a framework for the future ↗
Abstract
Our First-Year Writing program began intentional student engagements with generative AI in the fall of 2022. We developed assignments for brainstorming research questions, writing counterarguments, and editing assistance using the AI tools Elicit, Fermat, and Wordtune. Students felt that the tools were helpful for finding ideas to get started with writing, for finding sources once they had started writing, and for getting help with counterarguments and alternate word choices. But when given the choice to use the assistants or not, most declined. Generative AI at this stage is unreliable, and many students found the tradeoff in reviewing AI suggestions to be too time-consuming. Many students also expressed a preference for continuing to develop their own voices through writing. Our experience in engaging AI led to the creation of the DEER praxis, which emphasizes defined engagements with AI tools for specific purposes and generous use of reflection.
-
Writing with generative AI and human-machine teaming: Insights and recommendations from faculty and students ↗
Abstract
We share our experiences working with large-language model generative AI for a full semester in a professional writing course, integrating it into all projects. We discuss how we adapted our teaching, learning, and writing to using (or purposefully not using) AI. Issues we discuss include balancing integration of AI to avoid potential overreliance, the importance of centering authorial agency and decision-making, negotiating grading and evaluation, the benefits and drawbacks of AI throughout the writing process, and the relationships we build or could build with AI. We close with recommendations for faculty and students.
-
Abstract
This essay examines the first major American debate over aerial warfare as a case study in the relationship between visual spectacle and warfighting technologies. In the early 1920s, Brigadier General William “Billy” Mitchell mounted a short but intense advocacy campaign to win public approval for a standalone and fully supported air force. He justified his arguments with sanitized depictions of the warplane's idealized deployment. I call such depictions technological spectacles, and I parse their three hallmarks in Mitchell's advocacy: the dissociation of violence and destruction, the self-justification of technology, and the confusion of possibility for probability. I demonstrate that these habits of spectacle pervaded not only Mitchell's rhetoric but the coverage he received in the press. The essay establishes Mitchell as a key figure in the history of American rhetoric about military technology and, in the process, offers new historical context and critical vocabulary for diagnosing rhetorics of technological spectacle.
January 2024
-
Tools, Potential, and Pitfalls of Social Media Screening: Social Profiling in the Era of AI-Assisted Recruiting ↗
Abstract
Employers are increasingly turning to innovative artificial intelligence recruiting technologies to evaluate candidates’ online presence and make hiring decisions. Such social media screening, or social profiling, is an emerging approach to assessing candidates’ social influence, personalities, and workplace behaviors through their publicly shared data on social networking sites. This article introduces the processes, benefits, and risks of social profiling in employment decision making. The authors provide important guidance for job applicants, technical and professional communication instructors, and hiring professionals on how to strategically respond to the opportunities and challenges of automated social profiling technologies.
December 2023
-
Abstract
This article makes a case for the contemporary relevance of Charles Sanders Peirce’s conception of rhetoric and its further fulfillment through biosemiotics and pragmatist-inflected physiological feminisms. It situates itself in an era when rhetoric is undergoing conceptual change, with the social constructivism that guided much thinking since the 1970s supplanted in part by a family of postconstructivisms. In conversation with new materialist, affective, and biological strands of rhetorical theory, the article maps questions and risks involved in developing newer conceptions of rhetoric not limited to discourse, symbolic action, and exclusively human capacities. It argues that Peircean thinking provides resources for nonreductive understandings of how rhetoric emerges from life itself and is pluralistically mediated through the forming conditions and multimodal consequences that materially give it meaning. Contemporary biosemiotics and physiologically oriented feminisms like Teresa de Lauretis’s then move the promise of Peircean rhetoric closer to reality.
October 2023
-
Abstract
The primary purpose of this study is to investigate the degree to which register knowledge, register-specific motivation, and diverse linguistic features are predictive of human judgments of writing quality in three registers—narrative, informative, and opinion. The secondary purpose is to compare the evaluation metrics of register-partitioned automated writing evaluation models in three conditions: (1) register-related factors alone, (2) linguistic features alone, and (3) the combination of the two. A total of 1006 essays (n = 327, 342, and 337 for informative, narrative, and opinion, respectively) written by 92 fourth- and fifth-graders were examined. A series of hierarchical linear regression analyses controlling for the effects of demographics were conducted to select the most useful features to capture text quality, scored by humans, in the three registers. These features were in turn entered into automated writing evaluation predictive models with tuning of the parameters in a tenfold cross-validation procedure. The average validity coefficients (i.e., quadratic-weighted kappa, Pearson correlation r, standardized mean score difference, score deviation analysis) were computed. The results demonstrate that (1) diverse feature sets are utilized to predict quality in the three registers, and (2) the combination of register-related factors and linguistic features increases the accuracy and validity of all human and automated scoring models, especially for the registers of informative and opinion writing. The findings from this study suggest that students’ register knowledge and register-specific motivation add additional predictive information when evaluating writing quality across registers beyond that afforded by linguistic features of the paper itself, whether using human scoring or automated evaluation. These findings have practical implications for educational practitioners and scholars in that they can help strengthen consideration of register-specific writing skills and the cognitive and motivational forces that are essential components of effective writing instruction and assessment.
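One of the validity coefficients reported in this abstract, quadratic-weighted kappa, measures agreement between human and automated scores while penalizing large disagreements more heavily than small ones. The sketch below is illustrative only, not the study's code; the function name is my own, and it assumes integer scores in the range 0 to n_classes − 1.

```python
def quadratic_weighted_kappa(human, machine, n_classes):
    """Quadratic-weighted kappa between two integer score lists.

    Illustrative implementation: 1.0 means perfect agreement, 0.0 means
    chance-level agreement, negative values mean worse than chance.
    """
    n = len(human)
    # Observed confusion matrix of (human score, machine score) pairs.
    observed = [[0.0] * n_classes for _ in range(n_classes)]
    for h, m in zip(human, machine):
        observed[h][m] += 1
    # Marginal totals for each score level.
    row = [sum(observed[i]) for i in range(n_classes)]
    col = [sum(observed[i][j] for i in range(n_classes)) for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            # Quadratic disagreement weight: grows with the squared distance.
            w = (i - j) ** 2 / (n_classes - 1) ** 2
            num += w * observed[i][j]
            den += w * row[i] * col[j] / n  # expected-by-chance counts
    return 1.0 - num / den
```

In a tenfold cross-validation procedure like the one the abstract describes, this statistic would be computed once per held-out fold and then averaged across folds.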
August 2023
-
Abstract
I offer a meditation on current challenges faced by literacy educators and researchers and use those challenges to suggest new directions for the field. Citing the precipitous decline in interest in the humanities and the field of literacy education, I consider the significance of tools such as ChatGPT for the teaching of writing. I explore the significance of out-of-school literacies and the linguistic diversity of today’s students in terms of their implications for literacy instruction. I also remind us of the chilling political climate in which we find ourselves, especially with regard to LGBTQ+ identities. Given these contemporary challenges, I suggest that we in the field of literacy education rethink the nature of writing instruction, restructure our research paradigm to be more inclusive and democratic, and continue to be forceful political advocates for pedagogies, practices, and policies that will ensure a just and equitable literacy education for all.
June 2023
January 2023
-
Building Better Machine Learning Models for Rhetorical Analyses: The Use of Rhetorical Feature Sets for Training Artificial Neural Network Models ↗
Abstract
In this paper, we investigate two approaches to building artificial neural network models to compare their effectiveness for accurately classifying rhetorical structures across multiple (non-binary) classes in small textual datasets. We find that the most accurate type of model can be designed by using a custom rhetorical feature list coupled with general-language word vector representations, which outperforms models with more computing-intensive architectures.
2023
October 2022
-
Abstract
This essay maps the logistics and advantages of reading and teaching texts in their original installments as a means of theorizing seriality in the undergraduate literature classroom.
-
Extending Design Thinking, Content Strategy, and Artificial Intelligence into Technical Communication and User Experience Design Programs: Further Pedagogical Implications ↗
Abstract
This article follows up on the conversation about new streams of approaches in technical communication and user experience (UX) design, i.e., design thinking, content strategy, and artificial intelligence (AI), which afford implications for professional practice. By extending such implications to technical communication pedagogy, we aim to demonstrate the importance of paying attention to these streams in our programmatic development and provide strategies for doing so.
March 2022
-
Volume 9.2: NCTE/CCCC Cross-Caucus Present Tense “Diversity is not an End Game: BIPOC Futures in the Academy” ↗
Abstract
“Diversity is not an End Game: BIPOC Futures in the Academy” marks the final installment in a conversation across multiple journals that examines the injustices behind crisis-driven diversity initiatives within the academy and how these initiatives impact BIPOC across the fields of rhetoric, composition, and communication. Following the murders of George Floyd, Breonna Taylor, Ahmaud Arbery, and too many others—as well as the incompetent and often hypocritical responses by institutions across the nation—we deemed it necessary to highlight the myriad ways that BIPOC are forced to experience duress, navigate threatening spaces, and leverage precious resources within the academy. These unjust conditions reflect the harms that we must already strive to survive in everyday life and disprove the myths of meritocracy and academic “safe spaces.”