Journal of Business and Technical Communication
21 articles
January 2026
-
Increasing Literacy on the Scams Targeting Latines: Generative Artificial Intelligence, Digital Technologies, and the Latine Community
Abstract
This article builds a heuristic that raises the artificial intelligence (AI) literacy of Latine students. Nefarious people are exploiting marginalized Latine communities by using AI in creative partnerships, similar to those described in technical communication research, to build social profiles of Latines. These people are rhetorically using AI in passive-income and voice-over scams that target Latines who are insecure about their financial and citizenship situations. The heuristic offered here guides instructors on how to increase Latine students’ AI literacy by making these students aware of the rhetorical relationships between nefarious individuals and AI.
-
Book Review: <i>Artificial Intelligence for Strategic Communication</i> by Karen E. Sutherland. Sutherland, Karen E. (2025). <i>Artificial Intelligence for Strategic Communication</i>. Palgrave Macmillan Singapore. 486 pp. $109.99 hardcover, $89.99 eBook. ISBN: 978-981-96-2574-1. https://doi.org/10.1007/978-981-96-2575-8
October 2025
-
Abstract
Mainstream artificial intelligence (AI) is an extractive industry that exploits both humans and nonhumans. The extractive underpinning of mainstream AI systems means that technical communicators must be careful when advocating for accessibility and inclusivity in AI because those efforts may expose marginalized groups to further exploitation. Extractive AI also necessitates that technical communicators reconsider how their own discipline may be complicit in the damaging logics and practices of extraction.
January 2025
-
Revisiting Four Conversations in Technical and Professional Writing Scholarship to Frame Conversations About Artificial Intelligence
Abstract
This article explores four different topics of conversation in technical and professional communication (TPC) scholarship that overlap and connect with contemporary issues in generative artificial intelligence (AI): process and iteration, theory and power, actors and activity, and the social justice turn. The authors offer four nonexhaustive reviews of these conversations, offering insight into key issues and texts that have animated discourse in the field and can directly or indirectly address the complex relationship between TPC work and generative AI.
-
Beyond Academic Integrity: Navigating Institutional and Disciplinary Anxieties About AI-Assisted Authorship in Technical and Professional Communication
Abstract
Generative artificial intelligence (GenAI) tools are already being implemented for a variety of writing tasks in workplaces, where individual (human) authorship is valued less than the efficient production of text. But policies regarding AI use in higher education continue to prioritize academic integrity, focusing on narrowly defined notions of authorship that do not reflect the realities of workplace writing. Through an analysis of 100 university policies on AI, this article shows how AI tools create a tension for faculty in technical and professional communication who must operate within institutional or departmental policies for AI use but must also prepare writers for workplace authorship.
-
Abstract
This article examines issues of authenticity involved in using generative AI to compose technical and professional communication (TPC) documents. Authenticity is defined through an Aristotelian understanding of ethos, which includes goodwill (eunoia), practical wisdom (phronesis), virtuousness (arete), and Fromm's concepts of true self and pseudo self. The authors conducted an initial analysis of AI affordances that align with TPC concerns—genre, plain language, and grammatical/mechanical correctness. The preliminary results show that these affordances may be limited by issues of inauthenticity. The authors suggest that in order to address AI's limitations, writers should adopt a rhetoric of authenticity via real-world engagement, human centeredness, and personal style.
-
Constructing Websites with Generative AI Tools: The Accessibility of Their Workflows and Products for Users With Disabilities
Abstract
Generative AI tools allow anyone without web-design experience to have a business website created by providing a few specifications about the business, such as its name, type, and location. But the resulting websites not only fall short of the business's basic needs but also raise major concerns about their accessibility for disabled users. This study specifically examines whether these AI-generated websites are accessible to screen-reader users with visual disabilities. It presents data about the usability and accessibility of the products of three generative AI website builders, highlights the specific problems found through an expert screen-reader test and an automated machine scan of these sites, and discusses some causes of these problems and recommendations for solving them.
-
Abstract
This article focuses on the unique ways that technical and professional communication (TPC) researchers can study artificial intelligence (AI) models that challenge the idea that humans and machines are separate yet equal entities. The authors present a brief definition of AI, a recap of HCI research paradigms, and a description of how AI models challenge traditional HCI research and how TPC researchers might respond to these challenges in their studies. Rather than presenting clear-cut methods for studying AI, the article highlights questions that researchers need to consider as they develop approaches for studying AI.
-
Abstract
This article considers the rhetorical risks of using generative AI to compose organizational communication during crises or in the aftermath of tragedies. It focuses on a case study in which representatives of Vanderbilt University’s Peabody College of Education and Human Development disclosed their use of ChatGPT to write a response to a school shooting at another university. The author argues that although generative AI can often be useful in technical and professional communication, it can also undermine perceptions of “rhetorical humanity” if its use is disclosed or discovered, making it rhetorically risky in certain contexts. Thus, knowing when not to use AI is an important aspect of AI literacy for practitioners.
-
Technical Communication's Fight Against Extractive Large Language Modeling by Applying FAIR and CARE Principles of Data
Abstract
This article assesses the data practices of Grammarly, the prominent AI-assisted writing technology, by applying data principles that advocate for empowering Indigenous data sovereignty. The assessment is informed by the authors’ work with an Inuit tribal organization from rural Arctic Alaska that generated data and metadata about potentially sacred tribal activities. Their analysis of Grammarly's large-language modeling practices demonstrates how technical communication can hold businesses to principled data practices created by Indigenous nations and communities that understand how to create more just futures.
-
Abstract
The concept of a public—a group of strangers drawn together through their mutual attention to a text—has historically been tied to the notion of human intentionality. The recent popularization of artificial intelligence (AI) large language models (such as ChatGPT) destabilizes this connection. When large language models generate text, they may inadvertently form stochastic publics—groups pulled together through the randomization of biased data patterns drawn from AI training material. This exploratory study draws on a three-phase dialogue with OpenAI's ChatGPT 4 to identify the risks of stochastic publics and suggest human-originated interventions grounded in feminist care ethics.
July 2024
-
Automating Research in Business and Technical Communication: Large Language Models as Qualitative Coders
Abstract
The emergence of large language models (LLMs) has disrupted approaches to writing in academic and professional contexts. While much interest has revolved around the ability of LLMs to generate coherent and generically responsible texts with minimal effort and the impact that this will have on writing careers and pedagogy, less attention has been paid to how LLMs can aid writing research. Building from previous research, this study explores the utility of AI text generators to facilitate the qualitative coding of linguistic data. The study benchmarks five LLM prompting strategies to determine the viability of using LLMs as qualitative coding, not writing, assistants. The results demonstrate that LLMs can be an effective tool for classifying complex rhetorical expressions and can help business and technical communication researchers quickly produce and test their research designs, enabling them to return insights faster and with less initial overhead.
-
Using Generative AI to Facilitate Data Analysis and Visualization: A Case Study of Olympic Athletes
Abstract
The ability to work with data is an important skill for students enrolled in technical and professional communication programs, but students with limited mathematical and computer programming literacies might find it difficult to do basic data analysis or customize data visualizations. This article examines the extent to which ChatGPT can make data analysis and visualization more accessible for students with limited technical proficiency. The results suggest that although the tool is poised to have a substantial impact in helping students create effective data visualizations, its efficacy as a data analysis tool is more limited.
-
Comparing Student and Writing Instructor Perceptions of Academic Dishonesty When Collaborators Are Artificial Intelligence or Human
Abstract
It remains unclear whether perceptions of academic dishonesty concerning artificial intelligence writing technologies (AIWTs) present new challenges or reflect prior, non-AI concerns. To structure this problem, we used a randomized control survey experiment comparing student (n = 603) and instructor (n = 312) attitudes toward dishonesty in collaborations involving humans versus AIWTs in 10 writing-related scenarios. Results suggest similar perception patterns among students and instructors, with both populations expressing significant differences in perceived dishonesty between AI and human collaborators in some scenarios. This experiment structures the problem of AI writing and academic dishonesty for future research in this emerging field.
-
Abstract
The authors analyze the ability of ChatGPT to generate effective instructions for a consequential task: taking a COVID-19 test. They compare the output from a commercial prompt for generating these instructions to the instructions provided by the test manufacturer. They also analyze the input, the prompt itself, to address prompt-engineering issues. The results show that although the output from ChatGPT exhibits certain conventions for documentation, the human-authored instructions from the manufacturer are superior in most ways. The authors conclude that when it comes to creating high-quality, consequential instructions, ChatGPT might be better seen as a collaborator with human technical communicators than as a competitor.
-
Abstract
How should instructors adapt technical editing courses to account for generative artificial intelligence (AI)? This article addresses what generative AI means for technical editing pedagogy. While AI tools may be able to handle rote editing tasks, expert editors are still needed to provide accessible, ethical, and justice-oriented edits. After reviewing the impacts of generative AI on editing praxis, the author focuses on the microcredentials that she built into an editing course in order to address these impacts pedagogically. The goal was to enable students to understand AI, argue for their expertise, and edit from ethical and social justice perspectives.
-
Abstract
This case study offers examples of the use of artificial intelligence (AI) writing tools at a small nonprofit workplace dispute resolution center. It explores the limits and strengths of these AI tools, as well as the mediation field's concerns about using AI as a replacement for mediation work. It also explores the implications of AI tool use for the ethos of both the writer and the AI tool itself, as well as for current pedagogical deliberations in the technical writing field at large.
-
Content Analysis, Construct Validity, and Artificial Intelligence: Implications for Technical and Professional Communication and Graduate Research Preparation
Abstract
Artificial intelligence tools are being increasingly used to do content analysis in technical and professional communication (TPC). The authors consider some of the affordances and constraints of these tools and suggest that construct validity is an underdiscussed form of validity within TPC research that will become more important as artificial intelligence research tools become increasingly prevalent. But construct validity is an important idea for graduate coursework on research methods regardless of the type of method, technique, or tool used—whether qualitative or computational. Thus, training in construct validity is important for strengthening graduate research preparation in TPC.
January 2024
-
Tools, Potential, and Pitfalls of Social Media Screening: Social Profiling in the Era of AI-Assisted Recruiting
Abstract
Employers are increasingly turning to innovative artificial intelligence recruiting technologies to evaluate candidates’ online presence and make hiring decisions. Such social media screening, or social profiling, is an emerging approach to assessing candidates’ social influence, personalities, and workplace behaviors through their publicly shared data on social networking sites. This article introduces the processes, benefits, and risks of social profiling in employment decision making. The authors provide important guidance for job applicants, technical and professional communication instructors, and hiring professionals on how to strategically respond to the opportunities and challenges of automated social profiling technologies.