Abstract

The emergence of large language models (LLMs) has disrupted approaches to writing in academic and professional contexts. Much interest has centered on the ability of LLMs to generate coherent and generically responsible texts with minimal effort, and on the impact this will have on writing careers and pedagogy, but less attention has been paid to how LLMs can aid writing research. Building on previous research, this study explores the utility of AI text generators in facilitating the qualitative coding of linguistic data. The study benchmarks five LLM prompting strategies to determine the viability of using LLMs as qualitative coding, rather than writing, assistants. The results demonstrate that LLMs can be an effective tool for classifying complex rhetorical expressions and can help business and technical communication researchers quickly produce and test their research designs, enabling them to return insights more quickly and with less initial overhead.

Journal
Journal of Business and Technical Communication
Published
2024-07-01
DOI
10.1177/10506519241239927
Open Access
Closed
Topics

Citation Context

Cited by in this index (2)

  1. Technical Communication Quarterly
  2. Technical Communication Quarterly

Cites in this index (14)

  1. Journal of Business and Technical Communication
  2. Written Communication
  3. Journal of Business and Technical Communication
  4. Technical Communication Quarterly
  5. Technical Communication Quarterly
  6. Journal of Business and Technical Communication
  7. Rhetoric Society Quarterly
  8. Journal of Business and Technical Communication
  9. Written Communication
  10. Journal of Business and Technical Communication
  11. Technical Communication Quarterly
  12. Written Communication
  13. Written Communication
  14. Written Communication
Also cites 20 works outside this index
  1. 10.1145/3442188.3445922
  2. 10.1109/TPC.2010.2077450
  3. 10.1016/j.esp.2011.04.001
  4. 10.1093/elt/54.4.369
  5. 10.1016/j.esp.2015.10.001
  6. 10.37514/PRA-B.2019.0230
  7. Hajikhani A., Cole C. (2023). A Critical Review of Large Language Models: Sensitivity, Bias, and the Path Tow…
  8. 10.1007/s11192-022-04358-x
  9. 10.4324/9780429485480
  10. 10.3102/003465430298487
  11. Jiang E., Olson K., Toh E., Molina A., Donsbach A., Terry M., Cai C. J. (2022). Promptmaker: Prompt-based pro…
  12. Kane M. (2020). Communicating the “write” values: Developing methods of computer-aided text analysis for inst…
  13. Larson B., Hart-Davidson W., Walker K. C., Walls D. M., Omizo R. (2016). Use what you choose: Applying comput…
  14. 10.1080/00335638409383686
  15. Omizo R., Meeks M., Hart-Davidson W. (2021). Detecting high-quality comments in written feedback with a zero …
  16. Reynolds L., McDonell K. (2021). Prompt programming for large language models: Beyond the few-shot paradigm. …
  17. 10.1016/S0889-4906(00)00023-5
  18. Spinuzzi C. (2002). Modeling genre ecologies. In Proceedings of the 20th Annual International Conference on C…
  19. 10.1177/1461445609341006
  20. Wang L., Xu W., Lan Y., Hu Z., Lan Y., Lee R. K. W., Lim E. P. (2023). Plan-and-solve prompting: Improving ze…
CrossRef global citation count: 8