Abstract

OpenAI's ChatGPT is a large language model (LLM) that excels at generating text and public controversy. Upon its release, many marveled at its ability to author intelligible and generically responsible texts (Herman). Writing about his students' experiences using artificial intelligence (AI) writing assistants, S. Scott Graham remarks that the results were "consistently mediocre—and usually quite obvious in their fabrication." Why might this be true? How can an LLM succeed in some respects and fail in others? We argue that the discrepant reactions to human and AI rhetoric are a question of genre: specifically, that AI rhetoric is only generic; it represents a new enactment of "writing degree zero" (Barthes) that is disengaged from immediate rhetorical situations and knowledge bases. AI text generators (currently) have a more difficult time simulating the positioned perspectives that human writers bring to situations and communicate to audiences through their genre usage. Drawing on the work of Bakhtin, we treat this problem as a question of generic form and audience addressivity. We describe the interplay of form and addressivity as genre signaling and offer it as a construct for the analysis of AI rhetoric and genre as a cultural form (Miller). Genre signaling (Hart-Davidson and Omizo) describes a feature of communicative behavior as it occurs over time that can help both humans and machines evaluate written discourse as it exhibits certain stabilized formal features. When texts contain specific genre signals at expected frequencies and intensities, they may be recognized as generally accurate, reliable, and trustworthy. Without these signals, a text with a similar topical focus might fail to be taken as credible or useful. In this essay, we propose to quantify genre signaling based on three measures: (1) stability, (2) frequency, and (3) periodicity.
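The abstract names three quantitative measures but does not specify formulas. As a minimal sketch of how such measures might be operationalized, the following assumes a genre signal is represented by its token positions in a document; the function names and formulas here are illustrative assumptions, not the authors' published method.

```python
# Illustrative sketch only: the article proposes stability, frequency, and
# periodicity as measures of genre signaling but does not publish code.
# All formulas below are assumptions for demonstration.
from statistics import mean, pstdev

def frequency(signal_positions, doc_length):
    """Rate of a genre signal: occurrences per token."""
    return len(signal_positions) / doc_length

def periodicity(signal_positions):
    """Regularity of spacing between occurrences: 1.0 for perfectly
    even gaps, lower for irregular ones (one minus the coefficient
    of variation of inter-signal gaps)."""
    gaps = [b - a for a, b in zip(signal_positions, signal_positions[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return 0.0
    return max(0.0, 1.0 - pstdev(gaps) / mean(gaps))

def stability(per_document_frequencies):
    """Consistency of a signal's rate across a corpus: one minus the
    coefficient of variation of per-document frequencies."""
    m = mean(per_document_frequencies)
    if m == 0:
        return 0.0
    return max(0.0, 1.0 - pstdev(per_document_frequencies) / m)

# A signal recurring every 50 tokens is maximally periodic under this measure:
print(periodicity([0, 50, 100, 150]))  # 1.0
```

Under this sketch, a text whose signals appear at the expected rate (frequency), at even intervals (periodicity), and at similar rates across comparable texts (stability) would score as strongly genre-signaling.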

Journal
Rhetoric Society Quarterly
Published
2024-05-26
DOI
10.1080/02773945.2024.2343615
Open Access
Closed

Citation Context

Cited by works in this index (3)

  1. Computers and Composition
  2. College Composition and Communication
  3. College English

References (56) · 7 in this index

  1. Attention Is All You Need
    Advances in Neural Information Processing Systems
  2. Proceedings of the 10th ACM Conference on Recommender Systems, Boston 2016
  3. Speech Genres and Other Late Essays
  4. Writing Degree Zero
  5. 10.1075/scl.28
  6. 10.48550/arXiv.2005.14165
  7. 10.1075/cagral.4
  8. Genre and the New Rhetoric, edited by Aviva Freedman and Peter Medway, Routledge
  9. Combat AI with AI: Counteract Machine-Generated Fake Restaurant Reviews on Social Media
    arXiv preprint arXiv:2302.07731
  10. Writing Center Journal
  11. AI-Generated Essays Are Nothing to Worry About
    Inside Higher Ed
  12. Technical Communication Quarterly
  13. 10.4324/9781315836010
  14. 10.1007/978-3-319-51268-6_6
  15. Herman, Daniel. “ChatGPT Will End High-School English.” The Atlantic, Dec. 2022, https://www.theatlantic.com/tec…
  16. 10.1016/j.dss.2017.06.007
  17. 10.48550/arXiv.1910.13413
  18. 10.5406/illinois/9780252037528.001.0001
  19. Proceedings of the 38th ACM International Conference on Design of Communication, Denton (…
  20. IEEE Transactions on Professional Communication
  21. Kaufer, D., and Suguru Ishizaki. “DocuScope. Carnegie Mellon University.” DocuScope - Department of English - D…
  22. 10.1075/scl.109.01kau
  23. 10.1515/9783110469639-008
  24. Contributions to the Theory of Games (AM-28), Volume II
  25. Proceedings of the 34th ACM International Conference on the Design of Communication, Silv…
  26. Lundberg, Scott. “Welcome to the SHAP Documentation — SHAP Latest Documentation.” 2018, https://shap.readthedoc…
  27. 10.48550/arXiv.1802.03888
  28. A Unified Approach to Interpreting Model Predictions
    Advances in Neural Information Processing Systems
  29. Journal of Business and Technical Communication
  30. Marciano, Jonathan. “Fake Online Reviews Cost $152 Billion a Year. Here’s How e-commerce Sites Can Stop Them.”…
  31. Quality in Product Reviews: What Technical Communicators Should Know
    Technical Communication
  32. 10.1016/j.dss.2017.03.010
  33. 10.1080/00335638409383686
  34. Molnar, Christoph. “9.5 Shapley Values | Interpretable Machine Learning.” christophm.github.io, 2023, https://ch…
  35. Molnar, Christoph. “9.6 SHAP (SHapley Additive exPlanations) | Interpretable Machine Learning.” christophm.git…
  36. Journal of Writing Research
  37. Proceedings of the 39th ACM International Conference on Design of Communication
  38. You Can Read the Comments Again: The Faciloscope App and Automated Rhetorical Analysis
    DHCommons Journal
  39. Computers and Composition
  40. PowerReviews. “Survey: The Ever-Growing Power of Reviews (2023 Edition).” PowerReviews, 11 May 2023, https://ww…
  41. 10.1016/j.jretconser.2021.102771
  42. Schreiner, Maximilian. “GPT-4 Architecture, Datasets, Costs and More Leaked.” THE DECODER, 11 July 2023, https://t…
  43. Written Communication
  44. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society
  45. 10.48550/arXiv.1908.08474
  46. Swales, John, and Hazem Najjar. “The Writing of Research Article Introductions.” 1987, https://journals.sagepub.…
  47. 10.1515/9783110214406.165
  48. 10.55177/tc812725
  49. 10.1163/9789004484801_013
  50. Tiku, Nitasha. “The Google Engineer Who Thinks the Company’s AI Has Come to Life.” Washington Post, 11 June 202…
  51. 10.1177/1461445609341006
  52. The Discourse of Online Consumer Reviews
  53. 10.1016/j.pragma.2017.03.011
  54. 10.1080/15252019.2015.1091755
  55. 10.1016/j.jbusres.2021.06.038
  56. 10.1177/0263276420910464