Abstract

This article proposes the Canon to Code (C2C) Auditing Framework for evaluating generative artificial intelligence (AI) output through classical rhetoric, arguing that AI's characteristic failures—guessing instead of knowing, politeness instead of credibility, and confidence instead of judgment—revisit problems that rhetoric has addressed since antiquity. Developed using a rulemaking methodology and drawing on classical rhetorical theory, the framework presents 10 auditing rules that operationalize rhetorical principles into evaluation criteria for AI-generated content, focusing on accuracy, transparency, and accountability. It offers content auditors, technical communicators, and compliance professionals a theoretically grounded method for distinguishing AI output that meets audience needs from output that merely simulates credibility through pattern matching.

Journal: Journal of Technical Writing and Communication
Published: 2026-03-24
DOI: 10.1177/00472816261429907
Open Access: Closed

Citation Context

Cited by in this index (0)

No articles in this index cite this work.

Cites in this index (4)

  1. Rhetoric Society Quarterly
  2. Computers and Composition
  3. IEEE Transactions on Professional Communication
  4. Journal of Business and Technical Communication
Also cites 8 works outside this index:
  1. 10.4324/9781003455158-19
  2. 10.4324/9781003164807
  3. 10.1162/dint_a_00243
  4. 10.4324/9781032671031
  5. 10.55177/tc222744
  6. 10.1080/00335635109381692
  7. 10.1007/s00146-024-01905-3
  8. 10.55177/tc286621