Abstract
This article proposes the Canon to Code (C2C) Auditing Framework for evaluating generative artificial intelligence (AI) output through classical rhetoric, arguing that AI's characteristic failures—guessing instead of knowing, politeness instead of credibility, and confidence instead of judgment—revisit problems that rhetoric has addressed since antiquity. Developed through a rulemaking methodology and grounded in classical rhetorical theory, the framework presents 10 auditing rules that operationalize rhetorical principles as evaluation criteria for AI-generated content, focusing on accuracy, transparency, and accountability. It offers content auditors, technical communicators, and compliance professionals a theoretically grounded method for distinguishing AI output that meets audience needs from output that merely simulates credibility through pattern matching.