Philosophy & Rhetoric
2 articles · September 2024
Abstract
The growing capabilities of large language models (LLMs) pose important questions for rhetorical theory and pedagogy. This article offers an overview of how LLMs like GPT work and a consideration of whether they should be regarded as rhetorical agents. To answer this question, the article examines structural and argumentative similarities between classical theorizations of rhetoric and the philosophy of Wilfrid Sellars. GPT's particular method of encoding statistical patterns in language gives it some rudimentary semantics and reliably generates acceptable natural language output, so it should be considered to have a degree of rhetorical agency. But it is also severely limited by its restriction to written text, and an analysis of its interface shows that much of its rhetorical savvy stems from the highly constrained rhetorical situation created by the ChatGPT interface.
January 2008
Abstract
Research Article

Carol Poster, "Evidence, Authority, and Interpretation: A Response to Jason Helms." Philosophy & Rhetoric 41 (3): 288–299. https://doi.org/10.2307/25655318

Copyright © 2008 The Pennsylvania State University. The text of this article is available only as a PDF.