Abstract

In this paper, we investigate two approaches to building artificial neural network models, comparing their effectiveness at accurately classifying rhetorical structures across multiple (non-binary) classes in small textual datasets. We find that the most accurate model couples a custom rhetorical feature list with general-language word vector representations, outperforming models with more computing-intensive architectures.
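The winning design described above can be sketched in miniature: hand-crafted rhetorical features concatenated with averaged general-language word vectors to form one input vector per text. Everything below is an illustrative assumption, not the authors' actual feature list, embeddings, or data; a real pipeline would use pretrained vectors (e.g. GloVe) and feed the result to a small feed-forward classifier with a softmax over the rhetorical classes.

```python
import numpy as np

# Toy stand-in for pretrained "general-language" word vectors
# (4-dimensional here purely for brevity; real vectors are 50-300d).
EMBEDDINGS = {
    "we": np.array([0.1, 0.3, -0.2, 0.5]),
    "must": np.array([0.4, -0.1, 0.2, 0.0]),
    "act": np.array([0.2, 0.2, 0.1, -0.3]),
    "now": np.array([-0.1, 0.5, 0.3, 0.2]),
}

# Hypothetical custom rhetorical feature list: cue words whose
# occurrence counts become hand-crafted features.
RHETORICAL_CUES = ["must", "should", "therefore", "because"]

def featurize(tokens):
    """Concatenate the mean word vector with rhetorical cue counts."""
    vecs = [EMBEDDINGS[t] for t in tokens if t in EMBEDDINGS]
    mean_vec = np.mean(vecs, axis=0) if vecs else np.zeros(4)
    cue_counts = np.array([tokens.count(c) for c in RHETORICAL_CUES],
                          dtype=float)
    return np.concatenate([mean_vec, cue_counts])

x = featurize("we must act now".split())
print(x.shape)  # 4 embedding dims + 4 cue features -> (8,)
```

The resulting fixed-length vector is what a compact dense network would consume; the appeal of this design on small datasets is that the rhetorical features inject domain knowledge the network would otherwise need far more examples to learn.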

Journal
Technical Communication Quarterly
Published
2023-01-02
DOI
10.1080/10572252.2022.2077452
Open Access
Closed
Topics

Citation Context

Cited by in this index (7)

  1. Technical Communication Quarterly
  2. Computers and Composition
  3. Journal of Technical Writing and Communication
  4. Journal of Business and Technical Communication
  5. Rhetoric Society Quarterly
  6. Rhetoric Society Quarterly
  7. Rhetoric Society Quarterly

Cites in this index (11)

  1. Technical Communication Quarterly
  2. Written Communication
  3. Computers and Composition
  4. Technical Communication Quarterly
  5. Technical Communication Quarterly
  6. Technical Communication Quarterly
  7. Journal of Technical Writing and Communication
  8. Technical Communication Quarterly
  9. Technical Communication Quarterly
  10. College Composition and Communication
  11. Journal of Business and Technical Communication
Also cites 32 works outside this index
  1. 10.1145/3442188.3445922
  2. 10.1177/0306312702032002003
  3. 10.1080/02691728.2015.1065928
  4. 10.48550/arXiv.1810.04805
  5. 10.3389/fdigh.2018.00010
  6. 10.26818/9780814214534
  7. 10.22148/16.030
  8. Theory, method, and practice in computer content analysis
  9. 10.1080/17467586.2011.627934
  10. 10.1080/10417940903377169
  11. 10.1080/02691728.2011.578301
  12. 10.1177/03063127030333004
  13. 10.4324/9781315538174-5
  14. 10.1186/s40537-019-0192-5
  15. 10.4324/9781410609748
  16. 10.1111/j.1475-4959.2012.00479.x
  17. 10.1145/2987592.2987603
  18. 10.1109/TPC.2018.2870632
  19. 10.1093/bioinformatics/btz682
  20. 10.1007/s10503-011-9221-z
  21. 10.1080/00335638409383686
  22. 10.1080/17524032.2011.644633
  23. 10.2307/j.ctt1pwt9w5
  24. 10.1111/coin.12157
  25. 10.3115/v1/D14-1162
  26. 10.1123/jtpe.2017-0084
  27. 10.18653/v1/2021.acl-long.170
  28. Strubell, E., Ganesh, A. & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. ar…
  29. 10.1111/j.1369-7625.2012.00810.x
  30. 10.1371/journal.pone.0084217
  31. 10.1017/CBO9780511808630
  32. 10.1145/3453460.3453462
CrossRef global citation count: 10