Abstract

In this paper, we investigate two approaches to building artificial neural network models and compare their effectiveness at accurately classifying rhetorical structures across multiple (non-binary) classes in small textual datasets. We find that the most accurate models pair a custom rhetorical feature list with general-language word vector representations, outperforming models with more computing-intensive architectures.
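
A minimal sketch of the approach the abstract describes: average general-language pretrained word vectors per text, concatenate them with a small hand-built rhetorical feature vector, and train a lightweight feedforward classifier. This is an illustrative assumption, not the authors' actual pipeline; the GloVe-style embedding file, the feature list (hedge and booster densities, question counts), and all names here are hypothetical placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def load_glove(path):
    """Load GloVe-style 'word v1 v2 ...' text vectors into a dict."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def embed(text, vectors, dim=100):
    """Average the word vectors of a text; zeros if no word is known."""
    hits = [vectors[w] for w in text.lower().split() if w in vectors]
    return np.mean(hits, axis=0) if hits else np.zeros(dim, dtype=np.float32)

# Hypothetical custom rhetorical features (placeholder word lists).
HEDGES = {"may", "might", "could", "possibly", "perhaps"}
BOOSTERS = {"clearly", "certainly", "undoubtedly", "obviously"}

def rhetorical_features(text):
    """Compute a small hand-built feature vector for one text."""
    tokens = text.lower().split()
    n = max(len(tokens), 1)
    return np.array([
        sum(t in HEDGES for t in tokens) / n,    # hedging density
        sum(t in BOOSTERS for t in tokens) / n,  # boosting density
        text.count("?"),                         # question count
    ], dtype=np.float32)

def featurize(texts, vectors, dim=100):
    """Concatenate rhetorical features with averaged word embeddings."""
    return np.stack([
        np.concatenate([rhetorical_features(t), embed(t, vectors, dim)])
        for t in texts
    ])

# Usage (paths and data are placeholders):
# vectors = load_glove("glove.6B.100d.txt")
# X = featurize(train_texts, vectors)
# clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
# clf.fit(X, train_labels)
```

A small multilayer perceptron over such a combined feature vector is far cheaper to train than transformer-scale architectures, which is the trade-off the abstract's finding speaks to.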

Journal: Technical Communication Quarterly
Published: 2023-01-02
DOI: 10.1080/10572252.2022.2077452
Open Access: Closed

Citation Context

Cited by in this index (7)

  1. Technical Communication Quarterly
  2. Computers and Composition
  3. Journal of Technical Writing and Communication
  4. Journal of Business and Technical Communication
  5. Rhetoric Society Quarterly
  6. Rhetoric Society Quarterly
  7. Rhetoric Society Quarterly

References (58) · 14 in this index

  1. Alammar, J. (2018). The illustrated BERT, ELMo, and co.How NLP cracked transfer learning. http://jalammar.git…
  2. Ananthaswamy, A. (2021). Artificial neural nets finally yield clues to how brains learn. Retrieved from https…
  3. Proceedings of the Workshop on Language in Social Media
  4. 10.1145/3442188.3445922
  5. Brownlee, J. (2020). Why do I get different results each time in machine learning? Retrieved from https://mac…
Show all 58 →
  1. 10.1177/0306312702032002003
  2. 10.1080/02691728.2015.1065928
  3. 10.48550/arXiv.1810.04805
  4. 10.3389/fdigh.2018.00010
  5. Technical Communication Quarterly
  6. Technical Communication Quarterly
  7. 10.26818/9780814214534
  8. Technical Communication Quarterly
  9. Written Communication
  10. Critical Approaches to Discourse Analysis across Disciplines
  11. 10.22148/16.030
  12. Theory, method, and practice in computer content analysis
  13. 10.1080/17467586.2011.627934
  14. Rhetoric and the digital humanities
  15. 10.1080/10417940903377169
  16. 10.1080/02691728.2011.578301
  17. 10.1177/03063127030333004
  18. 10.4324/9781315538174-5
  19. 10.1186/s40537-019-0192-5
  20. Journal of Technical Writing and Communication
  21. Karani, D. (2018). Introduction to word embedding and Word2Vec. Retrieved from https://towardsdatascience.com…
  22. 10.4324/9781410609748
  23. Technical Communication Quarterly
  24. 10.1111/j.1475-4959.2012.00479.x
  25. Rhetoric and the digital humanities
  26. 10.1145/2987592.2987603
  27. Latysheva, N. (2019). Why do we use word embeddings in NLP? Retrieved from https://towardsdatascience.com/why…
  28. IEEE Transactions on Professional Communication
  29. 10.1093/bioinformatics/btz682
  30. Journal of Business and Technical Communication
  31. Argumentation
  32. Technical Communication Quarterly
  33. Technical Communication Quarterly
  34. The sociology of science
  35. 10.1080/00335638409383686
  36. Montañez, A. (2016). Unveiling the hidden layers of deep learning. Retrieved from https://blogs.scientificame…
  37. 10.1080/17524032.2011.644633
  38. 10.2307/j.ctt1pwt9w5
  39. 10.1111/coin.12157
  40. Computers and Composition
  41. 10.3115/v1/D14-1162
  42. 10.1123/jtpe.2017-0084
  43. College Composition and Communication
  44. 10.18653/v1/2021.acl-long.170
  45. Sarwan, N. S. (2017). Understanding word embeddings: From word2vec to count vectors. Retrieved from https://w…
  46. Strubell, E., Ganesh, A. & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. ar…
  47. 10.1111/j.1369-7625.2012.00810.x
  48. 10.1371/journal.pone.0084217
  49. Vig, J. (2019, Jan. 7). Deconstructing BERT, part 2: Visualizing the inner workings of attention. Retrieved f…
  50. 10.1017/CBO9780511808630
  51. Wynn, J. (2020). E-thos project: Climate change. Retrieved from https://doi.org/10.1184/R1/12964481
  52. The Routledge Handbook on Language and Persuasion
  53. Communication Design Quarterly