Abstract

Although the usefulness of evaluating documents has become generally accepted among communication professionals, the supporting research that puts evaluation practices empirically to the test is only beginning to emerge. This article presents an overview of the available research on troubleshooting evaluation methods. Four lines of research are distinguished: the validity of evaluation methods, sample composition, sample size, and the implementation of evaluation results during revision.

Journal
IEEE Transactions on Professional Communication
Published
2000-01-01
DOI
10.1109/47.867941
Open Access
Closed

Citation Context

Cited by 5 works in this index

  1. IEEE Transactions on Professional Communication
  2. IEEE Transactions on Professional Communication
  3. IEEE Transactions on Professional Communication
  4. IEEE Transactions on Professional Communication
  5. IEEE Transactions on Professional Communication

Cites 10 works in this index

  1. Journal of Business and Technical Communication
  2. Journal of Business and Technical Communication
  3. Journal of Business and Technical Communication
  4. Journal of Business and Technical Communication
  5. IEEE Transactions on Professional Communication
  6. Journal of Technical Writing and Communication
  7. Written Communication
  8. Journal of Technical Writing and Communication
  9. IEEE Transactions on Professional Communication
  10. IEEE Transactions on Professional Communication
Also cites 31 works outside this index
  1. 10.1007/BF02300326
  2. 10.2307/357381
  3. 10.1007/BF02299759
  4. 10.1287/mksc.12.3.280
  5. Sample sizes for usability studies: additional considerations (Human Factors)
  6. 10.1080/014492900118777
  7. 10.1023/A:1003073923764
  8. 10.1080/014492997120002
  9. Refining the test phase of usability evaluation: how many subjects is enough? (Human Factors)
  10. 10.1007/978-1-4020-2739-0_16
  11. 10.1007/BF02298151
  12. 10.1080/00140139508925248
  13. 10.1007/BF00121231
  14. 10.1080/01449299408914597
  15. 10.1006/ijhc.1994.1065
  16. 10.1016/B978-0-12-223260-2.50019-0
  17. Generalizability of rules for empirical revision (AV Commun Rev)
  18. Learner verification and revision: an experimental comparison of two methods (AV Commun Rev)
  19. 10.1037//0022-0663.74.5.733
  20. 10.1002/pfi.4150220505
  21. 10.1002/pfi.4150220509
  22. 10.1080/014492997119789
  23. 10.1080/014492997119824
  24. 10.1007/BF01326548
  25. 10.1207/s15327051hci1303_2
  26. 10.1207/s15327051hci1303_3
  27. 10.1207/s15327051hci1303_4
  28. 10.2307/270979
  29. The pretest in survey research: issues and preliminary findings (J Market Res)
  30. 10.1108/03090569810216091
  31. Pretesting in questionnaire design: the impact of respondent characteristics on error detection (J Market Res Soc)