Abstract

The authors compare a reader-focused text evaluation with an expert-focused evaluation carried out by technical writers and subject-matter/audience experts. The experts were asked to predict the problems readers had signaled in a government brochure about alcohol. On average, they predicted fewer than 15% of the reader problems and produced many new problem detections that readers had not signaled. In addition, the experts showed little mutual agreement in their problem detections. These results suggest that an expert-focused evaluation cannot substitute for a reader-focused one. The paper ends with a discussion of methodological issues for this type of research.

Journal: IEEE Transactions on Professional Communication
Published: 1997-01-01
DOI: 10.1109/47.649557
Open Access: Closed

Citation Context

Cited by 5 works in this index, all published in IEEE Transactions on Professional Communication.

Cites 4 works in this index: two published in IEEE Transactions on Professional Communication and two in Journal of Technical Writing and Communication.
Also cites 5 works outside this index:
  1. Refining the test phase of usability evaluation: how many subjects is enough? (Human Factors)
  2. 10.1007/BF01326548
  3. 10.1201/9781420055948.ch7
  4. Teaching writers to anticipate readers' needs: what can document designers learn from usa… (Studies of Functional Text Quality)
  5. Toward a valid design for pretesting and revising leaflets (Functional Communication Quality)