L. Van Waes
Abstract
In keyboard writing, typing skill is considered an important prerequisite for proficient text production. We describe the design, implementation, and application of a standardized copy-typing task for measuring and assessing individual typing fluency. A test-retest analysis indicates the instrument's reliability. While the task has been developed for eleven different languages and their associated keyboard layouts, we here refer to a corpus of Dutch copy tasks (n = 1682). Analyses show that copying speed varies non-linearly with age. Bayesian analyses reveal differences in typing performance and in the underlying distributions of inter-key intervals between the different task components (e.g., lexical vs. non-lexical materials; high-frequent vs. low-frequent bigrams). Based on these findings, we strongly recommend including copy-task measures in the analysis of keystroke logging data in writing studies. This supports better comparability and interpretability of keystroke data from more complex or communicatively embedded writing tasks across individuals. Further potential applications of the copy task for writing research are explained and discussed.
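The inter-key interval (IKI) analyses mentioned in this abstract can be illustrated with a minimal sketch. The timestamps, characters, and task-component labels below are hypothetical and do not reflect the copy task's actual data format; boundary intervals are attributed to the component of the first key of each pair.

```python
from statistics import mean, median

# Hypothetical keystroke log: (timestamp_ms, character, task_component)
keystrokes = [
    (0, "d", "lexical"), (142, "e", "lexical"), (301, " ", "lexical"),
    (480, "k", "non-lexical"), (912, "x", "non-lexical"), (1543, "q", "non-lexical"),
]

def inter_key_intervals(log):
    """Differences between consecutive keystroke timestamps, grouped per task component."""
    by_component = {}
    for (t1, _, comp), (t2, _, _) in zip(log, log[1:]):
        by_component.setdefault(comp, []).append(t2 - t1)
    return by_component

ikis = inter_key_intervals(keystrokes)
for comp, values in ikis.items():
    print(comp, "mean IKI:", mean(values), "median IKI:", median(values))
```

Comparing the resulting per-component distributions (rather than only their means) is the kind of analysis the Bayesian modelling in the abstract performs at scale.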
-
Reporting Writing Process Feedback in the Classroom. Using Keystroke Logging Data to Reflect on Writing Processes
Abstract
Keystroke loggers enable researchers to collect fine-grained process data and offer support in analyzing these data. Keystroke logging has become popular in writing research, and study by study we are paving the path to a better understanding of writing process data. However, few researchers have concentrated on how to bring keystroke logging to the classroom. This is not because they doubt that writing development could benefit from a more process-oriented pedagogy, but because 'translating' complex and large data sets to an educational context is challenging. Therefore, we have developed a new function in Inputlog, specifically aimed at helping writing tutors provide process feedback to their students. Based on an XML logfile, the so-called 'report' function automatically generates a PDF file addressing different perspectives on the writing process: pausing, revision, source use, and fluency. These perspectives are reported either quantitatively or visually, and brief introductory texts explain the information presented. Inputlog provides a default feedback report, but users can also customize it. This paper describes the process report and demonstrates its use in an intervention. We also present additional pedagogical scenarios for actively using this type of feedback in writing classes.
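The idea of deriving classroom-ready feedback from an XML logfile can be sketched as follows. The element and attribute names below are hypothetical, not Inputlog's actual schema; the sketch only shows the general pattern of aggregating events into a small summary of pausing and revision behaviour.

```python
import xml.etree.ElementTree as ET

# Hypothetical logfile fragment; Inputlog's real XML schema differs.
xml_log = """
<session>
  <event type="keystroke" pause_ms="120"/>
  <event type="keystroke" pause_ms="2400"/>
  <event type="revision" pause_ms="300"/>
  <event type="keystroke" pause_ms="80"/>
</session>
"""

def summarize(xml_text, pause_threshold_ms=2000):
    """Tiny process-feedback summary: total events, long pauses, and revisions."""
    root = ET.fromstring(xml_text)
    events = root.findall("event")
    long_pauses = sum(1 for e in events if int(e.get("pause_ms")) >= pause_threshold_ms)
    revisions = sum(1 for e in events if e.get("type") == "revision")
    return {"events": len(events), "long_pauses": long_pauses, "revisions": revisions}

print(summarize(xml_log))  # {'events': 4, 'long_pauses': 1, 'revisions': 1}
```

A real report would render such aggregates quantitatively and visually per process perspective, as the abstract describes.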
-
Abstract
In today’s workplaces professional communication often involves constructing documents from multiple digital sources—integrating one’s own text/graphics with ideas based on others’ text/graphics. This article presents a case study of a professional communication designer as he constructs a proposal over several days. Drawing on keystroke and interview data, we map the professional’s overall process, plot the time course of his writing/design, illustrate how he searches for content and switches among optional digital sources, and show how he modifies and reuses others’ content. The case study reveals not only that the professional (1) searches extensively through multiple sources for content and ideas, but also that he (2) constructs visual content (charts, graphs, photographs) as well as verbal content, and (3) manages his attention and motivation over this extended task. Since these three activities are not represented in current models of writing, we propose their addition not just to models of communication design, but also to models of writing in general.
-
Abstract
Error analysis involves detecting and correcting discrepancies between the ‘text produced so far’ (TPSF) and the writer’s mental representation of what the text should be. While many factors determine the choice of strategy, cognitive effort is a major contributor to this choice. This research shows how cognitive effort during error analysis affects strategy choice and success, as measured by a series of online text production measures. We hypothesize that error correction with speech recognition software differs from error correction with keyboard for two reasons: speech produces auditory commands and, consequently, different error types. The study reported on here measured the effects of (1) mode of presentation (auditory or visual-tactile), (2) error span, i.e. whether the error spans more or fewer than two characters, and (3) lexicality, i.e. whether the text error comprises an existing word. A multilevel analysis was conducted to take into account the hierarchical nature of these data. For each variable (interference reaction time, preparation time, production time, immediacy of error correction, and accuracy of error correction), multilevel regression models are presented. As such, we take into account possibly disturbing person characteristics while testing the effect of the different conditions and error types at the sentence level. The results show that writers delay error correction more often when the TPSF is read aloud first. The auditory property of speech seems to free resources for the primary task of writing, i.e. text production. Moreover, the results show that large errors in the TPSF require more cognitive effort, and are solved with higher accuracy than small errors. The latter also holds for the correction of small errors that result in non-existing words.
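The two error dimensions manipulated in this study, error span and lexicality, can be sketched as a simple classifier. The mini-lexicon, the span heuristic, and the example word pairs below are illustrative assumptions, not the study's actual materials or coding scheme.

```python
KNOWN_WORDS = {"form", "from", "word", "sword"}  # illustrative mini-lexicon

def classify_error(erroneous, intended):
    """Classify a text error by span (more than two characters?) and lexicality."""
    # Span heuristic: character positions that differ, plus any length difference.
    diff = sum(a != b for a, b in zip(erroneous, intended))
    span = diff + abs(len(erroneous) - len(intended))
    return {
        "large_span": span > 2,
        "lexical": erroneous in KNOWN_WORDS,  # the error still forms an existing word
    }

print(classify_error("from", "form"))  # small-span, lexical error (transposition)
print(classify_error("wrd", "word"))   # large-span, non-lexical error (omission)
```

Crossing these two labels with the presentation mode (auditory vs. visual-tactile) yields the condition structure the multilevel models in the abstract analyze.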
-
Thinking aloud as a method for testing the usability of Websites: the influence of task variation on the evaluation of hypertext
Abstract
In the usability testing of Web sites, thinking aloud is a frequently-used method. A fundamental discussion, however, about the relation between the use of different variants of thinking aloud and the evaluation goals for this specific medium is still lacking. To lay a foundation for this discussion, I analyzed the results of three usability studies in which different thinking-aloud tasks were used: a simple searching task, an application task and a prediction task. In the task setting, the profile of the Web surfer, the communication goal of the Web site and other quality aspects are taken into account. The qualitative analysis of these studies shows that the task variation has some influence on the results of usability testing and that, consequently, tasks should be matched with the evaluation goals put forward.