E. Van Steendam

4 articles


  1. How Prior Information from National Assessments can be used when Designing Experimental Studies without a Control Group
    Abstract

    National assessments yield a description of the proficiency level in a domain while accounting for differences between tasks. For instance, in writing assessments the level of proficiency is typically evaluated with a variety of topics and multiple tasks. This enables generalizations from specific tasks to a domain. In (quasi-)experimental research, however, writing skills are often evaluated with a single task. Yet, conclusions about the effectiveness of the treatment are formulated on the level of the domain, which is, euphemistically put, quite a stretch. Although conclusions drawn about the effect of the treatment are specific to the task administered, they are often generalized to the domain without any form of reservation. This raises the question whether we can use the results of national assessments about differences between tasks in the analyses of experimental studies. In this paper, we demonstrate how the information of a baseline data set can be used as a kind of control condition in the analysis of an experimental study.

    doi:10.17239/jowr-2023.14.03.05
  2. Comprehensive corrective feedback in second language writing: The response of individual error categories
    Abstract

    While the literature on the effect of comprehensive corrective feedback (CF) on overall accuracy is abundant, the body of work employing such a scope to explore error treatability is not, especially when it comes to blended (cf. Ferris, 2010) design studies. Consequently, this investigation extends the analyses from the data set of Bonilla et al. (2018) to report on individual linguistic features. Specifically, to address crucial amenability-related questions in need of perusal, the present blended design study explores the effect of two types of comprehensive CF (with direct correction and metalinguistic codes) on the treatability of separate grammatical and non-grammatical structures. To this end, a group of EFL learners (N = 139) were required to do editing that involved error-correction, deferred (on a draft), and focused on language as well as to produce two independent essays (in an immediate and a delayed posttest). Main results from logistic regression (to test the effect in revised essays) and mixed-effect models (to test the effect on independent essays) render seven variables that can explain correctability differences: out of those, three have also explained overall accuracy gains (cf. Bonilla et al., 2018), one has not been identified thus far, and three consolidate themselves as relevant factors under other conditions as well. Theoretical and pedagogical implications are discussed.

    doi:10.17239/jowr-2021.13.01.02
  3. The effects of different types of video modelling on undergraduate students' motivation and learning in an academic writing course
    Abstract

    This study extends previous research on observational learning in writing. It was our objective to enhance students’ motivation and learning in an academic writing course on research synthesis writing. Participants were 162 first-year college students who had no experience with the writing task. Based on Bandura’s Social Cognitive Theory we developed two videos. In the first video a manager (prestige model) elaborated on how synthesizing information is important in professional life. In the second video a peer model demonstrated a five-step writing strategy for writing up a research synthesis. We compared two versions of this video. In the explicit-strategy-instruction-video we added visual cues to channel learners’ attention to critical features of the demonstrated task using an acronym in which each letter represented a step of the model’s strategy. In the implicit-strategy-instruction-video these cues were absent. The effects of the videos were tested using a 2x2 factorial between-subjects design with video of the prestige model (yes/no) and type of instructional video (implicit versus explicit strategy instruction) as factors. Four post-test measures were obtained: task value, self-efficacy beliefs, task knowledge and writing performances. Path analyses revealed that the prestige model did not affect students’ task value. Peer-mediated explicit strategy instruction had no effect on self-efficacy, but a strong effect on task knowledge. Task knowledge – in turn – was found to be predictive of writing performance.

    doi:10.17239/jowr-2017.08.03.01
  4. Editorial: Forms of collaboration in writing
    Abstract

    This paper introduces a special issue on forms of collaboration in writing. The four contributions in the issue present a range of perspectives on collaborating to produce and construct text. The studies are outcome-driven and/or process-oriented and use a range of research methodologies. Taken together, the papers in the issue confirm the complexity of collaboration in writing and show that many questions remain and much more research is needed. However, the papers also illustrate that future research in collaborative writing might focus on the interactions of variables at the individual, collaborative and contextual levels rather than on the variables separately. Only an all-encompassing picture of the complex interplay between the different variables may allow us to grasp and exploit the full potential of collaborative writing, both as an instructional or working method and as a research methodology.

    doi:10.17239/jowr-2016.08.02.01