[3] Writing assessment began as a classroom practice during the first two decades of the 20th century, though high-stakes and standardized tests also emerged during this time.
[5] Because of this divide, educators began pushing for writing assessments designed and implemented at the local, programmatic, and classroom levels.
This wave began to consider an expanded definition of validity that includes how portfolio assessment contributes to learning and teaching. In this wave, portfolio assessment emerged to emphasize theories and practices in Composition and Writing Studies such as revision, drafting, and process.
Indirect writing assessments typically consist of multiple-choice tests on grammar, usage, and vocabulary.
[5] Examples include high-stakes standardized tests such as the ACT, SAT, and GRE, which are most often used by colleges and universities for admissions purposes.
[5] Portfolio assessment, which generally consists of several pieces of student writing written over the course of a semester, began to replace timed essays during the late 1980s and early 1990s.
Portfolio assessment is viewed as even more valid than timed essay tests because it evaluates multiple samples of student writing composed in the authentic context of the classroom.
[15][17] Timed essay tests were developed as an alternative to indirect, multiple-choice writing assessments.
Timed essay tests are often used to place students into writing courses appropriate for their skill level.
Scholars such as Chris Gallagher and Eric Turley,[20] Bob Broad,[21] and Asao Inoue[22] (among many) have argued that effective rubrics grow out of local, contextual, and negotiated criteria.
[23] Eric Turley and Chris Gallagher argued that state-imposed rubrics are a tool for accountability rather than improvement.
Rubrics often originate outside the classroom, written by authors with no connection to the students being assessed, and are then interpreted and adapted by other educators.
"[24] They go on to say that a rubric should be interpreted as a tool for writers to measure a set of consensus values, not as a substitute for an engaged response.
A study by Stellmack et al. evaluated the perception and application of rubrics with agreed-upon criteria.
Bob Broad offers "dynamic criteria mapping" as one alternative to the rubric.[26] The single standard of assessment raises further questions, as Elbow touches on the social construction of value itself.