Data-driven instruction

[1] Late in the last century and in the early 2000s, an increased emphasis on accountability in public organizations made its way into the realm of education.

[2][3] Following NCLB, more recent legislation under the Race to the Top Act further pushed states to use data gathering and reporting to demonstrate schools' ability to meet the demands of the public.

Embedded in both NCLB and the Race to the Top Act is an assumption that the collection and use of data can lead to increased student performance.

Examples of quantitative data include test scores, quiz results, and levels of performance on a periodic assessment.

[5] Examples of qualitative data include field notes, student work/artifacts, interviews, focus groups, digital pictures, video, and reflective journals.

[7] Formative assessment is the use of information made evident during instruction in order to improve student progress and performance.

[7] Conversely, summative assessments are designed to determine whether or not a student can transfer their learning to new contexts, as well as for accountability purposes.

Instead, a variety of information gleaned from different forms of assessments should be used to make decisions about student progress and performance within data-driven instruction.

However, schools already had active internal accountability systems that placed a high emphasis on an ongoing cycle of instructional improvement based on the use of data, including formative assessment results and behavioral information.

Richard Halverson and his colleagues, employing case study approaches, explore how leaders coordinate and align the extant "central practices and cultures of schools" with the "new accountability pressure" in pursuit of improved student achievement scores.

[9] In their article, Richard Halverson, Jeffrey Grigg, Reid Prichett, and Chris Thomas suggest that the DDIS framework is composed of six organizational functions: data acquisition, data reflection, program alignment, program design, formative feedback, and test preparation.

Richard Halverson and his colleagues state that the program alignment function refers to "link[ing] the relevant content and performance standards with the actual content taught in classroom."[9] For example, benchmark assessment results, serving as "problem-finding tools," help educators identify curricular standards that are not well aligned with current instructional programs.

[11] Another critical component of the district's responsibility is to provide the leadership and vision needed to promote the use of information about student performance to improve teaching practice.

When the leadership creates and maintains an environment which promotes collaboration and clearly communicates the urgency to improve student learning, teachers feel supported to engage in data use.

Through the use of data, teachers can make decisions about what and how to teach, including how to use class time, interventions for students who are not meeting standards, customizing lessons based on real-time information, adapting teaching practice to align with student needs, and making changes to pace, scope, and sequence.

Additionally, teachers must have access to learning opportunities or professional development which helps them understand the pedagogical framework and technical skills required to obtain, analyze, and use information about students to make instructional decisions.

A major criticism of data-driven instruction is that it focuses too much on test scores, and that not enough attention is given to the results of classroom assessments.