A controlled experiment conducted in a Massive Open Online Course (MOOC) setting found that students' final grades improved when feedback was delivered quickly, but not when it was delayed by 24 hours.[5] A teacher's evaluative role can make students focus on grades rather than on seeking feedback. Feedback from peers, by contrast, can improve subsequent work, allow students time to digest the information, and may lead to better understanding.[8]
Peer feedback can also enhance learners' audience awareness, promote collaborative learning, and foster a sense of ownership of the text.
Text analysis of a survey conducted by Yumei Fan and Jinfen Xu showed that students provided content-focused and form-focused evaluative feedback to their peers, while their peers gave feedback on manuscripts or oral presentations in class.[9] Kristen M. Turpin's experiments found that peer feedback is important for engaging learners in the learning process. Turpin suggests that teachers implement systems to moderate student grading in order to catch unsatisfactory work.[21][22]
One of the benefits of such studios comes from structured contrasts, which can help novices notice differences that might otherwise be accessible only to experts.[24][25] Some researchers have designed systems that support comparative examples to surface helpful comparisons in educational settings.
Effective feedback is not only actionable, specific, and justified in its writing; more importantly, it has good content, in the sense that it points out relevant issues, brings in new insights, and changes the minds of its recipients, prompting them to consider the problem from a different angle or to re-represent it entirely. However, because each piece of work to be evaluated differs so vastly in content, the path towards these qualities in a specific piece of feedback remains largely unknown.
Professors Ryan, Marshall, Porter, and Jia conducted an experiment to see whether having students grade participation was effective.[34] A low peer-assigned grade is permissible if the student in question really did do very little work, but it may require the instructor's intervention before it stands as the final result.
The peer-assessment mechanism is also the gold standard in many creative tasks, ranging from reviewing the quality of scholarly articles and grant proposals to critiquing work in design studios.
One challenge is that, because no individual assessor has a global understanding of the entire pool of submissions, local biases in judgment may be introduced (e.g., the range of the rating scale an assessor uses may be affected by the particular pool of submissions they review), and noise may be added to the ranking aggregated from individual peer assessments.
On the other hand, because the ranked outcome is of primary interest in many situations (e.g., allocating research grants to proposals or assigning letter grades to students), methods that systematically aggregate pairwise peer assessments to recover the rank order of submissions have many practical implications.
To tackle this, researchers have studied (1) evaluation schemes (e.g., ordinal grading),[35] (2) algorithms that aggregate pairwise evaluations to more robustly estimate the global ranking of submissions,[36] (3) ways to form better pairs for exchanging feedback by considering conflicts of interest,[37] and (4) frameworks that reduce the error between individual- and community-level judgments of a scholarly article's value.[38]
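To make the aggregation idea in (2) concrete, the following is a minimal sketch, not taken from any of the cited systems, of one standard approach: fitting a Bradley-Terry model to pairwise peer judgments and ranking submissions by their estimated strengths. The submission names and judgment data are invented for illustration.

```python
from collections import defaultdict

def bradley_terry(comparisons, iterations=100):
    """Estimate a latent 'strength' for each submission from pairwise outcomes.

    comparisons: list of (winner, loser) pairs collected from peer reviewers.
    Returns a dict mapping each submission to its estimated strength;
    sorting by strength yields the aggregated global ranking.
    """
    items = {item for pair in comparisons for item in pair}
    wins = defaultdict(int)      # total pairwise wins per submission
    matches = defaultdict(int)   # comparison counts per unordered pair
    for winner, loser in comparisons:
        wins[winner] += 1
        matches[frozenset((winner, loser))] += 1

    # Minorization-maximization updates for the Bradley-Terry likelihood.
    strength = {item: 1.0 for item in items}
    for _ in range(iterations):
        updated = {}
        for i in items:
            denom = sum(
                matches[frozenset((i, j))] / (strength[i] + strength[j])
                for j in items
                if j != i
            )
            updated[i] = wins[i] / denom if denom > 0 else strength[i]
        total = sum(updated.values())
        strength = {i: s / total for i, s in updated.items()}  # normalize
    return strength

# Hypothetical peer judgments: each tuple means the first essay was preferred.
judgments = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "A"), ("C", "B")]
scores = bradley_terry(judgments)
print(sorted(scores, key=scores.get, reverse=True))  # ['A', 'B', 'C']
```

Sorting by the estimated strengths recovers an aggregate ranking even when no single reviewer has seen every submission, which is precisely the setting described above; production systems add safeguards this sketch omits, such as handling submissions that win or lose every comparison.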
The legality of self- and peer-assessment was challenged in the United States Supreme Court case Owasso Independent School District v. Falvo.