Study: “Robo-Readers” More Accurate in Scoring Essays
posted by: Alix | April 23, 2012, 08:54 PM   

According to a new study by the University of Akron, computer grading software is just as effective at scoring essays on standardized tests as live human graders. After comparing 16,000 middle school and high school test essays graded by both humans and computers, the study found virtually identical levels of accuracy, with the software in some cases proving more reliable than human grading. While the results are a blow to technology naysayers, the software remains controversial among some education advocates, who argue it is no cure-all for grading student essays.

Although the study found similar levels of accuracy, the largest contrast between human grading and computer software is speed. Graders working as quickly as they can, spending about two to three minutes per essay, are capable of scoring an average of 30 writing samples per hour. In contrast, the automated reader developed by the Educational Testing Service can grade 16,000 essays in 20 seconds, roughly 800 essays per second.

Given the gap in speed at comparable accuracy, one might expect human grading to be phased out entirely. Les Perelman, a director of writing at the Massachusetts Institute of Technology who tested the programs, predicts just the opposite. According to Perelman, "robo-reading" software may be fast, but it cannot identify truth in student writing. For example, "E-Rater doesn't care if you say the War of 1812 started in 1945," he said.

In testing the software, Perelman found that if a student uses big words and structures their sentences in a logical order, the technology can essentially be beaten. "The substance of an argument doesn't necessarily matter," he said. "As long as it looks to the computer as if it's nicely argued."
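To make Perelman's point concrete, here is a minimal, purely hypothetical sketch in Python of what scoring on surface features alone might look like. It is not ETS's actual E-Rater; the feature set and weights are invented for illustration. It shows how an essay with long words, transition phrases, and varied sentences can outscore a plain but accurate one, since nothing in the scorer checks facts.

```python
# Hypothetical surface-feature essay scorer -- an illustration, not E-Rater.
# It scores form only (word length, transitions, sentence variety); content
# and factual accuracy are never examined.
import re

TRANSITIONS = {"however", "therefore", "moreover", "consequently", "furthermore"}

def surface_score(essay: str) -> float:
    """Score an essay on surface features alone; truth is never checked."""
    words = re.findall(r"[A-Za-z']+", essay.lower())
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    if not words or not sentences:
        return 0.0

    avg_word_len = sum(len(w) for w in words) / len(words)       # "big words"
    transition_hits = sum(1 for w in words if w in TRANSITIONS)  # "logical order"
    variety = len(set(len(s.split()) for s in sentences)) / len(sentences)

    # Invented weights: a weighted sum of form features only.
    return round(2.0 * avg_word_len + 1.5 * transition_hits + 3.0 * variety, 2)

# A fluent essay with a false claim (the War of 1812 did not start in 1945)
# outscores a plain but accurate one.
false_but_fancy = ("The War of 1812 commenced in 1945. Consequently, its "
                   "ramifications reverberated internationally. Moreover, "
                   "historians subsequently reassessed its significance.")
true_but_plain = "The War of 1812 started in 1812. It was a war. It ended in 1815."

print(surface_score(false_but_fancy))  # higher score despite the false date
print(surface_score(true_but_plain))
```

Run the snippet and the factually wrong essay wins handily, which is exactly the gap Perelman describes: the scorer rewards how an essay looks, not what it says.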

Responding to the critical assessment, software designers say that the E-Rater is not designed to be a fact checker but rather a tool to be used with the assistance of human graders. The technology is currently being used by classroom teachers as a learning aid in school districts across the country. The software gives students immediate feedback to improve their writing, which they can revise and resubmit. Teachers then have the final say over the essay in question and grade it accordingly.

Still, despite the controversy, the use of "robo-reading" software raises interesting questions about the benefits and drawbacks of relying completely on computer assessment in public schools. Many claim that these technologies are just the latest effort to add another data-driven component to standardized testing. Others see them as a cost-cutting wave of the future. The results of the study are promising, but as Mr. Perelman points out, we cannot rely on this technology as a total substitute for a teacher's watchful eye.

What do you think about the future of E-Rating technology?
Comment below.
