Winners of the Automated Essay Grading Contest Announced
Have you ever wished that someone else could grade that stack of student essays for you? It turns out that software exists to do just that! A recent automated essay grading contest has produced software that grades essays more consistently than human beings do. With international contestants hailing from a wide variety of fields, the contest brought together some of the globe’s brightest minds to design software that could make teachers’ lives significantly easier. However, the exact role of automated essay grading in the future of education remains unclear.
Automated Essay Grading Contest
The William and Flora Hewlett Foundation awarded $100,000 to the winners of its Automated Student Assessment Prize earlier this month. The contest challenged its 2,500 entrants to develop software that replicates the way human educators grade essays. The foundation had previously issued a report showing that currently available software can already approximate human grading, but the winning team’s software graded essays even more consistently.
And The Winners Are …
Perhaps the most surprising thing about the 11 individuals who make up the automated essay grading contest’s three winning teams is that none of them comes from an educational background. The first-place team, which took home $60,000, was composed of a particle physicist, a semantic analyst pursuing a graduate degree in computer science, and an engineer for the National Weather Service. Members of the two runner-up teams included a Ph.D. in artificial intelligence, a fraud investigator, and a software engineer.
Applications in Education
The role of automated essay grading software in the future of education is somewhat controversial. The most likely application for such software is to grade the essays included in standardized tests, which would allow for cheap, efficient, and relatively objective evaluation of students’ writing skills. But standardized testing is itself a contentious subject in education, one too large to tackle here. Critics of the software have also argued that it places too much emphasis on linguistic structure without deeply evaluating the coherence of a student’s argument.
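The critics’ concern can be made concrete with a toy sketch. The scorer below is purely illustrative (it is not the contest software, and the feature weights are invented for this example): it rates an essay using only surface features such as length, sentence length, and vocabulary variety, so a structurally polished but incoherent essay could score as well as a well-argued one.

```python
# Toy illustration only: a naive essay scorer built on surface linguistic
# features. Real automated grading systems are far more sophisticated;
# this sketch just shows why feature-based scoring can miss coherence.
import re


def surface_features(essay: str) -> dict:
    """Extract simple surface features from an essay."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "word_count": len(words),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # Fraction of distinct words, a crude proxy for vocabulary variety.
        "vocab_richness": len({w.lower() for w in words}) / max(len(words), 1),
    }


def naive_score(essay: str) -> float:
    """Combine surface features with hand-picked (hypothetical) weights
    into a 0-6 score. Note: nothing here examines argument coherence."""
    f = surface_features(essay)
    raw = (0.01 * f["word_count"]
           + 0.05 * f["avg_sentence_len"]
           + 2.0 * f["vocab_richness"])
    return round(min(max(raw, 0.0), 6.0), 2)
```

Because the score depends only on counts, reordering an essay’s sentences at random would leave it unchanged, which is exactly the gap critics point to.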
Automated essay grading software could also allow teachers to more frequently assign practice writing and quickly return the assignments to students with detailed feedback on areas that need improvement. This would be a valuable time-saving tool for teachers, and by reducing (but not eliminating) the time teachers spend grading essays, this software has the potential to free educators’ time for more engaged and responsive teaching tasks. This is the most exciting possibility for educational software: that it would assist student learning while also reducing the strain on our country’s overtaxed educators.
Regardless of exactly what role automated essay grading software ends up playing in education, it seems clear that it will have a significant impact. Its labor-saving potential alone makes it a valuable contribution to the field. The challenge now will be to integrate the software into our education system so that it complements the work of educators. Although some people are likely to see automated essay grading as a replacement for human staff, no algorithm can truly replace a teacher’s engagement with a student; what it can do is help teachers serve students more effectively.