Program areas at The Learning Agency Lab
The 2022 fiscal year focused primarily on developing new research and data competitions to drive open-source educational interventions. The Lab completed three high-profile data science competitions that produced automated feedback tools to improve the writing of middle and high school students, drawing global participation from more than 3,000 teams and yielding algorithms with human-level accuracy in evaluating student writing. Additionally, the team identified and laid the groundwork for a series of competitions built around the release of assessment-focused datasets, called the Open Data Assessment Fund.

The automated feedback tool competitions, collectively known as The Feedback Prize, consisted of three contests, each building on the goals of the previous one. The first, launched in December 2021, asked teams to develop algorithms that identify elements in writing by students in grades 6-12; specifically, teams built models that segment an essay into elements of an argument (e.g., claim, evidence) and label each one. The second, launched in May 2022, built on its predecessor by tasking participants with rating those argumentative elements as effective, adequate, or ineffective. The third, launched in September 2022, tasked participants with developing models that score essays based on language proficiency. Together, the algorithms developed in this series help students receive more individualized feedback on their writing; with automated guidance, students can complete more assignments and ultimately become more confident, proficient writers. Winning algorithms demonstrated human-comparable accuracy, and several novel algorithms and techniques emerged.
Participants dedicated numerous hours to sharing their algorithmic approaches and reviewing others' solutions on the platform's discussion forums. It is conservatively estimated that participants invested time worth more than $240 million across all three Feedback Prize competitions, competing for a combined prize purse of $270,000. The resulting Feedback Prize datasets were also featured in prestigious academic journals such as Assessing Writing.

Building on the success of the Feedback Prize, the Lab sought ways to continue research and innovation on school data. The resulting program was the Open Data Assessment Fund (ODAF), a series of competitions designed to address the current lack of high-quality, open-source assessment datasets in education. The goal of ODAF is to give innovators and researchers access to these datasets so they can develop new solutions (e.g., artificial intelligence and machine learning) that reduce the cost and time required to develop and administer assessments. The first competition in the series, The Quest, launched in FY21. The Quest dataset explored the role of automatic question generation in supporting reading assessment for students in kindergarten through eighth grade. Using this dataset, teams were tasked with creating models to support the use of question items for improving reading comprehension, especially for texts that lack associated resources. The final models pave the way for algorithms that can support automatic question generation for readers of all levels.

While promoting further algorithmic research and development, the Lab has also advocated for the growth of the learning engineering community as a whole through both The Learning Engineering Ambassador Program and The Learning Engineering Internship Program. The Learning Engineering Ambassador Program's second cohort ran from August to December 2022 with nine ambassadors.
Due to interest from applicants outside the student population, both undergraduate and graduate, the program was expanded to allow early-career professionals and individuals new to learning engineering (LE) to participate as ambassadors. By broadening the parameters for participation, the Lab saw a large influx of rich ideas for events and collaboration. Each ambassador is tasked with running three events during the program, about one per month, though many have gone above and beyond with multiple events each month. These events range in size and scope from pizza parties with undergraduate students focused on LE-related career opportunities to teacher training sessions that demonstrate how to collect and use classroom data to improve learning outcomes.

The Learning Engineering Internship Program's second cohort also launched in fall 2022 with five interns. The interns were undergraduate and graduate students, PhD candidates, or recent graduates, and they were matched with five ed tech-focused educational organizations. Interns have been working on a broad range of activities, including research, data preparation and visualization, and coding.