Policy recommendations from the DOE's first report on AI and Education
In the summer of 2022, the Department of Education convened a series of listening sessions on the topic of Artificial Intelligence (AI) and education. More than 700 constituents, representing educators, technology developers, researchers, policymakers, learners, and their caregivers, attended to voice their hopes and concerns about this rapidly developing technology and its potential impact on education.
The 71-page report generated from these listening sessions acknowledges the possibilities AI opens up for educators, such as assisting overextended teachers and creating personalized learning experiences for diverse learners, while cautioning against potential risks such as threats to data privacy and algorithmic bias.
In the report, the Department of Education outlines four foundations that should guide the adoption of AI in K-12 education.
Keep It Human-Centered
Noted as a strong favorite among the constituents, this first foundation is simple: humans, in this case teachers, should remain at the center of the educational process. Artificial Intelligence should never attempt to replace educators; its purpose should be to assist and enhance the work of educators and students.
While many educators are enthusiastic about the ways AI might aid their teaching, they also expressed significant concerns about privacy. The report suggests that as policies are developed to address these concerns, human decision makers should remain at their center. As the report states on page 7, "Society needs an education-focused AI policy that protects civil rights and promotes democratic values in the building, deployment, and governance of automated systems to be used across the many decentralized levels of the American educational system."
AI Must Advance Equity
Attendees at the listening sessions consistently expressed concern about racial equity and AI. Because AI systems are built on datasets, there was a strong push to create policies ensuring that those datasets are free of bias. The report points out that the historical data on which AI algorithms are based can, in many cases, be rife with bias.
The report offers the example of algorithms that might be used in colleges or universities to make admission decisions, flag students who might need intervention, or alert educators to potential cheating. These algorithms, the report suggests, must be audited for bias both during the development of the systems and once they are put into action.
Privacy and Effectiveness
Data safety and privacy was another pressing topic. AI relies on data, so developers must be vigilant about data privacy. As the report points out, most AI models were not developed with schools, or with student and teacher privacy, in mind; thus the models are unlikely to comply with existing student and state privacy laws.
Beyond privacy issues, educators made it clear that effectiveness is a key principle of education. They argued that leaders need evidence that AI-enhanced edtech aligns with existing policies, such as the Elementary and Secondary Education Act (ESEA).
Proceed... With Transparency
Attendees made clear that educators need more than disclosures as they begin to incorporate AI into their teaching; they should be able to understand how AI models work so they can spot problems as they occur. As developers continue to create AI systems and tools for education, teachers must be an integral part of the process, even if that means a slower development process.