U.S. Department of Education Issues Guidance on Preventing Discriminatory Use of AI in Schools
On November 19, 2024, the U.S. Department of Education’s (DOE) Office for Civil Rights (OCR) released a guidance document, “Avoiding Discriminatory Use of AI in Education,” to help schools implement artificial intelligence (AI) tools in ways that do not discriminate against students, consistent with federal civil rights laws.
OCR released this resource in response to Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which required the DOE to develop resources, policies, and guidance regarding AI to address safe, responsible, and nondiscriminatory uses of AI in education.
This resource highlights that schools that receive federal funding are responsible for ensuring that their use of AI complies with civil rights laws, including Title VI of the Civil Rights Act of 1964, which prohibits discrimination on the basis of race, color, or national origin; Title IX of the Education Amendments of 1972, which prohibits discrimination on the basis of sex; Section 504 of the Rehabilitation Act of 1973, which prohibits discrimination against individuals with disabilities; and the Americans with Disabilities Act (ADA), which protects individuals with disabilities from discrimination.
The resource summarizes the legal analyses OCR uses to determine whether discrimination exists and provides examples of conduct that could, depending on facts and circumstances, present OCR with sufficient reason to open an investigation.
The guidance offers several examples of how AI may result in discrimination if not carefully managed and used:
- Grading Systems: AI tools used for automated grading could unintentionally favor certain groups of students over others, reinforcing racial or socioeconomic biases present in the data used to train the systems.
- Student Discipline: AI tools for monitoring student behavior or predicting disciplinary actions might disproportionately target certain racial or ethnic groups, including non-native English speakers, contributing to racial disparities in school discipline.
- Admissions and Enrollment: AI systems used for college admissions could inadvertently discriminate against certain applicants, particularly if they rely on historical data and demographics that reflect existing gender disparities (such as the underrepresentation of female students in computer science) or other biases in the admissions process.
- Athletic Programming: AI scheduling algorithms used for athletics programming may disproportionately disadvantage female students.
- Proctoring: AI proctoring systems or classroom noise monitors may fail to accommodate students with disabilities.
The contents of this guidance do not have the force and effect of law, nor do they create new legal standards. However, the examples reinforce that schools must do the following:
- Carefully evaluate the accuracy and potential biases of AI tools before use and ensure that their application does not result in unequal treatment or hinder students’ ability to participate meaningfully in educational programs.
- Ensure that any technology they implement is thoroughly vetted for accuracy and fairness.
- Ensure that predictive analytics are free from discriminatory factors and that decisions based on such tools are fair and equitable to all students.
- Take immediate action to address complaints of discrimination or harassment, including when it involves generative AI, regardless of the school’s ability to access the specific tool.
Schools and colleges may need to provide training for teachers, administrators, and staff to understand the ethical and civil rights considerations and responsibilities pertaining to their use of AI.
Liebert Cassidy Whitmore attorneys continue to closely monitor guidance from the DOE and will provide updates to assist our clients with compliance. We are available to assist our clients in developing policies to promote responsible and ethical use of AI.