2 min read March 23, 2026

Generative AI for IEPs: The Ethics of Automated Support Plans

Can ChatGPT write a legal education plan? We explore the ethical boundaries of using LLMs in special education and where the human expert must remain.

AI brain graphic

The promise: "Click a button, write an IEP." The reality: "Hallucinated targets and generic waffle."

Generative AI is transforming every industry, and special education is no exception. But when the legal rights of a child are at stake, "good enough" isn't good enough. The ethics of AI in SEN are complex.

The Hallucination Problem

Large Language Models (LLMs) like ChatGPT are designed to sound convincing, not to be truthful. In our tests, we've seen general-purpose models suggest interventions that have been scientifically debunked (such as "learning styles") simply because they appear frequently in their training data.

This creates a dangerous liability for schools. If you paste an AI-generated plan into a legal document, you are endorsing it.


The MeritDocs Approach: Augmented Intelligence

We believe AI is a tool, not a teacher. At MeritDocs, AI assists the professional; it never replaces them.

  • Curated Datasets: Our models are fine-tuned on high-quality, peer-reviewed educational strategies, not the open internet.
  • Human-in-the-Loop: We never "auto-publish." The AI suggests phrasing to save you typing time, but the human expert must edit and approve every line.

"AI should be the drafter, not the author. It lifts the heavy weight of the blank page, but the professional judgment? That stays with the human."

We are building the responsible future of AI in education. One where technology amplifies human empathy, rather than simulating it.