55% of K-12 educators want to use AI but lack the knowledge to do so effectively [Green Flag]. That single number captures a contradiction playing out in classrooms across the country: teachers are eager but unprepared. Last week, ETS launched a new AI literacy assessment designed to measure how ready educators actually are to integrate artificial intelligence into instruction [EdWeek]. The timing matters. During the 2024-25 school year, 85% of teachers and 86% of students reported using AI tools, and the share of educators using AI "often" for school-related purposes jumped by 21 points [Green Flag]. Microsoft recently rolled out its own professional development program to address the training gap [EdWeek]. Yet even as AI use surges, one fundamental question has gone unanswered: do teachers actually understand what they're using? The early signals from the ETS assessment suggest the answer is far more uncomfortable than most school leaders expected.
The ETS Test That Changed the Conversation
The common belief has been straightforward: if teachers use AI tools regularly, they must be developing AI literacy along the way.
The ETS assessment challenges that assumption at its foundation.
ETS designed the test not to measure whether educators could log into ChatGPT or generate a lesson plan with an AI assistant. Instead, it targets practical competencies: evaluating AI-generated content for accuracy, understanding algorithmic bias, and navigating data privacy concerns in classroom settings [EdWeek]. The framework measures applied knowledge, not surface-level familiarity.
The distinction matters enormously. A teacher who uses an AI tool daily may still lack the ability to identify when that tool produces biased or inaccurate outputs. The ETS assessment was built to expose exactly this kind of gap: the distance between tool adoption and genuine mastery. Early indications suggest that distance is significant, particularly in under-resourced districts where professional development budgets are thinnest.
95% of educators already believe AI is being misused in some capacity at their institution [Green Flag]. The ETS test gives that widespread unease a measurable foundation.
Using AI Is Not the Same as Understanding It
Here is the misconception the ETS data corrects most sharply: tool usage does not equal competency.
Many educators conflate daily interaction with AI platforms with a genuine understanding of how those systems work, where they fail, and what risks they carry.
The progression from casual user to literate practitioner requires a different kind of learning. Consider the specific skill areas the ETS assessment targets:
- Content evaluation: Can a teacher reliably distinguish AI-generated misinformation from accurate material in a classroom scenario?
- Bias recognition: Does the educator understand how training data shapes AI outputs, and how that could affect students from different backgrounds?
- Data privacy: Can teachers articulate what student data an AI tool collects and how it is used?
- Ethical application: Does the teacher know when AI use crosses from helpful to harmful in an instructional context?
These aren’t abstract concerns. 36% of U.S. K-12 educators already list “increase in plagiarism and cheating” among their top AI concerns [Green Flag]. But plagiarism detection is only one narrow slice of the literacy framework educators need. The real challenge is building critical thinking skills to evaluate AI across every dimension of classroom life. One-off tool tutorials simply can’t deliver that.
Busting the AI-Ready Teacher Myth
A persistent belief in education circles holds that younger, tech-savvy teachers are inherently better prepared for AI integration.
The ETS assessment framework challenges this directly.
Age and general technology comfort are poor predictors of AI literacy. A teacher under 35 who grew up with smartphones may be perfectly comfortable navigating an AI interface yet still lack the foundational knowledge to evaluate whether that interface produces reliable, unbiased results. Similarly, STEM educators often score well on technical AI concepts but show significant gaps on the ethical and pedagogical dimensions of AI use in student instruction.
AI readiness is a distinct competency: separate from general tech skills, separate from subject-matter expertise, and separate from enthusiasm. It requires dedicated, structured learning with measurable outcomes.
This myth persists partly because of a widening perception gap. There’s a growing disconnect between leaders who believe adequate AI training support exists and educators who say they haven’t received it [Green Flag]. Administrators see tool adoption rising and assume mastery is following. The ETS assessment provides a framework to test that assumption, and the early signals suggest it doesn’t hold up.
Why School Systems Are Structurally Behind
The teacher readiness gap is not primarily an individual failure.
It reflects structural misalignment in how school systems have approached the AI transition.
Most districts have invested heavily in AI-capable devices and platforms while directing far less toward educator training. Hardware procurement consistently outpaces professional development spending by wide margins. The result is classrooms full of powerful tools operated by educators who haven’t received the preparation to use them responsibly.
Three structural factors compound the problem:
- No unified framework: Most districts lack a coherent, district-wide AI literacy progression for educators. Without shared standards, training stays fragmented and inconsistent.
- Slow curriculum cycles: State curriculum review processes typically take years, far too slow to keep pace with AI’s rapid evolution.
- PD model mismatch: The dominant professional development model still relies on short workshops focused on specific tools, rather than sustained, cohort-based learning that builds transferable critical thinking.
Microsoft’s recent push to provide AI-focused professional development acknowledges this structural gap [EdWeek]. But corporate training programs alone can’t substitute for the systemic reform districts need to build lasting educator competency.
Closing the Gap Through Measurable Standards
Correcting course requires moving beyond the idea that more AI exposure automatically produces more AI-ready teachers.
The practical impact of the ETS assessment is that it offers something education has lacked: a measurable baseline.
With a validated assessment tool, districts can identify specific skill deficits rather than guessing at readiness levels. That baseline becomes the foundation for targeted professional development: not generic workshops, but structured progression tied to competency benchmarks.
The shift looks something like this:
- From tool tutorials to critical literacy: Training should emphasize evaluation, ethics, and applied reasoning, not just how to click buttons.
- From one-day sessions to sustained cohorts: Extended, collaborative learning models produce stronger knowledge retention than isolated events.
- From optional enrichment to core requirement: AI literacy needs to move from the margins of teacher preparation into the center of certification standards.
Currently, very few accredited teacher preparation programs include mandatory AI literacy coursework. Until that changes, every new cohort of teachers enters classrooms with the same foundational gap the ETS test is designed to measure. The assessment doesn’t solve the crisis, but it makes the crisis impossible to ignore.
Schools can no longer afford comfortable assumptions about teacher readiness. With AI use surging among both educators and students, the gap between tool adoption and genuine mastery carries real consequences for instruction quality and student outcomes. The path forward is clear: establish measurable baselines, invest in sustained competency-based training, and embed AI literacy into the foundation of teacher preparation. Students navigating AI-shaped classrooms right now deserve educators whose readiness has been tested, not assumed.