AI + Education Weekly Update
Week of March 23rd, 2026
Catch up on another busy week in AI + Education news with our review and analysis of the key highlights worth knowing about.
AI-Generated Kids’ Videos Flood YouTube
YouTube has been inundated with “AI slop” for kids, including videos with inaccurate content, dangerous messages, and zero quality checks.
Key takeaway: At home, students could be consuming harmful AI-generated content that’s setting their learning back.
One AI-generated children's channel has posted over 10,000 videos in seven months, many containing significant inaccuracies.
Child development researchers point out that low-quality AI content can create misconceptions that slow growth and development.
Discussion prompt: How do we help parents and young kids spot harmful AI-generated content that markets itself as “educational”?
What 81K People Think About AI
Anthropic conducted a massive qualitative study of 81,000 Claude users across 159 countries, revealing that people hold hopes and fears about AI, often all at once.
Key takeaway: 33% of respondents cited AI's learning benefits, while 17% worried about “cognitive atrophy.”
Educators were 2.5–3x more likely than average to report witnessing cognitive atrophy (likely in their students).
AI's learning benefits appear to be strongest when learning is self-directed rather than required.
Discussion prompt: A chatbot was used to conduct the interviews. What might the benefits and drawbacks be to using AI interviewers for research?
The Growing Dangers of “Seemingly Conscious” AI
Microsoft AI's CEO argues that AI developers must stop designing AI to appear conscious, because doing so is manipulative and potentially damaging to society.
Key takeaway: Students need technical GenAI literacy to understand why GenAI seems human even though it isn’t.
GenAI mimics human consciousness because it's trained on human writing, and chatbots are designed for emotional resonance.
AI agents, as autonomous systems, introduce even more confusion, and developers need to “actively engineer the illusion of consciousness out of [their] products.”
Discussion prompt: How might a GenAI chatbot be designed and engineered to communicate that it’s not human or conscious?
How Educators Use SchoolAI Assistants
Stanford partnered with SchoolAI to explore how 5,500 K-12 educators used AI assistants in late 2024.
Key takeaway: Since educators primarily used AI for curriculum planning, the researchers emphasized the importance of tools and training that help educators locate and use high-quality content.
84% of educators used only one of the many available AI assistants, defaulting to the multi-purpose chatbot.
While curriculum and content was the most popular topic (2 out of every 5 messages), conversations tended to move through multiple topics.
Discussion prompt: What risks and opportunities might there be in using a tool like SchoolAI for content or lesson planning?