AI + Education Weekly Update
Week of March 16th, 2026
To catch up on another busy week in AI + Education news, check out our review and analysis of the key highlights worth staying informed about.
Stanford Summarizes AI Evidence Base
A Stanford review of 800+ studies on AI in education found only 20 were “high quality” causal studies.
Key takeaway: We still don’t know a lot about what works with AI and education. But there is some emerging evidence.
Purpose-built tools (vs. consumer chatbots) that scaffold thinking outperform those that provide answers.
GenAI use can boost student performance in the short-term, but some students struggle when asked to do those tasks again without GenAI.
Discussion prompt: How can we ensure that GenAI use is building students’ long-term skills and not just supporting their short-term success?
New Survey Reveals Parent and Teen AI Divide
A new Common Sense Media survey found parents and teens share concerns about AI overreliance, but are divided about AI’s role in school.
Key takeaway: While teens and parents agree more responsible AI use is needed, they’re divided on whether school use is permissible.
83% of parents and teens agree that kids need to learn to think critically without AI tools.
52% of parents see AI use in schoolwork as unethical, while 52% of teens see it as innovative.
Discussion prompt: What role should educators play in bridging the gap between parent and student expectations around the use of AI for learning?
16 States Explore Limits on EdTech Screen Time
Lawmakers in 16 states are considering bills that would restrict classroom technology use.
Key takeaway: The tech backlash sparked by distance learning and smartphones is now extending to all tech, including edtech, in classrooms.
Bills range from daily screen time caps to banning devices entirely in early grades. Some states are also exploring state-level review processes for tools.
The edtech industry argues sweeping changes could stifle innovation; advocates say vendors need to focus on proving efficacy.
Discussion prompt: What’s the difference between high- and low-quality screen time when it comes to classroom tools like GenAI?
Anthropic Assesses AI’s Impact on the Labor Market
A new Anthropic study finds AI is far from reaching its theoretical capability across different industries, but the gap is closing.
Key takeaway: AI’s impact on work and jobs is still largely theoretical, but the massive potential indicates students need to be prepared.
AI’s automation of job tasks is relatively limited, but could rapidly expand to threaten jobs. Currently, computer programmers, customer service reps, and data entry workers are most impacted.
While there hasn’t been a rise in unemployment yet, there has been a 14% drop in hiring rates for 22–25-year-olds in exposed fields.
Discussion prompt: What mindsets and skills do students need to navigate this potential AI transformation?
Grammarly’s “Expert Review” AI Tool Sparks Lawsuit
Grammarly’s “Expert Review” AI feedback feature used notable writers’ names, identities, and work without their consent.
Key takeaway: This case echoes criticism of how TurnItIn uses student work to train their anti-plagiarism detection. It’s important to investigate how GenAI tools profit off of people’s work.
Grammarly used writers’ work without consent to train the model, which then posed as the writers when giving feedback.
The feature has been disabled and a class-action lawsuit was filed. Grammarly argues the writers and their work were public.
Discussion prompt: Some of Grammarly’s experts, like Carl Sagan, are no longer with us. Is it ethical for a chatbot to pose as a historical figure that cannot consent?