AI + Education Weekly Update


Week of April 27, 2026

Catch up on another busy week in AI + Education news with our review and analysis of the key highlights worth knowing about.

A New Framework for GenAI Literacy

AI for Education released the SEE Framework, a practical, field-tested guide to building GenAI literacy through safe, ethical, and effective practices.

  • Key takeaway: This is a framework built specifically for GenAI, giving educators a shared language for moving from reactive to intentional practice.

    • SEE defines GenAI literacy as the knowledge, mindsets, and practices for using GenAI safely, ethically, and effectively.

    • The framework features practical resources such as common GenAI misconceptions, relatable “Framework in Action” scenarios, and age-appropriate look-fors and activity ideas.

  • Discussion prompt: SEE identifies five key mindsets for GenAI literacy (Be Intentional, Stay Critical, Be Transparent, Act Responsibly, Keep Learning). Which of these resonates most with the challenges you’re facing right now?

LAUSD Reverses Course on Classroom Tech

LAUSD, the nation's second-largest district, unanimously passed a resolution to set research-based screen time limits across all grade levels starting next school year.

  • Key takeaway: After years of 1:1 devices, LAUSD is pulling back on screen use, especially for its youngest learners, as developmental concerns about edtech and AI grow.

    • The forthcoming policy aims to prohibit device use until second grade, cap screen time at roughly one hour daily for grades 3-5, and restrict devices during lunch and recess. Decisions about 1:1 devices are left to individual schools.

    • The resolution was championed by a parent coalition, and also recommends more handwritten work and a public audit of all existing classroom technology contracts.

  • Discussion prompt: How do we balance concerns about technology with preparing students for an AI-enabled workforce? What does healthy engagement with GenAI look like in classrooms?

Gen Z AI Use Steady as Concerns Grow

A new Gallup, Walton, and GSV survey of Gen Zers found that while AI use has been consistent, excitement declined 14 points and anger rose 9 points in a single year.

  • Key takeaway: Gen Z’s exposure to AI (51% using daily to weekly) isn't building trust or enthusiasm, suggesting young people increasingly see AI as a concerning force in their lives.

    • Only 22% of respondents were excited (-14 points since 2025) about AI and just 18% were hopeful (-9 points).

    • Daily users had more positive views of GenAI, but their views also declined significantly over the past year.

  • Discussion prompt: What do you think is causing the sharp decline in positive attitudes toward AI? How do we best address these concerns?

The Deepfake Abuse Crisis in Schools

A Wired/Indicator analysisfound AI-generated child sexual abuse material (CSAM) incidents, often called “deepfake nudes,” at ~90 schools worldwide since 2023.

  • Key takeaway: This is an urgent AI-related harm facing schools, and most schools are unprepared, making proactive crisis response and survivor support and protection planning essential.

    • The report acknowledges its count (~600 students targeted since 2023) is likely a significant undercount; UNICEF estimates 1.2 million children had AI-generated CSAM made of them last year.

    • In nearly all cases, teenage boys use "nudify" apps to target female classmates. Victims report humiliation, school avoidance, and lasting fear the images will resurface.

  • Discussion prompt: What role can AI literacy play in preventing this harm, and what are its limits? What else do schools need to do?

Youth AI Use Is Dynamic, Not Monolithic

A new Rithm Project survey of 2,383 young people (ages 13-24) found that risk rises as AI use becomes more intimate, while human connection is protective.

  • Key takeaway: Young people aren't all using AI the same way, and what keeps them safe isn't limiting access; instead, they need to feel seen, safe, and free to be their authentic selves.

    • Markers of high-risk use (dependence, emotional attachment, turning to AI over people) appear across all clusters, but escalate as AI use gets more personal and relational.

    • The strongest predictor of high-risk AI use was not social isolation but feeling like a burden or being unable to be authentic with other people. Students with friend groups can still be at risk.

  • Discussion prompt: Trust and openness with others are key to helping youth navigate AI safely. What does that mean for how schools and families approach AI use?
