AI + Education Weekly Update

Week of October 13th, 2025

To catch up on another busy week in AI + Education news, read on for our review and analysis of the key highlights worth staying informed about.


CDT Releases Research on AI Risk to Students

New research reveals that schools are rapidly implementing AI without adequate preparation, creating a dangerous gap between use and understanding.

Key Highlights:

  • 85% of teachers and 86% of students use AI in classrooms, but only 48% of each group received training or guidance on its use

  • Schools with extensive AI use face significantly higher rates of data breaches, deepfakes, exposure to nonconsensual intimate imagery (NCII), system failures, and problematic chatbot relationships

  • Students with disabilities, LGBTQ+ students, and immigrant students face heightened risks, including unfair treatment, unauthorized disclosure of sensitive information, and surveillance leading to law enforcement or immigration contact

The Rithm Project Releases SPARKS Toolkit for Youth

The Rithm Project has launched the SPARKS Toolkit to help young people navigate AI's growing role in their social lives, as 72% of teens now use AI for companionship.

Key Highlights:

  • Interactive activities developed alongside teens and educators, including card games and role-playing scenarios that explore AI's influence on human connection

  • Focuses on creating honest conversations about AI's benefits and risks rather than gatekeeping technology from young people

  • Targets ages 13-22 and the adults who support them, with free resources adaptable across educational and community settings

Alaska Publishes AI Guidelines

Alaska establishes a statewide AI framework designed to empower educators and students while addressing the state's unique cultural and geographic needs:

Key Highlights:

  • Includes seven guiding principles: human-centered design, fair access, transparency, oversight, security, ethical use, and cultural responsiveness

  • The framework rejects both outright bans and unrestricted access, instead promoting a balanced approach with clear guidelines, comprehensive AI literacy training for all stakeholders, and educator autonomy in making professional decisions about classroom AI use

Deloitte Cognitively Offloads to AI

Deloitte Australia admitted to using AI in creating a $440,000 government report that contained fabricated academic references, nonexistent quotes, and multiple errors, leading to a partial refund and a corrected version being quietly released.

Key Highlights:

  • The corrected report revealed that Deloitte used Azure OpenAI GPT-4o to address "traceability and documentation gaps" but failed to disclose this AI use in the original July report

  • Corrections included removing three nonexistent academic references, a fabricated quote from a Federal Court judgment, and multiple typographical errors

Silicon Valley Students Draft AI Policy

Los Altos School District takes an innovative approach by placing students at the center of AI policy development rather than hiring consultants or relying solely on adult perspectives:

Key Details:

  • A group of student "tech interns" leads conversations on AI policy through interactive workshops where students, parents, and educators collaboratively discuss real scenarios, such as AI surveillance tools and varying classroom policies

  • The students created an AI chatbot that generates draft policies for school districts

  • The district has found that younger students respond more openly to peer facilitators than to adults, and the workshops have surfaced critical concerns like automation bias and the risk of administrators over-trusting AI-flagged content
