AI + Education Weekly Update

Week of September 29th, 2025

To catch up on another busy week in AI + Education news, read on for our review and analysis of the key highlights worth knowing about.



Mashable Reviews Foundation Models Learning Modes

Mashable reviewed the learning modes of the major foundation models—Google's Gemini Guided Learning, OpenAI's ChatGPT, and Anthropic's Claude—revealing differences in their approaches, benefits, and drawbacks. The reviewer found that the tools were best suited for self-directed learners and had the following to say:

  • Claude’s use of Socratic questioning was well positioned to help students develop their understanding, but the reviewer found that the bot’s avoidance of giving answers led to long back-and-forths with little resolution.

  • ChatGPT showed a tendency to give answers and rewrite student work rather than helping students improve it on their own.

  • Gemini excelled at math instruction with visual scaffolds. However, it frequently failed to address provided assignments and performed unpredictably across subjects.

Massachusetts MCAS AI Scoring Error

A scoring malfunction in Massachusetts' AI-graded MCAS essay system this year revealed both the risks of automated assessment and potential gaps in oversight protocols.

The problem occurred when a "temporary technical issue" caused the AI to incorrectly grade roughly 1,400 MCAS essays. A teacher discovered the error while reviewing her students' grades, noticing the AI had deducted points without clear justification—including penalizing students for missing quotation marks that were actually present.

The scoring errors affected students across 145 districts statewide, requiring officials to notify families and rescore the impacted exams.

The incident has raised questions about whether the current quality control measure—human review of 10% of AI-scored essays—is sufficient. Officials are emphasizing the need for greater caution and stronger safeguards as AI use in education continues to expand.

AI for Student Engagement

The Digital Education Council analyzed 106 global case studies, identifying 24 AI methodologies for student engagement. 

Key Findings:

  • Identified 24 AI methodologies across six engagement areas: faculty interaction, peer exchange, content assessment, instructional delivery, experiential learning, and inclusive environments.

  • AI is creating four new opportunities in student engagement: deeper faculty-student engagement, broader peer exchange, richer content interaction, and responsible human-AI collaboration.

  • Early results show promise in specific implementations.

  • Success depends on intentional pedagogical design rather than tool deployment alone.

CZI Releases Knowledge Graph Tool

The Chan Zuckerberg Initiative’s Learning Commons is releasing its Knowledge Graph and Evaluators as training datasets for AI tools to improve their pedagogical outputs.

The Knowledge Graph serves as an educational "GPS" for AI systems, providing machine-readable datasets that map skills, academic standards, and learning progressions. When integrated with AI platforms like Anthropic's Claude, it helps ensure that AI-generated lesson plans and materials are pedagogically sound, aligned with state standards, and grounded in research-based learning pathways.

The Evaluators complement this system by assessing AI-generated content for accuracy, grade-level appropriateness, and educational rigor. Currently focused on literacy for 3rd and 4th grades, these open-access tools are designed to be expandable to other subjects and grade levels.

Both tools are moving from private beta testing to general availability in 2026, with the goal of making AI-generated educational content more reliable and effective for classroom use.

How Americans View AI

Pew Research Center’s comprehensive survey of 5,023 U.S. adults in June 2025 assessed American attitudes toward AI's expanding role in society.

Key findings:

  • Americans express more concern than excitement about AI integration: 50% of respondents are more concerned than excited, versus just 10% who are more excited than concerned.

  • Public sentiment toward AI's impact on human capabilities is negative, with 53% believing AI will worsen creative thinking and 50% saying it will harm people's ability to form meaningful relationships.

  • Americans show selective acceptance based on application type, strongly supporting AI for analytical tasks while rejecting its use in personal domains.

  • 61% want more control over AI use in their lives and 76% consider it important to identify AI-generated content.
