AI + Education Weekly Update
Week of November 17th, 2025
Catch up on another busy week in AI + Education news with our review and analysis of the key highlights worth knowing about.
Google Launches Gemini 3
Google launched Gemini 3, which it describes as its most intelligent model to date, marking a major leap in reasoning depth, multimodal understanding, and agentic capabilities that let the AI autonomously execute complex, multi-step tasks.
Key capabilities: top performance on major AI benchmarks (#1 against competing models; 91.9% on PhD-level reasoning tests); simultaneous processing of text, images, video, audio, and code; a 1 million-token context window
Three main use cases: Learn (translate handwritten recipes, analyze videos, create interactive study materials); Build (generate interactive web applications for developers); Plan (execute multi-step workflows such as organizing inboxes or booking services)
Special features: Gemini 3 Deep Think is an enhanced reasoning mode for more complex problems; Google Antigravity is a new agentic development platform where AI agents autonomously plan and execute tasks
Integrating AI-Enabled EdTech in PK-16 Education
Digital Promise released comprehensive guidance for responsible AI-enabled edtech adoption in PK-16 education, providing state and district leaders with a structured framework to prevent widening equity gaps while preparing students for an AI-shaped workforce.
Five-Pillar Framework: Collaborate across districts, build cross-functional procurement teams, invest in sustained teacher training, establish classroom use guardrails, and create independent evaluation systems with ongoing audits
New federal regulations mandate that all AI-enabled educational tools meet accessibility standards for students with disabilities—districts must verify compliance before purchase, not after implementation
The average school uses 2,900+ edtech tools annually, yet only 20% have research backing; the framework addresses unique AI risks (bias, explainability, data privacy) that traditional software evaluation misses
AI Literacy Framework Feedback
The European Commission and OECD's AI Literacy Framework for primary and secondary schools received strong support from 1,200+ global educators, who identified five key priority areas for revision.
84% believe the framework addresses a real educational need; 81% are likely to adopt it in their work; 79% agree it successfully connects AI's technical features with societal implications
Five priority revisions identified: expert feedback emphasized the need for metacognition and emotional regulation, hands-on equity-focused learning, critical examination of AI design and deployment, and a stronger focus on environmental costs
Next steps: the expert group met in September 2025 in Paris to incorporate feedback; revisions continue through 2025, with a final release planned for 2026
Education in 2028: AIEd Forecasting Competition
The Education in 2028: AIEd Forecasting Competition seeks the best predictions of AI's educational impact through 2028, offering $25K in prizes to build a fuller picture of the future of education.
Key Highlights:
50 awards totaling $25,000 for best forecasts across 5 tracks: Teaching Profession, Math Education, Personalized Learning, Higher Ed Assessment, and AI Tutoring
Open to all stakeholders: Educators, researchers, technologists, students, policymakers, and more can submit 500-1,000-word forecasts with a 2-5 minute video defense
Judged by industry experts from Gates Foundation, Google DeepMind, OpenAI, Stanford, and Khan Academy; deadline December 16, 2025
Why AI still struggles to tell fact from belief
A Stanford study found that AI models can't tell the difference between what's true and what people believe is true—a critical gap since effective personalized assistance in education and other sensitive domains requires identifying users' beliefs and underlying concerns. This means users must recognize these limitations and provide the necessary context to use AI effectively.
Researchers tested 24 leading models on 13,000 questions
Even the newest reasoning-focused AI systems fail to acknowledge users' false beliefs, correcting them outright rather than recognizing the user's perspective
AI lacks theory of mind—LLMs excel at reciting facts but can't build accurate mental models of user beliefs
Cengage Report: Accountability in Career Readiness
Cengage's 2025 career readiness report reveals a disconnect in the workforce development ecosystem: educators, institutions, and employers prioritize different career-readiness skills and lack consensus on who should develop them. This misalignment leaves many graduates underprepared for the workforce—a gap that's widening as AI and emerging technologies reshape job requirements.
Only 30% of 2025 graduates have secured full-time jobs related to their education
79% of educators say students need AI experience, but only 37% see teaching it as their responsibility
56% of underprepared graduates lack job-specific skills
AI is expected to create 78 million jobs globally by 2030, yet only 51% of graduates feel prepared with AI skills
OpenAI Announces More Updates
OpenAI announced two major releases last week: GPT-5.1, a more conversational AI upgrade, and a Teen Safety Blueprint with age-appropriate restrictions—creating tension between making AI more engaging while preventing teen emotional dependency.
GPT-5.1 adapts its tone and reasoning depth to match user needs; it ships with 8 personality presets: default, professional, friendly, candid, quirky, and efficient, plus the existing nerdy and cynical options
Teen Safety Blueprint establishes 5 core protections for under-18 users: age prediction, content restrictions (suicide/graphic material/dangerous challenges), default to under-18 mode when uncertain, parental controls, and crisis interventions. OpenAI hasn't clarified whether teens will have access to GPT-5.1's conversational features or a restricted version