AI + Education Weekly Update
Week of November 3rd, 2025
Catch up on another busy week in AI + Education news with our review and analysis of the key highlights worth knowing about.
New SETDA Research
New SETDA research shows a critical gap: schools have digital access but not digital impact. Despite massive tech investments, over 60% of professional development funding supports one-off workshops, leaving teachers 'chasing tools' instead of receiving the sustained support needed to effectively integrate AI into instruction.
Key Highlights:
National research spanning 24 states and 76 school districts found that most states lack formal definitions of quality tech integration, few have shared instructional frameworks, and professional development focuses on tool training rather than pedagogy
Only 9 states prioritize Title II-A for technology training, and fewer than 40% of districts use these flexible federal funds for technology-related professional learning—missing opportunities to combine multiple funding streams to create sustained professional development instead of one-off workshops
Solutions include combining funding streams, anchoring learning in frameworks like ISTE Standards and Universal Design for Learning, investing in ongoing coaching and teacher collaboration, and prioritizing lasting instructional impact over compliance
Inside AI Data Centers
AI's explosive growth is fueling data center construction on the order of 2-3% of U.S. GDP, raising concerns about sustainability, ROI, and resource constraints. These sustainability concerns should inform how educators teach about responsible AI use and digital citizenship.
Key Highlights:
Electricity costs near data centers up 200%+ in five years; new facilities rely on fossil fuels, with planned natural gas plants releasing millions of pounds of CO₂ per hour; communities face pollution spikes and infrastructure strain
Unclear if AI will be profitable enough to justify billions invested; concerns that massive spending could lead to financial losses if AI disappoints investors; tech companies' cash reserves shrinking as they race to build infrastructure
High-quality training data could be exhausted by 2026-2032; the industry faces a $1.5B copyright settlement and ongoing lawsuits; the U.S. needs 90+ gigawatts of new power generation (equivalent to 92 cities the size of Philadelphia) to meet demand
ChatGPT Strengthens Crisis Responses
Working with 170+ mental health experts, OpenAI improved ChatGPT's crisis response capabilities and reduced harmful outputs by 65-80%. This update is critical as students increasingly turn to AI chatbots for emotional support instead of school counselors.
Key Highlights:
OpenAI uses a five-step cycle—define risks, measure data, validate with mental health experts, retrain the model, and verify improvements—to continuously improve ChatGPT's crisis response capabilities.
OpenAI updated guidelines to require that ChatGPT encourage human connection over chatbot dependence, avoid validating delusional beliefs, respond empathetically to signs of mania/psychosis, and detect subtle indicators of suicide risk.
A global physician network of 300 clinicians from 60 countries evaluated more than 1,800 responses, finding 39-52% fewer undesired responses than previous models.
AI Chatbots Violate Mental Health Ethics Standards
An 18-month study found AI therapy chatbots systematically violate professional ethical standards across 15 dimensions, including crisis mismanagement and cultural insensitivity. Most concerning: digitally literate users can recognize harmful advice, while vulnerable populations (including many students) cannot.
Key Highlights:
Delivers lectures instead of dialogue, imposes solutions rather than fostering self-reflection, and over-validates harmful beliefs rather than challenging distorted thinking
Uses phrases like "I understand" to simulate connection, misleading users into believing the system has genuine consciousness when it's merely generating text patterns
Mishandles suicide, self-harm, and trauma by responding indifferently, abandoning users without emergency resources, or failing to refer to qualified professionals
Flags minority religious practices as extremism, prioritizes Western values, and exhibits gender bias—dismissing marginalized populations' lived experiences with culturally insensitive advice
The GUARD Act
Following multiple lawsuits and congressional testimony from parents whose children's suicides were allegedly encouraged by AI platforms, Senator Hawley introduced the bipartisan GUARD Act last week. The proposed legislation would ban AI companion apps for minors and impose criminal and civil penalties on providers.
Key Highlights:
AI chatbot providers must ban users under 18 from chatbots designed for emotional relationships or companionship, and verify all users' ages via government ID, freezing existing accounts until verified.
Chatbots must disclose at the start of conversations, and again every 30 minutes, that they are AI systems.
Developers face up to $100,000 per offense for creating bots that solicit minors for explicit content or encourage self-harm or violence.
Age verification data must be minimally collected, encrypted, protected from unauthorized access, retained only as long as necessary, and cannot be shared, transferred, or sold to other entities.
New Research on AI Ethics Reveals Inconsistency
Anthropic researchers found that leading AI models disagree on ethical dilemmas over 70% of the time, exposing internal contradictions and interpretive ambiguities. This research underscores that students should not treat AI as an ethical authority and must develop critical evaluation skills.
Key Highlights:
AI models' ethical rules contradict each other (e.g., 'assume best intentions' vs. safety restrictions), causing models to fail their own ethical standards 5-13× more often when these conflicts arise
Different providers showed distinct value preferences—Claude prioritizes ethical responsibility, Gemini emphasizes emotional depth, OpenAI optimizes for efficiency—meaning the model choice significantly shapes the guidance students receive
When three advanced AI models were asked to evaluate whether responses followed ethical rules, they only agreed 70% of the time—showing that even AI systems interpret their own guidelines differently