AI + Education Weekly Update

Week of October 27th, 2025

To catch up on another busy week in AI + Education news, read on for our review and analysis of the key highlights worth staying informed about.


Stanford: AI Companies Collect User Data

As K-12 schools increasingly adopt AI tools, this Stanford research raises a critical question: Are we adequately protecting student data? Five major AI companies use chat data for training by default, and some even train on children's inputs without proper consent. This underscores why educators need clear guidance on vetting AI tools for classroom use.

Key Highlights:

  • Three companies (Meta, OpenAI, Amazon) retain chat data indefinitely; two companies (Amazon, Meta) don't specify opt-out mechanisms.

  • Four companies (Google, Meta, Microsoft, OpenAI) allow accounts for children 13-18, with most failing to remove children's data from training.

  • Multiproduct companies combine chat data with other services (search, purchases, social media), allowing them to build detailed user profiles and target ads based on chatbot conversations.

Seven More States Add AI Guidance

The push toward structured AI guidance in K-12 education is accelerating, with seven more states issuing formal guidance and bringing the total to 33 states and 1 U.S. territory with policies in place.

Highlights:

  • Educators remain central to instruction, with AI augmenting rather than replacing human judgment and decision-making.

  • Guidelines prioritize student data protection (FERPA/COPPA compliance), academic integrity standards, and ensuring equitable access to prevent widening the digital divide.

  • States emphasize comprehensive teacher training and board-approved policies with frameworks for ongoing evaluation and accountability.

Explore all 33 state policies on our comprehensive AI guidance tracker.

Wikipedia + Reddit See Traffic Declines

Wikipedia and Reddit are experiencing significant traffic erosion as users increasingly rely on AI-generated answers from tools like Perplexity instead of visiting the source sites. This raises questions about how we teach research skills, source evaluation, and digital literacy in AI-enabled classrooms.

Key Highlights:

  • Reddit filed suit against Perplexity and three data-scraping companies, alleging “data laundering” that bypasses its paid licensing agreements and enables AI systems to answer questions with Reddit content without directing users back to the platform.

  • Wikipedia documented an 8% year-over-year decline in human pageviews, even as it remains a foundational training dataset that powers the AI tools drawing traffic away.

  • Both platforms face rising infrastructure costs from AI bots scraping their content even as user engagement declines, with answers delivered through AI tools rather than direct site visits.

Perplexity and OpenAI Launch AI Browsers

Perplexity and OpenAI have launched browsers (Comet and Atlas) with integrated AI assistants, moving beyond chatbots to make AI a core part of web browsing. This should prompt schools to reconsider acceptable use policies, research assignment design, and how they teach students to balance AI assistance with independent critical thinking.

Key Highlights:

  • Both browsers provide AI assistance directly on webpages without switching tabs—Atlas integrates ChatGPT memory and context, while Comet features an assistant in every new tab.

  • Users can offload tasks like research, shopping, and scheduling—Atlas’s Agent Mode and Comet’s Background Assistants can handle multi-step actions automatically.

  • Both include optional memory, incognito modes, and settings that let users control what information the AI can access.

LLMs Can Get “Brain Rot”

New research demonstrates that AI models trained on low-quality web data experience lasting cognitive decline, raising questions about which AI tools schools adopt and how educators teach students to evaluate AI output quality.

Key Highlights:

  • Models trained on engagement-driven content (clickbait, viral posts) showed reasoning accuracy drops from 75% to 57%, became more likely to comply with harmful instructions (safety risk scores increased from 61 to 89 out of 100), and developed problematic personality traits, including increased psychopathy and narcissism.

  • Affected models increasingly skipped logical reasoning steps, providing quick, shallow answers rather than working through problems methodically.

  • Post-training interventions using high-quality instruction data and clean pre-training could only partially restore capabilities, suggesting deep representational damage rather than superficial formatting issues.
