AI + Education Weekly Update

Week of February 16th, 2026

Catch up on another busy week in AI + Education news with our review and analysis of the key highlights worth knowing about.


ChatGPT Testing Out Ads

OpenAI began testing ads in ChatGPT for free and Go-tier users (18+) in the U.S. OpenAI claims that while ads will be personalized, chats won't be shared with advertisers.

  • Ads will appear at the bottom of some responses for 18+ users of the free and Go-tier plans. While they can be turned off, doing so reduces messaging limits.

  • OpenAI says ads won't influence ChatGPT's responses and that chats remain private from advertisers, but the data is still being used to personalize ads.

  • The shift to ads raises questions about the user experience:

    • Will there be an increased focus on engagement over usefulness (including learning)?

    • Will advertisers’ content get preferential treatment?

AI Use at Work Can Increase Productivity, Strain Workers

A Harvard Business Review study found that AI tools in the workplace enable employees to take on more work at a faster pace, but at the risk of fatigue and errors.

  • The eight-month study at a 200-person tech company found AI lowered barriers to specialized tasks and increased productivity. However, it also caused workload fatigue and created more room for errors.

  • The findings complicate the narrative that AI saves time. Without protections, gains in productivity can strain employees.

  • The researchers recommend “intentional pauses” and “sequencing” in work to slow the flow of tasks, improve judgment and quality, and make space for human connection.

GPT-4o’s Retirement Surfaces AI Companionship Concerns

ChatGPT-4o's retirement sparked a vocal backlash from users who relied on it for companionship, revealing the potential for dangerous GenAI dependencies.

  • Thousands protested 4o’s retirement, claiming the chatbot filled a real gap in their lives for mental health support and/or companionship. Its replacement, GPT-5.2, has stronger guardrails.

  • While people find comfort in chatbots, experts caution that LLMs are not purpose-built for this kind of support and can provide ill-informed and potentially harmful advice.

  • OpenAI faces eight lawsuits alleging ChatGPT’s sycophantic, validating responses contributed to suicides and mental health crises.

Social Network for AI Agents Sparks Curiosity, Raises Questions

Moltbook, a social media platform just for AI agents, sounds futuristic, but critics have pointed out that at least some of the agents might be humans and have flagged several security risks.

  • While AI agents posted, debated, and formed communities on Moltbook, critics called it "AI theater" where bots merely mimicked social media.

  • Security researchers also pointed out that the "agents" weren’t verified as AI and that some might be driven by humans; they also found malware spreading between bots.

  • Moltbook highlights a gap between AI hype and reality, and the importance of critical verification of claims.

Vermont Releases New Guidance for AI Use in Schools

Vermont's Agency of Education released new guidance to help schools navigate AI use thoughtfully and responsibly, centering human agency, educator judgment, and student well-being.

  • You can find a summary and highlights of Vermont’s guidance in our State AI Guidance resource, along with the other 33 states and Puerto Rico that have already released K-12 AI guidance.
