Microsoft Ignores Warning About Violent, Sexual Images

Between Google's earlier issues with Gemini and now the news coming out of Microsoft, we know that GenAI image generators are problematic and deeply biased.

As detailed in the article linked below, Microsoft failed to address an employee's major concerns about its Copilot Designer image generator, exemplifying why we need to prioritize rigorous testing and robust ethical guardrails as corporations rush to develop GenAI tools.

The ability of the Copilot system to produce disturbing, violent, and sexualized imagery, violate copyrights, and amplify biases is pretty scary. That the company didn't take sufficient action despite the employee's repeated escalations is even more troubling. (You can read more details about the story here.)

The "move fast and break things" mentality is just not working for GenAI, we need LLM creators to slow down and prioritize red-teaming (the practice of rigorously challenging systems by adopting an adversarial approach) EVEN if it means taking a tool off the market until it meets basic ethical guidelines.
