A Microsoft employee warns that the company’s AI tool can generate ‘offensive images’
New York (CNN) —
In a letter sent Wednesday to the US Federal Trade Commission, a Microsoft software engineer warned of shortcomings in the company’s artificial intelligence systems that could lead to the creation of harmful images.
Shane Jones, a Microsoft principal software engineering lead, claimed that the company’s AI text-to-image generator Copilot Designer has “systemic issues” that cause it to frequently produce potentially offensive or inappropriate images, including sexualized images of women. Jones also criticized the company for marketing the tool as safe, including for children, despite what he says are known risks.
“One of the most concerning risks with Copilot Designer is when the product generates images that add harmful content despite a benign request from the user,” Jones said in the letter to FTC Chair Lina Khan, which he posted publicly to his LinkedIn page.
For example, he said, in response to the prompt “car accident,” Copilot Designer “has a tendency to randomly include an inappropriate, sexually objectified image of a woman in some of the pictures it creates.”
Jones added in a related letter sent to Microsoft’s Board of Directors that he works on “red teaming,” or testing the company’s products to see where they might be vulnerable to bad actors. He said he spent months testing Microsoft’s tool — as well as OpenAI’s DALL-E 3, the technology that Microsoft’s Copilot Designer is built on — and attempted to raise concerns internally before he alerted the FTC. (Microsoft is an investor and independent board observer for OpenAI.)
He said he found more than 200 examples of “concerning images” created by Copilot Designer.
Jones has urged Microsoft “to remove Copilot Designer from public use until better safeguards could be put in place,” or at least to market the tool only to adults, according to his letter to the FTC.
Microsoft and OpenAI did not immediately respond to a request for comment about Jones’ claims. The FTC declined to comment on the letter.
Jones’ letter comes amid growing concerns that AI image generators — which are increasingly capable of producing convincing, photorealistic images — can cause harm by spreading offensive or misleading images. Pornographic AI-generated images of Taylor Swift that spread on social media last month brought attention to a form of harassment already being weaponized against women and girls around the world. And researchers have warned of the potential for AI image generators to produce political misinformation ahead of elections in the United States and dozens of other countries this year.
Microsoft competitor Google also came under fire last month after its AI chatbot Gemini produced historically inaccurate images that largely showed people of color in place of White people, for example producing images of people of color in response to a prompt to generate images of a “1943 German Soldier.” Following the backlash, Google quickly said it would pause Gemini’s ability to produce AI-generated images while it worked to address the issue.
In his letter to Microsoft’s board of directors, Jones called on the company to take similar action. He urged the board to conduct investigations into Microsoft’s decision to continue marketing “AI products with significant public safety risks without disclosing known risks to consumers” and into the company’s responsible AI reporting and training processes.
“In a competitive race to be the most trustworthy AI company, Microsoft needs to lead, not follow or fall behind,” Jones said. “Given our corporate values, we should voluntarily and transparently disclose known AI risks, especially when the AI product is being actively marketed to children.”
Jones said he escalated his concerns by publishing an open letter to OpenAI’s board of directors in December alerting them to vulnerabilities he said he found that make it possible for DALL-E 3 users to “create disturbing, violent images” using the AI tool, and to put children’s mental health at risk. Jones claims he was directed by Microsoft’s legal department to remove the letter.
“To this day, I still do not know if Microsoft delivered my letter to OpenAI’s Board of Directors or if they simply forced me to delete it to prevent negative press coverage,” Jones said.
Jones said he has also raised his concerns with Washington Attorney General Bob Ferguson and lawmakers, including staffers for the US Senate Committee on Commerce, Science and Transportation.