
OpenAI puts flirty ChatGPT voice that sounds like ScarJo in ‘Her’ on hold

New York CNN — OpenAI says it's hitting the pause button on a synthetic voice released with an update to ChatGPT that prompted comparisons with a fictional voice assistant portrayed in the quasi-dystopian film "Her" by actor Scarlett Johansson.

The retreat by OpenAI follows a backlash to the artificial voice, known as Sky, which critics described as overly familiar with users, sounding as if it had emerged from a male developer's fantasy. It was widely mocked for its flirtatious tone.

"We've heard questions about how we chose the voices in ChatGPT, especially Sky," OpenAI said in a post on X Monday. "We are working to pause the use of Sky while we address them."

The voice in question is not derived from Johansson's, the company said in a blog post Sunday, but instead "belongs to a different professional actress using her own natural speaking voice." OpenAI said that with each of its AI voices, it tried to create "an approachable voice that inspires trust," one that contains a "rich tone" and is "natural and easy to listen to."

The ChatGPT voice mode that used the Sky voice had not yet been widely released, but videos from the product announcement and teasers of OpenAI employees speaking with it went viral online last week.

Some who heard Sky derided it as perhaps too easy to listen to. Last week, the controversy inspired a segment on The Daily Show in which senior correspondent Desi Lydic described Sky as a "horny robot baby voice."

"This is clearly programmed to feed dudes' egos," Lydic said.
"You can really tell that a man built this tech."

Even OpenAI CEO Sam Altman appeared to acknowledge the widespread parallels users were drawing with Johansson when he posted to X on the day of the product's announcement: "her."

"Her" is the title of the 2013 film in which Johansson portrays an artificially intelligent voice assistant with whom the protagonist, played by Joaquin Phoenix, falls in love, only to be left heartbroken when the AI admits she is also in love with hundreds of other users and later becomes inaccessible altogether.

Questions about leadership

The criticism surrounding Sky highlights broader societal concerns about the potential biases of a technology designed by tech companies largely led or funded by White men.

The announcement came after OpenAI leaders were forced to defend their safety practices over the weekend after a departing employee called the company's priorities into question. Jan Leike, who formerly led a team focused on long-term AI safety but left OpenAI last week along with co-founder and chief scientist Ilya Sutskever, posted a thread on X Friday claiming that "over the past years, safety culture and processes have taken a backseat to shiny products" at OpenAI. He also raised concerns that the company was not devoting enough resources to preparing for a possible future "artificial general intelligence" (AGI) that could be smarter than humans.

Altman quickly responded, saying he appreciated Leike's commitment to "safety culture" and adding: "He's right we have a lot more to do; we are committed to doing it."

The company also confirmed to CNN that in recent weeks it had begun to dissolve the team Leike led, and instead was integrating members of the team across its various research groups. A spokesperson for the company said that structure would help OpenAI better achieve its safety objectives.
OpenAI President Greg Brockman responded in a longer post on Saturday, signed with both his name and Altman's, laying out the company's approach to long-term AI safety.

"We have raised awareness of the risks and opportunities of AGI so that the world can better prepare for it," Brockman said. "We've repeatedly demonstrated the incredible possibilities from scaling up deep learning and analyzed their implications; called for international governance of AGI before such calls were popular; and helped pioneer the science of assessing AI systems for catastrophic risks."

He added that as AI becomes smarter and more integrated with humans' daily lives, the company is focused on having in place "a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities."