Scarlett Johansson Challenges OpenAI Over ChatGPT’s Voice Similarity
In an escalating dispute, actress Scarlett Johansson has publicly accused OpenAI, the prominent artificial intelligence company, of using a voice in its ChatGPT product that closely mimics her own.
According to the Daily Wire, the controversy began last fall when OpenAI approached Johansson and asked her to lend her voice to its interactive AI, a proposal she declined. OpenAI CEO Sam Altman had suggested that her voice would lend a "comforting" quality to the AI, but her refusal was categorical.
Following the rollout of a new voice option called "Sky," which many listeners noticed bore a striking resemblance to Johansson's voice, the actress took legal steps against OpenAI, leading to the feature's temporary removal.

The "Sky" voice was part of a recent update and immediately drew the attention of Johansson's associates and admirers because of its uncanny similarity to her own. Shocked and dismayed by the resemblance, she confronted the company and its leadership.
Public Reaction and Legal Interventions
Johansson expressed frustration and disbelief at how closely the "Sky" voice resembled hers. She felt misled in part because Altman appeared to hint at an intentional similarity on social media, posting the single word "Her" — the title of a film in which Johansson voiced an AI system.

For Johansson, the reference reaffirmed her concerns about privacy and consent in the AI sphere. "I was shocked, angered, and in disbelief," the actress said of the incident.
As her concerns grew, Johansson engaged legal counsel to address what she viewed as a misuse of her likeness. Her legal team promptly contacted OpenAI, demanding a detailed explanation of how the "Sky" voice was developed.
OpenAI subsequently paused the use of the "Sky" voice. The company said it had hired a different professional actress to create the voice and firmly denied any intention to imitate Johansson.
Nonetheless, Sam Altman expressed regret over the situation, admitting the company could have handled the communication with Johansson more effectively. "Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products," Altman disclosed.
Legal and Ethical Questions Arise
Reaction to the incident has mixed support for Johansson with fascination at AI's ability to closely emulate human characteristics. The episode has fueled a broader debate over the ethics of digital replication and consent, especially where celebrity likenesses are used in AI applications.
Johansson highlighted the broader significance of this case, stating, "In a time when we are all grappling with deepfakes and the protection of our likeness, our work, our own identities, I believe these are questions that deserve absolute clarity."
As the legal process unfolds, Johansson hopes for a resolution that addresses her concerns and helps establish clearer legislation on identity protection in digital and AI domains.
"I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected," Johansson concluded.
OpenAI's response to the controversy included suspending the disputed voice and reaffirming its commitment to ethical AI development. The company acknowledged the need for better communication and pledged to minimize such misunderstandings in the future.
Conclusion: A Call for Rights and Regulations
The dispute between Scarlett Johansson and OpenAI raises significant questions about voice identity and likeness rights in the age of artificial intelligence.
From initial excitement about technological advances to controversy over voice likeness, this situation emphasizes the need for clear ethics and consent practices in AI.
Johansson hopes for legislative advances that protect against unauthorized use of personal likeness, while OpenAI's interim measures signal a growing recognition of these concerns. As the story develops, it is likely to catalyze further discussion of digital rights and to shape how we perceive and regulate AI interactions.