Microsoft engineer Shane Jones was allegedly silenced by the company’s legal department after raising concerns about DALL-E 3. Jones claims to have discovered security vulnerabilities allowing the generation of explicit images. Despite attempting to alert the public and lawmakers, his efforts were met with suppression. Microsoft denies the allegations and says it is committed to addressing employee concerns. This whistleblower’s case sheds light on potential risks associated with the AI model and the challenges of disclosing such vulnerabilities.
Microsoft engineer Shane Jones claims he was silenced by the company’s legal team after sounding the alarm about potential security issues with DALL-E 3, an artificial intelligence image-generation model.
Security Gaps and Troublesome Revelations
Jones, a member of Microsoft’s engineering team, says he discovered a serious flaw in DALL-E 3 that could allow explicit and disturbing images to slip past the model’s safeguards. Viewing this as a threat to public safety, Jones reported it up the chain of command at Microsoft. The response, however, wasn’t what he expected.
Early in December, Jones reported the flaw to his superiors, who told him to raise it directly with OpenAI, the organization behind DALL-E 3. After doing so, he says he became convinced the flaw could enable the creation of harmful images, a conclusion that prompted him to take a bolder step.
Suppressed Whistleblower Appeals to Capitol Hill
To bring attention to the issue, Jones wrote a detailed letter to US Senators Patty Murray and Maria Cantwell, Rep. Adam Smith, and Washington state Attorney General Bob Ferguson. The letter, published by GeekWire, outlined his concerns and urged the suspension of DALL-E 3 until OpenAI addressed the risks.
But instead of support, Jones says he faced pushback from Microsoft’s legal team. When he posted the letter on LinkedIn, calling for the suspension of DALL-E 3, he claims Microsoft’s legal team quickly demanded he take it down. Although he complied, he says he never got a clear explanation for the suppression.
Microsoft and OpenAI Respond
Both Microsoft and OpenAI have responded to Jones’ claims. An OpenAI spokesperson said they looked into Jones’ report right away and confirmed the technique he shared didn’t bypass their safety systems. They stressed their commitment to safety and the steps they’ve taken to filter explicit content from DALL-E 3’s training data.
For its part, a Microsoft spokesperson reiterated the company’s commitment to addressing employee concerns and emphasized the use of internal channels for reporting issues. Microsoft also stated that the techniques reported by Jones didn’t bypass the safety filters in any of its AI-powered image-generation products.
Connection to Deepfakes
Jones also linked the security flaws to the recent deepfake incidents involving Taylor Swift. He claimed the widely circulated deepfakes were the result of similar vulnerabilities, and that Microsoft Designer, which uses DALL-E 3 as a backend, was reportedly part of the deepfakers’ toolkit. According to 404 Media, Microsoft patched the loophole after being notified.
The whistleblower sees these incidents as examples of the potential abuses that could happen if such vulnerabilities are ignored. He’s urging the US government to set up a system for reporting and tracking specific AI vulnerabilities, emphasizing the need to protect employees who speak out.
Microsoft’s Reassurance and Internal Reporting
In response to the deepfake allegations, Microsoft added that its Office of Responsible AI has set up an internal reporting tool for employees to report and escalate concerns about AI models. They reiterated their commitment to connecting with the whistleblower to address any remaining concerns he may have.
The situation raises broader questions about corporate responsibility, the accountability of AI models, and the challenges faced by employees when trying to disclose potential risks to the public. Jones concludes by urging lawmakers to take action, stressing the importance of holding companies accountable for their products’ safety.
The controversy surrounding DALL-E 3 continues to unfold, with Microsoft and OpenAI at the center of a whistleblower’s claims. As the dust settles, the incident highlights the delicate balance between innovation, transparency, and the responsibility that comes with deploying advanced AI models to the public. The challenges faced by Shane Jones serve as a stark reminder of the evolving landscape where technology and ethics intersect.