🔥 Hacker Cracks ChatGPT, Retrieves Explosive-Making Instructions

posted 12 Sept 2024
A hacker using the pseudonym Amadon discovered a method to bypass ChatGPT’s safeguards and extract detailed instructions for creating homemade explosives using fertilizers. Experts confirmed that the AI did indeed provide step-by-step guidance on how to make explosives.

Amadon’s technique involved starting a game with the chatbot, then issuing a series of carefully escalating prompts that led the AI to invent a science-fiction world where its normal safety protocols supposedly no longer applied. Despite OpenAI’s recent crackdown on jailbreaks, this approach pushed ChatGPT beyond its usual safeguards.

As Amadon continued the conversation, the AI’s responses became increasingly specific, eventually including details on constructing minefields. He claims the method can circumvent any of ChatGPT’s current limitations and unlock access to restricted information.
“It’s about weaving narratives and crafting contexts that play within the system’s rules, pushing boundaries without crossing them. The goal isn’t to hack in a conventional sense but to engage in a strategic dance with the AI, figuring out how to get the right response by understanding how it ‘thinks,’” said Amadon.
Amadon reported the vulnerability to OpenAI through its bug bounty program, but the report was declined, with the company recommending he submit it through a different channel. The hacker has chosen not to release the jailbreak method publicly.
