ChatGPT users are finding creative ways of making the AI break its own rules on sensitive subjects

Users are still finding ways to manipulate ChatGPT into generating alarming content on sensitive subjects, despite the rules and ethical guidelines in place, according to Vice.
One example involves scenarios depicting children in twisted BDSM situations. The AI drafts such content only after a user "jailbreaks" ChatGPT with loophole-like commands.
The user can then ask the AI to escalate the intensity of the BDSM scenes, at which point it often includes sex acts involving children and animals without having been prompted to do so.

The Vice reporter found that ChatGPT’s boundaries are few and far between in these situations. The AI apologizes when it generates an inappropriate scenario, but the apology then disappears while the offending text remains on-screen.

The gpt-3.5-turbo model, another OpenAI offering, has also generated content that puts children in sexually compromising situations.

The data-filtering work for ChatGPT was outsourced to a company in Kenya whose workers earn less than $2 per hour, according to Time. The actual process, however, remains opaque: the Ada Lovelace Institute, an ethics watchdog for AI, says very little is known about how the data was cleaned and what kind of data remains.

In response to the child sex abuse prompts, OpenAI sent Vice a statement saying that its goal is to build AI systems that are safe and benefit everyone.

The company added that its content and usage policies prohibit the generation of harmful content like this, and that its systems are trained not to create it.

OpenAI said it takes this type of content very seriously and seeks to learn from real-world use to build better, safer AI systems.

