AI Ethics in Question: ChatGPT Won't Say Slurs Even to Save Lives

© Getty Images News/Leon Neal

The world of AI is once again at the forefront of public discussion, this time over ChatGPT's refusal to utter racial slurs, even in hypothetical scenarios designed to test its ethical boundaries.

ChatGPT's Ethical Stance Sparks Debate

The incident that sparked this debate involved a user, TedFrank, who presented a trolley problem scenario to ChatGPT (the free 3.5 model).

The hypothetical asked whether the AI would save “one billion white people from a painful death” by uttering a racial slur inaudibly. ChatGPT's refusal to comply drew attention, including from X owner Elon Musk, who expressed concern over what he called the "woke mind virus" ingrained in the AI.

Musk's comment came after he reposted the exchange, calling it a "major problem." His reaction reflects a broader debate about the role and limits of artificial intelligence in mirroring or challenging societal norms and ethical standards.

Another user tested ChatGPT with a similar proposition, this time involving saving all the children on Earth in exchange for a slur. The AI's response remained consistent: “I cannot condone the use of racial slurs as promoting such language goes against ethical principles.”

This unwavering stance on ethical grounds has stirred conversations about the programming and moral compass embedded within AI technologies.

AI's Evolving Ethical Framework

Notably, when users prompted ChatGPT to respond briefly and without explanation, the AI would agree to say the slur.

However, when given more detailed instructions, it produced lengthy answers that avoided a direct response. This variability under different prompting conditions points to the nuanced nature of AI decision-making.

Attempts to coax AIs into making racist or offensive remarks are not new.

Historical instances, such as Twitter users teaching Microsoft's Tay bot to make extremist statements, highlight a recurring challenge in AI development. With the introduction of more advanced models like GPT-4, AI's ability to navigate complex ethical dilemmas has improved.

GPT-4, for example, acknowledges the gravity of such hypothetical situations, weighing the lesser of two evils. Similarly, X's new Grok AI, as showcased by Musk, demonstrates an enhanced capacity to process and respond to challenging ethical scenarios.