OpenAI Accused of Mishandling Personal Data by Austrian Advocacy Group

The Austrian data rights advocacy group Noyb has filed a complaint against OpenAI, the developer of the AI chatbot ChatGPT, accusing it of failing to correct false personal data.

by Faruk Imamovic

The Austrian data rights advocacy group Noyb has filed a complaint against OpenAI, the developer of the AI chatbot ChatGPT. The complaint accuses OpenAI of failing to correct misinformation disseminated by ChatGPT, potentially in violation of European Union privacy law.

The action marks yet another chapter in the ongoing scrutiny of AI technologies and their compliance with the EU's stringent data protection rules. Noyb's complaint, filed on April 29, alleges that ChatGPT generated false information about an unnamed public figure.

When the individual asked OpenAI to correct or delete the inaccurate data, the company reportedly refused, saying it was impossible to alter the model's outputs. OpenAI also declined to disclose the sources of its training data, raising further concerns about transparency and accountability in AI operations.

The Broader Impact of AI on Privacy Regulations

The incident is not an isolated one among the AI challenges facing the EU. Maartje de Graaf, a data protection lawyer at Noyb, voiced serious doubts about whether AI technologies can adhere to EU privacy standards. "If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals," de Graaf stated.

She emphasized that the technology must comply with legal standards, not the other way around. Noyb has escalated its concerns to the Austrian data protection authority, urging it to investigate how OpenAI processes personal data and ensures its accuracy.

This is critical because the EU's General Data Protection Regulation (GDPR) requires that personal data be accurate and processed transparently, a standard that weighs especially heavily on widely used AI platforms like ChatGPT.

The problems with AI and misinformation are not limited to OpenAI. In recent months, other tech giants have run into similar trouble. Microsoft's AI chatbot, now known as Copilot, was found to provide misleading information about elections in Germany and Switzerland, and Google's AI chatbot, Gemini, drew backlash for generating biased and inaccurate images, prompting a public apology and promises of corrective measures from the company.

These ongoing issues underscore a growing dilemma: as AI technologies become more integrated into our daily lives, the mechanisms for ensuring their compliance with existing legal frameworks struggle to keep pace. This situation poses pressing questions about the future of AI governance and the safeguards necessary to protect individual rights in the digital age.
