
Austria Accuses OpenAI of Privacy Breach as ChatGPT Makes Critical Errors

OpenAI recently admitted that it is unable to correct ChatGPT when the chatbot provides incorrect information.

The admission puts the company in conflict with the European Union (EU) General Data Protection Regulation (GDPR), which mandates that personal data be both accurate and accessible to the individuals it concerns.

OpenAI Response

The organization also cannot disclose the sources of the data ChatGPT stores about individuals, or what exactly that data contains. OpenAI is aware of the issue, but its response has been dismissive.

The firm suggests that “research on the level of truthfulness in large language models has yet to be carried out fully.” In response, the privacy advocacy group noyb (None Of Your Business) filed a complaint against OpenAI with the Austrian Data Protection Authority (DPA).

Launched in November 2022, ChatGPT was met with both excitement about AI and doubts. While people use the bot for many tasks, including answering questions, the firm behind it stipulates that the system merely predicts the words most likely to follow a prompt.

Training on large datasets therefore does not guarantee factual accuracy. Generative AI systems are prone to producing fabricated responses, a phenomenon known as “hallucination.”
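To make the mechanism concrete, here is a minimal toy sketch in Python (with made-up probabilities; this is not OpenAI's implementation) of how next-word prediction can yield a fluent but false statement: the model samples whatever continuation is statistically likely, with no step that checks the claim against verified facts.

```python
import random

# Toy next-token probabilities for the prompt "Her birthday is" --
# hypothetical numbers for illustration only, not real model output.
next_token_probs = {
    "March 3": 0.30,     # fluent and plausible, but unverified
    "July 12": 0.25,     # equally fluent, equally unverified
    "not public": 0.05,  # the cautious answer is rarely the likeliest
}

def sample_next_token(probs):
    """Pick a continuation in proportion to its probability.

    Nothing here consults a database of verified facts: the model
    optimizes for likely-sounding text, which is why confident but
    false answers ("hallucinations") can emerge.
    """
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print("Her birthday is", sample_next_token(next_token_probs))
```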

Instances of Misinformation and Compliance Issues

There may be room for such errors in academic or casual settings, but where individuals’ personal information is concerned, they become a question of ethics and law. EU rules on the accuracy of personal data date back to 1995 and are now enshrined in Article 5 of the GDPR.

Furthermore, individuals have the right to have inaccurate data corrected (Article 16) and the right to know how their data is processed, including its sources (Article 15).

Maartje de Graaf, a data protection lawyer at noyb, points to the harm false information can cause, particularly to individuals. She stresses that technology must comply with legal requirements, not the other way around.

A recently published New York Times report found that chatbots fabricate information at rates ranging from 3% to 27%. In the complaint against OpenAI, for example, ChatGPT repeatedly gave incorrect information about a public figure’s date of birth.

OpenAI, for its part, maintains that correcting the data is not feasible. The company says it can filter or block responses to certain prompts, but not without suppressing all information connected to the individual. It also fell short of its GDPR obligations by failing to respond adequately to access requests.
