
ChatGPT and Privacy

Artificial intelligence has become increasingly sophisticated, and more of its potential is being unlocked every day. ChatGPT, an advanced language model trained by OpenAI, is a prime example of how AI can be used to simulate human-like communication. However, with the growing use of technology that handles personal data, privacy concerns have also increased. Here’s a look at the concerns that arise with the use of ChatGPT and what users can do to protect their data.


What is ChatGPT?

ChatGPT, an AI-based chatbot from OpenAI, is a powerful tool that can answer questions, generate text, and assist with writing tasks. The tool is trained on a massive body of text data containing an extensive range of language patterns, which it uses to generate appropriate responses to user queries.


ChatGPT can understand and respond to complex questions, making it an effective tool for a wide range of applications.


The Concerns

Frequent and widespread usage of this AI-powered tool has led to several privacy-related concerns:

Personal Information Collection

ChatGPT relies on vast amounts of data to function, which means it needs to collect and analyze data from multiple sources. This can include sources such as:

  • Text messages

  • Emails

  • Search history

  • Social media posts, etc.

An article from BBC Science Focus reports that the data used to train this chatbot is around 570 gigabytes in total, somewhere around 300 billion words. This raises concerns about the privacy of the data being used to train the model, leading to three major questions:

  1. How the data is being used

  2. Who has access to the collected data

  3. How the data is being protected

For example, if the data used to train ChatGPT contains sensitive information such as personal or financial data, location, or contact details, it could pose a threat to the privacy of the individuals involved. Likewise, if employees use ChatGPT with corporate data, intellectual property and privacy can be compromised. Additionally, if the data used to train the tool is not anonymized, it could lead to the identification of individuals whose data was included in the training set.
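
To make the anonymization point concrete, below is a minimal pseudonymization sketch in Python. It is a hypothetical illustration, not how OpenAI processes training data: it only catches direct identifiers such as email addresses and phone numbers via regular expressions, replacing each with a consistent placeholder token. Real anonymization pipelines must also handle names, addresses, and indirect identifiers, which are much harder to detect.

```python
import re

# Hypothetical example: replace direct identifiers with consistent
# placeholder tokens before text is stored or reused. The same
# identifier always maps to the same token.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def pseudonymize(text: str, mapping: dict) -> str:
    """Replace matched identifiers with stable tokens like <EMAIL_1>."""
    for label, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            if match not in mapping:
                mapping[match] = f"<{label}_{len(mapping) + 1}>"
            text = text.replace(match, mapping[match])
    return text

mapping = {}
record = "Contact Jane at jane.doe@example.com or +1 555-010-7788."
print(pseudonymize(record, mapping))
# -> Contact Jane at <EMAIL_1> or <PHONE_2>.
```

Even with a mapping like this, re-identification can still be possible from context (a job title plus a city can be enough), which is why anonymizing training data is a genuinely hard problem.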

Security Breaches

Another major concern with ChatGPT is the possibility of it being used for malicious purposes such as security breaches. Cybercriminals and hackers could potentially use the platform to collect the kinds of sensitive information noted above. ChatGPT can also be used to generate legitimate-looking phishing emails. There is genuine fear that ChatGPT could significantly alter the threat landscape as cybercriminals exploit its potential for malicious purposes.


OpenAI’s Privacy Policy

The company’s privacy policy states that it collects various data, including users’ IP addresses, device information, and usage data, to train its AI models. The company also states that it may share ‘personal information’ with unspecified third parties for business purposes without prior notice.

Amazon’s Warning to Its Employees

Business Insider’s piece examining Amazon’s internal communications reported that the company’s legal department had warned employees against using ChatGPT. The piece also stated that Amazon’s employees should not share the organization’s code or any confidential information with the chatbot. The move came shortly after ChatGPT’s responses appeared to replicate the company’s internal data.


In another instance, ChatGPT was able to solve Amazon interview questions that were supposed to be exclusive to the company’s recruiting board. Experts suggest it won’t take long for the tool to generate similar technical questions itself, or at least to decode their patterns.


OpenAI is implementing several safety measures to reduce ChatGPT’s impact on data privacy. Still, the risk posed by these systems underscores the need to establish standards. Regulation of high-risk emerging technologies can protect consumers without stopping innovation.

EU’s Proposed Artificial Intelligence Act (AIA)

The EU’s AI Act (AIA), the first of its kind, is being introduced in this regard to help restrict potential privacy breaches from AI-powered chatbots. The act sorts AI-based applications into different risk groups:

  • Unacceptable risk – AI systems posing such risk are banned.

  • High risk – AI systems subject to certain legal requirements.

  • Lower risk – the remaining AI systems, which are neither banned nor subject to the high-risk requirements.

These groups indicate the extent of the threat posed by such systems, letting users decide whether a tool is safe to use.

Protecting User Privacy

Apart from waiting for laws to be formulated around the privacy risks of AI-based chatbots like ChatGPT, every user can apply the techniques listed below to minimize the impact. Above all, be vigilant about your privacy when using ChatGPT. Some tips to help protect your data:


Strong password: Use a strong and unique password for your ChatGPT account to prevent unauthorized access.


Be mindful of what you share: Avoid sharing personal/sensitive information when using this chatbot.
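
As a concrete version of this tip, here is a small, hypothetical Python sketch that screens a prompt for obvious personal data before it is sent anywhere. The `send_to_chatbot` function is a stand-in for whatever client you use, not a real API, and simple patterns like these will miss plenty, so treat it as a starting point rather than a guarantee.

```python
import re

# Hypothetical pre-send check: flag prompts that appear to contain
# personal data before they leave your machine.
PII_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone number": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def find_pii(prompt: str) -> list:
    """Return the kinds of personal data that appear in the prompt."""
    return [label for label, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]

def send_to_chatbot(prompt: str) -> None:
    print("Prompt sent.")  # placeholder for a real chatbot client

def safe_send(prompt: str) -> None:
    hits = find_pii(prompt)
    if hits:
        # Stop and let the user rephrase instead of sending.
        print(f"Not sent: prompt seems to contain {', '.join(hits)}.")
        return
    send_to_chatbot(prompt)

safe_send("Summarize this: my card is 4111 1111 1111 1111.")
# -> Not sent: prompt seems to contain phone number, card-like number.
```

Note that long digit runs can match more than one pattern, so a flagged prompt may list several categories; what matters is that it never leaves your machine.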


Place a Delete Request: Some chatbot providers, including OpenAI, allow you to request deletion of the data collected during your conversations. Use the account’s privacy settings or the provider’s official data-deletion request process; simply typing ‘delete my data’ into the chat does not remove anything.


ChatGPT has the potential to be a valuable tool for individuals and businesses alike. However, it’s crucial to balance the benefits of the technology with concerns about data privacy. By being aware of what personal data is collected and limiting what they share, users can protect their privacy while still taking advantage of such language models.
