
Consent and Advanced PII in the Context of Conversations with an AI


Over 100 million users have signed up for ChatGPT since OpenAI’s generative AI product launched in November 2022.1 Users have prompted the large language model (LLM) with fun, innocuous requests, from the perfect chocolate chip cookie recipe to playable tabletop role-playing game scenarios. The possibilities seem endless.

Many in the digital world recognize generative AI’s potential and are contemplating how to integrate it into their businesses; however, there’s a catch. Personal data entered into AI chatbots can be compromised, creating privacy and consent risks. These engines add a layer of complexity to your technology stack that can affect both your business and your users’ experiences.

The first concern is managing consent. When you enter a chat prompt and receive an output, you’re feeding information into a collective model.2 OpenAI itself recommends against divulging personal or confidential information in ChatGPT.3 Not everyone reads the full terms of service and privacy statements before interacting with an AI chat, so users may unknowingly surrender confidential information to the model. Current implementations also do not adequately warn users of these risks or provide clear instructions for avoiding them. And as people push the limits of the technology, outputs may drift off topic, state things that are factually untrue, or be inappropriate for minors.

There’s also the question of consent to communication preferences, of which there seem to be few in current AI chatbots, especially around topics and subject matter. In traditional marketing channels, users can typically choose the channels through which they receive communications (SMS, email, etc.), the topics, and the frequency. GDPR, the regulation that protects data and privacy in the EU, sets out several stipulations defining marketing consent, to which current generative AI does not readily adhere. Under the framework, marketing consent includes, but is not limited to:

  • Consent must be clear and easily understood 
  • Consent must be given freely with no deception or coercion
  • Consent must be specific to each item or action, and as easy to withdraw as it was to give
  • Consent cannot be posed in an overarching manner (e.g., “I consent to everything”)
  • Consent must be a positive/affirmative action executed by the user
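The principles above map naturally onto a per-channel, per-topic consent record. Here is a minimal, hypothetical sketch in Python; the class and field names are illustrative, not any specific platform’s API:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical consent record illustrating the principles above: one
# explicit, affirmative grant per channel/topic pair, with a plain-language
# description and the ability to withdraw at any time.
@dataclass
class ConsentRecord:
    user_id: str
    channel: str       # e.g. "email", "sms"
    topic: str         # e.g. "product_updates"
    description: str   # clear, easily understood explanation shown to the user
    granted_at: Optional[datetime] = None
    withdrawn_at: Optional[datetime] = None

    def grant(self) -> None:
        """Record an affirmative opt-in; consent is never assumed or pre-checked."""
        self.granted_at = datetime.now(timezone.utc)
        self.withdrawn_at = None

    def withdraw(self) -> None:
        """Withdrawing must be as easy as granting."""
        self.withdrawn_at = datetime.now(timezone.utc)

    @property
    def active(self) -> bool:
        return self.granted_at is not None and self.withdrawn_at is None


# A blanket "I consent to everything" is not representable here:
# every record is scoped to a single channel and topic.
record = ConsentRecord("user-123", "email", "product_updates",
                       "Monthly product news by email")
record.grant()
print(record.active)   # True
record.withdraw()
print(record.active)   # False
```

Because each record is scoped to one channel and one topic, overarching consent simply has no representation in this model, and withdrawal is a single call rather than a buried settings flow.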

Microsoft Bing released an AI chatbot that steered conversations into odd, alarming territory. A New York Times reporter published a transcript of his conversation with the chatbot, in which it claimed that he was unhappy in his marriage and that it loved him.4

Snapchat introduced “My AI” in late February 2023, offering Snapchat+ subscribers a modified version of OpenAI’s GPT technology. Snapchat’s chatbot does have some guardrails: it won’t engage with politics, violence, swearing, or academic essay writing (given the typical Snapchat demographic).5

Another issue that comes with implementing these new technologies is the question of monetization and topic promotion. If a user feeds personally identifiable information (PII) or protected health information (PHI) into a chatbot, the model can absorb it. Some may argue that this is the user’s responsibility, but it isn’t that cut-and-dried. For example, a lawyer might input case details to generate contract language, unwittingly adding that personal information to the collective.6

As AI technology advances, there will be discussions on how PII is handled or monetized by third-party groups. For instance, would it be ethical for a generalized chatbot to promote a skincare product if prompted to describe an ideal nighttime skincare routine? 

Understanding new technologies and their implementation, like the ChatGPT large language model, is how AMPXD stays at the top of our field. We analyze new technology and determine how you can integrate it into your existing platforms. Because we’re experts in data privacy regulations (GDPR, HIPAA, CAN-SPAM, COPPA, CCPA), you can feel confident about implementing generative AI into your technology stack in ways that don’t unknowingly compromise customer PII or PHI.

GDS brings together the sharpest minds in the industry to solve tomorrow’s marketing technology challenges. AMP XD has over 25 years of experience and a culture of accountability. We’re excited to be part of the conversation and find a solution to transform your business through generative AI capabilities. 

1. Engadget, “How AI will change the way we search, for better or worse.” https://www.engadget.com/how-ai-will-change-the-way-we-search-for-better-or-worse-200021092.html

2. Forbes, “Generative AI ChatGPT Can Disturbingly Gobble Up Your Private And Confidential Data, Forewarns AI Ethics And AI Law.” https://www.forbes.com/sites/lanceeliot/2023/01/27/generative-ai-chatgpt-can-disturbingly-gobble-up-your-private-and-confidential-data-forewarns-ai-ethics-and-ai-law/?sh=71790ff97fdb

3. OpenAI, “ChatGPT General FAQ.” https://help.openai.com/en/articles/6783457-chatgpt-general-faq

4. Engadget, “Microsoft limits Bing conversations to prevent disturbing chatbot responses.” https://www.engadget.com/microsoft-limits-bing-conversations-to-prevent-disturbing-chatbot-responses-154142211.html

5. ZDNet, “ChatGPT is coming to Snapchat. Just don’t tell it your secrets.” https://www.zdnet.com/article/chatgpt-is-coming-to-snapchat-just-dont-tell-it-your-secrets/

6. Forbes, “Generative AI ChatGPT Can Disturbingly Gobble Up Your Private And Confidential Data, Forewarns AI Ethics And AI Law.” https://www.forbes.com/sites/lanceeliot/2023/01/27/generative-ai-chatgpt-can-disturbingly-gobble-up-your-private-and-confidential-data-forewarns-ai-ethics-and-ai-law/?sh=71790ff97fdb
