We are bullish on the potential of generative AI to drive efficiency in the way individual employees work, and to elevate the entire organization. Our employees use it to streamline workflows, work smarter, and improve the quality of their work.
But we are also realistic about its risks. For this reason, we want to ensure that everyone who uses it understands its limitations, and we put guardrails in place to mitigate them.
This blog post explores four critical risks that every AI user needs to understand to better protect themselves and their organizations while still harnessing AI's benefits.
AI Hallucinations: A Critical Risk
AI tools can generate false information that looks and sounds credible but is completely fabricated. This happens because:
- AI models predict what sounds plausible rather than verify facts.
- They may combine unrelated information in misleading ways.
- They can invent citations, statistics, and quotes that don't exist.
As an AI expert we interviewed told us, LLMs are “word calculators”: they predict what someone might say based on what has been said before. They are extremely helpful for organizing information, but no one should regard their outputs as solid facts.
Real-World Impact:
- In April 2024, Grok, the xAI chatbot, falsely accused NBA star Klay Thompson of throwing bricks through the windows of multiple houses in Sacramento, California.
- NBC reporters received fabricated quotes attributed to Michael Bloomberg, including one that claimed, “It’s not about giving back; it’s about buying influence.” (Unable to verify the quotes, NBC didn’t use them, but they did publish a story about ChatGPT hallucinating.)
Protect Yourself: Always verify AI-generated facts, quotes, and statistics against reliable sources before using them. And as always, if you use them, cite them with hyperlinks.
Quick AI Hack: While generative AI can make up facts, other generative AI tools, such as Perplexity, can help you check what’s true and what’s not. It couldn’t be simpler: just ask whether your fact is true or false.
Importantly, Perplexity responds with links to its sources. But here’s the big caveat: it’s up to you to determine whether to trust those sources. Perplexity often draws its information from company websites, which you may or may not consider authoritative. As with all AI, the human role is never eliminated.
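For readers who want to automate this ask-and-verify habit, here is a minimal sketch of how a fact-check prompt could be sent programmatically. It assumes Perplexity’s OpenAI-style chat-completions endpoint and a model name (`sonar`) that may differ from what your account offers; check the provider’s current API documentation before relying on it.

```python
import json
import urllib.request

API_URL = "https://api.perplexity.ai/chat/completions"  # assumption: endpoint may change


def build_fact_check_request(claim):
    """Build an OpenAI-style chat payload asking the model to verify one claim."""
    return {
        "model": "sonar",  # assumption: model names change; check provider docs
        "messages": [
            {
                "role": "system",
                "content": "You are a fact checker. Reply 'True' or 'False' and cite sources.",
            },
            {
                "role": "user",
                "content": f"Is the following claim true or false? {claim}",
            },
        ],
    }


def fact_check(claim, api_key):
    """Send the claim to the API and return the model's answer text."""
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(build_fact_check_request(claim)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]
```

Even with a script like this, the caveat above still applies: a sourced answer still needs a human to judge whether the cited sources are trustworthy.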
AI and Copyright Issues: Understanding the Risks
AI-generated content raises complex copyright concerns because:
- AI models are trained on existing content, including copyrighted materials.
- AI outputs may reproduce portions of that training data, which can lead you to plagiarize inadvertently and without proper citation.
- The originality and ownership of AI outputs remain legally unsettled. You can enter a killer prompt, but the response you receive may not be original thinking.
Real-World Impact:
- Getty Images sued Stability AI, the maker of Stable Diffusion, for using copyrighted photos in training.
- Beginning in 2023, several class action lawsuits were filed against OpenAI by authors alleging unauthorized use of their works to train ChatGPT. These lawsuits claim that OpenAI harvested literary works from illegal online "shadow libraries" to train its language model, resulting in outputs that could mimic authors' styles or reproduce portions of their text.
Protect Yourself: Treat AI as a creative assistant, not a source of original content. Always:
- Check outputs for potential copyright issues.
- Use tools that provide source citations (e.g., Waldo.fy, Perplexity.ai).
- Don't assume AI-generated content can be copyrighted; you may be attempting to copyright another person’s IP.
- Treat AI outputs as drafts and starting points, not final work.
Privacy and Security Risks: Protecting Sensitive Data
AI tools can expose confidential information because:
- Free/public AI models may retain your inputs and use them for future training, which means data you enter could surface in another user’s outputs. Data retention policies vary by provider, so it’s critical to check before using a tool.
- Bugs happen! In March 2023, OpenAI announced a bug that exposed its users' payment information to other users.
Real-World Impact:
- In April 2023, Samsung engineers inadvertently leaked internal source code and other confidential data by uploading it to ChatGPT.
Protect Yourself: Treat free AI tools like public spaces. Use paid or enterprise versions for any business-related work. Never input PII, financial, or other sensitive data. Learn and follow your organization’s AI security protocols.
But don’t assume that your data is protected just because you pay for a tool. Learn how to instruct each tool to exclude your data from its training data. With ChatGPT, for instance, you must turn off the setting “Improve the model for everyone.”
AI and User Protection: Understanding Potential Harm
AI tools can present risks to users and organizations because:
- Although the major AI providers have taken steps against it, AI may generate biased, offensive, or harmful content.
- The garbage in/garbage out (GIGO) rule applies. AI output quality is heavily dependent on input quality. This is a topic we discuss in the AI Council guide The Role of Data Quality in AI.
- Without proper guardrails, AI can produce inappropriate or dangerous content, and in some cases a tool is deliberately designed to be biased. This is a topic discussed at length in the DDH AI Council guide, Responsible AI.
Real-World Impact:
- Meta's Galactica AI was shut down after generating convincing but false and biased scientific information.
- Tessa, a poorly trained AI-powered chatbot designed for people with eating disorders, was disabled after providing users with dangerous health advice.
Protect Yourself: Establish healthy habits for safe AI. Start by choosing tools with strong content safeguards and creating clear guidelines for acceptable use. If you encounter inappropriate or offensive responses, report them to the tool providers.
Remember: even well-designed AI tools can generate inappropriate content if prompted in certain ways, which is why human oversight remains essential for all AI outputs.
Parting Thoughts
We hope that by presenting these risks we aren’t dissuading you from using generative AI. As we said earlier, we are bullish on the technology. The trick to using it successfully is to understand the risks as well as the potential, and to strike the right balance that works for you. With proper awareness and precautions, AI can be a powerful ally in your work—not a source of potential problems.
For more information, please see the AI Council guide, Best Practices for Generative AI Prompting.
About the DDH AI Council
The DDH AI Council was founded to address a growing concern: the widening divide between organizations that embrace generative AI and those that are hesitant to adopt it. Generative AI is rapidly reshaping the way we work, raising the overall caliber of work while enabling teams to innovate faster. We understand that for many business leaders, generative AI is still an unfamiliar technology, and one that comes with real risks. Our goal is to demystify generative AI, and to provide the education and insights business leaders need to build a roadmap for its adoption, with full confidence that its use will be safe and transformative.1
1 Disclaimer: The responses provided by this artificial intelligence system are generated based on patterns in data and programming. While efforts are made to ensure accuracy and relevance, the information may not always reflect the latest news or developments. This artificial intelligence system does not possess human judgment, intuition, or emotions and is intended to assist with general inquiries and tasks. Always conduct your own independent in-depth investigation and analysis of ANY information provided herein, and verify critical information from trusted sources before making decisions.
Interested parties should not construe the contents of ANY responses and INFORMATION PROVIDED herein as legal, tax, investment or other professional advice. In all cases, interested parties must conduct their own independent in-depth investigation and analysis of ANY responses and information provided herein. In addition, such interested party should make its own inquiries and consult its advisors as to the accuracy of any materials, responses and information provided herein, and as to legal, tax, and related matters, and must rely on their own examination including the merits and risk involved with respect to such materials, responses and information.
Neither we nor any of our affiliates or representatives make, and we expressly disclaim, any representation or warranty (expressed or implied) as to the accuracy or completeness of the materials, responses and information PROVIDED or any other written or oral communication transmitted or made available with respect to such materials, responses and information or communication, and neither we nor any of our affiliates or representatives shall have, and we expressly disclaim, any and all liability for, or based in whole or in part on, such materials, responses and information or other written or oral communication (including without limitation any expressed or implied representations), errors therein, or omissions therefrom.