With the advent of AI, many believe productivity has increased tenfold, creative obstacles are easier to overcome, and deadlines are easier to meet. But even as some aspects of work have become faster and less difficult, new hurdles have arisen. Let’s review one hurdle in particular: AI’s tendency to “hallucinate”.

What is AI hallucination?
To avoid heavy jargon, AI hallucination refers to a phenomenon in which a generative system invents an answer to a question it doesn’t understand or for which it has no answer. [1] It isn’t generating these “solutions” out of thin air, however. The internet – the reservoir from which most generative AI systems pull – is full of historical fact and medical advice, but also of untruths and garbage. Because of how AI consumes and regurgitates this information, it can produce convincing language containing inaccurate statements that may not even exist in its training data. [2]
Are the hallucinations inevitable?
Avoiding hallucination as much as possible depends largely on how you prompt. [3] So let’s walk through the best AI prompting rules and principles:
1) Assign a role or identity to the AI
Instead of asking a question outright and hoping the AI draws on the right expertise, type “Act as a {insert industry} expert” before your question. The AI will then tailor its tone and reply to that field with more precision and accuracy.
2) Be purposeful and specific
Avoid vague and overly broad language. To do this, give as much detail as needed. This could help eliminate a lot of misinterpretation.
3) Provide examples and context
Don’t be afraid to give the AI all relevant background information, and include links or attachments to templates or other examples of the output you’re expecting. Be sure to include only publicly available data – no proprietary, financial, confidential, or HIPAA-protected data.
4) Reassure the AI that it doesn’t have to give an answer it doesn’t know
This doesn’t sound like a real step, but it absolutely is. These systems are trained to ALWAYS give an answer, so if you tell the AI to “ask you questions to refine its answer,” you’ll get a better response – which leads to the final step.
5) Test and refine your prompt
If you’ve prompted correctly, the AI should ask you for more information, e.g., “How many of this do you expect in that?” or “Is this required for that?” – whatever information an expert would need that was missing from your prompt.

A Brief Example:
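To make the five rules concrete, here is a minimal sketch in Python. The `build_prompt` helper is hypothetical (it is not from any library or from this article); it simply shows one way the rules above could be assembled into a single prompt string before sending it to your AI tool of choice.

```python
def build_prompt(role, task, context=None, examples=None):
    """Assemble a prompt that follows the five rules:
    assign a role, state a specific task, give context and
    examples, and invite clarifying questions instead of guesses."""
    parts = [f"Act as a {role} expert."]           # Rule 1: assign a role
    parts.append(task)                             # Rule 2: be purposeful and specific
    if context:
        parts.append(f"Context: {context}")        # Rule 3: provide context
    if examples:
        parts.append("Examples of the expected output:")
        parts.extend(f"- {ex}" for ex in examples)  # Rule 3: provide examples
    # Rule 4: permission not to answer; Rule 5 follows from the AI's questions
    parts.append("If anything is unclear or missing, ask me clarifying "
                 "questions before answering instead of guessing.")
    return "\n".join(parts)

prompt = build_prompt(
    role="small-business accounting",
    task="Draft a quarterly expense-report checklist for a five-person team.",
    context="We use only publicly available templates; no confidential data.",
    examples=["A numbered checklist with one short line per step."],
)
print(prompt)
```

The exact wording of each part is yours to adjust; the point is that every rule maps to a concrete piece of the final prompt.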

These tips are for general use as a small business, but what if you operate a company that handles private health information? Should the prompt parameters change? We will answer that question and more in our coming Tech Tips and newsletters, so stay tuned.
At the end of the day, AI isn’t a perfect solution, and you should always double- and triple-check the answers it gives you. But if you prompt better based on these tips, you can greatly reduce the doubt – or worse, the embarrassment – that can come from running with information from a hallucinating computer.
Citations:
[1] Antonio, J. (2023, March 15). ChatGPT and the Generative AI Hallucinations. Retrieved from Medium: https://medium.com/chatgpt-learning/chatgtp-and-the-generative-ai-hallucinations-62feddc72369
[2] Metz, C. (2023, March 29). What Makes A.I. Chatbots Go Wrong? Retrieved from NY Times: https://www.nytimes.com/2023/03/29/technology/ai-chatbots-hallucinations.html
[3] Weaver, A. (2023, July 26). AI Hallucinations: How to reduce inaccurate content inputs. Retrieved from Writer’s Room: https://writer.com/blog/ai-hallucinations/
