Things to consider before inviting AI into your organization

By Brent Zomerlei, BS in Computer Science

Artificial Intelligence (AI) tools for business are becoming commonplace, and AI is poised to be the next big Information Technology (IT) game changer. Considering the possibilities, AI may ultimately have a larger impact than mobile computing and devices like the iPhone. AI, specifically generative AI, is dominating IT advertising and marketing, and the easy access and low cost of available systems are generating broad interest among everyday technology users. Here are some practical considerations for using AI systems safely in your company.

What can you do with generative AI tools today in your organization? Generative AI powers chatbots that provide simple customer support and answers to frequently asked questions. It can read transcripts of meetings and provide summaries, generate content for marketing, draft emails, and assist with data entry tasks. Language translation and writing programming code are additional areas where AI can be a competent assistant.
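As a concrete illustration, here is a minimal sketch of the meeting-summary use case using the OpenAI Python SDK. The model name and prompt are illustrative choices, not recommendations, and note that the transcript leaves your network when you call a cloud service, a point discussed below.

```python
# Minimal sketch of the meeting-summary use case using the OpenAI
# Python SDK (pip install openai). The model name is illustrative;
# any chat-capable model could be substituted.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable


def summarize_transcript(transcript: str) -> str:
    """Ask the model for a short summary of a meeting transcript."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "Summarize this meeting transcript in five bullet points."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content


print(summarize_transcript("Alice: The Q3 launch slips two weeks. Bob: Agreed..."))
```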

Generative AI tools like ChatGPT or Google’s Bard belong to a subgroup called large language models, or LLMs. Put simply, an LLM is a system that can read a question, understand its context, and then formulate an answer based on the enormous body of content it was trained on: books, articles, and websites accessible via the internet.

The key thing to understand about large language model AI systems is that their developers will use your questions, also known as prompts, to improve the performance of their models. There should be no expectation of privacy when using these public, cloud-based tools, and you and your employees must keep this in mind before using them. You can learn the specifics of a tool’s privacy practices by examining its end-user license agreement (EULA).

Employees in your company may already be using AI tools without explicit permission from management. When using ChatGPT or similar tools for company work, users must be careful not to expose company secrets and confidential data. According to data from Cyberhaven, as of June 2023, 11 percent of employees had used ChatGPT and 9 percent had pasted company information into it; nearly 5 percent of that pasted data was estimated to be confidential1.

There have also been examples of bugs or errors in these systems exposing user data to other users. On March 21, 2023, ChatGPT was temporarily shut down to fix a problem that linked prior conversations to the wrong users, potentially exposing confidential data to the incorrect user2. In that situation, the expectation of privacy was broken. Do not share information that you would not share in a public forum.

On April 6, 2023, Samsung discovered employees had pasted source code into ChatGPT for debugging and had uploaded transcripts of meetings containing confidential data. This was only a few weeks after Samsung lifted a ban on ChatGPT. Samsung subsequently enacted procedures limiting the amount of data that could be sent to about 750 words.
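A control in the spirit of Samsung's limit is straightforward to enforce in code. The sketch below is a hypothetical pre-submission guard, not Samsung's actual implementation; the word limit and function names are illustrative.

```python
# Hypothetical pre-submission guard in the spirit of Samsung's limit:
# reject any prompt longer than a configured word count before it is
# ever sent to an external AI service.
MAX_WORDS = 750  # illustrative company policy limit


def check_prompt(prompt: str) -> str:
    """Return the prompt if it is within the word limit, else raise."""
    word_count = len(prompt.split())
    if word_count > MAX_WORDS:
        raise ValueError(
            f"Prompt is {word_count} words; the company limit is {MAX_WORDS}."
        )
    return prompt
```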

Using an LLM within your business environment carries risk; however, there are numerous ways to mitigate it.

Blanket Ban. Employers can establish policies and implement procedures to prevent employees from accessing these sites or downloading software on company assets.

Access Controls. If your company has implemented robust protocols to limit access to sensitive data, consider expanding these controls to include generative AI systems.

Enterprise License. Companies that choose to use these tools can seek special license agreements that limit what the vendor may do with the inputs it receives. A company can have its inputs excluded from model training entirely, or allow the data to be used to improve the model only for that company, excluding all other parties.

Offline Systems. Various LLM systems can be downloaded and run on local computers without connecting back to the internet. This option provides the most protection for your company data; however, it may not be viable for smaller companies due to the high technical requirements for installation and maintenance.
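For illustration, a local model can be run in a few lines of Python with the Hugging Face transformers library. The model shown is a deliberately small placeholder; a real deployment would choose a larger, instruction-tuned model. After the one-time download, the model is cached and generation runs with no outbound connection to a vendor.

```python
# Sketch of running a language model entirely on local hardware using
# the Hugging Face transformers library (pip install transformers torch).
# "gpt2" is a small placeholder model; after the one-time download it is
# cached locally, and generation runs without any outbound connection.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Our company policy on confidential data is",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```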

Sensitive Data. If your company has PCI or HIPAA obligations or otherwise handles protected health information (PHI), you need to be extremely cautious about what you feed into an LLM. Either completely scrub the source data to obfuscate PHI, or use an offline or private model. Additionally, you should seek a HIPAA Business Associate Agreement with the vendor to protect any PHI.
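As a simple illustration of scrubbing, the sketch below masks a few obvious identifier patterns with regular expressions. Regex masking alone is far too crude for HIPAA de-identification; production systems use dedicated de-identification tooling, but this shows the shape of the approach.

```python
# Illustrative scrubber that masks a few obvious identifier patterns
# before text is sent to an LLM. Regex masking alone is NOT sufficient
# for HIPAA de-identification; treat this as a sketch of the approach.
import re

PATTERNS = {
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}


def scrub(text: str) -> str:
    """Replace recognizable identifiers with placeholder tokens."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text


# Note that the patient name passes through untouched, which is exactly
# why simple pattern matching is insufficient on its own.
print(scrub("Patient John Doe, SSN 123-45-6789, reachable at jd@example.com"))
```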

Employee Awareness. Educating your employees and raising their awareness of trade secrets and sensitive information is key to preventing leaks into public view. Companies should consider adding specific language to their existing Acceptable Use Policy and updating employee handbooks to reflect the use of these tools.

AI tools are improving at phenomenal rates. They can artfully develop written and graphical content, but caution should be exercised when using content generated by large language models.

Systems like ChatGPT can produce output that is not 100 percent accurate. These systems exhibit a phenomenon called AI hallucination: the AI generates incorrect information but presents it as fact, and may even cite a made-up source3. This can happen if the model does not understand the prompt or if its training data does not contain the required information. There are tactics you can employ to reduce hallucinations, such as rephrasing prompts and limiting the possible outcomes by framing the prompts carefully.
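One common framing tactic is to supply the model with your own reference text and instruct it to answer only from that text, with an explicit escape hatch. The sketch below, again assuming the OpenAI Python SDK with an illustrative model name and company facts, shows what such a constrained prompt might look like.

```python
# Sketch of a constrained prompt intended to reduce hallucinations:
# the model is given reference text and told to answer only from it,
# with an explicit "I don't know" escape hatch. (Assumes the OpenAI
# Python SDK; the model name and context are illustrative.)
from openai import OpenAI

client = OpenAI()

CONTEXT = "Acme Corp was founded in 2001 and has offices in Denver and Austin."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Answer using ONLY the provided context. "
                    "If the context does not contain the answer, reply exactly: "
                    "\"I don't know.\" Do not invent facts or sources."},
        {"role": "user",
         "content": f"Context: {CONTEXT}\n\nQuestion: Who is Acme's CEO?"},
    ],
)
print(response.choices[0].message.content)  # expected: "I don't know."
```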

Always review the output of an LLM, especially if it will be reused elsewhere in your company. Consider the AI’s output a rough draft: always verify the facts it asserts, and rewrite it in your own voice and style.

Generative AI has the potential to be a powerful tool within business environments. If your company does not have a policy on using tools like ChatGPT or Bard, create one and educate your users. It is important that your employees understand how to protect sensitive data, whether company secrets or regulated data.


1Cole, C. (2023, June 18). 11% of data employees paste into ChatGPT is confidential. Cyberhaven. https://www.cyberhaven.com/blog/4-2-of-workers-have-pasted-company-data-into-chatgpt/.
2Mihalcik, C. (2023, March 24). ChatGPT Bug Exposed Some Subscribers’ Payment Info. CNET. https://www.cnet.com/tech/services-and-software/chatgpt-bug-exposed-some-subscribers-payment-info/.
3Brodkin, J. (2023, May 31). Federal judge: No AI in my courtroom unless a human verifies its accuracy. Ars Technica. https://arstechnica.com/tech-policy/2023/05/federal-judge-no-ai-in-my-courtroom-unless-a-human-verifies-its-accuracy/.