Introduction: AI in the Workplace
The age of AI (artificial intelligence) is here, and it is here to stay. K3 is a strong believer in innovation and automation, but that does not mean AI should be used without caution or boundaries. If you have not done so already, you should strongly consider writing an acceptable use policy for AI in the workplace or, at the very least, providing guidance to your employees on how and when AI may be used.
A few of the obvious dangers of AI are data privacy and security leaks, bias, over-reliance, plagiarism, and legal and ethical concerns. It is important to think through how these issues and others could affect the integrity of your organization if your employees misuse AI. Let's break down the highlighted concerns:
Data Privacy and Security Leaks
Many users of ChatGPT and similar tools are entering several queries a day in hopes of getting answers to problems, producing blog articles, or obtaining other assistance with work assignments. Extreme caution should be used here. Private information, classified data, or any other sensitive data type should never be entered into an AI system. Make sure your employees remove any such data from queries before submitting them.
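One practical safeguard is to screen prompts for obviously sensitive strings before they leave your network. Below is a minimal sketch in Python; the patterns shown are illustrative assumptions, and a real deployment would cover far more data types and typically sit in a proxy or browser plugin rather than a standalone script.

```python
import re

# Illustrative patterns for a few common sensitive-data types; a real
# policy would cover many more (API keys, account numbers, names, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely-sensitive substrings with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# Hypothetical prompt an employee might otherwise paste in unmodified:
prompt = "Draft a follow-up email to jane.doe@example.com about invoice 4417."
print(redact(prompt))
# Draft a follow-up email to [EMAIL REDACTED] about invoice 4417.
```

Pattern-based redaction is only a backstop: it cannot catch sensitive context that does not match a pattern, so employee awareness remains the primary control.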
Another concern in this arena is application and web development: if you have developers on staff, any coding assistance from AI tools should be thoroughly vetted by your security team and tested to ensure the code does not introduce vulnerabilities into your application or website. Using AI to generate source code also raises the risk of a source-code leak should the AI platform itself be breached or otherwise expose the code it generated.
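To make the vetting point concrete, here is a hypothetical before-and-after in Python showing one of the most common flaws reviewers catch in generated code: SQL built by string interpolation, which invites SQL injection. The table and function names are invented for illustration.

```python
import sqlite3

# Risky pattern an AI assistant might suggest: building SQL with string
# formatting, so attacker-supplied input can rewrite the query itself.
def find_user_unsafe(conn, username):
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

# Safer equivalent: a parameterized query. The database driver handles
# escaping, so user input can never alter the SQL structure.
def find_user_safe(conn, username):
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# A classic injection payload: harmless to the safe version,
# but it tricks the unsafe version into returning every row.
payload = "' OR '1'='1"
print(find_user_unsafe(conn, payload))  # [('alice',)]
print(find_user_safe(conn, payload))    # []
```

This is exactly the kind of issue that looks fine at a glance, which is why AI-assisted code deserves the same security review and testing as any other third-party code.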
AI Bias, Over-Reliance, and Plagiarism
All AI systems have a creator, right? What biases might those creators hold that could negatively impact the output provided to end users? Critical thinking skills should be used when dealing with AI. Do not allow your employees to blindly publish blog articles, newsletters, or other internal or public-facing publications without first tailoring the content to the exact use case and ensuring there is no inherent bias, misinformation, or inaccuracy within the AI output. It is also important to keep in mind that certain systems, such as ChatGPT, are trained on data with a cutoff date; output that was correct two years ago may no longer be accurate today.
Simply copying and pasting AI output is bad practice; instead, AI tools like Bard and ChatGPT should be used to help create outlines and generate ideas for work assignments, which the end user can then use as a catalyst to create their own material. Using AI this way also helps prevent the plagiarism that can result from copying and pasting AI output verbatim.
Ethical and Legal Concerns
We could probably write a book on this topic alone! The ethical and legal implications of AI are vast. Who owns the intellectual property created by AI? If an AI system you use makes a decision with a legal or ethical consequence, is your company responsible, or is the AI system? If one of your employees shares private data with an AI system and it is leaked, how would you learn of the leak, and how would you respond? There is also a lack of transparency in many AI systems: what is the system's decision-making process, and how do you thoroughly understand the conclusions and outputs it provides? All of these questions and more should be weighed heavily when considering AI in your workplace.
AI can be a tool for good and can help make some of your business processes more efficient and automated. But like any powerful technology, it demands the right safeguards to protect your organization from its inherent risks. Consider engaging with our vCISO to build a risk management plan around AI use in your workplace.