
ChatGPT is Exacerbating the Insider Threat Risk

by Isaac Kohen May 30th, 2023

Too Long; Didn't Read

ChatGPT is a conversational AI tool that lets users pose questions and get machine-generated answers. Apple, JP Morgan, and Samsung have banned their employees from using ChatGPT due to the risk of sensitive data leaking out. The author suggests ways for organizations to work with generative AI securely.

There is a mole working its way into organizations across the world, collecting information and solidifying its position amongst the workforce.


And employees and employers seem to love it.


ChatGPT has burst onto the scene in the past few months, exciting many with its ability to write mediocre yet passable text and produce conversational search results.


While still in its early days, interest is growing in how it can save time, allowing its users to focus on the higher-level thinking that is still the realm of the human brain. For organizations, ChatGPT and its competitors offer automation for rote tasks and can play a big role in giving them a competitive advantage in productivity.


But like every other technology, from fire to the internet, even the most useful of tools have the potential to burn us if we are not careful.


A growing number of companies, from Apple to JP Morgan to Samsung, are banning their employees from using generative AI tools like ChatGPT to reduce their risk of sensitive data leaking out. However, given the pressure to embrace these tools, bans seem untenable.


So organizations that take security seriously but don't have their own SCIFs will have to find ways of working with these tools securely.


Organizations are continuously on the hunt for the next emerging threat, from insider risks within segmented networks to new technologies that find exploitative ways to work from the inside. To get a handle on this one, companies need to take a closer look at what the technology is, what the risks are, and what steps they can take to reduce them.

How do Generative Chatbots Work?

Different machine-learning models have been in the works for a while now. But for the most part, they have been behind the scenes.


Machine learning, deep learning, neural networks, et al. all ingest large amounts of data and produce results that we hope are helpful. One of the big advances that has helped make machine learning more useful for the public is in the field of Natural Language Processing (NLP).


NLP is what allows users to interact with the AI as they would with a person, improving both the ease of asking questions and the quality of the responses.


These AIs (an imperfect if widely recognized term) scrape data from across the web and produce synthetic answers. If you have not yet tried them out, go ahead and play with them a little bit. You’ll either fall into a rabbit hole or get bored after about 15 minutes.
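

If you are curious what that conversation looks like under the hood, here is a minimal sketch of a single question-and-answer roundtrip against OpenAI's chat completions REST endpoint. The model name and question are just illustrative, and you will need your own API key in the OPENAI_API_KEY environment variable.

```python
# A minimal sketch of one conversational roundtrip against OpenAI's
# chat completions endpoint. Model and prompt are illustrative.
import os
import requests

def ask_chatbot(question: str) -> str:
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": question}],
        },
        timeout=30,
    )
    response.raise_for_status()
    # The first choice holds the assistant's natural-language answer.
    return response.json()["choices"][0]["message"]["content"]

print(ask_chatbot("Summarize Natural Language Processing in one sentence."))
```

Note that everything you put in that `messages` field leaves your network, which is exactly the point of the rest of this article.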


The scraping and processing of information is far from a static process. These AIs are constantly learning from users, both from the interactions themselves, which improve quality, and from the content that users feed into them.


All that data gets stored in data lakes and then turned into new outputs.


This is fine for general information, helping people quickly learn from publicly available information to find the answers to their questions. However, it is when more sensitive data gets thrown into the mix that we start to see issues arise.

Generative AI as an Insider Threat

Employees who are used to asking ChatGPT questions may input data that can harm the organization. Not because they mean to cause harm to their employers, but because they are unaware of the risks.


At this point, it should be fairly obvious that you should not upload your product's source code to GitHub or post sensitive information to social media. Right? Right?


What is less clear is whether it is safe to input data into an online tool. After all, employees upload data to O365 or other commercial software all the time. They also run queries on Google, and that's normally OK. So what's the big deal?


Workers have access to a wide range of information. Some of it is fairly privileged and needs to be handled with extra care. Some of this may be strategic data about plans for the coming year. In other instances, intellectual property like chip designs might end up in the data sets that generative chatbots are learning from.


If the data becomes usable for learning and queries, then someone else may be able to pull that information out later as well.


We are already at the point where people are putting company plans into ChatGPT and telling it to turn them into presentations. It is not hard to believe that workers will use client information to compose emails, saving time but likely breaking all kinds of regulations and raising their risk of exposure.

3 Tips for More Secure Use of Generative AI

From what we see at this point, most people are acting responsibly with their generative AI tools, so the situation may not be as dire as it could be.


That said, it is always better to prepare for risks early. Here are a few tips for getting ahead of the challenges.

Create and Explain Policies for Use

Lay out a clear policy of what is and is not allowed to go into ChatGPT.


Chances are that most folks will have the common sense to know that you do not put anything sensitive into a system you do not control. However, we need to assume that plenty of people do not fully understand how these technologies work and are at risk of causing harm.


Remind them that data like the following should not be put into ChatGPT under any circumstances:


  • Intellectual property
  • Client information
  • Company financials


As a general rule, it is good policy to lay out expectations for security policies and explain why they matter. You will find that you can get significantly better buy-in from team members when you speak to them like adults and provide them with some context.
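

A written policy can also be backed up by a simple technical check. Here is a minimal sketch of a pre-submission filter, assuming a hypothetical hook that sees prompts before they leave for ChatGPT; the regex patterns and the PROJECT- codename scheme are placeholders, and a real deployment would lean on your DLP tooling's classifiers instead.

```python
# A minimal sketch of turning the policy above into a pre-submission check.
# The patterns and codename scheme below are illustrative placeholders.
import re

BLOCKED_PATTERNS = {
    "client email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal project codename": re.compile(r"\bPROJECT-[A-Z0-9]+\b"),
}

def violates_policy(prompt: str) -> list[str]:
    """Return the names of policy rules a prompt would break, empty if none."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]

hits = violates_policy(
    "Draft a renewal email to jane.doe@client.com about PROJECT-ATLAS"
)
if hits:
    print("Blocked before reaching ChatGPT:", ", ".join(hits))
```

Even a crude filter like this doubles as user education: the block message tells the employee which rule they were about to break and why.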

Monitor the Use of Chatbots

Organizations need to monitor what their employees are inputting into generative AIs. They can do this by using application monitoring tools, either selectively or more broadly, depending on their threat modeling.


This is similar to how User Behavior Analytics is used to monitor browser activity or the uploading of information to chat, email, file transfer, or other apps that pose a risk of leakage.
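

As a rough illustration of what such monitoring can look like, here is a minimal sketch that flags generative-AI traffic in web proxy logs. The CSV column names, the domain list, and the 10 KB threshold are all assumptions to adapt to your proxy vendor and your threat model.

```python
# A minimal sketch of flagging generative-AI traffic in web proxy logs.
# Log format, domain list, and threshold are assumptions, not standards.
import csv

AI_DOMAINS = {"chat.openai.com", "api.openai.com", "bard.google.com"}

def flag_ai_usage(log_path: str):
    """Yield (user, domain, bytes_sent) for requests to known AI services."""
    with open(log_path, newline="") as handle:
        # Assumes CSV columns named: user, domain, bytes_sent
        for row in csv.DictReader(handle):
            if row["domain"] in AI_DOMAINS:
                yield row["user"], row["domain"], int(row["bytes_sent"])

for user, domain, sent in flag_ai_usage("proxy.csv"):
    # Large uploads to a chatbot are a stronger leak signal than page views.
    if sent > 10_000:
        print(f"review: {user} sent {sent} bytes to {domain}")
```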

Monitor Sensitive Data and Apps

Monitor logs of access to sensitive data to create a trail in case you discover that your IP or other controlled data ends up being exposed.


This is important both for your own ability to go back and do incident response and as a reminder to employees that they are accountable for their actions.
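

Here is a minimal sketch of such an access trail, assuming a simple JSON-lines log file; the field names are illustrative, and most organizations would route these events into their existing SIEM instead.

```python
# A minimal sketch of an append-only access trail for sensitive resources,
# so you can reconstruct who touched what if controlled data later surfaces
# in an AI tool. The JSON-lines format and field names are assumptions.
import json
import time

def record_access(log_path: str, user: str, resource: str, action: str) -> None:
    """Append one access event as a JSON line."""
    event = {
        "ts": time.time(),     # epoch timestamp of the access
        "user": user,          # who touched the resource
        "resource": resource,  # what was touched
        "action": action,      # read, write, export, etc.
    }
    with open(log_path, "a") as handle:
        handle.write(json.dumps(event) + "\n")

# Hypothetical usage: log a read of a chip design file.
record_access("access.log", "jdoe", "designs/chip_v2.svg", "read")
```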

Still Early Days

We are only in the early stages of generative AI usage. Culture and awareness of how to use new tools and technologies take time to develop, so our expectations for ChatGPT et al. will have to align with reality.


Right now, these tools are somewhere between productivity hacks and total gimmicks.


It is hard to properly assess the risk level we face at this point, so some companies are choosing to put the brakes on its use until they can establish effective policies. This approach can only hold for so long before the levees break and the demand for more AI integrations forces organizations to let them in.


Now is the time to get a head start on preparing your workforce to use these technologies securely, developing a culture of responsible use and accountability.


As to where we go next, go ask ChatGPT for the answer.