Using Artificial Intelligence to Prevent Insider Threat

By Celina Stewart, Director of Cyber Risk Management at Neuvik, in collaboration with NextLabs

Increase in Business Use of Artificial Intelligence

Artificial Intelligence (AI) has significantly changed the way organizations work. Generative AI (GenAI) enables employees to rapidly parse, synthesize, and create content as part of their workflows, from drafting internal or consumer-facing documents to producing thousands of lines of software code. Generative Adversarial Networks (GANs) – commonly known for their ability to generate “deep fakes” – are being used to develop hyper-realistic product prototypes and 3D renders, saving hundreds of hours of labor.

Today, organizations incorporate AI in everything from customer-facing products to internal workflows across critical functions such as Human Resources, operations, business development, and customer support. In fact, a 2024 study by McKinsey & Company highlighted the impact AI has already had on the workplace. This study found that 65% of organizations regularly use AI, with two-thirds of respondents using it across one or more business functions.

However, the study also noted that while AI use is increasing, most organizations have not yet implemented appropriate oversight to reduce the risk it introduces. The same study identified that only ~25% of ChatGPT accounts used in corporate settings leverage the “Enterprise” version of the software, which allows the company to manage privacy and security settings. It also found that fewer than 20% of organizations have enterprise-level AI governance, as only 18% have an AI Council or Board to oversee AI-related decisions.

Given this lack of oversight, it is no surprise that the increased use of AI has had unexpected consequences. Many organizations have already begun to see risks emerge. A recent CyberHaven study found that only one-third of organizations using AI regularly require an AI risk awareness training course or impose controls on AI use by technical talent. Further, the McKinsey & Co. study cited above found that 25% of organizations reported AI-introduced inaccuracies in their work product, with 16% reporting cybersecurity issues.

Rise of Insider Threat from Increased Business Use

One risk in particular is growing alongside increased AI use: insider threat. AI has made it easier than ever for a malicious or negligent insider to create risk for the business. Disgruntled employees in roles responsible for “training” AI tools can poison baselines so that malicious activity is perceived as normal. Those looking to commit corporate espionage or profit from the sale of intellectual property can use Computer Vision or Optical Character Recognition (OCR) to exfiltrate and recreate data sets without business permission.

Worryingly, employees simply looking to enhance their productivity or “outsource” their labor to AI can negligently introduce “bad artifacts” generated by AI into work products without even realizing it. These “bad artifacts” can range from erroneous data and “hallucinations” to other organizations’ intellectual property (IP). Not only do these artifacts degrade work quality, but they also introduce legal, regulatory, and reputational risks if left unchecked. And, given the lack of oversight, complacency itself has become a form of negligent insider risk: if no one is overseeing AI-related decisions, are employees trained in best practices for secure AI use? If not, how can they be expected to protect the company’s IP and prevent risk?

Use of an AI-enabled Approach to Identify and Prevent Insider Threats

So, how can organizations counter this increased risk of insider threat? Perhaps surprisingly, by leveraging AI itself to supercharge existing insider threat identification programs, helping to identify and prevent risks before they occur. An AI-enabled approach can be used to flag negative employee sentiment, to detect changes in digital behavior, and to reduce the likelihood of negligent insider activity.

Negative sentiment is a frequent precursor to insider threat, as disgruntled or malicious insiders may explicitly or inadvertently communicate their intent before acting. To identify this sentiment early, use AI-enabled Enhanced Sentiment Analysis tools. These tools parse employee communications to identify signals (such as words, phrases, or tonal changes) that suggest an employee is or could become disgruntled. Of course, these tools require appropriate tuning to be useful, including tailoring keyword analytics to the words or phrases most likely to indicate risk in the organization’s specific context. Be sure to tune these tools to allow for “healthy” negative communication (e.g., peers blowing off steam about a frustrating coworker) to limit false positives.
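
As a rough illustration of how this tuning might look, the short Python sketch below scores messages against organization-specific risk phrases and allows for routine venting before flagging anything for review. The phrase lists, weights, threshold, and helper names are hypothetical placeholders, not features of any particular sentiment analysis product.

```python
# Illustrative sketch only: a simplified keyword-based stand-in for an
# Enhanced Sentiment Analysis tool. A production tool would use a trained
# sentiment model; the tuning levers shown here are the same either way.

RISK_PHRASES = {                 # hypothetical phrases tuned to the organization
    "they'll regret": 3,
    "take the data": 3,
    "before i leave": 2,
    "nobody would notice": 2,
    "sick of this company": 1,
}

HEALTHY_VENTING = {              # routine frustration that should not raise a flag
    "long day", "that meeting ran over", "annoying bug",
}

def score_message(text: str) -> int:
    """Return a rough risk score for a single message."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in HEALTHY_VENTING):
        return 0  # allow peers to blow off steam, limiting false positives
    return sum(weight for phrase, weight in RISK_PHRASES.items() if phrase in lowered)

def flag_messages(messages: list[str], threshold: int = 3) -> list[str]:
    """Flag messages whose risk score meets the review threshold."""
    return [m for m in messages if score_message(m) >= threshold]

if __name__ == "__main__":
    sample = [
        "That meeting ran over again, what a long day.",
        "I'm sick of this company; they'll regret letting me go.",
    ]
    for flagged in flag_messages(sample):
        print("REVIEW:", flagged)
```

The real tuning work sits in the phrase lists and threshold: too aggressive and reviewers drown in false positives, too lenient and genuine precursors slip through.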

Enhanced Sentiment Analysis is especially effective when correlated with traditional insider threat identification tools, which leverage access controls to detect patterns of attempted access to assets outside the purview of an employee’s specific role.

To further increase effectiveness, be thoughtful about how and where to deploy Enhanced Sentiment Analysis. Avoid the pitfall of “over-deploying” tooling across a blanket employee population. Instead, take a risk-based approach: which assets are most critical to the business (e.g., trade secrets, formulas, proprietary algorithms)? Who are the employees with access to those assets, or who might be manipulated by a malicious insider into providing access to them? Where might those insiders communicate (e.g., email, chat, collaboration platforms, possibly even social media, or fields in asset-specific tooling)? Be sure to disclose the use of this tooling in communications-related policy and as part of awareness training to set expectations appropriately with employees.

Changes in digital behavior – including attempted access escalation, high data download or upload volumes, and opening thousands of files in rapid succession – can also signal an insider threat. Many types of AI-enabled tools can assist with tracking these changes. Data Loss Prevention software can identify and prevent the exfiltration of large volumes of data, leveraging AI to perform user analytics and alert on unusual employee behavior (including repeated exfiltration attempts). File Integrity Monitoring and File Server Monitoring tools use AI not only to identify patterns in file access but also to detect patterns in the timing of file access that could suggest an insider threat (e.g., after-hours access, frequent reopening). Dark Web Monitoring can identify leaked organization data or intellectual property that may suggest an insider has already performed exfiltration. These represent only a subset of the AI-enhanced tooling useful in insider threat programs, so each organization should use its risk profile and insider threat prevention goals to inform tool selection.
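
As a rough sketch of the user-behavior analytics behind such tools, the example below fits an unsupervised anomaly detector (scikit-learn’s IsolationForest) to simple per-user, per-day activity features such as download volume, files opened, and after-hours logins. The feature set, sample data, and contamination rate are illustrative assumptions rather than a description of any specific product.

```python
# Illustrative sketch: flag anomalous user-activity days with an
# unsupervised model. Features, sample data, and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Per-user, per-day features: [GB downloaded, files opened, after-hours logins]
baseline = np.array([
    [0.5, 120, 0],
    [0.8, 200, 1],
    [0.6, 150, 0],
    [0.7, 180, 0],
    [0.4,  90, 1],
])

# Train on historical "normal" activity; contamination is an assumed rate.
detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(baseline)

# Score new activity: large download, thousands of files opened, after hours.
today = np.array([[45.0, 6000, 7]])
if detector.predict(today)[0] == -1:
    print("Anomalous activity detected; escalate for review.")
```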

As with Enhanced Sentiment Analysis, AI-enhanced tooling used to identify changes in user behavior must be calibrated appropriately to reduce false positives. Similarly, these tools should be integrated with other cybersecurity technology – such as Threat Intelligence, Attack Surface Management, and Security Information and Event Management (SIEM) or Security Orchestration, Automation, and Response (SOAR) tooling – to ensure that possible insider threats are not only flagged, but tracked, escalated, and investigated properly against relevant user and organization data.
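
To make the correlation and escalation step concrete, a minimal sketch of how flags from sentiment, DLP, and access-control tooling might be combined before hand-off is shown below. The severity rules and the send_to_siem stub are assumptions for illustration, not a reference to any particular SIEM or SOAR API.

```python
# Illustrative sketch: correlate sentiment and behavior flags per user
# before escalating. Severity rules and the SIEM stub are hypothetical.
from dataclasses import dataclass

@dataclass
class UserFlags:
    user: str
    negative_sentiment: bool        # from sentiment analysis tooling
    behavior_anomaly: bool          # from DLP / file-monitoring tooling
    accesses_critical_assets: bool  # from access-control data

def severity(f: UserFlags) -> str:
    """Combine independent signals into a single escalation level."""
    if f.behavior_anomaly and f.negative_sentiment and f.accesses_critical_assets:
        return "high"
    if f.behavior_anomaly or (f.negative_sentiment and f.accesses_critical_assets):
        return "medium"
    return "low"

def send_to_siem(f: UserFlags, level: str) -> None:
    # Stub: a real integration would forward enriched context to SIEM/SOAR.
    print(f"[{level.upper()}] escalating {f.user} for investigation")

for flags in [
    UserFlags("analyst_a", True, False, False),
    UserFlags("engineer_b", True, True, True),
]:
    level = severity(flags)
    if level != "low":
        send_to_siem(flags, level)
```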

The final aspect of an AI-enabled approach aims to combat negligent insiders. The fact is that employee AI use is here to stay and will only continue to grow. Counterintuitively, one of the best ways to use AI to prevent insider threat is to enable its use, but with appropriate company oversight and guidance. This not only prevents negligent activity but also reinforces best practices that reduce business risk from any technology use.

To prevent negligent activity, organizations should consider several measures. First, provide “Enterprise” instances of popular Generative AI tools, which allow the company to dictate security, privacy, and monitoring controls. Similarly, set up and encourage the use of local instances of popular AI tools, such as “copilots” and/or “sandboxes,” through which employees can test AI functionality within a restricted environment. Lastly, where budget and time allow, build AI models in-house, leveraging only company data as training inputs. By allowing the use of approved AI tooling for specific activities, organizations shed light on what may currently be “shadow AI,” or AI use occurring without company oversight. This significantly reduces the risk of negligent insider threat, as employees will be unable to inadvertently release company intellectual property and will be far less likely to generate or incorporate “bad artifacts” in their work product.

Finally, negligence often stems from a lack of awareness, which in turn stems from a lack of governance and oversight. Implement appropriate governance mechanisms to create and enforce AI-related policies, to provide oversight of AI-related decisions (including the purchase or use of AI tools on company devices), and to perform AI risk assessments that ensure the fidelity of the AI tools in use. Provide role-based awareness training for employees on the secure, responsible, and ethical use of AI. Reinforce existing best practices, such as pre-publication review of all documents and quality assurance testing for all software, to catch any bad artifacts, insecure dependencies, or AI-generated “hallucinations” that may have been included.

Conclusion

While artificial intelligence has created increased opportunities for insider threats, it can also be leveraged to identify and prevent those threats more effectively. By taking a thoughtful approach to AI, organizations can detect changes in user sentiment, identify anomalous behavior patterns, and increase awareness to prevent negligence.

Watch NextLabs’ Expert Series episode with Celina Stewart, where she discusses how AI has increased the risk of insider threat. In the two-part video series, she explores AI’s growing role in cybersecurity and insider risk, including threats from AI corpus generation, the use of bad artifacts, computer vision, and complacency, along with strategies to mitigate these risks.


NextLabs seeks to provide helpful resources and easy-to-digest information on data-centric security topics. To discuss and share insights on this resource with peers in the data security field, join the NextLabs community.