
The Complete Guide to Building an Insider Threat Program


    You can’t run a business without giving employees access to resources, and you can’t give them that access without some degree of risk. Insiders can damage your company in countless ways, from destroying valuable equipment, to leaking sensitive data, to providing access to unauthorized third parties. These corporate security incidents can lead to lost revenue, compliance fines and lawsuits, and serious damage to your reputation.

    An insider threat program can help you anticipate and address risky or destructive individual behavior before major damage is done. However, it’s crucial to address insider threats based on a realistic assessment of risk. Most companies face far more danger from insiders’ lack of attention or training than from actual malice, but you still need to understand the security risks both pose. Fostering a collaborative culture of security earns employee buy-in and delivers better results (and morale) than a top-down “everyone’s a suspect” approach.

    What Is an Insider Threat?

    The phrase “insider threat” is often used to refer specifically to malicious data theft or sabotage of an organization’s data or electronic resources by insiders. The National Counterintelligence and Security Center, for example, defines an insider threat as “when a person with authorized access to U.S. Government resources… uses that access to harm the security of the United States.”

    Malicious insiders may be driven by a wide range of motivations, including greed, economic desperation, desire for revenge for a perceived wrong, and loyalty to a different organization. Their actions can vary greatly, from erasing a friend’s debt, to selling the organization’s records, to actual physical violence.

    But in many cases, the most serious insider threats come from inadvertent insiders — people who unknowingly allow outside threat actors into your organization through a mistake. Some of these mistakes are negligent, but most are just ordinary errors, often resulting from lack of training. For example, a worker could reuse a personal password for a work account or unknowingly open an unsafe email attachment, giving an attacker access to the company’s resources.

    Because malicious and inadvertent threats are so different, creating an insider threat program takes commitment and ongoing work.

    3 Types of Insider Threats

    1. Malicious Insider Threats

    Malicious insiders and inadvertent insiders are very different, and malicious insider threat detection can benefit from profiling to a degree. Malicious insiders are often disaffected workers out to make a buck. They aren’t, by and large, the misguided masterminds of Hollywood movies: although 26% work in skilled technical positions, the majority of these insiders (58%) work in roles that don’t require deep technical skills.

    Malicious insiders breach cyber security for a range of reasons. They’re most often motivated by money (54%), but may be driven by revenge (24%) or a specific grievance (14%), and may have multiple motives. Often, a specific event (56%) such as a termination or demotion triggers or influences their decision to breach security. Once triggered, they generally plan their actions ahead of time (88%).

    But their poor secrecy and incomplete planning skills can be an asset in insider threat detection. Malicious insiders overwhelmingly don’t think about the consequences (90%), and often talk enough to give others some information about their activities or plans (58%). They may ask suspicious questions, or even discuss their plans with friends or coworkers. Many also display “inappropriate or concerning behavior” (43%) which could serve as a warning sign to supervisors or coworkers.

    It’s important to understand that these insiders generally aren’t criminal masterminds; they tend to opportunistically take advantage of insecure access policies and lax internal controls. Most attackers (85%) used their own authorized access, but access control gaps generally contributed (69%).

    2. Third-Party Insider Threats

    With an expanded attack surface, external actors continue to pose a significant challenge. According to Verizon’s Data Breach Investigations Report 2020, almost three-quarters of attacks are perpetrated by external actors. Because of this, external actors have understandably received the most attention, while a more nuanced framing of insider threats has failed to keep pace with technological change. This may be starting to shift as organizations begin to integrate the risks associated with a wide range of third-party providers.

    Organizations increasingly use third-party providers—from cloud services to messaging apps—to conduct their most sensitive business. For instance, over 83% of organizations are pursuing a multi-cloud strategy, with multiple private and public clouds for different business applications. While this has created significant business efficiencies, it requires a high level of trust in third parties to protect your data. Cloud service and application providers become the de facto data security provider as well.

    Supply chains are another form of third-party relationship that is too frequently overlooked when crafting security strategies. Contractors and their data access can significantly boost business productivity, but they can also be a security vulnerability. Whether by serving as an access point to the broader network or by introducing errors in cloud server configurations, supply chains can pose another form of insider risk to organizations. In fact, misconfigured cloud servers are often overlooked, yet they have resulted in numerous data compromises and millions of breached records.

    In short, given their internal data access, third-party providers should also be considered an additional form of insider threat.

    3. Inadvertent Insider Threats

    For the careless or inadvertent insider, unfortunately, profiling doesn’t help. The culprit could be a receptionist who misplaces a file, a police officer who doesn’t understand CJIS compliance rules, the head of IT security, or the president. At one time or another, it may even have been you. Because these accidental threats are much more common, insider threat detection needs to rely heavily on training, supervision, and testing, backed up by good security measures. Watching for warning signs can stop the bad guys, but it won’t stop well-meaning insiders from making mistakes.

    Why Insider Threats Are Such a Big Deal

    Insiders have direct access to data and IT systems, which means they can cause the most damage. According to a 2015 Intel Security study, insider threat actors were responsible for 43% of attacks, split evenly between malicious and unintentional actors. According to the IBM X-Force 2016 Cyber Security Intelligence Index, insider cyber security threats are an even bigger problem. From 2015 to 2016, the percentage of attacks carried out by all insiders grew from 55% to 60% according to the study. Of those, about 73.6% were carried out by malicious insiders — or 44.5% of total attacks.

    These numbers are alarming enough, but they don’t even tell the whole story: even among successful attacks carried out by outsiders, virtually all have at least one insider threat component. At some stage of the process, someone in the targeted organization or a partner organization had forgotten to upgrade software, left a default root admin password in place (or had improper permissions due to malice or a mistake), transferred sensitive data over an insecure connection, or done something else that exposed the organization to attackers.

    Realistically, your insider threat program can’t anticipate every possible mistake that could harm security. Some issues — such as internal security and compliance initiatives — may lie outside the realm of your insider threat plan altogether. In other cases, an attacker may be using a novel tactic such as a new type of Business Email Compromise (BEC) attack that your organization hasn’t anticipated. But your insider threat program doesn’t have to be perfect — just getting employees to be aware of security issues and be vigilant can significantly reduce insider threat risk.

    Government Insider Threat Programs and Initiatives

    The U.S. government has created the National Insider Threat Task Force to develop and enforce minimum insider threat program standards across government organizations and contractors. Their policy gave covered organizations 180 days to “establish a program for deterring, detecting, and mitigating insider threat[s].” Organizations were required to take a number of steps to protect against insider threats, including:

    • Monitoring users on classified government networks.
    • Examining background information about users.
    • Training employees to spot and detect insider threats.
    • Creating mechanisms to analyze and share insider threat information.

    The government has also taken other steps to promote insider threat programs, including researching current programs, and developing an insider threat roadmap through a public/private partnership.

    All of this work has generated an awakening, according to Michael Gelles, a Naval Criminal Investigative Service veteran and insider threat expert. Gelles pointed out that, although insider threat detection has been going on for decades, popular awareness of insider threat programs is new:

    “For me, having spent a career with it, it is almost like folks have finally awakened to this issue despite the fact that it has been something that the government has long been focused on.”

    Gelles points out that the typical insider threat program has historically been reactive, focused on malicious theft of proprietary and classified information. The government would debrief spies once they were caught and study their motivations, but it didn’t have good mechanisms in place to catch insiders until relatively recently.

    In the digital age, both “complacent insiders” who leave doors open through negligence, and “ignorant or uninformed insider(s)” who haven’t been trained have become a much bigger security risk, according to Gelles. This has a lot to do with how technology has changed things.

    A Cold War-era intelligence bureaucrat working for the CIA couldn’t accidentally leak a secret file stored in their workplace — they’d have to copy the file without getting caught, and meet with a handler in person to hand it off. But these days, a bureaucrat could very easily compromise much more information just by choosing a bad password or clicking on a suspicious link. Unfortunately, although we’ve gotten much better at predicting the behavior that precedes a malicious leak, few insider threat program plans adequately address the risk of inadvertent leaks.

    How to Create Your Own Insider Threat Program

    Unfortunately, there’s no one-size-fits-all insider threat solution. Businesses need to come up with their own program to assess risks, choose security tools, and train and supervise their employees to minimize the risk of insider threats. Here’s a basic roadmap for businesses beginning their insider threat program initiative.

    1. Insider Threat Program — Pre-Planning

    In this phase, your organization will plan out the scope of the project, and identify internal assets and stakeholders. For SMBs, it’s usually best to limit the scope, and execute a pilot insider threat program based around your organization’s most pressing risks. That could mean focusing on employees handling advanced research or preparing for a merger.

    Alternatively, you might want to focus on an area of your organization under heightened compliance pressure — such as meeting HR HIPAA requirements. If you’ve had a major incident, or a series of minor incidents in a particular department, that could also be a natural place to pilot the program.

    Once you’ve broadly outlined the scope of your insider threat program, it’s time to look at internal assets and stakeholders. What security and compliance programs do you already have? What software are you running that could help identify insider threats? Do you have people with security expertise, or training in insider threat detection? What about external suppliers or consultants who might be able to offer support? In many cases, your insider threat program will benefit greatly from an outsider who can offer a fresh perspective.

    2. Insider Threat Program — Build Your Team

    Now, it’s time to establish your team. Involve people from the pilot department as well as the security staff and partners you’ve identified. As we said earlier, your insider threat program should not be a top-down project that treats your staff with deep suspicion — it should be a collaborative process where staff are encouraged to voice their concerns and lend their help. Not only will this improve morale (no one wants to be part of a “pilot program” that treats them as the enemy), it will also lead to a more successful insider threat program, since workers will be able to help spot risky or suspicious behavior.

    3. Insider Threat Program — Management Buy-In

    Management buy-in is essential, not just because you need them to sign off on resources and changes, but because you need them as participants. Managers have access to the most valuable resources. They’re the most valuable targets for sophisticated attackers, and when they maliciously leak data or make a careless mistake, they’re the ones who can do the most harm. On a more positive note, if they feel part of the program from the beginning, it will be much easier to expand the program later.

    An outside vendor like Virtru can provide guidance and help you prepare to make the pitch. However, when it comes time to take the insider threat program to management, you’re the best advocate. Coming to management with an initiative to secure business data is almost always going to look more convincing than immediately turning over the mic to an outside partner. However, having a trusted security partner with experience in implementing insider threat programs in your pocket early on will help you avoid pitfalls, and craft an effective project from day one.

    4. Insider Threat Program — Identifying Risks

    Now that you’ve sold the insider threat program to management, it’s time to take a close look at what risks you’re trying to prevent, and what data you’re trying to protect. Start by listing all the different kinds of sensitive data people in your pilot program have access to. For each type of data, you need to answer several questions, including:

    • What is the value of this type of data to you?
    • What would be the consequences if it were stolen or vandalized? Include compliance fines, lawsuits, loss of business, and loss of competitive positioning.
    • Who would be interested in stealing this data, and why? How valuable would it be to hackers, competitors, etc.?
    • In what ways could the data be lost? Is it a likely target for a malicious insider? Is it something that an employee could easily accidentally email to an unauthorized party? Could a partner leak it?
    • In what ways could it contribute to further breaches?

    Some data (e.g., passwords) has little business value in itself but is incredibly valuable to a hacker trying to perpetrate an attack. Other data might be valuable to certain competitors but not to anyone else. For example, a parts invoice might contain intelligence a competitor could use to learn more about your company, but it won’t be any help to anyone else. Business data protection should be based around preventing the most likely risks.

    As you can imagine, things can get pretty complicated pretty quickly. That’s okay — a pilot insider threat program doesn’t have to address every risk on day one. You can always prioritize certain risk mitigation steps, and put others off for another day.

    5. Insider Threat Program — Plan Risk Remediation

    With your big risk list, you’ll be able to identify the most urgent risks for your insider threat program. This is when knowledge of your existing security program comes in handy. Bad password practices, unsafe browsing, and lack of phishing awareness are major security risks, but your company may already be addressing them with regular training.

    One thing your company probably isn’t handling is the risk of unencrypted email. If insiders email sensitive information — for example, because the recipient doesn’t use the same secure client portal — it can be intercepted by a hacker. Your company may want to drop the portal altogether (if you have one) and use an email encryption program for all communication. This will simplify secure communication, and limit the risk of employees forgetting to switch when they need a secure channel. To learn more about the benefits of encrypted email over portals, check out the resource list below.

    You’ll also want to tag sensitive data, and implement (or strengthen) rules for handling it securely. Restrict access to sensitive information to those who absolutely need it, and make rules governing how they can use and share it. For example, users should never email billing information, as this violates PCI DSS (unless your email system is within scope, which is unlikely).

    Use a data loss prevention (DLP) solution such as Virtru DLP to enforce those rules, and supervise workers for compliance.
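    To make this concrete, here’s a minimal sketch of the kind of content-inspection rule a DLP tool enforces. The patterns, rule names, and actions below are illustrative assumptions for this guide, not Virtru DLP’s actual configuration syntax.

```python
import re

# Crude patterns of the kind a DLP rule might scan outgoing mail for.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # credit-card-like numbers (PCI)
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")    # US Social Security numbers

RULES = [
    {"name": "pci-card-number", "pattern": CARD_PATTERN, "action": "block"},
    {"name": "pii-ssn", "pattern": SSN_PATTERN, "action": "bcc_manager"},
]

def evaluate_outgoing_email(subject: str, body: str) -> list:
    """Return every rule the outgoing message violates, with the action to take."""
    text = f"{subject}\n{body}"
    return [rule for rule in RULES if rule["pattern"].search(text)]

# Example: this message trips the PCI rule and would be blocked.
violations = evaluate_outgoing_email(
    "Invoice follow-up",
    "The customer's card 4111 1111 1111 1111 was declined.",
)
for rule in violations:
    print(f"Rule {rule['name']} matched; action: {rule['action']}")
```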

    However, poor access control is also a factor in many accidental breaches. When users have more access than their jobs require or retain access after termination, it creates unnecessary risks, and can exacerbate the scale of breaches.

    Organizations need physical, technical and procedural controls in place to control how much access users have. Data should be restricted based on role so that, for example, Protected Health Information (PHI) is restricted on a granular level — even within a department that administers medical benefits. Clerical staff may need access to patient names, but if they don’t need detailed medical records, they shouldn’t have access to them.
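    As an illustration, the sketch below applies that kind of granular, role-based redaction to a patient record. The role names and fields are hypothetical; in practice the mapping would live in your IAM tooling or application middleware rather than in ad hoc code.

```python
# Fields each role may see within a patient record (hypothetical roles and fields).
ROLE_FIELD_ACCESS = {
    "benefits_clerk": {"patient_name", "member_id", "coverage_status"},
    "claims_nurse": {"patient_name", "member_id", "coverage_status", "diagnosis_codes"},
    "physician": {"patient_name", "member_id", "coverage_status",
                  "diagnosis_codes", "clinical_notes"},
}

def redact_record(record: dict, role: str) -> dict:
    """Return only the fields the caller's role is permitted to view."""
    allowed = ROLE_FIELD_ACCESS.get(role, set())  # deny by default: unknown roles see nothing
    return {field: value for field, value in record.items() if field in allowed}

record = {
    "patient_name": "Jane Doe",
    "member_id": "A-10042",
    "coverage_status": "active",
    "diagnosis_codes": ["E11.9"],
    "clinical_notes": "Follow-up in 3 months.",
}

print(redact_record(record, "benefits_clerk"))  # name, member ID, and coverage only
print(redact_record(record, "physician"))       # full record
```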

    Organizations should work toward unified compliance frameworks, incorporating HIPAA best practices like business associate agreements with technologically rigorous CJIS compliance standards. CJIS security policy requires controls like weekly audits and account moderation which aid in insider threat detection, along with technical controls like multi-factor authentication, limits on unsuccessful login attempts and 128-bit or greater encryption to prevent breaches.
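    One of those controls, limiting unsuccessful login attempts, is straightforward to reason about in code. The Python sketch below uses assumed values for the threshold and lockout window; CJIS policy defines its own, and production systems enforce lockouts in the identity provider rather than in application code.

```python
import time

# Assumed values for illustration; CJIS policy defines its own thresholds.
MAX_FAILED_ATTEMPTS = 5
LOCKOUT_SECONDS = 10 * 60

failed_logins = {}  # username -> timestamps of recent failed attempts

def record_failed_login(username: str) -> None:
    failed_logins.setdefault(username, []).append(time.time())

def is_locked_out(username: str) -> bool:
    """True when an account has exceeded the failed-attempt limit within the window."""
    cutoff = time.time() - LOCKOUT_SECONDS
    recent = [t for t in failed_logins.get(username, []) if t > cutoff]
    failed_logins[username] = recent  # drop stale failures
    return len(recent) >= MAX_FAILED_ATTEMPTS

# A login flow would check is_locked_out() before verifying the password,
# and call record_failed_login() whenever verification fails.
```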

    Multi-factor authentication is not a substitute for other good authentication practices, such as strong passwords and frequent password changes. Google Workspace (formerly G Suite) security settings can help admins require strong passwords, restrict user permissions, and enforce other compliance measures, and many other cloud productivity suites have similar controls.

    Risk Remediation for Malicious Threats

    An insider threat program plan for malicious insiders should revolve around spotting and reviewing warning signs. Workers and managers should have a designated contact for reporting concerns, and should be taught which suspicious behaviors to look out for, along with careless risks such as leaving a computer logged in and unattended.

    DLP can help you spot malicious insiders. For example, Virtru DLP can alert managers when workers break DLP rules, and BCC managers on email containing sensitive subjects, words, and data. This is generally not a high priority for a pilot insider threat program, but can be helpful for companies with high-risk information, or a history of insider threats.

    Risk Remediation for Third-Party Threats

    The breadth of third-party access is only going to grow with increasing reliance on multi-cloud environments, automation, and more devices. As organizations begin to adjust their risk strategies to prepare for this digital transformation, zero trust strategies have gained momentum. Zero trust entails a deny-by-default approach, with significant emphasis on access privileges.

    For the most part, zero trust has focused on application and network access. However, as data continues to flow at unprecedented levels through external storage and service providers, access privileges must be enforced at the data level and persist across environments to truly deny unauthorized data access.

    Object-level data protection with explicit and customizable access privileges is essential to deter today’s insider risks, including third-party providers. A focus on data access, including data revocation and expirations, helps secure data from unauthorized access. This is especially useful for countering insiders who too often retain and leverage access privileges long after separating from a company or switching divisions. Because the access privileges persist wherever the data goes, data owners retain greater control of the data, even when it is stored across a broad range of cloud environments and devices. Access privileges can be time-bound and evolve over time as requirements shift and as the workforce changes.
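    Conceptually, that looks something like the sketch below: each data object carries its own grant list, expirations, and revocation state. This is an illustration of the idea with made-up names, not Virtru’s actual policy format.

```python
from datetime import datetime, timedelta, timezone

class DataObjectPolicy:
    """Access policy that travels with a single data object."""

    def __init__(self, owner: str):
        self.owner = owner
        self.grants = {}  # user -> UTC datetime when that user's access expires
        self.revoked = False

    def grant(self, user: str, days: int) -> None:
        """Give a user time-bound access; re-granting resets the expiration."""
        self.grants[user] = datetime.now(timezone.utc) + timedelta(days=days)

    def revoke_all(self) -> None:
        """The owner pulls back access, wherever copies of the data now live."""
        self.revoked = True

    def can_access(self, user: str) -> bool:
        if user == self.owner:
            return True
        if self.revoked:
            return False
        expiry = self.grants.get(user)
        return expiry is not None and datetime.now(timezone.utc) < expiry

# A contractor gets 30 days of access; when the engagement ends early, access is revoked.
policy = DataObjectPolicy(owner="alice@example.com")
policy.grant("contractor@example.com", days=30)
print(policy.can_access("contractor@example.com"))  # True while the grant is valid
policy.revoke_all()
print(policy.can_access("contractor@example.com"))  # False after revocation
```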

    6. Insider Threat Program — Iterations

    Security is an ongoing process, not a one-time initiative. Set modest goals for the early stages of your insider threat program, and have workers and program staff meet frequently to discuss its progress. You may need to tinker with your DLP settings to reduce false positives or add lower priority rules gradually, and there’s a good chance some of your procedures will need tweaking. Stick with it, celebrate your progress as you go, and keep your workers engaged.

    At a certain point, you’ll want to roll out your pilot insider threat program to the rest of the company. Look to the workers involved in the original program as leaders and teachers. The more your company can learn from them, the more effective the rollout will be.

    7. Insider Threat Program — Employee Education

    As we mentioned above, good security and administration are the best defense against accidental insider threats. However, education also plays a crucial role. Employees need to be trained and retrained to eliminate security risks and compliance issues.

    Poor access processes are a major source of insider breaches. Using unsecured public Wi-Fi, storing access credentials on your computer, or leaving your computer unsupervised in a public place can all result in criminals gaining access to sensitive data or even stealing an employee’s login credentials. For similar reasons, employees should never store passwords in the browser, and should configure browsers to clear their cache on exit.

    Security rules should clearly spell out what incidents should be reported, who they should be reported to, and how they should be handled. Each department should have specific procedures posted in prominent locations, with contact info for reporting potential breaches.

    Anything that could compromise government cyber security needs to be reported promptly. If, for example, an employee improperly used a public computer for a secure login, or suspects that someone may have spied on them typing in their password, prompt notification and remediation will decrease the risk of a serious breach. Therefore, it’s crucial that employees feel free to approach management. If workers fear termination or severe discipline for reporting a mistake, they’re much less likely to report it.

    Resources: How to Prevent Insider Threats

    The more you learn about insider threats and other IT security issues, the more effectively you can reduce risk in your organization. Use these resources to learn more.

    Editorial Team


    The editorial team consists of Virtru brand experts, content editors, and vetted field authorities. We ensure quality, accuracy, and integrity through robust editorial oversight, review, and optimization of content from trusted sources, including use of generative AI tools.
