Are you a threat?

The company you work for may want to know about you. Some corporate employers fear that employees could leak information, allow access to confidential files, contact clients inappropriately or bring a gun to the office.

Some companies use behavioral science tools to assess perceived trustworthiness, at times through semi-automated, near-constant assessments. Even as many employers worry about retaining workers amid the Great Resignation, they also want to know which clock-punchers might harm their organizations, though workers may be put off by the feeling that technology is invading yet another sphere of their lives.

The language around this sort of worker-watching is similar to what is used within the government, where public agencies assess workers granted security clearances to handle sensitive information related to intelligence collection or national security. Monitoring software and behavioral analysis built for the feds can be used by private companies, too, either independently or packaged with broader cybersecurity tools.

Tom Miller is the chief executive of Clearforce, one such company, which sells threat-detection services to private clients.

These firms watch employees much the way an intelligence agency might keep a close eye on its analysts and spies, though without access to the same data sources. Their clients include Fortune 500 companies and employers in sectors such as critical infrastructure, financial services, transportation, health care and entertainment. You may be working for one now.

Software can watch for suspicious computer behavior, or it can dig into an employee's credit reports. It can check whether Cheryl is moving unusual volumes of data to the cloud, or whether Tom's messages are getting testier over time. The companies that monitor insider risk say they can point to potential problems in the workplace.
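None of these vendors publish their models, but a minimal sketch of the simplest version of such a check, with invented data and an invented threshold, might compare an employee's daily cloud uploads against her own baseline:

```python
# Hypothetical sketch of one insider-risk signal: compare today's
# cloud-upload volume with the employee's own recent baseline.
# The data and threshold are invented for illustration.
from statistics import mean, stdev

def upload_anomaly(history_gb: list[float], today_gb: float,
                   threshold: float = 3.0) -> bool:
    """Flag if today's uploads sit far above this user's norm."""
    baseline, spread = mean(history_gb), stdev(history_gb)
    if spread == 0:
        return today_gb > baseline  # flat history: any jump stands out
    return (today_gb - baseline) / spread > threshold

# Cheryl usually uploads a few gigabytes a day; today it is 120.
cheryl = [2.1, 3.4, 2.8, 4.0, 3.1, 2.5, 3.7]
print(upload_anomaly(cheryl, 120.0))  # True: flag for human review
```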

Employers are experimenting with, and investing in, a great deal of such technology. At some point, there will be a reckoning over its consequences.

There are ethical questions about what level of monitoring nongovernmental employees should be subject to, and insider vetting is not always based on settled science.

Much of the federal government's security-clearance-granting process has relied on techniques that emerged in the mid-twentieth century.

Evan Lesser, president of ClearanceJobs, a website that posts jobs, news and advice for positions involving security clearances, said the traditional process, investigators driving around in cars to meet people, is very manual, eats up time and is outdated.

A federal initiative called Trusted Workforce 2.0 formally introduced semi-automated analysis of federal employees that occurs in close to real time. Using artificial intelligence, the government can subject employees who are seeking or already hold security clearances to continuous vetting and evaluation.

Could a system be built that would both check someone out and keep an eye on them over time? That, Chris Grijalva said, was the question.

Such efforts have existed in more ad hoc forms since the 1980s. Under typical government policy, employees are re-evaluated only every five or 10 years. The idea that circumstances, and people, change was the motivation for the adjustment in policy and practice.

The author of the book "Screening" said that it is compelling to keep people under some kind of constant, ever-evolving surveillance process.

The transition period before full implementation of the program finished in fall 2021. In December, the U.S. Government Accountability Office recommended that the automation be evaluated.

Private-sector workers don't have to submit a 136-page clearance form, but corporations are using similar software to surveil them. Private companies help build these technologies for the federal government, and any solution would have private-sector applications.

A study highlighted three large corporations that provided information helping to identify potential government-insider threats. Companies offering semi-automated insider-threat analysis services include Forcepoint, Clearforce, Peraton and Endera.

[Illustration: Daniel Zender]

Mr. Grijalva said that people are starting to understand that the insider threat is a business problem and should be handled accordingly.

There may be few limits on a private company's ability to monitor its employees.

The law gives employers wide freedom to do what they want, not just in the workplace but outside it as well. At best, employees are informed of the monitoring in a transparent way.

One of the most well-known frameworks for assessing insider threats is called the critical pathway.

The model has weaknesses, including the absence of an ideal, full control group, according to Eric Shaw, a clinical psychologist and co-author of the 2015 study that described the critical pathway. It lays out risk factors, but they do not predict who will actually present a threat.

There are cases, Dr. Shaw noted, where people have all the risk indicators but never become an insider risk.

Edward Stroz, a former FBI special agent who founded the cyber forensics firm Stroz Friedberg, has helped apply critical-path principles to the analysis of text communications. Dr. Shaw heads the Insider Risk Group, and the linguistic software package they offer uses psycholinguistic analysis to find flags that indicate feelings of disgruntlement, like anger and blame.

The language, they say, changes in subtle ways that the writer doesn't notice.
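The details of such psycholinguistic models aren't public here, so what follows is only a toy sketch of the general idea, with an invented lexicon, baseline and multiplier: count anger- and blame-related words, and escalate a message only when its rate far exceeds the sender's own norm.

```python
# Toy sketch of psycholinguistic flagging: count anger- and blame-
# related words, then flag messages well above the sender's baseline.
# The lexicon, baseline and multiplier are invented for illustration.
import re

ANGER_BLAME = {"furious", "fault", "blame", "unfair", "ignored",
               "useless", "betrayed", "sabotage"}

def disgruntlement_rate(message: str) -> float:
    """Fraction of words drawn from the anger/blame lexicon."""
    words = re.findall(r"[a-z']+", message.lower())
    hits = sum(w in ANGER_BLAME for w in words)
    return hits / len(words) if words else 0.0

def escalate(message: str, sender_baseline: float,
             multiplier: float = 3.0) -> bool:
    """Route to a human reviewer only if far above the sender's norm."""
    return disgruntlement_rate(message) > multiplier * sender_baseline

msg = "I'm furious that my work was ignored again. This is unfair."
print(escalate(msg, sender_baseline=0.01))  # True: send for review
```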

In the latest test of the software, 383 messages were flagged and sent to a trained clinician for review. The small number indicates that such a system could protect individual privacy, because only the concerning messages would be seen by a human being.

The small amount of email identified this way, he said, should give people a lot of comfort.

The experiment also produced false positives: cases in which the software tagged someone as a threat who was not one.

Being transparent, introducing the idea in stages and restricting who can see the analysis are some of the ways to implement monitoring ethically.

He said that we need to ask better questions about what we do to protect our society.

David Luckey is one of the authors of a 2019 report on continuous evaluation.

The difficulty, Mr. Luckey said, doesn't mean that continuous evaluation shouldn't be considered.

It remains a work in progress, though: limited behavioral and technical data are available to develop and deploy an effective, predictive continuous-evaluation tool.

In other words, there isn't enough information to build a trustworthiness system from the ground up, and that holds in the private and public sectors alike. Privacy protections are part of the reason. The aim of many monitoring and behavioral analytic programs is to offer interventions before something bad happens, not to punish people in advance.

In an ideal world, any flag would be followed up with tools and resources to help an employee, whether it is alcohol counseling or an employee-resource group for family issues, said Ms. Kyzer of ClearanceJobs.

It would also be difficult to draw individual-level conclusions about which behavioral indicators are related to malicious actions.

Even if one could collect all the data on past bad actors, it wouldn't amount to much, because such people are rare.

He said that you are starting to get into some very, very iffy math.
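That iffy math is, at bottom, a base-rate problem: when genuine insider threats are rare, even a fairly accurate detector produces mostly false alarms. A quick calculation with invented numbers shows the effect:

```python
# Base-rate arithmetic with invented numbers: why rare threats swamp
# even a decent detector with false alarms.
employees = 10_000            # hypothetical workforce
true_threats = 10             # assume 0.1% are genuine insider threats
sensitivity = 0.90            # detector catches 90% of real threats
false_positive_rate = 0.05    # and wrongly flags 5% of innocent workers

true_alarms = true_threats * sensitivity                          # 9
false_alarms = (employees - true_threats) * false_positive_rate   # ~500

precision = true_alarms / (true_alarms + false_alarms)
print(f"Share of flagged workers who are real threats: {precision:.1%}")
# ~1.8%: roughly 55 innocent people flagged for every actual threat
```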

Margaret Cunningham is a behavioral scientist who used to work at Forcepoint, a firm that provides threat analysis to private companies.

Incorporating factors such as chronic illness, mental health issues and family history into insider-threat behavioral analytics can lead to models that call to mind that old phrase: garbage in, garbage out.

Software cannot determine how much such personal factors influence a person's future likelihood of engaging in malicious behavior.

Implementing such systems the wrong way can degrade the employee-employer relationship. Part of skirting Big Brother territory is avoiding injudicious surveillance: not simply ingesting all the data that is available and legal, regardless of its proven utility.

Raj Ananthanpillai, chief executive at Endera, offered the hypothetical of running a trucking company.

He would want to know, he said, if his drivers had a drinking problem.

But if workers know, or find out, that an employer is collecting more than it needs to know, monitoring can turn that employer into an antagonist.

It builds a lot of resentment, Mr. Ananthanpillai said. You are making it worse.

Ms. Kyzer said that private companies could take a tip from the federal government.

According to Dr. Cunningham, Forcepoint took a focused approach with private companies. While it considers factors like employee financial distress or disgruntlement for clients that have a need and justification for that level of analysis, the firm prefers to focus on deviations from normal work behavior.

That could include rule-breaking behavior that appears unusual in context: for instance, setting up a work email account to auto-forward everything to a private Yahoo address, or an uptick in screenshots of confidential documents during internal Zoom meetings.
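A production system would be far more elaborate, but a minimal sketch of "unusual in context," using invented event names and sample logs, might simply surface sensitive event types a user has never generated before:

```python
# Minimal sketch of "unusual in context": surface sensitive event
# types that never appear in this user's own history. Event names
# and the sample logs are invented for illustration.
SENSITIVE = {"auto_forward_rule_created", "confidential_screenshot"}

def novel_sensitive_events(history: list[str], recent: list[str]) -> set[str]:
    """Return sensitive event types seen recently but never before."""
    seen = set(history)
    return {e for e in recent if e in SENSITIVE and e not in seen}

history = ["login", "file_open", "file_open", "email_sent", "login"]
recent = ["login", "auto_forward_rule_created", "confidential_screenshot"]
print(novel_sensitive_events(history, recent))
# {'auto_forward_rule_created', 'confidential_screenshot'} -> review
```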

She said the emphasis is on what the employee is doing on the job with work-owned equipment, not on building up an understanding of an employee's personal life or using behavioral indicators that are difficult to measure.

Dr. Cunningham said that she has focused on identifying indicators that can actually be measured, rather than those that require a lot of interpretation. Relying on the latter, she suggested, is too dangerous to be ethical.