Unpacking Regulatory, Contractual, and Sanctions Challenges from DPRK-Affiliated Schemes

Today, we’re zeroing in on a uniquely challenging scenario—what happens when the insider threat isn’t an employee at all, but rather a remote worker posing under false pretenses, potentially linked to adversarial nations like the DPRK. Landon Winkelvoss is here to help us understand how these investigations play out, what legal and operational pitfalls to avoid, and why secure, out-of-band communications become critical during these sensitive internal investigations.

DPRK-linked remote worker deception turns “insider threat” into an identity problem first, a cybersecurity problem second, demanding HR–Legal–Security choreography and disciplined, out-of-band (OOB) communications.

  1. Why this isn’t a normal insider case. Typical insider investigations start with a known employee and rich telemetry; DPRK cases start with false identity and employment fraud, often before day one.
  2. First signals that trigger the case. Many investigations begin as performance issues; basic I-9/identity checks quickly reveal a mismatch and lead to shipping address anomalies (e.g., laptop farms).
  3. Device and data handling. Teams must decide whether to brick or recover devices; use EDR/endpoint tooling (e.g., geolocation, custom scripts) to assess exposure and support retrieval.
  4. Operate OOB under privilege. Stand up an airtight, out-of-band channel for a small “tiger team” (Legal, HR, Security, IT), with E2EE and audit-ready records that survive e-discovery.
  5. Don’t tip off the suspect. If the worker sits in IT, spinning up in-band Slack/Teams/Signal variants can expose the investigation; keep comms entirely off the production network.
  6. Four pre-employment red flags (weigh them in aggregate).
    • Newly created VoIP phone number, consistent with patterns seen in DPRK cases
    • New email with new LinkedIn profile
    • Minimal account registrations tied to that email
    • Copy-pasted resume content (internal/external matches)
      Treat multiple hits as a trigger for deeper review, not a failure based on a single signal.
  7. Scale matters. For high-volume hiring, automate these checks to flag candidates for human-in-the-loop verification before offers go out.
  8. US-based facilitators. Expect laptop-holding intermediaries (witting/unwitting). Notify the FBI; locating the device and documenting facilitators is often expected in follow-ups.
  9. Tabletop the identity path. Run TTX that simulates falsified identities from application → interview → offer, with “see-something, say-something” OOB escalation points for HR, Legal, and Security.
  10. Practical HR friction points. Match I-9 and ship-to addresses, require on-camera/in-person checks when warranted, de-duplicate reused references, and increase pre-offer identity screening for sensitive roles.
  11. Regulatory trajectory. Expect pressure for more aggressive pre-offer identity verification and clearer allowances to vet high-risk roles before onboarding, given the sanctions/OFAC exposure.
  12. Bottom line. Treat DPRK remote worker deception as a repeatable risk: verify identity continuously, communicate OOB under privilege, log decisions for audit, and practice the hand-offs between HR, Legal, and Security.
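The aggregate-scoring logic in items 6–7 can be sketched as a small rule. This is a hypothetical illustration, not a vendor rule set: the signal names, the dataclass, and the review threshold of two hits are all assumptions made for the example.

```python
# Hypothetical aggregate scorer for the four pre-employment red flags above.
# Signal names and the threshold are illustrative assumptions, not a
# production detection rule.
from dataclasses import dataclass

@dataclass
class CandidateSignals:
    voip_phone: bool               # phone number resolves to a VoIP carrier
    new_email_new_linkedin: bool   # email and LinkedIn profile both newly created
    minimal_registrations: bool    # email has few/no third-party registrations
    duplicated_resume: bool        # resume text matches internal/external sources

def needs_deeper_review(signals: CandidateSignals, threshold: int = 2) -> bool:
    """Escalate to human-in-the-loop review when multiple signals co-occur.

    A single hit is treated as noise; multiple hits in aggregate trigger
    deeper review, mirroring the 'weigh them in aggregate' guidance above.
    """
    hits = sum([
        signals.voip_phone,
        signals.new_email_new_linkedin,
        signals.minimal_registrations,
        signals.duplicated_resume,
    ])
    return hits >= threshold
```

A candidate with only one signal passes through; two or more route the application to a human reviewer before any offer goes out.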

Navroop Mitter:

[00.00.02.23–00.00.30.00]

Hello, this is Navroop Mitter, founder of ArmorText. I’m delighted to welcome you to this episode of The Lock & Key Lounge, where we bring you the smartest minds from legal, government, tech, and critical infrastructure to talk about groundbreaking ideas that you can apply now to strengthen your cybersecurity program and collectively keep us all safer. You can find all of our podcasts on our site, ArmorText.com, and listen to them on your favorite streaming channels. Be sure to give us feedback.

Navroop:

[00.00.30.00–00.00.57.22]

Hello, and welcome back to The Lock & Key Lounge podcast. Today, we continue our special miniseries inspired by our recent Chatham House Rule event with Clear and T-Mobile on the increasingly complex threat of DPRK-linked remote worker deception. Following our previous episode on legal considerations, today we’re shifting focus to investigative tactics and operational strategies that organizations need when confronting these deceptive threats.

[00.00.58.00–00.01.22.19]

We’re joined by Landon Winkelvoss, co-founder and VP of Intel and Legal Advisory at Nisos, to guide us through these complex investigations. The series leads into our public webinar, Fraud, Fakes, and Foreign Threats: Identity Verification and Secure Comms in the Age of DPRK Remote Worker Schemes. The webinar recording will be linked for listeners tuning in afterward. Landon, thanks for joining us today.

Landon Winkelvoss:

[00.01.22.21–00.01.27.09]

Navroop, thanks for having me. I’ve always been a big fan of what you guys are doing. Looking forward to the conversation.

Navroop:

[00.01.27.13–00.01.59.05]

Yes, looking forward to it. Now, for those of you who don’t know, Landon Winkelvoss co-founded Nisos in 2015 with a mission to combat threats ranging from IP theft and fraud to sophisticated e-crime and disinformation. Before Nisos, Landon served as a technical targeting officer in the U.S. intelligence community, including multiple war zone deployments and overseas postings. At Nisos, he leads the intersection of threat intelligence, digital investigations, and legal advisory services, addressing clients’ toughest security challenges.

[00.01.59.07–00.02.20.21]

Landon recently joined our panel at Clear and ArmorText’s executive event, and we’re thrilled he’s here to share more today. So, today we’re going to be zeroing in on a uniquely challenging scenario: What happens when the insider threat isn’t an employee at all but rather a remote worker posing under false pretenses, potentially linked to adversarial nations like the DPRK?

[00.02.21.00–00.02.49.21]

Landon is here to help us understand how these investigations play out, what legal and operational pitfalls to avoid, and why secure out-of-band communications become critical during these sensitive internal investigations. And with that said, let’s get started. Landon, insider threats are complex enough. But why are investigations involving remote workers who may have falsified their identities, especially those linked to nation states like the DPRK, even more complicated?

Landon:

[00.02.50.01–00.03.26.02]

Nav, again, thank you very, very much for having me. Truly a pleasure. I think, really, the biggest difference is exactly what you talked about regarding the false pretense of a remote worker. Most insider threat investigations usually start because there’s some type of data exfil. There’s some type of “see something, say something” that comes about in a training and awareness campaign, or there is some type of malicious activity that is discovered outside of the firewall from a workplace violence perspective, from that side.

[00.03.26.04–00.03.51.00]

So, if you just think about any of that, what it really comes down to is identity. Most insider threats—typical insider threat investigation—the identity is known by the individual. Everything is known about that individual within reason. The telemetry is tracked. There’s data loss prevention systems. There is behavior analytics. There’s endpoint detection response. 

[00.03.51.00–00.04.22.18]

There’s logs. There’s chats. The identity of the individual is generally known. And therefore, digging up the insider threat investigation or having a tipping campaign—call it from employees or some type of monitoring from threat intelligence that’s happening outside of the network—is known. That is certainly easier to go into a discussion or a discovery process around an insider threat when you’re dealing with remote workers, particularly that of the DPRK.

Landon:

[00.04.22.20–00.04.53.02]

It’s really—we’re talking about employment fraud around an identity that is not known. So, realistically, those are the differences here. To summarize: a typical insider threat is usually a cybersecurity and corporate security function. With remote workers, it really is an identity crisis, an identity problem.

[00.04.53.04–00.05.19.10]

That is not a cybersecurity problem, really, at all. It is an entire workforce management issue around identity. So, those are some of the differences. Now, look, with insider threats, HR is involved, IT’s involved. Those are still complex investigations that certainly have an element of cross-functional coordination complexity.

[00.05.19.13–00.05.51.04]

There are lanes of the road already, usually pre-established, when someone starts an insider threat investigation. These remote worker issues go left of that and become an issue in the application process. So, getting HR teams, legal teams, security teams, and engineering teams accustomed to identifying an insider threat before somebody even starts—that is completely greenfield right now.

[00.05.51.04–00.06.10.21]

And that is what DPRK has successfully exploited. And we’ll kind of talk through how that is addressed. But just to kind of bring it back, now we’re talking with the remote workers. We’re talking about starting insider threat investigations before somebody is even hired. And that is just not what HR teams are used to dealing with. 

[00.06.10.21–00.06.25.14]

And even security teams, from small organizations all the way up to Fortune 500s. This is very much a new paradigm, and dealing with it at increasing scale is another challenge; we’ll get into some of the nuance behind that.

Navroop:

[00.06.25.16–00.06.40.01]

Can you walk us through the investigative process, then, step by step, right from initial suspicion to confirmation? We want everyone to be able to understand, right? What does discovering a potentially compromised remote worker look like for organizations?

Landon:

[00.06.40.04–00.07.12.02]

Yep, no problem at all. So, from the typical perspective, let’s take your average organization—I don’t care if it’s a Fortune 500 or an SMB. Typically, this comes down to some type of workforce performance issue. Sometimes the person is actually doing a great job, particularly around DPRK. But typically, it’s some type of performance issue: somebody not showing up, somebody not executing appropriately.

[00.07.12.04–00.07.41.01]

So, immediately, it’s brought to the attention of HR, who contacts legal and says we have a performance problem with this remote worker; they’re not really doing their job well. And typically, within probably a 24-hour period, it’s very clearly identified that the individual who’s supposed to be doing the job is not who they say they are.

[00.07.41.03–00.08.09.19]

Basic checks around their identity and their I-9 verification show that the person who’s supposed to be in the chair is not the same identity as the real person. And, I mean, basic checks in things like Spokeo and White Pages, and basic Googling and social media analysis, show that person A who took the job is not the actual person.

[00.08.09.21–00.08.33.16]

So then that leads into, okay, well, where is the laptop? The laptop is usually shipped to a different address than what’s on the I-9 form. The I-9 form will have address A; address B is where the laptop is actually shipped—usually to a facilitator, witting or sometimes unwitting of working with DPRK, usually in the United States at some laptop farm.

[00.08.33.18–00.09.01.14]

There have been numerous indictments from the FBI showing laptop farms in states like Arizona, Tennessee, New York, Texas, California—really all over the place—where that laptop is usually shipped. So then the investigation starts to look like a typical incident response investigation.

[00.09.01.14–00.09.38.12]

What has been taken? Who still has access? Are they employed? How do we mitigate this as soon as possible? Where is that data now? Is it ultimately compromised? In what way? We’ve generally seen that investigations… Sometimes these employees are actually doing a great job, and they’re just collecting a paycheck on a regular basis. Sometimes, when they realize they’re being flagged for a performance issue, they will look to exfiltrate data and communicate that in Telegram or Discord groups—numerous off-platform channels—to try and sell that data.

[00.09.38.14–00.10.03.06]

It really varies in between. But the core part is really: how do we identify the laptop and understand where that laptop is? How do we take steps to terminate this employee and facilitate the last payment—if we should, really? Which gets into legal considerations, because you cannot pay—or you should not wittingly pay—a foreign adversary like the North Korean regime.

[00.10.03.09–00.10.34.08]

So those are the typical pre-incident and post-incident indicators. Within the practical steps for internal escalation, a very critical part is out-of-band communications. I think any time you’re forming an insider threat team, you’ve got to have the tiger team of the legal officer, the head of HR, the head of security—a group of 5 to 10 individuals who are going to be trusted to run this investigation.

[00.10.34.10–00.11.03.11]

And you want to have an airtight out-of-band communication system that ultimately survives e-discovery processes. I think that’s a really critical point, because so many times you see these investigations take place on a not-approved out-of-band communication platform, and then you have to go back and reconstruct the facts around that, and it becomes really messy.

[00.11.03.11–00.11.27.21]

And so, certainly, I think it’s important, again, to have a secure out-of-band communications system and platform. And then, from that perspective, it’s mitigate the access they have, do a damage assessment, and then look to ensure it’s not going to happen again. And I’m sure we’ll get into some of those pre-incident discussions and how you prevent this going forward.

[00.11.27.23–00.11.45.21]

But one final point in terms of the anatomy of an investigation. Oftentimes, organizations just make the call: we just want to break the laptop. We don’t care about the laptop—we want to send some type of command that ultimately disables it.

[00.11.46.03–00.12.09.20]

Others want to actually go recover that device. There are certainly ways to geolocate the laptop using systems like Tanium or CrowdStrike. It’s very important to have endpoint detection and response to look at what is happening on the endpoint, and the ability to pull data from that laptop. So that’s often a critical point.

[00.12.09.20–00.12.29.18]

And ultimately, you can deploy custom scripts that look at the BSSIDs around that laptop, and then there are ways to geolocate those with external telemetry. So those are some of the more nuanced parts that go into these investigations. But we’ll certainly talk about the HR perspectives and operational legal considerations here in the future.
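As a rough sketch of the BSSID approach Landon describes: a script pushed via EDR can enumerate the Wi-Fi access points visible to the endpoint, and those BSSIDs can be matched against an external geolocation dataset. Everything below is illustrative—the hardcoded `KNOWN_BSSIDS` dict is a stand-in for whatever real telemetry source an investigator has, and the averaging is the crudest possible position estimate.

```python
# Illustrative only: estimate an endpoint's location from nearby Wi-Fi BSSIDs.
# In practice the observed BSSID list would come from an EDR-deployed script
# on the endpoint, and KNOWN_BSSIDS would be an external geolocation dataset,
# not a hardcoded dict.
from statistics import mean
from typing import Optional

# Stand-in for an external BSSID -> (lat, lon) telemetry source.
KNOWN_BSSIDS = {
    "aa:bb:cc:00:00:01": (33.4484, -112.0740),  # illustrative coordinates
    "aa:bb:cc:00:00:02": (33.4490, -112.0731),
}

def estimate_location(observed_bssids: list[str]) -> Optional[tuple[float, float]]:
    """Average the coordinates of any access points we can resolve."""
    coords = [KNOWN_BSSIDS[b] for b in observed_bssids if b in KNOWN_BSSIDS]
    if not coords:
        return None  # no overlap with the telemetry source
    return (mean(lat for lat, _ in coords), mean(lon for _, lon in coords))
```

Even two or three resolved access points can narrow a "remote worker's" laptop to a specific building, which is the kind of detail the FBI asks for when facilitators come up later in the episode.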

Navroop:

[00.12.29.23–00.12.42.19]

Yeah. And actually, ahead of that, I’d love to understand what specific red flags organizations should be looking for when vetting remote IT workers in the first place, right? And really, how do these differ before and after an incident occurs?

Landon:

[00.12.43.00–00.13.05.17]

No, great question. Red flag indicators are a critical aspect of making sure this doesn’t happen again. Again, going back to my major point, which I think is just super critical for HR teams, and security teams, and legal teams: this is not a cybersecurity issue.

[00.13.05.19–00.13.35.06]

What do I mean by that? What I mean is directly pertinent to the red flags I’m talking about. If you’re in the cybersecurity mindset only, and you are looking at—let’s just take unauthorized VPN access, unauthorized remote desktop access—and starting from that perspective, if that’s the place you’re starting from, you’re going to be in a losing battle, particularly at scale, particularly when you have thousands of remote IT workers or remote employees in general.

[00.13.35.08–00.14.05.15]

So if you’re taking this from a cybersecurity perspective, and you’re looking at it just from a policy around the infrastructure being used, that is not where you have to start. There are really four specific red flag indicators from the technical side that really matter. And this is really pre-employment—when people submit applications—what are the red flags that you have to build against.

[00.14.05.17–00.14.26.14]

And I’ll cover those first for a very specific reason. The main four red flags are really the following. One, we want to be able to flag on a voice-over-IP phone number. That has been almost consistently used by DPRK; when they register for these applications, they’re using a VoIP phone number.

[00.14.26.16–00.14.47.00]

Number two, they’re creating a new email, and then they’re creating a new LinkedIn account. So you’ve got to be able to flag on a newly created LinkedIn account, as well as a new email address. There are numerous tools that you can deploy that flag on a VoIP number, a new email, or a new LinkedIn.

[00.14.47.02–00.15.16.02]

Nisos certainly being one of those types of tools, and we’ll get to that in a little bit. The third really is limited account registrations for that email. So that email address has not gone and registered with social media accounts, or accounts like Fiverr, Upwork, Freelancer.

[00.15.16.04–00.15.40.13]

I mean, any internet service, honestly; it doesn’t just have to be remote worker services or social media accounts. Just think about the average person like you and I, Navroop. We have two to three emails, and we’ve registered with numerous different sites and third-party accounts using those. If an email has no registrations, that’s a huge flag.

[00.15.40.15–00.16.10.16]

And then the fourth is, of course, the copying and pasting of resume material, both internal to an environment as well as external. So you’ve got to be able to search for resume content that’s been copied from remote work sites like Fiverr, Upwork, and Freelancer, as well as internal to the environment. If you flag on two to four of those red flags, you’re going to have a problem.

[00.16.10.18–00.16.36.17]

That’s something that ultimately needs to be surfaced to then say: look deeper at this account. That’s a critical part. And I say it’s important to do this at scale because it’s super challenging to go to every HR professional and have them, consistently, as an example, hold up an ID next to the person on camera.

[00.16.36.19–00.17.17.17]

Another check is whether the laptop’s ship-to address matches the I-9 address. We’ve seen some organizations require remote workers to come to the building to pick up a laptop or to present themselves in person. Those are great steps. I think those are very critical procedural points from HR. But when we look at this at scale—let’s just take a Fortune 500 I talked to last week; they have a million people that apply to different jobs over a year—asking HR to run these types of tripwires manually is just really challenging at scale.

[00.17.17.17–00.17.48.14]

You really need those red flags implemented technically to be able to alert pre-interview. And then the last point in terms of process: organizations that want to be aggressive about this do more enhanced background checks—more than what HireRight or Checkr or your traditional HR tools are going to do—before an offer is actually made.

[00.17.48.16–00.18.06.19]

I think that’s another critical part. And we talked about that a lot in different venues, Navroop, when we did this a couple of weeks ago: confirming identity before an offer goes out. That’s a critical aspect as well.
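The fourth flag Landon describes—copy-pasted resume content—could be approximated with a plain similarity check against a corpus of previously seen resumes (internal applicants plus scraped freelancer-site profiles). The stdlib `difflib` comparison and the 0.9 threshold below are crude stand-ins for whatever matching engine a real screening tool uses; a production pipeline would shingle and hash text to handle millions of applications.

```python
# Crude sketch of resume-duplication detection using stdlib difflib.
# The corpus, normalization, and 0.9 threshold are illustrative assumptions;
# a real system would use shingling/MinHash to work at hiring scale.
from difflib import SequenceMatcher

def find_duplicates(resume: str, corpus: list[str], threshold: float = 0.9) -> list[int]:
    """Return indices of corpus entries that are near-copies of `resume`."""
    resume_norm = " ".join(resume.lower().split())
    hits = []
    for i, seen in enumerate(corpus):
        seen_norm = " ".join(seen.lower().split())
        if SequenceMatcher(None, resume_norm, seen_norm).ratio() >= threshold:
            hits.append(i)
    return hits
```

A hit here would feed into the aggregate score alongside the VoIP, new-email/new-LinkedIn, and minimal-registration signals rather than rejecting a candidate on its own.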

Navroop:

[00.18.06.22–00.18.44.01]

It’s interesting, Landon. As you were speaking, one of the things that came to mind was that for so long, many of us have used services that let you set up a one-time-use or platform-specific email address that forwards to your normal inbox. It lets you know if and when someone you signed up with has sold your email or is sending you things you never agreed to. And it almost sounds like that might backfire today, because the email someone uses for their employment applications may not have enough registrations out there on other services.

[00.18.44.01–00.19.03.16]

So it’s just an interesting thought: some of the things we used to consider almost best practices from a privacy perspective may suddenly need to be reevaluated, given how folks are now going to interpret them as potential red flags.

Landon:

[00.19.03.19–00.19.35.10]

No question about it. Right. And I think that’s a good illustration of why it’s important to have several of these flags. If a remote worker decides, I want one dedicated email address to apply to this position, that’s okay. Right? But when they use a new email address to apply, and they create a new VoIP number, and that email address has limited to no social media, and they’re copying that resume—it’s the aggregate that’s important.

[00.19.35.15–00.20.14.05]

There’s no doubt about that. You’re going to get false positives if you flag on just a new email address and new LinkedIn creation, as an example—I get it, right, there’s no doubt about it. It’s ultimately about getting to signal. It’s about hiring the best person who wants to come work at the organization—the best person, regardless of their background or where they come from. And certainly, the best person can take security measures like registering a new email address or a new phone number.

[00.20.14.07–00.20.37.16]

I totally get it. It’s just that, in the aggregate, those types of aspects need to be raised to another level of review. Right? I think that’s really the bottom line: if they’re the most qualified person, and they’re using all these new…

[00.20.37.17–00.20.56.05]

But we will have that interview, potentially with somebody else on the line, to ensure this is a valid person and they are qualified. So there’s just a level of accountability that you’re ultimately hiring the right person. And they can use all of those measures and still be completely the right person for the job, and qualified.

[00.20.56.06–00.21.08.08]

But again, that’s just what the bad guys are exploiting.

Navroop:

[00.21.08.12–00.21.36.13]

And that makes complete sense. Right. And I think it largely echoes a part of the discussion we were having a few weeks ago, right? Where we were discussing the fact that when you’re doing things like identity verification, just as an example, it’s not a single attribute that causes a pass/fail. It really is looking at the aggregated set of signal you’re receiving, doing some correlation, and then having an appropriate human-in-the-loop decision maker determine the next best course of action.

[00.21.36.14–00.21.55.04]

And sometimes it’s going to be to proceed with caution, and someone’s going to say, oh, well, this is an explainable thing for these certain pieces of signal. It’s okay. Let’s move forward without any barriers. Or, no, wait, there is something more problematic here that needs much deeper investigation, and for now, suspend the process. Right.

[00.21.55.04–00.22.17.23]

And so I think largely what you’re saying is that the nuances matter; they do need to be considered. There isn’t a single source of signal that becomes “this attribute has occurred, therefore don’t do anything,” or “sorry, suspend the entire process.” You have to look at everything in sum, in aggregate. So yeah, I think that’s a really important point.

[00.22.18.01–00.22.39.13]

Shifting gears then for a second, right. I want to come back to something you started with earlier about the witting or unwitting collaborators, so to speak. Right? So if we think about it, lately there’s been a lot of media coverage about arrests involving U.S.-based facilitators who knowingly or unknowingly helped North Korean-linked workers infiltrate American companies.

[00.22.39.15–00.22.49.12]

From your perspective, what key considerations should legal and security teams keep top of mind when they’re investigating or addressing the risks posed by these U.S.-based facilitators?

Landon:

[00.22.49.17–00.23.23.01]

I think a couple things to paint the picture from that perspective. If you are an organization that has remote workers, particularly remote IT workers, it is not a matter of if, but when: DPRK-linked or some other type of employment fraud is going to happen. And when you leverage contractors, the ability for that to occur increases exponentially.

[00.23.23.03–00.24.03.14]

And from our perspective, in terms of operational legal considerations, it’s just an important awareness point. Once you’ve done your blocking and tackling of OFAC considerations, and your blocking and tackling around HR—making the last payment and ensuring your risk is covered from a legal perspective on the HR side—know that there are entire FBI task forces tackling this threat at a pretty decent scale.

[00.24.03.16–00.24.25.05]

And so there’s not a whole lot of blowback in reaching out to the FBI and notifying them that, hey, we have a DPRK issue. The first question they’re going to ask is, ultimately, have you detected any U.S.-based facilitators? And again, most companies aren’t thinking about that, nor do they really want to.

[00.24.25.05–00.24.49.10]

They want to get back to business as usual. I get it. It’s just important to understand that there are people in the U.S. where your company IP and your devices are going, and they’re going to a laptop farm, typically run by somebody who has been recruited by the DPRK, sometimes wittingly, sometimes unwittingly.

[00.24.49.12–00.25.29.18]

These are certainly skilled and adaptive adversaries from this perspective. They’ve been able to recruit regular IT outsourcers as a pool of candidates. And these outsourcers are completely unwitting that it’s DPRK on the other side. And ultimately, they’re holding the laptops for remote use. And I think it’s just important for organizations to realize that it’s a pretty seamless process to ultimately locate a laptop and report that location to the FBI.

[00.25.29.18–00.25.51.08]

And then, of course, if organizations want to enlist their security teams to go knock on the door and recover that laptop, they certainly can and should. But realistically, that is going to be a question for the FBI. And honestly, if the FBI doesn’t have the time or the resources to ultimately find who the facilitators are, that’s completely reasonable as well.

[00.25.51.10–00.26.22.08]

I just think, if you can locate and recover the laptop, that’s in an organization’s best interests, and that’s going to be heavily relevant to the FBI. Just a couple weeks ago, the FBI arrested, I think, 16 facilitators throughout the nation who were facilitating these operations. Oftentimes they are witting facilitators, recruited through what is realistically just communications from the DPRK.

[00.26.22.08–00.26.48.07]

And, unknowing of who is really on the other side, they agreed to serve as a laptop farm—to hold these laptops so the workers back in the North Korean regime can do their work, probably out of some kind of safe house or government building in North Korea. It’s just important to understand that.

[00.26.48.07–00.27.17.06]

And from that perspective as well: as you’re going through the investigation and detecting U.S.-based facilitators, the FBI is going to show up—and they might not come back and ask for facilitator information until potentially months down the line. If you have all that information in a secure out-of-band communications process, it’s going to make your life a lot easier.

[00.27.17.08–00.27.32.16]

And I think being able to combat this threat at scale is going to be critical. And how do you do that? You have to have those comms channels established either relatively early on, or even as the process moves forward.

Navroop:

[00.27.32.18–00.28.14.13]

Yeah, Landon. One of the things we’ve seen brought up is the fact that, because so many of these workers are themselves embedded within the IT side of the house, if you were to use your traditional communications capabilities, it’s quite possible you would actually tip off the very person you were investigating. And it sounds like that’s a big part of why the security and communication tools need to be set up in advance. What you ideally are looking for is something that will pass regulatory compliance muster and give you the audit trails to have your legal needs met down the road, if and when you have to take this to court, a regulator, or some other body.

Landon:

[00.28.14.16–00.28.47.12]

Very well said. With IT workers—right, wrong, or indifferent—look, there are different levels of access that you give IT workers. Not everybody has domain admin. Not everybody has admin privileges in general. But they usually can get them with some low-level sophistication. And ultimately, to your point, if you’re giving a developer or an IT person the ability to administer your IT network, and you’re setting up another Slack channel or another desktop Signal channel, that’s going to be compromised pretty quick.

[00.28.47.12–00.29.25.18]

So, if you’re initiating those types of investigations, you absolutely want to be doing that out-of-band and completely off the network, in a way that is not going to tip off the adversary. And that’s whether the insider threat is an IT worker, a red team member, or anybody with the ability to review the communications of an organization. It’s just important to have an off-platform communication system that does not tip them off.

[00.29.26.00–00.29.27.14]

So yes, very, very important.

Navroop:

[00.29.27.18–00.29.52.14]

So, let’s double-click that on the preparedness side of these things. Right. Given all the risks that we’ve just been discussing, what internal policy updates and preparedness measures, like tabletop exercises, should organizations be prioritizing? And, in your experience, do most of these tabletop exercises realistically account for the need to shift to secure out-of-band comms, especially when you’re up against suspected insiders who sit within IT?

[00.29.52.16–00.29.55.02]

Whether it’s a fraudulent remote worker or not.

Landon:

[00.29.55.06–00.30.17.10]

100%. When you think about all of the preparedness of tabletop exercises—and again, we do this even on the Nisos side of red teaming the actual employment process. So this isn’t red teaming like a pen test, where you’re testing the IT infrastructure and all the different vulnerabilities that you can exploit.

[00.30.17.10–00.30.42.22]

Now this is actually creating a falsified identity, applying for a job, and going through each process. And you ultimately want to establish a place where, if a tripwire is initiated, you as the team—HR, Legal, IT, Security—have a place to go immediately if something doesn’t look right. This just happened.

[00.30.42.22–00.31.05.10]

I just had this interview. Something just doesn’t feel right. I think that’s the underlying basis for so much of this—you’ve got to have a secure way to see something, say something in this process. So that’s usually step number one: have a secure way to communicate and get real-time alerts.

[00.31.05.10–00.31.33.00]

So where you can ultimately initiate a more thorough investigation—I think that underlies all of this. On the HR side, look, there are a lot of different things HR professionals can do. We’ve seen asking the person to hold up their ID; we’ve seen requiring that all remote workers come to an office location.

[00.31.33.04–00.32.10.04]

We’ve seen checking that the shipped laptop’s address matches the address on the I-9—that’s another one. We’ve seen looking at references, to see if they are the same as other candidates’, because what we see is DPRK operators often recycle the same references. And we’ve seen alerting on the usual signals: a VoIP phone number, a new email address, a new LinkedIn profile, little activity in terms of internet registrations.

[00.32.10.08–00.32.42.04]

And then, of course, the copied resumes. We’ve seen all the tooling in place that ultimately flags those things up front. Between all of those processes, you’re going to be relatively buttoned up. Then it’s just a question of scale, right? And again, where you want to interdict. It’s about implementing friction points from application, to first interview, to second and third interviews, to the final offer.
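The “weigh these signals in aggregate” screening idea Landon describes could be sketched roughly like this. This is an illustrative sketch only—the field names, weights, and threshold are hypothetical, not anything Nisos or the guest prescribes:

```python
# Hypothetical sketch: no single red flag decides; the aggregate score does.
from dataclasses import dataclass

@dataclass
class Applicant:
    phone_is_voip: bool            # newly created VoIP number
    email_age_days: int            # age of the applicant's email address
    linkedin_age_days: int         # age of the LinkedIn profile
    email_registration_count: int  # other accounts registered to that email
    shipping_matches_i9: bool      # laptop shipping address matches the I-9
    reference_seen_before: bool    # references recycled across candidates

def risk_score(a: Applicant) -> int:
    """Sum weighted red flags (weights are illustrative)."""
    score = 0
    if a.phone_is_voip:
        score += 2
    if a.email_age_days < 90:          # brand-new email
        score += 1
    if a.linkedin_age_days < 90:       # brand-new LinkedIn
        score += 1
    if a.email_registration_count <= 2:  # minimal account footprint
        score += 1
    if not a.shipping_matches_i9:      # address mismatch / laptop farm
        score += 3
    if a.reference_seen_before:        # recycled references
        score += 2
    return score

def needs_review(a: Applicant, threshold: int = 4) -> bool:
    """Trip the wire and route to the tiger team when flags stack up."""
    return risk_score(a) >= threshold
```

For example, an applicant with a VoIP number, a 30-day-old email and LinkedIn, one prior registration, and a shipping address that doesn’t match the I-9 scores 8 and would be routed for review, while a candidate with none of those flags scores 0.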

[00.32.42.04–00.33.17.05]

Often, at Nisos, we get heavily involved in doing a more enhanced background check for applicants to positions with elevated access—a bit more of a thorough identity scrub needs to occur from that perspective. That gets a little tricky. There are different legal nuances, which I’m sure you’ll have other guests discuss, around doing a more enhanced background check before a final offer goes out.

[00.33.17.07–00.33.40.23]

But again, what’s the alternative? The alternative is you’re hiring fraudulent identities that are going to cost the company hundreds of thousands, if not millions, of dollars. So again, it’s different friction points between application, interview, and final offer—and then having a monitoring system, or threat intelligence, on top.

[00.33.40.23–00.34.08.22]

It’s ultimately monitoring for different types of activity, such as new and duplicate LinkedIn accounts—that type of thing. So that’s certainly a little later in the stage. But to put a bow on it: an internal, secure, out-of-band communication system, plus the technical tripwires to review all the applicants who apply.

[00.34.09.00–00.34.23.19]

And then, ultimately, having HR conduct a slightly more aggressive interview just to verify identity. That’s going to do more than what your traditional HireRight or Checkr tools do. Also very important.

Navroop:

[00.34.23.21–00.34.49.00]

It’s a great set of recommendations, and kind of in keeping with that vein, I want to get your take on something else. So, recognizing that you’re neither a lawyer nor a policymaker, but that you do have extensive experience investigating a complex set of threats that gives you unique insight, I’d love to get your perspective on how you anticipate regulatory expectations and compliance requirements might evolve.

[00.34.49.02–00.34.54.14]

Especially regarding remote insider threats when nation states or sanctioned entities are involved.

Landon:

[00.34.54.18–00.35.24.03]

Yeah. I mean, I think there’s going to be different hiring compliance and fair-hiring compliance around what you’re allowed to do. Look, I’ll just get to the long pole in the tent: if you ultimately conduct a background check on somebody, and they are refused for that reason, they’re allowed to ask for that documentation.

[00.35.24.05–00.36.10.22]

Right. And so HireRight and Checkr are ultimately able to send those types of documents and materials to a candidate to say, this is why we did not hire you. I think you’re going to see different regulation come down that ultimately requires, or allows for, a more aggressive identity and background check before an offer has actually been made. I think that is what is holding up quite a bit at a lot of organizations, because when you deal with 15,000 different hires every single week—let’s just take a Fortune 50—it’s hard for them to reorient that process.

[00.36.10.22–00.36.43.02]

And we’re talking about a slow-moving machine. And of course, they’re looking at regulation and compliance around the hiring mechanisms as part of that. So realistically, I think you’re going to see—and again, it just depends on how much this threat really evolves. I mean, when I was at RSA—when our team was at RSA a couple months ago, the biggest security conference—this was all that was being talked about.

[00.36.43.04–00.37.08.13]

Right now, this is a security issue. And again, RSA is a security conference. So now, I think this conversation has to evolve into the HR and legal perspectives, to where it’s going to force some movement—more expansive regulation that allows organizations to implement these identity and background checks more aggressively in the pre-application and interview process.

[00.37.08.13–00.37.21.19]

If I had to put my finger on the pulse of regulation, that’s what’s likely coming, assuming this threat just grows and scales over the next months and years.

Navroop:

[00.37.21.19–00.37.44.02]

I appreciate that perspective. And it makes a lot of sense. I think you’re right. There are going to be regulations that better support companies’ ability to perform more aggressive identity verification prior to and during employment. Landon, if you’ve got time for one more question, we’d like to wrap up with something a little bit more fun.

Landon:

[00.37.44.07–00.37.46.02]

Fire away. Anything for you, Nav.

Navroop:

[00.37.46.07–00.38.10.00]

All right. Well, with that then, Landon, imagine you’ve just uncovered a remote insider threat. Only it’s not just any threat. This time, you realize that someone has managed to infiltrate your own personal email, and as a result, they’ve exposed your plans for your next vacation to your in-laws. After navigating this harrowing ordeal and salvaging your trip plans, what libation are you reaching for?

Landon:

[00.38.10.05–00.38.32.19]

Wow. Great question. I think the libation I’ll be reaching for—it’s hard to go against a glass of rosé, I’ll be honest with you. I’m a big rosé guy, and anything with gin is always nice. Of course, everything in moderation. But I’d say a glass of rosé, or anything with gin in it.

[00.38.33.01–00.38.37.16]

So there you go. What about you, Nav? What are you reaching for?

Navroop:

[00.38.37.21–00.38.57.23]

Oh God, at that point, I’m reaching for something a lot stronger than a rosé. We’re going straight to a strong bottle of whiskey over here, and it’ll probably be more than a couple of pours. But then again, I’m actually single, so I don’t really have this threat at my doorstep. All right. And, Landon, with that, thanks again for joining us and sharing your valuable insights.

[00.38.58.01–00.39.09.01]

And to our listeners, thanks for joining us on this episode of The Lock & Key Lounge. Remember, secure communications aren’t just about compliance. They’re essential infrastructure when trust itself becomes a target.

Landon:

[00.39.09.04–00.39.12.04]

Thanks for having me, Nav. Always a pleasure.

Matt Calligan:

[00.39.12.06–00.39.44.19]

We really hope you enjoyed this episode of The Lock & Key Lounge. If you’re a cybersecurity expert or you have a unique insight or point of view on the topic—and we know you do—we’d love to hear from you. Please email us at lounge@ArmorText.com or our website, ArmorText.com/podcast. I’m Matt Calligan, Director of Revenue Operations here at ArmorText, inviting you back here next time, where you’ll get live, unenciphered, unfiltered, stirred—never shaken—insights into the latest cybersecurity concepts.
