The Lock & Key Lounge - RIFF Edition 5
In this RIFF Edition, Navroop and Matt work through seven stories spanning different sectors, threat actors, and tools, yet all point to one failure mode: the layer underneath your security stack is the actual attack surface. From collapsing patching windows to Russian intelligence phishing Signal accounts, from an Iranian-linked group wiping 80,000 Stryker devices through Microsoft Intune to Proton Mail’s billing layer exposing an anonymous user, the throughline is consistent: the attacks didn’t beat the technology—they went around it.
- Five Days: The New Patching Reality
- Signal Compromise: The Account Layer Is the Attack Surface
- FBI/CISA PSA — Russian Intelligence Services Target Commercial Messaging Application Accounts (Mar 20, 2026)
- Reuters / KFGO — Cyber actors linked to Russia targeting users of messaging apps, FBI says (Mar 20, 2026)
- Dear CISO, Your Identity Fabric Is a Kill Chain
- Stryker: No Malware Needed
- BleepingComputer — CISA urges US orgs to secure Microsoft Intune systems after Stryker breach (Mar 18, 2026)
- BleepingComputer — FBI seizes Handala data leak site after Stryker cyberattack (Mar 19, 2026)
- CISA Alert — Urges Endpoint Management System Hardening After Cyberattack Against US Organization (Mar 18, 2026)
- Proton Mail, the FBI, and the Payment Layer
- 404 Media — Proton Mail Helped FBI Unmask Anonymous ‘Stop Cop City’ Protester (Mar 5, 2026)
- Benn Jordan — Fungible payment certificate proposal (Instagram reel, Mar 2026)
- Persona: The Surveillance Stack Behind the Age Check
- Designing for the Reality: Comms When the Network Fails
Navroop Mitter:
[00.00.04.22–00.00.28.16]
Hello. Welcome back to The Lock & Key Lounge RIFF Edition. I’m Navroop Mitter, founder and CEO of ArmorText. If you’re new here, in a riff, Matt and I react to a curated set of cybersecurity stories. There are no guests, no filter. We just dive straight into it. Matt is joining from just outside of D.C., where he’s preparing to attend the National Fusion Center Association’s conference in D.C. next week.
Navroop:
[00.00.28.18–00.00.51.06]
Meanwhile, I’m recording from RSA in San Francisco, which, depending on your mood and the day, is either lovely or not so lovely. And in today’s episode, we’ve got an interesting throughline that we didn’t plan but that we couldn’t ignore. The attack didn’t beat the technology. It went around it. So we’ve got seven stories, one argument. And with that, let’s just jump right into it.
Navroop:
[00.00.51.08–00.01.02.19]
So, today’s seven stories span different sectors, different threat actors, different tools, but one failure mode: the layer underneath your security stack is the actual attack surface.
Matt Calligan:
[00.01.02.21–00.01.18.13]
And in most of these cases, there’s no novel exploit, fantastical TTP, or zero-day required. It’s simple things—a credential, an admin console, a payment identifier, even a helpdesk call.
Navroop:
[00.01.18.16–00.01.39.10]
That’s right. It’s the simple things. And so we’re going to move through these as quick hits, starting with the data that sets the clock, walking through where the attacks landed, and closing with what designing for this reality actually looks like. But before we start, several of these stories touch geopolitical flashpoints—Russia, Iran, and even domestic law enforcement.
Navroop:
[00.01.39.12–00.02.05.00]
We’re not litigating those conflicts or those investigations. We are, however, extracting the enterprise and architectural design lessons from what actually happened. And with that, let’s start the clock, because it’s shorter than you think. All right, first up. Mandiant analyzed 138 exploited vulnerabilities from 2023 and found that the average time to exploit has collapsed from 63 days in 2018 to just five days.
Navroop:
[00.02.05.02–00.02.17.02]
That’s the average, right. Twelve percent of the n-days were exploited within one day of a patch being released, 29% within one week, over half within the first month.
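Those buckets make for a quick back-of-the-envelope check on any patching SLA. Here is a minimal sketch in Python, treating the percentages above as cumulative shares. The function name and structure are ours, purely illustrative, and the result is a lower bound since partial buckets are ignored:

```python
# Cumulative share of n-day exploitation observed within N days of patch
# release, per the stats discussed above (illustrative, rounded figures).
EXPLOITED_WITHIN = {1: 0.12, 7: 0.29, 30: 0.56}  # days -> cumulative share

def share_exploited_before_patch(sla_days: int) -> float:
    """Lower bound on the share of n-day exploitation that lands before a
    patch applied sla_days after disclosure (partial buckets are ignored)."""
    covered = 0.0
    for days, share in sorted(EXPLOITED_WITHIN.items()):
        if sla_days >= days:
            covered = share
    return covered

print(share_exploited_before_patch(30))  # 0.56: a 30-day cycle loses to over half of n-day exploitation
print(share_exploited_before_patch(5))   # 0.12: even a 5-day SLA only clearly beats the one-day bucket
```

The point of the arithmetic: shrinking an SLA from 30 days to 5 changes the share of exploitation you eat from over half down to roughly the one-day bucket, which is exactly why a monthly cadence no longer fits the data.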
Matt:
[00.02.17.04–00.02.36.10]
Yeah. You’ve got to think about it—every patching program that you know of, pretty much standard, is built on this monthly cadence. And if that’s the case, with this kind of collapse in the numbers, it’s operating in a world that no longer exists. The patch is now the announcement. It’s the gun going off.
Matt:
[00.02.36.10–00.02.49.19]
It tells every threat actor exactly where to look. And zero-days now outpace n-days, 70 to 30. So you might not even have a patch window to miss, right? You’re already done by the time you know about it.
Navroop:
[00.02.49.23–00.03.10.13]
That’s right. So then there are a few questions that need to be considered. What is your current patching SLA? And when was it last calibrated against actual exploitation timelines, not just policy recommendations or regulatory requirements? Another question. How many systems in your environment would still be unpatched five days after a critical disclosure?
Matt:
[00.03.10.16–00.03.41.12]
Yeah. Five days. The vendor landscape is actually diversifying as well here. It’s not just the Microsofts and Googles and Apples—they used to represent half of exploited vendors. In ’23, they dropped below 40%. Attackers are spreading their focus wider and finding success going after different vendors. Your attack surface isn’t just the big components, the crown jewels. It’s every integration point that nobody owns the patching schedule for.
Matt:
[00.03.41.15–00.04.07.06]
So we’re looking at five days on average, zero-day majority, and patching programs still running on 30-day cycles. And that’s the clock. So let’s talk about where the attacks are actually now landing. The FBI and CISA issued a joint advisory March 20. Russian Intelligence Service–linked actors phished their way into thousands of Signal accounts, and Signal’s encryption wasn’t the problem.
Matt:
[00.04.07.06–00.04.23.01]
The security was intact. It was actually the account layer that was the problem. Their tactic was to impersonate in-app support. Sometimes they tricked users into linking the attackers’ devices or even handing over their PINs, and once they owned the account, the encryption was irrelevant.
Navroop:
[00.04.23.05–00.04.51.04]
Yeah, it’s an interesting thing. We often hear, “so-and-so application is end-to-end encrypted.” But end-to-end encryption alone isn’t a panacea. The channel is hardened, right. The identity layer wasn’t. Once in, attackers can read messages, view contacts, send messages as the victim, and conduct further phishing from a trusted identity. That means the blast radius isn’t just the compromised account. It’s every contact that person has through a channel.
Navroop:
[00.04.51.05–00.05.16.05]
Those contacts have no reason to distrust. In the enterprise context, this pattern is actually quite active as well, right? This is something we’ve talked to customers and prospects about. Helpdesks are being targeted specifically to facilitate password resets that enable impersonation. The problem isn’t the encryption. It’s that there’s no reliable way to verify identity at the point of a sensitive transaction or communication.
Navroop:
[00.05.16.08–00.05.40.05]
And that’s why we’ve been setting the stage for out-of-band identity verification with our partners over at CLEAR. It’s to address this—exactly this issue, right. For either high-value transactions or when you’re having to protect out-of-band comms, you need a reliable way to verify who you’re actually speaking with at the moment of the sensitive action, not just at the time of login or at the session initiation.
Navroop:
[00.05.40.05–00.05.52.17]
Right. And so, happy to talk further about this later on with anyone interested. But we do think there’s a lot of value in helping to establish out-of-band identity verification for these use cases.
Matt:
[00.05.52.22–00.06.15.19]
Yeah. And really, that starts with the key question. If your most sensitive communications are running on a consumer messaging app, what is the plan if that account is compromised, right? Not the application—the account. The second one to think about is: can your team reliably verify who they’re talking to when your primary channel is under active targeting?
Matt:
[00.06.15.19–00.06.44.20]
The advisory is also a candid admission that consumer apps—the Signals, the WhatsApps, even the Telegrams—have migrated into sensitive professional use because they’re convenient and nominally secure from an encryption standpoint. This advisory is directed at current and former US government officials who are using these apps for official communications—communications around national security. The gap between the threat model and the tool was always there.
Matt:
[00.06.44.20–00.06.48.06]
And this time it just got a joint FBI–CISA advisory attached to it.
Navroop:
[00.06.48.09–00.07.11.22]
Yeah. Frankly, it’s not the first time either, right? We’ve seen repeated instances now where the use of consumer privacy‑oriented applications has come into question. Right. Ultimately, though, here it was a human layer at the account level that was compromised. So now let’s go one layer deeper, and let’s look at the architecture side of things.
Navroop:
[00.07.12.02–00.07.33.03]
In our next article, this is actually an interesting piece from a former colleague. So, a practitioner piece from Halim Cho has been circulating with a pointed argument, and this is one that is near and dear to my heart, because we’ve been talking a lot about the weaponization of identity, the weaponization of single sign-on, and the weaponization of some of the tools that everyone has come to rely on for zero trust.
Navroop:
[00.07.33.04–00.07.59.00]
The point that Halim Cho is making is that the industry’s decade‑long push to unify identity—that’s SSO, identity management, that federated fabric, that’s single pane of glass—hasn’t necessarily improved security in all cases. Right. It has created the most dangerous potential single point of failure in the enterprise stack. Unification was sold as a simplification. What it produced was a blast radius that scales with your integration footprint.
Matt:
[00.07.59.06–00.08.27.11]
Yeah. Yeah. A compromised orchestration or policy engine doesn’t just compromise one system, right? It hands the attacker the keys to every system downstream. So the more complete your identity fabric, the more complete that compromise is when it falls. And then there’s identity debt, right. The AI explosion left thousands of non-human identities—service accounts, bots, agents—with no human behind them, no one accountable.
Matt:
[00.08.27.13–00.08.49.02]
Often no one even knows they exist. You can’t evict what you can’t inventory in these kinds of scenarios. For a very long time, vendors have been saying, hey, consolidate here, automate, integrate us into all these things, because that makes things less expensive. And they say, let us be your single throat to choke.
Matt:
[00.08.49.02–00.08.55.15]
Well, now we have hackers who are basically saying, hold my beer here. They’re the ones saying, well, we’ll be the first to choke that throat. Thank you.
Navroop:
[00.08.55.17–00.09.17.12]
Yeah. That’s right. And this is why we’ve been talking about the weaponization of the IDM or the SSO systems so often. It’s been the topic of a number of tabletop exercises we’ve helped others run. It’s been part of scenarios we’ve designed for others. It’s been a part of the kind of questions that we have for injects around how are you going to communicate during a compromise?
Navroop:
[00.09.17.14–00.09.40.20]
Because if your out‑of‑band communications tool were integrated into your identity management, your SSO, your attacker can very easily weaponize it to either provision their own access in, or to deliberately block all of your access to the comms channel you need most during that incident. Right. And that makes it unreliable. So, it brings up a couple questions, right?
Navroop:
[00.09.40.20–00.10.10.07]
If your SSO or identity provider were compromised right now, how many systems would an attacker have immediate access to, and how quickly would you know? Another question is, what in your environment is deliberately kept outside of the identity fabric? And is that a design decision, or is it an accident? Too often you find that experimental applications, or things that were in prototyping or testing, may have been left out of the identity fabric, and that’s what ends up seeing someone through an incident.
Navroop:
[00.10.10.07–00.10.26.22]
I can think of more than a couple of famous examples of this. Someone running a chat server on a box that was practically underneath their desk, right, or an email server in the closet. But that wasn’t deliberate. It wasn’t intentional. Not everyone knew about it, and it still took some time to work their way through the incident.
Navroop:
[00.10.26.22–00.10.36.18]
And you can’t necessarily rely on just that bit of luck, right? This has to be more intentional. You have to know what it is you’ve absolutely kept out of the identity fabric on purpose.
Matt:
[00.10.36.19–00.10.37.23]
Yeah.
Navroop:
[00.10.38.01–00.10.47.07]
And so, the article points towards decentralized identity as the mitigation direction. Not everything should be in that identity fabric. Some things should be deliberately kept out by design, not by oversight.
Matt:
[00.10.47.10–00.11.25.20]
Well, that brings us to topic four here. On March 11th, an Iranian-linked hacktivist group called Handala—which has now been confirmed as a front for Iran’s Ministry of Intelligence and Security—compromised an admin credential at medical technology giant Stryker. That allowed them to create a new global administrator account, walk into Microsoft Intune remotely, and wipe approximately 80,000 devices without a single bit of malware, no custom exploit, no real work, no zero-days.
Matt:
[00.11.25.22–00.11.46.18]
It was just a management console with the front door left open. Personally, I’ve got a soapbox here, but this is why I have a problem with the term breach. It implies somehow that your defenses were overwhelmed by force beyond your control, when in fact hackers, for the most part, just steal your keys and walk through the front door.
Navroop:
[00.11.46.23–00.12.27.07]
Yeah, in this case, the attack vector was legitimately provisioned tooling—the exact tools Stryker paid for to manage its own estate. And the impact wasn’t just IT disruption, right? Stryker has a $450 million DoD contract and supplies surgical devices to hospitals globally. That means the attack delayed surgeries not by touching a single connected device, but by disrupting order processing, manufacturing, and even the shipment of custom implants. The indirect patient impact from an IT wipe is the enterprise lesson that sectors outside of health care, frankly, consistently underestimate about their own indirect dependencies.
Matt:
[00.12.27.10–00.12.54.08]
Yeah. I have a personal anecdote with this. I was talking to a friend of mine who runs a fusion center down in Florida, and he was part of the physical security for an event that had Stryker executives attending. And he said you could tell who was a Stryker employee because they showed up late the day after the hack—the wiper had completely bricked the phones they were relying on for their alarms.
Matt:
[00.12.54.14–00.13.18.22]
So, I mean, it wasn’t just like, hey, we’re missing some data or things were rolled back. It was bricked. This stuff completely wiped everything. So, that brings us to two questions on this part—very quickly, something to think about. How many accounts in your environment could execute a mass device wipe, or an equivalent destructive action, without requiring a second approval?
Matt:
[00.13.19.00–00.13.31.23]
The second thing to think about is: does your endpoint management platform have multi-admin approval configured for these high-impact actions, or is a single compromised admin account with the right privileges sufficient?
Navroop:
[00.13.32.04–00.14.01.20]
Yeah. And in this case, I think it’s really worthwhile to go back to CISA’s response advisory, and it’s worth reading in full. It’s a postmortem that basically reads like a checklist. Right. You want least-privilege RBAC. You want phishing-resistant MFA. And, very critically, you want multi-admin approval for sensitive operations—a second administrative account that must approve before a destructive action is executed.
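As a sketch of what that one control amounts to in code, assuming nothing about Intune’s actual implementation (the class and field names here are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class PendingAction:
    name: str                     # e.g. "mass_device_wipe"
    requested_by: str             # admin account that initiated the action
    approvals: set = field(default_factory=set)
    min_other_approvers: int = 1  # at least one *different* admin must sign off

    def approve(self, admin: str) -> None:
        # The requester cannot self-approve; that would defeat the control.
        if admin == self.requested_by:
            raise PermissionError("requester cannot approve their own action")
        self.approvals.add(admin)

    def is_executable(self) -> bool:
        return len(self.approvals) >= self.min_other_approvers

wipe = PendingAction("mass_device_wipe", requested_by="admin-alpha")
assert not wipe.is_executable()   # one (possibly stolen) credential is not enough
wipe.approve("admin-bravo")       # an independent admin signs off
assert wipe.is_executable()       # only now may the destructive action run
```

The value of the gate is exactly what the Stryker chain shows: a single compromised global admin account, by construction, cannot complete the wipe on its own.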
Navroop:
[00.14.01.22–00.14.30.15]
That single control would have really broken this attack chain. And this brings up something interesting—a pattern we’ve recognized for almost a decade at this point, right? Going back to the Edward Snowden disclosures, we recognized what we call the Edward Snowden effect: the risk that a single lone-wolf admin able to compromise the critical content of a system could lead to extremely large or impactful breaches.
Navroop:
[00.14.30.17–00.14.55.10]
So, from very early on, we at ArmorText pioneered an architecture where the ability to review end-to-end unencrypted retained archives is not dependent on just one account, but on the active participation—not just the account access—of up to two, three, or four independent parties coming together. Right. You can technically configure a single reviewer on ArmorText if you really wanted to, but we’ve never recommended it.
Navroop:
[00.14.55.12–00.15.27.00]
Most other platforms are barely getting around to thinking about this for admins. We’ve had it in production for reviewers for years, and it extends beyond just archive review. We’ve taken the same principle into security event notifications. They’re designed to be delivered immutably to people beyond just admins, should you need it. For example, General Counsel might need to know if new reviewers are suddenly attached to users, if scopes of review are being changed, if something’s happening to account access, or when a review is being executed—just as an example.
Navroop:
[00.15.27.00–00.15.35.05]
And so, more eyes on critical events means a compromised admin can’t quietly suppress the alerts while something else goes wrong in the background.
Matt:
[00.15.35.09–00.16.08.02]
So finally—once the FBI seized Handala’s leak site on the 19th, the DOJ formally called it psychological operation infrastructure run by Iran’s MOIS. The seizure hits them where the psychological operation lives. But, as one analyst put it, it’s whack-a-mole. Handala has already announced new sites. What that means is we can’t wait for the bad guys to get caught in this case, or for justice to work its way through the system.
Matt:
[00.16.08.07–00.16.16.10]
The architectural fix has to happen on the defender side. We have to take these steps to harden our defenses against these kinds of things.
Navroop:
[00.16.16.13–00.16.44.11]
Yeah. And look, coming back to that geopolitical backdrop that we were discussing earlier, it’s real. This attack occurred in the context of the ongoing US‑Iran conflict. And so, once again, we’re focused on the enterprise design lesson, not the context, not the conflict itself. Right. And ultimately, there was no malware needed, just a credential and a management console, which raises a harder question. What else do the tools you trust actually expose about you in ways you didn’t agree to?
Navroop:
[00.16.44.15–00.17.05.02]
And so, for this next story, we’re going to go to something that’s a bit more on the consumer‑privacy side of the house. It involves Proton Mail, a favorite application of many. Right. And so, before we get into this, there’s a quick note on the framing. At ArmorText, our focus is enterprise security, where identity accountability is often a feature, not a bug.
Navroop:
[00.17.05.04–00.17.27.08]
We do support enrollment and creation of private directories for partners like law firms or Accenture, who can obscure customer identities through codenames or non-corporate domains. But what we’re about to discuss here is really something we’re discussing with our consumer-privacy hats on. Both Matt and I believe in consumer privacy. So, the questions here aren’t necessarily the ones an enterprise would typically be asking, but ones you and I, as individuals, should be considering.
Navroop:
[00.17.27.10–00.17.50.11]
And so, in this case, 404 Media reported that Proton Mail, which markets end‑to‑end encryption and Swiss privacy law protection to over 100 million users and responded to over 19,000 legal orders in the past two years, just recently provided payment data to Swiss authorities under Swiss law in response to a legally binding Swiss court order.
Navroop:
[00.17.50.13–00.18.07.06]
The Swiss authorities then handed that data to the FBI via a mutual legal assistance treaty between the US and Switzerland. The FBI used that single credit card transaction identifier to then unmask an anonymous account linked to the Stop Cop City movement in Atlanta.
Matt:
[00.18.07.09–00.18.31.06]
And Proton’s official response was, hey, we didn’t give it to the FBI—Switzerland did, which is technically accurate. But, I mean, operationally, it’s meaningless. The chain was Proton to Swiss authorities under their law, then Swiss authorities to the FBI via the MLAT. The number of hops doesn’t really change the destination or the outcome.
Matt:
[00.18.31.06–00.18.37.21]
The content was protected. That’s what they’ve held themselves out as doing. But guess what? Their billing department wasn’t.
Navroop:
[00.18.38.02–00.19.06.17]
Yeah, it’s an interesting thing. I mean, I—to be honest, even I wouldn’t have thought about the mutual legal assistance treaties coming up in this context until someone brought it up in this way. Right. And with that said, the investigation context here—the domestic terrorism designation, the protest movement—is politically adjacent. What we’re focused on is the architectural lesson: what any provider can and will hand over when asked through the appropriate legal channel, regardless of what their marketing says.
Navroop:
[00.19.06.19–00.19.23.00]
And so, for that, there are a couple of questions that then come up. Right. For every tool you rely on for sensitive communications, do you know what metadata that provider holds, and under what legal jurisdiction it can be compelled to disclose it? Is your operational security model accounting for the billing layer, or just the content layer?
Matt:
[00.19.23.02–00.19.53.22]
Yeah. I mean, MLATs are the international law enforcement coordination layer. So, jurisdiction-shopping your provider doesn’t eliminate that legal exposure, right. It just relocates it. Maybe sometimes it delays it. The design question isn’t which country’s laws protect you best. It’s: what does this provider actually hold that can be handed over? What are they legally agreeing to expose about what you’re doing with that tool?
Navroop:
[00.19.54.00–00.20.15.17]
Right. And look, there’s always going to be something, right. There’s never going to be absolutely nothing. It may be highly limited, like it is in the case of some of these consumer privacy applications. In the case of the enterprise applications, it might be a little bit more. What you just need to do is know what that is, and know what’s potentially something that your provider can hand over.
Navroop:
[00.20.15.19–00.20.33.13]
Right. And you need to take that into account as part of your risk modeling efforts, your threat modeling efforts. What’s interesting here, though, is that there was actually an Instagram post by a creator named Benn Jordan, who put forward a genuinely elegant design proposal in response to the story—a certificate pooling model, right.
Navroop:
[00.20.33.13–00.20.55.12]
We’ve seen similar things in the past, but in the proposal Benn put forward, your payment secures a certificate. That certificate enters a pool of equal-value certificates from other users, so it doesn’t stand out. That makes them fungible and indistinguishable. At checkout, you receive a scrubbed pool certificate, untraceable to your specific account. The credit card linked to your identity is severed at that service layer.
Navroop:
[00.20.55.17–00.21.13.14]
It’s not a refusal of payment—you’re making the payment fungible. You’re not relying on crypto and hope. It works with existing infrastructure. Right. So VPN, privacy email, and privacy chat operators—this design pattern is worth serious attention if you’re serving that consumer privacy–focused community.
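Here is a toy model of the pooling flow as described, with our own invented names; a real design would use blind signatures or similar so that even the operator cannot link minting to redemption:

```python
# Toy model: payment mints a fixed-denomination certificate into a shared
# pool; checkout redeems a *random* certificate from that pool, so the one
# you spend isn't the one your card bought. Illustrative only.
import secrets

class CertificatePool:
    def __init__(self, denomination: int):
        self.denomination = denomination
        self.pool = set()  # unspent, indistinguishable certificates

    def mint(self, payment_amount: int) -> None:
        # Payment identity stops here: only an opaque token enters the pool.
        assert payment_amount == self.denomination, "equal-value certs only"
        self.pool.add(secrets.token_hex(16))

    def redeem(self) -> str:
        # Draw any certificate: by construction none is traceable to a payer.
        cert = secrets.choice(list(self.pool))
        self.pool.remove(cert)
        return cert

pool = CertificatePool(denomination=10)
for _ in range(5):
    pool.mint(10)              # five users pay; five fungible certs
cert = pool.redeem()           # checkout gets a scrubbed pool cert
assert cert not in pool.pool and len(pool.pool) == 4
```

The design choice worth noticing is the fixed denomination: equal-value certificates are what make the pool members indistinguishable in the first place.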
Matt:
[00.21.13.17–00.21.38.08]
Right. Right. ’Cause yeah, obviously we’re raising this point with our consumer privacy hats on. And I mean, I just love the fungible token idea. It’s so elegant in its simplicity. You don’t need anything crazy or new or complicated. All the pieces are there, ready to do something like this. And there are applications beyond it, which I just thought was a great, great approach.
Matt:
[00.21.38.10–00.22.05.11]
But within enterprise scenarios, knowing your customer tends to be a requirement. There are legal obligations that we all have. But for these privacy-focused consumer services, this proposal really deserves an engineering conversation—because of its simplicity, and its ability to use structures and architecture that are already here, things that are already available. Nothing new has to be created.
Matt:
[00.22.05.13–00.22.28.07]
So payment layers, billing layers, account metadata—in this scenario, the content was protected, but it was the plumbing, the billing component, that ultimately exposed these people’s accounts. And this brings us to a story about what that plumbing was doing all along.
Matt:
[00.22.28.07–00.22.55.14]
And it was doing it well before anyone asked. So researchers were investigating Discord’s age verification service, which they were running as a pilot somewhere in Europe, I believe. And they discovered that its identity vendor, Persona, had left a front end exposed on a US government-authorized server. And the exposure was pretty crazy.
Matt:
[00.22.55.14–00.23.25.04]
It wasn’t just a simple age check. In fact, they claimed it was some pretty simple stuff. What it turned out to be was a 269-point surveillance and financial intelligence stack. Facial recognition against watch lists and politically exposed persons, adverse media screening across 14 categories, including terrorism and espionage. There were risk scores, similarity scores. All this data was retained for up to three years.
Matt:
[00.23.25.06–00.23.36.14]
So folks that wanted in handed their face to a gaming platform, and they got enrolled in a KYC/AML apparatus. And that’s not a bug. That’s the product.
Navroop:
[00.23.36.17–00.23.59.22]
Yeah. There’s a lot of interesting requirements for know‑your‑customer, anti‑money‑laundering applications. And ultimately that’s really what these were. And so while they could be used to provide that simple age check, the reality is the plumbing was already there to do a lot more, right. Now Persona has clarified it was an isolated test environment.
Navroop:
[00.23.59.22–00.24.21.05]
No production data was involved. And if you take that at face value, that’s fine. But the capability mapping was real. And now that’s public. Even if no data leaked, the architecture has. And so we now know what potentially could have taken place. Right. And most post‑incident reviews aren’t built to address that. Right.
Navroop:
[00.24.21.07–00.24.50.11]
The teen safety framing here is—it’s worth unpacking. Governments are mandating age verification for child protection, but the vendors fulfilling those mandates are KYC and AML fintechs, whose business model goes well beyond that into identity surveillance. And so, rather than strictly limit this to the function that Discord actually needed to map to, they went ahead and used that to push more people through their biometric data collection pipeline at consumer scale.
Navroop:
[00.24.50.13–00.24.54.14]
And it’s kind of worth asking, at the end of the day, who benefits from these kinds of mandates?
Matt:
[00.24.54.15–00.25.25.11]
Right. Right. And even more so, beyond that, some questions worth addressing. When you onboard a vendor for a narrowly scoped function—a pretty narrow use case, things like age verification, identity checks, authentication—do you audit the full capability set of what you’re integrating, or just the scoped feature? Similar to the MDM story with Microsoft Intune and Stryker—yes, they said they don’t access personal data on phones.
Matt:
[00.25.25.11–00.25.46.21]
It’s only to access corporate data and things like that. But that didn’t mean they couldn’t. So the scope was beyond just the use case in that case, and in this as well. Another one is who in your organization is accountable for understanding what a vendor’s product actually does versus what it was just procured to do.
Matt:
[00.25.46.23–00.25.51.23]
Who’s asking what the full exposure is, based on what it’s capable of doing?
Navroop:
[00.25.52.03–00.26.13.09]
Yeah. And there’s an interesting note here, right. There’s a vendor concentration point that has to be looked at as well. Here, it’s Roblox, OpenAI’s ChatGPT, Discord—all running through Persona. One vendor, one exposure model, an enormous aggregate population. It’s the same third-party concentration argument that keeps coming up in supply chain conversations.
Navroop:
[00.26.13.10–00.26.15.16]
It just arrived wearing a safety badge this time.
Matt:
[00.26.15.16–00.26.38.11]
Right. Right. Well, of course, Discord has since said that they’re discontinuing their use of Persona. Personally, for me, with Discord, it’s always too little, too late. I mean, they’ve been pushing off or ignoring the data scrapers selling Discord data for years. They just simply lawyer up and change their legal terms.
Matt:
[00.26.38.12–00.27.01.22]
And with that kind of scenario, each story kind of wraps up, but frankly, the design problem doesn’t. The tool did more than advertised. That’s what this comes down to. The vendor held more than you realized. The account layer wasn’t verified, or the admin panel was left open. The patching window—non-existent—closed before you even got the notice.
Matt:
[00.27.02.00–00.27.07.01]
What does designing for all of this actually look like? And that’s where we close.
Navroop:
[00.27.07.02–00.27.35.23]
Yeah. This brings us to our seventh topic, right—designing for reality. Again, now for our final story. This is based on a viral carousel by an ex-military cybersecurity practitioner with 15 years in cybersecurity and eight years in the US Air Force. Again, this is a consumer-oriented thing here, but they made the case for Meshtastic on LoRa as a no-permission-needed backup comms layer for when cell towers, power, and internet fail.
Navroop:
[00.27.36.01–00.27.56.02]
And before Matt gives me a look—yes, this might sound a little prepper‑y. This might remind you of some of your friends at work, right. And some folks might just say, look, it’s not really that big of a deal. But for some of you, it might actually look like something that you should take another look at.
Navroop:
[00.27.56.02–00.28.32.10]
Right. Because for people in more remote conditions, or frankly in any environment where networks are unreliable or actively compromised, what sounds like overkill today has a way of looking prescient tomorrow. So this ex-military cybersecurity practitioner, she grounded it in real 2026 events—the Caracas blackout during the Maduro kidnapping, Iran’s near-total internet shutdown, and Salt Typhoon, where China-linked actors were already inside US telecom infrastructure and capable of throttling or dropping services whenever they wanted, without touching a single tower.
Navroop:
[00.28.32.11–00.28.52.06]
The post got 26,000 likes, I think. So there are a number of people at least taking this seriously. It’s not a large percentage of the population, but what’s interesting is it didn’t appear to be CISOs. It appeared to be civilians who’ve internalized that their primary channels of communication could actually be their single point of failure.
Matt:
[00.28.52.09–00.29.24.10]
Right. But even then, the parallel to the enterprise is pretty direct. I mean, our good friends over at Dragos put a very specific number on it. When they run these tabletop exercises, 70% of the participating organizations have IR plans that don’t even account for an out‑of‑band communications option during an incident.
Matt:
[00.29.24.12–00.29.51.11]
Meaning they’re assuming they can coordinate on the very network the bad guy is on, the one they’re trying to remediate, or on tools sitting in the blast radius, or on some slapdash, last‑minute pull‑together. And of the remaining 30% who actually did have a plan, fewer than 1% of those had actually tested it and played it through, making sure it addresses their enterprise control requirements, compliance requirements, and things like that.
Matt:
[00.29.51.13–00.30.05.10]
Most of them landed on Signal or WhatsApp. Right. And these tools create liability. They were never built with the governance, the oversight, or the scalability that enterprise requires in these kinds of scenarios.
Navroop:
[00.30.05.14–00.30.29.00]
Yeah. And it’s kind of interesting, right? Less than 1% having addressed enterprise controls—especially around things like user management, policy enforcement, or retention and review capabilities—means that the average enterprise out there is underprepared from an out‑of‑band communications perspective. And you’re right, Matt. There are a lot of enterprise parallels to this story.
Navroop:
[00.30.29.02–00.30.53.14]
Right. And so, a couple of key questions to think about. Do your incident responders know how they’re going to reach each other, as well as the executives in your company, if your primary communications channel is unavailable? Or is that figured out after the incident starts? Is it going to be ad hoc? Another question: when did you last stress‑test your communications continuity plan against a scenario where your primary channel is the thing that’s been compromised?
Navroop:
[00.30.53.17–00.31.14.10]
What’s interesting is this isn’t the first time we’ve brought up this topic, right? We’ve had these conversations at GridSecCon. We’ve also actually had similar conversations with executives at large financial institutions—their readiness teams. And it actually touched on more than just the communications layer, just like this post does. Right. So that’s why this post stood out to me so much.
Navroop:
[00.31.14.12–00.31.37.21]
If you look through the carousel, four or five images in, suddenly she’s discussing redundant power. That’s something we’ve brought up repeatedly, right? In times of crisis, if power and your telecom capabilities are down, you’re going to need redundant power for your critical communications devices.
Navroop:
[00.31.37.23–00.31.56.11]
Because that’s a real operational requirement. It’s not a nice‑to‑have. If you can’t power those devices, you’re not going to be able to do the work you need to do. But that also brings us back to a conversation we’ve had at GridSecCon, and with a number of executive teams, particularly in finance, as they’re planning the go bags for their executive teams.
Navroop:
[00.31.56.13–00.32.28.16]
What you really want to be solving for is not sat comms. It’s actually sat data. And that distinction matters more than most readiness plans acknowledge. Oftentimes people want to look cool and have the sat phone by the front door (Bourne had one; I’m sure everyone at MI6 had one in some movie here or there), so everyone’s worried about the sat phone. But sat data would actually give you connected services: context‑rich messaging, file sharing, voice, video, and images, plus access to the broader internet, so you can actually get other work done too.
Navroop:
[00.32.28.16–00.32.50.12]
Your sat comms device doesn’t actually do that. It gives you a phone call, but none of the rest of that access or ongoing ability to operate. And sometimes, frankly, your sat comms are limited to others on the same network. So executives who are reaching for satellite coverage are often reaching for the wrong layer. They really should be thinking about sat data versus sat comms.
Navroop:
[00.32.50.14–00.33.12.02]
But also, coming back to a recommendation this practitioner makes, one that’s relevant to you and me with our individual families but still stands in the enterprise context: comms plans need to be pre‑established. You need to know your people and be able to reach each other immediately when it matters, not figure that out ad hoc in the first ten minutes, or first ten days, of an incident, right.
Navroop:
[00.33.12.04–00.33.16.02]
You’ve got to have kind of practiced this already. You have to know how this works.
Matt:
[00.33.16.07–00.33.40.02]
Yeah. Yeah. Standard emergency practice guidance there. The post closes, actually, with a line that I think should be on the wall of every security operations center. It’s “responsible people adapt before they’re forced to improvise.” That’s—I mean—that’s the whole thesis here. So, before we close, there are three questions I want to take everyone back to.
Matt:
[00.33.40.02–00.34.03.07]
And also for you to take back to your teams. First: what systems are you assuming will always be available when you need to coordinate, and what’s your verified fallback when they’re not? Second: how many privileged actions in your environment can actually be executed by a single compromised account, and when did you last audit that number?
Matt:
[00.34.03.09–00.34.26.03]
And third: when you look at the metadata footprint of your tools, not just content security but the billing layer, the admin telemetry, the integration points, what would an adversary or a legal order find? And with that, this episode of The Lock & Key Lounge RIFF Edition has come to an end. If you like the podcast, please follow us.
Matt:
[00.34.26.08–00.34.50.04]
More importantly, share it with a cybersecurity colleague or with people who would appreciate these kinds of ideas. And if you have any ideas of your own you’d like us to riff on, email us at Lounge@ArmorText.com. You can find all our riff and regular episodes at ArmorText.com/podcast, on Apple Podcasts, and on Spotify. Until next time, be well, stay curious, and do good work.