The Lock & Key Lounge — RIFF Edition 4
In this RIFF, we connect recent headlines to the realities of communicating during a crisis.
Headline 1: Verizon and Microsoft 365/Outlook outages
When your primary tools go dark, approvals stall, and coordination breaks down.
Headline 2: Iran’s Starlink “kill switch” (with the Russia angle)
Satellite isn’t a guaranteed “Plan B.” Geopolitics can shut it down.
Headline 3: Identity verification and deepfakes
With synthetic personas rising, you must be able to trust who’s in the room when you pivot out-of-band.
We map these stories across three layers—access, membership, and enterprise software—and weave in recent signals like voice-phishing SSO/MFA bypasses, Signal usage scrutiny, and BitLocker key reporting. They show how surveillance, misrouting, or the wrong person in the room can turn small mistakes into major incidents.
We close with three questions every team should answer now:
- What will you use to coordinate when your normal systems fail?
- What is your fallback connectivity vulnerable to?
- How will you verify identities out-of-band without breaking privilege or retention?
Three layers to watch:
- Access: MFA alone can be subverted. Tie access to the user and their device key, so even if a login is compromised, the enrolled device can’t simply “become” the user.
- Membership: Know exactly who’s in the room. If you can’t see or control who joins, you can’t trust the conversation or the approvals.
- Enterprise software: Don’t just think about live chat. History, retention, and key custody all carry risk. A breach can expose everything that came before.
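The access-layer point lends itself to a toy sketch. Assuming an illustrative design (not any vendor’s actual implementation, and not real cryptography) in which each message key is wrapped separately for every enrolled user-and-device secret, an attacker who phishes MFA and enrolls a fresh device still holds no wrap addressed to that device, so prior history stays unreadable:

```python
# Toy sketch: per-device key wrapping. NOT real cryptography; the XOR
# "wrap" stands in for authenticated public-key encryption per device.
import hashlib
import os

def wrap(message_key: bytes, device_secret: bytes) -> bytes:
    # Derive a wrapping keystream from the device's private secret and
    # XOR it over the 32-byte message key (illustrative key wrapping).
    stream = hashlib.sha256(device_secret).digest()
    return bytes(a ^ b for a, b in zip(message_key, stream))

def unwrap(wrapped: bytes, device_secret: bytes) -> bytes:
    return wrap(wrapped, device_secret)  # XOR is its own inverse

# One message key, wrapped separately for each enrolled device.
message_key = os.urandom(32)
enrolled_device_secret = os.urandom(32)   # stays on the enrolled device
wrapped_for_device = wrap(message_key, enrolled_device_secret)

# The enrolled device recovers the message key...
assert unwrap(wrapped_for_device, enrolled_device_secret) == message_key

# ...but an attacker who phished MFA and enrolled a *new* device has no
# wrap addressed to that device, so prior history stays unreadable.
attacker_device_secret = os.urandom(32)
assert unwrap(wrapped_for_device, attacker_device_secret) != message_key
```

The point is only that possession of a login is not possession of a decryption key: compromising MFA buys the attacker a session, not the enrolled device's private key, and therefore not the history.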
Three questions to ask:
- “Who can decrypt us?” Are you the only party that can decrypt, or can your vendor do it too?
- “Who can see our history right now?” Not just “in general,” but at this exact moment.
- “What’s the blast radius if access controls fail?” Map out what gets exposed if something goes wrong.
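The blast-radius question can be made concrete with a small simulation, using a hypothetical provider object (all names are illustrative, not any vendor’s real architecture): when content keys sit in provider custody, revoking an enterprise-held signing key blocks future access grants but does nothing for history the provider can already decrypt.

```python
# Toy sketch of provider-side key custody (hypothetical, illustrative).
import os

class Provider:
    def __init__(self):
        self.content_keys = {}        # provider-side key escrow
        self.trusted_signers = set()  # enterprise-held "EKM" signing keys

    def store(self, msg_id: str, key: bytes) -> None:
        self.content_keys[msg_id] = key

    def revoke_signer(self, signer: bytes) -> None:
        self.trusted_signers.discard(signer)  # blocks *future* access only

    def can_decrypt(self, msg_id: str) -> bool:
        # Compulsion or breach scenario: signer status is irrelevant,
        # because the content key itself is in provider custody.
        return msg_id in self.content_keys

provider = Provider()
signer = os.urandom(16)
provider.trusted_signers.add(signer)
provider.store("msg-001", os.urandom(32))

provider.revoke_signer(signer)           # enterprise pulls its key
print(provider.can_decrypt("msg-001"))   # True: history still exposed
```

Revocation here is the barn door closing after the horse is out: the only architecture that changes the answer is one where the provider never holds content keys at all.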
Practice the pivot:
- Pre-approve a counsel-directed OOB channel, pre-stage the right roles and devices, and measure time-to-OOB pivot.
- Build identity verification into the OOB workflow so synthetic actors can’t slip in, cause hesitation, and stall approvals.
- Plan for contested comms and sovereignty constraints—make sure decision‑making continues even when primary channels are degraded or cut off locally.
Navroop Mitter:
[00.00.03.12–00.00.11.23]
Welcome back to The Lock & Key Lounge Riff Edition. No script, just what caught our attention this week and what it means for real operators.
Matt Calligan:
[00.00.12.01–00.00.20.14]
And this week’s theme is basically: your comms can go down, your fallback can get jammed, and even the person you’re talking to might not actually be real. It might be what they call a synthetic.
Navroop:
[00.00.20.16–00.00.33.07]
We’re going to do this in quick hits: outages, contested connectivity, contested identities, and then three headlines that explain why some design decisions we made once upon a time weren’t actually overkill.
Matt:
[00.00.33.10–00.00.38.01]
It’ll be like a—I don’t know—a cyber sparring session but without all the sweaty vinyl.
Navroop:
[00.00.38.06–00.00.40.07]
My record player’s never going to look the same again.
Matt:
[00.00.40.09–00.00.43.03]
The gloves are vinyl.
Navroop:
[00.00.43.09–00.01.00.00]
All right. Yeah. All right. Well, now, before we dig in, we can’t stress enough that we’re not litigating politics. We’re going to be extracting enterprise lessons from real-world failure modes. And with that, let’s start with the most relatable—the lights just go out.
Matt:
[00.01.00.02–00.01.27.14]
Yeah. So first there was a huge carrier outage. And I don’t mean like, oh no, it’s 3G now. I mean the back channel many organizations assume is going to be there, just gone. By now, I’m sure everybody’s seen the news about Verizon’s ten-hour-plus outage that affected millions of customers. It practically took out the eastern half of the United States, and even some cities on the West Coast.
Navroop:
[00.01.27.17–00.01.44.10]
Yeah, a lot of continuity plans quietly assume mobile will work, right? If voice, text, data are down, your phone isn’t really your back channel at that point. It’s really just a brick with a battery. And when you talk to operators, what’s the actual comms fallback in practice, then, when mobile is degraded?
Matt:
[00.01.44.14–00.02.03.12]
All right. It’s messy, right? I mean, when I ask folks that, it’s always improvised. If you still have data, maybe personal apps, maybe text, phone trees. A lot of times it’s just radios. And the problem is, when you improvise during a crisis, that’s when mistakes actually start to stack up.
Navroop:
[00.02.03.15–00.02.25.12]
That’s exactly it, right? And so, in preparation for those moments, there are a couple of questions I would hope exec teams are looking to answer, and it’s what we would suggest folks take on if they’re not already thinking it through. Where do we rendezvous when primary comms fail, and who is already provisioned to speak to whom before the crisis starts?
Matt:
[00.02.25.12–00.02.25.19]
Right.
[00.02.25.19–00.02.34.11]
Yeah. If your plan is someone’s going to email a spreadsheet with phone numbers, get out, leave, stop listening here right now—go fix that.
Navroop:
[00.02.34.13–00.02.37.11]
That’s right. Your e-mail’s really not going to be there.
Matt:
[00.02.37.13–00.02.59.12]
This episode isn’t for you yet. And even barely over a week after this Verizon outage, right, nearly the same pattern repeats, but at the enterprise software layer with Microsoft 365 and Outlook. They were disrupted for nearly nine hours due to infrastructure maintenance gone wrong, and the traffic rebalancing issues that followed.
[00.02.59.16–00.03.07.05]
So this is the enterprise version of the same lesson. The place you think you’ll coordinate in is going to fail.
Navroop:
[00.03.07.05–00.03.18.07]
Yeah. And it doesn’t even need to be an intrusion then. Right. It could be a pure reliability event. And suddenly, all the underlying communication technology that you would use day to day is gone.
Matt:
[00.03.18.09–00.03.44.17]
Exactly. It’s like a CrowdStrike update. You can’t even blame hackers now; it’s just software and hardware issues, sometimes made worse by over-consolidation. I guess the question folks need to be asking is: when conference calling, email, and company chat are suddenly just gone, suddenly just black, what breaks first? Status reporting, approvals, escalation.
[00.03.44.17–00.03.45.16]
What’s the first thing to go?
Navroop:
[00.03.45.20–00.03.56.10]
I’m not so sure about the first, but one of the most impactful, though, is definitely approvals and clarity, right? People can’t confirm who said what, and decisions can stall.
Matt:
[00.03.56.14–00.03.56.22]
Right.
Navroop:
[00.03.57.01–00.04.16.06]
So back to the same two questions, then, right, but with bigger consequences. Where do you coordinate if your daily tools are unavailable? How fast can you spin up a secure, controlled space with the right people already in it, so they can continue to communicate and keep the approvals and the clarity flowing?
Matt:
[00.04.16.08–00.04.17.08]
Yeah. Big questions.
Navroop:
[00.04.17.12–00.04.40.15]
Yeah. All right. Well, now we’re going to talk a little bit about satellites being your Plan B, until, frankly, they’re not, right? People treat satellites like a cheat code: if the internet goes down, satellites will save us. Well, Iran just reminded everyone that that’s not a law of physics. And my heart goes out to the Iranian people, who are desperately fighting for the freedom they desire and deserve.
Matt:
[00.04.40.17–00.05.03.19]
Yeah. For those who haven’t read it, on January 8th, the Iranian government nearly set a world record for one of the most extensive internet shutdowns ever recorded. And this time it went beyond just the standard IP blocking: reportedly they used military-grade mobile jammers to prevent even connecting with satellite services like Starlink.
[00.05.03.19–00.05.11.09]
It was basically a denial-of-service attack in reverse, cutting folks in Iran off from satellite services like that.
Navroop:
[00.05.11.12–00.05.20.17]
Yeah. Again, we’re not debating geopolitics here. We’re extracting an enterprise design lesson. Your connectivity can become contested.
Matt:
[00.05.20.19–00.05.35.15]
Right. And it highlights a subtle point here. Degraded connectivity can be just as bad as no connectivity at all. Even if you’re technically online, your operational coordination can still fail.
Navroop:
[00.05.35.19–00.05.46.13]
That’s exactly it, right? The issue could just as easily have been cable cuts, or lava flows, or tech blockades that come on the heels of new sanctions.
Matt:
[00.05.46.13–00.05.59.12]
Yeah. Multiple things can drive this kind of disruption: being jammed, being geofenced. Politics comes into this. It’s more and more becoming a constraint that folks have to worry about.
Navroop:
[00.05.59.16–00.06.16.16]
That’s exactly it. Political constraints are a very real reality that everyone has to deal with today. Frankly, they’re part of what’s been driving some of the talks we’ve been giving around infrastructure and data sovereignty. But coming back to that, let’s step back for those operating companies that aren’t under a sanctions regime amid an active struggle for freedom.
[00.06.16.16–00.06.35.08]
Right? What services do we need to keep running inside the country, even as the country is partially cut off from the global internet or from specific external providers? These are some of the questions you’re going to have to answer. Some organizations are starting to treat comms as critical infrastructure now, and that’s a really good thing, because frankly they are.
[00.06.35.10–00.06.56.21]
And that’s where bringing capability in-country becomes a real resilience design decision. You may need local hosting and local operational control to remain viable under certain geopolitical constraints. And again, this is a topic we’ve been discussing in various forms, including Black Hat Middle East and Africa. And one of the reasons for the development of the ArmorText Sovereign Edition.
[00.06.56.23–00.07.17.15]
We’ll actually be on the road again next week in Iceland, of all places, where we’ll be discussing what providers and buyers need to know about the distinction between infrastructure sovereignty and data sovereignty, so that they can make their own services sovereign-capable and start to provide them, with confidence, to others beyond the island and across the rest of the world.
Matt:
[00.07.17.17–00.07.39.23]
It’s a very cutting-edge topic that we’re seeing everywhere now. Okay. So we talked about outages and contested communications. Here’s a more uncomfortable one: what happens when the communication channels are fine, but the person isn’t real?
Navroop:
[00.07.40.02–00.08.12.04]
Yeah. I mean, look, a lot of folks used to talk about identity verification like it was solved. Right? And that’s definitely not the case anymore. Right? Because identity is definitely a contested space. This is something we’ve been talking about again for quite some time now, especially in the context of incident response. But last week, I saw a LinkedIn post by Jennifer Eubank, who spent 28 years at the CIA and now sits on a dozen or so advisory boards for cybersecurity and national intelligence orgs and companies.
[00.08.12.06–00.08.33.17]
And her point was, well, it was a sharp one, right? Verification tools were built to ask, does this person match the documentation? But now we need to ask, is the person on my screen real? This question—it’s showing up everywhere, right? It’s showing up in resumes, work samples, interviews, onboarding, and even KYC or Know Your Customer checks.
Matt:
[00.08.33.17–00.09.00.02]
Right. I mean, the pace, for me, is the scary part. This used to be something so challenging to do that it was just nation-states. But in less than five years, it’s gone from that to a believable candidate, combining deepfake voice and deepfake video, that can be built in as little as ten minutes and get through these KYC processes.
Navroop:
[00.09.00.02–00.09.26.19]
Yeah. I mean, look, day to day, synthetic identity is a fraud and insider-risk issue. And it is absolutely impactful, and unfortunately quicker than ever for people to pull off. But during incident response, trusting the identities of your colleagues becomes mission critical, right? Because you’re making potentially irreversible decisions under time pressure, quite possibly right in front of your adversary.
[00.09.26.21–00.09.32.21]
You let them in, and they’re sitting there. They’re going to stay a step ahead of all of your remediation efforts.
Matt:
[00.09.32.21–00.09.54.09]
Oh, and during things like incident response, you’re under this time pressure. It’s a rapidly shifting environment. Executives are getting pulled in ad hoc. You’ve got vendors that are potentially impacted. You need to get in touch with consultants and security services, who are joining quickly. At various times, you’ve got parallel communication channels popping up.
[00.09.54.11–00.10.22.05]
This is the perfect environment for a threat actor to insert themselves as a synthetic known stakeholder. And they could do almost anything, depending on who they’re pretending to be. They can hijack approvals, misdirect actions the wrong way. They can extract a baseline; they can extract sensitive details of how the incident response team is responding and what their plans are.
[00.10.22.07–00.10.26.11]
They can do just about anything that could truly trigger a bad decision.
Navroop:
[00.10.26.14–00.10.48.23]
Yeah, I mean, Matt, that’s spot on, right. If you can’t communicate, you can’t remediate. And if you can’t trust in the identities, you may hesitate. That’s really what’s going to start to happen when suddenly people realize, wait a minute, that may not be Matt Calligan on the line with me anymore. This could actually be someone else. I’m going to hesitate, and that’s going to insert unnecessary time delays and all sorts of friction into processes when you can least afford them.
[00.10.48.23–00.11.29.06]
Right. And that’s why we worked with a top identity verification provider to enable out-of-band identity verification for a major telecom customer. We’re talking top three in the nation, especially for scenarios involving both internal and external resources. What they really wanted was identity verification, but they also wanted a human in the loop as a decision maker who’s still in charge of whether access was maintained, escalated, suspended, or completely rescinded. And that’s what we’ve actually developed with one of the leading identity verification providers for a top-three telecom, and something we’re hoping to roll out to many more customers now.
Matt:
[00.11.29.08–00.11.37.10]
If you can’t trust identity, you can’t trust approvals, right? You can’t trust instructions, even escalation paths—stuff like that.
Navroop:
[00.11.37.12–00.11.46.09]
Yeah. If your identity strategy is “we’ll know it when we see it,” what happens when seeing is no longer believing?
Matt:
[00.11.46.10–00.12.18.23]
Exactly. I mean, it’s thousands of years of the visual being the one thing you could always trust. If you see it, you can trust it. We’re not in that space anymore. All right. So we’ve got outages. We’ve got contested communications, contested identities. Now we’re moving on to three headlines that explain why some controls that honestly looked like overkill five years ago, like you mentioned, Navroop, are suddenly common sense. They’re just table stakes. So…
Navroop:
[00.12.18.23–00.12.20.16]
Well, should be table stakes.
Matt:
[00.12.20.16–00.12.22.07]
Should be. Yes. Yeah. Should be.
Navroop:
[00.12.22.08–00.12.26.23]
Not enough teams have brought these not-overkill measures to the table, and that’s part of the problem.
Matt:
[00.12.27.00–00.12.44.10]
Yeah. So Navroop, preparing today’s episode, three headlines really jumped out at us because they highlighted how some of the decisions we, we being ArmorText, made long ago were seen as overkill. We were even told they were overkill, until now, when it’s becoming clear that they aren’t.
Navroop:
[00.12.44.14–00.13.05.18]
Yeah. One quick note before we jump in. Right. And I want to be careful about this. We’re not throwing shade, and we’re not trying to be partisan in any of this discussion again. Right. These stories have very different contexts. We’re simply trying to pull enterprise lessons about security design out that the rest of us can learn from.
Matt:
[00.13.05.20–00.13.30.17]
So the first article we’re looking at here is from CyberScoop, where a combination of voice phishing plus real-time phishing kits is targeting single sign-on flows, where victims are actually getting tricked into handing over their credentials, approving MFA, and sometimes even enrolling an attacker-controlled device into the workflow.
Navroop:
[00.13.30.20–00.13.53.16]
Yeah. Look, I mean, here’s the lesson, right? MFA alone is not enough. In your standard collaboration technologies or your file share systems—your collaboration tech in general—once the attacker gets in, they can immediately access all prior communications or previously shared files. Anything that’s there, right? Because for all intents and purposes, then they are you.
[00.13.53.16–00.14.16.16]
And when we built ArmorText, we raised the bar, right. The decision was made that we were going to be using a user-plus-device-specific end-to-end encryption model so that when you were adding a new device to your profile and you had MFA turned on, right, it was really going to be MFA plus possession of a user-plus-device-specific private key that would be necessary to decrypt that user’s comms on that device.
[00.14.16.18–00.14.38.04]
So your MFA gets compromised. They don’t have a key that was meant for you on that device. You’ve bought time, and you’ve reduced immediate historical exposure. You get all of this protection while still retaining the ability to revoke that device, effectively a proactive action you can take.
[00.14.38.06–00.15.04.16]
But that was one of those design decisions we took a long time ago: the bar had to be raised. MFA alone could not be enough. A key could not simply be reused just because MFA was satisfied. Hey, Matt, when we’re talking about this topic, what do you think is underappreciated here: the phishing sophistication, or the fact that compromise becomes instant history access?
Matt:
[00.15.04.18–00.15.26.08]
Everybody focuses on how they got in, right? If you look at any article, everybody likes to, for lack of a better term, geek out about this: how they got in, what the vulnerability was, the strategy to get through the defenses. But it’s the instant access to the history.
[00.15.26.13–00.15.48.18]
And in any case, what did these adversaries immediately get access to? What could they see? What did they learn? It’s not just about how they got here. It’s what did they see that they can’t unsee, and we can’t make them unsee it. It’s like giving someone your house key and your diary at the same time.
Navroop:
[00.15.48.21–00.15.52.09]
Good God. I mean, look, our analogies here are getting lamer and lamer the longer we go.
Matt:
[00.15.52.11–00.15.57.01]
I blame the coffee. I need more coffee.
Navroop:
[00.15.57.02–00.16.01.04]
This is why this is way too early right after a storm.
Matt:
[00.16.01.06–00.16.14.02]
So that’s identity at the access layer, right? Next, let’s talk about identity at the membership layer. Who’s in the room? We were just talking about who can see the history, but who is it that can see this history right now?
Navroop:
[00.16.14.04–00.16.54.09]
Yeah. And on this front, now we’ve got yet another Signal chat as an example. Right. So NBC reported that the FBI is targeting the Signal chats of protest groups they believe to be crossing the line, and they’re doing it by infiltrating those conversations. Right. And this hits on many of today’s themes—contested identities being one, but more importantly, how unknown actors can end up in the wrong conversation through very simple actions like social engineering, or like during Signalgate, when the wrong person was added to a conversation through user error. And again, the point isn’t to be political; it’s to extract enterprise lessons that are, in effect, neutral.
Matt:
[00.16.54.10–00.17.23.20]
Yeah, I mean, people are already calling this one Signalgate Two. But the enterprise lesson is that consumer privacy apps optimize for convenience and ad hoc groups. They want to make it easy for the user, as an individual, to do what they have to do. But critical response teams inside enterprises and regulated organizations need governance. They need oversight and control beyond what an end user does.
Navroop:
[00.17.23.23–00.17.44.07]
Yeah, and that’s spot on, right. Without the ability to control and oversee who your teams are connecting with externally, and who is visible to others outside your organization, it becomes a free-for-all. Using Signal is like taking the barn doors completely off the barn, turning off the cameras, and never knowing who comes and goes.
Matt:
[00.17.44.07–00.17.45.14]
Forget about if they’re open or closed.
Navroop:
[00.17.45.18–00.18.00.20]
Yeah, back to the lame analogies there. And that’s why we built trust relationships for cross-org comms, where each side preserves its own governance, and where you can define availability and boundaries across that trusted line.
Matt:
[00.18.00.22–00.18.10.01]
Yeah. I mean, if your governance model is, please don’t add your mom to our incident response group. I’m sorry. That’s—you have no control.
Navroop:
[00.18.10.03–00.18.18.17]
Oh my God. All right, well, now for the third—key custody. It’s the quiet convenience choice that becomes everyone’s problem much later down the road.
Matt:
[00.18.18.17–00.18.37.12]
Yeah. So Navroop, to your favorite analogy: if using Signal is like taking all the doors off the barn, then putting your keys under someone else’s control is essentially opening those doors, letting the horse out, and then trying to close them after the fact.
[00.18.37.13–00.18.57.08]
There was a Forbes article here: Microsoft is providing BitLocker recovery keys to the FBI under warrant. Obviously, they’re legally obligated to, because users are able to store those keys on Microsoft servers for convenience.
Navroop:
[00.18.57.08–00.19.31.06]
Yeah. And look, this isn’t a moral judgment. Again, these are architectural lessons, right? If a provider can access keys, assume they will eventually be compelled to produce them. A lawful warrant shows up demanding them. There’s a breach. Insider risk. Pick your scenario. Those keys are going to get produced. And for secure communications, what we really want is an approach where enterprises can meet audit and governance needs without the provider holding keys that can decrypt customer content, so you can head off a whole series of these kinds of issues.
Matt:
[00.19.31.06–00.19.46.01]
And if your model depends on provider access to keys, however indirect, revocation after compromise becomes—wait for it—closing the barn door after the horse has got out. So what—
Navroop:
[00.19.46.02–00.20.07.20]
Yeah. And actually, for those who don’t know what Matt’s referring to there, he’s actually talking about EKM architectures. Right. The enterprise key management architectures where you’re bringing effectively a signing key to the table. You get to revoke that. But along the way, all the keys necessary for processing your communications or your shared files in plaintext were always in the hands of your providers.
[00.20.07.22–00.20.18.19]
And because of that, they could very much either be compelled or misused along the way. And you’ll only have the ability to correct that well after the fact.
Matt:
[00.20.18.22–00.20.31.04]
All right. So we’re done with farm analogies, but I’ll ask a question here. What should a board ask when evaluating these encrypted “systems” for these kinds of scenarios?
Navroop:
[00.20.31.09–00.20.47.21]
Yeah. Look, I mean, since we’re talking about board level, I’m going to try to keep it super simple and high level—something that they can easily just start the conversation with the more technical folks on their team. And hopefully those folks have some good answers for them. Right? So, number one, who can decrypt us?
[00.20.47.22–00.21.07.19]
Are we solely able to do it, or is our vendor also able to decrypt those? Number two, what happens under legal compulsion? Right. What is your vendor able to do when legally compelled, and what are they unable to do? There’s always going to be something they can do, but how limited is that, and are you comfortable with that balance of power?
[00.21.07.21–00.21.29.00]
Right. Once upon a time, when these systems were on premise, things you maintained yourself, well, they were coming to your company and your general counsel, and you could insert yourselves in the process. But now there are other parties at the table. So what happens under legal compulsion? Right. And then number three, what’s the blast radius if access controls fail?
[00.21.29.02–00.21.33.16]
Right. If something goes wrong, what’s the blast radius? What’s the exposure risk?
Matt:
[00.21.33.19–00.21.46.14]
Right. I mean, convenience and security are always on this sliding scale. But for the enterprise, convenience is frankly the most successful social engineering campaign of all time.
Navroop:
[00.21.46.18–00.22.06.15]
Yeah. Sadly, it is the most used. All right. Well, let’s pull all this together, right? If your model assumes MFA always holds, that membership always stays clean, and that key custody is harmless convenience, you’re eventually in for a rude awakening. You’re going to get surprised at the worst moment.
Matt:
[00.22.06.17–00.22.33.05]
Yeah. All right, so let’s tie this all together with a couple of key considerations for those who are still listening and didn’t get turned off by all the horse jokes. One, what systems are you assuming will always be there to communicate during an emergency? Two, what is your fallback connectivity plan itself vulnerable to? We always think about the vulnerabilities taking out the primary.
[00.22.33.07–00.22.54.03]
But what can take out that secondary, and is it more resilient, or actually more susceptible to certain types of attacks? The third one would be: how do you make sure you can trust the digital identities of your inner circle in these scenarios, when you’re out-of-band during a crisis?
Navroop:
[00.22.54.05–00.23.14.14]
Yeah. And actually, really quickly, on that second question you asked, Matt. I think that’s a super important one: what is your fallback connectivity plan vulnerable to? Some of it is that, no matter what solution you pick, there are certain dependencies it still has. A dependency on data connectivity, for example, isn’t necessarily a vulnerability in and of itself.
[00.23.14.14–00.23.31.08]
It’s something that also has to be addressed, right? People kind of think, well, this doesn’t solve every possible exposure point, therefore it’s not the right solution. No, it could very well still be the right solution. It just might mean there’s another thing that you need to tackle as well, right?
[00.23.31.08–00.23.53.05]
We often talk about how your go kits for your execs actually need to have data resiliency and energy resiliency built in. That means battery and backup data connectivity capabilities, so they can connect to a whole host of services, including your fallback connectivity solution, but also all the other solutions that are going to need to get worked on at that moment.
Matt:
[00.23.53.07–00.23.54.18]
Yeah. Defense in depth.
Navroop:
[00.23.54.23–00.24.18.01]
That’s right. These are parts of a larger solution that has to be put together. There are, of course, people like us who are helping bundle those things together with the right partners and providers we’re working with now. But yeah, there’s going to be nothing that is a one-stop panacea for everything.
[00.24.18.02–00.24.27.07]
It’s going to require you to have some defense in depth, to Matt’s point. Right. You’re going to be piecing a handful of things together to achieve the solution you really need.
Matt:
[00.24.27.09–00.24.54.00]
And it’s a new mentality for a lot of folks who’ve been told, for a very long time, that consolidation, the old single-pane-of-glass approach, is the most rational. Being in a contested environment, coming head to head with someone who has malicious intent, flips all those priorities on their head.
[00.24.54.03–00.25.13.20]
And with that, this episode of The Lock & Key Lounge Riff Edition has come to an end. If you’ve had a comms outage, an identity incident, or even the wrong person in the room, like the someone-added-their-mom moment, we’d love to hear about it. We all know what we’re talking about, and we would love to hear about it.
Navroop:
[00.25.14.00–00.25.23.20]
All right. And as always, when we do these Riff Editions, we’ll include the links in the show notes. And with that, I’d love to thank you all for listening. See you next time—The Lock & Key Lounge.