How AI can turn air gaps into security gaps for ICS/SCADA

For decades, critical infrastructure companies have relied on organizational silos—air gaps between IT and operational technology—to ensure that enterprise disruptions do not cascade into the physical systems that keep the lights on. But those silos have been largely successful due to biology and physics: the scale of coordination and depth of expertise required to overwhelm them have been beyond human capability. That changed when we built something capable of assembling expert skill sets instantaneously. Patrick Miller, CEO of Ampyx Cyber, recovering regulator, and one of the most recognized voices in OT cybersecurity, joins Matt Calligan to confront the question that most organizations have not seriously answered: what does resilience look like when both IT and OT systems are simultaneously degraded or unavailable—and the assumption that you can “go back to manual” turns out to be a pipe dream?

  1. “We can go back to manual” is fake. When organizations say they can revert to pen and paper or manual operations, they are describing a fantasy. Patrick puts it bluntly: walk through the thought exercise in a tabletop, and stuff starts to break down quickly. Could humans even show up during a pandemic or global instability? Where do you get enough qualified, safety-trained humans to run 24/7 operations? You might sustain it for a day or two, but anything longer falls apart at the seams.
  2. Digital transformation created digital dependency. We are all technology companies with a product problem. Inside the silos that were meant to protect operational technology, the push for automation and digital transformation vertically integrated physical and information systems. The result: layers of dependency on digital components to interface with the physical world.
  3. Silos worked because of biology; AI changes that equation. The scale of coordination and depth of expertise required to attack across IT and OT layers simultaneously was a biological improbability. Humans could not assemble it fast enough. AI removes that constraint. Pattern recognition, correlation, inference across enormous datasets—capabilities that were difficult for humans—can now happen at the click of a key. That is an enormous force multiplier for attackers.
  4. Cross-domain interdependencies are the real attack surface. Colonial Pipeline is the quintessential example of shutting down because of IT-side dependencies. The Texas blackout revealed cyclical dependencies: gas-fired electric turbines needed gas, but moving gas required electricity. Even contract structures—firm versus non-firm—can determine whether critical services get delivered during a crisis. If you were the attacker, you would look for ways to affect multiple domains by doing one thing in a single domain.
  5. Attackers do not need malware to cause physical damage. Patrick’s focus in OT is not defacement or wipers—it is whether an attacker can use operational data to manipulate conditions that cause physical damage to equipment. Locking molten metal into a smelter. Chemical processes that explode when disrupted. Shifting vibration frequencies on spinning machines until they shake themselves apart. AI enables attackers to find subtle, undetectable ways to do this that would have been extremely difficult for humans.
  6. The blackstart sequence depends on components below the regulatory threshold. Nation-states like China are targeting smaller components of the blackstart sequence—municipal utilities, co-ops, generation under the BES threshold—because big systems depend on little systems to restart. Littleton Electric is off the grid, barely regulated, yet critical to recovery. Public data, sunshine laws, and purchasable GIS maps let attackers model the grid, identify critical intersections, and probe weaknesses. AI makes this scalable.
  7. Regulatory thresholds created perverse incentives. When the threshold was 75 MVA, operators built 74.5 MVA plants and then aggregated them under single automated control points. The result: enormous megawatts of generation controlled from a single point, all technically under the regulatory threshold. Regulation must stay flexible enough to respond as infrastructure design and market conditions evolve.
  8. Complacency is the weakness. Organizations in regions with high reliability and little geopolitical turbulence need Hollywood-level production in tabletop exercises to feel engaged—because nothing bad has happened recently. Estonia, next to a hot zone, treats exercises as muscle memory. When the bell goes off, they are already in the truck. The degree of complacency in stable environments is exactly the weakness that will be exploited.
  9. Cyber-informed engineering is getting oxygen. CIE—the practice of installing physical, manual safety nets underneath digital systems—is finally gaining traction. If the digital vibration sensor gets attacked, a mechanical float valve operating on physics backs it up. This approach removes or minimizes the risk of catastrophic damage from cyber attacks, improves global trust in infrastructure, and forces adversaries to go somewhere else.
  10. We have automated ourselves into unearned trust. We have created enormous dependency and assumed reliability because our short memories cannot recall when these systems failed. Jaguar Land Rover. Stryker. Ask anyone who just got hit whether they wish they had out-of-band comms. The trust we placed in automated systems was unearned—and now we are discovering that the hard way.
  11. Never say the word “security” to executives. Security carries connotations of witchcraft, expense, and restriction—nothing that sounds cheaper, easier, faster, or better. Instead, talk about visibility, uptime, efficiency, and root cause analysis. Buy security tools by framing them as operational improvements. The security practitioner’s job is then to deliver in a way that actually makes operations better, not worse.
  12. The IT security mindset in OT environments set us back. Rolling out IT security approaches too fast, making them restrictive, and causing outages created antagonistic relationships between IT and OT teams. We approached it the wrong way and have been digging out of a hole ever since.
  13. Regulation should set a minimum floor, not a ceiling. Regulation is done in the public interest; corporations operate in shareholders’ interest. Those do not always align. The goal of regulation is to forcibly align them for critical functions—electric, water, gas, chemical. But regulation must be flexible, and regulators must be knowledgeable and balanced. Done wrong, it becomes box-checking that detracts from real security.
  14. Europe’s NIS2 holds board members personally accountable. In the US, penalties target the company. In Europe, under NIS2, board members can be individually penalized—€10 million or 2% of global gross, whichever is higher. That construct forces board-level attention in a way that US regulation does not. The European model is not perfect, but it will shift behavior faster.
  15. Offensive cyber strategy requires nuance. Patrick wrestles with the “punch back” approach in the new national cybersecurity strategy. Overt offensive operations raise escalation stakes for critical infrastructure—and in the US, these are private companies. Recovery costs roll into consumer pricing. If every nation does this without penalty, international humanitarian law and norms erode. Patrick favors covert, attribution-resistant offensive capability, not advertising it by pissing on someone’s fence post.
  16. We are 72 hours away from societal collapse. No water, no gas, no electricity—that is a problem, real fast. The infrastructures we depend on have interdependencies that, if exploited at key intersections, cause cascading failures. The NTIA study mapping infrastructure interdependencies should make your jaw hit the floor.
  17. Norsk Hydro is the model. They were transparent, continued to operate, went back to manual, and chugged along through the incident. As a result, they became one of the most reliable partners in the business ecosystem. Resilience is not just about surviving—it is about becoming the supplier everyone picks first.
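The threshold-gaming pattern in takeaway 7 is easy to make concrete with a toy calculation. The 75 MVA threshold comes from the discussion; the plant sizes and count below are hypothetical:

```python
# Toy illustration of regulatory threshold gaming (takeaway 7).
# Each plant is sized just under the 75 MVA regulatory threshold,
# so none is individually covered -- yet a single automated control
# point aggregates all of them.
THRESHOLD_MVA = 75.0

plants = [74.5] * 10  # ten hypothetical plants, each just under the line

individually_regulated = [p >= THRESHOLD_MVA for p in plants]
total_under_one_control_point = sum(plants)

print(any(individually_regulated))    # False: no single plant is covered
print(total_under_one_control_point)  # 745.0 MVA behind one control point
```

The asymmetry is the point: the regulation sees ten small plants, while the attacker sees one control point worth 745 MVA.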

Navroop Mitter:

[00.00.02.23–00.00.31.05]

Hello, this is Navroop Mitter, founder of ArmorText. I’m delighted to welcome you to this episode of The Lock & Key Lounge, where we bring you the smartest minds from legal, government, tech, and critical infrastructure to talk about groundbreaking ideas that you can apply now to strengthen your cybersecurity program and collectively keep us all safer. You can find all of our podcasts on our site, armortext.com, and listen to them on your favorite streaming channels. Be sure to give us feedback.

Matt Calligan:

[00.00.31.07–00.01.12.20]

All right. Welcome to another episode of The Lock & Key Lounge, the podcast dedicated to talking about things nobody’s talking about in cybersecurity. And really, today is no exception. For decades, critical infrastructure companies have relied on man-made organizational silos to ensure operational resilience. You think about physical systems—electrical grids, oil and gas pipelines, nuclear facilities, even manufacturing operations—typically air-gapped to make sure IT disruptions don’t cascade into physical or OT systems that, many times, literally keep the lights on.

Matt:

[00.01.12.22–00.01.57.07]

But inside these silos, the push for automation and digital transformation has ensured that more and more layers of both physical and informational systems are vertically integrated over time. And with this digital transformation has come digital dependency. And today’s guest likes to say that today we’re all technology companies with a product problem. And yet these silos, they have been largely successful, but it’s mainly been due to biology, to physics, because traditionally, to overwhelm these defenses, the scale of coordination, depth of expertise, and command of all these technology layers has been beyond the capability of any reasonably sized group of humans.

Matt:

[00.01.57.07–00.02.20.23]

It was basically a biological improbability that someone could assemble all this stuff fast enough, at such a scale, that it overwhelms the defenses. Now, y’all can probably see where we’re going with this, ’cause we humans have now built something that, given enough resources, is actually capable of assembling these expert skill sets and scale instantaneously.

Matt:

[00.02.21.01–00.02.46.03]

And we’ve named it artificial intelligence. So, with this new set of rapidly evolving capabilities, which are available to pretty much everyone, cybersecurity teams have to start asking what does resilience look like when both IT and OT systems are simultaneously degraded or unavailable? And my guest today, Patrick Miller, is on a mission to make sure folks are answering this question.

Matt:

[00.02.46.05–00.03.17.13]

Patrick is the CEO of Ampyx Cyber. He’s known for bridging that gap between technical cybersecurity and that real-world operational risk. He is widely recognized as an expert in cyber for critical infrastructure and operational technology, with decades of experience advising governments and organizations globally on securing industrial control systems and thereby improving resilience. Patrick, welcome to the show.

Patrick Miller:

[00.03.17.15–00.03.19.12]

Awesome. I’m super happy to be here.

Matt:

[00.03.19.15–00.03.36.23]

Yeah. Given—I know you’re at RSA and it’s been—I know your schedule is rapidly evolving here. So, if it’s okay, we’ll go ahead and just jump right in here. We’ll start with the question that you—or the comment, I guess it was a comment that is actually a question—when we were talking about this. 

Matt:

[00.03.36.23–00.03.47.14]

And it kind of got those gears turning for me, and you said, what does manual look like for these companies? And I was wondering if you can explain. Start by explaining what you meant by that.

Patrick:

[00.03.47.18–00.04.13.00]

Yeah. Manual, in most cases—again, as in your intro—it’s about these operational technologies, right. These are the systems that interface with the real—the physical world. So this operates machinery, for example, or opens a breaker, or turns a pump—those kinds of things. So, in this space, manual means how do you get these things to operate manually?

Patrick:

[00.04.13.00–00.04.48.10]

Because we used to do this—like a human would go physically move a valve or turn a big wheel, and it would open a valve. Right. Now we have a little electrical device that does the turning of that wheel for the human. So that manual process we’re talking about is what does this manual world look like now that we’ve gone so far down the digital path and we’ve created such a tremendous degree of—and, frankly, layers of—dependency on those digital components or cyber components to interface with that physical world for us in—through automation and many other means.

Matt:

[00.04.48.12–00.05.09.23]

Yeah. And organizations will, if—depending on how far up into the leadership structure you get—they’ll often say, well, we can go back to manual, go back to pen and paper, as they say. Yeah. So, I mean, the—what does the reality of that look like across these kinds of environments? Kind of unpack that a little bit from your perspective.

Patrick:

[00.05.10.00–00.05.28.04]

Yeah. That’s just fake. That’s just—that’s a pipe dream. That’s false. I hear this a lot, and like, oh, well, we’ll just go back to manual, and I’m like, okay, let’s try that. Let’s just—not even, like, physically trying, like actually walking through—and let’s just walk through the thought exercise of doing this.

Patrick:

[00.05.28.04–00.05.47.21]

Yeah. And even in those—just postulating about what it would be like versus really doing it—stuff starts to break down pretty quickly. And it’s not like it’s the immediate things that come to mind, like, would people… let’s just say that the situation is bad enough. Let’s say it’s like a pandemic, right?

Patrick:

[00.05.47.21–00.06.13.16]

Or something like that. Would the humans even show up to do the things? ’Cause there are situations that are, “I’m going to be home with my family.” Like, if there’s particularly threatening global instability, if there is a pandemic. These are not, like, fantasy-level scenarios, but things that are actually happening right now—the kind of reality that would cause some people to not want to show up to work.

Patrick:

[00.06.13.18–00.06.31.06]

So that’s just, like, one of the aspects, I mean. And the other ones are just—and they say, we can do manual, and I’m like, okay, so let’s just walk through this. And we get through some layers of it, and they start getting a little uncomfortable, and they quickly realize we might be able to do this for a little while, maybe a day or 2 or 3 days.

Patrick:

[00.06.31.06–00.06.50.06]

But the minute you get beyond that, it starts to absolutely fall apart at the seams. Like, if you got to do 24/7 operations for a physical—I don’t know whether it’s some sort of critical infrastructure—water, gas, electric—kind of those delivery systems. Do I put humans on shift, and where do I get enough humans to do this?

Patrick:

[00.06.50.11–00.07.11.11]

Qualified humans that have gone through the safety training, that have gone through all of those things that are necessary to, like, be the operator of the thing. So it breaks down really fast. So I think that it’s just—when I say it’s fake, it’s not necessarily like it’s absolutely impossible, but there is a non-zero chance that you’ll do this successfully for anything longer than a day or so.

Matt:

[00.07.11.16–00.07.34.22]

That’s right. Well, why do you think it is that organizations still think in sort of isolated layers of incidents, or incident within a layer, or within a silo, instead of cross-domain? Or—we talk about IT/OT a lot—but, I mean, there are various silos beyond just those. Those are just the famous ones in the energy sector.

Matt:

[00.07.35.00–00.07.37.03]

Why do you think they think like that?

Patrick:

[00.07.37.05–00.07.59.08]

I—well, there’s a lot of reasons, really. But, I mean, I think some of the key reasons are they haven’t had to do it. They—there’s—I think there’s a lot of assumptions kind of built into that construct or that model for them. And when you really go through and test these kinds of things… I was just at—you mentioned RSA—I was just at the Estonian discussion around what they do for Locked Shields.

Patrick:

[00.07.59.08–00.08.18.21]

And it is—it’s a real, like, live-fire operational test. And that kind of thing is not usually done by most organizations. There’s a lot of reasons. They don’t have time, they don’t have the resources. There’s lots of reasons why. But without doing it like that, you don’t really realize how many other dependencies you have on those inter-domain relationships. 

Patrick:

[00.08.18.23–00.08.43.01]

One of the classic situations—like, just Colonial Pipeline—is the quintessential example of having to shut down OT because you had dependencies in IT. But even things like in the Texas blackout, there was a high dependency on gas reserve for electric power because the power generators were electric—I mean, sorry, they were gas-fired electric turbines.

Patrick:

[00.08.43.03–00.09.08.02]

So, in order to get to electricity, you had to have the gas. Well, in order to actually move the gas and actually work the gas system, you had to have electricity. So you end up with this interesting kind of cyclical dependency on—yeah. And there’s even things like weird situations, like firm versus non‑firm contracts. So if the price is outrageous and I’m not required to serve you, and I can serve someone else at a higher price, I’m not going to serve you.

Patrick:

[00.09.08.02–00.09.16.23]

I’m going to serve someone else at a higher price. So it’s not just, like, do the infrastructures depend on each other, but weird subtleties like contract relationships can even creep into this.
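The firm versus non-firm dynamic Patrick describes can be sketched as a toy dispatch model. All customer names, volumes, and prices here are hypothetical; the point is only that a supplier with limited gas honors firm contracts first and then chases price, so a critical but non-firm load can go unserved in a crisis:

```python
# Toy model of the firm vs. non-firm contract dynamic.
# All names, volumes, and prices are hypothetical.
def dispatch(supply_units, requests):
    """requests: list of (customer, units, price, firm) tuples.
    Firm contracts are served first; remaining supply goes to the
    highest bidders. Returns the customers actually served."""
    served = []
    ordered = sorted(requests, key=lambda r: (not r[3], -r[2]))
    for customer, units, price, firm in ordered:
        if supply_units >= units:
            supply_units -= units
            served.append(customer)
    return served

requests = [
    ("power_plant_nonfirm", 60, 90, False),  # critical load, but non-firm
    ("industrial_firm", 50, 40, True),       # firm contract, served first
    ("spot_buyer", 40, 120, False),          # outbids the power plant
]
print(dispatch(100, requests))  # ['industrial_firm', 'spot_buyer']
```

With 100 units of supply, the firm customer and the high-priced spot buyer are served, and the gas-fired power plant, despite being the critical load, is left short.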

Matt:

[00.09.17.03–00.09.44.08]

Yeah, yeah. Do you think it’s—from a gaming‑it‑out kind of scenario, cyber—we talk about tabletop exercises a lot. But, I mean, these kinds of simulations, these wargame events, I—we interact in our line of business with a lot of folks that run these, and the general consensus is that folks view them as a—as sort of a box‑checking exercise.

Matt:

[00.09.44.10–00.10.03.18]

We—in fact, one of our first podcasts was interviewing people who used—actually, they brought in Hollywood directors and writers and actors to make these events real, because people got so much more value out of it. Do you think that there’s just not enough fear of that kind of a thing, or is it head in the sand? 

Matt:

[00.10.03.18–00.10.10.04]

What’s—how do you think that the reality of that should translate more? How do—how would you do that, I guess.

Patrick:

[00.10.10.04–00.10.28.19]

Yeah. That’s a—that’s a good question. There’s some really interesting dynamics. Like, I do a lot of work in the electric sector in North America. We have what’s called GridEx. It’s done every couple of years. It has positively beautiful, like, production quality. It’s—like you said—Hollywood directors. It is done very real. It feels real.

Patrick:

[00.10.28.19–00.10.50.01]

It looks real. The vibe is truly exigent. And it’s a fantastic job. And it does definitely get more people engaged, because it just feels more real. Right? Just because it doesn’t feel real doesn’t mean you should, like, discard it. Though I think that’s a—that’s something that I think people have to get over, just because it doesn’t feel real enough.

Patrick:

[00.10.50.03–00.11.12.10]

All that means is you’re just not playing the game well. You’re not—I mean, and I get it. We, we’ve got limited attention spans, and we need to be spoon‑fed all of those things. But I think most of it, for us at least, in most countries or regions where there’s been a high degree of reliability over a large number of years, there’s been little geopolitical turbulence.

Patrick:

[00.11.12.10–00.11.23.14]

There’s been a lot of comfort. Those areas need that Hollywood-level production to feel like they need to get engaged, because they’re like, well, nothing has happened in the last X number of years, or days, or whatever.

Matt:

[00.11.23.14–00.11.25.02]

Nothing happened yet.

Patrick:

[00.11.25.02–00.11.45.11]

Exactly. And that degree of complacency is actually their weakness, because there will be a situation. It’s not an “if,” it’s a “when.” It’s been said many times, but there will be a situation that will absolutely wreck their day. And it may not be, like, a sector‑wide or a regional event, or it may just be their company gets hit and hit absolutely cripplingly hard.

Patrick:

[00.11.45.15–00.12.06.07]

So, even situations like that should just tell you right upfront, you’re just a—it’s a numbers game. It’s going to happen at some point, and you should prepare accordingly. That said, you see countries like Estonia, for example—they’re next to a hot zone with Ukraine. They’re a supply line for Ukraine. They’ve got a long history of—we’ll say—a challenging relationship with Russia as their neighbor, to say the least. 

Patrick:

[00.12.06.09–00.12.30.23]

And even Russia, as of late, has been threatening to do things like reunite the Soviet Union. So, for them, it’s very real, right? It’s something they do, and they mean it. They’re like the firefighter that does the testing so that it’s muscle memory. When the bell goes off, they wake up half asleep, and they’re already in, like, their fireman suit, and they’re on the truck, and they’re on their way to the fire.

Patrick:

[00.12.31.01–00.12.47.12]

But it just kind of operates like that, because it’s been so well‑practiced. And that’s, I think, in areas where that’s your life, you get really good at practicing it. In areas where it’s not, you become really complacent, and it becomes more of a Hollywood exercise when it really shouldn’t be. ’Cause, like—

Matt:

[00.12.47.15–00.12.48.04]

Yeah, put it on a shelf.

Patrick:

[00.12.48.10–00.12.52.01]

Exactly. It’s just going to be a matter of time before something does really hurt you.

Matt:

[00.12.52.03–00.13.22.21]

Yeah. The—I made an assertion kind of in the beginning, and that is these silos are man‑made, and people have relied on them mainly because biology has never been able to scale beyond a certain point. Do you agree with that? Yeah. I made that assumption in here. But, like, it feels sort of because people can’t get their head around what a world looks like beyond how a human thinks about something.

Matt:

[00.13.22.23–00.13.41.21]

They haven’t—they have a hard time envisioning the threat landscape as it is, because they’re so used to that sort of complacency and those silos working, because they couldn’t ever get their head around it before. Do you see that being a big part of it?

Patrick:

[00.13.41.23–00.13.54.16]

I do, yeah. I look back—like, I always look for historical stories and that kind of thing. But there was a time before we had a telescope, or—I mean, we’ll say—long, long field‑of‑view vision.

Matt:

[00.13.54.18–00.13.55.01]

Right.

Patrick:

[00.13.55.05–00.14.15.21]

That went away once we actually had something. We’re—like, a looking glass. Like, we could see farther, we could see an enemy coming, and then you could behave much differently in terms of how you arranged your defenses, for example. So even something like that, that gave us greater visibility as a single human, empowered that human to do things beyond what you could do at reaction notice when an army is at your door, for example.

Patrick:

[00.14.15.21–00.14.37.05]

So now, with AI and the ability to scale enormous amounts of technology behind it—multiple AIs, and agents of AI, and multiple agents of multiple AIs—that scalability of all of those components is basically yet another springboard for that same kind of thing.

Patrick:

[00.14.37.05–00.15.06.22]

So, with that, as the threat actors’ capabilities can use those, then the defenders should be using the same thing. So, I would say—but we were limited by biology, right? We were limited by what we could conceive mentally. And the amount of—even things like correlation, and aggregation, and inference of things based on large numbers of events, for example, and trying to find some arc of interestingness that could cause a weakness or make some effect happen.

Patrick:

[00.15.07.00–00.15.28.07]

That kind of thing is limited by human capacity. But when you can scale it with the enormity of AI—and, I mean, we’ve seen AI is really good at finding patterns and synthesizing enormous amounts of different data sets to come up with interesting trajectories—what is so difficult for a human can now be done literally at the stroke of a key.

Patrick:

[00.15.28.09–00.15.30.08]

That is an enormous force multiplier.

Matt:

[00.15.30.13–00.15.53.06]

Yeah. What’s—how does that—I mean, with AI, and again, I tried to bury AI in here a little bit so this wasn’t a podcast about AI, ’cause we have 18 of those every day we’re getting alerts on. But with this kind of scalability and automation, you and I know these headlines—we’ve read these—but just for, just not making the assumption that everybody here is aware of it.

Matt:

[00.15.53.06–00.16.02.21]

How has AI changed the way these kinds of malicious activities are coordinated with the—between systems and environments?

Patrick:

[00.16.02.23–00.16.39.16]

Yeah, I think, I mean, most of my world is in OT. So I look at this differently than, can you get Claude to code something, or can you vibe code some exploit based on the release notes of a vulnerability? And those are definitely things you can do, right? That—that’s a reality. Now I’m more concerned about causing damage to a physical environment because now, as we mentioned earlier, we’re running a lot of these physical technologies, whether it’s a manufacturing line, or whether it’s a water system, or an electric system. It’s basically a big giant software platform that creates products of some kind, whatever that is.

Matt:

[00.16.39.16–00.16.41.08]

As a physical output.

Patrick:

[00.16.41.08–00.16.58.21]

Has a physical interface into the physical world. Whether it’s through the sensing and telemetry it gets about the physical world it’s in, or the controls that you send it to operate things in the physical world—whether it’s moving an arm on a belt to change the position of a component or a box, or water in a pipe, or power on a wire.

Patrick:

[00.16.58.23–00.17.20.12]

So I’m looking at this more along the lines of, can I use large amounts of operational data—how the system operates, how it behaves—to affect it in some way that would cause physical damage to that piece of equipment? So it’s not just, can I get to deface it? Can I use a wiper to render it unavailable, or something like that?

Patrick:

[00.17.20.17–00.17.43.08]

But can I physically damage that thing based on the way I can manipulate the conditions that the machine is operating in? So examples are like locking molten metal into a smelter so that you can’t get it out. Now you’ve just disabled the smelter, and you’ve got to rebuild components because you’ve just fused the metal in there.

Matt:

[00.17.43.08–00.17.44.17]

Just a slag now.

Patrick:

[00.17.44.17–00.18.03.22]

Something like that, or in certain chemical processes, if it does not happen perfectly, things go boom. And terrible clouds of dangerous gas come out, and refineries can literally blow up. I mean, so these are things that are like—that’s a very different scale of a problem when you’re looking at, could a human envision this.

Patrick:

[00.18.03.22–00.18.15.08]

Probably so. But could a human envision a way to do it in a subtle, undetectable, non‑malware kind of way? And it would be extremely difficult for a human.

Matt:

[00.18.15.08–00.18.17.12]

Still doing the thing it’s supposed to do. Just slightly worse.

Patrick:

[00.18.17.12–00.18.34.10]

Exactly. So the capability to do that now, because you can get access to so much more operational data—as I mentioned, you can aggregate, you can infer in ways you couldn’t—you now have a capability there that was much, much more difficult to achieve a long time ago. So that is definitely a game changer.

Matt:

[00.18.34.15–00.18.57.08]

Yeah, I forget who was talking about it, but they were talking about how, inside an ICS system, certain things have to operate at a certain vibration frequency, and all you have to do is shift it off that vibration frequency and give it time. It’s not instant, but at some point it breaks, and nobody just watching the system at an alert level is going to even see that happening.

Matt:

[00.18.57.10–00.19.03.18]

Like you said, you don’t have to install malware. It doesn’t have to be doing something bad to create that outcome.

Patrick:

[00.19.03.20–00.19.22.18]

Yeah. And that is—that’s very true. One of the biggest ones we worry about are things like big spinning machines—whether it’s a turbine or a centrifuge or some other kind of motor or fan, it’s a big spinning hunk of metal. And in most of the cases, there are things like what they call vibration or sync check relays.

Patrick:

[00.19.22.20–00.19.43.23]

And those relays monitor, and it can only go this far outside of its band of what’s acceptable for vibration, because the motor will start to shake itself apart—essentially, that’s what you see over time. So that’s a legitimate attack. And we’ve long looked for that. And we install literally layers and layers and layers of protection to try to keep that from happening.

Patrick:

[00.19.44.03–00.19.51.00]

But if you can manipulate all of those things in subtle ways that they just don’t get caught, then yeah, you can cause big problems.
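The slow-drift failure mode described here can be sketched as a toy simulation. All the numbers below are hypothetical: a sync-check-style relay trips only when a single reading leaves its acceptable band, so an attacker who nudges the operating point a tiny amount per step stays inside the band at every sample while accumulated stress grows:

```python
# Toy model of a slow vibration/frequency drift that never trips
# a band-check relay. All thresholds and step sizes are hypothetical.
NOMINAL_HZ = 60.0
BAND_HZ = 0.5           # relay trips if |reading - nominal| > band
DRIFT_PER_STEP = 0.004  # attacker's tiny per-step nudge

def relay_trips(reading):
    """Band check: alarm only on a single out-of-band sample."""
    return abs(reading - NOMINAL_HZ) > BAND_HZ

reading = NOMINAL_HZ
fatigue = 0.0
tripped = False
for _ in range(100):
    reading += DRIFT_PER_STEP
    fatigue += abs(reading - NOMINAL_HZ)  # crude cumulative-stress proxy
    tripped = tripped or relay_trips(reading)

print(tripped)                          # False: no sample crossed the band
print(round(reading - NOMINAL_HZ, 3))   # 0.4: operating well off nominal
```

The relay never alarms, yet the machine ends up running persistently off nominal, which is exactly the “still doing the thing it’s supposed to do, just slightly worse” scenario from the conversation.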

Matt:

[00.19.51.03–00.20.13.12]

Yeah. So what do—what are the kinds of, in the immediate near‑term future here, what are the kinds of coordinated attacks when we’re talking physical a lot on this side? But like, are we—they—the big fear that I hear a lot of folks is sort of a multi‑domain attack, where it is a cyber combined with an OT thing.

Matt:

[00.20.13.12–00.20.29.22]

I mean, the Colonial Pipeline—I don’t know if those guys envisioned the BEC being that effective at shutting down the pipeline itself. I think that was more of an accident. But like, how close are we to somebody just actually intentionally coordinating at that level?

Patrick:

[00.20.30.00–00.20.35.04]

Yeah. The unintended consequences so far have been alarming enough to imagine what it would be like if it was intended.

Matt:

[00.20.35.10–00.20.37.12]

That’s right. That was enough of an “oh shit” just to—yeah. Yeah.

Patrick:

[00.20.37.17–00.20.58.12]

Yeah. So the intended consequences like that—I mean, that is, that’s key. And if I were the one doing the attack, I would look for those cross-domain interdependencies. I would look for ways that I could create an effect in more than one domain. By doing one thing in a single domain, I can affect multiple other domains.

Patrick:

[00.20.58.14–00.21.24.16]

So you do see this in areas, and particularly like gas and electric and comms, because a lot of other things are dependent on those infrastructures heavily. So, I think affecting one of those, you can cause cascading problems—interdependency problems—just from the layers of dependency in how they operate. I think there was an old NTIA study that I saw that showed all of that.

Patrick:

[00.21.24.16–00.21.51.06]

It was a really, really well‑done graphic and study that showed all the different types of interdependencies between all the different kinds of infrastructures. And when you just sat there and examined it, your jaw just hit the floor—like, “oh my God, this is such a house of cards.” Yeah. How does this all stay together? So, in knowing those intersections and exploiting those intersections, it would obviously cause massive amounts of additional—we’ll call it kind of quote‑unquote “benefit”—for the attacker.

Matt:

[00.21.51.10–00.22.31.10]

Yeah. Yeah. Well, something that tickled my brain on this—I did a podcast with Rob Lee over at Dragos, and he speaks on this a little bit: the blackstart sequence. It brought a really fascinating perspective for me, and it dials into exactly what you're referring to—even beyond just domains within an organization, interdependencies downstream and upstream as well. From a top-down level, we look at, like, what are the critical things at a federal level?

Matt:

[00.22.31.10–00.22.43.12]

And we define them in electricity as the BES, right—the bulk electric system. And then everything that falls below that line, you've sort of just got to figure it out. You don't…

Patrick:

[00.22.43.14–00.22.44.12]

Yeah, you're left to the state laws at that point.

Matt:

[00.22.44.12–00.23.05.23]

Yeah. Yeah. Exactly. And if anything, it really doesn't constrain you at all. But the reality is that these big things depend on the little things below that line. And we're seeing now China—like with Littleton Electric—where they're so far off the radar, it's just a little co-op or muni, not even an IOU at this point.

Matt:

[00.23.06.05–00.23.28.07]

But these are components that are required to operate in order to get the big parts moving again. And so the thing, when you and I were talking, that got me thinking is: okay, we have China—a nation-state—figuring this out. But what happens next? Is it possible for anyone to leverage AI to probe these kinds of weaknesses and interdependencies as well?

Patrick:

[00.23.28.09–00.23.49.15]

Yeah, absolutely. And a lot of this stuff is actually still public information, right? There are a lot of municipal utilities, for example, that, under sunshine laws, have to make a lot of their organizational data public. You can buy these maps from various different engineering firms or other sorts of GIS companies.

Patrick:

[00.23.49.15–00.24.10.06]

There are ways to get data about a power system—enough to see, like, wow, that's a bunch of lines going into one place that looks pretty critical. And based on this diagram, this is probably a 100 or 230 or 345 kV line. So you can guess the voltages. You can infer a lot. So you can feed all this into a system and essentially map out the grid.

Patrick:

[00.24.10.06–00.24.26.07]

And you could map out the important parts, and you could see where the intersections were that were quote‑unquote “critical” or “necessary,” not just for blackstart but even for operations. So, yeah, I mean, could you model this in AI and cause problems using that model? Sure. Absolutely. Without question.
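The "lots of lines converging on one place looks critical" heuristic Patrick describes can be sketched as a toy graph-degree check. All substation names and line data below are invented for illustration; a real analysis would use actual GIS data and proper power-flow modeling.

```python
# Toy sketch: flag likely "critical" nodes in a hypothetical grid map by
# counting how many transmission lines converge on each substation.
from collections import Counter

# Each tuple is one transmission line between two hypothetical substations.
lines = [
    ("SubA", "SubB"), ("SubA", "SubC"), ("SubA", "SubD"),
    ("SubA", "SubE"), ("SubB", "SubC"), ("SubD", "SubE"),
]

degree = Counter()
for a, b in lines:
    degree[a] += 1
    degree[b] += 1

# A node where many lines converge is a plausible single point of failure.
critical = [node for node, d in degree.items() if d >= 4]
print(critical)  # → ['SubA']: four lines converge on SubA
```

This is the simplest possible proxy; even so, it shows why publicly purchasable line maps plus basic graph analysis can surface candidate chokepoints without any insider knowledge.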

Matt:

[00.24.26.08–00.25.00.03]

How do you—I mean, do you think there needs to be a shift? Electricity is probably the most poignant example of this, 'cause we all understand how little generators are needed for big generators. But I wonder if, from a federal level, anything needs to shift as far as how they treat what's critical and not. Do you think this needs to come more from a top-down perspective or bottom-up? What needs to happen there?

Patrick:

[00.25.00.05–00.25.02.21]

Yeah, I mean, you’re talking to a recovering regulator so.

Matt:

[00.25.02.21–00.25.05.23]

Right. I targeted this for, like—yeah.

Patrick:

[00.25.06.00–00.25.27.06]

Yeah. I am not necessarily a fan of regulation, even though I've been a regulator. I don't think we should target, like, a perfect state with regulation. We should target a minimum floor—really, in order to ride the infrastructure ride, you've got to do at least these things. And yeah, there is a certain threshold that's small enough that—I don't want to say it doesn't matter, but it has less impact, shall we say.

Patrick:

[00.25.27.06–00.25.58.01]

Now, we had that problem with the electric sector. We’ll use that as an example, where there was a certain threshold that said, if you’re under this threshold, then NERC doesn’t apply, right. You don’t have to do the security regulations. Over time, we added a whole bunch of new renewable generation and smaller generation that was under that threshold, because people are like, well, if the threshold’s, we’ll say, 75 MVA, I’m going to make my plant 74.5 MVA.

Patrick:

[00.25.58.01–00.26.15.07]

Right? And I’m going to make a whole bunch of these 74.5 MVA plants, because I can make them whatever size I want. And I can just make a whole bunch of them. Well, when you do that, then you begin to control all of those different smaller points in aggregate, because you’re not going to send an operator out to each one of those little tiny plants.

Patrick:

[00.26.15.09–00.26.36.00]

And of course you're going to automate that—that just makes sense. So, with that automation and with the aggregation of that automation, you now have a single point where you can control an enormously high number of megawatts of generation that are all under the threshold. So that is something we're looking at now in terms of changing how things are regulated.
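The threshold-gaming arithmetic above is worth making concrete. The 75/74.5 MVA figures come from Patrick's example; the fleet size is an assumption for illustration.

```python
# Toy arithmetic for the threshold-gaming pattern: many plants, each sized
# just under a hypothetical 75 MVA regulatory threshold, all reachable from
# a single aggregation/control point. Fleet size is illustrative only.
THRESHOLD_MVA = 75.0
plant_size_mva = 74.5   # each plant individually exempt from the regulation
num_plants = 40         # assumed fleet size

aggregate_mva = plant_size_mva * num_plants

# Every individual plant is below the regulatory line...
assert plant_size_mva < THRESHOLD_MVA
# ...but one compromised control point reaches ~40x the threshold.
print(aggregate_mva)                  # → 2980.0 MVA under one control point
print(aggregate_mva / THRESHOLD_MVA)  # → ~39.7x the threshold, yet unregulated
```

The point is not the exact numbers but the shape of the problem: per-asset thresholds regulate individual plants, while the risk concentrates at the aggregation layer that no single plant's size captures.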

Patrick:

[00.26.36.05–00.26.56.08]

So it does have to shift, as the infrastructure itself responds to things like regulation and market needs and all those other things. So, as long as your regulation is minimum—that says this is your floor—and everyone understands this isn’t the ceiling, right? This doesn’t mean you’re, like, terrorist‑proof or bullet‑proof. This literally means you are now allowed to operate, right?

Patrick:

[00.26.56.09–00.27.13.18]

That's it. You can get on the bus. If we have that mindset, and then the mindset that this has to stay flexible enough to adapt as the threat landscape—and, as I said, market conditions and infrastructure design—changes over time, then with that construct, I think it becomes better understood.

Patrick:

[00.27.13.18–00.27.31.22]

And I think people might even be a little more willing to adopt it, right? But it’s not easy to do that, ‘cause you have to have typically pretty flexible standards and a pretty knowledgeable, balanced, and well‑informed regulator. That construct is a challenging one to meet.

Matt:

[00.27.32.01–00.27.33.21]

Yeah, it’s not.

Patrick:

[00.27.33.23–00.27.56.19]

Regulation is definitely not the solution to everything. But can it at least give us some good, high-quality minimums for really important things like electric, water, gas, chemical? Sure, it can be done. But again, it's not just done because, hey, we should tell them what to do. It's that these things have to operate—and they operate interdependently.

Patrick:

[00.27.56.19–00.28.11.10]

So we have to work in a way that works with the other sectors. And so, done well, it can be tremendously powerful and tremendously useful. Done wrong, it can be obviously quite frustrating and useless, and cost a lot of money, and frankly detract from the situation.

Matt:

[00.28.11.10–00.28.14.23]

Yeah, yeah. And get a lot of people who are just there to check the box.

Patrick:

[00.28.15.01–00.28.15.13]

Exactly. Yeah.

Matt:

[00.28.15.19–00.28.32.03]

Right. So that's federal help. From a private-sector side—everybody looks at security as sort of the last box to check. It's always like, oh great, now we've got it. Here come the guys who are going to tell us no. Right?

Matt:

[00.28.32.06–00.28.53.20]

Yeah, yeah. How should cyber leadership in particular better sell the idea of these kinds of risks to executive leadership, in a way that actually implements some change? You see where I'm going with it?

Patrick:

[00.28.53.22–00.29.15.20]

Yeah. I think so. For example, when I talk to boards or executive layers, I never say the word "security." Security—the term, the concept—carries a witchcraft-and-voodoo feel: a lot of expense, UNIX beards. It's got all of these different things that come to mind, and none of it is "this is going to make my life cheaper, easier, better, faster."

Patrick:

[00.29.15.20–00.29.34.09]

None of those things come to mind. So I start with the cheaper, easier, faster, better approach and say, look, when was the last time you had some downtime? Okay. What happened? And they'll say, well, we did this, and we think we got the root cause analysis.

Patrick:

[00.29.34.09–00.29.50.19]

So I ask, okay, well, did it happen again? Actually, yeah, it did—we refined the root cause analysis, and we think we got it now. So I'm like, okay, what if you could actually do the root cause analysis super fast? Would that help? Oh yeah, that would certainly help. Okay, so you need some visibility into those operations in ways you don't have now.

Patrick:

[00.29.50.19–00.30.08.00]

Yeah. Yeah. Like that. I was like, okay. So let’s go get a tool for visibility. Let’s architect that network in such a way that you can get that visibility. And let’s make sure that next time something happens, we can do a root cause analysis, like, super fast and get that plant back online. I never said “security,” but I just bought a bunch of security tools to get that.

Patrick:

[00.30.08.00–00.30.29.12]

So talk to them in their words—obviously. Everyone has said this, and you've heard it, but it really, really does matter. And it's not just talking about it in dollars. It's talking about them and their world, in their terms, in their words. 'Cause every plant manager—they couldn't care less about security unless it's there to make their job easier, cheaper, faster, better.

Patrick:

[00.30.29.13–00.30.49.18]

So talk about those things. Don’t sell it as security. Sell it as visibility, uptime, and efficiency. And all of those other great things that they want. And it can deliver that, if done right. So I think then it’s on you, as the security practitioner or architect, to deliver it so that it does that. Right. And it’s not actually causing more problems than it’s worth.

Patrick:

[00.30.49.21–00.31.05.23]

So that part, I think we’ve done a bad job of both selling it and then a bad job of actually delivering it in a way that’s reliable and useful for them. We’ve rolled out too much, too quick. It’s become restrictive. It’s actually caused outages. So those kinds of things need to be factored in.

Patrick:

[00.31.05.23–00.31.16.08]

So I think we’ve just approached it the wrong way. We approached a lot of OT security with an IT security mindset, and it really hit wrong. And it’s—we’ve been set back ever since. And we’re digging out of a hole.

Matt:

[00.31.16.13–00.31.39.18]

Yeah, yeah. There's a very real, even sort of antagonistic relationship—a lot of it cultural—in organizations between those two departments. It's a real thing. Now, when it comes to these aggregated systems and the automation we talked about—one of the things, obviously, is that ArmorText is a communications tool for IR.

Matt:

[00.31.39.18–00.32.04.13]

So we are big on having redundancies, in a way that actually delivers resilience. But with automation itself—and tell me if you agree with this—think about when there is a successful attack on a network: it takes out a lot of things. It takes out communications tools, it takes out IdP, single sign-on.

Matt:

[00.32.04.15–00.32.13.09]

Do you think that, in a way, is it fair to say, in a way, that we have automated trust away to a certain degree?

Patrick:

[00.32.13.14–00.32.38.17]

I don’t disagree. We—we’ve automated a lot of things, including trust out. Well, maybe another way to look at it is we’ve created an enormous amount of dependency. And I would say unearned trust because of it. It’s been reliable enough that our short memories—our little gnat memories—can’t look back at a time when this was a real big problem.

Patrick:

[00.32.38.18–00.32.55.08]

And I mean, all it takes is going and asking someone who just got nailed super hard, like, let’s go ask Jaguar Land Rover about their day a few months ago and see if they would be worried about things like out-of-band comms during a problem.

Matt:

[00.32.55.09–00.32.56.09]

Or Stryker.

Patrick:

[00.32.56.09–00.32.57.20]

Or Stryker, for example.

Matt:

[00.32.57.20–00.32.59.02]

Yeah, even use the phone. Yeah.

Patrick:

[00.32.59.04–00.33.11.02]

Exactly. So, I think for those that haven’t had to go through it, again, it’s a complacency issue. But we’ve automated ourselves into a place to where we have a high degree of dependency and an unearned level of trust.

Matt:

[00.33.11.02–00.33.33.00]

Yeah, yeah. In that moment when you suddenly realize how little trust exists outside of this thing we've handed it over to—how would you reestablish that? You're on the fly and you're recovering, and now it's like, do I trust these people? Because I don't have this tool telling me they're good.

Patrick:

[00.33.33.02–00.33.57.11]

Yeah. This is—it’s going to sound like a broken record. But again, you’ve got to be able to go back to manual. You’ve got to have some way—if I’m a shop owner and I can’t process credit cards, I’m going to take cash, and I’m going to do the math by hand. Obviously, that’s a very simplified approach, but you need to have some way to do those things the same way.

Patrick:

[00.33.57.11–00.34.20.07]

Right? If you can't—I mean, I talk to electric organizations, like, oh my God, what if someone hacks all the smart meters, for example, and weaponizes them? And I'm like, well, then you estimate the bill, send it out later, and fix it in arrears. So you just take everything you do that is critical or necessary for your operations, and out of that list—

Patrick:

[00.34.20.09–00.34.51.09]

What do you do to actually do that a different way? If that thing was gone, what would you do to have that function back? And at the operational level—at the physical-cyber interface level—we look at things as CIE: cyber-informed engineering, or consequence-informed engineering. There's a bunch of different ways to look at it. But like I mentioned, that vibration-sensing relay is a digital device with a chipset; it runs a standard routine, looking for vibrations outside of a band.

Patrick:

[00.34.51.09–00.35.11.11]

And if the vibrations go too far one way or the other, it sends a signal that says, slow the machine down safely so that we don't harm the machine. Now, that digital thing gets attacked, and I take those sensitivity markers and move them way out so that the machine could effectively harm itself. There should be a physical device underneath that's not digital, right?

Patrick:

[00.35.11.11–00.35.29.23]

There’s an actual mechanical thing that says, whoa, this is too much vibration. And then it should send the signal. So there’s, like, catastrophic‑level physical backup protection for these things. And does that mean going back to, like, pencil and paper? I mean, if that’s what you need for operations, then yeah. And you should try that out.
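The layered-protection idea Patrick describes—a tamperable digital trip band backed by a fixed physical limit—can be sketched in a few lines. The band values, units, and function names below are invented for illustration; a real mechanical backstop is enforced by physics, not code, so this only models the logic.

```python
# Sketch of cyber-informed engineering's backstop concept: a digital relay
# whose trip band is a configurable setting (and thus attacker-reachable),
# backed by a mechanical limit fixed by the device's physical design.

def digital_trip(vibration: float, low: float = 2.0, high: float = 8.0) -> bool:
    """Digital relay: trips when vibration leaves a *configurable* band."""
    return vibration < low or vibration > high

def mechanical_trip(vibration: float, hard_limit: float = 12.0) -> bool:
    """Mechanical backstop: the limit is built into the hardware, not settable."""
    return vibration > hard_limit

# Attacker widens the digital band so damaging vibration goes unnoticed...
def tampered(v: float) -> bool:
    return digital_trip(v, low=0.0, high=100.0)

v = 15.0                   # vibration well past safe levels
print(tampered(v))         # → False: compromised digital layer stays silent
print(mechanical_trip(v))  # → True: the physical backstop still trips
```

The design point is that the mechanical layer has no remotely writable parameters, so compromising the digital layer changes nothing about the worst-case outcome.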

Patrick:

[00.35.30.05–00.35.34.02]

It doesn’t mean you got to do this for everything. No. But you should do enough to where your critical stuff can move along.

Matt:

[00.35.34.02–00.35.34.11]

Identify that. Critical. Yeah.

Patrick:

[00.35.34.15–00.35.52.10]

Yeah. So that you’re—you can at least continue to move. I look at, like, Norsk Hydro was one of the best examples of this. And they were so transparent, and the world learned such—well, the world saw such a great lesson. I don’t know how they actually learned from it, but I mean, they continued to operate.

Patrick:

[00.35.52.10–00.36.10.17]

They were open, they were transparent. And they literally, like, went back to manual, and they kept things moving and chugged along and came out of it. And as a result, if I’m going to be—if, I mean, if I’m going to use a supplier and they’re on the list, I’m picking them first. I mean, that just makes sense, right.

Patrick:

[00.36.10.21–00.36.21.08]

So it doesn’t just save your business in the sense that your business can continue to operate, but now you become one of the most reliable partners in the business ecosystem, right? So it’s just good all around.

Matt:

[00.36.21.11–00.36.41.02]

Yeah. Yeah. And in doing so, you reestablish that trust—at least in that context. Do you see, from a cyber leadership perspective, people with the kinds of insights you’re talking about? Do you see them integrating those into their IR plans and things like that at this point?

Patrick:

[00.36.41.06–00.37.02.20]

Well, I see a wide degree of variety. There are a lot of organizations that are very short‑term, quick turnaround, quick profit—burn as fast as you can. I have low expectations that they will do anything about any of these problems in a realistic way. Your organizations that are—I mean, obviously, things like critical infrastructure—they look at it very differently, and in some cases they’re required to by law.

Patrick:

[00.37.02.22–00.37.21.15]

So that’s a different construct. But just a business driver of being a more reliable business partner—you’ll see businesses that want to be around longer and have a longer‑term vision. They already get this, and they’re thinking about this, and they’re doing this the right way. So, I mean, as we’ve all heard, it kind of comes down to your risk appetite, of course. 

Patrick:

[00.37.21.15–00.37.28.19]

Of course. And if you have a very high risk appetite and you’re ready to just, you know, burn down and start over, then.

Matt:

[00.37.28.22–00.37.30.05]

Stay close to the edge. Yeah.

Patrick:

[00.37.30.07–00.37.37.09]

Yeah. Then I would say, as a professional in this space, I myself would probably choose somewhere else to work if I had the choice.

Matt:

[00.37.37.09–00.37.50.21]

Yeah, yeah. That should even influence—yeah, that's true. I never thought about, like, career path choices. It's like, those guys play a little too fast and loose; for longevity, I might go somewhere else.

Patrick:

[00.37.50.23–00.38.08.20]

I mean, there’s a lot of—I guess there’s some… It’s weird calling it satisfaction. There’s some sense of being a really good firefighter and putting out a fire quickly and solving a problem quickly. That’s great. But honestly, it should never have gotten to that point. There should have been a building code that says, here’s how the building is built to reduce fire.

Patrick:

[00.38.08.20–00.38.25.04]

Here’s where the sprinklers go. Here’s the smoke detectors, and here’s the fire extinguishers, and here’s the exits and the fire doors. But that keeps it so the firefighters aren’t needed as much. And when they do show up, they’re much more effective, because it’s all been designed to enable them in ways that are much more effective.

Patrick:

[00.38.25.04–00.38.43.05]

So we've got to get to that stage—hopefully we get to a place where there's the equivalent of building codes in software, for example. Good luck. But we're approaching that. I mean, I look at something like the European CRA—it'll be difficult, but it will be amazing and transformational. So that construct is there.

Patrick:

[00.38.43.05–00.38.52.07]

We know it’s there. We know what it’s like to put out fires all the time. It’s exhausting. But, like I say, it really depends on your business model—and, in a lot of cases, that informs your risk appetite.

Matt:

[00.38.52.11–00.39.03.16]

Yeah. Do you think—putting the regulator hat back on—do you think there are governance models that need to be implemented to address these kinds of challenges?

Patrick:

[00.39.03.18–00.39.24.22]

Again, we in the US—in North America, for that matter—were really good at this; we kind of hit a good sweet spot of regulation with NERC CIP. And there's a bunch of argument about the TSA SDs for the gas space and what's happened in water. But we kind of set the bar for making our critical infrastructures pay attention and at least get to a minimum level of security.

Patrick:

[00.39.25.02–00.39.48.08]

We did a really good job. We moved the needle. Since then, regions like Europe have gone further—they've got the NIS2 directive, and they've got the CRA for supply chain. Just as an example in NIS2, the correlation would be: in NERC CIP, ostensibly there's this million-dollar-per-day, per-violation maximum penalty. Actually, adjusted for inflation, it's much higher.

Patrick:

[00.39.48.08–00.40.08.01]

It's almost $1.8 million per day, per violation. That'll never happen—despite that being there, you would have to absolutely intentionally do something egregious to come even close to that level of penalty. And this NERC penalty that I just described happens at the company level, right?

Patrick:

[00.40.08.01–00.40.36.02]

The correlation in Europe is that your board members can be individually penalized. So this goes directly to the leadership of the organization—the owners and the people who direct the company, who have skin in the game, who govern what the shareholders make—so that they take this seriously. And the penalties are, like, what, €10 million or 2% of your global gross, whichever is higher.
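The penalty ceiling Patrick cites—the greater of a flat €10 million or 2% of global turnover—is a one-line formula worth writing down. The turnover figures below are invented for illustration, and this sketch ignores the distinctions NIS2 draws between entity categories.

```python
# Back-of-envelope for a NIS2-style maximum penalty: the greater of a flat
# EUR 10M or 2% of global annual turnover. Turnover figures are illustrative.

def max_penalty_eur(global_turnover_eur: float) -> float:
    return max(10_000_000.0, 0.02 * global_turnover_eur)

print(max_penalty_eur(200_000_000))    # → 10000000.0: flat floor dominates
print(max_penalty_eur(5_000_000_000))  # → 100000000.0: 2% of turnover dominates
```

The structure is the point: small entities face a fixed floor that still stings, while for large multinationals the percentage term scales the exposure with the size of the business, which is what gets board-level attention.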

Patrick:

[00.40.36.04–00.40.56.00]

I think when you put that level of responsibility on it, it gets their attention. And, like I say, it's not a prescriptive regulation—we'll say directive. It's not technically a regulation, since each country has to transpose it into their own laws. But we'll loosely call it a regulation.

Patrick:

[00.40.56.02–00.41.15.15]

But it makes those companies—I mean, they have to get religion quick, and get religion at the board level. So I think it would severely shift security for many organizations if they tried that in the US, for example. So if there's a model to look at—is the European model perfect?

Patrick:

[00.41.15.15–00.41.23.07]

Oh, God, no. But it is a way—it is clearly going to shift things in a much faster direction, in a very real way.

Matt:

[00.41.23.09–00.41.47.19]

What I've seen is—to your point, regulation is always seen as kind of drudgery, and oftentimes as the thing that creates problems and gets in the way of innovation. But I also see, in these kinds of industries, that if you don't make it cost something to not do the thing, people are going to not do the thing, right?

Matt:

[00.41.47.19–00.42.00.12]

And that's where regulation has that role to play. I mean, if you're a private, for-profit organization, you're not going to just spend a lot of money voluntarily if you don't have to.

Patrick:

[00.42.00.14–00.42.07.10]

Well, you're absolutely right. And I've used the example that regulation is done in the public interest.

Matt:

[00.42.07.12–00.42.08.15]

Yeah.

Patrick:

[00.42.08.17–00.42.26.06]

And corporations operate in the shareholders’ interest. Now, those two interests don’t always align. So I think what they’re trying to do with regulation is to forcibly align those—at least for some things where it’s really, really important.

Matt:

[00.42.26.08–00.42.28.03]

Yeah. If the lights don’t come on. Right.

Patrick:

[00.42.28.05–00.42.32.06]

Right. Yeah, yeah. No water, no gas, no electricity. That’s a problem.

Matt:

[00.42.32.08–00.42.32.20]

Big problems.

Patrick:

[00.42.32.22–00.42.37.05]

Yeah. Real fast—we're about 72 hours away from total societal collapse.

Matt:

[00.42.37.05–00.42.58.02]

Things go toward collapse very quickly. Yeah, yeah. Well, I'm going to pivot here a little bit more to the personal side. From your perspective, as you continue into these problems across the world, what are some ideas that have you excited?

Matt:

[00.42.58.02–00.43.02.17]

What are you—kind of like, what are you learning about right now that’s got your attention?

Patrick:

[00.43.02.19–00.43.16.10]

Oh, wow. I am—I’m really enjoying that cyber‑informed engineering is actually getting a bit more oxygen in the room. We’re seeing a shift toward people understanding that—wow, we probably do need a safety net under the trapeze artist.

Matt:

[00.43.16.16–00.43.17.09]

Yeah.

Patrick:

[00.43.17.09–00.43.27.08]

Just to catch them. Just in case they fall. Yeah, they’re super experienced, and they know what they’re doing, but sometimes things happen. So I’m liking that. I’m—that’s got me super excited to see—

Matt:

[00.43.27.11–00.43.30.01]

Define cyber‑informed engineering, just to make sure.

Patrick:

[00.43.30.03–00.43.52.14]

Yeah, it is—it’s the practice of putting in those manual safeguards, those manual safety nets, for those really dangerous processes or critical processes in the OT space. So it’s like that. If the pressure in the pipe gets too much and the digital sensor doesn’t work, then the float valve that operates based on physics will back it up and save—safely save the day, for example. 

Patrick:

[00.43.52.14–00.44.16.15]

And what this does is it puts a lot more trust in the infrastructures, such that hackers can’t cause catastrophic damage. Because right now we’ve seen, in a very real way, that—I mean, it used to be that there was this red‑line construct, right? That if you attack an infrastructure, you’re getting bombs and boots at your doorstep. We’re going to take that so serious.

Patrick:

[00.44.16.17–00.44.37.03]

And then Russia attacked Ukraine, among other things, right. The biggest example is Russia attacked Ukraine and basically nothing happened. So they did it again, and again nothing happened, right. And then you've got China getting in with Volt Typhoon and Salt Typhoon—well, we're not actually causing harm, but it's embedded, and it's data theft.

Patrick:

[00.44.37.08–00.45.03.07]

So the capability for adversaries to cause real physical damage in critical infrastructure is there—that's a real, guaranteed problem. So finding a way to remove or minimize that risk, so that the damage is smaller, contained, less catastrophic—motions in that direction are super, super awesome. Not just because I like protecting critical infrastructure.

Patrick:

[00.45.03.07–00.45.12.08]

That’s a great thing. But it improves the level of trust globally on that infrastructure. And it makes the adversaries do something different. It makes them go somewhere else, which is what we want.

Matt:

[00.45.12.08–00.45.13.13]

Yeah, yeah.

Patrick:

[00.45.13.15–00.45.15.02]

It raises their cost.

Matt:

[00.45.15.03–00.45.16.08]

Make an easier target. Yeah.

Patrick:

[00.45.16.08–00.45.27.11]

If they know they attack something, there’s going to be some physical construct underpinning it that takes away their goal. Great. I’m all for it. That has got me super, super excited.

Matt:

[00.45.27.13–00.45.34.13]

What's something you've changed your mind on, or are wrestling with, as far as an internal debate goes?

Patrick:

[00.45.34.15–00.45.58.03]

Oh, that's a good question. I saw one today that I've been chewing on, and it was a really good focal point for my thinking. Sarah Fluchs was raising an issue from the Munich Security Conference where they were talking about offensive cyber, and the new national cybersecurity strategy leans very far forward into this offensive cyber approach.

Matt:

[00.45.58.03–00.45.58.18]

Right. Punching back. Yeah.

Patrick:

[00.45.58.19–00.46.22.16]

And I hear that—but I've always struggled with it, because what typically happens, as we've seen so far (and this isn't just theory; it's reality), is they start going for the really important stuff, like the critical infrastructure components. Right? If they're going to hit you, they're not going to take out a widget maker. They're going to take out electric or gas or water or comms.

Patrick:

[00.46.22.20–00.46.35.05]

So what it does is raise the response stakes and the escalation path for those infrastructures. And at least in the US, these are private companies.

Matt:

[00.46.35.09–00.46.35.16]

Yeah.

Patrick:

[00.46.35.22–00.47.02.00]

So what it ultimately ends up doing is costing the consumer at the end of this chain. When they do take damage, those recovery costs are paid long term—you just roll it into your pricing. So it just shifts the burden of all of that weight back onto the end person, at the end of that long chain of responsibility.

Patrick:

[00.47.02.05–00.47.30.13]

And I think that's a really, really unbalanced solution. It feels good to say, yeah, we're going to hit them back where it hurts—and then we see what the unintended consequences of that action turn out to be. So I say, if we do this, we should tread lightly. I am much more for doing it covertly—false-flagging it, making it look like someone else, making the attribution tremendously difficult.

Patrick:

[00.47.30.18–00.47.41.15]

We should have absolutely, ridiculously skilled offensive capabilities—mind-blowing offensive capabilities. We just shouldn't advertise it and go piss on someone's fence post on purpose.

Matt:

[00.47.41.17–00.47.44.05]

Yeah. Little [inaudible] would be nice.

Patrick:

[00.47.44.07–00.48.06.09]

I think it's more effective. There should be a question, if they do something, of whether our offense is going to do it back—in a way that's obvious or not. I like that uncertainty, because it creates less of a target on those privately owned or municipally owned infrastructures that end up being the ones that lose out in this situation.

Matt:

[00.48.06.09–00.48.12.04]

Right. Right. Yeah. Yeah, that’s a whole another topic. Now you got me thinking.

Patrick:

[00.48.12.06–00.48.29.03]

Yeah. I struggle with it, ‘cause sometimes I’m like, yeah, we need to strike back. Then I’m like, well, so, I mean, it’s weird to say I’m for it and I’m not for it. It’s like I’m for it under certain circumstances, in certain ways. And it’s—there’s just so much nuance involved to that. It’s not as easy as just saying, go punch back.

Patrick:

[00.48.29.03–00.48.30.16]

It just—it’s just not that easy.

Matt:

[00.48.30.16–00.48.48.13]

And the unintended consequences—I mean, you can't predict the number of variables that shift with an overt action like that. You send things off in wildly new directions, and you can't game that out. Yeah, it's impossible.

Patrick:

[00.48.48.15–00.49.08.07]

Yeah. It is. And I think if one nation does it and doesn’t get penalized, okay, that’s different. But if every nation is doing it and it’s not getting penalized, we might as well just toss out things like international humanitarian law and the law of armed conflict, and things that create those norms for, I would say, ostensibly a very valid reason, so.

Matt:

[00.49.08.07–00.49.27.19]

Yeah. Yeah, absolutely. All right, final question here. And this is kind of the theme of The Lock & Key Lounge. And I’ve taken to asking this question a little differently than just sort of like, what’s your favorite cocktail? I like to frame it a little differently. So, imagine that you’re at a bar, but it’s a nice one.

Matt:

[00.49.27.19–00.49.47.18]

Not one that—it’s the ones that, like, you don’t have to yell at, right? At the other end of the bar, it’s empty, right? It’s not busy. At the other end of the bar is someone in security that you’ve been dying to talk to or meet. The question is, what cocktail do you order, and who’s at the other end of the bar?

Patrick:

[00.49.47.23–00.49.58.04]

Oh man, I would probably want to have the next conversation with Dan Geer, to pick up from where I left off about 15 years ago.

Matt:

[00.49.59.10–00.49.59.58]

Who’s Dan Geer?

Patrick:

[00.50.00.04–00.50.12.04]

Dan—oh man. Dan Geer has been around for ages. Dan’s a very bright guy. I think he was maybe a founder of USENIX and so many other things. He was at @Stake for years—a positively brilliant mind in the space. And my background’s microbiology.

Patrick:

[00.50.12.04–00.50.37.03]

So, we had some very interesting and thought-provoking conversations around living systems and security systems, and how you have overlapping capabilities and functions. And it was—we were looking at throwing out some ideas for designing certain architectures based on the biological constructs. So, really, really fun conversation with an absolutely brilliant person that I would love to continue at some point.

Patrick:

[00.50.37.03–00.50.58.20]

But—and I guess the cocktail for me—yeah, I used to bartend, so I have a hard time drinking other people’s cocktails. I would probably go with just a straight whiskey. Maybe even a nice Japanese whisky, or a nice high-proof American bourbon, to keep me sipping it slowly.

Patrick:

[00.50.58.20–00.51.02.23]

Yeah. Or maybe even, like, a really amazing mezcal, depending upon the bar.

Matt:

[00.51.03.04–00.51.26.22]

Hey. Yeah, yeah. Very nice. I’ve found that there’s sort of a great Rorschach test in what cocktail you gravitate towards. Someone told me she did this intentionally—she did an event, and the choices were a pink martini, an old fashioned, and a gin and tonic. And she said you could actually, like, see who took which one, and you’d be like, yeah, that tracks.

Matt:

[00.51.27.00–00.51.29.04]

Oh, yeah, can confirm.

Patrick:

[00.51.29.08–00.51.45.22]

Yeah, yeah. It’s funny—when I used to bartend, people wouldn’t know what they wanted. They would say, ah, I don’t know what I want. And I could look at them and say, you want a margarita? And it was like, actually, I do want a margarita. You could tell right away. You want a straight shot of bourbon? Like, I do. Yeah, you can tell.

Matt:

[00.51.46.02–00.52.05.05]

Yeah. It’s an interesting insight. So—well, Patrick, I know you’ve got a lot going on here, so I want to thank you for kind of bringing some real-world perspective to this topic. I mean, I think this topic is something that really does need more attention. Obviously, you feel similarly, and I do want to thank the listeners.

Matt:

[00.52.05.07–00.52.10.14]

There are lots of places that you all could have spent this time. We appreciate you choosing to come and share it with us.

Patrick:

[00.52.10.16–00.52.11.13]

Absolutely.

Matt:

[00.52.11.15–00.52.16.13]

Patrick, I do appreciate your time again, and hopefully we’ll get to talk again soon.

Patrick:

[00.52.16.15–00.52.21.14]

Any time. I’ll probably catch you out at BEER-ISAC sometime soon, hopefully. And thanks for having me on.

Matt:

[00.52.21.16–00.52.26.17]

Yeah, we did not talk about BEER-ISAC. Another one. We’ll do another one. Yeah.

Patrick:

[00.52.26.19–00.52.27.06]

Awesome.

Matt:

[00.52.27.09–00.52.58.00]

Well, in closing, cybersecurity is no longer just about protecting systems or silos. As IT and operational tech become more interconnected, and the ability to scale attacks across them increases, organizations need to start asking harder questions. Questions like: what does operating in the dark actually look like? What does manual look like? In today’s reality of an infinitely scalable and intelligent adversary, these silos cease to be meaningful.

Matt:

[00.52.58.01–00.53.16.04]

Cybersecurity must become not just a tool or a component of security, but, in many ways, part of a BC/DR plan—part of how your organization gets back up and running. The challenge is no longer just how to recover from an incident; it is how to survive it in the world we now live in.

Matt:

[00.53.16.04–00.54.01.16]

So, if you found this conversation valuable, we’ll be having many more of them—be sure to follow, and share this with friends you know would appreciate it. And until next time: be well, stay curious, and do good work.

We really hope you enjoyed this episode of The Lock & Key Lounge. If you’re a cybersecurity expert, or you have a unique insight or point of view on the topic—and we know you do—we’d love to hear from you. Please email us at lounge@armortext.com or visit our website: armortext.com/podcast. I’m Matt Calligan, Director of Revenue Operations here at ArmorText, inviting you back here next time, where you’ll get live, unenciphered, unfiltered, stirred—never shaken—insights into the latest cybersecurity concepts.
