Security, Sovereignty, and the Future of Intelligent Systems
Today’s topic is AI, and before you click off let me stop you because we’re coming at this topic from some new directions and with a definitive voice in this space. As organizations accelerate AI adoption across cloud platforms and security operations, they are also introducing new vulnerabilities, attack paths, and questions of control. Today’s conversation explores how AI is reshaping the cyber threat landscape and why issues of data and infrastructure sovereignty will define the next decade of security strategy.
- The “baby tiger” problem demands early scrutiny. That shiny new AI system looks harmless today, but what happens when it matures? Noelle’s analogy (originally used by Yoshua Bengio, the “godfather of AI”) captures the danger of deploying AI without asking hard questions early: How big will this get? What does it consume? What happens when I can no longer control it? The time to ask these questions is when the system is still a “baby tiger,” not after it has become capable of real harm.
- AI has fundamentally changed the threat landscape. Ten years ago, only a handful of labs could synthesize a human voice with high fidelity. Now, anyone can do it on their phone. Voice-based biometric authentication is no longer trustworthy. Phishing, reconnaissance, and vulnerability discovery now operate at unprecedented scale and speed. AI has reduced the cost of sophistication, effectively equalizing amateur attackers with seasoned professionals.
- Today’s most underappreciated vulnerabilities are already in production. Prompt injection. Data poisoning. Knowledge manipulation. Agentic authorization. These risks are happening now. And many attacks are not even adversarial; they come from benign users unknowingly corrupting models through everyday interactions. This is why AI red teaming is not just about adversarial attacks—it is about catching unintentional drift before it causes damage.
- Trust has never been more critical, or more fragmented. From threat-sharing communities operating under Chatham House rules to building trusted collaboration platforms, the conversation kept circling back to trust—in identity, in technology, and in responsible information use. As Noelle put it, “We’re not competing, we’re protecting. We should be sharing.” Building and maintaining trusted communities requires discipline, governance, and technology that enforces responsible use.
- Sovereignty is no longer just about compliance—it is about control. After a decade of moving everything to the cloud, organizations are now asking: What if I just owned it? Companies like SambaNova, Cohere, and Digital Realty are enabling private LLMs that never touch the internet. Jensen Huang is holding computing power in the palm of his hand that once filled entire floors. The Stanford AI Index Report tracks where models live, who manages them, and what that means for geopolitical risk. Data sovereignty, infrastructure sovereignty, and model sovereignty are becoming distinct strategic considerations that security leaders must address.
- Security must be invited to the table first, not last. Noelle flipped the script: The people you typically talk to last—security, legal, privacy—should be in the room from day one when building AI systems. “Security people aren’t going to slow you down if you ask for their help at the beginning,” she said. “The only reason we slow things down is because you come to us at the end.” Organizations that embed security into AI governance from the start avoid costly rework and reduce the risk of deploying vulnerable systems.
- Shadow AI is a governance blind spot. Most organizations have no idea how AI is being used across their workforce. Some employees are on ChatGPT, some on free versions, some using Gemini on their phones—it is all over the place. Getting a handle on what AI systems exist, building a usage policy, and establishing accountability for each system is foundational work that many organizations have not yet done.
- AI can raise the defensive baseline for under-resourced teams. Today’s technology allows even small, under-resourced security teams to triage faster, build better correlations, improve documentation, and conduct more effective anomaly detection—all with the support of AI-powered tools. This is not an industry where AI will replace security professionals; it is one where AI can help them serve more people and manage more risk with fewer resources.
- Visibility enables accountability. With platforms like Assured Trust and the rise of agentic security assistants, organizations can operationalize trust by seeing which AI systems are deployed, who is accountable, and whether a human is in the loop, on the loop, or out of the loop. This visibility is what allows security leaders to demonstrate value to executives and justify investment.
- The ARC Framework guides the path to AI maturity. Organizations typically move through three phases of AI adoption: Awareness (understanding what AI is and how it’s used against us), Readiness (shifting to practical tools and agentic support), and Competitive Advantage. True advantage is not found in merely learning about AI, but in actually applying it to daily work through autonomous processes and agentic systems.
- Move from “vending machine” AI to the Socratic method. Most users operate in “vending machine mode,” asking AI for a quick output like a LinkedIn post or a firewall rule. Noelle argues that to be effective, and to avoid “mental Doritos,” users must adopt the Socratic method. This means asking AI questions correlated to the ultimate outcome you want to achieve, allowing the model to analyze and offer new perspectives rather than just a surface-level result.
- Sovereignty protects against a “Death Star” acquisition. National and infrastructure sovereignty is a critical defense against shifting corporate landscapes. Noelle warns of the “Death Star acquisition,” where a company with radically different values or leadership acquires your organization. If you do not own and control your own models and infrastructure, your data and intelligence are suddenly at the mercy of the new owner’s risk profile and intentions.
- Infrastructure sovereignty is an economic choice, not just a security one. For a decade, the mantra was “move everything to the cloud,” but the pendulum is swinging back. Thanks to the density of modern computing power—where palm-sized hardware can now rival what once filled entire floors—owning the “rack,” the model, and the data is now less expensive and more accessible for regulated industries than it has ever been.
Matt Calligan:
[00.00.04.02–00.00.29.17]
Welcome to The Lock & Key Lounge, a podcast dedicated to new ideas and perspectives you can use now, today, to build a more resilient security program for the organizations that you work with and the industries that we support. I am Matt Calligan, your host for today. And today’s topic is AI. Now, don’t click off before you let—roll your eyes and just stop listening.
Matt:
[00.00.29.21–00.01.21.14]
We’re coming at this from a new direction—actually, a couple directions here. And to help me with that today is a definitive voice in this space who has been working with AI, and in various capacities, for quite some time. As organizations accelerate AI adoption across these cloud platforms and security operations, they’re also introducing new vulnerabilities, new attack paths, questions of control—even the potential for being extorted by AI, which we’ve seen. Today’s conversation explores how AI is reshaping that cyber threat landscape and why issues of data, and even infrastructure sovereignty, will define this—the next decade, but probably just the next five years of security strategy.
Matt:
[00.01.21.16–00.01.49.04]
Our guest that I referenced is Noelle Russell. She’s a multi-award-winning technologist, bestselling author, a respected leader in cloud, data, and AI transformation, and, with her experience leading AI initiatives across—now all the names you’d recognize—Microsoft, AWS, IBM, even NPR. Noelle brings a uniquely human-centered and responsible lens to how intelligence systems should be built, secured, and governed.
Matt:
[00.01.49.06–00.01.51.17]
Noelle, welcome to the show.
Noelle Russell:
[00.01.51.19–00.02.01.23]
Yeah, thank you so much. I always—whenever people go through that, I’m like, wow, that’s a lot. But yeah, excited to be here. Excited to share my perspective. Should be a fun conversation.
Matt:
[00.02.02.02–00.02.26.09]
I think so. And I’m gonna—I have some questions that we’re just going to move through, but I—again—I don’t want to—I probably did a terrible job sort of explaining your background here. So—and—but instead of me trying to rehash that again as we go through these questions, I’m going to dive in, and feel free to kind of back up your answers with your experience in these areas as well.
Matt:
[00.02.26.11–00.02.29.11]
If that works for you… That way, you don’t have to listen to me prattle on about it, so—
Noelle:
[00.02.29.11–00.02.37.20]
Yeah, it’s nice it is as it is, I will definitely sprinkle in a little of the back story as we go along today.
Matt:
[00.02.37.20–00.03.05.02]
Right. Okay. All right. So, there’s two big sort of silos—if you will, topic silos—that I want to get your perspective on, the first being cyber security and sort of the impact that AI has on that, for good and bad. The second would be data sovereignty and even infrastructure sovereignty, ’cause that’s becoming a big buzzword even outside of AI, but specifically around AI there are some interesting concerns and things to think about.
Matt:
[00.03.05.02–00.03.26.12]
So, we’ll start with cyber security. The thing, I guess very high level here, for folks who—in the sense that I get from the way you see topics and stuff at conferences—there’s a lot of folks still thinking about just what does—what relevance does AI have to cyber security, and how I think as a cyber practitioner.
Matt:
[00.03.26.18–00.03.37.00]
So, from your perspective, how has AI changed the cyber attack surface itself compared to, say, five years ago?
Noelle:
[00.03.37.02–00.04.08.17]
Oh, absolutely. Yeah. And I always like to encourage… I’ve started this journey, as you mentioned, a little over a decade ago. And we were—I was working as a cloud architect, which I think is pretty similar to the journey that many—I was a cloud architect, I was focused on setting up networks and ensuring cloud infrastructure was configured correctly before being asked to join a team at Amazon, building Amazon Alexa.
Noelle:
[00.04.08.18–00.04.39.11]
And I then shifted completely away from infrastructure and architecture to basically AI application design and development, which was a big switch, but I was passionate about it. Number one, because I have—I always like to tell people—I have six kids. That gives you a sense of how much multitasking I do in a day. But one of my first-born child has Down syndrome, and it really—when I found out that Jeff Bezos ended up sending an email to the company and saying, hey, we’re starting this new team.
Noelle:
[00.04.39.11–00.04.57.10]
And when I found out they were building a natural language device that allowed you to basically say whatever you want and then have code run on the back end, as an engineer, I was like, this is magical. All I have to do is teach him how to talk, and all of a sudden the world opened up for him.
Noelle:
[00.04.57.10–00.05.18.16]
So, that was my motivation. But immediately upon doing that—and you, and some of you, some of the listeners probably can relate to this—is as soon as you say, great, now you can—as long as you can talk—you can access anything. If you’re a cyber mindset, like that attack surface, right? Maybe that was not something you would have thought of.
Noelle:
[00.05.18.16–00.05.43.12]
And certainly, ten years ago—as a matter of fact, ten years ago—companies were thinking, wow, I love the idea of using a natural interface. There were banks that immediately went into using voice as a mechanism for biometric identification, like in the early days, like that was the—it wasn’t multi-factor. It was like, if you use your voice, you’re good.
Noelle:
[00.05.43.12–00.06.05.21]
And what I think, to answer your question around how things have changed, is ten years ago there were only a few labs in the world that could really synthesize, to a high degree of fidelity, a human voice. Now, you could do it on your phone with no tech capability whatsoever. So now, all of a sudden, we can’t use that information to authenticate, right.
Noelle:
[00.06.05.21–00.06.31.11]
Because now it can be spoofed. Right? I think the speed and scale at which media can be synthesized and information can be synthesized now increases that attack surface that we’re talking about, right? Increases the way that someone can ask for information, can submit a request. Phishing attacks—all of these now have scaled to a whole new level of integrity.
Noelle:
[00.06.31.13–00.07.05.17]
Phishing, recon, vulnerability discovery—all of that gets just so much easier. But worse, it happens faster than before. And so, that means that as cyber professionals, we have to kind of get our game—we have to increase our game as well. And I think the other part of it, which we’ll come back to probably at the end when we talk about sovereignty, is this concept of trust that, oftentimes, sadly, we are reactive when it comes to cyber.
Noelle:
[00.07.05.17–00.07.10.13]
People don’t know that they’re in trouble until they’re already in trouble.
Matt:
[00.07.10.18–00.07.13.23]
Yeah. Six months in, and they realize they’ve been sitting on the network for that long.
Noelle:
[00.07.14.00–00.07.44.03]
Yeah. Exactly. And so, I think AI has pushed people to think about these concerns. Well, I would also say press cycles on AI gone wrong have also pushed people to now think about this a little bit ahead of time. So actually, I think it’s an opportunity for us that, in the midst of AI increasing the attack surface, companies are now willing to have a cyber conversation earlier in the process, earlier in the lifecycle, than they ever have before.
Noelle:
[00.07.44.03–00.07.45.14]
And I think that’s good news for us.
Matt:
[00.07.45.18–00.08.05.10]
Yeah, I was even—the thing that—even most people are focusing on now is the use of AI as a tool to, like, the deepfakes and things like that. I was listening to an interview with—I think his name is pronounced—Yoshua Bengio.
Matt:
[00.08.05.14–00.08.27.16]
He’s a Canadian researcher. He’s—they call him one of the godfathers of AI and neural networks and deep learning and stuff. But he was talking about examples of AI actually figuring out that people were trying to turn it off and finding blackmailable information about the engineers, and trying to blackmail them into not turning them off.
Matt:
[00.08.27.21–00.09.09.06]
So, it’s—and we’ve seen the anthropic use in actually doing an entire cyberattack with prompts successfully, even though it was available. It’s a public one. It was not supposed to be able to be tricked into this. So, it’s beyond just a tool that helps hackers. It itself is a threat and can be used maliciously, and even prompted to be done maliciously itself. So, what—do you—where do you think the most, I guess, underappreciated security vulnerabilities are emerging? Is it from that sort of quasi-sentient version of it, or is it from hackers using it better?
Noelle:
[00.09.09.09–00.09.51.00]
Yeah, I think there’s—it’s interesting, cause we always, especially as data scientists, machine learning experts, we tend to talk about security vulnerabilities that are three to five years from most people even understanding what that is like. We’re worried about things in research that haven’t really shown up in production. But from a production perspective—meaning AI that you’re likely deploying right now—I actually use an analogy that I think might be helpful, of this concept of when you start using an AI system in the beginning. And all of us can relate with the first time we use ChatGPT, or the first time we use Gemini, or Perplexity, right?
Noelle:
[00.09.51.00–00.10.12.10]
There’s this sense of kind of enamored—wow, this is pretty cool. I could do so much with this. This is amazing. And I’ve come to call that kind of a baby tiger kind of phase, right? Cute and cuddly and furry, but at the same time, if you think about it just a little bit, it is extremely dangerous.
Noelle:
[00.10.12.12–00.10.29.01]
In its current state, not so much. But it’s definitely going to grow up and maybe hurt someone if you’re not careful. And so, I think the cont—right now, a lot of organizations, a lot of—even a lot of professionals—are leveraging AI in this kind of mode of wow, that’s pretty cool.
Noelle:
[00.10.29.01–00.11.00.22]
I can’t believe I can do this. And they don’t realize certain challenges, certain vulnerabilities that are immediately—that are happening. And it’s very similar to kind of cyberattacks. These are not nefarious, dangerous things that are ha—it’s just out of pure either oversight or neglect that these types of vulnerabilities are exposed. And one of them, which I think we can probably relate to some specific cyberattacks, is this concept of prompts, right.
Noelle:
[00.11.00.22–00.11.29.16]
Most people understand today the way you talk to an AI system is through prompt engineering. And now we are automating a lot of prompts. Right? So, we are providing prompts that are automated. And in the process of doing this, there is a vulnerability called prompt injection. Right. Which means that people can, in the context of asking a question, inject information to intentionally try to get the model to be manipulated, to change its concept of truth.
Noelle:
[00.11.29.18–00.11.51.18]
Tool hijacking—we’re now using in ChatGPT, for example, there’s this operator capability. It allows me to go in and ask ChatGPT to go and do something on a website. Well, if you’re not careful about the environment—that’s why, as a cloud architect, I’m pretty passionate—if you’re not careful, someone can absolutely—ChatGPT’s gonna be, “I totally did that for you.”
Noelle:
[00.11.51.18–00.12.12.20]
But if you go in and you audit the, like, what is happening on the screen, you’re like, wait a second, that’s actually not what I asked you to do. And so, I think there’s a lot of—yeah, a lot of professionals tend to over-rely on this because it seems so cute and cuddly. But at the same time, I encourage us to ask really hard questions.
Noelle:
[00.12.12.22–00.12.34.20]
One of the—one of those questions is often around data. Of course, data is the fuel for these systems. And one of the most common vulnerabilities is data poisoning, right? The fact that attackers don’t have to even touch your model, they don’t even need to get into your DMZ, past your firewall. All they have to do is be creative in how they communicate requests.
Noelle:
[00.12.34.20–00.13.01.23]
It’s almost like DDoS in the olden days, right? But now they’re going to be able to do it through natural language. And it’s incredibly easy for anyone who speaks any human language to do so. So, that’s why I think it’s now a new level of concern, because it’s not just people who have figured out a way to build kind of dangerous software.
Noelle:
[00.13.02.04–00.13.32.02]
Instead, it is anyone. And that’s why we actually have a concept that we use becau—in AI, similar to cyber—called AI red teaming. And the main reason is not just to do adversarial attacks, but actually because most attacks on a system are not adversarial. They’re actually benign, right? People don’t even realize they’re poisoning our data or impacting the—they’re causing the model to drift away from the information it was trained on.
Noelle:
[00.13.32.07–00.13.54.17]
So, these are all things that I think people don’t really think about—like prompt injection, cross-prompt injection, knowledge poisoning. Auth—of course, using authorization in—using inaccurate ways to authorize access to systems. And most importantly, I think probably agentic authorization. Right? Like, giving an AI system just too much control.
Matt:
[00.13.54.19–00.14.08.04]
Yeah. With—as someone who has moved from—I’m sure you do some more hands-on keyboard stuff—but your job is more to advise and, like you said, wor—kind of talk about what’s five years down the road—
Noelle:
[00.14.08.06–00.14.08.22]
Yeah.
Matt:
[00.14.09.00–00.14.29.12]
The scale from our side—we speak about risks. Specifically, ArmorText being in this kind of space for post-breach and IR and stuff. We’re always talking to our clients about what we’re seeing and what we found—and I wonder if you’re experiencing this too, so I wanted to get your perspective.
Matt:
[00.14.29.13–00.14.59.21]
The—we—yeah—I—the joke that I make is, in cybersecurity, if you’re any good, you’re never wrong. You’re just early. And what we see is the speed at which predictions that seem like they should take two or three years is picking up to where it’s like, oh, wait, that was nine months ago. As someone who’s trying to get people to think down the road, when you say, hey, you should start thinking about these problems because this might happen in two or three years, and then suddenly it’s six months later and it’s happening.
Matt:
[00.14.59.23–00.15.14.04]
How—yeah—what—do you see people’s sort of cognitive flexibility adapting to that? Or are you still pounding the table about thing—yeah, I said it was coming three years from now, but it’s right now—do you see that from your perspective?
Noelle:
[00.15.14.06–00.15.45.04]
I think the only thing that inspires change, sadly, is bad things happening. Right? So, luckily and sadly, there have been very big companies that have pushed AI systems—baby tigers—into production. Those little AI systems become—have some either level of vulnerability or behave badly or exhibit bias. There’s many, many different ways for an AI system to fail in production.
Noelle:
[00.15.45.08–00.16.09.00]
And I would say probably 80% of them are in a way that creates vulnerability to the organization. And I definitely think that today, AI reduces the cost of sophistication. Right? So now I can use AI. An average attacker now can create pretty advanced tactics just by asking for them. Right. Just maybe, like—
Matt:
[00.16.09.01–00.16.09.07]
Right. In normal language. Yeah.
Noelle:
[00.16.09.09–00.16.32.19]
In nor—Yeah. In language that I would use as a teenager or as somebody who’s been in industry for 20 years. They are now equalized. And I think that does create some very—some new concerns because of the la—like, we don’t—we are going to get very sophisticated attacks, but not from very sophisticated attackers.
Noelle:
[00.16.32.21–00.16.53.11]
And so, the good news is, though, the reverse is also true. Right? So, we as an organization start up small to medium-sized businesses—like, you now have the ability to build a fortress around your organization that would have cost you hundreds of thousands of dollars, right? Or millions in the past. And you can now do that more effectively.
Noelle:
[00.16.53.11–00.17.14.12]
The challenge, I think, is—is, and to your point, no, I have not found that executives of these companies are like, well, let’s get to this then, right? It’s still—cyber is still the last thing they think of unless they themselves are hurt. And so, I am still absolutely banging my fist, like, I told you this was happening and now it’s actually happening.
Noelle:
[00.17.14.12–00.17.20.17]
And still, people are like, well, I’m different. Our team is different.
Matt:
[00.17.20.19–00.17.22.00]
Probably want to hack us.
Noelle:
[00.17.22.02–00.17.38.20]
Exactly. And then, of course, they come when bad things happen—or the first ones they call because they’re like, okay, so that thing you said, we think we might be in trouble. So, I don’t think that’s changed. In the last 15 years of my career, that has been very much the process.
Matt:
[00.17.38.20–00.17.40.08]
Still human. Yeah. When that does—
Noelle:
[00.17.40.10–00.17.58.09]
I wish I could—if someone has a secret code to how to get people to actually avoid pain rather than have p—it’s that concept of take a vitamin or take a painkiller. And we’re addicted to painkillers, right? We’d rather have the pain and kill it than take a vitamin and prevent it.
Noelle:
[00.17.58.09–00.18.01.22]
And I would love to see us avoid that as a society.
Matt:
[00.18.02.00–00.18.33.10]
It is. Yeah. It’s like we’re—we’ve created an environment where the products we’ve made are evolving faster than our brains are wired to change. And so, it’s like you—it’s almost like the quick and the dead, right? You’re going to find a place where only the people that have kind of broken their own wiring and figured out how to flex and become more cognitively flexible and think through these things faster are—that’s the only place that there’s going to be any kind of defense against it.
Noelle:
[00.18.33.12–00.18.45.04]
Yeah. And I do think that that’s—I mean, that’s the good news, is that as a cyber professional, typically that’s why we hire cyber professionals, because they are that way. Right? They’re thinking—yeah.
Matt:
[00.18.45.04–00.18.45.20]
They’re counter—yeah—counter-culture folks.
Noelle:
[00.18.45.21–00.19.00.13]
They’re thinking ahead. They’re innovating on behalf of their customer. They’re thinking about—they have more, in my experience and certainly myself, no—more neuroplasticity than the average human. And that’s why that skill set is so valuable.
Matt:
[00.19.00.15–00.19.19.10]
Yeah. Absolutely. How are you—how do you see folks in those seats using AI offensively? How to defend—what are some new ways that you’re seeing, maybe novel—think of things that are novel and out there on the edge. What are you seeing on that side from an offensive standpoint?
Noelle:
[00.19.19.10–00.19.51.00]
Yeah, I think there’s a couple different things. One is just the ability for us to do just deep, extensive social engineering, right? Deep research, understanding, how—what the most common and maybe even alternative ways for our clients to interact with the business. How do we know what is an anomaly and what isn’t? I think there’s incredibly powerful tools now at our disposal for recon and exploitation research, so we can get ahe—again, it just requires someone in the company, maybe to think like a hacker, right?
Noelle:
[00.19.51.00–00.20.13.17]
To think like an attacker. But there’s more tools available than ever to be proactive about that. Disinformation operations. So that’s one of the things I had an opportunity to work on at NPR while I was there was—they were—when I was there was right before Covid and even during Covid.
Noelle:
[00.20.13.17–00.20.39.20]
And there was so much concern around synthetic media at first, but then it became just humans saying weird stuff. Right? Just—it wasn’t deepfakes at all. But even then, how do I, as a professional, make sure that I understand what synthetic media is, narrative amplification—all of these are could be human or AI.
Noelle:
[00.20.39.20–00.21.05.00]
But either way, as a professional, I want to know what I’m looking at, what I’m dealing with. So, I think there are a lot more tools at our disposal to do disinformation management, to do operational acceleration for operational playbooks, simulation running—the concept of digital twins applies now to cyber, where we can simulate ideas that we have in a very real way.
Noelle:
[00.21.05.00–00.21.26.14]
Right. Scan, phish, credential, lateral moves. We have a lot more capability to do those things with not as many humans required to do it. And I think that will help. I’m a big believer—there’s so many small businesses that are just so—and I say small, but I mean 500K in revenue a year to, I don’t know, a hundred million.
Noelle:
[00.21.26.14–00.21.28.07]
Right. That’s not super small.
Matt:
[00.21.28.08–00.21.31.06]
Yeah. Not on a human level, but yeah, on an enterprise level. Absolutely.
Noelle:
[00.21.31.06–00.21.49.06]
Yeah. And I just think, they are the most vulnerable because they haven’t been able to invest in this skill set. And now you really could find someone in the organization who just cares about security and give them the tools they need to fight and operationalize some of these more modern cyber campaigns.
Matt:
[00.21.49.07–00.22.12.02]
Do you find a lot of support for these kinds of educational things coming from the traditional cyber threat-sharing communities, like the ISACs and the CERTs and the ISAOs? Do you see them kind of weaving this into that narrative as well, from your perspective?
Noelle:
[00.22.12.04–00.22.37.00]
Oh, yeah. Absolutely. I mean, it just goes to show, over the last three years—I, for example, last year, was the keynote at RSA, and I’ve been invited to a lot of security communities, privacy communities. So, I think they’re all looking for, number one, I call—there’s a framework that I use to kind of represent the pattern that organizations are going through, and it’s called ARC. A-R-C.
Noelle:
[00.22.37.00–00.23.06.23]
And it starts with awareness. And I do think, in the cyber community, there’s a hunger for an understanding of how does AI apply to me as a professional? Meaning, how do I use AI myself to do my job better, but also then, how is AI being used against me and the work that I’m trying to do? So, that awareness—I think we’re a couple years into a pretty strong level of kind of getting the cyber community to understand, and that’s why you kind of said what you did at the beginning, right?
Noelle:
[00.23.06.23–00.23.24.08]
Don’t think we’re going to talk about what is AI, because we’ve been doing that a lot. We now know what it is. And if you don’t, you’re certainly—you’ve made a decision on whether you’re going to learn it or not. That awareness is now shifting into readiness. Right?
Noelle:
[00.23.24.08–00.23.41.11]
People are now—cyber professionals specifically—are like, okay, I get it. But what does it mean? How do I use it? What are the tools? I want to be much more practical in my approach. And so, people are now shifting away from what—how do I learn about this to how do I do it, right?
Noelle:
[00.23.41.11–00.24.05.06]
How do I build an agent that supports me on a daily basis? How do I build an autonomous process that does certain reviews? Though, I think we’re shifting away from kind of what are we doing. And in that, the C in arc, right? Awareness, readiness—the C is competitive advantage. And that’s where it is, is that we all—when we actually start using AI, that’s when we get the competitive, not when we learn it.
Noelle:
[00.24.05.08–00.24.19.12]
Right? Not not when we watch a YouTube video. But when we actually start applying it, meaning we have agent systems in our workday that we’re using. That’s when the competitive advantage really starts to show up.
Matt:
[00.24.19.14–00.24.54.03]
Do you—willing is—I have an interesting perspective on threat-sharing communities, because a number of them use ArmorText. That’s kind of the underpinning. So, I’ve become unintentionally sort of this how-to-build-a-threat-sharing-community expert, like the logistical piece of it. And one of the things that I see is a—there’s sort of an instinct to silo yourself amongst birds of a feather, like enterprise over here and small and medium business over there, utilities here, co-ops there, small banks, big banks.
Matt:
[00.24.54.05–00.25.16.18]
What I’ve seen with communities is there’s actually a benefit from having both ends of that spectrum in there, because the large enterprise—they’ve got the pockets for the funding. They have the risk profile to justify putting money to fix it. And they have the subject matter experts they can keep on staff to look into it. They can find these answers because they—that’s their job.
Matt:
[00.25.16.23–00.25.40.13]
The small and medium guys that you mentioned—the small businesses—they don’t. They don’t have the time to become experts on these things. How does, within a community like a threat-sharing community of sorts—how does the big side of this equation better share what they’re learning around these tools with that small? Is it through a community like this?
Matt:
[00.25.40.13–00.25.45.01]
Do you see that happening, or do you see it’s still kind of siloing a lot?
Noelle:
[00.25.45.03–00.26.20.00]
I think one of the challenges is that we have been raised, over the last 20 years, that our data, our ideas, our patterns of managing risk in our organization—that’s kind of our IP, and it’s protected, and we don’t share it. And what we’ve come to realize, over the last five years, is that understanding of what is yours and what makes you unique and what is your competitive advantage is not necessarily the way you’ve solved these problems.
Noelle:
[00.26.20.00–00.27.03.23]
So, the reason I do like these communities—and I do think the scope should be larger—but even if they are, let’s say it’s a small to medium-sized business, certain revenue, the leaders of those communities should definitely invite—it’s almost like elders, right? Invite your elders to teach you what they know. And I often think, when you present it that way, right, when you position a bigger company and a leader at that company as—and give them the opportunity to share from the pedestal that they sit on, meaning they’re in a bigger space dealing with bigger problems—they are more likely to share, because it’s kind of their philanthropic. Right. Im helping this. Yes, the give-back option, but that’s very much like the community vibe that you have to develop.
Noelle:
[00.27.03.23–00.27.27.08]
So, if you’re leading a community, I definitely think—or even doing what you’re doing right now. Right. Identifying leaders, pulling them into a podcast, and making that available. I think the big challenge, obviously, is that when we’re talking about sharing how we’re managing risk in our company, we don’t want to overshare that with the wrong people.
Noelle:
[00.27.27.08–00.27.58.11]
Right. So now, trust becomes extremely important in those communities. I was just in an event with family office managers. Right. So, these are people that manage the wealth of the wealthiest people in the world. And when you walk in, they look—everyone looks you in the eye, and they’re like, there’s no recording. There’s no thi—I mean, and these are all people that—they’re all trying to build their own individual wealth portfolios. But it makes so much sense that they’d be like, oh, you know what, I did this thing last year, and it really worked out.
Noelle:
[00.27.58.11–00.28.33.17]
And they’re not competing from—and that’s why, specifically to cyber, we’re not competing, we’re protecting. And that’s—we should be sharing that. But I also understand that they don’t want that to become standard news on Twitter. So, that concept of trust, that concept of—what is that word they use? There’s a phrase for it. I can’t remember what it is, but there’s a phrase when you walk into a room, and every—there’s rules that you apply that say you cannot leave this room with any of the information.
Noelle:
[00.28.33.17–00.28.41.22]
Everything’s sacred here, trust is here. You have the ability to share your information. We’ll use it, but we won’t share it with other people. I think—
Matt:
[00.28.41.22–00.28.42.22]
Like Chatham House or something.
Noelle:
[00.28.43.03–00.29.01.18]
Thank you. Chatham House rules. Yes. Thank—I was like, it was going to hurt me. I wouldn’t have been able to concentrate, but yeah, exactly. And I think that’s the kind of communities that will cultivate this knowledge sharing. And it’s still—I mean, it’s very hard to do. It really is a crafted—
Matt:
[00.29.01.22–00.29.24.13]
It’s almost counter-human to think in that—those terms. Yeah. ’Cause I always tell, when folks are just starting a community, I say trust is the—that is the linchpin, and it’s trust in—you have to have guardrails. You have to have rules of engagement. You have to have an established framework for what is shared, how it’s shared, and what each person is going to responsibly do with it.
Matt:
[00.29.24.13–00.29.50.09]
So, you have to trust that there—go—there—the type of people that are going to use the information responsibly. You have to—you also have to trust the technology and the underlying tech to be able to protect the data and also enforce identities. And so, trust is—there’s layers of trust that have to be there before you can even begin to effectively collaborate and sort of form that community.
Matt:
[00.29.50.11–00.30.11.16]
So, trust is—and then, with AI, you factor in these—the ability to create these deepfakes, and sort of it’s hacking at the very core. What—how do I trust, and how does a human effectively establish trust? So, it’s being attacked over and all around, while at the same time we need it more than ever.
Matt:
[00.30.11.17–00.30.34.02]
I need to stop thinking in terms of the C and your competitive advantage. It’s not your company’s competitive advantage. It is your industry. It is your infrastructure. It is—that’s the competitive advantage. It’s like, if you’re—it’s—imagine if everybody’s on different layers of a boat, and one group figured out how to patch a hole really, really well.
Matt:
[00.30.34.04–00.30.49.10]
They’re—the boat still sinks if you don’t tell everybody else how to patch the hole. I mean, it’s—you’ve got to think in terms of that—more of that broader community. That’s—I think it needs to happen more, and faster, for sure. Anyways, end of my rant.
Noelle:
[00.30.49.15–00.31.04.01]
I love it, though. I agree, community is everything, but it’s very hard to create that—a trusted community and maintain it. So, yeah, it’s—if you’re inclined to do that kind of work, you are desperately needed.
Matt:
[00.31.04.03–00.31.22.16]
Oh boy. Yeah, it’s a—it’s an uphill battle, I’ll say that. What’s—so the baby tiger, this is just kind of a thread I’m pulling. Yoshua Bengio used that phrase too. His was much more dystopian. He was talking about where this is going to go and how it’s potentially going to wreck humanity.
Matt:
[00.31.22.17–00.31.35.22]
He really goes—and down this—is that—when you say baby tigers, is that the same? Is that your reference point as far as where this can go if we don’t—if we’re not careful with it? Or I didn’t know…
Noelle:
[00.31.36.01–00.31.57.21]
I mean, I tend to keep it because of my personality. I tend to be a little bit more optimistic, mainly because I’ve watched—I watched Alexa grow up. I watched Microsoft AI grow up. And in all of—I mean, these technologies are not new. And they have not become—they’re not evil by themselves.
Noelle:
[00.31.57.21–00.32.19.05]
And so, I would say when I look at a little model or a baby model, a baby tiger, I’m always thinking to myself, what questions can I ask? In spite of the super-excited moment that we’re in—it’s fluffy and cool, and I want to use it. But what discipline, what rigor can I apply to the earliest part of that process, meaning the baby tigers, part of life of an AI system?
Noelle:
[00.32.19.05–00.32.41.13]
What can I do to protect myself from dystopia? And that’s—I think I don’t know if we talk about that enough. I actually say this at the end of my TED talk—we are the first generation of humans to kind of build systems and train these systems and teach these systems to talk.
Noelle:
[00.32.41.15–00.33.02.00]
But right now, we’re at this kind of pivotal moment where we might be the last generation to actually teach it to listen, to stop, to have guardrails. We have this opportunity—we, as humans. It’s not AGI yet, meaning that it is expecting us to tell it what to do, to provide it a system prompt, to give it instructions.
Noelle:
[00.33.02.00–00.33.25.07]
And so, I’m—and I don’t think enough people—most of us—I call it vending machine AI. Most of us are in vending machine mode, where we’re write me an email, give me a LinkedIn post. I would like better firewall rules, right? We’re asking it for stuff as opposed to telling it what we want and producing the outcome.
Noelle:
[00.33.25.09–00.33.43.09]
And again, it’s easier to just ask for what you want. I want a LinkedIn post. But it’s more effective if you’re well, why am I writing a LinkedIn post? Oh, I want to create relationship with my prospects that turn them into buyers. Now I’m going to be like, well, what’s the best way for me to create a relationship with my buyers?
Noelle:
[00.33.43.11–00.33.55.02]
It might not even be a LinkedIn post. And so, we’re like in this vending machine mode, and some—similar to a vending machine, what I get out of it might not be that good for me, but I’m asking for it.
Matt:
[00.33.55.04–00.33.57.01]
Like the mental Doritos.
Noelle:
[00.33.57.02–00.34.26.18]
It’s probably junk food. Yes, exactly. Which maybe it’s cool, it’s fun, it tastes good, but it’s not. It may not actually be the right solution. So, I think, certainly as part of an AI red team, as someone who is challenging these AI models—like, really adopting the Socratic method, right, of how you talk to an AI system rather than asking it vending machine style—like, turn it into a question that’s correlated to the thing you actually want.
Noelle:
[00.34.26.20–00.34.43.22]
And I don’t think we do a lot of that today because of how we’ve been trained. I mean, it’s just—we are coming off the heels of social media, which is literally vending machine social media. You Google, right? You type in words, it gives you stuff. But that’s not how these models are built.
Noelle:
[00.34.44.00–00.35.06.11]
These models are built to research, to analyze, to offer us new perspectives and—but I guess the moral of the story is that we own that. We own the question, and most of us are opting out of that ownership. And I think that is what leads ultimately to dystopia. So, I always look at this baby tiger.
Noelle:
[00.35.06.11–00.35.21.19]
And I’m like, wow, look at your paws. How big are you going to be? I want to see, as early as possible, how big an AI system might be—or razor-sharp teeth. What does this model eat? How much does it eat? Where is it going to live? What happens when I don’t want it anymore?
Noelle:
[00.35.21.19–00.35.42.03]
These—it’s very hard to ask these questions when you’re in this excited “oh my God, this is amazing.” But it is really important to do that. It’s important to stop the train and say, okay, wait a second. How is this—where are we going to be six months from now? Six years from now?
Noelle:
[00.35.42.09–00.36.04.00]
And if you think about that, it seems like overkill on day one. But obviously, these baby tigers become big tigers, and big tigers kill people if you’re not careful. And so, I think a lot of our futurists are talking about what happens six years from now, based on the trend they’re seeing—that people are consumers and not producers.
Noelle:
[00.36.04.02–00.36.10.13]
But yeah, that’s—that doesn’t—it doesn’t have to go that way. And we, right now, we hold the keys.
Matt:
[00.36.10.15–00.36.32.08]
Yeah. That’s—it—I mean, again, it’s the—AI is almost kind of—it’s interesting ’cause it’s making us learn more about how humans think and what our limitations are and trying to address it because even to your point with futurists, they—it’s still linear, right? Humans just can’t—we just don’t know what we don’t know.
Matt:
[00.36.32.08–00.36.54.05]
And we can’t imagine futures full of variables that we don’t even think of. Right. And so, yes, it’s more about making responsible choices now so that those—that linear path that a lot of folks are getting more dystopian about can be redirected, as long as—but as long as we be responsible about it now. You have to make those choices now.
Matt:
[00.36.54.05–00.37.01.02]
You got to not eat the Doritos now so that you don’t have orange fingers and teeth five years from now.
Noelle:
[00.37.01.04–00.37.04.08]
That’s right. And you’re—yeah, 40 pounds heavier—yeah.
Matt:
[00.37.04.08–00.37.30.08]
Yeah, exactly. So, yeah, it’s interesting to—that—that’s where I am. I hear these debates on both sides, the optimist versus the futurist and the dystopian. And it’s—the mistake that I—I guess the pattern I see is that everybody is still extrapolating in a linear way. And my question is, how does this actually unfold in ways that we likely aren’t even anticipating?
Noelle:
[00.37.30.10–00.37.51.04]
Exactly. And how could we? Right. Because we can’t really foresee—right now, it’s hard for us to understand the capability of the models we have today. And according to—right, like, Moore’s Law—we are going to see models evolve a million times, a million X. That’s how in five years, according to the current trajectory.
Noelle:
[00.37.51.04–00.38.11.01]
And so, it’s hard to even fathom what that looks like. It definitely looks like some of these experts are talking about—it can do all the things, but at the same time, it’s still going to require someone. I mean, it could do all the things—all the good things, all the bad things. There is still a requirement for guidance, for leadership.
Noelle:
[00.38.11.01–00.38.30.23]
I mean, that’s why I focus on it so much. We’re going to have to become such good managers. And I don’t know if you know this, but most people don’t like managing. They don’t like—it’s actually—it’s a job that people are like—you get to that fork in the road where they’re like, become a manager or become a principal architect.
Noelle:
[00.38.30.23–00.38.36.18]
And they’re like, anything I could do to avoid managing people. It’s going to get crazy when we’re trying to manage machines.
Matt:
[00.38.36.19–00.39.01.01]
Yeah. Do you think, from a—we talked about earlier on this second topic of infrastructure sovereignty and data sovereignty, as far as keeping those baby tigers from becoming something that can kill you—do you think fragmentation and specialization is a way to keep a lid on those models from becoming something that becomes Skynet?
Noelle:
[00.39.01.03–00.39.34.06]
Yes. I don’t know that fragmentation would solve that problem. But I do think that where data lives and where computation actually happens will impact the decisions that we make around policies and threat modeling decisions. I think—what I’ve seen over the last just three years is a direction towards managing and trying to own LLMs.
Noelle:
[00.39.34.06–00.39.57.08]
Right? So, in the beginning, there were only a few labs with these big models. There wasn’t a lot, and they were huge. They were extremely expensive to build—even if you wanted to, $35–50 million in infrastructure costs. Right. Now, it’s much more than that because these models are getting bigger and bigger, and billions of data points, trillion data points.
Noelle:
[00.39.57.10–00.40.24.19]
So, what I now start to see is organizations like Rackspace or Digital Realty, where they’re moving into this world of—if you’re a bank, or you’re a regulated industry, insurance company, healthcare organization, and you’re worried about control—you actually—there is an opportunity for you to create a new level of sovereignty, or at least something that you could think about, but also economically.
Noelle:
[00.40.24.21–00.40.52.19]
Right. So, it’s interesting. We’ve come a long way, right? Ten years ago, we were for—we were very, very adamant about moving everything you have to the cloud. So much so that we would celebrate at these huge conferences, Re:Invent and Microsoft Ignite. We’d be like, they’re all in on cloud. They got rid of their—there was a huge marketing push that would show CEOs with a sign in front of their Colo that said “for sale.”
Noelle:
[00.40.52.23–00.41.14.19]
Right. And so, it was like this huge thing—like, yey, I have no more hardware that I’m owning that is unutilized. It was such a big moment. And now we’re—that pendulum swung all the way into cloud. And now, as we realize—wait a second, there’s data leakage. Wait a second, there’s uncertainty.
Noelle:
[00.41.14.19–00.41.35.23]
There’s risk, like Snowflake, right? And AT&T—those are not AI problems. This is good old-fashioned infrastructure problems. But now CEOs and CIOs, CISOs are like, well, what if we—we don’t have to pull it all back, but what if we pull back the things we are most concerned about? And these models tend to fall into that category.
Noelle:
[00.41.35.23–00.41.56.06]
So, we are seeing companies like—SambaNova is a company I got to work with quite a bit a few years ago—where they—not everybody has GPUs and FPGAs. And I know not everyone has all the processing capability. And they may not want to invest in Nvidia, for whatever reason.
Noelle:
[00.41.56.11–00.42.16.10]
And so, this company decided—this was years ago—but they decided, they were like, wouldn’t it be cool if we could just roll in a rack you could rent? And I say that with air quotes, right? You could rent a rack that could run custom LLMs that are fine-tuned to not just your industry, but to your organization.
Noelle:
[00.42.16.12–00.42.42.06]
And companies like Cohere. Right. Cohere is a frontier firm. I was on stage with their leadership just recently, and they—I mean, their entire business model has been around data sovereignty, model sovereignty. Right. Owning the model. Create—like, they create a generic version. You purpose it on your own hardware, and it never, ever hits the internet.
Noelle:
[00.42.42.06–00.43.07.19]
These are things that you can do that I don’t know that everybody realizes—that you can build an LLM that sits on your hardware in your data center, even in NPR. NPR has a bunch of—their racks are in the second floor of their building. Right? Their hardware is in their building. And so, now I can go in and say, okay, I want to actually run my LLM on my own hardware.
Noelle:
[00.43.07.19–00.43.27.05]
And I don’t know if you saw, Jensen Huang recently had a conference—like, he’s holding in the palm of his hand, right, this massive computing power that used to sit on an entire floor of a building, and now it’s in the palm of his hand. So, I think a lot of this has changed to give sovereignty as an option.
Noelle:
[00.43.27.05–00.43.51.23]
It was very expensive to do that in the past. But just like with what we talked about earlier, I think data sovereignty, infrastructure sovereignty—it’s a choice we can make now. And it’s less expensive than it’s ever been. So, now I can own my LLM, and I never have to run the risk of any kind of data leakage or any kind of public cloud infrastructure challenges that some of these vendors offer.
Matt:
[00.43.52.01–00.44.16.16]
Yeah. Do you see—one of the things that we, especially globally, this infrastructure sovereignty question is coming up a lot more, as far as who can see it, who can control it? Is it—is the entity that can control it in our interest, or is it—can it become adversarial from a geopolitical standpoint?
Matt:
[00.44.16.18–00.44.22.20]
Do you see these tools? I—say the name of the rent a rack again, I didn’t—
Noelle:
[00.44.22.22–00.44.26.01]
Oh, yeah. SambaNova. Samba—like the dancing.
Matt:
[00.44.26.01–00.44.29.09]
Samba. That’s it. Or the Adidas shoes. Yeah.
Noelle:
[00.44.29.11–00.44.31.10]
Yeah. Yeah, exactly. Ooh, I like that better.
Matt:
[00.44.31.14–00.44.58.11]
Oh, man, I love those shoes in the 90s. So—but do you see those kinds of approaches like that evolving to address that concern specifically—like, other jurisdictions and other countries and things like that? ‘Cause, obviously, the US is a massive player when it comes to tech innovation. I mean, we’re—we dominate this space.
Matt:
[00.44.58.13–00.45.16.17]
And so, it’s something I would imagine, to some folks outside of that, it could look concerning. And so, from your perspective, do you see that kind of geopolitical—creating boundaries around various AIs as opposed to this giant global AI that kind of runs everything?
Noelle:
[00.45.16.22–00.45.41.13]
Oh, absolutely. I think that there’s a couple of motivations. I mean, we’re starting to see this even in just legislation in the US, and even legislation in the European Union, right? The EU AI Act, and a lot of the things that they dreamed about three years ago are now finally being implemented—meaning the timeline has elapsed from “let’s think about this” to “let’s do this.”
Noelle:
[00.45.41.15–00.46.02.01]
One of the things, for those professionals who are kind of like, I want to get an insight into—’cause we’re talking about, kind of at the surface, these geopolitical constraints. And sometimes—I personally, I’m always like, how do I know more? I want to know more about that. I want to know where are we as a country?
Noelle:
[00.46.02.03–00.46.27.06]
Depending on where you’re from—whether you’re in Europe, or China, or the US. And I also am very interested in regulation that’s currently on the floor of the House, right? Or on the floor of Congress. And so, Stanford releases a report every year called the AI Index Report. And it does evaluate kind of the data points that I’m speaking from.
Noelle:
[00.46.27.07–00.46.47.16]
And I think it’s really valuable. As a matter of fact, they just did a podcast based on this report, and they were referencing some of the experts that you had mentioned on this—which, again, are dystopian and a bit negative, but that’s fine. No problem. That’s not my vibe, but it’s good.
Noelle:
[00.46.47.16–00.47.27.03]
It’s good. But the reason I like this report is because it actually gives you hard, fast data around where the models exist. And to your point, you can have a model exist in a country. And this is what happened with TikTok, right? With Bitwise, or whatever the name of that company is. But—and with many companies—is that where your data sits, that data sovereignty is often a higher priority than model—where the model is. Also, management—who manages it, meaning AWS managing the infrastructure and where the actual model and data resides—can be in different countries.
Noelle:
[00.47.27.05–00.47.46.10]
But you’ll—this is where governance policy, governance rules—what happens in these scenarios—is really important for each company to kind of lay out. It’s what I do with a lot of my clients, is I just sit down and say, what if, right? That’s kind of a cyber professional’s dream—it’s like, what if this happens?
Noelle:
[00.47.46.10–00.48.11.08]
What about this? And how do we mitigate these risks? So, companies like SambaNova, Digital Realty, Rackspace—these organizations are thinking through what happens in a world where it’s just—it’s not even that there’s a potential bad relationship within a company or geopolitical—like, what if you just don’t even want that to be possible.
Noelle:
[00.48.11.08–00.48.31.22]
What if you could just own it? And that’s what I think—like, we call it private LLM. So, building a private LLM. Another—for those of you who want to play around with this concept on your own laptop, you will need a relatively hefty laptop to do it. But there’s a GitHub repo called Ware—kind of like software, but LLMWare?
Noelle:
[00.48.32.03–00.48.55.19]
And it allows you to host an LLM on your computer, so you can kind of see where the edges are. Right. And you can understand, at a microcosm, what—why this is so valuable. Because then you get to define truth. You get to define—right—its data structure. You define its training. You define its inferencing. You also control how it’s inferenced, and the cost of that inferencing.
Noelle:
[00.48.55.19–00.49.30.02]
So, there’s a lot more capability that you get when you own it. The challenge with that, though, is that you now need to have people, resources, hardware to own it. So, the cost goes up, but your control goes up. And many people in regulated industries and government—fed, federal, local, state—are like, actually, I think I’d be interested in this specific use case, owning the whole rack as opposed to renting the rack and owning the model, or renting the model and the rack and owning the data.
Noelle:
[00.49.30.02–00.49.37.19]
Right. These are all just different layers of control that you’ll have to make very, very specific decisions on, in an organization.
Matt:
[00.49.37.21–00.49.59.18]
Yeah. Do you—from an—from a security perspective, how do these sovereignty issues shape the perspective of someone who’s trying to secure it? I mean, do you see the cybersecurity side of the house making these decisions? Or is it—who owns the data and where does it sit?
Matt:
[00.49.59.18–00.50.07.19]
Who—is that coming from the security space, or is that coming more from higher-level executives? Or are they asking those questions, and it’s coming down on cyber?
Noelle:
[00.50.07.21–00.50.31.09]
That’s a good question. I would say, if I’m in an executive room, I am presenting these questions. I always say—I talk about the Death Star, and what would happen if the Death Star acquires your company, right? How would you protect your organization? And we’re seeing Death Star acquisitions, where leadership is so drastically different, right, from the company they’re acquiring.
Noelle:
[00.50.31.09–00.50.58.05]
And that company they’re acquiring has a bunch of data and models, and they built it under a certain level of expectation, and all of that changes. Right. And not just from a political perspective, but just from a leadership ownership perspective. So, I think some of the things that we should be thinking about, correlated to national sovereignty, would be number one.
Noelle:
[00.50.58.05–00.51.30.17]
I do think security is typically—or, I always say, invite—and now, granted, I’m talking—I’m preaching to the choir here. But I always tell leaders, you should have people in the room when you’re building AI systems or architecting your strategy. You should have people in the room you typically ask last. So, AIs flip this script. It forces you to, or it encourages you to, invite people to the conversation that typically you wouldn’t talk to till the end of that deployment—which is security, legal—
Matt:
[00.51.31.04–00.51.33.03]
Now you gotta bring the security guys in the room.
Noelle:
[00.51.33.05–00.52.01.20]
Yeah. So, but if you—I always say, never build for people without the people that could actually solve the problem and help them. So, invite security into those ideation sessions, because then, as you’re building, training, selecting, and evaluating the models, the security team is going to ask questions that will help you pick a better model, rather than picking a model, finding out that it has inherent security flaws that nobody thought to even ask about baby tiger.
Noelle:
[00.52.02.00–00.52.45.14]
And then you put it into production. It fails. You have to start over, right? You have to go pick a new model. And so, I—get—I—like I said, I’m preaching to the choir, ‘cause obviously we’re the ones who want to be in that room. But I do a lot of executive discussions, executive briefings, where I’m encouraging them to really rethink their AI governance and their AI—when they build out their AI strategy and create a governance board, or create what we call a center of excellence for AI, the first members should be the last ones you typically want to talk to—which is the ones that would.
Noelle:
[00.52.38.07–00.52.45.14]
And again, I always say, security people aren’t going to slow you down if you ask for their help at the beginning.
Matt:
[00.52.45.16–00.52.46.13]
Bring in them first, right?
Noelle:
[00.52.46.15–00.53.11.06]
Yes. The only reason we slow things down is ‘cause you come to us at the end, and then we have to be like, whoa, whoa, whoa, what are you doing? But if we’re part of the design, we have a—just a much better way. So, also helping expose model supply chain transparency. This is something I think—just understanding the end-to-end machine learning lifecycle, I think, is really critical.
Noelle:
[00.53.11.06–00.53.24.20]
This is part of the awareness part for security professionals. The more you know about how an AI system is built, the better your mind can wrap around different phases and the risks exposed in each phase. And they’re—
Matt:
[00.53.24.22–00.53.25.07]
Yeah.
Noelle:
[00.53.25.07–00.53.30.06]
It’s very different than a typical software solution. So, it does require some special skills.
Matt:
[00.53.30.08–00.53.52.12]
Yeah. We’ve talked a lot about the risks and, sort of—at the risk of getting a little dystopian there, so I can—you and I seem to be more on the optimistic scale about some of this stuff. From your perspective, what’s—what gives you optimism? What are you hopeful and optimistic about from a future perspective with AI and cybersecurity?
Noelle:
[00.53.52.17–00.54.14.09]
I think the nice thing right now is that we can operationalize trust. We have the opportunity. So, one of the platforms that I built is called Assured Trust. And the whole goal of this platform is to just give visibility to your baby tigers, basically. Right. What are the AI systems—what are the baby tigers in your company?
Noelle:
[00.54.14.09–00.54.35.10]
Most organizations, when I go in there, they have no idea. We call it shadow AI, right? There’s all sorts of—some people are on ChatGPT, some people are on a free version. Some people are using Gemini on their phone—it’s all over the place. And so, just getting a handle on what AI, how AI is being used, building a usage policy.
Noelle:
[00.54.35.10–00.54.58.12]
Most organizations don’t have that either. But right now, we—I’m optimistic because we have technology that allows us to actually see all the things. I can go to Lovable, which is a vibe coding platform if you’re not familiar, and just ask it to build me a dashboard, right? I don’t need to enlist a development team or spend six weeks—like, I can do things.
Noelle:
[00.54.58.14–00.55.16.22]
I can go from idea to execution in an hour. So, I want to be able to operationalize who’s accountable for each one of my AI systems. Is the human in the loop, on the loop, out of the loop? That should be a variable that I can tell if something’s going wrong with the system. Is there AI involved?
Noelle:
[00.55.16.22–00.55.44.17]
Which AI is it, and who? What human is responsible for that? That visibility is capable now, and I think that’s—that gives me hope. Now, granted, we have to have leaders who actually care enough to look at that data. And then the—but the other thing I also like about the world we’re in right now is that, if, as a cyber professional, as someone who cares about security risk, we can create and raise the defensive baseline for our organizations.
Noelle:
[00.55.44.22–00.56.20.01]
Again, relatively simply, if we build—if you go into most organizations, their cyber team is often understaffed. They often don’t—they’re the first budget to get cut, especially if you have 90 days since last incident or four years since last incident. They’re like, it’s good, you don’t need that. And so, now smaller teams can triage faster, build better correlations, identify better documentation, do better anomaly detection—all with the help of an AI workforce, right, that supports these teams.
Noelle:
[00.56.20.01–00.56.36.22]
So, this is not an industry where I’m like, oh no, you’re going to lose your job because AI is going to take it like—this—security is one of those areas where we don’t have enough people who care about this. Team—companies—there aren’t enough people in these companies doing this work. So, how can I help?
Noelle:
[00.56.37.03–00.56.55.20]
Personally, I’m like, how can I use AI to help me just serve more people and manage more, like, defense management inside an organization all by myself? Because oftentimes there aren’t enough people doing that work. So, both those things, I think, are good, right? One, we can operationalize it and see it. And that means I can show it to a CEO.
Noelle:
[00.56.55.22–00.57.12.07]
And they’re like, oh, I see why you need that money. And I can raise my own defensive baseline because I have technology now that can support me. And the cost of that technology is getting cheaper every day. And tho—both those things make me much more optimistic about what we can do to protect our companies.
Matt:
[00.57.12.09–00.57.18.19]
Ironically, it sounds like you’re saying that AI can actually create job security for cybersecurity folks.
Noelle:
[00.57.18.21–00.57.21.15]
Right? Exact—a hundred percent, a hundred percent. I believe that.
Matt:
[00.57.21.18–00.57.48.11]
So, okay, so the final question here—usually we ask something like, what’s your favorite cocktail, ‘cause it’s all The Lock & Key Lounge theme here. But I wanted to try something different with you this time. Imagine you’re at a bar, and it’s a nice one, right? The kind that people don’t talk loudly in, and at the other end of this bar is someone in security you’ve been dying to talk to.
Matt:
[00.57.48.16–00.57.51.16]
What cocktail do you order? And who’s at the other end of the bar?
Noelle:
[00.57.51.19–00.57.54.02]
Am I buying this drink for this person?
Noelle:
[00.57.55.12–00.57.57.05]
Or is it just for me to walk over there?
Matt:
[00.57.57.05–00.58.00.16]
Yeah, it’s just you. It’s like, what do you kind of arm yourself with before you head over there?
Noelle:
[00.58.00.17–00.58.23.14]
Yeah. All right. I think it’s the same answer then. Yeah. So, last night, I actually was at a restaurant called Bourbon Steak, and Michael Mina is the chef who created that franchise or that chain. And similarly, it was a—he was there. Michael Mina was there. You got to meet the chef.
Noelle:
[00.58.23.14–00.58.51.11]
And it was this very high-end party. So, similarly, I had an opportunity to go meet. And when you walk in, you had a—the servers had on their tray three options. They had a pink—a rosé champagne. They had a gin and tonic. And then they had an old fashioned, and I think it’s similar—I feel like you’re saying a little bit about yourself depending on what you wanted to pick.
Noelle:
[00.58.51.13–00.59.12.16]
So, okay—and so, old fashioneds are my jam. I love old fashioneds. This was a very typical old fashioned. I’m a smoked old fashioned person. I like to add different types of wood and stuff, but yeah, I would totally walk over with an old fashioned because, I don’t know, I—I’ve come to associate, kind of, old fashioned with truth.
Matt:
[00.59.12.18–00.59.13.06]
Yeah.
Noelle:
[00.59.13.08–00.59.14.11]
Like—
Matt:
[00.59.14.13–00.59.17.12]
It conveys a certain part of who you are. Yeah, exactly.
Noelle:
[00.59.17.12–00.59.25.15]
Absolutely. So, yeah. So, I feel like they may be—we’re doing their own little experiment last night, but yeah, I would pick an old fashioned and walk on over.
Matt:
[00.59.25.17–00.59.35.00]
Who would—who do you—who was—who’s on your list of I would literally walk over to them in the middle of a restaurant from a security standpoint—like, who would you want to talk to?
Noelle:
[00.59.35.02–01.00.09.21]
Oh, goodness, that’s a good question. So, I got an opportunity at RSA to be on stage at the same time—in a two-hour period—with the CISO at Meta, Anthropic, and AWS. And from that point on, like, not what they said on stage, but the little funny conversations they were having off stage, which I kind of sat back and just listened to while I was still one of the five of us that were there.
Noelle:
[01.00.09.23–01.00.46.14]
So, I would—I’m kind of cheating, but I’d be like, hey, the best conversations is watching CISOs in these organizations talk. And one of the things that they said to me back then was that they were building security agents—little agents for every senior executive—meaning that—it’s—and we were talking about Constitutional AI at the time, but basically they were building in their security profiles into agents that would monitor all the traffic inflow, outflow, data requests, everything that that executive would be doing.
Noelle:
[01.00.46.16–01.01.05.21]
They’d have an—literally a personal cyber team. But a single agent that would just provide—they called it nudging, not nagging—guidance. Right. Just little nudges, like, so I wouldn’t maybe do that, or yeah, this is a good idea, you should probably do this, or hey, your password can’t be password one.
Noelle:
[01.01.05.23–01.01.25.04]
So, it was really interesting to hear them talk just kind of anecdotally about that. So, yeah, I would go after CISOs of the—any CISO/ISO of the lar—the Magnificent Seven. Any of those I would get out of my chair and walk over and be like, hey, I know you don’t know me, but here’s my book.
Matt:
[01.01.25.06–01.01.44.20]
Yeah. That’s it. Now my brain’s turning on this, ‘cause you could almost use—with that—I—we’re going down a rabbit hole here at that very end. But this—the—that personalized security agent could then become something that kind of prevents deepfakes, because this thing has learned your patterns.
Matt:
[01.01.44.20–01.01.53.01]
And if someone else could ping it and say, hey, he’s doing this or asking this, you could—it would almost be able to say that’s out of character for this person.
Noelle:
[01.01.53.01–01.02.12.15]
That’s right. That person would never send you that. Exactly. Now, the only challenge is, as we know, many of the attacks in our company come from within. So, yeah. So, there’s that, but yeah, agreed. This becomes a bit of a monitor on—a indexer.
Noelle:
[01.02.12.17–01.02.20.01]
There’s so many opportunities to take knowledge. We already know about someone who used that to protect them against their own better judgment.
Matt:
[01.02.20.03–01.02.32.17]
Yeah. Noelle, this has been a fantastic conversation. I appreciate the energy and the insight that you’ve brought to this stuff. So, thanks. Thanks so much for taking this time to do it.
Noelle:
[01.02.32.19–01.02.35.22]
Absolutely. It was my pleasure. Really fun. Thanks for having me.
Matt:
[01.02.36.00–01.03.12.03]
Yeah. And listeners, thank you for joining us on this latest edition here of Lock & Key Lounge. If there’s one thing that we can take away here, AI is no longer just a tool. It’s becoming an active participant in decision-making, personal and operational risk management—other things like that. And the future of cybersecurity depends not only on smarter systems, but on smarter people, trust, sovereignty, human leadership, and judgment.
Matt:
[01.03.12.05–01.03.41.21]
Organizations that succeed will design these AI strategies to prioritize resilience, secure coordination, accountability, even when these systems are under pressure. So, until next time, be well, stay curious, and do good work. We really hope you enjoyed this episode of The Lock & Key Lounge. If you’re a cybersecurity expert or you have a unique insight or point of view on the topic, and we know you do, we’d love to hear from you.
Matt:
[01.03.41.23–01.04.03.12]
Please email us at Lounge@ArmorText.com or our website ArmorText.com/podcast. I’m Matt Calligan, Director of Revenue Operations here at ArmorText, inviting you back here next time, where you’ll get live unenciphered, unfiltered, stirred—never shaken—insights into the latest cybersecurity concepts.