In this episode of Threat Vector, host Michael Heller, Managing Editor for Cortex and Unit 42 and Executive Producer of the podcast, sits down with long-time security leaders Greg Conti and Tom Cross to unpack the hacker mindset and the idea of “dark capabilities” inside modern technology companies. Greg, Principal at Kopidion Cybersecurity and a founder of the Army Cyber Institute, and Tom, Head of Threat Research at GetReal and Senior Associate at Kopidion, explain why the real risk is not just what a product is supposed to do, but everything it is technically capable of doing in the hands of insiders, governments, or determined adversaries. Drawing on their DEF CON trainings in adversarial thinking and recent talks on effects-based operations for tech companies, they explore how security leaders can systematically map their organization’s hidden capabilities, stress-test them with an “if we decided to be evil” lens, and then build the technical and institutional guardrails that keep both people and platforms aligned with ethical and strategic goals. This conversation is especially important for decision makers tasked with securing the workforce in an era of AI, pervasive sensors, and increasingly blurred lines between defense and offense.
Transcript
[ Music ]
Michael Heller: Hello, and welcome to Threat Vector. I'm Michael Heller, Executive Producer of Threat Vector and senior content guru at Palo Alto Networks.
Greg Conti: It's not what a product or system claims to do or says it does, or even its marketing copy, you know, says. It's what it has the ability to do, the true capability. With that in mind, you can operate more effectively. [ Music ]
Michael Heller: I'm filling in for David Moulton with a special episode recorded at DEFCON 26 with Greg Conti, principal at Kopidion, and Tom Cross, a threat researcher at GetReal and a principal at Kopidion. In this episode, we dig into the hacker ethos and how it led Greg and Tom to talk at DEFCON about a gap in security that most don't think about. That is, digging into the difference between what a device or company is designed for and what it's actually capable of when put into the hands of someone interested in exploiting that difference. This form of curiosity is at the core of DEFCON, and it's what makes the conference special in a world where most professional conferences have become a vehicle for marketing. I had a great time talking with Greg and Tom, and I hope you'll enjoy the discussion. Let's get into it. Welcome to Threat Vector.
Tom Cross: Thanks for having us on.
Greg Conti: Yeah, it's great.
Michael Heller: Can you give me a little rundown of your talk today, minus all the technical issues?
Greg Conti: The technical difficulties that we ran into. Yeah, so our talk is called Dark Capabilities: When Companies Become Threat Actors. And so what we're -- what we're talking about is, you know, it tends to be the case that, you know, like, if you think about a company, there's a set of capabilities that they have that they utilize, right? And then there's a set of potential capabilities that they have that they don't utilize. And we also think there's probably a set of capabilities that a company has that they don't realize that they have. And so when you think about those things, you know, what if you decided to flip the coin over on that and you said, "What if we decided we wanted to be evil?" You know, how -- in what ways could we be evil, right? You could imagine a company self-assessing for that and coming up with a list of ways that they could be evil. Again, capabilities that they're aware of, capabilities that they deliberately don't use. Maybe they would discover capabilities they didn't realize they had. And then you can ask a set of questions about that. Maybe there are architectural or political checks and balances we want to place within our company to ensure that we'd never use those capabilities, right? And so, we started talking about, in the context of that, the relationship between companies and governments, particularly in times of conflict. And, you know, how those capabilities that you may not even realize you have and you certainly wouldn't imagine using, you might find yourself in a position where you are using them. And so, we think it's worthwhile to have that conversation and to imagine those things and to start thinking about, you know, what position you want to take and how.
Tom Cross: Yeah. And I'd add to that that the end users only see the tip of the iceberg of what the true capabilities are. If you imagine a social networking site, right? You think, "Oh, I can -- I can -- " you connect with people and look people up in this directory. The social networking site knows every direct message ever sent, the day everyone joined, every photograph ever uploaded, every IP address that people connected from. And they have the graph of the entire network. And so they only expose a little tiny fraction, and so many people think that that is the full, you know, the full end state of what that company can do, when, in reality, it's like 0.01%. So the question is, what are the circumstances where those capabilities would be used -- by a company decision, by an insider threat actor, by an external threat actor trying to get in, or by a government that has the ability -- the Defense Production Act, say -- to compel companies to use their capabilities. So, anyway, that was -- it was a good talk. It was good fun. So the back story of our talk is that Greg and I gave a talk at a very -- well, let's say a corporate computer security conference, and we had a slide in it that talked about capabilities that companies might use in a military conflict that they don't realize they have, right? How could they -- how might they use the capabilities of their organization in an offensive way in the midst of a conflict, which they might choose to do, depending upon their valence to that conflict, right? And the conference was very uncomfortable with us having that conversation.
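To make the iceberg point concrete: below is a minimal sketch, with invented field names rather than any real platform's schema, of the gap between what a social networking site shows a user and what its operator actually retains.

```python
# Illustrative sketch of the "iceberg" gap between what a platform shows a user
# and what its operator retains. Field names are invented, not any real schema.
from dataclasses import dataclass, field

@dataclass
class UserProfileView:
    """What an end user sees through the product's UI or public API."""
    handle: str
    display_name: str
    follower_count: int

@dataclass
class UserBackendRecord:
    """What the operator actually retains and could query at will."""
    handle: str
    display_name: str
    follower_count: int
    join_date: str                                        # when the account was created
    direct_messages: list = field(default_factory=list)   # every DM ever sent
    login_ips: list = field(default_factory=list)         # every IP ever connected from
    uploaded_photos: list = field(default_factory=list)   # every image ever uploaded
    social_graph: set = field(default_factory=set)        # the full connection graph

def user_visible_share() -> float:
    """Rough illustration of how little of the retained data is user-visible."""
    return len(UserProfileView.__dataclass_fields__) / len(UserBackendRecord.__dataclass_fields__)
```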
Greg Conti: They asked us to remove the slide.
Tom Cross: They asked us to remove the slide. And so Greg said, "Well, okay, we're going to do an entire talk based on that slide, and we're going to do it at DEFCON, where we're allowed to -- "
Greg Conti: Because we could have -- yeah, we can have the conversation.
Tom Cross: We're allowed to wade into these, like, ethically challenging discussions. And I think it's great. Like, DEFCON is the right room for these kinds of dialogues. And again, my point is that, you know, that they're vital to have. I think it's valuable to, you know, put on the black hat and look at things from that perspective, and understand that. And then what you choose to do with it is your decision, right? And so it's -- you know, any tool has both, like, malicious and beneficial uses.
Greg Conti: Before your adversaries do the same to you.
Tom Cross: So one of the things that we recommended was that, you know, the governments consider this. So we talked about what companies should do, which is something we've discussed. We also talked about what governments should do. And governments, you know, could think about, like, what kinds of capabilities exist within companies that could be used, in, you know, certain, we'll say evil ways, right? But then they have to ask -- you know, one -- maybe they want to use them, right? But then they have to ask, maybe another state will come in and use them in a way that's not aligned with my strategy, right? Or maybe the people that run that company will use that capability in a way that's not aligned with my strategy. And this really happens in places where conflicts are occurring, you know. Companies may independently shut off a satellite system, you know. And so they're making their own choices that affect, you know, the course of events, right? And so, you know, looking at all -- you have to understand what the capability is to ask those three different questions, right? And then, you know, what can you do to make sure that that capability is, in fact, used in a way that's aligned with your strategic objectives and not someone else's?
Michael Heller: Yeah. It does sound -- it does sound very interesting. It seems like there's a lot of different ways you can go with that because -- I mean, taking any one of those three -- insider threat, external threat, or government, I don't know if I want to use the word "coercion," but -
Tom Cross: Commandeering.
Michael Heller: Yeah, commandeering. Like, yeah, any one of those could be a good conversation starter.
Tom Cross: Yeah. I mean, another thing is that, like, we've been having these conversations about companies, tech companies, in particular, for the past few decades that are, like, privacy-centered. Privacy is the most significant implication of, you know, information technology, right? But, increasingly, we're deploying things that, you know, are robots, drones, things that can affect the world in a variety of other ways besides just collecting data about it. And those things, you know, have sort of potential negative consequences that are entirely new that we haven't -- like we haven't thoroughly considered at this point.
Greg Conti: Yeah. So the key takeaway for us was, it's not what something claims to do. It's what it actually has the capability to do. And we had some fun with an evil robotic vacuum. What could an evil robotic vacuum do? Well, it could -- it maps your house, of course, right? It could listen to all your conversations and report back for ideological compliance. It can -- it's literally a vacuum, so it could be harvesting DNA, right? It could be sniffing. It could be wardriving all the short-range Wi-Fi in your house.
Tom Cross: Duplicating your access card for your office, you know, so, and we found some really interesting vulnerability research that people have done into robot vacuums. And there's -- there's -- some of them don't do a good job deleting data. Some of them collect more data than you'd expect. Many of them have, like, AI image recognition capabilities, and some of -- you know, sometimes there's code in there for doing face recognition. It's like, why is that code even there, right? So, you know, there's -- it's -- it's -- if -- if you think about, like, what is the ultimate malicious vacuum? I mean, George Orwell could only imagine a television, but, you know, we have things that move around, and increasingly, we're going to have more and more of them over the next, like, five to 10 years.
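The vacuum brainstorm can be written down as a simple device-level inventory that separates a product's stated purpose from what its hardware and firmware make technically possible. A minimal sketch, with illustrative capability names rather than claims about any real product:

```python
# Device-level "dark capability" inventory for the robot-vacuum example.
# Capability names are illustrative, not claims about any real product.
ROBOT_VACUUM = {
    "declared_purpose": [
        "clean floors",
        "map rooms for navigation",
    ],
    "latent_capabilities": [
        "retain and upload floor plans of the home",
        "record audio through onboard microphones",
        "collect physical material (hair, skin cells) in the dustbin",
        "survey nearby Wi-Fi and other short-range radio devices",
        "read access cards or tags left within sensor range",
        "run image or face recognition on camera frames",
        "cause deliberate destructive malfunction",
    ],
}

def dark_capabilities(device: dict) -> list[str]:
    """Capabilities the device could exercise that its marketing never mentions."""
    declared = set(device["declared_purpose"])
    return [cap for cap in device["latent_capabilities"] if cap not in declared]

# dark_capabilities(ROBOT_VACUUM) returns the full latent list: none of it is declared.
```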
Greg Conti: And consumers are just going to be stuck at the surface level of what, you know -- just they're going to be reading the marketing copy and parroting that back. So it's cool. Like, the hacker community, there are people who specialize in reverse engineering robotic vacuums for years and years. So, like, the hacker community has, like, world-class expertise, and they have the right mindset to think about these things.
Michael Heller: Yeah, I remember, maybe five years ago at DEFCON, there was a woman who was researching, like, social engineering using robots. Like, because if you have a little robot vacuum, it can be kind of cute. You send it into somebody's room with, like, a little package, you know, that seems unthreatening. But it's an easy way to deliver something that could be dangerous.
Tom Cross: Yes, absolutely, right? Yeah, we did include intentional destructive malfunction on the list of capabilities that it could have, right?
Michael Heller: And so, going through this exercise, like, what are the main things that you think companies probably are not thinking of that could be used in a malicious way?
Tom Cross: The main things that companies are not -- I mean, I think it depends entirely on the nature of the company, right? So it's really about, like, should you do the sort of assessment I suggested, where you flip the hat, and you say, "We're going to be evil." What's the list of things that we could do? And I think, also -- one of the lenses through which we considered this is -- again, because we spent a lot of time looking at the relationship between companies and governments. Sometimes companies are operating independently in a way that is not aligned with, like, basically the national strategy of their country, right? And other times, you know, governments, you know, are commandeering capabilities from companies, right? So, you know, when a government is going to commandeer a capability, they're going to look systematically through the lens of what they're trying to accomplish. So, in a time of war, governments are interested in collecting intelligence. They're interested in influencing public opinion. They're interested in engaging in reconnaissance. They might want to know, you know, what the inside of buildings is like, which the robot vacuum will give them. They're interested in, you know, getting access to networks, right? So maybe, you know, a Wi-Fi light bulb company might be an interesting way for them to get, you know, sort of access to a network that they otherwise don't have access to. And so, you know, depending upon the nature of the technology your company creates, you know, that tells you, you know, what potential malicious or evil use cases the technology you have could be put to. And then you can assess, you know, again -- like, a lot of companies assess for maybe customer misuse, or they assess for insider unauthorized misuse. Usually, they do not assess for, like, misuse by the management team or misuse by a government that commandeers the capability. And, you know, it's like, are there guardrails, like, institutional practices or architectural -- like, technical architectures you can put in place that would limit those misuses if you desired to, right? And so, like, the idea of the Ulysses pact. So, Ulysses wants to sail through an area where there are these Sirens, and he doesn't want to be attracted to their song and change course. So he binds himself to the mast of his ship and sails it through that area, and, you know, the Sirens show up, but he can't change course.
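One way to make the "flip the hat" self-assessment repeatable is to enumerate each latent capability against the misuse actors named above (customers, insiders, the management team, a commandeering government) and record whatever guardrail, if any, constrains that pairing. The sketch below uses invented example entries purely to show the shape of the exercise.

```python
# Organization-level "if we decided to be evil" self-assessment, following the
# structure described above: capability x misuse actor x guardrail.
# All entries are invented examples, not findings about any real company.
from dataclasses import dataclass
from typing import Optional

ACTORS = ("customer", "insider", "management", "commandeering_government")

@dataclass
class CapabilityRisk:
    capability: str            # something the company could technically do
    actor: str                 # who might misuse it (one of ACTORS)
    guardrail: Optional[str]   # technical or institutional check, if any exists

ASSESSMENT = [
    CapabilityRisk("query any customer's stored telemetry", "insider",
                   "access is logged and reviewed by a separate team"),
    CapabilityRisk("push a silent firmware update to every device", "management", None),
    CapabilityRisk("search all endpoints for a specific file hash",
                   "commandeering_government", None),
]

def unguarded(assessment: list[CapabilityRisk]) -> list[CapabilityRisk]:
    """The rows that most need new architectural or institutional checks."""
    return [row for row in assessment if row.guardrail is None]
```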
Greg Conti: Yeah. And I'd add too that we've found that -- we -- like, we teach a course in adversarial thinking. And so, like, this was kind of an embodiment of that, you know, the logical flow for the talk today. But it speaks to the larger problem, you know, where companies need to reflect on their own capabilities. We found that adversarial thinking is teachable, right? And we have people cheat in class and cheat on a test, and some other things. But by the end of it, they're better tuned. So the idea is, like, companies can better reflect on their own capabilities with the right color hat on, if you will.
Michael Heller: And I was -- yeah, I was just about to ask that. Is this -- is this something that you would teach like a red team to do? Like, who in the company should be responsible for doing this?
Greg Conti: Well, so we've taught -- I've taught red team training to red teams, but I also worked with a company that had their own, basically, red team services, but they brought in their marketing person, the salesperson, the development person. And you know what? It helped them all. Because the salespeople came away understanding, you know, the mindset of information security people, right? And the mind of the customer, how they're thinking. But at the same time, it helps early- and mid-career InfoSec people, and developers for sure, right? Like, they're coders; they're not thinking like an adversary, right?
Tom Cross: It comes back to this adversarial way of thinking. So maybe it would help to, like, sort of double-click on that a little bit and talk about, you know, what I mean. And one of the things that we talk about in our adversarial thinking class -- like, we have some content in there that tries to sort of capture the mindset of the hacker scene. And I think that the mindset of the hacker scene is sort of like -- it's like anthropological. And so, you know, when you -- when you consider the career path of a professional engineer -- and I've followed this career path. You go to elementary school and you learn math, and then you go to college and you learn some more sophisticated math. You learn calculus. And then eventually you get to a place where you start to, like, do engineering. And by the time, you know, you're a senior in an electrical or computer engineering program, maybe you're finally, like, making a computer, right? So, like, you have to build up all this knowledge to get to the point where you have this artifact. Also, in my life, I had the opposite path, which is, you know, I'm a kid. I've been given this computer, and I have to go top-down in terms of trying to understand it. So I start by playing games. Then maybe I, like, learn how to work with the operating system. And then, you know, maybe I learn how to write software. And, you know, maybe I get into, you know, some of the electronics at some point. You know, so you're coming from the top down. And the difference in perspective, you know, that exists there is -- and this comes back to, I think, a point that Greg made earlier. An engineer created this artifact, and they did so because they intended it to be a certain thing, right? And when you're coming from the top down, you don't necessarily have that mental model. You don't know what this thing is supposed to be. You have to discover that, right? And it often turns out that the reality of the artifact is not the same as the intent. And that gap between what the thing really is and what it was supposed to be is the space in which a lot of, like, interesting capabilities or security vulnerabilities exist, right? So I think developing that mindset -- I used the word "anthropological" because it's like, imagine I discovered this thing, and it's thousands of years in the future, and I don't have any cultural context for it. I don't know what it's supposed to be. The only way that I can figure out what to do with it is through discovery, right? And so it's like, you know, William Gibson said the street finds its own uses for things, right? Because there are things that are made for particular purposes, but, you know, folks that get them don't necessarily have that cultural context, but they find other things that they can do with them that are relevant to their world, right? So that's what I think is the essence of security vulnerability research. It's like finding -- you know, finding the truth about the nature of these artifacts. And I think that if you're really good at thinking that way, then you can take a company's products and services, and you can ask, what is the truth? What's actually possible? So, for example, let's take an antivirus product. Antivirus products look at files, right? And they're supposed to only look for viruses, but what if I went to an antivirus company and I said, "Well, this document was leaked. I'm going to give you the hash for the document. 
I want you to search all your customers to see if any of them have a copy of this document." Right? You know, the antivirus product becomes a surveillance system. So the thing is that when you point out something like that, people will say you're crazy, right? You know, this antivirus company is never going to do anything like that. That's insane, right? And so the point is that that makes -- that's true today, assuming that company is financially motivated and ethical, right? Then we assume that they're not going to misuse the capability that they have. The point that we're making is that, let's say, there's a war, you know. The consequences could change. Somebody could show up with a Defense Production Act and say, "We're taking over, and we need you to do this stuff." And you may want to do this stuff because the circumstances are such that it feels dire.
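The antivirus example is worth spelling out because the mechanism is so ordinary. A purely illustrative sketch, not any vendor's code, of how a hash-matching scanner doubles as a document-surveillance tool:

```python
# Illustrative only, not any vendor's code: the same hash-matching loop an
# endpoint agent uses to flag known-bad files will, with one extra entry on its
# watchlist, report which machines hold a specific leaked document.
import hashlib
from pathlib import Path

MALWARE_HASHES = {
    "placeholder-sha256-of-a-known-malware-sample",
}
LEAKED_DOCUMENT_HASH = "placeholder-sha256-of-the-leaked-document"

def scan(root: Path, watchlist: set[str]) -> list[Path]:
    """Return every file under root whose SHA-256 appears on the watchlist."""
    hits = []
    for path in root.rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in watchlist:
                hits.append(path)
    return hits

# The same scanner, two very different uses:
# scan(Path("/home"), MALWARE_HASHES)                            # antivirus
# scan(Path("/home"), MALWARE_HASHES | {LEAKED_DOCUMENT_HASH})   # document surveillance
```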
Michael Heller: Once you find these points, these products that can be used in a malicious way, whatever, what then? Like, obviously, you can put in policies where, you know, you're not going to comply with government, right?
Greg Conti: You aren't going to comply with them.
Michael Heller: You can't -- you can't -- you can't put in policies. You have to put in technical -
Tom Cross: Well, I mean -
Greg Conti: Or remove some capability.
Michael Heller: Yeah, you have to remove capability, or you have to --
Tom Cross: I think you have three -- yeah. So what is the list, right? So you -- certainly, you could remove capability, right? You could have a technical architecture which makes this thing either difficult to do or which makes it transparent if done, right? There are also, you know, sort of -- I think there are institutional processes. Perhaps it's not possible for you to prevent the institution from deciding to do it, but you could design things in such a way that lots of people in the institution would know if it was being done, right? So that it can't be done, you know, sort of quietly in a corner. And then one of the things that I talked about is, like, maybe, you know, a third-party NGO can come in and audit and publicly say they're not doing it. And, you know, if that relationship were to break down, the organization may not admit that they're doing it now, but it, you know, sort of creates that assumption. So, like, there's this concept -- if you've ever heard of the concept of a warrant canary. You know, if you're running a social media site, you might put something out there that says, "I've never had to respond to a warrant for which I was, you know, prohibited from disclosing." And then if the warrant canary goes away, we can make certain assumptions.
Greg Conti: I've always wondered, like, is that legal? Can you do that?
Tom Cross: I don't know, right? Yeah --
Michael Heller: It is. We've seen companies do it.
Greg Conti: There certainly are warrant canaries out there. You know, maybe -- maybe the government tells you you can't take your warrant canary down, and --
Michael Heller: I'm pretty sure that Google -- like, as part of Google's transparency report, I'm quite sure I've seen them use canaries before.
Tom Cross: Interesting.
Michael Heller: I'd have to -- I'd have to go back and double-check, but I've definitely seen that in use.
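The warrant canary being discussed here is simple enough to sketch. The snippet below is illustrative only; real canaries are typically cryptographically signed and published on a fixed schedule.

```python
# Illustrative warrant-canary sketch; real canaries are usually cryptographically
# signed. The operator republishes a dated statement on a schedule, and outside
# observers treat a stale, altered, or missing statement as the signal.
from datetime import date, timedelta

CANARY_TEXT = ("As of {day}, we have never received a legal demand "
               "that we are prohibited from disclosing.")

def publish_canary(today: date) -> str:
    """The statement the organization posts on its transparency page."""
    return CANARY_TEXT.format(day=today.isoformat())

def canary_is_healthy(published: str, today: date, max_age_days: int = 35) -> bool:
    """An observer's check: the statement exists, is unchanged, and is fresh."""
    try:
        day = date.fromisoformat(published.split("As of ", 1)[1].split(",", 1)[0])
    except (IndexError, ValueError):
        return False  # statement missing or altered beyond recognition
    return published == publish_canary(day) and (today - day) <= timedelta(days=max_age_days)

# If canary_is_healthy() quietly starts returning False, the canary has "died".
```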
Greg Conti: Every company has superpowers. You mentioned industrial control systems, right? Clearly, they have powerful tech that, if maliciously used, can be highly impactful. But what we're finding is, basically, every major company has superpowers. Imagine what a dating -- like, just for sake of making this simpler, just think about an evil dating site. What type of data leakage, you know, can be collected from that, and also at scale?
Michael Heller: Yeah.
Tom Cross: So, I mean, I do think that, like, the practice of information security becomes more and more, you know, vital as time goes on. I mean, it's always this question of, like, are we going to -- maybe we solve the problem, right? Because -- because we just get really good at coaching developers to write better code, or -- you know, there's still the debate about AI, whether AI-generated code is going to have fewer vulnerabilities, which is nonsense. It's got the same number of vulnerabilities because it's reading code that humans wrote, and it's writing it in the same way that humans do. And so it's, you know, pretty much producing the same volume of vulnerabilities that the humans were. But, you know, there's always been this question -- people have been asking this question for years. It's like, are we going to fundamentally address some of these problems in a way that, you know, means that there isn't as much of a need for this kind of work, right? And I think I'm continually amazed by how this whole conference continues to expand and grow every year, and the scale that it's functioning at now, right? You know, DEFCON used to be, you know, like maybe a thousand people in a single conference room in a hotel somewhere, right? And so, you know, I mean, I think, you know, these issues are going to continue to get more and more complicated, and so I -- I feel like there's a lot more work to do in InfoSec. And I think, you know, these -- we talk about these robots, a lot of these embedded systems, like, they don't have the degree of hardening of, you know, some of the, like, sort of traditional computers that we use or our phones. You know what I'm saying? Like --
Michael Heller: The last question is always the same. What is the big takeaway that people should remember from this conversation?
Tom Cross: So, yeah, I mean, I think that, one of the reasons that we're talking to hackers is that hackers are good at seeing that distinction between what things are and what they -- what they -- what they were -- what they were meant to be, and figuring out how they can utilize things in ways that were not intended and may not be wanted, right? And so it's this sort of like adversarial mindset, where you can think about, you know, what evil is possible within a particular situation, that can be turned to good, right, by applying it in situations like this and then seeing what comes out of that application and then thinking through what you want to -- what you want to do with it. And I think a lot of what we do at conferences like DEFCON and Black Hat is we're willing to wade into these ethically challenging conversations. What if a company was evil? Right? That's a conversation that people are uncomfortable having. And, you know, we have that conversation openly, and then -- and then we -- and then having had it, we're actually -- we're actually able to apply what we learned to make the, you know, make things safer. And I think, you know, that's necessary. [ Music ]
Michael Heller: That's it for today. If you like what you heard, please subscribe wherever you listen and leave us a review on Apple Podcasts or Spotify. Your reviews and feedback really do help us understand what you want to hear about. I want to thank our fearless leader, David Moulton, our content and production teams, which include Kenne Miller, Joe Bettencourt, and Virginia Tran. And thanks to Elliott Peltzman for the mix and the original music. We'll be back next week. Stay curious and keep asking the hard questions. Thanks for listening. [ Music ]