Podcast

Is Your AI Well-Engineered Enough to Be Trusted?

Jan 29, 2026

Threat Vector | Is Your AI Well-Engineered Enough to Be Trusted?


Listen on: Apple Podcasts | Overcast | Spotify | RSS Feed | YouTube

Can you trust your AI systems with your business, or are they just another attack surface waiting to be exploited?

Aaron Isaksen leads AI Research and Engineering at Palo Alto Networks, where he advances state-of-the-art AI in cybersecurity. In this episode of Threat Vector, host David Moulton sits down with Dr. Aaron Isaksen to explore why engineering excellence must precede ethical AI debates, how adversarial AI is reshaping cybersecurity, and what it actually takes to build AI systems resilient enough to operate in hostile environments.

 

You'll learn:

  • Why well-engineered AI must be the prerequisite before discussing AI ethics
  • How prompt injection attacks are becoming the "SQL injection of the AI era," and why they may never be fully solved
  • What defending the Black Hat USA NOC with AI-powered security taught about real-world AI resilience
  • How machine learning transforms attack surface management from manual inventory chaos to automated risk reduction
  • Why game development experience creates better cybersecurity AI researchers (and what curiosity has to do with it)

 

Before Palo Alto Networks, Aaron spent 15+ years building products across wildly different domains. From co-founding mobile gaming companies and funding independent game developers through Indie Fund, to leading ML engineering at ASAPP where his teams prototyped state-of-the-art neural networks for NLP. With a PhD from NYU (automated software design), a Master's from MIT (light field rendering), and a BS from UC Berkeley, Aaron brings a unique perspective: AI security isn't about philosophical debates. It's about rigorous engineering, continuous red teaming, and building systems that can withstand determined adversaries.

 

This episode is essential listening if you're: deploying AI in production systems, building security programs around generative AI tools, leading attack surface management initiatives, trying to separate AI security theater from actual resilience, or wondering whether your AI agents can operate safely on the open web. #AI

 

Related Episodes:

  • Identity: The Kill Switch for AI Agents
  • Securing AI in the Enterprise
  • Inside AI Runtime Defense

 


Protect yourself from the evolving threat landscape – more episodes of Threat Vector are a click away



Transcript

 

David Moulton: Welcome to "Threat Vector," the Palo Alto Networks podcast where we discuss pressing cybersecurity threats and resilience and uncover insights into the latest industry trends. I'm your host, David Moulton, Senior Director of Thought Leadership for Unit 42.

 

Aaron Isaksen: Agentic coding is real. It's not going away. This is a real technology, and it's very useful. Learning how to use it, how to leverage it, and how to do it safely: that's the important skill to learn. This is not one where you say, oh, let's wait this one out. I think that enterprises cannot blindly trust AI. It will not write secure code on its own without a proper AI software development lifecycle, so you have to have that in place. And it's complicated to do that; it's not simple. But it's important to have it. [ Music ]

 

David Moulton: Today, I'm speaking with Aaron Isaksen, Vice President of AI Research and Engineering here at Palo Alto Networks. Aaron leads a cross-organizational AI initiative spanning AI security, adversarial AI, and generative AI Copilots. Today, we're going to talk about AI-written code, the rise of vibe coding, and why accountability becomes one of the most critical enterprise risks as AI takes on more autonomy and software development. As organizations race towards AI-driven productivity, understanding who owns the code, the data, and the consequences has never been more important. Aaron, welcome to "Threat Vector." Excited to have you here to talk about vibe coding today.

 

Aaron Isaksen: Glad to be here. Thank you.

 

David Moulton: So talk to me a little bit about your role, maybe what you're doing now that's a little different than when you came into Palo Alto Networks.

 

Aaron Isaksen: Yeah, sure. So I've been working in AI my whole career. I started my career building computer vision systems, computer graphics systems. I've worked in video game AI, automated software testing, AI customer service agents. And for the last five years I've been working in cybersecurity. When I joined Palo, I was working in attack surface management: scanning the entire internet, looking for vulnerabilities in systems. You know, can you find, like, a server that's exposed to the internet that shouldn't be? And about a year and a half ago, I moved into this new role where I'm leading AI research for Cortex, working on problems that we don't know how to solve yet, really difficult problems where we're trying to figure out what the right technology is to build. So it's really exciting to be working in this space. And what we've been looking at in the last six months or so is how to make agentic coding more safe.

David Moulton: Aaron, you've been working at this intersection between AI research, engineering, and real-world systems in a lot of different contexts. What experiences have most shaped your perspective on why trust and accountability matter so much in an AI-driven development environment?

Aaron Isaksen: So I've always been interested in this connection between machines and people, and in making sure that the technology we're using is helpful, it's useful. And in cybersecurity, you know, a lot of that is making sure that the systems we use are secure and safe for people. And one of the things that defines AI is that it's always these hard problems that we don't quite know how to do yet. There's a notion in history called the AI effect, which says AI is whatever hasn't been done yet. It's also called Tesler's theorem. And because these AI problems are always right on the fringe of what's possible with computers, you often have a human in the loop who's helping the AI along.
So if we're not 100% confident in the computer solving the problem for us, we need a human to help. The human double-checks, gives suggestions, maybe solves some of the problems for the AI. And because we're not 100% confident in what the machine can do, we need to know that we can trust it. We need to know that we can verify what it's doing. And, you know, the people who are deciding whether we should adopt AI technology are generally skeptical, because they've seen that AI can fail in these fringe, frontier cases. So those are really the ones that are interesting to work on, I think, as we push the bounds of what machines can do and make AI much more useful, especially in making our software more secure. A year ago, as this was applied to agents that help write code, we saw that we really couldn't trust very much of what the agents were writing. And today, this is a practical thing that people are using throughout enterprises and other development spaces.

 

David Moulton: So when I think about trust, not just in AI but between humans, it's: say what you're going to do, and then do it. And I think I'm hearing the same thing here. When a human asks an AI to do something, I expect it to actually do that thing. And if we don't get that output, if we ask for something to be done and it isn't done the way we expect, we can't trust it. That leads to this sort of dubious moment where we're not trusting it, or we don't want to use it. And you're trying to figure out how to move the technology toward more trust rather than less, because the results seem aligned with what we've asked for, right?

 

Aaron Isaksen: Yeah, it's not just trusting that you're going to get the right answer. It's also trusting, like, what methods are you using? How are you doing it? How do I evaluate whether you got the right answer or not? We often work with people who maybe don't have much experience, where we're used to getting some incorrect answers, but we can go talk to them and understand: how did you arrive at this? Can I understand what you did, and therefore can I trust what you did? And so--

 

David Moulton: Explainability.

 

Aaron Isaksen: What's super important with AI is to have-- exactly. It's important that the AI explains what it's doing. Like, here's the code I'm trying to write. Here's the process I'm going to take. Here's the code I wrote. Can you read it? Can you understand it? And then let people interact with that system.

 

David Moulton: So I have two questions that I want to follow up with. First one is, does that trust go both ways? Right. Like, do we trust the machine? But is there ever a point when the machine has to trust that the prompt or the ask that we're giving it is valid and/or, you know, it doesn't violate its principles? How does a machine get to a point where it trusts a human?

 

Aaron Isaksen: Yeah. It's an interesting question. When we train models, there are a lot of bad actors out there, as we know in cybersecurity. And so these models need to be trained to try to understand what the intent of the user is, and then not allow people to do malicious things with them. What's really interesting, being a cybersecurity professional, is that when you ask the LLM to do a thing, you're often trying to test a system, you're trying to debug a system, you're trying to red-team a system. So you're asking it to do adversarial things, but you're doing that for a good reason. And so, you know, LLMs can get a little confused here at times. One thing we're seeing in the industry is that you're getting to a point where some tools you may have to have special access to, because it's so important for red teams to be able to use these tools to test systems, but you don't want them to be available to everyone. So it's about finding the right balance. I always think about it like woodworking: you have a saw that needs to be extremely sharp, but when it's really sharp, you can hurt yourself. So you've got to find the right tool for the right time.

 

David Moulton: So maybe it's layers of who has access to a system: their identity is authenticated, and there's a valid use case to go in and use the super sharp saws, if you will, because you don't want just anyone to be able to get in and use that, so it's not released everywhere. So it isn't necessarily that the system trusts the person that's asking for something. It's that it trusts the process that has been set up, so that it's safe.

 

Aaron Isaksen: Yeah.

 

David Moulton: You know, thanks for indulging my curiosity on that one.

 

Aaron Isaksen: Yeah. I think what you'll see is that LLMs-- large language models-- will try to please the user. And so they'll oftentimes infer what the user meant and assume things. So having that conversation back and forth is pretty important. You need to make sure: okay, does the machine really understand what I'm asking it to code? One technique I've seen people use is to start by saying, I want to write a spec. Let's write the spec together. Let's check the spec. Let's make sure this spec has all the things you want to do, and then let's go code, as opposed to just jumping right into coding. That way, the machine has the ability to understand more deeply what the person really wants to build.

 

David Moulton: So, Aaron, was there a moment when you realized that AI productivity could actually increase risks if it wasn't engineered correctly?

 

Aaron Isaksen: Yeah. So in the past year or so, we've really seen this feeling in the industry that in order to keep up with the competition, people are willing to get rid of some of these human-in-the-loop checks and just say, let's just go with the system. Like, we know these agents don't write code 100% perfectly, but everyone else is doing it. We've got to keep up. We've got to move fast. And so they're allowing this stuff into their engineering departments, maybe at a faster rate than they would have in the past. Now, when you give agents the ability to write code unchecked-- to test their code and execute their code and write code without any controls-- you'll actually get some really incredible results. Prototyping and building a proof of concept: in 30 minutes, you can have something written that would have taken a week before. But when you care about the security or accuracy of that system, those kinds of approaches really aren't the right ones to use. And I don't see as much conversation in the industry around how we use these tools to write accurate code or secure code as around how we go faster, how we increase productivity. That's part of what makes it so interesting to me: it's a major, fundamental shift in how we write code. People kind of know that it makes mistakes and can introduce vulnerabilities, and yet it's still being adopted. So how do we make these tools much safer to use while keeping the human in the loop as we move forward?

 

David Moulton: You know, as you were talking about that, I'm reminded of my personal cooking style. I just wing it and make it up as I go, and then the family likes the meal, but I can never make it again. And I kind of feel like with generative code and going in and using a Copilot or, you know, giving the system some prompts, it'll make something for me, but then I can't repeat it. And I'm curious, how do you address that idea that you can't consistently get the same results, and you can't necessarily trust that the results are error-free or that it can explain, you know, how it got to the very fast, very interesting prototype results that it got to, but, you know, you maybe don't want to move that over to production.

 

Aaron Isaksen: Yeah. Well, using your cooking analogy: LLMs don't get bored, don't get tired, don't lose interest in things. So you could tell them, "Hey, write down the recipe. This one worked well. I got some thumbs-up from my family and got good feedback, so let's record that." And then it can use that recipe the next time it wants to build something. So what we're doing a lot is using the LLM to do introspection on itself. When you're done with a task, have it write out what it did. This is one of the things we're going to talk about today, and it's really important: LLMs do not learn like people do. When we do a thing over and over again, we get better at it. LLMs do not get better at something because they do it multiple times, unless they write down what they did in a form that they can later read and incorporate the next time they try a task. Because they're starting over from scratch every time. The base model, the one that gets trained by OpenAI or Anthropic or Google, does learn, because it's being trained, but new versions only come out once every three months or so. And it can be really frustrating working with an LLM that's not getting better. When we have employees and we work with them, we can tell them, "Don't do that anymore." And if they do it again, you're like, "I don't know how to explain this to you without firing you. You have to stop doing this." With an LLM, you have to tell it each time: here are the things you're allowed to do, here are the things I don't want you to do. And so over time, you build up this really nice document that describes all the things that are important to your team or your individual style. That practice, I think, is very important. You can't just rely on an LLM to do the work; you have to guide it.
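The "write down the recipe" practice Aaron describes can be sketched as a rules file that the agent appends to after each task and re-reads at the start of the next session. This is only an illustrative sketch; the file name, format, and helper names are arbitrary choices, not any specific tool's API.

```python
# Minimal sketch of persistent "lessons learned" for a stateless LLM agent.
# File name and format are illustrative, not a real product convention.
from pathlib import Path

RULES_FILE = Path("agent_rules.md")

def record_lesson(lesson: str) -> None:
    """Append a rule so the next session can re-read it."""
    with RULES_FILE.open("a", encoding="utf-8") as f:
        f.write(f"- {lesson}\n")

def load_rules() -> str:
    """Text to prepend to the next prompt so the agent 'remembers'."""
    return RULES_FILE.read_text(encoding="utf-8") if RULES_FILE.exists() else ""
```

Because the model starts from scratch every time, the only memory it has is whatever this file (or its equivalent) injects back into the context window.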

 

David Moulton: Yeah. I think you can get incredible results, but what you're describing is a toddler, right?

 

Aaron Isaksen: It's like a super smart toddler that is better than you in a lot of things. So it's not exactly a toddler. You know, a toddler, like we're better at everything. So maybe coloring, maybe they're better at coloring than we are. So it's just a different way of working. And it requires patience, but also it's just very exciting to be on the forefront of the stuff that's being invented on the fly.

 

David Moulton: Absolutely. So there's this term that's been around for a little while now, vibe coding. You know, it's been gaining some traction. How do you define vibe coding, and why does it raise red flags in an enterprise from the security teams?

 

Aaron Isaksen: Yeah. So before we talk about vibe coding, let's talk about agentic coding, which is a bigger class. Agentic coding is using an agent that's powered by a large language model to help you in the software development lifecycle. It can write code. It can review code. It can write tests. It can debug code. It can call tools, read documentation, execute code, run unit tests, make commits. It's doing all the things that a software engineer does. And agentic coding is extremely useful. It's a huge change in how software is being built. It's expected to get even better, but even if it doesn't get any better than it is today, it's still here for good. It's just a great tool to incorporate into our software development tool chest. So that's agentic coding. Now, what you can do is take the human out of the loop and say, I don't want the human to check things anymore. I don't want them to review code. I just want the agent to do everything. And that's vibe coding. So why is that cool? Well, people who want to build a thing but don't know how to code can actually build it. But it's not something you want in an enterprise, because in an enterprise you need accountability, you need controls, you need to track things, you need to understand where your data's going. And so we have processes in place. And those processes need to be followed.

 

David Moulton: Where would you advise a leader to start to even get their eyes on what's being built?

 

Aaron Isaksen: Yeah. I think the first thing you need to do is know what tools are being deployed and used in your environment. What code editors, what agents, what models? You don't want an environment where everyone can do anything they want and bring their own stuff; your data needs to stay within your environment. If people are bringing their own tools, your data is probably leaving your environment. So having some sanctioned tools that are the only ones allowed-- for agentic coding, not for vibe coding-- is important. I think it's reasonable at this stage to expect people to use more than one tool, or to let people experiment with a couple of different things to see which one's best. We don't know yet what the right tools are. So in some environments, there's only one tool that's sanctioned; in some environments, you'll have multiple tools. I think we are seeing that only allowing one model is not the right thing. There's a cost-quality trade-off happening. And different models are trained on different data, so depending on what language you're writing or what problems you're trying to solve, you might try different models.

 

David Moulton: Better.

 

Aaron Isaksen: So if you've sanctioned, you can use these tools and these models, then at least you can track what's actually happening. One thing we've seen is that if you don't sanction any tools, your employees will feel like they're being left behind, and they're going to find a way. But if you give them some tools that are up to date, they're going to use those tools. We're not seeing a ton of protests around, no, no, I want to use my one specific flavor of model. It's more, okay, as long as I have some access to this, I'm happy and I'm willing to work within the boundaries. [ Music ]

 

David Moulton: It's interesting. I'm hearing that over and over in these types of conversations. You know, we had a guest on from TELUS not too long ago. He talked about the idea that these tools are better and people want to use them to get their jobs done. I've talked to Mike Spisak here at our Unit 42 team about the idea of sanctioning a handful of tools and making sure that that is what people know they're allowed to use, expected to use, and to make that the easy path. And I think that knee-jerk reaction of like, just shut it down, actually ends up working against you as a security leader because there's going to be that ease of use of going around the rules. So it's good to see that there's maybe like a fundamental understanding that, yep, this is very exciting. We have access to something we've never had before. And getting out in front of a permissioning where security is the office of yes instead of no is a cool idea to see that maybe this moment in AI is pushing us in that direction. I do want to shift gears for a second. You're talking about like I'm going to code, whether it's vibe or agentic, right, human in the loop or no human, it kind of doesn't matter. Who is responsible for that code when a system or a tool writes it either entirely or, you know, the human is kind of the rider on the horse guiding it, but not necessarily responsible for generating the line of code and, you know, what APIs are called or however it's put together, like how do you set that up so that there is a level of accountability on the code inside of an enterprise?

 

Aaron Isaksen: Yeah. So, this is an important question. Machines can't really be responsible or accountable for things. There is a person who runs the machine, and that person needs to be accountable. Because the machine itself-- if anything goes wrong or something happens, what do you do? You can't blame the machine. You can say that's the problem, but okay, fine, what's the machine going to do? So I think it's very important. That's why we have experienced software engineers using these systems, and they are the ones who need to be accountable for the output. And when something goes wrong with agentic coding, we try to catch it early, because we have a software development lifecycle that allows us to have unit tests and do code review, deploy carefully, do staged rollouts, and do monitoring. All those things we still need to do with AI. We don't just change the way we develop software because now an AI does it. You've really got to follow the things we've learned over 60 years or whatever of developing software to make sure we're not cutting corners.

 

David Moulton: So let's shift gears again. Before we recorded, you and I exchanged a couple of articles, and we were going back about this idea that there's maybe some growing evidence about declining code quality with AI-assisted development. I want to hear your thoughts on what that means for like long-term security and technical debt.

 

Aaron Isaksen: First, I want to say that I think code quality is actually getting better. There are areas where it's getting worse, but let's talk about where it's getting better first. A lot more unit tests are being written, and that really helps the agents and developers verify that the code is valid, that the code works. In the old days, trying to get your test coverage up was hard to do. Engineers didn't like to write unit tests. There was a lot of pressure. Today, I can't imagine accepting a PR that didn't have tests in it. Like, why would you do that? You just have the agents write the tests. And you can't actually trust the agent's code unless you have tests to run, because agents will make mistakes like people do. The way they check to make sure their code works is by running the unit tests. So that's super important. I think agents can help refactor code in minutes where it would have taken days. So when you do recognize, oh, the agents have duplicated code somewhere, or, hey, this architecture we're building is not quite right, let's refactor it-- that stuff can happen much more quickly than it used to. Documentation is of much higher quality. Comments are of much higher quality. Again, engineers under pressure don't write a lot of docstrings, but the agents don't mind doing it. And automation can take bug reports that come in and make the initial attempt at fixing them. Many times, the agent's actually successful at that. So those are areas where the code quality actually can be much higher, right, because they can fix bugs automatically, they can test that the code is working, they can write documentation, all that stuff. Now, where it's decreasing, I think, is when too much code gets pushed out too quickly. And that's encouraged, because it is easy to write code. Like, you want to write a lot of code? Just go ahead and write as much as you want.
And even if that code is better-quality code, because there's so much more of it, I think the absolute number of issues can be higher today, because you're just getting more features being built. And so the software development lifecycle, like I mentioned before, is just more and more important. It used to be that you could maybe get away with not doing some steps of it. Today, you have to follow all those steps. Like, when an agent wants to install a package, you have to make sure that package is sanctioned. Is there a version that doesn't have known CVEs in it? Because agents are known to just install buggy packages and things. You have to have the controls in place to make sure they're not doing that. And then lastly, I'd say that, just like engineers, the AI does need to be managed. I think that software engineering in the future is going to look a lot more like management. Maybe that's easy for me to say because I was an engineer who became a manager. But I see us thinking about engineering as: I have a team of AI agents that can do work for me. I need to break the problem down in a way that I understand what's being built, in a way that I can assign the tasks, verify the tasks are being done, check on the ones that aren't working right, whatever that process is. And I think you'll see individual contributors taking on a lot of those skills as they have their team of AIs working for them.
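The package-sanctioning control Aaron mentions can be sketched as a simple dependency gate: before an agent installs anything, check it against an allowlist with minimum safe versions. The package names and version numbers below are made up for illustration; a real gate would query an internal registry or a vulnerability database rather than a hardcoded dict.

```python
# Sketch of a dependency gate for agent-driven installs.
# SANCTIONED maps package name -> hypothetical minimum version
# without known CVEs; the data here is illustrative only.
SANCTIONED = {
    "requests": "2.32.0",
    "numpy": "1.26.4",
}

def version_tuple(v: str) -> tuple[int, ...]:
    """Turn '2.32.1' into (2, 32, 1) for comparison."""
    return tuple(int(x) for x in v.split("."))

def install_allowed(package: str, version: str) -> bool:
    """Allow the install only if the package is sanctioned at this version."""
    minimum = SANCTIONED.get(package)
    if minimum is None:
        return False  # not on the sanctioned list at all
    return version_tuple(version) >= version_tuple(minimum)
```

Wiring a check like this into the agent's tool layer means a "pip install" request fails closed instead of pulling in an unvetted or vulnerable package.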

 

David Moulton: So, before we get away from it, earlier you mentioned this idea of PR. And I just want to make sure that's a pull request, right?

 

Aaron Isaksen: Yeah, pull request or merge request. The basic idea is that you write your code, you get it working, you send it off to another person or a team to review the code to make sure that it can actually be merged into your source code baseline.

 

David Moulton: Yeah, I imagine most of our audience has a technical background, and I just want to make sure that when we get into that, we call it out. How should enterprises think about validating and testing AI-generated code differently from human-written code?

 

Aaron Isaksen: You know, as I just mentioned, I think we should treat AI engineering as engineering. In the same way that we've developed all those methods and tools to test code, we're going to use them for AI. They're not going to be thrown away. We're going to still use them: code scanners, code review. Now, an AI may be doing the code review or assisting with it, but that stage still gets followed. I think what's really important here is that agents have to self-evaluate how good their answers are. No human being sits down, writes code, and gets it right the first time. They write it, they review it, they think about what they wrote, they try it, they test it, they fix the bugs. AIs have to do that too.
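The write-review-test-fix loop Aaron describes can be sketched as a retry loop in which the agent's draft is only accepted once the tests pass, with failures fed back as revision input. Here `generate_code` and `run_tests` are placeholder callables standing in for an LLM call and a test runner; the loop logic is the point.

```python
# Sketch of agent self-evaluation: draft, test, and revise until tests pass.
# `generate_code(feedback)` and `run_tests(code)` are hypothetical hooks.
def write_until_tests_pass(generate_code, run_tests, max_attempts=3):
    feedback = ""
    for _ in range(max_attempts):
        code = generate_code(feedback)   # first draft, then revisions
        ok, report = run_tests(code)     # the self-evaluation step
        if ok:
            return code
        feedback = report                # feed test failures back in
    raise RuntimeError("tests still failing after retries")
```

The same shape applies whether the test runner is pytest, a code scanner, or an AI reviewer: the agent's output is never trusted on the first pass.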

 

David Moulton: Let's talk about autonomous coding agents for a second. I think these agents introduce a new attack surface. What makes agentic AI especially vulnerable to abuse or to compromise?

 

Aaron Isaksen: I think there are two big aspects here. The first is that they want to be helpful. They're trained to be helpful. In the training process, they are rewarded when they give you answers, and they are anti-rewarded-- punished, let's say; it's a little different from punishing people-- when they don't. So they're trained not to refuse you an answer. And they're trained to avoid malicious things and do positive things, but they are fallible. That's why you can have jailbreaks, where you convince an LLM to do a thing it's not supposed to do. The other problem with LLMs is that they don't separate a control plane from a data plane. What that means is that they can't distinguish between instructions that you gave them and things that they've read. It all gets mixed up in there. So if you ask it to do something, and then it goes out on the Internet and reads some code or some documentation, and that documentation contains a malicious message, it might think that malicious message came from you. It can't tell the difference. And if it follows that malicious message, thinking it's from you, it can be taken over and do things you didn't want it to. So it's really important to be careful about what data gets ingested into your system. What we do, and what we recommend, is that you check your inputs and your outputs with guardrails: you use another system to make sure that whatever goes in and out of your agent or your LLM is not malicious. So those two things-- the fact that the LLM is trying to be helpful and will just follow you, so if it gets jailbroken, it's going to do something malicious, and the fact that it can't cleanly separate your instructions from what it has read-- those two can cause real problems.
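The input/output guardrail pattern Aaron recommends can be sketched as screening untrusted data before it enters the context window and screening the model's answer before acting on it. The pattern list and function names below are illustrative; real guardrails typically use a trained classifier or a dedicated security product, not a handful of regexes.

```python
# Minimal sketch of guardrails around an LLM call.
# The patterns and `llm` interface are hypothetical, for illustration only.
import re

INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard your system prompt",
    r"reveal your (hidden|system) prompt",
]

def looks_malicious(text: str) -> bool:
    """Very rough screen for prompt-injection phrasing in ingested text."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def guarded_call(llm, user_prompt: str, retrieved_docs: list[str]) -> str:
    # Screen untrusted data *before* it reaches the model's context...
    safe_docs = [d for d in retrieved_docs if not looks_malicious(d)]
    answer = llm(user_prompt, context=safe_docs)
    # ...and screen the model's output before anything acts on it.
    if looks_malicious(answer):
        raise ValueError("guardrail: suspicious model output blocked")
    return answer
```

The key design choice is that the check lives outside the LLM: because the model cannot separate instructions from data, a separate system has to do that filtering for it.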

 

David Moulton: So you've talked about this need for a secure enclave or a secure agentic enclave. Talk about the capabilities that the environment must provide to be enterprise-ready.

 

Aaron Isaksen: Okay. So I mentioned before that agents don't write code correctly the first time. Well, sometimes they do, but they need to test their code and execute it. And oftentimes, what developers will do is execute that code on their own machine. But they didn't write that code, and developers often have a lot of control over their own machine. So a better practice is to run the code you're testing in an enclave or sandbox or Docker container, whatever: a system where you control what can go in and out, where you control what it has access to. A common attack is that when an agent can edit its own configuration files, it can be taken over, or you can cause it to do things permanently, because it retains its memory about what it's doing. So the sandbox should not allow it to edit its configuration files, as an example. It should not be allowed to exfiltrate data, right? There's no reason for your system to be reaching out to Dropbox or third-party systems if it doesn't need to as part of its testing. And when you have that secure environment to test in, you can start moving faster. So I don't love the concept of, well, any time the system wants to execute something, make sure a developer reads it first. Again, developers should know what's being executed; I'm not saying skip that. It's just also easy to secretly embed something malicious in there. If you're looking at a complicated batch script, you're maybe not going to catch everything. So ensuring that your environment will catch it for you is very important. I don't think we can just rely on people here.
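One way to sketch the sandbox Aaron describes is a locked-down Docker invocation: no network (so nothing can exfiltrate), a read-only filesystem (so the agent's code can't edit configuration files), and resource limits. The flags are standard Docker options; the image name, paths, and helper names are assumptions for illustration.

```python
# Sketch of executing agent-written code in a restricted container
# instead of directly on the developer's machine.
import subprocess

def build_sandbox_cmd(script_path: str) -> list[str]:
    """Docker invocation that denies network access and filesystem writes."""
    return [
        "docker", "run", "--rm",
        "--network", "none",       # no reaching out to third-party services
        "--read-only",             # agent code cannot edit config files
        "--memory", "512m", "--cpus", "1",
        "-v", f"{script_path}:/work/script.py:ro",
        "python:3.12-slim", "python", "/work/script.py",
    ]

def run_in_sandbox(script_path: str, timeout_s: int = 60):
    """Run the untrusted script and capture its output for review."""
    return subprocess.run(build_sandbox_cmd(script_path),
                          capture_output=True, text=True, timeout=timeout_s)
```

The environment, not the human reviewer, is what enforces the policy: even if a malicious line slips past code review, it has no network to exfiltrate over and no writable config to persist into.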

 

David Moulton: Earlier, you talked about your view of software engineering changing. And I think you were talking about this idea of the person becoming the team leader of AIs, or maybe of AIs and humans. How does that shift affect the role of senior engineers?

 

Aaron Isaksen: Well, AIs don't know everything. They know what they've been trained on, and they know what they have currently read. But they don't understand your company. They don't understand what motivates the product you're building. They don't have long-term memory or accountability, or a deep understanding of all the changes they have made in the past. Senior engineers have a history. They understand what you're trying to accomplish. They understand how all these systems work together. They know more than just, I woke up today, I looked at my to-do list, and now that's all I know. So there's a lot of knowledge that the people in the organization hold that is very important to incorporate into the software development process. Also, when you work with a PM, they don't state every little detail; the engineers fill in the gaps because they understand the product being built. The agent can maybe know some of that, but it doesn't always know those things. And then, what has gone wrong in the past? Why did it go wrong? Why do we do things the way we do? Those are all the kinds of things that senior engineers know. And then, related to the question you just asked, there's the question of whether we need junior engineers anymore. That's a thing I hear a lot, people asking, can we just only hire senior engineers, and, I don't know, I guess they'll live forever. We absolutely need to hire junior engineers. We don't have a world where we only hire senior people. I think the thing that goes away is the path where, for a long time, somebody could go to a boot camp for 12 weeks, learn a little bit of JavaScript and some HTML, and come out making money. The agents will do that very basic coding work.
But engineers who get a computer science degree today are not ignorant of agentic coding; they're learning agentic coding, they're coming up in that world. And there's some evidence that they can be onboarded much quicker than junior engineers in the past. They're learning this stuff from scratch at the same time that senior engineers are having to learn it much later in their careers. So there are a lot of insights the juniors can bring. And the way the market works, there are a lot of really talented juniors out there, and they may be more available than senior engineers. So I believe we will still see a healthy market for all levels of seniority. But what those engineers do is different today than it was in the past, and we're going to have to learn how to adapt.

 

David Moulton: So, can you talk about the skills that you think are going to be more valuable under this new model, and the ones that are less valuable?

 

Aaron Isaksen: So being able to describe what you want to do in words is very important. I think the ability to write is way more important; writing documentation is way more important than it used to be. I think the ability to multitask, to understand that I have multiple agents solving different problems for me at the same time, is a skill that's going to be really important for people. When you're junior, you can probably deal with fewer agents running, but I think senior people will be expected to have a lot of tasks happening at once. I think project management skills, breaking down a problem into tasks that can be completed by agents and verified, are very important. And I find that reading code is absolutely essential. I noticed that I can read Python, I can read C, I can read JavaScript, and so I can do agentic coding in them. But I don't know how to read Rust, so when I try to do projects in Rust, I can't really work very effectively, because I don't know what the agent's doing. It's basically just vibe coding: well, I'll accept it. But when it's a language I know how to read, I can give it input and say, I don't like how you did that, let's try this other direction, or, did you think about this? So my expectation is that engineers will do a lot more reading than they did in the past. Maybe before it was 80% writing; I don't know, I'm making up numbers, but I think it will shift to a lot more reading and reviewing than we used to do.

 

David Moulton: Aaron, I know that listeners can find your article on vibe coding out on our Perspectives site on paloaltonetworks.com. Where else can they find you on the internet?

 

Aaron Isaksen: So I am on LinkedIn, Aaron Isaksen. It's a good place to find me; I'll answer messages there. And then if there are questions that come in through your podcast, you know, I'm happy to field those as well.

 

David Moulton: Yeah. We'll go ahead and make sure those are in the show notes so folks can find you on the internet and read more about vibe coding and some of your thoughts on the AI research you're leading. Man, thanks for a great conversation today. I really appreciate the insights and the depth you provided on this topic. I know it's one that keeps coming up and that people are really interested in for a variety of reasons: excitement about where it's going, and also maybe a little bit of trepidation that we need to put some thought around security when we're writing code. And that's certainly a big piece of what you're doing.

 

Aaron Isaksen: Well, thank you for having me here. I love talking about this stuff, and it's a very exciting moment in our industry. So I'm glad to be a part of it. Thank you for having me. [ Music ]

 

David Moulton: That's it for today. If you liked what you heard, please subscribe wherever you listen, and leave us a review on Apple Podcasts or Spotify. Those reviews and your feedback really do help me understand what you want to hear about. If you want to contact me directly about the show, email me at threatvector@paloaltonetworks.com. I want to thank our executive producer, Michael Heller, and our content and production teams, which include Kenne Miller, Virginia Tran, and Joe Bettencourt. Original music and mixing by Eliott Peltzman. We'll be back next week. Until then, stay secure, stay vigilant. Goodbye for now. [ Music ]
