In this episode of Threat Vector, host David Moulton speaks with Tanya Shastri, SVP of Product Management, and Navneet Singh, VP of Marketing - Network Security, at Palo Alto Networks. They explore what it means to adopt a secure AI by design strategy, giving employees the freedom to innovate with generative AI while maintaining control and reducing risk. From identifying shadow AI in the enterprise to protecting data across AI-powered application lifecycles, Tanya and Navneet share insights on visibility, governance, and continuous monitoring. Learn how leading organizations can safely embrace AI without compromising trust, privacy, or security.
Protect yourself from the evolving threat landscape - more episodes of Threat Vector are a click away
Transcript
[ Music ] >> Having the tools, the technology, the -- the guardrails, the framework that allow users to be secure, both for the use of AI when you're using AI applications, as well as developing applications. Once you have the right frameworks and the right security tools, it allows you to do both bravely. It allows you to do it broadly, allows you to do it quickly, while still having the -- the assurance that you're doing it in a way that's secure for you. [ Music ]
David Moulton: Welcome to "Threat Vector," the Palo Alto Network's podcast where we discuss pressing cybersecurity threats and resilience and uncover insights into the latest industry trends. I'm your host, David Moulton, Senior Director of Thought Leadership for Unit 42. Today, I'm speaking with two of my colleagues, Tanya Shastri, SVP of Network Security, Platform and Product Operations, and Nav Singh, VP of Marketing for Network Security at Palo Alto Networks. Tanya brings more than two decades of experience across product strategy, engineering, and leadership roles at companies like VMware, Google, NetApp, and Cisco. And she's played a critical role in bringing innovative SaaS, ML, and AI-powered infrastructure solutions to market. Nav is a transformative marketing leader who's helped drive over a billion in bookings at Palo Alto Networks and brings deep expertise in network security, SaaS, and secure cloud adoption. His published work has shaped how organizations think about insider risk and AI-driven data loss prevention. Today, we're going to talk about securing AI in the enterprise, a topic that is reshaping cybersecurity strategies across industries. As GenAI tools like ChatGPT, Gemini, and Copilots enter the workforce, organizations face new challenges balancing productivity, innovation, and security. We'll explore how to enable AI use safely, secure the AI development pipeline, and future-proof defenses in a rapidly evolving landscape. Here's our conversation. [ Music ] How are employees using GenAI tools like ChatGPT, Copilots, and Gemini inside the enterprise today? And what risk does that create?
Navneet Singh: Employees are using GenAI applications in a variety of different ways, especially, you know, if -- if you look at marketing department, I lead marketing for network security here at Palo Alto Networks. We use it in -- in many, many different ways. One example of this, we just came out of the Cybersecurity Big Risk Conference which is RSA and biggest week, RSA week. At that time, during that time, we launched many new products and we had a campaign come out of it. So, one of the things that we did in order to launch that was actually do a competition, internal competition, where we had to use AI tools to come up with taglines, concepts, creative concepts, videos, and so on. So, we got 56 submissions in two days. And one of that, those submissions was actually chosen and that's what we went with. We -- we had Deploy Bravely for Prisma AIRS. That actually came out of this -- this competition. So, this is just one of the ways in which we are using it in -- in my team, and when we talk to customers, they're using it in a variety of ways in sales, marketing, finance, and so on.
David Moulton: Sure, I've talked to a number of our guests here on "Threat Factor" about how they're using it even in security. And so, it seems like it's -- it's really, really widespread. Maybe where I want to go next is, you know, Nav, what are some of the really common blind spots or surprises that you're seeing when organizations assess their -- their Gen AI usage?
Navneet Singh: Yes, one of the blind spots is just knowing if employees are using Gen AI applications. So, many times customers say, you know, "Is Zoom a Gen AI application? Is and -- is Slack a Gen AI application?" Because many of these applications have been used before ChatGPT was born, they are considered regular applications. But as you know, Zoom has the transcribing feature, Slack uses AI, right? So, this is one of the biggest blind spots and customers say we either block AI applications or they're not aware that they're actually using dozens, maybe hundred or more applications that use AI in the background for -- for some purpose. That's one of the biggest blind spots. What you are blind to, you can't secure.
David Moulton: Tanya, talk to me about the critical components of a security strategy that allows employees' AI or Gen AI use, without putting data at risk.
Tanya Shastri: Yes, so as Nav just mentioned, there's tremendous adoption of AI, but there is lack of visibility into what users are actually using. Essentially, we call it shadow AI. So, first and foremost, one of the very important components is having visibility into what the employees are using, what apps they're using, and then having a visibility into what the app actually does. What does it do, the various attributes of that application, so you can assess the risk of that application. That is one area or one component that's important. And as you think about it, really these apps are being generated so quickly and there are more and more new apps. So, staying up to speed and being able to recognize and -- and -- and -- and understand what these new apps are continues to be important. Then another area, another component or another piece that's very important is once you have visibility, you have to be able to control the usage of the app. And that control could be a blunt tool where you say it's too risky and I just don't allow access to the application. But more importantly, you also have to be able to have a finer approach in that you allow access to an application, but then you are able to have a finer grained ability to decide what you do with that application. So, for example, having access to a chat LLM, ChatGPT, for general use makes a lot of sense. There's a lot of value to -- to leveraging it, but you may not want it to be used for situations where employees are sharing code with it and asking ChatGPT to -- ChatGPT to improve their code. So, being able to have that kind of fine-grained control over how a -- how an app is used and what data is shared with the app, ensuring that no private sensitive data is shared, either inadvertently or -- or otherwise with the application. That's also another important piece, essentially control of that application. And then if you choose to decide to allow that application to be used, it's very important to continuously monitor the traffic that's actually going between the application back and forth to ensure that there are no threats, no malware, no other command control, other such things in the -- in the communication between the application and back.
David Moulton: So, let's talk about something that is a -- a pretty broadly used technology: browsers. Right now, I'm coming in on this podcast through a browser, through an application that lets me podcast from anywhere in the world. And I'm wondering how a secure browser fits into that architecture that you're talking about.
Tanya Shastri: Yes, so today, browser adoption and especially work is done more and more through the browser. It's getting close to 100% of the -- the work can be done through the browser. And a lot of these AI apps are delivered as web apps, so they are essentially accessed through the browser. So, all these components I mentioned, visibility, control, I -- I didn't mention one which is very important actually, it is the continuous monitoring of traffic that goes through to ensure that there's no threats in there. All those capabilities really need to be brought to the browser and that then gives us an opportunity to transform how security is done, bring it into the browser, make it much more user-friendly, productive, you know, bring productivity and -- and user experience to the forefront and so on. And with the browser, it also allows you to have better security because essentially, you're able to do things that are sometimes difficult in other situations. There are situations, for example, where decryption may be difficult because it's either a business justification or the technology isn't allowing for it. And being able to detect threats then becomes difficult if you aren't able to decrypt it. But given that you're actually looking at this traffic post decryption in the browser, those use cases can be addressed in the browser. And then malware is evolving very quickly. So, there are some advanced types of malware that can be essentially assembled. There are pieces of the malware that come together and they're assembled in the browser. And it's very difficult to -- to protect from those kinds of malware unless you're actually in the browser. So, these are the kinds of things that the -- bringing security to the browser enables. And like I mentioned before, again, with -- with our Prisma Access browser, we are actually extending our platform and bringing a very sassy integrated browser to the -- to market. I think first and only to do it, and that really allows us to change the way security is done. And I think as we like to say, it allows everyone to browse bravely.
David Moulton: You got me there. Nav, let me take it back to you. You were talking about a launch, 50-plus different submissions. A lot of those were generated through AI. And on the other side, I've got to imagine that, you know, our security team is looking at this as a -- a potential risk, an area where what we don't want to -- to come out of the company goes out into a public -- public chat or, you know, onto a -- onto a system that we don't control. And so, there's this push and pull that's going on between security teams that want to protect the company, but also need to balance that need for innovation and make sure that our Gen AI usage is -- is safe. How do you strike that balance between those two seemingly conflicting needs within an organization?
Navneet Singh: It's difficult. It is -- it is challenging. Let me -- let me start with that. As an example for when, you know, our PMMs and PMs that worked on that challenge, very security savvy. But at the same time, we still have to be very, very cognizant. And there's always that danger that something might leak to the outside world before you want it to. So, I think the first thing that customers need to do is to really have guidelines. And those guidelines have to be so simple for users to understand and comply with. For example, don't use anything, don't upload anything or use anything that you don't want the external world to know. But at the same time, I think that's not always enough. And that's why we have to compliment it with tools that can do user coaching as an example. So, if there is something that is sensitive, as Tanya was saying, you know, many times users do try and upload sensitive data, the tool should be smart and intelligent enough to say, "Hey, this is something that is sensitive. Do you really want to go ahead or block it if it is really deemed to be something that has very sensitive?" data like credit cards information or personal information or source code and so on. So, I think it is -- it is challenging, but it has to be a multi-pronged strategy to -- to eliminate that risk or to reduce that -- that risk.
David Moulton: That makes sense. Nav, what advice would you give to CISOs who are feeling that pressure to either fully block or just ignore Gen A? It's worthy the risk. We've got this opportunity to go fast, innovate at an unprecedented rate, just let people rip. What would do you say to that -- that CISO?
Navneet Singh: Yes, I've presented to many CISOs, many customers, and their reaction ranges from, "This is really dangerous. I don't allow it," to "I really want to allow it because it provides us a competitive advantage." So, coming back to my previous response of most of the applications are now actually using AI. There is no real way for you to say, "I don't want any AI in my enterprise." Like, if you use any video conferencing application, any messaging application, right, you are essentially allowing AI. So, I think the CISOs have to come to terms with the fact that AI is there in their environment, in their tech stack, in their -- in their applications that are already approved for use. They just have to ensure that it is used correctly, it is used within guidelines, and they have, not only the guidelines, but also the tools, the security tools that help us help them enforce those guidelines.
David Moulton: So, let's shift gears a little bit here and talk about how companies can look at building and deploying AI-powered applications safely. Tanya, when developers are building AI into their enterprise apps or, you know, the -- the different types of innovations that they're hoping that AI will unlock for them, what are some of the biggest security risks that emerge? Is it in the code, in the models, even at -- at maybe the infrastructure level?
Tanya Shastri: Yes, you know, I'll take a step back here first, because before we dive into the risks, it's important to understand that AI applications, the entire stack is quite different from traditional applications or even from cloud applications. So, it's almost like we are through the third generation of applications, starting with some that were more traditional, the typical three-stack, three-tier kind of applications that were then transformed to distributed systems or microservices and so on in the cloud. And now to the AI stack, which has various different components that are specific to AI applications. You know, starting with the infrastructure, there are GPUs and TPUs and other such things that are leveraged. The -- the ML models themselves, the LLMs and so on. There are various libraries, ML libraries, AI libraries that are used in the context of these LLMs. There are a whole slew of new plugins that essentially help you to leverage those LLMs well. For example, you might have a plugin that converts natural language to a SQL query. So, those kinds of plugins essentially help people to develop AI applications. And as we go -- you know, as we're going forward, these applications are getting even more complex. We call them, you know, compound systems, which are bringing together various LLMs, databases, enterprise search functionality, various plugins and so on. And then, this whole complex -- the system becomes more complex, and this then leads to an, you know, a -- a wider attack surface, essentially. Now, given this new stack, there's a whole new set of threats and a whole new set of -- a new attack surface that is developed. So, all the way from, you know, when you think about developing an application, you go through the code development phase, you deploy it, you run it, right? And in each of those stages, there are new threats. When you're developing the code, often the LLM models that are used may have some vulnerabilities in them. There may be misconfigurations in the infrastructure as code because the infrastructure itself is different and the mechanisms are different. So, threats or -- or misconfigurations and infrastructure as code is another area. That's related to code development and deployment. But then when you come to runtimes, when you're running the application, there are a whole new set of threats there as well. Some that are more predominant are things like prompt injections, essentially when someone is able to, when a bad actor is able to intercept a particular prompt and make it behave differently from what the intention was, able to extract information that was not intended to be shared, so on and so forth. Those are kinds of things that -- those are some kinds of threats. Additionally, there are, you know, similar to DOS attacks, you can have an LLM that's de-DOS'd [phonetic] because a prompt can be altered in a way so that it -- it isn't able to respond because it's under so much -- so much stress, right? So, there are things like that from a runtime perspective. And then as we are now moving into Agentic AI and with agents, there are even more new threats because agents use a lot of memory. They want -- you know, you want to have memory in there so agents can respond more quickly. They can become, you know, get more customized with time. And memory injection, date -- it's called memory poisoning. Maybe memory poisoning, data poisoning, things like this, where the LLM can then behave differently, the agent can behave differently. Even the agent itself, you know, if its access is too pervasive, it can almost impersonate someone's -- and be able to take their identity, right? So, those are kinds of threats that you can get yourself into given these complex systems. So, those are the kinds of things one has to contend with. [ Music ]
David Moulton: Nav, let me take it over to you. How does the Secure AI by Design approach help organizations bake security into the AI development lifecycle?
Navneet Singh: So, let me talk about a customer example. So, I was talking to a customer with a professional services firm, and they're building AI application. They in fact have tested it internally. It helps their consultants prepare for their meetings 2X faster because it gives them so much information so quickly and so easily. And which basically, for a professional services firm means that they could potentially even double their revenues with the same headcount. So, it's -- yes, it can be a game changer. So, when you look at this, you know, going back to what you were saying about CISOs, they are going to feel the pressure from their CEOs and the board to really allow AI. So, they -- that's why we believe that the best approach is Secure AI by design, which means you use AI in your development lifecycle, as Tanya was mentioning. So, we offer capabilities to secure AI or safely enable AI in both use cases that we just mentioned, either employees using third-party Gen AI applications like ChatGPT. So, employees can safely use, but prevent sensitive data -- data from leaking. And secondly, enterprises developing their own AI applications. So, all the risks that Tanya had mentioned. So, model scanning, red teaming so that we can find vulnerabilities, looking at the posture. Do you have overly permissive AI applications or agents? The runtime security prompt injection attack, preventing multiple different types of prompt injection attacks, right? All of that is something that we offer as part of our portfolio of -- of AI. And that's -- that's what we mean by being able to secure AI applications by design and securely being able to embrace AI.
David Moulton: Tanya, what does it mean to secure the entire AI pipeline from development to deployment?
Tanya Shastri: So, I just shared how the entire stack for AI is new, and how there is a lot of complexity in terms of the technologies that are being brought together to deliver an AI application, and then essentially the threats that it opens up. And when you think about it, essentially all the threats that open up during the development, deployment and runtime are essentially what we need to take care of. So, starting with development, being able to scan these ML models, being able to have the confidence that the -- that the ML models that are being used are secure, do not have any malware or vulnerabilities in them. Starting right there with scanners and so on, ensuring that there are no secrets shared inadvertently, no data being included in code that should not be included. Those kinds of things at -- in -- at the code development time. From a deployment standpoint, you really need to be able to first assess what all -- what all exists in the infrastructure that is all related to the AI application. So, essentially discovery, to be able to discover all the pieces that bring -- that are being brought together to develop the application, ensure that all those pieces are deployed correctly, do not have any misconfigurations, being able to ensure that that is done right. That's another big piece from a deployment standpoint. So, essentially, all the things I talked about, whether it's new agents, plugins, LLMs, data sources, all those need to be deployed and configured appropriately. And then from a runtime -- runtime perspective, being able to continuously monitor. So, essentially when these applications are -- are put in production, they now access the outside world. They're communicating with other applications, with other -- with external entities. And being able to continuously monitor that connection and being able to ensure that all the traffic that's going back and forth doesn't have any malware in it, doesn't have any -- any threats in it. There isn't any data being exfiltrated, being able to make sure that there's no data loss, all those things are also important. And you know, I do also want to highlight, and I mentioned before, right, with AI, there is no AI without data for all practical purposes. So, ensuring that the data is secure, not just the access to the data, but you know important secure data, private data is locked down as appropriate.
David Moulton: So, Tanya, you've talked about this new pipeline and a lot of the different things that need to be done, and you get to a certain point where you're like, "Yes, I'm secure, I'm ready to go," and then adversaries evolve, right? So, how can a security team future proof their defenses against that constantly changing landscape?
Tanya Shastri: Yes, so we absolutely have to acknowledge that bad actors are evolving very quickly. They are extremely sophisticated. Threats are evolving very quickly. We are seeing upwards of 2.3, 4 million new threats that we hadn't seen the day before. So, there's a whole transformation of how you do security. I'd say there are two principal pieces that CISOs should keep in mind. One, things are evolving too quickly and bad actors, scale, speed, sophistication is increasing so quickly that you cannot really solve the problem by bringing more humans to -- to the situation. It really is a -- is a situation where you have to bring AI and automation to bear. That is going to continue to be very important and investing in those approaches and develop -- not just developing them, but leveraging them in -- in every possible way is definitely going to be part of the future. So, that's one key area. The other area is essentially platformization in that leveraging platforms is always a better approach than having to work with many different point products. Stitching them together makes it very difficult to be as agile and react as quickly to new -- new scenarios as you have to. So, essentially the -- the approach to be able to take and invest in a solution. And then extend that solution or add elements to that solution so that you can extend to the use cases that you need to react to would be another -- another approach that I'd -- that I'd say is very important because that platformization approach essentially allows one to adopt and -- and react to new use cases very quickly without having to build the entire infrastructure stack, new architecture, new policy, and so on. So, those two things, I'd say, leveraging AI and automation, essentially bringing machines to complement humans, and leveraging platforms where they are available. Those would be two important things.
David Moulton: So, Tanya, final question for you. How should organizations approach things like compliance and privacy and trust when they're deploying their own AI tools at scale?
Tanya Shastri: Yes, having good governance -- governance frameworks and -- and policies is very important. And I -- I like to think about it as not something that you think of you know, in hindsight. It has to be designed and thought -- thought of upfront, done well, because that actually then helps you to be more agile as you go forward. So, things like, and I'll start with data because data continues to be most important, being able to classify data well. Having, you know, is it -- is it public data, confidential data, restricted data? And then how do you, you know -- what are the governance that you need in the context of those data sources? So, I'll give you examples. Here at Palo Alto Networks, for example, we have public data, our technical documentation, for example, very different policies and approaches that we can use with it because that is public data. We don't have to worry about limiting access to it. We actually want everyone to have access to that data versus you know bringing LLMs to bear in product. And in that situation, we have to button it up down very tightly because it's essentially information that we want to share just with that particular customer whose product -- who is using that particular product. So, there's a range of things you can do and -- and you know, different policies you bring depending on the classification of data, leveraging models, ensuring that there are approved models similar to what I would say we do with secure base images, where there are approved base images, approved models, approved plugins, that allows, again, companies to innovate more quickly while ensuring the models and -- and tools and plugins they use are meeting the bar. So, that's another area. And then, you know, I bring -- and bring in agents as well, right? Essentially, when an agent can behave and act almost like a person and -- and take action on behalf of a person, we need to bring least-privilege access like concepts to agents. So, essentially, governance around agents, how they behave, what they have access to, all those things will also then be important. And again, like -- like in the context of data, what is the agent's role? What data does the agent have access to? All those things become important as well. And then, how -- how strict you are or how -- how relaxed you can be depends on the data that those agents have access to. And then one thing that's also important with these -- with -- with governance and with -- with policies that -- that are put in place is it isn't like a one-time thing. You do it and forget it. You have to continuously monitor and ensure that you're -- you're continuing to -- to meet those guidelines. You're continuing to evolve because new models come up. You have to continuously say which are the approved models so that you are allowing your teams to innovate with the best that is available, at the same time doing it in a way that's secure.
David Moulton: As you were talking there, it reminded me of the conversation I had in Episode 66 with Noel -- Noel Russell, and she was talking about this idea of being fair, being accurate and being secure. Those are choices that you need to make. You know, that was -- that was sort of the -- the framework she had because you own the outcome. And I think when you're talking about governance, that is a operationalizing, owning the outcome that you have with your AI applications, such that they're compliant. They -- they are private. They build trust rather than going in the other direction where you're running up against a regulator who says, "That's not -- that's not okay," or "You're sharing data. That should be private." And -- and it's expected to be protected. And certainly, if you do those things right, you get to that outcome of trust. And I think that as much as we look at the innovations of AI and some of the applications that we have and how fast they're going to allow us to move, if we don't have trust as the operator, as the user, it doesn't matter, right? Like, you get to that point where you go," I can't trust it," whether its outcomes aren't quite right or if it's leaking data. So, I think this is a really important thing to talk about as far as security and -- and the governance in and around building and deploying AI applications. With that, I'm curious what role you think governance frameworks have with staying ahead of some of the emerging regulations?
Tanya Shastri: Yes, so this is an evolving space, as you know, right? The regulations are changing as we speak. Having a good framework, a solid framework, and I like to think of even governance and things like this as -- as a platform, right? If you do it well, you do it once, you can take these controls that you have put in place and bring them to bear in slightly different ways and/or bring the same controls to a different regulation, right? So, essentially, the investment in it, and I couldn't agree with you more, it is -- it is important because doing it right ensures you have the right outcome. And allows you to innovate more quickly and -- and be more agile as things change, as regulations change. So, having that investment upfront and designing it well or thinking about it and structuring it well leads to those good outcomes.
David Moulton: Yes, I think Mehr [phonetic] said it very well a couple of months ago that she needs security here to be a really strong break, so that she can go really fast. And I love that contrast between innovation and being able to develop at lightning speed, because you trust that security is going to be able to stop you on a dime if you get to a point where it's too much risk.
Tanya Shastri: I couldn't agree more. And actually it is like that, right? When you're confident, when you know you're secure, you can go ahead bravely. And that's essentially what we're trying to do here. We are ensuring that our customers are secure. They don't have to worry about it. They can go ahead and do all the innovation they need to, do it bravely, do it broadly, and leverage the benefits without the risk. [ Music ]
David Moulton: Tanya, Nav, thanks for the awesome conversation today. As expected, I've learned a lot, and I really appreciate that you're sharing your insights on how organizations can securely enable Gen AI usage and build AI-powered applications with security built in from the start.
Navneet Singh: Thank you. It was a great conversation with you, Dave.
Tanya Shastri: Thank you.
David Moulton: That's it for today. If you like what you've heard, please subscribe wherever you listen and leave us a review on Apple Podcast or Spotify. Your reviews and feedback really do help us understand what you want to hear about. If you want to reach out to me directly about the show, e-mail me at threatvector@ paloaltonetworks.com. I want to thank our executive producer, Michael Heller, our content and production teams, which include Kenne Miller, Joe Bettencourt, and Virginia Tran. Elliot Peltzman edits the show and mixes the audio. We'll be back next week. Until then, stay secure, stay vigilant, goodbye for now. [ Music ]