Podcast

Designing Human-Centered Security Operations

Aug 14, 2025



Security analysts are drowning in tools, alerts, and tabs. In this episode of Threat Vector, David Moulton, Senior Director of Thought Leadership for Unit 42, talks with Patrick Bayle, SecOps Consulting Manager, and Liz Pinder, SecOps Consultant, both with deep experience in Security Operations Centers. They explore how constant context switching impacts analyst performance, response time, and mental load. Hear how SOC leaders can design workflows that reduce noise, improve focus, and restore purpose with automation and unified platforms. It’s a must-listen for anyone building or managing a modern SOC. 





Transcript

 

[ Music ]

 

David Moulton: Welcome to Threat Vector, the Palo Alto Networks podcast, where we discuss pressing cybersecurity threats and resilience, and uncover insights into the latest industry trends. I'm your host, David Moulton, Senior Director of Thought Leadership for Unit 42. [ Music ] Today, I'm joined by not one, but two incredible guests from Palo Alto Networks: Liz Pinder, Systems Engineer Specialist for Cortex, and Patrick Bayle, SecOps Consulting Manager. Liz has built her career on solving complex SOC challenges, with hands-on work in automation, incident response, and playbook design. Patrick brings nearly two decades of cybersecurity experience, from consulting and engineering to leading SecOps teams and shaping strategic response frameworks across industries. Today, we're talking about a challenge that is both invisible and incredibly costly: context switching in the security operations center. A 2022 Harvard Business Review study revealed that the average employee switches between applications 1,200 times a day, losing up to four hours a week in toggling alone. In a SOC, that cognitive drain is amplified, as analysts shift between dozens of tools, dashboards, and alerts under constant pressure. This kind of operational friction can delay response times, increase errors, and burn out talent. We're going to dig into what causes this kind of overload, how to reduce it with smarter workflows and automation, and what leaders can do to design SOCs that are built for focus, not fatigue. [ Music ] Liz, Patrick, welcome to Threat Vector. I'm really glad to have you both on the show.

 

Liz Pinder: Yeah, thanks so much for having us. Yeah, me and Patty have been really looking forward to it.

 

Patrick Bayle: You said it, yeah. We really love the podcast, and are looking forward to this.

 

David Moulton: Liz, I have to start with you. Your path into cybersecurity is unlike anyone else's, from geochemistry labs, to SOAR consulting, to Cortex engineering. How did that shift happen, and do you see any crossover in how you solve problems across fields?

 

Liz Pinder: Yeah, it's a bit of a strange one. Studying science at university, mostly around chemistry and geology, I definitely didn't see cybersecurity in my future at all. When I graduated, I actually went into a distilling career, so I was distilling whisky and gin for about a year, and you know, a lot of people say cyber drives me to drink, but drink drove me to cyber. Around that time, WannaCry happened. I don't know if anyone remembers that, but it was basically a huge ransomware attack that affected the NHS, our national health service, and I thought, wow, that's really interesting, really exciting. I just didn't think I could do that kind of career with my background, but it turns out there's quite a lot of crossover, maybe not on the distilling side [laughs], but especially on the science side, that analytical thinking. That's where I started my career, really, in a grad program in a SOC: first-line analyst, then moving on to threat intelligence. Then the SOC that I worked in actually purchased Demisto, what is now XSOAR, and I really got into automation, seeing how we could transform our SOC into a more automated SOC, to make things a lot easier for analysts like myself. So yeah, there's quite a bit of crossover, especially around analytical and logical thinking; that structured thinking is the most that I took from working in a lab. I wouldn't say the crossover was easy, it was a lot of learning, kind of being thrown in the deep end, but as long as you have that scientific mind, it's quite a natural transition in that way.

 

David Moulton: And Patrick, you've worked in security consulting, engineering, and operations across banks and vendors, and now here at Palo Alto Networks. You've seen SOCs evolve first-hand. What has changed the most in how teams manage their workload and their tools?

 

Patrick Bayle: I think what's changed is that the proliferation of tools has exploded. Legacy SOCs grew organically, and they would bolt on additional tools: we need threat intel management now, because we've got so many indicators of compromise. Oh, we've got a massive deluge of stuff that we need to automate, let's bolt on SOAR technologies, like Liz said, how she came into the market at that kind of opportune time. So we've seen that explosion from a singular point of view to multiple screens, the swivel-chair analogy: people in SOCs are looking at too many things, and they are, as a result, being unproductive, as far as we're concerned.

 

Liz Pinder: Yeah, you've got to remember, so when Patty first started in the SOC, like the wheel wasn't even invented. So it was a long time ago.

 

David Moulton: I also was there 3,000 years ago.

 

Liz Pinder: Wow [laughter].

 

Patrick Bayle: I'll explain to you what a Zed series is, and how you can manage security risks related to that for a large financial organization.

 

Liz Pinder: My goodness, I'm looking forward to it.

 

David Moulton: So when we were talking about putting this podcast together, we were talking about this idea of context switching, and I ran across this HBR article that talked about workers switching apps something like 1,000+ times a day. It seems kind of wild, but then you start to observe your own patterns and you realize, yeah, you're moving back and forth between desktop applications and web apps, and your browser tabs get to the point where you can't even read them anymore, there are so many, and each one is a different action or a different capability. I imagine that kind of context switching shows up in the SOC, and that it's really costly. Can you talk about what the cost of that disruption, or the inability to focus because of all those tools, looks like, Liz?

 

Liz Pinder: Yeah, definitely. When we talk about the impact of that actual screen switching, I feel like there are two overall issues. There are the issues on the analyst's side, which I definitely experienced, and then issues on the visibility and detection side. To talk more about the analyst side, since that's my personal experience: I don't know if you've ever heard of the essay by Paul Graham which talks about maker and manager time. It's quite an old piece now, from a few years ago, but essentially maker time is when you have long, uninterrupted blocks to actually build and create something, whereas manager time is split into meetings, check-ins, quick decisions. It's really that maker time that you can directly associate with an analyst. You need that time for deep thinking with no interruptions, especially when you are going through an incident, when you are triaging. Thinking back to my experience, if you're having to continuously collect data for an alert, an alert comes in and I'm going to have to go to different sources to collect this data: logging into a firewall platform, querying logs in my SIEM solution, or even contacting someone, the owner of a misconfigured S3 bucket, for example.
All of that time adds up on its own, but there's also that mental overhead that isn't really thought of. If someone interrupts you when you're in the zone, a message comes in, or a meeting gets put in while you're doing a task that requires deep thinking, for me personally it takes a good 30 minutes to actually get back into the task I was originally presented with. So imagine that happening constantly while you are just triaging alone, and how much time that adds to actually resolving that alert. That's a lot of the reason why we have such long times to respond: that jumping across different tools and gathering all that information.

 

David Moulton: You know, Liz, you're talking about being interrupted by a different application, or you know, something coming in, there will be times when things run slowly and, you know, I am like, okay I'll let that run and I'll come back to it later. I interrupt myself because I'm not willing to wait, and you know, I guess I'm ashamed to admit this, a bit, but sometimes I'll come back and be like, what was I even doing here? And I can only imagine if you compound that and then you run that same scenario under stress, you're trying to look into an incident, figure out what's going on, and you're holding a lot of different ideas in your mind as you run those alerts down, it doesn't help to have multiple applications adding to that cognitive overload.

 

Liz Pinder: Yeah, exactly, and you say it's a shame to admit, but that's just the way the human brain works, you know? Like you said, especially when you have that pressure there. And it's not even just when you're dealing with a big incident that has come in. I'm talking about maybe a low or informational alert. Even then, just the overhead of having to think about that one alert, which may be a false positive, takes up so much time, and so I have less time, or less mental energy, to then focus on what matters.

 

Patrick Bayle: I think the best thing about working in a SOC is that no two days are the same; the worst thing about working in a SOC is that no two days are the same. It can be chaos. Especially without automation, and we see this a lot with legacy SOCs: if you don't have a grip on, and consistency in, how you're going to respond to a type of incident, then it's the wild west. If I ask three people in a SOC to tell me how they deem whether something is malicious, I'll get three different answers. Completely different answers. And who is to say which is right and which is wrong? So it's really up to the design of the SOC to decide what the appropriate way to respond to that incident is.

 

David Moulton: Liz, you were talking about how you came into security and then saw the rise of automation. Talk to me about where you see automation making the biggest dent in reducing the cost of the switching that we've been talking about?

 

Liz Pinder: I think automation is just a great start, especially when it comes to bringing tools together, integrating those tools. It's something I experienced first-hand: we brought Demisto, now XSOAR, into our own SOC, and that really changed the way of working. My daily job became so much more interesting. Instead of having to deal with every phishing email coming in, a user-submitted email where nine times out of ten at the time it was either benign or spam, which was just so boring and mundane, I went to having those kinds of alerts auto-resolved and closed down, and focusing on the actually interesting phishing emails, really interesting malware. I'm not only able to actually use the skills I've been training for, but also, as Patty was saying before, massively reducing that mean time to respond. Bringing together all of the integrations we use in a phishing response process and automatically dealing with it made a huge change, not only to the SOC, but to my experience in my role as well.

 

Patrick Bayle: Yeah, I think everyone who gets into security analytics, maybe they don't know what they're getting into, but like how Liz fell into it, and how I kind of fell into it, is: that sounds interesting, I want to go and do that, because it sounds like a challenge, and we want to be challenged. The SOC personas want to be the hero. They definitely want to be useful. Nobody wants to be dealing with false positives. They are inevitable, of course; there will be times when you invest your hard work on something and it results in an unfulfilling incident, an incident that cannot be resolved, or a false positive. But if you do have automation, then you can reduce the context switching, and you can ensure that the SOC can be fulfilled in their jobs, so they can work on things that are genuinely interesting to them. They can potentially be the hero, play on that hero complex, do what they're interested in, and have a material impact on the organization they work for.

 

David Moulton: How can leaders streamline the SOC environments they're in charge of without sacrificing detection and response quality?

 

Liz Pinder: We hear customers talking about streamlining, and this was happening even when I first started out, which must be nearly eight years ago now. They always say: I want this single pane of glass. I remember my manager being like, this enigma, this single pane of glass, I want to get there. It was like Harry Potter and the Single Pane of Glass, honestly; it was the goal. And when they talk about a single pane of glass, what they're dreaming of is what we talked about before: fewer logins, less tool hopping. But a single front end, UI consolidation with some automation, may lead to fewer tabs open, yet that only goes so far in actually helping an analyst understand the full story of an event. When we brought in that unified front end, that single pane, at the SOC I was in, you still had the issue that, as an analyst, I had to manually connect the dots. I still had to manually trawl through logs to connect the dots myself and get the full causality, the full visibility of that alert or incident. As a human, you can only go so far in correlating that data together. I'm not a machine. So you're really still increasing that mean time to respond, because you're having to manually trawl through the data and establish that causality, which really adds to the response time.

 

David Moulton: Let's talk about alert fatigue. When analysts jump from tool to tool and alert to alert, how do we ensure that they can stay focused on what matters?

 

Patrick Bayle: So yeah, when we speak to SOCs we ask: what would you like to automate? It's an intentionally provocative question, and we normally get two answers: everything, or we don't know. I'm not sure which one is scarier, to be honest, but it's probably "we don't know," because if we're talking about alert fatigue, they should know the type of alert, or alerts, that is causing them to be fatigued. So when we're talking to SOCs: don't pick that one horrible task, that one horrible system that you don't like, that you have to deal with once every six months or every year. Do the things that you do most often. If you can shave off 30 seconds here, a minute there, and you do that numerous times a day, a week, a month, then there is your return on investment on your SOC, there is automation being key for you, and there is your reduction in burnout, because you're not doing the same thing over, and over, and over again. That's the stuff that drives me up the wall, repeating those mundane tasks. And thinking about the risk perspective again, that's the stuff that people in the SOC will forget to do, or intentionally not do, because they have a bias; they think they know what the result is, so they'll assume it's benign, or they'll assume it's pernicious, and they'll just quickly try and close the incident down. That's the wrong behavior. That introduces risk, which the SOC is there to avoid, or rather reduce.
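Patrick's "shave off 30 seconds here, a minute there" argument is easy to make concrete with back-of-the-envelope arithmetic. A rough sketch in Python; every figure below is illustrative, not from the episode:

```python
# Back-of-the-envelope ROI for automating SOC tasks.
# All numbers are hypothetical, for illustration only.

def annual_hours_saved(seconds_per_run: float, runs_per_day: float,
                       workdays_per_year: int = 250) -> float:
    """Analyst hours recovered per year by automating one task."""
    return seconds_per_run * runs_per_day * workdays_per_year / 3600

# 30 seconds shaved off an enrichment lookup done 40 times a day...
frequent = annual_hours_saved(30, 40)
# ...versus an hour-long chore performed only twice a year.
rare = annual_hours_saved(3600, 2 / 250)

print(f"frequent task: ~{frequent:.0f} h/yr, rare task: ~{rare:.0f} h/yr")
```

Under these made-up numbers the frequent small task recovers roughly 80 analyst hours a year versus about 2 for the rare chore, which is the return-on-investment point in numbers.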

 

Liz Pinder: It's all about giving them something interesting to look at, right? As we talk about how we avoid interrupting that flow: first of all, with automation, and not just automation in terms of going all the way to resolution, all the way to let's block this straightaway. Really simply, we can use automation to enrich an alert. Instead of me having to go to my various open-source intelligence tools to look up one IP address, I can have all that information provided to me straightaway, so analysts can make an informed decision to isolate that machine or close that alert down. It's really that low-hanging fruit that helps combat alert fatigue.
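The enrichment Liz describes can be sketched as a small pre-triage step. The lookup helpers below are invented stand-ins for whatever GeoIP or threat-intel sources a SOC actually queries:

```python
# Minimal sketch of alert enrichment: gather context from several
# (hypothetical, stubbed) intel sources so the analyst sees one view
# instead of querying each tool by hand.

def geoip_lookup(ip: str) -> dict:        # stand-in for a GeoIP service
    return {"country": "unknown"}

def reputation_lookup(ip: str) -> dict:   # stand-in for a threat-intel feed
    return {"score": 0, "verdict": "benign"}

def enrich_alert(alert: dict) -> dict:
    """Attach enrichment for every IP in the alert before triage."""
    enriched = dict(alert)
    enriched["enrichment"] = {
        ip: {**geoip_lookup(ip), **reputation_lookup(ip)}
        for ip in alert.get("ips", [])
    }
    return enriched

alert = {"id": "A-1", "ips": ["198.51.100.7"]}
print(enrich_alert(alert)["enrichment"]["198.51.100.7"]["verdict"])  # benign
```

In a real SOAR playbook each stub would be an integration call, and the merged context would be rendered alongside the alert so the analyst never leaves the case view.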

 

David Moulton: You know, Patrick, I think humans seek out the new and novel. When we're running through something that is extraordinarily repetitive, especially a set of tasks in a row, we tend to forget things or skip over things, because it's not new and novel, it's repetitive. I think this is why pilots and surgeons have pre-flight or pre-surgery checklists they go through, to make sure they've flipped the right levers or washed their hands in the right order. When you're talking about some of that automation, I feel like it's that checklist, but it actually goes through and performs the behavior. And that's the behavior you want, so you can get to the pilot taking off or landing in a safe way. Is that what you're seeing here as the part that's new and novel? The investigation, running down and understanding whether this is malicious activity, that's what you want somebody to be able to focus on and bring all of their talent and flow to, and not forget, oh, we've got to log this, or write that down, or check here for information, or make an assumption. You want to move those things into automation, because it's 20, 30 seconds, but it counts that you had the discipline to do it every single time.

 

Patrick Bayle: Absolutely. There was one SOC I engaged with that stuck in my mind. We were talking about how they add value to the business, and I was speaking with their level 3 analyst, the main man in their SOC, one of the most senior people they had. His frustration was, and he didn't necessarily say this in front of certain management, but he said to me: every 30 minutes I'm expected to stop investigating, stop analyzing this incident, and provide management with a summary of what's going on. And I was like, dude, that's a process. That's something you should be automating, and let me show you how to do that. You email this distribution list with this report format; if the alert is open and in this status, i.e., critical, then you can tell them automatically: here's what's going on, here's the next step. That's just stuff you're doing as part of an investigation, case-management stuff, which can be extracted and sent to people. So it was not the typical use case for solving SOC problems, but it was a burden to him, and that was a risk to the business, because if he left, as one of their two most senior engineers or analysts, that was a problem for that business. It's something that's really easy to fix, yet often the SOC won't step back and go: this is causing us pain, this should be a quick fix, because it's a process. It's a thing. It's not resulting in a happy analyst, and it's increasing our MTTR.
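The half-hourly management summary Patrick mentions is classic case-management automation. A minimal sketch; the field names, statuses, and report format here are invented, not taken from any particular SOAR product:

```python
# Sketch: auto-generate the recurring incident summary that management
# asks for, so the analyst never has to stop investigating to write it.
# Field names and statuses are illustrative.

def status_report(incident):
    """Return a management summary, or None if no report is warranted."""
    if incident["status"] != "open" or incident["severity"] != "critical":
        return None  # nothing to escalate; don't interrupt anyone
    return (
        f"Incident {incident['id']}: {incident['summary']}\n"
        f"Current step: {incident['current_step']}\n"
        f"Next step: {incident['next_step']}"
    )

incident = {"id": "INC-42", "status": "open", "severity": "critical",
            "summary": "Suspected credential theft",
            "current_step": "Reviewing authentication logs",
            "next_step": "Isolate affected host"}
print(status_report(incident))
```

In practice this string would be emailed to a distribution list on a timer, exactly the report-every-30-minutes burden the analyst described.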

 

David Moulton: I'm sure you could calculate the cost of the automation, or the savings, but from an overall risk-reduction standpoint, it seems almost invaluable. [ Music ] Liz, you've designed playbooks with a lot of customers; a playbook or workflow is what we tend to call these pre-flight checklists and automations in security. What mistakes do you see teams make that actually increase their context switching during incident response?

 

Liz Pinder: Yeah, so probably the most common one, and I think Patrick will agree, is that you can't automate without having the process there in the first place, right? Quite often customers come to us, or I'm building a playbook, and that process either doesn't exist in the first place, or it's a bad process. For example, we had a customer that wanted to simply reset a password and remove the user from AD if there was an insider threat. What they didn't have was the proper process written down, so it was really difficult to then automate it. They didn't think about: what if this person is a VIP user? Do you want to change the password of a C-suite executive, for example? Things like that. If you put a bad process into automation, if you create a playbook out of a bad process, you're going to have a bad playbook. So you really need to think about the process, and go through it beforehand, before you think about automation.
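Liz's VIP example suggests where a guardrail belongs in such a playbook: before the automated action, not after. A hypothetical sketch; the VIP list and action names are invented:

```python
# Sketch: guardrail in an automated insider-threat response playbook.
# High-profile accounts are escalated for human approval instead of
# being reset automatically. VIP list and actions are hypothetical.

VIP_USERS = {"ceo@example.com", "cfo@example.com"}

def handle_insider_alert(user):
    if user in VIP_USERS:
        # Never touch executive accounts automatically.
        return "escalate_for_manual_approval"
    # Ordinary account: the playbook can act on its own.
    return "reset_password_and_disable_ad_account"

print(handle_insider_alert("ceo@example.com"))
print(handle_insider_alert("analyst@example.com"))
```

The point of the sketch is the branch itself: writing the exception down as a process step is what makes the rest of the playbook safe to automate.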

 

Patrick Bayle: It's not all or nothing. And I think there is that fear, where we want to automate, we want to automate fully, but we don't trust it. You can and should implement guardrails for, you know, break-glass situations, like potentially putting yourself at risk of losing your job by resetting someone's password in exactly the situation where you shouldn't. Although you could argue those are probably the people most likely to be targeted. Let's test it. Keep testing it. It's not a set-and-forget type of thing. It's an iterative process that you want to test and refine, unless it's one of those high-fidelity, 100% accurate things, like: who is the user, what is the IP, what is the machine, what is the CVE. All those things.

 

David Moulton: As we're talking about this, I'm realizing, more and more that the human factor is critical. If you can't focus, if you are burnt out, if you're incentivized wrong, all of those things can come in and compound, to increase your risk and lower the outcomes that you're looking for. If you were to talk about what a more human-centered SOC design would look like, what are the ideals there and how do organizations get closer to that?

 

Liz Pinder: When you first start in a SOC, especially fresh out of uni, right, you are the keenest being. You're super keen, and just like we were saying before, you join cyber because it's exciting. You're hired as an analyst because you have all this background; my job description when I applied as an analyst said you have to have Python skills, you have to have reverse-engineering. And it was a bit of a shock to the system when I actually got down and did the job: I did none of that, you know? It was still kind of interesting, but after a year of closing down false positives and copying and pasting indicators of compromise, you think, my brain needs something more. It needs sustenance. So it's really about achieving a role for an analyst that encompasses what you were actually trained for. I think our own SOC is a perfect model of that: they don't spend all of their time responding to incidents and triaging alerts. A lot of that time is on threat hunting, things like research. That was my favorite thing in the SOC, being proactive instead of reactive. And relating back to our own SOC, the way we've achieved that is through a unified platform: not only a unified front end, but a unified back end as well, and normalization of that data.

 

David Moulton: So it will come as no surprise that I'm going to shift the conversation to AI. I'm hoping by now you've heard of these AI copilots, or assistants, a lot of which are being pitched as a fix for the overload we're talking about today. I'm curious: what's realistic, and what's hype, when it comes to using AI to reduce some of the SOC fatigue?

 

Liz Pinder: Yeah, so obviously AI has become such a buzzword. I think every single vendor, every SOC, comes up to us and wants to talk about AI and what we're doing around it. In reality, if we look under the umbrella of AI, Palo Alto Networks, or Cortex, has been using machine learning for years: in the analytics on that normalized data we're seeing in XSIAM, and going all the way back to our prevention modules, with behavioral threat prevention and local analysis. That's brought huge benefits: instead of relying on static rule matches, machine learning allows for analyzing behavior over time and identifying what's normal for a machine or user. But what is over-hyped, in my opinion, is the use of AI, more specifically large language models, or LLMs, in SOC tools. I'm sure everyone listening to this has used ChatGPT. They are non-deterministic in nature: I can ask one, "can you plan my Disney holiday," and it will come out with a completely different Disney holiday the next time I ask; the same model can plan a completely different holiday. Now imagine that in the context of a copilot. We get questions like this all the time: do you have AI to summarize an incident? If you ask it to summarize an incident, it will come up with different answers every time; you'll likely get a response that's inconsistent or inaccurate.
And often these are sold or pitched as reducing SOC fatigue, as helping out first-line, untrained analysts, but in reality you're just making it worse, because first-line analysts, people who have just joined a SOC, like myself back then, would have taken something like that at face value. I wouldn't have been able to determine whether that summary of the incident was correct, was accurate. Another thing it does is completely take away the skill you actually brought that analyst in for, that logical and analytical thinking; instead, it's being replaced by that vague assistant.

 

Patrick Bayle: Sometimes we get that question of "tell me about your AI," and I think that's a telltale sign they're looking at too much marketing material, and not at the outcomes they want from their function, in this case security operations. It's like, cool, we can talk about it, but what do you want from it? Let's take it back to the problem: where is the pain? We see it being really effective where there is a specific goal in mind. But some of it is innovation for the sake of innovation. Some of it will get better, absolutely. But some of it is adding another thing for the SOC to do, and it's in vogue, it's sexy, but it doesn't contribute to the outcomes of the SOC.

 

Liz Pinder: And that's not to say that copilots with LLMs aren't valuable, especially when we think about RAG-based LLMs, retrieval-augmented generation, which retrieve relevant information from something like a knowledge base. That's useful, especially when it comes to helping a new analyst with a specific tool. You know: how do I isolate a machine? It can look at the documentation around that, because let's face it, you spend a lot of time getting used to a tool. So I think it's really valuable when it comes to things like that.
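The retrieval half of what Liz describes can be illustrated with a toy retriever over a two-article knowledge base. Real RAG systems rank by embedding similarity; this sketch uses simple word overlap, and the articles are invented:

```python
import re

# Toy retrieval step of a RAG assistant: find the knowledge-base article
# most relevant to an analyst's question, to be handed to the LLM as
# grounding context. Articles and scoring are illustrative only.

KNOWLEDGE_BASE = [
    "To isolate a machine, open its endpoint page and choose Isolate.",
    "To close an alert, set its status to Resolved and add a close reason.",
]

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question):
    """Return the article sharing the most words with the question."""
    q = tokens(question)
    return max(KNOWLEDGE_BASE, key=lambda doc: len(q & tokens(doc)))

print(retrieve("how do I isolate a machine?"))
```

Because the answer is grounded in a retrieved document rather than generated from scratch, it sidesteps some of the non-determinism Liz objects to in bare LLM summaries.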

 

David Moulton: Yeah, Liz, I tend to agree. I sometimes look at the tools we have in front of us, and you're talking about the LLMs, or at least that's what I'm thinking about, and the knee-jerk reaction is to go do this summary, or write this for me. I prefer the model of: here's what I think about this situation, am I missing something? Have I got a bias or a blind spot? Or: I need to learn a tool, walk me through how to use this interface, or how to use these tools in a way I hadn't considered, because I don't have the time to build up that experience without a coach. So with the right use case and the right application of some of these tools: great. But a wholesale handing-off ends up putting you at a point where you take longer, because you're not getting a good outcome that's deterministic or repeatable, and you don't have confidence in it. And I think that's the lovely thing about automation: once you have it going and you've built to that point of trust, you can run it, and start to take that savings of time and apply it to the things that we humans are uniquely capable of doing.

 

Patrick Bayle: Yeah, it's robust, and it's something a human would have had to do multiple times. That's the thing: if we do it once, it's not going to be beneficial. If you do it 5, 10, 15, 20 times, that is massively beneficial.

 

David Moulton: Patrick, what's the most important thing a listener should remember from our conversation today?

 

Patrick Bayle: Realizing you need to reduce context switching is a great start. Change is very rarely as scary as you think it is. Once you do it, you go on to future-proof your security operations center. I think we have to prepare ourselves to be constantly disruptive, because the attackers are continuing to innovate, and if you're staying static, or you're afraid to change, or you have Stockholm Syndrome because you've poured loads of effort into this tool or these tools and can't possibly change things, then that will become a risk to the business. Either through attrition, as the experts who have done that work leave, or through tools not innovating, or through more tools being acquired and, you know, increasing the demands on the SOC so they have to switch context more often.

 

Liz Pinder: Yeah, what Patty said! No, um [chuckling], just to echo that, I agree, you know [laughs]. I mean, going back to what I said previously, I've been in the industry almost eight, nine years, and it's still an issue. Alert fatigue is still spoken about, you know, high MTTRs, and it's getting worse, because we've got more data than we've ever had before. So I guess my overall comment would be that something has to change; we have to do something different. Legacy SOC tools just aren't working anymore. And yeah, essentially it's what Patty was saying: even if it's really scary, you need to make the change, otherwise it's going to get worse.

 

Patrick Bayle: I think, as well, validation. If you're unsure of something, then you can validate. You can have a purple team exercise, where it's kind of a collaborative thing, or you can take a step back and look at what our strategy for the SOC is. You know, we planned this thing, now it's working, or we're unsure it's working, so how can we validate that it's happening? And how can we maybe form some sort of collaborative exercise, like something with Uniforce too, where we'll do a purple team or a table-top exercise, or play out different risk scenarios to see how the SOC would act? And that should be a continuous thing as well. [ Music ]

 

David Moulton: Liz, Patrick, thanks so much for a great conversation today. I really appreciate you sharing your experience and your insights on context switching in the SOC, and on how we can design a smarter, more sustainable analyst workflow.

 

Liz Pinder: Yeah, thank you so much for having us, it was really fun. We took a trip down memory lane. It was traumatic, but we got through it.

 

Patrick Bayle: Yeah, we had fun. We are allowed to have fun occasionally, so it's good that we got to do it with you, David, thank you. [ Music ]

 

David Moulton: That's it for today. If you like what you heard, please subscribe wherever you listen, and leave us a review on Apple Podcasts or Spotify. Your reviews and feedback really do help me understand what you want to hear about. Or reach out to me directly at Threat Vector at paloaltonetworks.com. I want to thank our Executive Producer, Michael Heller, our Content and Production Teams, which include Kenny Miller, Joe Bettencourt, and Virginia Tran. Elliot Pelsman edits the show and mixes the audio. We'll be back next week. Until then, stay secure, stay vigilant, goodbye for now. [ Music ]
