Scattered Spider's tactics keep evolving. But one thing remains constant: attackers must compromise an identity and fundamentally change its behavior to execute a successful attack. That's the invariant your detection strategy should exploit.
You've seen the headlines. Marks & Spencer. Co-op. Harrods. Caesars. MGM. Half a dozen U.S. insurers. Multiple airlines. Scattered Spider (also tracked as UNC3944, Octo Tempest, and Storm-0875) has carved a path through some of the best-resourced organizations on the planet. These aren't companies that skipped security. They have SIEM platforms. They have endpoint detection. They have MFA. And yet, Scattered Spider walked through their defenses repeatedly.
The 2025 Unit 42 Global Incident Response Report confirms what many of us already know: social engineering now accounts for 36% of all incidents, with 66% of those attacks targeting privileged accounts. In one documented case, attackers escalated from initial access to domain administrator in under 40 minutes.
The question every security leader should be asking isn't just "How do we defend against Scattered Spider?" It's something more uncomfortable: why aren't the tools we already have stopping them?
Source: Unit 42 Global Incident Response Report 2025, Social Engineering Edition
How the Attack Actually Unfolds
Let's walk through what a Scattered Spider attack actually looks like, because the details matter. According to the July 2025 joint advisory from CISA, the FBI, and international partners, their primary entry method hasn't changed much since they emerged in 2022. They call your help desk. They impersonate an employee, using personal details gathered from LinkedIn, prior data breaches, or social media, and request a password reset or MFA re-enrollment. Because many of their operators are native English speakers who do their homework, these calls tend to be convincing enough to pass standard verification checks.
Once they have working credentials, they log in. Typically through Okta, Microsoft Entra ID, or whatever SSO platform the target organization uses. From there, CrowdStrike's 2025 threat research shows the group pivots into connected SaaS applications, searching for network documentation, VPN configurations, and stored credentials that open the door to deeper access. They register new MFA tokens to maintain persistence. They install legitimate remote access tools like TeamViewer or AnyDesk. They create entirely new identities in the environment, sometimes backed by fake social media profiles. And ultimately, they exfiltrate data to cloud storage and deploy ransomware (most recently the DragonForce variant) against VMware ESXi infrastructure.
Here's what's critical to notice: at nearly every stage of this attack chain, the attacker is using valid credentials, legitimate tools, and authorized identity pathways. There's no malware signature to match. No known-bad IP address to block. No exploit to patch.
This is the fundamental insight that should reshape how you think about detection. The attacker doesn't need to do anything "known bad." They just need to behave differently than the person they're pretending to be.
The question isn't how to detect Scattered Spider specifically. It's how to build a detection architecture that catches identity-based attacks regardless of which group is behind them or which tactics they're using this quarter.
The Invariant: Attackers Must Change Behavior
Here's the insight that should anchor your detection strategy: no matter how sophisticated the initial compromise, the attacker eventually has to do something the legitimate user wouldn't do.
They have to access systems the user has never touched. They have to query data outside the user's normal scope. They have to move through the identity graph in ways that diverge from established patterns. The social engineering gets them in the door, but the post-compromise activity is where the behavioral divergence becomes detectable. If you're watching for it.
This is why tactics evolve but the fundamental detection opportunity remains constant. Scattered Spider shifted from MFA fatigue to help desk social engineering to voice-based callback attacks. They'll shift again. But in every case, the compromised identity still has to traverse systems, escalate privileges, and access data in ways that break from baseline behavior.
"Social engineering works not because attackers are sophisticated, but because people still trust too easily." (Unit 42 Global Incident Response Report 2025)
The corollary for defenders: you can't stop all social engineering, but you can detect when a compromised identity starts behaving unlike itself. That's the invariant your detection architecture should exploit.
Why Identity Attacks Are Graph Problems
Here's something that might not be obvious: the reason most security tools struggle with identity-based attacks isn't just a data problem. It's a data structure problem.
Your SIEM stores events as rows in a database. Each row is an isolated fact: "User X logged in at time Y from location Z." To understand relationships, you have to write queries that join these facts together, and you have to know what you're looking for in advance. Want to know if this user's login is suspicious? You can check if the location is new. But what about whether this user has ever accessed the system they're now touching? Or whether that system connects to other systems the user has never been near? Each question requires a new query, and the queries get exponentially more complex as you try to understand multi-hop relationships.
Now think about what an attacker is actually doing. They compromise an identity. That identity has access to certain systems. Those systems contain credentials or trust relationships that lead to other systems. The attacker is traversing a network of relationships, hopping from identity to system to credential to system to data.
A SIEM gives you a timeline of events. But when someone's moving through your environment, you don't just need to know where they've been. You need to know where they can go. That requires a map of relationships, not a log of activities. That map is a graph.
In a graph data structure, relationships are first-class citizens. You don't query for events and then try to piece together connections. The connections are the data. This makes certain questions trivial that would be nearly impossible in a traditional database:
- "What can this compromised account reach?" In a graph, you traverse outward from the identity node and see every connected system, permission, and data store. The blast radius is visible immediately. In a SIEM, you'd need to query access logs, then query what those systems connect to, then query again... and you still might miss implicit trust relationships that don't generate logs.
- "Is this identity accessing systems outside its normal community?" Graph algorithms can detect clusters of resources that typically get accessed together (communities). When an identity suddenly touches nodes in a community it's never been near, that's a structural anomaly, detectable even if the access itself looks legitimate in isolation.
- "How did the attacker get from A to B?" Path analysis in a graph shows you the exact sequence of relationships traversed. In a SIEM, you're correlating timestamps across different log sources and hoping you've identified all the relevant events.
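The blast-radius question from the first bullet above is, concretely, a graph traversal. Here is a minimal sketch using a plain breadth-first search; the access graph, node names, and edge semantics are invented for illustration, not drawn from any real product or environment.

```python
from collections import deque

# Hypothetical access graph: an edge means "can reach" via a login,
# permission, stored credential, or trust relationship. Names are invented.
access_graph = {
    "user:jdoe": ["app:confluence", "app:sharepoint"],
    "app:confluence": ["cred:db-connection-string"],
    "cred:db-connection-string": ["db:production"],
    "app:sharepoint": [],
    "db:production": [],
}

def blast_radius(graph, start):
    """Everything reachable from a compromised identity: one BFS."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen - {start}

print(sorted(blast_radius(access_graph, "user:jdoe")))
# → ['app:confluence', 'app:sharepoint', 'cred:db-connection-string', 'db:production']
```

The point of the sketch is the shape of the query: one traversal answers "what can this account reach?" in a graph, where a SIEM would need a chain of joins across log sources, and would still miss edges (like the stored connection string) that never generate log events.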
This isn't about replacing your SIEM. Your SIEM is still valuable for what it does. It's about recognizing that identity-based attacks are fundamentally traversal problems, and traversal problems need graph-based analysis. You can't see the path an attacker is walking if your data structure doesn't represent paths.
But there's a second problem a graph helps solve: data silos. An attacker's path doesn't stay within one system. It crosses from your IdP to your SaaS apps to your cloud infrastructure to your endpoints. To see that path, you need data from all of those systems normalized into a common model and connected in a single graph. Otherwise you're trying to correlate timestamps across five different consoles and hoping you found all the relevant events.
The Silo Problem: Why Your Tools Can't See the Attack
Here's something that doesn't get talked about enough: most security stacks are architecturally incapable of detecting identity-based attacks because of how they're structured.
Your SIEM ingests logs from endpoints, firewalls, network devices, and (maybe) your identity provider. Your EDR watches processes and file activity on managed devices. Your network monitoring tracks traffic patterns and connections.
But who's watching the identity itself?
Not the authentication event. Your IdP logs that. The question is whether anyone is analyzing the behavioral context around that identity in real time. Is this login consistent with how this identity normally behaves? Does this MFA re-enrollment fit the pattern of a legitimate device change, or does it look like a socially engineered reset? When this identity starts accessing SaaS applications it's never touched before, is anything correlating that anomaly with the password reset that happened 20 minutes earlier?
We keep building better bouncers to check IDs at the front door, but we leave every window and side entrance wide open. Authentication gets all the investment. What happens after authentication gets almost none.
In most environments, the honest answer to those questions is no. IAM tools manage access. They provision accounts, enforce policies, handle authentication. They're not designed to detect abuse. SIEM platforms could theoretically correlate identity events, but in practice, identity telemetry is often underutilized, poorly contextualized, or buried in a flood of other log data that nobody has bandwidth to investigate.
Your organization exists as an intricate web of interconnected systems, trust relationships, and identity pathways spanning on-premises infrastructure, cloud platforms, and SaaS applications. But your security tools? They're siloed. One console for authentication events. Another for cloud activity. Your SaaS security posture lives somewhere else entirely. Even well-staffed security teams struggle to correlate across all of these, and most mid-market teams are already stretched thin covering everything else on their plate.
"When identity security operates in isolation, with separate teams, tools, and processes for each domain, organizations miss the critical attack paths that adversaries exploit most effectively: the connections between these systems." (Dark Reading: Identity Security Silos)
Every federated login, service account, and cross-domain permission creates a trust relationship that extends your identity attack surface. But traditional security monitoring captures events within each domain and struggles to correlate the cross-domain narrative that reveals actual attack progression.
Here's what an actual attack path looks like: An attacker social-engineers their way into an Okta account. They log in and start browsing the apps that user has access to: Confluence, SharePoint, internal wikis. They search for "VPN," "credentials," "service account," "API key." They find a page where someone documented how to connect to the production database, complete with connection strings. Now they're in your infrastructure with credentials that aren't tied to the compromised identity at all. They enumerate the filesystem, find more credentials, install a remote access tool for persistence, maybe even join a machine to their own infrastructure. By the time they're exfiltrating data, the trail spans five different systems and three different identities. No single tool saw more than one piece of it.
A user might have minimal permissions in Active Directory, yet possess extensive access in connected cloud resources through group memberships, federated roles, or inherited permissions that aren't visible from any single management console. Traditional privilege reviews examine permissions within individual systems and miss the cumulative effect of privileges across domains.
Why You Need All Three Detection Approaches
There's an ongoing debate in detection engineering about pattern matching versus behavioral analytics versus graph-based approaches. The answer isn't either/or. It's all three, working together. Each approach catches what the others miss.
The Static Rules Problem
"But we have custom detection rules," you might say. And you probably do. Many mature SOC teams have written rules to catch specific suspicious patterns: multiple failed MFA attempts followed by a success (a telltale sign of MFA fatigue), or password resets for privileged accounts outside business hours.
These rules help, and they're better than pure signature matching. But they have a critical weakness: they're static. They detect specific sequences of events in a specific order, and Scattered Spider knows this.
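To make the weakness concrete, here is a toy version of such a static rule: alert when several failed MFA prompts are followed by a success inside a short window. The event schema, thresholds, and usernames are invented for the sketch.

```python
# Static MFA-fatigue rule: alert when >= 3 failed MFA prompts are followed
# by a success within a 10-minute window. Event schema is invented.
WINDOW_SECONDS = 600

def mfa_fatigue_rule(events):
    """events: list of (timestamp_seconds, user, outcome), sorted by time."""
    alerts = []
    for i, (ts, user, outcome) in enumerate(events):
        if outcome != "success":
            continue
        failures = [e for e in events[:i]
                    if e[1] == user and e[2] == "failure"
                    and ts - e[0] <= WINDOW_SECONDS]
        if len(failures) >= 3:
            alerts.append((user, ts))
    return alerts

events = [(0, "jdoe", "failure"), (60, "jdoe", "failure"),
          (120, "jdoe", "failure"), (180, "jdoe", "success"),
          # Help-desk reset variant: one clean success, zero failures.
          (300, "asmith", "success")]
print(mfa_fatigue_rule(events))  # → [('jdoe', 180)]
```

The rule catches the MFA-fatigue sequence it was written for and is completely blind to the help-desk-reset variant, which produces a single clean login. That's the static-rule problem in five lines: the detection encodes yesterday's sequence, and the attacker simply stops generating it.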
The CISA advisory explicitly notes that while some TTPs remain consistent, Scattered Spider threat actors "often change TTPs to remain undetected." Push Security's 2025 research documented the group shifting from hyphenated phishing domains to subdomain-based approaches specifically to evade automated detection rules. CrowdStrike reported that the group refines its help desk social engineering playbook regularly, incorporating stolen PII like Social Security numbers and birthdates to beat updated verification procedures.
This is the cat-and-mouse game that static rules are always losing. You write a rule to detect pattern A. The attacker shifts to pattern B. You update the rule. They shift to pattern C. Every detection rule you write is, by definition, a response to the last attack, not the next one.
It's not that your SOC team isn't skilled. It's that the detection model itself is reactive. It can only catch what it's already been taught to look for.
Pattern Matching
What it catches: Known TTPs, documented attack sequences, indicators of compromise from threat intelligence.
What it misses: Novel techniques, living-off-the-land attacks using legitimate tools, anything that doesn't match a pre-written rule.
Behavioral Analytics (UEBA)
What it catches: Deviations from established baselines, anomalous access patterns, activity outside normal working hours or locations.
What it misses: Attacks that stay within behavioral norms, slow-and-low activity that doesn't trigger thresholds.
Graph Intelligence
What it catches: Identities touching resources outside their normal community, unusual traversal paths through systems, immediate blast radius visibility when compromise is suspected.
What it misses: Attacks that follow legitimate relationship paths, activity within the identity's normal neighborhood of connected systems.
The Unit 42 report found that organizations "missed, deprioritized, or dismissed" malicious logins and privilege escalations in case after case. Why? Because each individual signal looked ambiguous in isolation. A login from a new location might be travel. An admin group change might be legitimate. A new OAuth token might be a developer testing something.
The attack becomes obvious only when you correlate across all three detection approaches: the pattern (credential reset followed by new device), the behavioral anomaly (accessing systems outside normal scope), and the graph relationship (suddenly traversing connections to high-value targets the identity has never touched).
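A toy illustration of that correlation step: escalate only when one identity accumulates signals of all three kinds within a window, so a lone anomaly (travel, a legitimate admin change) never fires on its own. The signal shapes, window, and names are invented for the sketch.

```python
from collections import defaultdict

# Correlate across the three detection approaches: escalate only when one
# identity shows pattern, behavioral, AND graph signals inside a window.
REQUIRED_KINDS = {"pattern", "behavior", "graph"}

def correlate(signals, window=3600):
    """signals: list of (timestamp, identity, kind).
    Returns identities whose signals span all three approaches."""
    by_identity = defaultdict(list)
    for ts, identity, kind in signals:
        by_identity[identity].append((ts, kind))
    cases = []
    for identity, sigs in by_identity.items():
        sigs.sort()
        for i, (ts, _) in enumerate(sigs):
            kinds = {k for t, k in sigs[i:] if t - ts <= window}
            if REQUIRED_KINDS <= kinds:
                cases.append(identity)
                break
    return cases

signals = [
    (0,    "jdoe",   "pattern"),   # credential reset then new device
    (1200, "jdoe",   "behavior"),  # access outside normal scope
    (1800, "jdoe",   "graph"),     # traversal toward high-value targets
    (0,    "asmith", "behavior"),  # lone anomaly: probably just travel
]
print(correlate(signals))  # → ['jdoe']
```

Each of jdoe's signals is ambiguous alone; requiring all three kinds to co-occur is what turns three shrugged-off alerts into one obvious case.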
The UEBA Trap: Why Most Implementations Fail
If behavioral analytics is so valuable, why do so few organizations deploy it successfully? The Unit 42 report notes that "few organizations deployed ITDR or UEBA" effectively. The problem isn't the concept. It's the implementation.
Poorly implemented behavioral analytics will drown your already underwater SOC in waves of low-quality, low-efficacy alerts. Every time someone travels, works late, or accesses a new system for a legitimate reason, you'll generate noise. Multiply that across thousands of identities and you've made your detection problem worse, not better.
We've seen this play out repeatedly. There are three failure modes that kill most UEBA deployments:
1. Alerting on Anomalies Without Context
An anomaly is not an incident. "User accessed a system they've never accessed before" is interesting. "User accessed a system they've never accessed before, 20 minutes after their MFA was reset via help desk, from a device we've never seen, and they're now querying admin group membership" is actionable.
UEBA that alerts on individual anomalies without correlating them into coherent narratives just creates more triage work. You need to bring related signals together and present the entire case to the SOC, not a series of disconnected "something unusual happened" notifications.
2. Stringing Together Detections to Drive Scores
Some platforms try to solve the signal-to-noise problem by combining weak signals into aggregate "risk scores." The theory is that enough low-confidence signals add up to high-confidence detection.
In practice, this often creates phantom incidents. Five unrelated anomalies don't become suspicious just because they happened to the same identity in the same week. If the signals don't form a coherent attack narrative, if they're not connected by relationship, timing, and logical attack progression, aggregating them into a score just produces high-scoring noise.
3. Generating Alerts from Siloed Data
This is where many UEBA implementations fail before they even get started. Behavioral baselines built from partial data produce partial insights. If your UEBA only sees authentication events but not SaaS access, it can't detect when a compromised identity pivots from Okta to Salesforce to AWS.
"Traditional security monitoring captures events within each domain, but struggles to correlate the cross-domain narrative that reveals actual attack progression... Attackers rely on legitimate tools to move laterally without triggering cross-domain alerts." (Dark Reading: Identity Security Silos)
Data silos must be broken down before generating alerts, not after. Correlating alerts from siloed systems after the fact is archaeology. Correlating signals from unified identity data in real-time is detection.
Building Detection That Actually Works
Given these failure modes, what does effective identity threat detection actually look like? Here's what we've learned works:
Start with Unified Identity Data
Before you can detect identity-based attacks, you need to see the identity comprehensively. That means unifying data from your IdP, your SaaS applications, your cloud infrastructure, your endpoints, and any other system where identities operate. The graph of relationships between these systems is where attacks actually happen.
Build Baselines That Capture Relationships, Not Just Events
Traditional baselines track metrics: how often does this user log in? From which locations? At what times? These are useful, but they're measuring activity in isolation.
Graph-based baselines measure something different: where does this identity normally live in the relationship structure? Which cluster of resources does this identity typically access? How many hops away is this identity from your most sensitive systems? When this identity accesses something new, is it adjacent to resources they already touch, or is it in a completely different part of the environment?
When an attacker takes over an identity and starts moving toward high-value targets, they're not just generating anomalous events. They're taking a path through your environment that the legitimate user has never walked. That path is visible in the graph, even when each individual step looks legitimate in the logs.
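One structural-baseline question from above, "is the new resource adjacent to what this identity already touches, or in a completely different part of the environment?", reduces to simple neighborhood arithmetic on the graph. The resource graph and names below are invented for illustration.

```python
# Structural-novelty check: is a newly accessed resource inside or bordering
# the identity's usual neighborhood? Graph and names are invented.
resource_links = {   # undirected "commonly accessed together" edges
    "confluence": {"sharepoint", "jira"},
    "sharepoint": {"confluence"},
    "jira": {"confluence"},
    "prod-db": {"admin-console"},
    "admin-console": {"prod-db"},
}

def structurally_novel(usual_resources, new_resource):
    """True when the new resource neither belongs to, nor borders,
    the identity's established neighborhood."""
    if new_resource in usual_resources:
        return False
    neighborhood = set().union(*(resource_links.get(r, set())
                                 for r in usual_resources))
    return new_resource not in neighborhood

usual = {"confluence", "sharepoint"}
print(structurally_novel(usual, "jira"))     # → False: ordinary drift
print(structurally_novel(usual, "prod-db"))  # → True: different community
```

A first touch on "jira" is adjacent to the existing neighborhood and reads as ordinary drift; a first touch on "prod-db" lands in a disconnected community and is exactly the structural anomaly worth raising, even though both look like plain "new resource accessed" events in a log.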
Correlate Into Cases, Not Scores
When signals indicate a potential compromise, don't just increment a risk score. Build a case. What's the attack hypothesis? What evidence supports it? What's the timeline? What's the potential blast radius based on what this identity has access to?
Present your SOC with an investigation, not a queue of disconnected alerts. The difference between "three medium alerts for user X" and "suspected credential compromise: help desk reset → new device → admin enumeration → evidence timeline → affected systems → recommended response" is the difference between triage work and actionable intelligence.
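As a sketch of what "a case, not a score" might look like as a data structure: one object carrying the hypothesis, the evidence timeline, the blast radius, and a recommendation. The field names and example values are invented, not any product's actual schema.

```python
from dataclasses import dataclass, field

# A case groups related signals into one investigation instead of
# incrementing a risk score. Field names are invented for illustration.
@dataclass
class Case:
    identity: str
    hypothesis: str
    evidence: list = field(default_factory=list)    # (timestamp, description)
    blast_radius: set = field(default_factory=set)  # reachable resources
    recommendation: str = ""

    def summary(self):
        timeline = "; ".join(d for _, d in sorted(self.evidence))
        return (f"{self.hypothesis} ({self.identity}): {timeline}. "
                f"Reaches {len(self.blast_radius)} resources. "
                f"Recommended: {self.recommendation}")

case = Case(
    identity="jdoe",
    hypothesis="Suspected credential compromise",
    evidence=[(0, "help desk MFA reset"), (1200, "login from new device"),
              (1800, "admin group enumeration")],
    blast_radius={"confluence", "sharepoint", "prod-db"},
    recommendation="terminate sessions, require step-up auth",
)
print(case.summary())
```

Everything an analyst needs to decide travels together: one summary line replaces three medium-severity alerts that would otherwise be triaged separately.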
Implement Feedback Loops on Detection Efficacy
Every detection rule should be measured. What's the true positive rate? What's the false positive rate? How does it perform across different identity populations? Detection engineering isn't a one-time activity. It's a continuous process of tuning, suppressing, and refining based on real-world performance.
Rules that consistently produce low-quality alerts need to be flagged for review and either tuned or suppressed. Suppression isn't failure. It's hygiene. A lean detection portfolio that your SOC can actually work through is worth more than a comprehensive one that gets ignored.
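The measurement loop can be as simple as per-rule precision tracking with a floor below which a rule is flagged for tuning or suppression. The thresholds and rule names below are arbitrary examples, not recommended values.

```python
# Per-rule efficacy tracking: flag rules whose precision falls below a
# threshold. The thresholds here are arbitrary example values.
def review_rules(outcomes, min_precision=0.25, min_alerts=20):
    """outcomes: {rule_name: (true_positives, false_positives)}.
    Returns rules that should be reviewed for tuning or suppression."""
    flagged = []
    for rule, (tp, fp) in outcomes.items():
        total = tp + fp
        if total >= min_alerts and tp / total < min_precision:
            flagged.append(rule)
    return flagged

outcomes = {
    "impossible-travel":    (4, 96),   # 4% precision: review it
    "mfa-reset-new-device": (18, 12),  # 60% precision: keep
    "off-hours-admin":      (1, 9),    # too few alerts to judge yet
}
print(review_rules(outcomes))  # → ['impossible-travel']
```

Note the `min_alerts` guard: a rule that has only fired a handful of times hasn't earned a verdict yet, so it's excluded rather than prematurely suppressed.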
| Detection Anti-Pattern | Better Approach |
|---|---|
| Alert on every behavioral anomaly | Correlate anomalies into attack narratives before alerting |
| Aggregate weak signals into risk scores | Build coherent cases with evidence timelines |
| Correlate alerts from siloed systems after the fact | Unify identity data before generating detections |
| Write detection rules and forget them | Continuously measure and tune based on efficacy metrics |
| One-size-fits-all detection thresholds | Baselines tailored to identity populations and behaviors |
The Speed Problem
Even with solid detection, you still have a speed problem. The Unit 42 report documents cases where attackers moved from initial access to data exfiltration in 90 minutes. Scattered Spider campaigns have completed entire kill chains in under 72 hours.
This is why detection and response have to work together. Detecting a compromise on Tuesday and responding on Thursday means you're doing incident response, not threat prevention. The detection system needs to not only identify potential compromises but predict what's likely to happen next and recommend immediate containment actions.
If behavioral drift and early TTP signals suggest a Scattered Spider-style attack in progress, your detection should predict: privilege escalation is likely within the next 30-120 minutes based on observed campaign patterns. Lock down admin group changes for this identity. Require step-up authentication for sensitive resources. Alert the SOC with a complete case and recommended response actions.
That's not just detection. It's actionable intelligence delivered fast enough to matter.
Practical Steps You Can Take Now
While you're evaluating your detection approach, there are steps your team can take today to harden your environment against Scattered Spider's known tactics.
Upgrade your help desk verification. If your help desk can reset passwords or MFA based on a phone call and a few security questions, you have a vulnerability Scattered Spider will exploit. Require multi-party approval for privileged account changes. Implement callback verification to numbers on file. Train staff to escalate any request that feels off, and empower them to say no without fear of pushback.
Prioritize phishing-resistant MFA. CISA explicitly recommends FIDO2 security keys or certificate-based authentication as the baseline for defending against these attacks. SMS codes and push notifications are both vulnerable to Scattered Spider's techniques. Start with your most privileged accounts and expand from there.
Audit your identity provider logs. Many organizations collect Okta or Entra ID logs but don't actively monitor them for suspicious patterns. Begin looking for anomalous MFA registrations, unusual login locations, unexpected SaaS application access, and federation configuration changes. Even basic monitoring here closes a significant visibility gap.
Control remote access tooling. Implement application allowlisting to prevent unauthorized remote access tools from running in your environment. Audit regularly for tools like AnyDesk, TeamViewer, ScreenConnect, and Tactical RMM. These are Scattered Spider's go-to persistence mechanisms.
Plan for identity-based incidents. If your incident response plan focuses primarily on malware containment, update it. Include procedures for revoking compromised credentials across all connected systems, force-terminating active sessions, auditing recent MFA changes, and reviewing identity provider configurations. All under time pressure.
How Auth Sentry Implements These Principles
We built Auth Sentry because we've lived these problems. Our team comes from Duo, Censys, Expel, Rapid7, and the MDR world. We've built detection systems, responded to incidents, and watched organizations struggle with the exact problems this article describes.
Auth Sentry connects to your identity providers (Okta, Microsoft Entra, Google Workspace, Duo) and SaaS applications via API. No agents to deploy. No infrastructure to manage. Most organizations are ingesting data within an hour of connecting their first source. From there, we build a unified identity graph that shows every identity, what it connects to, and how it normally behaves.
When something looks wrong, Auth Sentry doesn't just fire an alert. It correlates related signals across systems, maps the potential blast radius, assembles an evidence timeline, and presents your team with a complete investigation. The difference between "suspicious login detected" and "suspected credential compromise: here's the evidence, here's what this identity can reach, here's what we recommend."
Our Identity Rx engine continuously analyzes detection performance across your environment. It suggests custom detections based on patterns specific to your organization and flags rules that are producing too many false positives for tuning or suppression. Detection engineering as a continuous feedback loop, not a one-time configuration.
The result: fewer, higher-confidence investigations that your team can actually act on.
See It For Yourself
Connect your identity sources and see what graph-based detection finds in your environment.
Get Started

Sources
- Unit 42 Global Incident Response Report 2025: Social Engineering Edition - Palo Alto Networks
- Scattered Spider Joint Cybersecurity Advisory AA23-320A - CISA, FBI, NCSC-UK, ACSC, RCMP, CCCS (updated July 2025)
- SCATTERED SPIDER Escalates Attacks Across Industries - CrowdStrike, July 2025
- Identity Security Silos: An Attacker's Best Ally - Dark Reading
- How Scattered Spider TTPs Are Evolving in 2025 - Push Security, November 2025
- Scattered Spider, Group G1015 - MITRE ATT&CK