The identity security industry has a dirty secret: we've optimized for the wrong metric. More alerts don't mean more security. They mean more noise, more fatigue, and more breaches slipping through the cracks while your team drowns in false positives.
It's 2:47 PM on a Tuesday. Your SIEM just fired its 412th identity alert of the week. "Unusual login detected for james.hernandez14." Your analyst opens it. Checks the IP. Cross-references with the IdP logs. Opens a ticket. Starts a timeline.
Forty-five minutes later, they've determined... it was probably fine. James was traveling. Different hotel WiFi. Nothing to escalate.
Meanwhile, three tabs over, alerts #413 through #847 are still sitting in the queue. Each one a discrete signal. Each one requiring manual investigation. Each one isolated from every other signal, as if attackers were polite enough to only do one suspicious thing at a time.
40% of security alerts are never investigated at all.
Source: AI SOC Market Landscape 2025
That statistic should terrify you. But here's the one that should make you angry: 61% of security teams admitted to ignoring alerts that later proved critical. Not because they didn't care. Because the alert model itself is architecturally broken for identity threats.
The Alert Industrial Complex
Here's how the identity security industry ended up here: vendors discovered that alert volume is an easy metric to sell. "Look at all the things we detected! You need us!" It's a compelling pitch. It's also a trap.
More alerts create the illusion of comprehensive coverage. They give vendors a big number to point to during renewals. "See all the things we told you about? Imagine if we weren't watching." What they don't mention is that alert volume has become inversely correlated with actionable intelligence.
Sources: AI SOC Market Landscape 2025; SANS 2025 SOC Survey
Do the math: 960 alerts per day, 70 minutes per investigation, and a team that's already understaffed. That's 1,120 hours of investigation work per day. You don't have 1,120 hours. It's simple arithmetic, and the math ain't mathin'.
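The arithmetic above, spelled out. This is a sketch, not a staffing model: the 8-hour shift is our assumption, and it ignores everything an analyst does besides triage.

```python
ALERTS_PER_DAY = 960               # AI SOC Market Landscape 2025
MINUTES_PER_INVESTIGATION = 70     # average manual triage time, same source

# Total triage workload generated every single day.
hours_per_day = ALERTS_PER_DAY * MINUTES_PER_INVESTIGATION / 60
print(hours_per_day)               # 1120.0 hours of investigation work per day

# Assumed 8-hour shifts; this is the headcount needed to do nothing but triage.
analysts_needed = hours_per_day / 8
print(analysts_needed)             # 140.0 full-time analysts
```

No SOC has 140 analysts dedicated to identity alert triage. That gap is where alerts go to die.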
"Almost 90% of SOCs are overwhelmed by backlogs and false positives, while 80% of analysts report feeling consistently behind in their work." — Osterman Research Report, 2025
Problem #1: The Volume Delusion
The first problem with current Identity Threat Detection and Response (ITDR) tools is the "more is better" philosophy. Vendors toss identity alerts at your SOC without ever considering the other data in your environment that might tell a different—or more complete—story.
Every "unusual login" alert is treated as an isolated incident. But identity attacks aren't isolated incidents. They're slow-burn campaigns that unfold across days, weeks, sometimes months. An attacker using stolen credentials doesn't do one suspicious thing. They do dozens of individually normal things that only become suspicious in combination.
Vendors love to cite detection counts because it's a defensible number during renewals. "We generated 50,000 alerts this quarter!" But detection count is a vanity metric. The metric that matters is: how many breaches did you prevent? How many investigations led to actual containment? That number is much smaller—and much harder to sell.
Here's what your team actually needs: not more alerts, but better time-to-decide. A tool that can eliminate false positives without sacrificing true positives. A system that pre-investigates before your analyst even opens the ticket.
Problem #2: The Silo Syndrome
The second problem is structural. Every identity-adjacent tool you've deployed creates its own data silo. Your IdP sees authentication events. Your CASB sees SaaS access. Your EDR sees endpoint activity. Your cloud security tool sees AWS/Azure events. None of them talk to each other in any meaningful way.
Ask any SOC analyst how many times they've investigated a suspected breach only to discover it was a complete nothing—but only after manually correlating data across four different tools. That's not investigation. That's archaeology.
Now consider what a smarter approach would look like: an ITDR tool that could automatically check whether the "suspicious login" IP matches the IP your EDR is reporting for that user's device. If it matches, that's not an attack—that's James logging in from his laptop at the hotel. Close the alert. Move on.
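That cross-check is one line of logic once the data sources are connected. A minimal sketch, with a hypothetical function name and example IPs of our own invention:

```python
def expected_login(login_ip: str, edr_device_ips: set[str]) -> bool:
    """A login is 'expected' when the suspicious source IP matches an IP
    the EDR already reports for one of the user's managed devices."""
    return login_ip in edr_device_ips

# James at the hotel: the EDR agent on his laptop reports the same public
# IP the IdP flagged, so the alert can be closed automatically.
expected_login("203.0.113.7", {"203.0.113.7", "10.1.4.22"})   # True
```

The check itself is trivial. The hard part, and the part most tools skip, is having the IdP and EDR data in the same place at investigation time.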
"The market hasn't caught up with the need. Big players have 'bolted on' identity but miss huge swaths of attacks. Recent breaches like Scattered Spider and the Salesforce/Drift OAuth compromise prove the problem is only getting worse." — Security practitioner feedback, Auth Sentry customer discovery
The Non-Human Identity Time Bomb
Here's a problem almost no ITDR tool properly addresses: non-human identities now outnumber human identities by 10:1 or more in most organizations. Service accounts, API keys, OAuth tokens, machine identities—all with the same access rights as humans, but with almost none of the behavioral monitoring.
When a service account that normally makes 50 API calls per hour suddenly makes 5,000, is that a legitimate workload spike or an attacker who compromised the token? When an OAuth token for a Slack integration starts accessing financial records in your ERP system, is that within scope or lateral movement? Current tools don't know because they barely track NHIs at all, let alone model their normal behavior.
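Modeling NHI behavior doesn't require exotic machinery to start. A minimal sketch of a per-account volume baseline using a simple z-score, with illustrative numbers and a threshold we chose arbitrarily:

```python
from statistics import mean, stdev

def volume_anomaly(baseline_hourly: list[int], current: int,
                   z_threshold: float = 4.0) -> bool:
    """Flag the current hour's call volume when it deviates from this
    account's own baseline by more than z_threshold standard deviations."""
    mu, sigma = mean(baseline_hourly), stdev(baseline_hourly)
    if sigma == 0:
        return current != mu
    return (current - mu) / sigma > z_threshold

baseline = [48, 52, 50, 47, 55, 49, 51, 50]   # ~50 API calls/hour is normal
volume_anomaly(baseline, 58)      # False: a plausible workload fluctuation
volume_anomaly(baseline, 5_000)   # True: almost certainly a compromised token
```

A real system would baseline per account, per hour-of-day, and per action type, but the principle holds: the anomaly is only visible relative to that identity's own history.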
The Drift/Salesforce breach exploited exactly this blind spot: compromised OAuth tokens performing authorized actions at anomalous scale. No failed logins. No unusual geolocation. Just a behavioral deviation that most tools never noticed.
These architectural gaps have real-world consequences. Let's look at what happens when attackers exploit them.
The MGM Resorts Wake-Up Call
In September 2023, the Scattered Spider threat group demonstrated exactly how devastating identity-based attacks have become. They didn't exploit a zero-day. They didn't deploy sophisticated malware. They called MGM's IT help desk, impersonated an employee found on LinkedIn, and talked their way into admin credentials for Okta and Azure AD.
The result: $100 million in losses, weeks of operational disruption, and a stark reminder that identity is now the primary attack surface.
Sources: MGM SEC 8-K Filing; CISA Advisory AA23-320A; ALPHV public statement
The attack chain traversed multiple systems: LinkedIn (reconnaissance) to help desk (social engineering) to Okta (identity) to Azure AD (cloud) to ESXi servers (infrastructure). A siloed approach sees each of these as separate events in separate tools. An integrated approach sees it as one attack path.
Problem #3: Static Rules in a Dynamic Threat Landscape
The third problem is that current ITDR tools rely too heavily on static, rule-based detection. "If login from country X, alert." "If failed MFA attempts > 5, alert." "If new device, alert."
These rules catch the obvious stuff. They miss everything else.
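To see why, here are those three rules written out literally, as a hedged sketch with hypothetical event fields. An attacker holding valid credentials sails past every one:

```python
ALLOWED_COUNTRIES = {"US", "CA"}   # illustrative allow-list

def static_rules_fire(event: dict) -> bool:
    """The three rules above, verbatim: each keys on one obvious signal."""
    return (
        event.get("country") not in ALLOWED_COUNTRIES   # login from country X
        or event.get("failed_mfa_attempts", 0) > 5      # failed MFA > 5
        or event.get("new_device", False)               # new device
    )

# Stolen, valid session: regional proxy, zero MFA failures, a device the
# IdP has already seen. Every rule passes silently.
static_rules_fire({"country": "US", "failed_mfa_attempts": 0,
                   "new_device": False})                # False
```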
We've seen this movie before. In the 2000s, antivirus vendors relied on signature-based detection—comparing files against databases of known malware. It worked until attackers started using polymorphic code that changed its signature with every infection. The industry had to evolve: first to heuristic analysis, then to behavioral detection, and finally to EDR systems that monitor endpoints for anomalous behavior patterns.
Identity security is having its antivirus moment right now.
A Note on AI: It's Not Magic
Every vendor is slapping "AI-powered" on their marketing materials right now. Be skeptical. AI doesn't magically solve the identity security problem. In many ways, it makes the problem more complicated.
AI without rigorous foundations is just faster pattern matching on bad data. It can generate false positives at machine speed. It can hallucinate threats. It can miss attacks because it was trained on yesterday's patterns. The vendors rushing to add "AI" to their product names aren't necessarily solving your problems—they're solving their marketing problems.
What actually works is boring: solid research into how identity attacks unfold, mathematical models grounded in graph theory and statistical anomaly detection, and relentless validation against real-world attack data. AI is useful when it executes those models at scale—not when it's a buzzword on a slide deck.
Before you buy, make any vendor answer these questions:
- What specific mathematical models underpin your detection? (Graph theory? Statistical baselines? Sequence analysis?)
- How do you validate detections against real-world attack data?
- What's your false positive rate, and how do you measure it?
- Can you explain why a specific alert fired, in terms a human can verify?
If the answer is "our AI figures it out," that's not a security tool. That's a black box. And black boxes don't build trust when you're trying to decide whether to wake up your incident response team at 3 AM.
Auth Sentry uses AI extensively—to correlate signals, score investigation confidence, and assemble evidence automatically. But the AI serves a grounded detection model, not the other way around. Our AI is the tool; our graph theory and statistical rigor are the foundation.
Static detection works when attacks follow predictable patterns. Modern identity attacks don't. An attacker using a stolen OAuth token isn't triggering failed login attempts—they already have valid credentials. They aren't logging in from an unusual country—they're using a proxy in your region. They aren't escalating privileges through official channels—they're traversing legitimate permission paths that already exist.
| Static Rule Detection | Behavioral Anomaly Detection |
|---|---|
| Catches known attack patterns | Catches deviations from normal behavior |
| Easy to bypass with valid credentials | Detects credential abuse regardless of validity |
| High false positive rate for edge cases | Learns your environment's actual patterns |
| Requires constant rule maintenance | Continuously adapts to changing behavior |
| Misses the Drift/Salesforce OAuth attack | Catches volume anomalies in token behavior |
To be clear: you need both types of detection. Static rules catch known threats reliably. Behavioral detection catches the novel attacks that static rules miss. A hybrid approach—pattern matching, graph-based anomaly detection, and predictive intelligence working together—is the only way to cover the full threat spectrum.
The Real Cost of Getting This Wrong
Let's talk numbers. Not vendor marketing numbers. Real breach data.
Breaches involving stolen or compromised credentials take an average of 292 days to identify and contain.
Source: IBM Cost of a Data Breach Report 2024
That's nearly 10 months of undetected access. Your alert queue was generating hundreds of notifications per day the entire time. The attacker was in your environment the entire time. And your team was triaging alerts that led nowhere the entire time.
Sources: Verizon DBIR 2025; IBM Cost of a Data Breach 2024; Check Point 2025
The Verizon DBIR found that 38% of breaches started with identity-based attacks—stolen credentials (22%) or phishing (16%)—making identity the #1 initial access category. But credentials don't just open the door; they're involved throughout the attack. When you add up initial access, lateral movement, and privilege escalation, credentials touch the majority of breaches.
The speed tax is real: breaches contained within 200 days cost about $1 million less than those that took longer—a 23% cost reduction. Every day you shave off dwell time is money saved, data protected, and reputation preserved.
If 40% of all security alerts go uninvestigated, how many of those are identity signals that should have been the first warning? The cost isn't wasted analyst salary. It's the identity alerts getting auto-closed as "informational." The OAuth token anomaly nobody had time to check. The credential abuse campaign that's been running for months.
You're not paying for triage. You're paying for the breaches you don't catch.
Here's what makes this worse: with ransomware now present in 44% of breaches, a growing number of organizations only discover they've been compromised because the attacker tells them—usually via a ransom note. The "detection" is really just the attacker announcing themselves to get paid. As the Verizon DBIR researchers noted: "How long would these organizations have been compromised before they found out had not the attackers notified them?"
That's not detection. That's notification. And it means the signals were there the entire time. The correlation wasn't.
Problem #4: The Waiting Game
"Defense in depth" is the industry mantra. Layer your controls. Assume breach. Have detection at every tier. It sounds comprehensive. In practice, it means everyone's waiting for someone else to catch the problem.
Here's the uncomfortable truth about how most SOC teams and MDR providers actually operate: they're configured to spring into action after exploitation has already occurred. They're watching for the critical alerts—the ransomware deployment, the data exfiltration, the lateral movement to crown jewels. The signals that say "this is definitely bad."
By the time those signals fire, your data is already being moved offsite. The attacker has already achieved their objective. You're not preventing a breach anymore. You're managing an incident.
This isn't a criticism of SOC teams—they're doing exactly what they're equipped to do. The tools they have generate so much noise at the early stages that the only practical approach is to wait for high-confidence signals later in the kill chain. But "high confidence at stage 7" is worse than "actionable intelligence at stage 2."
We recently worked with an organization that suffered an account compromise. When we reviewed the logs, their identity provider had generated exactly one alert. Severity? Informational. Not a warning. Not a critical. An informational notification—the kind that gets filtered out, auto-closed, or buried under hundreds of other "FYI" events. The IdP saw something. It just didn't think it was worth making noise about. — Auth Sentry customer engagement, 2026
That's the gap. The tool detected the compromise. It just didn't understand it. Without behavioral context, without correlation to related signals, without understanding of what that account connects to—it was just another data point in a sea of informational alerts. The kind nobody reads.
Managed Detection and Response providers face the same constraint. They're staffed to investigate confirmed incidents, not to proactively hunt through thousands of low-confidence early-stage alerts across hundreds of customer environments. The economics don't work. So they wait for the escalation triggers—which means they're responding to breaches, not preventing them.
The identity attack lifecycle looks like this: credential theft → initial access → persistence (often via OAuth tokens or service accounts) → reconnaissance of internal systems → privilege escalation → data access → exfiltration. At every stage before exfiltration, the attacker is using valid credentials to perform authorized actions. Each individual action looks legitimate. The attack only becomes obvious when it's too late.
Tools that can recognize attack patterns as they form—at stage 2 or 3 of the kill chain, not stage 6 or 7—could save your bacon. Predictive intelligence that says "this sequence of individually-normal actions matches the early stages of a credential abuse campaign" is worth infinitely more than a high-confidence alert that says "data exfiltration in progress."
The first alert tells you to investigate. The second tells you to call your lawyer.
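One way to act early is sequence matching against the lifecycle described above. This is a toy sketch: the stage names and the two-stage threshold are our illustrative assumptions, not a production signature.

```python
# Early stages of the credential-abuse lifecycle described above.
EARLY_CAMPAIGN = ["credential_theft", "initial_access", "persistence", "recon"]

def forming_campaign(observed_stages: list[str], min_match: int = 2) -> bool:
    """Act when an identity's individually-normal actions line up with the
    first stages of the lifecycle, instead of waiting for exfiltration."""
    matched = sum(1 for stage in EARLY_CAMPAIGN if stage in observed_stages)
    return matched >= min_match

forming_campaign(["initial_access"])                  # False: one weak signal
forming_campaign(["initial_access", "persistence"])   # True: investigate now
```

Each observed stage is low-confidence on its own. It's the accumulation across stages, tracked per identity, that justifies acting at stage 2 or 3.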
From ITDR 1.0 to ITDR 2.0
The identity security industry is at an inflection point. We're moving from what we call ITDR 1.0—which optimized for alert generation—to ITDR 2.0—which optimizes for threat prevention and intelligent response.
ITDR 1.0: The Alert Era (2020-2025)
Detection-focused. Success measured by alert volume. Every anomaly becomes a notification. Human analysts responsible for triage, correlation, and investigation. Silos between identity, endpoint, and cloud tools. Static rules dominate. Behavioral detection is a checkbox feature, not a core architecture.
The Transition: Where We Are Now
Organizations drowning in alerts. SOC burnout at all-time highs. Major breaches demonstrating the failure of alert-centric approaches. Early adopters experimenting with AI-assisted investigation and graph-based correlation.
ITDR 2.0: The Prevention Era (2026+)
Investigation-first architecture. AI agents that automatically correlate related signals, gather evidence, and assemble investigations before humans engage. Graph-based understanding of identity relationships.
Predictive intelligence. Recognizing attack patterns as they form—at stage 2 or 3 of the kill chain, not stage 7. Sequence analysis that identifies credential abuse campaigns before they reach exploitation. Acting on forming threats, not confirmed damage.
Prevention, not just detection. The goal isn't to generate a high-confidence alert after data exfiltration begins. It's to stop the attack before it reaches that stage. Automated containment. Session revocation. Conditional access enforcement. Response that happens in minutes, not days.
The Key Shift: Investigations Over Alerts
Here's the fundamental insight that separates ITDR 2.0 from what came before: triaging single alerts is a losing game.
When your analyst opens that "unusual login" alert, what's the first thing they do? They start gathering related context. What else did this user do? What other alerts fired around the same time? What does their normal behavior look like? What systems are they connected to?
That's step zero of every investigation. And right now, step zero is entirely manual. Your analyst is doing the work that the tool should have already done.
Instead of generating an alert that says "unusual login detected," an investigation-first tool would:
- Automatically identify all related signals across connected systems
- Map the identity's historical behavior baseline
- Check whether the "unusual" activity correlates with expected context (travel calendar, VPN connection, device posture)
- If suspicious, automatically gather evidence from EDR, CASB, and cloud logs
- Assemble a complete investigation package with confidence scoring
- Present to the analyst not as "something happened" but as "here's what happened, here's the blast radius, and here's what we recommend"
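The steps above can be sketched as a pipeline. Everything here is hypothetical: the connector interface, the corroboration-based confidence score, and the 0.5 threshold are stand-ins for the real evidence-weighting a production system would use.

```python
from dataclasses import dataclass, field

@dataclass
class Connector:
    """Stub for a connected data source (IdP, EDR, CASB, cloud logs)."""
    name: str
    signals: list
    context: dict

@dataclass
class Investigation:
    identity: str
    related_signals: list = field(default_factory=list)
    evidence: dict = field(default_factory=dict)
    confidence: float = 0.0
    recommendation: str = ""

def build_investigation(alert: dict, connectors: list) -> Investigation:
    """Run the pre-investigation steps the analyst would otherwise do by
    hand: gather related signals, pull context, score, recommend."""
    inv = Investigation(identity=alert["identity"])
    for src in connectors:
        inv.related_signals += src.signals
        inv.evidence[src.name] = src.context
    # Toy scoring: fraction of sources whose context corroborates the alert.
    corroborating = sum(1 for s in connectors if s.context.get("suspicious"))
    inv.confidence = corroborating / max(len(connectors), 1)
    inv.recommendation = "contain" if inv.confidence > 0.5 else "dismiss"
    return inv

idp = Connector("okta", ["unusual_login"], {"suspicious": True})
edr = Connector("edr", [], {"suspicious": False, "device_ip_match": True})
inv = build_investigation({"identity": "james.hernandez14"}, [idp, edr])
print(inv.recommendation)   # dismiss: only one of two sources corroborates
```

The analyst opens a scored, evidence-backed recommendation instead of a bare alert. That's the entire architectural shift in miniature.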
The difference between an alert and an investigation is the difference between a smoke alarm and a fire investigator. The smoke alarm tells you something might be wrong. The fire investigator tells you where it started, how far it spread, and what to do about it.
What the Solution Actually Looks Like
If you're evaluating identity security tools—or reconsidering the one you have—here's what to look for:
1. Multi-Stage Detection Architecture
The tool should combine multiple detection approaches:
- Pattern matching for known threat signatures
- Graph-based anomaly detection that understands identity relationships and behavioral baselines
- Predictive intelligence that recognizes attack patterns as they form—not after the damage is done
2. Autonomous Investigation Capabilities
When something triggers detection, the system should automatically:
- Cluster related alerts across time and systems
- Gather contextual evidence from connected data sources
- Map the potential blast radius using identity graph relationships
- Score confidence based on accumulated evidence
- Present a complete investigation, not a bare alert
3. Cross-System Correlation
Identity attacks traverse systems. Your tool should too. If a credential appears in Okta, then shows up in AWS, then accesses Salesforce—the tool should see that as one investigation, not three unrelated alerts in three different dashboards.
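The core of cross-system correlation is grouping by identity rather than by source tool. A minimal sketch with invented alert records:

```python
from collections import defaultdict

alerts = [
    {"system": "okta",       "identity": "svc-deploy", "event": "token_issued"},
    {"system": "aws",        "identity": "svc-deploy", "event": "assume_role"},
    {"system": "salesforce", "identity": "svc-deploy", "event": "bulk_export"},
    {"system": "okta",       "identity": "a.chen",     "event": "mfa_push"},
]

def correlate(alerts: list) -> dict:
    """Group alerts by identity, so one credential moving across systems
    becomes one investigation instead of three dashboard entries."""
    by_identity = defaultdict(list)
    for a in alerts:
        by_identity[a["identity"]].append(a)
    return by_identity

grouped = correlate(alerts)
len(grouped["svc-deploy"])   # 3 alerts, one investigation
```

Real correlation also has to resolve the same identity across different naming schemes (UPN in Okta, ARN in AWS, user ID in Salesforce), which is exactly the mapping work the identity graph exists to do.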
4. Time-to-Decision, Not Time-to-Alert
The metric that matters isn't how fast the tool generates alerts. It's how fast your team can make a confident decision to act or dismiss. A tool that generates 100 alerts requiring 70 minutes of investigation each is worse than a tool that generates 10 investigations requiring 5 minutes of review each.
When evaluating tools, ask: "What does my analyst see when they open this?" If the answer is a single alert with no context, that's ITDR 1.0. If the answer is a pre-assembled investigation with correlated evidence, confidence scoring, and recommended actions—that's what you're looking for.
The Bottom Line
The identity security industry sold you a detection model built for a different era. One where attacks were discrete events, tools operated in isolation, and human analysts had unlimited capacity to investigate every signal.
That era is over. Attacks are slow-burn campaigns that traverse relationships. Credentials are the #1 initial access vector. And your team is drowning in alerts while breaches go undetected for nearly a year.
The solution isn't more alerts. It's better investigations. It's tools that understand identity as a graph of relationships, not a list of accounts. It's systems that do the correlation work automatically—gathering context, connecting signals, and assembling evidence—before a human ever needs to engage.
The question isn't whether your current tools are generating alerts. They are. The question is whether those alerts are helping your team make faster, more confident decisions—or just adding to the noise.
How Auth Sentry Helps
We built Auth Sentry because we've lived this problem. Our team comes from years of building detection systems at companies like Duo, Expel, Rapid7, and Censys. We've seen what works and what doesn't.
Auth Sentry is an investigation engine, not an alert factory. We combine pattern matching for known threats, graph-based anomaly detection for behavioral deviations, and predictive intelligence that recognizes attack patterns as they form. The result: fewer, higher-confidence investigations that your team can actually act on.
Source: Auth Sentry platform metrics, early customer deployments
We're grounded in research and math—not AI hype. Our detections are built on graph theory, statistical baselines, and continuous validation against real attack data. The AI accelerates execution; the fundamentals are what make it work.
Start your free 7-day trial
See how Auth Sentry turns 847 alerts into a handful of high-confidence investigations your team can actually act on. No credit card. No pushy salespeople. We're here to help.
Start Free Trial

Sources
- Software Analyst Cyber Research, "AI SOC Market Landscape 2025" - 40% alerts uninvestigated, 61% ignored critical alerts, 960 alerts/day, 28 tools, 70 min investigation time
- SANS 2025 SOC Survey - 66% of teams cannot keep pace with alert volumes
- Osterman Research Report 2024-2025 - ~90% SOCs overwhelmed, 80% analysts behind
- Verizon DBIR 2025 - 38% initial access via identity attacks (22% credentials + 16% phishing), 53% breaches involved credentials, 88% web app attacks used credentials, 44% ransomware presence
- IBM Cost of a Data Breach Report 2024 - 292 days credential breach dwell time, $4.81M average cost, $1M savings for sub-200-day containment
- Check Point, "The Alarming Surge in Compromised Credentials in 2025" - 160% credential theft increase
- CISA Advisory AA23-320A - Scattered Spider tactics and techniques
- MGM Resorts SEC Form 8-K Filing, October 2023 - $100M financial impact disclosure
- ALPHV/BlackCat public statement, September 2023 - 6TB exfiltration claim