First-generation ITDR tools generate alerts. Second-generation ITDR investigates autonomously. Here's why that difference matters.
ITDR 1.0 (Alert-Based): Built on static ML models and rules engines, first-generation tools flag anomalies, send alerts, and wait for humans to investigate. They were born from the realization that IAM needed threat detection. The problem: analysts drown in alerts while attackers move faster than investigations can keep up.

ITDR 2.0 (Autonomous Investigation): Powered by specialized AI agents and a graph database, second-generation tools continuously gather evidence, auto-enrich it from your security stack, validate activity with users, and deliver complete investigations. The result: SOCs get investigation-ready cases, 10x faster response, and 70% less noise.
| Detection Engine | ITDR 1.0 (Alert-Based) | ITDR 2.0 (Autonomous Investigation) |
|---|---|---|
| AI/ML Architecture | Static ML models with generic rules. One-size-fits-all anomaly detection. | Specialized AI Agents (OAuth, Service Account, Lateral Movement, Toxic Combo) that continuously learn YOUR environment. |
| Evidence Collection | Analyzes events in isolation. Flags individual anomalies. | Graph database links evidence across identities, permissions, and actions until a confidence threshold is reached. |
| Confidence Scoring | Generic severity (Low/Medium/High) based on rule matching. | Evidence-based confidence scores (e.g., 0.98 = 98% certainty based on correlated evidence). |
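To make evidence-based scoring concrete, here is a minimal sketch of evidence-weighted confidence. The noisy-OR combination and every weight below are illustrative assumptions, not Auth Sentry's published algorithm; the point is that correlated evidence, not a single rule match, drives the score.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str      # e.g. "SIEM", "IdP audit log", "Threat intel"
    weight: float    # how strongly this finding indicates compromise (0-1), invented here

def confidence_score(evidence: list[Evidence]) -> float:
    """Combine corroborating findings with a noisy-OR style rule:
    each additional piece of evidence shrinks the remaining uncertainty."""
    uncertainty = 1.0
    for e in evidence:
        uncertainty *= (1.0 - e.weight)
    return round(1.0 - uncertainty, 2)

# Example: a dormant account suddenly active across several systems.
linked_evidence = [
    Evidence("IdP audit log: dormant account re-activated", 0.60),
    Evidence("SIEM: logins to 4 systems within 5 minutes", 0.75),
    Evidence("Threat intel: source IP on a known-bad list", 0.80),
]

score = confidence_score(linked_evidence)
if score >= 0.95:                      # alert only once the threshold is crossed
    print(f"Escalate: confidence={score}")   # confidence=0.98
```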
| Investigation Process | ITDR 1.0 (Alert-Based) | ITDR 2.0 (Autonomous Investigation) |
|---|---|---|
| Who Investigates? | Humans manually correlate logs, query the SIEM, and pull SaaS platform data. ⏱ Time: 45+ minutes per alert. | AI Agents autonomously gather evidence from multiple sources before alerting. ⏱ Time: <2 minutes, automated. |
| Context Enrichment | Analyst manually queries SIEM, MDM, and threat intel after receiving the alert. | Automatic enrichment from your security stack before the alert is created. Agents query SIEM, SaaS platforms, MDM, and threat intel autonomously. |
| User Validation | Analyst emails or calls the user after the investigation has started. | Agents message users directly via Slack/Teams to validate suspicious activity in real time. |
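For illustration, real-time user validation can be as simple as a direct message from the investigating agent. The sketch below assumes the Slack Web API via slack_sdk; the bot token and user ID are placeholders, and collecting the user's reply is left to a separate interactivity handler.

```python
# Minimal sketch: ask the affected user to confirm or deny suspicious activity.
from slack_sdk import WebClient

client = WebClient(token="xoxb-...")  # placeholder bot token

def ask_user_to_validate(user_id: str, activity: str) -> None:
    """DM the user a description of the suspicious activity so they can
    confirm or deny it before the case escalates."""
    client.chat_postMessage(
        channel=user_id,  # posting to a user ID opens a direct message
        text=(
            ":warning: We detected the following activity on your account:\n"
            f"> {activity}\n"
            "Was this you? Reply YES or NO."
        ),
    )

ask_user_to_validate("U0123ABCD", "OAuth token used from a new country at 03:14 UTC")
```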
| Alert Quality | ITDR 1.0 (Alert-Based) | ITDR 2.0 (Autonomous Investigation) |
|---|---|---|
| What You Receive | "Alert: Unusual login detected from 203.0.113.45. Severity: Medium. Action: Investigate." | "Lateral Movement Detected: dormant account accessed 4 systems in 5 minutes. Evidence: 5 sources, confidence 0.98. Agent actions: tokens revoked, user confirmed compromise via Slack." |
| False Positive Rate | High (30-50% of alerts are false positives). Generic rules don't understand your business context. | 70% reduction through continuous learning. Agents baseline YOUR organization's patterns. |
| Recommended Actions | Generic: "Investigate", "Review logs", "Contact user" | Specific, context-aware: "Revoke OAuth tokens", "Force MFA re-auth", "Audit systems X, Y, Z for data exfiltration" |
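The difference in what lands in the SOC queue is easiest to see as data. Below is a hypothetical, simplified shape for an investigation-ready case next to a bare legacy alert; all field names and evidence entries are invented for the example, not Auth Sentry's actual schema.

```python
# Illustrative only: contrasting a bare alert with an investigation-ready case.
legacy_alert = {
    "title": "Unusual login detected",
    "source_ip": "203.0.113.45",
    "severity": "Medium",
    "action": "Investigate",
}

investigation_ready_case = {
    "title": "Lateral Movement Detected",
    "summary": "Dormant account accessed 4 systems in 5 minutes",
    "confidence": 0.98,
    "evidence": [  # hypothetical entries showing multi-source correlation
        {"source": "IdP audit log", "finding": "Dormant account re-activated"},
        {"source": "SIEM", "finding": "Logins to 4 systems within 5 minutes"},
        {"source": "Slack validation", "finding": "User denied performing the activity"},
    ],
    "agent_actions": ["Tokens revoked", "User contacted via Slack"],
    "recommended_actions": [
        "Force MFA re-authentication",
        "Audit the accessed systems for data exfiltration",
    ],
}
```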
| Threat Coverage | ITDR 1.0 (Alert-Based) | ITDR 2.0 (Autonomous Investigation) |
|---|---|---|
| OAuth Token Attacks | Limited visibility. Flags unusual IPs but misses context about the token lifecycle. | Dedicated OAuth Agent monitors token lifespans, cross-IP usage, impossible travel, and dormant token activation. |
| Service Account Security | No baselines for non-human identities. Can't detect "abnormal" bot behavior. | Service Account Agent learns normal patterns for every API key and detects privilege escalations and unusual access. |
| Toxic App Combinations | Single-app view. Can't see dangerous access chains across systems. | Toxic Combo Agent maps access paths (GitHub → AWS → Prod DB) and detects chained exploitation. |
| Lateral Movement | Relies on endpoint telemetry. Misses identity-based pivots. | Lateral Movement Agent tracks identity usage across systems and detects unusual pivots and credential sharing. |
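Toxic-combination detection is essentially a reachability question over an access graph. The sketch below uses networkx to flag an identity whose access chains from GitHub through AWS to a production database; the graph, node names, and toxic-path list are illustrative assumptions rather than the product's actual model.

```python
import networkx as nx

access = nx.DiGraph()

# Identity -> system edges (who can log in where)
access.add_edge("svc-deploy-bot", "GitHub")
access.add_edge("svc-deploy-bot", "AWS")

# System -> resource edges (what each system can reach)
access.add_edge("GitHub", "AWS")       # CI credentials stored in GitHub
access.add_edge("AWS", "Prod DB")      # IAM role with database access

TOXIC_PATHS = [("GitHub", "Prod DB")]  # access chains considered dangerous

def toxic_combos(graph: nx.DiGraph, identity: str) -> list[list[str]]:
    """Return every dangerous access chain reachable from one identity."""
    findings = []
    for start, target in TOXIC_PATHS:
        if graph.has_edge(identity, start) and nx.has_path(graph, start, target):
            for path in nx.all_simple_paths(graph, start, target):
                findings.append([identity] + path)
    return findings

print(toxic_combos(access, "svc-deploy-bot"))
# [['svc-deploy-bot', 'GitHub', 'AWS', 'Prod DB']]
```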
| Response & Remediation | ITDR 1.0 (Alert-Based) | ITDR 2.0 (Autonomous Investigation) |
|---|---|---|
| Response Time | Hours to days (depends on analyst availability and workload). | Minutes (agents can auto-revoke tokens, block IPs, and disable accounts with SOC approval). |
| Containment Actions | Manual workflows. Analyst creates tickets and coordinates with IT. | Automated containment workflows with approval gates. Agents execute remediation steps autonomously. |
| Post-Incident Analysis | Manual correlation of logs and timelines. | Pre-built attack timelines showing the complete evidence chain and the agent actions taken. |
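An approval-gated containment workflow can be modeled as an ordered list of steps in which low-risk actions run automatically while disruptive ones wait for SOC sign-off. The step names and the approval callback below are hypothetical; in a real deployment approval would more likely arrive from a chatops channel or ticketing system.

```python
from typing import Callable

# (step name, action to run, whether it needs SOC approval)
ContainmentStep = tuple[str, Callable[[], None], bool]

def run_containment(steps: list[ContainmentStep], approve: Callable[[str], bool]) -> None:
    for name, action, needs_approval in steps:
        if needs_approval and not approve(name):
            print(f"Skipped (not approved): {name}")
            continue
        action()
        print(f"Executed: {name}")

steps = [
    ("Revoke OAuth tokens for compromised identity", lambda: None, False),
    ("Block source IP 203.0.113.45 at the proxy", lambda: None, False),
    ("Disable the user account", lambda: None, True),  # disruptive: gate it
]

run_containment(steps, approve=lambda step: input(f"Approve '{step}'? [y/N] ") == "y")
```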
| Learning & Adaptation | ITDR 1.0 (Alert-Based) | ITDR 2.0 (Autonomous Investigation) |
|---|---|---|
| Customization | Manual tuning of rules and thresholds. Requires security engineer time. | Continuous learning from YOUR environment. Agents automatically adapt to your organization's patterns. |
| Baseline Updates | Periodic model retraining (quarterly or annually). | Real-time baseline updates. If DevOps starts deploying on Fridays, agents learn that's normal for YOUR team. |
| Organization-Specific Detection | Generic rules applied to all customers. What's normal for Company A triggers alerts at Company B. | Precision Detection (Identity Rx): detections tailored to YOUR business workflows, not industry averages. |
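Real-time baselining boils down to updating a per-identity statistic on every observation instead of retraining on a schedule. The sketch below uses an exponentially weighted moving average over a made-up feature (deployments per day); the feature, the smoothing factor, and the anomaly threshold are illustrative assumptions.

```python
class RollingBaseline:
    """Track a per-identity activity baseline that adapts with every observation."""

    def __init__(self, alpha: float = 0.1, tolerance: float = 3.0):
        self.alpha = alpha          # how quickly the baseline adapts to new behavior
        self.tolerance = tolerance  # allowed multiple of the baseline before flagging
        self.mean = None            # learned "normal" level, starts unknown

    def observe(self, value: float) -> bool:
        """Return True if the new observation looks anomalous, then fold it
        into the baseline so legitimate new patterns become normal over time."""
        anomalous = self.mean is not None and value > self.tolerance * self.mean
        if self.mean is None:
            self.mean = value
        else:
            self.mean = (1 - self.alpha) * self.mean + self.alpha * value
        return anomalous

baseline = RollingBaseline()
for deploys in [2, 3, 2, 4, 3]:       # typical daily deployment counts
    baseline.observe(deploys)
print(baseline.observe(25))            # a sudden burst -> True (flag for review)
```

With alpha = 0.1 the baseline adapts over roughly ten observations, so if Friday deployments become routine, they stop being flagged after a couple of weeks without anyone retuning a rule.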
| Outcome | ITDR 1.0 | ITDR 2.0 |
|---|---|---|
| What the SOC receives | Alerts per week | Complete investigations per week |
| Time per investigation | 45 minutes | <2 minutes, automated |
| False positive rate | 40% | 10% (a 70% reduction) |
| Guidance | Generic recommendations | Context-aware, actionable guidance |
While first-generation tools generate more alerts, we deliver fewer, higher-quality investigations—complete with evidence, context, and recommended actions. Your SOC focuses on real threats, not noise.
See ITDR 2.0 in Action

Organizations moving from ITDR 1.0 to Auth Sentry typically see results within the first week:
1. Agentless deployment. Connects to your identity providers and starts baselining.
2. AI Agents begin detecting threats with initial baselines, delivering immediate value from graph-based detection.
3. Full Identity Rx precision. Agents have learned your environment and the 70% false-positive reduction is achieved.
4. Continuous learning. Detection keeps getting smarter as your organization evolves.
See how Auth Sentry's AI Agents deliver complete investigations, not just alerts.
Request Free Trial