
Your MTTD Looks Good. Your Post-Alert Gap Is Missing From Your Dashboard

Anthropic restricted its Mythos preview model last week after it discovered and autonomously exploited a zero-day vulnerability affecting all major operating systems and browsers. Wendi Whitmore of Palo Alto Networks warned that similar capabilities are weeks or months away from spreading. CrowdStrike’s 2026 Global Threat Report puts the average eCrime breakout time at 29 minutes. Mandiant’s M-Trends 2026 shows the fastest observed attacker timelines shrinking to 22 seconds.

Offense is accelerating. The question is where exactly defenders are slow, because it is not where most SOC dashboards suggest.

Detection tooling has gotten genuinely good. EDR, cloud security, email security, identity, and SIEM platforms ship with built-in detections that push MTTD close to zero for known techniques. That is real progress, and it is the result of years of engineering investment across the industry.

But when adversaries operate on timelines measured in seconds and minutes, the question is no longer whether your detection is fast enough. It is what happens between the alert firing and someone actually picking it up.

The Post-Alert Gap

After the alert fires, the clock keeps running. An analyst has to see it, triage it, assemble context across the stack, investigate, make a determination, and initiate a response. In most SOCs, that stretch is where most of the attacker’s operating window lives.

The analyst is in the middle of another investigation. An alert lands in the queue. Context is scattered across four or five tools. The investigation itself means querying the SIEM, reviewing identity logs, pulling endpoint telemetry, and correlating timelines. Done thoroughly, to a defensible determination rather than a gut-feel close, that is 20 to 40 minutes of work, assuming the analyst starts promptly, which they rarely can.
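The timeline-correlation step in that workflow is mechanical, which is part of why it is so automatable. Here is a minimal Python sketch of merging per-tool events into one chronological timeline; the event records and field names are invented for illustration and do not correspond to any real product’s API.

```python
from datetime import datetime

# Hypothetical event records standing in for SIEM, identity-provider,
# and EDR query results. Timestamps and messages are illustrative only.
siem_events = [{"ts": "2026-02-03T14:02:11", "source": "siem", "msg": "Suspicious PowerShell spawn"}]
identity_events = [{"ts": "2026-02-03T14:01:47", "source": "idp", "msg": "Impossible-travel login"}]
edr_events = [{"ts": "2026-02-03T14:03:05", "source": "edr", "msg": "LSASS memory read"}]

def build_timeline(*event_streams):
    """Merge per-tool event lists into a single chronologically ordered timeline."""
    merged = [event for stream in event_streams for event in stream]
    return sorted(merged, key=lambda e: datetime.fromisoformat(e["ts"]))

timeline = build_timeline(siem_events, identity_events, edr_events)
for event in timeline:
    print(event["ts"], event["source"], event["msg"])
```

In a real SOC each list would come from a separate console, which is exactly the tab-switching the paragraph above describes.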

Against a 29-minute breakout window, the investigation has not even begun by the time the attacker moves laterally. Against the 22-second extreme, the alert may still be sitting in the queue.

MTTD captures none of this. It measures how fast alerts fire, and on that front the industry has made real progress. But the metric stops at the alert. It says nothing about how long the alert sat in the queue, how many alerts received a real investigation versus a quick skim, or how many were bulk-closed without meaningful analysis. MTTD reports on the side of the problem the industry has already largely solved. The real exposure, the post-alert investigation gap, appears nowhere on the dashboard.
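To see what MTTD hides, it helps to measure the queue and investigation phases separately. A minimal Python sketch, using hypothetical alert lifecycle records (the field names and timestamps are illustrative, not from any real alert schema):

```python
from datetime import datetime
from statistics import mean

def ts(s):
    return datetime.fromisoformat(s)

# Hypothetical lifecycle records: "fired" is when the detection triggered,
# "picked_up" is when an analyst opened the alert, "closed" is when a
# determination was made. Values are invented for illustration.
alerts = [
    {"fired": ts("2026-02-03T14:02:00"), "picked_up": ts("2026-02-03T14:41:00"), "closed": ts("2026-02-03T15:12:00")},
    {"fired": ts("2026-02-03T16:10:00"), "picked_up": ts("2026-02-03T17:55:00"), "closed": ts("2026-02-03T18:20:00")},
]

def mean_minutes(deltas):
    return mean(d.total_seconds() for d in deltas) / 60

queue_time = mean_minutes(a["picked_up"] - a["fired"] for a in alerts)
investigation_time = mean_minutes(a["closed"] - a["picked_up"] for a in alerts)
post_alert_gap = queue_time + investigation_time  # the window MTTD never sees

print(f"avg queue time: {queue_time:.0f} min")
print(f"avg investigation time: {investigation_time:.0f} min")
print(f"avg post-alert gap: {post_alert_gap:.0f} min")
```

Even with a near-zero MTTD, records like these can show an exposure window of well over an hour per alert.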

What Changes When AI Takes Over Investigations

AI-driven investigation does not improve detection speed. MTTD is a detection-engineering metric, and always has been. What AI compresses is the post-alert timeline, which is where the actual exposure lives.

The queue disappears. Every alert is investigated as it arrives, regardless of severity or time of day. Context assembly that cost an analyst 15 minutes of tab-switching happens in seconds. The investigation itself, weighing the evidence, following the leads, reaching a determination, takes minutes rather than an hour.

This is what we built Prophet AI to do. It investigates every alert with the depth and reasoning of a top-tier analyst, at machine speed: planning investigations dynamically, querying the relevant data sources, and producing clear, evidence-backed conclusions. The post-alert gap does not exist in this model because there is no queue and no waiting time. For teams benchmarking this, we have consistently measured investigation times compressed to under two minutes.

The same structural limitation applies to MDR. MDR analysts face the same post-alert problem because they are still bound by human investigative capacity. Moving from outsourced human investigation to AI investigation removes that ceiling entirely, and changes what you can actually measure about your SOC’s performance.

The Metrics That Matter Now

Once the post-alert gap is closed, raw speed metrics stop being informative differentiators. A two-minute MTTI matters for the first quarter you report it. After that, it becomes table stakes. The question shifts from “how fast are we?” to “how is our security posture strengthening over time?”

Four metrics capture this:

  1. Investigation coverage rate. What percentage of total alerts receive a full investigation with a complete line of evidentiary questioning? In a traditional SOC, this number is usually 5 to 15 percent; the rest are skimmed, bulk-closed, or ignored. In an AI-driven SOC, it should be 100 percent. This is the single most important metric for understanding whether your SOC actually sees what is happening in your environment.
  2. Detection coverage. MITRE ATT&CK technique coverage mapped against your detection library, with gaps identified and tracked over time. That means continuously mapping the detection landscape, identifying techniques with weak or nonexistent coverage, and flagging single points of failure, situations where one detection system is the only thing standing between the organization and total blindness to a technique. Detection engineering in an AI-driven SOC requires rethinking how this coverage is maintained.
  3. Feedback loop speed. How quickly do investigation findings flow back into detection tuning? In most SOCs, this loop runs on human memory and quarterly review cycles. The target state is continuous: investigation findings should feed directly into detection improvements, suppressing noise and strengthening signal, without waiting for a scheduled review.
  4. Hunt-driven detection rate. How many permanent detections are created from proactive hunting versus incident response? This measures whether your hunting program is expanding your coverage or just generating reports. The strongest implementations tie hunting directly to detection gaps: hypothesis-driven hunts against weakly covered techniques, with confirmed findings converted into permanent detection rules.
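The first and fourth metrics reduce to simple ratios over alert and detection records. A minimal sketch, assuming hypothetical disposition and origin labels (no real product schema is implied):

```python
# Hypothetical alert dispositions and detection-rule origins; the label
# names and counts are invented for illustration.
alert_dispositions = (
    ["full_investigation"] * 12 + ["skimmed"] * 70 + ["bulk_closed"] * 18
)
detection_origins = ["hunt"] * 3 + ["incident_response"] * 9 + ["vendor_default"] * 28

def investigation_coverage(dispositions):
    """Share of alerts that received a full evidentiary investigation."""
    return dispositions.count("full_investigation") / len(dispositions)

def hunt_driven_rate(origins):
    """Share of permanent detections that originated from proactive hunts."""
    return origins.count("hunt") / len(origins)

print(f"investigation coverage: {investigation_coverage(alert_dispositions):.1%}")
print(f"hunt-driven detection rate: {hunt_driven_rate(detection_origins):.1%}")
```

The point of tracking these as ratios rather than raw counts is that they stay comparable as alert volume grows.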

These metrics only become attainable once AI is doing the actual investigative work, but they represent a very different view of SOC performance, one focused on security outcomes rather than activity.
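The detection-coverage metric can likewise be tracked with a small script. In the sketch below, the ATT&CK technique IDs are real entries from the framework, but the coverage mapping is invented for illustration:

```python
# Hypothetical mapping of MITRE ATT&CK technique IDs to the detection
# systems that cover them. The IDs are real ATT&CK techniques; the
# coverage data is made up for this example.
detection_coverage = {
    "T1059": ["edr", "siem"],  # Command and Scripting Interpreter
    "T1078": ["identity"],     # Valid Accounts
    "T1021": [],               # Remote Services
    "T1567": ["cloud"],        # Exfiltration Over Web Service
}

# Techniques with no detection at all: total blindness.
gaps = sorted(t for t, systems in detection_coverage.items() if not systems)

# Techniques covered by exactly one system: a single point of failure.
single_points_of_failure = sorted(
    t for t, systems in detection_coverage.items() if len(systems) == 1
)

print("no coverage:", gaps)
print("single point of failure:", single_points_of_failure)
```

Re-running this after each detection-library change turns coverage from a one-off audit into a tracked trend.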

The Mythos episode revealed something the security industry already knew but had not fully internalized: AI accelerates offense at a pace that makes human-speed defense untenable. The answer is not to panic about AI-generated exploits. It is to close the gap where defenders are actually slow, the post-alert investigation window, and to start measuring whether that gap is narrowing.

Teams that move from reporting detection speed to reporting investigation coverage and detection improvement will have a far clearer picture of their true risk posture. When attackers have AI working for them, that clarity matters.

Prophet Security’s agentic AI SOC platform investigates every alert with expert-analyst depth, continuously improves detections, and runs targeted hunts against coverage gaps. Visit Prophet Security to see how it works.


