April 17, 2026

Mythos, Memory Loss, and the Part InfoSec Keeps Missing

Written by Justin Elze
Artificial Intelligence (AI)

InfoSec has a bad habit of acting like history started this morning. Something new lands, the industry loses its mind for a week, vendors start talking like the old rules no longer apply, and half the industry suddenly forgets how organizations actually get compromised.

We are doing that again with Mythos.

Mythos is legitimately impressive. It is very good at finding bugs, useful for exploit development, and materially improves the speed and quality of vulnerability research work. Anyone pretending otherwise is coping. But the conversation around it is already drifting into the same bad pattern this industry falls into every time a new offensive capability shows up: people fixate on the most technically dramatic part of the story and lose sight of what actually matters operationally.

That is the problem. The question is not whether Mythos is good at bug hunting and helping write exploits; it clearly is. The question is what that means for most defenders right now, and the answer is not “drop everything, autonomous zero-day machines are now the main thing compromising your environment.”

For most organizations, the bigger problem is still much more boring and damaging: ransomware crews, extortion operations, stolen credentials, phishing, exposed edge services, weak identity controls, stale appliances, known vulnerabilities, bad segmentation, and environments where once somebody gets in, they can move far too easily. Mythos does not replace that reality; it lands on top of it. If you miss that, you end up having the wrong conversation and spending your time talking about AI-generated zero-day storms while attackers keep getting paid through the same doors defenders left open last quarter.

We’ve Been Here Before

Weird how InfoSec collectively forgets there was a time when browser exploits were just sitting in Metasploit. Java applets got abused into oblivion until the default behavior had to change because the model itself was broken.

Before that, Windows Firewall was not a thing, and Linux and Unix boxes routinely shipped with inetd.conf full of exposed services running as root or close enough to it: rpc.statd, rpc.mountd, rpc.ttdb on Solaris, fingerd, rshd, the whole r-services trust model. If you were on the network, you were basically trusted, and that was normal.

Then came the exploit kit era. Blackhole rented for pocket change, and Angler industrialized browser exploitation at scale. You did not need to be some elite exploit developer; you only needed access to the kit, traffic, and a business model. Crime as a service worked because the underlying ecosystem made it work.

Every one of those periods felt unprecedented while people were living through them, and every one of them generated the same style of panic and the same kind of bad analysis pretending the sky had permanently fallen. Just as importantly, every one of them eventually got pushed back because the architecture changed enough to make the attack class less economical.

inetd exposure gave way to host firewalls on by default. Browser plugin hell (Java, Flash, Silverlight, ActiveX) got killed off because it was indefensible at scale. Windows kept layering in mitigations like ASLR, DEP, and CFG until exploitation became harder, less reliable, and more expensive than it used to be.

But offense shifted. As some of the older, cleaner exploit paths got squeezed, attackers moved into the next layer of abuse that still scaled: VBA, HTA, LNK files, script abuse, LOLBins, and every other piece of “normal” functionality that enterprise environments kept around because business workflows depended on it. That tradecraft is not historical trivia either; a lot of it is still alive and kicking today, even if pieces of it have been nerfed over time. That is the pattern: defenders and platform vendors raise the cost on one attack class, and attackers adapt into whatever functionality remains exposed, trusted, and economically useful. The ecosystem changes, the tradecraft shifts, and the cycle repeats.

The retail industry learned this lesson the expensive way. For years, major retailers got absolutely eviscerated through basically the same playbook: get into the network, find the point-of-sale (POS) systems, scrape card data in memory before encryption, and exfiltrate it quietly. Target, Home Depot, Neiman Marcus, and TJ Maxx before that all got hit. This was not some magical new class of intrusion. It was weak segmentation, bad trust boundaries, and treating POS systems like normal back-office IT even though they waded through a river of payment data.

That attack pattern did not become less dominant because defenders suddenly got better at chasing alerts. It got less economical because the architecture changed. Chip-and-PIN raised the cost of card-present fraud, and point-to-point encryption meant that even if you landed on the POS environment, the data you wanted was increasingly not there in the same useful form. Eventually, the economics shifted hard enough that the old model stopped scaling the same way.

When offense gets cheaper, faster, and easier to scale, the long-term answer usually is not “patch a little harder.” The long-term answer is that the ecosystem changes underneath it until the old attack pattern gets more expensive, less reliable, or just not worth it anymore. LLMs accelerating vulnerability discovery looks like another one of those shifts, and the ecosystem will adapt. It always does.

What Mythos Actually Changes

Let’s get the obvious part out of the way: Mythos matters. A system that can identify promising vulnerability candidates, reason about exploitability, and help generate working exploit paths faster than prior tooling is a meaningful capability jump, and that should not be dismissed, minimized, or hand-waved away just because people are tired of AI hype. There is real offensive potential and real defensive value in the same class of capability.

But the industry is already doing the annoying thing where a real capability shift gets inflated into a bad narrative. The bad narrative is that Mythos means the average organization’s primary problem is now autonomous zero-day exploitation, but that is not the most useful takeaway.

The better takeaway is that Mythos accelerates a trend that defenders were already losing against. The window between vulnerability disclosure and operational exploitation has been collapsing for years. Public proofs of concept (PoCs), exploit frameworks, bug bounty economics, easier reverse engineering, better tooling, more researchers, and a culture that rapidly operationalizes interesting bugs were already compressing timelines before LLMs started helping write useful exploit code. Mythos did not invent that trend, but it pours gasoline on it.

The Part the Industry Keeps Missing

Most organizations are not getting compromised because a frontier model discovered a beautiful memory corruption bug and autonomously weaponized it before breakfast.

This is one of the most frustrating things about the Mythos coverage. It pulls the conversation toward the most cinematic version of the future while a huge amount of real-world compromise still runs through known, old, embarrassingly available paths. The average ransomware crew does not need elegance. They need access. The average extortion operation does not need a revolutionary AI exploit chain if they can buy credentials, phish a user, hit an exposed edge device, abuse a known vulnerability, or walk through a badly segmented environment. That is still the center of gravity for most organizations.

If you want a better reality check than vendor decks and AI panic headlines, go read The DFIR Report. It is some of the most useful public work in this industry because it is built from actual incident response cases rather than theory. Read enough of those reports and the pattern becomes hard to ignore: organizations are usually getting compromised through phishing, stolen credentials, exposed remote access, public-facing applications, weak segmentation, and known weaknesses that operators can chain into ransomware, extortion, and broader compromise. The sophistication usually shows up after initial access, not before it.

Zero days do matter, but they are not the main reason most defenders are losing today. Defenders have an exposure management problem first, a patch velocity problem, a prioritization problem, an identity problem, an architecture problem, and a “we knew this was reachable and still did not fix it” problem. That is not as fun to write headlines about, but it is a lot more useful.

Where Defenders Have Actually Improved

To be fair, defenders are not standing still. Cyber Threat Intelligence (CTI) has genuinely improved, detection coverage is better, EDR is more common, and response workflows are more mature. A lot of enterprises are materially better than they were 10 years ago at identifying active abuse and reacting faster once something starts to unfold.

Better CTI and faster detection give defenders earlier visibility and better leverage. They help teams prioritize active threats, hunt smarter, and respond faster. What they do not do is magically erase exposed attack surface, weak identity controls, stale edge systems, or bad segmentation. Better visibility into a losing architecture is still visibility into a losing architecture.

The KEV Problem, Reframed

Another way to think about this is to stop acting like the total pile of vulnerabilities is the same thing as the total pile of meaningful breach paths. It is not, because software has always had a mountain of bugs. Many never matter operationally. Some are real but noisy. Some are hard to weaponize. Some are dead on arrival outside a lab. Some are technically severe and practically irrelevant to the way real intrusions happen. CISA’s Known Exploited Vulnerabilities (KEV) Catalog is basically the smaller subset of bugs that have actually been confirmed as exploited in the wild.

And just as importantly, the attack surface has to be reachable. If a vulnerability is directly exposed to the Internet, that is obviously a more immediate problem. That is a clean path. But a lot of issues require the attacker to already be on a machine, already have code execution in a user context, already be past email controls, or already have made it through EDR, MFA coverage, segmentation, and whatever other layers an enterprise has put in front of the target over the last several years.

Enterprise controls have improved year over year, even if imperfectly. Email security is better than it used to be. EDR is far more common. Logging, detection coverage, and containment workflows are generally better than they were a decade ago. None of that makes enterprises safe, but it does mean not every bug sits in a straight line to impact.

The set of vulnerabilities that actually matter in the wild is the set that is reachable inside a real attack path at a cost attackers are willing to pay. That is why the real concern around Mythos is not “AI will find infinite bugs.” Infinite bugs were never the issue. The concern is that the subset of bugs that are reachable, useful, and operationally relevant can be triaged, understood, and weaponized faster.
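That triage logic is simple enough to sketch. Here is a minimal Python illustration, assuming hypothetical field names and made-up CVE IDs (this is not any real scanner’s data model): a finding matters most when it is both confirmed exploited in the wild (KEV) and directly reachable, and least when it is neither.

```python
# Illustrative sketch only: rank findings by KEV membership and reachability.
# Field names and CVE IDs are invented for the example.

from dataclasses import dataclass


@dataclass
class Finding:
    cve: str
    internet_facing: bool  # reachable without an existing foothold
    in_kev: bool           # listed in CISA's KEV Catalog


def priority(f: Finding) -> int:
    """Lower number = fix sooner."""
    if f.in_kev and f.internet_facing:
        return 0  # confirmed exploited and directly reachable: a clean path
    if f.in_kev or f.internet_facing:
        return 1  # exploited but internal, or exposed but not yet exploited
    return 2      # real, but sitting behind layers of other controls


findings = [
    Finding("CVE-2024-0001", internet_facing=True, in_kev=True),
    Finding("CVE-2024-0002", internet_facing=False, in_kev=True),
    Finding("CVE-2024-0003", internet_facing=True, in_kev=False),
    Finding("CVE-2024-0004", internet_facing=False, in_kev=False),
]

for f in sorted(findings, key=priority):
    print(f.cve, priority(f))
```

A real program would obviously weigh far more than two booleans, but the shape of the argument is the same: the ranking is driven by exploitation evidence and reachability, not raw severity scores.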

That makes Internet-facing exposure more dangerous, N-days more dangerous, and patch delay more dangerous. It also makes post-compromise vulnerabilities more dangerous because once an attacker gets a foothold through the usual means, the path from local foothold to privilege escalation or deeper compromise may compress further.

The problem is that the path from “reachable issue” to “usable offensive capability” keeps getting shorter, while many defenders are still operating on timelines and assumptions built for a slower world.

Still a Remediation and Architecture Problem

The annoying part is that the answer still sounds boring: patch faster, reduce exposure, improve identity controls, segment better, shrink blast radius, harden defaults, get rid of legacy trust assumptions, log the things that matter, know what is Internet reachable, know what your third parties shipped, and know what your admins can touch.

Everyone hates this answer because it feels too mundane for the scale of the Mythos discourse, but it is still the right answer. The reason serious post-incident work keeps pointing back to fundamentals is that fundamentals are where organizations keep failing. Threat actors do not need exquisite offensive capability against an environment with exposed edge devices, weak credential hygiene, half-deployed MFA, stale software, and wide-open lateral movement paths. They need one workable path, and most environments still provide several.

What Mythos changes is the cost of ignoring the fundamentals. If exploit development gets faster, patch delays become more expensive. If reverse engineering known fixes gets easier, “patched but not deployed yet” becomes a more dangerous state. If candidate triage gets faster, defenders get less protection from noise, scale, and obscurity.

The industry has been leaning on inefficiency more than it wants to admit, and that cushion is getting thinner. This matters even more in open source, where discovery capacity and remediation capacity are often badly mismatched. If systems like Mythos can help find issues faster than maintainers can realistically fix them, then the bottleneck is sustained remediation capacity. That makes funding, staffing, and maintaining critical open source software a practical security problem.

Where the Hype Is Wrong

There are two bad reactions to Mythos. The first is denial: this is just another overhyped AI demo, real exploitation is too hard, autonomous offense is a meme, and none of this matters. That is too dismissive.

The second is panic: the gap is gone, zero days are about to fall from the sky, defenders are cooked, and every vulnerability instantly becomes a practical breach vector. That is too theatrical.

The more accurate view is less dramatic and more useful. Mythos-class systems are meaningful acceleration layers on top of offensive workflows that already existed. They improve researcher throughput and triage, reduce labor, and speed up the path from “this looks interesting” to “this may actually be weaponizable.” In some cases, they will clearly compress time-to-exploit in ways defenders should care about.

But they do not erase environmental dependency, target-specific constraints, or the value of human tradecraft. They do not mean every discovered bug becomes a practical intrusion path. And they definitely do not mean defenders should re-center the entire universe around autonomous zero-day scenarios while continuing to lose to the same known access paths that have been driving compromise for years.

The Historical Pattern Still Holds

The good news, if you want to call it that, is that the historical pattern probably still holds here. When offense becomes cheap and scalable, the answer is to change the environment until the attack class loses efficiency.

Over time, that probably means more AI-assisted remediation, not just AI-assisted offense. It probably means faster exploitability triage on the defensive side, stronger secure-by-default expectations, more memory-safe software where the risk justifies the migration pain, and different development pipelines with better review augmentation and software that ships with fewer security failures because the tooling around it changed.

That is how these shifts usually end: not with defenders winning an endless footrace at the same speed, but with the ecosystem changing enough that the offense no longer scales the same way. We are not there yet, but that is still where this likely goes.

What Defenders Should Actually Take Away

Mythos is real.

The capability shift is real.

The hype is still wrong.

Because the story is not that AI suddenly invented offensive asymmetry.

The story is that defenders were already on the wrong side of exposure and remediation economics, and now the clock moves even faster.

Attackers do not need a revolutionary future to beat most organizations.

They are usually doing it with the infrastructure those organizations already forgot to fix.