On 11 May 2026, in two bursts six minutes apart — between 19:20:39 and 19:20:47, then again between 19:26:14 and 19:26:20 UTC — 84 malicious package versions were published to the public npm registry across 42 distinct packages in the @tanstack namespace. They came from TanStack’s own publishing infrastructure, with TanStack’s verified npm identity. As TanStack’s maintainers stated in their post-mortem, “no npm tokens were stolen and the npm publish workflow itself was not compromised.” The attackers did not steal a password; they did not hijack the workflow; they did something more subtle and more interesting.
For anyone responsible for a website, a Shopify storefront, a custom application, or a server that runs Node.js, this is one of those incidents worth pausing to understand. It quietly collapses the most common defence the industry teaches — trust the maintainer, check the publisher — because in this case the maintainer was trusted, the publisher was correct, and the published packages were properly signed.
What happened
TanStack is a widely used family of open-source JavaScript libraries — its router, query, and form packages sit underneath thousands of websites and applications, including some of the largest on the web. @tanstack/react-router alone is downloaded over 12 million times per week, according to npm’s public statistics.
According to TanStack’s post-mortem, the attack chained three separate weaknesses in the project’s GitHub Actions setup.
First, the project had workflows that triggered on pull_request_target — a GitHub Actions event that runs with the repository’s own secrets and permissions rather than the fork’s. Workflows using this trigger need extremely careful guarding, because an outside contributor’s pull request can invoke them. The attacker submitted a malicious pull request that caused such a workflow to run attacker-controlled code in that elevated context.
Second, that compromised run was able to poison GitHub Actions’ build cache. The cache layer is shared between trusted and untrusted workflows on the same repository; entries written by an attacker-controlled run could later be loaded by a production workflow as if they were legitimate.
Third, when a legitimate publish workflow later ran and authenticated to npm using OpenID Connect (OIDC) — a modern, password-less standard that issues short-lived tokens at the moment of publish — the attacker code, by then living inside the runner via the poisoned cache, extracted those tokens directly from the runner’s process memory. With the live tokens in hand, the attackers were then able to call npm’s publish API directly, outside the defined workflow steps, and push their own malicious package versions under TanStack’s verified identity.
The result: 84 package versions across 42 packages, published in roughly six minutes, properly signed. To npm, and to every downstream tool that audits package authenticity, those packages looked authentic — because in every meaningful signing sense, they were.
It is worth pausing on what the TanStack maintainers’ 2FA did and did not do here. The maintainers had two-factor authentication enabled on their npm accounts. The 2FA was not bypassed. Their accounts were never directly compromised. The attackers did not need to defeat 2FA at all — they reached the npm publish API through a different door, by living inside a runner that was already authenticated. That is the part of this incident most worth absorbing: standard account-protection hygiene, applied correctly by competent maintainers, was not in itself enough.
What the malicious code was designed to do once installed is straightforward: steal credentials. The payload executed at install time and harvested AWS instance-metadata and Secrets Manager credentials, GCP service-account tokens, Kubernetes service-account tokens, HashiCorp Vault tokens, npm tokens from ~/.npmrc, GitHub tokens (from environment variables, the gh CLI, and .git-credentials), and SSH private keys from ~/.ssh/. Exfiltrated data was sent out through the Session messenger network — an end-to-end encrypted channel that hides the destination from observers. The payload also enumerated other packages the victim maintained and attempted to re-publish them with the same injection embedded. Developer laptops and CI runners were both on the target list — but CI runners are the more dangerous case, because they typically hold credentials to far more systems than any single developer’s machine.
Within roughly 20 minutes of the first publish, an external security researcher had detected the compromise and reported it to TanStack with a full technical analysis. The malicious versions were deprecated by TanStack’s maintainers; npm security was engaged to pull the tarballs from the registry server-side, preventing further installs; and GitHub Security Advisory GHSA-g7cv-rxg3-hmpx was published with the specific affected version ranges. But during the window between publish and remediation, automated dependency-update tooling, CI builds running npm install against unfrozen lockfiles, and downstream maintainers pulling fresh upstream versions could all have ingested the malicious code.
What this means for developers
If you are a developer working in JavaScript, TypeScript, React, or anywhere on Node.js, the practical takeaways are:
Lockfiles matter more than ever. A lockfile that pins exact versions and integrity hashes is your single best defence against this category of attack. If your build does not consume package-lock.json, pnpm-lock.yaml, yarn.lock, or bun.lockb faithfully — for example, if CI runs npm install instead of npm ci — you can pull a poisoned version even after the legitimate ecosystem has caught and remediated it.
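As a concrete sketch, assuming a plain npm project (pnpm and Yarn have equivalent frozen-install modes), the difference is one command in CI:
# Reproducible: installs exactly what package-lock.json pins, and fails
# if package.json and the lockfile disagree.
npm ci
# Equivalents elsewhere (Yarn 1 shown; Yarn 2+ uses --immutable):
pnpm install --frozen-lockfile
yarn install --frozen-lockfile
# By contrast, `npm install` may resolve newer versions within semver
# ranges and rewrite the lockfile as a side effect.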
Audit your dependency tree, not just your direct dependencies. Many projects do not list @tanstack/* packages in their own package.json but pull them in transitively through frameworks, starter kits, and component libraries. The audit only takes a minute (see commands below) and tells you what you actually have, not what you think you have.
Treat dependency updates as a reviewed change. Automated PRs that bump versions to the absolute latest, merged without scrutiny, are the primary delivery vehicle for incidents like this one. A small delay between “new version is available” and “we install it” is usually a feature, not a bug. Tools like Renovate and Dependabot can be configured to honour minimum-age constraints — pnpm has a minimum-release-age setting specifically for this, and it is worth turning on.
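A minimal sketch of enabling this in pnpm; the setting’s name and location have moved between pnpm releases, so treat the details as assumptions to verify against your version’s documentation:
# Require installed versions to be at least 24 hours old (value in
# minutes; recent pnpm releases read this from pnpm-workspace.yaml,
# an assumption to verify for your version).
cat >> pnpm-workspace.yaml <<'EOF'
minimumReleaseAge: 1440
EOF
pnpm install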
Understand what “signed” actually means. A signed package proves who published it, not whether the publishing process itself was honest at the time of publish. This is not a flaw in OIDC or in npm provenance — it is simply a useful reminder of what those mechanisms can and cannot guarantee.
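Recent npm releases can demonstrate the distinction directly: `npm audit signatures` checks exactly the “who published it” half of the guarantee.
# Verifies registry signatures and provenance attestations for every
# installed package: proof of publisher identity and tarball integrity.
# The compromised TanStack versions would have passed this check; that
# is precisely the limitation described above.
npm audit signatures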
What this means for server administrators
If you operate servers — production hosts, build agents, CI runners, container hosts, anything that ever runs npm install, pnpm install, or yarn install — the takeaways are slightly different.
Inventory matters. When the next supply-chain advisory lands, “did this advisory’s packages run on any of my hosts during the affected window?” is the question that decides whether you have a minor administrative task or a security incident. Answering quickly requires knowing where every Node.js install lives, what its lockfile says, and when its node_modules was last written. Not glamorous work, but essential.
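A rough host-wide inventory fits on one screen. A sketch, assuming a Linux host with GNU stat; the search roots and depth are placeholders to adjust:
# Locate every Node.js project (by lockfile) and report when its
# node_modules tree was last written, in UTC.
find /srv /home -maxdepth 6 -path '*/node_modules' -prune -o \
  \( -name package-lock.json -o -name pnpm-lock.yaml -o -name yarn.lock \) -print 2>/dev/null |
while read -r lock; do
  dir=$(dirname "$lock")
  if [ -d "$dir/node_modules" ]; then
    printf '%s\tinstalled %s\n' "$dir" "$(TZ=UTC stat -c '%y' "$dir/node_modules")"
  else
    printf '%s\t(no node_modules present)\n' "$dir"
  fi
done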
Time windows matter. Knowing the mtime on installed packages is genuinely useful. If your CI ran an unfrozen install at 19:23 UTC on 11 May 2026, that is a red flag. If it last ran two days before, your lockfile pinned an older safe version and you can move on. Stamping your install events with a timestamp you can audit against is a small habit with a large payoff.
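The habit can be a single wrapper around the install step; the log path here is a placeholder:
# Bracket every install with UTC timestamps so "did we install during
# the advisory window?" becomes a log lookup, not a reconstruction.
date -u '+%Y-%m-%dT%H:%M:%SZ install start' >> /var/log/node-installs.log
npm ci
date -u '+%Y-%m-%dT%H:%M:%SZ install done' >> /var/log/node-installs.log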
Rotating credentials is not always the answer. An instinctive reaction to any supply-chain advisory is to rotate every credential touched. Sometimes that is correct; often it is not. If the malicious package never executed on your hosts, the credentials never left your hosts. Spend the audit minutes first; rotate based on evidence.
How to check your device
The minimum useful audit on any Node.js project or server takes about a minute. Open a terminal in the project directory and run:
# 1. Find any @tanstack references in your dependency files
grep -r "@tanstack/" package.json package-lock.json pnpm-lock.yaml yarn.lock bun.lockb 2>/dev/null
# 2. List installed @tanstack packages with their resolved versions
npm ls --all 2>/dev/null | grep "@tanstack"
# (or `pnpm ls --recursive --depth Infinity | grep "@tanstack"` / `yarn why @tanstack/react-router`)
# 3. Check install timestamps on installed @tanstack packages (times printed in UTC)
TZ=UTC find node_modules/@tanstack -maxdepth 2 -name package.json -exec stat -f '%Sm %N' -t '%Y-%m-%d %H:%M:%S UTC' {} \; 2>/dev/null
# (BSD/macOS stat shown; on Linux use: -exec stat -c '%y %n' {} \; )
If none of those commands return anything, you have no @tanstack/* packages installed and you are clear for this incident. If they return something, cross-reference the resolved versions against the advisory at github.com/advisories/GHSA-g7cv-rxg3-hmpx. The advisory lists the exact compromised version numbers — only those specific versions are dangerous; earlier and later versions of the same packages are clean. The TanStack post-mortem also explicitly confirmed that the entire @tanstack/query*, @tanstack/table*, @tanstack/form*, @tanstack/virtual*, and @tanstack/store package families are clean — they are separate publishes and were not part of the compromised window.
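To make that cross-reference mechanical, flatten the installed tree into exact name@version pairs. A sketch, assuming npm’s --json output shape:
# Print every installed @tanstack package as name@version, one per line,
# ready to compare against the advisory's affected-version list.
npm ls --all --json 2>/dev/null | node -e '
  const tree = JSON.parse(require("fs").readFileSync(0, "utf8"));
  const seen = new Set();
  (function walk(deps) {
    for (const [name, info] of Object.entries(deps || {})) {
      if (name.startsWith("@tanstack/") && info.version) seen.add(name + "@" + info.version);
      walk(info.dependencies);
    }
  })(tree.dependencies);
  [...seen].sort().forEach((line) => console.log(line));
'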
On a host with multiple Node.js projects, run the same checks across every project directory. If you maintain Docker images, also inspect the relevant layers — a container built and pushed during the affected window may carry a poisoned package even if your live source tree does not.
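For container images, a quick spot-check that avoids running the application; the image name and /app path are placeholders for your own layout:
# List any @tanstack packages baked into an image's node_modules.
docker run --rm --entrypoint sh your-image:tag -c \
  'ls /app/node_modules/@tanstack 2>/dev/null || echo "no @tanstack packages found"'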
If you find a match, do not panic-uninstall. Capture evidence first: the file path, the version, the install timestamp, the lockfile entry. Then plan a controlled remediation in a separate change. Frantic rollbacks under pressure are how this kind of incident becomes two incidents.
A thank you to the security teams
This incident was caught fast. According to TanStack’s post-mortem, an external security researcher — ashishkurmi of StepSecurity — detected the compromise roughly 20 minutes after the first malicious publish, and reported it to TanStack with a full technical analysis of what had happened and how. Socket.dev followed up with a notification by phone. TanStack’s own maintainers responded openly, deprecated all 84 affected versions, engaged npm security to pull the tarballs from the registry server-side, and published a post-mortem within days that explained the attack mechanism honestly and without minimisation. The GitHub Security Advisory was published with specific affected version ranges.
That speed is not free. It exists because researchers at companies like StepSecurity, Socket, Snyk, Phylum, GitHub Security, and others run continuous, often-thankless monitoring of every new package version published to npm. It exists because individual researchers like ashishkurmi do the technical work of catching an active compromise inside a 20-minute window. It exists because TanStack’s own maintainers responded openly rather than defensively. The downside risk of an incident like this is enormous; the response time is what kept it small.
If you work in this ecosystem and you have not said thank you to those teams recently, this is a fair occasion.
Will this keep happening?
Probably, yes. The structural conditions that made this incident possible have not changed: a public registry that accepts millions of new package versions per week from millions of independent publishers; a build pipeline that authenticates correctly but has no inherent ability to verify what the workflow is doing; and a downstream ecosystem of automated dependency updates that pulls new versions into production within minutes.
These are not bugs. They are the operating model of modern open-source software, and that model has produced enormous value. But it is also a model in which the attack surface is the entire pipeline from source commit to deployed application, and where compromise can occur at any point along it.
So the question is not whether incidents like this will keep happening. The question is what changes — at the registry level, at the tooling level, at the funding level — could reduce the size of the blast radius.
A proposal: delayed deployment on npm
One of the simplest mitigations would be a registry-level publishing delay for new versions. When a package version is published, it would be visible in the registry but not installable as @latest for a defined cool-off window — say 24 to 48 hours. During that window, automated scanners and downstream maintainers would have time to inspect the new version. After the window, the version becomes installable normally.
This already exists at the consumer level. pnpm has a minimum-release-age setting. Renovate has age-based update policies. But these are opt-in, project-by-project, and most projects do not configure them. A registry-level default — even an opt-out one — would change the economics dramatically. A malicious version that exists for six minutes before being yanked would never reach a production install if the install layer enforced a 24-hour minimum age.
There are reasonable objections. Some legitimate workflows (urgent security patches, internal coordination releases) need to install a brand-new version immediately. But those workflows are a small fraction of total installs, and they are exactly the workflows where a human is already in the loop. A registry-level delay with documented bypass for emergency releases would catch the silent automated installs that drive almost all the real damage.
A proposal: AI labs funding free scanning of new npm releases
Continuous scanning of every new npm package version, for every public package, is technically straightforward but operationally expensive. Companies like Socket, Snyk, and Phylum do this commercially; their commercial work catches incidents like this one. Free, open scanning at the same depth — with results available publicly within minutes of a new publish — does not exist at the scale it could.
The AI labs are the most significant new beneficiaries of the npm ecosystem. Modern AI products are built on JavaScript front ends, TypeScript back ends, Node.js orchestration, and dozens of npm-distributed libraries. ChatGPT, Claude, Mistral, the rest of them — their products depend on the safety of this pipeline as much as any business on earth.
What if OpenAI, Anthropic, and the other major labs jointly funded a public scanning service for new npm releases? Continuous, open-source, fully transparent, with results published as a public feed. The cost would be modest at the scale these companies operate. The benefit — protecting the supply chain that their own products depend on — would be direct and substantial. And the public-good dimension would be real: every business that runs Node.js, not just AI businesses, would benefit.
It is the kind of pre-competitive infrastructure that the open-source ecosystem genuinely needs, that no single company will fund alone, and that the major AI labs are uniquely positioned to underwrite. Worth asking.
What we are doing for clients
We have audited the systems we operate for Kahunam clients against the published advisory and found no exposure. None of the affected package versions appear in our installed dependencies, our lockfiles, or our build artifacts.
That is the easy part. The harder, ongoing work is the discipline that lets you answer that question quickly: keeping inventories of where dependencies live, pinning versions deliberately, treating dependency updates as a reviewed change rather than a default action, and watching the advisory feed. We do this for the systems we own. If you are not sure how your own stack would handle the next incident of this kind, that is itself a useful answer — and a good starting point.