Tuesday, 12 May 2026
TanStack's CI/CD pipeline was hijacked via three chained vulnerabilities to publish malicious npm packages, GitLab announces major restructuring for the agentic era, and Thinking Machines debuts real-time multimodal interaction models
Today's Lead
TanStack Blog
Postmortem: TanStack npm Supply-Chain Compromise
Between May 11 and 12, 2026, attackers exploited three chained vulnerabilities in TanStack's CI/CD pipeline to publish 84 malicious versions across 42 npm packages. The attack chain began with a `pull_request_target` workflow flaw — a well-known GitHub Actions footgun that grants workflows triggered by external PRs access to repository secrets and a write-scoped token — followed by GitHub Actions cache poisoning, and finally OIDC token extraction to bypass npm's two-factor publishing safeguards. The injected payloads harvested credentials, including AWS, GCP, and GitHub tokens, and attempted lateral movement into other packages. External security researchers detected the intrusion within roughly 20 minutes, containing the blast radius before widespread downstream adoption could occur. The postmortem is candid about the systemic gaps that made the attack possible: no alerting on new npm publishes, unaudited workflow files that had accumulated dangerous permissions over time, and no per-publish OIDC review gates despite the project using OIDC for identity federation. The `pull_request_target` vector is notable because GitHub specifically warns about its risks in its documentation, yet it continues to appear in popular open-source projects — the combination of read access to fork contents and write access to the base repository's secrets is a recurring trap for any project that accepts external contributions. For maintainers, the incident reinforces three concrete controls: audit all `pull_request_target` workflows immediately, enable npm publish notifications at the package level, and treat CI pipeline files with the same review rigor applied to application code.
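The footgun is worth seeing concretely. A minimal sketch of the vulnerable pattern — workflow names and steps are hypothetical, not taken from TanStack's repository:

```yaml
# VULNERABLE: pull_request_target runs in the context of the base repo,
# so repository secrets are available -- but this workflow then checks
# out and executes code from the untrusted fork.
on: pull_request_target

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Checking out the PR head brings attacker-controlled code
          # into a job that holds base-repo secrets.
          ref: ${{ github.event.pull_request.head.sha }}
      # Fork-controlled install/test scripts now run with the secret in env.
      - run: npm ci && npm test
        env:
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
```

The safer patterns are to use the plain `pull_request` trigger (which withholds secrets from fork-originated runs), or to split any privileged step into a separate workflow that never executes fork-supplied code.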
Also today
GitLab Blog
GitLab Act 2: Restructuring for the Agentic Era
GitLab CEO Bill Staples announced a sweeping restructuring intended to position the company as the enterprise platform for AI-driven software creation, which the company frames as a fundamental shift from "software built by people" to "software built by machines, directed by people." The restructuring includes flattening management by removing up to three layers in some functions, reducing geographic footprint by up to 30% in countries with small teams, and reorganizing R&D into roughly 60 smaller autonomous teams — nearly doubling the current count — each with end-to-end ownership of their domain. Notably, the company is retiring its CREDIT values framework (Collaboration, Results, Efficiency, Diversity, Iteration, Transparency) in favor of three new values: Speed with Quality, Ownership Mindset, and Customer Outcomes. The removal of "Diversity" from the top-level values is significant, though the company places it as a sub-bullet under Customer Outcomes. The strategic rationale centers on the Jevons paradox applied to software: as AI dramatically reduces the cost of producing software, demand will expand to consume the savings, growing the overall market rather than contracting it. The company explicitly argues this expansion makes the developer platform market — currently measured in hundreds of dollars per user per month, up from tens last year — worth fighting for aggressively. The announcement comes as GitLab's stock has declined roughly 50% over the past year, and the company faces existential questions about whether its core value proposition survives a world where AI agents replace much of what human developers currently do on platforms like GitLab.
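The Jevons claim is, at bottom, a claim about demand elasticity, which a few invented numbers make concrete — all figures below are hypothetical, not GitLab's:

```python
# Illustrative arithmetic for the Jevons-paradox argument: if AI cuts the
# unit cost of producing software and demand is sufficiently elastic,
# total spend on software grows rather than shrinks.

def total_spend(unit_cost: float, base_demand: float, elasticity: float) -> float:
    """Constant-elasticity demand: quantity scales with unit_cost ** (-elasticity)."""
    demand = base_demand * unit_cost ** (-elasticity)
    return unit_cost * demand

before = total_spend(unit_cost=1.0, base_demand=100, elasticity=1.5)
after = total_spend(unit_cost=0.1, base_demand=100, elasticity=1.5)  # 10x cheaper

print(before, after)  # with elasticity > 1, a 10x cost drop grows total spend
```

With elasticity above 1, the cheaper software gets, the more the total market spends on it — the expansion GitLab is betting on. With elasticity below 1, the same cost drop shrinks the market, which is the bear case the stock decline reflects.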
seangoedecke.com
Software Engineering May No Longer Be a Lifetime Career
Sean Goedecke argues that widespread AI adoption could transform software engineering from a stable lifetime profession into a shorter-term career window of 10–15 peak years — analogous to professional athletics or manual trades reshaped by power tools. The core mechanism is skill atrophy: engineers who delegate code writing to AI agents may stop developing and maintaining the deep mental models that make senior engineers productive, since those models are built and reinforced through hands-on coding. The author acknowledges the economic pressure is largely coercive — engineers who refuse AI assistance become uncompetitive, so the choice is adoption or exit rather than informed preference. What makes this piece more interesting than generic AI-career doom is its specificity about the structural change: if the skills that compound most at senior levels (debugging, architectural reasoning, understanding edge cases) are the same skills that erode fastest under AI delegation, then the career trajectory changes fundamentally. A 10-year engineer who spent years delegating to AI may find themselves with the accumulated compensation of a senior engineer and the practical depth of a mid-level one, at precisely the career stage where depth is the primary remaining leverage. The piece recommends thinking about peak earning potential and career runway earlier than previous generations of engineers needed to, while the tools for building genuine expertise are still accessible.
Thinking Machines AI
Thinking Machines Lab Unveils Real-Time Multimodal Interaction Models
Thinking Machines Lab — the AI company led by former OpenAI researchers including Mira Murati — publicly previewed its "interaction models" paradigm, a fundamental rethinking of how AI systems engage with humans that moves away from turn-based request/response toward continuous, real-time collaboration across audio, video, and text. The flagship model, TML-Interaction-Small (a 276B parameter MoE with 12B active parameters), processes continuous streams in 200ms micro-turns using an encoder-free early fusion architecture, enabling it to listen, watch, think, and respond simultaneously — handling interruptions, simultaneous speech, and visual triggers without explicit state transitions. New benchmarks were created to measure capabilities that existing evals don't capture: TimeSpeak (can the model initiate speech at user-specified moments?), CueSpeak (can it react to contextual triggers mid-conversation?), RepCount-A (continuous visual tracking and counting from video), and ProactiveVideoQA (answering questions at precisely the right moment in a video stream). The model reportedly beats GPT-4o-Realtime-2 and Gemini 3.1-Flash on several of these metrics. The design inversion here is significant: rather than adding speech and turn-taking interfaces to a text LLM as a post-hoc layer — the approach that produced the stilted feel of current realtime voice products — the model is trained from scratch with continuous interaction as a first-class property. The practical implications for product builders are substantial: capabilities like pushup counting, slouch detection, and real-time translation that currently require specialized models may become zero-shot capabilities of a single interaction-native model.
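Thinking Machines has not published implementation details, but the control-flow difference from turn-based chat can be sketched as a toy loop — every name and the trigger logic here are invented for illustration, not taken from TML:

```python
import asyncio
from collections import deque

MICRO_TURN_MS = 200  # the 200 ms cadence described in the announcement

class InteractionLoop:
    """Toy continuous-interaction loop: instead of waiting for a complete
    user turn, the model is invoked every micro-turn on whatever events
    have streamed in, and may respond or stay silent on each tick."""

    def __init__(self, model):
        self.model = model    # hypothetical callable: list of events -> action or None
        self.inbox = deque()  # audio/video/text events as they arrive

    def feed(self, event: str) -> None:
        self.inbox.append(event)

    async def run(self, micro_turns: int) -> list:
        outputs = []
        for _ in range(micro_turns):
            events = list(self.inbox)  # model sees partial input every tick
            self.inbox.clear()
            action = self.model(events)
            if action is not None:     # silence is a valid per-tick decision
                outputs.append(action)
            await asyncio.sleep(MICRO_TURN_MS / 1000)
        return outputs

# A stand-in "model" that speaks only on a contextual trigger,
# loosely approximating the CueSpeak behaviour described above.
def cue_model(events):
    for e in events:
        if "go" in e:
            return "responding to cue"
    return None

async def demo():
    loop = InteractionLoop(cue_model)
    loop.feed("user is talking")
    loop.feed("ok go")
    return await loop.run(micro_turns=2)

print(asyncio.run(demo()))  # → ['responding to cue']
```

The inversion the announcement describes is that a real interaction-native model makes this per-tick speak-or-stay-silent decision inside the network itself, rather than in wrapper code like this around a turn-based LLM.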
Nvidia Labs
Nvidia Releases cuda-oxide: An Official Rust-to-CUDA Compiler
Nvidia Labs has published cuda-oxide, an experimental compiler that allows GPU kernels to be written in idiomatic Rust and compiled directly to PTX — CUDA's portable intermediate representation — rather than requiring CUDA C, domain-specific languages, or unsafe Rust bindings to CUDA APIs. The compiler operates as a rustc codegen backend, meaning it participates in the standard Rust compilation pipeline and can leverage Rust's type system, ownership model, and borrow checker to catch memory safety errors at compile time rather than at GPU runtime. Asynchronous GPU computing is supported through lazy DeviceOperation graphs with async/await syntax, mapping Rust's concurrency model onto CUDA's stream-based execution model. The project is explicitly early-stage and experimental, but its origin inside Nvidia Labs gives it different significance than community Rust-GPU efforts: this is the GPU vendor acknowledging that Rust is a serious target for GPU programming and investing engineering resources accordingly. For the systems programming community, this matters beyond the specific implementation — it signals that the GPU computing ecosystem, which has been CUDA C-dominated for almost two decades, is now seriously engaging with Rust as a memory-safe alternative for kernel development. The practical implications for ML infrastructure engineers who want the benefits of Rust's safety guarantees without abandoning CUDA's performance characteristics are substantial if the project reaches production maturity.
Medium
If AI Writes Your Code, Why Use Python?
The article challenges one of software engineering's most durable assumptions: that Python's primary value is developer ergonomics and rapid iteration, properties that matter far less when an AI agent is doing most of the writing. Modern AI models achieve 80%+ scores on systems-language coding benchmarks, reducing the practical friction gap between writing Python and writing Rust or Go to near zero for AI-assisted workflows. The piece cites concrete evidence that this shift is already underway in production: Microsoft ported the TypeScript compiler to Go for 10x performance gains, and Anthropic used Claude to rewrite 100,000 lines of production C code in Rust. Python's ecosystem increasingly depends on Rust backends for performance-critical operations anyway, making it a thin ergonomics layer over compiled code in many cases. The argument's force comes from flipping the developer value proposition: if human authoring ergonomics are no longer the bottleneck — and if developers now spend more time on architecture, code review, and integration than raw writing — then runtime performance, type system strength, and memory safety become more valuable selection criteria than syntax friendliness. This doesn't predict Python's death, but it does suggest that the era of "use Python because engineers can ship faster" as an unchallenged default is ending, replaced by a more genuine tradeoff analysis where compiled languages are competitive options rather than productivity sacrifices.
Socket Security
fsnotify Maintainer Dispute Triggers Supply Chain Security Review Across Go Ecosystem
A dispute over maintainer access in fsnotify — a Go filesystem notification library with 321,000 dependent packages spanning a significant portion of the Go ecosystem — prompted supply chain security concerns when maintainer Martin Tournoij removed contributor Yasuhiro Matsumoto (mattn) and others from the project's GitHub organization. Tournoij's stated rationale was that the removed accounts held historical access inconsistent with active maintenance; Matsumoto countered that he had made legitimate recent contributions, including bug fixes and resolution of a year-long release drought that had frustrated downstream users. No evidence of code compromise emerged from the access change, but the governance ambiguity was sufficient to prompt Kubernetes maintainers — among fsnotify's most prominent downstream users — to evaluate whether to fork or monitor the project as a precaution. The incident illustrates a persistent supply chain security vulnerability that is structural rather than technical: critical open-source infrastructure is frequently maintained by one or two individuals with no formal succession planning, no governance documentation, and no off-ramp for contributors whose participation status becomes disputed. When access changes happen suddenly in dependencies at this scale, downstream consumers have no reliable mechanism to distinguish routine maintainer transitions from hostile takeovers until security researchers investigate — a window that sophisticated attackers know how to exploit. The Kubernetes team's response (evaluating a fork) is the correct instinct; the broader lesson is that ecosystem dependencies on single-maintainer packages with unclear succession deserve pre-emptive governance review rather than reactive crisis management.
zackoverflow.dev
Zig vs Rust in 2026: Rust Wins Because of AI Agents
The author argues that while Zig offers superior ergonomics for human-driven development — cleaner compile-time evaluation, more straightforward custom allocators, and lower conceptual overhead — Rust has become the better practical choice in 2026 specifically because AI coding agents have changed the dominant code production workflow. When large volumes of code are generated by AI agents rather than handwritten by a human deeply familiar with the codebase, Rust's stronger type system, memory safety guarantees, and tooling like `miri` provide a systematic catch for the classes of errors that agents produce most frequently. Zig's advantages — superior for a skilled programmer who can manually reason about memory and avoid UB — become less differentiating when the "programmer" is an agent whose reasoning about memory safety is less reliable than a careful human's. The piece is interesting because it applies a concrete economic argument: the relevant comparison is not "which language is better for an expert?" but "which language produces better outcomes at the margin, given that a significant fraction of the code will be agent-generated and reviewed by humans under time pressure?" Rust's tooling answers this question better than Zig's current toolset. The implicit prediction is that language ecosystems which invest in machine-checkable correctness properties will compound advantages in an AI-assisted coding world, even if those properties carry ergonomic costs for human authors.
flyingpenguin.com
Did Cloudflare Blackmail Canonical During a DDoS Attack?
The author examines the timeline of a major DDoS attack against Canonical — Ubuntu's parent company — and asks whether Cloudflare's role in the incident crossed from vendor into something resembling extortion. The attack was executed via Beamed, a commercial DDoS-for-hire service that specifically markets techniques for bypassing Cloudflare protection — and which is itself hosted on Cloudflare infrastructure, creating a conflict of interest where the same company profits from both the attack service and the protection service. Canonical's documented response shows they moved only their two most critical endpoints behind Cloudflare protection approximately four hours into the attack, at the point where service degradation costs became severe enough to force a decision. The author draws a parallel to classic protection-racket economics: Cloudflare benefits financially whether customers are being attacked or protected, an incentive structure that can resemble a racket even if no explicit coordination occurred. Cloudflare's consistent position is that it cannot control who uses its infrastructure for attack services, while simultaneously being the primary vendor organizations turn to for protection against those attacks. The piece does not allege deliberate coordination, but it does argue that the market structure itself is sufficiently compromised that regulators and large infrastructure consumers should be asking uncomfortable questions about whether the dominant DDoS protection vendor having financial relationships with DDoS attack infrastructure represents an acceptable conflict of interest.
404 Media
Students Boo Commencement Speaker Who Compared AI to the Industrial Revolution
At the University of Central Florida's May 8, 2026 commencement ceremony, Gloria Caulfield — vice president of strategic alliances at Tavistock Group — was met with widespread booing and sustained heckling from graduating students when she characterized artificial intelligence as "the next industrial revolution" and urged graduates to embrace it. The reaction, captured on UCF's official YouTube stream, came primarily from humanities graduates whose fields are disproportionately affected by AI-generated content displacement. The episode is a useful barometer of generational attitudes toward AI rhetoric in a specific context: commencement speeches invoke the future in ways that can feel platitudinous in any year, but in 2026 the AI framing carries a specific sting for students who spent their college years watching writing, design, and analysis work be automated and who are entering a labor market where those skills command less premium than when they enrolled. The reaction matters less as a data point about AI sentiment broadly and more as evidence that the optimistic AI-as-opportunity framing, standard in corporate and institutional contexts, has lost persuasive power with the cohort most immediately affected by its labor market implications. For anyone communicating about AI transformation to technical or creative audiences: the gap between "AI creates opportunities" as experienced from the executive level and as experienced from the graduate entry level has become wide enough to produce audible friction when the two frames collide in the same room.