Tuesday, 14 April 2026

30 WordPress plugins backdoored after marketplace acquisition, GitHub ships native stacked PRs, and Stanford reveals a sharp AI optimism gap between insiders and the public

Today's Lead

Anchor.host

Someone Bought 30 WordPress Plugins and Planted a Backdoor in All of Them

A malicious actor purchased the 31-plugin Essential Plugin portfolio through the Flippa marketplace for a six-figure sum in early 2025, then quietly injected backdoors into every plugin in August — eight months before activating them for SEO spam attacks starting April 5–6, 2026. WordPress.org removed all 31 plugins in a single day once the attack was discovered. The malware hid in wp-config.php, serving hidden content to Googlebot for search ranking manipulation, while using an Ethereum smart contract for command-and-control domain resolution to evade traditional takedowns. The incident exposed a critical gap in the WordPress plugin ecosystem: there is no review mechanism triggered when plugin ownership changes hands on third-party marketplaces, making established plugins with large install bases an attractive acquisition target for threat actors willing to play a long game.
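The cloaking technique described (serving one page to Googlebot and a different page to regular visitors) can be checked from the outside by fetching the same URL under two User-Agent strings and comparing the responses. A minimal detection sketch; the fetcher is injected so the check runs offline, and the size threshold is an illustrative heuristic, not taken from the incident writeup:

```python
from typing import Callable

GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"

def looks_cloaked(fetch: Callable[[str, str], str], url: str) -> bool:
    """Return True if the page served to Googlebot differs materially
    from the page served to an ordinary browser."""
    bot_html = fetch(url, GOOGLEBOT_UA)
    user_html = fetch(url, BROWSER_UA)
    # Normalize whitespace so cosmetic differences don't trigger a false alarm.
    bot_norm = " ".join(bot_html.split())
    user_norm = " ".join(user_html.split())
    if bot_norm == user_norm:
        return False
    # Flag pages that grow substantially when a crawler asks: injected SEO
    # spam typically adds link and keyword blocks served only to bots.
    return len(bot_norm) > len(user_norm) * 1.1
```

A comparison like this is how cloaking campaigns are typically spotted; the attackers' Ethereum-based C2 resolution is precisely an attempt to survive after detection, since there is no registrar to serve a takedown.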

Read →

Also today

GitHub

GitHub Stacked PRs

GitHub has launched gh-stack, a native tool for stacked pull requests — a workflow pattern where a large change is decomposed into a chain of smaller, focused PRs, each targeting the one before it, all ultimately landing on main. The tool ships with a new `gh stack` CLI command for branch management and automated rebasing, a stack map visualization in the GitHub UI so reviewers can navigate the chain, and AI agent integration support. Stacked PRs have been a widely requested feature among developers working on large features or refactors, where monolithic PRs create review bottlenecks and feedback quality degrades. Previously, developers relied on third-party tools like Graphite or on manual workflows; native GitHub support normalizes the practice for teams that weren't willing to adopt external tooling. The feature is currently in private preview.
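The stacking convention itself is tool-independent: given an ordered stack of branches, the first PR targets the trunk and each subsequent PR targets the branch beneath it. A minimal sketch of that mapping (branch names are hypothetical, and this models the convention, not the `gh stack` implementation):

```python
def stack_targets(branches: list[str], trunk: str = "main") -> list[tuple[str, str]]:
    """Map an ordered stack of branches to (head, base) PR pairs:
    the first branch targets the trunk, and each subsequent branch
    targets the one beneath it in the stack."""
    pairs = []
    base = trunk
    for branch in branches:
        pairs.append((branch, base))
        base = branch
    return pairs

stack_targets(["db-schema", "api-endpoint", "ui-form"])
# → [("db-schema", "main"), ("api-endpoint", "db-schema"), ("ui-form", "api-endpoint")]
```

When the bottom PR merges, the next PR in the chain must be retargeted at main and rebased; that retarget-and-rebase bookkeeping is the part stacked-PR tooling automates.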

Read →

TechCrunch

Stanford AI Index Report Reveals Sharp Optimism Gap Between AI Experts and the Public

Stanford's 2026 AI Index Report documents a striking divergence in how AI experts and the American public perceive the technology's societal impact. While 56% of AI experts believe AI will positively affect the US, only 10% of Americans describe themselves as more excited than concerned about increased AI use. The gap is particularly stark in employment: 73% of experts expect positive effects on jobs, versus 23% of the general public — with 64% of Americans worried AI will cause job losses. Trust in government AI regulation is also low, with only 31% of Americans expressing confidence in federal oversight, the lowest rate among developed nations surveyed. The report highlights a structural communication failure: the AI industry focuses discourse on long-term and abstract risks, while ordinary people face concrete economic anxieties from AI-driven displacement and disruption happening now.

Read →

aphyr

The Future of Everything Is Lies, I Guess: Safety

In the latest installment of his ongoing series, Kyle Kingsbury (aphyr) argues that AI safety as currently practiced is structurally insufficient, not merely incomplete. His core thesis: any alignment technique applied to a model can equally be removed or inverted, the knowledge and hardware required to build unaligned models are increasingly accessible, and even well-aligned models fail unpredictably and are vulnerable to prompt injection attacks that current defenses cannot reliably block. Aphyr catalogues concrete harms already materializing at scale — ML-powered fraud, deepfakes, harassment, and military autonomous targeting — and challenges the industry's framing that better alignment methods solve the problem. The piece argues that certain AI capabilities may be fundamentally incompatible with safety, and that the question should be whether some capabilities ought to exist at all rather than how to constrain them after the fact.

Read →

Cloudflare Blog

Building a CLI for All of Cloudflare

Cloudflare has released a technical preview of `cf`, a new unified CLI designed to eventually cover all of Cloudflare's 3,000+ API operations — many of which currently have no CLI access at all. To keep the CLI in sync with Cloudflare's rapid product pace, the team built a TypeScript-based schema system that generates commands, SDKs, configuration, MCP server definitions, and Terraform resources from a single source, rather than maintaining each interface separately. Alongside the CLI preview, Cloudflare is releasing Local Explorer in open beta: a browser-based UI available within Wrangler and the Vite plugin that lets developers inspect and manipulate local KV, R2, D1, Durable Objects, and Workflow state during development — mirroring the same API surface available remotely. The local/remote API parity is designed to make agent-driven workflows more reliable, since agents can introspect resources consistently regardless of whether they're pointed at local dev or production.
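The single-source idea generalizes: one declarative description of an API operation can be projected into each interface that needs it. A toy sketch of the pattern in Python (Cloudflare's actual schema system is TypeScript-based, and the `Operation` shape here is invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Operation:
    """One declarative description of an API operation.
    Illustrative only; not Cloudflare's actual schema format."""
    resource: str
    action: str
    params: list[str]

def to_cli_command(op: Operation) -> str:
    """Project the operation into a CLI invocation shape."""
    flags = " ".join(f"--{p}" for p in op.params)
    return f"cf {op.resource} {op.action} {flags}".strip()

def to_sdk_method(op: Operation) -> str:
    """Project the same operation into an SDK call shape."""
    args = ", ".join(f"{p}=..." for p in op.params)
    return f"client.{op.resource}.{op.action}({args})"

op = Operation("kv", "get", ["namespace", "key"])
# to_cli_command(op) → "cf kv get --namespace --key"
# to_sdk_method(op)  → "client.kv.get(namespace=..., key=...)"
```

The payoff of generating every surface from one schema is that a new API operation appears in the CLI, SDKs, MCP definitions, and Terraform resources in the same release, instead of each interface lagging independently.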

Read →

kirancodes.me

Lean Proved This Program Correct; Then I Found a Bug

The author fuzz-tested lean-zip, a formally verified zlib implementation written in Lean 4, and discovered two vulnerabilities despite the program's formal correctness proof. The first is a heap buffer overflow in Lean's own runtime caused by unchecked integer overflow when allocating large arrays; the second is a denial-of-service in the ZIP parser, which accepts attacker-controlled compressedSize values without bounds checking. Critically, the verified application code itself was clean — 105+ million fuzzing executions found no memory safety issues in the verified portion. Both bugs lived in unverified components: the language runtime and the parser logic that sits outside the verification boundary. The piece is a sharp illustration of a fundamental truth about formal verification: a proof is only as strong as the scope of what it covers, and production software always depends on unverified substrate.
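The parser bug class is easy to illustrate: a length field read from attacker-controlled input must be validated against both a sanity cap and the bytes actually present before it drives any allocation or slicing. A toy record parser sketch (the record format and the cap are invented for illustration, not lean-zip's actual code or the real ZIP format):

```python
import struct

MAX_COMPRESSED = 1 << 30  # illustrative sanity cap, not from the ZIP spec

def parse_entry(buf: bytes) -> bytes:
    """Parse a toy record: a 4-byte little-endian compressedSize followed
    by that many payload bytes. The size field is attacker-controlled, so
    it is checked against a hard cap and against the data actually present
    before anything is allocated or sliced."""
    if len(buf) < 4:
        raise ValueError("truncated header")
    (size,) = struct.unpack_from("<I", buf, 0)
    if size > MAX_COMPRESSED:
        raise ValueError(f"declared size {size} exceeds cap")
    if size > len(buf) - 4:
        raise ValueError("declared size exceeds available data")
    return buf[4 : 4 + size]
```

Skipping either check is exactly the unverified-boundary failure the post describes: the proof can guarantee properties of what happens after parsing while saying nothing about whether a hostile size field is ever rejected.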

Read →

Servo

Servo 0.1.0: The Browser Engine Arrives on crates.io as an Embeddable Library

Servo has shipped version 0.1.0 on crates.io, its first LTS release, marking the browser engine's transition from a standalone experimental project to an embeddable library that can be pulled into Rust applications as a standard dependency. The release follows five development milestones since October 2025 and introduces a formal Long-Term Support programme offering half-yearly major upgrades with continuous security patches — signalling institutional commitment to production stability. Servo can now be used to add web rendering and layout capabilities to native applications, game engines, and system software without requiring developers to maintain their own browser infrastructure. Simon Willison separately published a research note showing Claude Code building a CLI screenshot tool on top of Servo that works for static pages, confirming practical embeddability.

Read →

Google Search Central Blog

Google Announces Spam Policy for Back Button Hijacking

Google has formalized a spam enforcement policy targeting back button hijacking, a technique where websites manipulate browser history to redirect users to unintended destinations when they press the browser back button rather than returning to their previous page. The practice is used to inflate engagement metrics, manufacture artificial traffic, and trap users in advertising flows. Under the new policy, sites that implement back button hijacking will be classified as spam and penalized in search results, consistent with Google's broader Search Essentials guidelines around user-hostile practices. For web developers, the policy is a reminder that browser navigation APIs exist to serve users, not to be weaponized for engagement optimization, and that interference with expected navigation behavior is now explicitly an SEO risk.

Read →

DuckLake

DuckLake v1.0: The Lightweight Lakehouse Format Reaches Production Readiness

DuckLake has shipped v1.0 with a production-readiness declaration and backward-compatibility guarantee, one year after its initial release. The format's core design choice — storing all lakehouse metadata in a database rather than scattered files — addresses the small-file fragmentation problem that plagues traditional lakehouse formats under frequent incremental writes. The 1.0 release adds data inlining for small DML operations, column-based sorted tables for improved query pruning, bucket partitioning for high-cardinality columns, GEOMETRY and VARIANT type support, and experimental Iceberg v3-compatible deletion vectors. Implementations now exist in Apache DataFusion, Spark, Trino, and Pandas, and the DuckDB extension has become one of the top-10 most downloaded in the ecosystem. The rapid trajectory from concept to production-grade specification in a single year positions DuckLake as a credible challenger to Iceberg and Delta in the data lakehouse space.
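The core design choice can be illustrated with a toy catalog: a snapshot is a handful of rows, and an incremental write is a single transaction, so frequent small writes never spawn new metadata files on disk. The schema below is invented for illustration and is not DuckLake's actual catalog layout:

```python
import sqlite3

# Toy lakehouse catalog: all table metadata lives in one transactional
# database instead of a tree of metadata files alongside the data.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE snapshot (id INTEGER PRIMARY KEY, committed_at TEXT);
    CREATE TABLE data_file (
        snapshot_id INTEGER REFERENCES snapshot(id),
        path TEXT, row_count INTEGER
    );
""")

def commit_snapshot(files: list[tuple[str, int]]) -> int:
    """An incremental write is one transaction: a new snapshot row plus
    one row per newly written Parquet file. No metadata files are created,
    so small frequent writes don't fragment the metadata layer."""
    with db:  # commits atomically, rolls back on error
        cur = db.execute(
            "INSERT INTO snapshot (committed_at) VALUES (datetime('now'))"
        )
        sid = cur.lastrowid
        db.executemany(
            "INSERT INTO data_file VALUES (?, ?, ?)",
            [(sid, path, rows) for path, rows in files],
        )
    return sid

s1 = commit_snapshot([("part-0.parquet", 1000)])
s2 = commit_snapshot([("part-1.parquet", 50)])  # a tiny write is still just rows
```

In a file-based format, each of those commits would have written new manifest or log files; with a database-backed catalog, time travel and conflict detection become ordinary transactional queries over snapshot rows.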

Read →

Blackmagic Design

DaVinci Resolve Launches a Dedicated Photo Editor

Blackmagic Design has added a dedicated Photo workspace to DaVinci Resolve 21, bringing Hollywood-grade color grading tools to still photography for the first time. The editor includes Resolve's full primary correction and curve toolset, Power Windows, qualifiers, and professional monitoring scopes, alongside more than 100 GPU-accelerated AI tools, including Magic Mask, Depth Map, Relight FX, Face Refinement, and AI SuperScale upscaling. Native RAW support covers Canon, Fujifilm, Nikon, Sony, and iPhone ProRAW files up to 32K or 400+ megapixels. Camera tethering for Sony and Canon allows live capture with real-time grading during shoots, and Blackmagic Cloud enables team collaboration. The release positions Resolve as a direct competitor to Lightroom and Capture One, offering photographers access to the same tools used in professional film and television post-production, with no separate pricing — the Photo workspace is included in the existing Resolve package.

Read →