Saturday, 25 April 2026
Google commits $40B to Anthropic, GPT-5.5 lands in the API with a new prompting guide, and a wave of Claude cancellations goes viral
Today's Lead
TechCrunch
Google Commits $40 Billion to Anthropic in Largest-Ever AI Investment
Google is committing up to $40 billion to Anthropic: $10 billion in immediate cash at a $350 billion post-money valuation, a further $30 billion tied to performance milestones, plus 5 gigawatts of Google Cloud compute over five years giving Anthropic direct access to TPU infrastructure. The investment comes just days after Anthropic's Claude Code postmortem and amid broader questions about AI model quality at scale. For Google, it's a strategic hedge: it keeps TPUs at the center of frontier training workloads and secures a meaningful AI partnership whatever antitrust outcomes mean for its own models. For Anthropic, the capital removes near-term compute constraints while raising the question of how independent the company can remain with $40 billion of Google dependency.
Also today
Nicky Reinert
Claude Cancellations Go Viral as Users Surface Token Limits and Quality Complaints
A Claude Pro user's cancellation writeup became one of the week's top Hacker News posts (854 points, 502 comments), detailing persistent frustrations: token limits exhausted less than two hours into a single coding session, lazy solutions to coding problems, unexplained usage-limit warnings, and a support team that couldn't explain what was happening. The post landed the day after Anthropic published a postmortem attributing recent quality degradation to three harness bugs. The HN discussion reveals a trust gap: even if the bugs are now fixed, trust erodes when degradation goes unexplained for weeks, documentation is inconsistent, and token economics feel opaque. For subscription AI products, user trust is a compounding asset, or a compounding liability.
Read →
Simon Willison
GPT-5.5 Hits the API — and OpenAI Says Start Your Prompts From Scratch
With GPT-5.5 now available in the API, OpenAI published a prompting guide with a blunt opening: treat it as a new model family, not a drop-in replacement. The recommended migration path is to start from the smallest prompt that preserves the product contract, then tune reasoning effort, verbosity, tool descriptions, and output format against real examples, rather than carrying over instruction stacks optimized for older versions. One practical tip: send a short user-visible status update before tool calls in multi-step tasks to prevent the model from appearing frozen during long operations. OpenAI's Codex tool already does this. The broader signal is that as reasoning models mature, prompt engineering is shifting from clever phrasing to architectural decisions around effort, verbosity, and output structure.
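As a rough sketch of what that migration looks like in practice, assuming GPT-5.5 keeps the Responses API surface GPT-5 uses today (the instruction wording, parameter values, and task below are illustrative, not taken from OpenAI's guide):

```python
from openai import OpenAI

client = OpenAI()

# Start from the smallest prompt that preserves the product contract,
# rather than porting an instruction stack tuned for an older family.
INSTRUCTIONS = (
    "You are a code-review assistant. Reply as a numbered list of issues. "
    # The guide's tip, paraphrased: surface a short user-visible status
    # line before each tool call so multi-step runs don't look frozen.
    "Before each tool call, emit one short status line saying what you "
    "are about to do."
)

response = client.responses.create(
    model="gpt-5.5",                 # per today's API launch
    instructions=INSTRUCTIONS,
    reasoning={"effort": "medium"},  # tune against real examples
    text={"verbosity": "low"},       # contract: terse, structured output
    input="Review this diff: ...",
)
print(response.output_text)
```

Starting this small keeps the tuning loop legible: each knob (effort, verbosity, tool descriptions) gets adjusted against observed behavior instead of inherited prompt folklore.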
Read →
It's FOSS
Firefox Quietly Ships Brave's Rust-Based Adblock Engine
Firefox 149 (released March 2026) shipped 'adblock-rust,' Brave's open-source Rust-based ad and tracker blocking engine, without any mention in the release notes. The feature is disabled by default, with no UI and no bundled filter lists: just the engine, activatable via config flags and usable with standard lists like EasyList and EasyPrivacy. The integration suggests Mozilla is building a native content-blocking path independent of extensions, significant given that Chrome's Manifest V3 transition has degraded extension-based blocking across Chromium browsers. Waterfox already adopted the same engine. Whether Firefox surfaces this as a first-class feature is unknown, but the groundwork is laid.
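The same engine is usable standalone: Brave publishes it as a crate, with Python bindings on PyPI as `adblock` (the python-adblock project). A minimal sketch, assuming that package's FilterSet/Engine API; Firefox's integration wires the identical matching logic in natively:

```python
# Standalone use of Brave's adblock-rust engine via the python-adblock
# bindings (pip install adblock). Sketch assumes that package's API.
import adblock

# In practice you'd load the full EasyList/EasyPrivacy text; two toy
# rules stand in for it here.
rules = "\n".join([
    "||ads.example.com^",  # block requests to the ads.example.com host
    "/banner/*/img^",      # block matching banner image paths
])

filter_set = adblock.FilterSet()
filter_set.add_filter_list(rules)
engine = adblock.Engine(filter_set=filter_set)

result = engine.check_network_urls(
    url="https://ads.example.com/track.js",
    source_url="https://news.example.org/",
    request_type="script",
)
print(result.matched)  # True: this request would be blocked
```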
Read →
GitHub
Matz Ships Spinel, a Ruby AOT Compiler with Up to 80x Speedups
Yukihiro Matsumoto (Matz), Ruby's original designer, published spinel — an ahead-of-time compiler that transforms Ruby source into standalone native executables via whole-program type inference and optimized C code generation. Benchmarks show an average 11.6x speedup over the standard Ruby interpreter and up to 80x on compute-intensive workloads. The compiler is written in Ruby and self-hosting. Beyond the performance numbers, the significance is that Ruby's creator is directly addressing one of the language's longest-standing limitations rather than leaving it to the community. Whether spinel handles enough of Ruby's dynamic semantics to be useful for real applications — not just benchmarks — will determine if this becomes a genuine deployment option or a proof of concept.
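The intuition behind those numbers: once whole-program analysis proves a call site monomorphic, the compiler can emit direct native code with no dynamic dispatch or boxed integers. A toy sketch of that pipeline stage (everything below is invented for illustration; spinel's actual compiler lowers a full typed AST):

```python
# Toy AOT pipeline in the spirit of spinel: pretend whole-program
# analysis has proven fib is only ever called with Integers, then emit
# monomorphic C with no dispatch or boxing. Hypothetical throughout.

RUBY_SOURCE = """
def fib(n)
  n < 2 ? n : fib(n - 1) + fib(n - 2)
end
puts fib(30)
"""

# Stand-in for the inference pass's verdict: fib :: Integer -> Integer.
INFERRED = {"fib": ("long", ["long"])}

def emit_c(name: str, ret: str, params: list[str]) -> str:
    args = ", ".join(f"{t} a{i}" for i, t in enumerate(params))
    # The specialized body is hand-written here; a real compiler would
    # generate it from the typed AST rather than from a template.
    return (
        "#include <stdio.h>\n"
        f"{ret} {name}({args}) {{\n"
        f"  return a0 < 2 ? a0 : {name}(a0 - 1) + {name}(a0 - 2);\n"
        "}\n"
        f'int main(void) {{ printf("%ld\\n", {name}(30)); return 0; }}\n'
    )

print(emit_c("fib", *INFERRED["fib"]))
```

Compiling the emitted C gives a tight native loop; the interpreter, by contrast, pays for method lookup and integer boxing on every call, which is where headline multipliers like 80x come from on compute-bound code.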
Read →
GitHub
SDL 3 Gains Full DOS Support via DJGPP
A merged pull request adds comprehensive DOS platform support to SDL 3 through DJGPP (DJ Delorie's GNU compiler toolchain), covering VGA/VESA graphics, Sound Blaster 16-bit stereo audio at up to 44.1 kHz, PS/2 keyboard and mouse, joystick input, and a cooperative threading model with mutexes and semaphores. CMake cross-compilation and CI integration are included. SDL's platform list — spanning Windows, macOS, Linux, iOS, Android, and embedded targets — now includes DOS. The practical use case is narrow, but the port is a testament to SDL 3's hardware abstraction quality: if the HAL is clean enough, DOS is just another target. Useful for retrocomputing projects and new software targeting classic hardware.
Read →
hhh.hn
Researcher Finds SSH Enabled by Default on Røde Caster Duo Audio Interface
A developer discovered that their Røde Caster Duo audio interface shipped with SSH enabled by default and public-key authentication pre-configured, none of it mentioned in the documentation: a real exposure for a device likely sitting on home or studio networks. By capturing firmware-update traffic with Wireshark and analyzing the uncompressed update payload, the author reverse-engineered the update mechanism and modified the firmware to add their own SSH keys and enable password authentication. The write-up doubles as a clean tutorial in embedded firmware analysis, but the core finding reflects a broader pattern in consumer audio and IoT hardware: network services are often enabled by default and undocumented, and users have no idea they're exposed.
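The cheap self-check the finding suggests: probe your own LAN for listening SSH daemons and read their banners. A minimal sketch using only the standard library (the subnet and timeout are placeholder assumptions; scan only networks you own):

```python
# Probe a local subnet for open SSH ports and print server banners.
import ipaddress
import socket

SUBNET = "192.168.1.0/24"  # assumption: typical home LAN; use your own
TIMEOUT = 0.3              # seconds per host; a full /24 takes ~a minute

for host in ipaddress.ip_network(SUBNET).hosts():
    try:
        with socket.create_connection((str(host), 22), timeout=TIMEOUT) as s:
            # SSH servers send an identification banner on connect.
            banner = s.recv(64).decode(errors="replace").strip()
            print(f"{host}: SSH open ({banner})")
    except OSError:
        pass  # closed, filtered, or no host there
```

Anything that answers with a banner you didn't expect, an audio interface included, is worth a closer look.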
Read →
arXiv
There Will Be a Scientific Theory of Deep Learning
A new arXiv paper argues that a scientific framework for deep learning is within reach — not by deriving everything from first principles, but through five converging research directions: solvable idealized models, tractable mathematical limits, empirical scaling laws, hyperparameter theories, and universal behaviors that recur across architectures. The proposed 'learning mechanics' framework is explicitly coarse-grained and statistical, making testable predictions at the level of training dynamics rather than individual weights. For practitioners, the near-term significance is limited, but the trajectory matters: if these research programs succeed, model behavior becomes predictable before training, not just explainable after — a qualitative shift in how the field relates to its own tools.
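Of the five directions, empirical scaling laws are the one practitioners already touch. The canonical exercise, sketched here with invented pilot-run numbers: fit a saturating power law to small-run losses and extrapolate a big run's loss before training it.

```python
# Fit L(N) = a * N^-b + c to loss-vs-parameter-count data and
# extrapolate. All data points are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    return a * n ** -b + c  # the usual saturating-power-law ansatz

n_params = np.array([1e6, 3e6, 1e7, 3e7, 1e8])       # pilot-run sizes
val_loss = np.array([3.58, 3.20, 2.89, 2.68, 2.50])  # made-up losses

(a, b, c), _ = curve_fit(power_law, n_params, val_loss, p0=(10.0, 0.2, 2.0))
print(f"fit: L(N) = {a:.3g} * N^-{b:.3g} + {c:.3g}")
print(f"predicted loss at 1e9 params: {power_law(1e9, a, b, c):.3f}")
```

The paper's bet is that this kind of before-the-fact prediction, today limited to aggregate loss curves, can be extended to much more of training dynamics.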
Read →
Bloomberg
Norway Moves to Ban Social Media for Under 16s, Joining a Global Wave
Norway is advancing legislation to ban social media access for users under 16, joining Australia, Turkey (under 15), France, Spain, and Denmark in a coordinated international wave of age-based platform restrictions. The central challenge across all these jurisdictions is the same: age verification robust enough to actually block minors requires collecting enough personal data to create its own privacy risks, and determined teenagers route around most barriers. The UK examined the same policy and declined. The Norway proposal has cleared early legislative stages with implementation details — particularly around verification mechanisms — still unresolved. The cross-country pattern reflects political consensus that platform self-regulation has failed, even as technical and civil liberties experts remain skeptical about whether top-down age gates work in practice.
Read →
ky.fyi
Do I Belong in Tech Anymore? A Designer Calls AI Adoption 'Political Defeat'
A design engineer resigned from a well-compensated role after what they frame not as burnout but as 'political defeat', a response to AI deployed pervasively and often without consent: auto-recorded meetings, auto-merged code, automated systems replacing the human communication that made the work meaningful. The piece articulates something distinct from the usual AI-will-take-jobs anxiety: the loss of shared professional values in workplaces that adopted AI for cost efficiency rather than human benefit, and a broader erosion of tech industry commitments to climate and equity. Coming the same week as Nilay Patel's widely discussed 'The people do not yearn for automation' essay, it reflects a growing articulation among practitioners of what's being lost, not just what's being replaced.
Read →