Tuesday, 07 April 2026

Claude Code February regression draws 950-point HN thread, Anthropic locks in gigawatts of TPU capacity, and an AI singer floods the iTunes charts

Today's Lead

GitHub Issues

Claude Code Is Unusable for Complex Engineering Tasks After February Updates

A GitHub issue cataloguing a severe quality regression in Claude Code since February 2026 drew nearly 1,000 Hacker News upvotes and over 500 comments, making it one of the top stories in the developer community. Detailed analysis pinpoints a 67–73% reduction in extended thinking tokens as the root cause: the share of edits made without first reading the surrounding code jumped from 6.2% to 33.7%, research depth before changes shrank, and responses exhibit contradictory mid-stream reasoning. The cost impact is dramatic: one user's monthly Claude API spend rose from $345 in February to $42,121 in March even as a multi-agent workflow that previously produced 191,000 lines of merged code per week collapsed. Commenters are calling for transparency on thinking-token allocation, premium tiers that guarantee deeper reasoning budgets, and exposed metrics to catch future regressions.
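The call for exposed metrics is straightforward to act on from the client side. A minimal sketch, with all names hypothetical and no dependence on any Anthropic API, of a rolling-baseline detector that flags a sustained drop in a per-request quality proxy such as an observed thinking-token count:

```python
from collections import deque

class RegressionDetector:
    """Flag when a metric's recent average drops sharply below a rolling baseline.

    Hypothetical sketch: feed it per-request thinking-token counts (or any
    quality proxy) and it alerts on a sustained percentage drop.
    """

    def __init__(self, baseline_window=100, recent_window=10, drop_threshold=0.5):
        self.baseline = deque(maxlen=baseline_window)  # long-run history
        self.recent = deque(maxlen=recent_window)      # last few observations
        self.drop_threshold = drop_threshold           # 0.5 = alert on a >50% drop

    def observe(self, value):
        self.recent.append(value)
        self.baseline.append(value)

    def is_regressed(self):
        if len(self.baseline) < self.baseline.maxlen:
            return False  # not enough history to trust the baseline yet
        base = sum(self.baseline) / len(self.baseline)
        now = sum(self.recent) / len(self.recent)
        return now < base * (1 - self.drop_threshold)
```

With the defaults, it alerts when the last 10 observations average less than half the 100-observation baseline, comfortably within the 67–73% drop the issue reports.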

Read →

Also today

Anthropic

Anthropic Expands Partnership with Google and Broadcom for Next-Gen Compute

Anthropic signed an agreement with Google and Broadcom to secure multiple gigawatts of next-generation TPU capacity starting in 2027 — the company's largest compute commitment to date. The move is driven by explosive demand growth: run-rate revenue has surpassed $30 billion and the number of enterprise customers spending over $1 million annually on Claude doubled to exceed 1,000 in under two months. The majority of the new capacity will be US-based, part of Anthropic's previously announced $50 billion commitment to American AI infrastructure. The company continues to diversify hardware across AWS Trainium, Google TPUs, and NVIDIA GPUs.

Read →

Bram Cohen's Blog

The Cult of Vibe Coding Is Dogfooding Run Amok

BitTorrent creator Bram Cohen argues that pure vibe coding, using AI to generate code without reviewing or understanding it, is an ideological extreme that systematically degrades software quality. He notes that teams claiming to practise it still rely heavily on human infrastructure, frameworks, and decisions, while deliberately refusing to inspect obvious problems like code duplication. Cohen's position is that effective AI collaboration requires human guidance and code review, and that treating poor software quality as an inevitable AI limitation misidentifies what is actually a deliberate choice. The piece sparked a 450-comment Hacker News discussion.

Read →

words.filippo.io

A Cryptography Engineer's Perspective on Quantum Computing Timelines

Cryptography engineer Filippo Valsorda examines when quantum computers will be able to break current public-key cryptography, and argues the answer is closer than most security teams assume. Recent research, including Google's internal estimates suggesting a potential 33-month window, indicates that cryptographically relevant quantum computers could arrive before organisations finish migrating. Valsorda advocates treating the threat as imminent and deploying post-quantum algorithms now, ML-KEM for key exchange and ML-DSA for signatures, arguing that the cost of being wrong about the timeline far exceeds the inconvenience of adopting larger cryptographic keys today.
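As a rough illustration of the migration being recommended, here is a hypothetical inventory helper that maps classical primitives to their standardised post-quantum counterparts (ML-KEM, FIPS 203, for key establishment; ML-DSA, FIPS 204, for signatures). The table is illustrative, not a complete migration plan:

```python
# Hypothetical inventory helper: suggest the NIST post-quantum replacement
# for a classical primitive. Illustrative mapping only, not exhaustive.
PQ_REPLACEMENTS = {
    # key establishment -> ML-KEM (FIPS 203)
    "RSA-OAEP": "ML-KEM",
    "ECDH": "ML-KEM",
    "X25519": "ML-KEM",
    # signatures -> ML-DSA (FIPS 204)
    "RSA-PSS": "ML-DSA",
    "ECDSA": "ML-DSA",
    "Ed25519": "ML-DSA",
}

def pq_migration_target(algorithm: str) -> str:
    """Return the suggested post-quantum target, or flag unknowns for review."""
    return PQ_REPLACEMENTS.get(algorithm, "manual review")
```

Anything not in the table (symmetric ciphers, hashes, exotic schemes) falls through to manual review rather than getting a guessed answer.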

Read →

Vercel Blog

58% of PRs in Vercel's Largest Monorepo Now Merge Without Human Review

Vercel built an LLM-based PR classifier that automatically evaluates diffs and routes low-risk pull requests — UI changes, styling, tests, documentation — to auto-merge without a human reviewer. The system is adversarially hardened: invisible Unicode stripping, constrained JSON-only output with no code execution path, and author gating for untrusted contributors. Results over 671 auto-approved PRs: zero reverts, median merge time cut from 2.3 hours to 0.5 hours, overall merge time down 62%, and human reviewers reaching high-risk PRs 2.7× faster. The post frames mandatory review of low-risk changes as security theatre that no longer serves its intended purpose.
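The routing flow the post describes can be sketched as follows. Everything here is a hypothetical reconstruction with the LLM call stubbed out, not Vercel's actual classifier:

```python
import json
import unicodedata

# Hypothetical sketch of the flow: sanitise the diff, gate untrusted authors,
# ask a classifier for a constrained JSON verdict, and auto-merge only
# low-risk categories. `classify` is a stub; a real system would call a model
# constrained to JSON-only output, with no code-execution path.
LOW_RISK = {"ui", "styling", "tests", "docs"}

def strip_invisible(text: str) -> str:
    # Remove zero-width and other format characters that could hide prompt injection.
    return "".join(c for c in text if unicodedata.category(c) != "Cf")

def classify(diff: str) -> dict:
    # Stub standing in for the model call.
    category = "docs" if diff.lstrip().startswith("+# ") else "logic"
    return {"category": category, "risk": "low" if category in LOW_RISK else "high"}

def route_pr(diff: str, author_trusted: bool) -> str:
    if not author_trusted:
        return "human-review"   # author gating for untrusted contributors
    verdict = classify(strip_invisible(diff))
    json.dumps(verdict)         # verdict must serialise cleanly; otherwise fail closed
    if verdict["risk"] == "low" and verdict["category"] in LOW_RISK:
        return "auto-merge"
    return "human-review"
```

The key design property is that every failure path lands on human review; only an explicit low-risk verdict from a trusted author reaches auto-merge.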

Read →

GitHub Blog

GitHub Copilot CLI Introduces 'Rubber Duck' — a Cross-Family Model Reviewer

GitHub's Copilot CLI is adding an experimental feature called Rubber Duck that pairs the primary coding agent with a second model from a different AI family to catch blind spots at critical checkpoints: after planning, after complex implementations, and after writing tests. When Claude Sonnet 4.6 is the orchestrator, Rubber Duck runs GPT-5.4 as the reviewer. On SWE-Bench Pro evaluations, the combination closes 74.7% of the performance gap between Sonnet alone and Opus, with the largest gains on difficult multi-file problems. The feature is available now in experimental mode via the `/experimental` slash command.
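The checkpoint pattern itself is simple to sketch generically. The model calls below are stubs standing in for the orchestrator and the cross-family reviewer, not GitHub's implementation:

```python
from typing import Optional

# Minimal sketch of the cross-family checkpoint pattern: a primary agent
# produces an artifact at each checkpoint (plan, implementation, tests) and a
# reviewer from a different model family critiques it before the agent moves
# on. Both calls are stubs; real ones would hit e.g. a Claude orchestrator
# and a GPT reviewer.
CHECKPOINTS = ("plan", "implementation", "tests")

def primary_agent(step: str, feedback: Optional[str]) -> str:
    revision = " (revised)" if feedback else ""
    return f"{step} artifact{revision}"

def reviewer(step: str, artifact: str) -> Optional[str]:
    # Stub: a real reviewer model returns issues, or None if the artifact passes.
    return None if "(revised)" in artifact else f"{step}: missing edge cases"

def run_with_rubber_duck() -> list:
    transcript = []
    for step in CHECKPOINTS:
        artifact = primary_agent(step, None)
        feedback = reviewer(step, artifact)
        if feedback:                      # one review/revise round per checkpoint
            artifact = primary_agent(step, feedback)
        transcript.append(artifact)
    return transcript
```

The intuition behind using a different family for the reviewer is that two models trained differently are less likely to share the same blind spots.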

Read →

OSNews

Adobe Secretly Modifies Your Hosts File to Detect Whether Creative Cloud Is Installed

Adobe Creative Cloud silently adds entries to users' system hosts file without explicit consent, as a workaround after Chrome began blocking local network access from web pages. When a user visits adobe.com, JavaScript attempts to load a resource from a spoofed local domain; a successful connection signals that Creative Cloud is installed. The approach involves roughly 900 hosts file entries and raises serious security concerns — system-level modifications without user consent draw direct comparisons to the 2005 Sony BMG rootkit incident, and a bug in the modification code could cause significant data loss.
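The mechanism relies on ordinary hosts-file resolution: if a vendor-controlled hostname is pinned to the loopback address, a web page's request to it resolves locally, and a successful response signals the desktop app is present. A small sketch of parsing such loopback overrides out of a hosts file (the probe hostname below is invented for illustration):

```python
# Illustrative sketch of the detection trick described above. The probe
# hostname in SAMPLE is hypothetical, not Adobe's actual entry.
def loopback_overrides(hosts_text: str) -> dict:
    """Parse hosts-file lines and return hostnames pinned to loopback."""
    overrides = {}
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and whitespace
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        if ip in ("127.0.0.1", "::1"):
            for name in names:
                overrides[name] = ip
    return overrides

SAMPLE = """
127.0.0.1 localhost
# injected by an installer (hypothetical entry)
127.0.0.1 cc-probe.example.adobe.com
"""
```

A check like this, run over the system hosts file, is also how a user could audit whether such entries have been added to their machine.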

Read →

Hacker News

Freestyle Launches Sandboxes Built for Coding Agents

Freestyle is a YC-backed infrastructure startup building Linux VMs optimised for AI coding agents, with sub-500ms startup times, horizontal forking (cloning an entire running VM with less than a 400ms pause), and persistent snapshots that survive across sessions. The company operates its own bare-metal racks after finding that cloud providers' bare-metal pricing was equivalent to purchasing the hardware outright. The platform targets the emerging pattern of agentic workflows in which coding agents need full compute environments rather than lightweight containers, and in which the ability to branch execution mid-task is a first-class requirement.

Read →

GitHub

Ghost Pepper: Local Hold-to-Talk Speech-to-Text for macOS

Ghost Pepper is an MIT-licensed macOS menu bar app that performs hold-to-talk speech-to-text entirely on-device, using WhisperKit for transcription and Qwen language models for intelligent cleanup — removing filler words and self-corrections without sending audio off-device. The developer built it out of a desire for voice input that never leaves the machine, and has been using it as a voice interface for AI coding workflows and email. It requires Apple Silicon, caches models from Hugging Face, and transcribes directly into the active application. The Show HN thread attracted 340 upvotes and active feedback from the community.
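The cleanup stage in the real app is handled by a local Qwen model; as a toy stand-in, a rule-based pass over the raw transcript might look like this:

```python
import re

# Toy stand-in for the transcript-cleanup step. Ghost Pepper uses a local
# Qwen model for this; a crude rule-based version just strips common filler
# words and collapses the leftover whitespace.
FILLERS = re.compile(r"\b(?:um+|uh+|you know)\b,?\s*", flags=re.IGNORECASE)

def clean_transcript(text: str) -> str:
    cleaned = FILLERS.sub("", text)
    return re.sub(r"\s+", " ", cleaned).strip()
```

A model-based cleanup goes further than this, also repairing self-corrections ("send it Tuesday, no, Wednesday"), which simple pattern matching cannot do reliably.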

Read →

Showbiz 411

AI Singer 'Eddie Dalton' Simultaneously Occupies Eleven Spots on the iTunes Singles Chart

An AI-generated artist named Eddie Dalton has captured eleven positions on the iTunes singles chart while simultaneously ranking third on the albums chart, despite no such person existing. The phenomenon is the most visible instance to date of AI-generated music saturating commercial rankings, raising urgent questions about chart integrity and what fair competition means in an industry where synthetic artists face none of the constraints of human performers. Apple has not commented on whether AI-generated artists should be eligible for its charts.

Read →