Friday, 10 April 2026
EFF leaves X after its reach collapses to under 3% of former levels, Maine moves to ban major new data centres, and AI agents now drive 30% of Vercel deployments
Today's Lead
Electronic Frontier Foundation
EFF Is Leaving X
The Electronic Frontier Foundation announced its departure from X (formerly Twitter) after nearly two decades, citing a dramatic collapse in reach: posts that once generated 50–100 million monthly impressions now receive less than 3% of their former engagement. When Elon Musk took over in 2022, EFF publicly requested transparent content moderation, end-to-end encryption, and stronger user controls — instead, Musk dismantled the platform's human rights team and reduced safety staffing in countries that had previously resisted censorship demands. EFF will redirect its efforts to Bluesky, Mastodon, and its own website, where it says vulnerable populations — activists, young people, marginalised communities — remain actively engaged.
Also today
Gadget Review
Maine Becomes First State to Move Toward Banning Major New Data Centres
Maine's legislature has advanced LD 307, a bill that would create the nation's first statewide moratorium on permits for data centres requiring more than 20 megawatts of power, pausing new builds until November 2027 while a new Data Center Coordination Council studies long-term grid and cost impacts. The move directly targets several proposed facilities — including projects in Jay, Sanford, and at the former Loring Air Force Base — as the state seeks to protect an aging electrical grid and cap rising electricity costs for residents. The bill reflects growing bipartisan concern about the AI infrastructure boom's energy footprint: data centres are projected to double their share of US electricity consumption from 4% to 8% by 2030, with similar local pauses already emerging in Michigan and Indiana.
Read →
lzon.ca
How Microsoft's OneDrive Defaults Are Destroying Users' Files
An IT professional documents how Windows 11 silently enables OneDrive Desktop sync by default, filling the 5 GB free tier without the user's knowledge or consent and causing Outlook storage-full errors for non-technical users. In the described case, a customer panicked and deleted files — potentially including irreplaceable family photos — trying to resolve the errors, unaware that the files lived in the cloud. The author frames this as a deliberate dark pattern: a predatory design practice that exploits information asymmetry between Microsoft and its least-technical users to convert confusion into paid storage subscriptions, at the direct cost of user welfare.
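For admins who want to audit or block this behaviour on managed machines, OneDrive's documented Known Folder Move (KFM) policies live in the registry under HKLM\SOFTWARE\Policies\Microsoft\OneDrive. A minimal Windows-only sketch using the standard-library winreg module; KFMSilentOptIn and KFMBlockOptIn are Microsoft's documented policy value names, but verify against current OneDrive admin docs before relying on them:

```python
# Audit OneDrive Known Folder Move (KFM) policy state on Windows.
# KFMSilentOptIn / KFMBlockOptIn are Microsoft's documented policy
# value names; check current OneDrive admin docs before relying on them.
import winreg

POLICY_KEY = r"SOFTWARE\Policies\Microsoft\OneDrive"

def read_policy(name):
    """Return the policy value, or None if the key or value is absent."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY) as key:
            value, _type = winreg.QueryValueEx(key, name)
            return value
    except FileNotFoundError:
        return None

silent = read_policy("KFMSilentOptIn")   # tenant ID string => silent opt-in
blocked = read_policy("KFMBlockOptIn")   # 1 => folder backup blocked

if blocked == 1:
    print("Known Folder Move is blocked by policy.")
elif silent:
    print(f"Silent KFM opt-in is enabled for tenant {silent}.")
else:
    print("No KFM policy set; OneDrive's defaults apply.")
```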
Read →
GitHub — aloshdenny/reverse-SynthID
Reverse Engineering Google's SynthID Watermarking
A researcher has published a project demonstrating that Google's SynthID watermarking system for Gemini-generated images can be reverse-engineered using signal processing and spectral analysis, achieving 90% watermark detection accuracy. The key technical finding is that SynthID embeds watermarks at resolution-dependent carrier frequencies in the green channel, requiring separate bypass profiles per image size rather than a single universal attack. The project has produced three bypass generations (V1–V3), with V3 reaching 43+ dB PSNR and 75.8% carrier energy reduction via FFT-based spectral analysis — raising significant questions about the practical robustness of watermarking as a defence for AI-generated content authentication.
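The repo's actual carrier frequencies and per-resolution profiles aren't reproduced here, but the general mechanism, attenuating energy in a fixed spatial-frequency band of the green channel via a 2D FFT, fits in a few lines of NumPy. Everything below (band location, radius, attenuation factor) is an illustrative placeholder, not the project's recovered parameters:

```python
# Illustrative FFT-based carrier attenuation in an image's green channel.
# The band centre, radius, and attenuation factor are made-up placeholders,
# not the actual SynthID parameters reverse-engineered in the repo.
import numpy as np

def attenuate_band(img: np.ndarray, centre_frac=0.25, radius_frac=0.02,
                   factor=0.2) -> np.ndarray:
    """img: HxWx3 float array in [0, 1]. Returns a copy with energy in an
    annular frequency band of the green channel scaled by `factor`."""
    out = img.copy()
    g = out[:, :, 1]
    spec = np.fft.fftshift(np.fft.fft2(g))        # centred 2D spectrum
    h, w = g.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = h / 2, w / 2
    # Normalised radial frequency of every spectral bin.
    r = np.hypot((yy - cy) / h, (xx - cx) / w)
    band = np.abs(r - centre_frac) < radius_frac  # annulus around carrier
    spec[band] *= factor                          # attenuate carrier energy
    g_new = np.real(np.fft.ifft2(np.fft.ifftshift(spec)))
    out[:, :, 1] = np.clip(g_new, 0.0, 1.0)
    return out

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB for images in [0, 1]."""
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(1.0 / mse)
```

The resolution dependence the repo describes would mean choosing a different band per image size, which is why the project needs separate bypass profiles rather than one universal filter.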
Read →
Vercel Blog
Agentic Infrastructure: AI Agents Now Drive 30% of Vercel Deployments
Vercel reports that over 30% of weekly deployments on its platform are now initiated by coding agents — up 1,000% from six months ago — with Claude Code accounting for 75% of that share, Lovable and v0 for 6%, and Cursor for 1.5%. Projects deployed by coding agents are 20× more likely to call AI inference providers than those deployed by humans, confirming that agents are writing software that itself uses AI. Vercel argues this shift demands a new class of infrastructure — what it calls "agentic infrastructure" — spanning programmatic deployment surfaces for coding agents, long-lived execution environments with multi-step orchestration and cost controls, and self-healing platforms that can autonomously diagnose and respond to production anomalies without waiting for human intervention.
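A "programmatic deployment surface" largely means what Vercel already exposes over its public REST API: an agent can create a deployment with one HTTP call rather than a CLI session. A minimal sketch against the documented v13 create-deployment endpoint; the project name, file contents, and token handling are placeholders, so consult Vercel's API docs for the full payload:

```python
# Minimal programmatic deployment against Vercel's public REST API.
# Endpoint and auth header follow Vercel's documented API; the project
# name and file set are illustrative placeholders.
import os
import requests

VERCEL_API = "https://api.vercel.com/v13/deployments"
token = os.environ["VERCEL_TOKEN"]  # a personal or team access token

payload = {
    "name": "agent-demo-app",  # hypothetical project name
    "files": [
        # Inline file upload: path plus UTF-8 contents.
        {"file": "index.html", "data": "<h1>deployed by an agent</h1>"},
    ],
    "target": "production",
}

resp = requests.post(
    VERCEL_API,
    json=payload,
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json().get("url"))  # the generated deployment URL
```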
Read →
SkyPilot Blog
Research-Driven Agents: Giving Agents Context Before They Code
SkyPilot researchers describe an experiment in which a coding agent was given access not just to a target codebase (llama.cpp) but also to academic papers and competing implementations before beginning optimisation work. Within three hours and for roughly $29 in model costs, the agent produced five CPU inference optimisations — softmax fusion, RMS norm fusion, and flash attention improvements — that improved throughput by 15% on x86 and 5% on ARM. The central insight is that agents relying solely on source code miss the domain knowledge available in research literature and adjacent codebases, and that a structured research phase before implementation produces meaningfully better technical output.
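The agent's actual patches are C++ kernel changes in llama.cpp, but the flavour of "softmax fusion" is easy to show in plain Python: instead of separate max, exp, and sum passes over a row, a single online pass tracks the running max and rescales the running sum, the same trick flash attention builds on. A rough illustration, not the agent's actual patch:

```python
# Online softmax: the fused, single-accumulation-pass flavour of
# optimisation the post describes, in plain Python rather than C++.
import math

def softmax_online(xs):
    """Numerically stable softmax: one accumulation pass, then normalise."""
    running_max = float("-inf")
    running_sum = 0.0
    for x in xs:
        if x > running_max:
            # New max seen: rescale the sum accumulated under the old max.
            running_sum = running_sum * math.exp(running_max - x) + 1.0
            running_max = x
        else:
            running_sum += math.exp(x - running_max)
    return [math.exp(x - running_max) / running_sum for x in xs]

print(softmax_online([1.0, 2.0, 3.0]))  # ~[0.0900, 0.2447, 0.6652]
```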
Read →
Vercel Blog
Making Turborepo 91% Faster with Agents, Sandboxes, and Engineering Discipline
A Vercel engineer reduced Time to First Task in Turborepo from 8.1 seconds to 716 ms on a 1,000-package monorepo — a 91% improvement — through eight days of systematically combining AI agents with traditional performance engineering. Unattended agents initially identified low-hanging fruit (hashing by reference instead of cloning, faster hash algorithms), but the engineer still had to convert profiling data to Markdown before the agents could interpret it effectively, introduce parallelisation of the sequential git and filesystem operations, and use Vercel Sandboxes to eliminate benchmarking noise that masked gains below ~15%. The write-up is a candid account of where agents succeed and fail in performance work: they excel at finding patterns and implementing mechanical changes, but they miss end-to-end benchmarking opportunities and never wrote a regression test unprompted.
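One of the named wins, parallelising sequential filesystem work, translates directly outside Turborepo's Rust codebase. A toy sketch of the shape of the change (hashing a tree of files concurrently instead of in a serial loop; the hash choice, paths, and pool size are arbitrary):

```python
# Toy version of the "parallelise sequential filesystem work" change:
# hash many files concurrently instead of one by one, streaming each
# file through the hasher so nothing is copied whole into memory.
import hashlib
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def hash_file(path: Path) -> tuple[str, str]:
    h = hashlib.blake2b()  # a fast cryptographic hash from the stdlib
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return str(path), h.hexdigest()

def hash_tree(root: str, workers: int = 16) -> dict[str, str]:
    files = [p for p in Path(root).rglob("*") if p.is_file()]
    # I/O-bound work: threads overlap the blocking reads.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(hash_file, files))
```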
Read →
LeadDev
The AI Code Verification Trap: Shipping Faster While Thinking Less
LeadDev argues that AI-generated code has made implementation cheap while making verification increasingly expensive, creating a trap in which teams ship faster but accumulate risk faster still. The core problem is that AI produces polished, convention-respecting code that passes static analysis and reads as authoritative, making it harder — not easier — to apply the scrutiny required to catch subtle logic errors, security vulnerabilities, and architectural drift. The post calls for engineering cultures to explicitly invest in verification discipline proportional to the speed gains AI delivers, rather than treating green CI as sufficient proof of correctness.
Read →
Zig Devlog
Zig Achieves Incremental Compilation with the LLVM Backend
The Zig devlog for April 8, 2026 announces working incremental compilation for the LLVM backend — a long-standing goal that enables much faster compile-error feedback during development by recompiling only changed code rather than the full program through LLVM's pipeline. The update also covers a significant type resolution redesign that reduces over-analysis during compilation and produces clearer error messages, along with I/O backend improvements and package management enhancements. Together these changes represent a meaningful step toward Zig's goal of matching the tight edit-compile-debug loop developers expect from languages with faster native backends.
Read →