Sunday, 26 April 2026

An amateur cracks a 60-year Erdős conjecture with ChatGPT, Gmail ships end-to-end encryption for all businesses, and the AI industry confronts growing public hostility

Today's Lead

Scientific American

Amateur Mathematician Cracks 60-Year-Old Erdős Conjecture Using ChatGPT

A 23-year-old amateur mathematician, Liam Price, used ChatGPT Pro to settle a 60-year-old open conjecture about primitive sets, proving that the associated Erdős sum converges to exactly one — a problem that had resisted professional mathematicians for decades. The breakthrough came not from conventional approaches but from the AI surfacing a formula from an adjacent mathematical domain that no human researcher had thought to apply. The HN discussion coins the method 'vibe maths' — the mathematical equivalent of vibe coding — and raises genuine questions about what AI-assisted discovery looks like when it cross-pollinates between fields faster than humans can survey the literature. For mathematics, the implication is less 'AI proved something' and more 'AI helped a human see a connection that the field's own specialization was obscuring.'
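For context, the Erdős sum of a set A of integers greater than one is Σ 1/(a · ln a), and its slow growth over the primes is easy to see numerically. A back-of-the-envelope sketch (our illustration, not from the article — the actual primitive-set argument is far subtler):

```python
import math

def erdos_sum(ns):
    """Erdős sum of a set of integers > 1: sum of 1 / (n * ln n)."""
    return sum(1.0 / (n * math.log(n)) for n in ns)

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, limit + 1, p):
                sieve[m] = False
    return [n for n, is_prime in enumerate(sieve) if is_prime]

# Partial Erdős sums over the primes creep upward very slowly:
for limit in (10, 1000, 100_000):
    print(limit, round(erdos_sum(primes_up_to(limit)), 4))
```

The convergence is glacial — which is part of why purely numerical evidence was never going to settle the conjecture.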

Read →

Also today

The New Republic

The AI Industry Is Discovering That the Public Hates It

Public sentiment toward AI has diverged sharply from industry narratives: only 23% of Americans are optimistic about AI's job impact (versus 73% of AI executives), 80% of companies using AI report no measurable productivity gains, and 95% of pilot programs have shown zero return on investment. The gap is compounding into hostility — data center expansion is driving electricity costs up 25% in some regions, and there have been violent incidents targeting AI executives and officials who championed AI infrastructure. The article frames this as a structural problem: AI's productivity benefits have accrued primarily to shareholders and executives rather than workers, while the costs (energy, displacement anxiety, surveillance creep) land on everyone else. The industry's response has been to accelerate deployment and PR rather than address the distributional question — a bet that capability wins public consent, which the data suggests is not holding.

Read →

Google Workspace Blog

Gmail Brings End-to-End Encryption to All Businesses — No IT Infrastructure Required

Google is rolling out simplified end-to-end encryption for Gmail that eliminates the S/MIME certificate infrastructure traditionally required for encrypted business email — enabling encryption with a few clicks for any recipient. The phased rollout begins with intra-organization users in beta, expanding to all Gmail users within weeks and to recipients on any email provider by year's end. The implementation uses Google's existing client-side encryption technology while removing the IT overhead that has kept email encryption a niche capability despite decades of availability. S/MIME and PGP have existed for 30+ years and seen near-zero enterprise adoption outside regulated industries precisely because of setup friction — if Google's streamlined approach works across providers, it represents a meaningful shift in enterprise email security posture rather than another announcement that changes nothing in practice.

Read →

hyper-derp.dev

Hyper-DERP: C++/io_uring Rewrite Delivers Twice the Throughput on Half the Cores

A developer rewrote Tailscale's DERP relay protocol in C++ using io_uring, achieving 12,316 Mbps on 8 vCPUs versus Tailscale's Go implementation at 7,834 Mbps on 16 vCPUs — about 1.6x the throughput on half the cores, or roughly 3x per core. The gains come from eliminating Go runtime overhead (garbage collection, goroutine scheduling) and leveraging io_uring's asynchronous I/O model for the relay's data plane. DERP relays handle traffic when Tailscale peers can't establish direct connections due to CGNAT or restrictive firewalls, making relay efficiency a real operational cost at scale. The project is a clean demonstration of the systems programming tradeoff: managed runtimes trade predictable performance for productivity — and when you're running a relay that is pure I/O multiplexing, that trade is entirely one-sided.
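None of the io_uring specifics survive translation, but the data plane of a relay — pure byte-shuffling between sockets — has a compact shape. A toy readiness-based version using Python's stdlib selectors module (illustrative only: io_uring is completion-based and avoids the per-operation syscall cost this sketch still pays):

```python
import selectors
import socket

def make_relay(sel, left, right):
    """Register a socket pair so bytes arriving on one side go out the other."""
    sel.register(left, selectors.EVENT_READ, data=right)
    sel.register(right, selectors.EVENT_READ, data=left)

def pump(sel, timeout=0.1):
    """Process one batch of ready sockets, forwarding whatever arrived."""
    for key, _ in sel.select(timeout):
        src, dst = key.fileobj, key.data
        buf = src.recv(65536)
        if buf:
            dst.sendall(buf)
        else:  # peer closed; stop relaying this direction
            sel.unregister(src)

# Demo: client_a <-> relay <-> client_b, all in-process via socketpair.
client_a, relay_a = socket.socketpair()
relay_b, client_b = socket.socketpair()
sel = selectors.DefaultSelector()
make_relay(sel, relay_a, relay_b)

client_a.sendall(b"hello via relay")
pump(sel)
print(client_b.recv(1024))  # b'hello via relay'
```

The hot loop is nothing but recv/send on whichever descriptors are ready — exactly the workload where runtime overhead and syscall count dominate, and where io_uring's batched submission pays off.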

Read →

GitHub

Niri v26.04: Blur Effects, 8× Faster Rendering, and a Push-Based Architecture Overhaul

Niri, the scrollable-tiling Wayland compositor written in Rust, ships its April release with two major changes: blur effects for windows via the ext-background-effect protocol (a frequently requested visual feature), and a rendering architecture overhaul from pull-based iterators to push-based closures that achieves 2–3x faster render-list construction on modern hardware and 8x on older Intel GPUs. The release also adds screencasting cursor metadata, improved GTK4 IME handling, and raises the minimum Rust version to 1.85. Niri's tiling model — infinite horizontal scroll rather than fixed workspaces — is architecturally distinct from i3/Sway and has attracted a following that finds the spatial model more intuitive. This release addresses both the aesthetic gap (blur) and a genuine performance bottleneck that was measurable on older hardware.
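The pull-to-push inversion is easiest to see in miniature. A toy sketch (not Niri's actual code, which is Rust): pull-based construction has the consumer drain an iterator chain, while push-based construction hands the producer a closure to invoke per element, so the producer drives and can skip iterator bookkeeping:

```python
def elements():
    """Toy render-element producer."""
    yield "background"
    yield "window:1"
    yield "window:2"
    yield "hidden:offscreen"

# Pull-based: the consumer advances the iterator chain one step at a time.
def build_pull(source):
    return [e for e in source if not e.startswith("hidden")]

# Push-based: the producer invokes a sink closure per element; control
# stays with the producer, which can batch or short-circuit freely.
def build_push(produce):
    out = []
    def sink(e):
        if not e.startswith("hidden"):
            out.append(e)
    produce(sink)
    return out

def produce_all(sink):
    for e in elements():
        sink(e)

print(build_pull(elements()))
print(build_push(produce_all))  # same list, built producer-side
```

In Rust the distinction has real codegen consequences — closure-based pushing tends to inline into a flat loop, where deep iterator adapter chains can defeat the optimizer, which is plausibly where the 8x on older GPUs' CPU-bound paths comes from.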

Read →

lute.luau.org

Lute: A Standalone Runtime Brings Luau Outside the Roblox Ecosystem

Lute is a standalone runtime for Luau — Roblox's typed, performance-focused dialect of Lua — enabling general-purpose scripting and application development independent of any game engine. The runtime ships with a test runner, linter, and type checker, and provides two library layers: low-level @lute APIs and a higher-level @std standard library. The cross-platform goal is explicit: code should run in both Lute and Roblox with minimal changes, creating a path for Roblox developers to write tooling and automation outside the game engine. Luau has genuine technical advantages over standard Lua — gradual typing, performance improvements, incremental garbage collection — and Lute is the first serious attempt to make those available to the broader scripting ecosystem.

Read →

fp32.org

Your CPU Has More Registers Than You'd Think — And That's Why It's Fast

Modern CPUs contain hundreds of physical registers on die, but expose only a small architectural set (x0–x30 in 64-bit ARM, for example). Register renaming is the mechanism that dynamically maps architectural registers to physical ones, eliminating the false write-after-write and write-after-read dependencies that would otherwise serialize execution. This enables out-of-order execution to overlap operations that appear sequential in the instruction stream — and allows register moves to be completely eliminated at the rename stage as zero-cost operations. The article is a clear primer on one of the most consequential CPU microarchitecture techniques, connecting the ISA abstraction to the hardware reality underneath in a way that's directly useful for anyone writing performance-sensitive code or reasoning about what a compiler actually produces.
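The mechanism fits in a few lines. A toy rename stage (illustrative only — real hardware also recycles physical registers at instruction retirement): every write to an architectural register gets a fresh physical register, so two writes to "r1" no longer serialize:

```python
class Renamer:
    """Toy register-rename stage: architectural names -> physical registers."""

    def __init__(self, initial_map, free_regs):
        self.map = dict(initial_map)   # architectural -> physical
        self.free = list(free_regs)    # free physical register pool

    def rename(self, instr):
        op, dst, *srcs = instr
        # Sources resolve against the mapping *before* the destination
        # is remapped, so reads see the value current at rename time.
        phys_srcs = [self.map[s] for s in srcs]
        # Every write gets a brand-new physical register.
        self.map[dst] = self.free.pop(0)
        return (op, self.map[dst], *phys_srcs)

r = Renamer({"r1": 0, "r2": 1, "r3": 2}, free_regs=[3, 4, 5, 6, 7])
program = [
    ("add", "r1", "r2", "r3"),  # writes r1
    ("mul", "r4", "r1", "r1"),  # reads the freshly written r1
    ("sub", "r1", "r2", "r2"),  # writes r1 again -- a WAW hazard, on paper
]
for instr in program:
    print(instr, "->", r.rename(instr))
```

After renaming, the two writes to r1 land in distinct physical registers (p3 and p5 here), so the sub can execute before — or in parallel with — the add and mul.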

Read →

argemma.com

You Don't Want Long-Lived Keys

The article makes a practical case against long-lived cryptographic credentials — SSH keys, API tokens, service account keys — in favor of ephemeral alternatives that expire automatically. The risk compounds in ways that static credential audits miss: key exposure windows grow with employee tenure, departures create revocation debt, and cryptographic guarantees degrade as algorithms age. The recommended alternatives (short-lived SSH certificates, workload identity federation for cloud credentials, trusted publishers instead of static tokens) are now broadly available but underutilized. Where long-lived keys are unavoidable, the recommendation is to centralize their management with a dedicated security team rather than distributing the responsibility — and the burden of care — across every developer in the organization.
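The core idea — credentials that carry their own expiry — can be sketched in a few lines. A minimal HMAC-signed token (our illustration of the pattern, not any mechanism from the article; real deployments use SSH certificates, OIDC tokens, or cloud STS credentials):

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"issuer-side secret"  # held by the issuer, never by clients

def mint_token(subject, ttl_seconds=300):
    """Issue a short-lived credential: expiry is baked into the signed payload."""
    claims = {"sub": subject, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest().encode()
    return (payload + b"." + sig).decode()

def verify_token(token):
    """Return the subject if the signature checks out and the token is unexpired."""
    payload, sig = token.encode().rsplit(b".", 1)
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(expected, sig):
        return None  # forged or tampered
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["sub"] if claims["exp"] > time.time() else None

token = mint_token("deploy-bot", ttl_seconds=60)
print(verify_token(token))  # deploy-bot
```

The point of the pattern: a leaked token is worthless within minutes, and there is no revocation list to maintain — expiry does the revoking, which is exactly the property static SSH keys and API tokens lack.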

Read →

Matthew Brunelle's Blog

It's OK to Use AI to Finish the Projects You Were Never Going to Finish

Matthew Brunelle argues for a principled distinction in how to apply AI coding assistants: 'wish fulfillment' projects — things you'd genuinely like to exist but were never going to build — are appropriate candidates for delegation to AI, while learning-focused work should remain hands-on to avoid skill degradation. He demonstrates with a YouTube Music → OpenSubsonic connector that had sat abandoned for years: using Claude Code to complete it wasn't laziness but the correct application of the tool to a project whose value was the outcome, not the process. The broader community discussion productively extends this to code ownership — if you didn't write it, do you understand it well enough to maintain it? — which surfaces the real maintenance cost of AI-completed projects. The most useful reframe: the question isn't 'did I write this?' but 'can I debug this when it breaks?'

Read →

GitHub

WUPHF: A Karpathy-Style LLM Wiki That AI Agents Read From and Write Into

WUPHF is an open-source multi-agent workspace that uses markdown + git as the primary knowledge substrate — a deliberate minimalist bet on the architecture Karpathy has been gesturing at: an LLM-native knowledge layer that agents both read from and write into, so context compounds across sessions rather than being re-pasted each morning. The implementation uses BM25 (via Bleve) for retrieval over a 500-artifact benchmark at 85% recall@20, with SQLite for entity metadata and append-only fact logs, and no vector database. Each agent gets a private notebook plus access to a shared team wiki; a draft-to-wiki promotion flow and daily contradiction-detection lint pass maintain quality over time. What's interesting technically is the choice to go boring on infrastructure (markdown, git, SQLite, BM25) and ambitious on workflow design — the inverse of most agentic memory implementations that reach for Postgres, pgvector, and Neo4j before the use case is established.
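BM25 itself is small enough that 'boring' is almost literal. A textbook implementation in pure Python (a sketch of the scoring function only — not WUPHF's Bleve-backed code):

```python
import math
from collections import Counter

def bm25_rank(query, docs, k1=1.2, b=0.75):
    """Rank docs (lists of tokens) against a query with textbook Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()                       # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)                  # term frequency in this doc
        score = 0.0
        for term in query:
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            norm = tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scores.append(score)
    return sorted(range(N), key=lambda i: -scores[i])

docs = [
    "agents write facts into the shared wiki".split(),
    "sqlite stores entity metadata for agents".split(),
    "git tracks markdown drafts before wiki promotion".split(),
]
print(bm25_rank("wiki promotion drafts".split(), docs))  # doc 2 ranks first
```

At 500 artifacts this runs comfortably without any index at all, which underlines the post's point: the retrieval layer is the easy part, and the workflow design (promotion, contradiction linting) is where the actual ambition lives.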

Read →