Tuesday, 28 April 2026

GitHub Copilot moves to usage-based billing, Dutch central bank picks Lidl's cloud over AWS, and World ID lands US partners as global bans mount

Today's Lead

GitHub Blog

GitHub Copilot Is Moving to Usage-Based Billing Starting June 2026

GitHub Copilot is transitioning from a fixed-cost subscription model to usage-based billing with 'AI Credits' starting June 1, 2026. Subscription prices aren't changing ($10/month for Pro, $19 per user/month for Business, $39 per user/month for Enterprise), but subscribers will now receive equivalent credit allotments rather than unrestricted access to premium features, with additional usage billed at published API rates per model. Code completions and Next Edit Suggestions remain unmetered across all plans. Business and Enterprise tiers receive enhanced promotional credits through August 2026 to ease the transition, and organizations can now pool unused credits across teams rather than locking them to per-user allocations. The shift changes the economic relationship between developer teams and their AI tooling: instead of a predictable flat fee that encourages high utilization, teams now face variable costs that scale with use. There is a real risk that cost-conscious managers throttle AI tool usage precisely when adoption habits are still forming, which could dampen the productivity gains organizations are counting on.
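The pooling detail matters more than it looks. A minimal sketch of the arithmetic, with invented seat prices, credit amounts, and overage rates (the actual per-model rates are published by GitHub and not reproduced here):

```python
# Hypothetical illustration of pooled usage-based billing. All rates and
# credit figures below are invented for the sketch.

def monthly_cost(seats, seat_price, credits_per_seat, usage, overage_rate):
    """Cost for a team whose unused credits pool across members.

    usage: credits consumed by each member this month.
    """
    pooled_credits = seats * credits_per_seat
    total_usage = sum(usage)
    overage = max(0, total_usage - pooled_credits)
    return seats * seat_price + overage * overage_rate

# A 5-seat team: one heavy user, four light users. Pooling absorbs the
# heavy user's spike that per-user allotments would have billed as overage.
cost = monthly_cost(
    seats=5, seat_price=19.0, credits_per_seat=300,
    usage=[1100, 80, 90, 60, 70], overage_rate=0.04,
)
# cost == 95.0: the heavy user rides on the light users' unused credits.
```

Under per-user allocations the same heavy user would have paid overage on 800 credits; pooled, the team pays nothing extra, which is why the pooling change softens the throttling risk for mixed-usage teams.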

Read →

Also today

Techzine

Dutch Central Bank Abandons AWS for Lidl's European Sovereign Cloud

De Nederlandsche Bank (DNB) has signed a contract with Schwarz Digits' Stackit platform — the cloud division of the Lidl and Kaufland supermarket group — to migrate away from American cloud infrastructure. The decision is geopolitical: the US Cloud Act means data on American-owned infrastructure can potentially be accessed by US authorities regardless of physical location, and European financial regulators have been warning about over-dependence on foreign IT providers. Stackit keeps data under European jurisdiction and already serves Deutsche Bahn, SAP, and Bayern Munich; Schwarz Digits has committed €11 billion to expanding German data center capacity. The headline is striking, but the substance matters more: a central bank, one of the most risk-averse institutions imaginable, is willing to trade some technical capability for sovereignty guarantees. This sets a visible precedent for other financial regulators, and it suggests European cloud alternatives have crossed a credibility threshold that was hard to claim even two years ago.

Read →

Eclectic Light

macOS 27 Drops AFP Support and Requires TLS 1.2 or Later

Apple's macOS 27, expected in mid-September 2026, will remove AFP (Apple Filing Protocol) after a 13-year deprecation period dating back to OS X 10.9 Mavericks. Users relying on older NAS devices, Time Capsules, or any storage hardware that only speaks AFP will be unable to upgrade without replacing that equipment. More broadly, macOS 27 will enforce TLS 1.2 or later with ATS-compliant ciphersuites across MDM, device enrollment, and app distribution workflows, meaning enterprise infrastructure with outdated certificates must be updated before the upgrade is viable. The AFP removal is overdue but practically uneven: home users with old Time Capsules may not realize they are affected until they attempt the upgrade, while enterprise IT teams face a certificate audit before they can greenlight the rollout across their fleets. The TLS tightening is likely to catch more organizations off-guard than the AFP cut.
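For teams starting that audit, the new floor is easy to express in client code. A minimal sketch using Python's standard `ssl` module, purely as an illustration of the constraint (Apple's exact ATS ciphersuite list is platform-specific and not reproduced here):

```python
import ssl

# Build a client context that refuses anything below TLS 1.2, mirroring
# the floor macOS 27 is said to enforce for MDM, enrollment, and app
# distribution endpoints.
def strict_tls_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

ctx = strict_tls_context()
```

Wrapping a socket with this context against each MDM and enrollment endpoint would surface any server still negotiating TLS 1.0/1.1 before the fleet upgrade, rather than after.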

Read →

Talkie LM

Talkie: A 13B Language Model Trained Exclusively on Pre-1931 Texts

Researchers have introduced Talkie, a 13-billion parameter language model trained on 260 billion tokens drawn entirely from English-language sources published before 1931 — the largest vintage language model ever created. The architecture is modern, but the training data is deliberately historical, eliminating post-1931 knowledge contamination by design. An instruction-tuned conversational variant was also released. The research goal is to understand model generalization: by testing whether Talkie can reason about concepts that emerged after 1931, such as digital computing, researchers can probe what LLMs actually learn about language versus what they absorb from the content of the modern web. The model also creates a clean platform for separating universal language properties from artifacts of training on contemporary internet text — a question that is hard to study when the training corpus and the knowledge cutoff are tangled together, as they are in every major production model.

Read →

Rest of World

World ID Gains US Partners at Zoom, Tinder, and DocuSign Despite Bans Abroad

Sam Altman's Tools for Humanity announced partnerships with Tinder, Zoom, and DocuSign to integrate World ID — its iris-scanning biometric identity verification system — framing the technology as a defense against deepfakes and fraud. The company's Orbs have verified over 18 million people across 160 countries. The US expansion coincides with enforcement actions and outright bans in Brazil, Hong Kong, Spain, Portugal, and Germany, where regulators found the biometric data collection unnecessary, excessive, or in violation of data protection law. The divergence is instructive: the US market offers significantly looser biometric data regulation and cryptocurrency rules, making it uniquely hospitable to a product that has struggled to operate legally in most major jurisdictions. Embedding World ID inside Zoom and Tinder — platforms used by hundreds of millions — is a distribution strategy that bypasses the opt-in friction that has limited rollout elsewhere.

Read →

Lawrence Paulson's Blog

'Why Not Just Use Lean?' — A Veteran Theorem Prover's Case Against the Field's Consensus

Lawrence Paulson, creator of Isabelle and a foundational figure in interactive theorem proving, challenges Lean's growing dominance in formal mathematics. His critique is substantive: mathematical formalization predates Lean by nearly 60 years, with meaningful results from AUTOMATH (1968) onward, and the field's current culture, he argues, exhibits 'cultism, insularity, and conformity', unfairly dismissing the alternatives. On technical grounds, Paulson argues Lean's dependent type system creates real headaches — type checking can become undecidable when equality testing is involved — while Isabelle offers superior automation (the sledgehammer tactic), better proof readability, and a simpler type system. The post is worth reading not as an anti-Lean screed but as a reminder that community consensus can calcify around a particular tool for social reasons as much as technical ones, and that the history of formal verification is longer and richer than Lean's recent momentum implies.
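The equality complaint has a concrete flavor. A hypothetical illustration (not from Paulson's post), assuming a Mathlib-style length-indexed `Vector` type: two types whose indices are provably but not definitionally equal require an explicit cast, bookkeeping that a simple type system like Isabelle/HOL never generates.

```lean
-- Hypothetical sketch: `Vector α (n + m)` and `Vector α (m + n)` are
-- distinct types even though the indices are provably equal. The proof
-- `Nat.add_comm n m : n + m = m + n` must be applied as a cast (`▸`)
-- before the vector can be used at the other type.
variable {α : Type} {n m : Nat}

def castAppendComm (v : Vector α (n + m)) : Vector α (m + n) :=
  Nat.add_comm n m ▸ v
```

These casts accumulate in real developments, which is the kind of friction the post weighs against dependent types' expressiveness.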

Read →

muffin.ink

Seven Years of SVG Sanitization Failures Show That Filtering Is the Wrong Defense

A detailed post-mortem from the Scratch project catalogues seven years of XSS vulnerabilities in SVG sanitization (2019–2026): script tag injection, IP leaks via image href attributes, CSS @import abuse, and CSS url() bypasses. The core argument is that incremental patching is architecturally doomed — each fix adds complexity that creates new attack surface, while the CSS specification keeps introducing new features that open new vectors for external resource requests. The conclusion is that TurboWarp's approach is the only defensible one: rendering SVGs in isolated iframes with a strict Content Security Policy rather than attempting sanitization at all. This is a genuinely useful security architecture lesson. Some classes of untrusted content cannot be made safe by filtering; architectural isolation — making the attack surface irrelevant rather than trying to harden it — is the only reliable defense. Any team handling user-generated SVG content should read this.
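The isolation approach can be sketched in a few lines. The header values and markup below are illustrative of the technique, not TurboWarp's actual policy or implementation:

```python
# Sketch of architectural isolation for untrusted SVGs: serve them from
# a separate origin with a CSP that forbids scripts and external fetches,
# and embed them in a fully sandboxed iframe. Exact policy values are
# assumptions for the sketch.

SVG_HEADERS = {
    "Content-Type": "image/svg+xml",
    # No scripts, no external loads; inline styles only.
    "Content-Security-Policy": "default-src 'none'; style-src 'unsafe-inline'",
    "X-Content-Type-Options": "nosniff",
}

def embed_markup(svg_url: str) -> str:
    # An empty sandbox attribute grants no tokens: the iframe gets a
    # unique opaque origin and cannot run scripts even if some future
    # CSS feature slips a payload past any upstream filtering.
    return f'<iframe sandbox src="{svg_url}"></iframe>'
```

The point of the design is that correctness no longer depends on enumerating every SVG/CSS feature: even a "successful" injection executes in a context with nothing to steal and nowhere to send it.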

Read →

LeadDev

Tokenmaxxing: Why Token Usage Is the Wrong Way to Measure AI Productivity

Token usage has become the default organizational metric for AI productivity, and it's as flawed as measuring software development by lines of code — easy to quantify, easy to game, and poorly correlated with actual outcomes. A FAANG staff engineer's alternative is a four-tier cognitive delegation framework that tracks how engineers progress from basic tool use to orchestrating multi-agent systems, rather than just consumption. Honeycomb takes the opposite approach and deliberately avoids formal AI metrics, on the grounds that engineers can inflate token counts without delivering any value. Industry consensus on a better alternative doesn't exist yet; tooling that connects token spending to actual shipping outcomes is largely absent. The piece is a useful early signal for engineering leaders making AI investment decisions: if your measurement framework only tracks consumption, you're likely operating with a distorted picture of what AI is actually contributing.
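The distortion is easy to demonstrate with a toy example. Names and numbers below are invented; the point is only that ranking by consumption and ranking by consumption-per-shipped-change can point at entirely different people:

```python
# Hypothetical data: the "top" token consumer ships the least.
engineers = {
    "ana":   {"tokens": 9_000_000, "merged_prs": 3},
    "ben":   {"tokens": 1_200_000, "merged_prs": 14},
    "chloe": {"tokens":   400_000, "merged_prs": 9},
}

# Consumption-only metric: who burned the most tokens?
by_tokens = max(engineers, key=lambda e: engineers[e]["tokens"])

# Outcome-linked metric: who spends the fewest tokens per merged PR?
by_efficiency = min(
    engineers, key=lambda e: engineers[e]["tokens"] / engineers[e]["merged_prs"]
)
# by_tokens crowns ana; by_efficiency points at chloe.
```

Merged PRs are themselves a gameable proxy, which is the piece's deeper point: no single consumption or output count survives contact with incentives, and the tooling to connect spend to real outcomes doesn't exist yet.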

Read →

emro.cat

How a Researcher Broke the Anti-Bot System Protecting Nike, Kick, and Twitch

A security researcher reverse-engineered Kasada's commercial anti-bot protection — deployed by Nike, Kick, and Twitch — and found three fundamental design failures. Time-locked bytecode decoding was bypassed by brute-forcing roughly 2,000 time windows. The opcode shuffling used a predictable Fisher-Yates permutation that, once mapped, remained exploitable across script updates without needing exact code matching. And Kasada's error payloads use XOR encryption with a hardcoded UUID key — a cryptographic implementation failure that undermines the entire payload-signing mechanism. The system collects 427 browser fingerprints across 20 batch functions, all of which the researcher fully mapped. The research is a useful reminder that commercial anti-bot products, despite their surface complexity, can contain fundamental algorithmic and cryptographic weaknesses that a determined analyst can reduce to a repeatable bypass.
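The XOR failure is worth spelling out, because it recurs constantly in client-side protection schemes. A minimal sketch (the UUID key below is invented, not Kasada's): XOR with a fixed key is its own inverse, and a key shipped inside the client is a key the attacker already has.

```python
from itertools import cycle

# XOR with a hardcoded key is symmetric: the same function "encrypts"
# and decrypts, and anyone holding the shipped script holds the key.
# The UUID here is invented; the point is the scheme, not the real key.
KEY = b"6f488372-92d2-45ba-a6a6-3e6ab3b0393b"

def xor_with_key(data: bytes, key: bytes = KEY) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

payload = xor_with_key(b'{"error":"blocked"}')  # obfuscated wire payload
plain = xor_with_key(payload)                   # the same call reverses it
```

This is obfuscation, not cryptography: without a secret the client doesn't possess (or a server-side signature the client can't forge), any payload-integrity scheme built on it collapses to a single function call.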

Read →