Monday, 27 April 2026

GoDaddy hands a nonprofit's domain to a stranger without documentation, Asahi Linux ships 20% better power management for M1 Pros, and Waymo defends pulling into bike lanes as 'normal practice'

Today's Lead

Anchor.host Blog

GoDaddy Transferred a 27-Year-Old Nonprofit Domain to a Stranger — No Documentation Required

A nonprofit organization's domain — in use for 27 years — was transferred out of its GoDaddy account to an unknown third party within minutes, despite domain protection being enabled. GoDaddy accepted the transfer request without requiring any supporting documentation, then sent the rightful owner on a 32-call, 9.6-hour support odyssey with constantly shifting instructions. The domain was ultimately recovered only because the stranger who received it happened to cooperate — a resolution that had nothing to do with GoDaddy's process. The failure undercuts a fundamental assumption most domain holders make: that registrar-level protections like transfer locks actually prevent unauthorized transfers. This incident shows the locks are only as strong as the social engineering resistance of the support staff behind them. For any organization whose operations depend on a domain, this is a reminder that legal ownership of a domain and operational control of it are two different things.

Read →

Also today

Asahi Linux Blog

Asahi Linux Progress Report: Linux 7.0 Brings 20% Power Savings and M3 Feature Parity Push

Asahi Linux's progress report for Linux 7.0 covers a broad range of improvements to Apple Silicon support: M1 Pro MacBooks now see roughly 20% power savings via Power Management Processor support, Variable Refresh Rate displays are working, Bluetooth/WiFi coexistence issues are resolved, and Mac Pro machines are now supported. The M3 generation is closing the gap with M1 in terms of feature parity, gaining PCIe, keyboard, trackpad, and NVMe controller support. Installer releases are now automated. The pace here matters: Asahi isn't just a proof of concept anymore — it's a steadily-maturing Linux distribution where each release addresses real daily-driver friction. The 20% power improvement on M1 Pro hardware is particularly significant, since battery life has historically been the sharpest argument for sticking with macOS on Apple laptops.

Read →

road.cc

Waymo Says Pulling Into Bike Lanes for Pickups Is 'Normal Practice'

Waymo's autonomous vehicles are programmed to pull into bike lanes for passenger pickups and drop-offs — a practice that violates traffic regulations and which Waymo defends by arguing that holding vehicles to the standard of not blocking cycle infrastructure sets 'too high a bar.' The consequences are not hypothetical: a San Francisco cyclist suffered brain damage after a passenger opened a door into her from a Waymo vehicle stopped in a bike lane, and another cyclist was struck by a Waymo vehicle in February 2024. The company's stated commitment to treating cyclists as 'unique road users' sits in direct contradiction to its operational programming. What makes this notable is the framing: Waymo isn't saying this is an unsolved edge case — it's saying bike lane stops are acceptable policy. For the autonomous vehicle industry broadly, this surfaces a core question about who sets the safety floor when there's no human driver to hold accountable for each individual decision.

Read →

Koshy John's Blog

AI Should Elevate Your Thinking, Not Replace It

The piece introduces the concept of 'simulated competence' — AI-generated output that looks polished enough to pass review without the author actually understanding it — and argues this is the most dangerous failure mode of AI coding tools, particularly for early-career engineers. The concern isn't efficiency: it's that the productive struggle of working through bugs, reading confusing documentation, and debugging mysterious behavior is precisely how engineering intuition gets built. Outsource that phase and you get developers who can prompt AI but can't reason independently about systems. The organizational risk compounds: leaders evaluating output quality can't easily distinguish genuine thinking from sophisticated regurgitation, which means the problem is invisible until a system fails in a novel way and nobody on the team has the mental models to diagnose it. The framing — AI as leverage for experts versus AI as a shortcut past expertise — is one of the more useful distinctions circulating in this space.

Read →

SentinelOne Labs

Fast16: A State-Level Sabotage Framework That Predates Stuxnet by Five Years

SentinelLabs has analyzed fast16, a cyber sabotage framework compiled in 2005 — five years before Stuxnet became the public landmark for state-level software weaponization. The tool is architecturally specific: a Lua-powered carrier, a kernel driver, and a reporting module that intercepts executables and injects code to corrupt floating-point calculations in precision engineering software like LS-DYNA and PKPM. The sabotage is surgical — not destroying machines but quietly corrupting the physics simulations that structural engineers rely on, a technique that could cause catastrophic downstream failures without an obvious attack signature. The name 'fast16' appeared in the 2017 Shadow Brokers NSA leak, confirming its state origin. The broader significance is historical: Stuxnet established the public narrative that cyber-physical sabotage began around 2010, but fast16 suggests the capability and the intent existed earlier, and that the NSA's industrial sabotage toolkit was more developed than the public timeline implied.

Read →

Google DeepMind Blog

Google DeepMind Partners with South Korea for AI-Driven Scientific Discovery

Google DeepMind announced a formal partnership with South Korea's Ministry of Science and ICT, centered on an AI Campus in Seoul and access to DeepMind's frontier scientific models — AlphaFold, AlphaGenome, and WeatherNext — for Korean researchers working in genomics, protein prediction, climate, and energy. The deal includes 50,000 AI Essentials scholarships and a collaboration with Korea's AI Safety Institute on safety research and best practices. The partnership fits a pattern of AI labs embedding themselves in national research ecosystems: offering compute and model access in exchange for scientific collaboration, talent pipelines, and institutional legitimacy. South Korea's concentration of AI research output per capita makes it a strategically interesting partner, and the safety institute collaboration is notable — it suggests DeepMind is trying to get ahead of regulatory frameworks in markets where they want long-term presence.

Read →

Medium

Someone Bought Friendster for $30k and Rebuilt It Around Phone-Tapping

Mike Carson purchased the Friendster domain and trademarks for $30,000 and relaunched it as a social network whose core mechanic requires users to physically tap their phones together to connect — no digital friend requests, no follow buttons, no algorithmic suggestions. The iOS app includes connection-fading (friendships decay if not reinforced in person) and friends-of-friends visibility limited to shared physical contexts. The premise is a direct inversion of every design pattern that made social networks grow: it deliberately sacrifices virality, network effects, and engagement metrics in favor of enforcing real-world contact as a prerequisite. Whether or not it achieves adoption, it's a useful design experiment — a working demonstration that the affordances of social software are choices, not requirements, and that different values lead to fundamentally different products even on the same technological substrate.

Read →

purplesyringa.moe

WebAssembly Is Not Quite a Stack Machine

WebAssembly is universally described as a stack machine, but the author argues this characterization is technically misleading: unlike the JVM or Forth, Wasm has no instructions for manipulating or rearranging values on the stack. This means that standard optimizations like common subexpression elimination — sharing the result of a computation used in two places — require dropping to local variables rather than stack operations. The practical consequence is that Wasm behaves architecturally like a register machine with reverse-Polish notation encoding, not a true stack machine. For developers trying to apply stack-based programming strategies to Wasm (writing compilers, optimizers, or custom backends), the distinction matters: the mental model most documentation ships with will lead you astray when you hit the constraints that actual stack machines don't have.
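The distinction is easy to see in Wasm's text format. A classic stack machine would let you duplicate the top of the stack to reuse a computed value in place; Wasm has no such instruction, so the shared value has to round-trip through a local. The sketch below is illustrative (the function name and the hypothetical `dup` are ours, not from the article):

```wat
;; CSE on a hypothetical "true" stack machine — NOT valid Wasm,
;; since Wasm has no dup or stack-shuffling instructions:
;;   local.get $x
;;   local.get $x
;;   i32.mul
;;   dup        ;; reuse the product directly from the stack
;;   i32.add

;; Real Wasm: the shared value must go through a local instead.
(module
  (func $twice_xsq (param $x i32) (result i32)  ;; computes x*x + x*x
    (local $t i32)
    local.get $x
    local.get $x
    i32.mul
    local.tee $t   ;; store the product AND leave one copy on the stack
    local.get $t   ;; fetch the second copy back from the local
    i32.add))
```

Notably, `local.tee` exists precisely because this spill-and-reload pattern is so common — but even it names a register-like local rather than manipulating the operand stack, which is the author's point.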

Read →

GitHub Community Discussions

GitHub Reverts Issue Link Popup Feature After Community Backlash

GitHub shipped an experimental feature that replaced direct navigation to issue pages with inline popup overlays when clicking issue links. The community response was strongly negative: the change disrupted established workflows, broke compatibility with AI agents and automation that expected standard link behavior, and was generally experienced as friction rather than convenience. GitHub reverted the feature quickly, acknowledging it 'missed the mark.' The episode is a clean case study in the tension between UI experimentation and workflow stability in developer tools — GitHub's users have deeply grooved habits around link behavior, and even a well-intentioned UX improvement that changes the fundamental navigation model hits immediate resistance. The AI agent compatibility breakage is a new wrinkle: as more developer workflows involve automated systems reading GitHub's interface, 'it works for humans' is no longer the only bar a UI change needs to clear.

Read →

Panayotis Vryonis's Blog

LLM-Assisted Coding Isn't Deterministic — But That Might Not Matter

The piece makes a useful distinction between determinism and reliability, arguing the industry is worrying about the wrong property when it frets over LLMs producing different code on repeated prompts. Software development has never been deterministic — two engineers given the same spec produce different implementations — and modern systems are complex enough that even fully deterministic code produces unpredictable emergent behavior through hardware variance, dependency interactions, and configuration drift. The appropriate response to LLM non-determinism isn't trying to force the outputs to converge; it's to apply the same practices that handle non-determinism everywhere else in software: testing, staging environments, observability, and rollback capability. The analogy to aviation safety standards is apt — aviation doesn't mandate that pilots execute maneuvers identically, it mandates verification of outcomes. Applying that framing to AI-assisted development shifts the conversation from 'can we trust the tool?' to 'do we have the verification infrastructure to catch failures?'

Read →