Friday, 15 May 2026
Toyota RAV4 teardown exposes vehicle surveillance gap, Linux and macOS zero-days drop simultaneously, and Anthropic commits $200M to a Gates Foundation partnership
Today's Lead
arkadiyt.com
Removing the Modem and GPS from My 2024 RAV4 Hybrid
A developer documents physically removing the Data Communication Module (modem) and GPS antenna from a 2024 Toyota RAV4 Hybrid to stop telemetry transmission. Modern vehicles collect extensive personal data — manufacturers have been documented gathering location history, driving behavior, and personal attributes, then selling that information to third parties including data brokers. The step-by-step teardown required removing interior panels and physically disconnecting the DCM from the vehicle's CAN bus. The author notes a critical limitation: Bluetooth connectivity can still expose data to vehicle systems, so they switched to wired-only phone connections. The article is the most viral entry yet in a growing genre of 'de-surveillance' hardware modifications, cresting at 767 points on Hacker News. Toyota, like most OEMs, collects and monetizes connected-vehicle data; the author cites the difficulty of opting out through software alone as motivation for a hardware solution. The piece arrives as US legislators face pressure to pass comprehensive vehicle privacy legislation, and it illustrates the gap between what consumers expect when buying a vehicle and what actually happens to their location and behavioral data — a gap currently closable only by people with the technical skill and risk tolerance to crack open their dashboard.
Also today
jpain.io
When the Tools Think for You: A First-Person Account of Skill Atrophy
The author argues that over-reliance on AI coding and writing tools has produced measurable cognitive atrophy. After delegating complex programming tasks to AI, they found their own coding ability had deteriorated — the mental muscle of debugging, reasoning through algorithms, and building internal models of systems had weakened from disuse. The same dynamic applies to writing: AI-generated text lacks authentic voice, but the ease of AI assistance makes the blank-page problem worse over time as the habit of generating original sentences fades. The piece is notable for its specificity: not a generalized worry about AI dependency but a first-person account of noticing decline and tracing it to specific behavior changes. It joins a growing body of evidence and anecdote that AI assistance — without deliberate practice of underlying skills — may accelerate the same kind of atrophy that GPS use is associated with in spatial navigation research. The author does not call for abandoning AI tools but for a more intentional approach: preserving high-difficulty tasks that drive skill development rather than outsourcing everything that AI can technically handle.
Read →
GitHub / 0xdeadbeefnetwork
Linux 0-Day: Unprivileged Users Can Access Root-Owned Files via ssh-keysign Race Condition
A publicly disclosed Linux kernel 0-day allows unprivileged users to access files owned by root, including SSH host keys and /etc/shadow. The flaw is a race condition during process exit: when a process is exiting and its memory management structure is being torn down, the __ptrace_may_access() function incorrectly skips security checks, creating a window in which an attacker can use the pidfd_getfd() syscall to steal file descriptors from privileged processes. The attack is particularly alarming because it targets ssh-keysign, a privileged SSH binary, but the underlying primitive is general enough to affect other privileged processes. Affected distributions include Debian, Ubuntu, Arch, and CentOS on kernels prior to commit 31e62c2ebbfd. A working proof-of-concept is public at the time of disclosure. The vulnerability follows a pattern of local privilege escalation issues found in Linux's process management code — complex paths involving ptrace, process exit, and file descriptor handling that interact in ways that are difficult to audit manually and tend to yield exploitable race conditions.
Read →
Calif
First Public Kernel Memory Corruption Exploit on Apple M5
Security firm Calif disclosed the first public kernel memory corruption exploit targeting Apple's M5 chip, bypassing Memory Integrity Enforcement (MIE) — a hardware security feature that prevents unauthorized writes to kernel memory. The exploit achieves local privilege escalation as a data-only attack, meaning it corrupts data without executing injected shellcode, which is a harder class to mitigate than traditional code injection. Notably, the research team developed the working exploit in five days by pairing security expertise with an AI system called Mythos Preview, using it to accelerate vulnerability discovery and exploitation chain development. Technical details are being withheld until Apple ships a patch. The use of AI-assisted exploit development is the methodological signal worth tracking: security firms are increasingly using LLMs to compress what would historically have been weeks of work into days, shortening the window between patch release and third-party exploitation and raising the bar for defenders who rely on that window for remediation.
Read →
tmctmt.com
Mullvad Exit IPs as a Fingerprinting Vector
A technical analysis reveals that Mullvad VPN's exit IP assignment mechanism, while appearing random, uses a deterministic process based on WireGuard public keys that creates a stable per-user fingerprint. Because WireGuard key pairs persist between connections, users are assigned IP addresses from a consistent percentile position within each server's pool — resulting in only 284 distinct IP combinations out of a theoretically much larger space. Across 9 servers, this means an attacker who can observe which IP a user exits from across multiple sessions can narrow their identity to approximately 1 in 300 users — a significant deanonymization capability for a service whose primary value proposition is anonymity. Recommended mitigations include avoiding frequent server switches (which help reveal the consistent percentile assignment) and periodically rotating WireGuard keys to reset the fingerprint. The research highlights a subtle category of VPN vulnerability: not a protocol flaw or logging policy issue, but an emergent fingerprinting property arising from the interaction of key persistence and pool assignment logic — the kind of thing that passes a security review of any individual component but fails when the system is analyzed as a whole.
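The fingerprinting mechanism described above can be modeled in a few lines. The hash-to-percentile function below is an illustrative assumption, not Mullvad's actual assignment code; what it demonstrates is the general property at issue: any deterministic map from a persistent WireGuard key to a relative pool position yields the same stable slot across servers and sessions.

```python
import hashlib

def exit_ip_index(wg_pubkey: bytes, pool_size: int) -> int:
    """Map a persistent key to a stable relative position in an IP pool.
    (Illustrative stand-in for the deterministic assignment described.)"""
    h = int.from_bytes(hashlib.sha256(wg_pubkey).digest()[:8], "big")
    percentile = h / 2**64          # stable per-key value in [0, 1)
    return int(percentile * pool_size)

key = b"persistent wireguard public key"

# The same key lands at the same relative slot on every server,
# whatever the pool size; that consistency is the fingerprint.
small = exit_ip_index(key, 100)
large = exit_ip_index(key, 1000)
print(small, large, large // 10 == small)

# Rotating the WireGuard key re-rolls the percentile, resetting the
# fingerprint (the mitigation the analysis recommends).
rotated = exit_ip_index(b"freshly generated key", 100)
print(rotated)
```

Because the position is a pure function of the key, an observer correlating exit IPs across sessions recovers the percentile even without seeing the key itself.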
Read →
antirez.com
antirez: Local AI Inference Has Reached an Inflection Point
Salvatore Sanfilippo (antirez), creator of Redis, published reflections on DS4, an open-source local AI inference project that has gained unexpected traction. DS4's success comes from a convergence of factors making capable local inference accessible: frontier-quality quantized models, hardware improvements in consumer GPUs and Apple Silicon, and a critical mass of community knowledge around local inference optimization. Antirez argues that local AI has reached an inflection point where for many use cases the quality gap with cloud inference is small enough to be irrelevant in practice — and the benefits of local deployment (privacy, zero network latency, no per-token cost) are substantial enough to make it the default choice for developers who prioritize either. Plans for DS4 include domain-specific variants and expanded tooling. The post is notable as a signal from a deeply respected systems programmer — someone whose career centers on efficient, production-grade software — that local AI inference is no longer a hobbyist project but a serious engineering target worth investing in.
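The hardware side of that inflection point is simple arithmetic. A back-of-envelope sketch, counting weights only (ignoring KV cache and runtime overhead); the 70B figure is an illustrative model size, not a DS4 specific:

```python
def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Memory needed for model weights alone, in GB (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"70B parameters at {bits}-bit: {weight_memory_gb(70, bits):.0f} GB")
# 140 GB at fp16, 70 GB at 8-bit, 35 GB at 4-bit
```

At 4-bit, a 70B model's weights fit in roughly 35 GB, inside the unified memory of a well-specced Apple Silicon machine, which is why quantization plus consumer hardware moves frontier-class models onto the desk.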
Read →
The Register
Ontario Auditors Find AI Medical Note-Takers Routinely Hallucinate Patient Data
An Ontario audit of AI transcription systems deployed in healthcare found serious accuracy failures across the majority of tested products: 60% of systems incorrectly documented prescribed medications, 45% fabricated information never spoken during the recorded consultation, and 85% failed to capture critical mental health details. The findings are damning not just for the AI systems themselves but for the procurement process — the certification framework allocated only 4% weight to medical accuracy while giving 30% weight to whether the vendor was domestically based in Ontario. The audit illustrates a systemic failure mode in AI deployment in high-stakes settings: procurement processes that prioritize policy criteria over the one metric that matters most for patient safety. Medical transcription errors propagate through care records, influence subsequent clinical decisions, and can contribute directly to patient harm. This audit provides concrete statistical evidence for what has been a theoretical concern, and raises an obvious question for every jurisdiction that has approved AI transcription tools for healthcare: what did their evaluation process actually weight?
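The procurement failure is easy to see as arithmetic. Only the 4% accuracy and 30% domestic-vendor weights come from the audit; the remaining weight and the vendor scores below are hypothetical, chosen to show how such a scheme can rank a clinically unreliable local product above an accurate foreign one:

```python
# 4% medical accuracy and 30% domestic presence are the audit's figures;
# "other" lumps the remaining (hypothetical) criteria together.
WEIGHTS = {"accuracy": 0.04, "domestic": 0.30, "other": 0.66}

def procurement_score(vendor: dict) -> float:
    """Weighted sum over the certification criteria (each scored 0..1)."""
    return sum(WEIGHTS[criterion] * vendor[criterion] for criterion in WEIGHTS)

accurate_foreign = {"accuracy": 0.95, "domestic": 0.0, "other": 0.70}
sloppy_domestic  = {"accuracy": 0.40, "domestic": 1.0, "other": 0.70}

print(f"accurate foreign vendor: {procurement_score(accurate_foreign):.3f}")
print(f"sloppy domestic vendor:  {procurement_score(sloppy_domestic):.3f}")
# The domestic vendor wins on score despite less than half the accuracy.
```

With accuracy weighted at 4%, even a large accuracy gap moves the total by at most 0.04, so it can never outweigh the 0.30 domestic bonus.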
Read →
Anthropic
Anthropic Forms $200M Partnership with the Gates Foundation
Anthropic committed $200 million in grants, Claude usage credits, and technical support to the Gates Foundation for a four-year initiative spanning global health, life sciences, education, and economic mobility. Specific programs include AI-accelerated vaccine and therapy development, evaluation frameworks for AI healthcare tools in low- and middle-income countries, K-12 educational tools, and agricultural productivity tools for smallholder farmers. Anthropic's Beneficial Deployments team will embed technical support within the partnership, and the arrangement includes discounted Claude access for nonprofits and educational institutions. The partnership is notable for what Anthropic is contributing: not just money but model access, engineering support, and the development of shared public goods — evaluation frameworks and tooling that will remain open to the broader sector. For Anthropic, which has made responsible AI development a core brand claim, a multi-year partnership with one of the world's most prominent philanthropic organizations is also a competitive positioning move as major AI labs increasingly differentiate on stated values rather than just model capability.
Read →
The Register
Germany's Sovereign Tech Fund Backs KDE with €1.3M
KDE secured €1.3 million from Germany's Sovereign Tech Fund to strengthen infrastructure and accelerate development of KDE Linux, an immutable desktop distribution built on Arch Linux that borrows its architecture from SteamOS 3. The funding is part of a broader European push for technological sovereignty: France's DINUM is developing Sécurix, a Nix-based secure workstation configuration, and multiple EU member states are exploring alternatives to US operating systems and cloud services. The Sovereign Tech Fund's investment thesis is explicit — European governments running critical infrastructure on software controlled by US companies represents a strategic dependency being actively reassessed. KDE Linux targets an interesting design point for government deployment: immutable OS architecture means lower maintenance burden and more predictable security properties than traditional package-managed Linux, making it operationally closer to what governments experience with commercial OS deployments. The investment follows earlier Sovereign Tech Fund grants to foundational projects like curl, OpenSSH, and Rust, and signals the fund is moving from infrastructure grants toward user-facing computing environments.
Read →
Martin Fowler
Notes from an AI Programming Retreat
Martin Fowler attended a private software development retreat held under the Chatham House Rule, bringing together practitioners working on agentic programming. Key observations: one group ported a GNU Cobol compiler to Rust (70K lines) in three days using LLMs, enabled by strong regression test suites that provided reliable verification for generated code. A separate attendee proposed using LLMs as interviewers for specification review — having the model question human experts to surface gaps and inconsistencies in large spec documents rather than having humans read them directly. Fowler also observes that effective AI-assisted coding requires clean, readable code architecture; poorly structured codebases benefit less from LLM assistance because the model's comprehension degrades with complexity, suggesting AI adoption may accelerate the economic case for refactoring. The retreat also surfaced a concern about mentorship: as AI coding assistants reduce the need for human-to-human code review, the mentorship function that pair programming serves for junior developers may be at risk — a second-order effect of AI adoption that receives less attention than productivity gains.
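The regression-suite pattern behind the COBOL-to-Rust port generalizes: the test suite, not a human reader, is the acceptance gate for generated code. A minimal sketch of that loop; the candidate snippets stand in for LLM output, and `passes_regression` is a hypothetical helper, not anything shown at the retreat:

```python
import subprocess
import sys
import tempfile

def passes_regression(candidate_code: str, test_code: str) -> bool:
    """Run a generated module against its regression suite in isolation."""
    with tempfile.TemporaryDirectory() as d:
        with open(f"{d}/candidate.py", "w") as f:
            f.write(candidate_code)
        with open(f"{d}/test_candidate.py", "w") as f:
            f.write(test_code)
        result = subprocess.run(
            [sys.executable, "-m", "unittest", "-q", "test_candidate"],
            cwd=d, capture_output=True,
        )
        return result.returncode == 0

TESTS = """
import unittest, candidate
class T(unittest.TestCase):
    def test_add(self):
        self.assertEqual(candidate.add(2, 2), 4)
"""

# Stand-ins for successive LLM attempts: accept the first one the suite passes.
attempts = [
    "def add(a, b):\n    return a - b\n",   # plausible-looking but wrong
    "def add(a, b):\n    return a + b\n",
]
accepted = next(code for code in attempts if passes_regression(code, TESTS))
print(accepted.strip())
```

The quality of the gate is exactly the quality of the suite, which is Fowler's point: strong regression coverage is what made three-day, 70K-line generation verifiable at all.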
Read →