Thursday, 16 April 2026

Google breaks its privacy promise by sharing Waze data with ICE, Live Nation found liable for illegally monopolizing ticketing, and Cal.com abandons open source citing AI-driven security threats

Today's Lead

EFF

Google Broke Its Privacy Promise — Now ICE Has My Data

Google violated its stated policy of notifying users before sharing their data with law enforcement: after Immigration and Customs Enforcement obtained an administrative subpoena, the company handed ICE location data from Waze about Amandla Thomas-Johnson — a Ph.D. candidate on a student visa who had attended a pro-Palestinian protest at Cornell — without advance warning. The breach is particularly stark because another student, Momodou Taal, received advance notice from both Google and Facebook for similar requests, allowing him to mount a legal challenge before his data was released. The incident exposes how tech companies' data collection practices, combined with government authority, can be weaponized to target individuals engaged in protected political speech, with no consistent standard for when companies will or won't notify users before complying.

Read →

Also today

Bloomberg

Live Nation Illegally Monopolized the Ticketing Market, Jury Finds

A jury has found that Live Nation and Ticketmaster illegally monopolized the live event ticketing market, concluding a major antitrust case brought by 30 state attorneys general. The verdict determined that Ticketmaster overcharged consumers roughly $1.72 per ticket through its monopolistic control. The case centered on Live Nation's vertical integration — owning primary ticket sales through Ticketmaster, major amphitheaters through exclusive venue contracts, and benefiting from secondary market resales — a structure that locked out competitors and harmed consumers. The ruling opens the door for significant remedies and sets a precedent that state-level antitrust enforcement can succeed against vertically integrated tech and media platforms.

Read →

Cal.com Blog

Cal.com Goes Closed Source, Citing AI-Driven Security Threats

Cal.com has transitioned its production codebase from open source to closed source, citing AI-assisted vulnerability discovery as the primary driver: modern AI tools can systematically scan open codebases and generate working exploits at speeds that fundamentally change the threat model for public code. The company pointed to AI discovering and exploiting a 27-year-old BSD kernel vulnerability within hours as evidence of the new risk calculus. Cal.com released Cal.diy as an MIT-licensed alternative for developers, and expressed hope to eventually return to open source as the security landscape evolves. The move reignites long-running debates about open source sustainability and signals a potentially broader trend: projects reconsidering public code as AI lowers the cost of adversarial analysis.

Read →

arXiv

AI Assistance Reduces Persistence and Hurts Independent Performance

A new study finds a significant trade-off in using AI assistance: while it boosts immediate task performance, it undermines users' persistence and independent problem-solving ability over time. Participants who received AI help showed measurably reduced cognitive effort and struggled more when working independently afterward, compared to those who had worked through challenges unaided. The findings add empirical weight to concerns about deskilling effects from AI reliance — mirroring earlier research on GPS navigation and medical diagnosis — and raise serious questions for software developers and knowledge workers who lean heavily on AI coding assistants. The researchers suggest a balanced approach: using AI strategically for routine tasks while deliberately maintaining practice on complex problems to preserve deep problem-solving capabilities.

Read →

Simon Willison's Weblog

Gemini 3.1 Flash TTS: Google's Prompt-Controlled Text-to-Speech Model

Google has released Gemini 3.1 Flash TTS, a text-to-speech model that replaces traditional parameter-based voice configuration with natural language prompting and inline Audio Tags like [excitedly] or [shouting] to direct vocal delivery, emotion, and style. The model supports multi-speaker conversations with distinct assigned voices, responds to detailed accent and character descriptions, and outputs WAV files via the standard Gemini API using the gemini-3.1-flash-tts-preview model identifier. Independent benchmarks placed it at #2 on Speech Arena, just behind the top model. The prompting-first design represents a meaningful interface shift for voice AI: rather than tuning audio parameters, developers describe a character, a scene, and a delivery style, and the model interprets that as a performance directive.

Read →
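The prompt-first interface means a request is just text with inline Audio Tags plus an audio response modality. Here is a minimal sketch of what a call might look like in Python — the `google-genai` client usage, response shape, and PCM handling are assumptions modeled on Google's earlier TTS previews (only the `gemini-3.1-flash-tts-preview` identifier comes from the post); if the new model returns a finished WAV file directly, the wrapper below is unnecessary:

```python
import io
import wave

def pcm_to_wav(pcm: bytes, sample_rate: int = 24000) -> bytes:
    """Wrap raw 16-bit mono PCM in a WAV container (earlier Gemini TTS
    previews returned bare PCM at 24 kHz; adjust if 3.1 differs)."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)         # mono
        w.setsampwidth(2)         # 16-bit samples
        w.setframerate(sample_rate)
        w.writeframes(pcm)
    return buf.getvalue()

def synthesize(api_key: str, script: str) -> bytes:
    """Hypothetical call shape, modeled on the google-genai SDK's
    existing TTS previews; not confirmed for this model."""
    from google import genai
    from google.genai import types

    client = genai.Client(api_key=api_key)
    resp = client.models.generate_content(
        model="gemini-3.1-flash-tts-preview",
        contents=script,  # inline Audio Tags like [excitedly] steer delivery
        config=types.GenerateContentConfig(response_modalities=["AUDIO"]),
    )
    pcm = resp.candidates[0].content.parts[0].inline_data.data
    return pcm_to_wav(pcm)

# e.g. synthesize(key, "A weary pirate: [sighs] Fine. [shouting] Hoist the sails!")
```

For multi-speaker audio, the same pattern applies: describe each character's voice and accent in the prompt and label their lines, rather than setting per-voice parameters.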

Cloudflare Blog

Introducing Agent Lee: Cloudflare's In-Dashboard AI Agent

Cloudflare launched Agent Lee, an in-dashboard AI agent that replaces traditional tab-switching with natural language prompts, letting developers diagnose errors, query account state, modify configurations, deploy resources, and generate dynamic visualizations through conversation. The system uses a TypeScript-based 'codemode' architecture where the AI writes executable code rather than fixed tool calls, with credentials stored server-side in Durable Objects to prevent API key exposure. Already handling 250,000 daily tool calls across 18,000 beta users, Agent Lee is the user-facing embodiment of Cloudflare's broader Project Think platform — which adds durable execution, sub-agents, persistent sessions, sandboxed code execution, and a built-in workspace filesystem. The launch signals a broader industry shift toward conversational infrastructure management and positions Cloudflare as a full-stack agentic runtime, not just a network provider.

Read →
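The 'codemode' idea — the model writes a short program against an API surface instead of emitting one fixed tool call at a time — can be sketched in a few lines. Everything below is illustrative (Python rather than Cloudflare's TypeScript, invented API names, a deliberately naive sandbox), not Agent Lee's actual implementation:

```python
class CloudflareAPI:
    """Stand-in for the server-side API surface. In the real system,
    credentials live server-side (e.g. in Durable Objects) and never
    appear inside the generated code."""
    def __init__(self):
        self._workers = {"api-gateway": {"errors_24h": 17}}

    def list_workers(self):
        return sorted(self._workers)

    def error_count(self, name: str) -> int:
        return self._workers[name]["errors_24h"]

def run_generated_code(source: str, api: CloudflareAPI):
    """Execute model-generated code against the API object and capture
    its result. A toy sandbox: builtins are stripped, but real systems
    isolate far more strictly (separate sandboxed runtimes)."""
    scope = {"result": None}
    exec(source, {"__builtins__": {}, "api": api}, scope)
    return scope["result"]

# A script the model might emit for "which workers had errors today?"
generated = """
counts = {}
for name in api.list_workers():
    counts[name] = api.error_count(name)
result = counts
"""

print(run_generated_code(generated, CloudflareAPI()))  # {'api-gateway': 17}
```

The payoff of this pattern is that one generated script can loop, branch, and combine several API calls — work that would otherwise take many round trips through a fixed tool-call interface.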

Google DeepMind

Gemini Robotics-ER 1.6: Enhanced Embodied Reasoning for Autonomous Robots

Google DeepMind released Gemini Robotics-ER 1.6, where 'ER' stands for Enhanced Embodied Reasoning, bringing significant improvements in spatial understanding for physical robots. Key advances include enhanced pointing accuracy for object detection and counting, the ability to read analog instruments like gauges and pressure indicators with up to 93% accuracy, and improved multi-view processing that handles occluded or poorly lit scenes across multiple camera streams simultaneously. The model can also make autonomous completion assessments — determining when a task is done without human confirmation. Developed in collaboration with Boston Dynamics, the release advances the feasibility of deploying robots like Spot for autonomous facility inspections and industrial monitoring without constant human oversight.

Read →

Medical Xpress

CRISPR Takes an Important Step Toward Silencing Down Syndrome's Extra Chromosome

Researchers at Beth Israel Deaconess Medical Center and Harvard Medical School demonstrated a modified CRISPR/Cas9 technique that can silence the extra copy of chromosome 21 responsible for Down syndrome — not by targeting individual genes, but by inserting the XIST gene (which naturally inactivates one X chromosome in females) to suppress the entire extra chromosome at once. In human stem cell testing, the technique achieved 20–40% integration efficiency, establishing proof-of-concept for the approach. Clinical applications remain years away, pending research on off-target effects and animal studies demonstrating cognitive and physical improvements. If it holds up, the strategy — whole-chromosome silencing rather than gene-by-gene editing — could represent a fundamentally different therapeutic paradigm for chromosomal disorders.

Read →

Darkbloom

Darkbloom: Decentralized Private AI Inference on Idle Apple Silicon

Darkbloom, built by Eigen Labs, is a decentralized inference network that turns idle Apple Silicon Macs into a privacy-preserving AI compute pool, leveraging the roughly 18 hours per day that most consumer machines sit unused. The privacy model relies on hardware-verified encryption via tamper-resistant secure enclaves: prompts are encrypted on the user's device, routed through a coordinator that cannot read them, and decrypted only inside isolated hardened processes. Operators earn up to 90% profit margins for contributing capacity, and users access an OpenAI-compatible API supporting models including Gemma, Qwen, FLUX.2, and speech-to-text at up to 70% lower cost than centralized alternatives. The project represents a concrete attempt to move AI inference off opaque cloud servers and into a cryptographically accountable distributed network.

Read →

GitHub Blog

GitHub Updates Developer Policy: Intermediary Liability, Copyright, and Transparency

GitHub published a developer policy update addressing three overlapping legal shifts that affect open source infrastructure. Following the Supreme Court's Cox v. Sony ruling, platforms now have clearer intermediary liability protection — they're not automatically liable for user copyright infringement absent intent to encourage it. GitHub is also advocating ahead of the 2027 DMCA Section 1201 triennial review to preserve developer rights for security research, interoperability work, and accessibility modifications. The updated Transparency Center reports record DMCA circumvention claims in 2025, while GitHub flags emerging age assurance laws across multiple countries as a potential structural threat to open source hosting. The announcements reflect an increasingly active posture from GitHub on the policy terrain that shapes what developers can legally build and share.

Read →