Wednesday, 01 April 2026

OpenAI closes $852B funding round, Oracle cuts 30,000 jobs, and OkCupid escapes without a fine after sharing 3M user photos with a facial recognition firm

Today's Lead

OpenAI

OpenAI Closes $122B Funding Round at $852B Valuation

OpenAI has closed a $122 billion funding round that values the company at $852 billion — one of the highest valuations ever recorded for a private company. The round was co-led by SoftBank and includes major commitments from Andreessen Horowitz, Nvidia ($30B), and Amazon (up to $50B in compute credits). Capital will be directed at global frontier AI expansion, next-generation compute infrastructure, and meeting surging enterprise demand for ChatGPT, Codex, and related products.

Read →

Also today

Rolling Out

Oracle Slashes 30,000 Jobs Amid Debt-Driven Consolidation

Oracle has eliminated approximately 30,000 positions in one of the largest tech layoffs in recent memory, driven primarily by a roughly $58 billion debt load taken on to fund aggressive datacenter construction for cloud and AI infrastructure. The cuts unwind much of the company's COVID-era workforce expansion, as Oracle shifts investment toward cloud services, enterprise SaaS, and AI-capable compute rather than maintaining legacy headcount. The layoffs underscore the financial pressures mounting on legacy enterprise vendors competing in the AI infrastructure race.

Read →

Ars Technica

OkCupid Gave 3M User Photos to Facial Recognition Firm; FTC Imposes No Fine

The FTC has settled with OkCupid and parent company Match Group over the 2014 sharing of approximately 3 million user profile photos with facial recognition firm Clarifai — a company in which OkCupid's founders held financial stakes, creating a direct conflict of interest. Despite the unauthorized data sharing, neither company was required to pay any financial penalty; the settlement consists solely of a permanent ban on misrepresenting data-sharing practices. The outcome adds to a growing pattern of privacy enforcement actions that impose reputational consequences but no financial accountability.

Read →

Trail of Bits

How Trail of Bits Went AI-Native (So Far)

Trail of Bits has published its detailed playbook for going AI-native as a 140-person security consultancy — not just distributing AI tools, but redesigning workflows, knowledge management, and delivery around agents with 94 plugins, 201 skills, and 84 specialized agents. The firm reports that AI now surfaces approximately 20% of bugs in client reports, with some engagements scaling from ~15 bugs per week to 200 by deploying fleets of specialized auditor agents across entire codebases in parallel. Their system — combining maturity matrices, curated skill repositories, hackathons, and sandboxed environments — drove internal AI adoption from 5% to 94% of staff and helped their sales team average $8M revenue per rep against an industry benchmark of $2–4M.

Read →

Tiger Data

pg_textsearch: Open-Source BM25 Full-Text Search for Postgres

Tiger Data has released pg_textsearch v1.0, a Postgres extension for BM25 relevance-ranked full-text search built as a freely licensed alternative to ParadeDB's AGPL-licensed offering. Developed by a solo engineer using Claude Code + Opus over roughly two quarters — a project originally estimated at 6–12 months with a small team — it benchmarks at 4.7x higher query throughput than ParadeDB/Tantivy on the MS-MARCO dataset. Available under the Postgres license on GitHub and on Tiger Data's cloud, the release is a concrete data point in the ongoing discussion about AI-assisted development collapsing traditional engineering time estimates.
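For readers unfamiliar with BM25, the ranking it implements combines term-frequency saturation with inverse document frequency. Here is a minimal Python sketch of the classic Okapi BM25 formula — a toy whitespace tokenizer and textbook k1/b defaults; pg_textsearch's internal tokenization and tuning are its own:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.2, b=0.75):
    """Score each document against the query with classic Okapi BM25."""
    tokenized = [d.lower().split() for d in docs]
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N
    q_terms = query.lower().split()
    # Document frequency: how many docs contain each query term
    df = {t: sum(1 for d in tokenized if t in d) for t in q_terms}
    scores = []
    for doc in tokenized:
        tf = Counter(doc)
        score = 0.0
        for t in q_terms:
            if df[t] == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            # Term frequency saturates via k1; b normalizes for doc length
            num = tf[t] * (k1 + 1)
            den = tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * num / den
        scores.append(score)
    return scores

docs = [
    "postgres full text search with bm25 ranking",
    "cooking recipes for pasta",
    "bm25 relevance ranking for search engines",
]
print(bm25_scores("bm25 search", docs))
```

The key property over plain term counting: repeated occurrences of a term yield diminishing returns (saturation via k1), and matches in shorter documents weigh more (length normalization via b).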

Read →

Cohere

Cohere Transcribe Tops the Open ASR Leaderboard

Cohere has released Cohere Transcribe, an open-source 2-billion-parameter automatic speech recognition model that achieves a 5.42% average word error rate, topping HuggingFace's Open ASR Leaderboard and outperforming OpenAI's Whisper Large v3 and ElevenLabs Scribe v2. The model supports 14 languages across European, East Asian, and Middle Eastern language families and is available under the Apache 2.0 license. Free API access is available alongside paid deployment, targeting enterprise use cases such as meeting transcription, speech analytics, and customer support.
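Word error rate, the metric behind the leaderboard ranking, is word-level edit distance divided by reference length. A small Python sketch — naive whitespace tokenization here; leaderboard scoring additionally normalizes casing and punctuation:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[-1][-1] / len(ref)

# One substitution ("the" -> "a") over 6 reference words ≈ 0.167
print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))
```

A 5.42% average WER means roughly one word-level error per eighteen reference words, averaged across the leaderboard's test sets.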

Read →

Martin Fowler

Encoding Team Standards: Treating AI Instructions as Versioned Infrastructure

Rahul Garg, writing on martinfowler.com, argues that AI coding instructions should be treated as versioned team infrastructure — stored in repositories, peer-reviewed, and shared across the organization — rather than left to individual prompting habits. The approach encodes tacit senior-engineer knowledge (quality standards, security requirements, coding conventions) into shared instruction files that govern AI behavior consistently regardless of who is at the keyboard. Done well, it addresses both AI consistency and the long-standing problem of scaling institutional expertise across growing engineering teams.

Read →

PrismML

1-Bit Bonsai: First Commercially Viable 1-Bit LLMs for Edge Deployment

PrismML has released 1-Bit Bonsai, a family of 1-bit quantized language models targeting edge devices, smartphones, and robotics. The 8B parameter model requires only 1.15GB of memory — 14x smaller than full-precision equivalents — with 8x faster inference and 5x better energy efficiency; the 1.7B variant reaches 130 tokens per second on an iPhone 17 Pro Max. The company claims these are the first commercially viable 1-bit LLMs for on-device deployment, with benchmark scores described as competitive with standard models of similar scale.
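PrismML's exact quantization recipe is not described here; as a generic illustration of why 1-bit models are so small, BitNet-style 1-bit quantization keeps only each weight's sign plus a single floating-point scale per tensor — the scheme below is an assumption for illustration, not PrismML's method:

```python
import numpy as np

def quantize_1bit(w):
    """Reduce a weight tensor to signs (1 bit each) plus one fp scale.
    The scale (mean absolute value) minimizes L1 reconstruction error
    for sign-based quantization."""
    scale = float(np.abs(w).mean())
    return np.sign(w).astype(np.int8), scale

def dequantize(signs, scale):
    """Reconstruct an approximate weight tensor from signs and scale."""
    return signs.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
signs, scale = quantize_1bit(w)
w_hat = dequantize(signs, scale)
# Storage falls from 32 (or 16) bits per weight to 1 bit plus one scalar,
# which is where headline figures like "14x smaller" come from.
print(signs)
print(scale)
```

Real deployments group scales per channel or per block rather than per tensor, trading a little memory back for accuracy — one reason quantized models can stay "competitive with standard models of similar scale."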

Read →

GitHub Blog

Agent-Driven Development in Copilot Applied Science

A GitHub AI researcher shares lessons from building eval-agents — a Copilot-powered system for analyzing thousands of agent benchmark trajectories at a scale no human could manage — and distills a framework for agent-driven development. The core insight: treat coding agents like junior engineers by using conversational prompts, maintaining clean architecture and documentation for agents to navigate, and responding to failures by improving process rather than blaming the agent. Applying these principles, a team of five shipped 11 new agents, 4 skills, and 28,000 lines of code across 345 files in under three days.

Read →

Cloudflare Blog

Cloudflare Launches Programmable Flow Protection for Custom UDP DDoS Mitigation

Cloudflare is launching Programmable Flow Protection in beta for Magic Transit Enterprise customers, allowing operators to upload custom eBPF programs that run across Cloudflare's global network to filter UDP traffic using proprietary protocol knowledge. The feature addresses a longstanding gap: Cloudflare can protect well-known protocols like DNS and TCP with targeted mitigations, but custom or proprietary UDP protocols previously forced operators into blunt rate-limiting that cannot distinguish legitimate from malicious traffic. Programs can statefully track client IP status, issue cryptographic challenges, and integrate with Cloudflare's existing DDoS pipeline to enable precise, client-level validation at global scale.
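The actual feature runs operator-supplied eBPF inside Cloudflare's network; purely as a conceptual model of the stateful challenge-and-validate flow described above, here is a Python sketch — the `handle_packet` interface and HMAC token scheme are illustrative assumptions, not Cloudflare's API:

```python
import hmac
import hashlib
import os

SECRET = os.urandom(16)   # per-deployment key for deriving challenge tokens
validated = set()         # per-client-IP state: IPs that passed validation

def challenge_for(ip):
    """Derive a deterministic 8-byte challenge token for a client IP."""
    return hmac.new(SECRET, ip.encode(), hashlib.sha256).digest()[:8]

def handle_packet(src_ip, payload):
    """Return ('pass', None), ('challenge', token), or drop implicitly.
    Models a filter that only forwards UDP from validated clients."""
    if src_ip in validated:
        return ("pass", None)            # known-good client: forward to origin
    if hmac.compare_digest(payload[:8], challenge_for(src_ip)):
        validated.add(src_ip)            # client echoed the token: validate it
        return ("pass", None)
    # Unknown client: drop the payload and send a challenge instead
    return ("challenge", challenge_for(src_ip))
```

The point of the design is that spoofed-source floods never complete the echo step, so they are dropped per client rather than rate-limited in bulk — the precision the blunt rate-limiting approach lacks.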

Read →