
Digital Colliers Daily Briefing — April 26, 2026


Generated from 27 items collected across RSS feeds, YouTube channels, X accounts, Reddit, and Hacker News.


The AI sector enters the final week of April caught between three forces pulling in different directions: a hardening of infrastructure power at the top of the stack, a quiet but striking demonstration of what frontier models can now do in pure research, and a public mood that has curdled into something closer to hostility. Today's briefing examines Epoch AI's quantification of Google's compute footprint, an amateur's GPT-5.4 Pro–assisted solution to a 60-year-old Erdős conjecture, and the mounting evidence — from polling data to political violence — that the industry's social license is eroding faster than its capabilities are improving.


1. Epoch AI puts numbers on Google's compute moat: 3.8M TPUs, 1.3M GPUs, ~25% of the world's AI compute

Illustration: A vintage technician stacking a towering column of magnetic tape reels.

What happened. Epoch AI estimates that Google now controls roughly a quarter of global AI compute capacity, operating approximately 3.8 million TPUs alongside 1.3 million GPUs, according to a Financial Times report by Stephen Morris that surfaced on Techmeme this morning. Asked about the scale of the company's capital expenditure, Google Cloud CEO Thomas Kurian told the FT that customer demand and revenue trajectory justify the spend.

Why it matters. The figures, if accurate, reframe the AI infrastructure race. Most public narratives have centered on Nvidia's GPU allocation as the binding constraint on frontier development; Epoch's accounting suggests Google has built a parallel supply chain large enough that it is no longer a participant in the GPU scramble so much as an alternative to it. Owning the silicon, the fab relationships (via Broadcom and TSMC), the data centers, and the model lab creates a vertical stack that Microsoft, Amazon, Meta, and the independent labs cannot easily replicate.

Who is affected. Nvidia-dependent rivals — including Microsoft/OpenAI, Anthropic (despite its Trainium commitments with AWS), xAI, and Meta — face a competitor whose marginal cost of training and inference is structurally lower. Google Cloud customers gain a credible non-Nvidia path. Nvidia itself loses a portion of the implied long-term TAM, though demand elsewhere remains acute. Sovereign and enterprise buyers evaluating multi-year compute contracts now have firmer numbers to negotiate against.

What to watch next. Whether hyperscaler peers disclose comparable accelerator counts in upcoming earnings (Alphabet and Microsoft report next week), how Kurian's "demand justifies spend" framing holds up against utilization data, and whether TPU external availability expands materially beyond the current Anthropic and Cloud-tenant footprint.


2. A 23-year-old, a single GPT-5.4 Pro prompt, and a primitive-sets conjecture Erdős left open

Illustration: A vintage young man at a typewriter struck by sudden mathematical insight.

What happened. Liam Price, 23, with no advanced mathematics training, posted a solution to a 60-year-old Erdős problem on erdosproblems.com after entering it into GPT-5.4 Pro, Scientific American reports. The problem concerns the lower bound of the Erdős sum on primitive sets — Stanford's Jared Lichtman had proved the upper-bound case in his 2022 thesis but, like others, got stuck on the lower bound. Terence Tao, who has been informally tracking AI's mathematical output, says the model bypassed the standard opening move that prior human attempts had all taken, applying a formula familiar in adjacent areas of number theory that no one had thought to bring to bear here. Tao and Lichtman have since shortened the proof to better isolate the model's key insight.
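For readers unfamiliar with the problem family, the quantity at stake can be stated precisely. The following is a sketch of the standard, well-established definitions only; the specific lower-bound refinement that Price's proof addresses is described loosely in the article and is not reproduced here.

```latex
% A set A of integers greater than 1 is primitive if no element of A
% divides another. Erdős (1935) proved that the sum
\[
  f(A) \;=\; \sum_{a \in A} \frac{1}{a \log a}
\]
% is uniformly bounded over all primitive sets A. The upper-bound result
% referenced above (Lichtman, 2022) shows that the primes P maximize it:
\[
  f(A) \;\le\; f(P) \;=\; \sum_{p \text{ prime}} \frac{1}{p \log p} \;\approx\; 1.6366 .
\]
```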

Why it matters. Many recent "AI solves Erdős problem" headlines have collapsed under scrutiny — either the problems were minor or the solutions were rederivations of known work. This one is different on two counts: the problem had genuine standing, and the method appears novel and potentially generalizable. Tao described it as "a new way to think about large numbers and their anatomy." Lichtman says it confirms a long-standing intuition that this cluster of problems shares unifying structure.

Who is affected. Research mathematicians, who now have a concrete instance — vetted by Tao — of a frontier model contributing original technique rather than retrieval. OpenAI, which gets a defensible scientific-discovery datapoint at a moment when corporate ROI numbers are weak (see Event 3). Competing labs, particularly DeepMind and Anthropic, whose own mathematical reasoning benchmarks now have a harder bar to clear. And the broader debate over AI evaluation: as Tao has previously argued, individual problem solutions are noisy signals, but novel-method solutions are not.

What to watch next. Whether the Tao–Lichtman cleaned-up proof appears on arXiv in the coming weeks; whether the underlying technique transfers to other problems in the primitive-sets cluster; and whether OpenAI publishes anything about the prompt trajectory or internal reasoning traces. Price's "vibe mathing" workflow — feed random open problems to a frontier model — is also likely to be widely imitated.


3. Polling, violence, and a populist turn against the AI industry

Illustration: A vintage suburban woman holding a protest placard on her front porch.

What happened. The New Republic this week aggregated a set of converging signals that the AI industry's public standing is deteriorating. On April 10, Sam Altman's home was attacked with a Molotov cocktail by a 20-year-old self-described "butlerian jihadist." Three days earlier, an Indianapolis councilman who had backed a local data center project had 13 shots fired into his home, with a "No Data Centers" note left behind. Stanford's 2026 AI Index, released April 13, found expert-versus-public sentiment gaps of roughly 50 points on AI's long-term effect on jobs (73% of experts positive vs. 23% of the public) and the economy (69% vs. 21%). A March Gallup survey shows Gen Z excitement about AI falling from 36% to 22% year over year, while anger rose from 22% to 31%.

Why it matters. The economic case underneath the backlash is no longer purely vibes. A February 2026 NBER paper found 80% of companies actively using AI report no productivity impact; the widely cited 2025 MIT study put corporate pilot ROI failure at 95%. Virginia residential electricity rates are projected to rise up to 25% by 2030 on data-center load. The industry is asking for hundreds of billions in additional capex against a backdrop where the productivity payoff has not landed for most buyers and the externalities are landing on voters' utility bills.

Who is affected. AI executives and their security budgets — the latter likely to climb sharply. Data-center developers and the local officials who approve them, who now face a politically activated and occasionally violent opposition. Policy shops at OpenAI, Microsoft, Anthropic, and Google, whose recent gestures (OpenAI's Industrial Policy White Paper, Microsoft's Community-First AI Infrastructure Initiative) lack independent accountability mechanisms. State legislators: Illinois SB 3444, which would shield OpenAI from liability for large-scale model harms, has become a flashpoint, with Anthropic publicly opposed.

What to watch next. State-level AI liability bills, particularly Illinois SB 3444 and the SuperPAC activity opposing similar measures elsewhere; whether utility regulators in Virginia, Ohio, and Texas begin ring-fencing data-center load from residential rate bases; and whether any major hyperscaler attaches binding accountability terms to its community commitments. Expect executive protection costs to surface in upcoming 10-Q filings.


The day's three threads tighten around a single tension. Google's compute disclosure and the Erdős result both argue that the technical frontier is advancing and consolidating — capability and capital are compounding at the top of the stack. The backlash data argues that the social and political base on which that frontier rests is narrowing at the same time. The industry's near-term challenge is no longer whether the models can do useful new work; it is whether the public will tolerate the buildout required to deploy them.
