Digital Colliers Daily Briefing — April 29, 2026
The cloud AI stack reorganized itself in a single 24-hour window. OpenAI's models went live on AWS Bedrock — ending Azure's exclusivity and reframing the agent platform race — while all four hyperscalers reported Q1 results that put committed AI infrastructure spending on a trajectory few power grids or balance sheets were designed for. Underneath it, GitHub Copilot's shift to metered billing surfaced the question those capex numbers can no longer avoid: who, exactly, pays for the tokens.
1. OpenAI Lands on Bedrock, and Azure's Multi-Year Moat Closes

What happened. At an AWS event in San Francisco, AWS CEO Matt Garman announced that OpenAI's frontier models are now available through Amazon Bedrock in limited preview, alongside a new Bedrock Managed Agents offering "powered by OpenAI" and a hosted version of Codex that runs against models inside AWS data centers. GPT-5.4 is live now; GPT-5.5 is due "within the next couple of weeks," according to Garman. The launch follows Monday's amended Microsoft–OpenAI agreement, which makes Microsoft's IP license non-exclusive, ends Microsoft's revenue share to OpenAI, and frees OpenAI to ship its products on any cloud. Sam Altman appeared by pre-recorded video; The Register noted he was juggling a Musk lawsuit court date the same afternoon.
In an embargoed interview published by Stratechery, Altman framed Bedrock Managed Agents as a packaged runtime — identity, permissions, state, logging, VPC-scoped data — built on top of AWS's existing AgentCore primitives. AWS confirmed customer data stays inside the customer's VPC, and that inference will run on a mix of GPUs and Trainium, shifting toward Trainium over time as part of the up-to-$50B compute commitment OpenAI signed earlier this year.
Why it matters. Azure's exclusive on OpenAI was the single largest competitive asymmetry in enterprise cloud since 2023. Removing it collapses the choice many CIOs were deferring: they no longer need to move workloads to Azure to access GPT-class models, and they no longer need to expose data to OpenAI's APIs to use Codex. As Altman told Stratechery, "model and harness come together more over time" — the agent runtime, not the raw API, is where value is now accruing, and AWS just put OpenAI inside its runtime.
Who is affected. Microsoft loses differentiation but sheds OpenAI's revenue-share obligation and keeps IP access through 2032 — a trade Wall Street will likely score as net-positive given Anthropic's 2025 momentum on Azure. AWS gains the model brand its enterprise base has been asking for, while quietly demoting Anthropic's own Bedrock launch (Claude Cowork and Claude Code Desktop) to a Tuesday blog post the prior week. OpenAI gets distribution into AWS's installed base in exchange for foregone Azure revenue and a 2GW Trainium commitment. Oracle and Google Cloud now compete for a smaller pool of OpenAI-curious workloads.
What to watch next. GPT-5.5 going GA on Bedrock; whether the "Managed Agents powered by OpenAI" exclusivity holds when other clouds inevitably ask for the same package; and whether AWS's adjacent application bets — the rebranded Quick suite, plus Connect Decisions, Talent, and Health — find buyers, or join the long list The Register's Corey Quinn catalogued as AWS's "application graveyard."
2. Hyperscaler Q1: $44B at AWS, $125–145B at Meta, and a Capex Curve That Keeps Bending Up

What happened. Four sets of numbers landed on the same afternoon. Alphabet posted Q1 revenue of $109.9B (+22% YoY), with Google Cloud at $20B (+63%) and net income of $62.58B. Microsoft's Intelligent Cloud segment came in at $34.68B with Azure and other cloud services up 40% YoY; Microsoft 365 Copilot crossed 20 million seats. AWS revenue rose 28% YoY to $37.6B — its fastest growth in 15 quarters — while AWS capital expenditures hit $44.2B for the quarter, up from $25B a year earlier. Meta reported revenue of $56.31B (+33%) and net income of $26.77B (+61%), but raised its 2026 capex guidance to $125–145B (from a prior $115–135B), per Bloomberg's Riley Griffin. Meta shares fell more than 6% after hours.
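The reported figures can be cross-checked with straight arithmetic. A quick sketch using only the numbers above (no assumed figures; the derived values are simple ratios of what's reported):

```python
# Simple arithmetic cross-checks on the reported Q1 figures.
aws_rev, aws_rev_growth = 37.6e9, 0.28        # AWS revenue, +28% YoY
aws_capex_now, aws_capex_prior = 44.2e9, 25e9 # AWS capex, this Q1 vs a year ago

implied_prior_rev = aws_rev / (1 + aws_rev_growth)  # ~$29.4B a year ago
capex_growth = aws_capex_now / aws_capex_prior - 1  # ~77% YoY

meta_rev, meta_net = 56.31e9, 26.77e9
meta_net_margin = meta_net / meta_rev               # ~47.5% net margin

print(f"AWS revenue a year ago: ${implied_prior_rev/1e9:.1f}B")
print(f"AWS capex growth YoY:   {capex_growth:.0%}")
print(f"Meta net margin:        {meta_net_margin:.1%}")
```

The asymmetry is the story: AWS capex is growing roughly 77% year over year against 28% revenue growth, which is exactly the gap the "who pays for the tokens" question lives in.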
Why it matters. The combined 2026 capex envelope across the four firms is now tracking well above $500B, and the quarter-over-quarter acceleration is occurring after two years of investors asking when the curve flattens. Microsoft's Copilot seat count and AWS's reacceleration show real revenue attaching to AI spend, but Meta's reaction tells the other side of the story: record earnings punished for a $10B upward revision to capex guidance, a signal that the market has begun pricing infrastructure intensity as a risk rather than a moat. Google Cloud's 63% growth, the fastest in the group, suggests the third hyperscaler is finally converting its TPU-and-Gemini integration story into commercial traction.
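The ">$500B" envelope can be sanity-checked from the figures in this briefing alone. Microsoft's and Google's full-year 2026 guidance isn't cited here, so this back-of-envelope sketch solves for their implied joint share rather than assuming numbers for them:

```python
# Back-of-envelope check on the ">$500B combined 2026 capex" claim,
# using only figures cited in this briefing.
META_GUIDE_LOW, META_GUIDE_HIGH = 125e9, 145e9  # Meta's raised 2026 range
AWS_Q1_CAPEX = 44.2e9                           # AWS capex this quarter

meta_midpoint = (META_GUIDE_LOW + META_GUIDE_HIGH) / 2  # $135B
aws_annualized = AWS_Q1_CAPEX * 4                       # ~$177B, if the Q1 pace simply holds

known = meta_midpoint + aws_annualized
# What Microsoft and Google would jointly need to spend for the claim to hold:
implied_msft_goog = 500e9 - known

print(f"Meta midpoint:      ${meta_midpoint/1e9:.0f}B")
print(f"AWS annualized:     ${aws_annualized/1e9:.1f}B")
print(f"Implied MSFT+GOOG:  ${implied_msft_goog/1e9:.1f}B+")
```

Meta and AWS alone account for roughly $312B on these assumptions, so the claim only requires Microsoft and Google to jointly clear about $188B, comfortably below their recent trajectories.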
Who is affected. Nvidia and the broader accelerator supply chain remain the immediate beneficiaries; so do utilities, transformer manufacturers, and data-center developers in regions with available power. Enterprise customers face a market where their cloud bills, electricity costs, and AI vendor contracts are now structurally linked. Public-market investors get four different risk profiles inside the same trade: Microsoft monetizes through seats, AWS through GPU rental, Google through an integrated stack, and Meta through capex with no external revenue line at all.
What to watch next. Whether Meta's 2026 guide drifts higher again on the Q2 call; how Azure's growth rate behaves now that the OpenAI exclusivity is gone; and whether AWS's $44B quarterly capex translates into utilization, or into the depreciation overhang that begins eating margins in 2027.
3. GitHub Copilot Goes Metered, and the Subscription-AI Premise Cracks

What happened. GitHub announced that all Copilot plans will move to usage-based billing on June 1, 2026. Subscribers will receive a token allowance equal to their monthly fee — roughly $19 of inference for a $19/month plan — rather than the current allocation of "requests" and "premium requests." GitHub's framing: "a quick chat question and a multi-hour autonomous coding session can cost the user the same amount," and that model "is no longer sustainable." Roughly two million Copilot subscribers are affected.
Why it matters. The economic mismatch GitHub is acknowledging is industry-wide. As Ed Zitron documented in a piece that hit Hacker News' front page today, the Wall Street Journal reported in 2023 that Copilot was losing more than $20/month per user on a $10 subscription, with some users costing as much as $80. Anthropic's own Claude Code documentation, quietly updated this month, now pegs average enterprise spend at "$13 per developer per active day and $150–250 per developer per month," figures that don't survive contact with a $20 or $100 consumer plan. Microsoft is the best-capitalized vendor in the category; if it cannot keep subsidizing agentic inference at a flat rate, the precedent is hard for Cursor, Replit, Lovable, Perplexity, or the AI-feature tier of every SaaS incumbent to ignore.
Who is affected. Individual developers will see effective price increases: early Reddit analysis cited by Zitron suggests a single former "premium request" can consume around $11 of tokens under metered pricing. Engineering organizations that issued "use AI as much as possible" mandates without instrumenting token spend now face budget exposure that, per Goldman Sachs estimates Zitron cites, can run to 10% of headcount cost. SaaS vendors who bundled AI features into existing seat pricing face the same arithmetic Microsoft just conceded. And the AI labs themselves, particularly OpenAI and Anthropic, whose consumer and Pro tiers remain flat-rate, now have clear competitive cover to follow.
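The per-developer arithmetic implied by those figures is worth making explicit. A rough sketch, using only the numbers cited in this section (the $11-per-request estimate is from the Reddit analysis, the $150–250 range from Anthropic's documentation; everything else is division):

```python
# Rough per-developer arithmetic from the figures cited in this section.
ALLOWANCE = 19.0        # $19/month plan -> $19 token allowance
PREMIUM_REQUEST = 11.0  # est. token cost of one former "premium request"
ENTERPRISE_LOW, ENTERPRISE_HIGH = 150.0, 250.0  # $/developer/month (Anthropic docs)

requests_covered = ALLOWANCE / PREMIUM_REQUEST  # fewer than 2 per month
shortfall_low = ENTERPRISE_LOW - ALLOWANCE      # monthly gap at the low end
shortfall_high = ENTERPRISE_HIGH - ALLOWANCE    # monthly gap at the high end

print(f"Premium requests covered by allowance: {requests_covered:.1f}")
print(f"Monthly gap at enterprise usage: ${shortfall_low:.0f}-${shortfall_high:.0f}")
```

A $19 allowance covering fewer than two heavy requests, against a $131–231 monthly gap at documented enterprise usage, is the budget exposure those "use AI as much as possible" mandates now have to absorb.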
What to watch next. Whether Anthropic's Max and OpenAI's Pro tiers move to metered or hybrid billing within the next two quarters; how Cursor and Replit respond, given both have been running negative gross margins on subscription tiers; and whether enterprise AI ROI studies start landing differently once buyers see line-item token costs rather than absorbed seat fees.
The throughline across today's three stories is a single tightening loop. Hyperscalers are committing half a trillion dollars in 2026 capex on the assumption that inference demand is durable and roughly price-insensitive — an assumption Altman repeated to Stratechery when he said customers are asking for more capacity "no matter what the price is." Bedrock Managed Agents is the distribution mechanism that converts that capex into enterprise revenue. GitHub's metered pricing is the first major vendor admitting that the demand curve looks very different when buyers see the meter. Whether the capex thesis or the metering thesis wins out will define the next four quarters; today, both got materially more real.

