Dear Dysruptors,
Fernando Santa Cruz here for the 43rd edition of the Weekly Synapsis, the week in which the Pentagon signed with Google while 600 employees rebelled, China unwound Meta’s $2 billion acquisition of a homegrown startup, and the White House barred Anthropic from expanding access to the model that can hack hospitals.
Writing from Toronto, I’m getting back into the rhythm of the city’s dynamic AI ecosystem. We are also coordinating with stakeholders and initiatives in Mexico to energize the scene there, aiming to bring the benefits of AI to more SMEs and individuals in their daily lives, with a strategy better tailored to Yucatán.
This week, the decision-makers changed.
Until now, AI labs made their own product decisions. They launched models, set prices, and chose their clients.
This week, governments entered the engine room.
The Pentagon integrated frontier AI into classified military networks. The White House told a private lab who it could not sell to. Beijing canceled a $2 billion acquisition because it deemed the technology too strategic to leave the country.
AI is no longer just a market. It has become sovereign infrastructure.
This newsletter dives deeper into the WhatsApp summaries (week of April 27 to May 1) to understand why three governments intervened in the same week, why a Google researcher published a mathematical argument that AI can never be conscious, and why the most important news was neither of these: a system at the Mayo Clinic that detects pancreatic cancer up to three years before a human doctor.
The Pentagon Signed at 4 PM and the Letter From 600 DeepMind Engineers Changed Nothing: Google Crosses Its 2018 Red Line
On April 28, Google signed an agreement with the Department of Defense for the Pentagon to use Gemini with classified data for “any lawful government purpose.” The contract was signed at 4:00 PM.
That same morning, more than 600 Google and DeepMind employees signed an open letter asking CEO Sundar Pichai to reject the contract.
Pichai signed it anyway. Hours later.
In 2018, a similar letter with 4,000 signatures pushed Google to walk away from Project Maven. This time, 600 signatures didn’t move the needle.
Let’s think about what changed. It wasn’t ethics. It was opportunity cost.
OpenAI, xAI, and Microsoft were already operating on classified military networks. Anthropic refused, and the Pentagon designated it a “supply chain risk,” a label usually reserved for companies from adversary nations.
The message to the rest of the industry is clear. It’s like gas stations in a 24-hour city: the one that closes at night loses the customer; the one that stays open sets the price.
For any company building products on these models, the message is operational. An AI provider’s “ethical red lines” remain valid only until a larger client crosses them first.
Strategic Question: Does your company clearly know which AI providers it uses, and how their terms of service change when they onboard government or military clients?
$2 Billion Paid and a Veto That Is Obeyed: China Treats AI as Strategic Petroleum
The Chinese government blocked Meta’s acquisition of Manus. The purchase, valued at $2 billion and announced in December, was canceled in April by the National Development and Reform Commission.
Manus was an agentic AI startup founded by Chinese engineers. To escape regulatory scrutiny, its founders moved to Singapore before closing the sale. The capital had already been transferred, the employees had already been integrated into Meta, and investors like Tencent and Hongshan had already cashed out.
China forced them to undo everything.
It’s like a government discovering that a strategic oil field has been sold abroad and demanding that every barrel be returned.
The message isn’t for Meta. It’s for the rest of the ecosystem.
Agentic AI—systems that execute tasks autonomously without human supervision—is no longer treated as commercial software. It is treated as weaponry.
This splits the global AI acquisition market in two. What is purchasable in San Francisco is not purchasable in Singapore, even if the company is legally registered there. What matters is not where the legal entity sits, but where its founders learned to code.
For a Latin American SME, this changes the supplier calculus. An agentic AI tool of Chinese or American origin could disappear tomorrow due to a geopolitical decision, not a business one.
Strategic Question: Do you know the geopolitical origin of the AIs your company uses, and what happens if one of them gets blocked by a government order?
A 27-Year-Old Bug in OpenBSD and 70 Blocked Companies: The White House Wants Mythos All to Itself
The White House prohibited Anthropic from expanding Claude Mythos beyond the roughly 50 companies that already have access. The approximately 70 organizations on the waiting list were left out in the cold.
Mythos is the restricted cybersecurity model that, in internal tests, found a 27-year-old bug in OpenBSD that no human had detected, identified vulnerabilities in power grids, and compromised hospital systems.
The official reason: national security risk if it falls into the wrong hands. The secondary reason: Anthropic doesn’t have enough compute to serve 120 corporate clients without affecting the priority access the federal government already has.
What few are connecting: the same government that forbids Anthropic from expanding commercial access to Mythos has simultaneously integrated OpenAI and xAI models into classified military networks. The White House is also drafting an executive directive that would allow federal agencies to bypass the Pentagon’s designation and use Anthropic.
It’s like a mother who forbids her son from lending the car to a friend, but at the same time asks that friend to drive her to the supermarket. The rule isn’t about the car. It’s about who gets to decide when to use it.
What we are seeing is not coherent regulation. It is strategic capture disguised as regulation. The state wants priority access to the most powerful AI capabilities and to restrict who else can buy them.
Strategic Question: Does your AI adoption plan depend on having access to a frontier model, or can you operate with tier-two models that do not face these restrictions?
Non-Exclusive License Until 2032, -23% Gross Margins, and $14 Billion in Losses: AI SaaS Loses by Design
Microsoft and OpenAI radically restructured their historic alliance. Microsoft maintains a non-exclusive license to OpenAI’s IP until 2032. OpenAI can now offer its models on any cloud, including AWS and Google Cloud. Microsoft stops paying royalties on revenue to OpenAI, and the clause that invalidated the entire agreement if “AGI” was reached has disappeared.
The same week, data emerged that breaks the optimistic narrative of AI SaaS. Cursor, the most popular AI code editor on the market, reported negative gross margins of 23% while generating $2.7 billion in annualized revenue. And OpenAI is projecting losses of $14 billion by 2026.
It’s like a restaurant that fills every table every night but loses 23 cents on every dollar it charges because the raw materials cost more than the customer is willing to pay.
For decades, SaaS worked for a simple reason: every new user cost almost nothing. AI inverts that equation. Every “power user”—the customer the company likes most—is the one that generates the most losses. The more they use, the more the company loses.
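To see the inversion in numbers, here is a back-of-napkin sketch in Python. The price and per-request cost are illustrative assumptions, not Cursor’s actual figures; only the shape of the curve, heavy usage pushing margins negative, comes from the reporting above.

```python
# Illustrative AI SaaS unit economics: flat subscription revenue,
# usage-driven inference costs. All numbers are hypothetical.
PRICE = 20.00             # USD per user per month (assumed)
COST_PER_REQUEST = 0.012  # USD blended inference cost (assumed)

def gross_margin(requests_per_month: int) -> float:
    """Per-user gross margin: (revenue - variable cost) / revenue."""
    cost = requests_per_month * COST_PER_REQUEST
    return (PRICE - cost) / PRICE

for label, usage in [("casual user", 300), ("daily user", 1200), ("power user", 2050)]:
    print(f"{label:>11}: {gross_margin(usage):+.0%}")

# casual user: +82%
#  daily user: +28%
#  power user: -23%   <- the best customer is the worst margin
```

In this sketch, break-even sits around 1,667 requests per month; everything a power user does past that line is a subsidy.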
OpenAI separating from Microsoft confirms the problem is structural. It won’t be solved by a better customer-supplier relationship. It will be solved, if at all, with absolute scale and proprietary models that reduce unit costs below the price charged.
Strategic Question: Does your AI budget anticipate a 30% to 60% price increase in the next 18 months, or are you budgeting with rates that the provider is currently subsidizing with venture capital?
The Creator of AlphaGo Bets That Human Text Is No Longer Enough (and $1.1 Billion to Start Without the Internet)
David Silver, the scientist behind AlphaGo, AlphaZero, and AlphaStar, left DeepMind to found Ineffable Intelligence. In his first round, he raised $1.1 billion at a $5.1 billion valuation—the largest seed round in European history.
What’s notable isn’t the money. It’s the technical bet.
Silver wants to build an AI that learns entirely through reinforcement, without training on any human data. No books. No Wikipedia. No GitHub code. Just trial and error against the world.
It’s as if the next generation of mathematicians were educated without having access to a single previously published theorem, discovering everything from scratch.
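To make the bet concrete, here is a minimal sketch of learning with no human data at all: tabular Q-learning on a toy corridor world. The agent starts with an all-zero table, no text, no examples, and improves purely from a reward signal. This is the textbook version of the idea, not Silver’s actual system, which has not been published.

```python
import random

# Toy world: a corridor of 6 cells, goal at the right end. The agent
# begins with zero knowledge, no data, no demonstrations, and learns
# only from the reward it receives when it reaches the goal.
N_STATES, GOAL = 6, 5
ACTIONS = [-1, +1]                         # step left, step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # all-zero table: no prior knowledge
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def choose(state: int) -> int:
    # Epsilon-greedy with random tie-breaking: an untrained agent has
    # identical values everywhere, so it starts out exploring at random.
    if random.random() < EPSILON or Q[state][0] == Q[state][1]:
        return random.randrange(2)
    return 0 if Q[state][0] > Q[state][1] else 1

for episode in range(500):
    state, done = 0, False
    while not done:
        a = choose(state)
        nxt = max(0, min(N_STATES - 1, state + ACTIONS[a]))
        reward, done = (1.0, True) if nxt == GOAL else (0.0, False)
        # Bellman update: the reward signal is the only teacher.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
        state = nxt

print([max(range(2), key=lambda i: Q[s][i]) for s in range(N_STATES)])
# -> [1, 1, 1, 1, 1, 0]: step right in every state before the terminal goal.
```

Scaling this from a six-cell corridor to open-ended knowledge discovery is exactly the unsolved part. That is what the $1.1 billion is for.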
The reason behind this bet is that frontier labs internally assume the “data wall” has been hit. The human data available on the internet is exhausted, new data is in legal dispute, and training with synthetic data generated by other AIs introduces cumulative errors that degrade models.
If Silver is right, two things change. First, the massive copyright lawsuits against OpenAI, Anthropic, and others lose economic relevance. The next generation of models won’t need to ingest human intellectual property. Second, the models that emerge won’t be predictable. An AI trained without human text won’t share our intuitions or our biases. It will generate literally alien knowledge.
Strategic Question: Does your 3-year AI strategy assume that models will continue to evolve based on chatbot logic, or are you prepared for tools whose way of reasoning does not resemble human reasoning?
Mayo Clinic Beats the World’s Most Lethal Cancer: 16 Months of Median Lead Time From Routine CT Scans
Mayo Clinic published in Gut the validation results of REDMOD, an AI model that analyzes routine abdominal CT scans and detects pancreatic cancer up to 3 years before traditional clinical diagnosis. It identified 73% of pre-diagnostic cancers with a median lead time of 16 months. Specialist radiologists, looking at the same scans without AI help, identified only 39%.
Pancreatic cancer kills 95% of patients precisely because it is detected late. When symptoms appear, the tumor is almost always inoperable. REDMOD finds the microscopic signature years before the tumor becomes visible.
This week, there were headlines about the Pentagon, the White House, million-dollar lawsuits, and record funding rounds. This news appeared in medical journals, and almost no one covered it.
It’s like when the mammogram was invented. It took decades to move from “fascinating technology” to “standard of care that saves millions of lives.”
The lesson for SMEs and entrepreneurs isn’t medical. It’s strategic. While public conversation is focused on ChatGPT, agents, and existential fears, the true economic and human value of AI is appearing in quiet places. Recognition of patterns invisible to the human eye. Early detection. Anticipation.
Strategic Question: What patterns does your industry consider “invisible” or “impossible to detect in time,” and what value would a tool have that detected them months or years in advance?
Google Puts a Mathematical Ceiling on the AI Apocalypse
Alexander Lerchner, a senior scientist at Google DeepMind, published The Abstraction Fallacy, a paper that argues mathematically that AI can never be conscious. Not because of limitations in scale, but because of the very nature of computation.
The argument is precise. Computation is not a real physical process. It is a map that a conscious observer imposes on transistors that change voltage. Saying a GPU “computes” requires a conscious human who decides which physical states count as which symbols. Without that cartographer, the chips are just doing pure physics: electrons moving.
It’s like a perfect simulation of photosynthesis on a computer. No matter how accurate the model is, it will never produce a single molecule of glucose.
This changes two things. First, apocalyptic fears about AIs gaining free will and rebelling—based on the idea that more parameters eventually generate consciousness—lose their technical foundation. Second, and more importantly for businesses, it redirects the conversation. The right question was never “will this AI feel?” The right question is “what does this simulation do with the speed and reach it has?”
An AI without consciousness is still capable of hacking hospital networks if someone asks it to. It is still capable of generating persuasive content at scale. It is still capable of making autonomous financial decisions. The risk was never the machine’s will. It is the human will orchestrating incredibly powerful, mindless machines.
Strategic Question: Is the AI governance your company is building focused on what the AI “might want,” or on what humans might ask it to do without proper supervision?
Tools You Can Use Starting Monday
- Gemini Export: Google added the ability to generate and download Word, Excel, PDF, Google Docs, Sheets, and Slides directly from the chat. A week of notes becomes a one-page PDF. A budget brainstorm becomes an Excel with formulas. Eliminates hours of reformatting.
- Microsoft Word Legal Agent: A native legal agent within Word that reviews contracts clause-by-clause against an internal playbook, suggests changes, and maintains format and track changes intact. Available in the U.S. within the Frontier program. Reduces reliance on external review for standard contracts.
- OpenAI Codex with Computer Use: Codex now operates your Mac autonomously. Moves the cursor, opens applications, navigates, fills out forms. Multiple agents can run in parallel without interrupting your work. Ideal for automating repetitive flows in applications that lack an API.
- NVIDIA Nemotron 3 Nano Omni: Open-weights model with 30 billion parameters that processes text, image, video, and audio in a single loop. 9x higher operational throughput than competitors at the same quality. Available under commercial license. For companies that want to run multimodal agents locally without sending sensitive data to the cloud.
- Claude Integrated into Adobe, Autodesk, and Blender: Anthropic released native connectors for Claude to work within design, architecture, and 3D modeling software. Allows using natural language to automate batch tasks or generate elements directly in the creative interface. Reduces export/import friction.
- Perplexity in Microsoft Teams and Excel: Brings its assistant into daily work tools. Queries within Teams channels. Side panel in Excel for data analysis. Partnered with 1Password so corporate passwords are never exposed to the AI.
- xAI Grok 4.3: Model with 1 million tokens of context and API pricing 40–60% lower than the previous version. Includes a voice cloning suite for $3 per hour of interaction. An affordable alternative for companies developing their own AI integrations or phone automation.
My Invitation This Week: The Digital Sovereignty Audit
This week, three governments intervened in AI decisions.
The White House told Anthropic who it couldn’t sell to. China canceled a closed purchase. The Pentagon integrated models into classified military networks.
For an SME, that means one concrete thing: the AI tools you use today are not yours. They are tenants on foreign infrastructure, subject to political decisions made in languages no one in your company speaks.
I’m not inviting you to panic. I’m inviting you to take inventory.
- List the AIs your team actually uses (10 min). All of them. ChatGPT, Claude, Gemini, Perplexity, Copilot, integrations within Notion, Slack, HubSpot, Canva. The marketing tools, the dev tools, the customer service tools. Note each one honestly.
- Identify the jurisdictional origin of each tool (10 min). Is it a company from the US, China, Europe, or elsewhere? Does its infrastructure run on AWS, Azure, Google Cloud, or its own servers? Where is the data stored? Finding this information is 90% of the exercise.
- Classify each tool by criticality (10 min). Three categories:
- Critical: If it disappears tomorrow, the business stops.
- Important: If it disappears, we lose efficiency but keep operating.
- Replaceable: If it disappears, we find a replacement in 48 hours.
- For each critical tool, define a concrete Plan B (15 min). Not “we’ll look for something.” A specific name of an alternative provider, ideally from a different jurisdiction. How much would it cost to migrate? How long would it take? Who on your team would lead the migration?
The result is a one-page matrix. Four columns: tool, jurisdiction, criticality, Plan B. That page is your insurance against a government decision that might happen tomorrow in a distant capital.
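If you want that page to stay current instead of becoming a one-off exercise, a few lines of Python can keep it versioned alongside your other operations documents. A minimal sketch; every tool, jurisdiction, and Plan B below is a placeholder for your own inventory:

```python
import csv
from collections import Counter

# Placeholder inventory: replace with the tools your team actually uses.
# Columns mirror the audit above: tool, jurisdiction, criticality, Plan B.
inventory = [
    ("ChatGPT",  "US", "critical",    "Claude (US) or Mistral (EU); ~2 weeks to migrate prompts"),
    ("Gemini",   "US", "important",   "Perplexity; ~1 week"),
    ("Canva AI", "AU", "replaceable", "Any design tool; 48 hours"),
]

with open("sovereignty_audit.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["tool", "jurisdiction", "criticality", "plan_b"])
    writer.writerows(inventory)

# One-line read on concentration risk: how many critical tools
# depend on a single jurisdiction's political decisions?
print(Counter(j for _, j, c, _ in inventory if c == "critical"))
```

Re-run it every quarter. The matrix only works as insurance if it is newer than the last geopolitical headline.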
The lesson this week is not that AI is dangerous. It is that the availability of AI is no longer decided by the companies building it. And those of us building businesses on these tools need to have a map of who is making the decisions.
Closing
Google signed with the Pentagon while 600 of its employees rejected the deal. China canceled a $2 billion Meta acquisition to prevent the leakage of agentic AI. The creator of AlphaGo raised $1.1 billion to build AI that learns without internet data. Mayo Clinic published a system that detects pancreatic cancer up to three years before a human. And a Google researcher published a mathematical argument that AI will never be conscious.
It is not a week of products. It is the week where it became clear that AI has ceased to be a market and has become sovereign infrastructure.
Governments are no longer observers. Companies no longer set their own rules. And we SMEs and entrepreneurs who are building with these tools must learn to read geopolitics in addition to prompts.
Because availability is no longer decided by price alone. It is decided by who is sitting in which capital.
Fernando Santa Cruz
Head of AI & Automation @ Adivor