Dear Disruptors,
I’m Fernando Santa Cruz with the thirty-second edition of Synapsis Weekly, where this week artificial intelligence forked into two opposing philosophies (temporarily), an entire economy bet its future on a single technology, and China proved that sanctions are temporary obstacles.
It’s been a particularly meaningful week. I’m working with the real estate industry to integrate AI into their operations and, thanks to an invitation from Universidad Autónoma de Yucatán and Cristina Mata, teaching AI for Marketing as part of their Master’s in Marketing program. There’s something powerful about teaching how to use AI. These are experienced marketers learning to wield tools that will redefine their entire discipline.
Meanwhile, my work in Toronto keeps deepening into NVIDIA’s GPU infrastructure and making AI upskilling accessible for Canadian businesses, because the distance between understanding a chip and understanding a business opportunity remains the real bottleneck. (My LinkedIn profile).
Two Brains, One Market: OpenAI’s Speed vs. Google’s Depth
This week I watched something split in real time. Artificial intelligence divided into two distinct species. OpenAI chose speed. Google chose depth. And the question they left on the table is brutally practical: do you need an employee who thinks fast, or one who thinks well?
Here’s the thing: the question itself is wrong. What you actually need is to know when to use each one. And that ability to discern (knowing which type of intelligence to apply to which problem) is perhaps the most valuable skill you can develop in 2026.
OpenAI launched GPT-5.3 Codex Spark in partnership with Cerebras chips: over 1,000 tokens per second. A complete game that used to take 45 seconds now generates in 6. In the opposite corner, Google DeepMind unveiled Gemini 3 Deep Think: 84.6% on ARC-AGI-2 (pure abstract reasoning) and gold medals in physics olympiads. Where Codex Spark is an Olympic sprinter, Deep Think is a monk who meditates for three hours before giving you an answer. But that answer is flawless.
It’s the death of the “one model for everything” approach. If you need to iterate on your website, generate prototypes, or fix CSS, the sprinter eliminates wait times. If you need to review a contract with ambiguous clauses or a financial calculation where one mistake costs millions, the monk is your senior consultant at a fraction of the price. The mistake is using a Ferrari to go to the grocery store or a bicycle to cross a country.
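One way to operationalize that discernment is a simple router that sends high-stakes tasks to a deep model and everything else to a fast one. A minimal sketch in Python; the model names and keyword list are placeholders for illustration, not real product identifiers:

```python
# Route each task to a speed-optimized or a reasoning-optimized model.
# Both model names below are hypothetical placeholders.
FAST_MODEL = "fast-sprinter"
DEEP_MODEL = "deep-monk"

# Words that signal a mistake would have real consequences (expand for your business).
HIGH_STAKES_KEYWORDS = {"contract", "legal", "financial", "compliance", "audit"}

def route(task_description: str) -> str:
    """Return which model tier a task should go to."""
    words = set(task_description.lower().split())
    if words & HIGH_STAKES_KEYWORDS:
        return DEEP_MODEL   # precision matters: pay for depth
    return FAST_MODEL       # iteration matters: good enough, fast

print(route("fix CSS on the landing page"))       # → fast-sprinter
print(route("review supplier contract clauses"))  # → deep-monk
```

In practice the classifier itself can be a cheap model call; the point is that routing is a one-afternoon project, not a platform migration.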
Question for your operation: Can you classify your team’s tasks into “needs speed” and “needs precision”? Are you using the same model for both?
$1 Trillion and a Prayer: The Economy That Bet Everything on AI
I’ll be direct: this number stopped me cold. Annualized U.S. spending on AI infrastructure surpassed $1 trillion. That’s 3.5% of GDP. 75% of S&P 500 returns depend on AI stocks. This is no longer a tech sector. It’s “too big to fail”: the American economy has become structurally dependent on the AI hypothesis working out.
But a DigitalOcean report revealed the paradox: 53% of companies report productivity gains, but only 10% have achieved real autonomous agents. A trillion dollars invested. Nine out of ten companies still in the “experimenting” phase. Let that sink in.
Here’s the hopeful data point: 44% now dedicate most of their budget to inference rather than training, a signal that the transition from lab to real operations has begun.
For SMBs, the secondary effect is positive: fierce competition among providers is collapsing prices. The trillion-dollar infrastructure is paid for by the giants. The benefits trickle down.
Question for your strategy: If 53% report gains but only 10% achieve autonomy, where does your company fall on that spectrum? Are you experimenting without measuring, or do you already have clear metrics on what AI saves you?
China Breaks the Chains: GLM-5 and the Chips Nobody Authorized
This week proved something the sanctions architects didn’t want to hear. Zhipu AI launched GLM-5: 744 billion parameters, surpasses GPT-4, rivals Claude Sonnet 4.5 across several benchmarks, and was trained entirely on Chinese-made Huawei Ascend chips. Zero NVIDIA dependency. The semiconductor export sanctions designed to slow China down for a decade turned out to be fertilizer for sovereign innovation.
Right alongside it, MiniMax launched M2.5 at US$0.30 per million input tokens. Do the math: if your SMB processes 10 million input tokens per month, your bill is $3. Three dollars.
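If you want to sanity-check that arithmetic against your own usage, the formula is just tokens divided by a million, times the per-million rate, summed over input and output. A quick sketch; the $0.30 input rate comes from the launch above, while the output rate here is purely an illustrative assumption:

```python
# Estimate a monthly LLM bill from token volumes and per-million-token prices.
def monthly_cost(input_tokens: int, output_tokens: int,
                 in_price_per_m: float, out_price_per_m: float) -> float:
    """Cost in dollars for one month of usage."""
    return (input_tokens / 1e6) * in_price_per_m + (output_tokens / 1e6) * out_price_per_m

# 10M input tokens at $0.30/M, plus 2M output tokens at an assumed $1.20/M:
print(round(monthly_cost(10_000_000, 2_000_000, 0.30, 1.20), 2))  # → 5.4
```

Run the same formula against your current provider’s rates and the comparison in the question below takes five minutes.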
The lesson: China’s strategy mirrors what Android did to iOS. Give away the base models so the world builds on your architecture. For companies watching every dollar of their margin, the legitimate privacy question gets solved with open-source versions that run locally.
Question for your budget: Do you know how much your company pays per token per month? Have you compared prices across providers in the last 90 days, or are you still with the first one you signed up with out of inertia?
The Swarm That Works While You Sleep: Kimi and 100 Agents
This week I saw the future of work, and it wasn’t a chatbot. Moonshot AI launched Kimi K2.5 Agent Swarm: instead of chatting with a single bot, you give it a goal and the system deploys up to 100 specialized sub-agents. Researchers, verifiers, writers, all working simultaneously. It analysed 40 academic PDFs and produced a 100-page report with zero human intervention. It’s the difference between hiring an intern and hiring an entire department for an afternoon.
This is the end of “chat” as the interface for complex work.
The user isn’t the operator. They’re the conductor.
A single employee at your SMB can do the analytical work of an entire department: market research, vendor comparison, legal document synthesis. And here’s the unexpected bonus: the sub-agents verify each other, introducing “productive disagreement” that reduces the hallucinations of any individual model. AI learns to doubt itself to become more reliable.
Question for your productivity: How many hours per week does your team spend reading, comparing, and synthesizing before making a decision? What would they do with those hours if they were freed up?
ByteDance, Disney, and the War for What’s Real: When AI Copies Too Well
ByteDance launched Seedance 2.0: hyperrealistic 15-second clips with perfect lip sync. The level of realism has definitively closed the “uncanny valley.” The problem: the demos included recreations of Dune and SpongeBob. Disney and SAG-AFTRA responded with immediate cease-and-desist orders, calling it a “labour emergency.”
The twist: the legal battle has shifted terrain entirely. It’s no longer just about whether training data violated copyright. Now the issue is what AI generates: indistinguishable replicas of protected scenes. Piracy without copying. Generative piracy.
For SMBs creating content, the barrier to video production has vanished, but legal liability falls entirely on whoever publishes. The competitive advantage isn’t in who generates the best content. It’s in who does it originally and in a legally defensible way.
Question for your marketing: If you can create cinema-quality advertising video in minutes, what legal protections do you have to ensure your generated content doesn’t infringe on third-party rights?
When AI Has a Bank Account: Coinbase and the Birth of the Artificial Economic Actor
I’ll be honest: this one gave me chills. Coinbase launched crypto wallets for AI agents. Not for humans using AI. For AI itself to custody and transact value. With spending limits, approval rules, and full auditing. It’s literally giving a corporate card to a digital employee.
The shift is profound: agents move from passive tools to economic actors.
Picture this: a swarm like Kimi’s could research a problem, identify the solution, hire the service, pay for it, and deliver the result. No human intervention required.
We’re building the rails of a machine-to-machine economy. And that real autonomy (the leap from the current 10% adoption) depends on agents being able to execute actions in the real world. Many of those actions require payments.
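Spending limits and approval rules are the part of this any business can reason about today. A hypothetical guardrail sketch (not Coinbase’s actual API; all thresholds invented) showing the kind of policy such a wallet enforces before an agent pays anyone:

```python
# A toy payment-authorization policy for an AI agent's wallet.
# All limits are illustrative; a real system would also log and audit.
DAILY_LIMIT = 100.00          # max dollars the agent may spend per day
APPROVAL_THRESHOLD = 25.00    # single payments above this need a human

def authorize(amount: float, spent_today: float) -> str:
    """Decide whether an agent-initiated payment proceeds."""
    if spent_today + amount > DAILY_LIMIT:
        return "deny"             # hard cap: never exceed the daily budget
    if amount > APPROVAL_THRESHOLD:
        return "needs_approval"   # escalate large payments to a person
    return "allow"                # small, in-budget: let the agent proceed

print(authorize(10.0, 0.0))    # → allow
print(authorize(30.0, 0.0))    # → needs_approval
print(authorize(10.0, 95.0))   # → deny
```

The interesting design question for your operation is where to set those two numbers, because that is where “autonomy” stops being a buzzword and becomes a budget line.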
Question for visionaries: If an agent could hire services and pay vendors within limits you define, how many processes that currently require three approvals and two days would be resolved in minutes?
MIT, 10 Million Tokens, and Memory That Doesn’t Decay
A team from MIT and Prime Intellect launched Recursive Language Models (RLMs): over 10 million tokens with zero degradation. Think about what that means. That’s equivalent to the entire internal documentation of a mid-sized company. All of it. Without the model forgetting the first document by the time it reaches the last.
It’s the difference between an analyst who loses the introduction by the time they reach the conclusions and one who can jump between sections while keeping the entire narrative in their head.
For startups building on language models, this directly threatens many RAG architectures. If you can feed an entire code repository into a single query without losing precision, the intermediate search layer becomes redundant.
Technical question: If your product relies on RAG for long documents, is your competitive advantage in the search architecture or in the quality of your curation?
“A Country of Geniuses in a Data Centre”: What Dario Amodei Actually Said
Dario Amodei (@Anthropic) stated we’re years away from having “a country of geniuses in a data centre” and that 90-100% of code will be written by AI. Nick Bostrom published his paper “Optimal Timing for Superintelligence” proposing a paradoxical strategy: accelerate development but pause briefly before full deployment. “Fast to the harbour, slow to the dock.” His analogy: it’s not Russian roulette. It’s a risky surgery to cure an otherwise fatal condition.
Here’s the real obstacle: we don’t have instruments for the “docking” moment. Snorkel AI and Hugging Face launched a $3 million fund to create new benchmarks, admitting current ones are useless. Agents score 10 out of 10 on lab exams and fail in the real world. We’re flying blind with the autopilot on.
For decision-makers: don’t deploy AI in critical processes without clear metrics for success and failure. The “vibe” that it “seems to work” doesn’t scale.
A philosophical and practical question: If 90% of code will be written by AI, what skills does your technical team need in two years? Are you investing in what AI can’t replace: defining problems, auditing quality, making ethical decisions?
Waymo, 200 Million Miles, and a Race That’s No Longer About Safety
Waymo testified before the Senate with a striking data point: 200 million autonomous miles, over 400,000 paid trips per week, and safety data superior to the average human driver.
What’s revealing: the most important part of the hearing wasn’t the safety argument. It was the geopolitical one. The push to accelerate federal standards came from the fear that China wins the race, not from evidence that the cars are safe. The discourse shifted from “Is it safe?” to “How do we win?”
Question for logistics: If Waymo is already operating commercially at scale, does your transportation company have a five-year plan that accounts for this reality?
Tools You Can Use on Monday
- Claude “Cowork” for Windows: AI accesses local files, terminal, and executes tasks on your PC. Automate folder organization, data cleanup, and reports without depending on the browser. The “cloud chatbot” just became an employee who uses your computer.
- Kling 3.0 on Leonardo: High-quality advertising videos with audio in minutes. Social media material without production teams. The cost-benefit equation for visual content just changed radically.
- Deep Research with Source Control (GPT-5.2): Specify exact URLs to investigate (competitor blogs, industry forums). It went from “find me something” to “watch exactly these targets.” Accessible competitive intelligence.
- WarpGrep (MorphLLM): Search code repositories 20x faster. If your SMB develops software, this eliminates code navigation bottlenecks.
- ElevenLabs Expressive Agents: Voices that detect and react to emotions. If you run phone support, scale without losing the human touch. Voice is the new brand interface.
My Invitation This Week
Try this clarity experiment.
This week, AI forked into “fast” and “deep.”
Most people use the same tool for everything. That’s like using a scalpel to cut bread and a kitchen knife to perform surgery.
Over the next five business days, every time you use AI, jot down three things in a phone note:
- The task. Be specific: “summarize client email,” “review contract,” “generate campaign ideas.”
- The type of intelligence it required. Type V (velocity): what mattered was getting something fast and good enough. Type P (precision): a mistake would have real consequences.
- Did you match the right tool? Did you use a fast model for something that needed depth, or pay premium for something the basic model could handle?
At the end of the week, tally it up.
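If your notes end up in a spreadsheet or a plain list, tallying them takes a few lines. A sketch with made-up entries, just to show the shape of the exercise:

```python
# Tally a week of AI-usage notes: (task, type, tool_matched).
# Type "V" = velocity mattered, "P" = precision mattered. Entries are examples.
from collections import Counter

log = [
    ("summarize client email",   "V", True),
    ("review contract",          "P", False),  # used a fast model for precision work
    ("generate campaign ideas",  "V", True),
]

by_type = Counter(task_type for _, task_type, _ in log)
mismatches = [task for task, _, matched in log if not matched]

print(dict(by_type))   # → {'V': 2, 'P': 1}
print(mismatches)      # → ['review contract']
```

The mismatch list is the payoff: every entry on it is either money you overspent or risk you undercharged.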
Most people will discover that 70-80% of their uses are Type V, and they’re overpaying for them. And that they’re solving the remaining 20-30% (the decisions that actually matter) with the same fast tool, risking quality where it counts most.
It’s not about switching tools.
It’s about being intentional with which one you use for what.
This week, the gap between the 53% reporting gains and the 10% achieving autonomy won’t close with better models.
It will close with better decisions about how to use the ones that already exist.
Do you know when your business needs speed and when it needs depth?
That distinction, dear Disruptors, is what separates the 53% from the 10%.
Fernando Santa Cruz – Head of AI & Automation @ Adivor Consulting
P.S. A trillion dollars in infrastructure is worthless if the talent doesn’t know what to do with it. The revolution has never lived in the data centre. It’s always lived in the exact moment someone realizes they can use it. Maybe we should spend less on GPUs and more on that moment. Just a thought.