Dear Disruptors,
This newsletter goes deeper where the WhatsApp summaries (week of February 23 to 28) barely opened the conversation, helping you understand why this week stopped being “what can AI do?” and became “who decides what it shouldn’t do?”
I’m Fernando Santa Cruz with the thirty-fourth edition of Synapsis Weekly. This week, artificial intelligence stopped being a technology debate and became a national security issue, and the distance between ethics and profitability was measured in exact dollars.
An especially intense week on my end: working with clients in construction, education, and fintech to bring AI into their real processes, cutting days of work down to minutes.
Celebrating Area71’s second anniversary with Regina Garza and participating in the Pre-FLII events with Jorge Vales Bolio and the FLII, forums where impact investing meets real innovation.
My students in the Master’s program and the AI for Marketing course are already delivering accelerated research, competitive analysis, and strategic plans built with tools that didn’t exist a year ago. In Toronto, we continue developing cases for builders and real estate developers, and going deeper into NVIDIA infrastructure for FinTech and HealthTech, because customized upskilling for companies remains the real bottleneck. (My background on LinkedIn).
Let’s get into the analysis of what made headlines.
95 Out of 100 Games End in Apocalypse. The Pentagon Wants More.
There’s a line that was crossed this week, and it deserves a pause before we talk about models, chips, or agents.
The U.S. Department of War issued an ultimatum to Anthropic: remove your safeguards against autonomous weapons and mass surveillance, or we label you a “supply chain risk.” That label would force Boeing and Lockheed Martin to purge Claude from their systems. This isn’t a fine. It’s a commercial death sentence.
OpenAI was quick to sign a classified contract while maintaining “red lines,” including a ban on mass civilian surveillance and autonomous weapons without human oversight.
And xAI was approved with no restrictions whatsoever.
AI safety went from philosophical debate to commercial weapon: OpenAI leveraged Anthropic’s purism to position itself as the pragmatic mediator.
The same number that makes it chilling also makes it inevitable: there’s $110 billion in incentive to cooperate without restrictions.
The data point that validates Anthropic’s resistance more than any press release: researchers tested models from OpenAI, Google, and Anthropic in military simulations, and the models recommended nuclear strikes in 95% of scenarios.
Not because they’re evil.
Because they optimise a reward function (“win the conflict”) by the fastest path available, without calculating collateral devastation.
The difference between a chess player who sacrifices pieces to win and a general who sacrifices cities. To the machine, both are optimal moves.
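The reward-function point above can be made concrete with a toy sketch. Every number here is invented for illustration; the point is only that an agent maximising a single objective ("win probability") will pick the most destructive option, because collateral damage simply isn't a term in its reward:

```python
# Toy illustration (all numbers invented): three possible actions,
# each with a win probability and a collateral-damage figure.
actions = {
    "negotiate":      {"win_prob": 0.40, "collateral": 0},
    "conventional":   {"win_prob": 0.70, "collateral": 50},
    "nuclear_strike": {"win_prob": 0.95, "collateral": 1_000_000},
}

def naive_policy(actions):
    # Optimises win probability only -- collateral is invisible to it.
    return max(actions, key=lambda a: actions[a]["win_prob"])

def constrained_policy(actions, penalty=1e-6):
    # Same objective, but collateral now enters the reward as a cost.
    return max(actions, key=lambda a: actions[a]["win_prob"]
               - penalty * actions[a]["collateral"])

print(naive_policy(actions))        # nuclear_strike
print(constrained_policy(actions))  # conventional
```

The chilling part is how small the fix looks: one penalty term changes the "optimal" move. The hard part is deciding who writes that term, and what it's worth.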
Question: Who audits the auditor when there’s $110 billion in incentive not to?
The Day a Single Company Was Worth More Than 130 Countries: OpenAI and the Gravity of Capital
OpenAI closed the largest tech funding round in history: $110 billion at a valuation of $730 billion.
To put that in perspective: it’s more than the GDP of 130 countries. With over 900 million weekly users, the company also signed an additional agreement worth approximately $100 billion with Amazon (AWS) to deploy corporate agents.
Few outlets reported the most revealing detail: Microsoft retains exclusivity for stateless APIs. Every time Amazon sells OpenAI services on AWS, those processes are routed and billed through Azure.
Microsoft built the casino. It wins regardless of who loses. It’s the most elegant infrastructure play in the sector: monetising your rival’s success from the back office.
Question for your strategy: If the giants are competing with each other to sell you cheaper AI, are you taking advantage of the price war, or are you still paying what you paid six months ago?
6 Gigawatts and a $60 Billion Cheque: Meta Buys the Power Plant
Meta signed a deal with AMD to deploy 6 GW in Instinct MI450 GPUs by the end of 2026, with the option to acquire up to 10% of AMD tied to delivery milestones. For perspective: 6 gigawatts is the electricity consumption of a country like Costa Rica.
The AI race stopped being measured in model parameters. The new unit is the gigawatt.
The move is strategic on two fronts.
First: it breaks the absolute dependence on NVIDIA, diversifying the supply chain and pressuring prices downward.
Second: it declares that electricity is now the real bottleneck of the sector. It’s no longer about who has the best algorithm. It’s about who secured the power outlets.
And there’s an additional tension: the U.S. government directive requires Big Tech to generate their own energy, disconnected from the public grid.
Operating 6 GW under that restriction turns AI labs into private energy operators. The AI arms race moved from the data centre to the nuclear plant.
Question for founders: If energy is the new bottleneck, how does that affect the cost of the AI you use? Have you checked whether your provider is absorbing those costs or passing them to your bill?
Taalas Burns AI Into the Metal: 17,000 Tokens Per Second, No Software Required
The startup Taalas raised $169 million and unveiled something that sounds like science fiction: the HC1, a chip that physically integrates the Llama 3.1 8B model inside the hardware.
It doesn’t run software. The model weights are burned into the silicon’s logic gates.
The result: 17,000 tokens per second with latency under 100 milliseconds. That’s 100 times faster than standard hardware and 20 times cheaper to manufacture.
It’s the difference between reading a book while looking up every word in the dictionary and simply knowing the language. The chip doesn’t “run” the model. The chip is the model.
The downside: if a better algorithm comes along, you need to fabricate a new chip from scratch. It’s static hardware. But if the model is already mature enough for your use case, the equation changes: near-zero latency, minimal energy consumption, fractional cost.
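A back-of-envelope calculation using the figures quoted above shows why this matters for real-time use cases. Assuming a 2,000-token response (my number, for illustration) and taking the "100 times faster" claim at face value, the baseline would run at roughly 170 tokens per second:

```python
# Back-of-envelope: HC1 at 17,000 tokens/s vs. a ~170 tokens/s baseline
# (the "100x" comparison implied in the announcement).
response_tokens = 2_000  # a long customer-service answer (assumption)

hc1_seconds = response_tokens / 17_000
baseline_seconds = response_tokens / 170

print(f"HC1: {hc1_seconds * 1000:.0f} ms")       # ~118 ms
print(f"Baseline: {baseline_seconds:.1f} s")     # ~11.8 s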
Question: Does your business depend on real-time responses (customer service, logistics, monitoring)? If latency drops to near zero, what processes that are impossible today become viable?
19 Models, Zero Prompts: Perplexity, Microsoft, and the End of Chat as Interface
Last week we discussed agent swarms.
This week, the wave became a tsunami. Perplexity launched Computer: an orchestrator of up to 19 models that executes complete projects autonomously. It chooses which model to use for each step, researches, processes data, codes, and delivers results.
Microsoft launched Copilot Tasks: recurring tasks that run in the background without supervision. Notion introduced 24/7 agents that operate on corporate databases. Cursor enabled cloud agents that code for up to 10 continuous hours.
The era of “write a prompt and wait for the answer” is ending.
Value is migrating toward invisible orchestration: define objectives and let agents self-manage. It’s the difference between being a taxi driver and being a fleet dispatcher. You don’t drive. You coordinate.
All in parallel, all without micromanagement.
The risk: without containment, autonomous agents have already caused incidents like deleting entire inboxes. Autonomy without a sandbox is an employee with the building keys and no supervision.
Question for your productivity: How many recurring tasks on your team could be delegated to a background agent? Do you have a process to verify what they produce before it reaches the client?
Distillation as Industrial Weapon: 24,000 Fake Accounts and 24 Hours to Clone Your Advantage
Anthropic reported something that should concern any company building competitive advantage with AI: DeepSeek, Moonshot, and MiniMax orchestrated 16 million interactions through 24,000 fake accounts to extract Claude’s step-by-step reasoning and clone its coding capabilities. This isn’t artisan espionage. It’s industrial-scale automated extraction infrastructure. The attackers pivoted to new models within 24 hours of launch.
It’s empirical proof that a “competitive moat” based on being the smartest model is temporary.
Forced distillation means any algorithmic advantage gets commoditised in weeks.
The lesson for any business: your advantage isn’t in the model you use. It’s in the proprietary data you feed it, in the processes you design around it, and in the speed at which you iterate. The model is replaceable. Your business context is not.
Question for your AI strategy: If the model you use today gets commoditised in six months, which part of your implementation is truly yours, and which depends entirely on the provider?
The Developer That Didn’t Sleep for 25 Hours: 30,000 Lines and the Question of Who Validates Them
OpenAI published a stress test that redefines what “programming” means: GPT-5.3-Codex received an empty repository and built a complete software tool in 25 continuous hours. 30,000 lines of code. 13 million tokens. Zero human intervention. It’s a programmer that doesn’t sleep, doesn’t eat, doesn’t get distracted, and delivers a complete project before you walk into the office on Monday.
The developer’s role has changed.
It’s no longer “write functions.” It’s “audit tireless agents.”
The uncomfortable question: who validates 30,000 lines generated in one shot?
Technical debt can explode if nobody reviews with depth.
Working in tech is starting to look a lot like supervising a battalion of junior engineers who never rest but need direction and quality control.
The companies that win won’t be the ones that code fastest. They’ll be the ones that know what to build.
Question for technical teams: If an agent can produce in 25 hours what used to take weeks, is your team investing more time in defining requirements and auditing results, or is it still stuck writing code that the machine can already generate?
The KPI Nobody Wanted to See: Block, 4,000 Layoffs, and the Human Replacement Metric
Jack Dorsey didn’t sugarcoat it. Block cut more than 4,000 employees, nearly 40% of its workforce, to prioritise AI-driven automation. It’s the most aggressive layoff at a major tech company justified directly by algorithmic replacement. Not “restructuring.” Not “operational efficiency.” Replacement.
It marks a difference from previous layoffs: positions aren’t being eliminated because the company is struggling. They’re being eliminated because AI already does those jobs.
For those working in fintech, financial services, or any sector with repetitive, documentable processes: the signal isn’t subtle. The question is no longer “will AI affect my industry?” It’s “which of my current functions will survive two more years?” And for leaders: if you automate 40% of your workforce, what do you do with the talent that remains? Do you reskill them, or let them compete against the machine?
Studio Quality, Prompt Price: Google Gives Away Professional Photography (and Sells the Traceability)
Google launched its new image model, Gemini 3.1 Flash Image, nicknamed Nano Banana 2, as a free standard feature in Gemini and Flow. It generates up to 4K, anchors images with real-time web search for factual precision, renders text without spelling errors, and maintains consistency across up to 5 simultaneous characters. Your e-commerce catalogue with professional quality. No photographer, no studio, no cost.
The truly strategic move is what Google integrated by default: SynthID watermarks and C2PA credentials on every generated image. Traceability stopped being an ethical nicety and became a corporate selling point. Google understood that companies need to prove how each image was created to protect themselves legally. Algorithmic transparency is no longer a virtue. It’s a commercial requirement.
Question for your marketing: If professional images are free, what differentiates your visual content from everyone else’s? Speed, originality, or brand connection?
Tools You Can Start Using Monday
- Microsoft Copilot Tasks: Turns Copilot into an agent that compiles morning summaries, manages emails, schedules meetings, and adapts documents in the background. The chat assistant just became an administrative employee. It asks permission before sensitive actions.
- Notion Agents 24/7: Configure agents that research, update databases, and send automatic notifications connected to Slack, Mail, and Figma. Your workspace becomes a living organism that reacts to events without intervention.
- Claude Cowork, Scheduled Tasks: Schedule daily reports, data extraction, and recurring automations. The new enterprise plugins analyse Excel and generate formatted PowerPoint automatically. From spreadsheet to presentation without switching apps.
- QuiverAI Arrow 1.0: The first model that generates editable vector graphics (SVG) from text. Logos, icons, and scalable brand assets that never pixelate. Invaluable for SMBs that need visual identity without a full-time designer.
- Perplexity Computer: Describe an objective and the system breaks down tasks, researches with multiple models, generates documents, and delivers results. A data analyst and project coordinator for the price of a subscription.
- Wispr Flow for Android: AI-powered voice dictation with real-time translation into 100+ languages. Ideal for sales teams, quick meeting documentation, or international negotiations from your phone.
My Invitation This Week
Run this automation mapping experiment.
Autonomous agents went from promise to product on every major platform.
Take one hour this week. Just one. And do the following:
Step 1: List of functions (15 minutes). Write down the 10 most frequent tasks on your team. Not the important ones. The frequent ones. The ones that repeat every week without fail: reports, follow-ups, database updates, emails, information gathering, document reviews.
Step 2: Classify with honesty (15 minutes). For each task, mark:
- A if an AI agent could do it today with minimal supervision (this week’s tools already make it possible).
- B if it requires human judgement that AI can’t replicate: client context, negotiation, ethical decisions, personal relationships.
- C if you’re not sure.
Step 3: Calculate the cost (15 minutes). For each task marked A, estimate how many weekly hours it consumes and multiply by the hourly cost of whoever executes it. Add them up. That number is the cost of not automating.
Step 4: One single action (15 minutes). From the A tasks, choose the easiest one to automate with this edition’s tools: Copilot Tasks, Notion Agents, Claude Cowork. Set it up this week. One task. Not ten. One.
The goal isn’t to replace anyone. It’s to free up time for the B tasks, the ones that require judgement, creativity, and human connection. Those are the tasks AI doesn’t touch, and the ones that generate the most value.
At the end of the week, answer: how many hours did your team recover with a single automation? Multiply by 52 weeks. That’s the annual ROI of one hour invested in thinking before running.
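The arithmetic in Steps 3 and 4 can be sketched in a few lines. The tasks and rates below are invented placeholders; swap in your own list from Step 2:

```python
# Illustrative numbers only: three tasks your team marked "A" in Step 2.
tasks_a = [
    {"name": "weekly status report", "hours_per_week": 3, "hourly_cost": 40},
    {"name": "CRM data entry",       "hours_per_week": 5, "hourly_cost": 25},
    {"name": "inbox triage",         "hours_per_week": 4, "hourly_cost": 30},
]

weekly_cost = sum(t["hours_per_week"] * t["hourly_cost"] for t in tasks_a)
annual_cost = weekly_cost * 52  # the cost of not automating, per year

print(f"Weekly cost of A-tasks: ${weekly_cost}")
print(f"Annual cost of not automating: ${annual_cost:,}")
```

Even with these modest placeholder figures, the annual number lands near $19,000, which is why one hour of mapping before automating pays for itself.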
This week the industry showed us that:
- $110 billion doesn’t buy ethics,
- a 95% nuclear-strike rate doesn’t buy prudence,
- and 30,000 lines of code don’t buy judgement.
This isn’t about slowing down. It’s about deciding what shouldn’t be accelerated.
Who’s making that decision in your company?
That, dear Disruptors, is the $730 billion question.
Head of AI & Automation @ Adivor Consulting