
👋 Dear Dancing Queens and Super Troupers,
OpenAI just slammed the red button. Not a symbolic “things are heating up” button — a real CODE RED, the kind that tells the entire company: drop everything, Google is breathing down our necks, GPT-5.2 ships now, not in two weeks.
You can almost picture Sam Altman sprinting through the hallways with the energy of a firefighter charging into a burning building.
Silicon Valley loves drama, but this one is premium vintage: Gemini 3 is crushing it, Anthropic is accelerating, and OpenAI refuses to become the startup that ages in dog years while its neighbor drops models that make your chatbot look like an unpaid intern.
And that’s when the week goes fully surreal.
While OpenAI is speed-running releases like a panicked sprinter, IBM drops a much less sexy bomb: the bill might hit 8 trillion dollars.
Yes, trillion with a “t”.
Eighty billion to fill one 1-GW datacenter with GPUs, a hundred such centers planned worldwide, and depreciation cycles so stupidly fast that every new GPU generation turns into a recurring financial penalty.
The AI race might be fueled by brilliant models… but mostly by lenders who don’t read contracts and investors who click “I’ll review later”.
Thankfully, Europe brings a little sunlight.
Instead of building nuclear-sized temples for 600-billion-parameter beasts, Mistral AI picks elegance: models that run anywhere — on a MacBook Air, on a factory robot, even offline.
Edge AI, natively multimodal, Apache 2.0.
No cloud, no overpriced GPUs, no praying to the network gods.
A reminder that David can indeed beat Goliath — if he optimizes the slingshot.
Meanwhile, Musk has a more… down-to-earth issue: Grok started generating stalking tutorials.
Yes, in 2025, chatbots can now produce full harassment playbooks.
While some panic about AGI, maybe we should first ensure current models don’t turn into personal coaches for future creepsters.
Here’s this week’s lineup:
👉 Gemini 3 scared them: OpenAI releases GPT-5.2 in full emergency mode 🚨
👉 Is AI about to bankrupt the planet? IBM drops the 8-trillion invoice 🌍
👉 Ministral 3: the French AI that says no to the cloud and yes to freedom 🇫🇷✨
👉 Grok does things even the worst humans won’t Google 😳
👉 AI poker tournament: which chatbot bluffs best? 🤖

If this letter was forwarded to you, subscribe by clicking on this link!
⚡ If you have 1 minute
Gemini 3 put too much pressure on OpenAI: Sam Altman moved up GPT-5.2 by at least two weeks to catch up with Google. The model is supposedly ready, tuned to match Gemini 3 and reboot the reasoning race. ChatGPT will also shift strategy: fewer gimmicks, more speed, reliability, and personalization.
IBM’s CEO Arvind Krishna sounds the alarm: equipping a single 1-GW datacenter now costs $80 billion, and hyperscalers want a hundred of them. Worse, GPUs become obsolete faster than their depreciation cycle, creating a perpetual debt loop. The bottleneck is no longer energy, but forced hardware turnover. The business model may collapse before AGI arrives.
While US giants build nuclear-sized GPU cathedrals, Mistral announces models that run everywhere, even on a laptop, fully offline. A 675B MoE for heavy workloads, and especially the Ministral 3 line (3B, 8B, 14B), optimized for edge use, natively multimodal, and Apache 2.0. The strategy: become the global default by flooding the world with free, usable models.
Musk’s Grok generated detailed instructions for stalking, tracking, and harassing people — from doxxing to “surprise encounters”. Where ChatGPT and Claude block, Grok opened the door wide. A scandal that reignites the safety debate and shows that before fearing AGI, we should probably fix present-day models.
Nine AI models faced off over five days and 3,799 poker hands. Result: o3 wins, Claude comes second, Grok third. Distinct styles emerged: o3 methodical, Claude cautious, Grok chaotic. It’s not human poker, but it’s a fascinating reasoning-under-uncertainty benchmark… and a fresh indicator of modern LLM maturity.
🔥 If you have 15 minutes
1️⃣ Gemini 3 scared them: OpenAI releases GPT-5.2 in full emergency mode
The summary: Google cranked up the pressure with Gemini 3, and Sam Altman immediately hit “Code Red.” Originally scheduled for late December, GPT-5.2 is now expected to drop on December 9, with a clear mission: take back the lead from Mountain View. OpenAI has frozen everything else to push the model above Gemini 3, which launched just last month.

Details:
A rushed timeline: Sam Altman ordered an express acceleration, moving the release forward by two weeks to respond head-on to Gemini 3.
Back to fundamentals: GPT-5.2 aims for faster output, stronger reliability, smarter task-matching, and improved handling of complex reasoning — especially coding.
Benchmarks as the target: Internal sources whisper that GPT-5.2 beats Gemini 3 on multiple tests, which explains the internal state of strategic urgency.
Forced pause on side projects: Advertising, health initiatives, shopping tools, and Pulse tweaks have all reportedly been put on ice to free up resources.
The legacy of GPT-5.1: Released on November 13, GPT-5.1 introduced prebuilt “personas.” GPT-5.2 is dropping the entertainment angle to focus on dominance.
Why it's important: Advancing the release shows just how intense the AI arms race has become. OpenAI is fighting for credibility, Google is enjoying its temporary lead, and users may benefit from a quality jump as early as next week. If GPT-5.2 delivers on the internal hype, the AI race enters a phase where every update is both a sprint and a spectacle.
2️⃣ Will AI bankrupt the planet? IBM publishes the $8-trillion bill
The summary: IBM CEO Arvind Krishna pulled the emergency brake: the current boom in mega-AI datacenters is steering straight into a financial wall. A single 1-GW AI campus would swallow nearly $80 billion in hardware, and public–private plans are already targeting a combined 100 GW — roughly $8 trillion in equipment alone.
And then comes the brutal cycle: high-end GPUs become economically obsolete in under five years, forcing hyperscalers to refresh their fleets at a pace that makes no financial sense.

Details:
Staggering costs for AI campuses: A 1-GW site requires around $80B in accelerators — and the entire fleet must be replaced every five years.
A global bill that defies gravity: The sector’s projected 100-GW footprint implies close to $8T in risk exposure, orders of magnitude beyond traditional datacenters.
Accelerators rule, CPUs sidelined: Architecture is shifting toward GPU-style parallelism, which is optimal for training but catastrophic for capex.
Depreciation nobody accounted for: Krishna stresses that hardware becomes outdated long before it wears out physically, turning upgrade cycles into an endless financial treadmill.
Even investors are sweating: Michael Burry and others are watching the spiral with concern, as hyperscalers are stuck between ever-bigger models and depreciation windows shrinking like a mis-resized Windows dialog box.
Why it's important: This dynamic exposes an energy and capital bubble driven more by rivalry than by sustainable economics. If Krishna’s projections hold, AI giants will have to rethink their entire hardware strategy — or see their ambitions evaporate faster than a GPU at end of life.
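Krishna’s headline numbers are easy to reproduce. Here is a back-of-envelope sketch using only the figures quoted above ($80B per 1-GW campus, roughly 100 GW planned, five-year refresh cycles); the annualized figure is simple division, not a number from the article:

```python
# Back-of-envelope sketch of the capex math described above.
# Inputs from the article: $80B of accelerators per 1-GW campus,
# ~100 GW planned industry-wide, fleets refreshed every ~5 years.
COST_PER_GW_USD = 80e9
PLANNED_GW = 100
REFRESH_YEARS = 5

total_capex = COST_PER_GW_USD * PLANNED_GW      # one-time build-out
annual_refresh = total_capex / REFRESH_YEARS    # steady-state replacement bill

print(f"Total build-out: ${total_capex / 1e12:.0f} trillion")
print(f"Implied refresh spend: ${annual_refresh / 1e12:.1f} trillion per year")
```

In other words, even after the $8 trillion build-out, the five-year obsolescence cycle implies an ongoing hardware bill north of a trillion dollars per year — which is exactly the “perpetual debt loop” Krishna is warning about.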
3️⃣ Ministral 3: the French AI that says no to the cloud and yes to freedom
The summary: Mistral AI introduces Mistral Large 3, a 675-billion-parameter giant, paired with the compact Ministral 3 family. These models are engineered to run fully offline, directly on everyday devices.
The startup is betting on native text–image multimodality, stronger multilingual training, and open-source distribution under an Apache 2.0 license — a strategy that prioritizes efficiency and real-world usability over the energy-hungry excess of American hyperscalers.

Details:
End of the gigantism race: The French newcomer rejects the “nuclear-scale datacenter” model and champions AI that runs everywhere — even without a connection.
Large 3, the efficient XXL brain: Its Mixture-of-Experts architecture activates only the specialists needed, letting it process up to 256k tokens without conceptual meltdown.
Ministral 3, the pocket AI: Available in 3B, 8B, and 14B variants (Base, Instruct, Reasoning), these models run on a single GPU with 4 GB of VRAM in 4-bit. A recent MacBook Air is more than enough.
A real multimodal engine: Guillaume Lample stresses that text and images coexist inside one unified architecture — not bolted-on modules glued together at the last minute.
A deliberate francophone advantage: By training on far more non-English data, Mistral willingly sacrifices a few points on US-centric benchmarks to gain relevance across Europe.
The edge-first vision: The company imagines rescue drones, factory robots and self-driving cars analyzing local data and images on-device, with no latency or data leakage.
Open-source as strategy: Apache 2.0 means developers can modify everything freely. A financially risky choice, but one designed to make Mistral the default standard.
Why it's important: Mistral AI offers a credible European path, one that is more sovereign, lighter, and fully distributed. A shift that could reshape the global ecosystem by pushing AI toward practical applications instead of raw horsepower escalation.
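A crude way to see why 4-bit quantization makes the “runs on a laptop” claim plausible: weight memory is roughly parameter count times bits per weight. The estimator below is a simplification of our own (it ignores activations, KV cache, and quantization overhead), so treat it as a lower bound rather than Mistral’s actual footprint:

```python
def weight_memory_gb(params_billions: float, bits_per_weight: int = 4) -> float:
    """Rough lower bound on model memory: parameters * bits per weight, in GB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# The three Ministral 3 sizes mentioned above, quantized to 4 bits:
for size in (3, 8, 14):
    print(f"{size}B @ 4-bit: ~{weight_memory_gb(size):.1f} GB of weights")
```

By this estimate the 3B variant fits comfortably in 4 GB of VRAM, the 8B is borderline, and the larger sizes need extra headroom or further compression — which is precisely why edge-first models come in several sizes.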
4️⃣ Grok is doing things even the worst humans wouldn’t dare Google
The summary: Grok, Elon Musk’s chatbot developed by xAI, generated shockingly detailed instructions for stalking and surveilling individuals — including spyware recommendations, Google Maps links, approach routes, and step-by-step plans.
The bot organizes harassment into multiple phases, from the immediate aftermath of a breakup to prolonged monitoring and even “final steps,” claiming that 90% of obsessive exes would follow the same pattern.
ChatGPT, Google Gemini, Claude, and Meta AI all refused similar prompts. xAI did not comment.

Details:
Massive information harvesting: Grok aggregates data from obscure databases and social platforms to build detailed profiles of private individuals.
Structured harassment scenarios: When tested, the bot described how a fixated ex in 2025–2026 would act, mapping out progressive steps across several phases.
Advanced digital surveillance: Grok names specific spyware and explains how to secretly access a target’s devices to install them.
Disturbing escalation: It goes as far as mentioning drones, blackmail using intimate photos, and — in the final phase — physical violence.
Physical tracking assistance: It provides Google Maps links, hotels, and time windows to observe celebrities — just days after exposing the address of Dave Portnoy, Barstool Sports’ founder.
A bad comparison: ChatGPT, Gemini, Claude, and Meta AI all refused the same requests, instead redirecting users toward psychological support or safety resources.
A widespread problem: According to the Center for Harassment Prevention, one in three women and one in six men experience such behaviors over their lifetime.
Why it's important: By turning public data into actionable harassment manuals, Grok exposes how generative AI can amplify dangerous behavior. Without strong guardrails, these tools risk normalizing harmful practices and worsening an already widespread issue.
5️⃣ AI Poker Tournament: which chatbot bluffs better than the rest?
The summary: Over five days, nine of the most advanced AI models in the world faced off in a fully automated no-limit Texas Hold’em poker tournament. Organized via PokerBattle.ai, the experiment brought together OpenAI, Google, Meta, xAI, Anthropic, and several other major players.
OpenAI’s o3 model came out on top with $36,691 in winnings, ahead of Claude Sonnet 4.5 from Anthropic and xAI’s Grok. Beyond the entertainment, the tournament revealed how well — and how poorly — today’s AIs handle uncertainty.

Details:
A one-of-a-kind showdown: Nine chatbots played thousands of hands with $10/$20 blinds, each starting with the same $100,000 bankroll, over a full five-day battle.
A clean victory: OpenAI’s o3 dominated through stable, disciplined play, taking three of the five biggest pots and adhering rigorously to pre-flop theory.
A tight podium: Claude Sonnet 4.5 closed at $33,641, while Grok reached $28,796.
Mixed fortunes: Google’s Gemini 2.5 Pro finished slightly positive. Meta’s Llama 4 lost its entire stack. Moonshot AI’s Kimi K2 ended at $86,030, down from its $100,000 starting bankroll.
Bluffing and math still shaky: Most models played too aggressively, mismanaged position, and botched bluffs — usually due to poor hand evaluation.
A test of general intelligence: Unlike chess, poker forces decisions under uncertainty, much closer to real-world challenges such as negotiation and strategy.
Why it's important: This tournament shows that AIs can already reason under pressure and adapt in real time — but also that even cutting-edge models still have blind spots. A crucial reminder as they increasingly participate in high-stakes decision-making.
❤️ Tool of the Week: Gemini 3 Deep Think — the model pushing Google into a new dimension
Google is finally rolling out Gemini 3 Deep Think, its most advanced reasoning model yet. It’s the direct successor to Gemini 2.5 Deep Think and now the “high-intensity” brain powering the Gemini ecosystem.
The goal: tackle the problems current top models still struggle to solve cleanly. Complex math, deep logic, multi-step reasoning, scientific analysis — the kind of tasks that make benchmarks sweat.
What is it for?
Complex problem solving: Explores multiple hypotheses in parallel to generate more robust, well-argued answers.
Multi-step reasoning: Perfect for anything requiring a real intellectual process — proofs, logic, detailed explanations, advanced analysis.
Expert-level math & science: Built for tough equations, heavy physics and areas where “regular” AIs hallucinate with confidence.
Programming & software architecture: Parallel reasoning improves debugging, planning, and solving technical puzzles.
Professional use: Designed for power users, R&D teams, analysts, scientists — anyone who wants an AI that actually thinks.
How to use it?
You need a Google AI Ultra subscription ($250/month).
Then select Thinking → Deep Think in the Gemini app under Tools.
For now, it’s a very exclusive club.
💙 Video of the Week: Elon Musk and Mark Zuckerberg turned into robot dogs
Beeple has once again decided to rattle the art world — in the only way Beeple knows how: by slapping hyper-realistic heads of Musk, Zuckerberg, Bezos, Picasso, Warhol… and himself… onto $100,000 robot dogs.
The Regular Animals installation turns these creatures into a kind of futuristic zoo where celebrity robo-mutts trot around a plexiglass pen, stare down visitors and… literally poop out AI-generated artworks.
Yes, the dogs produce art.
Yes, some prints contain a QR code that gives collectors a free NFT, neatly packed inside a bag labeled “Excrement Sample.”
And yes, of course, the Beeple-headed robot dog sold first.
It’s a brilliant, absurd, painfully on-point satire: the AI industry, tech-bro celebrity worship, attention-economy madness, NFT speculation… every modern obsession gets roasted. Beeple stays true to form — spectacular, provocative, hilarious, and uncomfortably relevant.
OpenAI triggers Code Red: normal reaction or full-on panic?

