👋 Dear Dancing Queens and Super Troupers,

For a long time, we talked about artificial intelligence as something hazy, wedged between conference promises, overly polished demos, and abstract debates about “potential impact.”
For this first week of 2026, the mood has changed. AI has stopped raising its hand to ask for permission. It walked into the room, pulled up a chair… and started rearranging the furniture.

First, there’s the sharp crack of numbers. Two hundred thousand European banking jobs at risk by the end of the decade. Not because of a financial crisis or a market crash, but because algorithms have become good enough to swallow entire chunks of back-office work, compliance, analysis, and risk management.

This isn’t a futuristic prediction. It’s a strategic plan. AI no longer “could” transform work. It already is, line by line, department by department, budget by budget.

Meanwhile, in robotics labs, another boundary is giving way. Humanoid robots are starting to react like living organisms. Too much pressure, a dangerous surface, and the robot pulls its arm back before it has even “thought.”
Not because it understands pain, but because it has learned the mechanical equivalent of a reflex. Here, AI drops below the level of reasoning. It settles into reflex, into immediacy, into the body.

And then there’s another shift, even quieter. AI is leaving the screen. It slips into objects without interfaces, into cars already on the road, into discreet boxes or pocket-sized devices.
No need to open an app, type a query, or stare at a display. Intelligence becomes ambient, contextual, almost invisible. Less spectacular, infinitely more effective.

What ties all of these stories together isn’t the technology itself. It’s a change of status. AI is no longer an experimental tool or a chatty assistant. It’s becoming infrastructure. It reshapes skilled work, alters how machines interact with us, and seeps into objects we thought were already “finished.”

Here’s this week’s lineup:

👉 European banks: 200,000 jobs threatened by AI

👉 Robotic skin: humanoids that “feel” pain

👉 OpenAI wants to put ChatGPT… into a pen

👉 An AI brain you can plug into your car

👉 Manus: Meta’s bet on AI that works on its own

If someone forwarded this newsletter to you, subscribe by clicking this link!

If you have 1 minute

  • Europe’s major banks are preparing for massive downsizing. Back-office automation, compliance, risk analysis… AI does the job faster, cheaper, without lunch breaks. This is no longer a hypothesis, but a roadmap to 2030.

  • A new robotic “skin” allows humanoids to react instantly to dangerous contact without routing everything through a central brain. Like a human reflex. The result: safer, smoother robots, far more credible in human environments.

  • A pen-shaped device, discreet and always within reach, capable of interacting with AI without going through a smartphone. The goal is clear: pull AI out of the screen and make it permanent, contextual, almost invisible. Target launch: 2026–2027.

  • A plug-and-play module promises to add advanced AI capabilities to cars already on the road. Local computing, faster reactions, new assistance features. A way to accelerate onboard AI without waiting for the next generation of vehicles.

  • Meta wants more than chatbots. With Manus, it’s betting on autonomous AI agents that can act, plan, and execute complex tasks. A structural choice in the face of OpenAI and Google.

🔥 If you have 15 minutes

1️⃣ European banks: 200,000 jobs threatened by AI

The summary: Europe’s banking sector is bracing for a massive wave of automation. According to an analysis by Morgan Stanley, relayed by TechCrunch and the Financial Times, more than 200,000 jobs could disappear by 2030. The cause: the rapid adoption of artificial intelligence and the gradual closure of physical branches, all in the name of efficiency.

Details:

  • 200,000 positions at risk: Morgan Stanley estimates that 200,000 banking jobs could be cut in Europe by 2030, around 10 percent of the workforce across 35 major banks.

  • AI on the front line: Banks are betting on artificial intelligence to automate operations and reduce costs, with projected efficiency gains of 30 percent.

  • Most affected roles: Cuts will mainly hit support functions, risk management, and compliance, areas where algorithms analyze data faster than human teams.

  • Branches fading away: The gradual closure of physical branches is accelerating workforce reductions on the ground.

  • Goldman Sachs sets the tone: In the United States, Goldman Sachs announced job cuts and a hiring freeze until the end of 2025 as part of its AI plan dubbed “OneGS 3.0.”

  • ABN Amro goes hard: The Dutch bank plans to cut one-fifth of its workforce by 2028, reflecting the radical nature of some European strategies.

  • Société Générale without taboos: Its CEO says “nothing is sacred,” signaling that all functions are affected by the transformation.

  • A cautious voice: A JPMorgan Chase executive warns in the Financial Times that depriving young bankers of core skills could weaken the sector in the long run.

Why it’s important: These announcements mark a major social turning point. AI promises efficiency and profitability, but it is brutally reshaping banking employment. Behind the numbers, the transmission of financial expertise itself is under pressure.

2️⃣ Robotic skin: humanoids that “feel” pain

The summary: In China, researchers have just developed a neuromorphic electronic skin capable of detecting touch, injuries, and even pain. Inspired by the human nervous system, this technology allows robots to react instantly to dangerous contact, without waiting for analysis by a central processor.

Details:

  • A long-standing limitation overcome: Until now, humanoid robots relied on centralized processing. Sensor signals had to be analyzed before any reaction, creating delays that could sometimes be critical.

  • The human reflex as a model: In humans, touching a hot object triggers an almost immediate withdrawal via the spinal cord. This electronic skin reproduces that reflex mechanism.

  • A skin that understands danger: Unlike conventional sensors, this neuromorphic e-skin not only detects contact but also assesses whether the interaction poses a real risk.

  • A neuromorphic architecture: The technology is based on a hierarchical organization inspired by neurons. Tactile signals are converted into electrical impulses similar to those of human nerves.

  • Four clearly defined layers: The outer layer plays a protective role, like the epidermis. Beneath it, sensors and circuits continuously monitor pressure, force, and surface integrity.

  • Permanent self-diagnosis: Even without contact, the skin regularly sends electrical impulses to the central processor, confirming that everything is functioning normally.

  • Immediate injury detection: In the event of a cut or damage, the impulses stop. The robot can then precisely locate the affected area.

  • Reflexes without going through the “brain”: If pressure exceeds a critical threshold, a high-voltage signal is sent directly to the motors. The robot instantly pulls back an arm or a hand.

  • Safety and empathy: Researchers say this design improves touch sensitivity, safety, and intuitive human–robot interaction, particularly for empathetic service robots.
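The reflex logic described above can be sketched as a simple decision loop. This is a hypothetical illustration, not the researchers’ actual implementation: the function name, threshold value, and return labels are all invented for the example. The key idea is that a dangerous pressure reading triggers withdrawal directly, bypassing the central processor, while the disappearance of the self-diagnosis heartbeat flags damage.

```python
# Hypothetical sketch of a reflex-first tactile loop. Names and the
# threshold value are illustrative, not taken from the published design.

PAIN_THRESHOLD = 50.0  # pressure level above which the reflex fires


def skin_event(pressure, heartbeat_ok):
    """Decide how to react to one tactile reading.

    Returns 'reflex'  -> withdraw immediately, bypassing the planner,
            'damage'  -> heartbeat impulses stopped: locate the injury,
            'forward' -> normal contact, route to central processing.
    """
    if pressure > PAIN_THRESHOLD:
        return "reflex"   # direct signal to the motors, like a spinal reflex
    if not heartbeat_ok:
        return "damage"   # self-diagnosis impulses stopped: the skin is cut
    return "forward"      # no urgency: let the central brain analyze it


# Gentle touch goes to the brain; a crushing force triggers the reflex.
print(skin_event(pressure=5.0, heartbeat_ok=True))   # forward
print(skin_event(pressure=80.0, heartbeat_ok=True))  # reflex
print(skin_event(pressure=5.0, heartbeat_ok=False))  # damage
```

Note that the reflex check comes first: even a damaged patch of skin should still trigger withdrawal when pressure is critical, which mirrors the hierarchy the researchers describe.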

Why it’s important: By giving robots a form of “pain,” this electronic skin brings humanoids closer to human reflexes. It reduces accident risks, improves coexistence with humans, and paves the way for robots truly suited to homes, hospitals, and public spaces. Robotics stops being rigid and becomes instinctive.

3️⃣ OpenAI wants to put ChatGPT… into a pen

The summary: OpenAI is preparing its first consumer device, which will take the form of a smart pen. According to Wccftech, the device, designed with former Apple design chief Jony Ive, aims to establish AI as a new central tool of daily life, beyond the smartphone. The smart pen, codenamed “Gumdrop,” is expected between 2026 and 2027.

Details:

  • A deliberate hardware move: OpenAI is collaborating with Jony Ive on several consumer devices, with the explicit goal of challenging the iPhone through AI-assisted productivity.

  • Minimalist design: The device adopts a form factor close to an iPod Shuffle. Compact, it fits in a pocket or can be worn around the neck, with no built-in screen.

  • Sensors on the front line: Cameras and microphones enable advanced contextual perception to better understand the user’s environment.

  • Local AI and hybrid cloud: OpenAI’s customized AI models will run directly on the device, with cloud support for more demanding tasks.

  • Connected handwriting and inter-device communication: Handwritten notes will be converted into digital text and sent instantly to ChatGPT. The pen will be able to communicate with other devices, similar to how smartphones interact today.

  • Production under geopolitical pressure: Initially planned for Luxshare in China, manufacturing could ultimately be handled by Foxconn in Vietnam or the United States, for a launch between 2026 and 2027.

Why it’s important: With this AI pen, OpenAI is making its first move into consumer hardware. It’s a strategic pivot aimed at pulling AI out of screens and anchoring it in everyday gestures, while directly challenging the dominance of the smartphone.

4️⃣ An AI brain you can plug into your car

The summary: At CES 2026, BOS Semiconductors will unveil a plug-in artificial intelligence module designed to modernize cars without touching their internal architecture. Called AI Box, the solution promises to bring advanced AI capabilities to both existing vehicles and new models. A pragmatic approach, presented in Las Vegas from January 6 to 9, 2026, aimed at accelerating AI adoption in mobility.

Details:

  • An external module, zero surgery: The AI Box connects directly to existing onboard electronics. It avoids replacing infotainment systems or core vehicle platforms.

  • BOS Semiconductors at the helm: The South Korean company, based in Seongnam, specializes in semiconductors dedicated to physical AI and automotive applications.

  • An accelerator for automotive AI: The module supports autonomous driving, software-defined vehicles (SDVs), and the rise of physical AI applied to mobility.

  • CNNs, transformers, and real-time decisions: The AI Box supports models based on convolutional neural networks and transformers, enabling fast decision-making in the real world.

  • Onboard AI, protected data: Voice and video are processed directly inside the vehicle to enhance privacy and security, without relying on the cloud.

  • A stated ambition: Jason Jeongseok Chae, vice president of BOS Semiconductors, says he wants to move beyond the role of automotive supplier to become a central player in physical AI.

Why it’s important: With the AI Box, automotive AI becomes modular and accessible. This approach could turn millions of existing cars into intelligent vehicles, without waiting for a new generation of models. A small box, but a major shortcut toward the software-defined car.

5️⃣ Manus: Meta’s bet on AI that works on its own

The summary: Meta is acquiring Manus, a Chinese AI startup based in Singapore. Valued at over $2 billion, the deal strengthens Meta AI with near-autonomous agents. Mark Zuckerberg continues his personal AI strategy through heavy investment and talent acquisitions.

Details:

  • A strategic acquisition: Meta confirms the purchase of Manus, valued at more than $2 billion, to accelerate the development of intelligent agents integrated into its consumer and professional products, including Meta AI.

  • Truly autonomous agents: Manus claims an agent capable of planning, executing, and completing tasks without repeated back-and-forth, unlike traditional chatbots that require multiple prompts.

  • A vision aligned with Zuckerberg: Barton Crockett, an analyst at Rosenblatt Securities, describes the move as natural and consistent with Mark Zuckerberg’s vision of agent-driven personal AI.

  • A clear mission: Manus says it aims to “extend human reach” by assisting work rather than replacing it, a philosophy Meta says it intends to preserve.

  • Continuity and governance: CEO and cofounder Xiao Hong says Manus will retain its internal operations and decision-making, while continuing to run and sell its service under Meta’s umbrella.

  • A broader AI offensive: In June, Meta had already invested $14 billion to acquire a 49 percent stake in Scale AI and recruit its leadership, signaling an escalation against players like OpenAI.

Why it’s important: Meta is strengthening its position in the global AI race by betting on autonomous agents, a technological gamble that could reshape how users delegate everyday digital tasks.

❤️ Tool of the Week: ChatGPT sums up your year

ChatGPT is getting into year-end reviews. OpenAI is rolling out a personalized retrospective, Spotify Wrapped–style, that turns your conversation history into a playful, visual, and surprisingly introspective experience.

What is it for?

  • Looking back at how you used ChatGPT over the year: dominant themes, types of requests, habits.

  • Discovering personalized “rewards” based on your usage, like “Creative Debugger” or “2 a.m. Compulsive Thinker.”

  • Automatically generating a poem and an image that sum up your AI year, based on your interests.

  • Becoming aware of the real place AI has taken in your daily life, your work, your creativity.

Why it’s clever

This isn’t just another gimmick. It’s a very smart way to normalize the relationship with AI, to make it personal, almost intimate, without forcing usage. OpenAI turns a log history into an emotional object, while reminding us that AI is no longer just an occasional tool, but a regular thinking companion.

How to use it?

  • Have chat history and memory enabled.

  • Be on a Free, Plus, or Pro account (excluding Team, Enterprise, Education).

  • Wait for it to appear on the home screen… or simply ask ChatGPT:
    “Your Year with ChatGPT”

A Wrapped that’s not about music, but about what you search for, doubt, and build. And that might be even more revealing.

💙 Video of the Week: When training a humanoid robot becomes… painful

A viral video shows a human operator training Unitree’s G1 humanoid robot via teleoperation. The principle is simple: the human performs movements, and the robot imitates them in real time. Except this time, the exercise includes martial-arts kicks… and the robot copies everything. Absolutely everything.

Result: one poorly anticipated kick, the robot reproduces the move exactly, and the operator ends up on the ground, doubled over in pain, while the robot collapses as well. Perfect synchronization. Terrible timing.

Beyond the viral and slightly cruel aspect of the scene, the video says something very serious. Humanoids are now fast, precise, and powerful enough that their physical training is becoming risky. Teleoperation, used to feed imitation learning, is no longer a harmless game when machines gain strength and autonomy.

This sequence marks a discreet but important turning point: we’re entering a phase where training a robot requires the same precautions as working with heavy industrial machinery. Humanoids are no longer fragile or slow. They learn fast… and they strike true.

A video that makes you wince-smile, but reminds us of one essential thing: as robots become more human in their movements, they also inherit their potential for danger.

An AI that’s always there, no screen involved. Would you sign up?

