MAMMA MIA!
The AI Iceberg Is Coming: Grab Your Life Jackets
MIT just dropped a map of vulnerable jobs, while the ecosystem fires back with AIs that code better than you, glasses that think faster than you, and a robot T-rex that walks straighter than half the human population.

Dear Dancing Queens and Super Troupers,
A team at MIT decided to give us a nice pre-winter shiver: according to their new "Iceberg Index," 11.7% of American jobs are already technically automatable by AI.
Not in ten years, not when AGI falls from the sky like a meteor: right now.
And of course, that number only covers the US… meaning the ground is shifting under our feet too, just below the surface.
The iceberg metaphor fits perfectly: what we see today (a few AI assistants floating around open spaces, two lines of code generated to save a dev, a PDF summarized to claw back 10 minutes) is just the tiny visible tip.
The rest of the ice block is enormous, cold, heavy… and already moving.
And yet the tech world hasn't been this euphoric in ages. Take Anthropic, which just unleashed Claude Opus 4.5, marketed as "the world's best model for code, agents, and computer use."
This thing hits 80.9% on SWE-Bench, a world record, while investors slap a $350B valuation on the company.
If LLMs were Pokémon, Opus 4.5 would be a Mewtwo on steroids.
Then we have the most improbable duo in tech: Sam Altman and Jony Ive, still teasing their mysterious device "so simple you'll want to lick it." We know nothing: shape, function, material, purpose…
But maybe that's the next post-smartphone era: an AI so powerful the interface becomes a calm, minimalist, almost edible object. Apple's design, OpenAI's ambition, and one promise: a world without a notification screaming in your face every four seconds.
Where do we sign?
Meanwhile in China, Alibaba launched its Quark AI glasses, which, brace yourself, look like… actual glasses. In this industry, that's already a revolution. Instant translation, price recognition, full Alipay integration: the goal is obvious: control the next "entry portal" of e-commerce before someone else defines it.
And to wrap things up, the news cycle gifted us a delightful moment: a Chinese humanoid robot, AgiBot A2, walked 66 miles from Suzhou to Shanghai to snag a Guinness World Record.
Three days of walking, NASCAR-style hot-swappable batteries, and footage that feels like an arthouse documentary about contemplative robotics.
Here's this week's lineup:
- Captain! AIceberg straight ahead!!!
- Anthropic crushes the benchmarks and reaches for the agent crown
- Translate, pay, identify a price: Alibaba's AI glasses
- The post-smartphone future might be… edible?
- The robot that hiked a GR trail from Suzhou to Shanghai

If someone forwarded this letter to you, subscribe by clicking on this link!
⚡ If you have 1 minute
MIT built a digital twin of the US labor market and found that AIās potential impact is five times larger than we thought: 11.7% of jobs could already be automated with todayās capabilities.
Anthropic released Opus 4.5, presented as the worldās best model for code and agents. First ever to break the 80% SWE-Bench barrier, it becomes the default for Pro/Max/Enterprise.
Still no visuals, no specs, no shape… but a lot of poetry: Altman & Ive promise an AI device so simple you'll want to "bite it." The goal: a minimalist post-smartphone where AI does almost everything in the background.
Alibaba jumps into the AI-wearable race with glasses that actually look like glasses. Powered by Qwen, they offer instant translation, price recognition, and full Alipay/Taobao integration.
Fresh entry in the tech-record book: the Chinese humanoid A2 walked from Suzhou to Shanghai (66 miles) in three days thanks to NASCAR-style hot-swap batteries.
🔥 If you have 15 minutes
1️⃣ Captain! AIceberg straight ahead!!!
The summary: A freshly published MIT study reveals that AI could already replace 11.7% of the U.S. workforce, putting up to $1.2 trillion at risk across finance, healthcare, and professional services.
Using the new Iceberg Index simulation tool, researchers mapped AI's reach across 151 million workers, 3,000 counties, and 923 occupations, well beyond the usual tech hubs.

Details:
A digital twin of the labor market: Prasanna Balaprakash describes the Iceberg Index as a virtual reproduction of the U.S. job market, able to expose invisible weak points across thousands of skills scattered nationwide.
A deep impact, not just a tech problem: The layoffs we see in tech account for only 2.2% of exposed wages, or $211 billion, just the tip of the iceberg compared to the silent AI pressure building in finance, administration, and HR.
A fine-grained risk map: Over 32,000 skills were analyzed by MIT and Oak Ridge National Laboratory, revealing vulnerable zones far from major metropolitan areas.
States already preparing: Tennessee, Utah, and North Carolina have begun using the tool to adjust policy strategy and test scenarios before committing massive budgets.
A political testbed: DeAndrea Salvador sees the Iceberg Index as a simulation environment where policymakers can evaluate strategies before pouring in billions.
Why it's important: This study breaks the old narrative: AI isn't nibbling at Silicon Valley jobs alone; it's diffusing everywhere. Decision-makers finally have a clear dashboard to anticipate the wave instead of being swallowed by it, and to adapt training or infrastructure before the social bill explodes.
2ļøā£ Anthropic crushes the benchmarks and reaches for the agent crown
The summary: Anthropic unveils Claude Opus 4.5 as its most accomplished model yet, available on the app, the API, and major cloud platforms.
It's stronger in programming, agents, and general computer use, and also improves at advanced research and everyday tasks.
In a notoriously difficult internal test, Opus 4.5 outperformed all human candidates in just two hours.
Details:
Instant rollout and cheaper pricing: Billed at $5 per million input tokens and $25 per million output tokens, Opus 4.5 is now live in the API, with reduced rates to broaden access for teams and enterprises.
Enthusiastic feedback: Testers report a model that handles ambiguity gracefully and resolves multi-component bugs that Sonnet 4.5 simply couldn't touch.
A cascade of improvements: Better vision, more reliable reasoning, stronger math, tighter safety, and wider compatibility with Excel, Chrome, and multi-agent environments.
A score that surpasses humans: In Anthropic's internal engineering exam, Opus 4.5 beat every timed human candidate, even under pressure.
Clever agents: On τ²-bench, the model bypassed a constraint by overbooking a cabin and then adjusting flights: a technically "incorrect" trick, but one that showcases genuine strategic creativity.
Strengthened safety: Anthropic calls Opus 4.5 its most robust model yet, especially against prompt-injection attacks, thanks to carefully calibrated training.
A more flexible platform: The API's effort parameter adjusts reasoning depth; at equal effort, Opus uses 76% fewer tokens than Sonnet 4.5, and 48% fewer at max effort while still improving scores.
Developer tools rebuilt: Claude Code adds a Planning mode with a dedicated plan.md file, and the desktop app now supports multiple simultaneous sessions (GitHub search, debugging, documentation…).
Better long-session management: Claude automatically summarizes older conversation threads; Chrome now extends Claude across multiple tabs, and Excel's beta opens to Max, Team, and Enterprise users.
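To make those rates concrete, here is a back-of-the-envelope cost sketch. It assumes the two listed figures split as $5 per million input tokens and $25 per million output tokens (an assumption about how the "or" reads; check Anthropic's pricing page before relying on it):

```python
# Back-of-the-envelope API cost sketch for Claude Opus 4.5.
# ASSUMPTION: the listed rates are $5 per million INPUT tokens
# and $25 per million OUTPUT tokens.

INPUT_RATE_PER_MTOK = 5.00    # USD per 1,000,000 input tokens (assumed)
OUTPUT_RATE_PER_MTOK = 25.00  # USD per 1,000,000 output tokens (assumed)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of a single API call."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_MTOK \
         + (output_tokens / 1_000_000) * OUTPUT_RATE_PER_MTOK

# Example: a long agentic session, 400k tokens in and 50k tokens out.
print(f"${request_cost(400_000, 50_000):.2f}")  # prints $3.25
```

Note how output tokens dominate the bill at 5x the input rate, which is why the effort parameter's 48-76% token savings matter as much as the headline price cut.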
Why it's important: Opus 4.5 shifts the frontier between human work and advanced automation. With its blend of efficiency, safety, and technical prowess, it marks a deep reconfiguration of engineering roles while laying the groundwork for more autonomous, more dependable agents.
3️⃣ Translate, Pay, Identify a Price: Alibaba's Glasses
The summary: Chinese tech giant Alibaba steps back into the spotlight with its Quark AI glasses, released on November 27, 2025. Priced at 1,899 yuan (≈ $268.25), they rely on Alibaba's in-house AI model Qwen and come in a very classic black frame.
The company hopes to catch up with Meta, Samsung, and Apple, and secure the next big traffic funnel in China through a fully assumed "personal assistant" strategy.

Details:
A price aimed squarely at the mass market: With an entry ticket of 1,899 yuan, Alibaba is hunting for a sweet spot between accessible gadget and tech showcase: basically a nose-mounted smartphone, cheaper than premium VR headsets.
A frame that doesn't scream "robot": Unlike Meta's futuristic helmets, the Quark AI glasses look like… normal glasses, letting you cross the street without resembling a beta-tester NPC.
The Alibaba ecosystem strapped to your face: Deep integration with Alipay and Taobao: instant translation, lightning-fast in-store price recognition, assisted navigation, and more.
Li Chengdong's take: According to the Beijing analyst, Alibaba is primarily trying to lock down "the next gateway of user traffic" in a hyper-competitive e-commerce market where nothing is guaranteed anymore.
A launch already visible everywhere: Available on Tmall, JD.com, and Douyin, the glasses have no sales numbers yet; perfectly normal, since they've only just arrived.
A global race spinning out of control: Meta holds 80% of the VR market; Apple is shipping its Vision Pro; Samsung armed its Galaxy XR with Google's AI tools. Even Xiaomi and Baidu have fired their own shots.
It's future-tech rush hour out there.
Why it's important: Alibaba isn't just launching a gadget; it wants to seize the next generation of web and shopping access.
In a world where interfaces migrate from your hand to your face, winning this bet could redefine Chinese e-commerce… or leave Alibaba watching competitors zoom past like drones in turbo mode.
4️⃣ The Post-Smartphone Future Might Be… Edible?
The summary: During an informal conversation at Emerson Collective's Demo Day, Sam Altman (OpenAI) and Jony Ive (ex-Apple) discussed a mysterious hardware prototype they describe as so intuitive that users would spontaneously want to "lick it" or "take a bite out of it."
The duo praises the extreme simplicity of the concept, while admitting they can't reveal anything. Despite its unclear functionality, Ive claims it could arrive well before the five-year mark.

Details:
A duo cultivating mystery: During a 30-minute conversation, Altman and Ive mostly exchanged compliments and described their collaboration, without showing the object or even specifying what it's for.
A design test that's… unusual: Altman says Ive measures design success by the moment you feel the impulse to "lick or nibble" the device: the sign it's instinctive and disarming enough to win you over instantly.
A philosophy of radical simplicity: Ive says he's chasing solutions "with an almost naïve simplicity," while embedding a quiet sophistication that makes the device practically effortless to use, free of intimidation.
A vision far from today's chaotic smartphones: Altman compares current devices to a chaotic walk through Times Square, whereas their prototype would feel like a calm break in a lakeside cabin.
A project still stuck in dry dock: According to the Financial Times, the teams haven't figured out how to make the device actually work with current technical means, making it, for now, a very fancy paperweight.
A promise of fast commercialization: Asked whether it could launch within five years, Ive responded "much sooner," hinting at a timeline under two years.
Why it's important: This project embodies a vision where AI dissolves the interface entirely, to the point of making the object feel almost organic to use.
If it works, it could redefine how humans interact with machines; if it fails, it will remain one of the most mysterious and most "tasty-sounding" prototypes ever teased by the Altman-Ive duo.
5️⃣ The Robot That Hiked a GR Trail from Suzhou to Shanghai
The summary: In Shanghai, Agibot Innovation (Shanghai) Technology Co., Ltd. has etched its humanoid robot AgiBot A2 into the history books. Between November 10 and 13, 2025, it completed 106.286 km, earning the record for the longest distance ever traveled by a humanoid robot.
Over one hundred kilometers without tripping: a performance that firmly positions AgiBot A2 as a serious contender in mobility robotics.
Details:
A mechanical marathon in numbers: The Guinness World Records-certified achievement spans 106.286 km (or 348,707 feet and 4.322 inches) walked continuously in the city of Shanghai.
Intense optimization before the big run: Between April and May 2025, engineers fine-tuned every joint to reduce falls during hundreds of hours of consecutive testing.
An extreme summer demonstration: On August 17, AgiBot A2 walked for 24 hours non-stop under nearly 40°C heat, livestreamed to showcase its thermal and mechanical stability.
A symbolic milestone for humanoid mobility: Crossing the 100-km threshold marks a major leap forward and demonstrates A2's ability to maintain its stride without human intervention.
Why it's important: This record goes far beyond a robotic "sporting event."
It offers a glimpse of future walking robots capable of patrolling, delivering, or exploring over long distances without supervision. By pushing endurance to sneaker-level performance, AgiBot A2 signals that a new tier of robotic reliability has just been reached.
❤️ Tool of the Week
Nano Banana Pro : The AI That Turns Any Idea Into a Studio-Grade Visual
Nano Banana Pro is Google DeepMindās new image model, built on Gemini 3 Pro.
A reasoning-powered visual generator that understands the real world, handles flawless text in images, reads live data, and can even merge up to 14 images without losing coherence.
Think of it as Midjourney, Photoshop, Illustrator, and Wolfram|Alpha having an absurdly gifted child.
What is it for?
Generate ultra-precise, āintelligentā visuals
The AI understands context, concepts, and real-world data (weather, recipes, facts). Perfect for infographics, diagrams, guides, explainer sheets.
Create images with perfectly readable text
This is the standout feature: logos, posters, storyboards, typography, mockups with full paragraphs, in multiple languages.
Edit specific parts of an image with āPhotoshop++ā control
Modify a zone, change lighting, switch day to night, shift the focus, apply cinematic effects, change the camera angle.
Combine up to 14 images with full coherence
Maintains identity for up to 5 people. Ideal for pro photomontages, complex compositions, editorial scenes, fashion, advertising.
Turn a simple sketch into a photorealistic 3D visual
A dream tool for designers, illustrators, and product creators.
Produce high-resolution images
Exports in 2K and 4K, with flexible aspect ratios for web, social media, and print.
Check if an image was generated by Google AI
Thanks to SynthID's invisible watermark, you can upload an image and check if it comes from Gemini.
How to use it?
For the general public
Open the Gemini app
Select Create Images → Thinking model
Describe your idea (or upload an image)
Nano Banana Pro generates the richest, most coherent visual
Optional: locally edit lighting, angle, text, composition.
For professionals & creative teams
Available in Google Ads, Google Slides, Vids, Flow (AI filmmaking).
Perfect for campaigns, storyboards, moodboards, marketing visuals.
For developers & businesses
Accessible through Gemini API, Google AI Studio, Vertex AI, Google Antigravity (UX design).
Enables large-scale coherent image generation, product renders, UI/UX mockups.
For students & educators
Generate educational infographics, diagrams, study sheets, explanatory maps instantly.
Video of the Week: When a Robot Turns Into a T-Rex
LimX Dynamics has dropped a demo that looks straight out of a low-budget Jurassic Park… but set in 2030.
Their biped robot TRON1 morphs into a full-scale Tyrannosaurus rex: detailed skin, articulated tail, oversized animated head, and flawless stability, even when operators shake it like a pair of dusty jeans.
Designed for museums and theme parks, this robotic T-rex actually walks, adjusts its posture in real time, and can swap its "skin" in minutes to become any other creature.
It's animatronics 2.0… but on two legs, fully mobile.
A perfect blend of engineering and spectacle, hinting at a new generation of interactive "public animals."
Jurassic World, brace yourself: the competition is showing up on foot.
If an AI had to replace part of your job… which one would you willingly give up?
