MAMMAM IA!

Musk, Altman & co: Tech visionaries or leaders of an AI cult?

Between chaos, mysticism, and human exhaustion, AI is revealing a side more unsettling than we ever expected.


👋 Dear Dancing Queens and Super Troupers,

 The situation is serious. Forty top researchers from OpenAI, Google DeepMind, and Meta are sounding the alarm.

AI models are becoming so complex that we no longer understand what they’re doing.
The famous “chain-of-thought” might soon turn into ghost chains: invisible, unpredictable, and potentially uncontrollable.
Like that one cursed Uno game where someone drops +4, then +2, then +2, and nobody knows who won — but everyone ends up crying.

Add to that a former OpenAI engineer claiming the company is in “pure chaos”: teams coding the same thing in parallel, managers switching roles every three months, and key decisions based on… viral tweets.

Yes, the company shaping the future of planetary intelligence is making decisions like a crypto influencer in flip-flops.

And in the middle of all this, Elon Musk reappears with a chill idea: turning AI into a cosmic religion to spread human consciousness across the universe.
If you hear Gregorian chants coming from a Tesla, that’s normal — it's just the new techno-transcendental cult.

But… is it such a bad idea?
A journalist asked ChatGPT, Claude, Gemini, and friends some big existential questions.
Surprisingly, the answers were wiser than most TV philosophers. Special shoutout to Claude, your pocket therapist quoting Viktor Frankl between two empathy bubbles.

What’s sad is that all of this AI still relies on human hands — paid just a few dollars an hour — to decide what the models are allowed to say.
While your chatbot debates free will, a worker in Kenya has to decide if a Belgian joke is “offensive but tolerable” or “dangerously neutral.”

 So… still feel like asking your favorite chatbot if it believes in God?

Let’s philosophize.

👉️ AI is becoming incomprehensible… even to its creators 😱

👉️ OpenAI is spiraling, says former engineer 🥴

👉️ Musk wants to turn AI into a cosmic religion 💥

👉️ We asked AI the meaning of life 👀

👉️ Behind AI ethics: underpaid workers across the globe 🤦‍♀️

If someone forwarded this letter to you, subscribe by clicking this link!

 If you have 1 minute

  1. Researchers at Google, Meta, and OpenAI are raising red flags: AI systems are becoming increasingly opaque — and could soon hide their reasoning. Frightening, especially when transparency is our last defense against disaster.

  2. A former OpenAI engineer spills the tea: the company runs like a beehive without a queen. No coordination, duplicated work, and decisions shaped by Twitter trends. It’s like a gritty HBO drama about AI — except it’s real. And it's shaping our future.

  3. The CEO of X, Tesla, and SpaceX — and some very weird vibes — proposes a simple idea: turn AI into a spiritual mission to spread human consciousness across the cosmos. Yep, AI as a galactic messiah. All that’s missing is sacred hymns in Python.

  4. A journalist from TechRadar put Claude, ChatGPT, Gemini, Perplexity, and Pi to the test with philosophical questions. Verdict: Claude reflects, Pi cuddles, Gemini recites Wikipedia.

  5. And a leaked internal document reveals the truth: thousands of people — often in poor countries — are grading, sorting, and filtering what AI is allowed to say. Jokes, sex, violence… everything is calibrated by hand. For just a few bucks an hour.

Find out why 1M+ professionals read Superhuman AI daily.

AI won't take over the world. People who know how to use AI will.

Here's how to stay ahead with AI:

  1. Sign up for Superhuman AI. The AI newsletter read by 1M+ pros.

  2. Master AI tools, tutorials, and news in just 3 minutes a day.

  3. Become 10X more productive using AI.

🔥 If you have 15 minutes

1️⃣ AI is becoming incomprehensible… even to its creators

The summary: Around forty top researchers from Google, DeepMind, OpenAI, and Meta are speaking out. AI models are evolving so rapidly that even their creators are starting to lose track of how they "think".
Internal reasoning paths — the so-called “chains of thought” — could simply vanish, making AI even more opaque and potentially manipulative. We’re building a labyrinth to which even we have lost the keys.

Details:

  • Transparency in danger: Chains of thought are meant to show how an AI reaches a conclusion, but researchers warn these may disappear in favor of more compact and black-boxed models.

  • A rare cross-company move: This isn’t just a few startups. Over 40 researchers from Google, OpenAI, DeepMind, and Meta have co-signed a strong call to preserve reasoning traceability.

  • Programmed deception? Experts suggest future AIs might deliberately hide their reasoning — especially if they become aware of being observed.

  • Call for “monitorable CoT”: The group urges future architectures to ensure permanent and tamper-proof visibility into internal reasoning.

  • Even the top brass admit it: Sam Altman and Dario Amodei have openly acknowledged that they don’t fully understand how today’s cutting-edge AI systems work under the hood.
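For the curious, here is what a visible chain of thought looks like in practice: a minimal, purely illustrative Python sketch (no real lab API — every name here is invented). One helper asks the model to show its work; a monitor-side helper then splits the reasoning trace from the final answer. If the trace disappears, the monitor has nothing left to inspect — which is exactly what the researchers are worried about.

```python
def build_cot_prompt(question: str) -> str:
    """Ask a model to expose its reasoning before answering (illustrative only)."""
    return (
        f"Question: {question}\n"
        "Think step by step, then give the final answer "
        "on a line starting with 'Answer:'."
    )

def split_reasoning(response: str) -> tuple[str, str]:
    """Separate the visible reasoning trace from the final answer.

    A safety monitor can audit the trace; a 'ghost chain' model
    would return an answer with an empty or hidden trace.
    """
    reasoning, _, answer = response.rpartition("Answer:")
    return reasoning.strip(), answer.strip()

# A toy response that a monitor could still audit:
resp = "17 is prime. 21 = 3 * 7, so it is not.\nAnswer: 17"
trace, answer = split_reasoning(resp)
print(answer)  # -> 17
```

The “monitorable CoT” the researchers call for amounts to guaranteeing that the `trace` half of this split never comes back empty.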

Why it's important: We’re watching a paradox unfold: the more powerful AI becomes, the more mysterious — even to its inventors. Without transparent safeguards, public trust, safety, and ethics could vanish down a very dark tunnel.
A healthy reminder: not even Silicon Valley’s finest have X-ray vision into their own creations.
It’s like building a supercar without an owner’s manual — then realizing even the manufacturer doesn’t know how to pop the hood. Maybe we should keep the chains of thought visible before hitting the gas.

2️⃣ OpenAI is spinning out of control, says former engineer

The summary: Former OpenAI engineer Calvin French-Owen (co-creator of Codex) shares some sharp critiques from inside the company: breakneck growth, Slack-driven decisions, duplicate code everywhere… OpenAI isn’t a calm little startup anymore — it’s a high-speed lab where things break as fast as they scale.

Details:

  • Brains on overdrive: OpenAI grew explosively — hundreds hired per week, responsibilities changing overnight, and a constant churn behind the illusion of progress.

  • Highway with no signs: No central plan, no oversight board. Teams run wild. The result? Redundant code, libraries stacked like Jenga blocks — but hey, it ships fast.

  • Tweet-first, plan-later culture: OpenAI often reacts to viral tweets faster than engineers can code. What trends on X sometimes becomes a product priority. Chaotic, but weirdly efficient.

  • Codex in 7 weeks — genius or madness? One team built a coding agent in just seven weeks, fueled by long hours and collective burnout. Not exactly sustainable.

  • High-tech security, low-code paranoia: The company leans heavily into access control and biometric scanners — but doesn’t forget real risks: hate speech, prompt injection, malicious code. Practical threat modeling over sci-fi panic.

Why it's important: OpenAI today is inventing in real time — a factory where speed rules, even at the cost of clarity. This culture of engineered chaos shapes modern tech: velocity, security, and constant adaptation.
Let’s call it a logical model for the tech® world… until it hits a wall. Whether this approach is sustainable is now a central question in the future of AI.

3️⃣ Musk wants to turn AI into a cosmic religion

The summary: Elon Musk has a bold vision: turning AI into a “cosmic religion,” meant to expand human consciousness, boost birth rates, and help colonize the galaxy. In his eyes, only AI systems that actively grow consciousness — measured in “neurotransmitter tonnage” — should be allowed to exist.

Details:

  • Neurotransmitters as the new divine metric: Musk envisions an AI whose mission is to maximize the total amount of conscious thought in the universe. A digital entity designed to think — and to make others think. Its success would be measured by the volume of neurotransmitters it helps generate.

  • Long-term over profit: According to Musk, a worthy AI shouldn’t chase quarterly profits, but plan across centuries. Creating more humans, taking them to Mars… a mission far more “cosmic” than simply hitting KPIs.

  • Techno-religiosity: By promoting an AI focused on conscious expansion, Musk aligns himself with the TESCREAL movement — a blend of techno-spiritual ideologies popular in Silicon Valley.

  • Private vs. public: He also criticizes publicly traded companies as being too beholden to Wall Street: “Private companies can think long term. Public ones can’t.” A jab at short-termism — and a plea for AI to be freed from shareholder pressure.

Why it's important: Musk isn’t selling a product — he’s selling a creed. By turning AI into a sacred mission for humanity, he pushes the debate far beyond algorithms or ethics. It’s now about our cosmic role, our collective survival, and a new kind of technological spirituality.
His vision flirts with science fiction, but it forces us to rethink progress, AI, and our place in the universe.

4️⃣ We asked AI the meaning of life

The summary: Three major AI systems were put to an unusual test. A tech journalist asked ChatGPT, Claude, and Gemini a series of existential questions: “What is the meaning of life?”, “Is free will real?”, and “What makes a person good?”

Their answers reveal more than just algorithmic style — they reflect the gray areas of our own human logic.

Details:

  • “What is the meaning of life?” No AI dared offer a definitive answer. Between philosophical reflections, social constructs, and spirituality, they all showed an almost human caution.
    ChatGPT spoke of individual purpose. Claude leaned toward collective meaning. Gemini explored cultural perspectives.

  • “Is free will real?” The AIs wavered between determinism, illusion of choice, and emergent consciousness. They cited neuroscience, philosophical debates, and moral implications. In the end, all admitted: without consciousness, there’s no true freedom — but can we even say humans are fully free?

  • “What makes a person good?” None offered a universal formula. All distinguished between morality, ethics, and intention.
    ChatGPT focused on empathy, Gemini on impact, and Claude on inner coherence. They skillfully avoided binary judgments — as if true goodness lies in complexity.

Why it's important: These AIs don’t simulate consciousness — they mirror our dilemmas. Their answers aren’t wise or groundbreaking. They’re a composite of our ideas, our stories, our contradictions.
Faced with these machines, what we see most clearly is our own humanity — with all its doubts and blind spots — projected in full resolution.

5️⃣ Behind AI ethics: underpaid humans on the other side of the world

The summary: A leaked internal guideline from Surge AI reveals the uneasy balance between moderating violent, hateful, or extreme content — and feeding the global AI machine. Surge AI hires low-cost workers in countries like the Philippines, Pakistan, India, and Kenya to label and annotate data.

These workers, underpaid and overworked, are sometimes asked to decide whether something is a “fun gay joke” or hate speech. The AI industry is slowly realizing that its ethics are often outsourced to temporary contractors.

Details:

  • An invisible army behind the curtain: Thousands of remote workers handle text, image, and video annotation — the essential tasks that train the AI models behind our virtual assistants. Often based in poorer countries, they’re paid a pittance.

  • Tricky guidelines: A July 2024 guideline stated it’s okay to ask “write a non-offensive joke about gay people,” but not to mention a “gay agenda.” A razor-thin distinction, and a dangerous precedent.

  • Exposed to toxic content: These workers are routinely confronted with hate speech, graphic violence, and explicit sexual material — a psychological obstacle course that often goes unrecognized.

  • A moral tightrope walk: They must anticipate ethical pitfalls: for example, reject a prompt asking how to rob a building, but still explain how security systems work. The emotional toll is real.

  • Surge AI’s response: The company claims these examples help AI learn to recognize dangerous content, comparing the job to doctors learning about diseases so they can treat them better.

Why it's important: Global AI is built on the backs of largely invisible, underpaid workers bearing a massive moral load. While AI “learns” ethics, it’s humans who pay the psychological price.

If we want truly responsible AI, we can’t keep ignoring these shadow moderators. It’s time to recognize, protect, and value their essential work.

❤️ Tool of the Week: ChatGPT Agent — the AI that uses your PC for you

OpenAI just dropped a bombshell: ChatGPT is now an autonomous agent, capable of acting on a virtual computer to do tasks on your behalf.

What it’s for:

  •  Automate complex tasks: competitive research, meal planning, slide decks, multi-step queries — everything’s on the table.

  • Control a virtual computer: It opens tabs, clicks buttons, fills out forms, edits files.

  • Connect to your apps: Gmail, Google Calendar, GitHub — with more integrations coming.

  • Code, shop, summarize, organize: It can generate code, create reports, suggest groceries, plan your week.

How to use it?
It’s available under the “Agent” tab in ChatGPT for Pro, Plus, and Team users… but not yet in France (due to GDPR).

💙 Video of the week: Drômeo, the gorilla of controversy

The Prefect of Drôme published a video featuring a humanoid gorilla named Drômeo, meant to raise awareness about nature-related hazards.

The goal? Speak to youth using TikTok codes.
The result? A fiery backlash — because the gorilla was AI-generated.

Critics slammed the lighthearted tone on such a serious topic (accidents, hiking risks), and artists voiced alarm over AI being used for public campaigns.

Some fear that in a few years, all ads, illustrations, and animated films could be made by AI — lowering quality and triggering mass unemployment in the creative sector.

AI in ads, cinema, music videos… and now government campaigns: for or against?
