
Is ChatGPT quietly putting your brain to sleep?

From MIT to OpenAI, the alarms are piling up: AI could make you dumber, bankrupt you... or kill you. But don’t worry — it’s also here to whisper sweet nothings in your favorite celebrity’s voice.


👋 Dear Dancing Queens and Super Troupers,

Bad news this week: your brain might be melting… and you don’t even realize it.
According to a study from MIT, using ChatGPT reduces activity in areas linked to memory, critical thinking, and creativity.
Basically, while you're outsourcing that email or homework assignment, your cortex is taking a nap. Who do we thank? GPT, of course.

But while our minds are snoozing, AI is heating up fast.
OpenAI is sounding the alarm: its upcoming models could help any wannabe mad scientist cook up biological weapons.
Yes, that’s right — deadly viruses delivered on an API silver platter.

Meanwhile, Zuckerberg is throwing nine-figure checks at building his own “superintelligence lab,” aiming to outsmart the human mind.
And if you were thinking of switching to a chill job at Amazon — bad luck. Andy Jassy just calmly announced that AI will replace a large share of positions.

No worries though, you can always chat with Jordan — the ex-tabloid star turned AI muse on OhChat, the OnlyFans of the future. She never sleeps, always remembers you, and whispers things Siri wouldn’t dare say…

Critical thinking in free fall, jobs disappearing, desire on-demand… AI is remixing humanity at full speed.

Now it’s up to us: inspiring co-pilot, or sleep-inducing autopilot?

Let’s not lose our minds entirely.

👉️ The dark side of ChatGPT: is it melting your brain? 🤯

👉️ Creating a deadly virus with AI? This nightmare is getting real 💥

👉️ Meta’s billion-dollar bet on the ultimate AI 🤔

👉️ Fewer humans, more algorithms: Amazon’s new game plan 🤐

👉️ AI is climbing into your bed (and your wallet) 💲

If someone forwarded this newsletter to you, subscribe by clicking on this link!

If you have 1 minute

  1.  An MIT study shows that intensive use of ChatGPT reduces activity in brain areas linked to memory, critical thinking, and creativity. Smooth texts, passive minds, and a growing "cognitive debt" are silently accumulating.

  2. OpenAI’s next models could help even amateurs design biological weapons. The company is calling for strict oversight — but admits that even a 99.999% safety rate might not be enough...

  3. Mark Zuckerberg has launched an ultra-exclusive “Superintelligence Lab,” is hiring with million-dollar offers, and just injected $14.3 billion into Scale AI. The goal? Surpass human intelligence — hello AGI.

  4. Amazon CEO Andy Jassy says AI will replace a large number of administrative roles. Fewer humans, more agility, more profit? Workers aren’t exactly thrilled…

  5. Katie Price aka Jordan is now an AI star on OhChat, the “OnlyFans 2.0.” Visual and vocal clones are selling artificial intimacy by subscription — deepening our isolation inside the digital world.

The Daily Newsletter for Intellectually Curious Readers

Join over 4 million Americans who start their day with 1440 – your daily digest for unbiased, fact-centric news. From politics to sports, we cover it all by analyzing over 100 sources. Our concise, 5-minute read lands in your inbox each morning at no cost. Experience news without the noise; let 1440 help you make up your own mind. Sign up now and invite your friends and family to be part of the informed.

🔥 If you have 15 minutes

1️⃣ The hidden cost of ChatGPT on your brain

The summary: A study from the MIT Media Lab reveals a troubling side effect of AI: using ChatGPT for writing may weaken memory, creativity, and critical thinking.
Users' brains slip into a kind of “cognitive standby” when interacting with the tool. Texts become bland, brain activity slumps, and a risk of intellectual dependency emerges.
AI — not just an assistant, but a mental anesthetic?

Details:

  • MIT experimental study: 54 participants were divided into three groups (ChatGPT, Google, no tool) and asked to write argumentative essays while their brain activity was monitored with EEG.

  • Collapse of cognitive activity: The ChatGPT group showed the weakest activation in brain regions associated with planning, working memory, and creativity.

  • Generic, soulless writing: Essays written with ChatGPT were rated as “bland,” often interchangeable, and lacking personal nuance.

  • Growing AI dependence: Over time, ChatGPT users copied more and rewrote less — losing motivation to think or rephrase ideas.

  • Google kept brains engaged: Those using Google did more research, thought about sources, and felt more satisfied with their work.

  • Balanced use is possible: A hybrid group, forced to think first without AI before accessing it, showed stronger brain activity — suggesting that thoughtful use of AI can actually help.

  • Cognitive debt: Researchers warn of a long-term effect, where AI-induced mental laziness gradually erodes intellectual capacities — especially in younger users.

Why it's important: As AI slips into schools, homework, work emails and even private journaling, this study issues a clear warning: thinking is a muscle.
If used passively, AI risks atrophying it. This isn’t about being technophobic — it’s a call for conscious, educational use.
AI can enrich our minds… or slowly hollow them out.

2️⃣ Creating a deadly virus with AI? This nightmare is becoming real

The summary: OpenAI warns that its upcoming AI models — increasingly powerful in the biological domain — could enable even non-experts to create or replicate biological weapons.
The growing sophistication raises serious concerns and calls for urgent safety reinforcement before any public release.

Details:

  • Imminent high-risk level: Upcoming models are expected to reach “High” status on OpenAI’s Preparedness Framework, meaning they pose an increased risk of enabling bioweapons, even for amateurs.

  • Biotech dual-use dilemma: The same functions that benefit medical research (biological data reasoning, chemical reaction predictions, lab guidance) can be repurposed for malicious use.

  • Amateurs + AI = danger: OpenAI highlights the risk of “novice uplift” — where a total beginner could design or replicate a known biological weapon using AI assistance.

  • Extreme reliability required: Detecting and blocking dangerous use cases must approach near-perfection — a 99.999% success rate might still not be enough (see the quick sketch after this list).

  • Proactive steps: OpenAI is organizing an event with NGOs and government scientists to jointly assess both the promises and perils of emerging AI models.

  • Ongoing safety measures: These include collaboration with biosecurity experts, extensive red-teaming, sophisticated filtering systems, tighter access controls, and human-in-the-loop review.
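
To see why even “five nines” might fall short, here is a quick back-of-the-envelope sketch (the numbers are our own illustration, not OpenAI’s):

  # Hypothetical illustration: even a 99.999% detection rate
  # still leaves misses once request volumes get large.
  detection_rate = 0.99999           # assumed filter success rate
  dangerous_attempts = 1_000_000     # assumed number of malicious requests

  expected_misses = dangerous_attempts * (1 - detection_rate)
  print(f"Requests slipping through: {expected_misses:.0f}")   # ~10

With bioweapons, a single miss could be catastrophic, which is exactly OpenAI’s point.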

Why it's important: We’ve crossed into dangerous new territory: after automating code, images, and text, AI is now venturing into biology.
It’s becoming a powerful double-edged tool, capable of accelerating medicine… or unleashing global threats.

This isn’t science fiction: OpenAI is setting a high bar, delaying releases for safety, and calling for regulatory and technical safeguards.
The message is clear: safety before power. But will that be enough for governments and the industry?

3️⃣ Meta wants to build the ultimate AI (and is throwing billions at it)

The summary: Meta, under Mark Zuckerberg’s direction, is launching a new “superintelligence” lab and massively investing to rival OpenAI and Google.
By acquiring 49% of Scale AI for $14.3 billion and offering nine-figure hiring packages, the goal is clear: reach Artificial General Intelligence (AGI).
But this bold strategy comes with internal doubts about feasibility and long-term vision.

Details:

  • Mega-investment in Scale AI: Meta is injecting $14.3 billion to acquire 49% of Scale AI, bringing its founder Alexandr Wang into the new Superintelligence Lab.

  • Targeted elite recruitment: Zuckerberg is offering seven- to nine-figure pay packages, aiming to assemble around 50 researchers working physically close to him.

  • Frustration and direct intervention: Delays and the underwhelming performance of Llama 4, including its flagship “Behemoth” model, prompted Zuckerberg to take direct control of the initiative.

  • The superintelligence race: Meta’s stated goal goes beyond AGI — it’s aiming to surpass human intelligence with full-blown superintelligence.

  • Persistent skepticism: Internal voices and outside experts question whether the vision has a clear roadmap, and whether Meta’s past missteps weaken its appeal.

Why it's important: Meta is shifting gears in the AGI race, with massive stakes: billions in funding, talent poached with absurd salaries, and global ambitions.
But this frenzy also raises uncertainties: is superintelligence truly attainable? And if execution falters, will Meta fall back into another Llama 4-style fiasco?
We’re witnessing a strategic reinvention that could accelerate — or derail — the future of AI… and all of us along with it.

4️⃣ Fewer humans, more algorithms: Amazon’s new game plan

The summary: Amazon’s not beating around the bush anymore. AI is going to replace humans — and the CEO is saying it outright.
In an internal memo, Andy Jassy announced that headcount will “naturally” decrease in the coming years as AI takes over many tasks. Forget the “human-machine collaboration” talk — Amazon is going all-in: fewer people, more tech, and goodbye to "non-strategic" roles.

Details:

  • Andy Jassy's official memo: On June 17, Amazon’s CEO sent a blunt message to staff: “AI will reduce the size of our corporate workforce.”

  • 1,000 internal AI projects: Amazon is working on everything — voice assistants, inventory forecasting, hiring, logistics, automated customer service — aiming to optimize it all.

  • Leaner, scrappier teams: Jassy’s vision? Smaller, more agile, more tech-driven teams. Translation: if you don’t have an AI copilot in your brain, you’re out.

  • Self-training required: Employees are encouraged to upskill in AI (“be curious”), which sounds a lot like a polite “you’re on your own.”

  • Tense work atmosphere: Some employees are calling out the double standard. Amazon talks about innovation, but acts like a well-oiled layoff machine.

  • A broader trend: Other giants (IBM, JPMorgan, Meta…) are also cutting jobs, gradually replacing support functions with AI.

Why it's important: This isn’t a trend — it’s a declared policy. AI is no longer just an assistant, but a cost-cutting tool. And when Amazon sets the tone, the whole industry listens.
The “productivity shock” promised by AI begins here, in the eerily quiet hallways of emptied open-plan offices.

5️⃣ AI is sliding into your bed (and your wallet)

The summary: A former British tabloid star, an AI startup… and a platform that monetizes your fantasies.
Katie Price (aka Jordan) has joined OhChat, the “OnlyFans of AI,” where users can chat, flirt, or fantasize with AI-powered celebrity avatars.
Thanks to vocal and visual cloning tech, users interact with a sexy, virtual version of their favorite star — with no time limits… or shame.

Details:

  • Jordan, AI edition: Katie Price has lent her voice and image to create a digital twin of her glam-era alter ego Jordan, a 1990s icon.

  • OhChat platform: Launched in late 2024, marketed as the “AI-era OnlyFans,” it already has over 200,000 users, mostly in the US.

  • Revenue split: 80% of earnings go to the celebrities, 20% to the platform. About 20 public figures have already signed on, including Carmen Electra.

  • How it works: Creating an AI avatar takes a few hours, around 30 photos, and a voice sample. OhChat uses a Meta LLM to handle the conversations.

  • Moderation levels: Katie Price is at “Level 2” — topless images and spicy chats, but no full nudity or explicit scenes.

Why it's important: This marks the rise of a new market at the intersection of adult entertainment, generative AI, and the attention economy.
The result? Digital clones that never sleep, effortless monetization of desire, and an impending boom in algorithmic “sextainment.”

❤️ Tool of the Week: Midjourney goes video! 

Midjourney has officially launched its first video model. It transforms a single image into a 4 to 10-second animated clip — all in the dreamy, signature style that made the platform famous.
This isn’t Sora. This isn’t cinematic realism. This is Midjourney in motion.

What it’s for:

  • Bring your images to life: Turn any still image — from Midjourney or elsewhere — into a stylized video.

  • Artistic exploration: Smooth animation, surreal vibes, consistent visual effects — no need for realism.

  • Create video loops: Perfect for GIFs, social visuals, or immersive clips.

  • Image-to-image interpolation: Coming soon — generate fluid, stylized transitions between two images.

  • How to use it:
    Go to the Midjourney website, log in, upload an image, head to your library, and select the “Animate” option.
    You’ll get a 4 to 10-second video with a fluid and expressive render.

💙 Video of the week: AI creates the first Cat Olympics

Competitive diving cats in 5 to 10 seconds of pure aquatic madness?
Don’t worry — no real felines were harmed. It’s just a clip created with MiniMax’s video model, Hailuo 02.

Why is it going viral?
Honestly, the visuals are jaw-dropping. Fur movement, splash effects, realistic dive arcs — all powered by advanced physics simulation and multimodal prompts.
