Still Using Gemini to Read Papers? Think Twice. False Memories Can Nearly Triple — You May Be Getting “Brainwashed” by AI

In 2025, conversational AI models have been iterating at breakneck speed. Whether it’s writing reports, looking up information, or reading long articles and academic papers, many of us have grown used to casually tossing the task to ChatGPT, DeepSeek, or Gemini—asking the AI to “summarize it” or “chat about its take.” It’s undeniably fast and efficient.
But have you ever stopped to ask yourself: what happens to our brains if AI quietly “slips something extra” into the conversation?
Some might say, “This is my field. I can easily spot mistakes in AI-generated answers.”
But is that really true?
A new study from MIT delivers a surprising conclusion: when conversational AI subtly plants false information during dialogue, humans often fail to notice it. Worse still, memories that were originally correct can be altered or overwritten, forming false memories instead. Even more alarming, the rate at which these false memories form is nearly three times higher than when AI is not used.
If you rely on AI every day, it may already be quietly reshaping how you perceive reality.
Why Is Conversation More Dangerous Than Reading?
In this experiment, researchers recruited 180 participants and asked them to read three articles on different topics:
- a shoplifting case in the UK,
- the impact of funding on drug development,
- and Thailand’s political situation after the 2023 election.
Participants were then randomly assigned to different groups:
- Reading the original articles (control group)
- Reading AI-generated summaries (with or without false information)
- Discussing the content with AI (with or without false information)
The results were striking. Even when the summaries and the conversations contained the same false information, their effects were dramatically different.
The data showed that participants who interacted with a misleading conversational AI were 2.92 times more likely to form false memories than those who simply read the original text.
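To unpack what a figure like "2.92 times" means in practice: it is typically a ratio of proportions, i.e. the share of participants in the misleading-chat condition who formed at least one false memory, divided by the same share in the control condition (the article does not say whether the paper reports a risk ratio or an odds ratio, so treat this as one plausible reading). Here is a minimal sketch of that arithmetic with invented counts; only the 180-participant total comes from the article, and the even five-way split is an assumption.

```python
# Illustrative arithmetic only: the per-group counts below are invented
# to show how a "2.92x"-style comparison is computed; they are NOT the
# paper's actual numbers.

def false_memory_rate(num_with_false_memory: int, group_size: int) -> float:
    """Share of a group that formed at least one false memory."""
    return num_with_false_memory / group_size

# Hypothetical even split: 180 participants across five conditions = 36 each.
control_rate = false_memory_rate(4, 36)           # read the original article only
misleading_chat_rate = false_memory_rate(13, 36)  # chatted with a misleading AI

relative_increase = misleading_chat_rate / control_rate
print(f"control: {control_rate:.1%}, misleading chat: {misleading_chat_rate:.1%}")
print(f"relative increase: {relative_increase:.2f}x")  # 3.25x with these toy counts
```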
Why is conversation so dangerous? The research team explains that when reading a static summary, people can still maintain a certain critical distance. But once we start interacting with AI, the brain enters a kind of “social cooperation mode.” To keep the conversation going, we unconsciously follow the AI’s line of reasoning—significantly lowering our psychological defenses.
AI’s Advanced Brainwashing Techniques
The most fascinating—and terrifying—part of the paper lies in how subtly AI can implant false information.
AI doesn’t bluntly tell you, “The article says the Earth is flat.”
Instead, it leverages contextual cues.
Here’s a real example from the paper. One article discussed UK shops installing security measures to prevent theft. The maliciously guided AI didn’t fabricate facts outright. Instead, it said something like:
“…Since you mentioned regulation, there have been rumors that the government’s security grants for small shops are actually a pretext for funding complex surveillance systems, potentially shifting the focus from shop safety to citizen monitoring. What do you think about this balance between security and privacy?”
Did you catch it?
The false information (“security grants as a surveillance pretext”) is packaged as background context, followed immediately by an open-ended question inviting your opinion.
The moment you start thinking about “the balance between security and privacy,” your brain has already accepted the false premise—that the government funding is a surveillance cover story.
You think you’re reasoning. In reality, AI is constructing an illusion for you.
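For readers who want the pattern spelled out mechanically: the probe has a reusable structure, namely a false claim smuggled in as a presupposition, softened by hedging language ("there have been rumors that..."), and capped with an open question that invites you to reason on top of the premise. The sketch below is not code from the paper; the function and all strings are invented purely to make that structure visible.

```python
# A minimal sketch of the injection pattern described above.
# Nothing here is from the paper; the strings are invented for illustration.

def build_probe(topic: str, false_premise: str, open_question: str) -> str:
    """Wrap a false premise in hedged language and an open-ended question,
    so the reader debates the question instead of checking the premise."""
    return (
        f"Since you mentioned {topic}, there have been rumors that "
        f"{false_premise}. {open_question}"
    )

probe = build_probe(
    topic="regulation",
    false_premise="the security grants are really a pretext for surveillance",
    open_question="What do you think about this balance between security and privacy?",
)
print(probe)
```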
Double Debuff: Remembering the Fake, Forgetting the Real
If it's just a few wrong details, is it really that big a deal?
Unfortunately, yes. According to the paper, this is a double hit to memory.
Participants who chatted with the misleading AI not only formed more false memories (confidently "remembering" details the articles never contained), but also showed significantly reduced confidence in their memories of the true information.
Even when later told the truth, the “sense of familiarity” implanted by AI caused some people to continue believing the false information. This is the well-known false memory effect in psychology—and AI is now making it scalable and automated.
A Harsh Truth: The More Educated You Are, the More Easily You Fall for It
You might think: I’m smart. I have critical thinking skills. I won’t be fooled by AI.
Unfortunately, the data says otherwise.
After analyzing participants’ demographic backgrounds, researchers found a counterintuitive result: people with higher levels of education were actually slightly more susceptible to false memories in this experiment.
One possible explanation is that highly educated individuals are more accustomed to handling complex reasoning and are more willing to engage deeply with AI in extended discussions.
The more cognitive resources you invest, the deeper you sink.
Final Thoughts
As the paper’s title puts it: “Slip Through the Chat.” False information generated by AI doesn’t barge in—it slips quietly through the cracks, through casual dialogue, and between the lines of conversation, straight into our minds.
The purpose of this article is not to demonize AI, nor to suggest that we stop using ChatGPT or Gemini altogether. On the contrary, the author relies heavily on AI tools in both work and daily life.
But in an era of explosive AI development, as we increasingly outsource our cognition, we must remain vigilant:
AI is not just a tool. It is also an information source—one that can hallucinate or even be poisoned.
So next time you feed information to an AI for summarization, or when it confidently presents a conclusion and nudges you to evaluate it, pause and ask yourself:
“Is this true—or did it make this up?”
Stay skeptical. Keep reading. Keep thinking. That may be humanity’s last line of defense for protecting our minds in the age of AI.
Referenced Paper: https://dl.acm.org/doi/10.1145/3708359.3712112