Mindfulness
From Psychology Today
How Saying "Please" to AI Changes the Way We Think About It
Using polite language with AI creates perceived relationships that reduce objectivity and increase unhealthy reliance on its responses.
Accomplishment Hallucination is a cognitive state in which speed feels like competence, output feels like accomplishment, and work feels done when the actual work (the thinking-through, the failure-mode analysis, the sitting with uncertainty until the problem reveals its structure) hasn't happened at all. As in a dream, physics need not apply. AI can create a similar state in waking life, literally, as your very words assume form before your eyes as if conjured by a sorcerer. But, as in real life, the code may be buggier than we realize.
The Cognitive Reflection Test (CRT) has been around since 2005 but recently gained popularity on social media, with one TikTok user's breakdown of its three questions drawing 14 million views. The test was created by psychologist Shane Frederick, now at the Yale School of Management, to help predict whether people are likely to make common mistakes in thinking and decision-making.
Have you ever seen an image in the sky, or felt that the lyrics of your favourite song spoke directly to your personal life? Such moments can be unsettling, poetic, or just plain strange. They are examples of apophenia, expressions of our innate tendency to find patterns and attribute meaning to things that are random.
In 2025, researchers from OpenAI and MIT analyzed nearly 40 million ChatGPT interactions and found that approximately 0.15 percent of users demonstrate increasing emotional dependency, roughly 490,000 vulnerable individuals interacting with AI chatbots every week. A controlled study revealed that people with stronger attachment tendencies, and those who viewed AI as a potential friend, experienced worse psychosocial outcomes from extended daily chatbot use. The participants couldn't predict their own negative outcomes. Neither can you.
A widely discussed concern about generative AI is that systems trained on biased data can perpetuate and even amplify those biases, leading to inaccurate outputs or unfair decisions. But that's only the tip of the iceberg. As companies increasingly integrate AI into their systems and decision-making processes, one critical factor often goes overlooked: the role of cognitive bias.
How do you know if, say, marrying your dating partner will lead to long-term happiness? Or whether accepting a demanding new job (with all the added responsibilities and time commitment) will bring lasting fulfillment? These and other major life choices rest on the belief that you truly know yourself, i.e., your abilities, values, and desires. In other words, they rely on (presumably accurate) self-knowledge.
I remember working on my book and catching myself mid-paragraph. I'd just finished a sentence that felt particularly satisfying to write and paused to ask: Why does this feel so good? The answer wasn't flattering. What I'd written sounded smart, but it wasn't clear. I realized I'd been unconsciously filtering ideas through "does this make me look clever?" instead of "will this help the reader?"
But I managed to access my inner professional by simply showing curiosity and asking, "What story are you telling yourself about why he is doubling his efforts to help out?" She replied, "It makes me think that he feels I'm incompetent and that he can do it better than me. I think it's his job to support our family, and mine is to be responsible for all things related to our home."
It's fair to say that belief is rarely rational. We organize information into patterns that "feel" internally stable. Emotional coherence may be best explained as the "quiet logic" that makes a story satisfying, somewhat like a leader being convincing or a conspiracy being oddly reassuring. And here's what's so powerful: it's not about accuracy; it's about psychological comfort, or even that "gut" feeling. When the pieces fit, the mind relaxes into complacency (or perhaps coherence).
The problem of people falling for falsehoods has become an urgent issue in recent years, as new technologies have conspired with sociopolitical currents within the culture to spread misinformation at unprecedented speed and reach. Psychologists who study this issue have focused mainly on individual vulnerabilities: the cognitive quirks and biases that predispose us to believe falsehoods, buy into lies, and give in to speculation.
Thinking forward is an automatic process. Cause, then effect. Input, then output. A to B. It feels logical, and normal, to start with a conclusion and then build justification around it. But we can always take our thinking a step further. Sometimes, the best way to get the answers you want is to think backwards. It's called mental inversion: turn the whole thinking process upside down. As the great algebraist Carl Jacobi said, "Invert, always invert."
Everyone employs biases, otherwise known as cognitive shortcuts, in their lives every day. Imagine you're scrolling through your social media feed and immediately dismiss a news article because it comes from a source you don't typically trust. Or maybe you're convinced your favorite restaurant is the best in town, remembering all the great meals you've eaten there while forgetting that mediocre dinner last month.
The investment decisions people make are often influenced by cognitive biases that can lead to irrational choices or behavior contrary to personal financial goals. One such bias, the status quo bias, is the tendency to prefer existing circumstances over change.