AI like ChatGPT can get “brain rot” by scrolling the internet all day

A recent study suggests that Large Language Models, such as ChatGPT, can experience a decline in performance – similar to human ‘brain rot’ – if they are repeatedly exposed to unimportant or low-quality information.

“Brain rot” is a popular internet term for the habit of consuming content that’s silly, pointless, or doesn’t offer any real value. It’s especially associated with platforms like TikTok, where people quickly scroll through lots of short videos.

Large language models, such as ChatGPT, can suffer something similar. Researchers from Texas A&M, UT Austin, and Purdue University recently tested this by feeding several AI models a steady diet of trivial, low-quality content.

The researchers sorted this junk data into two groups: popular, viral social media posts, and longer pieces of content that don’t offer much substance.

AI models can get brain rot too

The team tested a range of AI models – Llama 3 8B, Qwen2.5 (both the 7B and 0.5B versions), and Qwen3 4B – and all of them struggled as the junk content piled up, showing signs of cognitive decline. It wasn’t a total failure, but the drop was definitely noticeable.

Meta’s Llama model showed the most significant decline in performance after being exposed to low-quality, nonsensical content. It had trouble with logical thinking, understanding the meaning of text, and following safety guidelines. While Qwen3 4B handled the problematic content better, it still experienced some negative effects.

According to Junyuan “Jason” Hong in a post on X (formerly Twitter), the more junk content a model is exposed to, the worse the effects on its capabilities become – a direct relationship between exposure and impact.

Models fed inaccurate or misleading information experienced a noticeable drop in their ability to think and perform tasks. Specifically, we saw declines in their reasoning skills, understanding of lengthy texts, adherence to ethical standards, and a tendency to exhibit negative personality traits.

— Junyuan “Jason” Hong (@hjy836) October 19, 2025

ChatGPT itself was not tested in the research, but the findings show how much AI models can suffer when fed content that doesn’t teach them anything of substance.

AI chatbots are now a common part of everyday life, and are even assisting the U.S. military with important decisions.

Major General William Taylor recently shared with Business Insider that he’s developed a strong connection with ChatGPT.

2025-10-23 14:49