Reddit mods furious after site stops bots from auto-banning users who post in certain subreddits

Reddit is updating its moderation rules to prevent bots from automatically banning users solely for participating in specific communities.

Starting March 19th, commonly used moderation bots will no longer be allowed to automatically ban users based on what they do in other communities on Reddit.

Reddit says the goal is to stop users from being punished simply because of where they post, rather than for anything they actually did within a specific subreddit. The idea is to judge people on their actions in a community, not merely on where else they happen to participate.

In November 2024, popular streamer Asmongold was surprised to find himself banned from the Dragon Age subreddit. The ban stemmed from his association with his personal subreddit, r/asmongold.

Reddit administrators have noticed that many communities use automated bots to block unwanted users. However, Reddit says these bots often end up banning people across multiple communities, even if those people haven’t actually done anything wrong.

Reddit announced it is changing how bans work, explaining that the previous approach of blocking many accounts based on their connections to others was confusing, unfairly impacted innocent users, and often blocked the wrong people.

Reddit is stopping third-party bots from automatically banning users just because of which other communities they’re involved in.

"Ban bot policy update: removing automated bans based on community association" — posted by u/quietfairy in r/modnews

Moderators say the change could make harassment harder to control

The change immediately faced backlash from moderators who use automated tools to oversee large online communities and stop groups from harassing others.

Several moderators expressed concern that the update would add to their responsibilities and create challenges in safeguarding at-risk groups.

One Reddit user questioned whether the announcement would significantly increase the work for moderators of communities like those focused on LGBT+ topics.

One moderator pointed out that the update doesn’t account for how harassment usually spreads across different online communities.

Other users complained that large subreddits can be persistently toxic, and that reporting harassment or coordinated attacks often fails to resolve the problem.

Some users questioned whether Reddit’s current moderation features could do everything ban bots previously handled. Reddit suggested moderators use its existing tools like the Harassment Filter, Crowd Control, Reputation Filter, and Ban Evasion Filter to help.

Reddit administrators recently made another adjustment to how the platform operates. Back in December, they limited the number of popular communities (subreddits) that volunteer moderators can oversee.

2026-03-06 18:51