Made with Midjourney. Half code, half Inca Kola.

Clickbait and Karma: A Tale of Two Platforms

April 16, 2025

For this week’s blog post in my misinformation class, we were asked to compare two digital platforms and how each one is trying to deal with misinformation. The assignment is meant to help us better understand what works, what doesn’t, and why these efforts matter.

Disclaimer: My use of social media is limited. I don’t spend a lot of time on platforms like Instagram, Reddit, or Facebook outside of using them to find ideas for my small business or look up tips and tricks for games my husband and I enjoy. Because of that, I don’t usually run into much misinformation there.

YouTube is different. I use it regularly, not just for business and creative ideas, but also to stay up to date on current events. I avoid traditional news sites because, in my opinion, they’ve become too biased or influenced by outside interests. YouTube gives me a space to explore different viewpoints and hear from creators who may have strong opinions but aren’t stuck in their own echo chambers. Reddit, which I mostly use for specific searches, has shown me more than just quick takes. Even when I’m looking for something narrow, I’ve still stumbled into more than a few threads full of strong opinions driven more by emotion than by facts. That kind of energy makes it easy for misinformation to take root and spread. Because of that, I thought it would be interesting to compare YouTube and Reddit, two completely different platforms that each shape how misinformation shows up and spreads online in their own way.

For anyone who doesn’t use these platforms much, here’s a quick breakdown. Reddit is essentially a giant collection of online forums, each focused on a specific topic. These forums are called “subreddits,” and they cover everything from niche hobbies to serious news discussions. What makes Reddit unique is that each subreddit is run by community moderators, regular users who set the rules and manage content. Posts and comments can be upvoted or downvoted by other users, which affects how visible they are.
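
To make that voting idea concrete, here’s a toy Python sketch of net-score ranking. It isn’t Reddit’s actual algorithm (the real ranking also factors in time, activity, and more), just an illustration of how community votes push some posts up and bury others.

```python
# Toy illustration (not Reddit's real ranking): posts with a higher net
# vote score float to the top, so community votes directly shape visibility.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    upvotes: int
    downvotes: int

    @property
    def score(self) -> int:
        return self.upvotes - self.downvotes

posts = [
    Post("Well-sourced explainer", upvotes=410, downvotes=12),
    Post("Emotional hot take", upvotes=95, downvotes=230),
    Post("Niche hobby question", upvotes=40, downvotes=3),
]

# Higher score -> shown higher; heavily downvoted posts sink out of sight.
for post in sorted(posts, key=lambda p: p.score, reverse=True):
    print(f"{post.score:>5}  {post.title}")
```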

YouTube, on the other hand, is a video-sharing platform owned by Google. It’s built around content creators who upload videos on everything from how-tos and opinion pieces to full-on news coverage. Viewers can like, comment, and subscribe to channels, and YouTube’s algorithm recommends videos based on what you watch. Because it’s so widely used and visually driven, it’s often a go-to source for information, even when that information isn’t always accurate.

Reddit’s Misinformation Tug-of-War

Reddit’s system puts a lot of trust in its users to manage content, but the platform does have backup measures when things go off the rails. While moderators of each subreddit handle most of the rule enforcement, Reddit steps in when needed.

There are sitewide rules against content that could cause harm, like medical lies or false claims about voting. If something gets reported enough or a community ignores the rules, Reddit staff can remove it or ban users.

One of Reddit’s strongest tools is quarantining. This limits visibility for a subreddit by hiding it from search, stopping recommendations, and adding a warning before users enter. A good example is r/NoNewNormal, which pushed false claims about COVID, masks, and vaccines. It was quarantined and eventually banned outright after it sparked brigading attacks on other communities, and Reddit quarantined more than 50 similar subs at the same time. The Guardian shared the full story and Reddit’s updated rules here.

In late 2023, Reddit launched its Contributor Program, which pays eligible users real money based on karma and gold earned from posts and comments. While this adds an incentive to post thoughtful or popular content, it could also encourage people to create attention-grabbing posts just for rewards. One catch is that too many downvotes can reduce a user’s total karma, which might disqualify them from earning. That means misinformation that gets called out by the community won’t just hurt visibility, it could hurt a user’s chances to monetize.
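
To picture how that incentive plays out, here’s a rough Python sketch. The karma threshold and payout rate are numbers I made up for illustration, not Reddit’s actual figures; the point is just that falling below the eligibility bar zeroes out earnings.

```python
# Hypothetical sketch of the idea behind Reddit's Contributor Program:
# earnings depend on karma and gold, and heavy downvoting can drag karma
# below the eligibility bar. Thresholds and rates here are made up.
MIN_KARMA_TO_EARN = 1000   # assumed eligibility threshold, not Reddit's real number
PAYOUT_PER_GOLD = 0.90     # assumed dollars per gold award, purely illustrative

def estimated_payout(karma: int, gold_received: int) -> float:
    """Return an illustrative payout; users under the karma bar earn nothing."""
    if karma < MIN_KARMA_TO_EARN:
        return 0.0
    return gold_received * PAYOUT_PER_GOLD

# A well-received contributor vs. one whose misinformation got downvoted hard.
print(estimated_payout(karma=5200, gold_received=40))  # 36.0
print(estimated_payout(karma=300, gold_received=40))   # 0.0 -- downvotes cost real money
```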

Reddit also puts out transparency reports that show how often they take action against misinformation and other policy violations. The reports include data on removed posts, banned accounts, and how Reddit responds to rule-breaking across the platform.

YouTube’s Filtered Fixes

YouTube takes a direct approach to managing misinformation. If a video includes harmful or false claims, it can be taken down by YouTube itself. Repeat rule-breakers risk losing their channels through a strike system: users get a single warning, and after that, three strikes within 90 days lead to permanent channel removal.
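
Here’s a small Python sketch of that strike logic as I understand it from YouTube’s public policy: one warning, then strikes that expire after 90 days, with three active strikes ending the channel. It’s a simplification, not YouTube’s actual enforcement code.

```python
# Minimal sketch of the strike system described above: one warning first, then
# strikes that expire after 90 days; three active strikes ends the channel.
from datetime import datetime, timedelta

STRIKE_LIFETIME = timedelta(days=90)
MAX_ACTIVE_STRIKES = 3

class Channel:
    def __init__(self) -> None:
        self.warned = False
        self.strikes: list[datetime] = []

    def record_violation(self, when: datetime) -> str:
        # First violation is a one-time warning rather than a strike.
        if not self.warned:
            self.warned = True
            return "warning"
        # Only strikes from the last 90 days count as "active".
        self.strikes = [s for s in self.strikes if when - s < STRIKE_LIFETIME]
        self.strikes.append(when)
        if len(self.strikes) >= MAX_ACTIVE_STRIKES:
            return "channel terminated"
        return f"strike {len(self.strikes)}"

channel = Channel()
start = datetime(2025, 4, 16)
for offset in [0, 10, 20, 30]:
    print(channel.record_violation(start + timedelta(days=offset)))
# -> warning, strike 1, strike 2, channel terminated
```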

For videos that don’t cross the line but still cause concern, YouTube adds panels under the content with links to trusted sources. These are common on videos about elections and health. They recently shared updates to their election policies in this blog post, showing tighter guidelines around false voting claims and election outcomes.

YouTube also reduces how much false content can earn. When a video is flagged, they may demonetize it, taking away ad revenue. This puts pressure on creators to be more careful without needing to delete every video.

Their recommendation system was also changed to limit the spread of misleading content. In a recent update, YouTube began targeting misleading thumbnails and titles as well. This article from Google explains the shift, which is aimed at cutting down on videos that try to trick users into watching, otherwise known as clickbait.

They continue to publish transparency reports to track how content is flagged, removed, or adjusted. It’s not a perfect system, but it shows ongoing effort.

What’s Working, What’s Not

Reddit and YouTube take two totally different paths when it comes to fighting misinformation, and honestly, both have hits and misses.

Reddit’s strength is its flexibility. Because subreddits set their own rules, many communities are quick to remove false content and keep discussions sharp. But that same setup is also its biggest flaw. Some subs are strict while others let anything fly, even if it’s totally false. There’s no consistency, which makes it harder to know what content to trust and puts more responsibility on the reader. Reddit has stepped in more lately, like with quarantines and platform-wide bans, but it still feels like a patchwork system. If Reddit gave moderators stronger tools or clearer platform-backed rules around misinformation, it could help bring balance across communities without removing their independence.

The new Contributor Program adds another layer to this, since users now have a financial reason to post content that will get noticed, which isn’t always the same thing as content that’s accurate.

YouTube has the opposite issue. Its policies are clear and applied sitewide, but that doesn’t always mean things get handled well. The strike system works in theory, but borderline content still gets views before it’s flagged. And the algorithm, while improved, can still steer people into conspiracy territory if they aren’t careful. The added info panels are helpful, and demonetizing bad content hits where it hurts, but the platform could do more to show users why a video was flagged or restricted. More transparency and user education would go a long way.

Both platforms could learn from each other. Reddit could use more top-down structure, while YouTube could open up more space for user-driven feedback and community moderation. Right now, each platform covers the blind spots the other still has.

How They Could Do Better

If I had to offer advice to both platforms, it would come down to building smarter systems that help users before misinformation spreads, not just after.

For Reddit, giving moderators access to more built-in tools could make a huge difference. AutoModerator is helpful, but it’s limited unless the mods put in a ton of setup work. A Reddit-created misinformation filter that works across subreddits, sort of like what Instagram does with false information overlays, could make things more consistent. Reddit could also provide optional training or dashboards for mods to spot patterns in flagged posts. That would let Reddit stay community-focused without ignoring the bigger picture.
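
To show the kind of rule I mean, here’s a rough Python sketch of a keyword-based filter. AutoModerator’s real configuration is YAML and has to be set up per subreddit; this is just a conceptual stand-in for a platform-provided rule set, with a watchlist I made up.

```python
# Conceptual sketch of the kind of keyword rule moderators currently wire up
# by hand in AutoModerator. A platform-built filter could ship rules like this
# to every subreddit by default instead of relying on each mod team's setup.
import re

# Hypothetical watchlist; a real system would need far more nuance than keywords.
MISINFO_PATTERNS = [
    r"\bmiracle cure\b",
    r"\bvaccines? cause autism\b",
    r"\belection was stolen\b",
]

def flag_for_review(title: str, body: str) -> bool:
    """Return True if a post should be held for moderator review."""
    text = f"{title}\n{body}".lower()
    return any(re.search(pattern, text) for pattern in MISINFO_PATTERNS)

print(flag_for_review("Doctors hate this miracle cure", "..."))                        # True
print(flag_for_review("Weekly vaccine research round-up", "Peer-reviewed links here"))  # False
```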

For YouTube, better communication is key. When a video is flagged or demonetized, creators and viewers should be able to see exactly why. Other platforms like TikTok already show clear content warnings with links to more info. YouTube has the data – it just needs to show more of it to users. Another smart move would be creating a visible feedback loop. Let users suggest corrections or add community notes, similar to what X is trying (even if it’s messy over there). If handled right, that kind of input could improve accuracy and trust.

In the end, both Reddit and YouTube have the reach to shape what people believe. Giving users more context, better tools, and a clearer view of what’s being flagged would help stop misinformation before it spreads too far.

Reddit and YouTube couldn’t be more different in how they’re built, but they both carry real weight in shaping what people see, believe, and share. Reddit puts that power mostly in the hands of its communities, while YouTube takes a top-down approach with platform-wide rules and automation. Both methods have their strengths, but also some pretty clear gaps.

What stands out most to me is that no one system works perfectly on its own. People need tools, transparency, and the chance to think critically about what they’re seeing. Whether it’s a Reddit thread full of emotion or a YouTube video with questionable claims, the real solution is a mix of better platform support and more informed users.

Misinformation isn’t going away anytime soon, but that doesn’t mean we can’t do better. Platforms have a role to play, but so do we.

- the Alchemist