
How do people even get shadowbanned?

In simple terms, “shadowbanned” describes a form of online censorship where you’re still allowed to speak, but hardly anyone gets to hear you. It is more often than not triggered by the type of content you are promoting (this could be a single video or a whole series).

Oftentimes, the most challenging part of being shadowbanned is never getting an answer as to why it happened or when it will end. This can leave users feeling as if they are being ‘punished’. For some users, there is a clear reason, say, for example, they are perpetuating racial hate and misogyny. For others, it feels unjust. The ‘free the nipple’ feminist wave is a great example of female creators who were shadowbanned simply for showing some skin in an artistic context.

It can also feel bitter when we consider that shadowbanning is bizarrely selective: some accounts that do perpetuate misogyny and hate *cough Andrew Tate* receive hyper engagement, whereas a picture of Ashley Graham wholesomely breastfeeding is taken down, and as a result her engagement, and that of others like her, suffers.

Two decades into the social media revolution, it’s now clear that moderating content is important to keep people safe and conversation civil. But we the users want our digital public squares to use moderation techniques that are transparent and give us a fair shot at being heard. Musk’s exposé may have cherry-picked examples to cast conservatives as victims, but he is right about this much: Companies need to tell us exactly when and why they’re suppressing our megaphones, and give us tools to appeal the decision.

First we have to agree that shadowbanning exists. Even victims are filled with self-doubt bordering on paranoia: How can you know if a post isn’t getting shared because it’s been shadowbanned or because it isn’t very good? When Black Lives Matter activists accused TikTok of shadowbanning during the George Floyd protests, TikTok said it was a glitch. As recently as 2020, Instagram’s head, Adam Mosseri, said shadowbanning was “not a thing” on his social network, though he appeared to be using a historical definition of selectively choosing accounts to mute.

Essentially, the lack of accountability from the platforms themselves isn’t encouraging. It also feels like a lazy approach to regulating what is and isn’t appropriate content. It’s giving: “we’ll let the algorithm figure it out”.

But the statistics don’t lie: a recent survey by the Center for Democracy and Technology (CDT) found that nearly 1 in 10 Americans on social media suspect they’ve been shadowbanned.

That said, there are signs of late steps in the right direction from the platforms themselves. For example, on Dec. 7, Instagram unveiled a new feature called Account Status that lets its professional users know when their content has been deemed “not eligible” to be recommended to other users, and lets them appeal that decision. “We want people to understand the reach their content gets,” says Claire Lerner, a spokeswoman for Facebook and Instagram parent Meta.

Musk’s “Twitter Files” expose some new details on Twitter’s reduction systems, which it internally called “visibility filtering.” Musk frames this as an inherently partisan act — an effort to tamp down right-leaning tweets and disfavored accounts such as @libsoftiktok. But it is also evidence of a social network wrestling with where to draw the lines for what not to promote on important topics that include intolerance for LGBTQ people.

Meta and Google’s YouTube have to some extent articulated their effort to tamp down the spread of problematic content, each dubbing it “borderline.” Meta CEO Mark Zuckerberg has argued it is important to reduce the reach of this borderline content because otherwise its inherent extremeness makes it more likely to go viral.

When it comes to fixing shadowbanning, what needs to change is how social media platforms make their power visible. In other words, reducing the visibility of content without telling people has become the norm when it shouldn’t be. But building transparency into algorithmic systems that weren’t designed to explain themselves won’t be easy.

Tarleton Gillespie, author of the book “Custodians of the Internet”, put it brilliantly: he wishes every social media platform would add a little information screen for each post, giving you the key facts about whether it was ever taken down or reduced in visibility, and if so, what rule it broke. (There could be limited exceptions when companies are trying to stop the reverse-engineering of moderation systems.)

Musk said earlier in December he would bring something along these lines to Twitter, though so far he has only delivered a “view count” for tweets, which gives you a sense of their reach.

But most importantly, we the users need to feel empowered, and the ability to push back when algorithms misunderstand us or make the wrong call would reinforce that.
