Published on 24.02.2021
New design, new features, new users: everything is always new on Instagram. Almost everything, at least. Because the discussion about whether platforms like Facebook, TikTok, and Instagram are doing enough against the phenomenon of online hate is a pretty old one. But what does Instagram actually do to prevent bullying, hate speech, and abuse?
Why does Instagram have a problem?
In theory, Instagram doesn't have a problem - all online platforms do. Hate and bullying have found a new home online. Many people feel like they are operating in a lawless space and say or do things online that they would never say or do "in the real world".
Because many platforms have millions of users who are online and posting every day, it is a huge challenge to catch those who spread hate and attack people personally. Providers are constantly caught between granting the greatest possible freedom and protecting their users.
When a platform is flooded with hate, users no longer feel comfortable, and the company behind it faces criticism and a worsening public image. So Instagram has to watch its reputation. The company has two main problem areas to deal with: attacks and hate via direct messages, and hate comments under feed posts.
Artificial intelligence and emphasis on positive content
Algorithms keep everything running smoothly on the Internet. They are also the biggest lever through which companies can make large-scale changes. In its fight against online hate, Instagram is therefore relying on artificial intelligence.
Instagram doesn't tell us exactly how that works. They do, however, admit quite publicly that positive content is more likely to be favored over negative content. Violence, racist terms, insults - if the algorithm detects these things (or topic-related keywords) in a post, it is less likely to be displayed to other users. At the same time, certain terms can be detected in real time, which is why some posts or comments are never approved in the first place.
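The two-tier outcome described above - some content is demoted, some is blocked outright - can be sketched with a toy keyword filter. This is purely illustrative: the term lists and function names below are made up, and Instagram's real detection relies on machine-learning classifiers it does not publish.

```python
# Hypothetical sketch of keyword-based screening (not Instagram's actual system).
REJECT_TERMS = {"slur_a", "slur_b"}    # placeholder: blocked outright
DEMOTE_TERMS = {"violence", "insult"}  # placeholder: shown less often

def screen_post(text: str) -> str:
    """Classify a post as 'reject', 'demote', or 'ok' using the toy term lists."""
    words = {word.strip(".,!?").lower() for word in text.split()}
    if words & REJECT_TERMS:
        return "reject"   # never approved in the first place
    if words & DEMOTE_TERMS:
        return "demote"   # less likely to be displayed to other users
    return "ok"
```

In a real system, the keyword lookup would be replaced by a trained model scoring the text, but the decision tiers work the same way.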
If you write a comment or want to publish a post, you may be warned that it could contain problematic content. Instagram then holds the post or comment back, and you have to confirm again that you really want to publish it. With this, Instagram puts an additional hurdle in front of users who want to insult someone or spread problematic content.
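This "warn, then confirm" hurdle amounts to a simple two-step flow, which the following sketch illustrates. The helper names and keyword check are assumptions for demonstration, not Instagram's actual API.

```python
# Hypothetical sketch of the extra publishing hurdle for flagged content.
PROBLEM_TERMS = {"insult", "threat"}  # placeholder keyword list

def looks_problematic(text: str) -> bool:
    """Toy stand-in for Instagram's real content classifier."""
    return any(term in text.lower() for term in PROBLEM_TERMS)

def try_publish(text: str, confirmed: bool = False) -> str:
    """Hold a flagged comment until the author explicitly confirms it."""
    if looks_problematic(text) and not confirmed:
        return "held"      # user is warned and must confirm again
    return "published"
```

The point of the design is friction, not prevention: a confirmed post still goes through, but the pause gives the author a moment to reconsider.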
Blocking and alternatives
The first idea most Internet users have when they become victims of attacks is blocking. Just about every platform offers the option for users to block each other, i.e. hide posts and restrict communication.
On Instagram, this feature exists as well. If you block another person, you won't see any more posts from them, and that person won't be able to send you new direct messages. Slightly less aggressive is muting, where stories and/or feed posts simply no longer show up on your home feed.
Last but not least, Instagram also offers a third option that is rarely found on other social media: restricting (Restrict). Anyone who restricts another user receives a notification when the person in question sends a message or writes a comment - it must then be approved manually. This prevents insults or inappropriate content from appearing for all to see. The beauty of this is that the restricted user doesn't know their comment has been held back; they still see it themselves. This provides the opportunity to "block" someone without drawing attention to it, because blocking or unblocking sometimes only provokes stronger reactions.
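The visibility rule behind Restrict can be modeled in a few lines: a restricted user's unapproved comment is visible only to its author, while everyone else sees it only after manual approval. The function and parameter names below are assumptions for illustration.

```python
# Hypothetical model of Restrict's comment visibility (names are assumptions).
def comment_visible(viewer: str, author: str,
                    restricted: set, approved: bool) -> bool:
    """Return whether `viewer` sees a comment written by `author`."""
    if author not in restricted or approved:
        return True              # normal or approved comment: everyone sees it
    return viewer == author      # restricted and unapproved: author only
```

Because the author is always in the "can see" set, nothing looks different from their side, which is exactly what makes Restrict less likely to provoke a reaction than a visible block.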
A particular challenge is hate messages in the so-called DMs. Because these messages are private, Instagram can't easily control them. Instead, many messages first land under "message requests".
If a user sends insults or hate via direct message, the recipient can report them. This usually results in a temporary blocking of the account. However, Instagram now states that repeat offenders will be consistently and permanently banned.
Reporting and user assistance
Instagram keeps trying new ways to protect its users. There are awareness campaigns and the features described above. Instagram also publicly clarifies again and again: we defend ourselves against bullying and hate on our platform.
But at the end of the day, it's tilting at windmills. Without the help of us Instagram users, the problem cannot be overcome. So it remains necessary for each individual to keep their eyes open and, when in doubt, use the following function: reporting.
Anyone who sees an inappropriate post or comment can report it to Instagram. It will then be reviewed and either deleted or judged not to violate the community guidelines. This is exactly where opinions differ: what must be deleted, and what still falls under freedom of expression?
The year 2021 has yet to provide a satisfactory answer to these and other questions about cyberbullying. All we can do on Instagram is be as conscientious as possible ourselves: avoid insults and always remember that there's a real person on the other end of the conversation. Report things that catch our eye. And don't absolve Instagram of responsibility, but keep pushing for new ways to limit and combat online hate.