Thierry Henry quits social media and calls for platforms to act against racist online attacks


On March 26, French football legend Thierry Henry, who had recently stepped down as coach of CF Montréal, quit all social media platforms in a bold statement against online racism. Speaking directly to his combined 14.8 million followers on Twitter, Facebook and Instagram before closing his accounts, he said:

Hi Guys,
From tomorrow morning I will be removing myself from social media until the people in power are able to regulate their platforms with the same vigour and ferocity that they currently do when you infringe copyright…. pic.twitter.com/gXSObqo4xg
— Thierry Henry (@ThierryHenry) March 26, 2021

For Henry, it was time to pressure the online networks to act against racist abuse, which had spiraled out of control during the COVID-19 pandemic lockdowns and stadium closures. With stadiums empty, the abuse shifted from physical and verbal assaults (bananas thrown onto the pitch and monkey chants whenever Black players were on the ball) to online abuse and bullying, which surged in the second half of 2020 and continued into 2021. In his statement, the former Arsenal and Barcelona striker wrote:

“The sheer volume of racism, bullying and resulting mental torture to individuals is too toxic to ignore. There HAS to be some accountability. It is far too easy to create an account, use it to bully and harass without any consequence and still remain anonymous. Until this changes, I will be disabling my accounts across all social platforms. I’m hoping this happens soon.”

He added:

“I’m not saying it’s not good to have social media, I’m just trying to say that it has to be a safe place. Basically, I did what I felt and I hope it can inspire people to do the same thing if they feel the same way.”

Henry’s announcement came two days after a similar decision by another public figure, American model and TV personality Chrissy Teigen, who shut down her Twitter account after facing online abuse.

Twitter stands accused of allowing insulting posts against users, often racist or misogynistic and usually sent from anonymous accounts. While the networking giant has published and frequently cites its “hateful conduct policy,” it has not generally been swift in dealing with these cases.

COVID-19 halts sports world, racism rears its head again

In 2020, the world of sport came to a halt due to the pandemic. Many major sporting events, including the Olympic Games and the Euro 2020 football tournament, were postponed. During this time, a pivotal incident was the killing of George Floyd during his arrest in the U.S., an event that prompted protests against racism and discrimination across the globe.

As protests spread both on the streets and online, civil rights groups pushed the social media companies to do more. The National Association for the Advancement of Colored People (NAACP), the Anti-Defamation League and Color of Change sought an audience with Facebook’s Mark Zuckerberg, backed by pressure from major advertisers demanding the company address racism, discrimination and other social injustices on its platforms. Facebook had previously delayed responding, or stayed silent, when asked about combating these injustices.

With the relaxing of lockdown measures in the U.K., the English Premier League (EPL) was among the first leagues to resume play, and as a symbol of protest and solidarity, players decided to take a knee before the start of every game.

In a game between Manchester City and Burnley F.C., a plane flew overhead displaying the banner “White Lives Matter Burnley” in opposition to the players’ protest. With fans barred from attending live matches, the abuse shifted to players’ social media accounts on Facebook, Instagram and Twitter.

Increase in virtual racial attacks against football players

Early this year, a number of EPL players were targeted by racist attacks on Twitter and Instagram. Marcus Rashford was attacked online following Manchester United’s 0-0 draw against arch-rivals Arsenal.

The 23-year-old Manchester United forward responded to the abuse in a tweet published on January 30.

The player had recently been awarded an MBE for his successful campaign to get the UK government to allow approximately 1.3 million schoolchildren to claim free school meal vouchers during the summer holidays amid the COVID-19 pandemic.

In support of the players, the EPL demanded that social media companies put serious measures in place against racist attacks.

In a February 2021 statement, the CEO of the EPL said:

“I am appalled to see the racial abuse received by the players this week. Racist behaviour of any form is unacceptable and nobody should have to deal with it. Tackling online hate is a priority for football, and I believe social media companies need to do more.”

A tweet from the EPL’s official Twitter handle called on social media platforms to take action against online abuse. The tweet, which contained a joint letter to Twitter CEO Jack Dorsey and Facebook CEO Mark Zuckerberg, was the latest call under the #NoRoomForRacism campaign, which the EPL first launched in 2019 in a bid to combat discrimination.

Tackling racism and addressing discrimination in football

Elsewhere, the UK government threatened social media companies with hefty fines if they failed to tackle racism on their platforms. Following the spate of racist attacks on Premier League footballers, Instagram announced several measures to tackle online abuse, indicating it would remove accounts used to send abusive messages.

This had been building up through 2020, and some social media platforms took a proactive approach, providing guidelines for their users. TikTok’s U.S. Head of Safety, Eric Han, shared “TikTok’s 5 ways to counter hate speech,” in which the company pledged to “continue to take a proactive approach to stopping the spread of hate and known hate and violent extremist groups,” and laid out its code of conduct. As a result, TikTok removed more than 380,000 videos and 64,000 hateful comments, and banned 1,300 accounts in the U.S.

KickItOut.org, an organization formed to tackle racism and address discrimination in football, joined forces with the EPL, the Football Association (FA) and the Professional Footballers’ Association (PFA) to renew these efforts.

“The group identified a set of common principles which will drive their working agenda moving forward. The principles include that: football and social media should be places where everyone feels that they belong; discrimination, hate and abuse towards those who play, support or work in the game is totally unacceptable and will not be tolerated; online and offline hate must have real-world consequences for perpetrators and individuals should be held accountable for their actions.”

It’s time internet companies did more

In February 2021, Republic of Ireland defender Cyrus Christie said that the high-profile cases of online racist abuse of footballers in recent months are just the tip of the iceberg. Despite increased public awareness, the fate of Black players and those from other ethnic minority communities in the lower divisions is largely ignored.

While social media platforms have been used to share human interest stories and connect people across the world, Imran Awan, writing about online abuse in a 2020 article for The Conversation, noted:

“As Black Lives Matter continues to draw attention to racism—and trigger pushback from people using social media to express sentiments against people of colour—it’s time internet companies did more to tackle all forms of bigotry.”

He continued:

“It’s important to recognise that these comments on social media reflect wider attitudes that are endemic in the offline world. Social media can appear to act as a megaphone for racists, but these opinions are much more mainstream than you think. As a society we need to grapple with how these ideas have become normalised, and challenge and expose them.
Social media companies including Facebook, Twitter and now TikTok have taken active steps to block and remove those people clearly linked with the far right. But this is only a starting point. More needs to be done to identify other individuals who are less obviously spreading hatred, often under the protection of anonymity. Only then can we try and effectively change attitudes and reduce social media’s significant capacity for harm.”

In 2018, Twitter introduced an algorithm in an effort to tackle harassment. The system used behavioral signals, such as how users react to a tweet, to assess whether an account was adding to or detracting from conversations. Under the updated algorithm, certain tweets were pushed further down in search results or reply threads, but not deleted from the platform. This proved ineffective. In January 2020, Twitter announced it was testing new features that would allow users to control who can reply to their tweets. It seems more still needs to be done.

In a related study, “Rising Levels of Hate Speech & Online Toxicity During This Time of Crisis,” AI-based start-up L1ght showed a marked increase in hate speech, much of it targeting Asians, and more specifically Chinese people, who were blamed for the coronavirus’s origins in China. The study linked this rise to the extra time people spent online as countries imposed lockdowns and curfews during the pandemic.

More needs to be done, and with the current trend of famous personalities quitting social media platforms, it remains to be seen how the platforms will enforce their codes of conduct. With more of a spotlight on race and online attacks, regulators may join the conversation to make social media platforms safer spaces.

This article is: Creative Commons — Attribution 3.0 Unported — CC BY 3.0 globalvoices.org
