Kenya has taken center stage as governments worldwide debate how to regulate social media algorithms.
Specifically, all eyes are on the Kenyan High Court, where a coalition of hate crime victims and human rights groups is suing Meta, claiming that the company’s social media recommendation algorithms amplify hatred and fuel ethnic conflict.
Kenya is the venue for the legal showdown because of its proximity to Ethiopia, where the claimants say Facebook has played an outsized role in spreading disinformation, fanning the flames of inter-community hatred and enabling online abuse.
If awarded, the 200 billion Kenyan shillings ($1.6 billion) sought from Meta would be used to create a fund for victims of ethnic violence in Ethiopia’s recent conflict.
The legal action is being brought, alongside charities including Amnesty International, by Abrham Meareg, the son of an Ethiopian university professor who was murdered weeks after posts inciting hatred and violence against him spread on Facebook.
The case centers on claims that Facebook did not remove the posts until it was too late, despite requests from Meareg’s family to do so.
The Online Safety Debate in the U.K.
In the U.K., the case of Molly Russell, a teenager who died from an act of self-harm after viewing online material related to suicide and depression, has become a driving force in the debate over online safety and the role of platforms and their algorithms in content moderation.
Critically, an inquest into Russell’s death found that she “died from an act of self-harm whilst suffering from depression and the negative effects of online content.” A report summarizing the inquest’s findings was sent to social media companies including Meta, Pinterest, Twitter and Snap.
Russell’s death ignited a conversation in the U.K. over the nature of harmful content and whether social media platforms are doing enough to prevent its spread. Her death was also explicitly cited by the government in its move to include stronger protections for children in the Online Safety Bill.
“The government [will] use the Online Safety Bill to create a new criminal offense of assisting or encouraging self-harm online. This means that one of the main types of content campaigners are concerned about, following cases such as the death of Molly Russell, will now be covered and effectively tackled by platforms under their illegal content duties in the Bill,” a government press release said in November.
Having been revised multiple times by successive U.K. governments during its arduous passage through Parliament, the Online Safety Bill is next due for debate on Jan. 16, when the opposition Labour Party has promised to table amendments that would resuscitate previously dropped provisions outlawing “legal but harmful” material.
Who Is Responsible for What?
In Kenya and the U.K., questions over the legal definition of hate speech and digital platforms’ responsibilities to prevent its spread point to a debate as old as the internet that pits online safety against free speech.
But with algorithms now deciding what gets seen by whom, the question is no longer whether platforms are responsible for the content they host, but what responsibility they bear for promoting or inhibiting its circulation.
Ultimately, social media algorithms don’t operate in isolation, and reducing them to mathematical abstractions obscures the human networks in which they are embedded, including the content moderators platforms employ to take down harmful material.
Notably, the Online Safety Bill will grant the U.K.’s communications regulator, Ofcom, some of the most sweeping powers of any national regulator to oversee social media companies. These include the authority to issue fines, demand technical changes, and even direct payment providers, advertisers and internet service providers to stop working with a firm that fails to comply.
Likewise, should the Kenyan High Court side with Amnesty International and its co-claimants, Meta will have a strong financial incentive to invest in systems that better prevent the spread of harmful material in markets where it has been accused of neglecting user protection.
“Meta has failed to adequately invest in content moderation in the Global South, meaning that the spread of hate, violence and discrimination disproportionately impacts the most marginalized and oppressed communities across the world,” a lawyer for the claimants said.