A couple of months ago, the New York Post published an article on its website that sharply criticised the American (then) presidential candidate Joe Biden. Private emails had supposedly been leaked to the press, suggesting that Biden had met with an adviser to the Ukrainian company Burisma during his vice-presidency, while his son Hunter sat on the company's board. This sparked controversy, as Biden Sr. was working on policy for Ukraine at the time. The reliability of this information and the authenticity of the emails have since been called into question, however. Facebook and Twitter took the significant step of imposing restrictions on the article.
Facebook restricted linking to the article so that it could investigate its validity in the meantime. Twitter went a step further by banning users from posting the article at all, since it contained private information from hacked emails, which violated its policies. This decision was not warmly received: Republicans felt silenced, and others questioned the power of social-media giants to sweep certain information and opinions under the rug. This illustrates an important question in digital society: where do we draw the line between free speech and limiting the spread of misinformation?
Twitter and Facebook do not have the answer, either. Only a couple of days after imposing the restrictions, Twitter lifted them again because the article had already spread across the internet, meaning it could no longer be deemed ‘private’. This decision was made despite the fact that spreading hacked or personal information remains against its policies. Facebook, on the other hand, has imposed significantly more restrictions over the last couple of months: political advertisements were banned until the end of the election, and Holocaust denial, anti-vaccine adverts and certain conspiracy theories were deleted from the platform, all of which had previously been allowed. A similar case on Dutch soil: Maurice de Hond was recently removed from LinkedIn due to his criticism of the current corona policy, which – according to him – ‘had no basis whatsoever’.
The free speech dilemma persists. On the one hand, freedom of speech is an essential right: one should not have to ask one's government or a company for permission to express criticism, satire, an opinion or a story. The European Court of Human Rights has even ruled that diversity of opinions, tolerance and open-mindedness are essential to our democracy (Handyside v UK).
On the other hand, the spread of misinformation – or ‘fake news’ – can have dangerous and polarising consequences. Theories that COVID-19 does not exist, that vaccines cause autism in children or that a political party is conspiring against your right to exist have a palpable effect on both society and (vulnerable) individuals. If hate speech and public discrimination can be made punishable (arts. 137c-d of the Dutch Penal Code), it would not be out of place to address group manipulation as well.
Amid the staggering amount of information circulating these days, facts are often accompanied by falsehoods, which makes safeguarding the truth an urgent matter. But are social media companies the right fit to perform this task? Their track record on privacy and data handling is not exactly spotless.
The influx of information will keep growing, and with it the amount of fake news, conspiracy theories and propaganda. Freedom of speech and access to information remain essential components of our democracy, but the ability to distinguish fact from fiction is just as important. Whether the censorship of ‘fake news’ should be left to social media companies is debatable, but we certainly need to take action. The fact checks Facebook and Twitter now perform are, at the very least, a first step.