Fake news and a 400-year-old problem: how can we end the post-truth crisis?

The internet echo chamber satiates our appetite for pleasant lies and reassuring falsehoods and has become the defining challenge of the 21st century

The internet age made big promises to us: a new period of hope and opportunity, connection and empathy, expression and democracy. Yet the digital medium has aged badly because we allowed it to grow chaotically and carelessly, lowering our guard against the deterioration and pollution of our infosphere.

We sought only what we wanted: entertainment, cheaper goods, free news and gossip, rather than the deeper understanding, dialogue or education that would have served us better.

The appetite for populism is not a new problem. In the ferocious newspaper battles of 1890s New York, the emerging sensational style of journalism in Joseph Pulitzer's New York World and William Randolph Hearst's New York Journal was dubbed "yellow journalism" by those concerned with maintaining standards, adherence to accuracy and an informed public debate. We now have the same problem with online misinformation.

Humans have always been prejudiced and intolerant of different views. Francis Bacon's philosophical masterwork Novum Organum, published in 1620, analyses four kinds of "idols" or false notions that are "now in possession of the human understanding, and have taken deep root therein".

One of them, the "idols of the cave", refers to our conceptual biases and susceptibility to external influences. "Everyone … has a cave or den of his own, which refracts and discolours the light of nature, owing either to his own proper and peculiar nature; or to his education and conversation with others; or to the reading of books, and the authority of those whom he esteems and admires; or to the differences of impressions, accordingly as they take place in a mind preoccupied and predisposed or in a mind indifferent and settled; or the like." It is at least a 400-year-old problem.

Likewise, the appetite for shallow gossip, pleasant lies and reassuring falsehoods has always been significant. The difference is that the internet allows that appetite to be fed a bottomless supply of semantic junk, transforming Bacon's caves into echo chambers. In that way, we have always been post-truth.

These kinds of digital, ethical problems represent a defining challenge of the 21st century. They include breaches of privacy, of security and safety, of ownership and intellectual property rights, of trust, of fundamental human rights, as well as the possibility of exploitation, discrimination, inequality, manipulation, propaganda, populism, racism, violence and hate speech. How should we even begin to weigh the human cost of these problems? Consider the political responsibilities of newspapers' websites in distorting discussions around the UK's Brexit decision, or the false news disseminated by the "alt-right", a loose affiliation of people with far-right views, during the campaign waged by President-elect Donald Trump.

So far, the strategy of technology companies has been to deal with the ethical impact of their products retrospectively. Some are finally taking more significant action against online misinformation: Facebook, for example, is currently working on methods for stronger detection and verification of fake news, and on ways to provide warning labels on false content, yet only now that the US presidential election is over.

But this is not good enough. The Silicon Valley mantra of "fail often, fail fast" is a poor strategy when it comes to the ethical and cultural impacts of these businesses. It amounts to too little, too late, and carries very high long-term costs of global significance: preventable or mitigable harms, wasted resources, missed opportunities, lack of participation, misguided caution and lower resilience.

A lack of proactive ethics foresight thwarts decision-making, undermines management practices and damages strategies for digital innovation. In short, it is very expensive. Amazon's same-day delivery service, for example, systematically tended to exclude predominantly black neighbourhoods in the 27 metropolitan areas where it was available, Bloomberg found. This would have been preventable with an ethical impact analysis that considered the discriminatory effects of simple algorithmic decisions.

The near-instantaneous spread of digital information means that some of the costs of misinformation may be hard to reverse, especially when confidence and trust are undermined. The tech industry can and must do better to ensure the internet meets its potential to support individuals' wellbeing and social good.

We need an ethical infosphere to save the world and ourselves from ourselves, but restoring that infosphere requires a gigantic, ecological effort. We must rebuild trust through credibility, transparency and accountability, as well as a high degree of patience, coordination and determination.

There are some reasons to be cheerful. In April 2016, the British government agreed with the recommendation of the House of Commons Science and Technology Committee that it should establish a Council of Data Ethics. Such an open and independent advisory forum would bring all stakeholders together to participate in the dialogue, decision-making and implementation of solutions to common ethical problems brought about by the information revolution.

In September 2016, Amazon, DeepMind, Facebook, IBM, Microsoft and Google (which I advised on the "right to be forgotten") established a new ethical body called the Partnership on Artificial Intelligence to Benefit People and Society. The Royal Society, the British Academy and the Alan Turing Institute, the national institute for data science, are working on regulatory frameworks for managing personal data, and in May 2018 Europe's new General Data Protection Regulation will come into effect, strengthening the rights of individuals over their personal information. All these initiatives show a growing interest in how online platforms can be held more responsible for the content they provide, not unlike newspapers.

We need to shape and guide the future of the digital, and stop making it up as we go along. It is time to work on an innovative blueprint for a better kind of infosphere.

  • Luciano Floridi is professor of philosophy and ethics of information at the University of Oxford and faculty fellow of the Alan Turing Institute. He is a member of the EU's Ethics Advisory Group on data and ethics, the Royal Society and British Academy Working Group on Data Governance, and the Google advisory board on the "right to be forgotten", and chairman of the ethics advisory board of the European Medical Information Framework. His books, published by Oxford University Press, include The Fourth Revolution: How the Infosphere is Reshaping Human Reality (2014), The Ethics of Information (2013) and The Philosophy of Information (2011)
