Social media and democracy

This is the edited (and slightly expanded) text of a presentation I gave at a panel discussion on ‘Fake News, Social Media and Preserving Democracy in the Age of AI’ in Palo Alto, hosted by the alumni of several UK universities, on Sept 12 2019:

Given the title of this panel, I guess the two themes people would expect to hear about would be:

  • The latest scandalous revelations about what Facebook, Google, Twitter – and other giant tech platforms – have been up to. Both Facebook and Google have been in that kind of trouble in just the past few days. Google was fined $170m (estimated to be around 10 hours of the company’s revenue) for mishandling the privacy of children.
  • How we sort out the crisis in journalism: the disappearance of newspapers, the hollowing out of newsrooms and the assault on journalists from the White House and elsewhere.

And I’m going to touch on both these, but first I want to do something else and try to peer behind the chatter and recent headlines. If we look at the underlying causes of both the anxieties about democracy and communications technology we may be able to see more clearly what we should do.

Here’s a quick clue: these are issues which are not going to be solved by technological innovation. Technology will likely help; but the choices we face are moral, political, democratic. Information power has shifted with the exploitation of all the opportunities that the world wide web and the internet have provided. Societies have a choice about how power ought to be distributed. The distribution of power in society is one definition of politics.

The internet has – among a lot of other effects – rewired the public sphere. We learn about and discuss things we can’t see and hear with our own eyes and ears in radically different ways, thanks to the devices in everyone’s pocket.

There’s plenty to celebrate in that, but two aspects of this matter for this evening:

  • Social networks connect people and make no explicit ranking between them as to the quality, importance or accuracy of what is shared. Everyone’s expertise is as good as anyone else’s.
  • Online information does not need to include any clues about where it comes from.

The costs of giving so many more people a voice which could carry further were under-appreciated. Social networks and search engines are not media organisations in the strict sense, but they are part of the ‘infrastructure of free speech’. Weaknesses built into them are risks to the quality of public discussion and of public reason.

The growing awareness that these were big problems coincided with a cascade of political crises in Europe and the US: resentment and mistrust of elites, recession (or economic standstill), and the emergence of parties whose appeal lay in their not being established parties. A lot of this is characterized as the rise of the ‘far-right’. Parties of the right have benefitted, but it’s more accurate to say that it’s a dealignment from mainstream parties:

  • The socialist parties in Germany and France – dominant players 15 years ago – are in crisis. Squeezed from both left and right, their poll ratings are miserable.
  • In Spain the two new parties scaring the mainstream are one on the left, Podemos, and one on the right, Vox.
  • Here in the US: not only Donald Trump, but also Bernie Sanders
  • In the UK the Conservative Party are veering to the right. But the Brexit Party is picking up votes from disillusioned Labour voters. Over the last 3 years, Labour ought to have been shredding the Conservative government. They didn’t because they are divided over Brexit – and because Corbyn isn’t up to the task.

It’s very tempting to attribute this change in the political weather to manipulation by social media. Try to resist the temptation. This is not to excuse bad actors (in the UK the Leave campaign remain under investigation for breaches of data protection – they’ve already been fined for one – and electoral spending laws) but to stress that we are looking at long, deep trends here. And the evidence that social media messages and micro-targeted ads change the votes of people is proving hard to find.

People say: ‘Can’t we find a way of stopping all these lies on social media?’ (by which they usually mean Facebook). Here’s one example of how this is not as simple as it looks.

Here is an ad run by the Brexit Leave campaign on social media in 2016 which warns about what might happen when Turkey becomes a member of the EU.

Is it a lie? As a matter of judgement, I’d say it is. In theory, Turkey can join the EU. In practice the chances of Turkey doing that are somewhere between infinitesimal and non-existent. But should a democracy classify it as untrue? Who would do that? What would the chances be of enforcing a ban? Not so clear. It’s not even clear that it is provably untrue; just extraordinarily improbable.

Democracy is about competition. Competing politicians stretch the truth. I happen to live in a marginal parliamentary constituency in London. During an election, printed nonsense comes through our letterbox every day. No one needed to invent social networks to talk rubbish.  But you can’t filter election literature by a truth test.

Facebook has been apologizing and retreating over almost every aspect of political advertising since the 2016 presidential election here. But it remains true that the micro-targeting of ads has been used by both parties here in the US and everywhere else. Hillary Clinton failed to learn the lessons which Obama had taught the Democrats about social networks. Her use of Facebook wasn’t as good as Trump’s.

The 2016 referendum vote in the UK was polluted by lies, no question. But the reason the vote went the way it did was the meeting of two disruptions:

  • Almost a decade of voter disillusion in the developed economies: anger over automation and job losses, mistrust of the mainstream media, global migration, decades of hostility to (or indifference towards) the EU – did the media help create that? Yes – and resentment of the political and business elite for cronyism and corruption.
  • A long-running civil war inside the Conservative party. Currently, there is a revolutionary faction in charge. And for that now-dominant group, the end of leaving the EU justifies the means. Anything goes. On Wednesday morning, three Scottish judges ruled that the Johnson government had unlawfully suspended its own parliament in an attempt to force through Brexit. That’s the judges defending a parliament elected by about 46m people against a Prime Minister elected by around 92,000 Conservative Party members.

It’s also worth adding that the arguments over Brexit, Trump and social media have awoken people to the fact that the social networks and tech giants have been allowed an extraordinary latitude – particularly by the American and British governments – without much accountability. The ‘techlash’ is now changing that.

What’s the net effect of all this? Someone asked the novelist Salman Rushdie: ‘What is the biggest problem of all?’

Rushdie replied: ‘Our collective inability to agree on the nature of reality. There are such conflicting descriptions of how things are that it becomes difficult to make agreements that allow people to move forward.’

Rushdie is right: our problem is truth decay, the shrinking space in which people can agree about facts and evidence. People sometimes wonder why politicians like Trump don’t seem worried about being caught lying. Doubt is his product. Muddy the waters; blur people’s view of the world.

What can we do?

  • Quite a bit is happening already; a few quick examples. Factcheckers are proliferating. Labelling stuff as fictitious or dubious can backfire (by just making it better known), but at least people are alert to the need to provide the evidence quickly and to call out fakery. Canada has just embarked on a bold experiment to jump on misinformation in their election campaign. The BBC, the tech companies and news publishers have announced new cooperation to the same end. A group of Belgian journalists have formed ‘Lie Detectors’, to go into schools and teach about misinformation. As someone said, we need to train everyone’s ‘cognitive muscles for scepticism.’
  • We could reverse the reluctance to use international human rights law as a benchmark for the behaviour of global social media networks. Facebook’s development of a ‘content oversight board’ leans in this direction, which is promising. For example: you’re worried about the diseases whose incidence is increasing because of ‘anti-vaxxing’ campaigns? The International Covenant on Civil and Political Rights (ICCPR) lists some exceptions which allow governments to place some restrictions on free speech. A public health emergency is one of those. The rising graphs of illnesses such as measles or yellow fever seem to me to justify temporary legal restrictions on social media.
  • We should focus on two things that make societies vulnerable to misinformation and disinformation. The hi-tech platforms have legal responsibilities (against hate speech, for example), but they also have civic responsibilities beyond their formal legal duties: to minimise harm and to protect the integrity of the public sphere.
  • Specifically:
    • Social media’s power lies in the ability to amplify what is said. The right to free speech does not include the right to be amplified. Here I think technology may have a role to play in improving networks’ scope to cut back rapid amplification where there is a public interest case for doing that. Limiting amplification is not the same as curtailing freedom of speech.
    • Curation, otherwise known in the past as ‘editing’. No one wants Google, Facebook or Twitter to stop people expressing themselves. But these networks already have, and use, editorial powers: the sequence in which you see content is decided by algorithms written by humans; Facebook is negotiating a new way of presenting high-quality news; Richard (Gingras) posted today on Google’s tightened criteria for quality news. The platforms, although vast and global, are morphing into editorial operations, reluctantly admitting that they are wrestling with dilemmas which have given editors, lawyers and legislators headaches for centuries. This is all good as far as it goes. There’s just one problem: there’s no accountability to any external body with an eye to the public interest on whether it works well or not.
  • That brings us to transparency. If the world’s legislators are determined to regulate social networks – as they now often say – I hope they will start by making the operations of social networks more visible. There are, of course, difficult issues to negotiate: not harming free speech by giving the wrong information to a state, not making it easier to manipulate networks, and defensible commercial secrecy. These are not insuperable issues. We cannot audit or achieve any accountability without knowing what’s happening in the black box. That will not be achieved by self-regulation alone but by the force of law.
  • Last suggestion for helping journalism and fighting the pollution of the public sphere: defend – and pay for – good journalism. Help experiments which explore business models for original, quality reporting.

There is no silver bullet. At least half the fire-from-the-hip suggestions about regulating the social networks that I have heard from politicians would lurch into unreasonable rules restricting speech. Many of the ideas now in circulation suffer from two simple snags: they’re undemocratic and they won’t work. But neither can we simply rest on a ‘free speech’ slogan and expect the harms caused by online communication to fade away. They won’t. We have to think intelligently about specific remedies for actual damage done.

One last thing we can do: recognise everyone’s responsibility.

Earlier this year, a distinguished political psychologist called Shawn Rosenberg shocked a meeting of his professional peers by saying that he thought it quite possible that democracy might just fail, worldwide. His argument was that democracy is hard to do and we don’t seem to have either the energy or the appetite. Elites are losing control of the institutions which are designed to protect us from our worst instincts. You don’t have to agree with his argument to see the warning in there.

It’s down to us.

The other panellists in Palo Alto were Richard Gingras, VP of News at Google and Quentin Hardy, editorial lead at Google Cloud and the discussion was moderated by Rachel Myrow of KQED. Video here. I will try to insert some audio extracts from the discussion and links soon.
