The anatomy of Facebook’s ‘huge mistake’ (or what Zuckerberg could say to the US Congress)

This week, Mark Zuckerberg is due to appear before the US Congress. There’s no shortage of people offering hard questions to be fired in his direction. Here’s what I think he ought to say before the interrogation begins:

“I know that members of the committee will want to know about attempts to interfere in American elections, whether social media ruins childhood or whether democracy is damaged by digital communication. We’ll get to those issues, but in opening I’d like to draw the committee’s attention to some of the bigger questions below the surface of the recent controversies.

Apologies first. We at Facebook have made many avoidable mistakes. We were blind to the scale of civic responsibilities inherent in what we do. We were warned and we didn’t want to hear. When I said the other day that in retrospect ‘we clearly should have been doing more all along’, I was drawing attention to our repeated failures to be imaginative in seeing our responsibilities to many different societies, and not just to this one.

I’m going to stop senior people at Facebook saying that they were surprised when our network turned out not to be the uncomplicated force for good we said it was. Quite a number of us have said that one failure or another was ‘unforeseen’ or that we were ‘caught out’. You can occasionally say ‘we didn’t see that coming’. But you can’t say it all the time. Truth is, we’re up to our necks in problems and we should have foreseen at least some of them.

Why didn’t we? Three reasons at the start: naïve optimism (‘we help the world by connecting people – what can go wrong?’), over-excitement about what engineering can solve, and a hunger for growth. Network effects can build powerful monopolies, and ours did: we found we had created the most powerful advertising machine the world has ever known. It depended on a torrent of engagement and interaction which generated data we could sell to advertisers wanting to reach their chosen target consumers. We were still saying to ourselves ‘don’t ask permission first – if it goes wrong, ask forgiveness afterwards.’ This was simply careless. Every science hits an ethical crisis sooner or later: in chemistry, it was the invention of dynamite; in nuclear physics, the atom bomb. This is the ethical crisis for computing science. It goes a long way beyond Facebook.

We need help with this. You’ll be tempted to reply that what we need is regulation, and I’ll come to that in a moment. But even if we had been super-alert to every harm or risk we created, we are being asked – minute by minute nowadays – to answer questions which have taxed philosophers, politicians, editors and legislators for centuries. Questions like: how do I know that this is true? How do I know what the effect of these words or pictures will be? Where, in a given situation, should the line be drawn between the protection of privacy and free speech?

How did we innocents find ourselves drowned in these hard problems?

  • In the digital age, the creation of content has been decoupled from its distribution. We thought that we could avoid these issues because we are ‘just a platform’. Just one part of our huge mistake.
  • We have gone from gatekeeper media (with editors and such) to uncontrolled peer-to-peer communication. We’re not the only network with these challenges: look at Twitter, look at Reddit.
  • The unprecedentedly low costs of creating, copying and distributing images, words and sounds have created a waterfall of material pouring through every smartphone.
  • Information is plentiful but harder to authenticate. We all present the best face and image to the world that we can, and Facebook likes to help with that. But the same technology assists the fakery of information which ought to be reliable and in the public interest – and the faking of video and audio has hardly got going.
  • We – that is to say, algorithms designed by humans – select and rank the information that our users see. In the digital era, that is an editorial activity.
  • Emotion is overcoming reason. Facebook is great for posting pictures of a family picnic; it is also good for stoking ethnic tension in a divided society. Emotion and reason always co-exist in debate, but in public life they are now in a new relationship. We thought that we had the basis of a new social system, superior to any in the pre-digital age. But social systems are not built merely on impulses; good ones have more often rested on people mastering their worst instincts and not giving way to them. (Please note that #deleteFacebook is fuelled by the one thing that invariably does really well on social media: outrage.)

Let me be honest: these problems are out of our control, which is why I say we need help and why I haven’t objected to the idea of regulation. For example, I would not and could not object to laws which force us to be open about what we do. We have just made commitments about the transparency of political advertising. Let’s discuss a legal framework for that – for all social media. I hope it might act as a template for the bodies which supervise elections all over the world.

Irrespective of the state of the law, Facebook must step up. I want to make you aware of three basic commitments which I make today and which will be immediately followed by practical steps:

  • We need to work with legislators here and everywhere in the world we operate to handle the social, educational and legal problems of hyper-connectivity. A space for dialogue and exchange with a minimum risk of harm cannot be an impossible dream.
  • Facebook cannot and should not survive if we maximize profit at the expense of society. Period. We have made a ton of money out of targeted advertising, but that was not what we started out to do. Our shareholders must face this truth. They are not the only people to whom we are accountable.
  • We will be as open as we can in explaining how we work. That means being considerably more frank than we have been in the past.

In that context, we have to work on the following changes. These are transformative changes for us, not adaptive ones, and they won’t be easy or always quick.

  • We have encouraged the idea that everything shared on Facebook is of equal value. When you’re talking about friends and family, that makes sense. For news, opinion, marketing and advertising it is plainly false. We now tell you when we spot something which is disputed by fact-checkers, but that doesn’t seem to have much effect (and it may even serve to advertise disinformation). We will need to be more careful in labelling the quality of information in your feed and in giving you alternative versions. But please tell us how far you would like us to act as censors. We’ve floated the idea of having wise women and men advise on what is ‘acceptable speech’; but that is only a start. And do you seriously want to delegate policing the limits of free speech to private companies?
  • We need to rethink the value of what we do. Connecting people and sharing information do a lot of good. But connecting and exchanging information is not a moral activity on its own. The purpose of the communication counts. Facebook might be useful for organizing a church cleaning rota; it is just as useful for coordinating ethnic cleansing. Some wider knowledge of history and philosophy might have alerted us earlier to the risks inherent in what we were doing.
  • We have to wholly reorganize privacy controls (yes, I’m aware I’ve said this before) so that, insofar as our business depends on monetizing data gathered from the network, we are doing nothing underhand, illegal or unethical. We have made money from uninformed consent. Now it has to be informed. We have lied to ourselves and to our users in the past about how seriously we take these concerns.
  • We know a lot about authenticating online information and data. From here on, that expertise is at the disposal of governments and legislators like yourselves. We are only at the beginning of developing our skills at identifying and flagging inaccurate or potentially harmful content. People have talked a lot about better ‘trust signals’ for news. No harm in that, but the key is authenticating where material comes from. We will actively help work on identity labels, source transparency indicators and verified content labels. The risk that developed societies will be unable to agree on how to recognise evidence, reason and fact is one of the greatest dangers we face today.
  • Lastly, we are a private company that grew at warp speed. We have become an institution – and I don’t think we have thought enough about the difference between the two. Institutions are hard to make and not as simple to run as companies. Institutions embody values and change gradually, carefully. We recently made some big algorithm changes to our News Feed. Thousands of news organisations across the world have been thrown by this and I apologise to them all. We need stability and predictability to rebuild trust in what we do.

There are many things which we have to do to earn trust. The profitability of data took us over. But we’ve built a network and platform that can do good. It’s that public value we’re going to work on now. With you.”

No, I don’t think Zuckerberg’s going to say this. It’s a fantasy. Would any lawyer allow him to open himself to class actions from groups among Facebook’s 2bn users? And the stock price would fall faster than it already has. But this thought exercise may give you a clue to how far Zuckerberg has to go even to start meeting the criticisms levelled at his network.
