A short handbook on opening up the hi-tech giants

During the final months of 2017, a great deal of public and private attention was directed at opening up the secrets of the algorithms used by social networks and search engines such as Facebook and Google. They have edged cautiously towards opening up, but have done too little, too late. The attacks on their carelessness have mounted as their profits have climbed.

The public pressure came from voices (including mine) arguing that inquiries into misinformation and disinformation in news were all very well, but missed the main point: how the platforms actually work. The same question is being raised in private negotiations between the social networks and news publishers.

These discussions have included the suggestion that the networks might make much more detailed data on how they operate available to the publishers, but not to the public. This kind of private deal won’t work if it’s tried. The functioning of the networks is crucial to publishers, but it matters to a lot of other people as well.

You may think that your elected representatives are on the case: there is an inquiry into disinformation in news in the UK parliament, and German and French politicians are bearing down on the online giants. But not much will change until these legislators and pundits look at the detail of how social networks function. I suspect that the German and French attempts to regulate the platforms will, however well-intentioned, misfire: regulating self-expression is inherently difficult because it collides with the right to free speech.

The would-be regulators are rushing to prohibit things without looking at the hard problem. This post is an attempt to explain to those eager legislators what the real issue is and, in a little more detail, how it might be tackled.

Social networks are private companies with public power: connecting people to information at the scale they do affects the health of whole societies. They have civic responsibilities even if they are not publishers of content. Any society is entitled to ask ‘what do we know?’ and ‘how do we know it?’. Right now, some of the answer is buried in code which only a handful of people with expertise and access can grasp.

Mark Zuckerberg regularly acknowledges Facebook’s accountability for its power but usually does so in language which is bland, general and oblique (see his new year ‘mission’ for Facebook and its 2bn users). Google has been ahead of Facebook in acknowledging that it has social, moral and political obligations as well as legal ones, even if they sometimes fail to fulfil them.

A quick example. A lot of people would like material wiped from the internet. A few years ago, the EU’s highest court gave complainants the right to apply to have links removed from search engines, making a given fragment of information hard or impossible to find – the so-called ‘right to be forgotten’. That right will be strengthened by a new law, the General Data Protection Regulation, coming into force this year.

All the evidence suggests that this change to the law has worked reasonably well, largely because Google, which has processed the vast majority of these cases, has been able to devote resources to the task. But this is a private company arbitrating, behind closed doors, issues which often involve a clash between the rights of free expression and privacy. New jurisprudence is being made in secret. That is the application of private power to matters of public interest. (More on this here.)

But the hi-tech companies are not in favour of greater transparency. I suspect that their real fear is of evidence which exposes the gap between their business models and the public interest. Treating all code and data created by a company as walled-off intellectual property, because the organisation depends on it to compete, will starve the public sphere of the information needed to understand how society’s connective tissue is now woven. Propelled by artificial intelligence, machine learning and the ‘internet of things’, we may in the end need a new definition of intellectual property.

This is not what we hear from Silicon Valley. Depending on who is asking for access to algorithms or to the people who design them, tech firms use the following arguments (my comments follow each):

  • ‘Revealing algorithms will allow hackers and trolls to game and manipulate our systems.’ Disclosing some software would run that risk. But plenty can be publicly debated about what an algorithm does without making the lines of code public (see the sketch after this list). Regulators may have to be given private access to commercial secrets in order to judge some issues.
  • ‘The algorithms are commercial secrets which we are entitled to protect.’ In some cases this may be a fair argument, but it is not a catch-all alibi for refusing access to everything. Much of what the networks do is not algorithmic secret sauce and may still be relevant: the way human moderators deal with conflicts between free speech and privacy is one example.
  • ‘We can’t explain privacy decisions without repeating breaches of privacy.’ Courts of law have been solving this problem for more than a century: they judge privacy cases, and explain in full why a given decision fulfils the law, without repeating the breach.
  • ‘This stuff evolves and moves so fast, we’d be drowning you in data no one understands.’ Try it and see what happens. Regulators – either new ones or, more likely, adapted data protection authorities – will have to deploy the kind of expertise needed to make good public-interest decisions. That may involve substantial changes.
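
To make the first point concrete, here is a deliberately toy sketch. The function, its signals and its weights are all invented for this post and describe no real network’s algorithm; the point is that the behavioural summary at the bottom could be published and debated even while the code itself stayed confidential.

```python
# A toy illustration only: rank_post, its signals and its weights are
# hypothetical and describe no real platform's algorithm.

def rank_post(post: dict) -> float:
    """A hypothetical ranking function a network might keep confidential."""
    return (
        2.0 * post["shares"]
        + 1.0 * post["comments"]
        + 0.5 * post["likes"]
        - 3.0 * post["user_reports"]
    )

# What a published behavioural summary could look like: the signals used
# and their relative influence, disclosed without a line of the code above.
TRANSPARENCY_SUMMARY = {
    "signals": ["shares", "comments", "likes", "user_reports"],
    "relative_influence": {
        "shares": "high, positive",
        "comments": "medium, positive",
        "likes": "low, positive",
        "user_reports": "high, negative",
    },
}

if __name__ == "__main__":
    post = {"shares": 10, "comments": 4, "likes": 50, "user_reports": 1}
    print(rank_post(post))       # 46.0 - the part that stays private
    print(TRANSPARENCY_SUMMARY)  # the part that could be made public
```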

Each of these four points requires law-makers and regulators to strike delicate balances where rights or interests collide. Those balancing decisions will suffer in the state of ignorance which exists now.

Transparency is not a cure-all for the various ills which people now lay at the door of the hi-tech firms. But it is an essential prerequisite for anything constructive that might follow. Good regulation should be based on specific remedies to specific harms, and that can’t be done without better facts.

Take the vexed area of social media and elections. Many researchers in several countries are now burrowing into the data to discover whether ‘news’ that was actually fiction affected voting decisions. Despite all that has been said and written, we may know a lot about the appearance of fictional ‘news’ but we have learnt little about its effect on voter choice. Most free electoral systems attempt to allow argument and persuasion in an electoral campaign but frown on or outlaw deception.

Making the disclosure of how online political advertising works compulsory would make the business of regulating anything that needed to be controlled by law much more exact and effective.
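
What might such a compulsory disclosure look like in practice? A minimal sketch follows; every field name is my own assumption, drawn from no existing statute or platform, but it shows how a law could specify a machine-readable disclosure record.

```python
# A hypothetical political-ad disclosure record; all fields are invented
# for illustration, not drawn from any existing law or platform API.
from dataclasses import dataclass

@dataclass
class PoliticalAdDisclosure:
    advertiser: str           # legal name of the buyer
    funder: str               # who paid, if different from the buyer
    spend_range: str          # e.g. "GBP 1,000-5,000"
    impressions: int          # how many times the ad was shown
    targeting_criteria: list  # e.g. ["age 25-34", "region: Midlands"]
    ad_text: str              # the creative itself, preserved for audit
    start_date: str           # ISO dates the campaign ran
    end_date: str

example = PoliticalAdDisclosure(
    advertiser="Example Campaign Ltd",
    funder="Example Campaign Ltd",
    spend_range="GBP 1,000-5,000",
    impressions=120_000,
    targeting_criteria=["age 25-34", "interest: local news"],
    ad_text="Vote for change on Thursday.",
    start_date="2018-05-01",
    end_date="2018-05-21",
)
print(example.advertiser, example.impressions)
```

A regulator armed with records like these could audit aggregates – total spend per advertiser, patterns of targeting – rather than relying on the platforms’ own summaries.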

Framing disclosure requirements will not be easy. Which networks carry information of public value and importance, and when? All, or only some? How could a law define the data required to be disclosed? Which expert-equipped agency would judge whether disclosure obligations had been met, or act on what had been disclosed? Politicians of the younger generation who look for the answers to these questions will be pioneers and pathfinders.

One of the leading authorities in this field, Professor Philip Howard of Oxford, writes in his latest book that ‘now is the time to encode the next internet with democratic virtues’. I’m not sure that making a degree of transparency mandatory for hi-tech will be that hard. Most of the people inside these companies want to do good.

But they work for organisations which have been allowed to grow huge and profitable with little oversight. And those organisations, proclaiming that they were giving people new means of expression and connecting them in new ways, opened a Pandora’s box of problems to do with algorithmic harm, free speech, privacy and much else. Wrestling with what’s emerged must be done in the open and not behind a blank screen.

