First of all, I want to wish everyone a very happy and healthy Data Privacy Day. If you’re looking for a fun way to celebrate, I suggest making a list of all the steps you’ve taken to protect your privacy, and then remember that telecom companies were selling your real-time location to anyone who could afford it until last week.
Next up, two stories about Facebook and power.
The first concerns Facebook’s plan to build a kind of independent Supreme Court for content moderation. It’s an idea that Mark Zuckerberg floated last April in a conversation with Ezra Klein, and that Facebook formally committed to in November.
Today Nick Clegg, the company’s new head of policy and communications, announced Facebook’s next steps toward building what it is now calling an “oversight board” — and published its draft charter:
As we build out the board we want to make sure it is able to render independent judgment, is transparent and respects privacy. After initial consultation and deliberation, we’ve proposed a basic scope and structure that’s outlined in this draft charter. We’ve also identified key decisions that still need to be made, like the number of members, length of terms and how cases are selected.
We’ll look to answer these questions over the next six months in a series of workshops around the world where we will convene experts and organizations who work on a range of issues such as free expression, technology and democracy, procedural fairness and human rights. We’ll host these workshops in Singapore, Delhi, Nairobi, Berlin, New York, Mexico City and many more cities — soliciting feedback on how best to design a board that upholds our principles and brings independent judgment to hard cases.
I like the idea of the board, which promises to devolve power over speech and content moderation to a more diverse group of subject matter experts, who are less tethered to the politics of one country or the financial interests of the platform. As I wrote last summer:
What Facebook is describing with these ideas is something like a system of justice — and there are very few things it is working on that I find more fascinating. For all the reasons laid out by Radiolab, a perfect content moderation regime likely is too much to hope for. But Facebook could build and support institutions that help it balance competing notions of free speech and a safe community. Ultimately, the question of what belongs on Facebook can’t be decided solely by the people who work there.
The draft charter offers useful new details on how this will all work. The company plans to put 40 people on the board — a number that has struck me as both too small and too big today, depending on which way I look at it. Cases will be decided by smaller panels of board members, who will choose which cases to take up based on their interests. They will publish their opinions, but not their individual votes. And they will be paid.
Facebook will pick the first group, and board members will serve three-year terms. Afterward, each outgoing member will choose their own successor. Current and former Facebook employees are prohibited from joining the board, as are government officials.
At Wired, Issie Lapowsky likes the general idea but worries that the sheer size of Facebook will make the system described in the draft charter unworkable:
No team, no matter the size or scope, could ever adequately consider every viewpoint represented on Facebook. After all, arguably Facebook’s biggest problem when it comes to content moderation decisions is not how it’s making the decisions or who’s making them, but just how many decisions there are to make on a platform of its size.
In seeking to fix one unprecedented problem, Facebook has proposed an unprecedented, and perhaps impossible, solution. No, the decisions Facebook’s supreme court makes won’t dictate who’s allowed to get married or whether schools ought to be integrated. But they will shape the definition of acceptable discourse on the world’s largest social network.
I suspect that we will have much more to argue about as the oversight board takes shape, and considers its first cases. But so much Facebook criticism starts from the observation that the company has unprecedented size and power — and so to see it devolve power back to its own community, even in a limited way, feels worthy of encouragement.
Particularly so, given that the day’s other big story involves a consolidation of power. ProPublica and several other outlets have built tools that, with the consent of the users who install them, allow reporters to collect information about which ads are being targeted at those users. But as Jeremy B. Merrill and Ariana Tobin reported today, the tools stopped working this month — because Facebook blocked their ability to pull in data about ad targeting.
It’s part of a long-running game of cat and mouse between Facebook and journalists, in which journalists build tools for scraping data from Facebook, and Facebook tells them to knock it off. Merrill and Tobin argue the data they were collecting made research possible that Facebook’s own tools do not:
The latest move comes a few months after Facebook executives urged ProPublica to shut down its ad transparency project. In August, Facebook ads product management director Rob Leathern acknowledged ProPublica’s project “serves an important purpose.” But he said, “We’re going to start enforcing on the existing terms of service that we have.” He said Facebook would soon “transition” ProPublica away from its tool.
Facebook has launched an archive of American political ads, which the company says is an alternative to ProPublica’s tool. However, Facebook’s ad archive is only available in three countries, fails to disclose important targeting data and doesn’t even include all political ads run in the U.S.
Facebook’s former chief security officer, Alex Stamos, argued on Twitter that Facebook’s move was best understood as a defensive move against ad blockers, rather than an offensive move against journalism. Facebook has pledged to build a separate API for its ad archive that might enable more of the kind of research that ProPublica has been doing, but the API has been slow in coming.
And as Stamos notes, there’s good reason for that. Every API opens a company up to some level of risk — the Cambridge Analytica scandal being the canonical example. And to its credit, as I’ve noted before, Facebook has continued to make good-faith efforts to make some data available to researchers — never as much as they would like, but more than you might expect.
The most ambitious such effort is called Social Science One. It’s a partnership between researchers and the private sector that seeks to use platform data to do social science. Its first project with Facebook, announced last April, will examine the relationship between social networks and democracy.
But as Robbie Gonzalez noted in Wired last week, the project has been very slow going:
Make no mistake: Getting SSO off the ground was—and continues to be—a royal pain, what with all the legal paperwork, privacy concerns, and ethical considerations at play. Details of the industry-academic partnership are too complex to relate here (though I’ve written about them previously), but suffice it to say that King and his SSO cofounder, Stanford legal professor Nathan Persily, earlier this month published a 2,300-word update on the status of their initiative, more than half of which is devoted to the ongoing challenges they face in bringing it to fruition. “Complicating matters,” they write, “is the obvious fact that almost every part of our project has never even been attempted before.” [...]
But if all goes well, SSO could have a more lasting impact, by setting up a framework for secure, ethical, independent research within the tech giants. There’s no reason future investigations, funded and overseen by SSO or a similar outfit, can’t grapple with big questions on well-being. They should also involve companies other than Facebook. We not only want to know what a vulnerable individual watches on YouTube, we also want to know what’s happening when they go to Reddit, what questions they ask their Alexa or Google Home, or how they feel when they post on Instagram. We need these companies to open their doors, and their datastreams, in a prescribed way that respects every participant in the process.
Over time, I hope that efforts like SSO become easier for legitimate social scientists to undertake, and that similar tools become available to journalists. Data-scraping Chrome extensions have resulted in some excellent journalism. But it’s also true that they exploit vulnerabilities in Facebook’s code that can be, and are, used for bad purposes.
I wish ProPublica and others could continue doing their journalism while a better system is worked out. But that’s the thing about power: those who have it tend to give it up slowly. And when they do, it’s almost always on their terms.