Michael McConnell, the Richard and Frances Mallery Professor at Stanford Law School, explains the work of the Facebook Oversight Board and discusses the relationship between the First Amendment and disinformation.
Michael McConnell is the Richard and Frances Mallery Professor at Stanford Law School, where he also directs the Constitutional Law Center. A former Circuit Judge on the U.S. Court of Appeals for the Tenth Circuit, he is also a co-chair of Facebook’s Oversight Board, the body charged with helping the social media platform deal with difficult questions about freedom of expression online.
In this conversation with Lindsay Lloyd, the Bradford M. Freeman Director of the Human Freedom Initiative at the George W. Bush Institute, and William McKenzie, Senior Editorial Advisor at the Bush Institute, McConnell explains the work of the Oversight Board. The University of Chicago Law School graduate also discusses the relationship between the First Amendment and disinformation and misinformation. And he comments on the challenges to free speech on college campuses.
More than a year has passed since the insurrection at the Capitol after some rather incendiary comments by then-President Trump. Facebook made the decision to limit his access to the platform. How do you assess how the country in general and Facebook in particular are striking a balance between the need for public safety and the importance to our democracy of political expression?
This is a continual tension that’s never going to be completely resolved. “Safety,” of course, is a bit of a fuzzy word. It’s used to cover a lot of different things, all the way down to whether people feel good about themselves. It is important for those of us who believe in vigorous free speech not to buy into the idea that our hurt feelings are a danger to democracy.
That, however, doesn’t mean there aren’t actual dangers. There’s hardly anything more of a threat to democracy than political violence, whether it’s coming from the left or the right. Political violence has destroyed republics before.
Today, political violence is often organized over social media platforms. Extremist voices use the platforms for recruiting, not just identifying people who are already on their side. They use the platforms to stimulate and incite further support.
In between those two extremes of hurt feelings and political violence, there’s a vast gray area and nobody’s going to get it right all the time.
Before we go further, could you explain the mission and work of Facebook’s Oversight Board?
It has become increasingly clear, even to the leadership of the company that I still tend to call Facebook but that now calls itself Meta, that a private profit-making corporation is not the best final decision maker with respect to what people can say in a free society. Even Facebook didn’t fully want that task.
Governmental intervention might well be worse. So, the idea was to create an outside body that would be independent of the company, made up of individuals from all over the world. There are 20 members of the board, most of them with very distinguished backgrounds and all of them completely independent of Facebook.
Don’t just take my word about their backgrounds. We have a Nobel Peace Prize winner, the recently retired deputy chief judge of the European Court of Human Rights, a former judge from Africa, a former prime minister of Denmark, and retired leading editors of prominent newspapers in Britain and Indonesia.
In other words, people whose substantial reputations will enable them to decide these cases the way they think best. They’re not going to decide them the way Facebook wants, and they’re not going to decide them the way Twitter storms may want. Some of the most difficult questions facing the company will be put to this board.
The scope and jurisdiction began small, but they have already greatly increased. My guess is that over time the board will have more oversight of the content decisions. The board has nothing to do with anything else about the company. It’s all about what content is permitted on the platform.
How many cases have come before the board? And what’s been the response from the corporate leadership to decisions or recommendations that you’ve made?
The board has been operating since October 2020. Since then, there have been well over a million appeals to the board from members of the public. Most appeals have to do with people objecting to their own content being taken down. Some have to do with what they consider objectionable content being left up.
Of that vast number of appeals, the board has decided 20 cases with published opinions. That’s not a very large number. You can go on the website and read the decisions. We are trying to choose cases that have real impact and that touch upon problems the company frequently faces.
We have final authority about whether material will be restored to the platform or not. And we have another important authority, and that is to make policy recommendations for changes in the way that the company operates its community standards and practices.
The company has promised to comply with our final decisions on restoration of any particular material. They have not promised to adopt all our recommendations. What they have promised to do, and I think this is very significant, is to take all of the recommendations seriously and respond publicly within a certain period of time.
Initially, it was 30 days, but that turned out to be overly optimistic. We’re going to be extending that to 60 days. At that point, the company will either accept or reject the recommendation or say we’re working on it. Sometimes the answer is, “We like the recommendation and plan to implement it, but the engineering tasks are more serious than you might have guessed.”
In some cases, they believe they’ve already followed the recommendation. And in others, they simply say no. But we have had a favorable implementation of many of these recommendations and a favorable reaction to even more.
You recently said on a podcast that “we are not going to make the internet okay.” Can you explain what you mean by that?
This is a vast problem. It’s like pollution and climate change and pandemics and anything else. There’s not going to be an end to the problem. I will feel good about the work we’ve done if, at the end of a few years, we’ve made it measurably and appreciably better.
The big problem is not social media. The big problem is human beings. Both the good and the bad in human beings are amplified by these media. We’ve all been using speech for both good and bad purposes since human beings emerged from our caves. But it’s social media that enables people to disseminate these messages to billions of people around the world, at essentially zero marginal cost and in real time.
That’s what’s new, plus the algorithms. The feeds tend to give priority to the messages that people react to most intensely. And again, that’s both good and bad. The bad is what I think a lot of people are particularly focused on now. We are not going to solve that problem, but we do think we can bring about genuine improvement by bringing more transparency, consistency, and fairness to the process.
Is there a difference between the speech or expression of an ordinary citizen versus that of a political leader or head of government? If so, what is that difference?
That’s a surprisingly hard question. In theory, the same rules apply to everyone. But as is so often the case in the application of a neutral rule, the rule may be the same, but the circumstances are going to be different.
Take the newsworthiness aspect. There are times when the platform will decide to leave up a message that might otherwise violate one of the community standards because it’s newsworthy, because it’s important for people to know about the message. That is not favoritism toward political figures, but their speech is oftentimes more newsworthy than anyone else’s, so the exception does have the effect of treating them somewhat differently.
Now there’s a flip side of this that I’m very happy to defend, which is that critiques of political leaders are deliberately given more leeway. New York Times v. Sullivan is the leading case; it gives ordinary people greater latitude when criticizing or saying nasty things about political leaders than they have when criticizing their neighbors.
This goes both ways.
What about the social media companies themselves? Do they have certain free speech rights as corporations?
That’s a matter of national law. If we’re talking about the United States, the U.S. Supreme Court has never said one way or the other. Lower court decisions have recognized that the platforms have at least some of the significant characteristics of the press and that they’re not an automatic billboard. They curate speech, and if they didn’t, our inboxes would be filled with spam. Nobody wants an un-curated Facebook or YouTube. So, they do perform some editorial functions, and that is protected under the free press clause of the First Amendment.
As far as how that applies in countries around the globe, we would have to do a 180-country survey.
As a constitutional law professor, how do you see the First Amendment applying to misinformation as well as disinformation?
In any number of cases, the United States Supreme Court has unequivocally held that speech cannot be suppressed merely because those in command regard it as false. There have been some cases where the speech was unquestionably false. The clearest example was United States v. Alvarez, which had to do with the so-called Stolen Valor Act. That law made it a crime for a person to claim to have received the Medal of Honor when they had not. The Court held that the statute was unconstitutional.
Now, oftentimes misinformation and false speech can be punished and curbed, but not just because the speech is false. It has to be because it’s both false and harmful in some particular way. There are a number of areas where misinformation or false speech is punishable, but not across the board.
A lot of us grew up with the maxim that you don’t have the right to yell “fire” in a crowded theater. Does that principle still apply?
Yes, but note that yelling “fire” in the crowded theater is not permissible just because it’s false. It’s that it’s going to cause people to run for the exits and trample each other and even result in fatalities.
That maxim is designed not as an excuse for more government regulation, but rather as a warning that government regulation kicks in at a pretty extreme end of the spectrum.
Debates about free speech are happening on college campuses. How can colleges balance free speech with a concern for civility?
Public universities are considered an arm of the government, and they are governed by the First Amendment. In their pedagogical capacity, universities can do a lot to promote civility, but the First Amendment doesn’t allow them to punish student speech merely on the ground that it is not civil.
Private universities have somewhat more latitude. But in California, a law prohibits private universities from punishing students for speech that would be protected by the First Amendment.
Universities have their hands tied. That doesn’t keep them from trying, but a lot of hate speech codes have been struck down as unconstitutional. I don’t think any of them has ever survived judicial review.
I believe, and I think most observers think this is true, the greatest threats to freedom of speech on most American campuses today are not from faculty or administrators, but from other students. The atmosphere of intolerance and bullying toward minority points of view has caused a lot of people who might otherwise have been dissenting voices to go into a defensive crouch and not express themselves.
Do you see any middle ground in this debate? Some think campuses should be safe speech zones, while others think campuses need to be robust places where you get exposed to different lines of thought that will prepare you for the world.
I don’t think it’s always true that the middle ground is the right answer, and universities, if they are safe spaces, are really not universities anymore. Universities have to be places where there can be an exchange of ideas that may sometimes be very uncomfortable for people. You should not go to college to be safe from disagreements.
Is there such a thing as unacceptable speech? If so, where should the bar be set?
Absolutely there is such a thing. As a civilized country, we have all kinds of norms that are enforced, not through law, not through punishment, but through parents and civil society teaching children right and wrong. Those mechanisms are not governed by the First Amendment, but they are extremely important.
You use the term “unacceptable.” There is no catchall “unacceptable” category in the First Amendment. But there are many defined forms of speech presenting particularized harms, which have always been recognized as being subject to regulation. Obscenity at some extreme level. Defamation. National security violations. Crime-facilitating speech. Commercial speech that is misleading and leads to consumer fraud. In general, incitement and threats. The list is not short.