Can we regulate social media without breaking the First Amendment?

One of the hardest problems at the intersection of tech and policy is how to regulate social media platforms. Everyone seems to think we should do it: Democrats, Republicans, even Facebook, which is running ads saying it welcomes regulation. It is weird. Everyone agrees on the idea, but no one agrees on the execution. Everyone agrees the platforms should be more transparent, but not about what. Should researchers have access to users' data? What about data privacy? The bills have been stuck forever.

The biggest obstacle is the First Amendment. Like everyone else, social media companies have a First Amendment right to free speech, and a lot of the proposals to regulate these companies look like government speech regulations.

In the past year, Texas and Florida passed state laws that would regulate big platforms, but both were ruled unconstitutional by courts and put on hold pending appeal. The Florida law exempted any company that owned a theme park in the state, an openly corrupt concession to Disney.

The Knight First Amendment Institute's executive director, Jameel Jaffer, co-authored a piece in The New York Times opinion section arguing that the arguments the platform companies are making against those laws go too far.

I asked Jameel to talk about that line of argument because I thought it was fascinating.

Here we go.

The transcript was lightly edited for clarity.

Jameel Jaffer, you are the executive director of the Knight First Amendment Institute. Welcome to the show.

Thank you.

You have a piece in The New York Times opinion section about the laws in Texas and Florida that would regulate social media companies and about the arguments those companies are making against the laws. I want to talk about all of that, but we should start with the basics. What is the Knight First Amendment Institute?

The institute is based at Columbia University. It was established five years ago by Columbia and the Knight Foundation to focus on free speech issues in the digital age, which we do through litigation, research, and public education. We have brought a number of lawsuits at the intersection of new technology and the First Amendment; the most famous over the last few years is probably the one that forced President Trump to stop blocking critics from his account.

Is litigation Knight's primary product, the lawsuits? Is it research? What is the day-to-day like?

We are both of those things. We have a litigation team of about a dozen lawyers who work on cases involving free speech, privacy, and new technology. We bring those cases to court. We argue them at all levels of the federal court system.

Many of our lawyers came from other organizations. I was at the American Civil Liberties Union for 14 years. Some of my colleagues were at other organizations that dealt with free speech and privacy issues, again in the context of new technology.

We sponsor academic research and academic symposia around the same set of issues through our research program. Over the last few years, we have spent less time on public education, but we are going to spend more time on that this year and in the future.

The First Amendment and social media have a complicated relationship. If I had to make a prediction for the future, it would be that the drive to regulate social media in this country will come to a head and we will have to contend with the First Amendment. Courts have put the Texas and Florida laws on hold after First Amendment challenges. Congress loves to yell at tech executives; I don't know if it's going anywhere, but it keeps happening. There is a dull roar about Section 230. In the next year or so, it will all add up to, "What are the limits of the First Amendment?" That feels real to me.

It is already happening, right? Some of these laws are being challenged in the lower courts. The Texas law has been challenged. The cases are going up on appeal because the district court judges have reached conclusions about the laws. I don't know whether these are the cases that will lead to the Supreme Court showdown, but if it's not these laws, there will be others. I believe Wisconsin has a law in the works, and some other states are considering different types of regulation of social media platforms.

There are a lot of ideas on the table, and the details of the laws that actually get passed will determine what the lawsuits look like in practice. The courts are going to have to start grappling with these questions over the next year or two: what does the First Amendment mean in this context?

Your piece in The New York Times opinion section is about the arguments the social media platforms are making in the courts; you argue they are co-opting the First Amendment in their lawsuits. Explain that to me.

The op-ed drew on the brief we filed in the Florida case. On the one hand, we agree with the companies that these laws are unconstitutional, but on the other, we are a little bit uneasy about these lawsuits. The laws impose transparency requirements on major platforms and give users all kinds of rights when their accounts are taken down. The Florida law restricts platforms from deplatforming or shadow banning political candidates or media organizations. The Texas law forbids discrimination on the basis of viewpoint.

These are very comprehensive regulations of the social media platforms. Both of the laws amount to viewpoint discrimination because they were enacted in response to the platforms' editorial decisions. In the months before these laws were enacted, some of the platforms took down President Trump's accounts, restricted access to reporting about Hunter Biden, and labeled vaccine misinformation as false.

These laws were payback for those decisions, and as we said in our brief, that fact dooms them for constitutional purposes. The courts would be justified in throwing these laws out on that basis alone. But the social media companies are making more than that argument.

The platforms are not just saying that these laws are viewpoint discrimination. They argue that the Florida and Texas laws have many problems beyond viewpoint discrimination and should be thrown out, and the arguments they are making would also rule out other laws that wouldn't have those problems. The companies argue that the First Amendment entitles them to the same protections as newspapers.

They argue that any law that burdens their editorial decision-making should be subject to the most stringent constitutional review, and perhaps even be considered categorically unconstitutional. They argue that any law that singles out larger platforms should be subject to the most serious constitutional scrutiny.

If you accept all of those arguments, the Florida and Texas laws are not the only ones that will be struck down. It will be almost impossible for legislatures to pass more modest laws: laws that impose reasonable transparency requirements on the companies, that give users reasonable due process protections, or that restrict what kinds of information the companies can collect and how they can use it, in other words, privacy laws. Accepting the companies' arguments means ruling out laws that might be much more reasonable.

Who do you think the audience for your piece was? Regular people? Judges who read the opinion section? The entire bar?

There has been a kind of bifurcation in the community that spends a lot of time thinking about the First Amendment and regulation of the platforms. The two poles are the ones represented in these cases: the platforms and the state governments. The platforms say, "We have the same rights as newspapers, and any law that would be unconstitutional with respect to newspapers must also be unconstitutional with respect to social media companies." The state governments have staked out a kind of "none" position: the platforms have no First Amendment rights to speak of in this context, so the states should be free to regulate them as they see fit.

This kind of bifurcation is a real problem, because the First Amendment doesn't leave us with only those two possibilities. There is a lot of space in between. You could have a set of rules that makes it difficult for state governments to use social media regulation as a means of manipulating political debate, but still allows them to impose reasonable privacy and transparency protections. The First Amendment shouldn't leave us with only two unattractive options when there is a middle ground.

I think that message was for people in our small community of lawyers and tech policy experts who are thinking about these issues every day but have come to the wrong conclusion that there are only two unattractive options on the table. I don't want to be too generous to our own writing, but it was also an effort to bring a degree of nuance to a conversation that has sometimes lacked it.

Let me push you on the nuances here. I was a lawyer for 20 minutes and I was not good at it. The social media companies are arguing for a maximal interpretation of the First Amendment, and their lawyers are paid to do that. Reading the piece, it felt weird to say they were doing anything wrong. That is what lawyers are supposed to do.

It is notable that some of these companies, including Facebook, have been out there saying to Congress, "We want regulation."

Please regulate us.

And then they are going into district courts around the country to say that essentially any regulation would be unconstitutional. That seems like an important thing to note. Even if it is not realistic to think that the companies will change their legal arguments, it is important to ensure that other people are not fooled.

We shouldn't think that support of the First Amendment means support of the platforms. What the First Amendment requires might be different from what the platforms are saying, and it might also be different from what the state governments are saying. Being a champion of the First Amendment in this context doesn't necessarily mean lining up behind the social media companies.

That is not a message for the companies, but for others who are just trying to figure out what their views are on this topic.

It feels like things are going to come to a head in the next year or so; I think that is what I am getting at. I would have assumed that any First Amendment organization would push for a maximal interpretation of the First Amendment, but there is a lot of nuance here.

I don't think the phrase "a maximalist understanding of the First Amendment" is a good way of understanding what's going on here. The platforms are asserting free speech rights; they say, "We should get to decide what our community looks like." Users are asserting their free speech rights; they say they have the right to participate in the new public square and should be allowed to do so without interference. And governments are asserting a kind of free speech interest when they say we need to protect the integrity of the public square, that the public square needs to work for our democracy and public discourse needs to serve democratic ends.

All of those are free speech arguments, and you have to take all of those interests into account when you decide what shape the First Amendment should take. So I don't think it's a question of a maximalist understanding of the First Amendment. It is a question of what the First Amendment was meant to protect: what are the values the First Amendment was meant to protect, and what shape do we need to give the First Amendment to ensure that it protects them?

The First Amendment was meant to protect the process of self-government.

The First Amendment was meant to protect the process of self-government. It shouldn't accommodate regulations that interfere with the process of self-government, but it should accommodate regulations that protect or strengthen it. The First Amendment should not be used to make room for the kind of regulations that are effectively state efforts to get the platforms to censor. It should make room for regulations that will help us understand better what is happening in public discourse, for example how the platforms are shaping public discourse through their editorial decisions.

It seems like the First Amendment should be sympathetic to that, so long as the regulations are narrowly tailored and drafted in a way that doesn't give government actors the ability to rig the game. Deciding whether any particular regulation should survive that kind of First Amendment review is not going to be a matter of just applying rules we developed 50 years ago, before anyone had even conceived of the internet. It will mean going back to the values that animate the First Amendment and asking, again, what kinds of rules we need in place to give effect to those values.

The key line from your op-ed was, "The First Amendment should apply differently to social media companies than it does to newspapers because social media companies and newspapers exercise editorial judgment in different ways." If you do X kind of editorial judgment, you get newspaper protection, and if you do Y kind of editorial judgment, you get a lesser social media company protection. What is the line?

I regret that particular phrase. What I meant to say, which is a little bit different from what we wrote, is that it should matter how editorial discretion is exercised, not whether it's a platform or a newspaper exercising it.

Here is a very concrete way of thinking about this. When we submitted this op-ed, we traded drafts with The New York Times several times. They asked, "Why are you using this word here? Why are you using this phrase?" They had suggestions about the structure of the argument and about what to say. They had questions about our claims. They sent it to editors who commented on what we had written. They selected a title, placed the op-ed on their website, and then decided whether to run it in the print newspaper as well.

When they decided to put it in the newspaper, they had to make a decision about what photograph to attach to it. All along the way, editors were talking to one another; these are the kinds of things they think about when making editorial judgments.

Platforms do not do that. That doesn't mean platforms aren't engaged in editorial judgment; we have been saying for the past 20 minutes that they are. But they exercise it in a different way. They exercise it through content moderation decisions, and they implement it through both humans and computers.

Can I interrupt you?

Yeah.

You can see how this starts to sound like the Section 230 debate, which turns on the difference between a platform and a publisher.

No, or at least I am not sure it does. Let me tell you what the argument is, and then you can decide. The argument is that the two types of editorial decision-making look different, and that a regulation that would burden the editorial judgment of The New York Times might not burden the editorial judgment of a social media platform in the same way.

If the question every time Congress tries to regulate the platforms is "could Congress do this to The New York Times?", we are going to end up with no regulation at all. Congress can't say to The New York Times, "Explain why you rejected this op-ed," or, "If you reject an op-ed, you need to give the person who submitted it an opportunity to appeal the decision to the editor-in-chief." But some of the platforms have already started to provide, on their own, some of the transparency and process that this kind of regulation would require.

If requiring platforms to provide that kind of transparency were offensive to the idea of editorial discretion, it would be a little odd that they are already doing it. The only argument we were trying to make is that the way these two types of entities exercise editorial discretion might affect the constitutionality of any particular regulation. It seems to me that that has to be true; it wouldn't make sense for the First Amendment to be indifferent to how editorial decision-making actually works. The platforms are already transparent in ways that newspapers never are, and that seems like an important fact to me.

That is what I would push you on, because you say there is a set of actors that look like open-access social media platforms and a set of actors that look like The New York Times. What do you think the differences are?

They are doing different things. Yeah.

What is the line? How would you define that? The New York Times has a comment section. That looks like Facebook.

Does The New York Times look like a social media company if it has a comment section?

Absolutely, yes. I don't think the line should be between newspapers and social media companies. The line should be drawn on the basis of the kind of editorial judgment being exercised in that particular context. With respect to its comment section, The New York Times does look a little bit like a social media company; what the Times is doing there is similar to what Facebook is doing in its main business.

So I wouldn't draw a line between newspapers and social media companies. I would draw it on the basis of function: what function is the entity engaged in? You might fairly ask the same question back, how are you going to draw lines based on function? The only way to answer that is on a case-by-case basis. That is what courts do; the lines get drawn through case-by-case decision-making.

This is not a new proposition, even on the question of which entities are exercising editorial judgment. There are many Supreme Court cases in which the court has worked out what counts as "editorial judgment" through case-by-case decision-making. In a 1974 case, the court said that a newspaper was exercising editorial judgment.

Twelve years later, in another case, the court said that a utility was exercising editorial judgment when it decided whether or not to include certain content in the envelopes it sent to customers.

And in a case a decade after that, the court said that parade organizers were exercising editorial judgment in deciding who could or could not participate in their parade. You can have the same kind of case-by-case decision-making with respect to what kinds of burdens on editorial judgment are permissible.

The only argument we are making is that the kinds of burdens that are constitutionally permissible might turn on whether you are regulating a parade, a utility, or a newspaper. That seems to me like it has to be right.

I am with you on that. But when I look at the attempts at definitions across the policy landscape, they keep crashing into the rocks.

Yeah.

The Federal Trade Commission is trying to get Facebook to stop being a monopoly social media provider, and its lawsuit failed because it couldn't define the market that Facebook operates in. I look at the laws in Florida and Texas, and Disney would not be hit by the Florida law because it excludes companies that own a theme park.

The government seems to be flailing even on the first cut.

That is true, but you don't have to be a First Amendment expert to know that the Disney exception was not going to fly.

Yeah.

Those bad definitions are a reflection of bad motives, but there are more serious efforts at the federal level. The bill introduced last week by Senators Coons, Portman, and Klobuchar is almost entirely about transparency. It would require platforms to share certain kinds of data with researchers and would make it easier for journalists to study the platforms. It is very carefully done. I think it could be improved; I am certain there are changes that could be made that would make it stronger and more resistant to First Amendment challenges. But it is a very serious effort at drafting regulation that would strengthen democratic values online, and it shouldn't be seen as an affront to the First Amendment.

It is possible that the first wave of these laws, like the Florida and Texas laws, just doesn't go anywhere, for good reason, but I don't think we should assume that no regulation is possible here.

There is a view in the tech industry that I hear quite a bit, which surprises me given that the platforms' lawyers are in court arguing for these interpretations of the First Amendment. The view is that the First Amendment is annoying: "I wish the United States would just write a speech code that says Nazis are illegal and write the content moderation standards for us, like Germany did. Trying to do it ourselves is an endless pain point. I am tired of it. I am going to change my company's name to Meta and do metaverse stuff instead of thinking about speech regulation at scale."

That view is more common than I expected. I think it is a terrible answer, but I hear it a lot. Do you think the United States could do something like that for this group of companies?

You mean impose that kind of speech code?

Yes.

No. I don't think the First Amendment is an obstacle to good ideas, but it is an obstacle to some bad ideas, and I would put that one on my bad ideas list. Anyone who finds this proposal appealing should think about how that power would have been used if the last administration had it. How would President Trump have defined vaccine misinformation if he had the power to do so? What would the speech code he wrote for the platforms have looked like?

I think that the people who think that a speech code would solve our problems here might not have thought about what it would look like, who would get to write it, and who would enforce it. The First Amendment protects us from that so-called solution to the problem, and I think we are very lucky to have it.

I don't think the First Amendment is an obstacle to the kinds of regulations that actually make sense, that would actually do something to address real problems. I keep going back to transparency, privacy, and due process, but there are other things as well. If Congress wanted to give developers the right to build on top of the digital infrastructure that the big technology companies have created, I don't think the First Amendment would be a problem. Creating privacy protections that limit what the companies can collect and how they can use that information would have a direct impact on privacy, but also on the quality of our speech environment, because it's all that data that feeds micro-targeting.

Congress could improve our speech environment in significant ways without generating serious First Amendment issues if it implemented some of the non-viewpoint-discriminatory proposals that we have now mentioned several times.

Let me put a big idea to you. You said these are the new public squares, and the users of these platforms have a real interest in what happens on them. Most people in America are more affected by YouTube's moderation policies than by almost any state or federal law; people are more aware of YouTube strikes than of the speed limits around them. The platforms and their rules are in your face whenever you use the internet. Justice Clarence Thomas wrote a concurrence saying we should treat social media companies as common carriers.

Yes.

The idea is that we should leave the First Amendment alone and regulate these companies like the phone company, using the language of telecom law. The Supreme Court seems poised to flip 50 years of precedent on abortion; the court seems willing to leave precedent behind. And a lot of First Amendment precedent is only about 70 years old. Seventy years ago, the idea of strict scrutiny was new.

Yes.

It seems like there is room for another way of thinking about how to regulate these platforms, which are almost completely unregulated today, though it might lead to other significant consequences. Do you think that is a danger? Do you think that is an opportunity? What do you make of it?

I don't think it's obvious that this would avoid First Amendment issues. I think it would generate its own set of First Amendment challenges, because the platforms would argue that they are not common carriers: they have never held themselves out to the entire public in the way common carriers do. The platforms all have community standards and moderation policies, which makes them look very different from AT&T, which is open to all comers. There is a certain kind of historical circularity to that argument, but the fact is that these platforms exercise a great deal of editorial discretion. Indeed, a lot of what people are complaining about is precisely that they exercise editorial discretion, and that makes them look different from the railways and the telecoms.

I am not sure how far that argument goes as a matter of legal doctrine or common sense, but you are right that Justice Thomas expressed some enthusiasm for it in that concurrence, and some appeals court judges have expressed enthusiasm for it as well. Both Florida and Texas make the common carrier argument: to the extent the platforms are just hosting content, the states say, they can be characterized as common carriers. It is not obvious to me that that distinction is workable. Can you really draw the line between hosting and everything else? It seems complicated to me, though maybe there is value in distinguishing the two functions. In any case, I don't think asserting that the platforms are common carriers gets us out of the First Amendment world; it just raises a lot of other First Amendment questions.

I have been covering net neutrality for 10 years and the phrase "common carrier" lights my brain on fire. Where do you think this will go next? What should people be looking for over the next year?

There are going to be two appeals court decisions in the next few months. The Florida case will come first, and I think the Texas case will be heard by the Fifth Circuit in the summer, if not the spring. Whether or not one of those cases ends up at the Supreme Court, I think they will shape the legislative debate at the state and federal level over the next year or two.

That is why we filed the brief in the Florida case. We want to make sure the courts understand the implications of accepting the arguments the companies are making: if you accept them, you are not just ruling out the Florida and Texas laws, you are also ruling out a lot of other legislation that might come in the future.

Great. This has been illuminating, Jameel. Thank you for coming on.

Thank you. Happy to do it.