Twenty-five years ago, Congress established "safe harbor" protections that shield social media platforms from legal liability for the content their users post. While these platforms have delivered many benefits, since 1996 we have also seen how socially destructive they can be. What we have learned, the authors argue, is that Section 230 is out of date and should be revised to hold social media platforms accountable for how they design and operate their sites.

Social media platforms enjoy broad safe harbor protections for the user-generated content posted on their sites. These protections were created by Section 230 of the Communications Decency Act of 1996. They are the product of a long-gone technological era, and the internet has been transformed since the beginning of the 21st century. It is time to revisit and revise these protections, and leaders whose businesses depend on the internet should understand how they might be affected.

Social media platforms deliver many social benefits. They gave a voice to oppressed people during the Arab Spring and provided a platform for the #MeToo and #BlackLivesMatter movements. They raised $115 million for ALS through the Ice Bucket Challenge and helped coordinate rescue efforts for victims of Hurricane Harvey.

But we have also seen the social destruction these platforms can cause, which has raised previously unimaginable questions about accountability. How much responsibility should Facebook bear for the Capitol riot, much of which was planned on its platform? How much responsibility should Twitter bear for allowing terrorist recruitment? What share of responsibility should Backpage or Pornhub take for facilitating the sexual exploitation of children? And what about platforms that have profited from illegal sales of assault weapons and pharmaceuticals? Section 230 does not address questions like these.

Two key subsections of Section 230 govern user-generated content. Section 230(c)(1) protects platforms from legal liability for harmful content posted on their sites by third parties. Section 230(c)(2) allows platforms to police their sites for harmful content, but it does not require them to remove anything, and it protects them from liability if they choose not to.

There is good in these provisions, but there is also bad.

The good is easy to see. We want social media platforms to stay in business because of the social benefits they provide, and they could hardly do so if they were held liable for any and all content that third parties post on their sites. Section 230(c)(1) addressed that concern.

Section 230(c)(2), for its part, was created in response to a 1995 court decision holding that platforms that moderate user-generated content on their sites are publishers and therefore legally responsible for that content. To encourage platforms to police their sites for harmful or objectionable content, Congress shielded them from liability for doing so.

That seemed like a sensible approach at the time. But the two subsections work at cross purposes: if you grant platforms legal immunity for all the content their users post, you reduce their incentive to remove content that is harmful.
In 1996, this didn't seem to matter much: even though social media platforms had no legal obligation to remove harmful content, it stood to reason that they would do so out of economic self-interest, to protect their brands.

We have learned a lot since 1996. We have learned that social media posts can cause significant harm. We have learned that platforms do not have a strong enough incentive to police their sites and protect their brands. And we have learned that socially damaging content can be highly profitable for platform owners while doing little damage to their brand or public image.

Today there is a growing consensus that Section 230 needs to be updated.

How might Section 230 be changed? Legal scholars have put forward a variety of proposals, all of which take a carrot-and-stick approach that conditions the liability shield on responsible behavior. In a 2017 Fordham Law Review article, Danielle Citron and Benjamin Wittes argued that Section 230 should be amended so that its safe harbor applies only to providers or users of an interactive computer service that take reasonable steps to address unlawful uses of their services that cause serious harm to others.

This argument, which Mark Zuckerberg echoed in testimony he gave to Congress in 2021, is tied to the common-law standard of duty of care, which American Affairs Journal has described as follows: businesses have a common-law obligation to take reasonable steps not to injure their customers and to take reasonable steps to prevent harm to them. In certain situations, that duty creates an affirmative obligation for a business to prevent one party from using its services to harm another. Under the common law, platforms could be held liable if they create an unreasonably unsafe environment or fail to prevent one user from harming another.

Courts have begun to adopt this line of thinking. In a decision dated June 25, 2021, the Texas Supreme Court ruled that Facebook is not shielded by Section 230 for sex-trafficking recruitment that occurs on its platform. Section 230, the court reasoned, does not create a lawless no-man's-land on the internet. Holding internet platforms accountable for the words or actions of their users is one thing, and federal precedent is clear that Section 230 does not allow it. Holding platforms accountable for their own misdeeds is quite another, particularly where human trafficking is concerned.

The duty-of-care standard is a solid one, and courts are moving toward it by holding social media platforms accountable for how they design and operate their sites. Under any reasonable duty-of-care standard, Facebook should have known that it needed to take stronger action against user-generated content supporting the overthrow of the government. Likewise, Pornhub should have known that it should not allow 14-year-olds to upload explicit videos to its site.

Not everyone believes reform is necessary. Supporters of Section 230 argue that it enables innovation, because startups and small companies might not have the resources to provide the same level of protection that a company like Google can. A duty-of-care standard addresses that concern: what counts as reasonable protection for a billion-dollar corporation will naturally differ from what counts as reasonable for a small startup. Another criticism of Section 230 reform is that it would stifle free speech. That is simply false: the duty-of-care proposals on the table today apply only to content that is not protected by the First Amendment.
First Amendment protections do not apply to speech that incites imminent harm (falsely yelling "fire" in a crowded theater), encourages illegal activity, or involves certain types of obscenity (child sexual abuse material, for example).

This is a change technology firms should embrace. When platforms have little incentive to reduce harm, their services become harder for the public to trust and use, and that affects both commercial and social interaction online.

Restoring a duty of care is unlikely to hurt most legitimate platforms. The main source of risk is user-generated content, and many online businesses host none at all. Those that do face little risk of litigation as long as the duty imposed on them is reasonable and they act responsibly; as noted, they simply need to take reasonable steps to police their services and minimize the risk of harm.

Good actors also have an interest in clearly distinguishing the services they provide from those provided by bad actors. Under a duty-of-care standard, only those who fail to meet the duty will be held accountable. Broader regulatory intervention, by contrast, could constrain every business's ability to make its own decisions and impose costs on all of them. And the longer bad actors continue to cause harm, the greater the odds of such broad regulation become. Section 230 should be amended.