The biggest companies in the tech industry have long been allowed to mark their own homework, hiding behind the adage of "move fast and break things" to protect their power.

Food and beverage companies, the automotive industry, and financial services are all subject to regulation designed to ensure high standards of ethics, fairness, and transparency. Tech companies argue that legislation will limit their ability to act effectively, turn profits, and do what made them powerful in the first place. The UK's long-awaited Online Safety Bill is one of the bills that aim to curb these powers, but its limitations mean it is unlikely to be effective.

The bill's duty of care, which requires platforms to monitor for illegal content, has been in the works for a long time. By also requiring platforms to restrict content that is technically legal but could be considered harmful, it could set a dangerous precedent for free speech and for the protection of marginalized groups.

More than one million people said they had suffered threatening behavior online in the past year, according to a survey. Of those surveyed, 23 percent were members of the LGBTQIA community, and 25 percent had experienced racist abuse online.

Legislation aimed at tackling some of these harms will come into effect in the UK in 2023. Yet campaigners, think tanks, and experts have repeatedly questioned the effectiveness of the Online Safety Bill. The bill doesn't specifically name minoritized groups, such as women and the LGBTQIA community, even though they are disproportionately affected by online abuse.

According to the Carnegie UK Trust, there are no specific processes to define what constitutes significant harm or how platforms would be required to measure it. The bill would also drop the requirement that Ofcom encourage the development and use of technologies and systems for regulating access to electronic material. And the legislation won't be able to account for harms caused by platforms that haven't yet gained prominence.

Other countries have passed legislation to rein in platforms. Germany was the first country in Europe to take a stance against hate speech on social networks, giving platforms with more than 2 million users a seven-day window to remove illegal content. In 2021, EU lawmakers set out a package of rules for Big Tech giants through the Digital Markets Act, which stops platforms from giving their own products preferential treatment, and in 2022 the EU Artificial Intelligence Act made progress through extensive consultation with civil society organizations. In Nigeria, the federal government issued a new internet code of practice to address misinformation and protect children from harmful content online.

The UK will likely make progress toward a regulatory body for tech companies in the years to come. But even with the Online Safety Bill, more needs to be done to protect vulnerable people online.