The UK government has expanded the Online Safety Bill yet again.

The latest package of measures added to the draft is intended to protect web users from anonymous trolling.

The Bill has far broader aims as a whole, including a sweeping content moderation regime that targets not only explicitly illegal content but also "legal but harmful" material, with a claimed focus on protecting children from a range of online harms.

Critics say the legislation will chill free speech and load legal risk and cost onto doing digital business in the UK. Firms are, of course, already offering to sell platforms services to help with compliance.

The draft legislation has been scrutinized by two committees. One called for a sharper focus on illegal content, while another warned that the government's approach is unlikely to be robust enough to address safety concerns.

The bill continues to grow in scope.

The draft includes a requirement for adult content websites to use age verification technologies, as well as a massive expansion of the liability regime, with a wider list of criminal content added to the face of the bill.

The Department for Digital, Culture, Media and Sport (DCMS) says the latest changes will apply only to the biggest tech companies, which will be required to provide users with tools to limit how much legal but potentially harmful content they are exposed to.

It's not clear what evidence the government is drawing on here, beyond anecdotal reports of individual anonymous accounts being abusive.

It is also easy to find examples of abuse being dished out by verified accounts. Nadine Dorries, the secretary of state for digital, herself lashed out at an LBC journalist during a parliamentary committee hearing.

Single examples don't really tell you much about systemic problems.

A recent ruling by the European Court of Human Rights reiterates the importance of anonymity online as a vehicle for the free flow of opinions, ideas and information.

UK legislators need to tread carefully here, especially given the government's claim that the legislation will make the UK the safest place to go online.


It might be more useful for lawmakers to consider the financial incentives associated with the spread of content on certain high-reach, mainstream, ad-funded platforms, where the problem of internet trolling is especially pronounced.

The UK's approach to tackling online harassment looks elsewhere, however.

The government is trying to force platforms to give users options to limit their own exposure, despite DCMS itself recognizing the role algorithms play in spreading harmful content.

Robust enforcement of the UK's existing data protection regime against people-profiling adtech is something privacy and digital rights advocates have been calling for for years.

Instead, the government wants people to hand over more of their personal data to adtech platform giants so those platforms can build new tools to help users protect themselves, even as it considers reducing the level of domestic privacy protections for Brits.

The Bill's latest additions will make it a requirement for the largest platforms to offer ways for users to verify their identities and control who can interact with them.

The onus will be on platforms to decide which methods to use to fulfill this identity verification duty, but they must give users the option of whether or not to verify themselves.

Dorries said tech firms have a responsibility to stop anonymous trolls polluting their platforms.

"We have listened to calls for us to strengthen our online safety laws and are announcing new measures to put greater power in the hands of social media users themselves," she said.

"People will now have more control over who can contact them and be able to stop the wave of hate that is served up to them," she added.

A feed of replies filtered to verified users only is already available on at least one major platform. The UK's proposal looks set to go further, requiring all major platforms to add or expand such features, making them available to all users and offering a verification process to those willing to prove an ID in exchange for being able to maximize their reach.
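As a purely illustrative sketch, and not anything specified in the Bill itself, here is roughly what honoring a user's "only verified accounts can reach me" choice might look like in code (all names here are hypothetical):

```python
from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class Reply:
    author: str
    author_verified: bool  # whether the author completed the platform's (hypothetical) verification
    text: str


def visible_replies(replies: Iterable[Reply], verified_only: bool) -> List[Reply]:
    """Apply a user's 'only show me verified accounts' preference to a reply feed."""
    if not verified_only:
        return list(replies)
    return [r for r in replies if r.author_verified]


# Example: a user who has switched the preference on sees only the verified reply.
feed = [
    Reply("anon123", False, "..."),
    Reply("named_user", True, "..."),
]
print([r.author for r in visible_replies(feed, verified_only=True)])  # ['named_user']
```

In practice a platform would presumably apply the same preference wherever contact happens, such as mentions and direct messages, not just when a reply feed is rendered, but the principle is the same.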

The law won't prescribe specific verification methods; instead, Ofcom will set out guidance on acceptable approaches.

Platforms could, for example, allow users to verify a profile picture to ensure it is a true likeness, or use two-factor authentication, where a platform sends a prompt to the user's mobile number. The government also suggests verification could involve a government-issued ID.
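The Bill won't pin down these mechanisms, so the sketch below is just one hypothetical way a platform might record which verification methods an account has completed; the real list of acceptable methods would come from Ofcom's guidance, not from anything shown here:

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Set


class VerificationMethod(Enum):
    # Options floated in the government's announcement; purely illustrative labels.
    PROFILE_PICTURE_LIKENESS = auto()  # check the profile photo is a true likeness
    SMS_TWO_FACTOR = auto()            # prompt sent to the user's mobile number
    GOVERNMENT_ID = auto()             # document check against a government-issued ID


@dataclass
class UserAccount:
    handle: str
    completed: Set[VerificationMethod] = field(default_factory=set)

    @property
    def is_verified(self) -> bool:
        # Verification is optional under the proposal; unverified accounts keep posting,
        # but other users could filter them out and their reach may be limited.
        return bool(self.completed)


alice = UserAccount("alice")
alice.completed.add(VerificationMethod.SMS_TWO_FACTOR)
print(alice.is_verified)  # True
```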

The oversight body which will be in charge of enforcing the Online Safety Bill will set out guidance on how companies can fulfill the new user verification duty.

In developing this guidance, Ofcom must ensure that the possible verification measures are accessible to vulnerable users and consult with the Information Commissioner, as well as vulnerable adult users and technical experts.

The UK isn't pushing for a complete ban on anonymity, which will be a relief to digital rights groups.

The UK's strategy when it comes to online trolling is, rather, to put limits on freedom of reach on mainstream platforms.

DCMS writes that banning anonymity online would negatively affect those who have positive online experiences or use it for their personal safety such as domestic abuse victims, activists living in authoritarian countries or young people exploring their sexuality.

The idea is that it will stop victims being exposed to abusive content even where that content is legal and does not violate a platform's terms and conditions.

Neil Brown, an internet, telecoms and tech lawyer at Decoded Legal, wasn't sure if the government's approach was consistent with human rights.

"I am skeptical that this proposal is consistent with the fundamental right to receive and impart information and ideas without interference by public authority," he said.

While it would be lawful for a platform to choose to implement such an approach of its own accord, he argued, compelling platforms to do so seems to be of questionable legality.

Under the proposal, those who want to maximize their online visibility/reach would have to hand over an ID, or prove their identity to major platforms.

Although the proposals stop short of requiring all users to hand over more personal details to social media sites, the outcome is that anyone unwilling or unable to verify themselves becomes a second-class user.

Those who are willing to spread bile or misinformation under their own names are not likely to be affected, as the additional step of showing ID is unlikely to be a barrier to them.

The government's proposal would mean that users of in-scope user-generated platforms could still post without using their real name as their public-facing identity.

Brown was a little more positive about this aspect.

He warned that plenty of people may be too cautious to entrust their ID to platforms, and the outing of all sorts of viral anonymous bloggers over the years highlights that there is no shortage of motives for shielded identities to leak.

"This is marginally better than a policy where your name is made public, but only marginally so," he said.

User controls for content filtering

DCMS said it will require category one platforms to provide users with tools that give them greater control over what they are exposed to on the service.

The bill will force in-scope companies to remove illegal content, such as child sexual abuse imagery, the promotion of suicide, hate crimes and terrorism. There is a growing list of toxic content and behavior on social media which falls below the threshold of a criminal offence but still causes significant harm.

That includes racist abuse, the promotion of self-harm and eating disorders, and anti-vaccine misinformation. Much of this is already forbidden in social networks' terms and conditions, but it is often allowed to stay up and is actively promoted to people via recommendation algorithms.

Under a second duty, companies will have to make tools available for their adult users to choose whether they want to be exposed to any legal but harmful content on a platform.

This could include new settings and functions that prevent users from receiving recommendations about certain topics.

DCMS's press release gives the example of content discussing self-harm recovery as something a particular user may not want to see.
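Again, the mechanics are left to platforms and to Ofcom's forthcoming codes; one hypothetical shape for such a filter, sketched purely for illustration, is a per-user set of opted-out topics that the recommendation step checks before serving content:

```python
from dataclasses import dataclass, field
from typing import Iterable, List, Set


@dataclass
class Post:
    post_id: str
    topics: Set[str]  # labels a platform's classifiers might attach, e.g. {"self-harm"}


@dataclass
class UserPreferences:
    # Topics this user has chosen not to be shown; empty means no extra filtering.
    opted_out_topics: Set[str] = field(default_factory=set)


def recommend(candidates: Iterable[Post], prefs: UserPreferences) -> List[Post]:
    """Drop any candidate post whose topic labels overlap the user's opt-out list."""
    return [p for p in candidates if not (p.topics & prefs.opted_out_topics)]


# Example: a user recovering from self-harm opts out of that topic entirely.
prefs = UserPreferences(opted_out_topics={"self-harm"})
candidates = [Post("1", {"self-harm", "recovery"}), Post("2", {"sport"})]
print([p.post_id for p in recommend(candidates, prefs)])  # ['2']
```

Everything in a scheme like this hinges on how posts acquire their topic labels in the first place, which is where the over-blocking worry Brown raises below comes in.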

Brown was more positive about the plan to require major platforms to offer a content filter system, with the caveat that it would need to be genuinely user-controlled.

But he raised concerns about workability.

The idea of a content filter system, where users choose what goes on their personal blocking lists, would give people a degree of control over what they see on a social media site. But he told us he wasn't sure how that would work in practice.

"When the government refers to 'any legal but harmful content', could I choose to block it? Is that anti-democratic, even though it is my choice to do so?"

"Could I block any content at all, so long as I consider it to be harmful? I do not know."

"What if it is a politician making abusive or offensive comments? Or is it going to be a more basic system, which lets users block nudity, profanity, and whatever a platform decides depicts self-harm or racism?"

"The latter might be easier to achieve, if it is left to platforms to define what 'certain topics' are, but I wonder if providers will resort to over-blocking to ensure that people do not see things they have asked to have suppressed."

Huge swathes of specific detail remain unclear, not least because the government intends to push so much of it through via secondary legislation. Further details of the new duties will be set out in forthcoming Codes of Practice drawn up by Ofcom.

Without those specifics, it is not possible to understand exactly how platforms will be able to implement these mandates in practice. For now, most of what we have is government spin.

How might platforms approach a mandate to filter legal but harmful content?

Assuming the platforms themselves get to decide where to draw the line, Brown predicts they will seize the opportunity to serve up a massively vanilla default.

They could even use over-blocking as a tactic to discourage people from switching the filters on at all, since doing so would turn their feed into one with a lot of censorship.

Platforms would have plausible deniability in this scenario, since they could argue the user themselves chose to see the harmful stuff: they never opted out, the filter was left off. Nor, on this logic, could you blame the government.

The platforms' data-driven harms would be off the hook; the user would bear responsibility for any online harm they suffered because they didn't turn on the high-tech sensitivity screen. Responsibility duly diverted.

Which, frankly, sounds like the sort of regulatory oversight an adtech giant like Facebook could happily get behind.

That said, the full package of proposals coming at platform giants from Dorries and co. undoubtedly poses plenty of risk and burden.

The secretary of state has made it clear that she would be happy to lock up the likes of Mark Zuckerberg and Nick Clegg.

The Bill was recently expanded to mandate proactive takedowns of a much wider range of content, under threat of massive fines and/or criminal liability for named execs.

Platforms will need to remove all that content up front, rather than acting only after user reports as they have been used to doing. Which means an end to content moderation business as usual.

DCMS is also adding new criminal communications offences to the bill.

Given that the tech giants are unwilling to properly resource human content moderation, it might be a lot easier for them to simply default to the blandest possible feed.

Cat pics and baby photos all the way down, then, and hope the eyeballs don't roll away, the profits don't drain away, and Ofcom stays away.

