Illustration by Alex Castro / The Verge

According to an internal memo sent to a team of Twitter employees, the platform's shopping features pose content moderation risks and could be used in ways that lead to individual or societal harm.

The memo was sent in early July to a group of employees, according to a person with knowledge of the matter. It warns that content moderation hasn't been prioritized for shopping and that some existing and not-yet-released features are categorized as high risk.

First rolled out last summer, the shopping feature lets brands list items for sale and pin a few products at the top of a merchant's profile. Users can click out to a merchant's website if they want to buy a product, but they can't purchase it directly on the social network. As of June, all US merchants can list and showcase up to 50 products in their storefront with the expanded shop module.

The memo paints the picture of a bare-bones process

Several elements of the e-commerce tool are categorized as high risk in the memo, which warns that merchant-generated fields could be used by bad actors in harmful ways.

Anyone with a professional account who sells items in the US can add products to their profile with the shop feature, and merchants can optionally add a custom shop name and description to the expanded shop. It's these merchant-supplied fields, the memo says, that pose dangers.

The memo depicts a bare-bones process for finding and removing potentially abusive or harmful content. The platform has no policy on what counts as a violation in a shop name or description, no way for users to report storefronts for content in those fields, and no way to detect violations in shop names and descriptions.

Shareability is the main selling point of the shopping features, and the company has introduced other updates, like reminders that let customers know about new releases from brands. Users can currently click into merchant shops, view products, and click out to merchant websites. But according to the internal memo, the ability to share storefronts could amplify harmful content further, increasing the visibility of content that violates the rules.

Proactive measures to detect violations are “limited”

The memo says automated detection mechanisms exist only for individual products, and the company has limited staff and tooling for further review.

The more accessible a shop is, the more likely users are to encounter violative shops or violative goods within them, and bad actors may use the platform's sharing features to amplify harmful or violative goods.

Lauren Alexander confirmed that the memo was authentic and said it was part of a new feature assessment process in which different teams give input to make sure new product releases are safe.

Alexander says the company is always working to improve the safety of its service. "We have taken the time to test our new shopping surfaces so that before we scale or expand to new markets, we have the chance to gather learnings to better inform our approach to health and safety," she said.

The company has experimented with a range of revenue sources, including a paid subscription service, live broadcasts on Spaces, and paid Super Follows. Though the platform has been rolling out and expanding its commerce tools, it lags behind other social networks in e-commerce, which could eventually become a large part of the business.

According to a report from the company's former head of security, the platform has suffered from negligent security practices and problems removing bots, among other issues. A separate report says the platform plans to combine a health team working on reducing misinformation and other toxic content with a services team that works on spam.