Kinzen is a startup that uses machine learning to detect and flag potentially harmful content for human review. As Spotify ramps up its focus on user-generated podcasts and audiobooks, the acquisition is meant to help the company deliver a safe, enjoyable experience on its platform around the world.
Spotify says the startup's tech has been "critical to enhancing our approach to platform safety." Kinzen claims to be able to analyze audio content in several languages, drawing on data from the internet and from human experts to determine whether certain claims are harmful. It also claims to be able to spot dog whistles: seemingly innocuous phrases that carry a darker meaning.
The sheer volume of user-generated audio on the service makes it difficult to catch every rule-breaker.
Kinzen will be brought in-house to improve Spotify's ability to detect and address harmful content.
The company has spent the past few years consulting experts and launching advisory councils in an attempt to figure out moderation after it purchased a do-it-yourself podcasting platform. It has relatively little oversight of the flood of audio content added to its service by independent creators.
The public debate over what people like Joe Rogan say on Spotify-exclusive podcasts is a perfect example of how unavoidable controversy can be, and that arises from a proportionally small amount of content. Imagine the issues that could crop up across the rest of the platform.
According to a recent report from the Anti-Defamation League, content that clearly violates the company's policies can still escape action. That report looked at explicitly white supremacist music, but Kinzen seems more focused on finding problematic spoken-word content and bringing it to the attention of human moderators. With thousands of podcasts added to the platform each day, its systems will have to work overtime.