The analysis has been welcomed by those who monitor tech policy. The director of the internet policy unit at the Tony Blair Institute for Global Change says that new online safety regulators and independent auditors should consider using tools such as TrollMagnifier to assess existing safety systems. The website's rules prohibit content manipulation, which covers coordinated disinformation campaigns as well as any content presented to mislead or falsely attributed to an individual or entity. The platform says it has dedicated teams that detect and prevent this behavior, and that 99 percent of policy-breaking content is removed before a user sees it.
Two researchers are more circumspect about the replicability of the academics' troll-hunting approach. Some organic behavior can look troll-like, one notes, pointing to errors in earlier research that failed to distinguish between authentic and inauthentic behavior. He wants to know how much of the data comes from ordinary communities interacting with each other, rather than from state-sponsored troll communities. Golovchenko is also concerned about the results themselves. "The paper is ambitious, but I am not sure how to evaluate the accuracy of the tool the authors present," he says. Because the tool is trained on accounts that have already been discovered, the worst-designed ones, it may capture only the tip of state-sponsored disinformation capabilities: such accounts are made to go undetected. "By design, studies like this give us the bare minimum, because we are talking about state actors that spend resources to stay hidden."
Others welcome the paper. According to an analyst at the Institute for Strategic Dialogue, the researchers have developed a clever way of scaling up the identification of accounts engaged in coordinated troll activity. Such tracking is difficult without a seed list of known accounts whose echoes can be traced, O'Connor points out, but it is possible to do on the internet. He says more data can only help researchers, and subsequently platforms, understand and tackle emerging tactics, tools, and narratives favored by bad actors on social media.
The researchers hope that this kind of transparency will repay the effort they have invested in the site. The technique could help social network companies, says Stringhini. While platforms have additional indicators that can hint at a troll user's real background, such as IP addresses and browser fingerprints, examining the pattern of content posting could help them identify inauthentic users more accurately.
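The idea of spotting coordination from posting patterns alone can be illustrated with a toy sketch. This is not the TrollMagnifier method itself; it is a minimal, hypothetical example in which accounts whose posting footprints overlap almost completely get flagged as possibly coordinated, using a simple Jaccard similarity over the items each account has posted. All account names and data here are invented.

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Overlap between two sets of posted items (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

# Hypothetical toy data: account -> set of (community, link) posts.
accounts = {
    "user_a": {("news", "x"), ("news", "y"), ("politics", "z")},
    "user_b": {("news", "x"), ("news", "y"), ("politics", "z")},
    "user_c": {("sports", "q"), ("music", "r")},
}

# Flag account pairs whose posting footprints overlap suspiciously.
THRESHOLD = 0.8
flagged = [
    (u, v)
    for u, v in combinations(sorted(accounts), 2)
    if jaccard(accounts[u], accounts[v]) >= THRESHOLD
]
print(flagged)  # [('user_a', 'user_b')]
```

A real system would of course need far richer signals (timing, phrasing, reply structure) and a threshold tuned against known cases; this sketch only shows why shared posting patterns are a usable signal at all.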
Finding inauthentic users on the site is not easy. Bootinbull's mission to stir hearts and minds appears to have ended unsuccessfully when the account went silent on the platform on December 3, 2015. Its last post? The opening of a long joke in which a woman asks a man whether he drinks beer. The reply: "Just beer."