Will Smale is a business reporter for the British Broadcasting Corporation.
A growing number of companies are using fake accounts on social media to spread false information about their competitors.
Lyric Jain, chief executive of Logically, a high-tech monitoring firm that uses artificial intelligence to find fake news on social media, is warning firms to be on their guard.
While Mr Jain's main customers are the British, American and Indian governments, he says he is now being approached by some of the world's largest retailers, who are asking for help to protect themselves from such attacks.
He says we seem to be on the verge of a new era of disinformation. "We are seeing that some of the same practices that have been used by nation state actors, like Russia and China, in social media influence operations, are being adopted by some more unscrupulous competitors of some of the biggest companies in the world."
These rivals, he says, are effectively going to war against the big brands on social media.
The use of fake accounts to "deceptively spread and artificially amplify" negative product or service reviews is one of the main attack tactics.
The bots can also be used to hurt a competitor more directly. If a retailer reports disappointing financial results, for example, an unscrupulous rival can use fake accounts to exaggerate the scale of its troubles.
Mr Jain doesn't rule out that some smaller Western businesses are doing the same against their larger rivals, in addition to foreign competitors.
"Yes foreign competitors are doing this, but even some domestic ones who don't have the same standards around their operations," he says. Typically it is an emerging company going after an incumbent using these means.
Mr Jain said he wouldn't be surprised if some established brands were also using these tactics.
Logically's artificial intelligence scans more than 20 million social media posts a day, flagging those that appear suspicious. Human experts and fact-checkers then review the flagged items.
When they discover misinformation, they report it to the social media platform in question. Whether the offending posts and accounts are then taken down is up to the platform; some act on the reports, others do not.
When it comes to attacks on companies, the offending posts are usually taken down within two hours, compared with a few minutes for threats of violence.
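As a very rough illustration, the flag-then-review workflow described above can be sketched in code. Everything here is hypothetical: the scoring logic, phrases and threshold are toy stand-ins, not Logically's actual system.

```python
# Hypothetical sketch of an AI-flag-then-human-review pipeline.
# The model, phrases and threshold are illustrative only.

def suspicion_score(post: str) -> float:
    """Toy stand-in for an ML model: scores a post from 0.0 (benign) to 1.0 (suspicious)."""
    suspect_phrases = ["going bankrupt", "secret recall", "insiders say"]
    hits = sum(phrase in post.lower() for phrase in suspect_phrases)
    return min(1.0, hits / len(suspect_phrases) + 0.2 * hits)

def triage(posts: list[str], threshold: float = 0.3) -> list[str]:
    """Flag posts above the threshold for human fact-checkers to review."""
    return [p for p in posts if suspicion_score(p) >= threshold]

posts = [
    "Loving the new store layout!",
    "Insiders say the retailer is going bankrupt - sell now!",
]
flagged = triage(posts)  # only the second post is queued for human review
```

The point of the two-stage design is the one the article describes: the automated score only decides what humans look at, not what gets reported or removed.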
The firm's 175 employees, based in the UK, US and India, are key to its success, says Mr Jain, because there are clear limitations to a technology-only approach. "It's important to have experts in our decision making," he says.
Factmata is another UK tech firm that uses artificial intelligence to monitor social media for misinformation.
Its chief executive, Antony Cousins, says that while humans can be involved in the monitoring work if clients request them, the artificial intelligence can be more objective. "We risk applying our own biases to the findings if we put any humans in the middle of the artificial intelligence and the results," he says.
Factmata's artificial intelligence is trained to identify different aspects of content in order to weed out the bad stuff.
He is referring to content that at first glance might be considered fake, but is actually satire or irony, or content that could well be drawing attention to issues for a good cause, which they don't want to label as bad.
Factmata's artificial intelligence also tries to find the source of a lie or rumour, the first account or accounts that started spreading it, and focuses on getting those removed.
He says more brands need to be aware of the risk that fake news on social media poses to them. A brand wrongly accused of racism, for example, can be badly damaged, as people may decide not to buy from it.
Prof Sandra Wachter is a senior research fellow in artificial intelligence at Oxford University.
She says that it's understandable that we use technologies such as artificial intelligence to deal with the problem.
Artificial intelligence can be a viable solution, she says, if we can agree on what constitutes fake information that should be removed from the web. But so far no such agreement has been found.
Is a given piece of content legitimate, or just someone's opinion, or a joke? Who makes the decision? If we humans can't even agree on this issue, how is an algorithm supposed to deal with it?
And AI may not be able to detect many of the subtle nuances in human language. According to research, algorithms can detect sarcasm and satire correctly less than half of the time.
In response, Factmata's boss stresses that the firm is not trying to act as the guardian of the truth. "Our role is not to decide what is true or false, but to identify the content we think could be fake or harmful to a degree of certainty," he said.