Natural language processing continues to find its way into unexpected corners. This time, it's phishing emails. Researchers have found that OpenAI's GPT-3 deep learning language model, along with other AI-as-a-service platforms, could significantly lower the barrier to entry for running spearphishing campaigns at scale.

For years, researchers have debated whether it would be worth scammers' effort to train machine learning algorithms that can generate convincing phishing messages. Mass phishing messages are simple and formulaic, yet they remain highly effective. Tailored, highly targeted spearphishing messages, though, are far more labor-intensive to compose, and that is where NLP could prove surprisingly handy.

At the Black Hat security conference in Las Vegas, a team from Singapore's Government Technology Agency presented a recent experiment in which they sent targeted phishing emails, some written by themselves and others generated by an AI-as-a-service platform, to 200 of their colleagues. The links in the messages were not actually malicious; they simply reported clickthrough rates back to the researchers. To the team's surprise, the AI-generated messages drew a higher clickthrough rate than the human-written ones.

The researchers emphasize how little expertise the approach demands. It costs millions of dollars to train a really good model, notes Eugene Lim, a cybersecurity specialist at the Government Technology Agency, but once that model is offered as AI-as-a-service, using it costs only a few cents and is remarkably easy: just text in, text out. You don't even need to run any code. You give the model a prompt and it hands back output. That lowers the barrier to entry and opens spearphishing up to a much larger pool of attackers; suddenly, every email sent at mass scale can be personalized for each recipient.

The researchers used OpenAI's GPT-3 platform in combination with other AI-as-a-service products focused on personality analysis to generate phishing emails tailored to their colleagues' backgrounds. These personality-analysis services use machine learning to predict a person's proclivities and mindset from behavioral inputs. By running the drafts through multiple services, the team built a pipeline that groomed and refined the emails before they went out. The results, the researchers say, sounded strangely human, and the platforms automatically supplied surprising specifics, such as mentioning a Singaporean law when instructed to generate content for people living in Singapore.

The researchers were impressed by the quality of the synthetic messages and by how many clicks they drew from colleagues, but they caution that the experiment was only a first step. The sample size was relatively small, and the target pool was unusually homogeneous in terms of both geography and employment. Moreover, both the human-written messages and those produced by the AI-as-a-service pipeline were crafted by insiders rather than by outside attackers trying to strike the right tone from a distance. Still, the findings spurred the team to dig deeper into how AI-as-a-service might come to play a role in phishing and spearphishing campaigns, says Tan Kee Hock, also a cybersecurity specialist at the Government Technology Agency.

OpenAI itself, for instance, has long been concerned about the potential for misuse of its own service and of similar offerings. The researchers note that OpenAI and other AI-as-a-service providers adhere to clear codes of conduct.
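What enforcing such a code of conduct can look like in practice is automated screening of the text moving through a service. The snippet below is a deliberately naive, hypothetical sketch of that idea in Python: a keyword filter that flags completion requests resembling common phishing lures. Everything in it, from the function name to the phrase list, is invented for illustration; real providers rely on trained classifiers and human review rather than simple string matching.

```python
from dataclasses import dataclass

# Illustrative phrases that often appear in credential-phishing lures.
SUSPICIOUS_PHRASES = (
    "verify your account",
    "click the link below",
    "your password will expire",
    "urgent action required",
)

@dataclass
class ScreeningResult:
    allowed: bool
    matched: list  # which suspicious phrases, if any, triggered the flag

def screen_prompt(prompt: str) -> ScreeningResult:
    """Flag completion requests whose prompts resemble common phishing lures."""
    text = prompt.lower()
    matched = [phrase for phrase in SUSPICIOUS_PHRASES if phrase in text]
    return ScreeningResult(allowed=not matched, matched=matched)

# A request like this would be held for review rather than served.
print(screen_prompt("Draft an email: urgent action required, verify your account"))
```

Returning the matched phrases, rather than a bare yes or no, matters in a design like this: it gives a human reviewer something concrete to judge when a flagged request is disputed.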
They also attempt to audit their platforms for potentially malicious activity, and may even try to verify users' identities to some degree.

In a statement, OpenAI said that misuse of language models is an industry-wide issue that it takes very seriously as part of its commitment to the safe and responsible deployment of AI. The company grants access to GPT-3 through its API, and it reviews every production use of GPT-3 before it goes live. To reduce the risk of API users putting GPT-3 to malicious ends, it imposes rate limits and other technical safeguards, and it uses audits and active monitoring systems to detect potential misuse early. OpenAI added that it is constantly working to improve the accuracy and effectiveness of its safety tools.
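The statement doesn't spell out what those rate limits look like, but the underlying mechanism is standard. The sketch below shows a minimal token-bucket limiter of the kind a provider might place in front of a text-generation endpoint; the class, the parameters, and the per-key policy are assumptions made for illustration, not OpenAI's actual implementation.

```python
import time

class TokenBucket:
    """Classic token-bucket limiter: requests spend tokens, time replenishes them."""

    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec       # tokens added back per second
        self.capacity = capacity       # maximum burst size
        self.tokens = float(capacity)  # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if one request may proceed, False if it should be throttled."""
        now = time.monotonic()
        # Replenish tokens for the elapsed time, capped at the bucket's capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Hypothetical policy: one bucket per API key, 2 requests/second, bursts of up to 5.
bucket = TokenBucket(rate_per_sec=2.0, capacity=5)
for i in range(8):
    print(f"request {i}: {'served' if bucket.allow() else 'throttled'}")
```

A real deployment would keep one bucket per API key in shared storage and answer throttled calls with an HTTP 429 response, but the replenish-and-spend logic at the core is the same.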