AI Can Write in English. Now It's Learning Other Languages

In recent years, artificial intelligence has enabled machines to produce passable snippets of English. Now it is moving on to other languages.
Aleph Alpha, a startup based in Heidelberg, Germany, has created one of the world's most powerful AI language models. Befitting its European roots, the model is fluent in English, French, Spanish, and Italian.

The model builds on machine-learning advances that let computers parse language and, at times, appear to wield it. Drawing on what it absorbed from crawling the web, it can write coherent articles on a given subject or answer general-knowledge questions.

Its answers, however, can differ from those of similar programs built in the US. Asked about the greatest sports team of all time, Aleph Alpha's model points to a famous German soccer team; a US-built model is more likely to name the Chicago Bulls or the New York Yankees. Ask the same question in French, and the cultural perspective shifts accordingly. The model can also take a question in one language and answer it in another.

Jonas Andrulis, Aleph Alpha's founder and CEO, who previously worked on AI at Apple, calls this kind of system transformational. If Europe lacks the technical skills to build such systems, he argues, it will be relegated to being a user of products from the US or China.

After decades of slow progress toward machines that grasp the meaning of words and sentences, machine learning is finally delivering results, and startups are eager to profit from AI's growing command of language.

OpenAI, an American startup, was the first to present this new type of AI language model, unveiling GPT-2 in 2019; it now offers researchers and startups access to a more powerful successor, GPT-3, through an API. Several other US companies are working on similar tools, including Anthropic and Cohere, both founded by OpenAI alumni.

A growing number of companies outside the US, in countries including South Korea, Israel, and China, are now developing general-purpose AI language tools. Each effort is distinct, but all rest on the same machine-learning advances.

Money is one driver of the rise of AI programs that use language in useful ways: many products can be built on top of them, from intelligent email assistants to programs that write working computer code.

How much these large language models appear to understand about the world is astonishing, says Chris Manning, a professor at Stanford.

Getting machines to understand language has long been a goal of AI. Language is powerful because words and concepts can be combined to express a vast range of thoughts and ideas. But that same flexibility makes meaning hard to decode, since words are frequently ambiguous, and no one has managed to capture all the rules of a language in a computer program (though some have tried).
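The ambiguity problem can be made concrete with a toy example. The sketch below is purely illustrative (the mini-grammar and sentence are textbook staples, not anything from Aleph Alpha or OpenAI): it counts how many parse trees a tiny hand-written grammar assigns to a classic ambiguous sentence, showing why rule-based approaches struggle.

```python
# Toy context-free grammar (illustrative): binary phrase rules plus a lexicon.
RULES = {
    "S":  [("NP", "VP")],
    "VP": [("V", "NP"), ("VP", "PP")],
    "NP": [("NP", "PP"), ("Det", "N")],
    "PP": [("P", "NP")],
}
LEXICON = {
    "I": "NP", "saw": "V", "the": "Det",
    "man": "N", "telescope": "N", "with": "P",
}

def count_parses(words, symbol, i, j):
    """Count distinct parse trees deriving words[i:j] from `symbol`."""
    if j - i == 1:  # single word: check the lexicon
        return 1 if LEXICON.get(words[i]) == symbol else 0
    total = 0
    for left, right in RULES.get(symbol, []):
        for k in range(i + 1, j):  # try every split point
            total += (count_parses(words, left, i, k)
                      * count_parses(words, right, k, j))
    return total

sentence = "I saw the man with the telescope".split()
# Two readings: I used a telescope, or the man was holding one.
print(count_parses(sentence, "S", 0, len(sentence)))  # → 2
```

Even this seven-word sentence has two valid readings, and adding rules to resolve such cases by hand quickly becomes unmanageable, which is part of why statistical approaches won out.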

Recent advances have shown, however, that machines can pick up remarkable language skills simply by reading text from the internet.

In 2018, Google researchers released details of a new kind of large neural network specialized for natural-language understanding, called Bidirectional Encoder Representations from Transformers, or BERT. It demonstrated that machine learning could yield new advances in language understanding, and it spurred efforts to explore what else might be possible.

A year later, OpenAI presented GPT-2, built by feeding huge amounts of text from the web into a very large language model. The approach demands enormous computing power and engineering skill, but it appears to unlock a new level of machine understanding: GPT-2 and its successor GPT-3 can generate paragraphs of coherent text on a given subject.
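The core recipe, ingest raw text and learn to predict what comes next, can be sketched with a toy bigram model. This is a drastic simplification (GPT-2 and GPT-3 are billion-parameter transformers, not count tables, and the corpus here is a few invented sentences), but the training-then-generation loop is the same in spirit:

```python
import random
from collections import defaultdict

# A tiny made-up "web corpus" standing in for billions of words of text.
corpus = (
    "machine learning helps machines understand language . "
    "machine learning models learn language from text . "
    "large language models generate coherent text ."
)

# "Training": record which words follow which in the corpus.
tokens = corpus.split()
follows = defaultdict(list)
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev].append(nxt)

def generate(start, n_words=8, seed=0):
    """Sample a continuation by repeatedly picking a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words):
        candidates = follows.get(out[-1])
        if not candidates:  # dead end: no observed continuation
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("machine"))
```

A model like GPT-3 replaces the count table with a deep network trained on hundreds of billions of words, which is what turns this crude word-chaining into paragraphs that read as coherent.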