Silicon Valley CEOs tend to focus on the positives when announcing their company's next big thing. In 2007, Apple's Steve Jobs praised the first iPhone's revolutionary user interface and breakthrough software.

Google CEO Sundar Pichai took a more measured approach when announcing LaMDA 2, a chatbot that can converse on any topic and that performed well in tests with Google employees. Pichai also announced a new app that will allow outsiders to try the bot, cautioning that while Google has improved safety, the model might still generate inaccurate, inappropriate, or offensive responses.

Pichai's qualified announcement reflects the mixture of excitement, puzzlement, and concern swirling around a string of recent breakthroughs in the capabilities of machine learning software that processes language.

The technology has improved the power of auto-complete and web search, and it has created new categories of productivity apps that help workers by generating code. When Pichai first disclosed the LaMDA project last year, he said it could eventually be put to work inside Google's search engine, virtual assistant, and workplace apps. Yet it remains unclear how to reliably control these new wordsmiths.

LaMDA, or Language Model for Dialogue Applications, is an example of what machine learning researchers call a large language model: software that builds up a statistical sense of the patterns of language by processing huge volumes of text. LaMDA was initially trained with more than a trillion words from online forums, Q&A sites, and other websites. That data helps the algorithm perform tasks like generating text in different styles, interpreting new text, or functioning as a chatbot.

If these systems work, they will be nothing like the frustrating chatbots in use today. Google Assistant and Amazon's Alexa can perform only certain pre-programmed tasks and deflect when presented with something they don't understand. What Google is proposing is a computer you can actually talk to.
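To make the statistical idea concrete, here is a deliberately tiny sketch in Python. It is a toy bigram model, nothing like LaMDA's far larger neural network, and the miniature corpus and function names below are invented for illustration. But the core principle is the same: learn which words tend to follow which from training text, then sample from those patterns to generate new text.

```python
import random
from collections import defaultdict

# Toy corpus standing in for the trillion-plus words a model like
# LaMDA is trained on; purely illustrative.
corpus = (
    "the model reads text . the model learns patterns . "
    "the patterns help the model generate text ."
).split()

# Build bigram statistics: for each word, record the words observed
# to follow it in the training text.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start: str, length: int = 10) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

A real large language model replaces the bigram counts with a neural network that conditions on far more context, but the training signal, predicting the next word from patterns in text, is essentially this.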

Chat logs show that LaMDA can at times be informative, thought-provoking, or even funny, and it can be hard to shake the sense that the technology could offer new insights into the nature of language and intelligence.

When Pichai announced the first version of LaMDA last year, he made clear that he sees it as a potential path to voice interfaces vastly broader than the often frustratingly limited capabilities of services like Amazon's Alexa and Apple's Siri. Now Google's leaders seem to believe they have found the path to creating computers you can genuinely talk with.

Large language models have also been shown to fluently produce dirty, nasty, and racist language; a lot of the content on the web is not good. OpenAI, the company behind the language generator GPT-3, has reported that its creation can perpetuate stereotypes about gender and race, and it asks customers to implement filters to screen out undesirable content.
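What such a filter looks like varies by application, and the sketch below is purely hypothetical: the function names and denylist are invented for illustration and are not OpenAI's actual tooling. A minimal screen might check generated text against a list of blocked terms before showing it to users; production systems generally rely on trained classifiers and human review instead, since keyword matching is easy to evade and prone to false positives.

```python
# Hypothetical output filter: withhold generated text containing a
# blocked term. The placeholder entries stand in for a real denylist.
BLOCKED_TERMS = {"badword1", "badword2"}

def is_safe(generated_text: str) -> bool:
    """Return False if the generated text contains a blocked term."""
    words = {word.strip(".,!?").lower() for word in generated_text.split()}
    return not (words & BLOCKED_TERMS)

def screen(generated_text: str) -> str:
    """Pass safe text through; replace flagged text with a refusal notice."""
    if is_safe(generated_text):
        return generated_text
    return "[response withheld by content filter]"

print(screen("a perfectly harmless sentence"))
print(screen("a sentence containing badword1"))
```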