It has long been speculated that artificial intelligence could one day replace traditional search engines. For now, though, the technology is not mature enough to be put in front of users, with problems including bias, toxicity, and a tendency to simply make information up.
According to a report from CNBC, Google CEO Sundar Pichai and AI chief Jeff Dean addressed the rise of ChatGPT at a recent all-hands meeting. One employee asked whether the chatbot's launch represented a missed opportunity for the company. Pichai and Dean reportedly responded that Google has to move more conservatively than a small startup because of the "reputational risk" posed by the technology.
Dean said the company is looking to get these capabilities into real products and into features that showcase the language model more prominently. But Google needs to get this right, he said: it's an area where the company has to be both bold and responsible, and it has to balance the two.
Google has a number of large AI language models equal in capability to OpenAI's. BERT, MUM, and LaMDA are among the models the company already uses to improve its search engine. One such improvement is parsing users' queries to better understand their intent. When a search suggests a user is going through a personal crisis, for example, MUM helps the engine recognize this and directs the user to help from groups like the Samaritans. To give users a taste of the technology, Google has also launched a number of apps, but it has limited their interactions with users in several ways.
OpenAI, too, was cautious in developing its technology, but it changed tack with the launch of ChatGPT. The result has been a storm of favorable publicity and hype, even as the company absorbs huge costs keeping the system free to use.
For all their flexibility in generating language, these models have well-known problems. They amplify social biases found in their training data, often denigrating women and people of color, and they are easy to trick. Users have also found that ChatGPT fabricates answers on a wide range of issues, from inventing historical and biographical data to justifying false and dangerous claims, such as telling users that adding crushed porcelain to breast milk can support the infant digestive system.
Dean acknowledged these challenges at the meeting. For search-like applications, he said, factuality issues are important, while for other applications bias and toxicity are also paramount. These models can simply make things up: if they aren't sure about something, they'll confidently tell you that elephants are the animals that lay the largest eggs, or something similar.
The question of whether a chatbot could replace a traditional search engine has been under consideration at Google for a long time. The same challenges that Pichai and Dean described to staff were at the center of the controversy that led to the firings of AI researchers Timnit Gebru and Margaret Mitchell. And in May last year, a group of Google researchers examined the question of AI in search. One of the biggest issues, the researchers said, is that large language models (LLMs) don't have a true understanding of the world: they are prone to hallucinating, and they are incapable of justifying their utterances.
Rival tech companies will no doubt be calculating whether launching an AI-powered search engine is worth the risk just to steal a march on Google. If you're new to the scene, after all, "reputational damage" isn't much of a problem.
OpenAI itself seems to be trying to damp down expectations, cautioning that it's a mistake to rely on ChatGPT for anything important right now, and that the system is a preview of progress.