Meta’s new learning algorithm can teach AI to multi-task

Data2vec is a model that can learn to understand the world in more than one way. A researcher at the Allen Institute for Artificial Intelligence in Seattle who works on vision and language calls it a promising advance toward generalized learning systems.

Although the same learning formula works for different skills, the model can learn only one at a time: once it has learned to recognize images, it must start over to learn to recognize speech. Giving an AI multiple skills at once is hard, but it is something the Meta AI team wants to look at next.
The researchers were surprised to find that their approach performed better than existing techniques at recognizing images and speech, and performed as well as leading language models at text understanding.
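To make the "same learning formula" idea concrete, here is a minimal, simplified sketch of the kind of self-distillation objective data2vec describes: a student network sees a masked view of the input and regresses the representations an EMA "teacher" copy produces from the full input. This is an illustrative assumption-laden sketch, not Meta's implementation; the `Encoder` class, masking scheme, and hyperparameters are placeholders, and the real method also averages representations from several teacher layers and uses modality-specific feature extractors.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Placeholder Transformer-style encoder mapping a sequence of
    embeddings to contextualized representations of the same shape."""
    def __init__(self, dim=64, depth=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.net = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        return self.net(x)

def data2vec_style_loss(student, teacher, x, mask_prob=0.15):
    """Student sees a masked view of x and regresses the teacher's
    representations at the masked positions."""
    # Teacher (an EMA copy of the student) encodes the unmasked input.
    with torch.no_grad():
        targets = teacher(x)

    # Mask a random subset of positions in the student's input.
    mask = torch.rand(x.shape[:2], device=x.device) < mask_prob  # (batch, seq)
    x_masked = x.clone()
    x_masked[mask] = 0.0  # simple zero-out masking, for illustration only

    preds = student(x_masked)
    # Regression loss only on the masked positions.
    return F.smooth_l1_loss(preds[mask], targets[mask])

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    """Exponential moving average of student weights into the teacher."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1 - decay)

# The same objective applies to any modality once the raw data has been
# embedded into a sequence of vectors (image patches, speech frames, or
# text tokens) -- which is what makes the recipe reusable across skills.
student = Encoder()
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)

x = torch.randn(8, 32, 64)   # a batch of 8 sequences of 32 embedding vectors
loss = data2vec_style_loss(student, teacher, x)
loss.backward()
ema_update(teacher, student)
```

The point of the sketch is that nothing in the objective itself is specific to images, speech, or text; only the step that turns raw data into embeddings changes per modality, which is why, in principle, one recipe can be reused, even though today's model still has to be trained on one modality at a time.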

Mark Zuckerberg is already thinking about metaverse applications. He posted on Facebook that this kind of AI will eventually be built into augmented-reality glasses, where an assistant could notice if you miss an ingredient while cooking, prompt you to turn down the heat, or help with more complex tasks.

The main lesson for Auli is that researchers should step out of their silos: you don't need to focus on one thing, he says, and a good idea might help across the board.