The larger the neural networks powering today's artificial intelligence, the more capable they can sometimes be. Recent leaps in machine language understanding, for example, have come from building some of the largest AI models ever and stuffing them full of text. A new cluster of computers could expand such networks to an almost inconceivable size, opening the way to further AI advances in areas such as robotics, computer vision, and language understanding.
Cerebras Systems, the startup that built the largest computer chip in the world, has now developed technology that lets a cluster of those chips run AI models more than a hundred times larger than today's most complex ones.
Cerebras says it is now capable of running a neural network with 120 trillion connections, the mathematical simulations of the interplay between biological neurons and synapses. Today's most advanced AI models have roughly a trillion connections and cost millions of dollars to train. Cerebras claims its hardware will run the necessary calculations in about half the time of existing hardware. The chip cluster and its power demands still won't come cheap, the company concedes, but it claims the system will at least cost less than the alternatives.
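To put those numbers in perspective, here is a back-of-the-envelope calculation (assuming each connection is stored as a 16-bit weight, a common choice in large-model training; the figures are illustrative, not from Cerebras) showing why a network of that size strains conventional hardware:

```python
# Rough memory estimate for storing a neural network's weights,
# assuming one 16-bit (2-byte) value per connection. Optimizer state
# and activations would add several multiples on top of this.

def weight_memory_tb(connections: float, bytes_per_weight: int = 2) -> float:
    """Return the memory needed for the weights alone, in terabytes."""
    return connections * bytes_per_weight / 1e12

# Roughly the scale of today's largest models: ~1 trillion connections.
print(f"1 trillion connections:   {weight_memory_tb(1e12):,.0f} TB")    # ~2 TB

# The network Cerebras says it can run: 120 trillion connections.
print(f"120 trillion connections: {weight_memory_tb(120e12):,.0f} TB")  # ~240 TB
```

Even at two bytes per weight, the larger network's parameters alone would dwarf the memory of any single chip on the market, which is why models of this size have to be spread across many processors.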
"We built it using synthetic parameters," says Andrew Feldman, founder and CEO of Cerebras, who will present details of the technology at a chip conference this Wednesday. "We know that we can do it, but we haven't trained a model, because we're infrastructure builders and there is no model yet."
Most AI programs today are trained using GPUs, a type of chip originally developed for generating computer graphics that happens to be well-suited to the parallel processing neural networks require. Large AI models are essentially divided across dozens or hundreds of GPUs connected with high-speed wiring.
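To illustrate what that division looks like, here is a minimal sketch of tensor model parallelism in plain Python with NumPy (lists of arrays stand in for separate GPUs, and the function names are illustrative rather than from any real framework): a single large weight matrix is split column-wise across devices, each device computes its slice of the output, and the slices are stitched back together.

```python
import numpy as np

# Minimal illustration of tensor (model) parallelism: one large weight
# matrix is split column-wise across "devices" (here just separate
# arrays). Each device multiplies the input by its shard, and the
# partial outputs are concatenated -- the step that real systems
# perform over high-speed interconnect between GPUs.

def shard_columns(weights: np.ndarray, num_devices: int) -> list[np.ndarray]:
    """Split a weight matrix into column blocks, one per device."""
    return np.split(weights, num_devices, axis=1)

def parallel_matmul(x: np.ndarray, shards: list[np.ndarray]) -> np.ndarray:
    """Compute x @ W by combining each device's partial result."""
    partial_outputs = [x @ w for w in shards]       # one matmul per device
    return np.concatenate(partial_outputs, axis=1)  # the "all-gather" step

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 512))     # a small batch of activations
w = rng.normal(size=(512, 2048))  # a layer too big for one device

shards = shard_columns(w, num_devices=4)
assert np.allclose(parallel_matmul(x, shards), x @ w)
```

The same idea, scaled up, is why interconnect speed between chips matters as much as the chips themselves: every layer's partial results must be exchanged before the next layer can run.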
GPUs remain a good choice for AI, but as models grow larger and companies look for more powerful systems that deliver better results, there is room for more specialized designs. Commercial interest and recent advances have fueled a Cambrian explosion of new chips designed specifically for AI, and the Cerebras chip is among the most extreme examples. Where most semiconductor designers cut a wafer into pieces to make individual chips, Cerebras uses the whole wafer, packing in far more computational power and letting its cores and computational units communicate more efficiently. While a GPU usually has several hundred cores, Cerebras' latest chip, the Wafer Scale Engine Two (WSE-2), contains 850,000.