One of the most common ways for organizations to scale and run increasingly large and complex artificial intelligence workloads is with the open source Ray framework.

Ray allows machine learning models to scale across hardware resources and can also be used to support MLOps workflows. Over the last two years, Ray has gone through a number of releases.

Today at the Ray Summit, Ray 2.0 was released into general availability. Ray 2.0 includes the Ray AI Runtime (AIR), which is intended to be a runtime layer for executing machine learning services.

Anyscale, the lead commercial backer of Ray, also announced a new enterprise platform for running Ray, as well as a new round of funding co-led by existing investors Addition and Intel Capital.

"Ray started as a small project at UC Berkeley and it has grown far beyond what we imagined at the outset," said Robert Nishihara during his keynote at the Ray Summit.

OpenAI’s GPT-3 was trained on Ray

The reach and importance of Ray is hard to overstate.

During his keynote, Nishihara noted that big names in the IT industry are using Ray. One company that uses Ray to help scale its machine learning platform is Shopify, which makes use of PyTorch and TensorFlow. Grocery delivery service Instacart uses Ray to help train thousands of machine learning models. Amazon, too, is a Ray user, according to Nishihara.

OpenAI, the group behind the GPT-3 large language model and DALL-E image generation technology, also relies on Ray.

OpenAI co-founder Greg Brockman said at the Ray Summit that OpenAI uses Ray to train its biggest models. "It's been helpful for us to be able to scale up to a really large scale," he said.

Brockman said that he sees Ray as a developer-friendly tool, with the added benefit that it is a third-party tool that OpenAI doesn't have to maintain itself.

"When something goes wrong, we can complain on GitHub and get an engineer to work on it," Brockman said.

More machine learning goodness comes built into Ray 2.0

With Ray 2.0, Nishihara said, the primary goal was to make it simpler for more users to benefit from the technology, while providing performance improvements that benefit users large and small.

Organizations can get tied into a specific framework for a certain workload, but over time realize they also want to use other frameworks, Nishihara explained. For example, an organization might start out with just TensorFlow, but then want to use PyTorch and Hugging Face in the same machine learning workload. With Ray AIR in Ray 2.0, it will now be easier for users to unify their ML workloads.

Ray 2.0 also aims to help solve the challenge of model deployment with the Ray Serve deployment graph capability.

It is one thing to deploy a few machine learning models; it is quite another to deploy hundreds of models that depend on one another. Ray 2.0 includes Ray Serve deployment graphs, which address this problem by providing a simple Python interface for model composition.

The goal of Nishihara and Ray is to make it easier to build and manage machine learning workloads.

Nishihara said that they want to get to the point where any developer or organization can succeed with the use of artificial intelligence.

The mission of VentureBeat is to be a digital town square for technical decision-makers to gain knowledge. You can learn more about memberships.