This article explores the business of artificial intelligence (AI).

There has been a lot of interest in large language models in the last few years, and we have seen LLMs used for many exciting tasks, such as writing articles, designing websites, and even writing code.

But there is a chasm between demonstrating that a new technology can do something cool and using that same technology to create a successful product with a workable business model.


Last week, Microsoft launched what I believe is the first real LLM product: GitHub Copilot, an application with strong product/market fit that is hard to beat, is cost-efficient, and can become a source of great profit.

Copilot also highlights two important lessons. First, LLMs are fascinating, but they become useful when applied to specific tasks rather than pursued as artificial general intelligence. Second, large tech companies like Microsoft and Google have an unfair advantage in bringing such products to market.

Specialized LLMs


Copilot is an artificial intelligence programming tool that is installed as an extension on popular IDEs. It provides suggestions as you write code: it can complete a line, generate entire blocks of code, and more.

Copilot is powered by Codex, a version of GPT-3, the large language model that made headlines for its ability to perform a wide range of tasks when it was first released. Codex has been fine-tuned for programming tasks and produces impressive results.

The success of Codex underscores one important fact: when it comes to putting LLMs to use, specialization beats generalization. When OpenAI first trained GPT-3, it did not have coding assistance in mind; the model was meant to be a general-purpose language model that could, among other things, translate from one language to another.

By focusing on one specific area, Codex and Copilot have proven to be great hits. Codex can't write poetry or articles the way GPT-3 can, but it has proven to be very useful for developers of different levels of expertise. Codex is also more memory- and compute-efficient than GPT-3. And because it has been trained for a specific task, it is less prone to the pitfalls that models like GPT-3 often fall into.

Copilot doesn't know anything about computer code and GPT3 doesn't know anything about human language. A transformer model has been trained on millions of code repository. It will try to guess the next sequence of instructions based on a prompt.

Thanks to its huge training corpus and large neural network, Copilot can make good predictions. But it can also make dumb mistakes that even the most inexperienced programmer would avoid, because it doesn't think the way a programmer does. It can't reason in steps, think about user requirements and experience, or design and build successful apps. It isn't a replacement for humans.

Copilot’s product/market fit


One of the key milestones for any product is product/market fit: proving that it can solve a problem better than the alternatives in the market. On this front, Copilot has been a huge hit.

More than one million developers have used Copilot since it was released a year ago.

In files where it is activated, Copilot accounts for 40 percent of the written code. While there are limits to Copilot's capabilities, developers and engineers say that it greatly improves their productivity.

In some use cases, Copilot competes with Stack Overflow and other code forums, where users must search for the solution to a specific problem. Here, the added value of Copilot is visible and obvious: instead of leaving their IDE and searching the web for a solution, developers get most of the work done for them right where they are writing code.

In other cases, Copilot competes against manually writing frustrating boilerplate, such as configuring matplotlib charts in Python. Copilot's output relieves developers of most of that burden.
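As a hypothetical example of that boilerplate, the chart configuration below is the kind of code a developer might otherwise type out by hand; a Copilot-style assistant can typically draft most of it from a one-line comment. The data and labels are made up for illustration.

```python
# Hypothetical example: the kind of matplotlib boilerplate a tool like
# Copilot can draft from a comment such as
# "plot monthly revenue as a labeled bar chart".
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr"]
revenue = [12.4, 15.1, 13.8, 17.2]  # illustrative figures, in $1,000s

fig, ax = plt.subplots(figsize=(6, 4))
ax.bar(months, revenue, color="steelblue")
ax.set_title("Monthly revenue")
ax.set_xlabel("Month")
ax.set_ylabel("Revenue ($1,000s)")
ax.grid(axis="y", linestyle="--", alpha=0.5)
fig.tight_layout()
plt.show()
```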

In both kinds of use cases, Copilot has cemented itself as a superior solution to problems many developers face every day. Developers have told me that Copilot saves them a lot of time on writing test cases, setting up web servers, documenting code, and many other tasks that used to require a lot of manual effort.
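To illustrate the test-writing use case, the snippet below shows the kind of pytest-style cases an assistant like Copilot can typically propose from little more than the test names. The function and tests here are invented for the example, not taken from Copilot's actual output.

```python
# Hypothetical example: given the function below, a Copilot-style assistant
# can usually draft pytest-style test cases from the test names alone.
def slugify(title: str) -> str:
    """Convert a title to a lowercase, hyphen-separated slug."""
    return "-".join(title.lower().split())


def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"


def test_slugify_collapses_whitespace():
    assert slugify("  Large   Language  Models ") == "large-language-models"
```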

Distribution and cost-efficiency


Product/market fit is just one component of a successful product. If you can't find the right distribution channels for your product, you're doomed. And you will need a plan to maintain your edge over competitors, prevent other companies from replicating your success, and make sure you can continue to deliver value down the road.

Microsoft needed to bring together several important pieces in order to turn Copilot into a successful product.

First, it needed the right technology, which it secured through its exclusive license to OpenAI's technology. Microsoft is one of OpenAI's main financial backers, and OpenAI has stopped open-sourcing its models. Codex and Copilot were built on top of GPT-3.

Other large tech companies have been able to create language models similar to GPT-3. But the cost of training and running LLMs is very high.

Even evaluating a model that is ten times smaller than Codex takes a lot of money. Ben Allal referred to a benchmark that costs thousands of dollars to run on such a smaller model.

Ben Allal also noted security issues, because evaluating the model requires executing untrusted generated programs, which may be malicious.

Training costs can range from tens to hundreds of thousands of dollars, depending on the model's size and the number of experiments needed.

Von Werra said that inference is one of the biggest challenges: getting a 10-billion-parameter model to return suggestions fast enough to feel responsive to the user is a serious engineering problem.

Microsoft has an advantage here as well. It has built a large cloud infrastructure specialized for machine learning models, which runs inference and serves suggestions at low latency. That is why Microsoft can provide Copilot at an affordable price, and why students and maintainers of popular open-source repositories can use Copilot for free.

The pricing model has made developers much more receptive to Copilot than they otherwise would have been.

"As a machine learning engineer, I know that a lot goes into building products like these, especially Copilot, which provides suggestions with sub-millisecondslatency." It's not possible to build an infrastructure that serves these kinds of models for free for a long time.

That said, code-generating LLMs are not impossible for others to build and run.

In terms of the compute and data needed to build these models, replication is quite feasible, and there have already been a few replications of Codex, such as those from Meta and the CodeGen project. Many companies could do this if they wanted to, though there is significant engineering involved in turning the models into a polished product.

This is where the third piece of the puzzle comes in: distribution. Thanks to its acquisition of GitHub, Microsoft was able to put Copilot into the hands of millions of developers. Microsoft also owns two of the most popular IDEs, Visual Studio and VS Code, with hundreds of millions of users between them. Unlike a rival product that would have to win over developers from scratch, Copilot sits directly inside the tools developers already use, so adoption is not a worry.

For the moment, Microsoft seems to have solidified its position as the leader in the emerging market for AI-assisted software development. The market can still change direction, of course. As I have pointed out before, large language models will allow for the creation of new applications and markets, but they won't change the fundamentals of sound product management.

TechTalks is a publication that examines trends in technology and how they affect the way we live and do business. We also discuss the evil side of technology: the darker implications of new tech and what we need to look out for. The original article can be found here.