Midjourney began alpha testing version 4 of its text-to-image synthesis model for subscribers on Saturday. The new model renders noticeably more detail than its predecessor, prompting some artists to remark that v4 makes generating images too easy.
Midjourney opened to the public in March and gained a large following, in part because it was publicly available before Stable Diffusion and DALL-E. Before long, it made headlines by winning art contests and appearing on stock illustration websites.
Since then, Midjourney has improved its model with further training and new features; the current production model is called "v3". Midjourney v4 is now being tested by thousands of members of the service's Discord server, who can try it by appending "--v 4" to their prompts.
In an announcement, Midjourney's founder wrote that v4 is a completely new model: the first trained on a new Midjourney AI supercluster, and in the works for over nine months.
In our tests of Midjourney's v4 model, we found that it rendered far more detail than v3. Some results we've seen can be hard to distinguish from actual photos.
According to the announcement, other features of v4 include:
- Vastly more knowledge (of creatures, places, and more)
- Much better at getting small details right (in all situations)
- Handles more complex prompting (with multiple levels of detail)
- Better with multi-object / multi-character scenes
- Supports advanced functionality like image prompting and multi-prompts
- Supports --chaos arg (set it from 0 to 100) to control the variety of image grids
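Combining the version flag with the new chaos argument, a prompt entered in Midjourney's Discord might look like the following (the subject text here is our own illustrative example, not from the announcement):

```
/imagine prompt: a lighthouse on a rocky coast at sunset --v 4 --chaos 50
```

Higher chaos values produce more varied results across the four images in the generated grid.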