DreamFusion is an AI-powered text-to-3D generator.
Sort of. The paper is a proof of concept. DreamFusion is an evolution of Dream Fields, Google's earlier text-to-3D generator. Rather than training on any 3D data, DreamFusion optimizes a neural network to produce synthetic 3D scenes, guided only by a pre-trained text-to-image model (which itself learned from 2D images).
What is the twist? Where Dream Fields relied on OpenAI's CLIP for guidance, DreamFusion uses a different pre-trained model: Google's own text-to-image system, Imagen.
In other words, Google figured out how to swap in its own tech rather than OpenAI's. Things should be kept in-house.
DreamFusion is "our new method for text-to-3D," according to a co-author of the proof-of-concept paper, a research scientist at the search engine giant: the team used a pre-trained text-to-image model to optimize a NeRF, with no 3D data required.
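To make that recipe a little more concrete, here is a minimal, purely illustrative PyTorch sketch of the kind of loop the paper describes: render the current 3D scene from a random camera, corrupt the render with noise, let a frozen pre-trained text-to-image diffusion model judge it against the prompt, and push the gradient back into the 3D scene. Everything here (`TinyNeRF`, `ToyDiffusionPrior`, the fake text embedding and noise schedule) is a toy stand-in, not the paper's actual models or its exact objective.

```python
# Toy sketch of a DreamFusion-style loop: a frozen 2D prior supervises a 3D scene.
# TinyNeRF and ToyDiffusionPrior are hypothetical placeholders, not the real models.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Toy scene model: maps a camera pose to an RGB image (stand-in for a NeRF renderer)."""
    def __init__(self, res=32):
        super().__init__()
        self.res = res
        self.net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3 * res * res))

    def forward(self, cam_pose):
        return torch.sigmoid(self.net(cam_pose)).view(3, self.res, self.res)

class ToyDiffusionPrior(nn.Module):
    """Frozen stand-in for a pre-trained text-to-image diffusion model's noise predictor."""
    def __init__(self, text_dim=16):
        super().__init__()
        self.net = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.text_proj = nn.Linear(text_dim, 3)

    def forward(self, noisy_img, text_emb):
        cond = self.text_proj(text_emb).view(1, 3, 1, 1)  # crude text conditioning
        return self.net(noisy_img) + cond

res, text_dim = 32, 16
nerf = TinyNeRF(res)
prior = ToyDiffusionPrior(text_dim)
for p in prior.parameters():           # the 2D prior stays frozen; only the 3D scene is trained
    p.requires_grad_(False)

text_emb = torch.randn(1, text_dim)    # stand-in for an embedding of the text prompt
opt = torch.optim.Adam(nerf.parameters(), lr=1e-3)

for step in range(100):
    cam_pose = torch.randn(3)                     # sample a random camera
    img = nerf(cam_pose).unsqueeze(0)             # render the current 3D scene from that view
    noise = torch.randn_like(img)
    t = torch.rand(1)                             # random noise level
    noisy = (1 - t) * img + t * noise             # corrupt the rendering
    noise_pred = prior(noisy, text_emb)           # frozen prior "critiques" the noisy render
    # Score-distillation-flavoured update: treat the residual as a gradient on the
    # rendered image and backpropagate it into the scene parameters.
    grad = (noise_pred - noise).detach()
    loss = (grad * img).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the real system, the stand-ins are replaced by a volumetric NeRF renderer and the frozen Imagen model, but the overall structure is the same: a pre-trained 2D model acts as the only supervision for a 3D scene.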
The DreamFusion models are pretty impressive, with high-quality normals, surface geometry and depth, and they can be relit.
They have all of the right elements, even if they aren't as photorealistic as DALL-E 2's images: the proportions and depth are correct, and the new tech is a clear visual step up from Dream Fields.
It's not clear when DreamFusion will be available to the public, but we can already see a number of applications for it.