Every year, Adobe shows off what it calls "Sneaks," R&D projects that may or may not make their way into commercial products. Ahead of this year's conference, we were given a preview of one such project.
Adobe calls it Project Clever Composites because of the way it uses artificial intelligence for image processing: it automatically predicts an object's scale, determines the best place to put it in an image, estimates the lighting conditions and generates shadows in line with the image's aesthetic.
Adobe describes it this way:
Image compositing lets you add yourself to a photo to make it look like you were there. Or maybe you want to create a photo of yourself camping under a starry sky, but you only have images of the starry sky and of yourself camping during the daytime.
Adobe tells us that doing this manually can be a lot of work and take a long time. Usually, it involves finding a suitable image of an object or subject, carefully cutting that object or subject out of the image and editing its color, tone, scale and shadows to match the rest of the scene into which it is being pasted. Adobe's prototype does away with that manual process.
A research engineer at Adobe said the company has developed a more intelligent and automated technique for image object compositing. The technology uses multiple deep learning models, trained on millions of data points, for tasks such as semantic classification, scale and location prediction, lighting estimation and shadow generation, among others.
Image credit: Adobe
Each model in the image-compositing system is trained separately for a specific task, such as searching for objects consistent with a given image. A separate, artificial intelligence-based auto-compositing pipeline is then used to predict the object's scale and location within the target image.
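To make that division of labor concrete, here is a minimal Python sketch of how a chain of independently trained, task-specific models might be wired together. It is purely illustrative: the class names, interfaces and toy heuristics are assumptions based on the tasks described in this article, not Adobe's actual models or code.

```python
# Illustrative sketch only. The stage names and numbers below are assumptions
# based on the tasks described in the article (scale/location prediction,
# lighting estimation, shadow generation), not Adobe's actual models.
from dataclasses import dataclass
from typing import Tuple


@dataclass
class Placement:
    x: int          # predicted left offset in the background image
    y: int          # predicted top offset in the background image
    scale: float    # predicted scale factor for the cut-out object


class ScaleLocationModel:
    """Stand-in for a model trained only to predict scale and location."""
    def predict(self, bg_size: Tuple[int, int], obj_size: Tuple[int, int]) -> Placement:
        # Toy heuristic in place of a learned prediction: place the object in
        # the lower third of the frame at roughly a quarter of the frame width.
        bg_w, bg_h = bg_size
        obj_w, obj_h = obj_size
        scale = (bg_w * 0.25) / obj_w
        x = int(bg_w * 0.5 - obj_w * scale * 0.5)
        y = int(bg_h * 0.66 - obj_h * scale * 0.5)
        return Placement(x, y, scale)


class LightingModel:
    """Stand-in for a model trained only to estimate scene lighting."""
    def predict(self, bg_size: Tuple[int, int]) -> dict:
        return {"direction": "top-left", "intensity": 0.7}


class ShadowModel:
    """Stand-in for a model trained only to generate a matching shadow."""
    def predict(self, placement: Placement, lighting: dict) -> dict:
        return {"anchor": (placement.x, placement.y), "softness": lighting["intensity"]}


def auto_composite(background_size: Tuple[int, int], object_size: Tuple[int, int]) -> dict:
    """Chain the specialized models: each handles one sub-task,
    and the pipeline glues their outputs together."""
    placement = ScaleLocationModel().predict(background_size, object_size)
    lighting = LightingModel().predict(background_size)
    shadow = ShadowModel().predict(placement, lighting)
    return {"placement": placement, "lighting": lighting, "shadow": shadow}


if __name__ == "__main__":
    print(auto_composite((1920, 1080), (400, 600)))
```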
Users can composite objects into an image with just a few clicks.
Automatic object compositing requires a number of components to work in concert, and Adobe says its technology is what makes it possible for all of them to work together.
For now, the system is a tech demo, but work is already underway on an improved version that will support 3D objects, not just 2D ones.
The goal is to make creating realistic (and clever) composites as simple as a 2D or 3D drag-and-drop. That should make life easier for those who work on image design and editing, since they will no longer have to search for an object to add, carefully cut it out and adjust its color, tone or scale by hand.