While many developers are researching how generative AI can create entire 3D objects from scratch, Adobe is already using its Firefly AI model to streamline existing 3D workflows. At the Game Developers Conference on Monday, Adobe introduced two new integrations for its Substance 3D design suite that let 3D artists quickly generate assets for their projects from text descriptions.
The first is the "Text to Texture" feature of Substance 3D Sampler, which Adobe says can generate "realistic or stylized textures" based on descriptive prompts, such as scaled skin or woven fabric. These textures can then be directly applied to 3D models, eliminating the hassle of designers searching for appropriate reference materials.
The second is the "Generate Background" tool in Substance 3D Stager, which lets designers use text prompts to generate background images for the objects they compose in 3D scenes. The clever part is that both features actually rely on 2D imaging techniques, just like the Firefly-powered tools Adobe previously shipped in Photoshop and Illustrator. Firefly does not generate 3D models or files; instead, Substance takes 2D images generated from text descriptions and applies them in ways that read as 3D.
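To make that distinction concrete, here is a minimal sketch of the general technique, not Adobe's actual implementation: a text-to-image model produces a flat 2D picture, which is then wrapped onto an existing mesh through its UV layout. The `generate_image_from_prompt` function is a hypothetical stand-in for any text-to-image backend, and the mesh handling uses the open-source trimesh library.

```python
# A sketch of wrapping a generated 2D image onto a 3D mesh via UV mapping.
# generate_image_from_prompt is a hypothetical placeholder, not a real API.

import trimesh
from PIL import Image


def generate_image_from_prompt(prompt: str) -> Image.Image:
    # Hypothetical stand-in for a text-to-image call (Firefly's internals
    # are not public); assume it returns e.g. a 1024x1024 RGB image.
    raise NotImplementedError


def apply_generated_texture(mesh_path: str, prompt: str) -> trimesh.Trimesh:
    mesh = trimesh.load(mesh_path, force="mesh")

    # The "3D look" comes entirely from projecting a flat image through
    # the mesh's UV coordinates, so the mesh must already have a UV layout.
    uv = getattr(mesh.visual, "uv", None)
    if uv is None:
        raise ValueError("mesh needs a UV layout before it can be textured")

    texture = generate_image_from_prompt(prompt)  # purely 2D output
    material = trimesh.visual.material.SimpleMaterial(image=texture)
    mesh.visual = trimesh.visual.TextureVisuals(uv=uv, material=material)
    return mesh
```

The same trick underlies Generate Background: a flat generated image is simply placed behind the composed scene rather than wrapped around a model.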
The new "Text to Texture" and "Generate Background" features can be found in the beta versions of Substance 3D Sampler 4.4 and Stager 3.0, respectively. Sébastien Deguy, Head of Adobe's 3D and Metaverse division, said that both features are free during the beta testing period and have been trained on Adobe's own assets, including proprietary reference materials and licensed Adobe stock.