Adobe’s new Firefly model makes it easier to use Photoshop’s AI tools


Adobe is adding some new generative AI tools to its Photoshop creative software that aim to give users additional ways to control the designs they generate. Powered by Adobe’s new Firefly Image 3 foundation model, these new tools are available today via the Photoshop beta desktop app and will be generally available “later this year,” according to Adobe’s press release.

The most notable tool is Reference Image, which lets users upload an image that guides the output generated by Adobe’s AI, matching its style and color. For example, instead of repeatedly tweaking a prompt description like “a blue vintage truck with flower decals,” users can instead provide a reference image that Photoshop will use as a guide.

“Prompting is a pain in the butt,” Ely Greenfield, chief technology officer for Digital Media at Adobe, told The Verge. “Why spend an hour trying to craft a three-paragraph prompt if you have an image that you’ve created that’s exactly the thing you want to reference? The saying ‘a picture is worth a thousand words’ applies here.”

The uploaded reference image guides the results generated by Photoshop’s AI, sparing users from having to make frequent adjustments to text prompts.
Image: Adobe

Users are expected to have the rights to use images they want to reference. Greenfield told The Verge that a message will flag this ownership requirement when the tool is first used, and that the company is working on a universal “do not train” tag for Adobe’s Content Authenticity Initiative that will also block images from being used as a reference. Images uploaded as reference materials won’t be used to train Firefly. Despite the ownership responsibility being placed on users, Adobe says this new referencing tool is still “safe for commercial use” — one of the most notable advantages that Adobe claims Firefly has over rival generative AI models.

Additional generative AI tools available in the Photoshop beta app include Generate Background, which replaces and creates new background images for things like product photography, and Enhance Detail, which increases clarity and makes images appear sharper.

Adobe is pitching Generate Background as a tool for quickly adding variety to product photography without needing reshoots.
Image: Adobe

Also available is Generate Similar, which uses one of the three images produced by Photoshop’s Firefly tools as a reference to generate similar-looking content, and Generate Image, which for the first time lets users start with a blank page and generate an entire image from a text description.

Firefly tools output three generated results to choose from in Photoshop — if there’s one you’d like to see similar versions of, Generate Similar will do just that.
Image: Adobe

Adobe’s third-generation Firefly model, which has higher-quality image generation capabilities compared to its predecessor, is also available in a public, global beta for anyone to try outside of Photoshop via the Firefly web application. Adobe says its latest Firefly model delivers “photorealistic quality like never before with better lighting, positioning, and attention to detail.” Firefly Image 3 is more capable than the previous Firefly model at understanding long, descriptive text prompts, and can produce clearer text in the images it generates.

Rather than using masks or other more labor-intensive workflows, users can quickly change things like color with the new Adjustment Brush.
Image: Adobe

Outside of generative AI, Adobe is also adding some new, standard tools to Photoshop that can speed up creative processes. These include an Adjustment Brush that lets Photoshop users make non-destructive changes, such as color adjustments, to specific sections of an image. There’s also a new Adjustment Presets feature that can quickly change an image’s look using filters, and an improved Font Browser that gives users real-time access to the over 25,000 fonts in Adobe’s cloud without leaving the Photoshop application.
