Adobe Firefly can be used to generate images from its website by typing in prompts. But creating images from scratch is just one small part of what Firefly can do. You can also use Firefly from within Adobe's industry-standard paint program Photoshop, where it excels at extending and modifying images.
Photoshop requires a subscription. The "Photography Plan" includes Photoshop and Adobe Lightroom for $9.99 per month, along with 20 GB of cloud storage and up to 1,000 Firefly generations per month. (Pay close attention to these plans, because if you start with a free trial for new users, you could end up on a different plan that costs twice as much for the same software.)
Needing a subscription to use a paint program is something that many people complain about, but the 2023 upgrade to Photoshop did make subscribing seem like a much more compelling option. Photoshop's new embrace of AI imaging gave it an effective AI-powered Remove tool, an AI-powered Select Subject function that can mask out people and other complex subjects, as well as the generative AI model called Adobe Firefly. Firefly works remotely when you use it to create, extend, or modify images, but the results are seamlessly passed back into layers in Photoshop.
While Firefly isn't quite in the same league as competing image generators such as DALL-E or Midjourney in interpreting prompts and creating new images (more about this below), it really shines when used from within Photoshop to power functions such as Generative Fill and Generative Expand.
With Generative Fill, you select an area of an image, type a prompt for what you want to see there, and Adobe Firefly does what's commonly called inpainting: it regenerates the selected area, seamlessly matching the style, lighting, and surroundings of the selection, while replacing its contents with something that matches the prompt. This is probably the most amazing photo retouching tool ever created, and it works not just on photographs but also on other kinds of art, including AI-generated images.
If you generate images in Midjourney, you'll notice that it supports its own kind of inpainting, called "Vary (Region)," which works very much like Photoshop's Generative Fill. However, Midjourney only allows this option before you upscale your images, not afterwards. If any areas of your high-resolution images didn't come out well in the Creative Upscale process, Photoshop can be a great way to fix them. Here's an example:
The original concept art, as generated in Midjourney.
A new version, with certain areas fixed with Photoshop's Generative Fill.
Here's a little Photoshop trick I used in this example: After selecting each area to fix, I pressed Q (for Quick mask), then Ctrl-L (Levels), and dragged the lower slider down from 255 to a much lower value, such as 50 or 100. This faded out the selection, making it a transparent mask instead of an opaque one. Then I pressed Q again to get out of quick mask mode, typed the prompt I wanted into the contextual taskbar, and pressed Generate. Fading out the mask is important if you want Generative Fill to respect the original contents of the area, instead of drawing something entirely new to completely replace what was there.
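Conceptually, a faded quick mask acts like a partial alpha blend: instead of letting the generated pixels completely replace the originals, each pixel is mixed in proportion to the mask value. The sketch below is only an illustration of that blending math with numpy, not Photoshop's actual implementation; the images and mask values are made up for the example.

```python
import numpy as np

def blend_with_mask(original, generated, mask):
    """Blend two images with a mask in [0, 255]; 255 = fully generated."""
    alpha = mask.astype(float) / 255.0
    return (1 - alpha) * original + alpha * generated

original = np.full((4, 4), 200.0)    # bright original pixels
generated = np.full((4, 4), 40.0)    # darker generated content

opaque_mask = np.full((4, 4), 255)   # untouched selection: full replacement
faded_mask = np.full((4, 4), 100)    # Levels output slider dragged to 100

full = blend_with_mask(original, generated, opaque_mask)
partial = blend_with_mask(original, generated, faded_mask)

print(full[0, 0])     # 40.0 -- the original is completely replaced
print(partial[0, 0])  # ~137.3 -- the original still shows through
```

This is why fading the mask makes Generative Fill behave more like a "modify what's there" tool than a "replace what's there" tool: at a mask value of 100, the original contributes about 60% of each pixel.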
Photoshop's little window for generating images seems like a work in progress. It has no history of your previously used prompts and no sliders to control the image generation. If it had a slider for how much it replaced or respected the original image, I wouldn't need to undo a generation, re-adjust the mask's transparency, and re-type the prompt every time I wanted to try a fill with a different amount of change. The window does have an option to load a separate file as a reference image, although again there's no slider for the weight or amount of influence the reference image will have.
Besides Generative Fill, the other amazing new function that Photoshop uses Firefly for is Generative Expand. This lets you un-crop an image, seamlessly extending it beyond its original borders. Even though other models can do this basic technique (called outpainting), the results from Generative Expand are some of the best I've seen. Even with high-resolution images, Adobe Firefly comes up with impressively seamless results, matching the contents, style, perspective, and lighting from the original shot as it invents new areas of the image.
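The basic mechanics of un-cropping can be sketched in a few lines: the canvas is padded out to the new aspect ratio, and a mask marks which pixels are new and need to be generated, while the original pixels are preserved. This numpy sketch illustrates only that setup step, not anything about how Firefly's model actually fills the masked region; the sizes here are toy values.

```python
import numpy as np

def expand_canvas(image, pad_left, pad_right, fill=0):
    """Pad an image horizontally and return it with a generation mask."""
    h, w = image.shape
    expanded = np.full((h, w + pad_left + pad_right), fill, dtype=image.dtype)
    expanded[:, pad_left:pad_left + w] = image
    # Mask is 1 where new content must be generated,
    # 0 where the original pixels must be kept intact.
    mask = np.ones_like(expanded)
    mask[:, pad_left:pad_left + w] = 0
    return expanded, mask

square = np.full((4, 4), 7, dtype=np.uint8)   # stand-in for a square image
wide, mask = expand_canvas(square, 2, 2)      # un-crop to twice the width

print(wide.shape)       # (4, 8)
print(int(mask.sum()))  # 16 new pixels for the model to invent
```

The hard part, of course, is what happens inside the masked region, where the model has to continue the original image's content, perspective, and lighting.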
If you use DALL-E 3 to generate images, you've probably noticed that it doesn't support many different aspect ratios. But that's what Photoshop is for. If you take a square image from DALL-E into Photoshop, it's quick and easy to make a wide-screen version:
The original DALL-E 3 image fit my prompt well, but the composition isn't the best (especially with the alien facing right, without any look-space on the right side of the frame).
A new version of the image, extended using Photoshop's crop tool set to Generative Expand. I typed the simple prompt "Stained Glass window" to extend the canvas, then used Generative Fill to add the planet Saturn and a star to selected areas.
If you use Stable Diffusion locally on your computer, you're in luck. Not only can Photoshop be used for final retouching of images, but there are also plug-ins that better integrate Photoshop with Stable Diffusion. You can use AI functions that Firefly doesn't support, such as painting in Photoshop in real time while Stable Diffusion transforms the look or style of the image according to your prompt or settings. Photoshop is one of only two paint programs that offer this level of AI integration via plug-ins, the other being the open-source paint program Krita.
Images generated by Firefly need to pass through a content filter that tries to detect any inappropriate content in what it is generating, or in the surrounding image that it is extending or modifying. This means that you may not be able to un-crop an image if there's a subject in the image that Firefly's content filter doesn't approve of.
In some cases, you may need to create a new layer, scribble over the subject that is causing the problem to cover it up, and then try again with your Generative Expand or Generative Fill. You can delete the cover-up layer afterwards, but even when workarounds like this are possible, it is strange that you can find yourself in an adversarial relationship with your own paint program.
Note that content filtering only applies to the functions that use Firefly. Other AI-powered tools in Photoshop such as the extremely useful Remove tool, or the intelligent Select Subject function, can be used with any image.
While Firefly is a champion of inpainting and outpainting, it can also create new images from scratch, based only on a prompt. Firefly can be used to create images on firefly.adobe.com, and also from within other Adobe apps, including Adobe InDesign and Adobe Express.
I tried a version labelled "Firefly Image 3 (preview)," and used it to generate images based on some of the same prompts that I also tested in DALL-E 3 and Midjourney version 6. Here's an example prompt I tried:
In my first few tests, Firefly didn't give me a "bike path" or any "bicyclists," but only scattered some unmanned bicycles around the image, sometimes placing them on the roof of the beach house. Some of the images came out looking more like a photo collage than a single, coherent photograph.
In testing the system, I found that sometimes when it seems to ignore words or phrases from a prompt, it helps to edit the prompt down to a much shorter phrase, including only a few of the words that you care about. Remember that you have Photoshop available afterwards, so there's no reason to be too specific about small stuff like a person's eye color, if you can use Generative Fill to fix that later.
Image generated in Adobe Firefly.
The images that Firefly outputs are 2,048 pixels across (or higher, depending on aspect ratio), meaning that they are higher resolution than the images you get from DALL-E 3. Even if they can't follow every word in a complex prompt, they do seem suitable for a range of graphic design tasks, whether they are used in print or online. When you download an image from Firefly's web interface, the site even offers to set you up in Adobe Express, where you can easily superimpose text or use the image within a preset template to make different graphic designs.
Firefly's web interface lets you select or upload images that will guide the structure or visual style of the images. These reference images are really helpful. When I wanted a logo of a woman's hand picking an apple, Firefly initially couldn't get the hand pose right. When I uploaded an image as a structure guide, Firefly started using the composition and pose from the reference and gave me much nicer output.
The Firefly website has a large control panel full of options, letting you adjust things down to the f-stop of the camera that would have been used. It does seem to lack any kind of history function, though, so if you find a prompt and combination of settings that work well for you one day, you may have trouble recreating the magic on the next day.
Where Adobe Firefly really shines is in what it adds to Photoshop. Starting with all the retouching tools of a full-featured photo editor and adding a powerful yet easy to use generative AI system creates a really compelling overall package. In this light, it's best to think of the option to generate entirely new images as just the icing on the cake, a potentially useful extra function in a system that already more than pulls its weight as an image editor.
Copyright © 2024 by Jeremy Birn