Photo Editing | Popular Photography (https://www.popphoto.com/category/photo-editing/). Founded in 1937, Popular Photography is a magazine dedicated to all things photographic.

The state of AI in your favorite photo editing apps
https://www.popphoto.com/how-to/ai-photo-editing-apps/
Tue, 30 Aug 2022
ON1’s forthcoming Super Select AI feature automatically detects various subjects and elements (shown in blue), allowing users to quickly create an editable mask. ON1

From Lightroom to Luminar Neo, we surveyed the field; these are the most powerful AI-enhanced photo editing platforms.


Artificial Intelligence (AI) technologies in photography are more widespread than ever before, touching every part of the digital image-making process, from framing to focus to the final edit. But they’re also widespread in the sense of being spread wide, often appearing as separate apps or plug-ins that address specific needs.

That’s starting to change. As AI photo editing tools begin to converge, isolated tasks are being added to larger applications, and in some cases, disparate pieces are merging into new utilities.

This is great for photographers because it gives us improved access to capabilities that used to be more difficult, such as dealing with digital noise. From developers’ perspectives, this consolidation could encourage customers to stick with a single app or ecosystem instead of playing the field.

Let’s look at some examples of AI integration in popular photo editing apps.

ON1 Photo RAW

ON1 currently embodies this approach with ON1 Photo RAW, its all-in-one photo editing app. Included in the package are tools that ON1 also sells as separate utilities and plug-ins, including ON1 NoNoise AI, ON1 Resize AI, and ON1 Portrait AI.

The company recently previewed a trio of new features it’s working on for the next major versions of ON1 Photo RAW and the individual apps. Mask AI analyzes a photo and identifies subjects; in the example ON1 showed, the software picked out a horse, a person, foliage, and natural ground. You can then click a subject and apply an adjustment, which is masked to that subject alone.

In this demo of ON1’s Mask AI feature under development, the software has identified subjects such as foliage and the ground. ON1

Related: Edit stronger, faster, better with custom-built AI-powered presets

ON1’s Super Select AI feature works in a similar way, while Tack Sharp AI applies intelligent sharpening and optional noise reduction to enhance detail.

Topaz Photo AI

Topaz Labs currently sells its utilities as separate apps (which also work as plug-ins). That’s great if you just need to de-noise, sharpen, or enlarge your images. In reality, though, many photographers buy the three utilities in a bundle and then bounce between them during editing. But in what order? Is it best to enlarge an image and then remove noise and sharpen it, or do the enlarging at the end?

Topaz is currently working on a new app, Photo AI, that rolls those tools into a single interface. Its Autopilot feature looks for subjects, corrects noise, and applies sharpening in one place, with controls for adjusting those parameters. The app is currently available as a beta for owners of the Image Quality bundle with an active Photo Upgrade plan.

Topaz Photo AI, currently in beta, combines DeNoise AI, Sharpen AI, and Gigapixel AI into a single app. Jeff Carlson

Luminar Neo

Skylum’s Luminar was one of the first products to really embrace AI technologies at its core, albeit with a confusing rollout. Luminar AI was a ground-up rewrite of Luminar 4 to center it on an AI imaging engine. The following year, Skylum released Luminar Neo, another rewrite of the app with a separate, more extensible AI base.

Now, Luminar Neo is adding extensions, taking tasks that have been spread among different apps by other vendors, and incorporating them as add-ons. Skylum recently released an HDR Merge extension for building high dynamic range photos out of several images at different exposures. Coming soon is Noiseless AI for dealing with digital noise, followed in the coming months by Upscale AI for enlarging images and AI Background Removal. In all, Skylum promises to release seven extensions in 2022.

With the HDR Merge extension installed, Luminar Neo can now blend multiple photos shot at different exposures. Jeff Carlson

Adobe Lightroom & Lightroom Classic

Adobe Lightroom and Lightroom Classic are adding AI tools piecemeal, which fits the platform’s status of being one of the original “big photo apps” (RIP Apple Aperture). The most significant recent AI addition was the revamped Masking tool that detects skies and subjects with a single click. That feature is also incorporated into Lightroom’s adaptive presets.

Lightroom Classic generated this mask of the fencers (highlighted in red) after a single click of the Select Subject mask tool. Jeff Carlson

It’s also worth noting that because Lightroom Classic has been one of the big players in photo editing for some time, it has the advantage of letting developers, like the ones mentioned so far, offer their tools as plug-ins. So, for example, if you primarily use Lightroom Classic but need to sharpen beyond the Detail tool’s capabilities, you can send your image directly to Topaz Sharpen AI and then get the processed version back into your library. (Lightroom desktop, the cloud-focused version, does not have a plug-in architecture.)

What does the consolidation of AI photo editing tools mean for photographers?

As photo editors, we want the latest and greatest editing tools available, even if we don’t use them all. Adding these AI-enhanced tools to larger applications puts them easily at hand for photographers everywhere. You don’t have to export a version or send it to another utility via a plug-in interface. It keeps your focus on the image.

It also helps to build brand loyalty. You may decide to use ON1 Photo RAW instead of other companies’ tools because the features you want are all in one place. (Insert any of the apps above in that scenario.) There are different levels to this, though. From the looks of the Topaz Photo AI beta, it’s not trying to replace Lightroom any time soon. But if you’re an owner of Photo AI, you’ll probably be less inclined to check out ON1’s offerings. And so on.

More subscriptions

Then there’s the cost. It’s noteworthy that companies are starting to offer subscription pricing instead of just single purchases. Adobe went all-in on subscriptions years ago; a subscription is the only way to get any of its products except Photoshop Elements. Luminar Neo and ON1 Photo RAW offer both subscription pricing and one-time purchase options. ON1 also sells standalone versions of its Resize AI, NoNoise AI, and Portrait AI utilities. Topaz sells its utilities outright, but you can optionally pay for a photo upgrade plan that renews each year.

AI-enhanced photo editing tools come in many forms, from standalone apps to plugins to built-in features in platforms like Lightroom. Getty Images

Subscription pricing is great for companies because it gives them a more stable revenue stream, and they’re hopefully incentivized to keep improving their products to keep those subscribers over time. And subscriptions also encourage customers to stick with what they’re actively paying for.

For instance, I subscribe to the Adobe Creative Cloud All Apps plan and use Adobe Audition to edit audio for my podcasts. I suspect that Apple’s audio editing platform, Logic Pro, would be a better fit for me, based on my preference for editing video in Final Cut Pro over Adobe Premiere Pro, but I’m already paying for Audition. My audio-editing needs aren’t sophisticated enough for me to really explore the limits of each app, so Audition is good enough.

Subscribing to a large app provides the same kind of blanket access to tools, including new AI features, whenever you need them. Paying $30 to $70 for a single-purpose tool suddenly feels like a lot (even though it means the tool is there for future images that need it).

The wrap

On the other hand, investing in a large application means relying on its continued support and development. If the software stagnates or is retired (again, RIP Aperture), you’re looking at real time and effort to migrate your photos and their edits to another platform.

Right now, the tools are still available in several ways, from single-task apps to plug-ins. But AI convergence is also happening quickly.

Bring on the noise: How to save high ISO files deemed ‘too noisy’
https://www.popphoto.com/how-to/ai-de-noise-software/
Sun, 14 Aug 2022
Crank that ISO: AI de-noise software is here to save the day. Shot at ISO 6400, with noise corrections made using ON1 NoNoise AI. Jeff Carlson

Today's AI de-noise software is surprisingly powerful.


In photography, knowing when to put the camera away is a valuable skill. Really dark situations are particularly difficult: without supplemental lighting, you can crank up the ISO to boost the camera’s light sensitivity, but that risks introducing too much distracting digital noise in the shot. Well, that was my thinking until recently. I now make photos I previously wouldn’t have attempted because of de-noise software, fueled by machine learning technology. A high ISO is no longer the compromise that it once was.

That opens up a lot of possibilities for photographers. Perhaps your camera is a few years old and doesn’t deal with noise as well as newer models. Maybe you have images in your library that you wrote off as being too noisy to process. Or you may need extremely fast shutter speeds to capture sports or other action. You can shoot with the knowledge that software will give you extra stops of exposure to play with.

Sensor sensibility

Too frequently, I run into the following circumstances. In a dark situation, I increase the ISO so I can use a fast enough shutter speed to avoid motion blur or camera shake. The higher ISO brightens the image by amplifying the signal coming off the image sensor. That amplification, however, boosts the noise right along with the signal. At higher ISO values (6400 and up, depending on the camera), the noise can be distracting and hide detail.
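In signal terms, raising the ISO applies gain after the light has already been captured, so it can’t add information: it scales the noise and the signal together. Here’s a toy numerical sketch of that idea (Poisson shot noise only; real sensors add read noise and other terms, and this is not any camera’s actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(7)

# A dim scene: each pixel collects only ~25 photons on average.
# Photon arrival is random, so the capture itself is noisy (shot noise).
photons = rng.poisson(lam=25, size=100_000).astype(float)

# Raising ISO applies gain downstream, e.g. base ISO 400 -> ISO 6400:
iso_gain = 16.0
amplified = photons * iso_gain

# Gain scales the noise right along with the signal, so the
# signal-to-noise ratio is unchanged: the image is brighter,
# but the graininess is just as visible relative to the signal.
snr_dim = photons.mean() / photons.std()
snr_amplified = amplified.mean() / amplified.std()
```

That fixed ratio (here roughly the square root of 25, about 5) is why high-ISO frames of dark scenes look grainy, and why de-noise software works on the captured file rather than on the exposure itself.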

The other common occurrence is when I forget to turn the ISO back down after shooting at night or in the dark. The next day, in broad daylight, I end up with noisy images and unusually fast shutter speeds, because the camera has to compensate for all that extra sensitivity. If I’m not paying attention to the settings while shooting, it’s easy to miss the noise when just looking at previews on the camera’s LCD. Has this happened to some of my favorite images? You bet it has.

Incidentally, this is one of those areas where buying newer gear can help. The hardware and software in today’s cameras handle noise better than in the past. My main body is a four-year-old Fujifilm X-T3 that produces perfectly tolerable noise levels at ISO 6400. That has been my ceiling for setting the ISO, but now (depending on the scene of course) I’m comfortable pushing beyond that.

The sound of science

Noise-reduction editing features are not new, but the way we deal with noise has changed a lot in the past few years. In many photo editing apps, the de-noising controls apply algorithms that attempt to smooth out the noise, often resulting in overly soft results.
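To see why smoothing-based noise reduction goes soft, here’s a toy sketch in Python (illustrative only; the `box_blur` below is a deliberate simplification, not any app’s actual algorithm). Averaging each pixel with its neighbors suppresses random noise, but it blurs real edges by exactly the same mechanism:

```python
import numpy as np

rng = np.random.default_rng(42)

# A synthetic "image": a hard dark/light edge, like a pew against a bright wall.
clean = np.zeros((64, 64))
clean[:, 32:] = 1.0

# High-ISO shooting adds roughly Gaussian-looking noise.
noisy = clean + rng.normal(0.0, 0.2, clean.shape)

def box_blur(img, radius=2):
    """Classic smoothing-based noise reduction: replace each pixel
    with the average of its neighborhood."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

smoothed = box_blur(noisy)

# Noise (std-dev in a flat region) goes down...
noise_before = noisy[:, :16].std()
noise_after = smoothed[:, :16].std()

# ...but so does edge sharpness (the jump across the boundary).
edge_before = np.abs(np.diff(noisy, axis=1))[:, 30:34].max()
edge_after = np.abs(np.diff(smoothed, axis=1))[:, 30:34].max()
```

The blur cannot tell noise from detail, so both get flattened together. Machine-learning de-noisers sidestep that trade-off by learning what noise and real texture each tend to look like, rather than treating every pixel-to-pixel difference as noise.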

A more effective approach is to use tools built on machine learning models that have processed thousands of noisy images. In “Preprocess Raw files with machine learning for cleaner-looking photos,” I wrote about DxO PureRAW 2, which applies de-noising to raw files when they’re demosaiced.

If you’re working with a JPEG or HEIC file, or a Raw file that’s already gone through that processing phase, apps such as ON1 NoNoise AI (which is available as a stand-alone app/plug-in and also incorporated into ON1 Photo RAW) and Topaz DeNoise AI analyze the image’s noise pattern and use that information to correct it.

Testing various de-noise software

Viewed as a whole, the noise isn’t terrible. Jeff Carlson

The following image was shot handheld at 1/60 shutter speed and ISO 6400. I’ve adjusted the exposure to brighten the scene, but that’s made the noise more apparent, particularly when I view the image at 200% scale. The noise is especially prominent in the dark areas.

But zoom in and you can see how noisy the image is. Jeff Carlson

Lightroom

If I apply Lightroom’s Noise Reduction controls, I can remove the noise, but everything gets smudgy (see below).

Lightroom’s built-in tool isn’t much help in correcting this image. Jeff Carlson

ON1 NoNoise AI

When I open the image in ON1 NoNoise AI, the results are striking. The noise is removed from the pews, yet they retain detail and contrast. There’s still a smoothness to them, but not in the same way Lightroom rendered them. This is also the default interpretation, so I could manipulate the Noise Reduction and Sharpening sliders to fine-tune the effect. Keep in mind, too, that we’re pixel-peeping at 200%; the full corrected image looks good.

Compare the original with the corrected version of the image using the preview slider in ON1 NoNoise AI. Jeff Carlson

Looking at the detail in the center of the photo also reveals how much noise reduction is being applied. Again, this is at 200%, so in this view the statues seem almost plastic. At 100% you can see the noise reduction and the statues look better.

Detail at the middle of the frame. Jeff Carlson

Topaz DeNoise AI

When I run the same photo through Topaz DeNoise AI, you can see that the software is using what appears to be object recognition to adjust the de-noise correction in separate areas—in this case not as successfully. The cream wall in the back becomes nice and smooth as if it was shot at a low ISO, but the marble at the front is still noisy.

Topaz DeNoise AI’s default processing on this image ends up fairly scattershot. Jeff Carlson

Bring the noise

As always, your mileage will vary depending on the image, the amount of noise, and other factors. I’m not here to pit these two apps against each other (you can do that yourself—both offer free trial versions that you can test on your own images).

What I want to get across are two things. One, AI is making sizable improvements in how noise reduction is handled. And because AI models are always being fed new data, they tend to improve over time.

But more important is this: Dealing with image noise is no longer the hurdle it once was. A noisy image isn’t automatically headed for the trash bin. Knowing that you can overcome noise easily in software makes you rethink what’s possible when you’re behind the camera. So try capturing a dark scene at a very high ISO, where before you may have just put the camera away.

And don’t be like me: remember to reset the ISO after shooting at high values the night before, even if the software can help you fix the noise.

Edit stronger, faster, better with custom-built AI-powered presets
https://www.popphoto.com/how-to/ai-powered-presets/
Fri, 29 Jul 2022
The editing platform Luminar Neo offers plenty of AI-powered sky replacement presets. Jeff Carlson

Good old-fashioned presets are more powerful when combined with AI-assisted subject masking.


It’s time to confess one of my biases. I’ve traditionally looked down on presets in photo editing software.

I get their utility. With one click you can apply a specific look without messing with sliders or knowing the specifics of how editing controls work. There’s certainly appeal in that, particularly for novice photo editors. And selling presets has become a vector for established photographers to make a little money on the side, or have something to give away in exchange for a newsletter sign-up or other merchandise. (I’m guilty of this too. I made some Luminar 4 presets to go along with a book I wrote years ago.)

I’ve just never seen the value in making my photos look like someone else’s. More often than not, the way I edit a photo depends on what the image itself demands. 

And then I saw the light: presets are not shortcuts, per se; they’re automation. Yes, you can make your photos look like those of your favorite YouTube personality, but a better alternative is to create your own presets that perform repetitive editing actions for you with one click.

For instance, perhaps in all of your portrait photos, you reduce Clarity to soften skin, add a faint vignette around the edges, and boost the shadows. A preset that makes those edits automatically saves you from manipulating a handful of controls to get the same effect each time. In many editing apps, presets affect those sliders directly, so if those shadows end up too bright, you can just knock down the value that the preset applied.
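Mechanically, a slider-based preset is nothing more than a saved bundle of adjustment values. This hypothetical sketch (the parameter names are invented for illustration and aren’t any app’s actual API) shows why a preset’s edits remain individually adjustable afterward:

```python
# A preset is just a saved set of slider values. The names here are
# illustrative, not tied to any specific editor.
portrait_preset = {"clarity": -15, "vignette_amount": -10, "shadows": 20}

def apply_preset(settings: dict, preset: dict) -> dict:
    """Write the preset's values into the same sliders you would
    otherwise move by hand; nothing is 'baked in' to the pixels."""
    updated = dict(settings)
    updated.update(preset)
    return updated

defaults = {"exposure": 0, "clarity": 0, "vignette_amount": 0, "shadows": 0}
edited = apply_preset(defaults, portrait_preset)

# Because the preset only set slider values, any of them can still be
# dialed back after the fact, e.g. if the shadows came up too bright:
edited["shadows"] = 10
```

The key design point is that applying a preset and moving sliders by hand are the same operation, which is why you can fine-tune a preset’s result without starting over.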

The downside is that a preset affects the entire image. Perhaps you do want to open up the shadows in the background, but not so much that you’re losing detail in the subject’s face. Well, then you’re creating masks for the subject or the background and manipulating those areas independently…and there goes the time you saved by starting with a preset in the first place.

Regular readers of this column no doubt know where this is headed. AI-assisted features that identify the content of images are making their way into presets, allowing you to target different areas automatically. Lightroom Classic and Lightroom desktop recently introduced Adaptive Presets that capitalize on the intelligent masking features in the most recent release. Luminar Neo and Luminar AI incorporate this type of smart selection because they’re both AI-focused at their cores.

Lightroom Adaptive Presets

An unedited image. Lightroom’s “Adaptive: Sky” presets let you adjust the look of the sky with a few clicks. And the “Adaptive: Subject” presets do the same for whatever Adobe deems to be the main subject, in this case, the statue. Jeff Carlson

Related: Testing 3 popular AI-powered sky replacement tools

Lightroom Classic and Lightroom desktop include two new groups of presets, “Adaptive: Sky” and “Adaptive: Subject.” When I apply the Sunset sky preset to an unedited photo, the app identifies the sky using its Select Sky mask tool and applies adjustments (specifically to Tint, Clarity, Saturation, and Noise) only to the masked area.

Only the area that Lightroom Classic identified as the sky is adjusted after choosing the “Adaptive: Sky Sunset” preset. Jeff Carlson

Similarly, if I click the “Adaptive: Subject Pop” preset, the app selects what it thinks is the subject and applies the correction, in this case, small nudges to Exposure, Texture, and Clarity.

“Adaptive: Subject Pop” selects what Lightroom believes to be the main subject of an image. Jeff Carlson

Depending on the image, that might be all the edits you want to make. Or you can build on those adjustments.

The final image with AI-powered presets applied to both the sky and the statue. Jeff Carlson

Related: ‘Photoshop on the Web’ will soon be free for anyone to use

Now let’s go back to the suggested portrait edits mentioned above. I can apply a subtle vignette to the entire image, then switch to the Masking tool and create a new “Select Subject” mask for the people in the shot. With that active, I increase the Shadows value a little and reduce Clarity to lightly soften the subjects.

Increasing Shadows and bringing down Clarity brightens and softens the subjects’ skin in this portrait. Jeff Carlson

Since this photo is part of a portrait session, I have many similar shots. Instead of selecting the subject every time, I’ll click the “Add New Presets” button in the Presets panel, make sure the Masking option is enabled, give it a name, and click Create. For subsequent photos, I can then choose the new preset to apply those edits. Even if the preset applies only to this photo shoot, that can still save a lot of time.

Make sure the Masking option is turned on when creating a new adaptive preset, since by default it’s deselected.

Luminar Presets

When Luminar Neo and Luminar AI open an image, they both scan the photo for contents, identifying subjects and skies even before any edits have been made. When you apply one of the presets built into the apps, the edits might include adjustments to specific areas. 

Luminar Neo offers a variety of sky-replacement presets. Jeff Carlson

For an extreme example, in Luminar Neo’s Presets module, the “Sunsets Collection” includes a Toscana preset that applies Color, Details, and Enhance AI settings that affect the entire image. But it also uses the Sky AI tool to swap in an entirely new sky.

The portrait editing tools in Luminar by default fall into this category, because they look for faces and bodies and make adjustments, such as skin smoothing and eye enhancement, to only those areas. Creating a new user preset with one of the AI-based tools targets the applicable sections.

The Toscana preset in Luminar Neo is a good example of how a preset can affect a specific area of the image, replacing the sky using the Sky AI tool. Jeff Carlson

Preset Choices

The Luminar and Lightroom apps also use some AI detection to recommend existing presets based on the content of the photo you’re editing, although I find the choices to be hit or miss. Lightroom gathers suggestions based on presets that its users have shared, grouped into categories such as Subtle, Strong, and B&W. They tend to run the gamut of effects and color treatments, and for me that feels more like trying to put my image into someone else’s style.

Instead, I’ll stick to presets’ secret weapon, which is creating my own to automate the edits I’d otherwise spend more time making by hand.

How to edit your digital photos to look like film
https://www.popphoto.com/how-to/edit-photos-to-look-like-film/
Fri, 24 Jun 2022
Edited using The Archetype Process Fuji Pro400H +2 Normal Frontier profile. The Archetype Process

Nail the analog aesthetic using camera profiles that mimic the look of classic film stocks.


Analog photography has been experiencing a renaissance over the past several years, yet despite the increased demand, film prices are sky-high and availability is scarce. Luckily, there are plenty of ways to edit digital photos to look like film.

We chatted with portrait photographer and editor Dustin Stockel, who started The Archetype Process (TAP), a company dedicated to creating realistic emulations of classic film stocks in the form of camera profiles. Prior to founding TAP, Stockel worked as a photo editor in film labs and for private clients.

Profiles vs. presets 

Edited using The Archetype Process Kodak Portra 400 +1 Normal Frontier profile. The Archetype Process

Related: Going back to film? Here’s what’s changed

There’s no lack of choice on the market when it comes to emulating film in post. Both Mastin Labs and Noble Presets have made names for themselves as some of the leading preset makers on the market. But how do presets differ from profiles? While both are intended to be applied to Raw files, they actually work in quite different ways.

Presets are exactly what they sound like. When using a Raw processing platform like Lightroom or Capture One, a preset is applied to an image or series of images with just a few clicks of the mouse. This automatically adjusts various editing parameters—like exposure, contrast, grain, etc.—within the platform to mimic a specific film look. Once applied, these parameters can be dialed back down (or up) as the user wishes.

Color profiles, on the other hand, are generally applied to images right after they’ve been ingested and before any edits are made. In most Raw processors, users select a color profile from the top right of the edit panel before fine-tuning their shots with the various parameters available. Working with a film-emulating color profile right from the start offers users the advantage of potentially dialing in a more nuanced celluloid look, more quickly, than presets alone.

Both are effective options in your editing workflow, though, and combining them might be the missing piece to the aesthetic that just one or the other won’t provide. In addition to TAP, other analog-mimicking profiles include Digistock’s Kodachrome and The Digital Darkroom’s Chroma and Aero Infrared, among others.

Get to know the film you want to emulate

This might feel like a no-brainer, but Stockel notes that it is a common error. To accurately recreate the film look, you’ll ideally want to shoot some film first. If you don’t know how your preferred film stock responds in a given situation, it will be difficult to emulate it digitally or know if you’re on the right track. Though shooting film will require an investment of time and money, familiarizing yourself with the stocks you want to work with can save you frustration in the long run. 

“Understanding the process of working with a film lab and getting film scans back that [you] like goes right along with that,” he elaborates. “Another mistake I see is thinking that a specific film has a locked-in or predefined look. In reality, there are so many variables that the photographer and lab control that are actually responsible for that look.”

Edited using The Archetype Process Kodak Tri-X Normal profile. Daniel Kim Photography

Tips to edit pictures to look like film 

When testing film, it’s important to make sure the shooting conditions you’re experimenting in match those you’ll likely be working in. “The biggest thing a photographer can do to recreate the look of film scans is to shoot the same way or in a similar way that they would shoot film,” Stockel shares. “More specifically, shoot in the same amount and quality of light.”

Another thing he recommends is to understand that a film stock doesn’t have a predefined “look.” Depending on how it’s shot, a film stock can produce a wide range of results. TAP’s profiles are meant to give photographers leeway to achieve any of the diverse options possible. Applying a profile or preset will put you in the ballpark of where you want to be, but then it’s up to you to dial in the settings to get the exact look you’re after.

Edited using The Archetype Process Kodak Portra 400 +1 Normal Frontier profile. Daniel Kim Photography

The 1, 2, 3 process 

Often, Stockel will advise photographers on what he calls the “1, 2, 3 process.” Step one is the vision. What are you trying to create? Once you have an idea, you can then move to step two. Think about how you’ll record all the information required to achieve the look. This includes the cameras, lenses, location, light, and the exposure triangle, just for starters. 

When looking to emulate film in post, it’s best to capture Raw files that contain as much exposure information as possible, which is why your exposure parameters in step two are so important. When in doubt, Stockel recommends underexposing an image rather than overexposing.

“I don’t think there’s anything unique to shooting to emulate film that needs to be done,” Stockel says. “Simply exposing to the right is still the best overall way to record the data needed to get the look that one wants.”

The last step is where profiles come in. Now that you’ve got a Raw photo with plenty of exposure data, it’s time to pick your profile, make your adjustments, and see your analog-emulating vision through.

Excire Foto 2022 can analyze and keyword your entire photo library using AI
https://www.popphoto.com/how-to/excire-foto-2022/
Fri, 24 Jun 2022
Thanks to tools like automatic keywording and duplicate detection, metadata management can take little effort. Getty Images

Tidy up your image database with just a few clicks of the mouse.



I sometimes feel like the odd-photographer-out when it comes to working with my photo library. I’ve always seen the value of tagging images with additional metadata such as keywords and the names of people who appear (I’ve even written a book about the topic). 

However, many people just don’t want to bother. It’s an extra step—an impediment really—before they can review, edit, and share their images. It requires switching into a word-based mindset instead of an image-based mindset. And, well, it’s boring.

And yet, there will come a time when you need to find something in your ever-growing image collection, and as you scroll through thumbnails trying to remember dates and events from the past, you’ll think, “Maybe I should have done some keywording at some point.”

In an earlier column, I took a high-level look at utilities that use AI technologies to help with this task. One of the standouts was Excire Foto, which has just been updated to version 2.0 (and branded Excire Foto 2022). I was struck by its ability to automatically tag photos, and by the granularity available when searching for images. Let’s take it for a spin.

Related: The best photo editor for every photographer

A few workflow notes

Excire Foto is a standalone app for macOS or Windows, which means it serves as your photo library manager. You point it at existing folders of images; you can also use the Copy and Add command to read images from a camera or memory card and save them to a location of your choice. If you use a managed catalog such as Lightroom or Capture One, which tracks metadata in its own way, Excire Foto isn’t as good a fit. A separate product, Excire Search 2, is a plug-in for Lightroom Classic.

Or, Excire Foto could be the first step in your workflow: import images into it, tag and rate them, save the metadata to a disk (more on that just ahead), and then ingest the photos into the managed photo editing app of your choice.

Since the app manages your library, it doesn’t offer any photo editing features. Instead, you can send an image to another app, such as Photoshop, but its edits are not round-tripped back to Excire Foto.

For my testing, I copied 12,574 photos (593 GB) from my main photo storage to an external SSD connected to my 2021 16-inch MacBook Pro, which is configured with an M1 Max processor. Importing them into Excire Foto took about 38 minutes, which entailed adding the images to its database, generating thumbnail previews, and analyzing the photos for content. Performance will depend on hardware, particularly in the analysis stage, but it’s safe to say that adding a large number of photos is a task that can run while you’re doing something else, or overnight. Importing a typical day’s worth of 194 images took less than a minute.

Automatic keywording

excire foto 2022
Review and rate photos in Excire Foto 2022. Jeff Carlson

To me, those numbers are pretty impressive, considering the software is using machine learning to identify objects and scenes it recognizes. But still, do you really care about how long an app takes to import images? Probably not.

But this is what you will care about: In many other apps, the next step after importing would be to go through your images and tag them with relevant terms to make them easier to find later. In Excire Foto, at this point all the images include automatically generated keywords—much of the work is already done for you. You can then jump to reviewing the photos by assigning star ratings and color labels, and quickly pick out the keepers.

I know I sound like a great big photo nerd about this, but it’s exactly the type of situation where computational photography can make a big difference. To not care about keywords and still get the advantages of tagged photos without any extra work? Magic. 

excire foto 2022
The keywords in blue were created by the app, while keywords in gray were ones I added manually. Jeff Carlson

I find that Excire Foto does a decent-to-good job of identifying objects and characteristics in the photos. It doesn’t catch everything, and occasionally adds keywords that aren’t accurate. That’s where manual intervention comes in. You can manually delete keywords or add new ones to flesh out the metadata with tags you’re likely to search for later. For example, I like to add the season name so I can quickly locate autumn or winter scenes. Tags that the software applies appear with blue outlines, while tags you add show up with gray outlines. It’s also easy to copy and paste keywords among multiple images.

All of the metadata is stored in the app’s database, not with the images themselves, so you’re not cluttering up your image directories with app-specific files (a pet peeve of mine, perhaps because I end up testing so many different ones). If you prefer to keep the data with the files, you can opt to always use sidecar files, which writes the information to standard .XMP text files. Or, you can manually store the metadata in sidecar files for just the images you want.
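Those .XMP sidecars are plain XML; keywords conventionally live in a `dc:subject` bag, and the sidecar sits next to the image with the same base name. Here’s a minimal, hypothetical sketch of generating one — the tags Excire Foto actually writes may differ and real sidecars carry far more metadata:

```python
from pathlib import Path

# Minimal XMP skeleton with a dc:subject keyword bag (hypothetical sketch).
XMP_TEMPLATE = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description xmlns:dc="http://purl.org/dc/elements/1.1/">
   <dc:subject><rdf:Bag>
{items}
   </rdf:Bag></dc:subject>
  </rdf:Description>
 </rdf:RDF>
</x:xmpmeta>
"""

def sidecar_xml(keywords):
    """Render a keyword list as an XMP sidecar document."""
    items = "\n".join(f"    <rdf:li>{kw}</rdf:li>" for kw in keywords)
    return XMP_TEMPLATE.format(items=items)

def sidecar_path(image_path):
    """Sidecars sit next to the image, same base name, .xmp extension."""
    return Path(image_path).with_suffix(".xmp")
```

Because the format is a standard, a sidecar written this way is readable by Lightroom, Capture One, and other XMP-aware apps, which is what makes the “tag first, ingest later” workflow possible.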

Search that takes search seriously

excire foto 2022
Explore the keyword hierarchy tree to perform specific term searches. Jeff Carlson

The flip side of working with keywords and other metadata is how quickly things can get complicated. Most apps try to keep the search as simple as possible to appeal to the most people, but Excire Foto embraces multiple ways to search for photos.

A keyword search lets you browse the existing tags and group them together; as you build criteria, you can see how many matches are made before running the search. The search results panel also keeps recent searches available for quick access.

excire foto 2022
You can get pretty darn specific with your searches. Jeff Carlson

Or consider the ability to find people in photos. The Find Faces search gives you options for the number of faces that appear, approximate ages, the ratio of male to female, and a preference for smiling or not smiling expressions.

excire foto 2022
The Find Faces interface allows you to search for particular attributes. Jeff Carlson

Curiously, the people search lacks the ability to name individuals. To locate a specific person you must open an image in which they appear, click the Find People button, select the box on the person’s face, and then run the search. You can save that search as a collection (such as “Jeff”), but it’s not dynamically updated. If you add new photos of that person, you need to manually add them to the collection.

excire foto 2022
Search for a person by first opening an image in which they appear and selecting their face identifier. Jeff Carlson

It appears that the software isn’t necessarily built for identifying specific people; instead, it’s looking for shared characteristics based on whichever source image is chosen. Some searches on my face brought up hundreds of results, while others drew fewer hits.

Identifying Potential Duplicates

New in Excire Foto 2022 is a feature for locating duplicate photos. This is a tricky task because what you and I think of as a duplicate might not match what the software identifies. For instance, in my library, I was surprised that performing a duplicate search set to find exact duplicates brought up only 10 matches.

That’s because this criterion looks for images that are the exact same file, not just visually similar. Those photos turned out to be shots that were imported twice for some reason (indicated by their file names: DSCF3161.jpg and DSCF3161-2.jpg).
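Exact-duplicate detection of this kind — matching file contents rather than visual similarity — is typically done by hashing each file’s bytes, so identical files group together regardless of name. A sketch of the general technique, not Excire Foto’s actual implementation:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_exact_duplicates(paths):
    """Group files whose bytes are identical, regardless of file name."""
    groups = defaultdict(list)
    for p in map(Path, paths):
        digest = hashlib.sha256(p.read_bytes()).hexdigest()
        groups[digest].append(p)
    # Only groups containing more than one file are duplicates.
    return [g for g in groups.values() if len(g) > 1]
```

This is why a twice-imported DSCF3161.jpg matches while a burst frame shot a moment later doesn’t: the burst frame’s bytes differ, which is the territory of “near duplicate” visual matching instead.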

excire foto 2022
How duplicates like this get into one’s library will forever be a mystery. Jeff Carlson

When I performed a duplicate search with the criteria set to Near Duplicates: Strict, I got more of what I expected. Among the 1,007 matches were groups of burst photos, as well as image files where I’d shot in Raw+JPEG mode and both versions were imported. The Duplicate Flagging Assistant includes the ability to reject non-Raw images, or in the advanced options you can drill down and flag photos with more specific criteria, such as JPEGs with short edges measuring less than 1024 pixels.

excire foto 2022
Choose common presets for filtering possible duplicates, or click Advanced Settings to access more specific criteria. Jeff Carlson

As with all duplicate finding features, the software’s job is primarily to present you with the possible matches. It’s up to you to review the results and determine which images should be flagged or tossed.

End Thoughts

It’s always tempting to jump straight to editing images, but ignoring metadata catches us out at some point. When a tool such as Excire Foto can shoulder a large portion of that work, we get to spend more time on editing, which is the more exciting part of the post-production process, anyway.

The post Excire Foto 2022 can analyze and keyword your entire photo library using AI appeared first on Popular Photography.


]]>
Turbocharge your wedding edits with the help of AI https://www.popphoto.com/how-to/edit-wedding-photos-faster-ai/ Fri, 27 May 2022 12:00:00 +0000 https://www.popphoto.com/?p=172905
Lightroom photo
Carol Harrold

Here's how AI tools in Lightroom, Photoshop, and Luminar Neo can help speed up the time it takes to edit a wedding gallery.

The post Turbocharge your wedding edits with the help of AI appeared first on Popular Photography.

]]>

Photographing someone’s Big Day is a beautiful—and stressful—job, especially if you’re not a seasoned pro. This week, PopPhoto is serving up our best advice for capturing that special kind of joy.

A typical wedding day photoshoot can result in thousands of images. After the photographer has spent hours actively capturing the event, hours of culling and editing still loom ahead of them. In an earlier Smarter Image column, I offered an overview of apps designed to sort and edit your photos faster. For this installment, I want to look at the editing side and how AI tools can shave off some of that time.

Consider this situation: You’ve done your initial sort and now you have a series of photos of the bride. They were made in the same location, but the bride strikes different poses and the framing is slightly different from shot to shot. They could all use some editing, and because they’re all similar they’d get the same edits.

This is where automation comes in. In many apps, you can apply edits to one of the images and then copy or sync those edits to the rest. However, that typically works globally, adjusting the tone and color evenly to each full image. What if the overall photo is fine but you want to increase the exposure on just the bride to make her stand out against the backdrop? Well, then you’re back to editing each image individually.

But not necessarily. The advantage of AI-assisted processing is that the software identifies objects within a scene. When the software can pick out the bride and apply edits only to her—even if she moves within the frame—it can save a lot of time and effort.
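Under the hood, a selective edit like this reduces to applying the adjustment only where a subject mask is true; the hard part, which these apps handle with trained segmentation models, is producing the mask for each frame. A toy numpy sketch of just the apply step (the mask here is assumed, not computed):

```python
import numpy as np

def apply_masked_exposure(image, mask, stops=1.0):
    """Brighten only the masked (subject) pixels by a number of stops.

    image: HxWx3 uint8 array; mask: HxW boolean array marking the subject.
    """
    out = image.astype(np.float32)
    out[mask] *= 2.0 ** stops  # +1 stop doubles linear values
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because the mask is recomputed per photo, the same edit recipe follows the subject even as she shifts position in the frame — which is exactly what makes batch-syncing these edits possible.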

For this task I’m looking specifically at three apps: Adobe Photoshop, Adobe Lightroom Classic (the same features appear in the cloud-based Lightroom desktop app), and Skylum Luminar Neo. These apps can identify people and make selective edits on them, and batch-apply those edits to other images.

First, let’s look at the example photos I’m working with to identify what they need. Seattle-based photographer Carol Harrold of Carol Harrold Photography graciously allowed me to use a series of photos from a recent wedding shoot. These are Nikon .NEF Raw images straight out of the camera.

An unedited set of six similar photos of the bride.
An unedited set of six similar photos of the bride. Carol Harrold

The bride is in shadow to avoid harsh highlights on a sunny day, so as a consequence I think she would benefit from additional exposure. Although she’s posing in one spot, she faces two different directions and naturally appears in slightly different positions within each shot. A single mask copied between the images wouldn’t be accurate. For the purposes of this article, I’m only focusing on the exposure on the bride, and not making other adjustments.

Adobe Photoshop

One of Photoshop’s superpowers is the Actions panel, which is where you can automate all sorts of things in the app. And for our purposes, that includes the ability to use the new Select Subject command in an automation.

In this case, I’ve opened the original Raw files, which routes them through the Adobe Camera Raw module; I kept the settings there unchanged. Knowing that I want to apply the same settings to all of the files, I’ll open the Actions panel, click the [+] button to create a new action, name it, and start recording.

Next, I’ll choose Select > Subject, which selects the bride and adds that as a step in the action.

Selecting the subject while recording an action inserts the Select > Subject command as a step.
Selecting the subject while recording an action inserts the Select > Subject command as a step. Carol Harrold

To adjust the exposure within the selection, I’ll create a new Curves adjustment layer. Doing so automatically makes a mask from the selection, and when I adjust the curve’s properties to lighten the bride, the effect applies only in that selection.

I’m using a Curves adjustment to increase exposure on the bride in the first photo, though I could use other tools as well.
I’m using a Curves adjustment to increase exposure on the bride in the first photo, though I could use other tools as well. Carol Harrold

In the interests of keeping things simple for this example, I’ll stick to just that adjustment. In the Actions panel, I’ll click the Stop Recording button. Now I have an action that will select any subject in a photo and increase the exposure using the curve adjustment.

To apply the edits to the set of photos, I’ll choose File > Automate > Batch, and choose the recorded action to run. Since all the images are currently open in Photoshop, I’ll set the Source as Opened Files and the Destination as None, which runs the action on the files without saving them. I could just as easily point it at a folder on disk and create new edited versions.

It’s not exciting looking, but the Batch dialog is what makes the automation possible between images.
It’s not exciting looking, but the Batch dialog is what makes the automation possible between images.

When I click OK, the action runs and the bride is brightened in each of the images.

In a few seconds, the batch process applies the edits and lightens the bride in the other photos.
In a few seconds, the batch process applies the edits and lightens the bride in the other photos. Carol Harrold

The results can seem pretty magical when you consider the time saved by not processing each photo individually, but as with any task involving craftsmanship, make sure to check the details. It’s great that Photoshop can detect the subject, but we’re also assuming it’s detecting subjects correctly each time. If we zoom in on one, for example, part of the bride’s shoulder was not selected, leading to a tone mismatch.

Watch for areas the AI tool might have missed, like this section of the bride’s shoulder.
Watch for areas the AI tool might have missed, like this section of the bride’s shoulder. Carol Harrold

The upside is that the selection exists as a mask on the Curves layer. All I have to do is select the area using the Quick Selection tool and fill the area with white to make the adjustment appear there; I could also use the Brush tool to paint it in. So you may need to apply some touch-ups here and there. 

Filling in that portion of the mask fixes the missed selection.
Filling in that portion of the mask fixes the missed selection. Carol Harrold

Lightroom Classic and Lightroom

Photographers who use Lightroom Classic and Lightroom are no doubt familiar with the ability to sync Develop settings among multiple photos—it’s a great way to apply a specific look or LUT to an entire set that could be a signature style or even just a subtle softening effect. The Lightroom apps also incorporate a Select Subject command, making it easy to mask the bride and make our adjustments.

With the bride masked, I can increase the exposure just on her.
With the bride masked, I can increase the exposure just on her. Carol Harrold

In Lightroom Classic, with one photo edited, I can return to the Library module, select the other similar images, and click the Sync Settings button, or choose Photo > Develop Settings > Sync Settings. (To do the same in Lightroom desktop, select the edited photo in the All Photos view; choose Photo > Copy Edit Settings; select the other images you want to change; and then choose Photo > Paste Edit Settings.)

However, there’s a catch: the Select Subject mask needs to be recomputed before it can be applied. In Lightroom Classic, when you click Sync Settings, the dialog that appears does not select the Masking option, and includes the message “AI-powered selections need to be recomputed on the target photo.”

Lightroom Classic needs to identify the subject in each image that is synced from the original edit.
Lightroom Classic needs to identify the subject in each image that is synced from the original edit. Carol Harrold

That requires an additional step. After selecting the mask(s) in the dialog and clicking Synchronize, I need to open the next image in the Develop module, click the Masking button, and click the Update button in the panel. 

It’s an extra step, but all you have to do is select the mask and click Update.
It’s an extra step, but all you have to do is select the mask and click Update. Carol Harrold

Doing so reapplies the mask and the settings I made in the first image. Fortunately, with the filmstrip visible at the bottom of the screen, clicking to the next image keeps the focus in the Masking panel, so I can step through each image and click Update. (The process is similar in the Edit panel in Lightroom desktop.)

As with Photoshop, you’ll need to take another look at each image to ensure the mask was applied correctly, and add or remove portions as needed.

Luminar Neo

I frequently cite Luminar’s image syncing as a great example of how machine learning can do the right thing between images. Using the Face AI and Skin AI tools, you can quickly lighten a face, enhance the eyes, remove dark circles, and apply realistic skin smoothing, and then copy those edits to other photos. From the software’s point of view, you’re not asking it to make a change to a specific area of pixels; it knows that in each photo it should first locate the face, and then apply those edits regardless of where in the frame the face appears.

I can still do that with these photos, but it doesn’t help with the exposure of the bride’s entire body. So instead, I’ll use the Relight AI tool in Luminar Neo and increase the Brightness Near value. The software identifies the bride as the foreground subject, increasing the illumination on her without affecting the background.

Luminar Neo’s Relight AI tool brightens the bride, which it has identified as the foreground object.
Luminar Neo’s Relight AI tool brightens the bride, which it has identified as the foreground object. Carol Harrold

Returning to the Catalog view, we can see the difference in the bride’s exposure in the first photo compared to the others. 

Before syncing in Luminar Neo
Carol Harrold

To apply that edit to the rest, I’ll select them all, making sure the edited version is selected first (indicated by the blue selection outline), and then choose Image > Adjustments > Sync Adjustments. After a few minutes of processing, the other images are updated with the same edit. 

After syncing, the image series now features the lightened bride.
After syncing, the image series now features the lightened bride. Carol Harrold

The results are pretty good, with some caveats. On a couple of the shots, the edges are a bit harsh, requiring a trip back to the Relight AI tool to increase the Dehalo control. I should also point out that the results you see above were from the second attempt; on the first try the app registered that it had applied the edit, but the images remained unchanged. I had to revert the photos to their original states and start over.

The latest update to Luminar Neo adds Masking AI technology, which scans the image and makes the individual areas it finds selectable as masks, such as Human, Flora, and Architecture. I thought that it would allow me to identify a more specific mask, but instead, it did the opposite when synced to the rest, applying the adjustment to what appears to be the same pixel area as the source image.

Unfortunately, the Masking AI feature doesn’t work correctly when syncing adjustments between photos.
Unfortunately, the Masking AI feature doesn’t work correctly when syncing adjustments between photos. Carol Harrold

The AI Assistant

Wedding photographers often work with one or more assistants, so think of these AI-powered features as another assistant. Batch processing shots with software that can target adjustments lets you turn around a large number of images in a short amount of time.

The post Turbocharge your wedding edits with the help of AI appeared first on Popular Photography.


]]>
Testing the advantages of Apple’s ProRAW format https://www.popphoto.com/how-to/apple-proraw-explained/ Thu, 12 May 2022 20:39:05 +0000 https://www.popphoto.com/?p=171727
The US Capitol building.
Captured on an iPhone 13 Pro in Apple ProRAW and processed through Apple Photos. Jeff Carlson

Apple ProRAW is a hybrid file format that combines the flexibility of a traditional Raw file with the benefits of AI-powered image processing.

The post Testing the advantages of Apple’s ProRAW format appeared first on Popular Photography.

]]>

Photo technology continually advances, and generally, that’s great for photographers. But let’s be honest: lately, that pace seems overwhelming. It often feels as if our only choice is between embracing and rejecting the changes.

In a recent Smarter Image column, I wrote about how to outsmart your iPhone camera’s overzealous AI. The author of a New Yorker article bemoaned Apple’s computational photography features for creating manipulated images that look “odd and uncanny.” My column pointed out that by using third-party apps, it’s possible to capture photos that don’t use technologies like Deep Fusion or Smart HDR to create these blended images.

Although true, that also feeds into the idea that computational photography is an either/or choice. Don’t like the iPhone’s results? Use something else. But the situation isn’t that reductive: sometimes smart photo features are great, like when you’re shooting in low light. A quick snap of the iPhone (or Google Pixel, or any other computationally-enhanced device) can seize a moment that would otherwise be lost while you fiddle with settings on a regular camera to get a well-exposed shot.

How can we take advantage of the advancements without simply accepting what the camera’s smart processing gives us? 

The promise of Raw

This isn’t a new question in digital photography. When you capture a photo using most cameras, even the simplest point-and-shoot models, the JPEG that’s created is still a highly processed version of the scene based on algorithms that make their own assumptions. Data is then thrown out to make the file size smaller, limiting what you can do during editing.

One answer is to shoot in Raw formats, which don’t make those assumptions in the image file. All the data from the sensor is there, which editing software can use to tease out shadow detail or work with a range of colors that would otherwise be discarded by JPEG processing.

If you’ve photographed difficult scenes, though, you know that shooting Raw isn’t a magic bullet. Very dark areas can be muddy and noisy when brightened, and there’s just no way back from an overexposed sky comprised of all-white pixels.

The ProRAW compromise

This swings us back to computational photography. Ideally, we want the exposure blending features to get an overall better shot: color and detail in the sky and also plenty of shadow detail in the foreground. And yet we also want the color range and flexibility of editing in Raw for when we need to push those values further.

(News flash: We’re photographers, we want it all, and preferably right now, thank you.)

Apple’s ProRAW format attempts to do both. It analyzes a scene using machine learning technology, identifying objects/subjects and adjusting exposure and color selectively within the frame to create a well-balanced composition. At the same time, it also saves the original Raw sensor data for expanded editing.

There’s a contradiction here, though. As I mentioned in Reprocess raw files with machine learning for cleaner-looking photos, a Raw file is still just unfiltered data from the sensor. It doesn’t specify that, say, a certain range of pixels is a sky and should be rendered with more blue hues. Until software interprets the file through the demosaicing process, the image doesn’t even have any pixels.
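Demosaicing itself is that reconstruction step: each photosite on the sensor records only one color (arranged in a Bayer pattern), and software interpolates the other two channels. A deliberately crude sketch for an RGGB mosaic — real demosaicers interpolate at full resolution with far more sophisticated algorithms:

```python
import numpy as np

def demosaic_rggb(bayer):
    """Crude demosaic: collapse each 2x2 RGGB block into one RGB pixel.

    Halves resolution; shown only to illustrate that RGB pixels do not
    exist until software interprets the single-channel sensor data.
    """
    r  = bayer[0::2, 0::2].astype(np.float32)  # red sites
    g1 = bayer[0::2, 1::2].astype(np.float32)  # first green site
    g2 = bayer[1::2, 0::2].astype(np.float32)  # second green site
    b  = bayer[1::2, 1::2].astype(np.float32)  # blue sites
    return np.stack([r, (g1 + g2) / 2, b], axis=-1)
```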

Apple’s ProRAW solution is to create a hybrid file that actually does include that type of range-specific information. ProRAW files are saved in Adobe’s DNG (digital negative) format, which was designed to be a format that any photo editing software could work with (versus the still-proprietary Raw formats that most camera manufacturers roll with). It’s important to point out that ProRAW is available only on the iPhone 12 Pro, iPhone 12 Pro Max, iPhone 13 Pro, and iPhone 13 Pro Max models.

To incorporate the image fusion information, Apple worked with Adobe to add ProRAW-specific data to the DNG specification. If an editing app understands that additional information, the image appears as it does when you open it on the iPhone, with editing control over those characteristics. If an app has not been updated to recognize the revised spec, the ProRAW data is ignored and the photo opens as just another Raw image, interpreting only the bare sensor data.

So how can we take advantage of this?

Editing ProRAW photos

In my experience, ProRAW does pretty well with interpreting a scene. Then again, sometimes it just doesn’t. A reader pointed out that the photos from his iPhone 12 Pro Max tend to be “candy-colored.” Editing always depends on each specific photo, of course, but reducing the Vibrance or Saturation values will help; the Photographic Styles feature in the iPhone 13 and iPhone 13 Pro models can also help somewhat, although the attributes it lets you change are tone and warmth, not saturation specifically. And, of course, that feature is only on the latest phones.

With the iPhone 13 Pro, my most common complaint is that sometimes ProRAW images can appear too bright—not due to exposure, but because the image processor is filling in shadows where I’d prefer it to maintain darks and contrast.

Let’s take a look at an example.

Editing ProRAW files in Apple Photos

In this ProRAW photo shot a few weeks ago with my iPhone 13 Pro, Apple’s processing is working on a few separate areas. There’s a lot of contrast in the cloudy sky, nice detail and contrast on the building itself, and plenty of detail on the dark flagpole base in the foreground.

The US Capitol building.
The photo is straight out of an iPhone 13 Pro. Jeff Carlson

Want to see the computational photography features at work? When I adjust the Brilliance slider in Apple Photos, those three areas react separately.

Moving the Brilliance slider in Apple Photos adjusts the foreground, building, and sky separately.

However, I think this is an instance where the processing feels too aggressive. Yes, it’s nice to see the detail on the flagpole, but it’s fighting with the building. Reducing Brilliance and Shadows makes the image more balanced to my eyes.

The US Capitol building.
Reducing the brilliance and shadows results in a pleasing image. Jeff Carlson

The thing about the Photos app is that it uses the same editing tools for every image; Brilliance can have a dramatic effect on ProRAW files, but it’s not specifically targeting the ProRAW characteristics.

Editing ProRAW files in Lightroom

So let’s turn our attention to Lightroom and Lightroom Classic.

The US Capitol building.
Here’s what the photo looks like when opened in Lightroom. The app recognizes the format and applies the Apple ProRaw profile. Jeff Carlson

Adobe’s solution for working with that data is to use a separate Apple ProRaw profile. If we switch to another profile, such as the default Adobe Color, the Apple-specific information is ignored and we get a more washed out image. That can be corrected using Lightroom’s adjustment tools, of course, because the detail, such as the clouds, is all in the file.

The US Capitol building.
With the Adobe Color profile applied, much of the contrast and dark values are lost. Jeff Carlson

With the Apple ProRaw profile applied, though, we can adjust the Profile slider to increase or reduce the computational processing. Reducing it to about 45, in this case, looks like a good balance.

The US Capitol building.
Adjusting the profile amount creates an image with better tones. Jeff Carlson

Editing ProRAW files in RAW Power

The app RAW Power takes a similar approach, but with more granularity in how it processes raw files. For ProRAW photos, a Local Tone Map slider appears. Initially, it’s set to its maximum amount, but reducing the value brings more contrast and dark tones to the flagpole.

The US Capitol building.
RAW Power controls the ProRAW areas using a separate Local Tone Map slider. Jeff Carlson

This is just one example image, but hopefully, you understand my point. Although it seems as if computational processing at the creation stage is unavoidable, I’m glad Apple (and I suspect other manufacturers in the future) are working to make these new technologies more editable. 

The post Testing the advantages of Apple’s ProRAW format appeared first on Popular Photography.


]]>
Testing 3 popular AI-powered sky replacement tools https://www.popphoto.com/how-to/use-ai-to-replace-a-sky/ Wed, 27 Apr 2022 19:30:27 +0000 https://www.popphoto.com/?p=169784
A reflection in Mono Lake at sunset.
The sky and water reflection in this image were both replaced using tools in ON1 Photo RAW 2022. Jeff Carlson

In this week's Smarter Image column, we're looking at sky replacement features in Adobe Photoshop, Luminar Neo, and On1 Photo RAW.

The post Testing 3 popular AI-powered sky replacement tools appeared first on Popular Photography.

]]>

AI-assisted photo technologies mostly exist to save you time while editing, or to improve image quality, whether compensating for small sensors or enhancing images during processing. But sometimes they can radically change your photos, as is the case with sky replacement features.

Swapping a sky in a photo was initially a head-scratcher for me. One of the appeals of landscape photography, for instance, is to get out amid nature and experience the colors and wonder of a sunrise or sunset. Doing that takes work: planning the shoot, determining the best time to arrive and set up, checking the weather forecast, picking a composition, and sometimes standing around in cold weather waiting for the show to begin.

But with AI sky replacement, you could theoretically show up at any time, hang your camera out the car window, snap a shot, and then add someone else’s spectacular sky using your computer later. It feels like cheating and reinforces the feeling of many photographers that AI technologies are marginalizing craft and hard work. 

That’s an awfully traditional mindset, though, and I had to remember that photography encompasses a larger spectrum than my experience. Sky replacement is useful in real estate photography, where it’s rarely possible to wait around a house for ideal conditions, especially if you’re shooting three houses that day. Or you may need a better sky for an online advertisement.

Or you might be a landscape photographer who did put in the work, got skunked by a flat sky in a location you can’t easily return to, and want to make a creative composition anyway. We forget that most photography is art, and doesn’t need to hew to journalistic expectations of accuracy.

AI Sky Replacement

Replacing skies isn’t new. With patience, you could use software that supports layers to define a mask for the sky and put another sky image in its place. That takes time, particularly if the sky is interrupted by objects such as tree branches or a complicated skyline.

The goal of a successful sky swap is, of course, to make it appear as if the new sky was there all along. But that involves several pieces:

  • The sky should have a clean edge, taking into account interruptions. This is usually the most difficult part because the software must determine which areas belong to the sky and which belong to the foreground.
  • The non-sky elements of the image need to match the exposure and coloring that the new sky would cast over the scene. A sky isn’t just background—it’s the light source and filter for everything we see.
  • There needs to be a way for you to fine-tune the mask and the color in areas the software didn’t catch.
  • The tool should take into account reflections. Nothing ruins the illusion like a new sky with the original sky reflected in the water below.

And let’s not forget the obvious, which is the responsibility of the editor: Make sure light sources match and shadows are cast in the correct direction. After all, the goal is to present the illusion of a natural sky, and those are obvious flags that can ruin the effect.
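To make those moving pieces concrete, here’s a rough sketch in Python with NumPy (a toy illustration, not any of these apps’ actual code) of the core compositing step: blend a new sky through a soft mask, then fake a reflection by flipping the sky vertically and fading it into the water. It assumes float RGB images with values from 0 to 1, and it takes the masks as given, since generating them is the genuinely hard part these AI tools automate.

```python
import numpy as np

def replace_sky(photo, new_sky, sky_mask, water_mask=None, reflection_opacity=0.6):
    """Blend new_sky into photo wherever sky_mask is 1 (soft edges allowed)."""
    mask = sky_mask[..., None]  # broadcast the mask across the RGB channels
    out = photo * (1.0 - mask) + new_sky * mask
    if water_mask is not None:
        # Naive reflection: mirror the sky top-to-bottom and fade it in.
        reflected = np.flipud(new_sky)
        wmask = water_mask[..., None] * reflection_opacity
        out = out * (1.0 - wmask) + reflected * wmask
    return out

# Tiny 4x4 demo scene: top half is sky, bottom half is water.
photo = np.full((4, 4, 3), 0.2)   # flat gray original
sky = np.full((4, 4, 3), 0.9)     # bright replacement sky
sky_mask = np.zeros((4, 4))
sky_mask[:2] = 1.0
water_mask = np.zeros((4, 4))
water_mask[2:] = 1.0
result = replace_sky(photo, sky, sky_mask, water_mask)
```

Everything the apps layer on top of this (edge refinement, foreground relighting, water blur) is refinement of that same basic blend.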

Several photo editing apps include sky replacement features, each of them taking slightly different approaches. For this article, I’m looking at Adobe Photoshop, Skylum Luminar Neo, and ON1 Photo RAW 2022. I’m also applying sky images that are included in each app. You can add your own images to each one, too.

Below are the two test images we'll use.

hospital under grey skies
A hospital under cloudy skies will be the first test image. Jeff Carlson
Mono Lake
The second test is this photo of Mono Lake. Jeff Carlson

Photoshop Sky Replacement

You could say Photoshop is the original sky replacement utility, since for years its layers and selection tools were the way to do the job. Now, Adobe includes a dedicated Sky Replacement tool: Choose Edit > Sky Replacement.

In my first test image, the ruins of a hospital, the feature immediately does a good job of replacing the sky, including in the windows where the sky shows through. The edges are clean, even around the tree branches that have grown up beyond the top of the wall.

how to replace a sky in photoshop
Photoshop’s Sky Replacement looks convincing from the start. Jeff Carlson

It includes controls for shifting and fading the mask edge, adjusting the brightness and temperature of the sky, and moving the sky image itself, both using a Scale slider and by dragging with the Move tool.

Switching to a sunset image also shows that the foreground lighting is adapting to the new sky, with options for adjusting the blend mode and lighting intensity. The Sky Brush tool allows some manipulation of the edges.

sky replacement in photoshop
The sunset image in Photoshop adjusts the lighting on the foreground. Jeff Carlson

And typical of Photoshop, the default output option is to create new layers that include all the pieces: a masked sky image, a foreground lighting layer with its own mask, and adjustment layers for the colors. It’s nicely editable.

sky replacement photoshop tutorial
You say you love layers? Photoshop outputs all of its sky components into their own layers. Jeff Carlson

Notably missing, though, is recognition of reflective areas. When I apply a sky to an image of Mono Lake in Photoshop, the sky is changed but the glassy lake remains the same.

photoshop tutorial sky replacement
Something’s missing here in Photoshop. Jeff Carlson

Luminar Neo Sky AI

When I open the first image in Luminar Neo and choose an image from the Sky AI tool, the initial replacement is also pretty good. It has detected the top-right window, but not the openings in the center. And it’s unsure about the branches sticking up from the top of the wall, mostly catching their detail but also revealing an obvious halo and some of the original gray clouds.

replace sky Luminar Neo
Luminar Neo also does a good job, with a few hiccups. Jeff Carlson
luminar neo sky AI replacement
Looking at the branches close up reveals areas of the original image coming through.

To handle these discrepancies, Luminar uses a trio of Mask Refinement controls—Global, Close Gaps, and Fix Details—which to be honest are best used by sliding them and seeing what happens. In this case, increasing Global and reducing Close Gaps helps with the branches.

sky replacement luminar neo AI
Adjusting the Mask Refinement controls improves the treatment of the branches. Jeff Carlson

However, none of the controls can coax the sky into the windows at the bottom. That’s because the algorithm that detects the sky has decided they’re not part of the mask, and there’s nothing I can do to convince it otherwise. The Sky AI tool includes a manual Mask tool (as do most of Luminar’s tools), but in this case the brush can only expose or hide areas within the mask the AI has already generated; I can’t paint in new ones.

The Scene Relighting controls do a pretty good job of adapting the exposure and color and even include a “Relight Human” slider to adjust the appearance of the sky’s color when people are detected in the scene. I also appreciate the Sky Adjustments controls that help you match the sky to the rest of the image, such as defocusing it or adding atmospheric haze. However, note that the lighting isn’t really the problem here; with the sun setting behind the structure, more of the foreground would naturally be in shadow, illustrating the importance of the editor choosing appropriate imagery.

how to replace sky luminar neo
A late sunset image casts a darker hue to the foreground. Jeff Carlson

Where Sky AI excels over Photoshop is in its reflection detection, which in the Mono Lake image has created a convincing sky and reflection. I can adjust the opacity of the reflected image and also apply a “water blur” to it.

sky replacement luminar neo tutorial
Luminar Neo’s reflection looks natural in this photo. Jeff Carlson

ON1 Photo RAW 2022

In ON1 Photo RAW 2022, the swapped sky has its pluses and minuses. It’s identified all the window openings correctly and handled the intruding branches pretty well. However, there’s obvious haloing around the top edges of the building, a telltale sign of a swapped sky.

how to replace sky ON1 Photo Raw
At the start, the new sky in ON1 Photo RAW 2022 has obviously been added. Jeff Carlson
On1 Photo Raw sky replacement tutorial
The branches look fine, but the glow around the walls makes them seem otherworldly. Jeff Carlson

That haloing can be mitigated using the Fade Edge and Shift Edge controls, but not entirely; increasing the fade can sometimes make the edit less noticeable. Also note that some replacement skies will be more convincing than others.

sky replacement ON1 Photo Raw
Fading and shifting the edge of the mask helps, but it’s still noticeable. Jeff Carlson

The foreground lighting controls let me adjust not only the amount and blend mode of the effect, but also the color itself using an eyedropper tool, which provides more control.

how to replace sky ON1 Photo Raw
With this sunset image and the foreground coloring, the entire shot looks more natural. Jeff Carlson

ON1 Photo RAW 2022 does include reflection awareness, with controls for setting the opacity of the image and the blend mode.

On1 Photo Raw sky replacement
Now that’s what I was hoping to see when I went to Mono Lake, but I was only able to be there in the middle of the afternoon. I’d also need to do more work to reduce the exposure in the foreground due to the light source being low in the sky. Jeff Carlson

Skies Wide Open

As you can see, replacing a sky is a tricky feat. It can be made easier using AI technologies, but there’s still more work involved. In both Luminar Neo and ON1 Photo RAW, it’s possible you’d do part of the work there and then clean up the image in Photoshop.

Preprocess Raw files with machine learning for cleaner-looking photos https://www.popphoto.com/how-to/preprocess-raws-dxo-pureraw-2/ Wed, 13 Apr 2022 18:20:42 +0000 https://www.popphoto.com/?p=168297
A high ISO photo of a Bristlecone Pine processed through DxO PureRAW 2
A high ISO photo of a Bristlecone Pine processed through DxO PureRAW 2. Mason Marsh

We tested out DxO PureRAW 2's DeepPRIME technology to see if it could improve noise and detail in our shots. Spoiler: it did.

The post Preprocess Raw files with machine learning for cleaner-looking photos appeared first on Popular Photography.


Machine learning technology is used in many aspects of modern photography, from shooting images that would otherwise be difficult to capture to speeding up sorting and editing. This week I want to focus on a targeted implementation that ripples through the entire image edit process: translating and processing the data in Raw files. Specifically, I’m looking at DxO PureRAW 2, which uses a technology called DeepPRIME to demosaic and denoise the unedited data in Raw files to create better editable versions. The technology is also available in DxO PhotoLab 5, the company’s photo editing app.

The process of Raw processing

First, let’s get on the same page about Raw processing. 

If your camera is set to shoot in JPEG or HEIC formats, the camera’s processor does a lot of work to take the light information captured by the sensor, interpret it as luminance and color values, and create an image made up of colored pixels. The photo is also compressed to save storage, so a significant amount of image data is thrown away.

When you shoot in the camera’s Raw format, the file it saves is made up entirely of the data captured by the image sensor. It includes not just the values from each pixel, but information about how the sensor deals with digital noise, characteristics of the lens being used, and more. You also end up with larger image files because none of the data is purged in a processing step.

At this point, that file isn’t really a “photo” at all. The data has to be decoded before it can be viewed as an image. (When you see the shot on the camera’s screen, you’re actually looking at a JPEG preview.)

The first step is demosaicing, which interprets the color values produced by patterned filters on the camera’s image sensor. Most cameras use sensors with Bayer filters, while Fujifilm cameras use sensors with X-Trans filters.
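As an illustration only, a naive demosaic can be written in a few lines: scatter each sensor sample into its color channel, then fill every pixel with the average of nearby samples of that channel. This crude box-average version (in Python with NumPy, assuming an RGGB Bayer layout and float values) is far simpler than what any real Raw converter ships, which uses gradient-aware interpolation, but it shows the shape of the problem.

```python
import numpy as np

# Channel index (0=R, 1=G, 2=B) at each position of the 2x2 RGGB tile.
BAYER_RGGB = np.array([[0, 1],
                       [1, 2]])

def demosaic_box(mosaic):
    """Crude demosaic: fill each channel from nearby samples of that channel."""
    h, w = mosaic.shape
    ys, xs = np.mgrid[0:h, 0:w]
    chan = BAYER_RGGB[ys % 2, xs % 2]
    # Scatter each sensor sample into its own color plane.
    known = np.zeros((h, w, 3))
    counts = np.zeros((h, w, 3))
    known[ys, xs, chan] = mosaic
    counts[ys, xs, chan] = 1.0
    # Average the known samples inside a 3x3 window around every pixel.
    pk = np.pad(known, ((1, 1), (1, 1), (0, 0)))
    pc = np.pad(counts, ((1, 1), (1, 1), (0, 0)))
    total = np.zeros((h, w, 3))
    weight = np.zeros((h, w, 3))
    for dy in range(3):
        for dx in range(3):
            total += pk[dy:dy + h, dx:dx + w]
            weight += pc[dy:dy + h, dx:dx + w]
    return total / np.maximum(weight, 1e-9)
```

A sanity check: a uniformly gray mosaic should demosaic to a uniformly gray RGB image, since every channel's neighborhood average is the same value.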

The photo software’s algorithms translate that data to pixels and then apply further processing, such as denoising informed by the Raw file’s metadata to minimize the digital noise caused by shooting at high ISO values. At that point, the Raw file is ready for your edits to tone, color, and so on.
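Denoising can likewise be sketched in its most primitive form: a mean filter that trades a little sharpness for a lot less grain. Real Raw processors use far smarter, edge-aware methods, and DeepPRIME uses machine learning; this toy version just demonstrates the underlying trade-off.

```python
import numpy as np

def box_denoise(img, radius=1):
    """Naive mean filter: average each pixel with its neighbors."""
    h, w = img.shape
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (k * k)

# Simulate a flat gray patch shot at high ISO: signal plus random noise.
rng = np.random.default_rng(0)
noisy = 0.5 + rng.normal(0.0, 0.1, (64, 64))
denoised = box_denoise(noisy)
```

Averaging a 3×3 neighborhood cuts the noise's standard deviation by roughly a factor of three, at the cost of smearing fine detail, which is why even good denoisers can look slightly soft at high magnification.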

Software such as Adobe Lightroom or Capture One automatically handles this process, making it appear that the Raw file has been opened just as if you’d opened a JPEG file. Some apps use an intermediary step, such as when Adobe Photoshop opens Raw images first in the Adobe Camera Raw interface.

Into the DeepPRIME

A night photo of the Seattle skyline with the Space Needle.
A photo of the Seattle skyline captured at ISO 3200 and processed through Adobe Lightroom Classic. Jeff Carlson

Related: The best photo editor for every photographer

That window between Raw data and editable file is where DxO PureRAW 2 comes in. Its DeepPRIME technology uses machine learning to demosaic and denoise the data and create a better image to work with. It then pulls from DxO’s extensive library of camera and lens profiles to apply optical corrections. (And honestly, DeepPRIME is just an awesomely cool name.)

In my testing, I’m seeing results in two specific areas: processing noisy high-ISO images and working with the Raw files from my Fujifilm X-T3, which are sometimes underserved by Lightroom’s conversion.

A night photo of the Seattle skyline with the Space Needle, zoomed to 100%.
Here’s the above image zoomed in to 100%. The crop on the left was processed through Lightroom, the crop on the right through DxO PureRAW 2. The latter looks significantly cleaner while still maintaining a good level of detail. Jeff Carlson
A night photo of the Seattle skyline with the Space Needle, zoomed to 200%.
At 200%, the PureRAW 2 example looks a tad soft, but I still prefer this output to Lightroom’s. Jeff Carlson

In this example of the Seattle skyline, shot handheld at ISO 3200, the Raw file processed by Lightroom Classic reveals noticeable noise. I’ve made only exposure changes since the original was still pretty underexposed. The PureRAW-processed image is much cleaner and only reveals some denoising softening when viewed at 200% magnification.

Another noisy example

A night photo of a Bristlecone Pine.
The image on the left was processed through Lightroom with no adjustments made, the image on the right was processed through PureRAW 2, also with no adjustments made. Mason Marsh

For another example, this image of a Bristlecone Pine was shot on a Sony A7R IV at ISO 12800 and processed through both Lightroom and PureRAW 2, in each case without any other adjustments applied.

A night photo of a Bristlecone Pine zoomed in to 100%.
A 100% crop with the Lightroom example on the left and the PureRAW 2 one on the right. The latter maintains as much or more detail than the former, without any of the ugly noise. Mason Marsh
A night photo of a Bristlecone Pine zoomed in to 200%.
The same example cropped to 200%. Mason Marsh

Processing Fujifilm Raw files with PureRAW 2

As a Fujifilm camera owner, I was also happy to hear that PureRAW 2 added support for Fujifilm’s X-Trans Raw files. Lightroom’s conversion can create a “wormy” appearance, particularly in textured areas. Shooting autumn foliage, for instance, often vexes me because, while the overall image looks good, at 100% magnification and closer the pattern is pretty obvious.

A photo of fall foliage with a pond in center.
The image on the left was processed through Lightroom, the image on the right through PureRAW. From an overview, they look pretty similar. But zoom the shots in (see examples below) and you’ll notice crisper detail in the PureRAW version. Jeff Carlson

Lightroom includes its own machine-learning-based feature to reprocess Raw files (in the Develop module, choose Photo > Enhance), which improves on the default appearance, but to my eye, the PureRAW version looks better, if a little over-sharpened when viewed magnified.

A photo of fall foliage with a pond in center, cropped to 200%.
A 200% crop shows Lightroom’s standard processing on the left, Lightroom’s processing with the machine-learning-based “Enhanced” feature turned on (center), and PureRAW 2’s processing on the right. Jeff Carlson

I also ran the same image through Capture One 22, because it tends to handle Fujifilm Raw files well. Its output is an improvement over Lightroom’s, but I still prefer PureRAW’s.

A photo of fall foliage with a pond in center, cropped to 100%.
A 100% crop shows Lightroom’s “Enhanced” processing on the left, PureRAW 2’s output in the center, and Capture One 22’s output on the right. Jeff Carlson

Final thoughts

Since PureRAW deals only with the Raw translation stage, you can process images regardless of which app you prefer to edit in. In my Lightroom Classic examples, I ran PureRAW as a plug-in, which creates a new image saved to my Lightroom library. Alternatively, you can run PureRAW 2 as a standalone app, process your files, and then import the resulting DNG images into your app of choice or edit them individually.

One final note: DxO PureRAW 2 works with just about any Raw image out there, with one notable exception: Apple ProRAW files. That’s because ProRAW images, while sharing many of the editable characteristics of Raw files, such as greater dynamic range, are already demosaiced when they’re saved.

How to use artificial intelligence to tag and keyword photos for better organization https://www.popphoto.com/how-to/tag-and-organize-photos-with-ai/ Thu, 10 Mar 2022 13:00:00 +0000 https://www.popphoto.com/?p=164783
A woman with glasses and the reflection of a computer screen in her lenses.
Getty Images

Tagging images with keywords is time-consuming. Here's how AI can help shoulder some of the weight of this oh-so-dull task.

The post How to use artificial intelligence to tag and keyword photos for better organization appeared first on Popular Photography.


Computational photography technologies aim to automate tasks that are time-consuming or uninspiring: Adjusting the lighting in a scene, replacing a flat sky, culling hundreds of similar photos. But for a lot of photographers, assigning keywords and writing text descriptions makes those actions seem thrilling.

When we look at a photo, the image is supposed to speak for itself. And yet it can’t in so many ways. We work with libraries of thousands of digital images, so there’s no guarantee that a particular photo will rise to the surface when we’re scanning through screenfuls of thumbnails. But AI can assist.

Keywords, terms, descriptors, phrases, expressions…

I can’t overemphasize the advantages of applying keywords to images. How many times have you found yourself scrolling through your photos, trying to recall when the ones you want were shot? How often have you scrolled right past them, or realized they’re stored in another location? If those images contained keywords, the shots could often be found in just a couple of minutes or less. 

The challenge is tagging the photos at the outset.

It seems to me that people fall on the far ends of the keywording spectrum. On one side is a hyper-descriptive approach, where the idea is to apply as many terms as possible to describe the contents of an image. These can branch into hierarchies and subcategories and related concepts and all sorts of fascinating but arcane miscellany.

On the other side is where I suspect most people reside: keywords are a time-consuming waste of effort. Photographers want to edit, not categorize!

This is where AI technologies are helping. Many apps use image detection to determine the contents of photos and use that data when you perform a search. 

A screenshot of the AI features in Apple Photos
Apple Photos found photos of sunflowers…and tater tots. Jeff Carlson

Related: Computational photography, explained: The next age of image-making is already here

For example, in Apple Photos, typing “sunflower” brings up images in my library that contain sunflowers (and, inexplicably, a snapshot of tater tots). In each of these cases, I haven’t assigned a specific keyword to the images.
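The mechanics of that search are straightforward once recognition has produced tags; the clever part is the recognition itself. Here's a toy sketch of the lookup side, with made-up filenames and tag sets standing in for what a model like Apple's or Adobe Sensei might emit:

```python
from collections import defaultdict

# Hypothetical stand-in data: in a real app these tag sets come from an
# image-recognition model, not from anything the photographer typed.
photo_tags = {
    "IMG_0142.jpg": {"sunflower", "field", "summer"},
    "IMG_0178.jpg": {"tater tots", "food"},
    "IMG_0201.jpg": {"sunflower", "bee", "closeup"},
}

# Invert it once: tag -> set of photos carrying that tag.
index = defaultdict(set)
for photo, tags in photo_tags.items():
    for tag in tags:
        index[tag].add(photo)

def search(term):
    """Return the photos whose recognized tags include the search term."""
    return sorted(index.get(term.lower(), set()))
```

With the index built, `search("sunflower")` returns both sunflower shots instantly, which is exactly the behavior the apps above expose through their search fields.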

Similarly, Lightroom desktop (the newer app, not Lightroom Classic) takes advantage of Adobe Sensei technology to suggest results when I type “sunflower” in the Search field. Although some of my images are assigned keywords (at the top of the results list), it also suggested “Sunflower Sunset” as a term.

A screenshot of the AI features in Apple Photos
I never added a “sunflower” keyword to this image, as you can see in the Info panel, but Photos recognizes the flower in it. Jeff Carlson

That’s helpful, but the implementation is also fairly opaque. Lightroom and Photos are accessing their own internal data rather than creating keywords that you can view. 

What if you don’t use either of those apps? Perhaps your library is in Lightroom Classic or it exists in folder hierarchies you’ve created on disk?

Creating keywords with Excire Foto

I took two tools from Excire for a quick spin to see what they would do. Excire Foto is a standalone app that performs image recognition on photos and generates exactly the kind of metadata I’m talking about. Excire Search 2 does the same, just as a Lightroom Classic plug-in.

I loaded 895 images into Excire Foto, which scanned and tagged them in just a couple of minutes. It did a great job of creating keywords to describe the images; with people, for instance, it differentiates between adults and children. You can add or remove keywords and then save them back to the image, or to sidecar files for Raw images. 

Excire Foto screenshot of AI tools.
Excire Foto analyzed the selected image and came up with keywords that describe aspects of the photo. Jeff Carlson

So if the thought of adding keywords makes you want to stand up and do pretty much anything else, you can now get some of the benefits of keywording without doing the grunt work. 
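When a tool writes keywords to a sidecar rather than into the Raw file itself, they typically land in the dc:subject bag of an XMP file. Here's a minimal sketch of generating that bag in Python; this is illustrative only (a real implementation would merge into existing XMP metadata rather than regenerate the document, and this is not Excire's actual output):

```python
from xml.sax.saxutils import escape

def xmp_sidecar(keywords):
    """Build a minimal XMP document carrying keywords in a dc:subject bag."""
    items = "".join(f"<rdf:li>{escape(k)}</rdf:li>" for k in keywords)
    return (
        '<x:xmpmeta xmlns:x="adobe:ns:meta/">'
        '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">'
        '<rdf:Description rdf:about="" xmlns:dc="http://purl.org/dc/elements/1.1/">'
        f"<dc:subject><rdf:Bag>{items}</rdf:Bag></dc:subject>"
        "</rdf:Description></rdf:RDF></x:xmpmeta>"
    )

# Keywords an AI tagger might produce for one of my sunflower shots.
sidecar = xmp_sidecar(["sunflower", "field", "adult"])
```

Because the keywords live in a standard location, any app that reads XMP sidecars (Lightroom Classic included) can pick them up alongside the original Raw file.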

Generating ‘alt text’ for images

Text isn’t just for applying keywords and searching for photos. Many people who are blind or visually impaired still encounter images online, relying on screen reader technology to read the content aloud. So it’s important, when sharing images, to include alternative text that describes their content whenever possible.

Screen shot of adding alt text in Instagram.
The above shows how to add alt text on Instagram. Jeff Carlson

For example, when you add an image to Instagram or Facebook, you can add alt text—though it’s not always obvious how. On Instagram, once you’ve selected a photo and have the option of writing a caption, scroll down to “Advanced Settings,” tap it, and then under “Accessibility” tap “Write Alt Text.”

However, those are additional steps, throwing up barriers that make it less likely that people will create this information.

That being said, Meta, which owns both Instagram and Facebook, is using AI to generate alt text for you. In a blog post from January 2021, the company details “How Facebook is using AI to improve photo descriptions for people who are blind or visually impaired.”

A close-up photo of water on a leaf in an image editing window.
Facebook’s automatically generated alt text did an OK job identifying what’s in the above photo. Jeff Carlson

The results can be hit or miss. The alt text for the leaf photo above is described by Facebook as “May be a closeup of nature,” which is technically accurate but not overly helpful.

When there are more specific items in the frame, the AI does a bit better. In the image below—an indulgent drone selfie—Facebook came up with “May be an image of 2 people, people standing and road.”

A B&W photo of two men holding an umbrella in an image editing window.
The alt text for this image is a bit more accurate, though the text still doesn’t quite describe the image. Jeff Carlson
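Those “May be an image of…” strings suggest the final step of Facebook's pipeline is a simple template over whatever concepts its recognition model returns. Here's a toy reconstruction of just that templating step (the detection itself is the hard, machine-learning-heavy part, and this is my guess at the format, not Facebook's code):

```python
def template_alt_text(concepts):
    """Join detected concepts into a hedged, Facebook-style description."""
    if not concepts:
        return "May be an image"
    if len(concepts) == 1:
        return f"May be an image of {concepts[0]}"
    return ("May be an image of "
            + ", ".join(concepts[:-1])
            + " and " + concepts[-1])

# Concepts like those detected in the drone selfie above.
alt = template_alt_text(["2 people", "people standing", "road"])
```

The template is trivially easy; the quality of the alt text rises and falls entirely with the quality of the detected concepts fed into it.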

Another example is work being done by Microsoft to use machine learning to create text captions. In a paper last year, researchers presented a process called VIVO (VIsual VOcabulary pretraining) for generating captions with more specificity.

So while there’s progress, there’s also still plenty of room for improvement.

Yes, Automate This Please

Photographers get angsty when faced with the notion that AI might replace them in some way, but creating keywords and writing captions and alt text doesn’t raise the same worry. This is one area where I’m certainly happy to let the machines shoulder some of the work, provided, of course, that the results are accurate. 
