Almost all photographers know about panoramas and HDRs. Most also know about focus stacking. But how often do you hear photographers talk about a fourth method of blending photos together – image averaging?
Although image averaging has picked up a bit more popularity in recent years, it’s still not especially well-known. That’s a shame; depending on what subjects you photograph, image averaging can extend your shooting capabilities significantly.
Let me demonstrate.
What Is Image Averaging?
As the name implies, image averaging involves stacking multiple photos on top of each other and averaging them together. Generally, all the images in question are taken from the same camera position using identical camera settings.
The main purpose of image averaging is to reduce noise. However, it can also be used to simulate motion blur, akin to using a longer shutter speed.
How It Improves Image Quality
Much of the noise that appears in your photos is random. It looks completely chaotic, like in this crop of a blank wall:
If you take a series of photos with the same settings, the pattern of noise generally isn’t correlated from photo to photo. So, a pixel that’s bright in one image may be dark in another, and vice versa. This means, when you average multiple photos together, the overly bright or dark pixels will start to balance out, reducing the total level of noise in the image:
The more photos you average, the less noise will remain in your final result. Each time you double the number of photos you average, you improve the noise level by one stop. By averaging four, eight, sixteen, or more photos, you can get vast improvements in the noise levels of your photos.
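If you want to see this effect without picking up a camera, here’s a quick simulation in Python using NumPy. The noise level and frame count are made-up values; the point is only to show how averaging suppresses random noise:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate 16 "exposures" of a flat gray wall (brightness 0.5)
# with random, uncorrelated sensor noise in each frame.
n_frames, noise_std = 16, 0.05
frames = 0.5 + rng.normal(0, noise_std, size=(n_frames, 100, 100))

single_noise = frames[0].std()          # noise in one frame
averaged_noise = frames.mean(axis=0).std()  # noise after averaging all 16

print(f"single frame noise:   {single_noise:.4f}")
print(f"averaged frame noise: {averaged_noise:.4f}")
```

The averaged frame’s noise drops by roughly the square root of the number of frames – about 4× here – which lines up with the one-stop-per-doubling rule, since 16 frames is four doublings.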
Note, however, that image averaging is susceptible to movement in your photo. If you want everything in your photo to be sharp in the final image average, you must make sure that neither your camera nor your subject moves between photos. So, as nice as it would be if this method worked for sports or wildlife photography, most of the time it simply won’t.
Image Averaging Method in Photoshop
Image averaging is quite easy to do in Photoshop. There are two methods you can follow.
- Method one: Load all the individual photos as layers. Keep the bottom layer at 100% opacity. Reduce the layer above it to 1/2 opacity (50%). Go to 1/3 (33%) opacity for the next layer up. Then 1/4 (25%), 1/5 (20%), 1/6 (17%), and so on
- Method two: Load all the images as layers. Select them all, then go to Layer > Smart Objects > Convert to Smart Object. Then go to Layer > Smart Objects > Stack Mode > Mean
Both methods will produce the same result. Method two is much easier when you have a large number of images to average. However, method one leaves the individual layers intact so that you can edit them separately from one another, should you so choose.
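If you’re curious why the cascading opacities in method one land on the same result as method two, here’s a small sketch in Python, with NumPy arithmetic standing in for Photoshop’s normal blending and random data as hypothetical “exposures”:

```python
import numpy as np

rng = np.random.default_rng(0)
layers = rng.random((5, 4, 4))  # five hypothetical exposures

# Method one: composite bottom-up, with layer k (counting the bottom
# layer as 1) set to opacity 1/k over everything beneath it.
composite = layers[0].copy()
for k, layer in enumerate(layers[1:], start=2):
    opacity = 1.0 / k
    composite = opacity * layer + (1 - opacity) * composite

# Method two: a straight mean of all layers (what Stack Mode > Mean computes).
mean = layers.mean(axis=0)

print(np.allclose(composite, mean))  # True
```

Each new layer at opacity 1/k contributes exactly a 1/k share, so after every step the composite is the running average of the layers so far.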
When to Use Image Averaging
There are three main situations where image averaging is especially helpful:
- Using a smaller camera sensor (including a drone)
- Photographing the Milky Way
- Simulating a long exposure
Let’s go through each of these situations in more detail.
Smaller Camera Sensor
One problem with a smaller camera sensor is that, even at base ISO, you may still have high levels of noise in your photo. Image averaging can be a way to simulate a lower base ISO on such cameras.
For example, if a photo from a point-and-shoot camera has objectionable noise in the shadows at base ISO 100 – but your subject is stationary, and you’re on a tripod – why not just shoot multiple photos to reduce noise? You can take a series of images to average later, which can improve your noise levels significantly.
I use this technique all the time on my drone, the DJI Mavic 2 Pro. Among drones, the Mavic 2 Pro has a relatively large 1-inch type camera sensor. But it’s still not at the level of even an entry-level DSLR, and there is reasonably high noise even at base ISO 100.
However, the Mavic 2 Pro also has a built-in “burst mode” that fires five images rapidly in sequence. Since all five photos are captured in about one second – and assuming the drone is hovering rather than moving – there’s essentially no shift in composition from shot to shot. This means that image averaging is an excellent method to reduce noise. (Incidentally, averaging five photos together results in about 2.3 stops of image quality improvement, which leads to roughly the same image quality as ISO 100 on a full-frame DSLR!)
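The 2.3-stop figure follows directly from the one-stop-per-doubling rule; as a quick check:

```python
import math

frames = 5  # the Mavic 2 Pro's burst size
stops = math.log2(frames)  # one stop of noise improvement per doubling of frames
print(f"{stops:.2f} stops")  # -> 2.32 stops
```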
Here’s how one of my drone photos looks, uncropped:
When zooming in, you can see some pixel-level noise. The noise becomes even more exaggerated the more post-processing I do on the image:
However, after averaging together the five photos from the burst, the noise levels are much lower:
That’s a really exciting improvement! I’m always after maximum image quality in my photos, and this lets me get wall-sized aerial prints without purchasing a $5000+ drone.
In fact, you may be interested to hear that image averaging is also how many smartphones boost image quality in low light nowadays. You hold the phone steady for a few seconds while the phone takes many photos in a row, which it then aligns and averages behind the scenes. The result is that it’s possible to take photos like this with a phone that look perfectly usable (especially on a small screen):
Milky Way and Astrophotography
One of the biggest uses of image averaging is to capture large amounts of detail in the night sky. It’s a popular technique among both the telescope crowd and those who use a wide-angle lens on an ordinary DSLR.
The telescope method should be reasonably obvious. So long as you have a tracking head to follow the stars, you can average together as many photos as you like to improve detail in the night sky. This way, you can take 100+ photos with several minutes of exposure each, then average them for a combined several hours of exposure time. You can even average together photos taken on different nights!
I personally don’t do anything at that level, but I still like using image averaging for some basic deep-sky astrophotography. For instance, here’s a single image of the Orion Nebula taken at ISO 12,800:
Here’s the final stack I made of 250 individual photos of Orion, which I tracked manually from shot to shot:
You can also use a similar technique for ground-based Milky Way photos. You might think this wouldn’t work, because the stars are moving across the sky and thus will look blurred when averaged. However, it actually does work, so long as you use software that’s designed to align the stars independently of the foreground prior to averaging the images together. This feature allows you to capture much, much higher image quality than usual at night.
I used image stacking here to capture an extremely sharp Milky Way photo at f/4. The photo below is a blend of 14 individual frames, each shot at ISO 6400:
You can see how the image quality compares between one of the individual photos in the stack (first image) and the final stack (second image):
(As you can see, my image blending software did a good job aligning the stars prior to averaging, despite the complex foreground. I have a further article on that topic here.)
You can also use this technique to extend your depth of field at night and capture detailed foregrounds. A large depth of field is one of the “holy grails” of Milky Way photography and is normally very difficult to achieve. With image blending, though, it’s hardly a challenge at all.
I did it below by taking 33 photos at ISO 51,200 and f/8, then using image averaging to blend them together:
An individual photo in this stack has so much noise that it’s completely unusable:
And finally, I was able to use this technique to vastly improve my image quality while photographing the comet NEOWISE earlier this year. With a 105mm lens, I took 53 photos at f/2.8, 3 seconds apiece, and ISO 16,000. The final blend would have been impossible to capture sharply in a single photo. The 53 images simulate approximately ISO 320 in terms of noise level, even though I shot each individual photo at ISO 16,000:
Image averaging also allows you to capture high-quality Milky Way photos with more basic camera equipment, such as an APS-C DSLR with an 18-55mm kit lens, or even a point-and-shoot. There are many good possibilities here!
Again, though, you do need specialized software that aligns the stars prior to image averaging, or your stars will be blurry. The two most popular such options are Starry Landscape Stacker ($40, Mac only) and Sequator (free, Windows only).
Simulating Long Exposures
So far, I’ve only covered situations where you want to avoid the motion blur that image averaging picks up. That won’t always be the case.
Sometimes, motion blur can look really interesting in a photo, such as with a waterfall or moving clouds. In situations like that, the typical solution is simply to use a long exposure – say, a 30-second shutter speed – to capture motion blur. However, image averaging can simulate the same effect, which is useful if you don’t have a neutral density filter with you.
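As a rough rule of thumb (my own back-of-the-envelope figuring, not an exact equivalence), averaging N back-to-back frames shot at shutter speed t looks similar to a single exposure of about N × t:

```python
# Hypothetical values: four quarter-second exposures taken back-to-back.
frames = 4
shutter = 1 / 4  # seconds per frame
equivalent = frames * shutter  # ignores the small gaps between frames
print(f"simulated exposure: ~{equivalent:.1f}s")  # -> ~1.0s
```

In practice the gaps between frames mean the blur won’t be perfectly continuous, which is why more frames give a smoother result.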
I did that here to get a smoother appearance in the water. This is a single image, taken without any special camera settings:
And then a blend of four such images to simulate a longer exposure:
Here’s how it looks as a single image with an ND filter instead – pretty similar to my eye, although a bit smoother:
The more photos you take, the smoother the long exposure effect will be. You’ll certainly want an ND filter if you take a lot of long exposure photos like this, but image averaging is a solid backup option.
Hopefully this article demonstrated just how powerful image averaging can be in photography! These days, I use it for every drone photo I take, as well as many of my Milky Way photos, in order to improve image quality or extend my depth of field. You may also find other uses for it that I didn’t go into here, such as improving your camera’s dynamic range at base ISO. But the three main uses I covered are the biggest you’re likely to encounter.
As always, if you have any questions or comments, let me know below.