Almost all photographers know about panoramas and HDRs. Most also know about focus stacking. But how often do you hear photographers talk about a fourth method of blending photos together – image averaging?
Although image averaging has picked up a bit more popularity in recent years, it’s still not especially well-known. That’s a shame; depending on what subjects you photograph, image averaging can extend your shooting capabilities significantly.
Let me demonstrate.
What Is Image Averaging?
As the name implies, image averaging involves stacking multiple photos on top of each other and averaging them together. Generally, all the images in question are taken from the same camera position using identical camera settings.
The main purpose of image averaging is to reduce noise. However, it can also be used to simulate motion blur, akin to using a longer shutter speed.
How It Improves Image Quality
Much of the noise that appears in your photos is random. It looks completely chaotic, like in this crop of a blank wall:

If you take a series of photos with the same settings, the pattern of noise generally isn’t correlated from photo to photo. So, a pixel that’s bright in one image may be dark in another, and vice versa. This means, when you average multiple photos together, the overly bright or dark pixels will start to balance out, reducing the total level of noise in the image:

The more photos you average, the less noise will be in your final result. Each time that you double the number of photos you average, you will improve the noise levels by one stop. By averaging together four, eight, sixteen, etc. photos, you can get vast improvements in the level of noise in your photos.
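This is easy to verify numerically. Below is a minimal NumPy sketch (my own illustration, not from any particular photo software) that simulates averaging identically exposed frames of a blank wall with random sensor noise. The noise standard deviation falls as 1/√N, which is what drives the one-stop-per-doubling improvement described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "scene": a uniform gray wall with a true pixel value of 100.
true_value = 100.0
noise_std = 10.0  # per-frame sensor noise (hypothetical value)

for n_frames in (1, 2, 4, 8, 16):
    # Simulate n identical exposures, each with independent random noise.
    frames = true_value + rng.normal(0, noise_std, size=(n_frames, 500, 500))
    stack = frames.mean(axis=0)  # image averaging: per-pixel mean
    print(f"{n_frames:2d} frames -> noise std ~ {stack.std():.2f} "
          f"(theory: {noise_std / np.sqrt(n_frames):.2f})")
```

Running this, the measured noise of the 16-frame stack is roughly a quarter of the single-frame noise, just as the theory column predicts.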
Note, however, that image averaging is susceptible to movement in your photo. If you want everything in your photo to be sharp in the final image average, you must make sure that neither your camera nor your subject moves between photos. So, as nice as it would be if this method worked for sports or wildlife photography, most of the time it simply won’t.

Image Averaging Method in Photoshop
Image averaging is quite easy to do in Photoshop. There are two methods you can follow.
- Method one: Load all the individual photos as layers. Keep the bottom layer at 100% opacity. Reduce the layer above it to 1/2 opacity (50%). Go to 1/3 (33%) opacity for the next layer up. Then 1/4 (25%), 1/5 (20%), 1/6 (17%), and so on.
- Method two: Load all the images as layers. Select them all, then go to Layer > Smart Objects > Convert to Smart Object. Then go to Layer > Smart Objects > Stack Mode > Mean.
Both methods will produce the same result. Method two is much easier when you have a large number of images to average. However, method one leaves the individual layers intact so that you can edit them separately from one another, should you so choose.
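Both methods reduce to a per-pixel mean. Here is a quick sketch (using hypothetical random arrays as stand-ins for the layers, and modeling Photoshop's normal blend at opacity p as (1 − p) × below + p × layer) showing that the opacity cascade of method one is numerically identical to Stack Mode > Mean:

```python
import numpy as np

rng = np.random.default_rng(1)
layers = rng.random((5, 4, 4))  # five hypothetical image layers

# Method two: Stack Mode > Mean is a per-pixel average of all layers.
mean_stack = layers.mean(axis=0)

# Method one: bottom layer at 100% opacity, then each layer above
# blended in at 1/2, 1/3, 1/4, ... opacity (normal blend mode).
result = layers[0]
for k, layer in enumerate(layers[1:], start=2):
    opacity = 1.0 / k
    result = (1 - opacity) * result + opacity * layer

print(np.allclose(result, mean_stack))  # → True
```

(In practice, Photoshop only accepts whole-percent opacities, so method one picks up tiny rounding errors — e.g. 33% instead of exactly 1/3 — that method two avoids.)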
When to Use Image Averaging
There are three main situations where image averaging is especially helpful:
- Using a smaller camera sensor (including a drone)
- Photographing the Milky Way
- Simulating a long exposure
Let’s go through each of these situations in more detail.
Smaller Camera Sensor
One problem with a smaller camera sensor is that, even at base ISO, you may still have high levels of noise in your photo. Image averaging can be a way to simulate a lower base ISO on such cameras.
For example, if a photo from a point-and-shoot camera has objectionable noise in the shadows at base ISO 100 – but your subject is stationary, and you’re on a tripod – why not just shoot multiple photos to reduce noise? You can take a series of images to average later, which can improve your noise levels significantly.
I use this technique all the time on my drone, the DJI Mavic 2 Pro. Among drones, the Mavic 2 Pro has a relatively large 1-inch type camera sensor. But it’s still not at the level of even an entry-level DSLR, and there is reasonably high noise even at base ISO 100.
However, the Mavic 2 Pro also has a built-in “burst mode” that fires five images rapidly in sequence. Since all five photos are captured in about one second – and assuming the drone is hovering rather than moving – there’s essentially no shift in composition from shot to shot. This means that image averaging is an excellent method to reduce noise. (Incidentally, averaging five photos together results in about 2.3 stops of noise improvement, which leads to roughly the same image quality as ISO 100 on a full-frame DSLR!)
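The stop figures here are simple total-light arithmetic: averaging N frames gathers N times the light, giving log₂(N) stops of noise improvement and an equivalent ISO of the shooting ISO divided by N. A small helper (my own illustration, not a library function) makes the arithmetic explicit:

```python
import math

def averaging_gain(n_frames, shooting_iso):
    """Stops of noise improvement and equivalent ISO from averaging n frames."""
    stops = math.log2(n_frames)
    equivalent_iso = shooting_iso / n_frames
    return stops, equivalent_iso

# Five-shot drone burst at base ISO 100:
stops, iso = averaging_gain(5, 100)
print(f"{stops:.2f} stops, equivalent ISO {iso:.0f}")  # 2.32 stops, ISO 20
```

(The equivalent ISO describes noise level only; exposure brightness is unchanged.)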
Here’s how one of my drone photos looks, uncropped:

When zooming in, you can see some pixel-level noise, and the more post-processing I do to the image, the more exaggerated it becomes:

However, after averaging together the five photos from the burst, the noise levels are much lower:

That’s a really exciting improvement! I’m always after maximum image quality in my photos, and this lets me get wall-sized aerial prints without purchasing a $5000+ drone.
In fact, you may be interested to hear that image averaging is also how many smartphones boost image quality in low light nowadays. You hold the phone steady for a few seconds while the phone takes many photos in a row, which it then aligns and averages behind the scenes. The result is that it’s possible to take photos like this with a phone that look perfectly usable (especially on a small screen):

Milky Way and Astrophotography
One of the biggest uses of image averaging is to capture large amounts of detail in the night sky. It’s a popular technique among both the telescope crowd and those who use a wide-angle lens on an ordinary DSLR.
The telescope method should be reasonably obvious. So long as you have a tracking head to follow the stars, you can average together as many photos as you like to improve detail in the night sky. This way, you can take 100+ photos with several minutes of exposure each, then average them for a combined several hours of exposure time. You can even average together photos taken on different nights!
I personally don’t do anything at that level, but I still like using image averaging for some basic deep-sky astrophotography. For instance, here’s a single image of the Orion Nebula taken at ISO 12,800:

Here’s the final stack I made of 250 individual photos of Orion, which I tracked manually from shot to shot:

Huge difference!
You can also use a similar technique for ground-based Milky Way photos. You might think this wouldn’t work, because the stars are moving across the sky and thus will look blurred when averaged. However, it actually does work, so long as you use software that’s designed to align the stars independently of the foreground prior to averaging the images together. This feature allows you to capture much, much higher image quality than usual at night.
I used image stacking here to capture an extremely sharp Milky Way photo at f/4. The photo below is a blend of 14 individual frames, each shot at ISO 6400:

You can see how the image quality compares between one of the individual photos in the stack (first image) and the final stack (second image):
(As you can see, my image blending software did a good job aligning the stars prior to averaging, despite the complex foreground. I have a further article on that topic here.)
You can also use this technique to extend your depth of field at night and capture detailed foregrounds. A large depth of field is one of the “holy grails” of Milky Way photography, normally very difficult to achieve – but with image averaging, it’s hardly a challenge at all.
I did it below by taking 33 photos at ISO 51,200 and f/8, then using image averaging to blend them together:

An individual photo in this stack has so much noise that it’s completely unusable:

And finally, I was able to use this technique to vastly improve my image quality while photographing Comet NEOWISE earlier this year. With a 105mm lens, I took 53 photos at f/2.8, 3 seconds apiece, and ISO 16,000. The final blend would have been impossible to capture sharply in a single photo. In terms of noise, the 53 images simulate approximately ISO 320 (16,000 ÷ 53 ≈ 300), even though I shot each individual photo at ISO 16,000:

Image averaging also lets you capture high-quality Milky Way photos with more basic camera equipment, such as an APS-C DSLR with an 18-55mm kit lens, or even a point-and-shoot. There are many good possibilities here!
Again, though, you do need specialized software that aligns the stars prior to image averaging, or your stars will be blurry. The two most popular such options are Starry Landscape Stacker ($40, Mac only) and Sequator (free, Windows only).
Simulating Long Exposures
So far, I’ve only covered situations where you want to avoid the motion blur that image averaging picks up. That won’t always be the case.
Sometimes, motion blur can look really interesting in a photo – think of a waterfall or moving clouds. In situations like that, the typical solution is simply to use a long exposure, such as a 30-second shutter speed, to capture the motion blur. However, image averaging can simulate the same effect, which is useful if you don’t have a neutral density filter with you.
I did that here to get a smoother appearance in the water. This is a single image, taken without any special camera settings:

And then a blend of four such images to simulate a longer exposure:

Here’s how it looks as a single image with an ND filter instead – pretty similar to my eye, although a bit smoother:

The more photos you take, the smoother the long exposure effect will be. You’ll certainly want an ND filter if you take a lot of long exposure photos like this, but image averaging is a solid backup option.
Conclusion
Hopefully this article demonstrated just how powerful image averaging can be in photography! I use it for every single drone photo that I take these days, as well as many of my Milky Way photos in order to improve image quality or extend my depth of field. You may also find other uses for it, such as improving your camera’s dynamic range at base ISO, that I didn’t go into in this article. But the three main uses that I covered are the biggest that you’re likely to encounter.
As always, if you have any questions or comments, let me know below.
Hey folks! Spencer here. I’ve seen some messages asking how I’m doing and what I’m up to, since I haven’t written an article on Photography Life in a few months. These days, I’m working almost full time on YouTube making videos about photography and (finally) starting an Instagram. I may still stop by occasionally to write some articles on Photography Life in 2021, as time permits. Thank you for the kind messages, and wishing you and your families a happy New Year!
Nice to see a new article from you. But if your future is YouTube, I guess I will bid you farewell. I prefer reading articles. I don’t enjoy video. It’s a choice that I’ve made — just as it’s a choice you have made.
Best regards and thanks for all the articles,
DavidB
Thank you, David. It is indeed.
I know that readers on PL aren’t big YouTube watchers in general. Personally, I’m much more comfortable as a writer than a filmmaker. But none of that changes the trends these days, which are extremely video heavy. To make a living out of teaching photography, that’s just where I need to be.
Thanks to you and other readers for following my articles for the past 6 years! For those who do watch YouTube, I hope to see you there, too. And as I said at the end of the article, I’ll still drop by here occasionally.
I agree, video isn’t for me!
Welcome back, Spencer. Nice to see you posting here again. I have always wanted to learn image averaging. This guide was simple and well explained. But the technique needs more elaboration and input from many people.
Thank you, Muhammad! I don’t want to give the wrong idea – I’m not “back” so much as just planning to stop by with an article occasionally in 2021. YouTube is what I’m working on nowadays, and it requires almost all my attention.
If you want elaboration on the step-by-step-process, there’s just not much that you need to do differently in the field aside from taking a series of images. But you can read more on using it for drone photography here: photographylife.com/how-t…one-photos
And astrophotography here: photographylife.com/night…e-stacking
That’s good. YouTube suits you. You are young, smart, and energetic – maybe you should get into app development for things like image averaging, to make DSLR and mirrorless cameras easier to work with for this technique.
That’s very kind of you to say! I know that most PL readers aren’t heavy YouTube viewers, and life moves on – but after writing more than 300 articles here, it feels like my second home.
Apps are on my radar along with a number of other things for the next few years. Thanks for the encouragement!
Excellent article! Thanks! and Happy New Year!
Good that you’re back, Spencer! I always enjoy your articles and learn something from each and every one.
Your articles may not attract millions of clicks alone, but they increase the value of PL over all the other gear devoted sites tremendously, to me.
Continue the good work, both you and Nasim.
Thank you for the kind words, Chris!
I really do want to emphasize that I’m not “back” to the degree you may be thinking and will be spending most of my time on making YouTube videos for the foreseeable future. But I do hope to write articles on Photography Life every so often, if an idea pops into my head.
I wish I could do both, but doing either one properly takes almost all my attention.
An easy technique, and superb results!
Thanks for sharing it with us, Spencer, and I wish you a prosperous 2021!
And damned be the Covid!
Tord
Great article Spencer!
I guess there is also the trick of making moving objects, like tourists or cars, disappear using the median stack filter – that can be useful.
Nice article Spencer. I particularly like the results of the astro shots.
If I understand the technique correctly, the reduction in noise is an automatic result of multiple exposures, there is no ‘smart’ algorithm being applied that identifies noise (?). Hence, I suppose it would also work if you use the multi-exposure in camera, using the ‘Average’ setting? Indeed, it would be interesting to see results of that with and without in camera noise-reduction as well, as that is a ‘smart’ technology. Could be too ‘smooth’ though I guess.
That’s right, no algorithm making tricky decisions. You could conceivably use the same technique in a darkroom if you could align the negatives in the enlarger precisely enough!
Good point about the in-camera methods. I admit, because those settings are JPEG-only, I’m not especially familiar with them as a raw shooter. But I see no reason why it wouldn’t work in theory.
Thanks Spencer. Not much noise in film photography of course, just grain, which is uniform throughout the image due to the requirement for larger silver halide crystals in fast film emulsions, so no chance to cancel it out.
Yes, I’m referring to film grain. Unless I’m misunderstanding what you’re saying, wouldn’t each photo on film have a different pattern of grain? It’s certainly not like regularly-aligned pixels. (I’m not referring to double exposures on one piece of film, which wouldn’t work, but to multiple separate photos.)
Granted, it would be near impossible to align the negatives properly on top of one another in a darkroom, but the principle of reducing grain through image averaging still stands. An easier way to show that would be to scan the film, align the images in Photoshop, and use the same averaging technique.
Not that I recommend any of this, it’s just in theory. I may be misunderstanding you as well, if so, my apologies.
Unfortunately, you have misunderstood film grain, Spencer. In high ISO emulsions, the silver halide crystals are larger in both exposed and unexposed areas, be they either B&W tone or colour hue. There are no areas that are not grainy. Averaging lots of grain would just be lots more grain.
I believe you’re mistaken. I’m certainly open to tests that show the opposite. But I just imported some crops into Photoshop from side-by-side images taken on film, aligned them, and averaged them. I see just the same benefits as with digital. I’m happy to email you the test results if you like.
This result makes intuitive sense as well. Given that film grain has no fixed pattern from shot to shot, it will inevitably average out as more and more shots are averaged (such as a particularly bright speck of grain in one shot having less and less of an impact, when averaged with shots that don’t have such a speck).
Spencer, is there an analogous post-processing software technique to produce shallow depth of field by merging multiple images? That is the one limitation that frustrates me after switching from full frame to Micro 4/3. Your current article makes me think there might be a back-door way to produce the shallow depth of field that’s so easy with a full-frame camera and a 24-70mm f/2.8 lens. My iPhone can do it in portrait mode, and I understand Apple accomplishes that trick via software.
Thanks for another great, very useful article.
Sure thing!
The only way I know of is to create what’s known as a Brenizer method panorama.
Zoom in beyond the composition you actually want. Use the widest aperture you have available, then capture a panorama (usually multi-row). The more you zoom and the more images in your panorama, the shallower your depth of field will be.
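To put rough numbers on that (my own back-of-the-envelope arithmetic, not part of the reply above): the equivalent aperture of the stitched result scales with the ratio of the focal length you shoot at to the focal length whose field of view you end up with. A hypothetical helper:

```python
def brenizer_equivalent(focal_used, f_number, focal_of_final_fov):
    """Approximate equivalent f-number of a Brenizer-method panorama.

    Shooting at a longer focal length and stitching down to a wider
    field of view scales the effective aperture by the focal ratio.
    """
    return f_number * (focal_of_final_fov / focal_used)

# e.g. frames shot at 100mm f/2.8, stitched to a 50mm field of view:
print(brenizer_equivalent(100, 2.8, 50))  # → 1.4 (an f/1.4-like look)
```

So doubling your focal length (and stitching back to the original framing) buys roughly two stops of extra background blur.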
Hope this helps!
Here are the Brenizer method articles by Romanas Naryškin:
photographylife.com/tag/b…zer-method
Happy New Year, Spencer.
Kindest regards,
Pete
Thank you, Pete! Happy New Year to you as well.
Happy New Year Spencer! Great article! Question: For Astrophotography, do you need to do image averaging if you are using a star tracker? Would it make the image come out better? Thanks! Happy New Year!
Happy New Year, Zigman! Yes, it’s a good idea to stack photos when you’re using a star tracker. If your tracking alignment isn’t perfect, if your tripod shifts partway through, or if an airplane flies through your photo, an ultra-long exposure (say, 30 minutes) could be ruined. Taking 15 exposures of 2 minutes apiece, deleting any bad frames, aligning them (if they’re out of alignment), and averaging them will eliminate these problems. It also tends to result in less thermal noise, particularly if you wait 10-15 seconds between exposures.
Thanks!
Spencer,
I always enjoy your thoughts and methodologies. But I have a question.
Does a merging of the images using Photoshop’s Photomerge method work as well as method #1 in your article?
I took multiple images of the same view using an ISO setting on an old camera I have that I knew would result in a lot of noise. I then used your method #1 and separately used the Photomerge method. There appeared to be about the same improved results for each. Is this reasonable, or have I just let my eyes lie to me?
The best to you,
Ed
Spencer,
Upon VERY close inspection I think the “layer” method reduces the noise more than the “Photomerge” method does. But I would still like to hear your thoughts.
Ed
Spencer,
Not to be going on about this, but I would add the following:
I applied Topaz’s DeNoise AI to one of the images (there are 5 in my test) and found that method #1 in your article did as well as the Topaz app. That is saying a lot in favor of the technique you discussed.
(I am not discounting that results depend on the subject for which image copies are captured.)
P.S.: Keep up the good efforts you make, video or written.
Ed
Much appreciated, Ed! It sounds like you answered your own question already, but Photomerge isn’t made for this type of image blending, even though it does employ some averaging in its blending algorithm. I’d stick with either of the layer methods instead, but props to you for finding different ways to extend Photoshop’s capabilities.