The transition from film to digital was one of the most dramatic shifts in the history of photography, and countless new techniques arose along the way. Everything from exposing to the right to the ability to review photos on the fly dramatically changed the photographic world. Of all these changes, though, one has transformed landscape photography far more than any other: the advent of digital image blending.
At first glance, image blending may not seem like such a momentous development; however, it has had wide-reaching effects. Some of the most difficult scenes to photograph – high-contrast lighting, for example – suddenly are accessible to any photographer, opening up a new world of landscapes to photograph. To me, that makes it one of the most dramatic shifts in the history of landscape photography, nearly as important as the creation of color film.
Of course, image blending existed to some degree in the darkroom days – particularly for panoramas or similar composites. However, digital blending expanded these capabilities several times over, revolutionizing their usefulness. At the same time, relatively few film photographers – only those who had access to a darkroom and a fairly advanced skill set – could blend photos together in the first place. Digital software dramatically lowered the price of entry.
Interestingly enough, digital image blending existed several years before the prevalence of digital cameras, so long as you were willing to scan your film and edit it on a computer. Of course, the explosion of digital cameras played a major role in making digital post-production a mainstream concept, but it is really the software that had the most impact. In this article, I will cover some of the most important types of image blending – HDR, focus stacking, and panoramas – and how they changed the world of landscape photography so significantly.
1) HDR
Perhaps the most obvious example of image blending is HDR photography. As ugly and overdramatic as it can be, HDR – or any method of blending different exposures – is one of the most crucial developments in the history of photography. No longer are you limited to your camera’s ability to capture information; now, you can photograph scenes in extreme lighting conditions without any problem.
On one hand, it can be argued that HDR isn’t necessary with film photography, since the shadow and highlight detail is far greater than what we get with a digital camera. To some degree – especially with certain films – this is absolutely true. If you shoot with black-and-white negatives, for example, the latitude in your photographs will be far greater than anything a digital camera could achieve. However, that is not always the case. If you shoot something like Velvia slide film, you will have a very limited dynamic range.
So, for some films – and for digital photography – the ability to blend HDR images is simply essential. Even with filters and careful exposure techniques, some scenes are impossible to photograph in a single frame. Exposure blending lets you photograph a landscape in any light – and, with good post-processing techniques, the resulting photographs can look completely natural.
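To make the idea concrete, here is a minimal sketch of exposure blending using OpenCV's Mertens exposure fusion – one common approach among several, and not necessarily the method behind any image in this article. The filenames are hypothetical, and the frames are assumed to come from a bracketed sequence shot on a tripod:

```python
import cv2
import numpy as np

# Hypothetical filenames: a bracketed sequence (dark, middle, bright)
# of the same scene, shot from a tripod.
paths = ["scene_-2ev.jpg", "scene_0ev.jpg", "scene_+2ev.jpg"]
images = [cv2.imread(p) for p in paths]

# Align the frames first, in case the camera shifted slightly between shots.
cv2.createAlignMTB().process(images, images)

# Mertens exposure fusion blends the best-exposed parts of each frame
# directly, with no intermediate HDR file or tone-mapping step.
fused = cv2.createMergeMertens().process(images)  # float32, roughly 0..1

# Clip and convert back to 8-bit for saving.
cv2.imwrite("blended.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```

Because exposure fusion works directly in display space, the result tends to look natural out of the box – closer to the subtle blending described here than to the garish tone-mapped HDR discussed later in the comments.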
2) Focus Stacking
One post-processing technique that is nearly impossible to replicate in the darkroom is focus stacking. At its most basic, focus stacking simply lets you blend together the sharpest parts of several images, resulting in a photograph that appears completely in focus (for more info, read our tutorial on focus stacking). It is an essential part of landscape photography, as well as other genres such as architecture or macro photography.
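As a rough illustration of that "sharpest parts" idea, here is a naive focus-stacking sketch in Python with OpenCV and NumPy. The filenames are hypothetical, and it assumes the frames are already aligned – real stacking software handles alignment for you, since magnification shifts slightly as you refocus:

```python
import cv2
import numpy as np

def focus_stack(paths):
    """Naive focus stack: for each pixel, keep the frame where local
    contrast (absolute Laplacian) is highest."""
    images = [cv2.imread(p) for p in paths]

    sharpness = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
        # Smooth each sharpness map so the per-pixel choice isn't noisy.
        sharpness.append(cv2.GaussianBlur(lap, (0, 0), 3))

    stack = np.stack(images)                  # (n, h, w, 3)
    best = np.argmax(np.stack(sharpness), 0)  # (h, w): index of sharpest frame
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]            # sharpest pixel from each frame

result = focus_stack(["focus_near.jpg", "focus_mid.jpg", "focus_far.jpg"])
cv2.imwrite("stacked.jpg", result)
```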
Focus stacking can be replicated somewhat with film, but only if you have particular types of equipment. For example, a tilt-shift lens – like the Nikon 24mm f/3.5 lens – lets you tilt the plane of focus at a steep angle, rendering a landscape perfectly in focus. The same is possible with view cameras, which offer even more movements than a tilt-shift lens.
However, as useful as this equipment can be, it is relatively expensive or difficult to use. By comparison, digital blending can be done no matter what equipment you own, stretching a photograph’s depth of field as far as you need. At the same time, digital blending works far better with macro scenes, which tilt-shift lenses and view cameras cannot always capture easily. On balance, the shift to digital processing was a significant change to the way most photographers worked with depth of field.
3) Panoramas
Panoramas are an interesting topic to explore, especially since it was possible to create them in the golden days of film. For one, some panorama-specific cameras existed, such as the Fuji GX617 (which, however impractical in today’s world, is still my if-I-win-the-lottery camera). In the darkroom, too, it is possible – if a bit tricky – to stitch a panorama from a set of negatives. Barring a panoramic camera or solid darkroom skills, though, many photographers would simply place a few prints next to one another and form a panorama. This technique is still used today.
Of course, the digital darkroom offers a much wider range of options. For one, it is very easy to blend multi-row panoramas with today’s software – something that would have been remarkably difficult, to say the least, without a computer. With digital processing, it takes just a few seconds to stitch together a simple panorama, as compared to the tremendous amount of time it would take in the darkroom.
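The "few seconds" claim is easy to verify yourself – OpenCV, for instance, ships a full stitching pipeline behind a two-line API. This is just one readily available tool, not necessarily what any particular photographer uses; the filenames are hypothetical, and the frames simply need generous overlap:

```python
import cv2

# Hypothetical filenames: overlapping frames shot left to right.
frames = [cv2.imread(p) for p in ("pano_1.jpg", "pano_2.jpg", "pano_3.jpg")]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(frames)  # feature matching, warping, blending

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", pano)
else:
    print("Stitching failed with status code:", status)
```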
Plus, digital panoramas have the benefit of being easy to edit. If something goes wrong, you can start over as many times as you need; it is significantly easier to correct any mistakes that appear. Although panoramas were possible with film – again, particularly with specialized cameras like the GX617 – the flexibility of digital processing means that you can take panoramas in far more conditions than what used to be possible.
4) Conclusion
For certain genres of photography, the advent of digital image blending may not have been particularly noteworthy. Sports photographers, for example, rarely blend images together even today. However, for landscape photography, image blending represents a tremendous shift in the ability of photographers to express their creative vision. Now, no matter the difficulty of a landscape, blending makes it possible to take the photograph you want.
In general, before the days of post-processing software, a view camera was necessary if you wanted such a high degree of flexibility. These cameras could tilt focus, use extremely large film (for cropping into a panorama), and produce negatives with a high dynamic range. In this way, a 4×5 or 8×10 camera can almost replicate the effects described in this article. Even then, though, these adjustments don’t quite match the flexibility of digital processing. And, of course, a view camera has other inherent disadvantages, such as its size, cost, and speed of use.
For me, that is what makes digital post-production so crucial, even more so than digital cameras themselves. Anyone – whether you use a point-and-shoot or a medium format camera – can blend photos together, correcting for the inherent disadvantages of physics in photography. This makes it possible to photograph any scene you want, no matter how tricky it is. At its most basic, image blending means that no landscapes are off limits.
Very relevant and illustrative blog post! Thank you for covering the most important types of image blending! I got your point about how HDR, focus stacking, and panoramas changed the world of landscape photography in so many significant ways! All your photos were brilliant! Everything comes out naturally with that perfect lighting! You were actually able to blend the sharpest parts of the image, and it leads to a better result! Your panorama shots were pretty awesome! And by the way, your conclusion says it all! Great write-up, Spencer!
“I feel there should be a distinction between “photography” and “digital imaging.” Each has its place, but they are not the same. One captures a moment, while the other creates a moment. Of course, with digital nothing is ever really “captured.” It is only recorded and almost always modified. Just my opinion, of course.”
I agree with you.
I feel there should be a distinction between “photography” and “digital imaging.” Each has its place, but they are not the same. One captures a moment, while the other creates a moment. Of course, with digital nothing is ever really “captured.” It is only recorded and almost always modified. Just my opinion, of course.
“On one hand, it can be argued that HDR isn’t necessary with film photography, since the shadow and highlight detail is far greater than what we get with a digital camera.”
Isn’t this getting old? I remember back in the 2000s people telling me how color neg stock had 7 stops of dynamic range to digital’s 3 or 4 of the time. Now film has somehow miraculously gained even more stops over modern digital cameras, which now possess 13 or so stops of dynamic range.
Even if it did have a greater range, film is dead, because you no longer have the high quality photo-optical printing options at your disposal. Scanning film is laughable.
On a different note, HDR has become a frightening format. I’ve rarely seen an HDR image that represents what we see with the human eye. Now, it CAN be cool as a creative tool, or at least it was until it became a bad cliché. HDR today is essentially laughable.
Image stacking, on the other hand, is a great asset. I use it in product photography, especially macro work, all the time. You need a dedicated app tho. Photoshop need not apply.
Panorama work is also useful, but it implies you’re going to print out BIG. Otherwise the panorama setting in most cameras will cut the mustard for your Facebook post.
I use HDR quite frequently for multi-row panoramas, where a graduated filter would be more difficult to stitch. When done right with luminosity masks you get very natural results, so I wouldn’t say it’s essentially laughable – although I have seen my share of very poor tonemapping, for sure! Also, I often shoot stitched panoramas with either an ultrawide lens or a fisheye for a wider field of view than you could normally get, with no intention of printing them very large, and especially for 360 panos, which have finally become mainstream this year (particularly with Facebook adding support for 360 photos last week). So not every pano is necessarily for large prints only, though large printing is certainly their biggest advantage.
Spy Black,
The term “dynamic range” is appropriate for *linear* systems that exhibit both hard-clipping at their upper limit and a more or less level-independent noise floor – e.g. linear digital audio and linear digital imaging – but the term isn’t appropriate for non-linear systems, such as negative film.
The effective dynamic range of Kodak Portra 160NC, for example, is circa 24 f-stops according to the DxO Labs (France) paper: Frédéric Cao; Frédéric Guichard; Hervé Hornung; Régis Tessière, “An objective protocol for comparing the noise performance of silver halide film and digital sensor”, Proc. SPIE 8299, Digital Photography VIII, 829902 (January 24, 2012); doi:10.1117/12.910113.
“The term “dynamic range” is appropriate for *linear* systems that exhibit both hard-clipping at their upper limit and a more or less level-independent noise floor – e.g. linear digital audio and linear digital imaging – but the term isn’t appropriate for non-linear systems, such as negative film.
The effective dynamic range of Kodak Portra 160NC, for example, is circa 24 f-stops according to the DxO Labs…”
So the term isn’t appropriate for film, but you’re using it in an example of film? And are you really buying that 24-stop line for one second? Have you ever seen anyone make use of that supposed range? Even if it did, how are you going to transcribe it to digital?
The term “dynamic range” is not suitable for film, which is why it isn’t used in film specifications. The term “effective dynamic range” is a suitable equivalent given the parameters specified in the document I referenced. I suggest you read the document rather than using the “shooting the messenger” fallacy in an attempt to discredit the facts.
Very wide contrast film is indeed used for practical purposes, e.g., photographing nuclear explosions: the film created by Charles Wales Wyckoff and EG&G had an estimated recording contrast range of 10⁸, 26.6 f-stops.
Transcribing a very wide contrast negative film to digital is relatively trivial because all film has a non-linear density versus exposure transfer function: the contrast range of the developed film (its density range) is much narrower than its recording contrast range (its exposure range). In effect, film is gamma encoded, hence the characteristic response curves of a film are plotted as density (itself a log₁₀ quantity) versus log₁₀ (exposure in lux·seconds).
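For readers who want to check the arithmetic behind Pete's figures: one f-stop is a factor of two in exposure, so a contrast ratio converts to stops via a base-2 logarithm. A two-line sanity check of the 10⁸ figure quoted above:

```python
import math

# A contrast ratio R spans log2(R) f-stops; for the quoted 10^8 range:
print(f"{math.log2(10 ** 8):.1f} stops")  # -> 26.6 stops
```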
I will echo what Aaron and Pete have said. Also, with regards to the dynamic range of film, my impression is that it strongly depends upon the type of film and processing that you use. 4×5 negatives processed in the darkroom have vastly more dynamic range than digital sensors, while something like Velvia color slide film has significantly less. Color films in general tend to have less dynamic range than black and white, although this is a generalization – as Pete said, Portra has a very wide dynamic range.
At the end of the day, though, they’re all just tools. If you don’t like HDR, that is perfectly understandable. Same with film versus digital photography in the first place.
“HDR” means nothing other than “high dynamic range” *recording*. What so frequently looks ugly is the garish tone mapping that is used to *render* HDR recordings.
Tone mapping is essential in all photography (apart from spectrophotometrically accurate applications) because one of the primary technical roles of the photographer is to map [to render] – in a believable manner – the contrast range of the object-space scene to the, usually very different, contrast range of the image-space display medium. Most films have an inherently baked-in S-shaped tone mapping curve that can be adjusted somewhat by changing the developing time and the dilution and type of the developer.
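To illustrate what a "baked-in S-shaped tone mapping curve" does, here is a toy example: a smoothstep curve applied to normalized tones, which compresses shadows and highlights while boosting midtone contrast. This is only a loose analogy for a film's characteristic curve, not a model of any particular stock:

```python
import numpy as np

def s_curve(x):
    """Smoothstep S-curve on 0..1 tones: gentle shoulder and toe,
    steeper (higher-contrast) midtones."""
    x = np.clip(x, 0.0, 1.0)
    return x * x * (3.0 - 2.0 * x)

tones = np.linspace(0.0, 1.0, 11)
print(np.round(s_curve(tones), 2))
# Shadows and highlights change slowly; midtones change fastest.
```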
Most colour reversal (transparency/slide) films have a recording contrast range of only 4–6 f-stops: Velvia is at the low end of the range, which is why it produces delightful images that were captured on overcast/drab days or in the soft light at ground level in forests. When using these films in high contrast lighting, the photographer has to control the contrast range of the scene using a variety of techniques, including: reflectors; flash guns; graduated filters; waiting until the natural light is more suitable. Photographers who learnt all this stuff decades ago can easily detect “photoshopped HDR images”.
” 4×5 negatives processed in the darkroom have vastly more dynamic range than digital sensors…”
Let’s accept that as gospel for a moment. How are you going to make use of it? Possibly, if you stay in the photo-optical realm – which is far more difficult in this day and age, especially in color work – you MAY be able to make use of it. But we live in the digital domain now, and even though we could conceivably scan it digitally in a multipass process into a 16- or 32-bit space, how are you going to digitally display that on the 8-bit display and printing systems in existence?
Before the digital age, most people printing didn’t know or care about dynamic range; they just shot and printed. I’m referring to people with their own darkrooms, B&W or color, who were technically above “the unwashed masses”. And just think of all the people who would shoot their pics and drop them off at 1-hour photo services, perfectly happy with what they got.
What I’m getting at here is that this whole concept has been blown out of proportion. Even if some film does indeed have greater dynamic range than digital, you’re never going to make use of it unless you have a complete photo-optical production line, know how to use it, and ultimately have a place for people to show up and physically stand in front of your work and admire your handiwork.
Which brings us back to the digital domain and those 8-bit printers, monitors, and graphics cards…
All very true. That was certainly part of what I tried to convey in this article — the only way even to approach the flexibility of image blending in the days of film was to use incredibly expensive equipment with a steep learning curve. In today’s world, it is even more impractical!
For what it’s worth, you can make use of a sensor with, say, a twelve-stop dynamic range even on an 8-bit monitor. You simply need to bring the additional stops of dynamic range within the 8-bit range, for example by reducing highlights or boosting shadows. That’s why we can tell the difference between a D810 and a Nikon 1 V1 (14.8 and 10.7 stops of dynamic range respectively, according to DxO), even though both technically have wider dynamic ranges than most printers/papers can handle.
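A toy illustration of this point, assuming nothing about how any real raw converter works internally: a simple log curve can fold twelve stops of linear data into an 8-bit output while keeping every stop distinguishable.

```python
import numpy as np

def compress_range(linear, stops=12.0):
    """Fold `stops` stops of linear data into 0..255 with a log curve --
    a crude stand-in for 'lower the highlights, lift the shadows'."""
    floor = 2.0 ** -stops               # darkest level we try to keep
    x = np.clip(linear, floor, 1.0)
    out = (np.log2(x) + stops) / stops  # 0..1, evenly spaced per stop
    return np.round(out * 255).astype(np.uint8)

# Tones one stop apart: 1, 1/2, 1/4, ... down to 1/2048.
tones = 2.0 ** -np.arange(12.0)
print(compress_range(tones))  # all twelve levels remain distinct in 8 bits
```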
That said, I think you are exactly correct when you say that the concept of dynamic range has been blown out of proportion! At some level, it is a bit silly that we compare dynamic range scores from different cameras in the first place. As Pete said, even Velvia was usable in high-contrast scenes, assuming that the photographer had sufficient skill and practice.
Spy Black,
The 8-bit-per-channel data pipeline is gamma encoded; the contrast ranges are:
12 EV (f-stops) for sRGB;
18 EV for Adobe RGB 1998.
8-bit Adobe RGB has a much wider contrast range than the 14.5 EV of 14-bit linear RAW data. The specification for UHDTV allows for the use of Rec. BT.2020 colour space, which can have up to 12-bits of encoded data per channel.
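For anyone who wants to verify these figures, the arithmetic is short. The smallest nonzero 8-bit code is 1/255 of full scale; decode it through each colour space's transfer function, and the ratio to full scale, expressed in stops, is the contrast range. The sRGB number relies on its standard linear segment (slope 12.92) for small values:

```python
import math

# sRGB: codes below the 0.04045 threshold decode linearly as v / 12.92.
srgb_floor = (1 / 255) / 12.92
print(f"sRGB:      {math.log2(1 / srgb_floor):.1f} EV")  # ~11.7, i.e. ~12 EV

# Adobe RGB (1998): a plain power law with gamma of about 2.2.
adobe_floor = (1 / 255) ** 2.2
print(f"Adobe RGB: {math.log2(1 / adobe_floor):.1f} EV")  # ~17.6, i.e. ~18 EV
```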
But that would assume it’s properly encoded. ;-)
That’s why there are international standards for colour spaces, storage formats, and transmission protocols; standards that are completely and unambiguously specified.
That’s a nice concise article, Spencer!
Personally, I’m a “get it as best as you can in one shot” type of guy, but sometimes blending in the digital darkroom is simply indispensable. Panoramas are one example, even though it’s often hard to retain a strong composition – your dunes shot above is a great example of a situation where the pano worked wonders. Personally, I use blending most often for astrophotography. Astrophotographers not only construct (multi-row) panoramas regularly, but also stack frames to increase the signal-to-noise ratio (for instance using the freeware DeepSkyStacker), enhance the dynamic range (e.g. for capturing the wide DR of the Orion Nebula), compose “false-colour” images out of narrowband exposures, or produce “time-composites”. Examples of the latter are star-trail images that illustrate Earth’s rotation, or meteor trail composites that show the apparent radiant of a meteor shower. Here’s an example of a composite I constructed from images of last year’s Geminid meteor shower:
www.pbase.com/gblee…9/original (be sure to select “original” size below the picture for a sharp image)
(Technical details: the background star image is a panorama of 3 portrait-oriented pictures (ISO 2000, f/4, 25s each), and the foreground landscape a panorama of 2 landscape-oriented pictures (ISO 800, f/4, 130s each); the meteor trails were copied in from 26 individual frames out of 693 (!) frames captured in total, all at ISO 2000, f/2.8, 25s, shot between 10pm and 4am on the night of 13 to 14 December with my EOS 70D sporting a Tokina 11-16mm at 11mm.)
One more application of “time-composite” blending I can think of is achieving a long-shutter look at the height of day without the use of a ND filter, by blending 10+ short exposures – useful for waterfalls for instance.
cheers,
Greg.
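The time-composite idea Greg mentions last – faking a long shutter by blending many short exposures – is simple enough to sketch. Averaging a tripod-mounted burst approximates a single exposure as long as the burst's total duration (the filenames below are hypothetical):

```python
import cv2
import numpy as np

# Hypothetical filenames: a burst of short exposures from a tripod,
# e.g. a waterfall at midday with no ND filter available.
paths = [f"waterfall_{i:02d}.jpg" for i in range(12)]

acc = np.zeros_like(cv2.imread(paths[0]), dtype=np.float64)
for p in paths:
    acc += cv2.imread(p)

# The mean of the frames smears motion like a single long exposure would.
cv2.imwrite("long_exposure_look.jpg", (acc / len(paths)).astype(np.uint8))
```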
Addendum: should have read Asheesh’s comment first about blending of several exposures as opposed to using ND filters, and your reply… (wish there was an “edit” button!)
Interesting, Greg, thank you for adding this. I don’t know much about astrophotography, but I do understand that stacking/blending is a necessity for many, many images. Your example is the perfect illustration, and quite a nice photograph as well.
Thanks Spencer!
This is an amazingly well-timed article and something that I believe is very relevant to everyone interested in getting great landscape pictures. I am curious to know your views on the use of Grad ND filters. I don’t have one, but while planning for an upcoming trip to Ladakh in the Himalayan mountain ranges, I was contemplating getting one – yet I am haunted by the question: is it really required any longer, with digital blending and the like? Could you please share your insights?
Glad you enjoyed it, Asheesh. My personal opinion is that a Grad ND filter is still very useful if you are trying to balance the sky and the land. Even if you can get a perfect blend in Photoshop, which requires quite a bit of practice with luminosity masking, it still is better to get everything in a single image if you can, just for the sake of simplicity. Not to mention the time it takes to edit a blended photo.
“…it still is better to get everything in a single image if you can, just for the sake of simplicity. Not to mention the time it takes to edit a blended photo.”
Yes indeed.
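For readers weighing the same choice as Asheesh: the digital alternative to a grad ND is essentially a feathered gradient mask applied in post. Here is a minimal single-image sketch under stated assumptions (hypothetical filename, horizon near the middle third of the frame; real blends usually combine two exposures instead):

```python
import cv2
import numpy as np

img = cv2.imread("landscape.jpg").astype(np.float32)  # hypothetical file
h = img.shape[0]

# Vertical ramp: 0 at the top of the frame, 1 at the bottom,
# feathered across the middle third where the horizon sits.
y = np.linspace(0.0, 1.0, h)[:, None, None]
ramp = np.clip((y - 1 / 3) / (1 / 3), 0.0, 1.0)

# Darken the sky by up to 2 stops, fading to no change at the bottom.
gain = 2.0 ** (-2.0 * (1.0 - ramp))
out = np.clip(img * gain, 0, 255).astype(np.uint8)
cv2.imwrite("grad_nd_look.jpg", out)
```

Unlike the physical filter, though, this cannot recover highlights that clipped in camera – which is exactly why a real grad ND is still recommended above when you can get one.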
Thanks for your response Spencer!
I would take the liberty of asking a couple more questions here. One, as I shared, I will be travelling to the trans-Himalayan range in Ladakh; would a soft-edge Grad ND be useful even when the horizon is not a straight line, as when capturing mountains? This is something I have not been able to wrap my head around.
Two, I will be using a 17-40mm f/4 lens on a full-frame body; which filter set would you recommend to get me started that offers the best value for money? Please note, I am an enthusiast, not a professional.
Looking forward to your response.
Unfortunately, there are no particularly cheap options for a grad ND filter. If you are on a budget, as nice as they can be, it may be worth getting other equipment instead (such as a good backpack for long hikes, or a lightweight travel tripod). That said, I personally use the Lee filter system. It’s a solid system, although I’m sure that many other filter systems are just as good.
For the Lee system, you need three components: an adapter ring for your lens, the Lee filter holder, and the filter itself. The total will be about $265 just for the basics, since you would need a special wide-angle adapter ring with the 17-40mm. I don’t know that I can justify this expense for a single grad filter. However, if you plan to switch to Lee (or a similar filter system) in the long run, it could be worth the initial cost.
I use digital blending, usually via luminosity masks, a LOT with my night photography–using a shorter exposure for sharp stars and a several minute exposure for landscape details. Technically I guess you could call it HDR, though the dynamic range isn’t all that expanded, it’s more just to get enough light at low noise for ground details and blend it with sharp stars which can only be taken at shorter shutter speeds and high ISOs. You can use a tracker too of course to shoot longer exposures of the stars at lower ISOs, but you’d still have to blend with a non-tracked image for landscape details to be sharp. Either way it’s digital blending to the rescue! I usually shoot a panorama at the same time, stitching the blended exposures into a 360 spherical panorama.
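A bare-bones version of the blend Aaron describes might look like the following: two tripod-aligned frames combined through a feathered, brightness-based mask. In practice, luminosity masks are built and refined by hand in Photoshop; this is only the core idea, with hypothetical filenames:

```python
import cv2
import numpy as np

# Short, high-ISO frame with sharp stars; long frame with a clean foreground.
stars = cv2.imread("stars_short.jpg").astype(np.float32)
ground = cv2.imread("ground_long.jpg").astype(np.float32)

# Crude luminosity mask from the long exposure: its bright (lit foreground)
# areas weight toward it, while the dark sky weights toward the star frame.
lum = cv2.cvtColor(ground, cv2.COLOR_BGR2GRAY) / 255.0
mask = cv2.GaussianBlur(lum, (0, 0), 25)[..., None]  # feathered, 0..1

blend = mask * ground + (1.0 - mask) * stars
cv2.imwrite("night_blend.jpg", np.clip(blend, 0, 255).astype(np.uint8))
```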
Very true, Aaron — and you are certainly correct about luminosity masks! They look much more natural than typical HDR, although I mentioned above that I do like Lightroom’s new HDR feature quite a bit. I can’t imagine the strain of a 360 degree panorama on your computer — do you convert the files to smaller versions first?
Nah, 360 is easy with fisheyes or wide angles, as far as computer specs go. Not very big at all: 90 to 475 megapixels, depending on focal length. It’s the gigapans and larger (35mm into telephoto range) that tax the RAM with Photoshop, or really long holy-grail timelapses with thousands of 36MP D810 files to batch-edit/convert to 16-bit TIFFs and then render to 8K 12-bit Cineform with After Effects–that’s a lot of disk I/O, but SSDs help a lot there. It just gets expensive to get ones large enough. 360 timelapses however DO take quite a bit of time and effort to post-process. I could use a new workstation this year or next. :-P
Interesting — thanks for sharing your process. I can imagine the computing power necessary for a 360 time-lapse, and it makes me glad that I mostly work with single-image stills :)
I also like Lightroom’s new HDR feature. I just really, really wish we could batch it! And I wish LR was more consistent with edits for panoramas and timelapses. I find I can’t do much editing at all without going back to the 2010 process and losing new features because it changes so dramatically between frames if the amount of sky or foreground varies too much between images. I wish LR had a feature where batch editing would make the images more consistent and not apply dramatically different edits from the same sliders based on image content.
BTW, I like SNS-HDR even more than Lightroom’s new HDR for more control. The alpha versions of 2.0 are coming along very, very nicely so far.
How on earth did you manage to take 7 pictures with different focus of that crab without it moving or running away? Was it dead or cryogenically frozen or something? ;)
I find it hard enough to take a good macro picture of insects or animals without them running/flying away, I can’t imagine how I could take 7…
Truth be told, I don’t know how I got it to stand so still! I focus stacked by moving my lens closer and closer to the crab, only by millimeters at a time, rather than by manually focusing. My best guess is that I didn’t make any dramatic movements, but I definitely was lucky :)
I was wondering how you got 7 photos of that crab as well. I have to remember to try this next time a macro subject sits still for me.
Did you use a tripod and a rail?
“As ugly and overdramatic as it can be, HDR – or any method of blending different exposures – is one of the most crucial developments in the history of photography. No longer are you limited to your camera’s ability to capture information; now, you can photograph scenes in extreme lighting conditions without any problem.”
Obviously all true…especially the ugly and overdramatic part.
The problem I see with blending is the temptation it offers people to produce pictures that do not actually represent what the scene looked like at the time the shutter was released. The answer used in justifying the processing generally falls under the “it’s art” umbrella. Of course that is well and good; I have no issue with the “art” side of manipulating digital photographs to suit one’s artistic objectives. However, I think that digital processing at the levels available these days (blending being a good example) has generated a whole world of pictures that are not representative of reality.
My objective is to take pictures that require minimal post-processing. It leads me to attempt to be a better photographer, not a better post-processor.
I think you hit the nail on the head. It’s great that these options exist, but if your goal is to represent reality, they can do more harm than good.
That, also, is why I have been so happy with Lightroom’s new HDR feature. Instead of adding bizarre hyper-colors, it just gives you more latitude when you need to lower highlights or boost shadows. I find myself doing HDR now far more than I did in the past.
Thanks for this informative, useful article on various means of post-processing landscape photographs. I am glad there is an acknowledgement that expanding the exposure range (HDR) adds value to the process. However, I take exception to the notion that dedicated HDR programs result in adding “bizarre hyper-colors” to the image.

I like and regularly use LR for post-production. I also use Photomatix Pro for my HDR processing. I have no financial or business interest in Photomatix Pro, but find it an excellent method of HDR processing and a fine company to work with. While Photomatix Pro can be used to create the “over-processed HDR look” that is really not appropriate for most landscape work, it need not result in bizarre hyper-colors. For example, by moving the Color Saturation slider to the left, a black-and-white image can be made. Conversely, by adjusting the colors with LR’s HSL color adjustments, garish colors are easily within reach. Both programs are just tools to be used or misused as we see fit.

In Photomatix Pro, there are several ways one can start to adjust a set of bracketed images. While Tone Mapping gives the most control, it also requires a bit more finesse to produce a “natural look”, but it certainly can be done. At the other extreme, the Fusion starting point gives the fewest adjustment possibilities and (arguably) a less “HDR-looking” image. Since it has a substantial range of adjustment possibilities and a moderate learning curve, Photomatix may not be as accessible as LR’s HDR option. It, and other dedicated HDR programs, may therefore not meet all user needs, but they should not be dismissed out of hand. As a side note, Photomatix Pro does allow bulk processing of images using whatever parameters the user defines, which can be a tremendous time/effort saver in many situations.
That is certainly true. However, there is a reason why many photographers see HDR as a sort of garish effect — quite often, that’s how people use it. I felt that it was important to make the distinction in this article.
Mr. ZeroVc, I agree with you also. There are so many tempting options and sliders available with all the post-processing tools these days. Sometimes they are a recipe for disaster, resulting in over-saturated and over-sharpened images that never look anything like what we saw through our camera’s viewfinder. I know all about that, and over the years I, like you, have tried to put the effort into being a better photographer. Thanks for your thoughts, best wishes…
Thanks Philip.
I absolutely agree with you. That’s why, nowadays, most of the photos we see come from “photoshoppers”, not from photographers.
I can agree with trying to be the best photographer I can, but I disagree with saying most of the photos we see come from “photoshoppers”. There is no SOOC – in fact, there never was; film type always affected the photo. What is real is always subjective, and I enjoy all of the different ways people see a scene and process it. No two people see anything the same, even with their own eyes: your eyes change as you age, people have varying levels of sight, and some people see colors when music is played. So I have never been sure that my blue is your blue. And sunsets with a digital live view or EVF allow composing with the sun in the frame, whereas with your eye you might have to look away. Painters throughout history have differed in painting style, so differing ways of processing photos is nothing new. Take, for example, super macros and digiscoping, which produce photos beyond what our eye can see. Enough of my rant – thanks for listening.
” So I have never been sure that my blue is your blue…”
Take this test and find out…
www.xrite.com/hue-test
Taking really good pictures at the camera is VERY hard to do. It requires the photographer to have a very good eye for composition, a solid understanding of how to use the camera, and a thorough grasp of light. Taking really good pictures is also very much about location and timing, with the success of both being improved through solid preparation (but I’ll be the first to admit that you can’t discount the role that “luck” plays sometimes in terms of location and timing).
What I think programs like Lightroom and Photoshop have done is provide a way for people to take average (or worse) pictures and process them into “better” looking pictures, even though the resulting pictures don’t look much like what was originally in front of the camera. I personally think it’s an easy trap to fall into, as it is much harder to become a better photographer (at the camera) than it is to become a better post processor.
What does “represent reality” mean to you? It is scientifically proven that the way we see is complex and dynamic. Just a short list of facts: we scan the “reality” in front of us by constantly moving our eyes, focusing and re-focusing on different parts of it and adjusting our pupils to changing light conditions; we see at full contrast and resolution only in the central circle of our vision (the 60-degree one, hence the “normal” 45mm-equivalent focal length); in the periphery we see a blurred, distorted picture that our brain barely attends to (just try reading a newspaper in your peripheral vision and you’ll see what I mean); and we don’t see ultra wide-angle at all – a wide view with human vision is achieved by moving the eyes, and even the head, around, after which the brain combines all those images, adding what we saw a few seconds back from the “buffer” of memory.

What about bokeh, long exposures, or super-telephoto – is that how we see the world? And when exactly did humans see the world in black and white? Yet B&W is considered by many the pinnacle of photography as an art. If all that is true, tell us: when has photography ever reflected 100% the way humans see the world?
Spencer,
Thanks for your article. If you have time, next time please go into details how to shoot landscape focus stacking.
Thank you! I actually have an article planned for focus stacking in landscape photography — stay tuned.