Depth of field causes more confusion among photographers — beginners and otherwise — than nearly any other topic out there. Many “common knowledge” tips about depth of field have some flaws, or are at least partially inaccurate. At a personal level, it took me far too long to separate the good suggestions from the bad, and I eventually realized that I had been relying upon some erroneous information for years without knowing better. My goal with this article is not to make the most controversial possible statements, or needlessly poke holes in things that are almost entirely true. Instead, my hope is to cover some of the basic, common inaccuracies that you may have heard about depth of field, in case you’ve been relying on faulty information for your own photography.
1) Is it True that Depth of Field Extends 1/3 in Front of Your Subject, and 2/3 Behind?
No, this one isn’t true. The 1/3-front, 2/3-behind suggestion is a fairly common one, but it doesn’t play out in practice.
In fact, the front-to-back ratio for depth of field varies wildly depending upon a number of factors. In very specific cases, it’s true that the ratio can be around 1:2 — but, more frequently, it’s something else entirely.
Which factors matter here? There are three: focal length, aperture, and camera-to-subject distance. As you focus closer, use wider apertures, and use longer lenses, the ratio starts to approach 1:1. When you do the opposite, the ratio quickly passes through 1:2, then 1:3, 1:10, 1:100, and onwards to 1:∞. The range where the focus is 1/3 in front of your subject and 2/3 behind (or the range where it’s close to that ratio) is quite thin indeed.
Where does this tip come from, then? My guess is that it started simply enough: There are cases where the depth of field behind your subject is twice as great as the depth of field in front of your subject. And, with certain lenses and apertures, that spot happens to be a very “medium” focusing distance away from your lens — in the range of 3 meters (10 feet). So, it’s not surprising to me that this morphed into a universal 1/3-front, 2/3-behind suggestion. And, it is indeed useful for beginners to know that depth of field takes longer to fall off behind your subject than in front.
Still, it’s quite a narrow window where the ratio is closer to 1:2 than to 1:1.5, or 1:3, or 1:4, and so on. The ratio 1:2 isn’t some common figure that tends to occur whenever you focus at “medium” distances. It’s much more of a special case than a generalization.
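To see how quickly the ratio drifts, here’s a minimal sketch using the standard thin-lens depth of field formulas. The 50mm focal length, f/5.6 aperture, and 0.03mm circle of confusion are illustrative values I’ve chosen for the example, nothing more:

```python
def dof_limits(f, N, c, s):
    """Near and far depth-of-field limits in mm (thin-lens approximation).

    f: focal length, N: f-number, c: circle of confusion,
    s: focus (camera-to-subject) distance.
    """
    H = f * f / (N * c) + f                      # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return near, far

# Illustrative setup: 50mm lens at f/5.6, 0.03mm circle of confusion
f, N, c = 50.0, 5.6, 0.03
for s_m in (1, 3, 5, 8):
    near, far = dof_limits(f, N, c, s_m * 1000)
    front, back = s_m * 1000 - near, far - s_m * 1000
    print(f"{s_m} m: front:back ratio = 1:{back / front:.2f}")
```

Running this shows the ratio climbing smoothly from close to 1:1 at one meter, past 1:2 around the middle distances, and well beyond it after that. The 1:2 point is just one spot along a continuum.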
2) How Do You Double Your Depth of Field?
It depends. But there is no simple thing you can do to universally double your depth of field for a given photo, so long as you’re defining “double” the way most people do, and you’re not calculating polynomial equations in your head.
What about using an aperture that is two stops smaller? Or stepping twice as far away from your subject and refocusing? Or using half the focal length of your current lens?
Nope. None of those things universally double your depth of field, even though you might have heard that they do.
This is easy enough to realize simply by doing a quick thought experiment. Say that you’re using a wide-angle lens, and your depth of field ranges from 1 meter to 15 meters. In this situation, infinity will be almost within your depth of field, but not quite; distant objects are probably only the slightest bit blurry. Still, they aren’t technically sharp enough to count within your depth of field.
In that case, you don’t need to do very much in order to bring the farthest objects completely within your depth of field. Simply change your aperture by a fraction of a stop, use a slightly wider focal length, or step back just a bit and refocus on the same spot.
In all of these cases, a minor change to your settings (focus distance, aperture, or focal length) will increase your depth of field from 14 meters (15 minus 1) to an infinite number of meters. Clearly, that’s more than doubling your depth of field! And, crucially, you don’t need to change your camera settings much in order to accomplish it.
(If you’re wondering about the exact values I used, it’s true that they’re a bit arbitrary. However, to make sure that they were realistic, I used a depth of field calculator with a 14mm lens, a subject distance of 2 meters, an aperture of f/5.6, and a 0.015mm circle of confusion. Feel free to play around with your own values.)
That’s why there’s no merit to claims that you can “double your depth of field” by doing one particular thing for any photo. Sometimes, focusing twice as far away will triple your depth of field. Other times, doing exactly the same thing will increase it 10x, 50x, or infinitely. It all depends upon how much depth of field you already have.
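As a sanity check on the numbers in that thought experiment, here’s a quick sketch using the standard thin-lens depth of field formulas with the same values (14mm, f/5.6, focused at 2 meters, 0.015mm circle of confusion). The assumption that roughly half a stop smaller lands at f/6.7 is mine:

```python
def dof_limits(f, N, c, s):
    """Near and far depth-of-field limits in mm (thin-lens approximation).

    f: focal length, N: f-number, c: circle of confusion, s: focus distance.
    """
    H = f * f / (N * c) + f                      # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return near, far

# The scenario above: 14mm lens, f/5.6, focused at 2 m, c = 0.015 mm
near, far = dof_limits(14.0, 5.6, 0.015, 2000)
print(f"f/5.6: {near / 1000:.1f} m to {far / 1000:.1f} m")

# Roughly half a stop smaller, and the far limit jumps to infinity
near2, far2 = dof_limits(14.0, 6.7, 0.015, 2000)
print(f"f/6.7: {near2 / 1000:.1f} m to {far2 / 1000} m")
```

The first line comes out at roughly 1 meter to 13 meters, and a fractional aperture change takes the far limit from “large but finite” to infinite, which is exactly the far-more-than-doubling effect described above.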
3) How Many Variables Affect Depth of Field in a Photo?
Assuming a typical lens, there are three:
- Focal length
- Aperture
- Camera-to-subject distance (how far away you’re focused)
From time to time, you may hear online that only two variables affect depth of field in a photo: aperture and magnification.
There’s a similar (though slightly less common) argument, too, that two other variables are the only ones that affect depth of field: subject distance and entrance pupil size.
Neither of these claims is technically wrong, but there’s an issue: People who say that depth of field depends on only two variables are merging two of the three together. That’s perfectly fine, but the individual components still matter, and they still affect your depth of field.
Magnification merges together focal length and subject distance. (It’s the size of an object’s projection on your camera sensor relative to its size in the real world.)
Entrance pupil size merges together focal length and aperture (focal length divided by f-number).
Most of the time, it doesn’t make things simpler to combine these variables together. No one in the field spends time calculating entrance pupils. The same is true for magnification, unless you’re doing macro photography.
To put it simply, all three components matter — focal length, aperture, and focusing distance. If you change one without compensating by also changing another, you’ll alter your depth of field every time.
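One way to see why the merged forms still work: if two setups share the same aperture and the same magnification, their depth of field comes out nearly identical even at different focal lengths. A sketch under the thin-lens approximation (the 50mm/100mm pairing, f/8, and the distances are arbitrary illustrative choices):

```python
def total_dof(f, N, c, s):
    """Total depth of field in mm (thin-lens approximation, assumes s < H)."""
    H = f * f / (N * c) + f          # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s)
    return far - near

# Same aperture (f/8) and same magnification: a 50mm lens at 1 m has
# m = f / (s - f) = 50/950; a 100mm lens matches that magnification at 2 m.
dof_50 = total_dof(50.0, 8.0, 0.03, 1000)
dof_100 = total_dof(100.0, 8.0, 0.03, 2000)
print(dof_50, dof_100)   # the two values differ by only a couple percent
```

The two results land within a few percent of each other, which is why “aperture plus magnification” works as a two-variable summary. But notice that matching the magnification required changing both focal length and focusing distance, so the three underlying variables never went away.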
4) Do Crop Sensors Have Greater Depth of Field?
This one has a lot of controversy around it, and I don’t want to add to that. The reality is actually quite straightforward.
The short answer is no, crop sensors don’t inherently have more depth of field than large sensors, although it can seem that way — in order to mimic a larger sensor, you’ll have to use wider lenses, which do increase your depth of field. (You also could stand farther back, which again increases your depth of field, although that does alter the perspective of a photo.) But the sensor itself does not directly give you more depth of field.
When it comes down to it, this shouldn’t be too surprising. A crop sensor is like cropping a photo from a larger sensor (ignoring individual sensor efficiency differences and so on). Unless you think that cropping a photo in post-production gives you more depth of field, this shouldn’t cause any confusion. Indeed, if you crop a photo and display both final images at the same print size, it’s even arguable that the cropped image shows a shallower depth of field, since any out-of-focus regions are magnified. But that’s a different rabbit hole, and a complex discussion for another day.
Still, the claim that small sensors have more depth of field isn’t entirely unfounded. Imagine that you have two cameras — one with a large sensor, and one with a small sensor — each with a 24mm lens attached. Because the crop sensor has tighter framing, you might choose to step back or zoom out in order to match what you’d capture with the larger sensor. Both of these options — stepping back or zooming out — do give you more depth of field.
So, the result of using a smaller sensor might indeed be that your photos have more depth of field, if you don’t do anything else to compensate for it. But this is an indirect relationship. The smaller sensor itself is not what causes the greater depth of field; it’s the wider lens or greater camera-to-subject distance.
5) Does the Sharpest Focusing Distance Depend upon Output Size?
No, although it’s a nuanced argument.
Here’s the starting point: If you’re making tiny, scrapbook-sized prints, you have way more leeway in terms of what looks sharp compared to something like a large, 24×36 inch print viewed up close. You won’t notice errors very easily in the small print. Even when the original photo has some major flaws, they won’t be visible if the print is small enough (or far enough away).
But does that mean the sharpest possible focusing distance changes as your print size does? No, not at all.
Indeed, there is only one focusing distance that will provide you with the most detailed possible photo of your subject (or the most overall detail from front to back, if that’s your goal instead). Just because a small print lets you get away with focusing on your subject’s nose rather than their eyes, for example, doesn’t mean that the “best possible focusing point” is on their nose. Whether you’re printing 4×6 or 24×36, and whether or not you can even see a difference, it’s still technically ideal to focus on their eyes.
Small prints let you mess things up more without noticing a huge effect; that’s very true. But they don’t alter the position of the best focusing point. So, the sharpest focusing distance does not depend upon output size (which is the impression you might get if you follow hyperfocal distance or astrophotography calculators too literally).
6) Do Hyperfocal Distance Charts Take Diffraction into Account?
No. Hyperfocal distance charts have several flaws. They don’t consider whether your foreground is nearby or far away (which matters if you want to focus at the proper distance). And, on top of that, they don’t take diffraction into account. They live in a world where f/8 is just as sharp as f/32.
If you’re still using hyperfocal distance charts to focus in landscape photography, you’re missing out on some potential sharpness in your images. It won’t be the difference between a masterpiece and a pile of garbage, but it’s enough that you might save yourself the price of a “sharper” lens by learning the proper technique for your current gear!
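For a rough sense of why diffraction matters here, you can compare the diffraction blur, approximated by the Airy disk diameter (2.44 × wavelength × f-number), against a typical sharpness criterion. The green-light wavelength and the 0.03mm full-frame circle of confusion below are my illustrative assumptions:

```python
# Compare diffraction blur to a typical depth-of-field sharpness criterion.
# Assumptions: green light (0.00055 mm) and a 0.03 mm circle of confusion.
wavelength = 0.00055   # mm
coc = 0.03             # mm

for N in (8, 11, 16, 22, 32):
    airy = 2.44 * wavelength * N   # Airy disk diameter on the sensor, mm
    flag = "exceeds CoC" if airy > coc else "within CoC"
    print(f"f/{N}: diffraction blur ~{airy:.3f} mm ({flag})")
```

By f/22 the diffraction blur is already about the size of the circle of confusion the chart is built around, and at f/32 it exceeds it. A chart that treats both apertures as equally “sharp” is ignoring a real source of softness.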
7) Should You Focus 1/3 of the Way into the Scene?
I’m not sure where this myth originated, but it holds no water.
The theory here is that you can get a sharp landscape photo from front to back by focusing 1/3 of the way into a scene — at which point, your foreground and background appear relatively equal in sharpness.
There are two problems here. First, it’s vague. If the farthest element in your photo is a mountain 30 kilometers away, is the “1/3” mark at 10 kilometers away? That would be quite a ridiculous place to focus, since, for all practical purposes, it’s infinity. If you focus at infinity for a landscape photo, you’ll sacrifice foreground detail unnecessarily.
I’ve heard other photographers say that it means 1/3 up the scene, visually speaking — in other words, taking the horizon as the top, and the bottom of your photo as the bottom. But that gets into another issue: The 1/3-up line almost always intersects with elements that are different distances from your camera. So, it still doesn’t tell you anything useful.
In the photo below, for example, would you focus on the nearby hill at the right, or the distant shrubs on the left? They’re both 1/3 up, by this definition:
In short, the 1/3 focus method is confusing to implement, and it’s not accurate. There are better ways to focus in a landscape photo if you want everything to be as sharp as possible.
8) Do Photos Look More Natural When Their Background is Slightly Out of Focus?
It’s an interesting question.
I hear statements like this relatively often: “Personally, for landscape photography, I make sure that my background is slightly softer than the foreground, since it looks more like how our eyes see the world.”
My question: Does it?
If you, personally, like your backgrounds to be slightly softer for aesthetic reasons, go for it. There’s nothing wrong with that decision at all. If you do so, though, keep in mind that it’s a personal creative decision, and not something that necessarily “looks natural” to everyone.
That’s simply because, in the real world, we absolutely have the ability to look in the distance and see a lot of sharpness and detail. Right now, I’m looking out a window at trees more than a mile away, and I can make out individual branches quite well (better, in fact, than my camera can if I’m using a wide enough lens).
So, no, it’s not inherently natural for the foreground of an image to be sharper than the background. Out-of-focus blur isn’t a particularly strong depth cue to our eyes that something is in the distance.
To demonstrate that point more clearly, I made a quick diagram in Photoshop. This is one of those optical illusions where you can see the figure “popping out” in two different directions. Which direction do you notice first, or most prevalently over time? Do you see the top square at the front, or at the rear?
Personally, when I first look at this diagram, I can’t help but see the top square appearing farther away. Over time, I can flip it in either direction, but it does tend to keep jumping back to the distance. This is despite the fact that it’s the only “in focus” square of the three. Everyone is different, so your mileage may vary; however, in an unscientific survey before publishing this article, six out of six people saw it this way as well!
If sharpness is such an important cue for telling our brain that an object is nearby, the top square should appear closest for most people, not farthest away. So, what gives?
The answer is that our eyes pick up several depth cues from the real world, and defocus blur in a photograph isn’t one of the big ones. Other depth cues like the height and size of the object in the frame are stronger. Those are the driving forces in the illusion above — not sharpness or blurriness.
Still, I’ll make a couple counterpoints as well.
People, in general, spend a lot of time watching television and movies. So, perhaps our perception depends upon those frequent cues. And in most shows, it’s very common for the background to be slightly blurred (or more) in most scenes, since the focus tends to be on people talking nearer to the camera. There’s no way to rule out the possibility that the same effect could transfer to photos, and create its own depth cue — albeit, not necessarily as strong as others that may exist outside of digital media.
It’s also true that if we look at extremely nearby objects with our own eyes, the background will be clearly out of focus. The same is true if we look in the distance, and there’s something quite close to our eye. So, I could understand an argument that some amount of blur in a photo — foreground or background — can look more natural than perfectly sharp photos with the greatest possible depth of field. Even then, though, our brains always attempt to create a sharp mental map in every direction. Day to day, most people won’t pay attention to out-of-focus blur caused by their own eyes.
To sum it up, this “myth” isn’t as strong or widespread as others out there, but it’s still something you’ll come across. Personally, it is my opinion that landscape photos (or architectural images, and similar) should look sharp from front to back unless you have a separate creative reason not to do so. Other people may have different opinions, and I’m open to changing mine if I see a counterexample where slight blur in the background leads to a more natural look. As a whole, this topic is more about creativity than the technical side of things, which certainly allows for more individual interpretation.
9) What Do You Think of the Merklinger Method of Focusing and Selecting an Aperture?
Especially following my article on the inaccuracies of hyperfocal distance charts, I’ve gotten this question a few times.
I hadn’t heard of the Merklinger method until about a year ago. Even so, I will emphasize that it has major flaws if you’re using it as a way to capture the sharpest possible photos.
The Merklinger method involves focusing on the farthest object in every photo. If you’ve ever done landscape photography, it should be clear that this technique will cost you some sharpness, especially if you have nearby foreground elements. By focusing on your farthest subject, you’re throwing away a lot of good depth of field.
Merklinger’s method succeeds at what it aims to do — providing a way to estimate depth of field in an image — but it certainly doesn’t provide a method of capturing maximum sharpness from front to back. Next time you’re out in the field, you can test this by photographing a scene with a nearby foreground. When you focus at infinity, no matter what aperture you use, you’ll get more blur than you would by focusing between the foreground and background.
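To put a rough number on that, here’s a sketch comparing the two focusing choices under a simple thin-lens blur model. The 24mm lens, f/11 aperture, and 1.5-meter foreground are illustrative assumptions of mine, not values from Merklinger:

```python
def blur_diameter(f, N, d, s):
    """Approximate defocus blur circle on the sensor, in mm (thin-lens model).

    f: focal length, N: f-number, d: focused distance, s: object distance.
    """
    if d == float("inf"):              # focused at infinity
        return (f * f / N) / s
    if s == float("inf"):              # object at infinity
        return (f * f / N) / (d - f)
    return (f * f / N) * abs(d - s) / (s * (d - f))

f, N, foreground = 24.0, 11.0, 1500.0   # 24mm at f/11, foreground at 1.5 m
inf = float("inf")

# Merklinger-style: focus on the farthest object (infinity)
fg_when_far_focused = blur_diameter(f, N, d=inf, s=foreground)

# Alternative: focus at twice the foreground distance instead
fg_when_mid_focused = blur_diameter(f, N, d=2 * foreground, s=foreground)
bg_when_mid_focused = blur_diameter(f, N, d=2 * foreground, s=inf)

print(fg_when_far_focused, fg_when_mid_focused, bg_when_mid_focused)
```

In this model, focusing at twice the nearest distance makes the foreground and infinity blur exactly equal, and roughly halves the worst-case blur compared with focusing at infinity. That’s the benefit of focusing between the foreground and background rather than on the farthest object.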
Hopefully, this helped shine a light on the depth of field myths that you’ll see so frequently today. This is an important enough subject that accurate information is valuable, even if it isn’t always easy to find. And, of course, some of the tips in this article are suggestions more than pure, mathematical debunking. If you want to have defocused backgrounds, for example, go for it! Photography is all about your own creative vision, and that’s not something for me to determine.
Depth of field is a huge topic, and there certainly may be myths I haven’t covered yet. For space purposes, I also didn’t go into all the little nuances of some of these individual points, since this article already is quite long. So, if you have any questions about depth of field, feel free to let me know in the comments section below. I’ll do my best to answer them, or clarify anything I’ve written above.