A few months ago we wrote an extensive article on sensor crop factors and equivalence. In that post we covered several topics: the history of the cropped-sensor formats, brightness of the scene, perspective, depth of field, noise and diffraction. In today’s post I want to focus on (if you’ll excuse the pun) and expand on two of these topics:
- Perspective and Field of View
- Depth of Field (DOF)
Nothing in this post has to do with the number of megapixels your camera has, as we will be looking at side-by-side comparisons scaled to the same display size – the same way you would be looking at your photos on your tablet, computer monitor or in your photo albums.
1) Big vs Small sensor cameras – can they take identical photos?
From the get-go, let's establish that you really can take identical-looking photos with two wildly different cameras if you choose your settings appropriately. If you need convincing, just look at the example below. First, we see a photo I took with my iPhone 6's back-facing camera. Below that, you see the photo I took a few seconds later with my Nikon D600 FX DSLR with the Nikkor 24-70mm f/2.8G lens, set to f/16. Notice how similar they are in terms of perspective, focus and background blur. At the same magnification they look pretty much identical. This is no coincidence!
The iPhone has a *tiny* sensor and a *tiny* lens compared to the Nikon, yet we are supposed to believe that size matters. How can we resolve this paradox?
By the way, if you're observant you have already noticed that the Nikon DSLR image was cropped (slightly) to match the iPhone's 4:3 aspect ratio. You'll also see that the ISO values and shutter speeds for these two photographs were very different. These are interesting and important side notes, but for now they do nothing to resolve our paradox.
There is a simple formula that you can use to compute depth of field equivalence – even between a Nikon FX DSLR and an iPhone, as you just saw.
2) How to compare DoF and field of view for cameras with different sensor sizes (crop factors)
How can two cameras that differ so much in physical size and sensor size produce images that are, for all practical purposes, indistinguishable?
I'll try to keep it simple – and it really is simple – but you will need to know what a camera's crop factor is. Then, a simple multiplication does the trick:
The effective focal length (which should really be called the equivalent field of view, since there is no change in physical focal length) has implications similar to those of the same physical focal length on a larger-sensor camera. For example, the reciprocal rule, which states that your hand-held shutter speed should be faster than 1 divided by this number, still holds. Field of view is also obviously affected, which is why "equivalent field of view" is the more appropriate term – you get similar framing despite the difference in physical focal length. The effective f-number, on the other hand, is only relevant for depth of field – you won't use it for calculating the required shutter speed, ISO or anything else.
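As a rough sketch of the multiplication described above (the function and variable names are my own, not any camera API), the two effective values can be computed like this:

```python
# Hedged sketch of the crop-factor multiplication rule described above.
# Names (crop_factor, focal_length_mm, f_number) are illustrative only.

def effective_focal_length(focal_length_mm, crop_factor):
    """Equivalent field of view, expressed as a full-frame focal length."""
    return focal_length_mm * crop_factor

def effective_f_number(f_number, crop_factor):
    """Depth-of-field-equivalent f-number (not for exposure!)."""
    return f_number * crop_factor

# iPhone 6 values used later in this article: 4.15mm f/2.2, crop factor 7.21
print(effective_focal_length(4.15, 7.21))  # ≈ 29.9mm, i.e. ~30mm equivalent
print(effective_f_number(2.2, 7.21))       # ≈ 15.9, i.e. ~f/16 for DOF
```

Note that only the depth-of-field interpretation of the second value is meaningful; exposure is still set with the physical f-number.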
For two cameras to take identical photos of the same scene (in terms of perspective and depth of field), there are three important requirements:
- The two cameras need to be the same physical distance from the subject you’re photographing.
- The focal lengths of the camera lenses need to be set so that the fields of view seen by the cameras are similar (= effective focal lengths should be similar).
- The lenses' entrance pupils (the aperture size you see when you look into each lens) must be physically the same size (= effective f-numbers must be identical).
- A 100mm lens set to f/2.8 on a Nikon FX camera (which has a crop factor of 1.0) gives you the same field of view and depth of field that a 50mm lens set to f/1.4 would on a μ4/3 camera (which has a crop factor of 2.0).
- The iPhone 6 has a crop factor of 7.21, a focal length of 4.15mm and an f/2.2 maximum aperture. This iPhone gives you a similar field of view and depth of field as a full-frame camera with a 30mm lens set to f/16 does.
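The two example pairings above can be checked numerically: for equivalent photos, the physical entrance pupil diameter (focal length divided by f-number) should come out essentially the same. A minimal sketch:

```python
# Numeric check of the equivalence examples above via entrance pupil size.

def entrance_pupil_mm(focal_length_mm, f_number):
    return focal_length_mm / f_number

# Example 1: 100mm f/2.8 on full frame vs 50mm f/1.4 on micro 4/3
print(entrance_pupil_mm(100, 2.8))  # ≈ 35.7mm
print(entrance_pupil_mm(50, 1.4))   # ≈ 35.7mm -> same depth of field

# Example 2: iPhone 6 (4.15mm f/2.2) vs full frame 30mm at f/16
print(entrance_pupil_mm(4.15, 2.2))  # ≈ 1.89mm
print(entrance_pupil_mm(30, 16))     # ≈ 1.88mm -> very similar depth of field
```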
3) Perspective and Field of View
We can now look at these phenomena in slightly more detail. If you want to play with the numbers yourself, there are some DOF calculators that let you do that online.
3.1) Field of View and Subject Size
At a given distance from your subject, using a smaller sensor will have the same effect as cropping a portion of your photo from the larger-sensor camera.
The interesting thing to note is that this reduced field of view causes the subject to appear larger when you view the two photos side by side. It is easy to see that you would have gotten the same effect if you zoomed the larger-sensor camera’s lens in more, without changing the distance to your subject. This is why you multiply the lens’ focal length by the crop factor to get the effective focal length / equivalent field of view. More interestingly, this additional magnification also magnifies the background blur, reducing the effective depth of field (more about this later).
In previous articles we have already established that perspective changes only when the camera-to-subject distance changes; it is not affected by focal length. If you are confused by this, here are some basics to reiterate the point. Foreshortening refers to the way an object's perceived size changes with its distance from the observer, and to the changing relative sizes of the background and the subject.
Foreshortening plays a big role in art, and has been extensively studied over the ages. Artists originally only needed to learn the way in which distant objects appear smaller to a human observer. Humans view the world through eyes with a constant physical focal length of approximately 22mm at f/2.1, but with an effective focal length of about 43mm. Could you therefore say that our eyes have a crop factor of 2.0? More details can be found here.
Our camera gear gives us the opportunity to change focal length. Essentially, longer focal lengths reduce the relative differences in size between a subject and the distant background, whereas wide angles exaggerate this difference.
Perspective (i.e. relative sizes of different objects in the frame) only depends on the distance from the subject. For this reason, two cameras with different size sensors need to be at the same distance from their subjects in order to create photographs with a similar perspective.
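The point above can be sketched numerically. Under a simple pinhole-projection model (the sizes and distances below are made-up illustrative numbers), the ratio of subject size to background size depends only on the two distances, never on focal length:

```python
# Sketch of why perspective depends only on distance: with pinhole
# projection, an object's image size is f * size / distance, so the
# *ratio* of subject size to background size cancels the focal length f.

def projected_size(f_mm, object_size_m, distance_m):
    return f_mm * object_size_m / distance_m

subject = (1.8, 2.0)      # (size in m, distance in m) - illustrative
background = (10.0, 20.0)  # a distant tree, say

for f in (24, 70, 200):    # three focal lengths, same camera position
    ratio = projected_size(f, *subject) / projected_size(f, *background)
    print(f, round(ratio, 3))  # same ratio every time -> same perspective
```

Zooming changes framing and magnification, but the relative sizes within the frame stay fixed until you move the camera.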
4) Depth of Field
4.1) What happens when an object is “not in focus”
First let’s look at what happens in the imaging process. The lens model in the diagram is much simplified, but captures the essential process:
There is one specific distance at which the lens focuses. This distance (called the focus distance) can be adjusted, but at any given instant all points at this distance are projected as points on the image plane.
Whenever a point is further away from the lens than this unique focus distance, its light rays no longer focus on the sensor, but intersect somewhere in the air in front of the sensor. These light rays diverge again after their crossing point, ending up as a blurred circle on the sensor. Similarly, when an object is closer than the focus distance, the corresponding light rays hit the sensor before they converge, also forming a blurred circle on the sensor. You may now better understand the pretty bokeh circles you get when you photograph distant lights while the lens is focused on a nearby subject. The shape of this blurry spot is actually the same as the shape of the lens aperture, and can be manipulated by using a paper cut-out to shape the aperture into a custom form like a heart.
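The geometry just described can be sketched with the thin-lens equation; the model and the example numbers below are illustrative assumptions, not measurements of any real lens:

```python
# Thin-lens sketch of the blur circle: with the sensor placed at the image
# distance for the focus distance, a point at any other distance converges
# in front of (or behind) the sensor, and similar triangles give the blur
# diameter. All parameter names here are assumptions for this sketch.

def image_distance(f_mm, object_distance_mm):
    # Thin-lens equation: 1/f = 1/do + 1/di  ->  di = 1 / (1/f - 1/do)
    return 1.0 / (1.0 / f_mm - 1.0 / object_distance_mm)

def blur_circle_mm(f_mm, f_number, s_focus_mm, s_point_mm):
    aperture = f_mm / f_number                     # entrance pupil diameter
    di_sensor = image_distance(f_mm, s_focus_mm)   # where the sensor sits
    di_point = image_distance(f_mm, s_point_mm)    # where the point focuses
    return aperture * abs(di_point - di_sensor) / di_point

# 50mm f/2.8 lens focused at 2m: a streetlight at 50m renders as a blur disc
print(round(blur_circle_mm(50, 2.8, 2000, 50000), 3))  # ≈ 0.44mm on sensor
# while a point at the focus distance renders (ideally) as a point:
print(blur_circle_mm(50, 2.8, 2000, 2000))             # 0.0
```

A 0.44mm disc on a full-frame sensor is a very visible bokeh circle, which is exactly the distant-lights effect described above.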
4.2) Circle of Confusion
As we saw from the light ray diagram above, there is only a single distance at which even an ideal lens will focus the image perfectly. A point source at any other distance is blurred to a circular blob on the image plane that is called the circle of confusion (CoC). In practice, however, there is a range in which the CoC is imperceptible, as your eyes aren’t good enough to tell that the light is blurred to form a non-zero-sized disc. The transition from imperceptible to perceptible blur varies from person to person but on average subtends an angle of approximately 1 arc minute, as seen from your eye.
Now we can define Depth of Field: it is the region where the CoC is less than a certain value, i.e. where the entire image or a particular area of the image is perceived to be “sharp enough”. Depth of field is therefore based on some definition of “acceptable” sharpness and is essentially an arbitrary specification. The CoC cut-off size that defines depth of field ultimately depends on the resolution of the human eye, as well as the magnification at which the image is viewed.
At normal reading distance, 1 arc minute corresponds to a diameter of about 0.063 mm on a piece of paper – roughly the thickness of a human hair. Interestingly, the photographic community long ago, for reasons that are not entirely clear, settled on a coarser limit for the circle of confusion, roughly 0.167 mm, which means the depth of field scales printed on some lenses can be a bit over-optimistic. These DOF scales are therefore more of a crude rule of thumb than a precise tool.
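The arc-minute figure converts to a blur diameter on paper with simple trigonometry; the viewing distance below is an assumption I chose so the result lines up with the hair-width number quoted above:

```python
import math

# Hedged sketch: convert the ~1 arc-minute acuity limit into a blur
# diameter on paper at a given viewing distance (an assumed ~22cm here).

def blur_diameter_mm(viewing_distance_mm, angle_arcmin=1.0):
    angle_rad = math.radians(angle_arcmin / 60.0)
    return viewing_distance_mm * math.tan(angle_rad)

print(round(blur_diameter_mm(220), 3))  # ≈ 0.064mm, hair-width scale
```

Double the viewing distance and the acceptable blur diameter doubles too, which is why depth of field is tied to viewing conditions and not just the sensor.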
4.3) Three examples of how a smaller sensor influences Depth of Field
4.3.1) Smaller Sensor = decreased depth of field (if identical focus distance, physical focal length and physical f-number)
When you put photographs from two cameras next to each other to compare them, you are typically looking at the images at the same size. However, the image sensors that generated these two images may be very different in size. For example, the iPhone has a sensor that is less than one-seventh the size of a 35mm full-frame DSLR's in each of its dimensions. This means that the image projected onto the iPhone's image plane had to be magnified more than seven times as much as the DSLR's image in order to be displayed at the same size in the side-by-side comparison in this post.
This magnification magnifies everything, including the imperfections and blurring in the projected image. This means that at the same distance from your subject, with the same physical focal length and aperture setting, a camera with a smaller sensor will have a shallower depth of field than one with a larger sensor. The images will have the same perspective but different fields of view (framing), so it is a bit of an apples-and-oranges comparison. The effect is real, however, and runs contrary to common wisdom and what one might have expected!
4.3.2) Smaller Sensor = increased depth of field (if identical focus distance, effective focal length and physical f-number)
As we saw, the effective f-number of a camera with a smaller sensor in terms of depth of field is higher by a factor equal to its crop factor. This is because at a given distance from your subject, depth of field depends on the physical entrance pupil size. To have an equivalent field of view, the smaller-sensor camera needs to have a shorter physical focal length. At the same physical f-number this corresponds to a smaller entrance pupil size, and hence deeper depth of field.
4.3.3) Smaller Sensor = increased depth of field (if identical subject size, physical focal length and physical f-number)
With the same physical focal length, the smaller-sensor camera has a tighter field of view. For the subject to fill the same proportion of the frame as on the larger-sensor camera, we have to move further away from it. Moving further away increases the focus distance, which strongly increases depth of field – very roughly speaking, depth of field grows with the square of the focus distance. This effect is counteracted by the shallower depth of field due to the increased magnification explained earlier, but because of the more powerful square-law relationship the increase in DoF dominates, yielding a net increase.
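All three cases can be compared with the common near-field approximation DOF ≈ 2·N·c·s²/f² (valid well short of the hyperfocal distance). The numbers below, including the 0.030 mm full-frame CoC convention, are illustrative assumptions, not figures from this article:

```python
# Rough sketch of the three smaller-sensor cases using the approximation
# DOF ≈ 2 * N * c * s^2 / f^2. The CoC c scales with sensor size
# (c_full / crop); crop factor and distances are illustrative.

def dof_mm(f_number, coc_mm, distance_mm, focal_mm):
    return 2.0 * f_number * coc_mm * distance_mm**2 / focal_mm**2

crop = 2.0              # e.g. micro 4/3
c_full = 0.030          # a common full-frame CoC convention (assumed)
c_small = c_full / crop  # smaller sensor -> smaller acceptable CoC

full  = dof_mm(2.8, c_full, 3000, 50)   # full-frame baseline: 50mm f/2.8 at 3m
case1 = dof_mm(2.8, c_small, 3000, 50)  # same distance/f/N  -> shallower (4.3.1)
case2 = dof_mm(2.8, c_small, 3000, 25)  # same framing, 25mm -> deeper   (4.3.2)
case3 = dof_mm(2.8, c_small, 6000, 50)  # same subject size, back up 2x -> deeper (4.3.3)

print(full, case1, case2, case3)
```

Note that case3 comes out exactly crop times the full-frame baseline: the s² term contributes crop², the smaller CoC divides by crop, leaving the net increase described above.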
For the actual mathematical equations, follow this link.
With two cameras that have very different size sensors you can take photographs that look exactly the same, in terms of Depth of Field and Perspective. However, a large sensor camera gives you more creative freedom in the ability to isolate your subject from the image background.
You cannot simply substitute full-frame lenses with “equivalent” focal length alternatives on smaller sensor cameras because f/2.8 may not be the same f/2.8 you’re used to. And if you look at really fast lenses that give you similar results in terms of subject isolation capabilities (f/1.4 or faster lenses), you might be a bit disappointed to find out that they are either more expensive, or do not offer autofocus capabilities.
In the above case, Olympus would actually need to offer a 12-35mm f/1.4 lens to really give you versatility similar to what the full-frame 24-70mm f/2.8 lens offers. Unfortunately, such a lens does not exist, no matter how much money you want to spend. You simply cannot beat physics!
Nothing has really changed since the film days – larger film and larger sensors have a head start in capturing more detail, producing cleaner images and offering a better ability to isolate subjects. Am I saying that bigger is always better? No, of course not, as there are pros and cons to each system. In terms of pure detail and image quality, smaller cameras today produce stunning images that large-sensor cameras could not match just a few years back. The degree to which a larger sensor's possibilities are realized depends in part on the skill of the camera manufacturer. Furthermore, these advantages have implications for size, weight and cost. For many photographers the trade-offs that a large-sensor system demands are too much to embrace.
The choice of a camera system today boils down to one’s needs. For most photographers out there, smaller systems are going to be more practical. Professionals and aspiring professionals will be choosing larger systems due to the above-mentioned advantages. Hence, there is no right or wrong in picking one system over another.