Smartphones have come a long way. Over the last few years, the improvements made in smartphone photography have been huge, culminating in the powerhouses on the market today, such as the Google Pixel 2, the iPhone X, and the Samsung Galaxy S9. My current handset, the Samsung S7 Edge, can take great-looking photos and even performs pretty well in low light, and this model is now over two years old! But despite it being a capable piece of photography kit, I rarely find myself using my S7 camera app for anything more than snapping an amusing piece of graffiti or showing something to a friend. Then again, I also own two dedicated cameras.
My first system is a DSLR, which is my go-to camera if I’m going out to “do photography”. I also have a small Micro Four Thirds camera paired mainly with pancake lenses, which I can carry around in a tiny sling bag or in a coat pocket if I’m wearing one. I take this if I’m out with friends who aren’t into photography, or if I’m going somewhere I won’t be able to take a DSLR. This is the situation where a smartphone would most likely suffice, but I still overwhelmingly prefer to use an actual camera.
Like many people, I will admit to being a bit of a gearhead – waiting for a new lens to be delivered is always pretty exciting. But my true passion is to get out and shoot with both cameras as much as I can. I like to think these are well-considered purchases that I made in order to get shots that I couldn’t capture before (and I think I’m mostly right on this)!
The iPhone Question
A friend recently asked me why I still keep on using cameras, and why I didn’t just get one of the latest smartphones. I mumbled some stuff about better image quality and controls, but it got me thinking. Why do I feel the need to keep investing in and using a dedicated camera system? At this point in 2018, almost everyone has a smartphone advanced enough to take decent shots, at least in good light. So, why do I feel I need more?
A smartphone is a camera system all in one. You don’t buy lenses for it, and it’s a one-off cost. In fact, you could see it as free, since you are buying a phone anyway. You can get additional apps to boost its functionality, some of which are actually free, and even the stock Camera apps for each OS are updated to become more feature-rich over time.
Most cameras, by contrast, are hardware products, released to market with a set of features which you are largely stuck with – that is, until the camera companies try to entice you with the latest models in their next release cycle. I realise some companies are bucking this trend: Olympus and Fuji have been pretty regular with firmware updates for their cameras, for example, adding new features rather than just fixing bugs. Another standout is Pentax, which offered a rather unusual service when it released the K-1 Mark II. For a fraction of the cost of the new Mark II, Mark I owners were able to send their cameras to Pentax support and have them physically upgraded to the new model.
Another major advantage of the smartphone is its size and weight. Aside from perhaps the biggest phablets, they slip easily into a pocket or bag, and we carry them everywhere. Very few of us can claim this about our cameras. For years I have carried a large backpack on photography trips, which – depending on the equipment involved – may only have room for my DSLR and 2-3 lenses.
Smartphones vs Dumb Cameras
There has long been the myth that a good camera means good pictures. It must be one of the biggest bugbears of a photographer to hear the ubiquitous line: “That’s a great picture – you must have an expensive camera!”
With better and better cameras inside the average person’s pocket, and with many people feeling the pinch economically, I think hiring a photographer with a “pro” camera is seen more and more as an unnecessary step or a luxury. With the main consumption of photographs occurring on smart devices, image quality is good enough, and photos taken from a phone can be shared easily and quickly with friends and family. It is also so easy to gain access to any image that you could want. A quick tap of a few keywords on Google Image Search brings up a wealth of different images to your fingertips, without having to pay anything (at least, to see the image), visit the host site, or even have an idea of where the photo came from.
A real eye-opener to the current situation is imagining how a teenager keen on photography would see the camera industry as it is today. Most teens in the western world have probably owned a smartphone for several years, share photos with their friends over WhatsApp or other apps, and use photo-based social media sites such as Snapchat and Instagram. They buy their first shiny new camera and get it home, excitedly tearing through the packaging to get to their new pride and joy inside.
Only to find that in-camera editing is clunky or non-existent, sharing photos means you have to physically connect your camera to a computer (or use an often badly designed Wi-Fi app), the menu system is a confusing 90s-computing-style maze of options, there’s no touch screen (or you can’t fully control some options with it), you can’t charge it with the same cable that you use to charge your other devices, and there are few or no regular updates with new features. A smartphone makes all of this so easy, and these features probably hold equal or even higher value for them than the extra image quality they might gain by using their new camera.
What Can’t the Smartphone Do?
There are still a few problems to solve before the likes of Google, Apple, and Samsung fully kill off the dedicated camera. I am primarily a wildlife photographer, and this is definitely an area the smartphone is finding difficult to invade. This type of photography hinges on having the necessary focal length reach to capture the animal in its environment, which works against the phone’s mantra of fitting in your pocket.
There is a new trend for multiple lenses on a phone camera, and a combination of these, some hefty image stabilisation algorithms, and some intelligent processing may one day be enough to emulate a workable 500mm lens, for example. Sports and action photography is also fairly safe for the time being, provided the sport you are shooting does not allow you to get close. Contrast-detection autofocus and low-light performance are improving quickly, with focal length being maybe the only major technical barrier here (not including the effort required to reach or gain access to shooting locations!).
Applications like travel, family, and day-to-day photography, however, are bread and butter for a smartphone. Entire weddings have been shot on phones, and this could easily translate to a number of other events too. Landscape photography is not safe either, with the phone’s usual wide-angle lens, combined with some of the new built-in HDR and computational techniques, threatening to do away with the traditional camera’s advantage.
The new portrait mode in the very latest flagships has toppled another of the dedicated camera’s last bastions: bokeh. The ability to generate blur behind a subject means that smartphone users can effectively take their own portraits. While it’s not perfect, this is only the first commercial pass at the technology, and, realistically, it’s only going to get more and more like the real thing. As I discovered recently, you can even get a stripped-down version of portrait mode on a number of older phones just by using the ‘Focus’ mode in Instagram’s app. Not as elegant in its execution, perhaps, but this will be enough for some people.
The other thing about a dedicated camera – and, in particular, the various DSLR systems – is the huge range of accessories on the market. This enables you to try out literally any type of photography you can think of. Areas like astrophotography (which normally utilises an ultrawide lens or a modified camera to record infra-red light) and macro have not really been addressed by the smartphone as yet, but dedicated cameras can capture this sort of scene quite well. Lighting is also very well catered for on the DSLR, giving you mountains of different options, while devices like the iPhone are trying to render this lighting in software.
The Automatic Photographer
After some consideration, rather than image quality, I actually think the main reason I still carry a camera is the way it makes me feel. I love the feeling of getting out there with my kit and being totally in the moment, scanning my environment for the next shot – a unique point of view, the perfect moment to be captured. This is probably born of a pre-smartphone era upbringing, but I find it very hard to creatively get into this zone when using a smartphone. I realise that this is not the smartphone’s fault – probably more of a mental failing on my own part. I should, in theory, be able to create a fairly similar image on any camera, if we are to judge the creative vision over the technical quality. But that is how I feel nonetheless.
The other major appeal to using a dedicated camera is how it handles. I massively appreciate the wealth of physical controls at my fingertips, being able to creatively select the settings easily to craft the look I want. Ultimately, a phone feels like a gadget to me, not a camera. It doesn’t feel right in the hands, and I don’t feel like I am in control of what is going on, or that I am able to quickly adjust settings when the environment changes.
That being said, the latest smartphones such as the Google Pixel 2 are able to control most of the technical elements of taking a shot for you. Gone are the days where the photographer’s main job is to calculate an exposure. It can, for example, take multiple frames in quick succession automatically with different exposures and intelligently combine them, analysing the content of the image to render a high dynamic range scene with minimal noise and correct exposure across all areas of the image. All this from pushing a single button.
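As a rough illustration of the multi-frame merge described above – a deliberate simplification, not how any particular phone actually implements it – a naive exposure-fusion pass weights each pixel of each bracketed frame by how well-exposed it is, then blends the frames. The Gaussian “well-exposedness” weighting here is a common textbook choice, not a vendor’s algorithm:

```python
import numpy as np

def fuse_exposures(frames):
    """Naive exposure fusion: weight each pixel by how close it is to
    mid-grey (i.e. how well-exposed it is), then blend the bracketed
    frames. `frames` is a list of float arrays in [0, 1], same shape."""
    stack = np.stack(frames)                       # (n_frames, ...)
    # Well-exposedness weight: Gaussian centred on 0.5 (mid-grey)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalise per pixel
    return (weights * stack).sum(axis=0)           # weighted blend

# Simulated bracket: under-, normally and over-exposed takes of a scene
scene = np.linspace(0.0, 1.0, 5)                   # tiny 1-D "image"
bracket = [np.clip(scene * gain, 0, 1) for gain in (0.5, 1.0, 2.0)]
fused = fuse_exposures(bracket)
```

Real pipelines also align the frames and operate on multi-scale decompositions, but the core idea – let the best-exposed frame “win” at each pixel – is the same.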
Think of what would be needed to achieve this with a camera. At the very least, you would need to go through your usual post-processing workflow, perhaps using Adobe Lightroom. Try to reclaim detail from either the highlights or the shadows, depending on what you exposed for. Maybe you would use filters, maybe you would bracket manually and use desktop software to combine these. Either way, this is a lot more work than a single button.
But there is still key human input needed. We need to choose the location, get to the location, and then carefully line up the frame until we see something pleasing. Only after that can we hit the shutter button, capturing the shot.
Still, what if even some of this could be automated? Imagine if the phone camera could analyse the scene based on a number of criteria and ratios and choose what it thinks is the “best” composition, at least from its current direction of view. The human user would merely need to hold up the phone and point it at some mountains, for the algorithms to take over – cropping down the view into a beautiful mountain scene, complete with leading lines, observation of the rule of thirds and all perfectly exposed.
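To make the idea concrete, here is a toy sketch – entirely hypothetical, with the scoring criterion invented for illustration – of an auto-composition pass. It slides a crop window over a greyscale image and favours the crop that places the brightest point (a crude stand-in for “the subject”) nearest a rule-of-thirds intersection:

```python
import numpy as np

def rule_of_thirds_crop(img, crop_h, crop_w, step=8):
    """Toy auto-composition: slide a crop window over a greyscale image
    and return the (y, x) origin of the crop whose brightest pixel lies
    closest to a rule-of-thirds intersection. Purely illustrative."""
    H, W = img.shape
    thirds = [(crop_h / 3, crop_w / 3), (crop_h / 3, 2 * crop_w / 3),
              (2 * crop_h / 3, crop_w / 3), (2 * crop_h / 3, 2 * crop_w / 3)]
    best, best_score = None, np.inf
    for y in range(0, H - crop_h + 1, step):
        for x in range(0, W - crop_w + 1, step):
            crop = img[y:y + crop_h, x:x + crop_w]
            # Locate the "subject" (brightest pixel) within this crop
            sy, sx = np.unravel_index(np.argmax(crop), crop.shape)
            score = min((sy - ty) ** 2 + (sx - tx) ** 2 for ty, tx in thirds)
            if score < best_score:
                best, best_score = (y, x), score
    return best

# A dark frame with one bright "subject"; find the best 60x60 crop
img = np.zeros((100, 100))
img[33, 33] = 1.0
origin = rule_of_thirds_crop(img, 60, 60)
```

A real system would use saliency detection, horizon and face detection, and learned aesthetic scoring rather than a single bright pixel, but the shape of the problem – enumerate framings, score them, pick the winner – is just this.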
In fact, what is to stop the algorithm tweaking reality for the sake of a more pleasing composition? We already have the tools to correct distortion, but what if algorithms could use these techniques to subtly change the view or perspective of the image to improve your picture? Or identify a moving car detracting from the tranquil beauty of a landscape and decide to remove it using image averaging techniques?
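The image-averaging idea mentioned above is a real, long-standing technique: take a burst of aligned frames and keep the per-pixel median, so anything present in only a minority of frames – a passing car, a pedestrian – is voted out. A minimal sketch with NumPy, using a synthetic scene for the demonstration:

```python
import numpy as np

def remove_transients(frames):
    """Remove moving objects by taking the per-pixel median across a
    burst of aligned frames: a value that appears in fewer than half
    of the frames cannot survive the median vote."""
    return np.median(np.stack(frames), axis=0)

# A static "scene" with a transient bright blob in a different
# position in each simulated frame of the burst
rng = np.random.default_rng(0)
scene = rng.random((4, 4)) * 0.5          # static background, values < 0.5
frames = []
for i in range(5):
    f = scene.copy()
    f[i % 4, i % 4] = 1.0                 # the moving object
    frames.append(f)
clean = remove_transients(frames)         # recovers the static scene
```

The same trick is how long-exposure tripod shots make crowds “disappear” from tourist landmarks; a phone merely needs enough aligned burst frames to do it automatically.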
With current computational techniques in phones, I don’t believe this is too fantastical a concept. It is hard for us to imagine a world in which composition, the fundamental creative aspect of photography, could be automated by a machine – but a parallel with music can be drawn here.
Machine learning software has already been used in a project at the Sony Computer Science Laboratories in Paris to emulate the chorales of the great composer Johann Sebastian Bach. Their system, named DeepBach, was fed a large amount of Bach’s music, which it used to analyse the composer’s musical patterns and then to validate the resulting compositions. Of around 1,600 participants, about half judged a DeepBach composition to be the work of Bach himself, while 75% identified a genuine Bach piece as human-written.
If a machine can do this, why couldn’t we use similar technology to emulate the work of Henri Cartier-Bresson? If not now, then in the future. If smartphones had a feature like this, what would this do to the photography business?
Long term, I think that smartphones are going to be the primary camera for more and more people, and that “manually” taking photos with a DSLR or even a mirrorless camera will become more niche. I am sure the camera industry will continue to progress, but with the majority of camera companies so focused on hardware over software, I believe that emphasis needs to change to prevent cameras from being left behind. A lot could be done to enhance the user experience of a camera. Manufacturers could offer features that don’t automate as much as smartphone technology does, but still give the photographer more creative options than are available today.
Having said that, there will most likely be a group that is always gunning for the best image quality possible, be it for an amateur’s personal pride, or a pro working on a billboard campaign. For a lot of people, photography is a deep passion, and they will probably continue doing what they’re doing for now – myself included. But if a smartphone can automatically get you 80-95% of the way to the photo you’re trying to capture, how many people will deem that extra 20%, 10%, or 5% to be worth the investment in time and equipment?
Thank you to Photography Life reader Peter Cooper for this essay, written as part of the 2018 guest post contest! You can see more of Peter’s photos on his website.