Since the early days of film, panoramic photography has been synonymous with landscape and architectural images, and sometimes with other genres like street and wildlife photography. Some film cameras even shot panoramic photographs by design, combining the width of two horizontal frames of film, typically 120 medium format. Most of these cameras emerged in the latter half of the twentieth century, bringing the panoramic format to the public eye. The panorama had existed long before this time, of course, but its popularity has only grown — and with good reason. Panoramas are fun and dramatic, and their subtleties are just as important in today’s mostly-digital age as they were during the heyday of film. In this article, I will discuss some of the important but less-common benefits of taking panorama images, as well as share a set of my photographs from Iceland in the classic 6×17 aspect ratio. If you are new to panoramas, you might enjoy reading our general panorama tutorial first.
Some of the positives of panoramic photography are obvious. By stitching several frames into a panorama, you can take wider-angle photos than your lenses typically allow. For landscape and architectural photographers especially, an ultra-ultra-wide perspective is always useful. Of course, many photographers simply like the extra resolution that comes from stitching several frames together — it allows for larger, more detailed prints, along with more cropping ability in post-processing. Even the relative uniqueness of a panoramic frame has some appeal, since most photographs are taken without switching from the camera’s native 2×3 or 3×4 format (or perhaps the Instagram-famous square aspect ratio). However, there are more benefits to panoramic photography than what may first meet the eye.
1) Composition
I briefly discussed the compositional benefits of panoramas in my article on balance, but it is worth going into more detail here. For starters, consider the image below.
In the sample above, you can see that this image is poorly composed. Ignore the fact that I can only draw circles in Photoshop, and notice how crowded the frame is along its edges. If this were a real landscape, the logical decision might be to zoom out slightly to give more breathing room on either side of the image. Unfortunately, zooming out is not always an option. In this abstract example, the boring white space continues indefinitely above the frame, and the foreground is ruined because — of course — someone littered. The best way to fix this abstract composition, then, is to photograph the scene as a panorama.
A real-world example is shown below. In this image, the sky and ground were entirely featureless — foggy and silhouetted respectively. Although I could have used a 2×3 aspect ratio and zoomed in a bit more, I would have had to crop out the sides of the mountain to do so, and the image would not be as strong.
Along with the extra room for composition, the panoramic format makes it easy to balance the items in your frame. A panorama is inherently so wide that it is very difficult to tip off-balance, which certainly is not the case with traditional rectangles. Even bright, attention-seeking items in your frame are not nearly as important when they take up a small percentage of the image. In the panorama below, for example, the largest bright area is directly along the right-hand side of the photograph. Normally, this would be a textbook example of imbalance. However, the sheer number of other items in the frame — most of which would not be visible in a 2×3 or 3×4 photograph — renders this bright spot of sky almost inconsequential to the image’s overall balance.
The compositional side of panoramic photography certainly is not the only reason for its popularity, but panoramas are useful for images that cannot be composed in more typical ways. Often, I use the panorama format simply because the spaces above and below my subject would be boring with a 2×3 frame — other times, I do so to make my image easier to balance. Panoramas are not ideal for every composition, but they are crucial tools in more situations than you may think.
2) Large Prints
I briefly mentioned the increased detail that comes from stitching several images together — and such detail certainly is welcome — but it is not the only reason that you can print larger images from the panoramic format. Consider a typical (high-end) photo printer: the width of the print is set at a certain size (since, say, a 24-inch printer simply cannot fit anything larger), but the length of the print is essentially unlimited. The reason is that, past a certain size (typically 13×17), photo paper tends to come in rolls rather than sheets. These rolls can be tremendously long, often more than fifty feet (15 meters).
The implications for panoramic photography are clear. If your roll-paper printer works up to 24 inches (60 cm), the size of your frames is limited only by your photo’s aspect ratio. If your file has a 3×4 ratio, for example, you can print no larger than 24×32 inches. A 2×3 frame gets you to 24×36 inches, whereas a 1×2 image can print up to 24×48 inches in size. If your roll isn’t out of paper yet, a 1×3 panorama can be printed at a massive 24×72 inches. The numbers themselves are not particularly important here, but the size differences are — once you hit your printer’s width limit, a 1×3 panorama can be exhibited twice as large as a 2×3 image.
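To make the arithmetic concrete, here is a minimal Python sketch of the calculation above (the 24-inch roll width and the aspect ratios are simply the examples from this section, not the limits of any particular printer):

```python
# Minimal sketch: on a roll printer, the short side of the print is capped by
# the roll width, and the long side scales with the image's aspect ratio.
# The 24-inch roll and the ratios below are only the examples from this article.

def max_print_size(roll_width_in, aspect_ratio):
    """Return (short side, long side) in inches for a given roll width.

    aspect_ratio is long side / short side, e.g. 1.5 for a 2x3 frame.
    """
    return roll_width_in, roll_width_in * aspect_ratio

for name, ratio in [("3x4", 4 / 3), ("2x3", 3 / 2), ("1x2", 2.0), ("1x3", 3.0)]:
    short_side, long_side = max_print_size(24, ratio)
    print(f"{name}: {short_side:.0f} x {long_side:.0f} inches")
# 3x4: 24 x 32 inches
# 2x3: 24 x 36 inches
# 1x2: 24 x 48 inches
# 1x3: 24 x 72 inches
```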
Even third-party printing companies work the same way. At Bay Photo, for example, the maximum size for a 2×3 ratio print is 30×45 inches (75×115 cm). But, if your image has a 1×4 aspect ratio, you can get a 30×120 inch print — that’s ten feet wide (three meters) on the long side. For most photographers, a print of this size is never going to be a practical option. However, even smaller prints benefit from a panoramic aspect ratio.
Above most sofas and beds, for example, the wall is wider than it is tall. Quite often, the difference is significant. And, for landscape photographers who want to sell their work, home decoration is one of the largest markets. It makes sense to cater to people’s needs, then, and panoramic art is disproportionately popular for bedrooms and living spaces.
For other photographers — those who exhibit at shows and galleries — panoramas help to stand out against other prints on the same wall. Although it is not true in every case, many galleries want a consistent height for their images rather than a consistent length. This means that panoramas are displayed quite large, and thus are more likely to attract attention than typical images.
Not all photographers print their images, of course, so this comparison is not always valid. Still, you never know if one of your images will be in a gallery show at some point, and it is worth the effort to take images with a wide aspect ratio in case the occasion arises. I recently visited a gallery of this sort, where the panoramas were far larger than the other prints. They were also, it seemed, among the most popular photographs on display. And, being stitched images, they were just as detailed as most of the other prints in the gallery — if not more so. If your end goal is a print, keep the panoramic format in mind; panoramas lend themselves to large display more than any other aspect ratio.
3) Extracting Multiple Images
When I visited Iceland over this past summer, I saw some of the most magnificent landscapes of my life. Some locations actually were too beautiful for me to choose a subject quickly — they were almost overwhelming. In a few instances, I took massive, multi-row panoramas for my composition, rather than singling out specific subjects in the field.
This is not my typical method of composition, but it is worth noting that Iceland is so beautiful that I rarely needed to crop my vast panoramas at all — everything in the photo was interesting enough to keep. It also is worth mentioning that the photograph above, a prime example of this technique, is more than 230 megapixels in size — detail that is readily visible in a print. I can print this image about seven feet wide (over two meters) at more than 300 dpi, for example, and at least twice that size without a dramatic loss in quality. The “hidden benefit” is that I can extract ordinary 2×3 photographs from this massive panorama, and the insane number of pixels means that I don’t sacrifice image quality along the way. Consider the image below.
If I were making a set of 2×3 images from Iceland, I would be able to use this photograph without any issue. The perspective it brings is entirely different from the main panorama, and it stands on its own as an interesting scene. Plus, it is still more than 24 megapixels — more than enough to print at any reasonable size.
What if I wanted a 9×16 image for my desktop background? I can focus on another part of this panorama and get an equally interesting shot:
This image is, again, a crop from the original panorama. This time, I chose to focus on the mountains on the left-hand side of the frame, which look beautiful in the morning light. Again, this image works well as a crop, and it is not missing much from the original photograph. Plus, it’s more than 100 megapixels — large enough for a print of any size.
Although I do not recommend this approach for everyday photography, there is something to be said for extracting several individual images from a panorama. At the very least, it forces you to consider the large image as a combination of individual compositions, a different thought process that can help you decide what deserves inclusion in the frame. Plus, with the massive detail inherent in stitching photographs together, there is no significant loss in quality if you do choose to crop several versions of the image in post-processing.
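For a quick way to check how large a given panorama or crop can be printed, here is a rough Python sketch; the megapixel counts are the examples from this section, and the 300 dpi figure is just a common print-quality target rather than a hard rule:

```python
# Rough sketch: how large can a file of a given megapixel count print at a
# chosen resolution? The megapixel counts below are the examples from this
# section; 300 dpi is an assumed target, not a requirement.
import math

def print_size_inches(megapixels, aspect_ratio, dpi=300):
    """Return (long side, short side) in inches for a given pixel count."""
    pixels = megapixels * 1_000_000
    short_px = math.sqrt(pixels / aspect_ratio)
    long_px = short_px * aspect_ratio
    return long_px / dpi, short_px / dpi

print(print_size_inches(230, 3.0))  # ~(87.6, 29.2): roughly seven feet wide
print(print_size_inches(24, 1.5))   # ~(20.0, 13.3): a comfortable 2x3 print
```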
These three extra benefits of panoramas, then — composing with more freedom, printing with a larger size limit, and extracting several images per frame — are not exactly hidden, but they certainly aren’t as obvious as the other benefits of panoramic photography. No matter your preferred genre, panoramas can be a versatile tool in your arsenal — perhaps in more cases than may be clear at first.
I liked the article and the images Spencer!
Regards,
Sharif.
Thank you, Sharif! You know as well as I do, Iceland is an amazing place for landscape photography :)
Spencer
Wishing to avoid the sticky controversy of a few issues here, I would point out that one good reason for doing panoramas is simple economy. If you have a standard DX camera and want a very wide angle, it is much cheaper to stitch than it is to buy a good super wide lens.
As for the labor of stitching, I have found that the free program offered by Microsoft does a very good job (though only with JPG) if the images it starts with are good. I did a number of wide panoramas last year in Antarctica, using the standard 18-55 kit lens, which came out (in my humble opinion at least) quite well with little fuss.
I regularly use an old 35mm shift lens on a DX camera and achieve nice, if not radical, panoramas by taking three images: left, center, and right. They come out well stitched, and almost rectangular without cropping. The old lens requires flipping it in order to shift on both sides; newer ones do not. It’s a nice way to get a wider shot than existing lenses allow.
Thank you for your comments, Matthew! Stitching is a great way to get a wider perspective on a scene. For a while, my widest lens was a 24mm equivalent — needless to say, I used this technique quite a bit!
Um, maybe it’s just me but I didn’t get any of this. I was expecting some tips on how to create great panoramas and what to keep in mind when shooting the pieces. I don’t have a panorama mode on my camera, so I’ll have to position and stitch the shots myself. Anyhoo, it was still a good read, and the sample images are gorgeous.
Ah, yes, this article was more of the “why” than the “how” for taking panoramas. You might like to check out the main panorama article that Nasim wrote — part of it is geared towards beginners, but there also are useful tips in there for advanced photographers:
photographylife.com/panor…aphy-howto
My main piece of advice is to do all of your post-processing edits after you merge the panorama — otherwise, it will be far more difficult to undo your actions. This is especially true with sharpening, which loses its effect unless you sharpen after merging.
I’m glad you like the photographs!
Spencer
Thanks for the link – exactly what I was looking for. So many great articles on this site, thank you all for the great work!
Spencer,
I enjoyed your article very much. I love shooting panoramas. You have created some stellar images, you’re an awesome photographer.
Thank you so much, Jason! I’m glad you like the article.
Dear Nasim
Due to personal engagements, I somehow missed the Basics course rebate. Is there any possibility of offering it again, for people like me, as one more chance to get it?
best regards
Mayank Manu
Good timing — we currently have the Basics course on sale for $99! Here’s the link to the sale (ends December 1):
photographylife.com/go/bh
If you have any questions about the sale pricing, that article is the best place to leave a comment.
Spencer
I like these panoramas very much. When you look at these pics, it is like you are there, standing on the rocks and looking at the great landscape. The 4th pic is amazing. It shows what this place is all about: isolation from people, dramatic weather, and an eerie landscape.
Thank you for the kind words! At times, Iceland was dark and gray, but it had some of the most amazing sunsets I had ever seen. I’ve never been anywhere else like it.
Spencer
I just visited the Los Angeles Auto Show in their convention center. The show really had nothing new to offer, but the entry view of the brightly lit show was spectacular; that was the best picture opportunity at the show. Thanksgiving Day, Christmas Day, or any other special occasion is an opportunity for a panoramic photo of everyone at the party.
That sounds like a wonderful place for a panorama – thanks for sharing!
I’m not sure I understand your point, Sherman. There has been a misconception surrounding diffraction ever since the introduction of 12+ MP sensors. It’s almost as if you should not even consider using anything over f/8 because the picture will be so ugly you won’t even be able to use it. In reality, you may decide to sacrifice a little sharpness to get more things in focus. It depends what you want to achieve. Regardless of how much you want to debate diffraction, in many situations f/8 does not bring every element of the scene into focus, even at a short focal length.
The way I saw this article was as the title indicates: there is more to a panoramic picture than just a “stretched 4×6”. It can be a tool to get out of a challenging situation with the equipment you have, at the moment you are taking the picture. It can be a tool to show something in a more artistic way. I’m not sure what is wrong with a 230MB picture… I’m not sure we would have known Spencer took the picture using a 105mm if he didn’t provide the information.
My 2 cents
Thank you, Simon, and I agree! The diffraction monster is not nearly as bad as some people think — it exists, of course (and I sometimes focus stack my images if the scene allows for it), but an out-of-focus foreground is far worse than a bit of diffraction blur. I learned that the hard way when the bottom corners of my early landscapes (at 24mm equivalent) were blurry at f/8 — I thought it was due to my lens quality, but they just were out of focus!
Spencer
Simon, the effect of diffraction depends on the f-stop and the size of the sensor. Sherman failed to take into account that stitching together multiple images is equivalent to using a much larger sensor! Here’s a contrived example for the purposes of illustration.
Suppose we have two cameras, “A” and “F”. Hypothetical camera A has a sensor that is 16 times the area [4 times linear] of our FX camera F. Camera A has a 100 mm lens and camera F has a 25 mm lens. Both cameras will have the same angle of view.
If camera A is set to f/16 and camera F is set to f/4 then both images will have identical levels of diffraction and an identical depth of field. Why? Because the diameters of the lens entrance pupils are the same: 100/16 = 25/4.
The sensor in A will be 144 mm x 96 mm; approx. 5.7 x 3.8 inches — nearly the size of a 6×4 inch print. Obviously, camera A set to f/16 will produce an image resolution of hundreds of megapixels: this is the primary purpose of medium and large format cameras.
Camera A with its 100 mm lens set to f/16 is equivalent to the stitching together of 16 images from camera F when F is likewise using a 100 mm lens set to f/16.
This is also equivalent to using a camera with the same sensor size as F, using a 25 mm lens set to *f/4*, but fitted with a sensor that has 16 times the number of pixels of F. Clearly, Spencer’s choice of f/16 led to an effective f-stop in the region of f/4 for his large format composite.
From what you wrote, cameras A and F have pixels of the same size, but A’s sensor has higher resolution and accordingly more pixels. In that case, if you couple either lens (i.e. the 100mm set to f/16 or the 25mm set to f/4) to either camera, each of the four camera-lens combinations will have identical diffraction. Moreover, if you couple both lenses, set this way, to the same camera and shoot from the same place, then both the angle of view and the DoF would increase with the 25mm at f/4 compared to the 100mm at f/16, while the perspective would stay the same. So, for better DoF and less stitching, why would you not choose the 25mm at f/4 when using the smaller F sensor? Of course, you could have even less stitching and even more DoF, still shooting from the same place and without changing perspective, by using the bigger A sensor with the 25mm at f/4, but that is a whole different story.
Lukasz, I previously failed to explain that the level of diffraction in an image depends on the f-stop, not the focal length. E.g. at f/16 the Rayleigh 9% resolution limit is 93 line pairs per millimetre (lpm); at f/4 it is 373 lpm, which is far beyond the resolving power of colour film and digital sensors. Therefore, FX format is wholly incapable of capturing the full resolution of our scene, at our wanted depth of field, using a 25 mm lens at f/4.
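For anyone who would like to verify those figures, here is a quick back-of-the-envelope check in Python; the 550 nm (green light) wavelength is an assumption on my part:

```python
# Back-of-the-envelope check of the figures above: the Rayleigh resolution
# limit of a diffraction-limited lens is roughly 1 / (1.22 * wavelength * N)
# in line pairs per millimetre. A 550 nm (green) wavelength is assumed here.

def rayleigh_limit_lpmm(f_number, wavelength_mm=0.00055):
    return 1.0 / (1.22 * wavelength_mm * f_number)

print(round(rayleigh_limit_lpmm(16)))  # ~93 lp/mm
print(round(rayleigh_limit_lpmm(4)))   # ~373 lp/mm
```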
Camera A (having 16 times the area of FX format) will capture our scene, at the same angle of view and depth of field, using a 100 mm lens at f/16. Colour film and our hypothetical digital sensor are both capable of fully resolving 93 lpm. Although the diffraction is now linearly 4 times greater, our sensor is linearly 4 times larger, therefore the percentage of diffraction in the image is the same as using f/4 on FX format. We can emulate camera A by using a 100 mm lens set to f/16 on our FX camera and stitching together 16 shots. Of course, if we want to produce a wide aspect ratio panorama then fewer shots will be required.
Suppose the FX camera has 12 or 16 megapixels (e.g. the Nikon D3/D4 series). This is a reasonable quantity for the level of diffraction at f/16. The 16 shots will generate an image having 192 or 256 megapixels, which is sufficient to produce a huge print.
The only other option is to use a medium or large format film camera and have the film professionally processed and scanned. This will also produce a digital image having hundreds of megapixels.
Be careful there, the level of diffraction depends on f stop only when you talk about one lens. In fact it depends on the physical diameter of the aperture not the f stop per se, so you can have two lenses of different f stop producing the same diffraction. Now when you talk camera-lens combo, diffraction depends not only on the physical diameter of the aperture but also the pixel size – not the sensor size! I agree focal length has nothing to do with diffraction, in fact with anything in photography, well maybe where you should stand:)
What I wrote was based on image space in which the radius of the first null in the Airy disk = 1.22 x wavelength x f-number. It applies to *all* lenses that have a circular (or near circular) aperture.
You must be talking about object space in which the angular resolution depends on the entrance pupil diameter, not the f-number. However, a 100 mm lens at f/16 and a 25 mm lens at f/4 have the same entrance pupil diameters, therefore their object space angular resolution will be identical (assuming perfect optics).
I was not mixing resolution with diffraction but talking purely about the latter. But yes, if the other aberrations are the same or nonexistent, then both lenses set to the appropriate f-stops (yielding the same entrance pupil diameter) will produce the same resolution.
You wrote “Be careful there, the level of diffraction depends on f stop only when you talk about one lens. In fact it depends on the physical diameter of the aperture not the f stop per se, so you can have two lenses of different f stop producing the same diffraction.”
That is incorrect. Image space diffraction is caused by the Airy disk, the radius of its first null (the indicator of its size and its effect) is determined directly by the f-number aka f-stop. E.g. using f/32 on a FX camera will produce the same high level of diffraction regardless of the focal length. If a particular lens produces noticeably more diffraction, at f-stops near to or at its fully stopped down setting, than other lenses set at the same f-stop then the lens likely has misaligned or sticking aperture blades.
The image space resolution of a diffraction limited system is determined entirely by the level of diffraction hence the f-stop itself, not by the diameter of the entrance pupil per se. Obviously, using a sensor that has a spatial sampling frequency less than twice the resolving power of the lens will severely limit the resolution of the system. The effect looks very similar to diffraction, which is why the two effects are frequently conflated on our modern sources of misinformation: the Interwebs.
You are right and I was wrong in that point, I see it now.
Lukasz, Thank you very much for your reply and for your previous comments. I’m very poor at wording my science-based comments in a friendly sounding manner — you have helped me to improve my communication style [huh, my dire lack of style!].
Kindest regards and best wishes,
Pete
Pete, I wonder if you can clarify my views on another, related subject: diffraction-limited optical microscopy. I know the resolution limit in light microscopes is around half of the wavelength used to illuminate the object, e.g. around 500nm/2 = 250nm. In an optical system like this, we cannot resolve two objects separated by a distance of less than 250nm, and we cannot get a clear image of a single object whose size is comparable to or smaller than the wavelength of the light used – the image of such a small object (effectively a point source) will itself be an Airy disk. In such a microscope we see the detailed shape of objects bigger than 250nm×250nm, for example an object of 5um×5um, but we only see an Airy disk from an object of 100nm×100nm… I understand that the diffraction and interference in the objective are the same no matter the size of the object we observe, because we do not change its parameters, so why is that? Is it because of additional diffraction happening around the object under observation, which is different for objects with dimensions close to the wavelength of the light used than for objects that are much bigger? How does this work?
Lukasz, There isn’t a short answer to your very interesting question. I spent a few hours attempting to write a reply, but when it reached 8 paragraphs I decided to start from scratch. There are some interesting articles on this subject published by Zeiss and Nikon (and others) that would explain it far better than I can achieve via these comments. The Zeiss document “Diffraction and Interference Effects in Airy Disk Formation” might be worth a look, but it seems to require Adobe Flash, which I refuse to install on my devices. I found the following article fascinating mainly because it is so well illustrated and explained, rather than it perhaps answering your question:
zeiss-campus.magnet.fsu.edu/articles/basics/psf.html
If we take the Fourier transform of a circular aperture (the aperture is simply a spatial filtering function), then square the moduli of the results, we obtain the power spectrum, which is the familiar Airy pattern that results from a point source when using diffraction-limited optics.
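If you would like to see that numerically, here is a minimal numpy sketch; the grid and aperture sizes are arbitrary, chosen purely for illustration:

```python
# Minimal numerical sketch of the statement above: take the Fourier transform
# of a circular aperture, square the moduli, and the result is the familiar
# Airy pattern. Grid and aperture sizes are arbitrary, chosen for illustration.
import numpy as np

n = 512
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
aperture = ((x**2 + y**2) <= 20**2).astype(float)  # circular aperture, radius 20 samples

field = np.fft.fftshift(np.fft.fft2(aperture))     # complex amplitude at the focal plane
airy_pattern = np.abs(field)**2                    # power spectrum: the Airy pattern

print(airy_pattern.shape, airy_pattern.max())
```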
Thanks, will look into that.
I’ve just found an article on Nikon’s microscopy website that may be of interest to you, and perhaps to some other readers because it applies to all optical systems, not just to microscopes. The article shows the relationship between modulation transfer function (MTF), resolution limits, and the point spread function (PSF). It also demonstrates the contrast inversion that occurs in the defocussed / out-of-focus areas, which is a bane to photographers because it so often results in harsh/busy patches of bokeh (caused by false resolution), and in patches of false colour that are easily mistaken for chromatic aberration or aliasing-induced colour moiré patterns.
www.microscopyu.com/micro…r-function
Nikon’s microscopy website contains a plethora of illustrated articles that are both fascinating and educational.
A landscape shot on a 105mm macro???? 230mpx lol why, tell me why? Mpx are not the only thing that matters; shooting at f/8 instead of f/16 improves your IQ, you know… there is a thing called diffraction.
Sorry, but this article is pointless. You could achieve better results (in terms of time consumed and SSD/HDD space) *****using a single lens*****: the 24-120 f/4 VR (you can shoot rapidly if you don’t have enough time… and if you insist on doing a 230mpx panorama, you can still do it shooting at 105mm).
Anyway, good luck stitching your pictures and spending endless hours doing that! And hey! Western Digital will be happy with you! And don’t forget to carry your tripod EVERYWHERE, because you will ALWAYS need it.
Hi Sherman,
In the photo you are talking about, I shot two rows of the panorama, then merged one on top of the next. I actually did this (instead of a single row at a wider angle) because of my lens kit — a 24mm, a 50mm, and a 105mm prime. In this case, yes, a 24-120 would have helped!
However, I don’t think that your arguments on f/8 vs f/16 show the full story. With a 105mm lens, it is quite difficult to get everything in focus, and this landscape was no exception. The hill at the bottom-left is much closer to me than the rest of the landscape, and I certainly didn’t want to bring focus stacking into the equation. Although the rest of the image would have been marginally sharper at f/8 (though nothing that would be noticeable in a 6-foot wide print), this corner was obviously blurry at f/8. It isn’t possible to know that unless you were there, though!
Such an image does take up more space than a single frame, of course, but storage is cheap. I’ve already offloaded all of the original files onto my backup drives, so only this final panorama is taking up space on my internal hard drive. In all, this file is smaller than seven individual photos with my D800e, so the space isn’t a big deal.
In terms of the tripod, it is clear that our methods are very different. I used one with essentially every photo I took in Iceland, and the flexibility was well worth a few extra pounds on my back. This photo was shot at 1/10 second, for example, and the other photos were shot at even longer shutter speeds. Unless I had raised my ISO or opened my aperture (throwing the foreground out of focus), I couldn’t even have taken these photos. I find it so much easier to compose my images from a tripod, too, although I know that’s just me :)
Best,
Spencer
Sherman,
I’m afraid you overlook a crucial point: the perspective and the apparent distances between foreground and background elements which, for a given angle of view, appear to be either greater or shorter, depending on the focal length of your lens. That alone results in utterly different photographs of the same scene although the same angle of view is shown.
Think about it: your 24mm will allow an angle of view of about 84º, but it will also give an impression of increased distance between foreground and background, with reduced sizes of the elements in the distance and magnified sizes of the elements in the foreground.
With the angle of view of about 23º given by a 105mm lens, counting the overlap it will take four or five pictures to cover the 84º of the 24mm lens. But once stitched together, the landscape will look more compact, with the background and foreground elements closer to each other. The stitched picture will look very different from the one obtained as a single frame at a focal length of 24mm.
Now both results will kind of betray the human eye. The first one by extending the landscape from front to back, the second by condensing the landscape. Both effects may be desirable, and I’m not saying that one is better than the other, just that they do not compare and they do not exclude each other. It all depends on the kind of landscape and the effect you’re after.
It follows that there’s another big advantage to panoramas: if you stitch together frames shot at an angle of view that matches the natural human eye vision—roughly that of a 50mm lens—, you can get highly natural-looking panoramas (the fourth photograph of the article is an excellent example). And there’s no way you can get that natural vision by zooming out to 24mm or less in order to save some disk space. (By the way, with the shrinking price of the megabyte of disk or even SSD, this argument is becoming void… And you do not spend “endless hours” on the job, unless you are underskilled.)
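As a side note, the angle-of-view figures mentioned above are easy to check with the standard formula for a rectilinear lens; this short Python snippet assumes the roughly 43.3 mm diagonal of a full-frame sensor:

```python
# Quick check of the angle-of-view figures above, using the standard formula
# for a rectilinear lens. The ~43.3 mm value is the diagonal of a full-frame
# sensor; use 36 mm instead for the horizontal angle of view.
import math

def angle_of_view_deg(focal_length_mm, sensor_dimension_mm=43.3):
    return math.degrees(2 * math.atan(sensor_dimension_mm / (2 * focal_length_mm)))

print(round(angle_of_view_deg(24)))   # ~84 degrees
print(round(angle_of_view_deg(105)))  # ~23 degrees
```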
Hoping to have helped you understand why Spencer’s article is far from “pointless”.
JD
JD, The perspective of an image is determined entirely by the location of the lens entrance pupil in 3D object space. This location determines the relative sizes of objects at different distances within the scene. E.g. a scene captured with a 24 mm lens will be identical to a stitched composite that has the same angle of view, provided that: both lenses are rectilinear; and that the lens used for the composite has been panned and tilted around the central point of its entrance pupil (rather than the camera tripod mount or some other position).
The above does not apply to stitched panoramas that exceed a 180 degree angle of view because a fisheye (or other type of non-rectilinear lens) would be required to capture the whole scene — the maximum angle of view of a rectilinear lens is less than 180 degrees, however short its focal length.
You wrote “… But once stitched together, the landscape will look more compact, with the background and foreground elements closer to each other.” If you think about it, the only way to make your keyboard look closer to your monitor is by positioning yourself, or a camera, further away from them.
The reason that wide-angle lenses appear to distort perspective is because their angle of view is much wider than that of human vision. If you were to view a wide-angle image from a distance such that it presented to your eyes the same angle as was captured by the lens, the apparent distortion would disappear. The same applies to the sometimes apparent foreshortening of perspective of long telephoto lenses: again, it is caused only by a large mismatch between the captured angle and the viewing angle.
I highly recommend reading Nasim’s article: Does Focal Length Distort Subjects?
photographylife.com/does-…t-subjects
I totally agree with you that Spencer’s article is far from “pointless”.
Sherman, I don’t think the article was a waste of space, and I have used my 105 DC to stitch pano images many times. There are some benefits that telephoto lenses offer that you can’t get with wide-angle lenses. Show me a 24mm DC lens? Also, I shot a beautiful (well, at least to me) pano with my 300mm f/2.8 VR handheld. You don’t need a tripod to shoot stitched images; it may help, but it is not necessary. I also urge you to look into the Brenizer method that was written about previously. One can use telephoto primes and stitch to simulate a wide-angle lens that doesn’t exist, say a 50mm f/0.5.
Spencer, I know it’s not possible to go back and re-shoot it, but I’d be interested to know whether the 105 DC could have gotten the in-focus foreground you wanted with a smaller aperture, say f/5.6. I’ve experimented with that lens quite a bit, and having read about the challenges you were shooting under, I’m going to have to try it.
I started doing panoramas way back when I was using a 6MP Nikon D70. I could do huge landscapes – up to 60 inches wide – that were incredibly sharp. Now, even though I don’t show anymore, I do panoramas whenever I get to a place that affords the opportunity to do so. I call them “composites” since many of the shots are multi-row images, giving them a more classic aspect ratio with lots of resolution. It just depends on the scene.
Great images of Iceland by the way. Makes me want to put it on the list.
Thank you for your comments, Donald! I fully agree — with older or low-resolution cameras, a panorama is the only real way to improve the detail in your prints. I like your thought about using a panorama to take a classic-ratio image, too. I do that from time to time, although never with multiple rows — I definitely need to check that out.
Spencer