Two questions we often see from photographers are: “I have color management properly set up on my computer; why is the color different between an out-of-camera JPEG and, say, Lightroom (substitute your favorite 3rd-party converter)?” and “Why is a particular color in a photo different from the actual color?”. In this article, we will go over why color is reproduced differently on camera LCD screens and monitors, and the steps you can take to achieve more accurate colors.
Though these two questions seem different, they have much in common: to answer either, we need to consider the stages it takes to get from captured data to color, as well as the limitations of color models and output media (mostly monitors and print) when it comes to color reproduction.
The goals of this article are twofold. The first is to demonstrate that out-of-camera JPEGs, including in-camera previews, can’t be used blindly, without checking, to evaluate color (as we already know, the in-camera histogram is misleading, too). The second is to show that the camera manufacturer-recommended converter is not necessarily tuned to match the out-of-camera JPEG.
Let’s start with an example.
Recently, I got an email from a photographer asking essentially the same question we quoted in the beginning: how is it that the color in an out-of-camera JPEG is nothing like the color of the original subject of the shot? The photographer was taking shots of blue herons and hummingbirds, relying on the previews to evaluate the shots, and was rather confused: the camera was displaying strongly distorted blues in the sky and on the birds. One could say that the camera’s LCD and EVF are “calibrated” to an unknown specification, so this “calibration” and the viewing conditions might be what causes the color issue. However, the color on a computer monitor also looked wrong. Naturally, the photographer decided to dig deeper and take a picture of something blue to check the accuracy of the color. The test subject was a piece of stained glass, and …(drumroll, please)… the out-of-camera JPEGs looked off not just on the camera display, but (as expected from examining the shots of birds and sky) on a computer monitor as well.
Here is the out-of-camera JPEG (the camera was set to sRGB, and the photographer told me that setting it to Adobe RGB didn’t really make much of a difference). The region of interest is the pane of glass in the middle, the one that looks cyan-ish.
The photographer said it was painted in a much deeper blue. Obviously, I asked for details and got a raw file. I looked into the metadata (EXIF/Makernotes) and ruled out any issues with the camera settings – they were all standard. Opening the raw file in Adobe Camera Raw with “as shot” white balance, I got a much more reasonable blue, and the photographer confirmed that it looked much closer to the real thing, maybe lacking a tad of depth in the blue, as if it were from a slightly different palette. So, this is not a problem with white balance. Moreover, the default conversion in ACR proved that the color could be rendered better than in the out-of-camera JPEG, even by a 3rd-party converter.
The shot was taken with a SONY a6500, so my natural impulse was to recommend that the photographer use the “SONY-recommended” converter, which happens to be Capture One (Phase One).
One thing to keep in mind as you’re reading this: this is in no way an attack on any specific product. You can check for yourself whether this effect occurs with your camera and preferred RAW converter. The reason we’re using this as an example is simply because it happened to fall into our lap. That said, we certainly wouldn’t mind if SONY and Phase One fixed this issue.
Back to our image. Here comes the unpleasant part.
The first thing I see upon opening the raw file in Capture One is the blue pane of glass covered with an “overexposure” overlay. Easily enough fixed: change the curve in Capture One from “normal film simulation” to linear, and the overexposure indication is gone. Next, I move the exposure slider to +0.66 EV: the overexposure indication doesn’t kick in (it takes +0.75 EV for faint overexposure overlay spots to appear). Here is the result; it’s distinctly different in color from what we have in the embedded JPEG, in spite of the fact that white balance was left at “As Shot”, but it’s still wrong, with a color shift into purple:
Let’s have a closer look at the region of interest:
So, let’s reiterate the two points we made at the beginning:
- First, not only is the JPEG histogram misleading, the JPEG color preview may not be very useful either, be it in-camera or on a computer monitor – check how it is with your camera;
- Second, for some reason, Capture One renders things differently from SONY in spite of being “the recommended” and free option for SONY users. Not just differently, in fact, but also incorrectly for certain hues.
First Digression
When we say “render things differently”, we mean not just the obvious things, like the different color profiles (color transforms; ICC, DCP, or some other format) and/or different contrast (tone) curves used in different converters, nor (minor) differences in sharpening and/or noise reduction: we also mean how white balance is applied.
Somehow it is often forgotten that the methods of white balance calculation and application are also different between various converters, leading to differences in color renditions.
Discussions of white balance often operate in terms of color temperature and tint values. However, white balance is not measured as color temperature and tint – it is only reported this way. This report is an approximation only, and there are multiple ways to derive such a report from white balance measurement data. If you compare color temperature and tint readings for the same raw file in different converters, you will most probably find that the readings are different. That’s because the methods of calculating color temperature (actually, correlated color temperature, CCT) and tint vary, and there is no exact or standard way to calculate those parameters from the primary data recorded in the camera (essentially, this primary data is the ratios of red to green and blue to green for a neutral area in the scene, or for the whole scene, averaged; see the “Gray World” method and its variations).
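To make the distinction concrete, here is a minimal Python sketch of what the primary white balance data actually is; the raw values for the neutral patch are made up for illustration:

```python
# A sketch of what white balance data actually is, with made-up raw values.
# The ratios are the primary data; the Kelvin/tint numbers a converter shows
# are derived from them, and the derivation is not standardized.

neutral_raw = {"R": 812.0, "G": 1650.0, "B": 1108.0}  # averaged raw values of a gray patch

# Primary data: per-channel ratios relative to green
r_over_g = neutral_raw["R"] / neutral_raw["G"]
b_over_g = neutral_raw["B"] / neutral_raw["G"]

# What is actually applied to the image: multipliers that equalize the channels
wb_multipliers = {"R": 1.0 / r_over_g, "G": 1.0, "B": 1.0 / b_over_g}

print(f"R/G = {r_over_g:.3f}, B/G = {b_over_g:.3f}")
print({k: round(v, 3) for k, v in wb_multipliers.items()})
# Turning these ratios into a (CCT, tint) report requires a camera-specific
# model of how the ratios vary with the illuminant; since every converter
# has its own model, each reports different Kelvin numbers for the same file.
```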
Consider the Canon EOS 5D Mark II. For different samples of this camera, the preset for Flash color temperature in the EXIF data of unmodified raw files varies from 6089 K to 6129 K. Across a range of Canon camera models, the color temperature for the Flash preset varies from 6089 K on the Canon EOS 5D Mark II to 7030 K on the Canon EOS 60D; for the Canon EOS M3 it reaches 8652 K. Meanwhile, Adobe converters use 5500 K as the Flash preset for any camera. If you dig deeper, the variations in tint are also rather impressive.
Quite often the color temperature and tint reports differ between the converters when you establish white balance in the converters using “click on gray” method.
Some converters calculate white balance back from the color temperature and tint, calculated using various methods; some (like Adobe) re-calculate color transform matrices based on color temperature and tint; while some apply the white balance coefficients (those ratios we mentioned above) directly. Obviously, the neutrals will look nearly the same, but the overall color changes depending on the method in use and the color transform matrices a converter has for the camera.
Of course, it is rather strange that Capture One is indicating overexposure in the default mode. If one were to open the raw file in RawDigger or FastRawViewer, it becomes clear that the raw is not even close to overexposure; it’s easily 1 1/3 EV below the saturation point [the maximum value in the shot is 6316 (red rectangle on the left histogram), while the camera can go as high as 16116: log2(16116/6316) = 1.35]. If the exposure compensation is raised to 1.5 EV, only 194 pixels in the green channels are clipped (red rectangle in the “OE+Corr” column of the Exposure Stat panel), as the statistics in FastRawViewer demonstrate.
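The headroom arithmetic from the paragraph above, as a tiny Python check:

```python
import math

# Headroom: highest raw value in the shot vs. the raw level where this
# camera actually clips.
max_in_shot = 6316
saturation = 16116

print(f"{math.log2(saturation / max_in_shot):.2f} EV")  # 1.35 EV below clipping
```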
So, Capture One is indicating “overexposure” for no good reason, effectively cutting more than 1 stop of dynamic range from the highlights in the default film simulation mode, and about 1/3 EV in linear mode.
Now completely hooked, I downloaded a scene from Imaging Resource that was shot with the same SONY a6500 model (though, of course, a different camera sample). Let’s look at the embedded JPEG vs. the default Capture One rendition, both sRGB, embedded first:
Now, to Capture One’s “all defaults” rendition:
I’m left completely mind-boggled: comparing side by side, it is easy to see not only the differences in the yellows, reds, deep blues, and purples, but also the different level of color “flatness”; for example, compare the color contrast and the amount of color detail in the threads:
For Figure 10, we are not suggesting you pick the best or the sharpest rendition; we are just pointing out how different the renditions are. Look, for example, at the crayon box. The SONY JPEG renders it a very cold yellow, nearly greenish, like Pantone 611 C, while Capture One rendered it a warm yellow, slightly reddish, like Pantone 117 C. The red stripes on the “Fiddler’s” label of the bottle: JPEG – close to Pantone 180 C; Capture One rendition – close to Pantone 7418 C. The deep purple hank (eighth from the right): JPEG – Pantone 161 C; Capture One rendition – Pantone 269 C. Another portion of the crayon box, the strip that is supposed to be green: in the JPEG it is a muted green, while Capture One rendered it into a purer and more saturated variety.
Finally, I took a scene from DPReview, put it through PatchTool, and came up with the following color differences report for the embedded JPEG vs. Capture One’s version (I used dE94 metric because I think there’s too much of a difference for dE00 to be applicable):
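For readers who want to build a similar report themselves, here is a minimal sketch of the CIE dE94 difference formula (graphic-arts weights), assuming you already have CIELAB values for corresponding patches; the sample Lab numbers below are made up:

```python
import math

def delta_e94(lab_ref, lab_sample):
    """CIE dE94 with graphic-arts weights (kL = kC = kH = 1).

    The formula is asymmetric: the first argument is the reference."""
    L1, a1, b1 = lab_ref
    L2, a2, b2 = lab_sample
    dL = L1 - L2
    C1, C2 = math.hypot(a1, b1), math.hypot(a2, b2)
    dC = C1 - C2
    dH2 = max((a1 - a2) ** 2 + (b1 - b2) ** 2 - dC ** 2, 0.0)  # clamp rounding noise
    sC = 1.0 + 0.045 * C1
    sH = 1.0 + 0.015 * C1
    return math.sqrt(dL ** 2 + (dC / sC) ** 2 + dH2 / sH ** 2)

# e.g. an embedded-JPEG patch vs. the Capture One rendition of the same patch:
print(delta_e94((32.0, 8.0, -42.0), (30.5, 16.0, -38.0)))
```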
The question we’re left with: how is it that the color is so different and so wrong?
How does it happen that different converters render the same color differently and incorrectly?
The real problem is a combination of:
- The necessity to substitute out-of-gamut colors
- The perceptual non-uniformity of the CIE color model, non-uniformity that is especially pronounced when it comes to blue-purple color regions
- Sensor metamerism being different from human observer metamerism (more on this later).
Because of that non-uniformity, perceived hue does not stay constant along lines of equal hue angle, so substituting an out-of-gamut color with a less saturated color of the same hue number (we need to decrease saturation in order to fit into the gamut) results in a hue discontinuity. “Blue-turns-purple” and “purple-turns-blue” are quite common problems, caused by exactly these perceptual inaccuracies of the color model. Another hue twist causes a “red-turns-orange” effect (we mentioned an example at the beginning of this article). With certain colors (often called “memory colors”), the problem really catches the eye. It also causes a perceived change in color with any change in brightness.
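A toy Python illustration of this mechanism: take a saturated sRGB blue, pull it toward white (a crude way of bringing a too-saturated color into range), and watch the CIELAB hue angle move. The matrix and constants are the standard sRGB/D65 ones; keep in mind that CIELAB hue angle is itself only an approximation of perceived hue:

```python
import math

# sRGB (linear) -> XYZ (D65) -> CIELAB, using the standard matrix and constants.
M = [[0.4124, 0.3576, 0.1805],
     [0.2126, 0.7152, 0.0722],
     [0.0193, 0.1192, 0.9505]]
WHITE = (0.95047, 1.0, 1.08883)  # D65 reference white

def lab_hue(rgb_linear):
    X, Y, Z = (sum(M[i][j] * rgb_linear[j] for j in range(3)) for i in range(3))
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(X / WHITE[0]), f(Y / WHITE[1]), f(Z / WHITE[2])
    a, b = 500 * (fx - fy), 200 * (fy - fz)
    return math.degrees(math.atan2(b, a)) % 360

print(lab_hue((0.0, 0.0, 1.0)))    # saturated sRGB blue, hue angle ~306 deg
print(lab_hue((0.35, 0.35, 1.0)))  # the same blue pulled toward white, ~295 deg
# Plainly desaturating the color did not preserve its hue angle: this is the
# "blue turns purple" mechanism in miniature.
```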
One of the things that we have come to expect from color management is consistent and stable color. That is, if the color management is properly set up, we expect the color to be the same on different monitors and printers, less the constraints of the color gamut of those output devices.
Color management maintains color in a consistent manner by mapping the color numbers in one color space to the color numbers in a different color space, taking corrective measures when the source color space and the destination color space have different gamuts (those measures depend on the colorimetric intent stated for the conversion, and on the particular implementation of that intent). In short, color management is guided by a strict and rather unambiguous set of rules.
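To show just how deterministic those rules are, here is a sketch of an sRGB-to-Adobe-RGB conversion through XYZ, with a crude clip standing in for rendering intent handling; the matrices are the standard D65 ones (rounded):

```python
import numpy as np

# Color management as a fixed mapping: sRGB -> XYZ -> Adobe RGB (all linear,
# D65). Any color-managed application doing a relative colorimetric conversion
# between these spaces computes essentially the same numbers.
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
ADOBE_TO_XYZ = np.array([[0.5767, 0.1856, 0.1882],
                         [0.2973, 0.6274, 0.0753],
                         [0.0270, 0.0707, 0.9911]])

def srgb_to_adobe(rgb_linear):
    xyz = SRGB_TO_XYZ @ np.asarray(rgb_linear)
    adobe = np.linalg.inv(ADOBE_TO_XYZ) @ xyz
    return np.clip(adobe, 0.0, 1.0)  # crude stand-in for gamut mapping

print(srgb_to_adobe([0.2, 0.5, 0.8]))
```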
Why is it that we do not enjoy the same color consistency and stability when converting RAW, even when the utmost care is used to apply the correct white balance?
Another Digression
If you have been reading carefully, you may be asking why we are not limiting this to cross-converter consistency. The answer is: a color model used in a converter may be very good in general, but not very suitable for some particular shooting conditions or particular colors. Worse, some types of light – like the mercury vapor lamps used for street lighting, certain fluorescent bulbs, and some “white” LEDs – have such strong spectral deficiencies that color consistency is out of the question. Oddly enough, some not-so-good color models behave better when dealing with low-quality lights.
And while we are discussing consistency, there is another problem. The question “why do my consecutive indoor sports shots have different color/brightness” is also among the recurring ones. The reason for this is unrelated to RAW processing and equally affects both RAW and JPEGs: some light sources flicker. That is, for the same white balance and exposure set in the camera, the result depends on what part of the light cycle you are capturing. For example, ordinary fluorescent lights tend to flicker each half-period of the mains supply frequency. Because of that flicker, it is safe to shoot at a shutter speed of X/(2 × mains frequency), X being 1, 2, 3, … n, as full bulb cycles are captured this way. With 60 Hz mains, safe speeds are 1/120, 1/60, 1/40 (if you have it on your camera), 1/30, and so on; for 50 Hz they are 1/100, 1/50, … You can test lights for flicker by setting your camera to a fixed white balance, like fluorescent, and shooting at different shutter speeds, say, 1/200 and 1/30. If the color changes between the shots, it is flicker. Nearly the same is true when it comes to shooting monitor screens and various LCDs: if the refresh rate is 60 Hz, for consistent results try shooting with a shutter speed of (2 × X)/60, again with X being 1, 2, 3, … Some modern cameras help reduce this problem by synchronizing the start of the exposure with the light cycle. However, for lights with a non-smooth spectrum that changes during the cycle, this is not a complete solution; the shutter speed still needs to be set according to the frequency of the flicker.
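A trivial helper for the safe-speed arithmetic above, using the X/(2 × mains frequency) rule from the text:

```python
from fractions import Fraction

def safe_shutter_speeds(mains_hz, n=4):
    """Shutter speeds capturing whole flicker cycles: X / (2 * mains frequency).
    Fluorescent lights flicker at twice the mains frequency."""
    return [Fraction(x, 2 * mains_hz) for x in range(1, n + 1)]

print(safe_shutter_speeds(60))  # 1/120, 1/60, 1/40, 1/30
print(safe_shutter_speeds(50))  # 1/100, 1/50, 3/100, 1/25
```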
When we attempt to apply familiar color management logic to digital cameras, we need to realize that color management, somewhat tautologically, manages colors; it can’t be applied directly to RAW numbers – there is no color in raw image data to begin with. Raw image data is just a set of light measurements from a scene. Those measurements (except, for now, for Foveon sensors) are taken through color filters (a color filter array, CFA), but regular in-camera filtration (and this includes Foveon sensors) does not result in something one can unambiguously map to a reference color space; hence such measurements do not constitute color. But again, color management deals with color, and for color management to kick in we first need to convert the measurements contained in raw image data to color.
One More Digression
Filtrations that result in color spaces that can be mapped to reference color spaces do exist, but currently they work acceptably well only with smooth, continuous, non-spiky spectra – that is, many sources of light and many artificial color pigments will cause extreme metameric errors. On top of that, such filters have very low transmittance, demanding an increase in exposure that isn’t acceptable for general-purpose cameras. However, a CFA is not the only possible filtration method, and 3-color filtration has alternatives.
So, what’s the problem? Why can’t we just … convert raw data numbers to color? Well, we can, but it is not an unambiguous conversion. This ambiguity is among the main reasons for the differences in output color.
Why is it ambiguous? Because we need to fit the measurements made by the camera into the color gamut of some “regular” color space: the profile connection space, the working color space, or the output color space. That’s where a bit of alchemy comes in; we’re performing a transmutation between two different physical essences. To better understand the problem, we need to take a short excursion into some color science concepts.
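Before that excursion, here is the basic shape of the conversion in a Python sketch. Everything numeric here is hypothetical – the multipliers and the matrix are invented for illustration, and real converters may use 3D LUTs instead of a matrix – but it shows where the ambiguity lives:

```python
import numpy as np

# The basic shape of a raw-to-color conversion: white-balance the demosaicked
# raw triplet, then map it to XYZ. Both the multipliers and the matrix are
# invented for illustration; every converter ships its own versions of them.
wb = np.diag([2.05, 1.0, 1.48])             # hypothetical per-channel multipliers
CAM_TO_XYZ = np.array([[0.67, 0.21, 0.08],  # hypothetical; rows sum roughly to
                       [0.28, 0.85, -0.13], # the D50 white point, so a neutral
                       [0.02, -0.15, 0.95]])# stays neutral after the transform

raw_rgb = np.array([812.0, 1650.0, 1108.0]) / 16116.0  # normalized raw values
print(CAM_TO_XYZ @ (wb @ raw_rgb))
# A different converter, with different multipliers and a different matrix
# (or a 3D LUT in its place), lands on different XYZ -- a different color.
```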
The term “color gamut” is commonly abused; in many cases we hear chatter discussing “camera gamut” and such. Let’s try to address this misconception because it’s an important one for the topic at hand.
Color gamut is defined as the entire range of colors available at an output, be it a TV, a projector, a monitor, a printer, or a working color space. In other words, a color gamut pertains to working color spaces and to devices that render color for output. Digital cameras, however, are input devices. They just measure light, and the concept of color gamut is not relevant to such measurements: a gamut means some limited range, a subset of something, while a sensor responds in some way to all visible colors present in a scene. Also, sensors are capable of responding to color at very low luminance levels, where our ability to discriminate colors is decreased or even absent. More than that: the range of wavelengths a sensor records is wider than the visible spectrum and is not limited by the CIE chromaticity diagram; that’s why UV and IR photography is possible even with a non-modified camera. As you can see, the term color gamut does not apply to RAW. Only the range of relative lightnesses of colors limits the sensor response, and that’s a whole different matter – dynamic range.
Thus, a sensor doesn’t have a gamut, and there is no single, standard, or even preferred set of rules defining how we map larger into smaller, raw data numbers into color numbers – nothing like what we have in color management. One needs to be creative here, making trade-offs to achieve agreeable, expected, and pleasing color most of the time.
– OK, and what happens when we set a camera to sRGB or Adobe RGB? Those do have gamuts!
– Well, nothing happens to the raw data; only a tag indicating the preferred rendering changes in the metadata, and the out-of-camera JPEGs, including the JPEG preview(s) embedded in the raw file and external JPEGs, are rendered accordingly. Whatever color space you set your camera to, only the JPEG data and, consequently, the in-camera histogram are affected. Here is a curveball: pseudo-raw files, like some small RAW variants (sRAW), which are in fact not raw but JPEGs, have white balance applied to them.
Color is a sensation, meaning color simply does not exist outside of our perception, so we need to map measurements to sensation. In other words, we need a bridge between the compositions of wavelengths (together with their intensities) that our eyes register and the colors that we perceive. Such a bridge, or mapping, is called a color matching function (CMF), or an observer. It tries to emulate the way we humans process the information our eyes gather from a scene. In other words, observers model typical human perception, based on experimental data.
And here comes yet another source of ambiguity: the spectral response functions (SRFs) of the sensors we have in our cameras do not match typical human perception.
From Figure 12 it is pretty obvious that there is no simple transform that can convert camera RGB output to what we perceive. Moreover, the above graph is based on data at nearly the hottest exposure possible (white with faint texture is close to 92% of the maximum). When the exposure is decreased (say, the blue patch on the ColorChecker is about 4 stops darker than the white one), the task of restoring the hue of a dark saturated blue becomes more problematic, because the red curve flattens a lot and small changes in the red response become comparable to noise – but we need to know that red response to identify the correct hue of the blue. Now, suppose you are (mis-)led by the in-camera exposure meter, in-camera histogram, and/or “blinkies” into underexposing the scene by a stop; and there are surely darker blues in real life than that blue patch on the ColorChecker… That’s how color becomes unstable, and that’s how it comes to depend on exposure.
This difference between SRFs and LMS leads to what is known as metameric error: wavelength/intensity combinations that look the same to a human (that is, we recognize them as having the same color) are recorded by a camera as different, with different raw numbers. This is especially the case with colors at both ends of the lightness scale, dark colors and light colors, as well as with low-saturation, close-to-neutral pastel colors. The reverse also happens: colors that are recorded the same in raw data look different to a human. Metameric error can’t be corrected through any math, as the spectral information characterizing the scene is absent at the stage when we deal with raw. This makes exact, unambiguous color reproduction impossible.
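A numeric toy to make metameric error tangible: construct two spectra that produce identical responses for one set of sensitivity curves (the “observer”) but different responses for another (the “camera”). All curves here are toy Gaussians, not real CMFs or SRFs:

```python
import numpy as np

# Build two spectra that are identical to one set of sensitivity curves
# (the "observer") but not to another (the "camera"). Toy Gaussians only.
wl = np.linspace(400, 700, 61)
bell = lambda mu, sd: np.exp(-0.5 * ((wl - mu) / sd) ** 2)

observer = np.stack([bell(600, 40), bell(550, 40), bell(450, 30)])  # toy LMS-like curves
camera = np.stack([bell(610, 50), bell(540, 45), bell(465, 35)])    # toy camera SRFs

s1 = bell(520, 80) + 0.3                 # a smooth test spectrum
_, _, vt = np.linalg.svd(observer)
s2 = s1 + 0.3 * vt[-1]                   # add a component the observer cannot see

print(np.round(observer @ (s1 - s2), 10))  # ~[0 0 0]: same "color" to the observer
print(camera @ (s1 - s2))                  # nonzero: different raw numbers
# (s2 is not checked for non-negativity; a physical metameric pair would be.)
```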
Yet Another One
What follows from here is that instead of talking about some vague “sensor color reproduction” deficiencies, we can operate with metameric error, comparing sensors over this defined parameter. Incidentally, this characteristic can be calculated independently of any raw converter, as a characteristic of the camera per se; but it can also be used to evaluate the mappings produced by raw converters. However, measuring metameric error by shooting targets is a limited method. To quote the ISO 17321-1:2012 standard, the method based on shooting targets (the standard refers to it as method B) “can only provide accurate characterization data to the extent that the target spectral characteristics match those of the scene or original to be photographed”; that is, it is mostly suitable for in-studio reproduction work.
To reiterate: what immediately follows from sensors having no gamuts and their spectral response functions differing from what we have as our perception mechanism is this: raw data needs to be interpreted to fit the output or working color space gamut (sRGB, Adobe RGB, printer profile…), and some approximate transform between a sensor’s spectral response functions and the human observer needs to be applied.
There are multiple ways to perform such an approximate transform, depending on the constraints and assumptions involved. Some of those ways are better than others. By the way, “better” needs to be defined here: when it comes to “optimum reproduction”, it can mean “appearance matching”, “colorimetric matching”, or something in between. That is, “better” is pretty subjective; it is a matter of interpretation, and quite often it is an aesthetic call on the part of a camera or raw converter manufacturer, especially if one is using default out-of-camera or out-of-converter color. It’s actually the same as with film: accurate color reproduction was never the goal for the most popular emulsions, but pleasing color was.
Earlier, we mentioned that there are two major reasons for output color differences. We discussed the ambiguity, and now let’s get to the second one, the procedure and the quality of the measurements that are used to calculate color transforms for the mapping of raw data to color data.
Imagine you are shooting one of those color targets we use for profiling, like a ColorChecker. What light are you going to use for the shot? It seems logical to use an illuminant that matches the one the future profile will be based upon. However, standard color spaces are based on synthetic illuminants, mostly D50 and D65 (except for two: CIE RGB, based on the synthetic illuminant E, and NTSC, based on illuminant C, which can hardly be used for studio lighting – one needs a filter composed of two water-based solutions to produce it). It is rather problematic to directly obtain camera data for a D-series illuminant, simply because these illuminants are synthetic, and it is very hard, if possible at all, to come by studio lights that match, for example, the D65 spectrum accurately enough.
To compensate for the mismatch between the actual in-studio illuminant and the synthetic illuminant, profiling software needs to resort to one of the approximate transforms from studio lighting to standard illuminants. The accuracy of such a transform is very important, and the transform itself is often based not only on accuracy, but also on perceptual quality. Transforms work over rather narrow ranges; don’t expect to shoot a target under some incandescent light and produce a good D65-based profile. This, of course, is not the only problem affecting the source data used to calculate color transforms; others are problems with the light and camera setup, as well as the choice of targets and the accuracy of target reference files.
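As an example of such a transform, here is a sketch of a Bradford chromatic adaptation from illuminant A (incandescent-like) to D50, using the standard Bradford matrix and white points. As the text says, expect it to behave acceptably only over modest illuminant differences:

```python
import numpy as np

# Bradford chromatic adaptation: moving XYZ measurements from illuminant A
# (incandescent-like) to D50, the illuminant many profiles are built for.
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])
WHITE_A   = np.array([1.09850, 1.0, 0.35585])  # illuminant A white point (XYZ)
WHITE_D50 = np.array([0.96422, 1.0, 0.82521])  # D50 white point (XYZ)

def adapt_a_to_d50(xyz):
    src = BRADFORD @ WHITE_A             # white points in the Bradford response space
    dst = BRADFORD @ WHITE_D50
    scale = np.diag(dst / src)           # von Kries-style scaling of the responses
    return np.linalg.inv(BRADFORD) @ scale @ BRADFORD @ np.asarray(xyz)

print(adapt_a_to_d50([0.30, 0.25, 0.10]))
```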
This is in no way to say that shooting a ColorChecker does not lead to usable results; we provide an example of its usefulness towards the end of this article. Yet another (but minor, compared to the two above) consideration is that color science is imperfect, especially when it comes to describing the human perception of color (remember those observers we mentioned earlier?). Some manufacturers use more recent and more reliable models of human perception, while others may be stuck with older models and/or less precise methods of calculation.
To sum up, the interpretations differ depending on the manufacturer’s understanding of “likeable” color, processing speed limitations, the quality of the data used to calculate the necessary color transforms, the type of transforms (anything from a simple linear matrix to complex 3D functions), the way white balance is calculated and applied, and even noise considerations (matrix transforms are usually smoother than transforms that employ complex look-up tables). All of these factors together form what is informally called the “color model”. Since the color models are different, the output color may be quite different between converters, including the in-camera converter that produces out-of-camera JPEGs. And as you can see, it is not always the case that the in-camera converter produces the most pleasant or accurate color.
And thus we feel that we have proved both statements that we made at the very beginning of this article:
- Out-of-camera JPEGs, including in-camera previews, can’t be used blindly, without checking, to evaluate color (as we already know, the in-camera histogram is misleading, too);
- The camera manufacturer-recommended converter is not necessarily tuned to match the out-of-camera JPEG.
So, we definitely know how we feel about it, but what can we do about it?
What can we do to ensure that our RAW converter renders the colors close to the colors we saw?
A custom camera profile can help with such issues. We calculated a camera profile for the SONY a6500, based on raw data extracted with RawDigger from the DPReview Studio Scene, and used this profile in our preferred RAW converter to open the source ARW. That’s how we obtained the right part of the figure below:
Here is the report of profile accuracy:
Looking at the profile accuracy report in Figure 14, one may notice that though the accuracy is pretty good, reds are generally reproduced with less error than blues, and the darker neutral patches E4 and D4 exhibit larger errors than the others. The main reason behind the irregularity in the reproduction of neutrals is likely that I was forced to use a generic ColorChecker reference, as DPReview does not offer a reference for the target they shoot. Profiling offers an approximation, a best fit, and it might be that the E4 and D4 patches on the target they use deviate from the generic reference in a rather significant way. The BabelColor website offers a very good comparison of target-to-target variation.
The imbalance between the error in reds and the error in blues can be attributed mainly to two factors: the first is the use of the generic reference we just mentioned, and the second is the sensitivity metamerism we discussed earlier in the article.
There are also some secondary factors to watch. It is difficult to make a good profile if the spectral power distribution of the studio lighting is not measured; flare and glare can reduce the profile quality significantly, and so can light non-uniformity, be it just intensity or a different spectral composition of the light on either side of the target. However, the flat field mode in RawDigger can help take care of light non-uniformity; please have a look at the “How to Compensate for Non-Uniform Target Illumination” chapter in our Obtaining Device Data for Color Profiling article. You can use RawDigger-generated CGATS files with the free ArgyllCMS, with MakeInputICC, our free GUI over ArgyllCMS (Windows version, OS X version), or with basICColor Input 5 (includes a 14-day fully functional trial).
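To show what the core of a simple matrix profile amounts to, here is a least-squares sketch: fit a 3×3 matrix mapping white-balanced camera RGB per patch to reference XYZ. All patch values are hypothetical stand-ins for what a RawDigger CGATS export and a measured reference would provide:

```python
import numpy as np

# Fit a 3x3 matrix mapping white-balanced camera RGB to reference XYZ in the
# least-squares sense. The four patches below are hypothetical stand-ins for a
# RawDigger CGATS export (RGB) and a measured target reference (XYZ).
raw_rgb = np.array([[0.42, 0.31, 0.25],   # e.g. a dark skin patch
                    [0.75, 0.58, 0.48],   # light skin
                    [0.18, 0.20, 0.35],   # blue sky
                    [0.90, 0.91, 0.89]])  # near-white
ref_xyz = np.array([[0.115, 0.101, 0.064],
                    [0.385, 0.356, 0.283],
                    [0.180, 0.190, 0.330],
                    [0.846, 0.880, 0.940]])

M, *_ = np.linalg.lstsq(raw_rgb, ref_xyz, rcond=None)  # solves raw_rgb @ M ~ ref_xyz
print(raw_rgb @ M - ref_xyz)  # residuals: the matrix profile's per-patch error
```

Real profilers fit many more patches, often weight the fit perceptually (e.g. in Lab), and may add shaper curves or LUTs on top of the matrix.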
As we can see, there is certainly good value in a ColorChecker when it comes to creating camera profiles. Color becomes more stable and more predictable, and overall color is improved. Even when the exact target measurements and light measurements are unknown, a ColorChecker still allows one to create a robust camera profile, taking care of most of the color twists and allowing for better color reproduction. Of course, you can use a ColorChecker SG or other dedicated camera targets, but due to their semi-gloss nature you may find them more difficult to shoot. So, before going to the next level and using more advanced targets, sort out your shooting setup first to have as little flare as possible, and use a spectrophotometer to measure your ColorChecker target and your studio lights – this often proves more beneficial to color profile quality than jumping to a complex target.
I would like to thank Thom Hogan and Andrew “Digital Dog” Rodney for their input.
Dear Iliah,
You wrote: “Why don’t you start with an explanation how it happened that some patches that Mr.Myers measured on both targets have very close spectral curves between those targets, while others show significant variations in some wavelength ranges?”
That’s precisely the sort of pattern I would expect if the variation between the two targets were due to random effects: either from manufacturing “tolerances” and/or from the vagaries of spectrophotometric measurement, i.e., measurement “error”. We would expect such “errors” to be approximately normally distributed, meaning that a few patches would be either very close to, or far away from, each other, and most would have approximately “average” differences.
You wrote: “Next, why don’t you take real references extracting them from profiling packages and compare?”
If you could tell me how to get such information out of the CC Passport application, I’d be happy to try.
Lastly, you wrote: “So this remains true: ‘Because of these reformulations, you should measure your own chart and make a custom reference file if you are going to use the ColorChecker Classic for making profiles.'”
I would not argue with that, given that slight sample variation among ColorCheckers is inevitable, and colors are likely to change over time (hence the recommendation to buy new targets every year or two). But, when all is said and done, I do wonder if concern over the precise colors of a given target can be taken too far. Given that no camera sensor that I am aware of satisfies the Luther condition exactly, it is not possible to make digital images with perfect color fidelity (simultaneously for a range of colors). In other words, knowing very precisely the colors of the target that you are using for profiling will not enable perfect color reproduction in the image. It seems to me that the real value of custom profiling is to improve color CONSISTENCY in a production environment. For that reason, it would seem advisable to take periodic spectral measurements of your target and make new camera profiles as necessary.
Dear Phil:
>>“Why don’t you start with an explanation how it happened that some patches that Mr.Myers measured on both targets have very close spectral curves between those targets, while others show significant variations in some wavelength ranges?”
> That’s precisely the sort of pattern I would expect if the variation between the two targets were due to random effects
Well, no, it is not random variation, for very obvious physical reasons. Random variations do not happen in such an isolated way. It is reformulation and pigment change. I have seen no fewer than 9 reformulations of the ColorChecker since 1989, and one time it happened twice in one year. I know the pattern. So does Mr. Myers.
The concern is not about the colour. It is about spectral curves.
The Luther condition is based on imprecise and outdated observers, created from a few dozen people, often of the same race and diet. One can safely forget it.
As to your request to help you with extraction of the references, this is not something that is of any interest to the vast majority of the readers of this site.
Short version: common sense is not enough when dealing with physics and physiology.
Note: marketing departments and technical writers working for camera manufacturers may talk ad nauseam about ISO being a part of exposure. It is not. The same goes for the manufacturers of other products.
iamkarenschmidt wrote “the reference for ColorChecker (Classic) is not recommended for ColorChecker Passport shots, and vice versa.” I believe this repeats a comment made by Iliah Borg elsewhere in this thread. I would very much like to see an authoritative source for this – preferably from Xrite, the current manufacturer. If you go to the ColorChecker Passport page at Xrite, www.xrite.com/categ…port-photo, or to the brochure, www.xrite.com/-/med…ure_en.pdf, you will see that one of the targets included with the Passport is referred to as the “Color Checker Classic”. True, it is a smaller version than the original (and still available) 30 x 20 cm target. But where is the documentation that the colors are different? I see nothing at BabelColor to suggest that the Passport Color Checker Classic differs from the full-size Classic. What BabelColor does warn about is the ColorChecker SG – www.babelcolor.com/color…_SGproblem.
Dear Phil:
> I would very much like to see an authoritative source
All you need is to compare the reference for the ColorChecker Passport that X-Rite puts into their software to the reference for the ColorChecker Classic. Alternatively, you can take measurements with your spectrophotometer. Alternatively, you can read what Robin D. Myers (you know, the guy who worked for Apple and is credited with creating ColorSync) has to say, and I quote: “A few of the patches have been reformulated as shown by their spectral reflectance and difference graphs…” ( www.rmimaging.com/infor…Report.pdf , page 3) — and further on that page he shows the graphs of the differences, the purple patch being seriously different, the red patch moderately so. Then he goes on: “Because of these reformulations, you should measure your own chart and make a custom reference file if you are going to use the ColorChecker Classic for making profiles.”
On a side note, if you do not consider me to be an authoritative source, I wouldn’t recommend reading my articles.
Dear Iliah
“All you need is to compare the reference for ColorChecker Passport that X-Rite puts into their software to the reference for ColorChecker Classic.”
Xrite provides reference values here: xritephoto.com/ph_pr…ortID=5884
There is no apparent distinction between the miniaturized ColorChecker Classic that ships with the Passport application and the full-size ColorChecker Classic target.
“Alternatively, you can take measurements with your spectrophotometer. Alternatively, you can read what Robin D. Myers (you know, that guy that worked for Apple and is credited for creating ColorSync) has to say, and I quote: “A few of the patches have been reformulated as shown by their spectral reflectance and difference graphs…” ( www.rmimaging.com/infor…Report.pdf , page 3) — and further on that page he shows the graphs of the difference, purple patch being seriously different, red patch moderately different. Then he goes on: ‘Because of these reformulations, you should measure your own chart and make a custom reference file if you are going to use the ColorChecker Classic for making profiles.'”
The differences that Myers documents in the figures on p. 3 are entirely confined to the extreme ends of the (human-)visible spectrum. No wonder he says “the colors appear the same.” Whether or not they appear the same to a sensor will obviously depend upon the filters used to block infrared and ultraviolet wavelengths. My guess is that the differences that Myers finds are smaller than the differences that exist between the pre- and post-November 2014 ColorChecker Classics, as documented by BabelColor, several of which are discernible when the colors are compared side by side. www.babelcolor.com/color…oreVSafter
I also note that Myers’ results are based on measurements of ONE Passport ColorChecker Classic, as he says in the footnotes to the tables on pp. 7 – 10.
“On a side note, if you do not consider me to be an authoritative source, I wouldn’t recommend reading my articles.”
Two points:
1. I appreciate your articles very much. I look forward to more of them. I recognize that you have a great deal of expertise, which I value as a resource. That said, I do not turn off my analytical faculties when I read your articles. When it comes to discovering “truth”, skepticism is a virtue.
2. The point at issue here – whether it is suitable to use a full-size ColorChecker Classic with the Passport application – was not a part of your article. Rather, it was in a response to a comment made by me. Why should I assume that you are an authoritative source about the ColorChecker?
Regards,
Phil
Dear Phil:
You are not doing anything I suggested, and misinterpreting the words of Mr. Myers.
Here is an illustration of the difference:
s3.amazonaws.com/Iliah…ssport.png
It’s not clear to me what the linked image represents:
s3.amazonaws.com/Iliah…ssport.png
Is it a comparison of published reference values for the two targets? Or is it a comparison of measurements made from actual charts? If the latter, what was the sample size? If it was only 1, the comparison says nothing conclusive about systematic differences, if any, between the two types of targets. Color variation among charts that are nominally the same has been well documented by BabelColor.
This is not to say that your statement that the full-size CC Classic should not be used with the Passport application is incorrect. But, what I asked for was the evidence behind the statement. What I have seen so far seems inconclusive. Particularly if the two comparisons that you have pointed me to – Myers and your figure linked above – are each based on measurements of single samples. Furthermore, without some information about the repeatability of spectrophotometer measurements made by you or Myers, we have no basis for evaluating the “noisiness” of the data.
Lastly, I compared Myers’ published CCPassport Lab (D50) coordinates (last table in his paper) with the pre-Nov 2014 CC Classic coordinates (xritephoto.com/ph_pr…ortID=5884). I used the pre-Nov 2014 reference data because the Myers paper was written in 2009-10. You can see the results for DeltaE(1994) and DeltaE(2000) here: www.dropbox.com/s/5uh…4.pdf?dl=0
To summarize:
Average DE(1994) over all 24 patches is 0.96
Average DE(2000) is 0.86
The largest DE(1994) is 1.65 for the Neutral 9.5 (white) patch, followed closely by the blue patch on row 3. Several other DE(1994) values are approximately 1.5
Many DE(1994) values are less than 1.0
DE(2000) values are generally less than DE(1994), sometimes markedly so. For example, for the Neutral 9.5 patch, DE(2000) is only 1.18
Are these data evidence for systematic differences between the CC Classic target that ships with Passport and the full-size CC Classic? You seem to think they are. I remain skeptical. The differences just quantified by me are generally small, perhaps are based on single samples, and I have seen no data on measurement “error”.
Dear Phil:
Why don’t you start with an explanation how it happened that some patches that Mr.Myers measured on both targets have very close spectral curves between those targets, while others show significant variations in some wavelength ranges? Next, why don’t you take real references extracting them from profiling packages and compare? Reformulations happen, often unannounced. So this remains true: “Because of these reformulations, you should measure your own chart and make a custom reference file if you are going to use the ColorChecker Classic for making profiles.”
Nice article.
What are your thoughts regarding *some* commercial targets (ColorChecker, IT8…) with neutral greys but “bluer” whites?
I mean the kind of color cards that, when measured (using some reference white like D50 to get L*a*b* values), report less than 1 in a* or b* on all grey patches, but whose white patches are somehow at -4 to -6 b* (a* remains the same). It sounds like some have been printed on media with some kind of brightening agents.
Since indoor lighting is moving towards LED (normal ones or high-CRI LED lighting), an approach to dealing with these color cards may be to remeasure and characterize them under M2 conditions (UV-free) and use that information indoors.
Have you dealt with such color cards? Do you find them much more complicated to use than “usual” ColorCheckers, with their less than 1 a*/b* on all grey and white patches?
Dear Sir:
The paper used for IT8 targets sometimes contains optical brighteners, which becomes evident when it is measured with and without a UV filter on a spectrophotometer. I do not use these targets for cameras.
As to the ColorChecker – I never came across optical brighteners in those manufactured under the Gretag/X-Rite/Munsell names. If the white patch contains optical brighteners and the goal is to create a LUT/DCP profile, one can cover it with several layers of PTFE tape. For matrix profiles it can be ignored.
If the paper, not just one patch, contains a brightening agent (as is the case with IT8), one can use the advice here: www.argyllcms.com/doc/FWA.html
Hi, thanks for all your hard work, measurements, and testing.
What is worth adding (in my humble opinion) is that this problem is quite common. As a Pentax user, I have the same problem with… Lightroom.
I’ve constantly got color shifts and hue problems – so big that they were obvious even on my uncalibrated monitor (a 4K SEIKI TV, to be honest :D).
I bought an X-Rite Color Passport and got better colours, but the problem didn’t disappear, with either the K1 or the K5.
After testing and searching through RAW software, I moved to Capture One about 3-4 months ago. For Pentax, C1 manages colours much better than LR. But still, I was looking for something better.
I found a solution for how to use the X-Rite Color Passport with C1 (!)
look here:
For Windows users there is a simple way to extract an ICC profile using ImageMagick: www.multipole.org/disco…t=12273
And that was salvation. As I mentioned, I don’t use an Eizo or a NEC, or even a proper 10-bit BenQ monitor.
But I make some prints, and I calibrated my 6-bit monitor perceptually by iterating between prints and calibration. It suits my needs for now :).
Addendum:
Why a 4K TV? There is no good 4K monitor bigger than 32″. Believe me, for 4K you need at least 40″ (I’ve got 39″); a 39″ 4K is like a 19″ FHD :)
Dear Piotr:
This video is based on a misunderstanding; all it allows one to accomplish is adding the standard Adobe RGB profile to the list of camera profiles. It does not work that way at all. It is not a camera profile that is added.
So it is not an ICC profile from the X-Rite calibrated file? I will have to check whether it is different.
Dear Piotr:
Of course it is not. What he is extracting is the standard Adobe RGB profile, nothing else.
More than a little complicated.
Merlin, here is a good summary for you:
Most modern digital cameras cannot reproduce colors accurately, whether you are looking at the camera’s LCD or at the image on your computer monitor. Even recommended post-processing software such as Capture One and Lightroom cannot reproduce color accurately. The only thing you can trust is measuring color yourself with specific tools like a ColorChecker. Only then will you be able to take full advantage of your calibrated monitor.
Thank you. Very kind of you.
I agree that cameras, modern or not, cannot produce color “accurately.” 100%, no; 99%?; 95%? I do think that most modern cameras are “close enough.” I’m not even sure that my eyes capture light the same way anyone else’s eyes do. I don’t really care that my cameras aren’t perfectly accurate, as long as they are close enough, since I’m not trying to capture forensic photos. I’m going to take the raw image, adjust white balance as required to please me, crop as required to please me, adjust exposure and color both globally and locally as required to please me, etc. If I don’t like the shade of blue in the stained glass window, I’ll change it until I do.
Tend to agree. Accurate color correction is variably important depending on the context and use of the photograph. Accurate color reproduction is considered more important in science, medicine, and publishing than in general personal photography.
Dear Jack:
> I agree that cameras, modern or not, cannot produce color “accurately.”
The article starts with a particular problem of colour mismatch between the out-of-camera JPEG and “recommended converter”, not about the accuracy of colour per se.
Thank you very much for this in-depth treatment. Readers who would like still more (probably not many) might be interested in three articles I have posted:
philservice.typepad.com/f_opt…ample.html
and two subsequent articles that can be accessed here: philservice.typepad.com/f_optimum/
To summarize:
1. In-camera JPEG color is seldom, if ever, accurate, even when using camera settings such as “Standard” or “Neutral”.
2. For raw images of a ColorChecker Classic made with a Sony A6300, the Adobe Standard profile (ACR) produces more accurate color than custom-made ColorChecker Passport profiles. CC Passport profiles are biased toward increased saturation. The bias appears to be intentional.
3. A relatively simple example is illustrated for estimating the transformation of sensor spectral sensitivity functions to CIE XYZ color. It depends upon extracting raw RGB values for a ColorChecker image using RawDigger.
4. Empirically, it appears that absolute color fidelity is not possible with any current camera sensor, even when images are made with carefully controlled illumination.
Points 1 and 2 repeat points in this article by Iliah Borg, although I state them perhaps a bit more bluntly.
Dear Phil:
I happen to think your #2 strongly depends on the quality of the target shot and profiling engine.
Iliah,
Not sure what you mean by profiling engine. The CC Passport profiles were made with the X-Rite CC Passport application, using the same image of a ColorChecker Classic that was used to assess the accuracy of the Adobe Standard profile. In other words: same image, developed with two different profiles. One would presume that the CC Passport profile would be more accurate, because it was made from the same image that was then used to check its accuracy. On the other hand, using the Adobe Standard profile required ACR to interpolate the conversion matrix between the two illuminants in the Adobe Standard profile. Of course, I am assuming that the colors of the actual ColorChecker chart that I used were close to their reference L*a*b* coordinates.
Dear Phil:
Each piece of software you listed uses a different profiling engine. Also, the reference for the ColorChecker (Classic) is not recommended for ColorChecker Passport shots, and vice versa.
Lenses can greatly alter color, and the effect is somewhat subject-specific. I learned this when I mounted a 1972 Micro-Nikkor 55/3.5 on a 2012 NEX-7. Wild red geraniums were rendered on the blue side of purple. Removing the UV filter made no difference. The same camera with a 1986 Minolta 100/2.8 macro or a modern Sony lens gave a true rendition. All were RAW and opened in the then-current Lr. When using the vintage Micro-Nikkor, I’d carry home sample petals and use them to calibrate color in Lr.
I just tried to repeat this with a variety of lenses on a Sony a6500. For samples I used the red & blue motif Persian carpet in our living room and a ripe tomato, in indirect sunlight. The old Micro-Nikkor did not differ significantly from the modern Zeiss Touit 55/1.8 or the vintage Auto Nikkor 50/2.0. I also compared a NEX-7 and the a6500 today, and they produced identical colors.
So my theory is that in nature (flower petals) there must be pigments with very narrow reflected-light spectra, and that lens glass or coatings of yore may have had narrow-spectrum filtering anomalies.
I agree, different lenses certainly produce different casts depending on coatings, and also just the type of glass.
Additionally, in your test, did you check that the WB was set at a fixed Kelvin level, rather than auto WB? I’ve noticed that the same scene can be assigned a different WB depending on the lens, because some lenses interact correctly with the colour metering sensor algorithms and some do not.
This is one of the reasons professional portrait / interior / food photographers stick with the same camera and lens combination for years!
Dear Sir:
I tried to explain why a “fixed Kelvin level” does not characterize white balance right in this article; please see “First Digression”.
It is also important that the white balance for such tests is established as custom: either during the shot, if one is shooting JPEGs, or with the click-gray method during raw conversion.
I would like to put this out and would appreciate correction and comment.
As a RAW shooter with a heavy bias in favour of colour management and quality, I find the matter of camera calibration and profiling a bit of a step too far. It seems to me that it is fraught with so many variables, most of which are either beyond our control or conflict with one another in practical application, that trying to make sense of it in any other than an academic way makes little or no sense.
For a camera profile to have any validity it has to be carried out under tightly controlled, not to say laboratory, conditions and only really has any relevance when shooting under those same conditions. Even then there will likely be some colour variation. The only truly practical use I can think of is accurate museum/archival reproduction of artworks or colour critical applications in medicine and science.
I shoot RAW ETTR with a Neutral camera profile knowing that my image in the back of the camera or on my monitor will bear little or no resemblance to either the JPEG rendition or the scene itself. What I aim for is maximum data capture (as this is something that cannot be changed after the event) and rely on post processing to achieve exactly the ‘look’ I want.
Trying to achieve perfect colour congruence (let alone ‘accuracy’) between the JPEG in the back of the camera, the JPEG on the monitor and the colour of the scene seems like a task for masochists and back bedroom geeks – particularly as many of these have little or no concept of colour management and work on budget monitors incapable of correct calibration or profiling.
Betty wrote: “For a camera profile to have any validity it has to be carried out under tightly controlled, not to say laboratory, conditions and only really has any relevance when shooting under those same conditions. Even then there will likely be some colour variation.” I think this is an accurate characterization of camera profiling. Basically, profiling requires making an image of a target with known colors, e.g., known CIE XYZ coordinates. Of course, color depends on the illuminant, so let’s assume D50. Having made our image, we extract the observed camera raw RGB values that correspond to the known XYZ values for each color patch. The task of the profile is to find general equations (ultimately to be used for all colors) that convert the observed raw RGB values to the known XYZ coordinates. AS A PRACTICAL MATTER, THE CONVERSION WILL NOT BE PERFECT FOR EVERY COLOR, although it may be quite close for some. The fundamental reason has to do with the spectral response functions of the red-, green-, and blue-sensitive photosites of the sensor. They do not correspond closely enough to the color matching functions of the CIE standard observer. As I recall, Iliah refers in his article to very specialized sensors that do have spectral response functions that permit more accurate color reproduction than camera sensors. Note that the imperfect profile we have just described is really valid only for scenes with a D50 illuminant. In sum, as Betty wrote, even under laboratory conditions, “there will likely be some colour variation.”
If you were always photographing under a defined illuminant, it would not be too much trouble to make a separate profile for each. But most of us don’t make all our images under controlled conditions. The typical solution, used by Adobe, is a dual-illuminant profile: essentially two profiles made with illuminants of widely differing color temperatures. Adobe seems to use Illuminant A (incandescent) and D65. When you open a raw image in Lightroom/ACR, the application uses the white balance (color temperature) tag from the raw file, or from your white balance adjustment, to estimate the actual color temperature of the scene. Then, assuming the estimated color temperature is between illuminant A and D65, Lightroom/ACR INTERPOLATES a transformation from camera raw RGB to XYZ. The interpolation itself is a source of color inaccuracy. Add to that uncertainty about the actual color temperature, and the possibility of illuminants with “poorly behaved” spectral power distributions, and the problem gets even worse. Yet another source of color inaccuracy can be colors that are out of gamut for your display device. They must necessarily be mapped to the display gamut, and thus altered to some degree.
In short, absolute color fidelity, simultaneously for a wide range of colors, ain’t gonna happen — even under tightly controlled conditions.
Phil Service
Thank you for a very clear explanation.
It seems to confirm me in my view that for all but the most specialised applications it’s more trouble than it’s worth. A bit like the crock of gold at the end of the rainbow – enticing in theory but ultimately unachievable in practice!
I will stay as I am.
Agreed. And a very apt metaphor. Although I can understand that for some applications, it may be desirable to get as close to true colors as practicable, or at least to strive for color consistency. I think that’s where custom profiling, for specific subjects and illuminants, can help. But most of us, I think, just want colors that please us, or serve our own creative purposes.
> I think that’s where custom profiling, for specific subjects and illuminants, can help.
It also helps when a certain raw converter renders blue as cyan, or red as orange, something the article is about.
> profiling requires making an image of a target with known colors, e.g., known CIE XYZ coordinates
Not necessarily. It is important to mention this because you are looking at only one of several possible approaches, while more advanced profiling methods are not illuminant-dependent.
Iliah,
“while more advanced profiling methods are not illuminant-dependent.”
Can you point me to an article or paper that explains this in reasonable detail?
Thanks,
Phil
The reason I read Photography Life: very in-depth and geeky (I is one also). Also, I am a Sony user (A7s) who uses Lr, C1, and Sony Image Data Converter, plus a Canon T2i user. When out on a walkabout, there is not much you can do but shoot and trust your WB selection. But say you pick an old house with each room filled with colorful items and different lighting, and in post you want to make it right! When I am serious about getting it right, I use something inexpensive: Gary Fong’s domes, one grey and one white – BUT it is all in how you use them!! With the white dome, you get next to the subject and aim back at where you will be shooting from, with the dome over the lens; with the grey dome, again, you go to the subject, place it next to it, and fill the camera’s little metering circle with the grey dome. I also use C1’s LCC white plastic card basically the same way I use the dome. Doing an indoor shot like the one in this story, you have to calibrate in front of the subject, facing where you will be standing, to capture the light falling on your subject, NOT the light being reflected. It is also the way I learned to use a light meter back in the film days – same thing, maybe!! Outdoors it is very tough to get colors right: with a sunset/sunrise, things happen sooo fast. But take the Milky Way: something you cannot see, where you have to trust the camera! Using the white dome or card facing the MW, or facing the light at your back, you will get two different shots with different colors. And if you turn on JPEG as well, you will get those colors as seen on the camera’s back screen. Also turn on/select the camera settings you would set for the scene, even though they never show in the RAW image. For example, “night” is not a selection in Lr/Camera Raw, but in Sony’s Image Data Converter it is. If doing a sunset/sunrise, select it, shoot raw and JPEG, and compare – because when you are capturing light, you are judging your image by the in-camera JPEG on your camera’s screen, WHICH is tossed when importing your raw image into a program. And can you really remember what you saw at that minute, color-wise, even if you go back (different day/time and light)?!
Also, color/brightness calibrate your monitor – these new daylight-LED 4K HD monitors will throw you a curve. Not selling anything here, but I use a Spyder calibrator, which worked for my 1995 monitor the same as for my new 2017 one: everything looks the same, even compared against the 5K display at the Apple store!! Spyder even sells a color reference card with a diamond-shaped white/gray/black target above it, and a silver ball above that, to put in a test shot before the real shot; in Lr you can then compare the two images side by side and copy the color correction from one to the other (like I said, "geeky") – but with Lr changing things, who knows.
Bottom line: in photography you can ache and pain over the littlest difference between the color you THINK YOU SAW and the color you process it to. But YOU are the artist here, and no one can really say your hue is wrong (unless it is really out there), because they were not behind the camera.
Lastly, beware of the new daylight LED lights: indoors and even outside you will see a blue cast at the edges with your own eyes if you pay attention, and, as stated in an LED Magazine article, they do flicker in video mode at the right settings.
Thanks for the great article – you are a very informed and accomplished experimenter with a great eye!!!!
Makes me yearn for "the good old days", when you could choose the look of a film to suit the client's work, light the work evenly, measured in 1/10ths of a stop, meter incident light and grey-card reflectance, add the slightest change based on the subject and the client's preferences, and make ONE exposure. Film sent on a one-day, 800-mile round trip to a very reliable lab; feet up until delivering the result to a delighted client.
If you had written this ten years ago it would have saved me a heap of angst. Digital was far from magical and was a greedy thief of time and resources. It was obvious very quickly that accurate hand metering was not providing the desired results, nor was internal metering, since artists rarely give you a subject that conforms to a typical scene. Someone who had rarely made more than two exposures of anything, except when working far from home, found themselves doing multiple brackets – first of colour charts and grey cards, then duplicating those exposures on the actual subject. Then followed hours of post-processing and frustration in front of a screen. All my colleagues closed down their "darkrooms" and blackened their offices to work all day in the dark at their screens.
After all this, we have no idea what the person next to us sees. Many males are colour-blind to some degree – what do they see?
Digital has not been the ideal many may have hoped for. I am just glad that my tired eyes have now been retired – then again, many of my clients abandoned all their high standards to have a go themselves…
Going to read the post again now.
Dear Caroline:
> to very reliable lab
“Reliable”, meaning such a lab painstakingly calibrated film processing to the standards the film/chemistry manufacturer developed. I can’t even start on how much time and money reliable labs spent on setting up and maintaining the standards, including equipment, test strips, and last but not least salaries. We are nowhere near that standard with digital.
It is a matter of a different article, of course, but here are a couple of thoughts:
– with slide film, the dynamic range was limited, and we tried to be as accurate with exposure as possible – that helped keep colour consistent;
– with film, the colour was less based on colour theory and more based on the visual experience.
Iliah, I quite agree about being based on visual experience. What I miss is the consistency of vision: if I chose a certain film and understood how it worked, I had predictability from shot to shot and film to film. Nobody would argue that the results could be a direct copy of nature. I once filled a box with prints as a very good colour printer tried to print a copy of a watercolour painting at the same size. The first print he made using his instinct and the colour charts, and it was brilliant; then we got in close, observing tiny dabs of different distinct colours, and matching any one of them threw off the others! The printer was shocked by how these tiny adjustments interacted. Interestingly, when shown each print next to the original painting, the client would have been happy with any of the prints, based on their own visual experience.
No client ever got a thrill from seeing a digital file like they did from seeing a 4×5-inch transparency. Since the changeover – from handing a transparency to the person publishing your poster/catalogue/book, who then did the necessary work, to emailing a file that the publisher hardly even looks at – the finished results have on the whole not been good. There are obviously good printers out there, but on average I feel standards have dropped, and I am most happy to have been able to leave that life behind.
I know that I am in a minority…
Thanks for the great article Iliah.
I’m totally with you Caroline. Velvia slides on a lightbox are still magic!
At least in the good old days, with a modicum of understanding of the chemistry and filters, you could feel that you were engaged in an artistic pursuit. Nowadays, it seems you have to be a PhD electronics engineer to understand what's going on between DSLR, PC and printer! I've certainly given up trying, too. My aim is just to get it 99% right in camera. Yep, guilty – I'm a JPEG lightweight!
To that extent, I wonder if anyone knows whether Nikon provides schematics for the new Picture Control relationships? Remember the contrast/saturation chart for the D700/D300/D3, where you could see how each Picture Control related to the others, including your adjustments? This doesn't appear for the latest Nikon bodies, and although I have a fair idea through trial and error, it would be interesting to know how the D500 Picture Controls relate to each other in contrast/saturation space. That's the kind of thing you could always find out from Nikon in the good old days of proper instruction booklets. Sigh!
Dear Sir:
Have you tried processing your raw files in Nikon Capture NX-D? It gives you the benefit of changing the Picture Controls and adjusting the white balance. Normally, the output from NX-D without any adjustments to the raw file is the same in terms of colour as it is from the camera, but a tad sharper; plus you can adjust to taste directly from raw. Another option is in-camera processing from raw.
Hi Iliah,
Thanks for the reply. I am aware of the raw processing options in software, but are you saying that shooting raw and then using the in-camera raw processor to select a Picture Control is different from just using that Picture Control from the start? I have used the in-camera raw option sometimes when I wasn't sure of the best PC (or WB, etc.), but I assumed that applying the PC in camera would give the same result. Surely there aren't two JPEG engines in the body? Or do you just mean that it gives me flexibility – which I'd agree with – but my aim is to find an optimum JPEG from the start if possible.
Typically, I use Neutral on contrasty days, and Vivid with lowered contrast and raised clarity on flat days. I also modify the WB to reduce green slightly on the D500. I would like to see some confirmation from Nikon of where these JPEG Picture Controls lie in relation to each other, but I suppose with the addition of Clarity it is no longer possible to show in two-dimensional space. The picture samples I've just found on the Nikon US site aren't bad.
Thanks again.
Dear Sir:
I’m just saying that you can change the Picture Control at will while converting.
I am not saying one should not shoot JPEGs. However, after a first try back in ’99, I never do, for two simple reasons:
– I do not want to be confined to the exposure that results in “correct” brightness, and fixing dark JPEGs is not my cup of tea;
– I want to concentrate on the shooting and not on tuning JPEG settings and white balance.
Thanks for the clarification Iliah.
I definitely see the benefit of raw when you have large adjustments to make in post, but I don’t see a huge difference in exposure/contrast/colour latitude between JPEG and raw when you are making small adjustments, at least when you start with a fairly flat JPEG.
For a well-exposed JPEG (which, from the standpoint of maximising captured data and quality, is often actually underexposed), it’s the difference between degrading your image file a bit and not degrading it at all.
For a less well-exposed JPEG or one which doesn’t correspond with your vision, it’s the difference between degrading your image file a lot and not degrading it at all.
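As a toy numerical illustration of that degradation – not a claim about any specific camera or converter – pushing the shadows of an 8-bit file leaves far fewer distinct tonal levels than the same push applied to high-bit, raw-like data:

```python
import numpy as np

# A smooth gradient occupying the darkest quarter of the range.
ramp = np.linspace(0.0, 0.25, 1 << 14)

# "JPEG-like": quantize to 8 bits. "Raw-like": keep full precision.
jpeg8 = np.round(ramp * 255) / 255

# Push both by two stops (multiply by 4) and count surviving levels.
pushed_raw = np.clip(ramp * 4, 0, 1)
pushed_jpeg = np.clip(jpeg8 * 4, 0, 1)

print(len(np.unique(pushed_raw)), "levels from raw-like data")
print(len(np.unique(pushed_jpeg)), "levels from the 8-bit file")  # ~65
```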
If you are so skilled and have your various JPEG settings so finely tuned that everything is always predictable and exactly as you envisioned it would be, to the extent that you would never wish to change it, then all kudos to you.
Why not just shoot RAW and leave all that angst behind you – or at least save it for when you post-process and fine-tune everything ad infinitum?
Caroline
The fault does not lie with the technology. It lies, as ever, with the people operating it.
With a transparency, the printer/print house has a direct visual comparison and can tweak the print until it matches the transparency. It’s called ‘experience’ – otherwise known as trial and error. An experienced printer with a good eye can achieve outstanding results.
The problem with digital is that it relies on colour management. If either you or your print house is not properly colour managed, then again it’s trial and error – except this time your printer has no idea how your file is supposed to look and has nothing against which to compare his print.
That is both the beauty and the beast of digital colour management. When it’s done right, good results are almost entirely predictable and do not rely on trial and error. When it’s done badly, or not at all, all bets are off and everyone gets what they deserve.
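For readers wondering what "properly colour managed" looks like at the file level, here is a hedged sketch using Pillow – the file names are hypothetical – that honours a file's embedded ICC profile instead of ignoring it:

```python
import io
from PIL import Image, ImageCms

img = Image.open("photo.jpg")        # hypothetical file name
icc_bytes = img.info.get("icc_profile")

if icc_bytes:
    # Convert from the embedded profile to sRGB so any colour-managed
    # viewer or print house renders the same colours.
    src = ImageCms.ImageCmsProfile(io.BytesIO(icc_bytes))
    dst = ImageCms.createProfile("sRGB")
    img = ImageCms.profileToProfile(img, src, dst, outputMode="RGB")

img.save("photo_srgb.jpg")
```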
Ignoring the fundamentals is bad practice and bad practice always leads to disappointment.
I agree, however, with your comments about 4×5 transparencies. Many years ago I had the privilege of attending a talk given by Stephen Dalton, one of the pioneers of wildlife and especially high-speed flash photography of flying birds and insects. He put some of his original Kodachromes and Ektachromes on a light box. Those utterly awe-inspiring images were burned into my memory for ever and hugely influenced my photographic journey.