The number of native gamma ramp measurements has been reduced greatly, to 9 patches per channel (RGBW, and it even repeats measurements several times on the same patch RGB number… useless), although the number of 3D colorspace patches (mixed colors) is big. This is usually a bad idea, since most of these Dells rely on a LUT-matrix-LUT, not a LUT3D, hence priority should go to fixing neutral grey & gamma.
This may lead to an increased “grey range” after calibration on troublesome monitors with bad grey behavior out of the box. “Grey range” is a way to measure how far apart the colors in your grey ramp are from each other, like having a slight green tint on middle grey and a slight pink tint in dark grey. Although each grey may sit within some X dE boundary of the reference true neutral grey, the actual distance between those two greys is HIGH and noticeable to your eyes.
If you suffer this after the upgrade and you cannot downgrade to the latest DUCCS 1.6.6 for macOS/Win (because your current OS will not be supported), the only way to get rid of this issue is to recalibrate greyscale in DisplayCAL after DUCCS has done its job, once you have checked numerically or visually that you have a grey range issue.
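For the curious, that “grey range” check can be scripted in a few lines; this is a minimal sketch assuming you already have a*b* readings of your grey ramp patches (the function name and the sample numbers are mine, purely for illustration):

```python
import math

def grey_range(ab_readings):
    # worst-case a*b* distance between any two greys in the ramp
    worst = 0.0
    for i, (a1, b1) in enumerate(ab_readings):
        for a2, b2 in ab_readings[i + 1:]:
            worst = max(worst, math.hypot(a1 - a2, b1 - b2))
    return worst

# Each grey here is within ~1.2 of neutral (a*=b*=0) on its own…
ramp = [(0.0, 0.0), (-0.8, 0.9), (1.0, -0.7), (0.2, 0.1)]
# …but the greenish and the pinkish grey are ~2.4 apart from each other:
print(round(grey_range(ramp), 2))
```

That is exactly the situation described above: every patch individually passes an “X dE to neutral” test, yet the worst pair is twice as far apart and your eyes see the tint shift along the ramp.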
The new DUCCS 1.6.7 still has the same whitepoint issue: it defaults brightness to some fixed value, calibrates the internal LUT, then lowers brightness to your desired target, causing the whitepoint to drift. It seems they will NEVER fix this issue; this article dates back to 2016.
The way to overcome it is to calibrate first, see how it drifts, then recalibrate aiming at an opposite offset to your target white. 1st guess: measure the xy offset from your actual desired white, then apply it negatively to the target whitepoint & recalibrate again; a 2nd guess may be more fine-tuned. Don’t worry, you only have to do that guesswork once per display: save the workflow in DUCCS, and the next time you calibrate you just recall your saved target with the whitepoint offset. It has to be done per display, so there is no point sharing it.
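The offset guess is plain arithmetic; a sketch with made-up numbers (the target and measured xy below are illustrative, not from any real display):

```python
# One-time whitepoint offset correction for the DUCCS brightness drift.
target_xy = (0.3127, 0.3290)    # what you want after calibration (D65 here)
measured_xy = (0.3160, 0.3310)  # what you actually measured at your working
                                # brightness after the 1st calibration run

# Drift introduced by DUCCS's brightness change:
drift = (measured_xy[0] - target_xy[0], measured_xy[1] - target_xy[1])

# Apply the drift negatively to get the target for the 2nd run:
corrected_target = (target_xy[0] - drift[0], target_xy[1] - drift[1])
print(corrected_target)  # recalibrate aiming here, then re-measure
```

Save that corrected target in your DUCCS workflow and you will not have to repeat the guess on the next calibration of the same display.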
Also, X-Rite has updated its X-Rite services, so it can coexist (disable the X-Rite tray / Dell X-Rite gamma app) with i1Profiler 3.5.0 on Windows systems if you use i1Profiler with your i1Pro2/i1Pro3 for printer profiling (I’d advise you to go to the ArgyllCMS command line… but that’s another story).
Overall I have mixed feelings about this update. It’s nice to have it working on W11 and macOS M1, but I’m afraid that on those low-quality, problematic UltraSharps with bad grey range out of the box it will give you a worse calibration. For owners of a well-behaved UltraSharp this may go unnoticed.
Lightpainter
February 19, 2020 9:02 am
Hi,
Thank you very much for your very helpful article, ColorConsultant!
This comment is just meant to point out that if someone is not willing or able to thoroughly verify a calibration performed with a version of DUCCS other than the version that came with the monitor, one might be better off using that original version of DUCCS.
The reason is this: X-Rite (who writes DUCCS!) does not consider older models when developing DUCCS updates, and neither X-Rite nor Dell test updates on older models. Unlike drivers or other software, Dell still offers specific (the “native”) DUCCSs for each of their DUCCS qualified monitors for download on their support site, instead of the latest version for all. See also: www.dell.com/commu…-p/4660194
The DUCCS for newer monitors may not crash with older models, but, since older models are ignored by the programmers, the updates may not handle the older monitor’s technology correctly. You’ll never know until you perform a thorough verification of your calibration.
My personal experience with compatibility issues:
I successfully hardware calibrated & profiled two Dell UP2716D monitors with the X-Rite i1Display Pro, but the newer versions of DUCCS didn’t work at all. They crashed right after the measurements, leaving useless data in the monitor’s memory (CAL), and did not provide any ICC profile for Windows at all. I had to use DUCCS 1.5.10, the version that came with the UP2716D when it was released.
Version 1.5.10 is still the one that is offered for download on Dell’s support site when you enter a UP2716D’s service tag. For other Dell devices, e.g., my Dell XPS 15 notebook, I get software updates from Dell regularly instead. So, it seems that Dell “tacitly recommends” to stay with DUCCS 1.5.10 for the UP2716D.
To find this out, I had to google to the Dell forum where several posters reported this fallback solution. The most helpful post was by admin “Dell-Chris M”: “Those DUCCS update invites were initiated by the software, but not technically supported by Dell for all monitors. For example, when released to sale, the U2413 was only tested/validated by Dell with DUCCS 1.5.8. So the testing of [newer] versions […] has to be done by the end user to see if they will work. The same situation applies to the UP2516D/UP2716D. When released to sale, the UP2516D/UP2716D were only tested/validated by Dell with DUCCS 1.5.10.”
I for one stopped DUCCS from auto-updating.
Cheers
My setup: PC with Windows 10 1903, 2x Dell UP2716D, Palit nVidia GeForce GTX 1070 Gamerock with latest driver, latest monitor firmware (M3T103), latest Dell drivers, DUCCS 1.5.10, X-Rite Device Services Version 3.1.7.6, X-Rite i1Display Pro
Older DUCCS versions do not have the new (but slightly wrong) EDR with the WLED PFS spectral correction for the UP2516D/UP2716D. Older DUCCS also does not take 20×4 uncalibrated gamma ramp measurements, so on some units with a bad uncalibrated grey ramp with severe colorations, DUCCS 1.5.x may not be able to correct those issues.
So I would first try to install 1.6.5+ (currently 1.6.6). Only if it crashes several times without being able to complete a calibration would I try the older versions recommended by Dell. Also, laptops with their many “hub, although they look like independent, USB ports” may play a role in these crashes, due to the stupid idea of X-Rite programmers to check the serial number twice, over GPU DDC/CI and over USB.
I mean, your first-hand experience is valuable, but I would try the newer versions first. Too much to gain to ignore them.
matteo
October 23, 2019 5:39 am
Hello ColorConsultant,
Will DUCCS (Mac) support 64bit on macOS 10.15 Catalina?
I’m still working with macOS Mojave and 32-bit DUCCS, but I don’t know what to do the day I install Catalina… Can you advise me on the best solution? Thank you, have a good day. Matteo
“Will DUCCS (Mac) support 64bit on macOS 10.15 Catalina?”
It’s up to Dell. I do not work for Dell, so I don’t know. X-Rite updated i1Profiler to support Catalina, v3.1.1. It’s up to Dell, ViewSonic and the other i1Profiler clones to request (and maybe PAY for) an update from X-Rite.
If you are forced to use Catalina, an old computer with Windows may solve your issues. Use it to calibrate with DUCCS and write the CAL1/CAL2 slots, then copy the ICM files to your Mac. If you calibrate every 2 months, it may be possible to borrow that computer from a friend/relative.
DUCCS v1.6.6 for macOS Catalina was released in November 2019.
Direct download through Dell, choosing some new model like the UP3218K, but support for some displays (i.e. HW calibration support) has been discontinued on macOS, like the UP2414Q (X-Rite says that). Not sure if this link is going to work without navigating to Dell’s web first: dl.dell.com/FOLDE…lution.pkg
John L
September 25, 2019 4:50 am
I have a Dell UP3017 connected to a computer running Mac OS 10.14.6. I have an i1Display Pro and successfully used Dell DUCCS calibration once when I was on 10.14.1. Since then, when I try to use it, the program crashes before the calibration can start. The DUCCS version is 1.6.5. My graphics card is a Radeon RX580 8 GB. Can you advise whether this OS and calibration software are compatible or not? I do have a Windows laptop running Windows 10. Could I calibrate the display using the laptop, and could I, if needed, copy the profile to the Mac OS colour profile folder? Any help would be appreciated.
Use the laptop & the latest DUCCS 1.6.6. Calibrate the CAL1 and CAL2 memories to your desired targets; once HW calibration & profiling has ended, it will request a file name for the ICM file. Give it a meaningful but short name like “UP3017_CAL1_from_Windows.icm”. Copy those ICM files to macOS and assign them to your display manually (via ColorSync if Screen configuration does not show the new “[…]from_windows.icm” ones because you did not copy them to the right place).
And it’s done. You’ll have to change the default display profile manually each time you change the OSD mode from CAL1 to CAL2, or to other modes that use GPU calibration instead of HW calibration. For GPU calibration (Custom Color OSD mode) it is recommended to use DisplayCAL.
Thank you for your comment. I will try this and get back to you with results.
Thanks
Cepcem
August 5, 2019 5:30 am
imgur.com/a/ivl1nr1 Here is my setup with the new Nvidia “designer” drivers. On the left side is a grey ramp in Adobe Photoshop 30-bit on a NEC PA242W. On the right side is a grey ramp in Adobe Illustrator 30-bit on a DELL UP2718Q.
I can’t tell there is any difference between them. I can’t tell the difference when I switch to 8-bit in the Nvidia driver on both displays either.
Banding in gradients is noticeable in 16-bit images if there is truncation to 8 bits (so no Photoshop 30-bit, no Lightroom/CaptureOne dithering) and the color extremes of the gradient are very close. Your sample is a user-generated black-to-white gradient. If you want to see the 16-bit to 8-bit truncation error, try googling for the 10-bit test ramp TIFF image. It’s a 1000×1000 16-bit gradient between two close greys.
If you see no bands at all in that particular test with 8bpc configured in the Nvidia panel, with an 8-bit link between display and GPU but 30-bit enabled in PS, that would mean that the Nvidia driver is doing the same trick as Apple. That would be nice, wonderful, very good news (that’s what I asked for in my previous comments), but *it contradicts other users’ reports* that “no 10-bit pipeline end to end” means banding in “10bit test ramp”.tiff/psd in Photoshop. 10bit test ramp.tiff should look smooth in the Lightroom develop module or in Capture One if your system is able to display smooth gradients in non-color-managed applications, like the lagom 8-bit gradient PNG opened in MS Paint.
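If you cannot find that file, something equivalent is easy to generate; here is a sketch assuming numpy (the grey endpoints are my own choice, not the original file’s values):

```python
import numpy as np

# A stand-in for the famous "10bit test ramp": a 1000x1000 16-bit
# horizontal gradient between two close greys.
ramp = np.linspace(0.45, 0.50, 1000)            # two close greys, 0..1
img = np.tile(ramp, (1000, 1))                  # 1000 rows tall
rgb16 = np.repeat(img[:, :, None], 3, axis=2)   # R=G=B -> neutral grey
rgb16 = np.round(rgb16 * 65535).astype(np.uint16)

# A >8-bit pipeline keeps ~1000 distinct levels across the width;
# truncating to 8 bits per channel collapses them into a few bands:
levels_16bit = len(np.unique(rgb16[0, :, 0]))
levels_8bit = len(np.unique(rgb16[0, :, 0] >> 8))
print(levels_16bit, levels_8bit)

# To get an actual file, save rgb16 with any 16-bit-capable TIFF writer,
# e.g. tifffile.imwrite("ramp16.tif", rgb16) (tifffile is an extra
# dependency, not part of numpy).
```

On screen, those ~14 surviving 8-bit levels are exactly the distinct vertical bands you are looking for; a dithered or true 30-bit pipeline renders the same file smooth.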
quill
August 3, 2019 4:45 pm
I’ve just read that NVIDIA has released new studio drivers that activate 30-bit support in their GeForce and Titan graphics cards. The drivers seem to be available for all GeForce cards from the 1050 and up. Any related testing or information of how well those cards now perform in color-critical work and of course calibration would be greatly appreciated.
That means that you can use 30bit in Photoshop and other apps that use it (maybe Premiere). That’s all.
30-bit is *one* (not the only) way to get rid of some rounding errors caused at some stages of the pipeline between the RGB binary data in an image and the actual RGB values that are sent to the electronic panel inside the monitor. The other one is the temporal dithering used by Lightroom, CaptureOne and some advanced video tools.
True “30-bit” in Photoshop requires a 30-bit link between the GPU and the display input (regardless of the monitor panel’s actual bitdepth; the panel can be 6-bit/channel native). When rendering a 16-bit image it will truncate to 10 bits/channel, draw a 30-bit OpenGL surface (if some zoom settings are met, greater than 60%? I do not remember) and send it to the display. The driver has to be able to handle that and send it on. So it requires specialized drivers and HW.
The other option, temporal dithering, is a universal solution that works on any display. After color management is done (after the image is re-encoded to display RGB values), the high-precision calculations are truncated to the display’s capabilities… but in a smart way, so no rounding errors are visible. There is an explanation with samples in the “The basics of monitor calibration” article on this web.
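A toy sketch of the temporal dithering idea (this is the general technique, not Lightroom’s or CaptureOne’s actual implementation, which is not public):

```python
import numpy as np

rng = np.random.default_rng(0)

def truncate8(v):
    # plain truncation: the fractional error is simply thrown away
    return np.floor(np.asarray(v) * 255).astype(np.uint8)

def temporal_dither8(v, frames=64):
    # pick the upper or lower 8-bit level on each frame, with probability
    # equal to the fractional part, so the level *averaged over frames*
    # matches the high-precision value
    scaled = np.asarray(v) * 255.0
    base = np.floor(scaled)
    frac = scaled - base
    picks = rng.random((frames,) + np.shape(v)) < frac
    return base + picks              # one "frame" per row, values 0..255

v = 0.5019                           # sits between 8-bit levels 127 and 128
print(truncate8(v))                  # always stuck on the lower level
print(temporal_dither8(v, frames=4096).mean())  # averages out near 127.98
```

Flickering between the two nearest levels fast enough, the eye integrates the frames and perceives the in-between grey, which is why it works on any 8-bit link.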
Since ALL of these are SDR displays (and that includes those so-called HDR monitors with fake HDR like the SW320), ****the two solutions are visually equivalent****… but one can work on any display/GPU and the other does not. You can check for yourself: open that famous 10-bit test ramp TIFF in Lightroom (develop module), Capture One or Photoshop. LR & C1 won’t need the Studio driver to display that TIFF in a smooth way (but they DO need bandless calibration).
It is reported (MacBooks) that Apple’s custom GPU drivers accept the 30-bit request but dither down to 8bpc in order to send it to the non-10-bit panels in Apple laptops… which is superb. The same was reported to work in the past with FirePros on non-10-bit displays. What does it mean? Why is that black-box dithering so good? It means that those GPU drivers “expose” a puzzle-like lock to Adobe Photoshop (PS) that matches its 30-bit functionality. Inside the driver, once PS has sent the 30-bit data, it dithers down to the display’s capabilities (direct 30-bit output, or temporal dithering to 8-bit). IMHO, *THAT* is the way to go: 1- expose a 30/48-bit OpenGL surface drawing interface in the driver; 2- temporal dithering to meet the display’s capabilities (10 or 8 bits per channel). But AFAIK the Studio Driver from Nvidia does not work this way; it requires a full 30-bit pipeline end to end.
It’s an improvement, but it is not what I would like to see. Also, AFAIK Illustrator and InDesign lack 30-bit functionality (but I’ve not tested CC2019 that deeply, I should check), so rounding errors with non-idealized display ICM profiles are going to be there. Their users suffer that kind of rounding error (banding) more often than photographers who use PS. I mean that this kind of solution is needed in tools other than PS.
If I’m understanding you correctly you’re saying that it’s still best practice to avoid nVidia consumer cards, correct?
Since you mentioned Illustrator and Indesign, how do AMD’s consumer cards perform with those programs? I understand that they have dithering, and you recommend them for calibration. Apple seems to be using pro variants in their products however (or so their marketing says anyway), plus you mentioned they have their own special drivers, so I’m not sure how eg. a 3rd party Radeon RX 560 would compare to an iMac’s Radeon Pro 560X.
No, I’m not saying that. I’m saying that the 30-bit pipeline approach taken by ***Adobe*** (not by Nvidia or AMD) to limit the severity of rounding errors is not what I would like to see. I would like to see a more universal approach valid for all SDR displays (less than 4000:1 contrast ratio); I would like to see what I wrote: 1- expose a 30/48-bit OpenGL surface drawing interface in the driver; 2- temporal dithering to meet the display’s capabilities (10 or 8 bits per channel); or the CaptureOne/Lightroom approach: dither to 8-bit from whatever high-bitdepth calculations these applications make.
Anyway, with things in their current state, the new Studio Driver is an advance. A leap forward. AMD (and Intel, for their future discrete Xe graphics card) should take note of Nvidia’s move.
***************
Regarding AMD and dithering: AMD does dither at the LUT output. That means the 16-bit corrections stored in ICM profiles for monitors w/o HW calibration are not truncated in a wrong/simplistic way over an 8-bit link with the display, so you do not see banding. That is NOT related at all to the things I explained in my previous comment. My last comment was about all the calculations made BEFORE you send RGB data to the graphics card, before they enter the LUT for calibration.
Think of this as a car factory pipeline. The final product has to pass through all stages. Each stage introduces NEEDED transformations, from the original RGB data in an image to the desired RGB input at the panel/display, but these transformations may induce some unwanted rounding errors caused by limited-precision calculations. LUT dithering, as in Radeons, is a stage close to the end of the pipeline. Same for HW calibration. OTOH, 16-bit image truncation (like the 10-bit test ramp TIFF banding) or the banding caused by Illustrator (on a system that does not show banding in non-color-managed gradients, like MS Paint) happens at an early stage. Once that rounding error happens, it will remain. That’s why I wrote what I wrote in my previous comment: I would like to see a universal solution for ALL displays that solves banding in these early stages, so that the following stages do not carry the error across the full pipeline.
That means that all kinds of rounding errors caused by color management would go uncorrected (banding) unless the application does dithering, an application’s own functionality. That means that a 3xTRC ICM profile is likely to cause some kind of banding in Illustrator even if an AMD RX560 or an Nvidia GF2060 is able to give you bandless smooth gradients in non-color-managed environments. A way to solve this is to use more “idealized” profile types: you trade precision for (more chances of) smoothness. Since the desktop color management engine in macOS is broken (Adobe uses its own engine for its applications), “idealized” display profiles (1xTRC, matrix, black point compensation) are the recommended approach for Mac users. Most HW calibration solutions use that idealized approach. With CG/CS from Eizo or PA from NEC, that idealization, especially for the grey ramp (1xTRC perfect neutrality stored in the ICM profile), is very close to reality. On BenQs and Dells it is not guaranteed; that’s why this article recommends testing “range a*b*”.
Ah, I see, I completely misunderstood that. Thank you for clarifying and in such detail.
Jay G
July 31, 2019 11:48 pm
I have a Dell U2413 and am not getting the same screens as you on my Dell Calibration 1.5.3. I’m not showing GB-LED as my selected calibration matrix. I’m showing CCFL. Why can I not calibrate to GB-LED?
GB-LED is not a calibration matrix, it’s a spectral correction. Calibration matrices, matrix offsets, or similarly named things are static, “fixed” 3×3 matrices that translate the XYZ readings from a colorimeter into corrected CIE XYZ values. This is a universal type of correction supported by all colorimeters… but those matrices are not “portable” between devices: it is not possible for a 3×3 matrix with the same values to be valid for all colorimeters of a given model. Spectral corrections are a generic, portable spectral power distribution (SPD) sample of a backlight technology. Some colorimeters, like the i1d3, store their own spectral sensitivities in firmware (a unique set for each colorimeter, written at the factory). The colorimeter’s spectral sensitivities (unique, or supposed to be unique) + the generic backlight SPD (EDR/CCSS files) are used to calculate a unique calibration matrix “on the fly”, a different one for each colorimeter.
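The matrix math behind both correction types is the same; here is a sketch with invented readings (these numbers are not real i1d3 data, and in the “on the fly” scheme the reference side would be computed from the stored sensitivities plus the EDR/CCSS SPD rather than simply given):

```python
import numpy as np

# Raw XYZ readings of the display's R, G, B primaries from an
# uncorrected colorimeter, and reference XYZ for the same patches.
raw = np.array([[38.1, 20.2,  1.9],    # red patch, raw XYZ
                [33.5, 66.9, 10.1],    # green
                [18.9, 12.0, 89.5]])   # blue
ref = np.array([[41.2, 21.3,  1.7],
                [35.8, 71.5,  9.8],
                [17.9, 11.4, 95.2]])

# Solve for M such that M @ raw_patch = ref_patch for every patch.
# Row-stacked, that is raw @ M.T = ref:
M = np.linalg.solve(raw, ref).T

# From then on every raw reading is corrected as M @ xyz:
print(np.round(M @ raw[1], 1))   # the green patch lands on its reference
```

Since each colorimeter has its own sensitivities, each one gets its own M, which is why a single shared 3×3 matrix cannot be valid for every unit of a model.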
Regarding your problem, that kind of error is usually related to there being no DDC/CI communication between display and GPU:
-check that DDC/CI is enabled in the OSD (or, if you do not care about the current CAL1/CAL2 calibration, do a factory reset, but save the relevant config of other modes like Custom Color)
-no VM support: do NOT calibrate from a virtual machine
-check the log files in your user\AppData; in the Local or Roaming folder there should be a Dell or X-Rite folder with a communication log. Try to find the error. My guess is that it is a DDC/CI issue over DP/HDMI/DVI.
jakub
July 18, 2019 1:43 am
Wow, the amount of comments here is impressive ;-) I would like to ask for advice. I used to have a Dell U2711 for many years and recently I replaced it with a UP2718Q, which I thought would be a clear upgrade. 4K, wide gamut, hardware calibration. Nice.
However, since I got the UP2718Q I have been struggling with calibrating it properly. I am using Windows 10 mostly, but for troubleshooting I could also connect it to a Mac if necessary. I have access to i1Pro device. All I want to use it for is photography (Lightroom and Photoshop), so I would like to use wide gamut, and have it around 160-200cd/m2 because sometimes the room is too bright to use it at ~120. Actually, if possible it would be nice to have CAL1 calibrated at 120cd and CAL2 at 200 so I can switch between working in the evening and during the day.
Anyway, I tried DUCCS and Argyll, but I usually end up with one of two issues, or both: 1. Dark colours are clipped. When I make a gradient from black to white in photoshop I end up with darkest tones being just completely black. On many pictures this issue isn’t really noticeable, but working on dark images with lots of shadows I just cannot see details in the shadows.
2. Grey gradient is not really grey. I can clearly see magenta / green all over. Especially in the darker parts as well.
Could you maybe suggest a way of calibrating this screen to make it work? I am already quite disappointed with it, I can see colour casts on the sides compared to the centre, but I can live with that as long as my gradients look like gradients and grey looks like grey ;-)
1- Get an i1DisplayPro colorimeter. I’ve seen this error with my i1Pro2: a few pixels in the lagom gradient, viewed with no color management, all black. The i1Pro“X” devices are not really suited for calibrating displays. They are just for taking a reference of the spectral power distribution (SPD) and feeding that info to a colorimeter… and they even fail at that task on new widegamut W-LEDs with XXX phosphors unless you use 3nm mode in ArgyllCMS.
2- If it happens with a DUCCS calibration, this article explains why (and my 1st one does too); try to use the latest version 1.6.6 (it raises the number of calibration patches to 4×20 in the black-to-WRGB ramps). If it happens in the latest version… it is uncorrectable with DUCCS, but it may be improved by applying a GPU calibration on top of the DUCCS calibration, as explained in this article. If you suffer that kind of green-magenta tint with ArgyllCMS, use an i1DisplayPro colorimeter and slow mode (up to 96 patches for fixing grey, about 30 min). For GPU calibration to work properly you need a graphics card with a high-bitdepth LUT and dithering, like AMD Radeons, although newer Nvidias over HDMI and DP with more than 8bpc set in the Control Panel are reported to work. Still, the safest option for “non-pro cards” is Radeons over GeForces; they just work in every configuration. You can also try with the i1Pro, but it will be a pain because of its low speed, and it won’t be accurate in dark patches.
Also, there is a bug in Windows 10 v1903 that causes distinctive broad bands in gradients (not like the ones caused by GPU calibration banding). Take a look in the DisplayCAL forum for the solution (hint: disable the WindowsColorSystem task in Task Scheduler and be careful when switching display profiles; you may need to log out and log in).
As a side note: X-Rite spectrophotometers are for fabrics, for printer profiles and for giving a fast & accurate colorimeter a measurement of the SPD. They are not really meant for display calibration; if somebody sold you a ColorMunki Photo, i1Pro“X” or i1Studio as a “Swiss army knife for everything”, the seller lied.
“but for troubleshooting I could also connect it to a Mac if necessary” The macOS color management engine for the desktop and Apple apps (not Adobe ones) is broken unless you use highly idealized profiles like DUCCS’ “matrix” or Argyll’s “single curve + matrix”. Further info in the DisplayCAL doc + forum. It has been there for 3-4 OS versions and it seems that they won’t fix it.
Thanks for the detailed answer, that is both bad and good news. Bad because the i1Pro2 I have access to is from my work so I can use it for free. It came with Xerox printers to calibrate them, so nobody lied about the capabilities. I just thought that i1Pro can handle both print and screen while the i1DisplayPro can only do screens. Thanks for clarifying.
Good because with i1DisplayPro it seems like there is a light at the end of the tunnel.
One more question. What about ColorMunki Display? Any good?
It’s the same device but slower (and you cannot modify that), and most HW calibration suites are locked so that they do not support the Munki Display, so you have to pay for the more expensive i1DisplayPro: NEC, Dell, Eizo… and so on. DisplayCAL can use the two colorimeters in the same way, but the Munki still remains 4-5x slower.
4x-5x more time measuring 1000-2000+ patches for a LUT3D can be painful; the same goes for the slowest 96-step calibration from Argyll/DisplayCAL, which already takes 30 min with the i1DisplayPro.
Thanks. I have the i1Display Pro booked for this weekend so I will be testing and might come back with more questions. I appreciate you taking time to answers all those questions here. I did a fair bit of research into colour management 6-7 years ago when I found this topic quite interesting, but after all those years I forgot a lot of it, and I was not an expert in the first place. The only thing I got out of it is that I can tell when people talk nonsense about colour management (either in person or online in reviews / videos) but I don’t have enough knowledge to confidently correct them ;-) And all this info about specific LEDs in modern monitors, differences in latest software versions, bugs in Windows and so on you can only know if you’re really up to date on the topic, so again, thanks for sharing this information here.
Is it fine to enable DP1.2 after the calibration and profiling is done? Using a Dell UP2414Q.
Thanks!
Newer DUCCS version for macOS Silicon and Windows 11:
dl.dell.com/FOLDE…_15905.zip
dl.dell.com/FOLDE…7_16095.7z
The number of native gamma ramp measurements have been reduced greatly to 9 patches per channel (RGBW, although they repeat meaures several times over same patch RGB number… useless) anthough 3D colorspace patches (mixed colors) are big. This is usually a bad idea since most of these Dells relly on a LUT-matrix-LUT, not a LUT3D, hence priority must be set on fixing neutral grey & gamma.
This may lead to a increased “grey range” after calibration on troublesome monitors with bad grtey behavior out of the box. “Grey range” is a way to measure how much opposite colors are in your grey ramp, like having a slight green tint on middle gray and a slight pink tint in dark grey. Although each color seems within some X dE boundary to reference true neutral grey, actual distance between those two greys is HIGH and noticeable by your eyes.
If you suffer this after upgrade and you cannot downgrade to latest DUCCS 1.6.6 for macOS/Win (because your current OS will nt be supported) the only way to get rid of thiss issue is recalibrating greyscale in DisplayCAL after DUCCS has done its job and you have tested numeriucally or visually that you have a grey range issue.
New DUCCS 1.6.7 still have the same issue regarding the whitepoint you get since it defaults brightness to some fixed value, calibrate internal LUT, then lower brightness to your desired target causing whitepoint to drift.
It’s seems they NEVER will fix this issue, this article dates back to 2016.
The way to overcome this issue is to calibrate first, see how it drifts and recalibrate again guessing an opposite offset to target white .
1st guess, measure xy offset from your actual desired white, then apply it negative to target whitepoint & recalibrate again, 2nd guess may be more fine tuned. Don’t worry you had to do guess that only once per display, then save workflow in DUCCS and next time you calibrate you just have to recall your saved target with whitepoint offset.
It has to be done per display, so there is no point sharing it.
Also Xrite has updated is Xrite services, so it can coexist (disable Xrite tray / Dell Xrite gamma app) with i1Profiler 3.5.0 on Windows systems if you use i1Profiler with your i1Pro2/i1Pro3 for printer profiling (I’ll advice you to go to ArgyllCMS commandline… but that’s another story).
Overall I have mixed feelings about this update. It’s nice to have it working in W11 and macOS M1, but I’m afraid that in those low quality problematic ultrasharps with bad grey range out of the box it will give you worse calibration. For owners with a good behaved Ultrasharp this may go unnoticed.
Hi,
Thank you very much for your very helpful article, ColorConsultant!
This comment is just meant to point out that if someone is not willing or able to thoroughly verify a calibration performed with a version of DUCCS other than the version that came with the monitor, one might be better off using that original version of DUCCS.
The reason is this:
X-Rite (who writes DUCCS!) does not consider older models when developing DUCCS updates, and neither X-Rite nor Dell test updates on older models. Unlike drivers or other software, Dell still offers specific (the “native”) DUCCSs for each of their DUCCS qualified monitors for download on their support site, instead of the latest version for all. See also: www.dell.com/commu…-p/4660194
The DUCCS for newer monitors may not crash with older models, but, since being ignored by the programmers, the updates may not consider the older monitor’s technology accordingly. You’ll never know until you perform a thorough verification of your calibration.
My personal experience with compatibility issues:
I successfully hardware calibrated & profiled two Dell UP2716D monitors with the X-Rite iDisplay Pro but, the newer versions of DUCCS didn’t work at all. They crashed right after the measurements, leaving useless data in the monitor’s memory (CAL), and did not provide any ICC profile for Windows at all. I had to use DUCCS 1.5.10, the version that came with the UP2716D when it was released.
Version 1.5.10 is still the one that is offered for download on Dell’s support site when you enter a UP2716D’s service tag. For other Dell devices, e.g., my Dell XPS 15 notebook, I get software updates from Dell regularly instead. So, it seems that Dell “tacitly recommends” to stay with DUCCS 1.5.10 for the UP2716D.
To find this out, I had to google to the Dell forum where several posters reported this fallback solution. The most helpful post was by admin “Dell-Chris M”:
“Those DUCCS update invites were initiated by the software, but not technically supported by Dell for all monitors. For example, when released to sale, the U2413 was only tested/validated by Dell with DUCCS 1.5.8. So the testing of [newer] versions […] has to be done by the end user to see if they will work.
The same situation applies to the UP2516D/UP2716D. When released to sale, the UP2516D/UP2716D were only tested/validated by Dell with DUCCS 1.5.10.”
I for one stopped DUCCS from auto-updating.
Cheers
My setup:
PC with Windows 10 1903, 2x Dell UP2716D, Palit nVidia GeForce GTX 1070 Gamerock w latest driver, latest monitor firmware (M3T103), latest Dell drivers, DUCCS 1.15.10, X-Rite Device Services Version 3.1.7.6, X-Rite iDisplay Pro
Older DUCCS do not have the new (but slighty wrong) EDR with WLED PFS spectral correction for UP2516D/UP2716D.
Older DUCCS does not take 20×4 uncalibrated gamma ramp measurements so in soem units with bad uncalibrated grey ramp with severe colorations DUCCS 1.5.x may be not ablle to correct those issues.
So I would try first to install 16.5+ (current 1.6.6). Only if it crashes several times without being able to complete calibration I would try older versiosn recomended by DELL.
Also laptops with its many “hub although they look independent USB ports” may play a role in these crash by the stupid idea of xrite programmers to check serial no. twice on GPU DDC/CI and in USB.
I mean your first hand experience is valuable, but I would try first the newer versions. Too high profit to ignore them.
Hello ColorConsultant,
Will DUCCS (Mac) support 64bit on macOS 10.15 Catalina?
I still working with MacOs Mojave and 32 bit DUCCS, but i don’t know what to do the day i will install Catalina…
May you advice me the best solution?
Thank You,
Have a good day.
Matteo
“Will DUCCS (Mac) support 64bit on macOS 10.15 Catalina?”
It's up to Dell. I do not work for Dell, so I don't know. X-Rite updated i1Profiler to support Catalina, v3.1.1. It's up to Dell, ViewSonic and the other i1Profiler clones to request an update from X-Rite (and maybe PAY for it).
If you are forced to use Catalina, an old computer with Windows may solve your issues. Use it to calibrate with DUCCS and write the CAL1/CAL2 slots, then copy the ICM files to your Mac.
If you calibrate every 2 months it may be possible to borrow that computer from a friend/relative.
DUCCS v1.6.6 for macOS Catalina was released in November 2019.
Direct download through Dell, choosing some new model like the UP3218K, but support for some displays (i.e. HW calibration support) has been discontinued on macOS, like the UP2414Q (X-Rite says so).
Not sure if this link is going to work without navigating to Dell's web first:
dl.dell.com/FOLDE…lution.pkg
I have Dell UP3017 connected to a Computer running Mac OS 10.14.6. I have i1Display Pro and have successfully used Dell DUCCS calibration once when I was on 10.14.1. Since that date when I try to use it the program crashes before the calibration can start.
The DUCCS version is 1.6.5. My graphics card is a Radeon RX580 8 GB.
Can you advise whether this OS and calibration software are compatible or not?
I do have a Windows laptop running Windows 10. Could I calibrate the display using the laptop and, if needed, copy the profile to the macOS colour profile folder? Any help would be appreciated.
Use the laptop & the latest DUCCS 1.6.6. Calibrate the CAL1 and CAL2 memories to your desired targets; once HW calibration & profiling has ended it will request a file name for the ICM file. Give it a meaningful but short name like "UP3017_CAL1_from_Windows.icm".
Copy those ICM files to macOS and assign them to your display manually (use ColorSync if the Displays configuration
does not show the new "[…]from_Windows.icm" profiles because you did not copy them to the right place).
And it's done. You'll have to change the default display profile manually each time you change the OSD mode from CAL1 to CAL2, or to other modes that use GPU calibration instead of HW calibration.
For GPU calibration (Custom Color OSD mode) it is recommended to use DisplayCAL.
Thank you for your comment. I will try this and get back to you with results.
Thanks
imgur.com/a/ivl1nr1
Here is my setup with the new Nvidia “designer” drivers.
On the left side is a grey ramp in Adobe Photoshop 30 Bit on the NEC PA242W.
On the right side is a grey ramp in Adobe Illustrator 30 Bit on the DELL UP2718Q.
I can't tell there is any difference between them.
I can't tell there is a difference when I switch to 8 bit in the Nvidia driver on both displays either.
Banding in gradients is noticeable in 16bit images if there is truncation to 8bit (so no Photoshop 30bit, no Lightroom/CaptureOne dithering) and the color extremes of the gradient are very close.
Your sample is a user-generated black-to-white gradient. If you want to see the 16bit-to-8bit truncation error, try googling for the "10bit test ramp" TIFF image. It's a 1000×1000 16bit gradient between two close greys.
If you see no bands at all in that particular test, with 8bpc configured in the nvidia panel and an 8bit link between display and GPU but 30bit enabled in PS, that would mean that the nvidia driver is doing the same trick as Apple. That would be nice, wonderful, very good news (that's what I've asked for in my previous comments), but *it contradicts other users' reports* that no 10bit pipeline end to end means banding in "10bit test ramp".tiff/psd in Photoshop.
"10bit test ramp.tiff" should look smooth in the Lightroom develop module or in Capture One if your system is able to display smooth gradients in non color managed applications, like the lagom 8bit gradient PNG opened in MS Paint.
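If you want to put numbers on that truncation, here is a rough Python sketch (the 16-bit code values and the ramp width are made up, but any similarly narrow range behaves the same way):

```python
# Toy model of the "10bit test ramp": a smooth 16-bit gradient between
# two close greys, then truncated to 8 and to 10 bits per channel.

def ramp16(start, end, n):
    """n evenly spaced 16-bit code values between two close greys."""
    return [round(start + (end - start) * i / (n - 1)) for i in range(n)]

ramp = ramp16(32000, 33024, 1000)   # hypothetical values, ~4 8-bit steps wide

# 8-bit truncation collapses the smooth ramp into a handful of bands.
bands8 = len({v >> 8 for v in ramp})

# A 10-bit path keeps 4x as many steps, so each band is 4x narrower.
bands10 = len({v >> 6 for v in ramp})

print(bands8, bands10)
```

With an 8-bit link the whole 1024-count span survives as only a few distinct output codes (wide, visible bands); a 10-bit path keeps 4x as many, and temporal dithering can hide what remains.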
I’ve just read that NVIDIA has released new studio drivers that activate 30-bit support in their GeForce and Titan graphics cards. The drivers seem to be available for all GeForce cards from the 1050 and up. Any related testing or information of how well those cards now perform in color-critical work and of course calibration would be greatly appreciated.
Here’s the related link: www.nvidia.com/en-us…io-driver/
That means that you can use 30bit in Photoshop and other apps that use it (maybe Premiere). That’s all.
30bit is *one* (not the only) way to get rid of some rounding errors caused at some stages of the pipeline between the RGB binary data in an image and the actual RGB values that are sent to the electronic panel inside the monitor.
The other one is temporal dithering used by Lightroom, CaptureOne or some advanced video tools.
True "30bit" in Photoshop requires a 30bit link between GPU and display input (regardless of the monitor's actual panel bitdepth; the panel can be 6bit/channel native). When rendering a 16bit image it will truncate to 10bit/channel, draw a 30bit OpenGL surface (if some zoom settings are met, greater than 60%? I do not remember) and send it to the display. The driver has to be able to handle that and send it to the display, so it requires specialized drivers and HW.
The other option, temporal dithering, is a universal solution that works on any display. After color management is done (after the image is re-encoded to display RGB values), high precision calculations are truncated to the display's capabilities… but in a smart way so no rounding errors are visible. There is an explanation with samples in the "The basics of monitor calibration" article on this web.
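A minimal sketch of the idea behind temporal dithering, with toy values (this is a generic error-feedback scheme, not any vendor's actual algorithm):

```python
# A target level that falls between two 8-bit codes is approximated by
# alternating nearby codes over successive frames; the eye averages them
# over time, recovering the missing precision.

def temporal_dither(target, frames):
    """Emit one 8-bit code per frame whose average tracks `target`."""
    out, err = [], 0.0
    for _ in range(frames):
        want = target + err        # carry the accumulated rounding error
        code = round(want)
        err = want - code
        out.append(code)
    return out

codes = temporal_dither(127.25, 8)  # target sits 1/4 step above code 127
print(codes, sum(codes) / len(codes))
```

The emitted codes are only ever 127 or 128, yet their average is exactly the in-between target, which is why this works on any panel regardless of its native bitdepth.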
Since ALL of these are SDR displays (and that includes those so called HDR monitors with fake HDR like SW320) ****the two solutions are visually equivalent****… but one can work on any display/gpu and the other does not.
You can check for yourself: open that famous 10bit test ramp TIFF in Lightroom (develop module), Capture One or Photoshop. LR & C1 won't need the Studio driver to display that TIFF in a smooth way (but they DO need bandless calibration).
It is reported (MacBooks) that the custom GPU drivers from Apple accept the 30bit request but dither to 8bpc in order to send it to the non-10bit panels in Apple laptops… which is superb. The same is reported to have worked in the past with FirePros on non-10bit displays.
What does it mean? Why is that black-box dithering so good? It means that those GPU drivers "expose" a puzzle-like lock to Adobe Photoshop (PS) that matches its 30bit functionality. Inside the driver, once PS has sent 30bit data, it dithers down to the display's capabilities (direct 30bit output or temporal dithering to 8bit).
IMHO, *THAT* is the way to go:
1- expose 30/48bit OpenGL surface drawing interface in driver
2- temporal dithering to meet display capabilities (10 or 8bit per channel)
but AFAIK the Studio Driver from nvidia does not work this way: it requires a full 30bit pipeline end to end.
It’s an improvement but it is not what I would like to see.
Also AFAIK Illustrator and InDesign lack 30bit functionality (but I've not tested CC2019 that deeply, I should check), so rounding errors with non-idealized display ICM profiles are going to be there. Their users suffer that kind of rounding error (banding) more often than photographers that use PS. I mean that this kind of solution is needed in tools other than PS.
If I’m understanding you correctly you’re saying that it’s still best practice to avoid nVidia consumer cards, correct?
Since you mentioned Illustrator and Indesign, how do AMD’s consumer cards perform with those programs? I understand that they have dithering, and you recommend them for calibration. Apple seems to be using pro variants in their products however (or so their marketing says anyway), plus you mentioned they have their own special drivers, so I’m not sure how eg. a 3rd party Radeon RX 560 would compare to an iMac’s Radeon Pro 560X.
No, I’m not saying that.
I'm saying that the 30bit pipeline approach made by ***Adobe*** (not by nvidia or AMD) to limit rounding error severity is not what I would like to see.
I would like to see a more universal approach valid for all SDR displays (less than 4000:1 contrast ratio); I would like to see what I wrote:
1- expose 30/48bit OpenGL surface drawing interface in driver
2- temporal dithering to meet display capabilities (10 or 8bit per channel)
or the CaptureOne/Lightroom approach: dither to 8bit from whatever high bitdepth calculations these applications make.
Anyway, as things currently stand, the new Studio Driver is an advance. A leap forward. AMD (and Intel, for their future discrete Xe graphics card) should take note of nvidia's move.
***************
Regarding AMD and dithering: AMD does dither at LUT output. That means that the 16bit corrections stored in ICM profiles for monitors w/o HW calibration are not truncated in a wrong/simple way over an 8bit link with the display, so you do not see banding.
That is NOT related at all to the things I explained in my previous comment. My last comment was about all the calculations made BEFORE you send RGB data to the graphics card, before they enter the LUT for calibration.
Think of this as a car factory pipeline. The final product has to pass through all stages. Each stage introduces NEEDED transformations, from the original RGB data in an image to the desired RGB input at the panel/display, but these transformations may introduce some unwanted rounding errors caused by limited precision calculations.
LUT dithering, like in Radeons, is a stage close to the end of the pipeline. Same for HW calibration.
OTOH, 16bit image truncation (like the 10bit test ramp TIFF banding) or banding caused by Illustrator (on a system that does not show banding in non color managed gradients, like MS Paint) happens at an early stage. Once this rounding error happens, it will remain. That's why I wrote what I did in my previous comment: I would like to see a universal solution for ALL displays that solves banding at these early stages, so the following stages do not carry the error across the full pipeline.
That means that all kinds of rounding errors caused by color management would go uncorrected (banding) unless the application dithers, an application's own functionality.
That means that a 3xTRC ICM profile is likely to cause some kind of banding in Illustrator even if an AMD RX560 or an nvidia GF2060 is able to give you bandless smooth gradients in non color managed environments.
A way to solve this is to use more "idealized" profile types: you trade precision for (more chances of) smoothness. Since the desktop color management engine in macOS is broken (Adobe uses its own engine for its applications), "idealized" display profiles (1xTRC, matrix, black point compensation) are the recommended approach for Mac users.
Most HW calibration solutions use that idealized approach. With CG/CS from Eizo or PA from NEC, that idealization, especially for the grey ramp (1xTRC perfect neutrality stored in the ICM profile), is very close to reality. On BenQs and Dells it is not guaranteed; that's why this article recommends testing "range a*b*".
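A quick sketch of what that "range a*b*" test measures, with made-up Lab readings: each grey alone is close to neutral, but the spread between opposite tints is what the eye catches:

```python
# Hypothetical (L*, a*, b*) readings of a calibrated grey ramp.
measured_greys = [
    (25.0, -0.9,  0.3),   # dark grey, slight green tint
    (50.0,  0.8, -0.4),   # mid grey, slight magenta tint
    (75.0, -0.2,  0.9),   # light grey, slight yellow tint
]

a_vals = [a for _, a, _ in measured_greys]
b_vals = [b for _, _, b in measured_greys]

# "Range a*b*": total swing across the ramp, not distance to neutral.
range_a = max(a_vals) - min(a_vals)   # green-to-magenta swing
range_b = max(b_vals) - min(b_vals)   # blue-to-yellow swing

print(range_a, range_b)
```

Every patch here is within about 1 unit of neutral on its own, yet range a* comes out to 1.7: that green-vs-magenta swing along the ramp is what makes a grey gradient look dirty even when each patch passes a per-patch dE check.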
Ah, I see, I completely misunderstood that. Thank you for clarifying and in such detail.
I have a Dell U2413 and am not getting the same screens as you on my Dell Calibration 1.5.3. I’m not showing GB-LED as my selected calibration matrix. I’m showing CCFL. Why can I not calibrate to GB-LED?
GB-LED is not a calibration matrix, it's a spectral correction.
Calibration matrices, matrix offsets, or similarly named things are static, "fixed" 3×3 matrices that translate XYZ readings from a colorimeter into corrected CIE XYZ values. This is a universal type of correction supported by all colorimeters… but those matrices are not "portable" between devices: a 3×3 matrix with the same values will not be valid for every colorimeter of a given model.
Spectral corrections are a generic, portable spectral power distribution (SPD) sample of a backlight technology. Some colorimeters like the i1d3 store their own spectral sensitivities in firmware (a unique set per colorimeter, written at the factory). The colorimeter's spectral sensitivities (unique, or supposed to be unique) + the generic backlight SPD (EDR/CCSS files) are used to calculate a unique calibration matrix "on the fly", a different one for each colorimeter.
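A rough numpy sketch of that "on the fly" matrix idea; every spectrum below is a toy Gaussian (not real observer, sensor or EDR data), so only the mechanism is meaningful:

```python
import numpy as np

wl = np.arange(380, 731, 10.0)          # wavelength grid in nm

def band(center, width):
    """Toy spectral band as a Gaussian over the wavelength grid."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Toy CIE-like observer and this colorimeter's slightly-off filters.
observer = np.stack([band(595, 40), band(555, 40), band(450, 30)])
sensor = np.stack([band(605, 45), band(548, 42), band(458, 32)])

# Generic backlight primaries' SPDs (the EDR/CCSS part), toy values.
primaries = np.stack([band(630, 20), band(530, 25), band(465, 18)])

xyz = primaries @ observer.T    # true XYZ of each primary (3x3)
raw = primaries @ sensor.T      # what this sensor actually reads (3x3)

# Solve the per-device 3x3 matrix M such that raw @ M ≈ xyz.
M, *_ = np.linalg.lstsq(raw, xyz, rcond=None)

print(np.allclose(raw @ M, xyz))
```

A different `sensor` (another unit's filters) yields a different M from the very same SPD data, which is exactly why the matrix is per-device while the EDR/CCSS correction file is portable.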
Regarding your problem, that kind of error is usually related to failed DDC/CI communication between display and GPU:
-check that DDC/CI is enabled in the OSD (or, if you do not care about the current CAL1/CAL2 calibration, do a factory reset, but save the relevant config of other modes like Custom Color)
-no VM support: do not calibrate from a virtual machine
-check the log files in your user\AppData folder; in Local or Roaming there should be a Dell or X-Rite folder with a communication log. Try to find the error. My guess is that it is a DDC/CI issue over DP/HDMI/DVI.
Wow, the amount of comments here is impressive ;-)
I would like to ask for advice. I used to have a Dell U2711 for many years and recently I replaced it with the UP2718Q, which I thought would be a clear upgrade. 4K, wide gamut, hardware calibration. Nice.
However, since I got the UP2718Q I have been struggling with calibrating it properly. I am using Windows 10 mostly, but for troubleshooting I could also connect it to a Mac if necessary. I have access to i1Pro device. All I want to use it for is photography (Lightroom and Photoshop), so I would like to use wide gamut, and have it around 160-200cd/m2 because sometimes the room is too bright to use it at ~120. Actually, if possible it would be nice to have CAL1 calibrated at 120cd and CAL2 at 200 so I can switch between working in the evening and during the day.
Anyway, I tried DUCCS and Argyll, but I usually end up with one of two issues, or both:
1. Dark colours are clipped. When I make a gradient from black to white in Photoshop I end up with the darkest tones being completely black. In many pictures this issue isn't really noticeable, but working on dark images with lots of shadows I just cannot see details in the shadows.
2. Grey gradient is not really grey. I can clearly see magenta / green all over. Especially in the darker parts as well.
Could you maybe suggest a way of calibrating this screen to make it work? I am already quite disappointed with it; I can see colour casts on the sides compared to the centre, but I can live with that as long as my gradients look like gradients and grey looks like grey ;-)
Thanks in advance!
1- Get an i1DisplayPro colorimeter. I've seen this error with my i1Pro2: a few pixels of the lagom gradient, viewed with no color management, were all black.
i1Pro"X" devices are not really suited for calibrating displays. They are just there to take a reference of spectral power distribution (SPD) and feed that info to a colorimeter… and they even fail at that task on new widegamut W-LED with XXX phosphors unless you use 3nm mode in ArgyllCMS.
2- If it happens with a DUCCS calibration, it's explained in this article why it happens (and in my 1st one too). Try to use the latest version 1.6.6 (it raises the number of calibration patches to 4×20 in the black-to-WRGB ramps). If it happens in the latest version… it is uncorrectable with DUCCS, but it may be improved by applying a GPU calibration on top of the DUCCS calibration. Explained in this article.
If you suffer that kind of green-magenta tint with ArgyllCMS, use an i1DisplayPro colorimeter and slow mode (up to 96 patches for fixing grey, about 30min). For GPU calibration to work properly you need a graphics card with a high bitdepth LUT and dithering, like AMD Radeons, although newer nvidias over HDMI and DP with more than 8bpc set in the Control Panel are reported to work. Still, the safest option for "non pro cards" is Radeons over GeForces; they just work in every configuration.
You can also try with the i1Pro, but it will be a pain because of its low speed, and it won't be accurate on dark patches.
Also there is a bug in Windows 10 v1903 that causes distinctive broad bands in gradients (not like the ones caused by GPU calibration banding). Take a look in the DisplayCAL forum for the solution (hint: disable the task scheduler task WindowsColorSystem and be careful when switching display profiles; you may need to log out and log in).
As a side note: X-Rite spectros are for fabrics, for printer profiles and for giving a fast & accurate colorimeter a measure of SPD. They are not really meant for display calibration; if somebody sells you a ColorMunki Photo, i1Pro"X" or i1Studio as a "Swiss army knife for everything", the seller lied.
Sorry, another typo… portable devices…
Happy weekend!
“but for troubleshooting I could also connect it to a Mac if necessary”
The macOS color management engine for the desktop and Apple apps (not Adobe ones) is broken unless you use highly idealized profiles like DUCCS' "matrix" or Argyll's "single curve + matrix". Further info in the DisplayCAL docs + forum.
It has been there for 3-4 OS versions and it seems that they won't fix it.
You'd be jumping into the wrong boat.
Thanks for the detailed answer, that is both bad and good news.
Bad because the i1Pro2 I have access to is from my work so I can use it for free. It came with Xerox printers to calibrate them, so nobody lied about the capabilities. I just thought that i1Pro can handle both print and screen while the i1DisplayPro can only do screens. Thanks for clarifying.
Good because with i1DisplayPro it seems like there is a light at the end of the tunnel.
One more question. What about ColorMunki Display? Any good?
I just found i1DisplayPro for rent from a camera rental business nearby, so I will be testing your suggestions soon ;-)
Same device but slower (and you cannot modify that), and most HW calibration suites are locked so as not to support the Munki Display, so you have to pay for the more expensive i1DisplayPro: NEC, Dell, Eizo… and so on.
DisplayCAL can use the two colorimeters in the same way, but the Munki still remains 4-5x slower.
4-5x more time measuring 1000-2000+ patches for a LUT3D can be painful; the same goes for the slowest 96-step calibration from Argyll/DisplayCAL, which already takes 30min with the i1DisplayPro.
Thanks. I have the i1Display Pro booked for this weekend so I will be testing and might come back with more questions. I appreciate you taking time to answers all those questions here. I did a fair bit of research into colour management 6-7 years ago when I found this topic quite interesting, but after all those years I forgot a lot of it, and I was not an expert in the first place. The only thing I got out of it is that I can tell when people talk nonsense about colour management (either in person or online in reviews / videos) but I don’t have enough knowledge to confidently correct them ;-) And all this info about specific LEDs in modern monitors, differences in latest software versions, bugs in Windows and so on you can only know if you’re really up to date on the topic, so again, thanks for sharing this information here.