Adobe Premiere (2021) and HDR
I was looking at how exactly HDR info travels in a production environment.
https://community.adobe.com/t5/premiere-pro-beta-discussions/discuss-color-m...

=====
Francis-Crossman, Adobe Employee, Sep 17, 2021

This expands the list of color managed codecs. H264/HEVC represents the majority of DSLR and mirrorless prosumer cameras on the market, and XAVC-L / XAVC Slog are heavily used in broadcast HDR workflows.

"Color managed" means that Premiere Pro reads the color tags in the file metadata and accurately converts the file to the sequence color space. If the color spaces of the file and sequence are the same, Premiere Pro passes the colors through to the sequence without conversion.

Color management is necessary for HDR production. Codecs that are not color managed will always be interpreted as Rec709, which worked just fine for a long time. But with HDR becoming more and more mainstream, proper color management is necessary.

Full list of color managed codecs:

Codec                                   Wrapper       Color Spaces                            New?
Apple ProRes (422 HQ, 4444, 4444 XQ)    .MOV          Rec. 709, Rec. 2100 HLG, Rec. 2100 PQ   Previously supported
Sony XAVC-I (all intra)                 .MXF          Rec. 709, Rec. 2100 HLG                 Previously supported
Sony XAVC-L (long GOP)                  .MXF          Rec. 709, Rec. 2100 HLG                 NEW
H.264 / HEVC                            .MP4, .MOV    Rec. 709, Rec. 2100 HLG, Rec. 2100 PQ   NEW
=====

So in this case he talked about importing colorspace-tagged material, not embedded ICC profiles or the display side.

x264 support for XAVC-intra + HDR metadata was merged a few years ago:
https://forum.doom9.org/showthread.php?p=1950549
https://forum.doom9.org/showthread.php?p=1940382
https://code.videolan.org/videolan/x264/-/merge_requests/5

I still wonder how you get those values for the mastering display: G(x,y)B(x,y)R(x,y)WP(x,y)L(max,min)?

Hm, maybe like here, via libdisplayinfo/EDID:
https://github.com/doitsujin/dxvk/blob/master/src/wsi/wsi_edid.cpp#L14
from https://github.com/libsdl-org/SDL/issues/6587

But of course there is a dev blog saying the EDID can be incomplete/inaccurate:
https://planet.kde.org/xavers-blog-2024-05-10-hdr-and-color-management-in-kw...

====
There is a fourth option though: Use the color information from the display's EDID. In Plasma 6.1, you can simply select this in the display settings. Note that it comes with some caveats too:
- the EDID only describes colors with the default display settings, so if you change the "picture mode" or similar things in the display settings, the values may not be correct anymore
- the manufacturer may not measure every panel and may just put generic values for the display model into the EDID
- the manufacturer may put completely wrong values in there (which is why this is disabled by default)
- even when correct values are provided, ICC profiles have much more detailed information on the display's behavior than the EDID can contain
====

But at least this round of searching shows that SDL (2.x, 3?) does have some support for HDR now, so maybe it is also an option!
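One plausible way to get those numbers on Linux, assuming the panel's EDID is trustworthy at all (see the KDE caveats quoted above), is to dump and decode the EDID; the connector path here is just an example:

    edid-decode < /sys/class/drm/card0-HDMI-A-1/edid

And once you have values, x264 takes them in exactly that G(x,y)B(x,y)R(x,y)WP(x,y)L(max,min) form: chromaticities in units of 0.00002, luminance in units of 0.0001 cd/m². A sketch, not a verified command line; the numbers below are the commonly quoted P3-D65 / 1000-nit mastering example, not measurements from a real panel:

    x264 --output-depth 10 --profile high10 \
         --colorprim bt2020 --transfer smpte2084 --colormatrix bt2020nc \
         --mastering-display "G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1)" \
         --cll "1000,400" \
         -o hdr_test.264 input_10bit.y4m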
The ICC/color space relationship is not clear to me (maybe an ICC profile is just the same color space but more precisely defined and adapted to the device). However, again for the sheer pleasure of talking about color without any other purpose, I think good color management can be simplified into 4 steps:

1) Input, with its own color space (or ICC profile). Typically it is the camera that records in raw (pro), log and ProRes (prosumer), or H.264/H.265 (consumer). All types are profiled; pro and prosumer cameras have multiple profiles to choose from and can be profiled manually or with a LUT. This is referred to as ICC in the manual case, or as support for the various color spaces in the menu-driven case. Consumer cameras have only the working color space encoded in hardware, and you can read it in the metadata of the produced files. This is because the sensor is always and only linear (and raw), so in order to have an evaluable and workable output it is internally transformed into the desired signal (also via the transfer function, or gamma). I think CinGG users (interested in color) just need to consider that the media files (sources) we load have their own color space; it doesn't matter where they came from or how they were processed (if none is visible in the metadata, it would be better to assign one ourselves before processing). This is because CinGG has no color management and therefore cannot handle the input in a specific way (i.e., do color space transformations automatically). This approach is inaccurate, because knowledge of the generic color space does not reach the detail of profiling, but it is the norm in the consumer environment.

2) Program (working) color space, the result of which is seen in the Compositor window. It should be as wide as possible to ensure preservation of all source color data (no clipping). A gamma should also be applied if the input is log type. In CinGG you might consider transforming the color space (better with the ffmpeg plugin than the native one, because it offers more color spaces) to an "intermediate" one, that is, as large as possible (see the standalone ffmpeg sketch after this list). Then apply all the correction filters we want and finally, at the end of the effects queue, reapply the plugin to transform the space to that of the display (usually Rec709 with gamma 2.4). This is independent of the camera and display color spaces, so you have true color management in CinGG. In Resolve this is done with the CST (Color Space Transform) plugin.

3) Display color space. Important because it is what we see in the Compositor, and often it is also what is delivered (but not necessarily!). It is important that the display is calibrated. I think there is the same problem as for cameras: is the monitor color data taken from the EDID, or is it better to use profiling? Is the former the case for consumer monitors and the latter for pro monitors?

4) Export the project with a codec and color space suitable for the destination (delivery). For example, for FullHD TVs (Rec709, gamma 2.4) or 4K ones (BT2020, gamma 2.4); for monitors and the Internet the classic Rec709, 2.4 is recommended (if it is the same as the program color space, there will be no color space transformation during rendering); for cinema, DCI-P3; etc.
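A minimal sketch of step 2 done outside CinGG, using the same ffmpeg colorspace filter the plugin wraps; the file names and the choice of BT.2020 as the "intermediate" space are just for illustration:

    ffmpeg -i source_rec709.mp4 \
           -vf "colorspace=all=bt2020:iall=bt709:format=yuv420p10" \
           -c:v libx265 -crf 16 working_bt2020.mp4

The reverse transform (back to Rec709 for the display or for delivery, steps 3-4) is the same filter with the spaces swapped.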
Wed, 16 Apr 2025, 10:42 Andrea paz <[email protected]>:
The ICC/color space relationship is not clear to me (maybe an ICC profile is just the same color space but more precisely defined and adapted to the device) [...]
There was an idea on l.o.r. to put Resolve and CinGG to the test. If you can come up with a step-by-step procedure for importing an example file, applying an ffmpeg chain of (3d)lut and colorspace filters to get the footage into viewable/workable condition, and then tweaking a render profile so it literally delivers an HDR(ish) file, even if only as a TIFF sequence, then we will have our end of the possible comparison test covered. It will probably not be fast, due to CPU-only transforms, but hopefully it will serve as a proof of concept? If you have libplacebo installed, the system ffmpeg picks it up and it works there, maybe it can also be convinced to work inside CinGG? I do not have a working Vulkan driver... though I might check the proprietary NVIDIA driver one more time. This is a long-time request, so do not rush ;)
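A minimal CPU-only sketch of such a chain, assuming a PQ/BT.2020 source and an ffmpeg built with zimg (zscale); the file names, the hable operator and the 100-nit target are arbitrary choices:

    ffmpeg -i hdr_source.mov \
           -vf "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=rgb48le" \
           frame_%06d.tif

That linearizes the PQ signal, converts the primaries to BT.709, tone-maps in float, re-encodes to a BT.709 transfer, and writes a 16-bit TIFF sequence that CinGG could re-import for the comparison.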
On Wed, 16 Apr 2025, Andrea paz via Cin wrote:
2) Program (working) color space, the result of which is seen in the Compositor window. It should be as wide as possible to ensure preservation of all source color data (no clipping). A gamma should also be applied if the [...]
Recently AndrewR showed me several memoranda about some defined standards for high bit depth video. It is definitely stated there: the ranges of all the color channels must be inside 0.0 - 1.0 (meaning the RGB float type). Therefore, to comply with the existing standards, color data MUST be clipped between 0.0 and 1.0, otherwise they are illegal! The only parameters which can control the color precision, max brightness, darkness, etc., are the actual bit precision and the actual values of the white point and black point, gamma or color curves, etc. Perhaps there was (still is?) some misunderstanding about what you mean by 'clipping'.

_______________________________________________________________________________
Georgy Salnikov
NMR Group
Novosibirsk Institute of Organic Chemistry
Lavrentjeva, 9, 630090 Novosibirsk, Russia
Phone +7-383-3307864
Email [email protected]
_______________________________________________________________________________
There are techniques to extract some detail in areas burned out by overexposure (I read about them in Alexis Van Hurkman's book "Color Correction Handbook", section "Dealing with overexposure"). These techniques take advantage of "superwhite", that is, values above the "reference white" (or white point) value. For years I assumed that superwhite is above 1.0; in fact, in videoscopes, values as high as 110% can be seen. I went back and read Poynton's book (https://wangwei1237.github.io/shares/Digital_Video_and_HD_Algorithms_and_Int...) and found that superwhite is actually only mentioned in reference to the limited range (MPEG, 16-235) and not the full range (JPEG, 0-255). It is mentioned in the "swing study" section on page 42 and also in the "Processing coding" section on page 45. Basically, my mistake was assuming that reference white at 235 is the same as reference white at 255 and corresponds to 1.0. In numbers: with limited range, normalizing as (Y - 16)/219 puts reference white (code 235) at exactly 1.0, and the top code 254 at (254 - 16)/219 ≈ 1.086, which is the familiar ~109-110% reading on the scopes. Instead, apparently, even if you have limited-range values above 1.0, they are still within the 0-1.0 range of the full range, and thus are still "legal" values. I'm sorry, I have bothered you for a long time about something wrong... There are still many things I don't understand in the color pipeline, but I'll stop here.
Wed, 16 Apr 2025, 23:20 Andrea paz via Cin <[email protected]>:
There are techniques to extract some detail in areas burned out by overexposure [...] There are still many things I don't understand in the color pipeline, but I'll stop here.
But as we found out, DaVinci Resolve _does_ have the ability to work with color values in floating point above 1.0f... *I think* the main use there is to "quick-patch" small overexposures you discover under plenty of layers of effects, without re-tuning all of them. Or something similar. Also, the abuse of Cinelerra as a photo processor, as intended by the original author ;)

We still have the complex question of whether it is possible to do anything useful color-wise with an HDR source if you only have an SDR monitor... FCP, for example, does have "tone-mapped to SDR" viewers, but I am not sure whether people *grade for HDR* on them, or just cut/assemble? IMO even cut/assembly might be useful...

https://support.apple.com/ru-ru/guide/final-cut-pro/ver06915f2fe/11.1/mac/14...

====
View HDR video in the viewer in Final Cut Pro for Mac

In Final Cut Pro, you can view HDR video in the viewer with tone mapping applied, which compresses bright image content and reduces the apparent dynamic range of the video to fit the viewable range of your display.

Important: To play back the wider range of colors in an HDR project with maximum accuracy, you can use the A/V Output feature with an external reference HDR video monitor. See Play media on an external display in Final Cut Pro for Mac.

When you view HDR video, the Show HDR as Tone Mapped setting is turned on by default in most cases, applying tone mapping to the HDR image in the viewer. This setting does not affect how HDR content is displayed on an external monitor using A/V Output.

Note: If you're using Final Cut Pro with a Pro Display XDR that's set to HDR Video or another reference mode preset in Displays settings (in macOS System Settings), tone mapping is disabled.

Turn on tone mapping for HDR video in the viewer:
1. In Final Cut Pro, position the playhead on an HDR clip in the timeline or browser, so that the clip appears in the viewer.
2. Click the View pop-up menu in the upper-right corner of the viewer, then, in the Display section, choose Show HDR as Tone Mapped. (When tone mapping is turned on, a checkmark appears.)

If you're using a system with a Pro Display XDR, the Show HDR as Tone Mapped setting is appropriate for day-to-day playback and editing with the default preset ("Apple Display P3-1600 nits") in Displays settings. To use the Pro Display XDR for critical tasks such as color correction, see Color correct HDR video with Pro Display XDR and Final Cut Pro for Mac.
====
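On the Linux side, the closest quick stand-in for FCP's "Show HDR as Tone Mapped" viewer is probably previewing through the same kind of CPU filter chain; a sketch, with the same zimg/PQ-source assumptions as the render example earlier in the thread:

    ffplay -vf "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=hable,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" hdr_clip.mov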
CinGG is already capable of reading HDR images, as I believe is any program that works in floating point. Just load an HDR image and then read the values in the whites with the eyedropper tool to confirm it. What to do in CinGG if we are dealing with HDR media? If we have an HDR monitor, I don't know; in fact, if anyone has one, that would be useful information. If we have an SDR monitor, all we can do is tone mapping, bringing everything back to SDR. The trouble is that in CinGG the only plugins that work for tone mapping are the primary color correction plugins, that is, those that affect the whole frame. In this way we are able to reveal details in the highlights, but the midtones and shadows become hopelessly pure black. It would take secondary color correction tools, i.e., tools capable of acting only on certain areas of the image. Unfortunately, CinGG's two main plugins, namely Color 3 Way and curves (contained in Histogram Bezier), do not support HDR values. My old attempt to "unlock" the Value slider in Color 3 Way was disastrous because it destroyed the functionality of the shadows color wheel; I don't know why. With the Histogram Bezier curves one could lock the shadow and midtone values and lower only the highlight values, but, as mentioned, this is not possible. This is easily seen by trying tone mapping: for an SDR image or a clipped HDR image, homogeneous white of value 1.0 becomes homogeneous gray of value less than 1.0, so an unnecessary intervention. In an HDR image, the white values above 1.0 (which we nevertheless see as homogeneous white = 1.0) lead to detailed gray values that are no longer homogeneous, thus reconstructing the content present in the white. How HDR values relate to color spaces, I just cannot understand.
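For what it's worth, a quick way to check what a file claims about itself before loading it into CinGG; ffprobe only reports the tags and pixel format, it cannot prove that pixel values actually exceed 1.0:

    ffprobe -v error -select_streams v:0 \
            -show_entries stream=pix_fmt,color_space,color_transfer,color_primaries \
            input.exr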
Thu, 17 Apr 2025, 10:50 Andrea paz <[email protected]>:
The trouble is that in CinGG the only plugins that work for tone mapping are the primary color correction plugins [...] It would take secondary color correction tools, i.e., tools capable of acting only on certain areas of the image.
isn't there a workaround for this using masks?
isn't there a workaround for this using masks?
Yes, a workaround that lets us continue using primary color correction. Nothing wrong with that, especially for CinGG, which is made for editing and not for CC. I always tend to talk about CC because it is a topic that interests me, but I realize that we are prolonging the discussion too much; it has become OT by now. CinGG has its own features, and it is useless to talk about features it does not have.
On Thu, 17 Apr 2025, Andrea paz wrote:
[...] In an HDR image, the white values above 1.0 (which we nevertheless see as homogeneous white = 1.0) lead to detailed gray values that are no longer homogeneous, thus reconstructing the content present in the white.
Andrea, please read carefully these published standards which AndrewR already mentioned recently; I repeat the references here once again:
https://pub.smpte.org/pub/st2094-10/st2094-10-2021.pdf
https://pub.smpte.org/pub/st2094-40/st2094-40-2020.pdf

Pay attention, it is clearly stated: RGB values SHALL be in the range [0.0, 1.0] (with a precision of 0.00001). Anything else is illegal. What white values above 1.0 do you mean, then?

A hint: a floating point value presentation, in contrast to integer ones, defines two properties: a range width (for example, -1000000 to +1000000, or even -10^38 to +10^38) and a precision (for example, 7 decimal digits typical of 32-bit float, or 17 digits typical of 64-bit double precision). There may also exist 'fixed-point' presentations, for example, where we present a value in the form of an 8-bit unsigned byte: 0.0 is byte 0, 1.0 is byte 255; then it still has the range 0.0-1.0, and a precision of 8 bits, or 2 to 3 decimal digits (one step is 1/255 ≈ 0.0039). If we present the same in 10-bit form, then 0.0 will be 0, 1.0 will be 1023, and the precision will be slightly higher than 3 decimal digits (one step is 1/1023 ≈ 0.00098).

What do you mean by 'your' high dynamic range? A width? A precision? Or simply a combination of the capital letters H, D and R without any definite meaning?
What to do in CinGG if we are dealing with HDR media? If we have an HDR monitor, I don't know; in fact, if anyone has one, that would be useful information. If we have an SDR monitor, all we can do is tone mapping [...]
If 'HDR' is to be related to digital precision, then I'd say any true SVGA CRT monitor is an HDR monitor, because the SVGA interface propagates an analog (not digital) signal, which potentially can have any precision (actually determined by the bit depth of the DAC of the video card that generates the analog SVGA signal). The same holds for other kinds of analog video interfaces. But not so for modern LCD monitors attached to SVGA: they first convert the analog SVGA signal to an internal digital form, after which everything depends on their internal math.
How HDR values relate to color spaces, I just cannot understand.
I tend to think any image or video whose data have a precision higher than 8 bits is called an HDR image. I sense one more aspect of misunderstanding in this discussion: 'true HDR monitors' being taken to mean 'truly calibrated monitors'. First of all, any monitor can be calibrated, independently of its bit precision and even with an analog interface. Second, any monitor with a digital interface actually IS CALIBRATED! Any digital monitor technically must have DACs converting bits into some millivolts, or milliamperes, etc., which finally drive light diodes or whatever is inside such monitors. If such a monitor 'needs no calibration', that means nothing else than that the necessary calibration is hardwired in its firmware. You can rely on the assumption that it 'needs no calibration', or you can still attach a calibrator and test the quality of its firmware LUT; if the monitor's firmware calibration data are really good, you will get almost 1:1 color curves.
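If someone wants to test this point about firmware LUT quality, ArgyllCMS can do it with a colorimeter attached; treat the flags as an assumption to verify against the dispcal usage text:

    dispcal -r    # report on the uncalibrated display response
    dispcal -R    # report on the response through the currently loaded calibration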
Andrea, please read carefully these published standards
Yes, I understand that color spaces (even HDR color spaces) are all, without exception, intentionally constructed to be limited to the range 0-1.0. My problem is that I cannot internalize them, tune them to my experience. That's why I preferred to stop the conversation: I don't think I will ever understand the "science of color". At most I can use some plugins as a simple, ignorant user.
What white values above 1.0 do you mean, then?
I mean these values: https://postimg.cc/gallery/yxC91g4 (note that with an SDR image, the homogeneous white becomes homogeneous gray). But I still don't understand...
On Thu, 17 Apr 2025, Andrea paz wrote:
(note that with an SDR image, the homogeneous white becomes homogeneous gray)
It is the question of the white point and black point, on the input and on the output of the brightness/contrast/gamma correction: where to cut off the input, and then where to place the surviving data on output. In other words, a levels transform out = (in - black_in) / (white_in - black_in), clipped to [0.0, 1.0]; choosing white_in above 1.0 compresses the 'superwhite' detail into the legal range instead of clipping it away.
Thu, 17 Apr 2025, 17:16 Georgy Salnikov <[email protected]>:
On Thu, 17 Apr 2025, Andrea paz wrote:
(note that with an SDR image, the homogeneous white becomes homogeneous gray)

It is the question of the white point and black point, on the input and on the output of the brightness/contrast/gamma correction: where to cut off the input, and then where to place the surviving data on output.
I even suspect the original TIFF/EXR may come from Natron/Blender/something (CG tools) using a scene-referred workflow... so while in some sense using this image might be wrong, the ability to get useful info out of it is valuable anyway.
participants (3)
- Andrea paz
- Andrew Randrianasulu
- Georgy Salnikov