Wed, 16 Apr 2025, 10:42 Andrea paz <[email protected]>:
The ICC/color space relationship is not clear to me (maybe an ICC profile is just the same color space, only more precisely defined and adapted to the device). Still, purely for the pleasure of talking about color, I think good color management can be reduced to four steps:

1) Input, with its own color space (or ICC profile). Typically this is the camera, which records raw (pro), log or ProRes (prosumer), or H.264/H.265 (consumer). All types are profiled: pro and prosumer cameras offer multiple profiles to choose from and can be profiled manually or with a LUT; this means an ICC profile in the manual case, or support for the various color spaces in the menu-driven case. Consumer cameras have a single working color space encoded in hardware, and you can read it in the metadata of the files they produce. This is because the sensor is always and only linear (and raw), so to get an output that can be evaluated and worked with, the signal is transformed internally into the desired one (including via the transfer function, or gamma). I think CinGG users interested in color only need to treat the media files (sources) we load as having their own color space; it does not matter where they came from or how they were processed (if none is visible in the metadata, it is better to assign one ourselves before processing; see the ffprobe sketch after this list). This is because CinGG has no color management and therefore cannot adapt the input automatically (i.e., it cannot do color space transformations on its own). This approach is imprecise, because knowing the generic color space does not reach the detail of a real profile, but it is the norm in the consumer world.

2) Program (working) color space, whose result is what we see in the Compositor window. It should be as wide as possible, to preserve all the source color data (no clipping). A gamma should also be applied if the input is log. In CinGG you might transform the color space (better with the FFmpeg plugin than the native one, because it offers more color spaces) into an "intermediate" one, i.e. as large as possible; then apply all the correction filters we want; and finally, at the end of the effects chain, apply the plugin again to transform the space into that of the display (usually Rec.709 with gamma 2.4). This is independent of the camera and display color spaces, so you get true color management in CinGG. In Resolve this is done with the CST (Color Space Transform) plugin. A plain-ffmpeg equivalent of this chain is sketched below.

3) Display color space. Important because it is what we see in the Compositor, and often it is also what is delivered (but not necessarily!). It is important that the display is calibrated. I think the same problem arises as with cameras: is the monitor's color data taken from the EDID, or is it better to use profiling? Is the former the consumer-monitor case and the latter the pro-monitor case?

4) Export the project with a codec and color space suitable for the destination (delivery); a tagging example follows the sketches below. For example, Rec.709 with gamma 2.4 for Full HD TVs, or BT.2020 with gamma 2.4 for 4K; for monitors and the Internet the classic Rec.709/2.4 is recommended (if it is the same as the program color space, there will be no color space transformation during rendering); for cinema, DCI-P3; and so on.
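For step 1, a quick way to check what color metadata a source actually carries; this is plain ffprobe, and clip.mp4 is just a placeholder name:

    ffprobe -v error -select_streams v:0 \
        -show_entries stream=pix_fmt,color_space,color_transfer,color_primaries \
        clip.mp4

If those fields come back as "unknown", the stream can be tagged on re-encode. Note these flags only label the stream, they do not convert anything, and bt709 here is an assumption about what the footage really is:

    ffmpeg -i clip.mp4 -c:v libx264 -crf 18 \
        -colorspace bt709 -color_trc bt709 -color_primaries bt709 \
        -c:a copy clip_tagged.mp4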
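Step 2 as a plain-ffmpeg sketch. The eq filter in the middle just stands in for whatever grading would happen in CinGG; the colorspace filter and its options are real ffmpeg ones, but pixel-format handling is left to ffmpeg's automatic negotiation:

    ffmpeg -i clip.mov \
        -vf "colorspace=all=bt2020:format=yuv444p12, eq=saturation=1.1, colorspace=all=bt709:format=yuv422p10" \
        -c:v prores_ks graded.mov

Inside CinGG the same idea would be the FFmpeg colorspace plugin (F_colorspace in the plugin list, if I remember correctly) placed at the top and the bottom of the effects stack.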
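And step 4, a delivery encode that converts nothing (assuming the program color space is already Rec.709) and writes the proper tags into the stream; file names and CRF value are placeholders:

    ffmpeg -i graded.mov -c:v libx264 -crf 16 -preset slow -pix_fmt yuv420p \
        -colorspace bt709 -color_trc bt709 -color_primaries bt709 \
        -movflags +faststart delivery.mp4

Again, the three -color* flags are metadata only; if the program color space were different, an actual colorspace/zscale conversion would have to precede the encode.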
There was an idea on l.o.r to put Resolve and CinGG to the test. If you come up with a step-by-step way to import an example file, apply an ffmpeg chain of (3d)lut and colorspace filters to get the footage into viewable/workable condition, and then tweak the render profile so it literally delivers an HDR(ish) file, even if only as a TIFF sequence, we will have our end of the possible comparison test covered. It probably will not be fast, due to CPU-only transforms, but hopefully it will serve as a proof of concept? If you have libplacebo installed, and the system ffmpeg picks it up and it works there, maybe it can be convinced to work inside CinGG? I do not have a working Vulkan driver... though I might check the proprietary NVIDIA driver one more time. This is a long-standing request, so do not rush ;)
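A possible CPU-only sketch of that test, with loud assumptions: footage.mov and look.cube are placeholders, the footage is assumed to be correctly tagged as PQ/BT.2020, zscale needs ffmpeg built with zimg, and I have not verified that zscale keeps RGB through a transfer-only conversion; it just follows the usual HDR recipes from the ffmpeg wiki:

    # linearize PQ footage, apply a 3D LUT, restore PQ,
    # and deliver a 16-bit TIFF sequence as the HDR(ish) proof of concept
    ffmpeg -i footage.mov \
        -vf "zscale=transfer=linear:npl=100, format=gbrpf32le, lut3d=file=look.cube, zscale=transfer=smpte2084, format=rgb48le" \
        hdr_%05d.tiff

If a Vulkan-capable build is available, the libplacebo filter could in principle replace the zscale steps (it has its own colorspace/color_trc/tonemapping options), but as you say that stands or falls with the driver.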