Thu, 4 Aug 2022, 09:28 Andrea paz <gamberucci.andrea@gmail.com>:
Thank you for the explanations.
I am trying to write a color section for the CGG manual.
I have a doubt: a long time ago GG said that on the
timeline/compositor we always see an sRGB output:

"CinGG, works internally at 32 bits in RGB, but the output in timeline
is sRGB, and performs a first conversion from YUV of the original file
to RGB with the scaler (matrix function, not primaries or transfer
characteristic function), using its internal settings."

Does this mean that it is useless to have wide-gamut monitors? And
that it is useless to have ICCs or LUTs unless they are limited to
sRGB only?
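For reference, the "matrix function, not primaries or transfer characteristic" step quoted above is just a 3x3 linear transform. A minimal sketch of a full-range BT.709 YUV -> RGB matrix conversion (the actual coefficients CGG's scaler applies depend on its internal settings):

```python
def yuv_to_rgb_bt709(y, u, v):
    """Full-range BT.709 matrix conversion; y in [0, 1], u/v centered on 0.

    This changes only the color model, not the primaries or the
    transfer characteristic, as in the quoted description.
    """
    r = y + 1.5748 * v
    g = y - 0.1873 * u - 0.4681 * v
    b = y + 1.8556 * u
    return r, g, b
```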

I think some ffmpeg plugins work internally in rgba-float, so correction happens at high precision and only the displayed image is truncated/dithered. So yes, it is not truly 'what you see is what you get'. (As far as I understand, all image processing happens in linear space? Because converting back and forth at each effect sounds wasteful on the CPU...)
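A minimal sketch of the difference between truncating and dithering a float sample down to 8 bits at display time (illustrative only, not CGG's actual display path):

```python
import random

def display_truncate(x):
    """Plain truncation of a float sample in [0, 1] to 8 bits: causes banding."""
    return min(255, max(0, int(x * 255.0)))

def display_dither(x):
    """Add sub-LSB noise before quantizing, trading banding for fine grain."""
    return min(255, max(0, int(x * 255.0 + random.random())))
```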

I wonder if mpv can read those yuv4mpeg raw streams with color transfer info? Maybe you could render to a pipe and add mpv as a GL-accelerated, color-corrected external player?
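A sketch of that idea, assuming mpv accepts a YUV4MPEG2 stream on stdin. Note that a plain y4m header carries frame geometry, rate, and chroma subsampling, but no transfer-characteristic tag; any extra color info would have to ride on a non-standard "X" extension parameter:

```python
import subprocess

def y4m_header(w, h, fps_num, fps_den, chroma="420jpeg"):
    # Standard YUV4MPEG2 stream header; each frame follows as "FRAME\n" + planes.
    return f"YUV4MPEG2 W{w} H{h} F{fps_num}:{fps_den} Ip A1:1 C{chroma}\n"

def pipe_gray_to_mpv(frames=50, w=320, h=240):
    # Hypothetical demo: render mid-gray 4:2:0 frames into mpv via a pipe.
    player = subprocess.Popen(["mpv", "-"], stdin=subprocess.PIPE)
    player.stdin.write(y4m_header(w, h, 25, 1).encode("ascii"))
    plane_bytes = w * h * 3 // 2          # 4:2:0: luma + 2 quarter-size chroma
    frame = b"FRAME\n" + bytes([128]) * plane_bytes
    for _ in range(frames):
        player.stdin.write(frame)
    player.stdin.close()
    player.wait()
```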

But I think fixing the display stage would be as 'simple' as using a specialised, color-corrected rgba-float -> 10-bit RGB (or maybe dithered/tonemapped 8-bit RGB) conversion function, as long as the display driver supports 30-bit color. I commented on a few bugs in the past:

https://www.cinelerra-gg.org/bugtracker/view.php?id=297
https://www.cinelerra-gg.org/bugtracker/view.php?id=294
https://www.cinelerra-gg.org/bugtracker/view.php?id=238
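A minimal sketch of that conversion's last step, assuming an x2r10g10b10-style 30-bit framebuffer format (the names here are illustrative, not from the CGG source):

```python
def pack_rgb30(r, g, b):
    """Quantize float RGB in [0, 1] to 10 bits per channel and pack into
    a 32-bit x2r10g10b10 word (top 2 bits unused)."""
    q = lambda x: min(1023, max(0, int(round(x * 1023.0))))
    return (q(r) << 20) | (q(g) << 10) | q(b)
```

Any color correction (ICC/LUT lookup, tonemapping) would happen in float just before this packing, so the 8-bit bottleneck disappears.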


If so, it would be extremely limiting for color correction
in CGG.
My other doubt: what is the purpose of the "YUV color space" setting in
Preferences? Is it just for encoding?



I think it is also used indirectly via FFVideoConvert::convert_vframe_picture in the convert_cmodel function, which in turn is used by FFVideoStream::load.


All of those are in cinelerra/ffmpeg.C.

Any use of YUV-type color spaces
should be discouraged, since the signal on the screen is always sRGB.

Well, there is the Xv (YUV) output, and encoded video tends to be some variation of subsampled YUV (because it takes less space that way).
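The space saving is easy to quantify: a 4:2:0 frame stores one full-resolution luma plane plus two quarter-resolution chroma planes, i.e. half the bytes of packed RGB at the same bit depth. A quick sketch:

```python
def rgb_frame_bytes(w, h, bits=8):
    # three full-resolution channels
    return w * h * 3 * bits // 8

def yuv420_frame_bytes(w, h, bits=8):
    # full-res luma plane + two half-by-half chroma planes
    return (w * h + 2 * (w // 2) * (h // 2)) * bits // 8
```

For a 1920x1080 8-bit frame that is 3,110,400 bytes of 4:2:0 versus 6,220,800 bytes of RGB, before the codec even starts compressing.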