I ran
optipng -o7 *.png in the picons dirs and it reduced the size by a
measurable amount (hundreds of kB).
This does not seem like much, but I discovered the icon theme gets pulled
into the main cin binary, so hopefully a smaller binary will work a tiny
bit better ...
Thu, Aug 4, 2022, 09:28 Andrea paz <gamberucci.andrea(a)gmail.com>:
> Thank you for the explanations.
> I am trying to write a color section for the CGG manual.
> I have a doubt: a long time ago GG said that on the
> timeline/compositor we always see an sRGB output:
>
> "CinGG, works internally at 32 bits in RGB, but the output in timeline
> is sRGB, and performs a first conversion from YUV of the original file
> to RGB with the scaler (matrix function, not primaries or transfer
> characteristic function), using its internal settings."
>
> Does this mean that it is useless to have wide-gamut monitors? And
> that it is useless to have ICCs or LUTs unless they are limited to
> sRGB only?
I think some ffmpeg plugins work internally in rgba-float, so correction
happens with high precision, and only the displayed image is
truncated/dithered. So yes, not truly 'what you see is what you get' {as
far as I understand, does all image processing happen in linear space?
Because converting back and forth at each effect sounds wasteful on the
cpu ...}
I wonder if mpv can read those yuv4mpeg raw streams with color transfer
info? Maybe you can render to a pipe and add mpv as a gl-accelerated and
color-corrected external player?
But I think fixing the display stage will be as 'simple' as using a
specialized color-corrected rgba-float -> 10-bit rgb (or maybe
dithered/tonemapped 8-bit rgb) function, as long as the display driver
supports 30-bit color. I commented on a few bugs in the past:
https://www.cinelerra-gg.org/bugtracker/view.php?id=297
https://www.cinelerra-gg.org/bugtracker/view.php?id=294
https://www.cinelerra-gg.org/bugtracker/view.php?id=238
> If so, it would be extremely limiting for color correction
> in CGG.
> My other doubt: what is the purpose of the "YUV color space" in
> Preferences, just for encoding?
I think it is also used indirectly via FFVideoConvert::convert_vframe_picture
in the convert_cmodel function, in turn used by FFVideoStream::load;
all of those are in cinelerra/ffmpeg.C
> Any use of YUV type color spaces
> should be discouraged since the signal on the screen is always sRGB.
>
Well, there is Xv (yuv) output, and encoded video tends to be a variation
of subsampled yuv (because it takes less space this way)
>
I found this forum bug report concerning the CinGG 64-bit x86 AppImage for
Debian 11:
https://www.linux.org.ru/forum/multimedia/16944076?lastmod=1660280725246
The answer was found by the reporter: simply setting
ulimit -n 4096
fixed the crash on media load ....
On my termux 'ulimit -a' reports among other things:
open files (-n) 32768
So, I tried to test my suggestion and try out mplayer2's gl3 output (with
icc / lcms2 support).
I ran into the obvious barrier of ffmpeg API changes.
For an easy install I just cloned the mplayer2 code from github first
git clone https://github.com/astiob/mplayer2.git
Then downloaded the ffmpeg 2.3.6 tar.bz2, unpacked it, and configured it on
a 32-bit system like this
# ./configure --disable-debug --disable-asm --disable-doc
--enable-avresample
Maybe you can leave asm enabled on a 64-bit x86-64 system; for me it failed
in an h264 header.
Then I installed this ffmpeg to the default /usr/local prefix.
Then I configured mplayer2 without sdl:
./configure --disable-sdl
And then I ran into a problem ... for some reason any ffmpeg-decoded video
was failing to init the vo .. even the null vo!
I finally guessed there was a missing {} pair ...
After patching the mplayer2 sources with the attached patch I finally got
output!
I also slightly enlarged the help text array, so
-vo gl3:help will not be truncated in the middle ...
Note, by default the mplayer2 binary is still named mplayer, so please do
not install this mplayer over the system-provided one, just run it from its
source dir
Sat, Aug 6, 2022, 16:16 Einar Rünkaru <einarrunkaru(a)gmail.com>:
>
>
> On 06/08/2022 07:41, Andrew Randrianasulu wrote:
> >
> >
> > Fri, Aug 5, 2022, 20:05 Einar Rünkaru <einarrunkaru(a)gmail.com
> > <mailto:[email protected]>>:
> >
> >
> >
> > On 05/08/2022 08:08, Andrew Randrianasulu wrote:
> > >
> > >
> > > Thu, Aug 4, 2022, 14:36 Andrew Randrianasulu
> > <randrianasulu(a)gmail.com <mailto:[email protected]>
> > > <mailto:[email protected] <mailto:[email protected]
> >>>:
> > >
> > >
> > https://ninedegreesbelow.com/photography/lcms2-unbounded-mode.html
> > <https://ninedegreesbelow.com/photography/lcms2-unbounded-mode.html>
> > >
> > <
> https://ninedegreesbelow.com/photography/lcms2-unbounded-mode.html <
> https://ninedegreesbelow.com/photography/lcms2-unbounded-mode.html>>
> > >
> > >
> > > It seems not all icc profiles are the same ...
> >
> > Conversion from one profile to another may be done different ways.
> >
> >
> > Well, I was amazed by the existence of unbounded mode (where values can
> > be less than zero or more than 1.0), but I guess Cinelerra is not ready
> > for this )
>
> Values can overflow or underflow inside a certain conversion. The output
> of this conversion should be inside the margins of this format.
>
>
> >
> >
> > Also, littecms pdf documentation in doc folder says you can use same
> > buffer for input and output as long as its organization remain the same
> ...
> >
> > ---
> > Scanline overlap
> >
> > It is safe to use same block for input and output, but only if the input
> > and output are coded
> > in same format. For example, you can safely use only one buffer for RGB
> > to RGB but you
> > cannot use same buffer for RGB as input and CMYK as output.
> >
> > ---
> >
> Of course when the size of pixel in bytes does not change. This is the
> most effective solution.
> >
> >
> > >
> > > I also was reading ninedegreesbelow.com
> > <http://ninedegreesbelow.com> <http://ninedegreesbelow.com
> > <http://ninedegreesbelow.com>>
> > > articles on color spaces and it seems some of our trouble with
> blend
> > > modes with text and fades caused by (missing?) linearization, see
> > >
> > >
> >
> https://ninedegreesbelow.com/photography/test-for-linear-processing.html
> > <
> https://ninedegreesbelow.com/photography/test-for-linear-processing.html>
> >
> > >
> > <
> https://ninedegreesbelow.com/photography/test-for-linear-processing.html
> > <
> https://ninedegreesbelow.com/photography/test-for-linear-processing.html>>
> > >
> > > I did some hack for normal blend mode using code from
> > stackoverflow and
> > > code posted in our bugzilla
> > >
> > > See
> > > https://www.cinelerra-gg.org/bugtracker/view.php?id=559
> > <https://www.cinelerra-gg.org/bugtracker/view.php?id=559>
> > > <https://www.cinelerra-gg.org/bugtracker/view.php?id=559
> > <https://www.cinelerra-gg.org/bugtracker/view.php?id=559>>
> >
> > There are broken links - not very useful.
> >
> >
> >
> > Sorry, most likely gmail client on Android not very like copy pasted
> > links....
> >
> No, the links in the ticket did not open. Without these it was very hard
> to understand what the problem was
> >
> > If I understand right, the problem is related to alpha. When you mix
> > one
> > frame with frame with shaped alpha, the second frame "cuts" itself
> into
> > the first. The result may be unexpected.
> > >
> > > So, in theory this (fast) linearization step should be added to
> all
> > > modes, or at image-reading stage only?
> >
> > My idea is that video/image loading converts to internal format
> > including linearization. What is mode?
> >
> >
> > If I look at right place native png reader outputs into rgb(a)8 or
> > rgb(a)16 ..
> >
> > int FilePNG::colormodel_supported(int colormodel)
> > {
> > if( colormodel == BC_RGB888 || colormodel == BC_RGBA8888 )
> > return colormodel;
> > if( colormodel == BC_RGB161616 && native_cmodel == BC_RGBA16161616 )
> > return colormodel;
> > if( native_cmodel >= 0 )
> > return native_cmodel;
> > int use_16bit = BC_CModels::is_float(colormodel) ||
> > BC_CModels::calculate_max(colormodel) > 255 ? 1 : 0;
> > return BC_CModels::has_alpha(colormodel) ?
> > (use_16bit ? BC_RGBA16161616 : BC_RGBA8888) :
> > (use_16bit ? BC_RGB161616 : BC_RGB888);
> > }
> >
> > No specific icc handling ...
>
> The icc handling has to be done when the execution reaches this place.
> >
> > I guess ffmpeg png reader also was dropping icc profile info until very
> > recently ....
>
> Yes - this was ignored.
>
> >
> > >
> > > Also, good way to put lcms transform code in our rgb2rgb function
> > > (somewhere in guicast? ..but in cingg some of this code
> > python-generated
> > > and I do not know python at all ..), just with additional
> parameters
> > > like pointer to in and out profiles? If any of them null then
> > just not
> > > execute transform call ...
> > >
> > I don't know python too.
> >
> >
> >
> > I think I found point where new function can be inserted for display:
> >
> > vdevicex11
> >
> > In function
> > VDeviceX11::write_buffer(VFrame *output_channels, EDL *edl)
> >
> > // printf("VDeviceX11::write_buffer %d output_channels=%p\n", __LINE__,
> > output_channels);
> > // printf("VDeviceX11::write_buffer %d input color_model=%d output
> > color_model=%d\n",
> > // __LINE__, output_channels->get_color_model(),
> bitmap->get_color_model());
> > if( bitmap->hardware_scaling() ) {
> >
> > BC_CModels::transfer(bitmap->get_row_pointers(),
> > output_channels->get_rows(), 0, 0, 0,
> > output_channels->get_y(),
> > output_channels->get_u(), output_channels->get_v(),
> > 0, 0, output_channels->get_w(),
> > output_channels->get_h(),
> > 0, 0, bitmap->get_w(), bitmap->get_h(),
> > output_channels->get_color_model(),
> > bitmap->get_color_model(),
> > -1, output_channels->get_w(),
> > bitmap->get_w());
> > }
> > else {
> >
> > BC_CModels::transfer(bitmap->get_row_pointers(),
> > output_channels->get_rows(), 0, 0, 0,
> > output_channels->get_y(),
> > output_channels->get_u(), output_channels->get_v(),
> > (int)output_x1, (int)output_y1,
> > (int)(output_x2 - output_x1), (int)(output_y2 - output_y1),
> > 0, 0, (int)(canvas_x2 - canvas_x1),
> > (int)(canvas_y2 - canvas_y1),
> > output_channels->get_color_model(),
> > bitmap->get_color_model(),
> > -1, output_channels->get_w(),
> > bitmap->get_w());
> > }
> >
> > So bitmap->get_row_pointers() can be used as buffer argument for cms*
> > functions?
>
> Icc profile should be added as a new parameter. You can't reuse
> get_row_pointers() for it.
>
Yes, I was talking about the image buffer to pass to the transform function ....
> The right place for adding icc profile is color conversion routines.
> Caller tell what profile should be used (no profile, some predefined or
> some custom profile).
>
> >
> > I ignore direct mode above for now ...
> >
> > For initial experiments even just hardcoded path to monitor profile
> > should be enough to see difference and experience speed lost?
> >
> > And for encoding this profile into media/file 'asset' structure must be
> > altered first to hold icc profile type from lcms2.h and then encoder
> > functions should look if there is an attached profile and pass it to
> > libavcodec/libavformat ...
>
> Asset should not hold the profile. Profile is inside the original file,
> in decoder parameters or in encoder parameters.
>
Ok ....
>
>
> >
> > >
> > >
> > > Input side hopefully already covered by ffmpeg.git patches
> (input
> > > image format icc profile only should matter at decompressing
> into
> > > some pixel array? Because further processing will alter those
> > pixels
> > > ...or I am wrong and input media profiles must be somewhat
> > combined
> > > during track compositing?)
> > >
> >
> > You have to select the best internal format or you have to change and
> > test 6 x N paths in every effect. N is in hundreds.
> >
> >
> >
> > But then is there any sense in making profile-based transfer functions
> > to be available for rest of cinelerra, like in cingg's case
> > guicast/bcxfer.C place? (Where BC_CModels::transfer currently defined )
> >
> Yes, as I said above.
>
> Einar
>
Fri, Aug 5, 2022, 20:05 Einar Rünkaru <einarrunkaru(a)gmail.com>:
>
>
> On 05/08/2022 08:08, Andrew Randrianasulu wrote:
> >
> >
> > Thu, Aug 4, 2022, 14:36 Andrew Randrianasulu <randrianasulu(a)gmail.com
> > <mailto:[email protected]>>:
> >
> > https://ninedegreesbelow.com/photography/lcms2-unbounded-mode.html
> > <https://ninedegreesbelow.com/photography/lcms2-unbounded-mode.html>
> >
> >
> > It seems not all icc profiles are the same ...
>
> Conversion from one profile to another may be done different ways.
>
Well, I was amazed by the existence of unbounded mode (where values can be
less than zero or more than 1.0), but I guess Cinelerra is not ready for this )
Also, I was surprised there is more than one method for describing those
tone curves ...
>
> Profiles are of course not the same
> https://www.color.org/version4html.xalter
> >
> >
> > I think for cin-gg it makes sense converting from biggest rgba-float
> > (32 bit floating point value per color channel) , and for CVE from
> > 16 bit/channel int format .... (I think back in time Adam deleted
> > int 16 formats from Cinelerra 2.0 as opposed to cin 1.2 saying they
> > were not as good as true 32 fp)
>
> 32 bit fp is not 32 bits - it has only 24 significant bits. Fp pixel
> value is 0 <= value < 1.
>
Yeah, thanks for correction
>
> > I put some emphasis on opengl output because it offloads some of
> > per-pixel math to graphics hw, but slower sw only mode probably will
> > work as proof of concept.
> >
> > As far as I understand you even can have colord daemon monitoring
> > hot-plugged output devices incl. monitors and providing associated
> > profile via some api (?) yet I only compiled it and never used it ...
> >
> > It seems at least two display paths are needed, one for traditional 8
> > bit displays and another for newer 10 bit ones. I thought because
> > color profiles have been known and used ever since 1996 or so, 8 bit
> > output can be done first ..
> >
> > I'll try to find simpler lcms2 examples ...
>
> Cin should detect the output device and select appropriate conversion
> >
> >
> >
> >
> > Does this example count as useful tutorial?
> >
> >
> https://stackoverflow.com/questions/22561176/lcms2-convert-cmyk-to-rgb-thro…
> > <
> https://stackoverflow.com/questions/22561176/lcms2-convert-cmyk-to-rgb-thro…
> >
> >
> > It uses rgb to cmyk and two profiles, but hopefully one can just call
> > this with one real and one built-in profile? It was nice to learn you
> > can do scanlines with lcms, too ....
>
> Looks reasonable
> ----
Also, the littlecms pdf documentation in the doc folder says you can use the
same buffer for input and output as long as its organization remains the
same ...
---
Scanline overlap
It is safe to use same block for input and output, but only if the input
and output are coded
in same format. For example, you can safely use only one buffer for RGB to
RGB but you
cannot use same buffer for RGB as input and CMYK as output.
---
>
>
> >
> > I also was reading ninedegreesbelow.com <http://ninedegreesbelow.com>
> > articles on color spaces and it seems some of our trouble with blend
> > modes with text and fades caused by (missing?) linearization, see
> >
> > https://ninedegreesbelow.com/photography/test-for-linear-processing.html
> > <
> https://ninedegreesbelow.com/photography/test-for-linear-processing.html>
> >
> > I did some hack for normal blend mode using code from stackoverflow and
> > code posted in our bugzilla
> >
> > See
> > https://www.cinelerra-gg.org/bugtracker/view.php?id=559
> > <https://www.cinelerra-gg.org/bugtracker/view.php?id=559>
>
> There are broken links - not very useful.
>
Sorry, most likely the gmail client on Android does not much like
copy-pasted links....
> If I understand right, the problem is related to alpha. When you mix one
> frame with frame with shaped alpha, the second frame "cuts" itself into
> the first. The result may be unexpected.
> >
> > So, in theory this (fast) linearization step should be added to all
> > modes, or at image-reading stage only?
>
> My idea is that video/image loading converts to internal format
> including linearization. What is mode?
>
If I look at the right place, the native png reader outputs into rgb(a)8 or
rgb(a)16 ..
int FilePNG::colormodel_supported(int colormodel)
{
if( colormodel == BC_RGB888 || colormodel == BC_RGBA8888 )
return colormodel;
if( colormodel == BC_RGB161616 && native_cmodel == BC_RGBA16161616 )
return colormodel;
if( native_cmodel >= 0 )
return native_cmodel;
int use_16bit = BC_CModels::is_float(colormodel) ||
BC_CModels::calculate_max(colormodel) > 255 ? 1 : 0;
return BC_CModels::has_alpha(colormodel) ?
(use_16bit ? BC_RGBA16161616 : BC_RGBA8888) :
(use_16bit ? BC_RGB161616 : BC_RGB888);
}
No specific icc handling ...
I guess ffmpeg png reader also was dropping icc profile info until very
recently ....
>
> > Also, good way to put lcms transform code in our rgb2rgb function
> > (somewhere in guicast? ..but in cingg some of this code python-generated
> > and I do not know python at all ..), just with additional parameters
> > like pointer to in and out profiles? If any of them null then just not
> > execute transform call ...
> >
> I don't know python too.
>
I think I found point where new function can be inserted for display:
vdevicex11
In function
VDeviceX11::write_buffer(VFrame *output_channels, EDL *edl)
// printf("VDeviceX11::write_buffer %d output_channels=%p\n", __LINE__,
output_channels);
// printf("VDeviceX11::write_buffer %d input color_model=%d output
color_model=%d\n",
// __LINE__, output_channels->get_color_model(), bitmap->get_color_model());
if( bitmap->hardware_scaling() ) {
BC_CModels::transfer(bitmap->get_row_pointers(),
output_channels->get_rows(), 0, 0, 0,
output_channels->get_y(),
output_channels->get_u(), output_channels->get_v(),
0, 0, output_channels->get_w(),
output_channels->get_h(),
0, 0, bitmap->get_w(), bitmap->get_h(),
output_channels->get_color_model(),
bitmap->get_color_model(),
-1, output_channels->get_w(),
bitmap->get_w());
}
else {
BC_CModels::transfer(bitmap->get_row_pointers(),
output_channels->get_rows(), 0, 0, 0,
output_channels->get_y(),
output_channels->get_u(), output_channels->get_v(),
(int)output_x1, (int)output_y1,
(int)(output_x2 - output_x1), (int)(output_y2 - output_y1),
0, 0, (int)(canvas_x2 - canvas_x1),
(int)(canvas_y2 - canvas_y1),
output_channels->get_color_model(),
bitmap->get_color_model(),
-1, output_channels->get_w(),
bitmap->get_w());
}
So bitmap->get_row_pointers() can be used as the buffer argument for cms*
functions?
I ignore direct mode above for now ...
For initial experiments even just a hardcoded path to the monitor profile
should be enough to see the difference and experience the speed loss?
And for encoding this profile into media/file, the 'asset' structure must
be altered first to hold the icc profile type from lcms2.h, and then the
encoder functions should look if there is an attached profile and pass it
to libavcodec/libavformat ...
>
> >
> > Adding profile to some video container hopefully will be not very
> > hard task (i forgot about this patch for ffmpeg's mov muxer from
> > 2019 i talked about in cingg bug ...)
> >
> >
> > http://ffmpeg.org/pipermail/ffmpeg-devel/2019-September/250398.html
> > <http://ffmpeg.org/pipermail/ffmpeg-devel/2019-September/250398.html>
>
> This patch is applied.
>
Thanks for checking!
> >
> >
> > Input side hopefully already covered by ffmpeg.git patches (input
> > image format icc profile only should matter at decompressing into
> > some pixel array? Because further processing will alter those pixels
> > ...or I am wrong and input media profiles must be somewhat combined
> > during track compositing?)
> >
>
> You have to select the best internal format or you have to change and
> test 6 x N paths in every effect. N is in hundreds.
>
But then is there any sense in making profile-based transfer functions
available to the rest of cinelerra, like in cingg's case the
guicast/bcxfer.C place? (Where BC_CModels::transfer is currently defined)
> Einar
>
https://ninedegreesbelow.com/photography/lcms2-unbounded-mode.html
It seems not all icc profiles are the same ...
I think for cin-gg it makes sense converting from the biggest rgba-float
(32-bit floating point value per color channel), and for CVE from the 16
bit/channel int format .... (I think back in time Adam deleted the int 16
formats from Cinelerra 2.0, as opposed to cin 1.2, saying they were not as
good as true 32 fp)
I put some emphasis on opengl output because it offloads some of the
per-pixel math to graphics hw, but a slower sw-only mode will probably work
as a proof of concept.
As far as I understand you can even have the colord daemon monitoring
hot-plugged output devices incl. monitors and providing the associated
profile via some api (?), yet I only compiled it and never used it ...
It seems at least two display paths are needed, one for traditional 8-bit
displays and another for newer 10-bit ones. I thought because color
profiles have been known and used ever since 1996 or so, 8-bit output can
be done first ..
I'll try to find simpler lcms2 examples ...
Adding a profile to some video container will hopefully not be a very hard
task (I forgot about this patch for ffmpeg's mov muxer from 2019 that I
talked about in a cingg bug ...)
The input side is hopefully already covered by ffmpeg.git patches (the
input image format icc profile should only matter when decompressing into
some pixel array? Because further processing will alter those pixels ... or
am I wrong and input media profiles must be somehow combined during track
compositing?)
Tue, Aug 2, 2022, 20:10 Einar Rünkaru <einarrunkaru(a)gmail.com>:
> Hi.
>
> On 02/08/2022 11:37, Andrea paz wrote:
>
> > Thanks for the responses; glad to hear from you again.
> > In the CVE readme you mention the internal 16-bit color model. Since
> > you remain the only deep Cinelerra expert (besides Adam, with whom it
> > is difficult to interact), may I ask you to explain how color works in
> > Cinelerra? Is it converted continuously as needed?
>
> Video is decoded to internal representation (look at settings/format).
> Internal format is unpacked 3..4 values every pixel. CVE has only
> RGBA-16 and AYUV-16 pixel formats. All values are 16 bit.
>
> CVO (and probably CGG) have 6 internal pixel formats (8-bit and float)
>
> > Does each plugin
> > make the conversions it needs?
>
> All plugins see the frames only in internal format and modify this as
> needed.
>
> > What do the color model settings affect
> > and what are they affected by?
>
> Color model describes internal pixel format
>
> > Is it possible to implement ICC
> > profiles?
>
> Probably. How?
>
Good question. I was looking into mplayer2's code for the display part
https://github.com/astiob/mplayer2/blob/all/libvo/vo_gl3.c
Then I found news from yesteryear about lcms2 plugin
https://littlecms.com/plugin/
"Little CMS floating point plug-in accelerates 8 bit and floating point
color transforms, ".." The speedup is accomplished by implementing new
interpolation kernels, adding optimizations and re-arranging memory
layouts. Additionally, it can use SIMD instructions if present."
Then there is new infrastructure in ffmpeg git:
http://ffmpeg.org/pipermail/ffmpeg-devel/2022-July/299438.html
https://git.ffmpeg.org/gitweb/ffmpeg.git/commit/61ffa23c2e42887b32d469d9e69…
"avcodec/codec_internal: add cap for ICC profile support"
As far as I understand it only works with webp, tiff, png and mjpeg as the
video format, but hopefully it can be extended in the future?
> > In short, I am not clear about anything related to color
> > treatment in Cinelerra. This is an issue that we have been dragging
> > around for years (for example see the thread:
> >
> https://www.cinelerra-gg.org/forum/help-video/yuv-to-rgb-conversion-issues/
> )
> > but the confusion is total and we can't figure it out.
> > Few mentions were made many years ago:
> > https://lists.cinelerra-cv.org/pipermail/cinelerra/2018q4/009903.html
> > https://www.mail-archive.com/[email protected]/msg13761.html
>
> Original Cinelerra does not care about colorspaces. It uses what comes
> out from the decoder. Some effects work differently depending on
> colorspace. Sometimes pixel values are converted to float, sometimes to
> 8-bit for an effect. Fixing it takes a couple of man-years.
>
> CVE converts color to full-scale (0..65535). All effects work with full
> scale values. Encoding converts to colorspace required by the codec.
>
> Einar
>
> PS Removed CinGG ML - it does not accept my mails - I am not subscribed.
>
>
>
> The purpose of the plugin was to compare different color conversion
> routines for testing.
> The test is now moved to the "Video Tests" plugin. Less confusion.
> Einar
Thanks for the responses; glad to hear from you again.
In the CVE readme you mention the internal 16-bit color model. Since
you remain the only deep Cinelerra expert (besides Adam, with whom it
is difficult to interact), may I ask you to explain how color works in
Cinelerra? Is it converted continuously as needed? Does each plugin
make the conversions it needs? What do the color model settings affect
and what are they affected by? Is it possible to implement ICC
profiles? In short, I am not clear about anything related to color
treatment in Cinelerra. This is an issue that we have been dragging
around for years (for example see the thread:
https://www.cinelerra-gg.org/forum/help-video/yuv-to-rgb-conversion-issues/)
but the confusion is total and we can't figure it out.
Few mentions were made many years ago:
https://lists.cinelerra-cv.org/pipermail/cinelerra/2018q4/009903.html
https://www.mail-archive.com/[email protected]/msg13761.html