Thu, Aug 4, 2022, 14:36 Andrew Randrianasulu <[email protected]>:
https://ninedegreesbelow.com/photography/lcms2-unbounded-mode.html
It seems not all ICC profiles are the same ...
I think for cin-gg it makes sense to convert from the biggest RGBA float format (32-bit floating point per color channel), and for CVE from the 16-bit-per-channel integer format ... (I think back in the day Adam removed the 16-bit integer formats from Cinelerra 2.0, as opposed to cin 1.2, saying they were not as good as true 32-bit FP.)
I put some emphasis on OpenGL output because it offloads some of the per-pixel math to graphics hardware, but a slower software-only mode will probably work as a proof of concept.
As far as I understand, you can even have the colord daemon monitoring hot-plugged output devices, including monitors, and providing the associated profile via some API (?), but I have only compiled it and never used it ...
It seems at least two display paths are needed: one for traditional 8-bit displays and another for newer 10-bit ones. I thought that since color profiles have been known and used since 1996 or so, 8-bit output could be done first ...
I'll try to find simpler lcms2 examples ...
Does this example count as a useful tutorial? https://stackoverflow.com/questions/22561176/lcms2-convert-cmyk-to-rgb-throu... It converts RGB to CMYK using two profiles, but hopefully one can call the same code with one real and one built-in profile? It was also nice to learn that you can process scanlines with lcms.

I was also reading the ninedegreesbelow.com articles on color spaces, and it seems some of our trouble with blend modes on text and fades is caused by (missing?) linearization; see https://ninedegreesbelow.com/photography/test-for-linear-processing.html I did a hack for the normal blend mode using code from Stack Overflow and code posted in our bugzilla; see https://www.cinelerra-gg.org/bugtracker/view.php?id=559 So, in theory, should this (fast) linearization step be added to all modes, or only at the image-reading stage?

Also, would a good way be to put the lcms transform code into our rgb2rgb function (somewhere in guicast? ... but in cin-gg some of this code is Python-generated and I do not know Python at all ...), just with additional parameters like pointers to the in and out profiles? If either of them is NULL, then just don't execute the transform call ...
Adding a profile to some video container will hopefully not be a very hard task (I forgot about this patch for ffmpeg's mov muxer from 2019 that I talked about in the cin-gg bug ...)
http://ffmpeg.org/pipermail/ffmpeg-devel/2019-September/250398.html
The input side is hopefully already covered by ffmpeg.git patches (the input image format's ICC profile should only matter when decompressing into some pixel array, because further processing will alter those pixels ... or am I wrong, and input media profiles must somehow be combined during track compositing?)