[Cin] Colour profiles in photo

Andrew Randrianasulu randrianasulu at gmail.com
Sat Aug 6 06:41:14 CEST 2022


Fri, 5 Aug 2022, 20:05 Einar Rünkaru <einarrunkaru at gmail.com>:

>
>
> On 05/08/2022 08:08, Andrew Randrianasulu wrote:
> >
> >
> > Thu, 4 Aug 2022, 14:36 Andrew Randrianasulu <randrianasulu at gmail.com>:
> >
> >     https://ninedegreesbelow.com/photography/lcms2-unbounded-mode.html
> >
> >
> >     It seems not all icc profiles are the same ...
>
> Conversion from one profile to another may be done different ways.
>

Well, I was amazed by the existence of unbounded mode (where values can be
less than zero or greater than 1.0), but I guess Cinelerra is not ready for this )

Also, I was surprised there is more than one method for describing those
tone curves ...
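
Just to illustrate what I mean by unbounded mode (a rough, untested sketch;
"wide.icc" is only a placeholder for some real wide-gamut profile): as far as
I understand, when lcms2 works on float pixel formats it does not clip, so
the converted values can land outside 0..1:

#include <lcms2.h>
#include <stdio.h>

int main(void)
{
        /* "wide.icc" is just a placeholder for a real wide-gamut profile */
        cmsHPROFILE wide = cmsOpenProfileFromFile("wide.icc", "r");
        cmsHPROFILE srgb = cmsCreate_sRGBProfile();
        if( !wide || !srgb ) return 1;

        /* TYPE_RGB_FLT keeps the pipeline in float, i.e. unbounded mode */
        cmsHTRANSFORM xform = cmsCreateTransform(wide, TYPE_RGB_FLT,
                srgb, TYPE_RGB_FLT, INTENT_RELATIVE_COLORIMETRIC, 0);

        float in[3]  = { 0.0f, 1.0f, 0.0f };   /* saturated green in the wide space */
        float out[3] = { 0.0f, 0.0f, 0.0f };
        cmsDoTransform(xform, in, out, 1);     /* size is in pixels */

        /* out[] may legally be < 0 or > 1 here - nothing clips it */
        printf("%f %f %f\n", out[0], out[1], out[2]);

        cmsDeleteTransform(xform);
        cmsCloseProfile(wide);
        cmsCloseProfile(srgb);
        return 0;
}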

>
> Profiles are of course not the same
> https://www.color.org/version4html.xalter
> >
> >
> >     I think for cin-gg it makes sense to convert from the biggest format,
> >     rgba-float (32 bit floating point value per color channel), and for
> >     CVE from the 16 bit/channel int format .... (I think back in time
> >     Adam removed the int 16 formats from Cinelerra 2.0, as opposed to
> >     cin 1.2, saying they were not as good as true 32 bit fp)
>
> 32 bit fp is not 32 bits - it has only 24 significant bits. Fp pixel
> value is 0 <= value < 1.
>


Yeah, thanks for the correction.



>
> >     I put some emphasis on opengl output because it offloads some of the
> >     per-pixel math to graphics hw, but a slower sw-only mode will probably
> >     work as a proof of concept.
> >
> >     As far as I understand you can even have the colord daemon monitoring
> >     hot-plugged output devices incl. monitors and providing the associated
> >     profile via some api (?) yet I only compiled it and never used it ...
> >
> >     It seems at least two display paths are needed, one for traditional 8
> >     bit displays and another for newer 10 bit ones. I thought because
> >     color profiles have been known and used ever since 1996 or so, 8 bit
> >     output can be done first ..
> >
> >     I'll try to find simpler lcms2 examples ...
>
> Cin should detect the output device and select appropriate conversion
> >
> >
> >
> >
> > Does this example count as useful tutorial?
> >
> >
> > https://stackoverflow.com/questions/22561176/lcms2-convert-cmyk-to-rgb-through-profiles-in-c-help-on-input-output-values
> >
> >
> > It converts between rgb and cmyk using two profiles, but hopefully one
> > can just call this with one real and one built-in profile? It was nice to
> > know you can do scanlines with lcms, too ....
>
> Looks reasonable


> ----


Also, the littlecms PDF documentation in the doc folder says you can use the
same buffer for input and output as long as its organization remains the same ...

---
Scanline overlap

It is safe to use same block for input and output, but only if the input and
output are coded in same format. For example, you can safely use only one
buffer for RGB to RGB but you cannot use same buffer for RGB as input and
CMYK as output.

---
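
So something like this should be legal for the display path (rough sketch,
untested; "monitor.icc" is just a placeholder path), transforming one RGB8
scanline in place:

        cmsHPROFILE srgb    = cmsCreate_sRGBProfile();
        cmsHPROFILE monitor = cmsOpenProfileFromFile("monitor.icc", "r");
        cmsHTRANSFORM xform = cmsCreateTransform(srgb, TYPE_RGB_8,
                monitor, TYPE_RGB_8, INTENT_PERCEPTUAL, 0);

        unsigned char scanline[640 * 3];   /* one 640-pixel RGB8 row */
        /* ... fill scanline ... */
        /* same block as input and output; format is identical on both sides */
        cmsDoTransform(xform, scanline, scanline, 640);

        cmsDeleteTransform(xform);
        cmsCloseProfile(srgb);
        cmsCloseProfile(monitor);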

>
>
> >
> > I was also reading ninedegreesbelow.com articles on color spaces, and it
> > seems some of our trouble with blend modes with text and fades is caused
> > by (missing?) linearization, see
> >
> > https://ninedegreesbelow.com/photography/test-for-linear-processing.html
> >
> > I did some hack for the normal blend mode using code from stackoverflow
> > and code posted in our bugzilla
> >
> > See
> > https://www.cinelerra-gg.org/bugtracker/view.php?id=559
>
> There are broken links - not very useful.
>


Sorry, most likely the Gmail client on Android does not handle copy-pasted
links very well ....


> If I understand right, the problem is related to alpha. When you mix one
> frame with a frame with shaped alpha, the second frame "cuts" itself into
> the first. The result may be unexpected.
> >
> > So, in theory this (fast) linearization step should be added to all
> > modes, or at the image-reading stage only?
>
> My idea is that video/image loading converts to internal format
> including linearization. What is mode?
>
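
By "modes" I meant the blend modes. As far as I understand, the linearization
step itself is cheap - basically just the standard sRGB transfer curve and its
inverse on each normalized channel value (sketch, float pixels assumed):

#include <math.h>

/* undo the sRGB tone curve on a 0..1 channel value */
static float srgb_to_linear(float c)
{
        return c <= 0.04045f ? c / 12.92f
                             : powf((c + 0.055f) / 1.055f, 2.4f);
}

/* and re-apply it after blending/compositing in linear light */
static float linear_to_srgb(float c)
{
        return c <= 0.0031308f ? c * 12.92f
                               : 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
}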

If I look at the right place, the native PNG reader outputs RGB(A) 8-bit or
RGB(A) 16-bit ..

int FilePNG::colormodel_supported(int colormodel)
{
        if( colormodel == BC_RGB888 || colormodel == BC_RGBA8888 )
                return colormodel;
        if( colormodel == BC_RGB161616 && native_cmodel == BC_RGBA16161616 )
                return colormodel;
        if( native_cmodel >= 0 )
                return native_cmodel;
        int use_16bit = BC_CModels::is_float(colormodel) ||
                 BC_CModels::calculate_max(colormodel) > 255 ? 1 : 0;
        return BC_CModels::has_alpha(colormodel) ?
                (use_16bit ? BC_RGBA16161616 : BC_RGBA8888) :
                (use_16bit ? BC_RGB161616 : BC_RGB888);
}

No specific ICC handling ...

I guess the ffmpeg PNG reader was also dropping ICC profile info until very
recently ....
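
For reference, libpng can hand the embedded profile over via png_get_iCCP(),
so in principle FilePNG could pick it up and open it with lcms2 - a rough
sketch, not real FilePNG code (png_ptr/info_ptr are assumed to come from the
usual libpng read setup FilePNG already does):

#include <png.h>
#include <lcms2.h>

static cmsHPROFILE get_png_profile(png_structp png_ptr, png_infop info_ptr)
{
        png_charp name = 0;
        int compression_type = 0;
        png_bytep profile_data = 0;
        png_uint_32 profile_len = 0;

        if( png_get_valid(png_ptr, info_ptr, PNG_INFO_iCCP) &&
            png_get_iCCP(png_ptr, info_ptr, &name, &compression_type,
                        &profile_data, &profile_len) )
                return cmsOpenProfileFromMem(profile_data, profile_len);

        return 0; /* no embedded profile - caller can assume sRGB */
}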

>
> > Also, a good place to put the lcms transform code would be our rgb2rgb
> > function (somewhere in guicast? ..but in cingg some of this code is
> > python-generated and I do not know python at all ..), just with additional
> > parameters like pointers to the in and out profiles? If either of them is
> > null then just do not execute the transform call ...
> >
> I don't know python either.
>


I think I found the point where a new function can be inserted for display:

vdevicex11

In function
VDeviceX11::write_buffer(VFrame *output_channels, EDL *edl)

// printf("VDeviceX11::write_buffer %d output_channels=%p\n", __LINE__,
output_channels);
// printf("VDeviceX11::write_buffer %d input color_model=%d output
color_model=%d\n",
// __LINE__, output_channels->get_color_model(), bitmap->get_color_model());
                if( bitmap->hardware_scaling() ) {
                        BC_CModels::transfer(bitmap->get_row_pointers(),
output_channels->get_rows(), 0, 0, 0,
                                output_channels->get_y(),
output_channels->get_u(), output_channels->get_v(),
                                0, 0, output_channels->get_w(),
output_channels->get_h(),
                                0, 0, bitmap->get_w(), bitmap->get_h(),
                                output_channels->get_color_model(),
bitmap->get_color_model(),
                                -1, output_channels->get_w(),
bitmap->get_w());
                }
                else {
                        BC_CModels::transfer(bitmap->get_row_pointers(),
output_channels->get_rows(), 0, 0, 0,
                                output_channels->get_y(),
output_channels->get_u(), output_channels->get_v(),
                                (int)output_x1, (int)output_y1,
(int)(output_x2 - output_x1), (int)(output_y2 - out
                                0, 0, (int)(canvas_x2 - canvas_x1),
(int)(canvas_y2 - canvas_y1),
                                output_channels->get_color_model(),
bitmap->get_color_model(),
                                -1, output_channels->get_w(),
bitmap->get_w());
                }

So bitmap->get_row_pointers() can be used as the buffer argument for the cms*
functions?

I am ignoring direct mode above for now ...

For initial experiments even a hardcoded path to the monitor profile should
be enough to see the difference and get a feel for the speed lost?
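
Something along these lines, right after the BC_CModels::transfer() call, is
what I have in mind (very rough sketch, untested; hardcoded profile path, and
it assumes the bitmap really is 8-bit RGBA - the real code would have to
check bitmap->get_color_model() and create/cache the transform once, not per
frame):

	/* sketch only, not real VDeviceX11 code */
	cmsHPROFILE srgb    = cmsCreate_sRGBProfile();
	cmsHPROFILE monitor = cmsOpenProfileFromFile("/usr/share/color/icc/monitor.icc", "r");
	if( monitor ) {
		cmsHTRANSFORM xform = cmsCreateTransform(srgb, TYPE_RGBA_8,
			monitor, TYPE_RGBA_8, INTENT_PERCEPTUAL, 0);
		unsigned char **rows = bitmap->get_row_pointers();
		int w = bitmap->get_w(), h = bitmap->get_h();
		for( int y = 0; y < h; y++ )
			cmsDoTransform(xform, rows[y], rows[y], w); /* in place, row by row */
		cmsDeleteTransform(xform);
		cmsCloseProfile(monitor);
	}
	cmsCloseProfile(srgb);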

And for encoding this profile into media/files, the 'asset' structure must be
altered first to hold the ICC profile type from lcms2.h, and then the encoder
functions should check whether there is an attached profile and pass it to
libavcodec/libavformat ...
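
On the ffmpeg side I think it mostly means handing the raw profile bytes over
as frame side data before encoding, roughly like this (sketch, untested;
icc_data/icc_len stand for the hypothetical new fields in the asset):

#include <libavutil/frame.h>
#include <string.h>

static int attach_icc(AVFrame *frame, const uint8_t *icc_data, size_t icc_len)
{
        /* the muxer (e.g. the mov patch mentioned below) can then pick this
           up and write it into the container */
        AVFrameSideData *sd = av_frame_new_side_data(frame,
                AV_FRAME_DATA_ICC_PROFILE, icc_len);
        if( !sd )
                return -1;
        memcpy(sd->data, icc_data, icc_len);
        return 0;
}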


>
> >
> >     Adding the profile to some video container will hopefully not be a
> >     very hard task (I forgot about this patch for ffmpeg's mov muxer from
> >     2019 I talked about in the cingg bug ...)
> >
> >
> > http://ffmpeg.org/pipermail/ffmpeg-devel/2019-September/250398.html
>
> This patch is applied.
>


Thanks for checking!

> >
> >
> >     The input side is hopefully already covered by ffmpeg.git patches
> >     (the input image format's ICC profile should only matter when
> >     decompressing into some pixel array? Because further processing will
> >     alter those pixels ... or am I wrong and input media profiles must
> >     somehow be combined during track compositing?)
> >
>
> You have to select the best internal format or you have to change and
> test 6 x N paths in every effect. N is in the hundreds.
>


But then is there any sense in making profile-based transfer functions
available to the rest of Cinelerra, in cingg's case somewhere like
guicast/bcxfer.C? (Where BC_CModels::transfer is currently defined)


> Einar
>