Sat, 6 Aug 2022, 16:16 Einar Rünkaru <[email protected]>:
On 06/08/2022 07:41, Andrew Randrianasulu wrote:
Fri, 5 Aug 2022, 20:05 Einar Rünkaru <[email protected]>:
On 05/08/2022 08:08, Andrew Randrianasulu wrote:
Thu, 4 Aug 2022, 14:36 Andrew Randrianasulu <[email protected]>:
https://ninedegreesbelow.com/photography/lcms2-unbounded-mode.html
It seems not all icc profiles are the same ...
Conversion from one profile to another may be done in different ways.
Well, I was amazed by the existence of unbounded mode (where values can be less than zero or more than 1.0), but I guess Cinelerra is not ready for this )
Values can overflow or underflow inside a certain conversion, but the output of that conversion should stay within the limits of the output format.
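For reference, a minimal sketch (not code from the tree; the wide-gamut profile path is just a placeholder) of what unbounded mode means in lcms2 terms: with floating-point pixel formats the transform does not clip, so out-of-gamut colors come back with components below 0 or above 1.

#include <lcms2.h>
#include <cstdio>

int main()
{
	// "/path/to/wide-gamut.icc" is a placeholder, not a real project path.
	cmsHPROFILE wide = cmsOpenProfileFromFile("/path/to/wide-gamut.icc", "r");
	cmsHPROFILE srgb = cmsCreate_sRGBProfile();
	if( !wide || !srgb ) return 1;

	// Float in, float out: lcms2 does not clip the result to [0, 1].
	cmsHTRANSFORM xform = cmsCreateTransform(wide, TYPE_RGB_FLT,
		srgb, TYPE_RGB_FLT, INTENT_RELATIVE_COLORIMETRIC, 0);

	float in[3]  = { 0.0f, 1.0f, 0.0f };    // fully saturated green in the wide gamut
	float out[3] = { 0.0f, 0.0f, 0.0f };
	cmsDoTransform(xform, in, out, 1);      // last argument = pixel count
	printf("%f %f %f\n", out[0], out[1], out[2]);   // expect components < 0 or > 1

	cmsDeleteTransform(xform);
	cmsCloseProfile(wide);
	cmsCloseProfile(srgb);
	return 0;
}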
Also, the littlecms pdf documentation in the doc folder says you can use the same buffer for input and output as long as its organization remains the same:
--- Scanline overlap
It is safe to use same block for input and output, but only if the input and output are coded in same format. For example, you can safely use only one buffer for RGB to RGB but you cannot use same buffer for RGB as input and CMYK as output.
---
Of course, only when the size of a pixel in bytes does not change. This is the most efficient solution.
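As a tiny sketch of what that "same format" rule allows (assuming the transform was created with TYPE_RGBA_8 on both sides, so the pixel size does not change):

#include <lcms2.h>

// In-place conversion is allowed here because input and output share the
// same layout (RGBA, 8 bits per channel), so the buffer can be reused.
void convert_row_in_place(cmsHTRANSFORM xform, unsigned char *row, int width)
{
	cmsDoTransform(xform, row, row, width);   // width = number of pixels
}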
I also was reading ninedegreesbelow.com articles on color spaces, and it seems some of our trouble with blend modes with text and fades is caused by (missing?) linearization, see
https://ninedegreesbelow.com/photography/test-for-linear-processing.html
I did some hack for the normal blend mode using code from stackoverflow and code posted in our bugzilla. See
https://www.cinelerra-gg.org/bugtracker/view.php?id=559
There are broken links - not very useful.
Sorry, most likely the gmail client on Android does not handle copy-pasted links very well ...
No, the links in the ticket did not open. Without these it was very hard to understand what the problem was.
If I understand right, the problem is related to alpha. When you mix one frame with a frame with shaped alpha, the second frame "cuts" itself into the first. The result may be unexpected.
So, in theory this (fast) linearization step should be added to all modes, or at the image-reading stage only?
My idea is that video/image loading converts to the internal format, including linearization. What is a mode?
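For reference, the linearization step being discussed is essentially the standard sRGB decoding curve; a sketch only (where it would actually live in cingg is exactly the open question here):

#include <cmath>

// Standard sRGB decoding (EOTF): encoded value in [0, 1] -> linear light.
static inline float srgb_to_linear(float c)
{
	return c <= 0.04045f ? c / 12.92f
	                     : powf((c + 0.055f) / 1.055f, 2.4f);
}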
If I am looking at the right place, the native png reader outputs into rgb(a)8 or rgb(a)16 ...
int FilePNG::colormodel_supported(int colormodel)
{
	if( colormodel == BC_RGB888 || colormodel == BC_RGBA8888 )
		return colormodel;
	if( colormodel == BC_RGB161616 && native_cmodel == BC_RGBA16161616 )
		return colormodel;
	if( native_cmodel >= 0 )
		return native_cmodel;
	int use_16bit = BC_CModels::is_float(colormodel) ||
		BC_CModels::calculate_max(colormodel) > 255 ? 1 : 0;
	return BC_CModels::has_alpha(colormodel) ?
		(use_16bit ? BC_RGBA16161616 : BC_RGBA8888) :
		(use_16bit ? BC_RGB161616 : BC_RGB888);
}
No specific icc handling ...
The icc handling has to be done when the execution reaches this place.
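If the native reader were to pick the profile up at that point, the libpng hook would be something like png_get_iCCP(); this is only a sketch, not code from filepng.C (png_ptr/info_ptr are assumed to be the reader's existing handles, and older libpng versions declare the profile argument as png_charpp):

#include <png.h>
#include <lcms2.h>

// Sketch: extract an embedded ICC profile while reading a PNG.
static cmsHPROFILE read_icc_profile(png_structp png_ptr, png_infop info_ptr)
{
	if( !png_get_valid(png_ptr, info_ptr, PNG_INFO_iCCP) )
		return 0;                        // no embedded profile
	png_charp name = 0;
	int compression_type = 0;
	png_bytep profile = 0;                   // data stays owned by libpng
	png_uint_32 proflen = 0;
	png_get_iCCP(png_ptr, info_ptr, &name, &compression_type,
		&profile, &proflen);
	// Hand the raw blob to lcms2.
	return cmsOpenProfileFromMem(profile, proflen);
}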
I guess the ffmpeg png reader was also dropping icc profile info until very recently ...
Yes - this was ignored.
Also, a good way would be to put the lcms transform code in our rgb2rgb function (somewhere in guicast? ..but in cingg some of this code is python-generated and I do not know python at all ..), just with additional parameters like pointers to the in and out profiles? If any of them is null then just do not execute the transform call ...
I don't know python either.
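Just to make that proposal concrete, a hypothetical signature (every name here is invented, nothing like this exists in guicast yet; a real version would cache the transform instead of rebuilding it per call):

#include <lcms2.h>

// Hypothetical: extend an rgb-to-rgb transfer with optional profile pointers.
// If either profile is null, skip the lcms step and behave exactly as today.
void transfer_with_profiles(unsigned char **out_rows, int w, int h,
	cmsHPROFILE in_profile, cmsHPROFILE out_profile)
{
	if( !in_profile || !out_profile ) return;   // no color management requested
	cmsHTRANSFORM xform = cmsCreateTransform(in_profile, TYPE_RGBA_8,
		out_profile, TYPE_RGBA_8, INTENT_PERCEPTUAL, 0);
	for( int y = 0; y < h; ++y )
		cmsDoTransform(xform, out_rows[y], out_rows[y], w);  // in place, same layout
	cmsDeleteTransform(xform);
}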
I think I found the point where a new function can be inserted for display: vdevicex11, in the function VDeviceX11::write_buffer(VFrame *output_channels, EDL *edl):
// printf("VDeviceX11::write_buffer %d output_channels=%p\n", __LINE__, output_channels); // printf("VDeviceX11::write_buffer %d input color_model=%d output color_model=%d\n", // __LINE__, output_channels->get_color_model(),
bitmap->get_color_model());
if( bitmap->hardware_scaling() ) {
BC_CModels::transfer(bitmap->get_row_pointers(), output_channels->get_rows(), 0, 0, 0, output_channels->get_y(), output_channels->get_u(), output_channels->get_v(), 0, 0, output_channels->get_w(), output_channels->get_h(), 0, 0, bitmap->get_w(), bitmap->get_h(), output_channels->get_color_model(), bitmap->get_color_model(), -1, output_channels->get_w(), bitmap->get_w()); } else {
BC_CModels::transfer(bitmap->get_row_pointers(), output_channels->get_rows(), 0, 0, 0, output_channels->get_y(), output_channels->get_u(), output_channels->get_v(), (int)output_x1, (int)output_y1, (int)(output_x2 - output_x1), (int)(output_y2 - out 0, 0, (int)(canvas_x2 - canvas_x1), (int)(canvas_y2 - canvas_y1), output_channels->get_color_model(), bitmap->get_color_model(), -1, output_channels->get_w(), bitmap->get_w()); }
So bitmap->get_row_pointers() can be used as the buffer argument for the cms* functions?
The icc profile should be added as a new parameter. You can't reuse get_row_pointers() for it.
Yes, I was talking about the image buffer to pass to the transform function ...
The right place for adding the icc profile is the color conversion routines. The caller tells what profile should be used (no profile, some predefined one, or some custom profile).
I am ignoring direct mode above for now ...
For initial experiments, even just a hardcoded path to the monitor profile should be enough to see the difference and to get a feel for the speed loss?
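Something like this, purely as a sketch of that experiment (the monitor profile path and the TYPE_BGRA_8 layout are assumptions; a real version would match bitmap->get_color_model() instead of guessing, and create the transform once at startup):

#include <lcms2.h>

static cmsHTRANSFORM display_xform = 0;

static void init_display_xform()
{
	cmsHPROFILE src = cmsCreate_sRGBProfile();  // assume frames are sRGB-like
	cmsHPROFILE mon = cmsOpenProfileFromFile("/usr/share/color/icc/monitor.icc", "r");
	if( mon )
		display_xform = cmsCreateTransform(src, TYPE_BGRA_8,
			mon, TYPE_BGRA_8, INTENT_PERCEPTUAL, 0);
	cmsCloseProfile(src);
	if( mon ) cmsCloseProfile(mon);
}

// Called after BC_CModels::transfer has filled the X11 bitmap, e.g.
//   apply_display_xform(bitmap->get_row_pointers(), bitmap->get_w(), bitmap->get_h());
static void apply_display_xform(unsigned char **rows, int w, int h)
{
	if( !display_xform ) return;
	for( int y = 0; y < h; ++y )
		cmsDoTransform(display_xform, rows[y], rows[y], w);  // in place
}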
And for encoding this profile into media/files, the 'asset' structure must be altered first to hold the icc profile type from lcms2.h, and then the encoder functions should check whether there is an attached profile and pass it to libavcodec/libavformat ...
The Asset should not hold the profile. The profile is inside the original file, in the decoder parameters or in the encoder parameters.
Ok ....
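For the encoder side, the usual ffmpeg mechanism (in reasonably recent libavutil) is ICC side data attached to the AVFrame; this is a sketch of only that mechanism, not a proposal for where it would go in cinelerra:

extern "C" {
#include <libavutil/frame.h>
}
#include <cstring>

// Sketch: attach the raw ICC profile blob to a frame before it is sent to
// the encoder. Encoders that understand AV_FRAME_DATA_ICC_PROFILE can embed
// it; the rest simply ignore the side data.
static int attach_icc(AVFrame *frame, const uint8_t *profile, size_t size)
{
	AVFrameSideData *sd = av_frame_new_side_data(frame,
		AV_FRAME_DATA_ICC_PROFILE, size);
	if( !sd ) return -1;
	memcpy(sd->data, profile, size);
	return 0;
}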
The input side is hopefully already covered by the ffmpeg.git patches (the input image format's icc profile should only matter when decompressing into some pixel array? Because further processing will alter those pixels ... or am I wrong and input media profiles must somehow be combined during track compositing?)
You have to select the best internal format, or you have to change and test 6 x N paths in every effect. N is in the hundreds.
But then is there any sense in making profile-based transfer functions available to the rest of cinelerra, like, in cingg's case, the guicast/bcxfer.C place? (Where BC_CModels::transfer is currently defined.)
Yes, as I said above.
Einar