Two links on ICC in MP4
Fri, 11 Apr 2025, 20:05 Andrew Randrianasulu <[email protected]>:
Starter thread; at least back in 2022 it was not easy (or even possible) to use ffmpeg to add a custom ICC profile to an MP4 container, but MP4Box from the GPAC project was up to the task:
https://superuser.com/questions/1739082/embedding-a-custom-icc-profile-in-an...
=====
Creating an MP4 video without an ICC profile (for testing): ffmpeg -y -loop 1 -framerate 1 -i icc_chelsea.png -vcodec libx265 -t 10 chelsea.mp4
Extracting the ICC profile: magick icc_chelsea.png profile.icc
Adding the ICC profile to the MP4 video file using MP4Box: MP4Box -add chelsea.mp4#video:colr=prof,profile.icc -new icc_chelsea.mp4
Testing with FFprobe: ffprobe icc_chelsea.mp4
===== end of quotation =====
As far as I understand, in this case the idea was to embed a display profile into the media. But it seems few apps outside of Final Cut Pro/QuickTime can do anything useful with this info?
https://forum.luminous-landscape.com/index.php?topic=129232.0
this one from ~2019.
It states that instead of more flexible color management, the video world used a few fixed monitor color spaces, and the whole process was somewhat manual if your input image/video was not in one of them.
I think we hit the crudest form of this limitation when we try to fade the titler's text over a PNG image. It sort of flashes, due to a missing (unapplied) 2.4 gamma step?
https://www.cinelerra-gg.org/bugtracker/view.php?id=559#c5098
Probably simple enough to replicate in a custom BlendAlgebra format? As was indicated, not all PNGs are in sRGB, so it makes sense for ffmpeg not to embed this step into the decoder itself?
On Fri, 11 Apr 2025, Andrew Randrianasulu wrote:
I think we hit the crudest form of this limitation when we try to fade the titler's text over a PNG image. It sort of flashes, due to a missing (unapplied) 2.4 gamma step?
No, the reason for "flashes" when fading in titles and for dissolve transitions in transparent PNGs is a totally different one.

Take and unpack the small archive in the attachment. unwanted_black_flash.xml is a small demo project IgorBeg created a few years ago to demonstrate the quirks. There is a white title over a violet background with a 5 sec fade in, and two white PNGs with transparent background with a 5 sec Dissolve transition in between. Try to play it, focusing on the fading and the transition; it is not so easy to figure out what is not correct.

And yes, what we have now is Blend Program. You can now load ubf1.xml; it uses two simple programs which print the values of R, G, B, A (the project's color model is RGBA-8bit) inside some letter in the title, and inside each of the two PNGs.

What do we see? For the title:

Title RGBA: 0.00 / 0.00 / 0.00 / 0.00
Title RGBA: 0.00 / 0.00 / 0.00 / 0.00
Title RGBA: 0.01 / 0.01 / 0.01 / 0.01
Title RGBA: 0.02 / 0.02 / 0.02 / 0.02
Title RGBA: 0.02 / 0.02 / 0.02 / 0.02
Title RGBA: 0.03 / 0.03 / 0.03 / 0.03
.....................................
Title RGBA: 0.97 / 0.97 / 0.97 / 0.97
Title RGBA: 0.98 / 0.98 / 0.98 / 0.98
Title RGBA: 0.98 / 0.98 / 0.98 / 0.98
Title RGBA: 0.99 / 0.99 / 0.99 / 0.99
Title RGBA: 1.00 / 1.00 / 1.00 / 1.00

Pay attention to these numbers! What do you see? A smooth increase of white title opacity over the background? No, you see here a smooth transition from transparent black letters (RGB=0.00/0.00/0.00 is black, not white) to opaque white, with some semi-transparent grey in the middle.

The Dissolve transition does something similar:

1 RGBA: 1.00 / 1.00 / 1.00 / 1.00   2 RGBA: 0.00 / 0.00 / 0.00 / 0.00
1 RGBA: 1.00 / 1.00 / 1.00 / 1.00   2 RGBA: 0.00 / 0.00 / 0.00 / 0.00
1 RGBA: 0.99 / 0.99 / 0.99 / 0.99   2 RGBA: 0.01 / 0.01 / 0.01 / 0.01
.................................
1 RGBA: 0.01 / 0.01 / 0.01 / 0.01   2 RGBA: 0.99 / 0.99 / 0.99 / 0.99
1 RGBA: 0.00 / 0.00 / 0.00 / 0.00   2 RGBA: 1.00 / 1.00 / 1.00 / 1.00

The first PNG gradually changes from opaque white to transparent black, the second one does the same in the backward direction, both being semi-transparent grey in the middle. Although one can imagine a case where exactly such behaviour (fading from white to black) could be desired, here it is definitely not what is wanted, as the background track has some other color.

Load the third example, ubf2.xml. It tries to demonstrate how this all should look, were it all consistent. I removed the title fading from the titler plugin and configured fading in the Fade Autos for the track. Now the letters gradually come from transparent white to opaque white, not from black to white. The transition is not so easy to imitate, so I added one more track and put the second PNG there with a 5 sec overlap with the first one, and made the same transition from white to transparent white via the Fade Autos. This all looks quite different now!

It is obvious: fading in the titler and in Dissolve is to be done solely on the Alpha channel, not touching colors, provided that the track where it takes place has no transparent areas. Alternatively, it can be done in the RGB channels without touching the Alpha channel; actually that is what takes place if the project is pure RGB, without transparency. But what Dissolve must do if there is also a controlled transparency in the track, perhaps different in the two parts between which Dissolve is played, seems not to be obvious...

_______________________________________________________________________________
Georgy Salnikov
NMR Group
Novosibirsk Institute of Organic Chemistry
Lavrentjeva, 9, 630090 Novosibirsk, Russia
Phone +7-383-3307864
Email [email protected]
_______________________________________________________________________________
Sun, 13 Apr 2025, 16:05 Georgy Salnikov <[email protected]>:
On Fri, 11 Apr 2025, Andrew Randrianasulu wrote:
I think we hit the crudest form of this limitation when we try to fade the titler's text over a PNG image. It sort of flashes, due to a missing (unapplied) 2.4 gamma step?
No, the reason for "flashes" when fading in titles and for dissolve transitions in transparent PNGs is a totally different one.
Hm, so we invented the wrong solution then, because I recall that, for some reason, doing inter-track mixing through sRGB conversions with the equations below visually fixed that flash...

+//#define LINEAR2SRGB(in) (in <= 0.0031308 ? 12.92 * in : 1.055 * mpow(in, 1.0/GAMMA) - 0.055)
+//#define SRGB2LINEAR(in) (in <= 0.04045 ? in / 12.92 : mpow((in + 0.055) / 1.055, GAMMA))
+
+#define A_BLEND(top, bottom, alpha, max) \
+  max * linear_to_srgb(srgb_to_linear(1. * top / max) + srgb_to_linear(1. * bottom / max)*(1.0 - (1. * alpha / max)))
+
+// Change lines:
+// #define ALPHA_NORMAL(mx, Sa, Da) (Sa + (Da * (mx - Sa)) / mx)
+// #define COLOR_NORMAL(mx, Sc, Sa, Dc, Da) ((Sc * Sa + Dc * (mx - Sa)) / mx)
+// To:
+#define ALPHA_NORMALS(mx, Sa, Da) ((Sa + (mx - Sa)*(mx - Sa)) / mx)
+#define COLOR_NORMALS(mx, Sc, Sa, Dc, Da) A_BLEND(Sc, Dc, Sa, mx)
+#define CHROMA_NORMALS COLOR_NORMALS

This makes me wonder about the time when researchers just played around with equations to see if any of them made interesting visual results...
On Sun, 13 Apr 2025, Andrew Randrianasulu wrote:
This makes me wonder about the time when researchers just played around with equations to see if any of them made interesting visual results...
...The researchers should first express the property "visual beauty" in a set of exact equations, to be able to play systematically :)
On Fri, 11 Apr 2025, Andrew Randrianasulu wrote:
As far as I understand, in this case the idea was to embed a display profile into the media. But it seems few apps outside of Final Cut Pro/QuickTime can do anything useful with this info?
It states that instead of more flexible color management, the video world used a few fixed monitor color spaces, and the whole process was somewhat manual if your input image/video was not in one of them.
CMS for photography & publishing is logically coherent. There is an input profile, an image profile, an output profile. The input profiles are that without which it is hardly possible to process RAW input. The profiles embedded into images say nothing about displays; they just specify in which colorspace the image data are if they are not in sRGB (and even if it is sRGB, it is common now to embed sRGB explicitly). The output profiles are just for the particular output devices (displays, printing); they make no assumptions about the nature of the data sent to them.

Concerning displays, X11 has long had mechanisms for some color correction, such as xgamma; now it can be done by colord or Argyll, but for the best accuracy the photo processing program itself should do it. Anyway, it is not a big deal: photo processing does not demand real-time picture refresh, the math need not be extremely fast, and everything is optimized for highest quality, not highest speed.

The first problem of CMS for video is the need for real-time visualization: the whole math must be fast. The second is a real zoo of display formats, hardware accelerators, etc.

For example, let's imagine some external tool like xgamma or colord has the necessary calibration and can, in principle, correct our video on the display. But what if the user has attached not a DVI monitor but a real TV to the composite video output, for example: what signal will the video card generate there? These gamma (LUT) corrections are applied through the X11 VidModeExtension. But will they be applied if rendering is done via the Xv extension, for example, or via OpenGL?

Let's imagine we have implemented some kind of CMS in our program. Then we would like, for better efficiency, to make use of hardware accelerated video decoding like VDPAU, LIBVA or something similar. Now decoding is done not under the control of our implementation, but by some (proprietary) driver/firmware on the video card. And we do not know exactly what happens there.

Maybe the safest CMS for video would be to output some good standard picture on the monitor, adjust it manually for the most precise colors, and then use these adjustments untouched. Like many years ago, when I ran some tools for adjusting xgamma, but actually adjusted not the xgamma parameters but the CRT monitor settings themselves.

Perhaps I am a pessimist...
Mon, 14 Apr 2025, 15:42 Georgy Salnikov <[email protected]>:
Well, even the well-understood need for photo color management only surfaced in GIMP 2.8 (?) around 2007 or 2008, and even there it was not a very simple task! I booted my self-compiled live CD image from around 2007 in qemu and even on real hardware, with GIMP 2.2 and cinelerra-cv r958, and had some fun throwing a screencapture mov file between various tools. x264 was so much faster in 2007! ;)

Unfortunately there is quite a rift between the users who need this functionality and the engineers who implement it (I am fairly sure the fine people from Blackmagic do not give free advice on the web on how to outdo their main work ;) )

OCIO 2.0, from what I recall, can use hw (OpenCL for accuracy?) acceleration, but I think the whole OCIO way of doing things is also different from lcms2. Maybe I can ask on our big black (with black theme) Linux forum what steps exactly people perform when they need a color-managed video workflow, but we have something like <1000 active participants there, judging by various survey results, so the question may miss its intended audience.
Your explanations are always valuable, Georgy, thank you. I am left with some doubts:

- Basically, CMS means converting the input (profiled) signals into the absolute color coordinates X, Y, Z (nowadays others can be used) via the internal "engine" called the CMM. Again through the CMM we can convert X, Y and Z to the specific coordinates of the various devices we use (monitor, printer, projector, other PCs, video interfaces and their cables, etc.). In this way, however many conversions there are, color consistency is maintained.

- I knew that monitor profiling was actually the profiling of "monitor + video card". Shouldn't the output, then, already take the video card drivers into account? Isn't it possible to take hardware acceleration into account in CMS?

- DaVinci Resolve supports several types of color management, the oldest being as follows: you import the video with its color space. You make an initial node where you set the signal according to that color space (input). Then you make a second node for the output, where you set the color space wanted on the output. The pair of these two nodes is the color management of the program. Between these two nodes we can insert as many color correction nodes as we want, while still staying within the color management of the program.
participants (3):
- Andrea Paz
- Andrew Randrianasulu
- Georgy Salnikov