Many (if not all) ffmpeg guides insist on using either libplacebo (Vulkan API on GPU) or zscale (not available in some ffmpeg builds). Here I found a collection of .cube files for doing REC2020 to REC709:

https://www.reddit.com/r/premiere/comments/12t9mh8/convert_hdr_to_sdr/
https://drive.google.com/file/d/1HQGKE5Fzg_W5vsP8wJcmzRIAtCRKevsV/view

I extracted one .cube file and ran this ffmpeg command (at 1.4 fps on Termux):

ffmpeg -i storage/downloads/iPhone11_4K-recorder_59.940HDR10.mov -vf lut3d=file=REC2020PQto709v1.cube:interp=tetrahedral,scale=1280:-2 ~/1.mp4

and it sort of worked, though I think mpv/libplacebo did a better job on my desktop. But even just in mpv I had tons of dropped frames, and decoding in cingg without any tonemapping ran at maybe 9 fps max, out of 60. Ow :(

For some reason just throwing F_tonemap onto the timeline resulted in an extremely dark image no matter the settings. I pushed it up with the native brightness/contrast plugin, but I don't think you're supposed to do that?

The sw tonemapper was added to ffmpeg in 2017:
https://github.com/FFmpeg/FFmpeg/blob/master/libavfilter/vf_tonemap.c
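For reference, the reinhard and hable modes of a software tonemapper boil down to simple scalar curves applied to linear-light values. This is not code from the thread, just a hedged Python sketch of the textbook forms of the two operators; the constants in hable() are the usual Uncharted 2 filmic ones, and the 11.2 white point used for normalization is an assumption, not necessarily what any particular ffmpeg build uses:

```python
def reinhard(x: float) -> float:
    # Simple Reinhard operator: compresses [0, inf) into [0, 1)
    return x / (1.0 + x)

def hable(x: float) -> float:
    # Hable / Uncharted 2 filmic curve, unnormalized,
    # with the commonly published shoulder/toe constants
    A, B, C, D, E, F = 0.15, 0.50, 0.10, 0.20, 0.02, 0.30
    return (x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F) - E / F

def hable_normalized(x: float, peak: float = 11.2) -> float:
    # Rescale so the chosen white point maps to exactly 1.0
    return hable(x) / hable(peak)
```

Both curves expect linear-light input; an "extremely dark image no matter the settings" is the kind of result one would plausibly get by tonemapping gamma-encoded values without linearizing first.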
On Mon, 21 Apr 2025, 16:13 Andrew Randrianasulu <[email protected]> wrote:
Oh, maybe here is my answer? https://forum.doom9.org/showpost.php?p=1934075&postcount=94

======
As you probably are aware, the iPhone 12 can shoot in HDR HLG BT2020 Dolby Vision H.265 10-bit planar. The problem is that because of the tiny sensor it doesn't have many stops, which in turn doesn't give many nits; however, what Apple seems to be doing is very peculiar and requires a very dedicated matrix of linear transformation, hence this LUT.

This is the original picture shot by the iPhone 12: https://i.imgur.com/ubIAcsl.png

The reason why I had to make yet another matrix is that I wasn't really pleased with what the tonemapping algorithms were producing: on the left-hand side, Reinhard; in the middle, Hable; on the right-hand side the proper HDR HLG BT2020, interpreting both the color curve and the color matrix correctly: https://i.imgur.com/nBNE3q5.png They came straight from zscale of FFmpeg, but similar results can be achieved in Avisynth with HDRTools. After spending lots of time tweaking the parameters, I decided to make the LUT myself, and this is what I've got.

It seems that Apple's interpretation of the HLG curve is not exactly based on the BBC specs, as it has a black point that starts very high, way higher than it should. HLG is known to be a hybrid curve which starts like a linear BT2020 curve, so like SDR, but then changes and approaches its HDR nature, which is why it can be viewed both on SDR and HDR monitors. In Apple's implementation, however, this is NOT the case, as the black starts way higher, as if it were a proper logarithmic curve. This creates problems, as it's supposed to start lower, and "confuses" tonemapping algorithms.
====

So, in this case a custom LUT is the way to go ... ?
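For what it's worth, applying a .cube LUT like the one above is conceptually simple. Below is a hedged Python sketch (not from the thread) of a minimal .cube parser plus a trilinear lookup; ffmpeg's lut3d also offers tetrahedral interpolation, which splits each lattice cell into tetrahedra instead, but trilinear is easier to show. The tiny identity LUT at the end is made up purely for illustration:

```python
def parse_cube(text: str):
    """Minimal .cube parser: returns (size, flat table of (r, g, b) tuples).
    Table order follows the Adobe .cube convention: red index varies fastest."""
    size, table = None, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#') or line.startswith('TITLE'):
            continue
        if line.startswith('LUT_3D_SIZE'):
            size = int(line.split()[1])
            continue
        if line.startswith('DOMAIN_'):
            continue  # assume the default [0,1] domain
        parts = line.split()
        if len(parts) == 3:
            table.append(tuple(float(p) for p in parts))
    return size, table

def apply_lut_trilinear(rgb, size, table):
    """Look up one pixel with trilinear interpolation inside the lattice cell."""
    def lerp(a, b, t):
        return tuple(x + (y - x) * t for x, y in zip(a, b))
    coords, fracs = [], []
    for c in rgb:
        pos = min(max(c, 0.0), 1.0) * (size - 1)
        i = min(int(pos), size - 2)
        coords.append(i)
        fracs.append(pos - i)
    r0, g0, b0 = coords
    fr, fg, fb = fracs
    def at(ri, gi, bi):
        return table[ri + size * gi + size * size * bi]
    # interpolate along r, then g, then b
    c00 = lerp(at(r0, g0, b0), at(r0 + 1, g0, b0), fr)
    c10 = lerp(at(r0, g0 + 1, b0), at(r0 + 1, g0 + 1, b0), fr)
    c01 = lerp(at(r0, g0, b0 + 1), at(r0 + 1, g0, b0 + 1), fr)
    c11 = lerp(at(r0, g0 + 1, b0 + 1), at(r0 + 1, g0 + 1, b0 + 1), fr)
    return lerp(lerp(c00, c10, fg), lerp(c01, c11, fg), fb)

# tiny 2x2x2 identity LUT, made up for testing only
IDENTITY_CUBE = """LUT_3D_SIZE 2
0 0 0
1 0 0
0 1 0
1 1 0
0 0 1
1 0 1
0 1 1
1 1 1
"""
```

A real REC2020-to-709 LUT just has non-identity entries at the lattice points; the lookup code is the same.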
As far as I know, it is not advisable to use LUTs for color space conversion. They are good for the “color look.” Better to do it by hand. To get an idea of the steps to be taken (even if they cover the opposite case, i.e. SDR --> HDR, they are still rich in information):

https://sourceforge.net/projects/qtpfsgui/
https://codecalamity.com/encoding-uhd-4k-hdr10-videos-with-ffmpeg/

Also: https://www.youtube.com/watch?v=7nI6VPTfvok

In CinGG you could use “3 Color Way,” but it has the limitation of the “Value” slider.

PS: I recently made additions to the HDR manual; the last 2 commits. Could you please check and correct/delete the wrong parts? I am delving into the subject, but I am still confused.
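Doing the HDR10 --> SDR conversion “by hand,” as suggested above, essentially means: undo the HDR transfer curve (the PQ / SMPTE ST 2084 EOTF), tonemap in linear light, convert the gamut from BT.2020 to BT.709 with a 3x3 matrix, and re-encode with the SDR transfer. As a hedged sketch using the standard published constants (not code from CinGG or ffmpeg), the first and third steps look roughly like:

```python
def pq_eotf(e: float) -> float:
    """SMPTE ST 2084 EOTF: nonlinear PQ signal in [0,1] -> luminance in nits."""
    m1 = 2610 / 16384
    m2 = 2523 / 4096 * 128
    c1 = 3424 / 4096
    c2 = 2413 / 4096 * 32
    c3 = 2392 / 4096 * 32
    p = e ** (1 / m2)
    return 10000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

# Commonly published BT.2020 -> BT.709 gamut matrix (linear light)
BT2020_TO_BT709 = (
    ( 1.6605, -0.5876, -0.0728),
    (-0.1246,  1.1329, -0.0083),
    (-0.0182, -0.1006,  1.1187),
)

def bt2020_to_bt709(rgb):
    """Apply the 3x3 gamut matrix to LINEAR-light RGB (i.e. after the EOTF)."""
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in BT2020_TO_BT709)
```

Note the matrix can produce negative or >1.0 components for saturated colors; clipping or gamut mapping those is part of why tonemapper output differs between tools.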
On Tue, 22 Apr 2025, 11:28 Andrea paz <[email protected]> wrote:
As far as I know, it is not advisable to use LUTs for color space conversion. They are good for the “color look.” Better to do it by hand. To get an idea of the steps to be taken (even if they cover the opposite case, i.e. SDR --> HDR, they are still rich in information):
Yeah, thanks. The point is, ffmpeg integrates the various stages more tightly, so for example the codec's side data may contain the max luminance. After you decode that to a generic rgbaf32 timeline, you lose this info. By the look of it, mpv (0.39) with the shift-i hotkey shows HDR10/HDR10+ info if available.
https://sourceforge.net/projects/qtpfsgui/ https://codecalamity.com/encoding-uhd-4k-hdr10-videos-with-ffmpeg/ Also: https://www.youtube.com/watch?v=7nI6VPTfvok
In CinGG you could use “3 Color Way,” but it has the limitation of the “Value” slider.
Can you try to explain what is missing? Not enough room?
PS: I had recently made additions to the HDR manual; the last 2 commits. Could you please check and correct/delete the wrong parts? I am delving into the subject, but still confused.
Can you try to explain what is missing? Not enough room?
No, I can't explain it. If you remember, I had tried to extend the range of the Value slider, but it was then found to affect the shadow color wheel, so the commit was reverted. See the following screencast: https://streamable.com/ib71nc
On Tue, 22 Apr 2025, 15:52 Andrea paz <[email protected]> wrote:
Can you try to explain what is missing? Not enough room?
No, I can't explain it. If you remember, I had tried to extend the range of the Value slider, but it was then found to affect the shadow color wheel, so the commit was reverted. See the following screencast: https://streamable.com/ib71nc
Well, for now I can only suggest concentrating on the *video* concept of HDR, and leaving photo/3D integration for later! ;) While of course someone could try to create HDR video out of HDR photos or images ... but not all HDRs are defined equally, it seems! It is quite possible that, as of now, cingg simply lacks specific HDR video tools. I'll poke my Hackintosh to see what Apple has been doing in this area.
On Tue, 22 Apr 2025, Andrew Randrianasulu via Cin wrote:
While of course someone could try to create HDR video out of HDR photos or images ... but not all HDRs are defined equally, it seems!
Attaching an exr file in cingg should not be too difficult; at worst it could be a plugin analogous to the svg reader plugin. The exr file contains float data; one can load it into an RGB-Float track as-is, then tonemap it in some extra attached plugin, or even leave the float data as-is.

An HDR (EXR) image is not tied to any CMS or color space constraints; its luminance level can be arbitrary, unlike HDR video. Here 'HDR' for photo and for video is the same word with a different meaning.

_______________________________________________________________________________
Georgy Salnikov
NMR Group
Novosibirsk Institute of Organic Chemistry
Lavrentjeva, 9, 630090 Novosibirsk, Russia
Phone +7-383-3307864
Email [email protected]
_______________________________________________________________________________
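A hedged sketch of what “arbitrary luminance” means in practice for scene-referred EXR floats (an assumed illustration, not CinGG code): values can sit anywhere above 1.0, and an exposure adjustment in linear light followed by a display transfer such as the sRGB OETF is one simple way to bring them into a displayable range:

```python
def srgb_oetf(linear: float) -> float:
    """Encode one LINEAR-light component (clamped to [0,1]) with the sRGB curve."""
    x = min(max(linear, 0.0), 1.0)
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1 / 2.4) - 0.055

def exposure(linear: float, stops: float) -> float:
    """Scale a linear-light value by 2**stops before display encoding."""
    return linear * (2.0 ** stops)
```

An eyedropper reading of, say, 4.0 on a float track is a perfectly valid scene-referred value; pulling it down two stops with exposure() before encoding is the kind of choice a tonemapping step has to make.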
An HDR (EXR) image is not tied to any CMS or color space constraints; its luminance level can be arbitrary, unlike HDR video. Here 'HDR' for photo and for video is the same word with a different meaning.
Ah, thank you! That's what I had been missing to understand it... OpenEXR is linear, unlike DPX, precisely so that it can be used in the VFX environment. I was mixing up “artificial” luminance values with color spaces. Now I also understand the values above 1.0 read by the eyedropper tool, which was the thing that confused me the most. At this point I would be really curious how CinGG plugins work with hardware and HDR media.
participants (3)

- Andrea paz
- Andrew Randrianasulu
- Georgy Salnikov