"Unbounded" floating point image manipulation
I think this forum post (with still-working image inserts) illustrates what Andrea is trying to do inside CinGG: https://discuss.pixls.us/t/unbounded-floating-point-pipelines/6483/199 The input image is a bit blown out, but after a few layers of correction (_as long as your editing software does not clip to 1.0_) you get more detail back in the final image. The unavoidable fact that _everything_ (camera, monitor, projector, printer, eyes) is non-linear in how it transmits/senses/emits lightwaves of different frequencies makes all this quite a complex topic!
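A minimal sketch (mine, not from the linked thread) of why the unbounded part matters: an over-1.0 highlight survives a later gain-down in a float pipeline, but is already destroyed if any earlier stage clipped it. The pixel values are made up for illustration:

    #include <stdio.h>

    int main(void) {
        float highlight = 1.4f;   /* hypothetical blown-out linear-light value */
        float gain = 0.7f;        /* exposure correction applied later          */

        /* Unbounded float pipeline: out-of-range data is carried along. */
        float unbounded = highlight * gain;                            /* 0.98 */

        /* Pipeline that clips to 1.0 first (e.g. an integer intermediate):
           the highlight detail is flattened before the correction can help. */
        float clipped = (highlight > 1.0f ? 1.0f : highlight) * gain;  /* 0.70 */

        printf("unbounded: %.2f  clipped: %.2f\n", unbounded, clipped);
        return 0;
    }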
On Thu, 3 Apr 2025, Andrew Randrianasulu via Cin wrote:
> The unavoidable fact that _everything_ (camera, monitor, projector, printer, eyes) is non-linear in how it transmits/senses/emits lightwaves of different frequencies makes all this quite a complex topic!
Any modern camera has at least a 12-bit sensor (usually 14 or 16 bits), i.e. internally all cameras are HDR cameras. Only the old wet photography on film could avoid this quite complex topic!

_______________________________________________________________________________
Georgy Salnikov
NMR Group
Novosibirsk Institute of Organic Chemistry
Lavrentjeva, 9, 630090 Novosibirsk, Russia
Phone +7-383-3307864
Email [email protected]
_______________________________________________________________________________
Sorry to belabor my requests, but color is a topic that has always interested me, from the days of Photoshop 3.0 and then Gimp... The following link is interesting: https://ninedegreesbelow.com/photography/lcms2-unbounded-mode.html However, it is more suitable for image manipulation than for video editing, where there are complications, so it is not directly applicable to our discussion.

When I was asking about the operation of the Color Space plugin/tool, it was to find out whether CinGG uses the “standard formulas” used in video production. I will elaborate, but first I would like to draw attention to the distinction between color models (RGB, YUV, and HSV), which are infinite (although in practice artificially limited to the possibilities of human vision), and color spaces, which are a fraction of that. The limits of color spaces arise from the need not to exceed the hardware limits of the devices (the gamut). These limits have become standards, and consequently so have the formulas for conversion between color spaces. Not that there are not infinitely many other formulas, but often, for example for YCbCr --> sRGB, the same formula is mainly used (see Poynton: https://wangwei1237.github.io/shares/Digital_Video_and_HD_Algorithms_and_Int...).

Here, I was wondering if CinGG uses these standard formulas that are also the basis of the LUTs used in CMSs, or whether it has its own. If it used the same formulas, I would dream of one day getting to color management inside CinGG.
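For concreteness, this is the kind of “standard formula” in question: a sketch (mine, not CinGG code) of the BT.709 YCbCr-to-R'G'B' conversion, full range, with values normalized to 0..1 and Cb/Cr centered on 0. The coefficients follow from the published BT.709 constants Kr = 0.2126, Kb = 0.0722; whether CinGG's Color Space plugin uses exactly these is the open question here:

    /* BT.709 YCbCr -> R'G'B' (full range, normalized floats). */
    void ycbcr709_to_rgb(float y, float cb, float cr,
                         float *r, float *g, float *b) {
        *r = y + 1.5748f * cr;
        *g = y - 0.1873f * cb - 0.4681f * cr;
        *b = y + 1.8556f * cb;
        /* Note: no clamping here -- in an unbounded float pipeline the
           out-of-range results are kept; an integer pipeline would clip. */
    }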
Thu, 3 Apr 2025, 21:39, Andrea paz <[email protected]>:
> [...] Here, I was wondering if CinGG uses these standard formulas that are also the basis of the LUTs used in CMSs, or whether it has its own. If it used the same formulas, I would dream of one day getting to color management inside CinGG.
I think the original Cinelerra got floating point modes exactly to avoid clipping issues in the pipeline, among other things. No CMS, but everything was in linear gamma (so no gamma compression as in sRGB; that really breaks compositing, from my understanding of the thread I linked). Then, when overlay modes were added in CinGG, the assumption that you must clip to a defined 0-1.0 range sneaked in. We removed some of it lately, but ran into cases where such limiting might be unavoidable due to the way the algorithms work.

So, the simplest manual "CMS" would probably be just an lcms2 plugin working in 32-bit float and picking up the display ICC profile, and maybe the file's input ICC profile (so you apply it manually, at input first and at the very end of compositing second). But so far no one has written such a plugin! And of course ffmpeg feeds us at best 16-bit integers, so already pre-clipped to some range (even if the decoder is in principle capable of producing float values... like the MLV format?), but maybe that will be adjustable in future ffmpeg versions? Unfortunately, I am not a very convincing/knowledgeable person, so my attempts to bring this to the ffmpeg devs' attention failed. Some parts of the puzzle should now be there in ffmpeg-git, but I haven't tried to rebuild cin on top of that yet.

Fighting with the official pre-made Arch qcow hdd image: so far I chrooted into it from a livedvd under qemu (there is a problem with APIC for some reason? adding noapic to the kernel command line fixed that), ran into a keys problem, ran "pacman-key --populate archlinux" after pacman-key --init, updated with pacman -Syu, installed a bunch of packages with pacman -S packagename, verified I have cmake 4.0.0... and learned there is no startx yet! Right now I work via VNC to the qemu machine running Arch in framebuffer console mode; I will probably retry from a real X session on the host, because the default screen resolution is a bit too big for a portrait-oriented tablet screen! So... some steps done, more to do!
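A minimal sketch of what the core of such an lcms2 plugin could look like, assuming 32-bit float RGB frames; the profile paths are placeholders and error handling is elided. lcms2 in float mode with optimization disabled is what the ninedegreesbelow article calls "unbounded mode" (out-of-range values pass through):

    #include <lcms2.h>

    /* Hypothetical plugin core: build a transform from the file's input
       profile to the display profile, staying in 32-bit float RGB. */
    cmsHTRANSFORM make_display_transform(const char *input_icc,
                                         const char *display_icc) {
        cmsHPROFILE in  = cmsOpenProfileFromFile(input_icc, "r");
        cmsHPROFILE out = cmsOpenProfileFromFile(display_icc, "r");
        /* TYPE_RGB_FLT keeps the pipeline in float; cmsFLAGS_NOOPTIMIZE
           avoids the precomputed (range-limited) fast paths, which is
           what preserves "unbounded" out-of-range values. */
        cmsHTRANSFORM xf = cmsCreateTransform(in, TYPE_RGB_FLT,
                                              out, TYPE_RGB_FLT,
                                              INTENT_RELATIVE_COLORIMETRIC,
                                              cmsFLAGS_NOOPTIMIZE);
        cmsCloseProfile(in);
        cmsCloseProfile(out);
        return xf;
    }

    /* Then, per frame: cmsDoTransform(xf, src_pixels, dst_pixels, npixels); */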
Thu, 3 Apr 2025, 21:39, Andrea paz <[email protected]>:

> [...] often, for example for YCbCr --> sRGB, the same formula is mainly used (see Poynton: https://wangwei1237.github.io/shares/Digital_Video_and_HD_Algorithms_and_Int... ).
Yeah. At page 298 (in the file), Figure 26.6 shows a set of primary SPDs conformant to SMPTE 240M, similar to BT.709. Many different SPDs can produce an exact match to these chromaticities; the set shown is from a Sony Trinitron display. Figure 26.5 shows the corresponding colour-matching functions. As expected, the CMFs have negative lobes and are therefore not directly realizable; nonetheless, these are the idealized CMFs, or idealized taking characteristics, of the BT.709 primaries. We conclude that we can use physically realizable analysis CMFs, as in the first example, where XYZ components are displayed directly; but this requires nonphysical display primary SPDs. Or we can use physical display primary SPDs, but this requires nonphysical analysis CMFs. As a consequence of the way colour vision works, there is no set of nonnegative display primary SPDs that corresponds to an all-positive set of analysis functions. The escape from this conundrum is to impose a 3×3 matrix multiplication in the processing of the camera signals, instead of using the camera signals to directly drive the display. Consider these display primaries: monochromatic red at 600 nm, monochromatic green at 550 nm, and monochromatic blue at 470 nm. The 3×3 matrix of Equation 26.2 can be used to process XYZ values into components suitable to drive that display. Such signal processing is not just desirable; it is a necessity for achieving accurate colour reproduction!

======

Thing is, all this matrix algebra is floating point, YET most video codecs and the signalling to displays (DVI, HDMI...) operate on integer math in some range! Until roughly the R300/GF5xxx era (~2003?), GPUs had an internal floating point pipeline BUT it was accessible only via integer textures and renderbuffers! So the OpenGL part of Cinelerra was coded around the standards of that era. We can make OpenGL go via fp textures/renderbuffers now, but, for example, libavcodec will still give you "pre-processed" integer values, for software and especially for hardware decoders.

So, unless one deals with sequences of tiff/exr files, there will always be at least one step between what libavcodec outputs (a bunch of integers) and what cinelerra-gg can accept (32-bit float at best). Probably not a big deal for already-compressed h265, but those video camera raw formats have their own import modules in "big" NLEs like DVR, as far as I understand, with various manual/interactive controls. There was a BRAW decoder for ffmpeg by Paul Mahol, but it was left on patchwork, maybe partially because to use it you must manually debayer etc., and this process ought to be highly visual and interactive, while ffmpeg at its core is a batch processing tool, or a part feeding a display engine. A bit too low level, perhaps?

As various floating point transforms find increasing use in applications where ffmpeg is desirable/unavoidable (anyone want to code an h264 decoder from scratch?), ffmpeg will be forced to evolve from a fast but inaccurate ~2003 hack, usable only for the subset of operations most common on the consumer/display end, into a more accurate and versatile set of functions. But because its developers assume all other developers will just quietly adapt or die... I can only hope the next update will be manageable by me.

Back to the topic of OCIO- vs ICC-based color management: I think the mid-thread conclusion there was that you can have both, as long as you do not ruin your numbers in unexpected ways. Hopefully cinelerra-gg will not ruin them accidentally now that Georgy has implemented custom, user-settable overlay equations and more.
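To make Poynton's 3×3 matrix step concrete, here is a sketch using the well-known XYZ-to-linear-sRGB matrix (BT.709 primaries, D65 white; the coefficients are the standard published ones, this is not CinGG code). The negative entries are exactly the "negative lobes" above, which is why a mid-chain float pipeline that tolerates values outside 0..1 is needed:

    /* XYZ (D65) -> linear sRGB / BT.709 RGB, standard matrix.
       Out-of-gamut XYZ inputs yield negative or >1.0 RGB components;
       an unbounded float pipeline carries them, an integer one cannot. */
    void xyz_to_linear_rgb(float X, float Y, float Z,
                           float *r, float *g, float *b) {
        *r =  3.2406f * X - 1.5372f * Y - 0.4986f * Z;
        *g = -0.9689f * X + 1.8758f * Y + 0.0415f * Z;
        *b =  0.0557f * X - 0.2040f * Y + 1.0570f * Z;
    }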
But someone will need to push our existence into the mesa and ffmpeg developers' happy little worlds, so that some mutual understanding has a chance to develop (because all those individual company devs are too busy in their narrow burrows to spend time looking around, away from the spotlights). Unfortunately, I do not have the means to get to the EU and wave a "Stop ignoring us!" banner. And email-based communication is easily (too easily) dismissed, unless you are big Netflix (or Valve) with money to throw at the problem.
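A concrete aside on the OpenGL fp-texture point above: a minimal sketch (mine, using standard GL 3.0+ names and assuming the context/loader setup exists, not CinGG's actual code) of allocating a floating-point texture, which is what lets values outside 0..1 survive GPU compositing:

    #include <GL/gl.h>

    /* Allocate a 32-bit float RGBA texture.  Unlike the legacy GL_RGBA8
       path, GL_RGBA32F neither clamps to 0..1 nor quantizes to 8 bits. */
    GLuint make_float_texture(int width, int height, const float *pixels) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
                     GL_RGBA, GL_FLOAT, pixels);
        return tex;
    }

    /* For readback, GL 3.0 also lets you disable clamping:
       glClampColor(GL_CLAMP_READ_COLOR, GL_FALSE); */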
Thank you for the detailed explanation. The sentence "So, unless one deals with sequences of tiff/exr files, there will always be at least one step between what libavcodec outputs (a bunch of integers) and what cinelerra-gg can accept (32-bit float at best)", together with what Georgy explained, indicates that not much can be done, not only because of CinGG and its plugins, but especially because of ffmpeg.

Out of curiosity I tried using CinGG's internal engine instead of the usual default ffmpeg. Tests with the two Blend Program plugins give the same results as with ffmpeg. I should try a pipeline with EXR sequences and see if I can keep the color consistent. Looking at the ffmpeg-devel mailing list, I understand that it is impossible to bring ffmpeg to work in float; they would probably have to rewrite everything from scratch (I saw that you reported our thread, with no response...).

Arch: startx requires you to install Xorg (I don't know about Wayland). Maybe that is the problem? Why did you install Arch anyway; isn't it just a problem with CMake?
participants (3)
- Andrea paz
- Andrew Randrianasulu
- Georgy Salnikov