[Cin] more vfx from reddit, exr, linearization
Andrew Randrianasulu
randrianasulu at gmail.com
Tue Nov 7 18:57:03 CET 2023
https://www.reddit.com/r/vfx/comments/ptk6ej/confusion_about_floating_point_in_color_space/
well, I think this answer answers it ...
Q first:
=====
Confusion about Floating Point in Color Space Transforms/Management
Dear VFX Gurus/Professionals, I am not a VFX person at all; my question is
more from a Color Management standpoint. Seeing how VFX artists follow the
same color management rules as colorists (I'm not really a colorist either,
just someone who recently developed a bit of a hobby) and equally have to
deal with the confusing physics of light, I was wondering if you fine
people could enlighten me.
To explain my confusion I'm going to use the example of a hypothetical
camera sensor which CLIPS white at a maximum of 1000% scene-linear
reflectance and crushes blacks at 0% light (I know there is no such thing
as 0% light, but it makes the numbers easier to deal with).
I've attached below a hypothetical 10-bit Log transform and a 16-bit Linear
RAW graph of linear-light input against code-value output (the so-called
OETF). The output values are both in code values (normalized from 0-1).
{img omitted}
Now, as far as I understand, when the 10-bit Log encoded image goes
through a Color Space Transform, the first thing that happens is that it
gets "linearized" to 32-bit Floating Point (this is part of the IDT or
Input Transform in DaVinci Resolve, or 16-bit half float in the case of an
ACES VFX workflow) before anything else happens to it. Now, I'm not a
software developer or video engineer or anything of that sort, but having
read about Floating Point (which is much more confusing than integer
values), it's said that Floating Point allows for OUTPUT values greater
than 1 and less than 0. The problem I'm having is understanding how this
relates to scene-linear reflectance getting matched to the output in
linear Floating Point. For instance, does 100% light reflectance give an
output of 1 (the highest possible value in integers)? Does 500%
reflectance give an output of 5? What about 1000% reflectance, would that
be an output of 10, and would the graph end there? Or am I completely off,
and 1000% reflectance would still be an output of 1, just that between 0
and 1 there'd be approximately 4 billion values worth of data (then what's
the point of having values greater than 1 and less than 0)?
The thing that's tricking me is that the entire dynamic range of the camera
gets squeezed into a 0-1 output range in the integer-encoded camera files,
but I can't understand what happens to it when it becomes linearized
32-bit Floating Point. On the one hand, everyone says these (ACES Linear
Gamma or DaVinci Wide Gamut) are HDR workflows, but the problem is everyone
just uses 0s and 1s for their light inputs and outputs when they're showing
the graphs. For all I know, an input of 1 could just mean a reflectance of
100% (just like in linear RGB), or it could mean a 1,000,000% reflectance
(which is indeed MEGA-HDR). And what of the outputs potentially being
greater than 1 and less than 0 (what does that even mean)? I'll admit I
haven't really tried to truly understand Floating Point operations;
perhaps that's where I am failing.
How is this 32-bit Floating Point any different from the Linear RAW (other
than it being 32-bit float rather than 16-bit)? Are they exactly the same,
with the only difference being a vastly bigger number of data values
between output 0 and 1 (which linearly maps 0-1000% light reflectance)?
What's the significance of having output values greater than 1 and less
than 0? Just very lost on this. And yes, I have read multiple articles on
this (Chris Brejon's, the ACES manual, etc.), but the lightbulb isn't
turning on for some reason. Feel free to correct me, since as an amateur I
definitely need guidance. Thanks.
=====
Welcome to the fun of colorspace. The whole thing is a pain. I've set
up some facility-level color pipelines (camera to broadcast), and even when
you have a decent grasp of it, it's still basically a nightmare.
A properly linearized image, rendered to EXR, will show values greater than
1; in my example earlier, they'd read 13-16 and everything in between.
That's what half float (16-bit) and full float (32-bit) allow to happen.
But, and this is a big one, most container formats can't support values
over 1, so historically camera images have been encoded with a LOG curve
to jam their data into that 0-1 range. So it's important to note which
colorspace transform has been applied to create the output image. Nuke
makes it explicit per format type; Resolve defaults to gamma 2.4 unless
otherwise changed at the project level.
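To make that "jam into 0-1" step concrete, here's a minimal Python sketch.
The curve below is a made-up toy, not any real camera's OETF, and the 16.0
clip point (1600% reflectance) is just an assumption:

    import math

    LIN_MAX = 16.0  # assumed sensor clip point, i.e. 1600% reflectance

    def log_encode(lin):
        # compress scene-linear [0, LIN_MAX] into the 0-1 range an
        # integer container needs
        return math.log2(1.0 + lin) / math.log2(1.0 + LIN_MAX)

    def log_decode(code):
        # exact inverse: expand 0-1 code values back to scene linear
        return 2.0 ** (code * math.log2(1.0 + LIN_MAX)) - 1.0

    for lin in (0.18, 1.0, 5.0, 13.0, 16.0):  # 18% ... 1600% reflectance
        code = log_encode(lin)
        print(f"linear {lin:5.2f} -> code {code:.3f} "
              f"-> back {log_decode(code):5.2f}")

Linearizing just runs log_decode, and values like 13.0 come out the other
side again, which is why the EXR then needs float pixels to hold them.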
The note above about reading in RAW, IIRC, is to make sure all the data is
pulled in (no clipping) and then the correct colorspace transform is
applied in-app, pushing the super-white values above 1. This was done for a
couple of reasons: a) to again keep all the data, and b) to allow any legal
-> full range expansion before applying the colorspace move. It's less
prevalent now, but there used to be a workflow where the camera SDI was
outputting a legal feed, and the outboard capture decks were also applying
a full -> legal transform, creating master files that were double
legalized.
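A quick numeric sketch of that double-legalization problem, using the
standard SMPTE 10-bit levels (legal black 64, legal white 940, full range
0-1023):

    # Standard 10-bit video levels: legal range 64-940, full range 0-1023.

    def full_to_legal(cv):
        return 64 + cv * (940 - 64) / 1023

    def legal_to_full(cv):
        return (cv - 64) * 1023 / (940 - 64)

    once  = full_to_legal(1023)   # 940.0  -- correct legal white
    twice = full_to_legal(once)   # ~868.9 -- "double legalized" white
    print(once, twice, legal_to_full(twice))  # one expansion only undoes
                                              # one of the two squeezes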
A lot of folks still render EXR with the LOG curve applied, so it's still
technically 0-1 range. This is also the safest way for VFX vendors,
because if we reapply the same log curve on export, when it gets to the
color house it will be apples-to-apples the same as the source, regardless
of how they linearize. In theory the best practice would be to linearize
the footage and leave it linear, but in reality that's problematic, as
each software suite uses slightly different math to linearize. ACES color
management is meant to solve this, but again most folks don't actually
work in ACES correctly; to be fair, most folks don't manage color in
Resolve correctly either.
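That round-trip safety is easy to demonstrate with the same toy curve as
above (repeated here so the sketch runs standalone); re-encoding with the
identical curve hands back exactly what was received, while a slightly
different decode (a hypothetical variant, purely for illustration) does
not agree:

    import math

    LIN_MAX = 16.0  # same toy curve as the earlier sketch

    def log_encode(lin):
        return math.log2(1.0 + lin) / math.log2(1.0 + LIN_MAX)

    def log_decode(code):
        return 2.0 ** (code * math.log2(1.0 + LIN_MAX)) - 1.0

    # Decode then re-encode with the *same* curve: lossless round trip,
    # so the color house gets back exactly the plate it sent out.
    src = 0.7342
    assert abs(log_encode(log_decode(src)) - src) < 1e-12

    # A second package with slightly different math (hypothetical variant)
    # produces different "linear" values from the same code value:
    def log_decode_other(code):
        return 2.0 ** (code * math.log2(1.0 + LIN_MAX + 0.01)) - 1.0

    print(log_decode(0.9), log_decode_other(0.9))  # close, but not equal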
Also, I've seen this a few times: depending on where you start, some
people speak in opposite jargon, i.e. when they say "log" they mean "was
log, has been linearized", and when they say "linear" they mean flat raw
log. When I refer to footage as log I mean very flat-looking camera
native, and when I say linearized I'm referring to footage that looks much
closer to how we would see it if we were there on the day.
==== quote end =====
So... it seems to be software-dependent what kind of values end up in
fp32 EXRs...
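For what it's worth, the arithmetic the question keeps circling works out
exactly as guessed under the common scene-linear convention (100% diffuse
reflectance maps to 1.0). A minimal sketch, using numpy only to show the
half-float behaviour:

    import numpy as np

    # Common scene-linear convention: 100% reflectance = 1.0,
    # so 500% -> 5.0 and 1000% -> 10.0, just as the question guessed.
    reflectance_pct = np.array([18.0, 100.0, 500.0, 1000.0],
                               dtype=np.float32)
    linear = reflectance_pct / 100.0
    print(linear)                      # [ 0.18  1.  5.  10. ]

    # Half float (the usual EXR pixel type) carries those over-1 values
    # without clipping, and negative values are legal too (e.g. grain or
    # sharpening overshoot below black):
    print(linear.astype(np.float16))   # still [ 0.18  1.  5.  10. ]
    print(np.float16(-0.02))           # "below black", perfectly storable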