[Cin] OpenEXR encoding is not multithreaded?

Andrew Randrianasulu randrianasulu at gmail.com
Sat Mar 14 04:46:00 CET 2020

Hello again!

I had this question about why I can't see OpenEXR as a choice in the 'background render'
format chooser. Now I have an answer...

I tried to composite 2 x 1920x1080 images (project format RGBA-float).
Speed while just previewing was up to 5.5 fps (at full speed).

Now, rendering 10 seconds of such video took 7 min 17 sec when I chose
OpenEXR with alpha included and pxr24 compression (into tmpfs, which usually sustains more than 500 MB/s).

Each file was around 2.4 MB.

Definitely not realtime. Of course, even perfect multithreading on a 4-way
machine like mine will not bring it close to realtime - but I think
it would still make this output format more useful. If frame-level multithreading
in the OpenEXR encoder, as driven by CinGG, can be implemented relatively
trivially - it would be great for a speed addict like me :}

I mean, something like creating a pool of image buffers in the encoder itself, with n = num_of_cpus,
but capped so that the total buffer size stays below, say, 1/2 of RAM. Then call the OpenEXR
compressor on each buffer. Then wait until a buffer becomes ready, write it out, and reload the
buffer with a new image. Repeat until all images are compressed.

Of course there is the question of ordering ... or not?

If the OpenEXR output is really a 'sequence' of separate images, as I chose - can each file
even be written independently?

If image order does matter somehow - equip each buffer with a serial number,
and then reuse the timestamp info from the core?

Also, on the question of GPU precision ...

"Extended-Precision Floating-Point Numbers for GPU Computation"

by Andrew Thall, Alma College

Addendum to the abstract:
"   [Addendum (July 2009): the presence of IEEE compliant
double-precision hardware in modern GPUs from NVidia and
other manufacturers has reduced the need for these techniques.
The double-precision capabilities can be accessed using CUDA or
other GPGPU software, but are not (as of this writing) exposed
in the graphics pipeline for use in Cg-based shader code. Shader
writers or those still using a graphics API for their numerical
computing may still find the methods described herein to be of [...]"

"   Testing accuracy and precision of hardware systems is problem-
atic; for a survey see Cuyt et al [23]. The Paranoia system [24],
developed by Kahan in the early 1980s, was the inspiration
for GPU Paranoia by Hillesland and Lastra [25], a software
system for characterizing the floating point behavior of GPU
hardware. GPU Paranoia provides a test-suite estimating floating
point arithmetic error in the basic function, as well as char-
acterizing apparent number of guard digits and correctness of
rounding modes. Recent GPU hardware has shown considerable
improvement over earlier. For the purposes of extended-precision
computation, 32-bit floats are required. For Nvidia GPUs, the
6800 series and above were the first to have sufficient precision
and IEEE compliance for extended-precision numerical work; the
7800/7900/7950 series has better IEEE compliance, allowing all
but the transcendental functions to be computed; for computa-
tions involving Taylor series, rather than self-correcting Newton-
Raphson iterations, the superior round-off behavior and guard-
digits of the 8800 series are necessary."

---quote end---

paper: http://andrewthall.org/papers/df64_qf128.pdf

But I guess if Natron is happy with OpenGL - then Cinelerra can be too ...
