Wed, 31 May 2023, 22:35 Andrew Randrianasulu <randrianasulu@gmail.com>:


Wed, 31 May 2023, 14:09 Andrea paz <gamberucci.andrea@gmail.com>:
> is HDR basically "some 8 or 10 bit tech + wide gamut light emitting"?

I am not familiar with HDR. All I know is theoretical and dated
(Brinkmann's book on compositing).
HDR can only be represented in floating point, using a normalized color
range (0-1 instead of 0-255, etc.). In floating point it is possible to
have values above 1, while in 8- or 10-bit integer formats values cannot
exceed the range limits.
Upon receiving an HDR video signal, algorithms are used to map and
balance these values onto SDR or HDR displays. This is called tone
mapping.
To summarize, you need:
1- An HDR video signal (usually obtained by merging multiple frames at
different exposures).
2- A tone mapping tool (see the example command after this list).
3- A suitable display (usually one with high peak brightness, i.e., many nits).
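
(Not from Andrea's mail, just a sketch for point 2: one common command-line way to tone-map a PQ/BT.2020 source down to SDR BT.709. It assumes an ffmpeg build with the zimg-based zscale filter; file names are placeholders.)

ffmpeg -i input_hdr.mkv \
  -vf "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" \
  -c:v libx264 output_sdr.mp4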

So, maybe because of point 1 modern smartphones have so many cameras!



I am looking at the Shotcut forums just for inspiration.


====

Any editor working in HDR needs to be able to import BT.2100 HLG, BT.2100 PQ, BT.2020 and BT.709 files and map them to a common timeline format, output to those 4 formats, and let the user calculate the HDR10 metadata if a PQ output format is chosen (this isn't automatic, as you need to know the gamut and peak brightness of the monitor the edit was done on).

===
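
(My own note, not part of the quote above: to see which of those four formats a given file actually declares, ffprobe can dump the stream-level color properties. The file name is a placeholder; look for color_transfer=smpte2084 for PQ, arib-std-b67 for HLG, bt709 for SDR, and color_primaries=bt2020 vs bt709.)

ffprobe -v error -select_streams v:0 \
  -show_entries stream=pix_fmt,color_range,color_space,color_transfer,color_primaries \
  -of default=noprint_wrappers=1 input.mkv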


=== quote ===

HDR Support (x264 & x265):


Tagging DCI-P3: --master-display G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(?,1)

Tagging BT.709: --master-display G(15000,30000)B(7500,3000)R(32000,16500)WP(15635,16450)L(?,1)

Tagging BT.2020: --master-display G(8500,39850)B(6550,2300)R(35400,14600)WP(15635,16450)L(?,1)

L(max,min) is the mastering display luminance in units of 0.0001 cd/m2 (e.g., L(10000000,1) means 1000 cd/m2 maximum and 0.0001 cd/m2 minimum). There is no automatic value: it needs to be checked and written manually for each HDR video. (This syntax is x265's; x264's equivalent option is --mastering-display with the same value format.)
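
(A sketch of my own, not part of the quote; the input file, resolution and luminance values are placeholders. A standalone x265 call that tags a 10-bit BT.2020/PQ encode as mastered on a P3-D65 display with 1000 cd/m2 peak: the G/B/R/WP integers are the CIE xy coordinates multiplied by 50000, and the L values are cd/m2 multiplied by 10000.)

x265 --input source.yuv --input-res 3840x2160 --fps 24 --input-depth 10 --profile main10 \
     --colorprim bt2020 --transfer smpte2084 --colormatrix bt2020nc \
     --master-display "G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1)" \
     --output tagged_hdr10.hevc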


If the tagging for the source video is unknown, look for one of the following primary sets in the source video metadata:

DCI-P3: G(x0.265, y0.690), B(x0.150, y0.060), R(x0.680, y0.320), WP(x0.3127, y0.329)

BT.709: G(x0.300, y0.600), B(x0.150, y0.060), R(x0.640, y0.330), WP(x0.3127, y0.329)

BT.2020: G(x0.170, y0.797), B(x0.131, y0.046), R(x0.708, y0.292), WP(x0.3127, y0.329)
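
(My addition: the integers in --master-display above are just these xy coordinates multiplied by 50000, with the white point last. A throwaway shell helper to check the conversion; the function name is made up.)

xy_to_md() { awk -v x="$1" -v y="$2" 'BEGIN { printf "(%.0f,%.0f)\n", x * 50000, y * 50000 }'; }
xy_to_md 0.170 0.797     # BT.2020 green   -> (8500,39850)
xy_to_md 0.3127 0.329    # D65 white point -> (15635,16450)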


Content luminance, x265: --max-cll <max content light level cd/m2, max frame-average light level cd/m2>, e.g., 1000,640

Content luminance, x264: --cll <max content light level cd/m2, max frame-average light level cd/m2>, e.g., 1000,640

CLL has no automatic/default value and needs to be checked and written manually for each HDR video.
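
(My addition: if the source already carries HDR10 metadata, the mastering-display and content-light-level values can be read from the first frame's side data with ffprobe instead of guessed. The file name is a placeholder; the side data only appears if the source actually signals it.)

ffprobe -v error -select_streams v:0 -read_intervals "%+#1" \
  -show_frames -show_entries frame=side_data_list input.mkv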


Indicate HDR10 content in supplemental enhancement information (SEI), x265: --hdr10


Optimize HDR10 content per block (optional; can increase video size), x265: --hdr10-opt


Indicate the color matrix and transfer characteristics: --colormatrix <as source> --transfer <as source>

The color matrix can vary, both for compatibility reasons and because of multiple HDR implementations (e.g., gbr, bt709, fcc, bt470bg, smpte170m, ycgco, bt2020nc, bt2020c, smpte2085, ictcp). Check the source video metadata for it.
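
(Putting the quoted options together, a sketch of my own; file names and metadata values are placeholders. With ffmpeg, the same x265 options are passed through -x265-params, and the stream-level -color_primaries/-color_trc/-colorspace flags keep the container tags consistent with the bitstream.)

ffmpeg -i graded_master.mov -c:v libx265 -pix_fmt yuv420p10le \
  -color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020nc \
  -x265-params "hdr10=1:repeat-headers=1:colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1):max-cll=1000,400" \
  -c:a copy output_hdr10.mkv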


====

https://stackoverflow.com/questions/69251960/how-can-i-encode-rgb-images-into-hdr10-videos-in-ffmpeg-command-line

This answer says mastering-display is NOT for your editing display but for the ideal viewing display! Are they supposed to be the same?

Not sure how you calculate max-cll and the rest from your display / source ...

 

https://github.com/HDRWCG/HDRStaticMetadata

 

HDRStaticMetadata

HDR GENERATOR TOOL

In essence, this tool calculates the maxFALL and maxCLL of 16-bit TIFF frames, using the formula 'PQ10000_f' to linearize. The application scans a folder of TIFF files and performs the calculations on them concurrently, according to the number of threads the user specifies. The results are logged to a file, together with the files processed and the time at which they were processed.

OpenImageIO is used to read the files into a 16-bit vector. OpenCV is used for convenient pixel access and for cropping an image for the frame-average light level calculation. QtCore and QtConcurrent are used for file system access and concurrency, respectively.

The text files generated in this process are then analyzed in a post-process tool to calculate the maxFALL and maxCLL values at 99.9%.

=======


from https://forum.doom9.org/archive/index.php/t-177135.html


How to analyze an HDR video for peak brightness level for the setting of metadata?


so, this is a bit more complicated than just feeding x265 with params ....