https://github.com/CESNET/UltraGrid/discussions/203
10bit display with SDL/GL #203
has this hint (may not be needed on DisplayPort):
====
it is also important to set the display to Full range and set the max bpc
ACTIVEDISPLAY=`xrandr | grep " connected " | awk '{ print $1 }'`
xrandr --output $ACTIVEDISPLAY --set "Broadcast RGB" "Full" --set "max bpc" 10
https://www.soi.pw/posts/10-bit-color-on-ubuntu-20.04-with-amdgpu-driver/
====
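To verify the properties actually took effect, something like this should
work (a small sketch; the exact property names vary by driver):
# list per-output properties and look for the two we just set
xrandr --prop | grep -iA1 -e "max bpc" -e "Broadcast RGB"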
https://github.com/CESNET/UltraGrid/commit/03b60c9931b1fc83ca91bee09dbe4266…
the referenced PDF says this:
https://www.amd.com/system/files/documents/10-bit-video-output.pdf
"Figure 5: 10-bit display feature can be enabled by checking the “Enable
10-bit pixel format support” checkbox in the Catalyst Control Center. Note
that this feature is only available in workstation (ATI FireGL™) cards.
Once the 10-bit pixel format support is enabled, the system will request a
reboot, and after that any 10-bit aware OpenGL application will be displayed
in 10 bits without any clamping. In the following, we demonstrate how an
application programmer can easily create a 10-bit aware OpenGL
[skip]
Creating a 10-bit Texture
The previous section highlighted several methods to choose a 10-bit pixel
format. It is important to note that once a 10-bit pixel format is chosen,
any smooth shading operation will immediately take advantage of the
increased bit depth. In other words, the user does not need to explicitly
provide 10-bit input data to benefit from the increased precision. This is
due to the fact that the internal processing of colors in the graphics card
is in floating-point precision, with 10-bit (or 8-bit) conversion occurring
only at the output stage.
However, it is also possible to explicitly create and work with 10-bit
textures, as will be explained in this section. 10-bit texture support is
exposed to the user through a packed RGB10_A2 texture format, which contains
10 bits for each of the color channels and 2 bits for the alpha component."
====
The original issue also should contain test images:
https://github.com/CESNET/UltraGrid/files/7525556/gradients.zip
https://github.com/mpv-player/mpv/pull/8648
Not quite it: not on the master branch yet (look for the "hdr" branch for a
recent rebase), but for some Arch users it worked?
====
With the already noted exception that mpv needs to be run once and then run
again before the monitor enters HDR mode, this seems to be working for me
with the following options:
mpv --no-config --target-trc=pq --target-peak=400 --video-output-levels=full \
  --target-prim=dci-p3 --hdr-compute-peak=yes --gamut-clipping=no \
  --tone-mapping=mobius --vo=gpu --gpu-api=opengl --gpu-context=drm \
  --drm-send-hdr-meta=auto --drm-connector=HDMI-A-0 FILE
I'm on an Arch-based system, Linux 5.10, AMDGPU, connected over HDMI to an
Acer XV272U.
=====
https://web.archive.org/web/20170822225433/http://www.mysterybox.us/blog/20…
====
A slightly more concerning consideration, however, is the availability of
high-quality 12+ bit codecs for use in intermediate files. Obviously any
codec using only 8 bits/channel is out of the question for HDR masters
or intermediates, since 10 bits are required by all HDR standards. 10-bit
encoding is completely fine for mastering space, and codecs like ProRes
422, DNxHR HQX/444, 10-bit DPX, or any of the many proprietary
‘uncompressed’ 10-bit formats you’ll find with most NLEs and color
correction software should all work effectively.
However, if you’re considering which codecs to use as intermediates for HDR
work, especially if you’re planning on an SDR down-grade from these
intermediates, 12 bits per channel as a minimum is important. I don’t want
to get sidetracked into the math behind it, but just a straight cross
conversion from PQ HDR into SDR loses about ½ bit of precision in data
scaling, and another ¼ - ½ bit precision in redistributing the values to
the gamma 2.4 curve, leaving a little more than 1 bit of precision available for
readjusting the contrast curve (these are not uniform values). So, to end
up with an error-free 10 bit master (say, for UHD broadcast) you need to
encode 12 bits of precision into your HDR intermediate.
ProRes 4444 / 4444 (XQ), DNxHR 444, 12 bit DPX, Cineform RGB 12 bit, 16 bit
TIFFs, or OpenEXR (Half Precision) are all suitable intermediate codecs,
though it’s important to double check all of your downstream applications
to make sure that whichever you pick will work later. Similarly, any of
these codecs should be suitable for mastering, with the possibility of
creating a cross converted grade from the master later.
====
Hm, so even if swscale from the rgba32f timeline to libavcodec is still
buggy, we can just use cingg's built-in TIFF/EXR output, I guess?
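For the record, a minimal sketch of such an export via ffmpeg itself,
assuming a hypothetical master.mov source (rgb48le gives 16 bits per
channel, comfortably above the 12-bit floor from the quote above):
# dump a 16-bit TIFF frame sequence as an HDR-safe intermediate
ffmpeg -i master.mov -pix_fmt rgb48le -compression_algo lzw frames_%05d.tif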
Strangely, for now those blog posts are not linked from the site?
I found the link while looking inside the HDR.zip provided at the doom9
forums:
https://spaces.hightail.com/space/nEaXy
91 MB!
Note there was additional discussion about using the zimg/zscale filter in
ffmpeg for getting P3-D65:
https://www.reddit.com/r/ffmpeg/comments/ty9c5k/color_metadata_dcip3/
====
Zscale does have support for DCI white point P3 and D65 P3.
ffmpeg -i INPUT.mp4 -c:v libx265 -crf 20 \
  -vf zscale=r=limited:m=2020_ncl:t=2020_10:p=smpte431:c=topleft,format=yuv420p10 \
  -color_primaries smpte431 -colorspace bt2020nc -color_trc bt2020_10bit \
  -color_range tv OUTPUT.mp4
===
I wonder if such a file plays in, say, mpv as HDR on your display?
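Before playing it, one can at least check what actually got tagged; a quick
ffprobe sketch (OUTPUT.mp4 as above):
# print the color metadata the muxer wrote into the stream
ffprobe -v error -select_streams v:0 \
  -show_entries stream=pix_fmt,color_range,color_space,color_transfer,color_primaries \
  -of default=noprint_wrappers=1 OUTPUT.mp4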
old ffmpeg patches (2016):
http://ffmpeg.org/pipermail/ffmpeg-devel/2016-October/201962.html
"vf_colorspace: Add support for smpte 431/432 (dci/display p3) primaries"
but the previous 2 patches might not be so trivial ...
https://discuss.pixls.us/t/experimenting-with-hlg-and-hdr10-for-still-image…
====
ffmpeg -loop 1 -r 24 -t 30 -i DSC02236.tif \
  -vf pad=3840:2160:302:0:black,zscale=tin=linear:t=arib-std-b67:npl=800:m=bt2020nc,format=yuv420p10le \
  -c:v libx265 -preset medium -crf 26 \
  -x265-params "colorprim=bt2020:atc-sei=18:colormatrix=bt2020nc" naturalbridge.mp4
====
zimg2 should be in Debian since late 2021
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=966059
ffmpeg (7:4.3.2-2) experimental; urgency=medium
  * debian/:
    - Build with zimg (Closes: #966059)
I wonder if cingg should add zimg too?
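Quick way to check whether any given ffmpeg build actually has it (sketch):
# zscale only appears if ffmpeg was configured with --enable-libzimg
ffmpeg -hide_banner -filters | grep zscale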
Not sure how legitimate this website is (they all can sound professional!),
but ...
https://displayhdr.org/certified-products/
====
*PC monitors for professional content-creators.*
Outstanding local-dimming, high-contrast HDR with advanced specular
highlights:
- Peak luminance of 1400 cd/m2 – more than 4x that of typical displays
- Full-screen flash requirement delivers ultrarealistic effects in
gaming and movies
- Unprecedented long duration, high performance ideal for content
creation
- Dynamic contrast ratio that is 3.5X greater than the DisplayHDR 1000
level
- Increased color gamut (95% DCI-P3 65) compared to all other current
DisplayHDR tiers
DisplayHDR 1400 Certified Products
====
And apparently displayHDR 400 actually can be just 8-bit!
https://displayhdr.org/#tab-400
Another article describing the relationship between various HDR parameters:
https://www.electronicshub.org/what-is-an-hdr-monitor/
Note, the table at the end of this article says HDR500 monitors can be
8-bit, but the site above lists a 10-bit processing requirement at this
level .... o.O
Confusing as always!
I think today ffmpeg/x265 can make static-metadata HDR, but making the
dynamic kind .... not sure how it even should be done, in an editing
context! Attach additional info to each vframe?
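For the static kind at least, x265 takes the mastering-display and
light-level values directly on the command line; a hedged sketch (the SMPTE
ST 2086 numbers below are the common P3-D65 / 1000-nit textbook example, not
measured values):
# static HDR10 metadata: master-display + MaxCLL,MaxFALL
ffmpeg -i input.mov -c:v libx265 -crf 18 -pix_fmt yuv420p10le \
  -x265-params "colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:\
master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1):\
max-cll=1000,400" output_hdr10.mp4
For the dynamic kind, x265 does have a --dhdr10-info option that takes
per-frame HDR10+ metadata as JSON (when built with HDR10+ support), which is
more or less the "attach additional info to each vframe" model.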
https://github.com/archont00/cinepaint-oyranos
I was reading the oyranos website, and especially the archive.org version
of this site, and found out this specific tool was used back in 2011 for
showcasing image editing on wide-gamut monitors. There apparently is a
draft X Color Management spec, implemented by basically CompICC on the
server side and icc-examin / the sample image viewer / cinepaint as
clients.
There even was a SUSE-based live CD from 2011, but it obviously will not
fly (with hw 3D accel) on more modern GPUs ..:(
http://www.oyranos.org/compicc/index.html
http://www.oyranos.org/libxcm/index.html
"The X Color Management specification allows to attach colour regions
to X windows to communicate with colour servers."
So, it was/is region-based, not per-window (makes sense due to
in-window controls, for example)
https://web.archive.org/web/20111116054348/http://www.oyranos.org/page/2/
"
CinePaint full screen
Posted on September 8, 2011
On my git repository with CinePaint patches is the full screen mode
changed. It uses the GTK functionality. That mode is now window manager
based and will not work with twm. The old code is still ifdefed for
users who rely on that and compile it. Putting a full fledged solution
like in gqview inside CinePaint would have been too much work.
New is as well resizing of frames in the flip book, to allow me to
load a bunch of cameraRAW images and watch them much like in a slide
show. Colour regions are updated for the colour server.
And last but not least, a fix is now available to let CinePaint run on
non KWin and non Compiz window managers. I tested on twm.
The package is patched in OBS.
Posted in cinepaint, imaging | Tagged colour management, ICC "
=====
Because I do not have a wide-gamut monitor, user testing of this rare
combination is still required, even just to watch it in action.
Anyone brave enough to compile it from git? :) I have it compiled on
Slackware 15.0 i586, so it ought to be compilable on ~modern Linux.
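If anyone tries, this is roughly the shape of it (a sketch only; I am
assuming the usual autotools flow of CinePaint-era trees, so the exact
steps may differ):
git clone https://github.com/archont00/cinepaint-oyranos
cd cinepaint-oyranos
./configure --prefix=$HOME/cinepaint-test  # or ./autogen.sh first if no configure script ships
make && make install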
https://www.qsl.net/n1gg/linux/video/linuxdvdguide.html#6
=====
Chapter 6: Subtitling
Subtitling is an integral part of many DVD
productions. Although I personally have used subtitles very rarely, this
guide would not be complete without some information about subtitles.
Many times it is necessary to subtitle video if the audio language needed
is not available, or for the hearing impaired, or to add commentary. The
DVD specification allows for a very nice subtitling method. Instead of
having to render the text on the video itself (which would mean you could
not turn the subtitling off), DVD implements subtitling as a series of
images overlayed on top of the video. This way, you can switch between
multiple subtitle tracks, or turn them off altogether. Although this takes
up more data space than plain text subtitles, it is far more versatile: the
fact that images are used makes it possible to subtitle using any font and
even nonstandard characters and images.
6.1 DVD Subtitle Format
The DVD subtitle specification allows for 4-color
images with a transparent background. They can be created in nearly any
format, but must be converted to the special DVD compliant stream before
they can be put into the DVD structure.
6.2 Subtitling MPEG Streams For DVD Using Spumux
Spumux is one such tool to
create DVD subtitle streams. Although there are many tools for subtitling,
Spumux is very useful in many areas and I have experience with it.
It accepts several image formats, including PNG, which I find to be the
most beneficial format (not just for DVD operations but for many other
things as well).
First you must create your text images. You may do this with your favorite
image editor (IMHO, if you have a brain, it's Gimp), or you may use a
text-to-image tool to make the images from your plain text such as Fly
(which i will not cover here). Spumux will also accept subtitles in a
number of text formats. See the manpage for a list of them. Since using
spumux with text files can be extremely complicated, and there are multiple
options for file formats, etc., I will only cover using PNG images for
subtitling here.
So open Gimp, (or whatever image creator that suits your fancy), and create
a new image with a transparent background. Actually you may have a colored
background if you like, but bear in mind that this may distract from your
video. Sometimes this is necessary, such as if you have white text on a
white-dominated video scene (such as snow), but most of the time this is
distracting and looks cheap. Choose a color for your text. I have found
that most of the time the best color for subtitle text is a light gray or
white. Then use the text tool to create your subtitle text, and slap it
onto the background. Be sure that you do not use more than 4 colors in your
image, as spumux will reject the file if it has more. The DVD specification
only allows for any 4 colors in a particular subtitle stream. If you did use
more than 4 colors, such as for a fancy gradient text or something, or
possibly if you used anti-aliased fonts, you may set the image type to
indexed, and dither the image down to 4 colors. In Gimp, right-click on the
image, go to the "Image" sub-menu, the "Mode" sub-sub-menu, and select
"Indexed...". Optionally, you may get to this dialog using the keyboard by
pressing "ALT+I". Then make sure that the "Generate Optimal Palette:"
option is checked, and set the number of colors to 3 or 4. Then save the
image as PNG.
It is interesting to note that since the DVD subtitle method is to use
images, you may put, well, images into the subtitle stream, and they will
display just like text. Of course you are still limited to 4 colors, but
this comes in extremely handy for foreign language subtitles, and when you
need special fonts and text styles.
===
Interesting info!
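For reference, a minimal spumux run looks something like this (a sketch; the
timings and PNG name are made up):
# sub.xml: show one 4-color PNG from 1s to 5s
cat > sub.xml <<'EOF'
<subpictures>
  <stream>
    <spu start="00:00:01.00" end="00:00:05.00" image="sub001.png" />
  </stream>
</subpictures>
EOF
# multiplex the subtitle stream into an existing DVD-compliant MPEG
spumux sub.xml < movie.mpg > movie_sub.mpg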
Also, there was another page suggesting a yuv420 option useful for *NTSC*
interlaced DV to DVD:
https://renomath.org/video/linux/dv/encdvd.html
====
Our focus is encoding widescreen NTSC interlaced video source from a miniDV
camcorder. We attempt to preserve as much of the quality of the original
source as possible.
[..]
Mjpegtools
The mjpegtools encoder runs more slowly than ffmpeg on my
computer; however, no patches are needed to handle interlaced video. The
encoding commands
$ lav2yuv s001.avi |
yuvcorrect -T INTERLACED_BOTTOM_FIRST |
mpeg2enc -M0 -nn -a3 -f8 -G18 -b7000 -V230 -q9 -o s001.m2v
$ lav2wav s001.avi > s001.wav
$ toolame -b224 -s48 s001.wav s001.m2a
$ mplex -f8 s001.m2v s001.m2a -o s001.mpg
work, but unfortunately reduce the effective color space to 4:1:0. Better
results can be obtained by using y4mscaler and the commands
$ lav2yuv s001.avi -C 411 |
y4mscaler -I ilace=BOTTOM_FIRST -O chromass=420mpeg2 |
mpeg2enc -M0 -nn -a3 -f8 -G18 -b7000 -V230 -q9 -o s001.m2v
$ lav2wav s001.avi > s001.wav
$ toolame -b224 -s48 s001.wav s001.m2a
$ mplex -f8 s001.m2v s001.m2a -o s001.mpg
This interpolates the chroma in the horizontal direction before subsampling
it vertically.
====
Yet another source suggests only an old CRT TV
can display interlaced DVD material as intended, while plasma/TFT
TVs or computer monitors are better served with de-interlaced material.
https://xpt.sourceforge.net/techdocs/media/video/dvdvcd/dv04-Interlace/sing…
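(For progressive displays the usual answer is to deinterlace at encode
time; a sketch with ffmpeg's yadif filter, output name hypothetical:)
# bob-deinterlace to full frame rate for TFT/plasma/computer playback
ffmpeg -i s001.mpg -vf yadif=mode=1 -c:v libx264 -crf 18 s001_progressive.mp4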
Just a short update on GIT changes here:
- checked in Delay Audio plugin changes after a lot of changes. See BT
#638 for more details:
https://www.cinelerra-gg.org/bugtracker/view.php?id=638
- removed previous library versions of tiff, flac, libjpeg-turbo and libvpx,
so the download is smaller again.
Hm ...
https://github.com/google/lut3d_utils
=====
Tone Map Metadata (3D Look-Up-Table) Injector
A tool for manipulating production metadata
<https://github.com/google/lut3d_utils/blob/docs/Static-Colour-Mapping-Metad…>,
specifically tone map metadata (3D LUT), in MP4 and MOV files. It can be
used to inject 3D LUT metadata into a file or validate metadata in an
existing file.
Usage
Python 3.11 <https://www.python.org/downloads/> must be used to run the
tool. From within the directory above lut3d_utils:
Help
python lut3d_utils -h
Prints help and usage information.
Inject
python lut3d_utils -inject_lut3d -i ${input_file} -o ${output_file} -l
${lut3d_file} -p COLOUR_PRIMARIES_BT709 -t
COLOUR_TRANSFER_CHARACTERISTICS_GAMMA22
Loads tone mapping (3D LUT) metadata from lut3d_file and injects it into
input_file (.mov or .mp4). The specified output_colour_primaries and
output_colour_transfer_characteristics are injected too. It saves the
result to output_file.
input_file and output_file must not be the same file.
Examine
python lut3d_utils -retrieve_lut3d -i ${input} -l ${lut3d_file}
Checks if input_file contains 3D LUT metadata. If so, parses the metadata
and prints it out. In addition, it saves the 3D LUT entries to lut3d_file
as a ".cube" file.
=====
Soooooo .... in theory you can use these for injecting and extracting those
3D LUTs in mp4/mov ..
I wonder if ffmpeg/libavformat can be taught to do this, too?
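ffmpeg can't write that metadata box as far as I know, but it can at least
apply a .cube LUT at filter time, e.g. one extracted by the tool above
(sketch; file names hypothetical):
# bake the extracted 3D LUT into the pixels (the inverse of carrying it as metadata)
ffmpeg -i input.mp4 -vf lut3d=file=tonemap.cube -c:v libx264 -crf 18 baked.mp4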