On 11.12.2024 23:56, Andrew Randrianasulu wrote:


Thu, 12 Dec 2024, 00:43 Terje J. Hanssen via Cin <cin@lists.cinelerra-gg.org>:
To clarify some pieces once again, I put up some basic statements or questions:

For an end-user to utilize video acceleration support, he/she needs a computer with supported graphics hardware plus the libs/API and drivers for it(?)

yes



Can the libs and drivers be dynamically linked (enabled) against the system, or statically built (embedded) into CinGG?


Well, here it gets complicated. VAAPI and the like actually involve at least TWO libs: one with the generic code ffmpeg uses, and a hardware-specific driver lib. Both are shared (*.so), and moreover the runtime path where the generic lib looks for drivers depends on the Linux distro / how it was compiled.

Yeah, it is not easy to get an overview understanding. For example, I have the following "mix" installed on my machine:

i  | intel-vaapi-driver   | Intel Driver for Video Acceleration (VA) API->       | package
i  | Mesa-libva           | Mesa VA-API implementation                           | package
i  | kernel-firmware-i915 | Kernel firmware files for Intel i915 graphics driver | package
i  | libva2               | Video Acceleration API                               | package
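
The package list shows what is installed; to see which driver the generic libva lib actually loads at runtime, something like this should work (vainfo comes from libva-utils, and LIBVA_DRIVER_NAME / LIBVA_DRIVERS_PATH are, as I understand it, the standard libva environment overrides):

# show the driver libva picks by default
vainfo | grep -i driver
# force the older i965 driver (intel-vaapi-driver) instead of iHD, just as a test
LIBVA_DRIVER_NAME=i965 vainfo | grep -i driver
# point libva at a non-default driver directory (path is only an example)
LIBVA_DRIVERS_PATH=/usr/lib64/dri vainfo | grep -i driver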





So what happens when oneVPL (qsv) support is added to the build system; is it dynamically linked to the system or statically embedded into the build?

dynamic


If oneVPL is dynamically linked, the qsv support may be distribution specific, whereas if statically built it would be generally available on compliant hardware?


As above, at least due to differing driver paths it will not work out of the box everywhere, even if static (*.a) libs were used. You should probably ask for details on the ffmpeg or Intel mailing lists ...

Distributions nowadays tend to avoid *.a files if possible, for consistency in upgradeability (if you embed, say, libpng, then any later update to it requires updating not just the *.so but also every application with embedded libpng, and there is no simple way to even tell from a stripped binary which symbols it uses).
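
A simple way to confirm the dynamic case on an installed CinGG is to list the shared libraries its binary pulls in (the path below is just an example; adjust it to wherever the cin binary ends up):

# show which VA/VPL related shared libs the binary is linked against
ldd /path/to/cin | grep -E 'libva|libvpl|libmfx'
# a statically embedded library would simply not show up in this list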



Is it correct to say the build machine does not need the specific graphics hardware, but does need the actual graphics libs installed to build CinGG with it?

yes
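
That matches how ffmpeg's configure step behaves: at build time it only looks for headers and pkg-config files, not for the GPU itself. A rough check that the development bits are present on the build machine (pkg-config module names as I understand them; verify against your distro's -devel packages):

# VA-API development files
pkg-config --modversion libva
# Intel VPL development files (oneVPL/libvpl ships a 'vpl' pkg-config module)
pkg-config --modversion vpl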



Could similar methods in principle be extended to include broader video acceleration support for AMD/amf and NVIDIA/nvenc?


nvenc is already supported, I think? At some point I tried it with a GF710 on the livedvd and it was working for me. Try to test it if you have the proprietary NVIDIA drivers.

Yes, seemingly:
Cin/bin/ffmpeg/video> ls *nvenc*
h264_nvenc.mp4  h264_nvenc.qt  h265_nvenc.mp4

Maybe I can make an attempt later, if I can get some life back into the old GeForce GTX 960 in my Skylake workstation.
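
If I get it going, a quick sanity check with a system ffmpeg that has nvenc support would be something like this (input.mp4 is just a placeholder file):

# is nvenc compiled in?
ffmpeg -hide_banner -encoders | grep nvenc
# short test encode through the NVIDIA hardware encoder
ffmpeg -i input.mp4 -t 10 -c:v h264_nvenc -b:v 5M out_nvenc.mp4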



amf ... I have no idea how well it works or what it demands lib-wise. As long as it is just an ffmpeg switch I can try to add this too, but honestly, isn't it more of a "checkbox" feature? Does it provide anything over vaapi?
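
For what it's worth, on the ffmpeg side amf appears to be just another configure switch (--enable-amf); as far as I understand it only needs the AMD AMF headers at build time and loads the runtime library dynamically, but that would need checking. Whether an existing ffmpeg build exposes it can be tested with:

# list AMF encoders, if any were compiled in
ffmpeg -hide_banner -encoders | grep amf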


--------------

So a confusing piece is whether "oneVPL" should instead have been "libvpl", because I just read:
Note for Users of Intel® oneAPI Video Processing Library (oneVPL) and for Intel® Media SDK
https://www.intel.com/content/www/us/en/developer/tools/vpl/overview.html

oneVPL is now called the Intel® Video Processing Library (Intel® VPL). The library will no longer be part of the oneAPI specification so that Intel can focus on providing video processing features on Intel GPUs.

In comparison, on openSUSE/Slowroll on Intel there is libvpl(2) (and no oneVPL).
The oneAPI Video Processing Library (oneVPL) provides a single video processing API for encode, decode, and video processing that works across a wide range of accelerators.
ffmpeg similarly has --enable-libvpl --enable-vaapi --enable-vdpau --enable-vulkan



The naming is a bit confusing, but this is what Intel invented!

Probably oneVPL is the technology/marketing name, and libvpl is the library component ffmpeg actually looks for.
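
One way to check which of these actually ends up usable in a given ffmpeg build is to query it directly (qsv is the name ffmpeg uses for the VPL / Media SDK path):

# hardware acceleration methods compiled in
ffmpeg -hide_banner -hwaccels
# qsv encoders available through libvpl
ffmpeg -hide_banner -encoders | grep qsv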