Tue, Dec 3, 2024, 23:59 Terje J. Hanssen <terjejhanssen@gmail.com>:
From a previous thread:
Re: [Cin] another set of test profiles
On 18.10.2024 02:08, Andrew Randrianasulu wrote:
Thu, Oct 17, 2024, 15:06 Terje J. Hanssen <terjejhanssen@gmail.com>:
On 17.10.2024 13:51, Andrew Randrianasulu wrote:
Thu, Oct 17, 2024, 13:40 Terje J. Hanssen <terjejhanssen@gmail.com>:
On 14.10.2024 00:38, Andrew Randrianasulu wrote:
Mon, Oct 14, 2024, 01:36 Phyllis Smith <phylsmith2017@gmail.com>:
Andrew, so it seems prudent to check the av1_vaapi.mp4 render format into GIT (after it is successfully tested, of course); but what about the QSV encoders?
Wait for Terje's testing, OR try to build oneVPL-cpu (it sort of circles back to a different branch of ffmpeg, so ffmpeg will think it uses QSV but will in fact use another ffmpeg .... well, in theory! It does not work for me on 32-bit!)
I wonder if HW-accelerated encoding support via VAAPI and QSV could be embedded in future CinGG AppImages and/or packages, if possible?
What about a list of supported dGPUs/iGPUs?
Problem is, QSV/VAAPI basically search for a driver component, and that component might be in a different location on different distros; the interface between the two is also not set in stone.
For the AppImage you can just unpack it and remove libva.so, so that on startup CinGG will link to the system's libva.
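A minimal sketch of that unpack-and-strip step; the AppImage file name below is only an example (substitute whatever your actual CinGG AppImage is called):

```shell
# Example file name; replace with your real CinGG AppImage.
app=CinGG-x86_64.AppImage
if [ -x "$app" ]; then
    "./$app" --appimage-extract     # unpacks the image into ./squashfs-root
fi
# Remove any bundled libva so the program links to the system's libva at startup.
rm -f squashfs-root/usr/lib/libva.so*
# Afterwards the unpacked tree can be started via ./squashfs-root/AppRun
```

The `--appimage-extract` option is provided by the standard AppImage runtime, so no extra tooling is needed to unpack.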
QSV, as we learned, is another layer with its own runtime path to yet another set of driver components. So while building libvpl itself is relatively easy, making sure it finds its drivers is not easy (at least for me).
Speaking about a GPU list, I think it will be fairly short; you, Phyllis, and Andrea are probably the only ones who use it and report back. Stephan noticed some troubles and reverted to software. I can test nvdec/nvenc on a live CD, but this is not my everyday setup (Nvidia proprietary drivers require a 64-bit system).
But well, feel free to post a short summary of what works on your GPUs in CinGG as another thread; hopefully others will chime in!
If a packaged CinGG test build becomes available (rpm/Leap for me), it would be more useful to do this test. I then have three generations of Intel hardware available: legacy Skylake/Kabylake iGPUs and a current DG2/Arc GPU. I also have/had an Nvidia GPU on the Skylake machine, but it looks like it has passed away.
I think you can build the rpm yourself, but for this we need to update the spec file, so that it points at the new source and adds openvpl as a requirement.
In the meantime you can just make your own AppImage from the just-built CinGG with system ffmpeg, so it hopefully will not be lost after a few system updates.
Andrew,
I don't know how busy you are currently with other tasks, but in case you have time, I would be interested in carrying out this rpm and (possibly AppImage) exercise.
That is from my current build with the third-party (internal) ffmpeg 7.0.
For the rpm you need to edit blds/cinelerra.spec: at the very top there is a date. I think the latest tar version is 20241031,
so replace the 2020-something date with that.
But then it needs to be patched up, and I do not have a tested procedure for doing this. Probably the rpm should wait until a new tagged release .... You can search for the rpmbuild command on your system, read its manpage/help, and maybe test-run it on some other (faster to rebuild) .spec file in the meantime.
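For orientation, once the spec points at the new source tarball, the build itself is the stock rpmbuild invocation. This is only a sketch assuming the blds/cinelerra.spec path from the source tree; as noted above, the spec may still need patching before it actually builds:

```shell
# Build binary (and source) rpms from the edited spec.
# On a system without rpmbuild or the spec in place this just reports failure.
rpmbuild -ba blds/cinelerra.spec 2>/dev/null || echo "rpmbuild failed; check the spec"
```

`rpmbuild -ba` runs all build stages (prep, build, install, package); see the rpmbuild manpage for partial stages such as `-bp` (prep only), which are handy while debugging a spec.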
The AppImage should be simpler from an existing source directory:
just run
bld_appimage.sh
but be sure to get the additional file and put it where it belongs, as described in this comment:
=====
# Get the appropriate appimagetool from https://github.com/AppImage/AppImageKit/releases
# and put it in your path. Only install the version for your platform
# and mark it executable. The file name must start with "appimagetool".
=====
Probably /usr/local/bin would be the simplest place to put it, as root?
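The steps in that comment can be sketched as shell commands; the file name appimagetool-x86_64.AppImage is just an example, so pick the release matching your platform from the AppImageKit releases page:

```shell
# Example name; the file must start with "appimagetool" and match your platform.
tool=appimagetool-x86_64.AppImage
if [ -f "$tool" ]; then
    chmod +x "$tool"                 # mark it executable
    sudo mv "$tool" /usr/local/bin/  # any directory on $PATH works
fi
```

With the tool on `$PATH`, `bld_appimage.sh` should then be able to find and invoke it.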