[Cin] another set of test profiles
Terje J. Hanssen
terjejhanssen at gmail.com
Fri Oct 18 12:48:51 CEST 2024
On 18.10.2024 11:33, Terje J. Hanssen wrote:
>
>
>
> On 18.10.2024 02:08, Andrew Randrianasulu wrote:
>>
>>
>> Thu, 17 Oct 2024, 15:06 Terje J. Hanssen <terjejhanssen at gmail.com>:
>>
>>
>>
>>
>> On 17.10.2024 13:51, Andrew Randrianasulu wrote:
>>>
>>>
>>> Thu, 17 Oct 2024, 13:40 Terje J. Hanssen
>>> <terjejhanssen at gmail.com>:
>>>
>>>
>>>
>>>
>>> On 14.10.2024 00:38, Andrew Randrianasulu wrote:
>>>>
>>>>
>>>> Mon, 14 Oct 2024, 01:36 Phyllis Smith
>>>> <phylsmith2017 at gmail.com>:
>>>>
>>>> Andrew, so it seems prudent to check the av1_vaapi.mp4 render
>>>> format into GIT (after it has been successfully tested, of
>>>> course); but what about the QSV encoders?
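>>>>
>>>> (For a basic test, a minimal VAAPI AV1 encode could look roughly
>>>> like this; the input file and render node here are placeholders:
>>>>
>>>>     ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 \
>>>>         -vf 'format=nv12,hwupload' -c:v av1_vaapi output.mp4
>>>>
>>>> If that produces a playable file, the encoder path works.)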
>>>>
>>>>
>>>>
>>>> Wait for Terje's testing OR try to build oneVPL-cpu (it sort of
>>>> circles back to a different branch of ffmpeg, so ffmpeg will think
>>>> it uses QSV but will in fact use another ffmpeg ... well, in
>>>> theory! It does not work for me on 32-bit!)
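>>>>
>>>> A rough sketch of that build, assuming the usual cmake flow of the
>>>> oneAPI repo (the runtime library name and search path are my
>>>> assumptions and may differ per setup):
>>>>
>>>>     git clone https://github.com/oneapi-src/oneVPL-cpu
>>>>     cd oneVPL-cpu
>>>>     cmake -B build -DCMAKE_BUILD_TYPE=Release
>>>>     cmake --build build
>>>>     # the built CPU runtime (something like libvplswref64.so) must
>>>>     # land where the VPL dispatcher looks, e.g. ONEVPL_SEARCH_PATH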
>>>>
>>>>
>>>
>>> I wonder whether HW-accelerated encoding support via VAAPI and QSV
>>> could be embedded in future CinGG AppImages and/or packages, if
>>> possible? What about a list of supported dGPUs/iGPUs?
>>>
>>>
>>> The problem is that QSV/VAAPI basically search for a driver
>>> component, and that component might be in a different location on
>>> different distros; the interface between the two is also not set
>>> in stone.
>>>
>>> For the AppImage you can just unpack it and remove libva.so, so on
>>> startup CinGG will link to the system's libva.
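>>>
>>> Something along these lines, assuming the stock AppImage runtime
>>> options (file names and the internal lib path are placeholders):
>>>
>>>     ./CinGG.AppImage --appimage-extract   # unpacks to squashfs-root/
>>>     rm squashfs-root/usr/lib/libva.so*    # drop the bundled libva
>>>     squashfs-root/AppRun                  # now picks up system libva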
>>>
>>> QSV, as we learned, is another layer with its own runtime path for
>>> yet another set of driver components. So, while building libvpl
>>> itself is relatively easy, making sure it finds its drivers is not
>>> (at least for me).
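>>>
>>> To check whether the dispatcher can see a runtime, something like
>>> this; the lib directory is an example and varies per distro:
>>>
>>>     ls /usr/lib64/libmfx-gen.so*          # Intel GPU runtime present?
>>>     export ONEVPL_SEARCH_PATH=/usr/lib64  # hint for the dispatcher
>>>     vainfo                                # VAAPI driver underneath ok?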
>>>
>>> Speaking of the GPU list, I think it will be fairly short; you,
>>> Phyllis and Andrea are probably the only ones who use it and report
>>> back. Stephan noticed some troubles and reverted to software. I can
>>> test nvdec/nvenc on a live CD, but this is not my everyday setup
>>> (Nvidia proprietary drivers enforce a 64-bit system).
>>>
>>> But well, feel free to post a short summary of what works on your
>>> GPUs in CinGG as another thread; hopefully others will chime in!
>>
>> If a packaged CinGG test build becomes available (rpm/Leap for me),
>> it would be more useful to run this test. I then have three
>> generations of Intel hardware available: legacy Skylake/Kaby Lake
>> iGPUs and a current DG2/Arc GPU. I also have/had an Nvidia GPU on
>> the Skylake machine, but it looks like it has passed away.
>>
>>
>> I think you can build the rpm yourself, but for this we need to
>> update the spec file so it points at the new source and adds oneVPL
>> as a requirement.
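>>
>> Roughly, the spec would need lines like these (the devel package
>> name for the oneVPL dispatcher is my guess; on Leap it may be
>> libvpl-devel or oneVPL-devel):
>>
>>     Source0:        cinelerra-gg-%{version}.tar.xz
>>     BuildRequires:  libvpl-devel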
>>
>> In the meantime you can just make your own AppImage from the freshly
>> built cingg-with-system-ffmpeg, so it hopefully will not be lost
>> after a few system updates.
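>>
>> One way, assuming the build supports a standard DESTDIR install and
>> you have appimagetool plus a prepared AppDir (names are examples):
>>
>>     make install DESTDIR=$PWD/AppDir
>>     appimagetool AppDir CinGG-sysffmpeg.AppImage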
>>
>
> Andrey replied to me in a previous thread:
> Hardware acceleration methods Sat, 14 Sep 2024 08:55:11 +0300
> https://lists.cinelerra-gg.org/pipermail/cin/2024-September/008621.html
>> > Is there something that prevents typical QSV from being prebuilt
>> > in the CinGG/FFmpeg rpm?
>>
>> ffmpeg is not installed on my SUSE build host. If you know how to
>> enable QSV, I'll change the build script.
>>
>
> Is this a "path" we can continue with here?
Isn't it the case that, for oneVPL to work in a prebuilt and packaged
CinGG, its internal ffmpeg has to be built with oneVPL enabled?
Please correct me and explain better ...
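
As I understand it, the bundled ffmpeg would then be configured and
checked roughly like this (--enable-libvpl is the flag current ffmpeg
uses for oneVPL; how CinGG's build passes it through is exactly the
part I am unsure about):

    ./configure --enable-libvpl ...            # in the bundled ffmpeg tree
    ffmpeg -hide_banner -encoders | grep qsv   # expect h264_qsv etc.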