[Cin] LIMITED Hardware Acceleration using the GPU Encoding/Rendering checked in
phylsmith2017 at gmail.com
Fri May 10 00:26:42 CEST 2019
On Thu, May 9, 2019 at 2:19 PM Pierre autourduglobe <
p.autourduglobe at gmail.com> wrote:
> Yes, the possibilities are very limited... Only the integrated GPUs of
> the latest-generation Intel CPUs are supported. This is obviously useful
> with recent laptops equipped with Intel processors, but for desktops,
> which often have an added GPU card (more powerful and supporting more
> monitors), no support is possible.
> Since GPUs seem to give lower rendering quality, it may not be a very
> big loss... But real-time rendering of the Timeline, for immediate and
> more fluid viewing of several layers of filters and effects, would be
> useful and desirable if GPUs could be used more.
@Pierre: thanks for your research on this, as I became a little confused
in my own reading on it. BTW: GG will make a Mint18 version with the
latest changes, for testing the Compositor/Viewer switch from Mutex to
Condition, in the next day or two. He is having a problem resulting from
installing today's updates to Fedora 30.
@All: After some more heated discussion here, GG says he might still make
another attempt to create a "build procedure" that would allow an
individual to compile with libraries using nvenc -- which is proprietary
to Nvidia, BUT no more proprietary than the Nvidia driver already
installed. It would also have to include compiling ffmpeg with its
nonfree option, but it is my understanding that might be OK for an
individual.
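For orientation, such an individual build procedure might look roughly
like the sketch below. This is an assumption on my part, not GG's actual
procedure; the exact configure flags depend on the ffmpeg version in use
and should be verified against its ./configure --help:

```shell
# Hypothetical sketch of an individual nvenc-enabled ffmpeg build.
# The NVENC API headers are distributed separately from ffmpeg:
git clone https://git.videolan.org/git/ffmpeg/nv-codec-headers.git
make -C nv-codec-headers install PREFIX=/usr/local

# With the headers installed, ffmpeg's configure detects nvenc support.
# --enable-nonfree marks the resulting binary as non-redistributable,
# which is why this can only ever be a per-individual, local build.
cd ffmpeg
./configure --enable-nonfree
make -j"$(nproc)"
```

The non-redistributability is the whole point of making it a "build
procedure" rather than shipping binaries.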
Also, I did update the temporary explanation of GPU decode usage to
include vaapi encode at:
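As a point of comparison, a vaapi hardware encode done with ffmpeg
directly (outside Cinelerra) typically looks like the following; the DRM
render-node path and codec here are assumptions that vary per system:

```shell
# Hypothetical example of VA-API hardware encoding with ffmpeg.
# /dev/dri/renderD128 is the usual first render node, but check yours.
ffmpeg -vaapi_device /dev/dri/renderD128 \
       -i input.mp4 \
       -vf 'format=nv12,hwupload' \
       -c:v h264_vaapi \
       output.mp4
```

The format/hwupload filter step converts frames to a layout the GPU
encoder accepts and uploads them to GPU memory before encoding.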
@Ugin: I forgot to mention the other day that GG made another attempt at
seeing what it would take to get CUDA working. The first thing he had to
do was downgrade his Fedora O/S, as the current version was not
supported. Apparently the code setup is quite cumbersome, with the
necessity of creating blocks of space for things. The time he would have
to invest in analyzing and "maybe" actually getting some results with
CUDA does not seem likely to pay off. Although VA-API and VDPAU have
limited usage, he only spent 3-4 days coding and testing for them, so it
was not a huge investment.