<div dir="ltr"><div dir="ltr"><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, May 9, 2019 at 2:19 PM Pierre autourduglobe <<a href="mailto:p.autourduglobe@gmail.com">p.autourduglobe@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Yes, very limited, as possibilities.... Only the integrated GPUs of the <br>
latest generation Intel CPUs are supported. This is obviously useful <br>
with recent laptops equipped with Intel processors, but for desktops <br>
that often have an added GPU card; more powerful and offering support <br>
for more monitors, no support is possible.<br>
<br>
To the extent that GPUs seem to give a lower rendering quality, it may <br>
not be a very big loss... But rendering the Timeline in real time for <br>
immediate and more fluid viewing of several layers of filters and <br>
effects would be useful and desirable if GPUs could be used more.<br>

@Pierre: Thanks for your research on this; I had become a little confused by my own reading on it. BTW: GG will make a Mint18 version with the latest changes in the next day or two, for testing the Compositor/Viewer switch from Mutex to Condition. He is having a problem resulting from installing today's Fedora 30 updates.
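
Since the Mutex-to-Condition change is easy to misread: Cinelerra has its own Mutex and Condition classes, but the idea is the same as with the standard C++ primitives, so here is a generic sketch (illustrative only, not the actual Compositor/Viewer code). With a bare mutex the waiting thread has to poll; with a condition variable it sleeps until the other side signals it:

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    // Illustrative stand-ins, not Cinelerra's real types.
    static std::mutex frame_lock;
    static std::condition_variable frame_ready;
    static std::queue<int> frames;          // "rendered frames"

    void producer_side(int frame)           // e.g. the Compositor
    {
        {
            std::lock_guard<std::mutex> guard(frame_lock);
            frames.push(frame);
        }
        frame_ready.notify_one();           // wake the waiter at once
    }

    int consumer_side()                     // e.g. the Viewer
    {
        std::unique_lock<std::mutex> guard(frame_lock);
        // Sleeps instead of spinning; the predicate protects
        // against spurious wakeups.
        frame_ready.wait(guard, [] { return !frames.empty(); });
        int frame = frames.front();
        frames.pop();
        return frame;
    }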

@All: After some more heated discussion here, GG says he might still make another attempt to create a "build procedure" that would let an individual compile with libraries using nvenc -- that being proprietary to Nvidia, but no more proprietary than the Nvidia driver already installed. It would also mean configuring ffmpeg with --enable-nonfree, but it is my understanding that this might be OK for an individual. (A small sketch for checking the result of such a build is at the end of this message.)

Also, I updated the temporary explanation of GPU decode usage to include vaapi encode, at:
 https://www.cinelerra-gg.org/download/GPU_potential_speedup.pdf
(A minimal vaapi device-open sketch is at the end of this message too.)

@Ugin: I forgot to mention the other day that GG made another attempt at seeing what it would take to get CUDA working. The first thing he had to do was downgrade his Fedora OS, as the current version is not supported. Apparently the code setup is quite cumbersome, with the necessity of creating blocks of spaces for stuff. The time he would have to invest in analyzing and "maybe" actually getting some results with CUDA does not look like it would pay off. Although VA-API and VDPAU have limited usage, he only spent 3-4 days coding and testing for them, so it was not a huge investment.
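
On the nvenc build procedure above: once ffmpeg has been configured and built that way, one quick sanity check is to ask libavcodec for the encoder by name. A minimal sketch, assuming ffmpeg 4.x dev headers are installed ("h264_nvenc" is the name ffmpeg registers for its NVENC H.264 encoder):

    #include <cstdio>
    extern "C" {
    #include <libavcodec/avcodec.h>
    }

    // Probe whether this ffmpeg build actually includes NVENC.
    int main()
    {
        const AVCodec *codec = avcodec_find_encoder_by_name("h264_nvenc");
        if (codec)
            printf("nvenc present: %s (%s)\n", codec->name, codec->long_name);
        else
            printf("nvenc not compiled into this ffmpeg build\n");
        return 0;
    }

The same check from the shell is just "ffmpeg -encoders | grep nvenc".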
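
And on the vaapi decode/encode write-up: in ffmpeg-based code the vaapi path starts with opening a hardware device context through the hwdevice API. A stripped-down sketch of just that first step (error handling reduced; /dev/dri/renderD128 is the usual Linux render node, though passing NULL lets ffmpeg pick a default):

    #include <cstdio>
    extern "C" {
    #include <libavutil/hwcontext.h>
    }

    // Open a VA-API device context -- the first step before any
    // decode or encode work can be offloaded to the GPU.
    int main()
    {
        AVBufferRef *hw_device = nullptr;
        int err = av_hwdevice_ctx_create(&hw_device, AV_HWDEVICE_TYPE_VAAPI,
                                         "/dev/dri/renderD128", nullptr, 0);
        if (err < 0) {
            printf("no usable VA-API device (error %d)\n", err);
            return 1;
        }
        printf("VA-API device opened OK\n");
        av_buffer_unref(&hw_device);
        return 0;
    }

The command-line equivalent, for anyone who just wants to test their hardware, is "ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -i input.mp4 ...".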