I tried to benchmark a simple ffmpeg command (ffmpeg 7.1.1):
time -p ./ffmpeg -hwaccel vaapi -i ~/K38_sdcard1/Documents/iPhone11_4K-recorder_59.940HDR10.mov -c:a copy -c:v rawvideo -f nut /dev/null
For a 4K 60 fps HDR video this benchmarks at:
frame= 1148 fps= 10 q=-0.0 Lsize=27900063KiB time=00:00:19.16 bitrate=11927607.4kbits/s speed=0.173x
real 111.77 user 42.61 sys 119.01
10 fps, 150% CPU load.
ffmpeg 4.4 was using just 85% CPU but only managed:
frame= 1148 fps=8.5 q=-0.0 Lsize=27900063kB time=00:00:19.15 bitrate=11933560.4kbits/s speed=0.142x
8.5 fps.
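In both runs the decoded frames are downloaded, converted to rawvideo and muxed into nut before being thrown away, so part of the measured time is not decoder time at all. A decode-only run might show the raw VAAPI decode rate better; here is a sketch using the same input path and only standard ffmpeg options (-benchmark, -an, and the null muxer):

# discard decoded frames without rawvideo conversion or nut muxing
time -p ./ffmpeg -benchmark -hwaccel vaapi -i ~/K38_sdcard1/Documents/iPhone11_4K-recorder_59.940HDR10.mov -an -f null -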
But I am fairly sure I saw *15* fps in cingg on the same file, if you set "play every frame" and the vo driver to x11.
Does anyone want to repeat this experiment? Is this an artefact of running a 32-bit program on top of a 64-bit kernel?
At some point I might try the latest Fedora 42 to see if a fully 64-bit Linux will pull more frames out of the same GPU decoder on the same motherboard.
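If anyone wants to rule the 32-bit theory in or out before that, it should be enough to check what the binary and the kernel actually are (standard tools; adjust the path to your own ffmpeg build):

file ./ffmpeg        # reports whether the binary is ELF 32-bit or 64-bit
uname -m             # reports the kernel/machine architecture (e.g. i686 vs x86_64)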