https://github.com/Umio-Yasuno/amdgpu_top?tab=readme-ov-file Written in Rust, but Arch should have a binary package. I think this one should display encode/decode GPU utilization separately, at least according to the screenshots.
I use the system monitor from KDE. I tried loading into CinGG an 8K video: 60 fps, AV1 (supported by vaapi), mkv container. I ran the playback, but I do not detect any GPU utilization. The GPU load varies from 0 to 10%, but these are normal fluctuations that occur even with the GPU at rest. The CPU has a load of about 30%. With the appimage I have: GPU = same values; CPU = about 60%. Confirmation is the terminal message: "Decoder libdav1d does not support device type vaapi."

I also tried vdpau (which for AMD GPUs is a wrapper over vaapi) and the result is the same: "Decoder libdav1d does not support device type vdpau."

I tried a second video: 4K, 60 fps, HEVC, mp4. The video does not load and I get numerous errors like:

"file: /home/paz/video_editing/free_video/Life Untouched 4K Demo.mp4 int FFVideoConvert::convert_picture_vframe(VFrame*, AVFrame*, AVFrame*): Error retrieving data from GPU to CPU"

With the appimage I have no loading errors, but still the GPU is not engaged:

"file:/home/paz/video_editing/free_video/Life Untouched 4K Demo.mp4 err: Operation not permitted Failed HW device create. dev:vaapi err: Input/output error HW device init failed, using SW decode."

Finally, using mpv from the command line (set to vaapi):

$ mpv 'Life Untouched 4K Demo.mp4'

GPU = 30-60%, CPU = 0-2%
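A minimal way to make the same check explicit on the command line (a sketch; it assumes mpv's --hwdec option and the vainfo tool from libva-utils are available, and uses the same test file as above):

$ vainfo                                            # list the decode profiles the VA-API driver exposes
$ mpv --hwdec=vaapi 'Life Untouched 4K Demo.mp4'    # force VA-API decoding for this run

mpv normally reports which hardware decoder it picked (a line along the lines of "Using hardware decoding (vaapi)"); if it does, the driver side is working and any remaining problem is on the application side.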
Thu, Apr 25, 2024, 21:44 Andrea paz <[email protected]>:
I use the system monitor from KDE. I tried loading into CinGG an 8K video: 60 fps, AV1 (supported by vaapi), mkv container. I ran the playback, but I do not detect any GPU utilization. The GPU load varies from 0 to 10%, but these are normal fluctuations that occur even with the GPU at rest. The CPU has a load of about 30%. With the appimage I have: GPU = same values; CPU = about 60%.
Confirmation is the terminal message: "Decoder libdav1d does not support device type vaapi."
I also tried vdpau (which for AMD GPUs is a wrapper over vaapi) and the result is the same: "Decoder libdav1d does not support device type vdpau."
Interesting, so AV1 does not decode in hardware in our case ... something to investigate, but I have no relevant hardware ...
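One way to see which AV1 decode paths are available on a given machine (a sketch, assuming ffmpeg and vainfo are in PATH):

$ ffmpeg -hide_banner -decoders | grep -i av1    # AV1 decoders this ffmpeg build knows about (libdav1d, libaom-av1, the native av1 decoder, ...)
$ vainfo | grep -i av1                           # whether the driver exposes an AV1 decode profile

In ffmpeg the vaapi hwaccel attaches to hwaccel-capable decoders such as the native av1 decoder, while libdav1d is CPU-only, which matches the "Decoder libdav1d does not support device type vaapi" message above.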
I tried a second video: 4K, 60 fps, HEVC, mp4. The video does not load and I get numerous errors like:
"file: /home/paz/video_editing/free_video/Life Untouched 4K Demo.mp4 int FFVideoConvert::convert_picture_vframe(VFrame*, AVFrame*, AVFrame*): Error retrieving data from GPU to CPU"
Hm, ah, a second error! Was it 10-bit HEVC, by the way?
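A quick way to confirm both the file's bit depth and whether the driver advertises 10-bit HEVC decode (a sketch, assuming ffprobe and vainfo are installed):

$ ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,profile,pix_fmt 'Life Untouched 4K Demo.mp4'
$ vainfo | grep -i hevc    # look for a Main10 decode profile (e.g. VAProfileHEVCMain10)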
With the appimage I have no loading errors, but still the GPU is not engaged:
"file:/home/paz/video_editing/free_video/Life Untouched 4K Demo.mp4 err: Operation not permitted Failed HW device create. dev:vaapi err: Input/output error HW device init failed, using SW decode."
I think the appimage may have an outdated libva inside it, so the course of action would be to unpack it and delete this library ....
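A sketch of that course of action (the AppImage file name and the library path inside it are hypothetical; --appimage-extract works for type-2 AppImages):

$ ./CinGG.AppImage --appimage-extract      # unpacks the image into ./squashfs-root
$ find squashfs-root -name 'libva*'        # locate the bundled libva copies
$ rm squashfs-root/usr/lib/libva*.so*      # hypothetical path: remove them so the system libva is picked up
$ squashfs-root/AppRun                     # run the unpacked program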
Finally, using mpv from the command line (set to vaapi):
$ mpv 'Life Untouched 4K Demo.mp4'
GPU = 30-60%, CPU = 0-2%
Well, finally it works :) But not for us .... I'll try to add a bit more debug output, at least temporarily. Thanks for testing!
Interesting, so AV1 does not decode in hardware in our case ... something to investigate, but I have no relevant hardware ...
It seems that dav1d is developed by VLC together with ffmpeg. They say it is software-only and does not use any GPU acceleration: https://www.videolan.org/projects/dav1d.html
It would be worth trying with Nvidia hardware to confirm.
It can use AVX2 in decoding, though. Do you have to enable this option or is it on by default? https://code.videolan.org/videolan/dav1d
It seems that dav1d is not a very up-to-date project; if I wanted to compare with libaom, do I just need to drop "--enable-libdav1d" from configure?
Hm, ah, a second error! Was it 10-bit HEVC, by the way?
Yes, it is a 10-bit file: "pix yuv420p10le"; Color Space: bt2020nc; Color range: TV (mpeg).
Has anything changed between CinGG with ffmpeg 6 and ffmpeg 7? With the appimage I have no problems loading (the "second error").
[PS: Following the manual, I tried to start CinGG (ffmpeg 7) with: CIN_HW_DEV=vaapi ./cin. Now loading works without errors and playback runs with a certain amount of GPU acceleration (8-20%). If I try scrolling, however, there are continuous freezes and artifacts, but there are no messages or errors in the terminal.]
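To separate driver problems from CinGG problems, the same file can be decoded with plain ffmpeg over VA-API (a sketch, assuming an ffmpeg build with vaapi support; the render node is the usual default):

$ ffmpeg -benchmark -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -i 'Life Untouched 4K Demo.mp4' -f null -
$ ffmpeg -benchmark -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i 'Life Untouched 4K Demo.mp4' -f null -

The first command copies decoded frames back to system memory (the same GPU-to-CPU path that produced the "Error retrieving data from GPU to CPU" message); the second keeps frames on the GPU. If both run cleanly with low CPU load, the freezes seen with CIN_HW_DEV=vaapi point at CinGG's handling rather than the driver.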
Fri, Apr 26, 2024, 11:45 Andrea paz <[email protected]>:
Interesting, so AV1 does not decode in hardware in our case ... something to investigate, but I have no relevant hardware ...
It seems that dav1d is developed by VLC together with ffmpeg. They say it is software-only and does not use any GPU acceleration: https://www.videolan.org/projects/dav1d.html
It would be worth trying with Nvidia hardware to confirm.
It can use AVX2 in decoding, though. Do you have to enable this option or is it on by default? https://code.videolan.org/videolan/dav1d
It seems that dav1d is not a very up-to-date project; if I wanted to compare with libaom, do I just need to drop "--enable-libdav1d" from configure?
You can simply comment out the line "remap_video_decoder libaom-av1=libdav1d" in ffmpeg/decode.opts. If this works (i.e. hw decode acceleration of AV1 starts working), I might have a patch for you to try ...
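A sketch of that edit (it assumes lines starting with '#' are treated as comments in ffmpeg/decode.opts; the path is relative to the CinGG data/source directory, so adjust as needed):

$ sed -i.bak 's|^remap_video_decoder libaom-av1=libdav1d|# &|' ffmpeg/decode.opts
# after this, ffmpeg's own AV1 decoders are used instead of libdav1d, which is the
# path a hwaccel such as vaapi can attach to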
Hm, ah, a second error! Was it 10-bit HEVC, by the way?
Yes, it is a 10-bit file:
"pix yuv420p10le"; Color Space: bt2020nc; Color range: TV (mpeg).
Has anything changed between CinGG with ffmpeg 6 and ffmpeg 7? With the appimage I have no problems loading (the "second error").
[PS: Following the manual, I tried to start CinGG (ffmpeg 7) with: CIN_HW_DEV=vaapi ./cin. Now loading works without errors and playback runs with a certain amount of GPU acceleration (8-20%). If I try scrolling, however, there are continuous freezes and artifacts, but there are no messages or errors in the terminal.]
Well, freezes and artifacts are no fun either! The internals of ffmpeg were reworked; I'm not sure how that affected the user-visible interfaces CinGG uses ...
Thu, Apr 25, 2024, 19:00 Andrew Randrianasulu <[email protected]>:
https://github.com/Umio-Yasuno/amdgpu_top?tab=readme-ov-file
Written in Rust, but Arch should have a binary package. I think this one should display encode/decode GPU utilization separately, at least according to the screenshots.
https://aur.archlinux.org/packages/amdgpu_top-bin, maybe this one?
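For the record, a sketch of reading the decode-engine load with amdgpu_top (flag availability may vary between versions):

$ amdgpu_top            # interactive view; watch the VCN / media engine activity while a video plays
$ amdgpu_top --smi      # compact, nvidia-smi-like view, if the installed version supports it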
participants (2)
- Andrea paz
- Andrew Randrianasulu