On Mon, Mar 9, 2026 at 7:22 PM Terje J. Hanssen <[email protected]> wrote:
On 09/03/2026 11:28, Andrew Randrianasulu wrote:
You need to set the remapping manually (for a single-user build) in
bin/ffmpeg/decode.opts
like
remap_video_decoder mpeg2video=mpeg2_qsv
and add similar lines if you want to test other formats (hevc, h264).
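For example, the corresponding remap lines for the other formats might look like this (a sketch; the decoder names are those listed by "ffmpeg -decoders" on a QSV-capable build):

```
remap_video_decoder hevc=hevc_qsv
remap_video_decoder h264=h264_qsv
remap_video_decoder av1=av1_qsv
```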
Thank you for the quick response. After editing:
~/Applications/squashfs-root/usr/bin/ffmpeg # cat decode.opts
# apply at init decode
loglevel=fatal
formatprobesize=5000000
scan_all_pmts=1
remap_video_decoder libaom-av1=libdav1d
remap_video_decoder mpeg2video=mpeg2_qsv
My test results are summarized as follows:
mpeg2 transcoded to:    Cingg cpu/gpu    ffmpeg cpu/gpu
h264_qsv.mp4            265/300 fps      548/562 fps
hevc_qsv.mp4            290/294 fps       - /688 fps
av1_qsv.mp4             251/325 fps      631/698 fps
As seen, the Cingg fps results are much lower than those from my system ffmpeg. Is there any way to verify that decoding is done on the GPU, as with ffmpeg?
Comment out the remap line and retest? Watch nvtop or "intel_gpu_top"; I am not sure whether they distinguish between encoding and decoding on Intel (it works on AMD Polaris12). As I said earlier, we are not even supposed to be as fast as ffmpeg in the ideal case, because we round-trip uncompressed video through system memory and the bus.
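One way to check on Intel, assuming the input file from the earlier test and the intel-gpu-tools package: run a decode-only benchmark in one terminal and watch the engine load in another. Note that intel_gpu_top's "Video" row covers both decode and encode, so a decode-only run is needed to isolate decoding:

```
# Terminal 1: decode only, discard frames (-f null -), print timing (-benchmark)
ffmpeg -hide_banner -benchmark -hwaccel qsv -i hdv09_04.m2t -f null -

# Terminal 2: the "Video" engine row should rise while the decode runs
sudo intel_gpu_top
```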
On Sun, Mar 8, 2026 at 11:46 PM Terje J. Hanssen via Cin <[email protected]> wrote:
I have tested transcoding mpeg2video to h264_qsv, hevc_qsv, vp9_qsv and av1_qsv with ffmpeg, using HW GPU-accelerated decoding+encoding. Just by adding "-hwaccel qsv" before the input file, the correct qsv codec for GPU decoding is apparently auto-detected.
ffmpeg -hide_banner -hwaccel qsv -i hdv09_04.m2t -c:v hevc_qsv -y hdv09_04_ffmpeg_hevc_qsv.mp4
With a moderately powerful (balanced) Intel Alder Lake (i7-12700KF) with a DG2 (A750), the qsv .mp4 video output and playback are clean, and faster with GPU decoding. For the legacy, limited Skylake (h264_qsv) and Coffee Lake (hevc_qsv) iGPUs, it is best to keep decoding on the CPU, due to iGPU buffer errors and some distortion during playback.
So I wonder whether it is feasible to test-implement an additional environment variable, "CIN_HW_DEV=qsv", for GPU HW decoding that adds "-hwaccel qsv" on the input side? If it works, it could then be exposed as a "Use HW Device: qsv" option.
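From the shell, that could look like the following sketch. CIN_HW_DEV already exists in CinGG for vaapi/vdpau; the "qsv" value is the proposal in this thread, not something currently implemented:

```shell
# Hypothetical: CinGG already reads CIN_HW_DEV for vaapi/vdpau decode;
# "qsv" is the value proposed here, not yet supported.
export CIN_HW_DEV=qsv
# then launch CinGG from the same shell, e.g.:
#   cin
```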
Terje J. H
_______________________________________________ Cin mailing list -- [email protected] To unsubscribe send an email to [email protected]