On 10/03/2026 01:34, Andrew Randrianasulu wrote:
On Mon, Mar 9, 2026 at 7:22 PM Terje J. Hanssen <terjejhanssen@gmail.com> wrote:

On 09/03/2026 11:28, Andrew Randrianasulu wrote:

You need to set the remapping manually (for a single-user build) in

bin/ffmpeg/decode.opts

like

remap_video_decoder mpeg2video=mpeg2_qsv

and other similar lines if you want to test other formats (hevc, h264)
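For reference, the analogous remap lines for those formats might look like this (a sketch; the decoder names are assumed to follow ffmpeg's naming, so verify them against the output of `ffmpeg -decoders` on your build):

```
# hypothetical decode.opts additions; names assumed from ffmpeg's decoder list
remap_video_decoder h264=h264_qsv
remap_video_decoder hevc=hevc_qsv
remap_video_decoder av1=av1_qsv
```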

Thank you for the quick response.
After editing:

~/Applications/squashfs-root/usr/bin/ffmpeg # cat decode.opts
# apply at init decode
loglevel=fatal
formatprobesize=5000000
scan_all_pmts=1
remap_video_decoder libaom-av1=libdav1d
remap_video_decoder mpeg2video=mpeg2_qsv

A compilation of my test results follows:

mpeg2 transcoded to h264_qsv.mp4

decoding in cpu/gpu
Cingg:  265/300 fps
ffmpeg: 548/562 fps


mpeg2 transcoded to hevc_qsv.mp4

decoding in cpu/gpu
Cingg:  290/294 fps
ffmpeg:  - /688 fps


mpeg2 transcoded to av1_qsv.mp4

decoding in cpu/gpu
Cingg:  251/325 fps
ffmpeg: 631/698 fps


As seen, the Cingg fps results are much lower than my system ffmpeg results.
Is there any way to verify that decoding is done in the gpu like with ffmpeg?
Comment out the remap line and retest? Watch nvtop or intel_gpu_top; I'm
not sure whether they distinguish between encoding and decoding on Intel
(it works on AMD Polaris12).
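One way to cross-check outside Cingg (a sketch, assuming an Intel/QSV setup, the intel-gpu-tools package, and the same hdv09_04.m2t test file): run a decode-only ffmpeg benchmark and watch the Video engine row in intel_gpu_top in a second terminal while it runs.

```shell
# Decode-only, no encode: if QSV decode is active, intel_gpu_top
# (running in another terminal) should show load on the Video engine,
# and ffmpeg's verbose log typically mentions the selected hwaccel.
ffmpeg -v verbose -hwaccel qsv -i hdv09_04.m2t -f null -

# For comparison, force software decode and re-check the Video row:
ffmpeg -v verbose -i hdv09_04.m2t -f null -
```

If the Video engine stays near 0% in the first run, the decode is most likely still happening on the CPU.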

As I said earlier, we are not expected to be as fast as ffmpeg even in
the ideal case, because we round-trip uncompressed video through system
memory/the bus.


I tried intel_gpu_top, which has a Video engine line (showing total GPU video use in %, I think).

At least Cingg seemingly prints a message in the terminal if the HW device is not supported, as for:

Settings|Preferences|Performance|Use HW Device: vaapi

Decoder mpeg2_qsv does not support device type vaapi.
HW device init failed, using SW decode.
file:/Videoklipp/AV1/hdv09_04.m2t
  err: Operation not permitted
......
Render::render_single: Session finished.
** rendered 5972 frames in 20.875 secs, 286.084 fps
audio0 pad 64 0 (64)
malloc_consolidate(): unaligned fastbin chunk detected
Aborted                    (core dumped) cin
-------

Yeah, of course Cingg and ffmpeg are not comparable with regard to video transcoding speed.

I guess their working principles, surely oversimplified, may be something like this:

Cingg:  mpeg2 input --> decoding --> internal/raw format --> decoding? --> encoding output

FFmpeg: mpeg2 input --> decoding --> encoding output


Therefore it's confusing where the "remap_video_decoder mpeg2video=mpeg2_qsv" decoding really takes place: in the mpeg2 input --> internal/raw format step, rather than in a decode just before the encoding output?





On Sun, Mar 8, 2026 at 11:46 PM Terje J. Hanssen via Cin
<cin@lists.cinelerra-gg.org> wrote:

I have tested transcoding mpeg2video to h264_qsv, hevc_qsv, vp9_qsv and
av1_qsv with HW GPU-accelerated decoding+encoding in ffmpeg.
Just by adding "-hwaccel qsv" before the input file, the correct qsv
codec for GPU decoding is apparently auto-detected.

ffmpeg -hide_banner -hwaccel qsv -i hdv09_04.m2t -c:v hevc_qsv -y
hdv09_04_ffmpeg_hevc_qsv.mp4

With a moderately powerful (balanced) Intel Alder Lake (i7-12700KF) /DG2
(A750), the qsv.mp4 video output and playback are clean, and faster with
GPU decoding.
For the legacy, limited Skylake (h264_qsv) and Coffee Lake (hevc_qsv)
iGPUs, it is best to keep decoding on the CPU, due to iGPU buffer errors
and some distortions during playback.


So I wonder if it is manageable to test-implement an additional environment
variable "CIN_HW_DEV=qsv" for GPU HW decoding that adds "-hwaccel qsv"
on the input side.
If it works, it could be added as "Use HW Device: qsv" next.
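For comparison, CinGG already honors a CIN_HW_DEV environment variable for the existing backends, so the proposed usage would presumably mirror that (the vaapi/vdpau values exist today; the qsv value is the hypothetical addition):

```shell
# Existing usage: per-session override of the HW device preference
CIN_HW_DEV=vaapi cin
CIN_HW_DEV=vdpau cin

# Proposed (hypothetical) addition for Intel Quick Sync:
CIN_HW_DEV=qsv cin
```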

Terje J. H






_______________________________________________
Cin mailing list -- cin@lists.cinelerra-gg.org
To unsubscribe send an email to cin-leave@lists.cinelerra-gg.org