[Cin] SyncSink and other audio/video alignment projects

Andrew Randrianasulu randrianasulu at gmail.com
Tue Jul 16 22:21:36 CEST 2024


вт, 16 июл. 2024 г., 21:56 Stefan de Konink via Cin <
cin at lists.cinelerra-gg.org>:

> Op 7/16/24 om 6:39 PM schreef Andrew Randrianasulu via Cin:
> > I read about this problem of synchronizing several videos on the forum.
> > After yet another search I found a tool ....
> >
> > Does anyone have a few multicam files to test this Java tool with?
>
> I just did. While this tool may be academically functional, it is not
> practical.
>
> This is the directory structure I usually follow: my individual
> recordings from multiple sources are placed in separate folders, where I
> typically also have an independent audio track. Not this time; we are
> keeping it "simple".
>
> <https://download.stefan.konink.de/syncsink/>
>
> In this folder structure you can see that there are two files in the two
> folders. Hence, if we follow the metadata of these individual files, we
> know 'exactly' when each recording started. The device may have an
> offset.
>
> Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '_DSC0680.MOV':
>    Metadata:
>      major_brand     : qt
>      minor_version   : 537331968
>      compatible_brands: qt  niko
>      creation_time   : 2023-12-10T11:10:25.000000Z
>
> Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '_DSC0681.MOV':
>    Metadata:
>      major_brand     : qt
>      minor_version   : 537331968
>      compatible_brands: qt  niko
>      creation_time   : 2023-12-10T11:13:29.000000Z
>
> My basic expectation: the sync tool is aware that a sequential recording
> has been made from the same device and is able to position the takes
> relative to each other. Creation time, metadata, duration, *timecode*
> would help. Even if the tool had not figured out that this was a
> specific camera, it would still have placed the files at the correct
> offset.
>
> Now the tool at hand is not aware of this sequential nature of the
> files. It can compute relative positions of recordings, but not of
> takes.
>
> So for the second folder we get a result, and the tool comes up with a
> way to align the *audio*:
>
> #execute the following in the terminal
> cd '/mnt/media/video/20231209-penm-allstars/ronin'
> ffmpeg -i "_DSC4395.MOV" "original__DSC4395.wav"
> ffmpeg -f lavfi -i aevalsrc=0:d=7.5135827 -i
> "/mnt/media/video/20231209-penm-allstars/nikon/_DSC0680.MOV"
> -filter_complex "[0:0] [1:0] concat=n=2:v=0:a=1 [a]" -map [a]
> "synced__DSC0680.wav"
>
> What I would want is the ability to tag the existing files with a
> timecode so any tool would be able to use that information to place the
> files on a timeline.
>
> The ideal tool (which I would be looking for) would be something within
> the scope of opentimelineio: place the assets correctly relative to each
> other, as in the timeline view of Cinelerra. The first synchronisation
> step could use the directory to determine the track, plus file
> modification time and metadata. A second step would then sync the media
> at frame and audio level, by audio or visual cues, so it would overcome
> millisecond differences as well.
>
> If we could then either export a Cinelerra project "xml" or import
> opentimelineio, that would greatly improve my workflow, and I think that
> of everyone who has multicam recordings without in-camera timecode.
>
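
Assuming creation_time marks the start of each recording, the coarse
placement described above can be computed directly from the metadata dumps
in the quoted mail. A minimal Python sketch (a hypothetical illustration,
not the output of any existing tool):

```python
from datetime import datetime

# creation_time values from the ffmpeg metadata dumps above
t0 = datetime.fromisoformat("2023-12-10T11:10:25")  # _DSC0680.MOV
t1 = datetime.fromisoformat("2023-12-10T11:13:29")  # _DSC0681.MOV

# coarse timeline offset of the second take relative to the first, in seconds
offset = (t1 - t0).total_seconds()
print(offset)  # 184.0 -> _DSC0681.MOV starts 3 min 4 s after _DSC0680.MOV
```

Note that some cameras write creation_time at the end of recording rather
than the start, so in practice the take's duration may have to be
subtracted first.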

I found this forum post on the Shotcut forums:

https://forum.shotcut.org/t/audio-alignment-implementation/30420/10

It goes into the details of how audio alignment can be implemented.

The compare.cpp referenced there lives in

https://github.com/andre-caldas/sandbox/tree/master/audio_aligner
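
The core of that approach is cross-correlation: slide one audio track
against the other and pick the lag where they match best. A minimal
pure-Python sketch of the idea (not the actual compare.cpp code), assuming
both tracks are already decoded to sample lists at the same rate:

```python
def best_lag(master, other, max_lag):
    """Return the lag (in samples) of `other` relative to `master` that
    maximizes their cross-correlation. A positive lag means `other`
    recorded the same sound `lag` samples later than `master`."""
    best, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i, s in enumerate(master):
            j = i + lag
            if 0 <= j < len(other):
                score += s * other[j]
        if score > best_score:
            best, best_score = lag, score
    return best

# toy example: `other` is `master` delayed by 3 samples
master = [0, 0, 1, 2, 3, 2, 1, 0, 0, 0]
other  = [0, 0, 0, 0, 0, 1, 2, 3, 2, 1]
print(best_lag(master, other, 5))  # 3
```

A real implementation would correlate downsampled envelopes (or use an
FFT) instead of this O(n * lags) loop, but the principle is the same.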

For more complex folder processing I found this project, but it seems to
rely on a hardware LTC generator:

https://hackaday.io/project/176196/logs

Also, some Blender experiments from 2013; I am not sure where the real
implementation lived ....

https://monochrome.sutic.nu/2014/03/02/multi-camera-sync.html

And this article mentions the -timecode switch for ffmpeg (but it also
relies on an electronic slate/display board visible in the very first
frame of the video):

https://probably.co.uk/posts/adding-timecode-to-older-video-recorders/

So, one idea is to:

1) extract audio from the sources.
2) compare all those wavs to a master sound file extracted from the master
media file.
3) convert the offsets into timecode.
4) run ffmpeg to tag the video files by copying the a/v streams to new
files (or maybe use gpac, if it can do it in place?).

Then sync via timecode in CinGG ;)
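
Steps 3 and 4 above could be sketched as follows. The helper name and the
output filename are hypothetical; the ffmpeg invocation (stream copy plus
-timecode) only illustrates the tagging idea, not a finished tool:

```python
def seconds_to_timecode(offset, fps):
    """Convert an offset in seconds to a non-drop SMPTE hh:mm:ss:ff string."""
    total_frames = round(offset * fps)
    ff = total_frames % fps          # residual frames within the last second
    ss = total_frames // fps         # whole seconds
    hh, ss = divmod(ss, 3600)
    mm, ss = divmod(ss, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

# step 3: offset found by audio comparison -> timecode at 25 fps
tc = seconds_to_timecode(184.0, 25)
print(tc)  # 00:03:04:00

# step 4: stream-copy into a new file, tagging it with the timecode
cmd = ["ffmpeg", "-i", "_DSC0681.MOV", "-c", "copy",
       "-timecode", tc, "tagged__DSC0681.MOV"]
print(" ".join(cmd))
```

Drop-frame rates (29.97/59.94) would need the usual drop-frame correction,
which this sketch does not attempt.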

> --
> Stefan
>
> --
> Cin mailing list
> Cin at lists.cinelerra-gg.org
> https://lists.cinelerra-gg.org/mailman/listinfo/cin
>