A nice command-line tool that measures the offset between two audio tracks, allowing them to be synchronized manually. Unfortunately it is written in Python, but I was wondering whether it could be imported into the "i" shell commands, so as to provide sync information for the various assets. It is Apache-licensed. https://github.com/bbc/audio-offset-finder
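[A minimal sketch of driving the tool from a script, e.g. behind a shell command. The CLI flags and the "Offset: ... (seconds)" output line are taken from the project's README and may vary between versions; file names are hypothetical:]

    # Minimal sketch: shell out to audio-offset-finder and parse the offset.
    # The "Offset: <n> (seconds)" line is assumed from the project's README.
    import re
    import subprocess

    def audio_offset(find_offset_of, within):
        """Seconds at which find_offset_of's audio starts inside within."""
        out = subprocess.run(
            ["audio-offset-finder",
             "--find-offset-of", find_offset_of, "--within", within],
            capture_output=True, text=True, check=True).stdout
        m = re.search(r"Offset:\s*(-?[0-9.]+)", out)
        return float(m.group(1)) if m else None

    print(audio_offset("camera_clip.mp4", "recorder_master.wav"))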
On 5/18/24 at 16:49, Andrea paz via Cin wrote:
A nice command-line tool that measures the offset between two audio tracks, allowing them to be synchronized manually. Unfortunately it is written in Python, but I was wondering whether it could be imported into the "i" shell commands, so as to provide sync information for the various assets. It is Apache-licensed. https://github.com/bbc/audio-offset-finder
I had starred this project as well. There are also some alternatives in C. From my perspective this kind of work should be separated from the editing workflow. I think what we "poor people without timecode" want is for our assets to be timecoded and available, already aligned, in a project where we could just throw all our content in.

So while the ideal would be "everything is timecoded", my focus would be to get everything into OpenTimelineIO as a container / EDL format, relate the assets to each other there, and pseudo-timecode them. OpenTimelineIO to Cinelerra is an option; handling timecoded content in Cinelerra should(?) already be supported.

What I would then like to have in my editor is the ability to ALWAYS be able to sync up content: select two clips (audio or video) on the timeline, press "sync up", and it does it by timecode. To me, the whole process of syncing up content within the editor does not make sense. Why would you want to load the editor up with work that requires all kinds of heuristics and specialist knowledge, when that editor's job is to make an EDL?

What Cinelerra is missing (even for existing containerized assets) is that "sync up" button. And we all know how easy it is for an audio track to move ever so slightly through our own inability to lock a track while editing, or worse, when align-to-frame is not enabled. -- Stefan
Some time ago there was talk about OpenTimelineIO, but I can no longer find the reference. What I was thinking of was to use Audio Offset Finder to see the gap between files and then manually impose a timecode on each asset via the appropriate "set timecode" button. At that point you could use your "sync up" (i.e., "Align Timecodes") button whenever you want.

Regarding the ease of losing sync on the timeline and then having to redo it again and again, I seem to remember that it was not possible to lock audio and video together, because it would have meant creating a new "A/V" track type while CinGG only has "A" and "V".

In contrast to what you said, in my opinion what all open NLEs on Linux lack is synchronization in pre-editing, that is, in the Resources window, working even on dozens and dozens of sources. What is also missing is the ability to automatically replace, in each source, the embedded audio track with an external audio track. But always in pre-editing, not on the timeline. In my opinion, pre-editing operations should be present in an NLE, while post-editing may be missing.
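[A side note on what "impose a timecode" could look like mechanically: a measured offset converts directly to an hh:mm:ss:ff pseudo-timecode that could then be stamped on the asset. A minimal sketch, with hypothetical offsets and a fixed 25 fps:]

    # Sketch: turn per-asset offsets (seconds) into pseudo-timecodes that
    # could be set on each asset. Offsets below are hypothetical examples.
    def to_timecode(seconds, fps=25):
        frames = round(seconds * fps)
        ff = frames % fps
        ss = (frames // fps) % 60
        mm = (frames // (fps * 60)) % 60
        hh = frames // (fps * 3600)
        return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

    offsets = {"clip001.mp4": 12.26, "clip002.mp4": 97.40}  # hypothetical
    for clip, off in offsets.items():
        print(clip, "->", to_timecode(off))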
On 5/19/24 at 09:45, Andrea paz wrote:
Some time ago there was talk about OpenTimelineIO, but I can no longer find the reference.
"Donation and DaVinci Resolve EDL"
What I was thinking of was to use Audio Offset Finder to see the gap between files and then manually impose a timecode on each asset via the appropriate "set timecode" button.
I don't even want to do things manually, if I have a choice ;)
What is also missing is the ability to automatically replace, in each source, the embedded audio track with an external audio track.
So you mean that you would have an external audio recorder, and just drop-in replace the audio from the camera?
But always in pre-editing, not on the timeline.
What do you mean by this?
In my opinion, pre-editing operations should be present in an NLE, while post-editing may be missing.
What is post-editing in your opinion? Colour grading, lower thirds? -- Stefan
I don't even want to do things manually, if I have a choice ;)
I have tried several times to create an ffmpeg-based script that would automate A/V sync, but I do not have the skills. See: https://www.cinelerra-gg.org/bugtracker/view.php?id=448
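[The core of such a script would be a single ffmpeg call per clip: once the offset is known, map the video stream from the camera file and the matching slice of the recorder file into a new container. A hedged sketch with hypothetical file names, assuming the offset is the position of the clip's audio within the recorder file:]

    # Sketch: replace a clip's internal audio with external audio, given a
    # known offset in seconds. File names are hypothetical.
    import subprocess

    def replace_audio(clip, recorder_wav, offset_s, out_file):
        subprocess.run(
            ["ffmpeg", "-y",
             "-i", clip,                    # input 0: camera video + scratch audio
             "-ss", str(offset_s),          # seek the recorder to the match
             "-i", recorder_wav,            # input 1: external recorder audio
             "-map", "0:v", "-map", "1:a",  # keep video from 0, audio from 1
             "-c:v", "copy", "-c:a", "aac",
             "-shortest", out_file],
            check=True)

    replace_audio("clip001.mp4", "recorder.wav", 12.26, "clip001_synced.mp4")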
So you mean that you would have an external audio recorder, and just drop-in replace the audio from the camera?
I have an old Tascam DR40, great but no timecode...
What do you mean by this?
I mean a button like your "sync up" button, put in the Resources window to act on the sources.
What is post-editing in your opinion? Colour grading, lower thirds?
Yes, Color Correction and Compositing.
On 5/19/24 at 13:26, Andrea paz wrote:
I don't even want to do things manually, if I have a choice ;)
I have tried several times to create an ffmpeg-based script that would automate A/V sync, but I do not have the skills. See: https://www.cinelerra-gg.org/bugtracker/view.php?id=448
I'll review what you already did here.
So you mean that you would have an external audio recorder, and just drop-in replace the audio from the camera?
I have an old Tascam DR40, great but no timecode...
I am using the Tascam DR60D (or DR60MKII) for virtually all my work, so I am also quite happy with that workflow. The DR60D does have a 'slate' option, but that requires an audio cable back into the camera. For safety I typically prefer an independent audio recording at the camera, at the cost of (manually) syncing in the open-source workflow (my only one).
What do you mean by this?
I mean a button like your "sync up" button, put in the Resources window to act on the sources.
So it builds a linear timeline? -- Stefan
So it builds a linear timeline?
No, just duplicate the sources, but with the external audio embedded and synchronized. A little like the transcode function, which asks you to hide the original media, leaving only the newly transcoded copies.
1- Collect the files into one folder (video files with internal audio + the external audio). Duplicate them. Calculate the length of each file by timecode. If the external audio is not a single file, concatenate the pieces and insert blanks to maintain timecode linearity. [?]
2- Synchronize the files with the external audio track (by timecode {or waveform}, your choice). [?; you treat the external audio track as the "master" and synchronize the other media against its timecode. Or you take advantage of CinGG's code for "Align Timecodes".]
3- Duplicate and trim the external audio track so that there is a chunk coinciding with every file. [?; you assign each medium a timecode based on that of the master and cut copies of the master, one per medium; ffmpeg -ss, -t, map]
4- Associate each external audio chunk with its corresponding file. [ffmpeg map]
5- Delete the internal audio tracks, leaving only the external one. [ffmpeg map]
(A sketch of steps 2-5 follows below.)
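[Steps 2-5 could be approximated with a loop over the clips: measure each clip's offset inside the master with audio-offset-finder, then let ffmpeg do the trimming and stream mapping in one pass. A rough, untested sketch under those assumptions; file names are hypothetical and the "Offset:" output format is assumed from the tool's README:]

    # Rough sketch of steps 2-5: for each camera file, find its offset
    # inside the master recorder audio, then mux the matching slice of the
    # master in place of the internal track. Untested.
    import glob
    import re
    import subprocess

    MASTER = "recorder.wav"     # step 1: the (concatenated) external audio

    def offset_in_master(clip):
        out = subprocess.run(
            ["audio-offset-finder",
             "--find-offset-of", clip, "--within", MASTER],
            capture_output=True, text=True, check=True).stdout
        return float(re.search(r"Offset:\s*(-?[0-9.]+)", out).group(1))

    for clip in sorted(glob.glob("*.mp4")):
        off = offset_in_master(clip)            # step 2: sync by waveform
        subprocess.run(
            ["ffmpeg", "-y",
             "-i", clip,
             "-ss", str(off), "-i", MASTER,     # step 3: trim a coinciding chunk
             "-map", "0:v", "-map", "1:a",      # steps 4-5: external audio only
             "-c:v", "copy", "-c:a", "aac",
             "-shortest", "synced_" + clip],
            check=True)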
On 5/19/24 at 20:33, Andrea paz wrote:
So it builds a linear timeline?
No, just duplicate the sources, but with the external audio embedded and synchronized. A little like the transcode function, which asks you to hide the original media, leaving only the newly transcoded copies.
What you describe below: why not approach it from an EDL perspective? Don't materialise it into new content, just into 'clips'? -- Stefan
What you describe below: why not approach it from an EDL perspective? Don't materialise it into new content, just into 'clips'?
And here I'm already getting lost. Do you mean that you can do everything inside CinGG? I don't know how to program, and my (stupid) attempts are nothing more than looking for existing projects to combine in a very simple script.
On 5/19/24 at 21:47, Andrea paz wrote:
What you describe below: why not approach it from an EDL perspective? Don't materialise it into new content, just into 'clips'?
And here I'm already getting lost. Do you mean that you can do everything inside CinGG?
No. Your attempt is to work around something that does not exist in CinGG by changing the input in a way that solves the problem on the output side of the program. In essence this is the same as what I mentioned for "pre-editing". My suggestion would be: don't do it that way. Cinelerra's Clips are basically a timeline. Hence, without changing any of the input, you could get the content related to each other just as if you had done it manually.

Now the basic questions would be:
1. Functionality-wise: is this "proven technology"?
2. Integration with Cinelerra: what would be the best way to interface?

Maybe we don't know the answer to question 2 yet, but how far are we, functionality-wise, from a prototype that would give you the offsets of the content relative to each other? -- Stefan
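[On question 1: the offsets themselves can be computed without touching the media, which fits the EDL/clip perspective. Once every clip has an offset within a common master, the offset of any clip relative to any other is just a difference. A minimal sketch with hypothetical, hand-entered numbers standing in for measured ones:]

    # Prototype sketch: relate content without materialising anything.
    # Offsets within a common master (e.g. from audio-offset-finder);
    # the values below are hypothetical placeholders.
    offsets = {"camA.mp4": 12.26, "camB.mp4": 15.10, "recorder.wav": 0.0}

    def relative_offset(a, b):
        """Seconds clip a starts after clip b on a common timeline."""
        return offsets[a] - offsets[b]

    print(relative_offset("camB.mp4", "camA.mp4"))  # -> 2.84 seconds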
In the DaVinci Resolve manual (starting on page 425), the way they handle sync is explained. This might be a cue for how to do it in CinGG, but I don't actually know how. It's not just a matter of not knowing how to write code: I can't even intuit how to do it at the idea level, at the design stage. https://documents.blackmagicdesign.com/UserManuals/DaVinci_Resolve_18_Refere...