Hi,

I think this scenario has passed by on the mailing list before, and I might, because of how I work, be more attracted to these kinds of problems than others who mainly do non-linear work. After the recording, and after getting the main production idea, my projects are time-wise mostly shaped by the reconstruction of a multi-track linear timeline. I am using multiple cameras that don't use any (world) timecode and have 2 GB limitations, so the best they can do is give me a relative timestamp by means of the filename or the creation_time metadata.

What I would be interested in is the ability to:

0. have a forever-scrolling canvas; the canvas may do tricks like hiding places where no content is available
1. "automatically" construct such a timeline from metadata, each track (group) coming from a single device, which typically can only produce a single stream of content at a time (a rough sketch of what I mean by 1-3 is below)
2. have some tooling that can do macro alignment, independent of metadata (for example by audio fingerprinting)
3. have some tooling that can do micro alignment
4. export this to either some EDL format or a multi-track format that is NLE independent

I wonder what other people are using for the above, other than pen and paper.
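
To make 1-3 a bit more concrete, here is a rough Python sketch of the kind of thing I have in mind, not a finished tool: it assumes ffprobe is installed and that the files actually carry a creation_time tag, and the coarse-alignment part assumes you already have two mono float arrays decoded at a known sample rate (numpy/scipy for the cross-correlation).

import datetime
import json
import subprocess

import numpy as np
from scipy.signal import correlate, correlation_lags


def creation_time(path):
    # ask ffprobe for the container metadata and pull out creation_time
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", path],
        capture_output=True, text=True, check=True).stdout
    tag = json.loads(out)["format"]["tags"]["creation_time"]
    return datetime.datetime.fromisoformat(tag.replace("Z", "+00:00"))


def relative_timeline(paths):
    # one device's clips, sorted by creation_time, each start expressed
    # as seconds after that device's earliest clip
    stamped = sorted((creation_time(p), p) for p in paths)
    t0 = stamped[0][0]
    return [(p, (t - t0).total_seconds()) for t, p in stamped]


def coarse_offset(ref, other, rate=48000):
    # macro alignment: cross-correlate two mono float arrays and return
    # the lag, in seconds, at which `other` best lines up with `ref`
    corr = correlate(ref, other, mode="full")
    lags = correlation_lags(len(ref), len(other), mode="full")
    return lags[np.argmax(corr)] / rate

Something like that would give me the macro placement per device; what I am missing is the surrounding tooling, the micro alignment, and an NLE-independent export.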
From my own experience within Cinelerra (I know about the mixers), I don't end up with the workflow that I want for non-continuous recordings.
A second email follows on a different workflow issue. -- Stefan