From this post by DeJay: https://www.cinelerra-gg.org/forum/help-video/yuv-to-rgb-conversion-issues/p... I added the example in the manual. I attach the changes. See if it is okay.
On Fri, Jul 29, 2022 at 11:18 Andrea paz via Cin <[email protected]> wrote:
From this post by DeJay:
https://www.cinelerra-gg.org/forum/help-video/yuv-to-rgb-conversion-issues/p... I added the example in the manual. I attach the changes. See if it is okay.
Thanks for picking this up .. I think our color conversion code will definitely benefit from some audit ... But who is capable of such work? Maybe we as a project should put it on some website where libre programmers hang around? Maybe not for free but for some agreed amount of money? Right now ffmpeg continues to integrate lcms2, so we had better have our colors correct (right now I am not even sure whether we dither the RGBA-float colormodel down to 8 bits in the best possible way, and our display code still only handles 8 bits ...). Also, can anyone test this little y4m output to a pre-created fifo file? You create a named fifo, point CinGG at this file, choose the ffmpeg y4m output, start the render, and in a terminal start an ffmpeg encoder reading from this pipe ... and see how it handles the full/limited range setting.
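A minimal sketch of the encoder side of that test, for anyone who wants to try it (it assumes a POSIX system with ffmpeg on the PATH; the paths and the x264 codec are arbitrary choices, not anything CinGG requires):

    import os
    import subprocess
    import tempfile

    # Hypothetical locations for the pipe and the encoded result.
    fifo_path = os.path.join(tempfile.gettempdir(), "cin_test.y4m")
    out_path = os.path.join(tempfile.gettempdir(), "cin_test.mkv")

    if not os.path.exists(fifo_path):
        os.mkfifo(fifo_path)  # the named fifo CinGG will render into

    # Start the encoder reading from the pipe.  It blocks until a writer
    # appears: in CinGG, point the render path at fifo_path, pick the
    # ffmpeg y4m output, and start the render.
    encoder = subprocess.Popen([
        "ffmpeg", "-y",
        "-f", "yuv4mpegpipe", "-i", fifo_path,
        "-c:v", "libx264", out_path,
    ])
    encoder.wait()
    print("ffmpeg exited with", encoder.returncode)

Comparing the result of this path with a direct render is one way to see whether the full/limited range flag survives the pipe.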
Thanks for picking this up .. I think our color conversion code will definitely benefit from some audit ... But who is capable of such work? Maybe we as a project should put it on some website where libre programmers hang around? Maybe not for free but for some agreed amount of money?
If someone does the work to put this out there where libre programmers hang around, I will provide a reasonable amount of money to some audit programmer through PayPal. There would have to be some kind of guarantee that they actually look at it thoroughly though. ...Phyllis
Right now ffmpeg continues to integrate lcms2, so we had better have our colors correct (right now I am not even sure whether we dither the RGBA-float colormodel down to 8 bits in the best possible way, and our display code still only handles 8 bits ...).
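For context on the dithering question: quantizing a float colormodel straight to 8 bits tends to produce visible banding in smooth gradients, and the usual remedy is to add roughly one LSB of noise before rounding. A small numpy sketch of that idea (TPDF dither, values assumed normalized to [0, 1]); this is only an illustration, not what CinGG currently does:

    import numpy as np

    def float_to_uint8_dithered(frame, rng=None):
        """Quantize a float image in [0, 1] to 8 bits with TPDF dither."""
        rng = np.random.default_rng() if rng is None else rng
        scaled = frame * 255.0
        # Sum of two uniforms in [-0.5, 0.5) gives triangular (TPDF) noise
        # of +/- 1 LSB, which trades banding for low-level noise.
        noise = rng.random(frame.shape) + rng.random(frame.shape) - 1.0
        return np.clip(np.rint(scaled + noise), 0, 255).astype(np.uint8)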
Also, can anyone test this little y4m output to a pre-created fifo file? You create a named fifo, point CinGG at this file, choose the ffmpeg y4m output, start the render, and in a terminal start an ffmpeg encoder reading from this pipe ... and see how it handles the full/limited range setting.
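As for the full/limited range setting: in 8-bit Y'CbCr, full range uses the whole 0-255 scale while limited ("video") range keeps luma in 16-235 and chroma in 16-240, and mismatching the flag at either end of a pipeline shows up as washed-out or over-contrasty output. A small numpy sketch of the scaling in one direction, purely for illustration:

    import numpy as np

    def full_to_limited_8bit(y, cb, cr):
        """Rescale full-range 8-bit Y'CbCr planes to limited (MPEG) range.

        Limited range keeps luma in 16-235 and chroma in 16-240; the chroma
        scaling is done around the neutral value 128 so that grey stays grey.
        """
        def to_u8(a):
            return np.clip(np.rint(a), 0, 255).astype(np.uint8)

        y_lim = 16.0 + y.astype(np.float64) * (219.0 / 255.0)
        cb_lim = 128.0 + (cb.astype(np.float64) - 128.0) * (224.0 / 255.0)
        cr_lim = 128.0 + (cr.astype(np.float64) - 128.0) * (224.0 / 255.0)
        return to_u8(y_lim), to_u8(cb_lim), to_u8(cr_lim)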
To coders: do you think it would be difficult to integrate the ColorSpace plugin inside the Transcode rendering window? This would not only change codec and format but also color spaces and YUV color range.
On Fri, Jul 29, 2022 at 23:08 Andrea paz via Cin <[email protected]> wrote:
To coders: do you think it would be difficult to integrate the ColorSpace plugin inside the Transcode rendering window? This would not only change codec and format but also color spaces and YUV color range.
But this will create temporary files, unlike adding it to the timeline?
But this will create temporary files, unlike adding it to the timeline?

Let me explain my idea. Transcode originated in CinGG as support for proxies, mainly to have non-temporary conversions that could remedy sources with seeking problems; in fact, initially the Proxy and Transcode entries in the manual were merged together. But in the editing world, especially in pro programs, transcoding is mainly used to convert sources into an intermediate format suitable for editing work. It is a fundamental part of pre-editing, and pro programs provide sophisticated features such as merging in high-quality external audio tracks, replacing those embedded in the source, while maintaining synchronization. One has to remember that in the pro environment we have to deal with dozens or even hundreds of sources, and these all have to be processed together automatically before tackling the actual editing. Hence the need for transcoding functionality.

When GG created Transcode for CinGG, I asked him to also offer the ability to choose the color space as well as the codec/format, but he replied that this was not possible. Shortly thereafter he created the ColorSpace plugin for those who needed this functionality. The trouble with the plugin, however, is that it has to be applied manually to each individual source on the timeline and not automatically to all the sources in the Resources (pre-editing) window. This defeats its usefulness if you have dozens and dozens of sources.

I can't find the post anymore, but a while back a user asked if it was possible to have audio/video sync via timecode. GG provided this part of the request but not the second part, which asked for automatic replacement of the embedded audio with the external track. He said that with this feature CinGG would have everything needed for highly professional use. But apart from audio, being able to choose the color space in the Resources window would mean providing true transcoding. Hence my request to be able to put ColorSpace in the Transcode rendering window.

Let's be clear though: this would only interest pro users, who tackle large projects for work. We are perfectly fine as we are, too. It's just that it would be nice to have a program that is functional even in pro work environments (I mean jobs for cinema, Netflix, etc.)
PS: I don't know if it is worth asking an outside developer to work on CinGG's color management. Adam Williams, Herman Vossler (who later created Lumiere, precisely because of the inability to develop Cin's code), Einar and even GG have said it is not possible. Perhaps one could ask "Monty" Montgomery, who knows both Cin and color management well, since he created BlueBanana.
On Sat, Jul 30, 2022 at 12:06 Andrea paz <[email protected]> wrote:
But this will create temporary files, unlike adding it to the timeline? Let me explain my idea. Transcode originated in CinGG as support for proxies, mainly to have non-temporary conversions that could remedy sources with seeking problems; in fact, initially the Proxy and Transcode entries in the manual were merged together. But in the editing world, especially in pro programs, transcoding is mainly used to convert sources into an intermediate format suitable for editing work. It is a fundamental part of pre-editing, and pro programs provide sophisticated features such as merging in high-quality external audio tracks, replacing those embedded in the source, while maintaining synchronization. One has to remember that in the pro environment we have to deal with dozens or even hundreds of sources, and these all have to be processed together automatically before tackling the actual editing.
Ah, thanks for the explanation! I have an idea about creating a GUI for source-side opt files (this would enable at least ffmpeg effects and colorspace conversions), but obviously I am not sure I will ever be able to code this.

Hence the need for transcoding functionality. When GG created Transcode for CinGG, I asked him to also offer the ability to choose the color space as well as the codec/format, but he replied that this was not possible. Shortly thereafter he created the ColorSpace plugin for those who needed this functionality. The trouble with the plugin, however, is that it has to be applied manually to each individual source on the timeline and not automatically to all the sources in the Resources (pre-editing) window. This defeats its usefulness if you have dozens and dozens of sources. I can't find the post anymore, but a while back a user asked if it was possible to have audio/video sync via timecode. GG provided this part of the request but not the second part, which asked for automatic replacement of the embedded audio with the external track. He said that with this feature CinGG would have everything needed for highly professional use.

You meant LTC audio timecode reading?

But apart from audio, being able to choose the color space in the Resources window would mean providing true transcoding. Hence my request to be able to put ColorSpace in the Transcode rendering window.

Well, the dvd creating window already has code to put a project-long instance of an effect, but this is not our use case .....

Let's be clear though: this would only interest pro users, who tackle large projects for work. We are perfectly fine as we are, too. It's just that it would be nice to have a program that is functional even in pro work environments (I mean jobs for cinema, Netflix, etc.)
PS: I don't know if it is worth asking an outside developer to work on CinGG's color management. Adam Williams, Herman Vossler (who later created Lumiere, precisely because of the inability to develop Cin's code), Einar and even GG have said it is not possible. Perhaps one could ask "Monty" Montgomery, who knows both Cin and color management well, since he created BlueBanana.
Well, last year I was inspired by the fast lcms2 plugin. Maybe I misunderstand the speedups it provides, and it probably will not work at video frame rates by itself... but we could at least make an EXR background render with the colorspace changes baked in (unless we want the HDR info to change dynamically with the detected light/screen characteristics at the workstation?) and then display this via, say, the 16-bit pbuffers provided by Mesa. I tried to talk about it to the nouveau/mesa3d devs but was evidently unsuccessful ... https://littlecms.com/plugin/ I think we already discovered that DaVinci Resolve uses OpenCL-OpenGL interoperability to get fast color correction without moving images over the (for this task) slow bus ... but then this new Rust implementation of OpenCL in mesa3d is still more like a testbed .... not even in main, but living separately in a Merge Request. Some more coordination is apparently needed, but ... well, it seems I am not the right person to do it ....
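To make the lcms2 part concrete: the usual pattern is to build an ICC transform once and then reuse it for every frame, since building the transform is the expensive step and applying it is comparatively cheap. A rough Python sketch of that pattern via Pillow's ImageCms binding to lcms2 (the profile path and frame file names are invented; a real CinGG plugin would presumably use the C lcms2 API on its own frame buffers):

    from PIL import Image, ImageCms

    # Source colorspace and a destination/display profile (path is made up).
    src_profile = ImageCms.createProfile("sRGB")
    dst_profile = ImageCms.ImageCmsProfile("/usr/share/color/icc/display.icc")

    # Build the transform once; reuse it for every frame.
    transform = ImageCms.buildTransform(src_profile, dst_profile, "RGB", "RGB")

    for name in ("frame_0001.png", "frame_0002.png"):  # stand-ins for decoded frames
        frame = Image.open(name).convert("RGB")
        converted = ImageCms.applyTransform(frame, transform)
        converted.save("converted_" + name)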
You meant LTC audio timecode reading?
Yes, LTC timecode: CinGG scans various types of timecode (all LTC) and adopts what it finds in the edits. In pro filming environments, various cameras and audio recorders are capable of creating a synchronized timecode (jam-sync timecode), optionally also using external devices (Timecode Sync Generator). At this point in CinGG it is easy to do the alignment: just bring the edits to the timeline and use the "Align Timecodes" feature. https://cinelerra-gg.org/download/CinelerraGG_Manual/Align_Timecodes.html As I mentioned, in pro environments this step must take place in pre-editing (the Resources window) and not on the timeline, and must also provide replacement of the low-quality embedded audio track with the high-quality external one.
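For what it's worth, the alignment itself comes down to arithmetic on the start timecodes of the clips. A toy sketch with invented values (non-drop-frame timecode only; this is not CinGG's implementation):

    def tc_to_seconds(tc, fps):
        """Convert a non-drop-frame 'HH:MM:SS:FF' timecode to seconds."""
        hh, mm, ss, ff = (int(x) for x in tc.split(":"))
        return hh * 3600 + mm * 60 + ss + ff / fps

    # Hypothetical jam-synced start timecodes from a camera file and an
    # external audio recorder; the difference is how far the audio edit
    # must be shifted so both line up on the timeline.
    video_start = tc_to_seconds("10:21:33:12", fps=25.0)
    audio_start = tc_to_seconds("10:21:31:00", fps=25.0)
    print("shift audio by %.3f s" % (video_start - audio_start))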
On Sat, Jul 30, 2022 at 18:40 Andrea paz <[email protected]> wrote:
You meant LTC audio timecode reading?
Yes, LTC timecode: CinGG scans various types of timecode (all LTC) and adopts what it finds in the edits.
So far I saw this email on the ffmpeg list:
--- Idea: could a filter be developed in FFMPEG, based on example files we can provide, to identify the presence of an LTC-derived audio stream, and additionally identify which of the streams is the LTC stream. ---
http://ffmpeg.org/pipermail/ffmpeg-devel/2022-July/298993.html

In pro filming environments, various cameras and audio recorders are capable of creating a synchronized timecode (jam-sync timecode), optionally also using external devices (Timecode Sync Generator). At this point in CinGG it is easy to do the alignment: just bring the edits to the timeline and use the "Align Timecodes" feature. https://cinelerra-gg.org/download/CinelerraGG_Manual/Align_Timecodes.html As I mentioned, in pro environments this step must take place in pre-editing (the Resources window) and not on the timeline, and must also provide replacement of the low-quality embedded audio track with the high-quality external one.
Well, but does this mean the audio scanning and trimming happen automatically? On the timeline you can manually align audio and video ... or at least see via the waveform whether the alignment worked as intended. How is it done in other editors? So far I can only imagine a checkbox and flag in the asset info/Resources window preventing the import of audio from a specified asset... But I must admit Cinelerra probably was not meant to be used exactly this way (or maybe embedding a mini timeline/waveform in the info window is just too much work). I saw a waveform editor as a tab in the viewer (in Final Cut Pro); maybe it can be implemented this way?
So far I saw this email on the ffmpeg list
I think ffmpeg filters that require multiple streams as input don't work in CinGG; they give the classic warning: "Input / Output error"
How is it done in other editors?
DaVinci Resolve: https://beginnersapproach.com/davinci-resolve-sync-audio-clips/
Avid Media Composer: https://wikis.utexas.edu/display/comm/AVID+-+Syncing+Video+and+Audio+using+A...
So far I can only imagine a checkbox and flag in the asset info/Resources window preventing the import of audio from a specified asset...
Yes, exactly. The whole pre-editing phase is based on a database that writes and organizes all the data, assets, metadata, flags, storyboards, etc. I've read the series of books (I'm missing the last 2 volumes) that, it seems to me, you yourself recommended: Timeline Analog vols. 1-6. It is clear from the history of editing that the reason for the success of digital editing over film editing is not the actual editing. In fact this became important only when the power of PCs, transfer rates, and storage became sufficient. But the real success was due to the pre-editing and organization of the material to be edited. There was no comparison between the mountain of written papers and memos hanging on the studio walls and the database built into the software. DaVinci Resolve also offers a choice between built-in database and system PostgreSQL.
But I must admit Cinelerra probably was not meant to be used exactly this way (or maybe embedding a mini timeline/waveform in the info window is just too much work).
I still agree with you. I would not want all this flood of words with which I have bored you to lead you to think that CinGG is backward or inadequate compared to anything else. These pro features are not needed for small to medium projects, and I think some of the users also use it for big projects (Pierre? Fary54? Sam?). Audio/video auto-sync is not at all necessary for CinGG. It would distinguish it from its open source competitors, but doing it manually is still okay. Putting color space in transcoding would also be nice, but it is fine to do it manually with the ColorSpace plugin.
I saw a waveform editor as a tab in the viewer (in Final Cut Pro); maybe it can be implemented this way?
I didn't understand; do you mean the "Disable audio components on AV clips" option at minute 2:17 of this video? https://www.youtube.com/watch?v=02j6c7XFdXA (I don't know Final Cut or even Premiere Pro.)

PS: one important pre-editing feature that, as a non-programmer, I always thought would be easy to implement is the ability to create nested subfolders inside the folders present (or created from scratch) in the Resources window. I have asked for this a couple of times but with no response, so I guess it is not trivial to implement at all.
On Sun, Jul 31, 2022 at 11:38 Andrea paz <[email protected]> wrote:
So far I saw this email on the ffmpeg list
I think ffmpeg filters that require multiple streams as input don't work in CinGG; they give the classic warning: "Input / Output error"
Yeah, but maybe in this special case there will be a way to use this specific filter .... Something to look into when it lands in ffmpeg.git.
How is it done in other editors?
DaVinci Resolve: https://beginnersapproach.com/davinci-resolve-sync-audio-clips/
Thanks, it seems the internal representation of the relations between audio and video is quite a bit richer in this case.
Avid Media Composer:
https://wikis.utexas.edu/display/comm/AVID+-+Syncing+Video+and+Audio+using+A...
So far I can only imagine a checkbox and flag in the asset info/Resources window preventing the import of audio from a specified asset...
Yes, exactly. The whole pre-editing phase is based on a database that writes and organizes all the data, assets, metadata, flags, storyboards, etc. I've read the series of books (I'm missing the last 2 volumes) that, it seems to me, you yourself recommended: Timeline Analog vols. 1-6. It is clear from the history of editing that the reason for the success of digital editing over film editing is not the actual editing. In fact this became important only when the power of PCs, transfer rates, and storage became sufficient. But the real success was due to the pre-editing and organization of the material to be edited.
Yeah, here CinGG, while better than CinCV, is still lacking ...

There was no comparison between the mountain of written papers and memos hanging on the studio walls and the database built into the software. DaVinci Resolve also offers a choice between built-in database and system PostgreSQL.
But I must admit Cinelerra probably was not meant to be used exactly this way (or maybe embedding a mini timeline/waveform in the info window is just too much work).
I still agree with you. I would not want all this flood of words with which I have bored you to lead you to think that CinGG is backward or inadequate compared to anything else. These pro features are not needed for small to medium projects, and I think some of the users also use it for big projects (Pierre? Fary54? Sam?). Audio/video auto-sync is not at all necessary for CinGG. It would distinguish it from its open source competitors, but doing it manually is still okay. Putting color space in transcoding would also be nice, but it is fine to do it manually with the ColorSpace plugin.
Well, at least having one additional pair of experienced eyes (or more than one pair) looking at our code would be nice ....
I saw a waveform editor as a tab in the viewer (in Final Cut Pro); maybe it can be implemented this way?
I didn't understand; do you mean the "Disable audio components on AV clips" option at minute 2:17 of this video? https://www.youtube.com/watch?v=02j6c7XFdXA (I don't know Final Cut or even Premiere Pro.)
PS: one important pre-editing feature that, as a non-programmer, I always thought would be easy to implement is the ability to create nested subfolders inside the folders present (or created from scratch) in the Resources window. I have asked for this a couple of times but with no response, so I guess it is not trivial to implement at all.
I guess this just requires modifications in many places (because you want your actions to depend in various ways on the type of bin/folder you select ...?). But maybe Einar (cc) can hack on some of those topics in his experimental fork and then we can try to bring it over to CinGG?
Thanks to DeJay for the tip and to Andrea for updating the manual. I have reviewed the wording and checked it into GIT. I really like the nice-looking table. On Fri, Jul 29, 2022 at 2:18 AM Andrea paz via Cin <[email protected]> wrote:
From this post by DeJay:
https://www.cinelerra-gg.org/forum/help-video/yuv-to-rgb-conversion-issues/p... I added the example in the manual. I attach the changes. See if it is okay.
participants (3)
- Andrea paz
- Andrew Randrianasulu
- Phyllis Smith