Since I keep having problems with multithreaded rendering (see https://www.mail-archive.com/[email protected]/msg01565.html), I tried the render farm again locally with only the threads of my CPU (8/16), but it doesn't work for me. When I try to start the client CinGG instances, instead of:

    RenderFarmClientThread::run: Session finished

I get:

    RenderFarmClient::main_loop:400: Address already in use

I first tried as root with ports 401 - 416, then as non-root with ports 1026 - 1041. The result is always the same.

The steps I take:
- Start CinGG, activate "use render farm", and set the 15 clients as Localhost with ports 1026, ..., 1041. OK to activate.
- Open a terminal and start the following script:

    for n in 'seq 1026 1041'; do
        /home/paz/Desktop/./CinGG-20210228-x86_64.AppImage -d $n
    done

- I get the message reported above and the rendering does not work. (The same happens with a compiled CinGG.)

Any advice?
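[Editor's note: a likely culprit in the script above is the quoting. The manual's version presumably used backticks for command substitution, which the archive may have flattened to plain single quotes; as written, the loop iterates over the literal string "seq 1026 1041" rather than the port numbers. A sketch of the corrected loop, with the launch line kept as a comment (the AppImage path is the one from the message above) so the sketch is safe to dry-run:]

```shell
# Use $(...) command substitution so the loop really iterates over the
# port numbers 1026..1041. With plain single quotes the shell would not
# run seq at all.
for n in $(seq 1026 1041); do
    echo "starting client on port $n"
    # /home/paz/Desktop/CinGG-20210228-x86_64.AppImage -d "$n" &
done
```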
Andrea, some things to look for:

    RenderFarmClient::main_loop:400: Address already in use

The above message says "400", but below you say you used "401-416"? Make sure the ports you start up exactly match the ones defined in the Render Farm menu. Before trying again, be sure to stop any processes that did get started (it may take up to 30 seconds for a port to be free to be used again). You can find them via a command from a window:

    ps -ef | grep cin

They will look like:

    root  5064  1  0 09:29 pts/3  00:00:00 cin -d 1501

and so you will have to kill 5064 for the above to get port 1501 back again.

> I first tried from root with ports 401 - 416. Then from no-root with
> ports 1026 - 1041. The result is always the same.

Check the file /etc/services for ports that are unused. When I look at mine, I clearly see that 1026-1041 are available, as in the lines:

    rndc     953/udp               # rndc control sockets (BIND 9)
    skkserv  1178/tcp sgi-storman  # SKK Japanese input method

so there is nothing defined between 953 and 1178, i.e. 1026-1041 are clearly available. Let me know if this gets you further along.
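[Editor's note: the /etc/services check above can be scripted. A minimal sketch, using a shell variable holding sample lines in the format of /etc/services so the example is self-contained; on a real system the same grep works on /etc/services directly, and `ss -ltn` additionally shows whether anything is currently listening:]

```shell
# Sample lines in the format of /etc/services (the real file is /etc/services):
services='rndc     953/udp               # rndc control sockets (BIND 9)
skkserv  1178/tcp sgi-storman  # SKK Japanese input method'

port=1026
# grep -w matches the port as a whole word, so 1026 will not match 10260:
if printf '%s\n' "$services" | grep -qw "$port"; then
    echo "port $port appears in the services list"
else
    echo "port $port is not registered"
fi
```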
Andrea, in Linux a regular user can't use ports lower than 1024; only root can do that, and running user programs under the root account is a bad idea in many ways. Consider using ports like 10400 and up.

Best regards,
Andrey

Tue, Mar 16, 2021, 18:38 Phyllis Smith via Cin <[email protected]>:
Yes, I never used port 400. I tried with ports 10445 to 10460, with the same result. I verified in /etc/services that these are free ports, unlike the others I had used before.

"ps" gives as a result:

    [paz@arch-paz ~]$ ps -ef | grep cin
    paz  62968  62445  0 17:46 pts/2  00:00:00 grep cin

Then:

    sudo kill 62445

The result does not change; always the same error.

From /etc/services you can see that port 400 is associated with "osb-sd [tcp/udp]" (Oracle Secure Backup). I don't even know what that is, and I don't understand how to keep the render farm from seeing it.
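[Editor's note: in the ps output above, 62968 is the grep process itself and 62445 is its parent shell, so no cin client was actually running, and the kill targeted the wrong process. Matching on the full command line avoids this trap; a sketch, demonstrated with a harmless placeholder process standing in for "cin -d <port>" so it is safe to run:]

```shell
# Start a placeholder process standing in for a render farm client:
sleep 300 &
placeholder=$!

# pgrep -f matches the full command line and excludes itself, so it finds
# real clients but never the grep/pgrep process:
pgrep -f 'sleep 300' >/dev/null && echo "client process found"

# xargs -r does nothing when the pid list is empty:
pgrep -f 'sleep 300' | xargs -r kill
wait "$placeholder" 2>/dev/null || true
echo "client process killed"
```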
Hi,

I use the render farm on a daily basis here. On the client machines:

    $ cin -d 1200

It will not run as root (the "#" prompt).

Cheers,
Ed

On Tue, 2021-03-16 at 20:37 +0100, Andrea paz via Cin wrote:
Andrea, PLEASE do not give up! What you learn can be used to improve the manual procedures. Instead of using the "seq" script from the manual (which may be bash/desktop dependent), do as Ed said and just run the single line.

1) For example, key in:

    cin -d 10445

(or use the full Cinelerra path if "cin" is not defined)

2) You should see something like the following in the terminal window:

    Cinelerra Infinity - built: Mar 9 2021 17:02:31
    git://git.cinelerra-gg.org/goodguy/cinelerra.git
    (c) 2006-2019 Heroine Virtual Ltd. by Adam Williams
    2007-2020 mods for Cinelerra-GG by W.P.Morrow aka goodguy
    Cinelerra is free software, covered by the GNU General Public License,
    and you are welcome to change it and/or distribute copies of it under
    certain conditions. There is absolutely no warranty for Cinelerra.
    [root@keystone ~]# init plugin index: /mnt0/build5/cinelerra-5.1/bin/plugins
    init lv2 index:
    RenderFarmClient::main_loop: client started

3) Next, check to see if that worked. Key in:

    ps -ef | grep cin

4) You should see something like the following:

    root 161032 161015  0 14:46 pts/2 00:00:14 /tmp/cin5/cinelerra5/cinelerra-5.1/cinelerra/ci
    root 161415 161398  0 14:48 pts/3 00:00:11 /mnt0/build5/cinelerra-5.1/cinelerra/ci
    root 162255      1  0 15:48 pts/4 00:00:00 cin -d 10445
    root 162287      1  9 15:49 pts/4 00:00:00 /mnt0/build5/cinelerra-5.1/bin/cin -d 10446
    root 162291 161717  0 15:49 pts/4 00:00:00 grep --color=auto cin

5) The two "cin -d" entries (PIDs 162255 and 162287) are the render farm clients. (Yeah, yeah, I know I should not use root, but I have for years.)

On Tue, Mar 16, 2021 at 1:35 PM Andrea paz via Cin <[email protected]> wrote:
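[Editor's note: steps 3-4 above can be condensed into a single check that counts running clients without the grep matching itself. A small sketch; the bracket trick "[c]in -d" matches "cin -d" in a process command line but not the grep command itself:]

```shell
# Count running render farm clients; "[c]in -d" keeps grep from matching
# its own command line, and "|| true" keeps a zero count from being an error:
count=$(ps -ef | grep -c '[c]in -d' || true)
echo "$count render farm clients running"
```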
OK, as MatN mentioned earlier, the command-line usage of Cinelerra may not work with the AppImage, AND IT DOES NOT when I tried it. So for now, Andrea, use your build.

On Tue, Mar 16, 2021 at 1:35 PM Andrea paz via Cin <[email protected]> wrote:
Actually, what I said below is not quite true. I tried them all, and everything EXCEPT "-d" seemed to work. You can use "-h" to see the options.

On Tue, Mar 16, 2021 at 5:05 PM Phyllis Smith <[email protected]> wrote:
> OK, as MatN mentioned earlier, the command line usage of cinelerra may
> not work with AppImage AND IT DOES NOT when I tried it. So for now,
> Andrea, use your Build.
Success!

From my tests, it seems that the main problem is the script that associates all the clients with one command. Doing the associations one at a time works for me. Is it possible to create a generally valid script to replace the one mentioned in the manual?

Both the built CinGG and the AppImage work for me. For the latter, perhaps because something was left in memory, I had to kill some instances of cin listed by the command "ps -ef | grep cin".

I noticed a few things:

1- The AppImage reloads the Calf plugins every time; associating clients extends the loading time to about 25 s per client. It is advisable to disable the Calf plugins momentarily when doing a render farm run (unless they are used in the project, of course).

2- If there are labels in the project to be rendered, it may happen (rarely) that they are taken as render points even if they are not set. In this case it is advisable to set In/Out points around the whole timeline and use them as the render option instead of "Whole Project".

3- Every time you close the client instances, if you want to redo the render farm, you have to repeat the associations. It is probable that we need to clean the leftover cin processes from memory to make it work again. You can see the processes with "ps" and then you have to kill them one by one.

Now the association of the clients works without errors.

Some information about my tests. A 16-minute file (consisting of edits of Big Buck Bunny at 1080p in H.264 and Tears of Steel at 1080p in VP8; CPU: 8c/16t; RAM 32 GB) is rendered in:

    Render Farm:      12.11 min at 47.722 fps (all threads at 100%; RAM ~10 GB)
    No Render Farm:   35 min at 7 fps (all threads ~30%)
    RF with AppImage: 12.07 min at 41.563 fps (all threads at 100%; RAM ~10 GB)

Now it remains to understand how RenderMux works because, for now, I use ffmpeg to merge the various files obtained.
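[Editor's note: the ffmpeg merge mentioned above can be done with ffmpeg's concat demuxer, which joins segments without re-encoding. A sketch with hypothetical segment names; a real render farm job produces its own numbered output files, so substitute those:]

```shell
# Build the list file the concat demuxer expects (names are hypothetical):
printf "file '%s'\n" segment001.webm segment002.webm > list.txt
cat list.txt

# Join without re-encoding; guarded so the sketch still runs when the
# segments do not actually exist on disk:
if [ -f segment001.webm ] && [ -f segment002.webm ]; then
    ffmpeg -f concat -safe 0 -i list.txt -c copy merged.webm
fi
```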
Andrea, that is GREAT NEWS, as I was very worried about the AppImage not working.

> Is it possible to create a generally valid script to replace the one
> mentioned in the manual?

Probably, but I do not know what it is.

> Every time you close the client instances, if you want to redo the
> render farm, you have to repeat the associations.

I never had to do this, but I will try it again and see if that has changed. I will have to look at some of the other things you mentioned. It would be good for you to open a BT on the Calf plugin reload, though.

On Wed, Mar 17, 2021 at 4:06 AM Andrea paz via Cin <[email protected]> wrote:
participants (4)
- Andrea paz
- edouard chalaron
- Phyllis Smith
- Андрей Спицын