Hi there,

I am running AVLinux on an Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz. This is an 8-core CPU with hyperthreading. The system has 32GB of RAM.

I am getting what I consider to be very slow render times and wanted to know if something is set up wrong. As a test, I imported a 1080p video from my camera, looped it several times and rendered it with no effects or compositing. For a 4 minute video it took about 12 minutes to render as an .mp4. CPU usage never went above 23% and memory usage never went above 19%.

Here is a screenshot showing the input file characteristics, project format, video render options, and performance settings for Cinelerra.

[image: Screenshot_2019-12-07_15-06-33.jpg]

In my normal workflow, when I ingest files from the camera I immediately convert them with FFmpeg to a lower bitrate, because the camera sensor quality has no need of 25 Mbps video. FFmpeg usually does this much faster than real time. I'm wondering what causes Cinelerra renders to be so slow, and whether there are any settings I can modify to improve rendering performance.

Side note: even opening 12 1080p video streams at once in Cinelerra doesn't occupy much RAM on my computer, although performance becomes very poor. Is there a way to understand why Cinelerra doesn't use more RAM?

Thanks!
Dan
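The ingest conversion described above might look something like this with FFmpeg (a sketch only: the post does not give the exact command, so the filenames, the 8 Mbps target bitrate, and the x264 settings here are placeholder assumptions):

```shell
# Sketch of an ingest-time re-encode to a lower bitrate (assumptions:
# H.264 camera footage; input.mp4/output.mp4 and 8M are placeholders).
ffmpeg -i input.mp4 \
  -c:v libx264 -preset fast -b:v 8M \
  -c:a copy \
  output.mp4
```

Copying the audio stream (`-c:a copy`) avoids a needless re-encode; only the video is recompressed.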
In a message dated Saturday 07 December 2019 17:11:55, Daniel Kinzelman wrote:
> Hi there,
> I am running AVLinux on an Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz. This is an 8-core CPU with hyperthreading. The system has 32GB of RAM.
> I am getting what I consider to be very slow render times and wanted to know if something is set up wrong.
> As a test, I imported a 1080p video from my camera, looped it several times and rendered it with no effects or compositing. For a 4 minute video it took about 12 minutes to render as an .mp4.
Well, I'm not sure whether the CinelerraGG in AVLinux is actually up to date - this year saw both the introduction of VA-API decode/encode, its breakage, and a later fix :}

What exactly does your Cinelerra print at startup, if you run it from a terminal (konsole, xterm, etc.)? Mine shows:

    cin
    Cinelerra Infinity - built: Dec 1 2019 13:54:16
    git://git.cinelerra-gg.org/goodguy/cinelerra.git
    (c) 2006-2019 Heroine Virtual Ltd. by Adam Williams
    2007-2020 mods for Cinelerra-GG by W.P.Morrow aka goodguy
    Cinelerra is free software, covered by the GNU General Public License,
    and you are welcome to change it and/or distribute copies of it under
    certain conditions. There is absolutely no warranty for Cinelerra.
    Session time: 0:00:39
    Cpu time: user: 0:00:03.780 sys: 0:00:00.570

(that is just opening and closing the program)

You can try setting the HW device in Preferences to vdpau/va-api, and also play with the (software) x264 encoding parameters (number of threads, for example, or try the "fast" preset).

You can also try setting the project's colorspace to something less demanding. I have it set to:

    cat ~/.bcast5/Cinelerra_rc | grep RGBA
    COLOR_MODEL RGBA-8 Bit

but you can change it to:

    cat ~/.bcast5/Cinelerra_rc | grep YUV
    COLOR_MODEL YUVA-8 Bit

(if I understand correctly, the h264 video from the camera is still in the yuv420 colorspace - less data to move around)
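A minimal way to make that Cinelerra_rc edit from the shell (a sketch: the dry run below works on a scratch copy in /tmp, and Cinelerra should be closed before touching the real file):

```shell
# Dry run on a scratch copy first; the real file is ~/.bcast5/Cinelerra_rc
# (edit it only while Cinelerra is closed, and keep a backup).
printf 'COLOR_MODEL RGBA-8 Bit\n' > /tmp/Cinelerra_rc.demo
sed -i 's/^COLOR_MODEL RGBA-8 Bit$/COLOR_MODEL YUVA-8 Bit/' /tmp/Cinelerra_rc.demo
grep '^COLOR_MODEL' /tmp/Cinelerra_rc.demo
# prints: COLOR_MODEL YUVA-8 Bit
```

The same sed line, pointed at "$HOME/.bcast5/Cinelerra_rc" after a `cp` backup, applies the change for real.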
> CPU usage never went above 23% and memory usage never went above 19%.
> Here is a screenshot showing input file characteristics, project format, the video render options, and performance settings for Cinelerra.
> [image: Screenshot_2019-12-07_15-06-33.jpg]
> When I ingest files from the camera in my normal workflow, I immediately convert them with FFmpeg into a lower bitrate because the camera sensor quality has no need of 25 Mbps video. It usually does much better than real time. I'm wondering what causes Cinelerra renders to be so slow, and whether there are any settings I can modify to improve rendering performance.
> Side-note: Even opening 12 1080p video streams at once on Cinelerra doesn't occupy much RAM on my computer, although performance becomes very poor. Is there a way to understand why Cinelerra doesn't use more RAM?
I don't think more RAM will help (directly); the whole sequence between (video codec) keyframes must be decoded, either by the CPU or by the GPU (and transferred over PCIe to main memory in the latter case).

You can try proxy files, or the new 'transcode' option. From Phyllis's email ["[Cin] New builds ready on the cinelerra-gg.org website", Date: Sat, 30 Nov 2019 14:15:30 -0700]:

-- *Transcode 1:1* is now available in the Settings pulldown which provides a method to convert to a different format but most importantly gives the user the capability to be able to seek within media that does not have good indexing (see BT 337 for a few more details). --

For HW encoding, try one of the _vaapi choices (in the rendering/transcode dialog, video codec choice); it is really not automagic. I'm not sure how well or badly the nv (nvidia) specific options work - I have an older GPU with free drivers (nouveau).
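Before picking a _vaapi codec in Cinelerra, it may help to confirm that VA-API encoding works on the machine at all, using plain ffmpeg (a sketch: hardware- and driver-dependent, and /dev/dri/renderD128 is only the usual render-node path, which may differ on your system):

```shell
# Sketch: test VA-API H.264 encoding outside Cinelerra (requires a GPU
# with a working VA-API driver; input.mp4 is a placeholder name).
ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 \
  -vf 'format=nv12,hwupload' \
  -c:v h264_vaapi -qp 24 \
  output_vaapi.mp4
```

If this command fails, the _vaapi choices inside Cinelerra are unlikely to work either, and the problem is at the driver level rather than in the editor.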
> Thanks!
> Dan
Some of the advice I've accumulated over the years:

1. Eliminate the Spectre mitigations and other CPU security-flaw patches; in effect they penalize multithreaded performance. For a home machine it can be acceptable to remove the security patches, but that is a decision everyone must evaluate for themselves. It seems that AVLinux uses GRUB as its boot loader; in that case you add the entry "mitigations=off" to "GRUB_CMDLINE_LINUX_DEFAULT" in the GRUB configuration file. Ask Glenn for precise instructions.

2. Change CFLAGS and CXXFLAGS to use "-march=native" and "-mtune=native" in place of the generic "-march=x86-64" and "-mtune=generic". Uncomment "MAKEFLAGS=-j2" and raise the value to 16. In Arch Linux this is easy, but in AVLinux I don't know how to do it; you would have to ask Glenn on his forum. I'm also not sure these help on a Xeon, but I think so.

3. Install the "irqbalance" program, then start and enable its service in systemd (does AVLinux use systemd?):

    # systemctl start irqbalance.service
    # systemctl enable irqbalance.service

This program helps balance interrupt handling across the CPUs. Actually, I'm not sure it's still useful these days.

Finally, I do not recommend using VA-API or VDPAU for encoding; they produce results quickly but of poor quality. Better to use them for decoding only.
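For point 1, on a Debian-based system such as AVLinux the edit might look like this (a sketch only: understand the security trade-off first, and keep a backup, as the advice above says to confirm the details with Glenn):

```shell
# Sketch: append mitigations=off to the kernel command line
# (Debian-style GRUB paths; this disables CPU vulnerability
# mitigations - a security trade-off each user must weigh).
sudo cp /etc/default/grub /etc/default/grub.bak
sudo sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 mitigations=off"/' /etc/default/grub
sudo update-grub    # Debian wrapper that regenerates /boot/grub/grub.cfg
# after a reboot, check the result:
cat /sys/devices/system/cpu/vulnerabilities/*
```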
Point 2 concerns only the compilation of source code and is therefore useless in your case.
As you can see from the responses already, there are a lot of factors that can affect render times! But I do not see this with a 720x576 video that I rendered with the ffmpeg/mp4 file format. It was a 4 minute video and took only 32 seconds to render.

In the attached file you can see another window running "top" that shows %CPU = 1149 (so while rendering it is using about 11 1/2 CPUs out of 16). You can see on the timeline that the video is about 4 minutes long, and in the lower left corner that "Render took ... 0:32".

In Settings->Preferences, Performance tab, I have Cache set to 4096 and SMP cpu count set to 16. In the Render menu, using the Video wrench, I chose h264.mp4.

I have not looked at your screenshot in detail yet, but will try to do so tomorrow. I will also try with a 1080p video.
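For comparison, the two timings reported in this thread work out as follows (simple arithmetic; note the clips differ in resolution, 1080p versus 720x576, so this is only a rough comparison, not like-for-like):

```shell
# Render speed relative to real time for the two 4-minute test clips
# reported in this thread (resolutions differ, so rough numbers only).
awk 'BEGIN {
  clip = 4 * 60                               # clip length, in seconds
  printf "1080p render:   %.2fx real time\n", clip / (12 * 60)
  printf "720x576 render: %.2fx real time\n", clip / 32
}'
# prints:
# 1080p render:   0.33x real time
# 720x576 render: 7.50x real time
```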
participants (4):
- Andrea paz
- Andrew Randrianasulu
- Daniel Kinzelman
- Phyllis Smith