---------- Forwarded message ---------
From: Terry Corbet <tcorbet@ix.netcom.com>
Date: Sun, Jan 1, 2023, 09:10
Subject: [Libav-user] Creating Panned MP3 Clips
To: <libav-user@ffmpeg.org>

I have recently discovered how to use the Audacity Envelope Tool to turn
a standard stereo MP3 file into a modified one in which, over the entire
duration of the clip, the apparent source of the sound traverses from
left to right. While I could use that workflow to perform the same
transformation manually on multiple files, I want to automate it, both
for my own use and to help other family members [who generally have
limited computer skills].

Over the past four days I have been playing catch-up on the many topics
and toolkits which look as though they might permit me to engineer a
software solution to this requirement. As a newbie, I probably will not
correctly summarize what I believe to be the possible tools and
approaches, so please forgive any misuse of the correct terminology. I
hope that I can state my concepts and questions in a manner that is
considerate of the time of those who participate in this mailing list
and that will most quickly help me move closer to a good approach to the
challenge.

01. I have managed to download the libraries which are used for the
maintenance of the ffmpeg, ffprobe and ffplay triumvirate of tools.

02. I have managed to successfully build some sample C programs [taken
from the doc\examples sub-directory and other miscellaneous snippets
found by following the wonderful links from your Wiki] using the
CodeBlocks IDE framework.

03. I have squirreled my way through the parts of the Doxygen
documentation which seem like they would be most apropos.

What I did not discover was any function or example of what I assumed I
would need to do, which essentially would be to process the audio frames
of the FrontLeft [FL] and FrontRight [FR] channels coming out of a
stream of packets. That caused me to think that perhaps I would find
examples of that processing by searching the Audacity sources to learn
when and how they use the ffmpeg libraries. And somewhere between the
Audacity and FFmpeg sites I stumbled upon some sources and some
documentation concerning what I suppose are two reasonable libraries
devoted to "resampling" -- soxr and swr.

It was about at that point that I concluded that my modification of the
sampled frames probably does not fall within the ambit of what is meant
by resampling at all, and that led to an investigation of what Nyquist
was all about. Wow, what a guy Mr. Dannenberg must be. The 2007
Nyquist Reference Manual is a jaw-dropping read.

I think that is enough background/context. Here's where I would
appreciate any suggestions:

A. Would it be possible to accomplish the steps necessary to achieve
the desired result using only ffmpeg.exe? I imagine that, using the
command-line tool and an appropriate shell scripting language, it might
be necessary to make multiple passes over the original .mp3 file and/or
the two separate channels. I am not concerned about that loss of
throughput; it will always be far faster than any manual procedure.
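
For what it is worth, here is the kind of single pass I have been
picturing for option A after reading the channelsplit, volume and join
filter documentation. The file names are only placeholders, I am not
certain I have every detail of the syntax right, and since the volume
expression apparently has no variable for the clip's total duration, a
bash-style script [or its batch-file equivalent] would first have to
look the duration up with ffprobe and substitute it:

    DUR=$(ffprobe -v error -show_entries format=duration -of csv=p=0 in.mp3)
    ffmpeg -i in.mp3 \
        -filter_complex "[0:a]channelsplit=channel_layout=stereo[L][R];[L]volume='max(1-t/$DUR,0)':eval=frame[La];[R]volume='min(t/$DUR,1)':eval=frame[Ra];[La][Ra]join=inputs=2:channel_layout=stereo:map=0.0-FL|1.0-FR[out]" \
        -map "[out]" out.mp3

As I understand it, that merely fades the left channel out and the right
channel in over the length of the clip -- essentially what my two
Audacity envelopes do -- rather than repanning a mono mix, but for my
purposes that may well be close enough.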

B. Nonetheless, there would be some advantages to accomplishing the work
entirely in an application .exe with a little GUI glitter, so that the
user could attempt some trial-and-error [preview] with slight changes to
some of the parameters of the task, depending upon the nature of the
audio content and the manner in which the output will eventually be
played on different devices in different environments. Since I will not
have the capability of building an Envelope in the manner that Nyquist
[Lisp] accomplishes it, can anyone point me to any sample code doing
that in C with the eight ffmpeg .dll libraries?
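
To show the level at which I am thinking for option B, the fragment
below is my own guess at the core of the C code -- the names are my
invention, not anything taken from the ffmpeg examples. It assumes each
decoded frame has already been converted [with libswresample if
necessary] to stereo planar float, AV_SAMPLE_FMT_FLTP, so that the two
planes can be scaled independently, and that the decoder has filled in
frame->sample_rate:

    #include <libavutil/frame.h>

    /* Apply a linear left-to-right pan to one decoded audio frame.
     * Assumes stereo AV_SAMPLE_FMT_FLTP: extended_data[0] is the FL
     * plane and extended_data[1] is the FR plane.  frame_start_sec is
     * the frame's start time in seconds; total_sec is the duration of
     * the whole clip. */
    static void pan_frame_left_to_right(AVFrame *frame,
                                        double frame_start_sec,
                                        double total_sec)
    {
        float *left  = (float *)frame->extended_data[0];
        float *right = (float *)frame->extended_data[1];

        for (int i = 0; i < frame->nb_samples; i++) {
            double t   = frame_start_sec + (double)i / frame->sample_rate;
            double pos = t / total_sec;     /* 0.0 at start, 1.0 at end */
            if (pos < 0.0) pos = 0.0;
            if (pos > 1.0) pos = 1.0;

            /* Linear crossfade of the per-channel gains: the sound sits
             * fully in the left speaker at the start and fully in the
             * right speaker at the end. */
            left[i]  *= (float)(1.0 - pos);
            right[i] *= (float)pos;
        }
    }

If I am reading it correctly, the filtering_audio.c example in
doc/examples already shows the other route -- building a small
libavfilter graph from a filter string inside the program -- so perhaps
the same channelsplit/volume/join chain could simply be parsed there and
the per-sample arithmetic avoided altogether.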

C. Or -- and I appreciate that it is not entirely fair to ask this of
this mailing list -- does anyone have experience or advice as to whether
the solution really ought to be accomplished with some scripting and/or
macro facility wrapped around Audacity itself?

Thank you so much for the fantastic capabilities you have provided with
the entire FFmpeg effort, and for your patience in reading through my
questions as the bell is about to strike on the New Year.