In my last article we set up an Icecast server. It was a pretty “meh” exercise in isolation, but it laid the groundwork for this article. In order to live stream audio to multiple listeners we need two things: an encoding workflow and a delivery/distribution workflow. The Icecast server is in effect our distribution workflow. Without a source-encoded stream to distribute, it is pretty boring. It just sits there confirming it is ready. In this article we are going to wake it up and send an audio stream. I will then make a few comments that contrast this “proper” streaming workflow with the earlier rudimentary audio streaming article I wrote, which simply used a TCP connection to send audio data across your LAN.

So let’s assume you have the Icecast server up and running, waiting patiently in your cloud platform for a source from your laptop. Why the laptop? Well, we need to present some audio, and while you could use a file on a disc on another cloud machine, there is nothing very interesting about delivering an audio file from one location to another. What is really interesting is hearing your own voice streaming out. You can’t really get a feel for that when you are pressing play and simply hear a recording streaming through the workflow, not least because things like latency become much more apparent when you say “hi” and you hear it a few seconds later.

Since there is no way to get your microphone plugged into a machine in the cloud, we will use the microphone on your laptop. Our microphone will be connected to the audio capture interface (“line/mic in”). FFmpeg will listen to this input for uncompressed/PCM audio, and then use an audio encoding codec (mp3 in this example) to compress the audio.
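As a rough sketch of what that encoding workflow looks like on a Linux laptop, a single FFmpeg invocation can capture PCM audio from the microphone, encode it to MP3, and push it to the Icecast mount. The hostname, port, mount point, and source password below are placeholders, and the ALSA input assumes Linux; on macOS you would use `-f avfoundation` instead:

```shell
# Capture PCM audio from the default ALSA microphone, compress it with the
# LAME MP3 encoder at 128 kbit/s, and push the result to an Icecast mount.
# Replace the credentials, host, port, and mount point with your own.
ffmpeg -f alsa -i default \
       -acodec libmp3lame -b:a 128k \
       -content_type audio/mpeg \
       -f mp3 icecast://source:hackme@icecast.example.com:8000/stream.mp3
```

Once this is running, pointing a browser or media player at `http://icecast.example.com:8000/stream.mp3` should play your voice back after a short delay.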