Zachary Pagliaro

what is signal flow?

how we process signals, how sound flows through a music production system, and the difference between analog and digital audio.

Signal flow describes how a signal actually moves through a system. In an analog audio system, the signal is affected by every component our audio passes through.


Setting up a basic audio production system is not exactly a simple task. Below is a list of the basic tools that almost all computer-based sound systems will have, followed by a diagram of how they connect:


  • a computer and digital audio workstation (DAW) software such as Ableton Live, Logic Pro, or Pro Tools
  • an audio interface for converting analog signals to digital audio and vice versa
  • a microphone or an instrument, such as a guitar, synth, or MIDI controller
  • an audio monitoring system. Most people get away with a basic pair of left and right speaker monitors. I like to use some sort of splitting device that can route the sound output to different sets of speakers and ideally includes a headphone amplifier (a couple of headphone outputs for friends is nice too)

In the diagram below, a plain line indicates a digital, bidirectional connection. An arrow indicates analog audio, which flows in only one direction.


```mermaid
flowchart TD
    subgraph op[output monitoring]
        l(left speaker)
        r(right speaker)
    end
    A[computer] --- B{{audio interface}}
    G([midi controller]) --- A
    C([synthesizer]) --- A
    C([synthesizer]) --> B
    D([guitar]) --> B
    E([microphone]) --> B
    B --> l
    B --> r
```

The Audio Interface

Of course every piece of gear in your workspace is important, and yes, your computer does its fair share, but as you can see from the diagram above, everything connects to your audio interface. Make sure to get a good one. My favorite home studio interfaces usually come from Apogee or UAD.


If you're just starting out and notice that the synthesizer in the diagram above is digitally connected directly to the computer, you may jump to the conclusion that you can start producing synth music by just plugging a synth into a computer. But a synth's USB port only carries MIDI data; no audio actually flows over that connection. It's just there to transmit MIDI notes back and forth. We need a very important piece of equipment to record the audio from that synth: the audio interface.


What are analog and digital signals?


Your computer may have a very simple audio interface built into it for converting your digital music files into an analog signal, which can then be monitored through your headphone jack or built-in speakers. Your computer or iPhone may also have a microphone, so we know there's an onboard Analog to Digital Converter (ADC) present as well. But in order to use the signal from an XLR or quarter-inch instrument cable, you'll need another piece of hardware that has those input connections. That's where your audio interface comes into play. It is the bridge between outside of the "box" and inside of it, the box being your computer.


Don't know what an XLR cable is? You can learn about them in this post about common audio cables: Cables and Connections - cantbrooklyn.com


Analog signals are the real-world values that represent a physical sound, like the strum of a guitar. The computerized representation of that sound, the music file sitting on your computer, is a digital signal made up of numbers, 1s and 0s. Basically, the analog signal can be encoded as digital data.


Sample Rate

Audio interfaces use Analog to Digital Converters (ADCs) to take the analog signal that a microphone picks up and convert those variations in voltage (which are themselves a representation of the pressure of the sound waves being recorded) into numbers. A lot of numbers! How many times per second the signal is measured is known as the sample rate. The standard sample rate for recording audio is 44.1 kHz, so 44,100 samples of the signal are taken every second.
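As a rough sketch of what that means in practice (assuming NumPy, with a 440 Hz test tone standing in for a real analog input), one second of audio at 44.1 kHz is simply 44,100 amplitude measurements:

```python
import numpy as np

SAMPLE_RATE = 44_100   # samples per second (44.1 kHz)
DURATION = 1.0         # seconds
FREQUENCY = 440.0      # Hz; a test tone standing in for a real analog signal

# The instants at which the waveform is measured: 44,100 of them per second.
t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

# Each sample is one measurement of the waveform's amplitude at that instant.
samples = np.sin(2 * np.pi * FREQUENCY * t)

print(len(samples))  # 44100
```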


Bit Depth

Determining the quality of the signal does not stop there; there is also a depth dimension to each sample taken. This is known as bit depth, and it is not unique to audio.


Images are also a digital representation of something analog, so files such as PNGs or GIFs also use bit depth to determine the quality of a sample (a pixel, in this case). While bit depth in images represents the dynamic range of colors available when redrawing the image on a display (higher-quality images have a higher bit depth), in audio, bit depth represents the dynamic range of loudness.


The standard bit depth is 16. While it's harder to audibly notice increases in bit depth (because 16 bits is high enough to very accurately depict our sound recordings), there are still advantages to using 24- and 32-bit depths in high production value projects. With a lot of tracks and many stages of recording, mixing down, digital signal processing (DSP), and rendering, there can be a noticeable rise in the noise floor. The noise floor is the level of unwanted sound generated by the equipment that is producing your audio. The benefit of using higher bit depths during the production process is a lower noise floor and greater dynamic range in your signal. You can think of dynamic range as the difference between the loudest and quietest parts of the music.
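As a back-of-the-envelope sketch, each bit of depth adds roughly 6 dB to the theoretical dynamic range of a linear PCM recording (the exact figure is 20·log10(2) ≈ 6.02 dB per bit):

```python
def dynamic_range_db(bit_depth: int) -> float:
    """Approximate dynamic range of linear PCM audio, in decibels.

    Rule of thumb: each bit contributes about 6.02 dB (20 * log10(2)).
    """
    return 6.02 * bit_depth

for bits in (8, 16, 24, 32):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB of dynamic range")
# 8-bit:  ~48 dB
# 16-bit: ~96 dB
# 24-bit: ~144 dB
# 32-bit: ~193 dB
```

(32-bit float files behave a little differently under the hood, but the overall trend of more bits meaning more usable dynamic range holds.)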


What is very noticeable is the difference between 16-bit and 8-bit. As the bit depth gets lower and lower, the digital representation starts losing many of the qualities of the original sound. Try messing around with a bit crusher plugin.
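To get a feel for what a bit crusher is doing under the hood, here is a minimal sketch (assuming NumPy, and not modeled on any particular plugin's algorithm) that re-quantizes each sample to a coarser grid of values:

```python
import numpy as np

def bit_crush(samples: np.ndarray, bit_depth: int) -> np.ndarray:
    """Re-quantize floating-point samples (-1.0..1.0) to the given bit depth."""
    levels = 2 ** bit_depth                 # number of representable amplitude values
    step = 2.0 / levels                     # spacing between adjacent levels
    return np.round(samples / step) * step  # snap each sample to the nearest level

# A 440 Hz tone at 44.1 kHz, crushed down to 4 bits (only 16 amplitude values).
t = np.arange(44_100) / 44_100
tone = np.sin(2 * np.pi * 440 * t)
crushed = bit_crush(tone, bit_depth=4)
```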


Gain Staging

In a modern audio production environment, we are privileged with a large set of tools for shaping our digital sound recordings. Not only do we have a wide range of DSP tools right in our DAWs, there is also a wide array of hardware modules that can help us get our very best recording.


The diagram from the beginning of the post is actually missing a pretty important piece of the production system puzzle: the microphone preamplifier (also known as a 'mic preamp' or just 'mic pre'), which a lot of audio interfaces have built in. So the diagram isn't incorrect, but the importance of a good mic pre can't be stressed enough.


```mermaid
flowchart LR
    A(microphone) --> B(microphone preamplifier) --> C(audio interface)
```

The relationship between the microphone and the preamp is somewhat analogous to a guitar and a guitar amp. While the microphone is undoubtedly important, the preamp provides much of the sonic character of the sound. Its job is to take the mic-level signal, which is extremely weak compared to the signal running through a common household appliance's power cord (or even compared to the signal from an electric guitar), and apply enough gain to bring the input signal up to the level we need.
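Gain is usually expressed in decibels. As a minimal sketch (the 40 dB figure below is purely illustrative, not a recommendation), here is how a dB value maps to the linear multiplier actually applied to the signal:

```python
def db_to_linear(gain_db: float) -> float:
    """Convert a gain in decibels to a linear amplitude multiplier."""
    return 10 ** (gain_db / 20)

# A hypothetical mic-level sample boosted by 40 dB of preamp gain.
mic_sample = 0.002                              # tiny amplitude straight off the mic
line_sample = mic_sample * db_to_linear(40.0)   # 40 dB = 100x amplitude
print(line_sample)                              # 0.2
```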


Again, in a modern audio production environment, we actually don't need to record signals crazy loud. There are a lot of tools for adding gain to a recording during the mixdown phase of a production, each with a unique character and desirable effect on the music. You should always prioritize keeping the input signal well below the recording system's maximum level. You can always turn up the volume later, but you cannot recover parts of the signal that were "clipped" off during the recording process.
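Here's a minimal sketch (assuming NumPy) of why that clipped information is gone for good: any sample above digital full scale gets flattened to the maximum value, and turning the recording down afterwards does not bring the original peaks back:

```python
import numpy as np

# A "too hot" sine wave whose peaks exceed digital full scale (1.0).
t = np.arange(1_000) / 44_100
hot_signal = 1.5 * np.sin(2 * np.pi * 440 * t)

# The converter can only store values up to full scale, so the peaks are flattened.
recorded = np.clip(hot_signal, -1.0, 1.0)

# Lowering the volume later just scales the already-flattened waveform.
turned_down = recorded * 0.5
print(hot_signal.max(), recorded.max(), turned_down.max())  # ~1.5, 1.0, 0.5
```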


Signal Flow

When a music production system starts to grow and more connections are made between the different pieces of equipment in your setup, the concept of signal flow comes increasingly into play. It's how we design the route our audio takes, taking a second to consider how it's being processed or affected by each component in its path.


"outputs go to inputs"



When we have an issue with our audio recording, some anomaly that manifests itself as a click, or static, or maybe some kind of signal loss, we can follow its path through the system and try to identify the culprit. Considering the signal flow is necessary for troubleshooting bad cable connections, faulty gear, or just some error you may have made when getting your DAW configured with your recording hardware.


Published on: August 14, 2023
Edited on: February 28, 2024