This article explains the two major methods of input monitoring, and why you would use one method over another.
Apogee designs its products in a way that encourages its users to record via the simplest, most direct, and therefore best-sounding practices. The idea is to make your workflow easy and uncomplicated. One way we do this:
Monitoring via software
By default, the Apogee interface is set so you do not hear your input signal automatically when you plug it in. This is because it's best for your recording software to perform the input monitoring – the act of passing your input signal to the output so you can hear it. This means you will need to open a recording app and make the appropriate settings before you can hear your input signal.
The advantage of this method is that you hear exactly what your recording program is doing to your sound. If you apply effects, you will hear those effects as you record. This is especially important for guitar players who want the recording software to apply amp models and effects to their guitar signal.
The downside to this method is the potential for latency (a delay between when you input your signal and when you hear it back). The more you tax the processor in your computer or iOS device (such as adding more tracks and applying effects), the more latency there will be. This is especially true for older computers that don’t have as much processing power in the first place.
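Latency in software monitoring comes mostly from the audio buffer your recording app fills at a given sample rate: a larger buffer eases the load on a taxed processor, but delays the signal more. As a rough illustration of the arithmetic (this is general digital-audio math, not Apogee-specific code, and the helper name is our own):

```python
def buffer_latency_ms(buffer_size_samples: int, sample_rate_hz: int) -> float:
    """One-way delay contributed by a single audio buffer, in milliseconds.

    Real-world monitoring latency is higher than this: audio is typically
    buffered on both the input and output sides, and the converters add
    a little more on top.
    """
    return buffer_size_samples / sample_rate_hz * 1000.0

# A 256-sample buffer at 44.1 kHz adds roughly 5.8 ms each way;
# raising the buffer to 1024 samples at the same rate adds roughly 23.2 ms.
```

This is why lowering the buffer size in your app's audio preferences reduces latency, and why an older computer often can't go that low without clicks and dropouts.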
If you experience too much latency and cannot reduce it with troubleshooting, Apogee provides a low-latency hardware monitoring feature in our ONE, Duet, Quartet, Ensemble, and Symphony interfaces to get around the problem. Which brings us to the second major method…
Monitoring via hardware
Hardware monitoring passes the audio signal to the output via an internal signal path built into the interface.
In other words, instead of:
Input of interface > recording app (DAW) > output of interface,
the signal path is simply:
Input of interface > output of interface.
This bypasses the recording app and eliminates the latency it produces. The downside is you do not hear any effects that the app applies. Using a guitar player as an example again, this means you hear the direct, unaffected guitar sound in your monitor. So even though your audio is recorded in the app with effects, you are not hearing those effects as you record. You only hear the complete picture when you play back the finished recording.
See this article on how to set up the Maestro Mixer.
Another problem can come up when input monitoring is active in your recording app at the same time as hardware monitoring. Because you are monitoring directly through the hardware AND through the recording app, you hear the signal twice. This can result in audio artifacts ranging from a slight phasing/chorusing sound to an echo, because the direct hardware signal combines with the slightly delayed software signal.
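The phasing artifact is simple signal math: the output is your direct signal summed with a delayed copy of itself, so depending on the delay, some frequencies reinforce while others cancel. A minimal sketch (our own hypothetical helper, not Apogee code) using a pure sine tone:

```python
import math

def doubled_monitor_peak(freq_hz: float, delay_s: float,
                         sample_rate_hz: int = 48000, n: int = 480) -> float:
    """Sum a sine tone with a delayed copy of itself (as when hardware and
    software monitoring are both active) and return the peak amplitude."""
    peak = 0.0
    for i in range(n):
        t = i / sample_rate_hz
        direct = math.sin(2 * math.pi * freq_hz * t)
        delayed = math.sin(2 * math.pi * freq_hz * (t - delay_s))
        peak = max(peak, abs(direct + delayed))
    return peak

# With a 0.5 ms delay, a 1 kHz tone meets its own copy half a cycle late
# and cancels almost completely; with a 1 ms delay (a full cycle),
# the copies reinforce and the peak doubles.
```

With a real, broadband signal this frequency-dependent cancellation is what you hear as comb filtering: the thin, phasey sound, stretching into an echo as the software latency grows.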
Also think about how you would punch-record vocals (or any instrument) with hardware monitoring active. Most singers I know do not like hearing themselves sing with zero effects or reverb applied, and this is more difficult to set up and accomplish competently when using hardware monitoring.