> assume that the volume control on your operating system controls the DAC rather than doing the stupid thing of digital volume reduction
Is this a safe assumption though? As much as I consider myself knowledgeable in this area, this is way off my radar.
Even if this assumption does hold, the answer does make a good point when it comes to per-application volume controls, which are certainly performed digitally.
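To make the per-application case concrete: digital volume reduction is just multiplying samples by a fraction, and when the result is re-quantized you lose low-order bits of resolution. A minimal sketch (hypothetical helper, assuming 16-bit samples and NumPy):

```python
import numpy as np

# Sketch: digital attenuation is sample multiplication, and re-quantizing
# to 16 bits discards the low-order bits (25% volume ~= 2 bits lost).
def digital_attenuate(samples_16bit, volume):
    """Scale 16-bit samples by `volume` (0.0-1.0) and re-quantize."""
    scaled = np.round(samples_16bit.astype(np.float64) * volume)
    return scaled.astype(np.int16)

full_scale = np.array([32767, -32768, 16384], dtype=np.int16)
print(digital_attenuate(full_scale, 0.25))  # [ 8192 -8192  4096]
```

Everything below the new, lower full scale is still there, but the quietest details now sit closer to the quantization floor.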
Your point on SNR is a very good one, particularly on the low-end, but I would question whether any computer at max volume would produce a signal strong enough to cause distortion. I generally run my computer at around 90% volume but that's merely a habit carried over from having an analog hifi stack.
Guy with some sound knowledge here. It doesn't matter whether the attenuation is done digitally (as described by the top SE comment) or in analog fashion; the answer is the same either way. In the music biz the general rule is: put as much gain as early in the chain as possible. In other words, start at the source of the sound and, working your way towards the speakers, turn each stage up as loud as you can without distorting it.
Take the example of a guitarist. He turns his guitar up as high as he can, then the amp gain as high as it will go without distorting, then the sound console, and so on.
The reason for this is that the later in the chain you are, the more electronics and cable are involved, which adds noise to your signal. The highest signal-to-noise ratio is achieved by ensuring you have as much signal as possible running through from the get-go.
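The arithmetic behind that rule can be sketched with made-up numbers -- the noise figures below are purely illustrative, not measurements of any real gear:

```python
# Sketch: two stages, each injecting a fixed amount of noise, and 100x of
# gain needed somewhere in the chain. Compare where the gain goes.
signal = 0.01          # source level, arbitrary linear units
stage_noise = 0.001    # noise injected per stage (hypothetical figure)
gain = 100.0

# Gain applied at the first stage: later noise is never amplified.
early_snr = (signal * gain) / (stage_noise + stage_noise)

# Gain applied at the last stage: the first stage's noise rides along
# and gets amplified with the signal.
late_snr = (signal * gain) / (stage_noise * gain + stage_noise)

print(early_snr, late_snr)  # 500.0 vs ~9.9: early gain wins by ~50x
```

Same final signal level in both cases; the only difference is how much of the chain's noise got multiplied along with it.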
In the case of a computer, again, it doesn't matter how volume changes are done: crank up your software as high as you need (but not beyond the point of distortion), then adjust further at the speakers.
> start at the source of the sound, and working your way towards the speakers, turn it up as loud as you can without distorting it
In practice that will often be right. In theory it is not.
With an analog signal what you actually want to do is avoid gain, whenever possible. You don't want to add as much gain at every step and then attenuate later. Gain adds noise and removes dynamic headroom.
So, assuming a guitar with passive electronics, you want to turn it all the way up (which adds no gain, passive electronics only attenuate a signal) and put everything else at unity gain (i.e. no attenuation and no additional gain). That will be labeled "0" on mixer faders and is about 80% of the way up on the fader.
We'll ignore guitar amplifiers for the moment, where the distortion that happens when you run out of headroom (clipping, i.e. "fuzz") and the sonic properties of vacuum tubes when they're near the top of their dynamic range (i.e. "tube warmth") are desired effects.
Now, assuming you have a signal that you want clean all the way to the output, you've got things at unity all the way across the board, and it's still not loud enough, then you start looking at where to add gain.
This is where, in practice, you turn out to be correct for many cases.
What we're looking to avoid is adding noise. Cabling exposes the signal to RF interference and attenuates it, since cables aren't perfect carriers. The more cables and interconnects down the road from the source we are, the more noise will have been introduced. Since we don't want to amplify the noise, it's usually best to amplify the signal at the point closest to the source, where less noise has leaked in. If you amplify a signal four connections down your signal path, you're amplifying a lot of accumulated noise along with your source's signal.
However, not all (pre-)amplifiers are created equal. Some, in fact, are pretty noisy themselves. There are certainly points where with a noisy pre-amp, you'll degrade the signal more by amplifying it in an earlier-in-your-signal-path pre-amp than you would boosting it at a very clean pre-amplifier one step down the chain. One of the big differences between cheapo mixers and high-end mixers is the quality of their first-in-the-signal-path pre-amps. A gain-adding stomp box is almost certainly going to be noisier than a Mackie pre-amp.
Now, since this started off talking about the sound coming out of consumer-grade computer components (rather than a nice break-out box), it's worth noting that their analog amplifiers are pretty universally terrible. If you have a reasonably short and well-shielded cable, you'll generally be better off leaving it at "unity" (i.e. not boosting the pure signal coming out of the digital-to-analog converter) and adding gain at the next step in your chain -- the mixer, receiver or powered monitors.
Now, to finish up, I'll circle back around to my first point -- since boosting the signal itself adds noise, if you're having to attenuate it later on, you're adding unnecessary noise. Your strategy of turning things up all the way until they clip is not a good one if you then have to attenuate the signal later for it to sit at the correct level in the mix. Ideally you want all of your faders at unity (again, labeled 0) and then to add gain at the quietest pre-amp available (usually early in the chain) until you hit the loudest volume the channel will need in the mix. Fortunately, the folks designing mixers know that the situation on the ground in a live sound context sometimes requires going beyond unity, which is why faders don't top out there, but let you boost the signal on an as-needed basis.
Edit: Minor addendum -- this assumes that you're not sending weak signals down long, unbalanced cables. If you are, that changes the calculus a little, since they'll pick up a lot of RF on the way, and you hack around that by boosting before the cable and attenuating afterwards. But don't do that. That's what DI units are for.
It's not that simple. I've found in my rig, minimum noise is achieved with the guitar volume at 7 or so and preamp gain at minimum. If I turn up the volume on the guitar I can switch on the pad on the preamp, but this adds noise. I then add tons of distortion and gain in the software realm, so every dB of noise floor counts. Even clean guitar sounds are usually distorted.
The sound console is different. You want everything in the console to be trimmed to around 0 dB on the console's meters, and NOT as loud as possible without distorting. Trimming it to 0 dB gives you consistency between channels making it easier to work with the faders, and gives you however much headroom the console is designed to work with. "Loud as possible" will mean somewhere like +24 dB on some consoles, which is too hot to work with. Yes, they have that much headroom -- nominal line level is +4 dBu, around 1.2 V, and consoles have internal voltages in the 15-24V range.
The same goes for recording, you typically want things to peak at something like -18 dBFS. Making things peak "loud as you can without distorting" is the job of the mastering engineer, and it happens right before you stamp out CDs or MP3s or whatever.
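The level figures in the last two paragraphs can be sanity-checked with two standard conversions (a sketch; 0.775 V RMS is the reference for dBu, and digital full scale is the reference for dBFS):

```python
# Quick sanity check on the numbers above: dBu is dB relative to 0.775 V RMS,
# and dBFS is dB relative to digital full scale (0 dBFS = 1.0).
def dbu_to_volts(dbu):
    return 0.775 * 10 ** (dbu / 20)

def dbfs_to_linear(dbfs):
    return 10 ** (dbfs / 20)

print(round(dbu_to_volts(4), 2))      # nominal line level: +4 dBu ~= 1.23 V
print(round(dbu_to_volts(24), 1))     # +24 dBu ~= 12.3 V RMS -- hence those rails
print(round(dbfs_to_linear(-18), 3))  # -18 dBFS peak ~= 0.126 of full scale
```

So a -18 dBFS peak leaves roughly an eighth of the converter's range as headroom, which is exactly the point.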
> Take the example of a guitarist. He turns his guitar up as high as he can, then the amp gain as high without distorting it. Then at the sound console. etc
Without distorting it? Are you sure you have some sound knowledge? That doesn't sound like you know many guitarists... ;)
I've worked in Austin as a live sound mixer (FOH engineer, as we say) for a while now, and it's evenly split: some will say to build your gain structure using exactly that method, turning it up until you clip and backing off from there.
How do you determine the highest possible volume that doesn't introduce distortion? My sense in trying to find the right volume ratio for listening to an iPhone in my car is that it's best to not turn the iPhone volume all the way up, usually I use 75% or so. Is turning it up until you start to hear something funny the only way to do this or is there a more reliable way?
In pro audio there's usually a meter you can watch, or at least a clip LED that turns red during clipping. Otherwise you use your ears, or if you're really serious, you can use an oscilloscope with test tones and watch for the shape of the waveform to change, or beyond that, you can use dedicated test gear (or software) that measures THD directly.
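That last option can be sketched in a few lines. This is a hypothetical, self-contained simulation (NumPy assumed) rather than a real capture, but it's the same idea the measurement software uses:

```python
import numpy as np

# Sketch of what THD-measuring software does: generate a pure tone,
# simulate driving the output too hot, then compare harmonic energy
# to the fundamental in the spectrum.
fs, f0, n = 48000, 1000, 48000          # 1 kHz tone, one second at 48 kHz
t = np.arange(n) / fs
clipped = np.clip(1.5 * np.sin(2 * np.pi * f0 * t), -1.0, 1.0)

spectrum = np.abs(np.fft.rfft(clipped * np.hanning(n)))
fundamental = spectrum[f0]              # 1 Hz/bin, so 1 kHz lands on bin 1000
harmonics = sum(spectrum[k * f0] for k in range(2, 6))
thd = harmonics / fundamental
print(thd > 0.01)  # True: hard clipping shows up as strong odd harmonics
```

With a clean, unclipped tone the same ratio sits way down near the noise floor, which is the "no distortion" reading you're after.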
It would be easy enough to test -- download Right Mark Audio (free, but Windows only), connect your audio input to output with a patch cable, and test at different volume levels.