When I'm using an S/PDIF connection, I prefer to do my tuning after the PC.
If you think about it, the optical connection has a fixed resolution that depends on the format you're carrying. For CD audio it's the same 16 bits per sample as the source disc, and those 16 bits describe the amplitude of each sample. Assuming you're outputting bit-perfect sound to an off-board DAC: if the CD has a sine wave encoded on it with peaks at -6 dB, that wave is already using only about 15 of the available 16 bits. If you attenuate it further via whatever processing you're using, you then (and someone correct me if I'm wrong) lose resolution, because when you finally pipe it out over the optical connection the wave has to be represented with fewer levels than it originally had. In effect, you "throw bits away": every 6 dB of digital attenuation in a 16-bit path costs you roughly one bit.
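To make the "throwing bits away" part concrete, here's a quick Python sketch (illustrative numbers, not any particular playback chain) that counts how many distinct 16-bit levels survive a -6 dB attenuation when you have to round straight back to integers with no higher-precision intermediate stage:

```python
# Sketch only: how many distinct 16-bit levels survive a -6 dB
# digital attenuation applied directly in the 16-bit domain?

FULL_SCALE = 2 ** 15 - 1  # 32767, positive full scale for signed 16-bit PCM

def attenuate(sample: int, gain_db: float) -> int:
    """Apply a dB gain to one sample and round back to an integer level."""
    return round(sample * 10 ** (gain_db / 20.0))

levels_in = range(-FULL_SCALE, FULL_SCALE + 1)
levels_out = {attenuate(s, -6.0) for s in levels_in}

print(f"distinct levels in:  {len(levels_in)}")    # 65535
print(f"distinct levels out: {len(levels_out)}")   # about half: one bit gone
```

Halving the number of usable levels is exactly what losing one bit means, which is where the "6 dB per bit" rule of thumb comes from.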
Modern software processing usually converts each sample to a higher bit depth as a first step (say 24-bit or 32-bit float, so each discrete level in the original recording is now worth many internal levels), giving it headroom to make adjustments without losing resolution. But in the end, if you can't output the result to a matching higher-resolution DAC for conversion to analog, and instead have to requantize it back down to 16 bits for the optical transport, you give up some fidelity to rounding errors and similar artifacts. (Strictly speaking that final step is requantizing rather than downsampling; the sample rate doesn't change, only the bit depth.)
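Here's a sketch of that higher-precision route (the 997 Hz test tone, the 44.1 kHz rate, and the -20 dB gain are just assumptions for illustration): the attenuation itself is lossless in float, and the only damage happens at the final squeeze back to 16 bits.

```python
# Sketch of "process at higher precision, then requantize for S/PDIF".
import math

RATE = 44100

# 1. Widen: work in float so the -20 dB attenuation itself loses nothing.
tone = [0.5 * 32767 * math.sin(2 * math.pi * 997 * n / RATE)
        for n in range(RATE)]
attenuated = [s * 10 ** (-20 / 20) for s in tone]

# 2. Narrow: rounding back to 16-bit integers for the optical link is
#    where the error creeps in.
pcm16 = [max(-32768, min(32767, round(s))) for s in attenuated]

worst = max(abs(p - s) for p, s in zip(pcm16, attenuated))
print(f"worst-case requantization error: {worst:.3f} LSB")  # <= 0.5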
I don't know the degree to which this actually affects what you hear, but I'd rather just do my processing off-board and cut out the PC middle-man. If you're outputting over analog, the loss in quality comes down entirely to the internals of your sound card. For me, an optical connection is a simple way to carry bit-perfect, or nearly bit-perfect, sound to a hardware processing device that was built with these considerations in mind.