The audiophile in me hears and understands what soundstage is, and when it's wide and when it's not, but the engineer in me can't shake the technical questions.
I believe that most recordings have each instrument mic'ed onto its own track in a multi-track recording. The engineer then combines these tracks creatively. It would seem that if he puts an instrument equally in the left and right channels, it will appear to come from the center. But how would he have an effect on the depth of the instrument, which we all seem to be able to hear?
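For what it's worth, the "equal in left and right means center" intuition matches the standard constant-power pan law used in mixing consoles and DAWs. Here is a minimal Python sketch of that idea (the function name and signature are my own, for illustration only):

```python
import math

def pan_mono_to_stereo(samples, pan):
    """Constant-power pan of a mono signal into stereo.

    pan = -1.0 is full left, 0.0 is center, +1.0 is full right.
    The cos/sin gain pair keeps total acoustic power roughly
    constant as the instrument moves across the stereo image.
    """
    theta = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] -> [0, pi/2]
    left_gain = math.cos(theta)
    right_gain = math.sin(theta)
    return [(s * left_gain, s * right_gain) for s in samples]

# At pan = 0 both gains equal cos(45 deg) = sin(45 deg) ~ 0.707,
# so the listener localizes the instrument at the center.
centered = pan_mono_to_stereo([1.0], 0.0)
```

Depth, by contrast, is typically simulated with level, high-frequency roll-off, and reverb (more reverberant and quieter reads as farther away), which a simple pan law like this does not capture.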
Does he actually consider the layout of the instruments in the "soundstage" as he decides on which channel receives which instrument?
How does this process cohere into what you and I call the "soundstage"? Am I missing something?