I realize that hi-eff vs. non-hi-eff is almost a religious divide among speaker designers/hobbyists. My intention here isn't to start an argument on the subject, but to get the insights of the people who post here (who are among the most experienced and approachable designers I have come across anywhere).
So... the basic premise is that watts are cheap, and therefore there's no need to sacrifice other attributes to increase a driver's efficiency. Looking at it another way: if you can lose a couple of dB of efficiency and gain better frequency response/lower distortion/whatever, that's a no-brainer.
I've seen numbers which talk about a 24dB peak-to-average ratio for well-recorded orchestral music. Assuming a medium/loud average listening level of around 80dB, that requires 104dB peaks. Assuming one is sitting about 2m from the speakers and isn't listening to line sources, that requires 110dB output from the speakers (referenced to 1m). Assuming an 85dB-sensitive smallish midrange/woofer (to pick a random mid/lowish number), that requires 25dB of gain from the amplifier, which works out to roughly 316W.
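A quick sketch of that arithmetic, using the same assumed numbers (80dB average, 24dB crest factor, 2m point-source listening, 85dB/1W/1m sensitivity):

```python
import math

# All inputs are the post's assumptions, not measurements.
avg_level = 80.0      # average listening level, dB SPL at the seat
peak_ratio = 24.0     # peak-to-average ratio for orchestral music, dB
distance = 2.0        # listening distance, m
sensitivity = 85.0    # driver sensitivity, dB SPL at 1 W / 1 m

peak_at_seat = avg_level + peak_ratio                # 104 dB at the seat
# Point-source inverse-square loss relative to the 1 m sensitivity spec:
distance_loss = 20 * math.log10(distance / 1.0)      # ~6 dB at 2 m
peak_at_1m = peak_at_seat + distance_loss            # ~110 dB required at 1 m
gain_needed = peak_at_1m - sensitivity               # ~25 dB above 1 W
power_needed = 10 ** (gain_needed / 10)              # ~316 W

print(f"peak at 1 m: {peak_at_1m:.0f} dB, "
      f"gain: {gain_needed:.0f} dB, power: {power_needed:.0f} W")
```

(This treats a single speaker as an ideal point source and ignores room gain and the second channel, both of which would reduce the power actually needed.)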
Here's my question. It's fairly easy to get an amp that will produce 300-odd watts. But out of all the midranges/woofers available that produce 85dB when fed 1W, how many will actually produce 110dB when fed ~316W? And scale linearly along the 'dynamic axis', i.e. produce 107dB with ~158W, 104dB with ~79W, etc.? Also, what does a transient like this do to the distortion performance of these drivers?
Obviously, I've picked slightly exaggerated, worst-case numbers to illustrate the question, and no one with an 85dB woofer realistically expects to hit 104dB peaks at the listening seat. I'm just trying to understand how this is addressed in speaker designs. I've seen IB designs where, on the surface, the number of drivers and the amount of amplifier power seem outrageous, but it comes down to the same thing, right: having the necessary dynamic headroom. So shouldn't the same principles apply at the higher frequencies as well? Or is it not as important outside the subwoofer range? Are there other considerations that make this irrelevant?
Thanks,
Saurav