Opened 11 years ago
Closed 11 years ago
#2852 closed defect (fixed)
dcadec internal default downmix is not normalised and reduces stereo separation by cross mixing L* and R*.
Reported by: | Andy Furniss | Owned by: |
---|---|---|---
Priority: | normal | Component: | avcodec
Version: | git-master | Keywords: | dca
Cc: | | Blocked By: |
Blocking: | | Reproduced by developer: | no
Analyzed by developer: | no | |
Description
Summary of the bug: when a 5.1-channel DTS stream is downmixed to stereo with -request_channels, the result is faulty: it is not normalised, and the L* and R* channels are cross-mixed (albeit at reduced level, so some stereo separation is still perceivable).
How to reproduce:
% ffmpeg -request_channels 2 -i 6ch.dts 2ch.wav
(ffmpeg version git master, built on 10/08/13)
This issue seems to exist for all the DTS samples I have, though I don't have many, and I completely failed to find a "normal" VOB channel check.
The channel check I used was a core extracted from a 7.1 MA stream.
Looking at the code, the default matrix in libavcodec/dcadata.h looks guilty: assuming that dca_default_coeffs holds downmix coefficients, it matches the cross-channel mixing that I can hear.
This also raises the question of why this matrix is used at all, since I expected studio DTS material to carry its own downmix metadata.
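For illustration, a stereo downmix is just a per-sample matrix multiply, so both reported symptoms can be modelled directly. The sketch below is not FFmpeg's code; the channel order (C, L, R, Ls, Rs), the coefficient values, and the amount of leakage are assumptions chosen only to show the effect of cross terms and of row gains summing above full scale.

```python
# Illustrative sketch only, not FFmpeg's implementation.
# Assumed channel order per frame: (C, L, R, Ls, Rs); LFE omitted.

def downmix(frames, matrix):
    """Apply a 2x5 downmix matrix to a list of 5-channel frames."""
    out = []
    for frame in frames:
        lo = sum(g * s for g, s in zip(matrix[0], frame))
        ro = sum(g * s for g, s in zip(matrix[1], frame))
        out.append((lo, ro))
    return out

SQ = 0.7071  # ~ -3 dB

# Expected matrix: centre/surrounds at -3 dB, no cross terms.
clean = [[SQ, 1.0, 0.0, SQ, 0.0],
         [SQ, 0.0, 1.0, 0.0, SQ]]

# Faulty matrix as described in the report (leakage amount is made
# up): some of L/Ls bleeds into Ro, and mirror-image into Lo,
# reducing stereo separation.
cross = [[SQ, 1.0, 0.25, SQ, 0.25],
         [SQ, 0.25, 1.0, 0.25, SQ]]

# A frame with signal only on the left-side channels:
frame = [(0.0, 1.0, 0.0, 1.0, 0.0)]
print(downmix(frame, clean))  # right output stays zero
print(downmix(frame, cross))  # right output is non-zero (cross-mix)
```

Note that with either matrix the left output for this frame exceeds 1.0, which is the separate "not normalised" half of the report: the row gains sum to more than full scale.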
Change History (7)
comment:1 by , 11 years ago (follow-up: comment:2)
Keywords: | dts downmix removed |
comment:2 by , 11 years ago
Replying to cehoyos:
> Does the output change if you change the default matrix in dcadata.h?
Yes. I just tried the change below, and the results were as I expected: no cross-mixing, and centre/surround at -3 dB, but still not normalised.
diff --git a/libavcodec/dcadata.h b/libavcodec/dcadata.h
index 15df49e..55f9c33 100644
--- a/libavcodec/dcadata.h
+++ b/libavcodec/dcadata.h
@@ -7566,7 +7566,7 @@ static const uint8_t dca_default_coeffs[10][5][2] = {
     { { 0, 25 }, { 25, 0 }, { 13, 13 }, },
     { { 6, 6 }, { 0, 25 }, { 25, 0 }, { 13, 13 }, },
     { { 0, 25 }, { 25, 0 }, { 0, 13 }, { 13, 0 }, },
-    { { 6, 6 }, { 0, 25 }, { 25, 0 }, { 0, 13 }, { 13, 0 }, },
+    { { 13, 13 }, { 0, 64 }, { 64, 0 }, { 13, 64 }, { 64, 13 }, },
 };
 
 /* downmix coeffs
comment:4 by , 11 years ago
It's fixed with respect to the cross-mixing.
It now seems to be partially normalised: probably enough not to clip too badly on real-world content that doesn't max out all channels equally, but it is currently louder than -ac 2, and louder than an AC-3 version of the same content downmixed with -drc_scale 0 -request_channels 2.
This applies both to streams that hit the default table and to those that carry metadata. I haven't had enough time to look properly; I suspect that in the metadata case a post-mix gain adjustment should be decoded and applied.
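As a sketch of what "normalised" means here (my reading of the complaint, not necessarily the rule the decoder applies): rescale the downmix matrix so that a full-scale input on every contributing channel cannot exceed full scale on either output. One common way is to divide every coefficient by the largest row sum.

```python
# Sketch of one common normalisation rule (an assumption, not
# necessarily what FFmpeg does): divide all coefficients by the
# largest row sum so a worst-case full-scale input cannot clip.

def normalise(matrix):
    scale = 1.0 / max(sum(row) for row in matrix)
    return [[g * scale for g in row] for row in matrix]

SQ = 0.7071  # ~ -3 dB
raw = [[SQ, 1.0, 0.0, SQ, 0.0],
       [SQ, 0.0, 1.0, 0.0, SQ]]  # rows sum to ~2.414: can clip
nm = normalise(raw)
# After normalisation, each row sums to at most 1.0.
```

The cost of this rule is overall loudness: the whole mix gets quieter, which is why a partially-normalised mix can sound louder than -ac 2 output.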
I see commits related to including the LFE, but it doesn't seem to be included in the mix currently (not that I would want that to be the default). What are the plans for this, e.g. default off with a new option?
I added some debugging so I can see the metadata when present, and I do have a stream with a non-zero LFE coefficient.
comment:5 by , 11 years ago
(Disclaimer: I neither understood your original report nor your new message, and I certainly don't blame you!)
Thank you for testing again!
I suspect that the dca decoder is supposed to include the LFE in the downmix, but that may be wrong.
Allow me to repeat: Is the issue fixed?
comment:6 by , 11 years ago
Well, there were two issues: one is 100% fixed and one is 80% fixed :-)
Maybe you should close this bug; I can investigate further and open a new, more specific bug later if needed.
comment:7 by , 11 years ago
Resolution: | → fixed |
Status: | new → closed |
Thank you!
Fixed by Tim Walker.