The person I quoted in my reply, for one.
A method that produces impossible data is flawed, no matter how you rationalize it. The 'whatever' was not an acknowledgement that the rebuttal was correct, but rather boredom and a realization that arguing the point further was futile.
This statement actually just shows that you have absolutely no idea what you're talking about, AND that you haven't gotten past first base in signal theory. It is perfectly normal to have signal accuracies higher than the signal quantization, and there are several universally accepted ways to indicate this. The linear one used with almost all types of imaging sensors is actually the most easily understood and logical one.
DR has a very strictly locked-down definition, and it is:
[maximum signal strength] / [minimum noise floor]
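Expressed in stops (EV), that ratio is just a base-2 log. A minimal sketch with made-up numbers - a 60,000 e- full well and a 5 e- read noise floor are assumptions for illustration, not measurements of any real sensor:

```python
import math

# Hypothetical numbers for illustration: full-well signal of 60,000 e-
# and a read-noise floor of 5 e- (assumed, not measured from any camera).
max_signal = 60000.0   # maximum signal strength (electrons)
noise_floor = 5.0      # minimum noise floor (electrons, std dev)

dr_ratio = max_signal / noise_floor
dr_stops = math.log2(dr_ratio)   # express the ratio in EV (stops)

print(f"DR = {dr_ratio:.0f}:1 = {dr_stops:.1f} EV")
# -> DR = 12000:1 = 13.6 EV
```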
And the only correct way to do this is to take the average of several (ideally several thousand!) individual pixels and then work with their AVERAGE result or error instead of doing it with single-pixel steady-states - which are heavily biased by things like individual impurities in the photo-cell, individual sensitivity deviations and small amplification deviations. Doing it per pixel would give you 22 million individual results for a 5D3, and that isn't very practical.
So, if you accept the fact - natural and self-explanatory to anyone versed in the area - that any and all signal measurements should be taken as AVERAGES of as many individual samples as possible to get the most accurate end result, you can get results that look very strange from a purely numerical point of view. But they're still valid, and also very easily proven with practical empirical tests.
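To see why averaging gets you below the quantization step, here's a quick numpy sketch. All numbers are assumptions picked for illustration: a flat patch whose true level is 100.3 LSB, with 1.5 LSB of read noise, recorded as whole LSBs the way a raw file stores it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flat patch: true level 100.3 LSB, read noise 1.5 LSB std dev,
# quantized to whole LSBs like a raw file.
true_level = 100.3
read_noise = 1.5
n_pixels = 100_000          # average over many pixels, as argued above

samples = np.round(true_level + rng.normal(0.0, read_noise, n_pixels))

print(f"single pixel: integer steps only, e.g. {samples[0]:.0f}")
print(f"mean of {n_pixels} pixels: {samples.mean():.3f}")
# The mean lands on ~100.300 -- far finer than the 1-LSB quantization step.
```

Any single pixel can only be an integer, but the mean of 100,000 of them recovers ~100.300 with a standard error around 0.005 LSB - the noise itself dithers the signal across the quantization steps.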
This also kind of explains the mechanism behind down-scaling results.
If you take a 2000x2000 px crop from an image and scale it down so that the end result is 1000x1000, you have effectively binned four pixels into one. Averaging four samples halves the random error (sqrt(4) = 2), which lowers the average error 'per pixel' in the crop by a full stop (1 EV) - but it doesn't lower the maximum value of each individual pixel. Hence you get 1 EV more DR in the 1 MP image compared to the 4 MP image. Lower resolution, same maximum value, half the average amount of noise per pixel.
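A quick sketch of exactly that, using a hypothetical uniform grey patch with made-up numbers (level 500, noise std 4 LSB):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical uniform grey patch: level 500, noise std 4 LSB.
img = 500.0 + rng.normal(0.0, 4.0, size=(2000, 2000))

# 2x2 binning: average each 2x2 block -> 1000x1000, as described above.
binned = img.reshape(1000, 2, 1000, 2).mean(axis=(1, 3))

print(f"noise before binning: {img.std():.2f} LSB")
print(f"noise after binning:  {binned.std():.2f} LSB")
print(f"gain: {np.log2(img.std() / binned.std()):.2f} EV")
# Averaging 4 pixels halves the noise std (sqrt(4) = 2), i.e. exactly 1 EV,
# while the maximum recordable value is unchanged.
```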
One practical test is to put any camera based on a Sony high-DR sensor at 12-bit raw output, and then compare the end image result to what you get from a Canon-based 14-bit raw. The Sony-based solution will STILL be a lot cleaner in the shadows at low ISOs, and it will STILL have about 13 EV of DR, even though the raw file is "just" 12 bits. AND this will still be easily visible in the image.
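That a 12-bit container can carry ~13 EV is easy to simulate, too. The numbers below are again assumptions for illustration (a noise floor of 0.4 LSB at base ISO is the kind of figure in question here, not a measurement):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 12-bit raw: clipping at 4095 LSB, read noise of 0.4 LSB
# (assumed numbers for illustration, not measured from any camera).
clip = 4095
dark_level = 2.3            # a dark patch just above black
read_noise = 0.4

dark = np.round(dark_level + rng.normal(0.0, read_noise, 1_000_000))
noise = dark.std()          # noise floor measured AFTER 12-bit quantization

print(f"measured noise floor: {noise:.2f} LSB")
print(f"DR = log2({clip} / {noise:.2f}) = {np.log2(clip / noise):.1f} EV")
# ~13 EV of engineering DR out of a 12-bit container: the noise floor sits
# below one quantization step, so the bit depth does not cap the DR.
```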
(Not a very "nice" way to introduce myself to a forum, but it just annoys me when people who have no idea what they're talking about make self-assured remarks like that. My advice would be to STOP being such an adult. Growing up seems to totally stop some people from being able to accept (or admit) that there are lots of things they don't know or understand. Children have a much more open mind to new knowledge and learning new things - and we could learn a lot as humans by adopting their openness to new knowledge.)