As a final point for all ... what metering mode do you want to be using? For those using evaluative, is that really the best option?
From Canon
http://cpn.canon-europe.com/content/education/infobank/exposure_settings/exposure_compensation.do
"Evaluative metering, the default setting for EOS cameras, takes measurements from different parts of the scene. Based on these, the camera can often compensate for backlit or off-centre subjects, but it has no idea of what the subject is, or the conditions under which you are shooting. You might be photographing a light-toned subject in poor light, or a dark-toned subject in bright light – it is all much the same to your camera.
<snip>
If, for example, the central zones are darker than the outer zones, it is likely that the main subject is backlit. If the central zones are much brighter than the outer zones, the main subject might be in a spotlight. In both cases, the camera will bias the exposure to the central zones, giving correct exposure to the subject.
In effect, the evaluative metering is implementing its own exposure compensation. An overall reading from either scene would not give good exposure, but exposure based on the central area will improve the results.
The trouble when using exposure compensation with evaluative metering is that you don’t know if the metering has already compensated for the conditions. If it has, and you dial in even more exposure compensation, then the exposure will be wrong. Equally, if you assume that the camera has got it right, but it hasn’t, then you will also have a badly exposed picture.
The solution, as with so many things photographic, is experience. After a while you will learn to recognise the types of scene which evaluative metering handles well, and those that it does not.
When you change to a different camera, you will have to learn all over again, as the number of metering zones can change the results."
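The zone-comparison idea in that quote (centre readings vs outer readings driving an automatic bias) could be sketched roughly like this. To be clear, the zone layout, threshold, and bias amounts below are invented purely for illustration; Canon doesn't publish the actual EOS metering algorithms:

```python
# Toy sketch of the centre-vs-outer zone comparison Canon describes.
# All numbers here (threshold, bias scale, clamp) are made up for
# illustration -- the real EOS algorithms are not public.

def evaluative_bias(center_ev, outer_ev, threshold=1.0):
    """Return an exposure bias (in stops) from the difference between
    the centre-zone and outer-zone meter readings (in EV)."""
    diff = center_ev - outer_ev
    if diff < -threshold:
        # Centre much darker than surroundings: likely backlit,
        # so push exposure up toward the central subject.
        return min(-diff, 2.0) * 0.5
    if diff > threshold:
        # Centre much brighter: likely spotlit, so pull exposure down.
        return -min(diff, 2.0) * 0.5
    return 0.0  # readings roughly agree; no bias applied

# Backlit subject: centre reads 2 stops darker than the background.
print(evaluative_bias(center_ev=8.0, outer_ev=10.0))   # positive bias
# Spotlit subject: centre reads 2 stops brighter than the background.
print(evaluative_bias(center_ev=12.0, outer_ev=10.0))  # negative bias
```

This is exactly why stacking your own exposure compensation on top is a gamble: you can't see the bias the camera has already applied.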
I see evaluative as the most random of all of the metering modes. The camera really doesn't know what the hell is in front of it, so it takes a pure guess. You then have to guess what it's done and compensate for that. And from one camera to the next the algorithm could change significantly (especially with the new colour metering on the 5D3, which will have a lot more information to work with). So maybe it's doing a much better job? Or worse? Or...?
However, the major point for me is that it's very hard for a system like this to give consistently useful results, because the outcome depends so heavily on the scene in front of it.