T-stops and f-stops are pretty easy to understand in principle, but....
Ever since I first started using the 24-105 in 2005, I've found that if you shoot from the camera's meter, or the suggested exposure in manual, this lens underexposes the frame by about one third of a stop compared with a fast lens such as the 50/1.4, so the image is slightly 'denser' and the histogram sits further to the left.
Now according to DXO the 24-105 has a T-stop of 5.1 against an f-stop of 4, so it loses about two thirds of a stop.
Using the same source, the 50/1.4 has a T-stop of 1.6 against f/1.4, so it loses about one third of a stop. The difference between the two lenses' actual transmission is therefore one third of a stop, and that is exactly the exposure difference I find between them, including when using a hand-held incident light meter and setting the camera's shutter speed and aperture manually.
So far so good; the difference I see equates exactly to the T-stop differences between the lenses.
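For what it's worth, the arithmetic above checks out. The light lost to transmission, measured in stops, is 2·log2(T/f); a quick Python sketch using the DxO figures quoted above:

```python
import math

def stops_lost(f_stop, t_stop):
    """Light lost to transmission, in stops: 2 * log2(T / f)."""
    return 2 * math.log2(t_stop / f_stop)

# DxO figures quoted above
loss_24_105 = stops_lost(4.0, 5.1)   # roughly two thirds of a stop
loss_50_14  = stops_lost(1.4, 1.6)   # roughly one third of a stop

print(f"24-105:     {loss_24_105:.2f} stops lost")
print(f"50/1.4:     {loss_50_14:.2f} stops lost")
print(f"difference: {loss_24_105 - loss_50_14:.2f} stops")
```

That difference of roughly a third of a stop matches the exposure gap I see between the two lenses.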
However, there is a problem. The 24-70 f/4 IS, which by the same source has a T-stop of 4 against f/4, does exactly the same thing: compared with a faster prime with fewer elements, it underexposes by one third of a stop. So do other lenses with complicated element counts, such as the 70-300L. I also believe Edward Lang (eml58) found that his 200-400L underexposed on the same meter reading compared with the 400/2.8L.
So what's going on? By the definition of a T-stop, lenses of equal T-stop should expose the same, but it seems to me that the more elements are added, the more the lens is likely to underexpose. Yet transmission loss is exactly what the T-stop is supposed to measure.
And before anyone asks, this isn't just one copy of the lens; it is universal.
To me it suggests that the T-stop value is not accurate in practice, or at least not in this application. Any ideas?