Canon, like other makers, uses "scene analysis" in addition to phase AF: face and eye tracking, for example, as in the M50, and also tracking AF (e.g. colour information and possibly some "object identification" AI/database) to keep a selected moving object in focus. I would think that poses an even greater challenge for (real-time) in-camera data processing; compared to that, the "simple" phase-AF operation might be an easy exercise.
But that's not what I am trying to find out. I would like to know why 99 AF fields, and not 999 or 10 million of them, when each single (split) dual pixel can serve as an AF field. Out of curiosity really, nothing else.
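Just to make my hunch about the trade-off concrete (this is purely my own toy sketch, not Canon's actual pipeline): with millions of dual pixels you would probably not compute a phase estimate per pixel, but bin the left/right sub-images into a coarse grid of zones and correlate once per zone. The 9 x 11 = 99 grid, the zone averaging and the integer-shift search below are all illustrative assumptions.

```python
# Toy numpy sketch (my assumption, not Canon's algorithm): bin a full
# dual-pixel readout into a coarse grid of AF zones and estimate one
# left/right phase shift per zone instead of one per pixel.
import numpy as np

def zone_phase_shifts(left, right, grid_rows=9, grid_cols=11, max_shift=8):
    """One horizontal phase-shift estimate (in pixels) per AF zone."""
    h, w = left.shape
    zh, zw = h // grid_rows, w // grid_cols
    shifts = np.zeros((grid_rows, grid_cols))
    for r in range(grid_rows):
        for c in range(grid_cols):
            zl = left[r * zh:(r + 1) * zh, c * zw:(c + 1) * zw]
            zr = right[r * zh:(r + 1) * zh, c * zw:(c + 1) * zw]
            # collapse each zone to a 1-D horizontal profile (rows averaged)
            pl = zl.mean(axis=0) - zl.mean()
            pr = zr.mean(axis=0) - zr.mean()
            # best integer alignment of the two profiles = phase difference
            scores = [np.dot(pl[max_shift:-max_shift],
                             np.roll(pr, s)[max_shift:-max_shift])
                      for s in range(-max_shift, max_shift + 1)]
            shifts[r, c] = -(np.argmax(scores) - max_shift)
    return shifts
```

So instead of millions of independent (and individually very noisy) measurements you read out, bin and correlate 99 reasonably robust ones, which looks like exactly the kind of readout/processing trade-off a camera CPU would force.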
From a practical/user perspective I want to know the answer to the "AF orientation" question, i.e. whether there really is AF sensitivity only for vertical contrast edges/structures or not. That one I can and will check soon (on my daughter's new M50; the gift box hasn't been presented/opened yet ;-)
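One can also check the orientation argument in software first, again just as a toy, under the assumption that a left/right split means defocus shows up as a purely horizontal shift between the two sub-images: a pattern with vertical edges should give a clear correlation peak, a pattern with only horizontal edges should give essentially no signal.

```python
# Toy demonstration: with a left/right dual-pixel split, defocus is modelled
# here as a horizontal shift of the right sub-image. Vertical edges give a
# usable correlation signal; purely horizontal edges give none.
import numpy as np

def phase_shift(left, right, max_shift=8):
    """Estimate the horizontal shift between sub-images, plus a signal-strength measure."""
    pl = left.mean(axis=0)
    pr = right.mean(axis=0)
    pl = pl - pl.mean()
    pr = pr - pr.mean()
    scores = [np.dot(pl[max_shift:-max_shift],
                     np.roll(pr, s)[max_shift:-max_shift])
              for s in range(-max_shift, max_shift + 1)]
    best = np.argmax(scores) - max_shift
    return -best, np.ptp(scores)   # shift of right vs. left, correlation contrast

h = w = 64
defocus = 3                                      # simulated defocus, in pixels
x = np.linspace(0, 4 * np.pi, w)
vertical_edges = np.tile(np.sin(x), (h, 1))      # intensity varies left-to-right
horizontal_edges = vertical_edges.T              # intensity varies top-to-bottom

for name, scene in [("vertical edges", vertical_edges),
                    ("horizontal edges", horizontal_edges)]:
    right = np.roll(scene, defocus, axis=1)      # shift only along the split axis
    shift, strength = phase_shift(scene, right)
    print(f"{name}: estimated shift = {shift}, correlation contrast = {strength:.3f}")
```

For the vertical-edge pattern the estimated shift comes back as the simulated 3 px; for the horizontal-edge pattern the correlation contrast is ~0, so whatever shift is reported is meaningless, which is exactly the "cannot focus on horizontal structures" situation I want to test on the real camera.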
I find it a bit strange that I cannot find any information on this in any of the reviews of the M50, or of other Canon DPAF cameras (DSLRs in live-view mode, not in mirror mode with the separate AF sensor).