And all for what? Three-chip cameras were important when resolution was under 600 lines and the image frames were interlaced, because Bayer-array sensors didn't capture enough color information to interpolate accurately. With the resolution of today's sensors that's no longer true, and Bayer arrays work very well in most cases. A three-chip large-sensor camera would be a beast in size, not to mention the cost.
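To make the interpolation point concrete, here's a minimal sketch (my own illustration, not anyone's actual camera pipeline) of bilinear demosaicing: each photosite on a Bayer sensor records only one of R, G, or B, and the other two values are estimated from neighboring sites. The RGGB layout and the function name are assumptions for the example.

```python
import numpy as np

def _conv3(img, kernel):
    # 3x3 convolution with zero padding, pure NumPy (no SciPy dependency).
    h, w = img.shape
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += kernel[dy + 1, dx + 1] * p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def bilinear_demosaic(bayer):
    """Reconstruct RGB from a single-sensor RGGB Bayer mosaic by bilinear
    interpolation: the two colors each photosite never measured are
    estimated as a weighted average of nearby real samples."""
    h, w = bayer.shape
    # Boolean masks marking where each channel was actually sampled (RGGB).
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # R on even rows, even cols
    masks[0::2, 1::2, 1] = True   # G on red rows
    masks[1::2, 0::2, 1] = True   # G on blue rows
    masks[1::2, 1::2, 2] = True   # B on odd rows, odd cols
    kernel = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
    rgb = np.zeros((h, w, 3))
    for c in range(3):
        samples = np.where(masks[:, :, c], bayer, 0.0)
        weight = masks[:, :, c].astype(float)
        # Normalized convolution: neighbor sum divided by count of real samples.
        rgb[:, :, c] = _conv3(samples, kernel) / np.maximum(_conv3(weight, kernel), 1e-9)
    return rgb
```

With modern photosite counts there are so many samples per channel that this estimation error is rarely visible, which is the point above.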
I asked someone else in the industry years ago why we didn't have 3-chip S35 cameras (much less "FF" sensors), and the two main answers were size and cost. Then throw in the complexity. Things to think about: the lenses are not designed to work with a prism the way our "TV" lenses are, so you'd have to design an optical correction system to go between the optical block and the lens. And the higher the resolution, the more difficult (if not impossible past a certain threshold) it becomes to line all three chips up so that you don't have registration errors. Although, speaking strictly of stills, there are cameras with "pixel shift" systems/technology that move the sensor to allow for higher-than-native-resolution capture, so maybe, in that regard, it would be possible to line the images up, if not at time of capture, then in post.
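The pixel-shift idea can be sketched like this: shift the sensor by one photosite between four exposures so every pixel location gets measured through each filter of the RGGB quad, yielding a real R, G, and B at every site with no demosaic interpolation. This is an idealized toy (static scene, exact one-photosite shifts, names are my own), not any vendor's implementation.

```python
import numpy as np

# RGGB filter pattern: which channel each (row%2, col%2) position measures.
PATTERN = np.array([[0, 1], [1, 2]])  # 0=R, 1=G, 2=B

def capture(scene, dy, dx):
    """Simulate one Bayer exposure with the sensor offset by (dy, dx)
    photosites: each site records only the channel its filter passes."""
    h, w, _ = scene.shape
    rows = (np.arange(h)[:, None] + dy) % 2
    cols = (np.arange(w)[None, :] + dx) % 2
    chan = PATTERN[rows, cols]                       # channel index per site
    sample = np.take_along_axis(scene, chan[:, :, None], axis=2)[:, :, 0]
    return sample, chan

def pixel_shift_merge(scene):
    """Merge four one-photosite-shifted exposures. Over the four shifts,
    every site is measured once through R, twice through G, once through B,
    so full RGB is recovered by averaging real samples only."""
    h, w, _ = scene.shape
    rgb = np.zeros((h, w, 3))
    count = np.zeros((h, w, 3))
    yy, xx = np.indices((h, w))
    for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        sample, chan = capture(scene, dy, dx)
        rgb[yy, xx, chan] += sample
        count[yy, xx, chan] += 1.0
    return rgb / count
```

For a static subject the merge is exact, which is why stills cameras can get away with it; the catch for motion picture work is that anything moving between the four exposures breaks the registration, echoing the alignment problem above.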