Interesting article.
Most interesting is that Canon, for the last 10 years starting with the first 1D, has been using the same 500nm process technology even for the latest 1Dx!
And only now is Canon planning to move to a 180nm process.
This was really amazing to me from a technology-evolution perspective in general.
Just think of it: Intel is using a 22nm process for its latest Ivy Bridge, in the near future will be moving to 14nm for the second generation of Haswell, and then to a 10nm process for Skylake.
http://en.wikipedia.org/wiki/Ivy_Bridge_(microarchitecture)
On top of this there is the new 3D (tri-gate) transistor technology that drastically reduces current leakage and power consumption (http://en.wikipedia.org/wiki/Tri-gate_transistor#Tri-gate_transistors). In general this significantly reduces the noise the transistors produce.
On top of that, a year back ARM demonstrated real 3D circuitry in a 1mm-cube microchip, where one circuit layer is grown above another to form a multilayer chip (3D instead of flat 2D).
All of that combined promises amazing things now and in the near future.
And just compare Intel's current level of microchip technology (22nm) to Canon's (500nm).
Intel introduced a 600nm process for the Pentium P54C in October 1994 and a 350nm process in June 1995.
http://en.wikipedia.org/wiki/Pentium
It is not entirely fair to compare digital circuits with analog ones, but this still shows a very large gap in the process technology itself - about 20 years of difference.
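Just as a rough back-of-the-envelope illustration of what that gap means, here is a hedged sketch in Python with simplified assumptions - real density also depends on design rules and analog layout constraints, so the numbers are only indicative:

# Transistor density scales roughly with the inverse square of the feature size.
# This ignores design rules, metal pitch and analog layout constraints,
# so treat the result as an order-of-magnitude estimate only.

canon_node_nm = 500   # process node reportedly used for Canon's sensors
intel_node_nm = 22    # Intel's Ivy Bridge process node

linear_ratio = canon_node_nm / intel_node_nm
density_ratio = linear_ratio ** 2

print(f"Linear feature-size ratio: {linear_ratio:.1f}x")       # ~22.7x
print(f"Approximate density advantage: {density_ratio:.0f}x")  # ~517x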
And the explanation for that is simple - process development is so costly and requires so much investment in R&D and fabrication equipment that it is only possible for a few really big players on the market, like Intel, AMD or Samsung - companies that focus on the development, production and manufacturing of microchips for the rest of the industry.
Smaller companies just do not have enough resources to keep pace with microchip technology evolution, and if they try to do it on their own the technology gap only grows; in order to survive they would eventually need to license (or lease) technology and equipment from the major players on this market.
To me the best way is not to reinvent the wheel, but to use whatever is already available and combine the best of it in a top-level end-user product (similar to what Apple is doing, including purchasing small companies that invented and patented something really useful but do not have the resources for further rapid development).
If I were the project development manager at Canon responsible for this imaging area, I would consider establishing a partnership with Intel (or AMD, or other major players in microchip technology) to get access to the latest process technology. Having such technology at hand opens up huge possibilities for new image sensor designs, which would still be done within the company.
With the high density of active elements on the image chip provided by Intel's current 3D 22nm process technology, it could be feasible to design a new chip with extremely low power consumption and current leakage, which in turn would help to reduce electronic noise.
The other thing that would become feasible is phase-detection AF for every pixel on the image sensor - that would be the biggest revolution in camera technology ever; just think of the possibilities this would give camera users. The camera could be mirrorless with the AF performance of today's top-level pro DSLRs. And you could use any variable group of pixels on the sensor to start AF and track the subject in AI Servo mode across the full frame (the basic phase-detection idea is sketched below).
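To make the phase-detection idea concrete, here is a minimal sketch assuming hypothetical dual-photodiode data - the signals, the correlation search and the displacement are all made up for illustration, not Canon's actual AF algorithm. Each pixel's two photodiodes see the scene through opposite halves of the lens aperture, and the lateral shift between the two sub-images tells how far and in which direction the lens is defocused:

import numpy as np

def estimate_phase_shift(left_signal, right_signal, max_shift=20):
    """Find the pixel shift that best aligns the two sub-aperture signals
    by brute-force correlation; a nonzero shift indicates defocus and
    its sign gives the direction the focus must move."""
    best_shift, best_score = 0, -np.inf
    for shift in range(-max_shift, max_shift + 1):
        score = np.dot(left_signal, np.roll(right_signal, shift))
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift

# Synthetic example: the "right" half-aperture view is the "left" view
# displaced by 7 pixels, which is roughly what an out-of-focus feature
# looks like to the two photodiode halves, plus a little read noise.
x = np.linspace(0, 1, 200)
left = np.exp(-((x - 0.5) / 0.05) ** 2)
right = np.roll(left, 7) + 0.01 * np.random.randn(200)

shift = estimate_phase_shift(left, right)
print(f"Estimated phase shift: {shift} pixels")  # -> -7 realigns the signals
# The lens drive amount would then be proportional to this measured shift.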
And if that is done on a sensor with high resolution and high ISO performance (e.g. 80 MP), you could get shots that were almost impossible before - for example, subject tracking without moving the camera. When shooting acrobatics you could frame the whole performance area, focus on the subject, and the camera would track it across the entire frame without any need to move the camera itself. The same would be very useful for shooting a ballet performance, for example.
Then you could make crops of the required size from the final images and get a perfect close-up of the subject (a simple sketch of this crop-and-follow idea is below).
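As a minimal sketch of that crop-and-follow step (the frame size, crop size and tracked coordinates are all hypothetical, just to show the idea of a "virtual pan" on a high-resolution frame):

import numpy as np

def crop_around(frame, center_xy, crop_w=2000, crop_h=1333):
    """Cut a fixed-size crop centered on the tracked subject position,
    clamped so the crop window never leaves the full frame."""
    h, w = frame.shape[:2]
    cx, cy = center_xy
    x0 = int(np.clip(cx - crop_w // 2, 0, w - crop_w))
    y0 = int(np.clip(cy - crop_h // 2, 0, h - crop_h))
    return frame[y0:y0 + crop_h, x0:x0 + crop_w]

# A hypothetical ~80 MP frame (10944 x 7296) and the subject position
# reported by the on-sensor AF tracking for this frame.
frame = np.zeros((7296, 10944, 3), dtype=np.uint8)
subject_xy = (8200, 1500)

closeup = crop_around(frame, subject_xy)
print(closeup.shape)  # -> (1333, 2000, 3): a close-up that follows the subject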
And if that were also combined with a Foveon-type sensor, that would be another step in this revolution.
On top of this, each pixel could have its own ADC with an on-chip pixel-response-uniformity calibration processor - this could largely eliminate low-light pattern noise. It would be similar to what is currently done in astronomy for telescopes with multi-element mirror arrays, where each channel is compensated in real time for atmospheric turbulence and variations in light propagation (a sketch of the basic per-pixel calibration math is below).
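Just to show what per-pixel response calibration means in practice, here is a hedged sketch of standard dark-frame (offset) and flat-field (gain) correction with made-up calibration values - not a description of any actual Canon pipeline:

import numpy as np

rng = np.random.default_rng(0)
H, W = 480, 640                              # a small hypothetical sensor tile

# Per-pixel calibration data, measured once (e.g. during factory calibration):
dark_offset = rng.normal(100, 2, (H, W))     # fixed-pattern dark level (DSNU)
pixel_gain  = rng.normal(1.0, 0.02, (H, W))  # photo-response non-uniformity (PRNU)

def calibrate(raw):
    """Apply per-pixel offset and gain correction so that every pixel
    responds identically to the same amount of light."""
    return (raw - dark_offset) / pixel_gain

# Simulate a uniformly lit scene as the sensor would record it, including
# the fixed-pattern offset and gain errors plus some random read noise.
true_signal = 500.0
raw = true_signal * pixel_gain + dark_offset + rng.normal(0, 3, (H, W))

corrected = calibrate(raw)
print("raw std:      ", round(float(raw.std()), 2))        # pattern noise dominates
print("corrected std:", round(float(corrected.std()), 2))  # only random noise left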
There are so many exciting possibilities that there would not be enough space to list them all.
And this is what could be done now or in the near future with the current level of technology.
Just combine the best technology pieces already available and get the best product ever.
I am talking here only about what the current technology level makes possible - not about the cost, which would be high for now; that is a different subject. But for building a prototype it is something that could be considered.
So all of the above is more of a vision of the future, and of how to make that future come faster )))