My contention is that Canon has always been about "good enough". Their policy has never really pushed the envelope of the state of the art, only just enough to keep ahead of the competition. At 500nm, the current drawn from the battery is many times what a 22nm process would require. One example of the problem is their choice of ADCs, which are current hogs and slow. Look at the specs of a modern bit-slice ADC and compare. By going to, say, 32nm they could use the freed-up current budget to drive high-speed 16-bit ADCs, which would give a decent range of bits at the bottom end and solve some of the shot noise problems. The slow scans they use contribute to the need for AA filters, and they can't scan the ADCs in a single pass. This is just one of the places they could improve the "state of the art". Another example is "what is a DIGIC?" They won't publish the specs of the processor, probably with good reason, as it is dismally slow by the standards of modern processors. Sony is the first manufacturer to break with the old technology and look at new ways to apply the knowledge gained by the semiconductor industry. If Canon put the technology from a smartphone into a camera, they would see a tremendous improvement. If they don't get off their duff, they will go the way of Kodak.
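To put rough numbers on the ADC bit-depth point, here is a quick Python sketch; the full-well and read-noise figures are made-up round numbers for illustration, not measured Canon specs:

full_well = 60_000   # assumed full-well capacity in electrons (hypothetical)
read_noise = 3.0     # assumed read noise in electrons RMS (hypothetical)

for bits in (14, 16):
    step = full_well / 2 ** bits      # electrons per ADC count at base ISO
    quant_noise = step / 12 ** 0.5    # ideal-ADC quantization noise, e- RMS
    print(f"{bits}-bit: {step:.2f} e-/DN, quantization noise "
          f"{quant_noise:.2f} e- RMS vs {read_noise:.1f} e- read noise")

With these assumed numbers, a 14-bit converter has a step of about 3.7 electrons, on the order of the read noise itself, while a 16-bit converter pushes quantization well below the noise floor...that is where the extra "bits at the bottom end" would come from.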
Regarding your first two sentences, I greatly dispute that. If you've only been on the scene for about four years, that may seem to be true. Canon was at the cutting edge for quite some time, however. They were the SOLE company providing FF sensors for years, over several generations of cameras (5Dc, 1DsIII, 5DII), and during that time Canon cameras offered top-end IQ.
The "Competitors" are really just Sony, as Sony is the only other manufacturer making large form factor sensors. Sony came along, dropped a few tens of billions into fabrication facilities (to their great detriment, as they have excessive debt and their electronics division is hemorrhaging money by the billions), R&D, and started pumping out the sensors that EVERYONE ELSE today uses. That includes Exmor and Exmor RS, which include a rather sudden and significant leap forward in readout technology (which is patented, and is certainly going to have an effect on competition for a while.) The innovations in Exmor also includes a process shrink to 180nm that allow Sony to pack a lot more logic onto the same die the sensor itself is on, which is something Canon could probably do...if they were willing to spend the billions of dollars necessary to create a 300mm wafer 180nm fab.
I would also point out that no one is actually using a 22nm process for sensors. The current cutting edge is around 65nm, used for 1.1µm pixel sensors. The next stop is 0.9µm (900nm) pixel sensors; however, expectations are that a 65nm process will still be used for those, as such sensors are now almost universally manufactured on a BSI process. The next step down would be into the realm of 45nm, but to date (based on patent research, ChipWorks papers, and internet searches) it does not appear as though anyone (including Sony, Toshiba, and Aptina) has moved to a 45nm CIS process in any capacity.
I would further point out that 90% of the sensor innovations in the marketplace are applied to small form factor sensors...P&S cameras, smartphone cameras, maybe a few bridge cameras here and there. Such sensors are fast approaching hard limits. At 900nm, pixels are already too small for near-infrared light. The next stop would be 0.7µm (700nm) pixels...at which point you are already going to be filtering some red light simply due to the size of the pixel...pixels are getting smaller than the wavelengths of light they are supposed to be sensitive to. I don't think we will ever see a 0.5µm (500nm) sensor...sensitivity to red, orange, and yellow light would be extremely low or nonexistent.
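As a crude sanity check on those pitch-versus-wavelength numbers, here is a short Python sketch; the band edges are approximate textbook values, and this ignores microlenses, sensor stack effects, and diffraction:

bands_nm = {
    "blue":          (450, 495),
    "green":         (495, 570),
    "yellow/orange": (570, 620),
    "red":           (620, 750),
    "near-IR":       (750, 1100),
}

for pitch in (1100, 900, 700, 500):   # pixel pitches discussed above, in nm
    affected = [band for band, (lo, hi) in bands_nm.items() if pitch < hi]
    print(f"{pitch} nm pixel: smaller than the longest wavelengths of "
          f"{', '.join(affected) if affected else 'no visible/NIR band'}")

At 900nm the pixel is already smaller than part of the near-IR band, at 700nm it undercuts the long end of the red band, and at 500nm everything from yellow on up is longer than the pixel itself.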
I think the innovations required to support such small pixels are going to begin "traveling back up the stack". The notion that a BSI design is useless for larger pixels is only true if you are already working at the smallest process level...65nm. With larger gate sizes, a BSI design could be quite useful for APS-C and FF sensors, and it wouldn't require the investment of billions into new sensor fabrication plants. Other innovations that could be applied to large sensors include light pipes, more efficient metals for wiring (e.g. Cu), color splitting, multi-layer microlenses, etc. Most of these Canon has already demonstrated in their 200mm wafer fabs, so they have the technology...it's just a matter of applying it.
As for DIGIC, I'm not sure what your complaint is. Each DIGIC 5+ chip is capable of processing at least 250 MB/s, and the dual-DIGIC setups process 500 MB/s. That has allowed Canon to achieve the highest stills frame rate in the industry at 14fps (something I've not seen from any other DSLR manufacturer to date...hell, even 12fps is unmatched). These dedicated image processors are actually very fast, much faster at their DEDICATED tasks than a general-purpose CPU would be. They are more akin to a GPU, albeit at a much smaller die size and with much lower power requirements.
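To put that 14fps figure in context, here is a quick back-of-envelope check in Python; the 1D X resolution and burst rate are real, but treating the throughput as straight 14-bit RAW data is my own assumption, not a published Canon spec:

megapixels = 18.1    # 1D X resolution
fps = 14             # top burst rate
bits_per_pixel = 14  # 14-bit RAW readout (assumed for this estimate)

pixel_rate = megapixels * fps                 # megapixels per second
data_rate = pixel_rate * bits_per_pixel / 8   # megabytes per second
print(f"{pixel_rate:.0f} MP/s, roughly {data_rate:.0f} MB/s of raw data")

That works out to roughly 253 MP/s, or about 443 MB/s of raw sensor data, which lands in the same ballpark as the 500 MB/s dual-DIGIC figure above.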