

Messages - jrista

1141
Notice that the patent was filed in December of 2012. So they've had over a year and a half to work on it, plus whatever time they spent before filing. So it's possible it will be included in a 7D2 this fall.

I'd expect Canon to announce a prototype, and show off the benefits of their technology, as they have in the past, before actually using it in a product. There certainly isn't any guarantee that would happen, but it doesn't feel like the technology is ready yet. I expect more patents on the technology, and a prototype test, before we actually see a competitive layered sensor in a DSLR.
What about a P/S camera with the technology? That would be a much safer way to introduce it...

Possibly. I dunno, sometimes I think it's a very careful balance for companies like Canon. On the one hand, it might be cheaper (and therefore "safer") to introduce new technology at the low end. On the flip side, you then have your really high end, like the 1D X, where the pros who really understand the value of the technology could actually put it to good use, some of whom might be miffed if it did not show up there first.

1142
I'd expect Canon to announce a prototype, and show off the benefits of their technology, as they have in the past, before actually using it in a product. There certainly isn't any guarantee that would happen, but it doesn't feel like the technology is ready yet. I expect more patents on the technology, and a prototype test, before we actually see a competitive layered sensor in a DSLR.

Is this something you would want to actually introduce, and not trail?

I can think of few better ways Canon could produce its own 'Osborne Effect' than to show something so different whilst still churning out a range of cameras with 'existing' technology.

Much like Craig, I've had quite a few 'rumours' sent to me about upcoming 'new' sensor technology, but nothing overly convincing, and nothing from anyone genuine (i.e. not hiding behind anonymity) whose understanding of sensor technologies I'd rate.

Canon wouldn't be pioneering layered color sensor technology. That was done with Foveon. Canon certainly wouldn't be the "first" with a layered sensor if they released the 7D II with one.

Also, keep in mind that the vast majority of "cutting edge" sensor technology has never made its way into a DSLR or mirrorless camera. Sony's Exmor technology is a pretty interesting step towards a better, more integrated sensor design, but even that is still a very far cry from the most cutting-edge sensor technology. Most of the really amazing stuff is in video sensors and small form factor sensors...the ultra-tiny 1/8" sensors used in phones, tablets, and other small, cheaper cameras.

In the grand scheme of things, Canon is a "little" behind and Sony a "bit" ahead when it comes to large form factor sensors. The technological differences are far from large, and not even remotely close to huge. The single largest differentiator is low ISO noise, which is largely due to Canon's ADC units, which are not actually part of their sensor at all...they are part of the DIGIC chips.

Both companies' larger sensor technology is quite far behind the level of technology employed in smaller sensors, though. Even Sony's small 1/3" ICX CCD sensors have better technology in them than either Canon or Sony DSLR/mirrorless sensors.

Canon could easily close the gap if they either fixed their ADC units in the DIGIC chips to introduce less noise, or moved to a lower frequency on-sensor-die column-parallel ADC approach. All the rest of their sensor technology is actually very good. If they employed more of their noise reduction patents, they could dramatically reduce dark current noise (which can be problematic at higher ISO settings and for long exposures), reduce readout frequency when a high speed readout is not necessary (lower frequency means less noise), etc.
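To put some rough numbers on that, here's a little Python sketch of the standard pixel noise model (the full well and read noise figures are purely illustrative, not Canon's actual specs):

Code:
import math

def snr(signal_e, read_noise_e, dark_e_per_s, exposure_s):
    # Photon shot noise, dark current shot noise, and read noise add in
    # quadrature. All quantities are in electrons.
    noise_e = math.sqrt(signal_e + dark_e_per_s * exposure_s + read_noise_e ** 2)
    return signal_e / noise_e

def dr_stops(full_well_e, read_noise_e):
    # Engineering dynamic range: full well capacity over the read noise floor.
    return math.log2(full_well_e / read_noise_e)

# Same hypothetical full well, two read noise levels:
print(round(dr_stops(67000, 33), 1))   # ~11.0 stops (noisy off-die ADC)
print(round(dr_stops(67000, 3), 1))    # ~14.4 stops (clean column-parallel ADC)

Cut the read noise floor by an order of magnitude and you gain over three stops of low-ISO DR without touching anything else on the sensor. That's the entire gap in a nutshell.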

1144
Maybe 7D2 gets the new dual ISO instant read per pixel (ALL pixels) thing? And 5D4 that plus multi-layer sensor in late 2015?
Or maybe 7D2 gets enhanced dual pixel AF and 5D5 gets dual ISO read per pixel (NOT the ML stuff that has issues, but a true dual read of each and every photosite) in late 2018  ;D.

Why dual ISO, instead of just reduced read noise? All dual ISO does is work around a read noise problem. I know Canon already has several read noise and dark current noise reduction patents, and some seem quite effective. Dual ISO is a workaround that ML discovered and implemented because current Canon cameras have high read noise.

If Canon would just reduce their read noise, then we wouldn't need dual ISO...the problem with noise and DR would be solved directly.
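For anyone curious what the workaround actually does, here's a toy sketch of the dual ISO idea (this is just the concept, NOT Magic Lantern's actual algorithm):

Code:
import numpy as np

def merge_dual_iso(raw, gain_hi=8.0, white=4095, black=128):
    # Toy dual-ISO merge: even rows shot at base ISO (they keep the
    # highlights), odd rows at a high ISO (cleaner shadows, but they clip
    # gain_hi times sooner).
    data = raw.astype(np.float64) - black
    lo = data[0::2]                  # base ISO rows
    hi = data[1::2] / gain_hi        # high ISO rows, rescaled to base exposure
    clipped = raw[1::2] >= white     # where the high ISO rows blew out
    # Take the clean high-ISO data wherever it survived, base ISO elsewhere.
    return np.where(clipped, lo, hi)

The catch is right there in the sketch: the merged frame has half the vertical resolution (plus aliasing where the two exposures interleave in a real implementation). Lower read noise at the source gets you the shadow improvement with none of those costs.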

1145
It sounds like Canon is identifying and solving a number of issues with layered sensors. Given that, and given that their patent filings are still being published, I am not sure we'll see a layered sensor in the 7D II. The issues would need to be worked out first. It's possible all of these were filed 18-24 months ago and the technology is ready, but there could also be ongoing work.

I'm still waiting for a Canon patent that shows they figured out how to reduce noise and increase dynamic range in a layered sensor. I think that would make...well...everyone's day. :D

It's intriguing that Canon is working on a layered sensor, though. At the very least, it gives some hope for the cameras that come after the 7D II.
The 7D2 is obviously a mirrorless APS-H multilayer sensor camera that is tightly integrated with the Microsoft Surface tablets... :)

That would be nice...especially if it has 120 beautiful megapixels. :D

1146
Canon General / Re: Dragonfly, Powered by Canon Lenses
« on: July 14, 2014, 01:02:04 PM »
Actually, surveying for nearby supernova remnants in H-alpha might be a pretty interesting project scientifically in itself for this Dragonfly.

Yes - the problem is that we'd have to get different detectors, with much lower read noise. With narrow band filters the read noise is no longer smaller than the noise from the sky background, and the setup is no longer competitive.

Any chance you guys have some forward knowledge of larger ultra-low-noise sensors coming out? Sony's newer ICX line is pretty nice, with very low dark current and pretty low read noise (~5e-?). But the sensors are tiny. Really tiny, as in 1/3" or maybe 1/2", which is about half the size of a KAF-8300 and about 1/5th the size of a full-frame/KAF-11002 sized sensor. It would be really nice to know that Sony has some larger sensors based on their new low-noise technology coming out... ;)
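As an aside, the read-noise-vs-sky-background point above is easy to put into numbers. A quick sketch of the usual rule of thumb, with made-up flux rates just for illustration:

Code:
import math

def sky_limited(read_noise_e, sky_e_per_s, exposure_s, margin=3.0):
    # A sub-exposure is "sky limited" once the sky's shot noise swamps the
    # read noise, i.e. sqrt(sky electrons) >> read noise.
    return math.sqrt(sky_e_per_s * exposure_s) >= margin * read_noise_e

# Broadband: plenty of sky flux, so 5e- read noise vanishes in a 300s sub.
print(sky_limited(5, sky_e_per_s=20.0, exposure_s=300))   # True
# Narrowband H-alpha: sky flux cut ~100x; the same sub is read noise limited.
print(sky_limited(5, sky_e_per_s=0.2, exposure_s=300))    # False

Cut the sky flux a hundredfold with a narrowband filter and the same exposure is suddenly read noise limited, which is exactly the problem described above.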

We are considering other projects to augment what we're doing now - particularly when the moon is up and our main science is on hold. We're also hoping to build a bigger array at some point in the future - with 50 lenses we'd effectively have a 400 mm f/0.4 lens, with a 1m aperture.

f/0.4 at a 1m aperture...now that would really start to surpass, on specs alone, some of the really large earth-based telescopes for sensitivity.
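The light-bucket math behind that, for anyone who wants to check it (the focal length doesn't change when you co-point lenses, but the collecting areas add):

Code:
import math

focal_mm = 400.0
aperture_mm = focal_mm / 2.8           # ~143mm entrance pupil per EF 400/2.8
n_lenses = 50

# Collecting area scales with aperture squared, so n co-pointed lenses act
# like a single aperture sqrt(n) times larger at the same focal length.
eff_aperture_mm = aperture_mm * math.sqrt(n_lenses)
print(round(eff_aperture_mm))                 # ~1010mm, i.e. roughly 1m
print(round(focal_mm / eff_aperture_mm, 2))   # ~0.4, i.e. f/0.4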


1148
Canon General / Re: Dragonfly, Powered by Canon Lenses
« on: July 12, 2014, 03:02:05 PM »
I chatted with one of the guys on this project over on the CloudyNights forums a couple of months back. At the time, he said the current version of Dragonfly had 8 commercially available Canon EF 400mm f/2.8 L II lenses, but that they were in the process of adding four more for a total of 12. Their ultimate goal was to get up somewhere around 20-24. To achieve that, they had to redesign the mount that holds the lenses. The original version was a squarish contraption, and I think the new approach uses something more modular, some kind of hexagonal or circular cells that can be attached to each other.

Anyway, it's a pretty cool setup, incredibly sensitive for an earth-based telescope.

1149
When will we know the final specification for the Canon 7D Mark II?


When Canon makes an official announcement. Until then it is rumours and speculation.
19Mpixels, 8.5 fps,  23 AF points, 0.5 stop improvement   ;D   ;D   ;D
I prefer my speculation on the wild side.... I predict that the 7D2 will be mirrorless. I have a perfect prediction record..... wrong every time :)

 ;D 

1150
I think this debunks the notion that, in the Microsoft-Canon patent deal, Microsoft was the dominant or aggressive party. If Canon also joined this new patent co-op, it sounds like CANON is looking both to reduce patent litigation and to share their technology out of goodwill.

Personally, I think when companies share patents, or at least agree not to litigate like drunken monkeys over them, it's only better for the consumer in the end. I think interesting things could come of a more open world where patents are not wielded as weapons. I still think it's a company's right to protect its intellectual property, but I've been pretty soured by Apple's patent tactics and litigation...they come off as petty and dishonest.

1151
EOS Bodies / Re: Canon 6D N
« on: July 11, 2014, 06:10:17 PM »
about another 30% off without video feature. I'm in ;D

Me too! :D

1152
First, I agree, you should only insure what you absolutely have to. I insure my 5D III and EF 600/4 L II. That's it. All the rest I can cover myself. The payouts from the insurance company would top out at maybe a grand anyway, and if you make a lot of little claims like that (especially on a home insurance rider or schedule), it ultimately results in larger premium increases. Only insure things that cost at least a few thousand dollars. In the case of the 600mm lens, it's $12,800 new...I pay something like an extra $300 a year to insure those two items on my home insurance scheduled property rider...a very small price to pay in case my lens is damaged and has to be replaced.

Insurance companies may "win" on a personal basis, but in the big picture, they have been getting slammed over the past good number of years. Probably since Katrina, insurance payouts have been pretty significant in large regions of our country. My parents' house (they also live here in Colorado, up in the mountains of the Front Range, just above Jamestown) was damaged by the September rains we had here in Colorado last year.

They are just two of the many thousands, if not millions, of people who have had to file insurance claims. The payout just for the Colorado disaster is going to end up in the hundreds of millions at least, and it's all still ongoing. A lot is still damaged, and it will take years to fully repair (for example, the main road up to my parents' house is half washed out in a couple dozen places...there is not enough room for more than one car to pass at a time). The work to shore up whole valleys in that area, bring in massive amounts of earth, rebuild roads, build giant culverts and other water management systems to handle the kind of deluge we had, etc., is all ongoing, and probably will be for another year or so. Massive insurance payouts and other expenses going on there.

The town of Jamestown itself was pretty much destroyed; only one side of main street (and anything that was up in the mountains) survived, and a lot was damaged there as well. Many homes were completely washed away and have had to be rebuilt from scratch (that's several hundred grand a home right there in insurance payouts).

People all over the Front Range and the plains just in front of it had flood and mud damage. Many thousands more, all the way out to my house (which is in the Aurora South area, fairly well east of the mountains themselves), had significant hail damage (thousands upon thousands of roofs have been replaced around here, and some have had multiple claims).

My roof could probably stand to be replaced, but I'm holding out as long as I can to maintain my "claims free" status, as it's a moderately significant discount, and my insurance has gone up enough already as insurance companies around here scramble to cover all their costs. (Ah, gotta love subsidies in the face of countless natural disasters year after year.)

Anyway...I wouldn't say that insurance companies just plain and simply "always" win. They win...for a while...until the claims start piling up. Then their profit margins tank significantly, their costs just to handle all the claims flowing in increase, and they eventually react by jacking up everyone's premiums...until the next natural disaster occurs, costing hundreds of millions to billions, and the payouts start again. Before Katrina, it was pretty common for insurance companies (mainly the broad insurance providers: Farmers, American Family, Allstate, etc.) to have profit margins in the double digits (which, truly, is very high...but when you think about what insurance is, in the good years, it NEEDS to be high). I think 8% was considered relatively "low". After Katrina, profit margins for insurance companies tanked to around -20% for a while. They topped 10% again for a little while, and have been declining since. In recent years, big insurance company profit margins are down in the 2-6% range, though there have been spans where they were in the -7% to -10% range. Average it out, and profit margins for insurance companies are at best a third of what they were, at worst a fifth, and shrinking. And the payouts continue...

So, don't be surprised if you have to pay to protect your investments. Insurance payouts are very high in recent years, and look to remain relatively high in the future.

1153
EOS Bodies / Re: Eos7D mk2, How EXCITED will you be if . . .?
« on: July 10, 2014, 08:00:12 PM »

Quote from: neuroanatomist
As for sharpness, while it's true that a multilayer sensor wouldn't need the blurring caused by an AA filter to avoid color moiré, that blurring is predictable and thus highly correctable with sharpening in post, so the true gain in sharpness is minor at best.
Not true. A "no AA" picture can still be corrected/sharpened better than a picture "with AA". A multilayer sensor without AA can be sharpened/"regenerated" even further.

All the practical evidence from people who have two cameras identical other than the AA filter, i.e. the Nikon D800/D800E and the Pentax K-5 II/K-5 IIs, is that there is no perceivable difference after applying a suitable unsharp mask.

no AA is still a touch crisper, but also with false 'detail' and more issues
I don't think sensor densities are high enough yet for no AA filter to be wise.

Aye, removal of the AA filter results in false detail, which really just shows up as harsh noise a lot of the time, and as aliasing and more at others. I don't know that sensor densities will ever be high enough that we could really do away with AA filters. I mean, if the Otus does resolve somewhere in the realm of 400lp/mm wide open, then we would need a Bayer sensor capable of resolving over 550MP in order to be able to drop the AA filter. That would mean pixel sizes around 1.25µm. Not infeasible from a fabrication standpoint...probably infeasible from a data transfer rate standpoint. The file sizes at 14 bit, assuming around a 7% increase in pixel count for masked border pixels, would be just over 1GB each...the only things that move that much data per second are high-powered GPUs and CPUs, both of which require massive amounts of power (even Intel Haswell i7s still draw a lot of power when moving that kind of data per second).
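The arithmetic behind those numbers, for anyone who wants to check my assumptions (a 36×24mm frame, Nyquist sampling of the lens, and my ~7% guess for masked border pixels):

Code:
lens_lp_mm = 400                    # assumed Otus-class resolving power
px_per_mm = 2 * lens_lp_mm          # Nyquist: two pixel rows per line pair
print(1000.0 / px_per_mm)           # 1.25 micron pixel pitch

w_px, h_px = 36 * px_per_mm, 24 * px_per_mm
print(w_px * h_px / 1e6)            # ~553 megapixels on full frame

total_px = w_px * h_px * 1.07       # ~7% extra for masked border pixels
print(total_px * 14 / 8 / 1e9)      # ~1.04 GB per 14-bit raw frame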

1154
Third Party Manufacturers / Re: Nikon's D800E 30% sharper than D800
« on: July 10, 2014, 07:34:19 PM »

Consider that across the Internet, criticism of DxO typically only comes from people that own Canon products. That piece of data speaks volumes about how DxO results are absorbed, don't you think?

I'm not sure what "typically only" means (do you mean "usually" or "most of the criticism I've seen" or "most of the criticism I've noticed"?) but I often see criticism of DxO on forums (esp. m43), including at DxO itself.  Perhaps that "piece of data" speaks volumes too.

I never see people on Nikon rumors or Sony Alpha Rumors ripping into DxO like folks do here... (don't read m43 forums so...)

LOL. You haven't read enough of Nikon Rumors or DPR, then. Particularly at DPR, the DxO ripping is AT LEAST as bad as it is here, if not a lot worse. We aren't the sole group of people who have a problem with the way DxO does things; there are a lot of people out there, many of whom are NOT Canon fans, who don't like DxO's results or the black boxes DxO insists on maintaining. You can't call yourself "scientific", then have blatantly unscientific results and not explain the reasons why to anyone.

That doesn't mean that 100% of everything DxO does is bad...people here are pretty clear about which bits of DxO's information and/or processes and procedures they have a problem with. Dig down into DxO's direct measurements for sensors, and most of those are useful information. It's the extrapolations (e.g. Print DR) and funky results (e.g. lens resolution, the T-stop weighting in their results) that raise serious questions about what DxO is doing, how, and why. LEGITIMATE questions.

1155
Third Party Manufacturers / Re: Nikon's D800E 30% sharper than D800
« on: July 10, 2014, 01:03:39 PM »
OK, I trust you guys know your optics and related math much better than I do.  I'm just trying to figure something out here that's not quite making sense to me yet so if you care to indulge following the path I'm on with this, please tell me which step I slipped on.

I'll use round numbers for convenience but referring to the numbers jrista provided on a previous page.

Step 1:

- A digital image sensor (e.g. D800e) with pixels that are 5 microns square = 100 lp/mm physical sensor resolution with no AA filter.
I presume with whatever kind of algorithm is used, it is possible to read alternating rows of pixels, if they are properly stimulated, such that it would be possible to electronically extract the maximum of 100 lp/mm from this sensor.  If this were a monochrome rather than Bayer sensor then likely even simpler.

The resulting contrast ratio, if one were to stimulate alternating rows of pixels with high and low (dark) intensities, would depend on the spot size of the illumination and how it was modulated during the raster.

Let's cheat a little bit, for fun.
I'm thinking if a visible light laser beam could be focused to about 1 micron, then rastered across the sensor in perfect geometric alignment and modulated such that the beam was ON only while the edge of its spot fringe was entirely located within a given pixel (row) such that no appreciable amount of that light were to enter an adjacent pixel (row), then the resulting contrast ratio would be quite high as there would be no bleed-over to the pixels in the dark row resulting from the fuzzy fringe of the spot.
This would be cheating because it would not be a perfect square wave function but would require a reduced ON time vs the normal 50% ON to 50% OFF of a square wave.

Thus we have applied a pattern of light and dark lines to the sensor synchronized with the sensor's physical pixel layout such that every second pixel is illuminated and alternating ones are dark.
We get a 100 lp/mm equivalent signal from the sensor. Still, we may have a slightly less than perfect maximum (MTF) contrast ratio between rows, but it's likely to be much higher than the typical 50% MTF standard.

If we were to instead modulate the light spot (without cheating) so that it was turned ON and OFF as its center crossed the boundary from one (row) of pixel(s) to the next, then that will have an equivalent contrast ratio you could calculate at about 5:1.

Are there any errors in this hypothetical assumption so far?


Step 2:

- we have some lens that is capable of resolving 150 lp/mm at an MTF of 50% as measured on some optical bench...
This same lens should have a better than 50% MTF result if it were resolving a test target at 100 lp/mm.

Any error in step 2?


Step 3:

- we take the lens in step 2 and use it to focus a 100 lp/mm image onto the sensor in Step 1.  (We can use monochrome light if we have to minimize focus errors from CA)
We must now carefully align the focused image to the pixels on the sensor so that the middle of the bright line corresponds to the middle of a pixel (row) with the middle of the dark line aligned to the middle of the next pixel (row).  This should yield the maximum readable contrast ratio from the electronic sensor.
IF the alignment is PERFECT then the contrast ratio should still be a reasonably good number. As the alignment shifts away from perfect, the resulting contrast ratio will drop to a low of 1:1 (at a 2.5 micron shift) for adjacent pixels, which means no discernible contrast at all.

Are there any errors in step 3?


Conclusion:

If there were no errors in the 3 steps above then it is possible for a lens and sensor combination to resolve the physical maximum lp/mm of the sensor if the lens has a sufficiently higher resolving power in at least the ideal circumstance described.

Add angular and positional misalignments and mismatches in spatial frequency and you'll get aliasing and all manner of things that throw the above out the window and the math explained in this thread describes the system behavior.

Is the conclusion correct within the limitations stipulated?

You have it somewhat, but some things are slipping through your grasp. ;)

First thing: yes, it is possible to use a lower contrast threshold than 50%. If you do that, then your results are generally in a different context than lens tests done anywhere else, as testing at MTF 50 is very standard; it's what all the major testers use. It is not invalid to reference a lower contrast level, however there is a diminishing guarantee that any given sensor can actually resolve differences below a certain contrast level. The human eye can barely detect contrast at around 9%. The human eye has some advantages that sensors do not, however, such as our brains doing real-time superresolution enhancement on everything we observe.

It's "safe" to refer to spatial resolution at MTF 50. It's a well-known context, it's easily comparable with results from other testers, official sites, etc. You can also very easily find LP/MM numbers for primary apertures, and sometimes half or third stops, in tables for MTF 50. You can also usually find the same for MTF 10, although there is no guarantee that a sensor could actually separate detail (real detail, not noise) at that low of a contrast level. (Noise tends to dominate at that low of a contrast level, and things like LPFs may smooth detail out, and conversely the LACK of an LPF may result in even more noise at an even higher contrast level.) MTF30 might be a good contrast level that sensors can still resolve...however there isn't a lot of readily availble information on lens resolving powers at that level. You would have to compute all that yourself (which is certainly possible, but it makes it harder for others to verify your claims.)

Some other things to account for as well.

First, lens aperture. Lens resolving power changes with aperture, as smaller apertures increase the impact of diffraction. I have found, based on my own testing as well as tests from official testers like DPR, PZone, etc., that lenses generally top out in resolution somewhere around f/4 to f/5.6. Diffraction-limited spatial resolution at those apertures is somewhere between 123lp/mm and 173lp/mm. There are some few lenses that may resolve more than that at wider apertures...something like the Otus could very well resolve the 247lp/mm diffraction-limited resolution of f/2.8...and possibly, in the center of the lens, resolve upwards of 350-400lp/mm at f/2-f/1.4. I haven't actually looked at a real MTF chart to know for sure.
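Those diffraction-limit figures come straight out of the diffraction MTF of an ideal circular aperture; solving for the 50% contrast point numerically reproduces them. A sketch (I've assumed ~590nm light; the exact numbers shift a little with wavelength):

Code:
import math

def diffraction_mtf(nu_lp_mm, wavelength_mm, f_number):
    # MTF of an ideal (aberration-free) circular aperture; contrast reaches
    # zero at the cutoff frequency 1/(lambda*N) cycles per mm.
    s = min(nu_lp_mm * wavelength_mm * f_number, 1.0)
    return (2 / math.pi) * (math.acos(s) - s * math.sqrt(1 - s * s))

def mtf50(wavelength_mm, f_number):
    # Bisect for the spatial frequency where contrast falls to 50%.
    lo, hi = 0.0, 1.0 / (wavelength_mm * f_number)
    for _ in range(60):
        mid = (lo + hi) / 2
        if diffraction_mtf(mid, wavelength_mm, f_number) > 0.5:
            lo = mid
        else:
            hi = mid
    return lo

for n in (2.8, 4.0, 5.6):
    print(n, round(mtf50(0.00059, n)))   # ~245, ~171, ~122 lp/mm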

Keep in mind, though...those resolutions are ONLY possible at those apertures or WIDER. The moment you stop down more, your maximum diffraction-limited resolution drops. Those are pretty fast apertures; even f/4 is getting fairly fast. Very few lenses actually exhibit "ideal" behavior at f/4 or faster...optical aberrations generally have some kind of impact, even if it's small. Sometimes the impact of an aberration is simply a loss of contrast...resolving power might be the same, but it's now at a lower contrast (i.e. MTF 30), which means detail will become increasingly more difficult to differentiate from noise.

Finally, and I make this mistake myself, sensors really don't achieve their "theoretical maximum" resolution...not unless they are just a bare, monochrome sensor (no filters of any kind). Only a bare mono sensor is really going to be capable of resolving line pairs anywhere close to the size of its pixels. For all other sensors, the use of filters (even just IR/UV filters) will reduce resolution a bit, and the use of a CFA obviously has an impact (although more in color than in luminance, for sure). So the D800E, with its 4.9 micron pixels, has a raw mono spatial resolution of about 102lp/mm. Its real-world spatial resolution is going to be diminished, however. I'd say the D800 probably loses some 20-30% or so due to the CFA and filter stack. The D800E has that funky self-cancelling LPF, so it won't lose as much, maybe 15-20%.

Given the existence of the CFA on the D800E, despite the lack of an effective LPF, there is no way anything could ever actually resolve anywhere remotely close to 36MP, with any lens. It just isn't possible. Hence the reason why DxO's results are so highly suspect. I could believe ~30-31MP with a very good lens. I have a very hard time believing anything higher than that on average, though...unless it was an absolutely stupendously kick-ass god-quality lens that ACTUALLY resolved some 400lp/mm at f/1.4...and, assuming the results were actually for f/1.4, I think 33MP, maybe 33.5MP, is really the best you're going to get...I mean, you're WAY up there, really pushing the sensor to its absolute limits.
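Putting my guesses above into numbers (the loss percentages are my own estimates, nothing measured):

Code:
pitch_mm = 0.0049                    # D800E pixel pitch, ~4.9 microns
print(round(1 / (2 * pitch_mm)))     # ~102 lp/mm bare-mono Nyquist limit

sensor_mp = 36.3
# My rough effective-resolution losses: D800 ~20-30%, D800E ~15-20%.
for label, loss in (("D800", 0.25), ("D800E", 0.175)):
    print(label, round(sensor_mp * (1 - loss), 1), "MP effective")
# D800 ~27 MP, D800E ~30 MP...hence ~30-31MP as a realistic ceiling.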
