Messages - jrista

1321
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 21, 2014, 12:43:39 PM »
Therefore, you cannot share the readout transistors, because both are read out simultaneously. Therefore, DPAF does NOT use a shared pixel design. There are, literally, two independent sets of transistors to read out each half of the pixel when the row is activated...and twice as many columns.

In other words, the two halves are read as ... independent pixels.
I feel that we are getting somewhere.
Oh, wait! That's what I've been saying all along.

The shared-transistor part is a different story.
I've never said explicitly which transistors are shared - and that's the key here.

You seem to think you're getting somewhere...convincing me, or anyone, that you know what you're talking about. I don't really have the heart to continue the conversation, because the clueless one here is not me...and this is just becoming disheartening. Go educate yourself. Please. Once you have actually seen the actual electrical diagram for at least one of Canon's DPAF patents, AND preferably read the accompanying description of how it works, maybe then we can have a coherent discussion.

Quote
If you want your claim to be in any way believable, you need actual evidence.  Read the Canon patents on dual-pixel AF - show us where a quad pixel design is mentioned.  Show us a verifiable image of the actual photodiodes of the 70D sensor (not the dead area at the very edge of the sensor in the Chipworks teaser). 

Fair enough.

I've never presented my speculations as facts. 
But if one day I decide to do that, I'll make sure that I have very solid evidence.

The problem is that you keep "speculating" about the same thing, even though it's been proven wrong on multiple occasions. I DID link you several detailed pages that had the full patent information the last time we had this debate. It took a while to find a working link as well, but apparently you never read it. I'm not going to go digging through the internet again, spending all that time, to find something that you're likely never going to read (most of the time, these patents are translated from Japanese, as they can be notoriously hard to find in the US Patent Office's database...and therefore common search terms don't necessarily bring up what you're looking for.) I'm not repeating that effort for someone who seems more interested in literally and intentionally ignoring the FACTS he's been presented with, and "speculating" about a falsehood that he himself initiated even though it's demonstrably false, invalid, incorrect, not real, not happening, and otherwise moot.

It's fine to speculate...when you ACTUALLY DON'T KNOW anything about whatever it is you're speculating/rumormongering about. When it comes to DPAF...there is nothing to speculate about. Canon has filed patents. That's the end of the story. WE KNOW. Your notion that "it's not guaranteed to be 100% revealing" is 100% absolutely wrong...patents MUST, BY DESIGN, INTENT AND FUNCTION, be 100% revealing. Otherwise they wouldn't be granted in the first place...specificity in a patent is key. Your ignorance of that only demonstrates you don't know much about patents, nor the technologies they can potentially describe. I'm no electrical engineer with a Ph.D, but that doesn't mean people who don't have degrees in electrical engineering are incapable of understanding an electrical circuit diagram or the terminology that describes them. I do my fair share of dabbling in electronics. (Right now I'm building a Peltier-cooled DSLR cold box for my 5D III, which is involving a growing amount of electrical gadgetry and know-how to get it working the way I want.) Go educate yourself, read the actual patents, or in lieu of that, read anything you can find about DPAF that isn't on Canon's site (so you can stop worrying that it's "just Canon marketing material"...although I'd offer that Canon is very up front about their technology, and they have no reason to be misleading about DPAF)...and stop digging yourself into your hole. You're halfway to China right now.

Anyway, I think the myth of QPAF in the 70D or any other current Canon camera has been successfully debunked. The myth that DPAF or QPAF could be used to improve resolution in and of themselves, simply because they have independent readout, probably still needs to be revisited, however that's a debate for another day.

1322
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 21, 2014, 04:22:40 AM »
That is the exact OPPOSITE of a shared pixel. Shared pixels SHARE readouts. Canon's DPAF uses INDEPENDENT readouts.

Heh. You share readout circuitry between photodiodes ... to read their output independently.
What a paradox. And yet, that's exactly what the industry has been doing for a decade now (or more?).
Fascinating stuff. LOL.

You're still misunderstanding. Pixels are activated row-by-row, and all columns are read out SIMULTANEOUSLY. Every column of an activated row of pixels has the charge stored in the photodiode read, amplified, and shipped down the column line AT ONCE. Because the photodiode is split per-pixel, both halves occupy the same row. Therefore, you cannot share the readout transistors, because both are read out simultaneously. Therefore, DPAF does NOT use a shared pixel design. There are, literally, two independent sets of transistors to read out each half of the pixel when the row is activated...and twice as many columns. During an image read, additional binning transistors combine the charge of each photodiode half, that total charge is amplified, and only half of the columns are used to move the charge down to the CDS units and column outputs.

Shared pixel designs usually share DIAGONALLY (I already said this, but apparently the reason did not sink in.) By sharing diagonally, you avoid the concurrent row problem. The first row is activated, the first set of pixels that are sharing readout logic are read. The next row is activated, and this second set of pixels uses the same set of transistors to read out as their DIAGONAL counterparts in the row above. I've also read about patents that share pixels vertically, which achieves the same result, but ends up resulting in mixed color output for every set of transistors...green/blue, red/green, etc.

It isn't possible to share anything in the same row, though...because once a row is activated, everything in it has to be read out...and by nature, DPAF photodiodes for any given color filter share the same row.
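
To make the readout model above concrete, here's a toy sketch (my own illustrative model, not Canon's actual circuit design): each pixel carries two charge values, an AF read pulls both halves of the activated row out at once over separate column paths, and an image read bins them into one value per column first.

Code:
import numpy as np

# Toy model of row-by-row readout with split (dual) photodiodes.
# Names and numbers are illustrative only, not Canon's actual design.
rng = np.random.default_rng(0)
rows, cols = 4, 6
sensor = rng.poisson(lam=500, size=(rows, cols, 2)).astype(float)  # two halves per pixel

def read_row_af(sensor, row):
    """AF read: both halves of every pixel in the activated row come out
    at once, so each half needs its own readout path and column line."""
    return sensor[row, :, 0], sensor[row, :, 1]

def read_row_image(sensor, row):
    """Image read: the two halves are binned (summed) before readout,
    so only one value per pixel travels down the column line."""
    return sensor[row].sum(axis=1)

left, right = read_row_af(sensor, 0)
binned = read_row_image(sensor, 0)
assert np.allclose(left + right, binned)
print(binned)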

1323
Yeah, it's kind of a mess. It really isn't any better for other manufacturers. Some of them have even more radical generational changes in their metadata than Canon does.

It would be interesting to know if the Canon guys are as confused as the rest of the world is by now, or if they've got top-notch internal docs and samples that make everything easy to do.

For example, I know the Magic Lantern devs recently failed to figure out Canon's AWB algorithm - it's a complete mystery what all these color channel tags exactly mean and how they end up in a temperature and tint value (they need it for the dual_iso module). Who knows how different that is between camera models. First they tried to compute the AWB from the tags, which very seldom worked; now they compute it from the ground up by looking at the pixels, which often doesn't work either :-\

Um, I'm rather confused at why the ML guys couldn't figure out the temp/tint model. That's essentially the same idea as the L*a*b* color space. It's just blue/yellow and magenta/green opponent-process color axes, which mirror the opponent-process color axes human vision uses (i.e. we cannot simultaneously see blue and yellow at the same spatial location...same goes for red and green, or magenta and green in white-balance terms...this is due to the way the cones of our eyes respond to light and how their tristimulus responses are combined.) This stuff is pretty thoroughly researched theory (decades old theory)...if the ML guys want to figure it out, they should probably look into color theory, especially the work done by the CIE.
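
To make the opponent-axes point concrete, here's a generic sRGB-to-CIELAB conversion using the standard CIE formulas and the D65 white point. This is textbook color math, not ML's or Canon's actual temperature/tint code, and the sample values are made up:

Code:
import numpy as np

def srgb_linear_to_lab(rgb):
    """Convert a linear sRGB triplet (0-1) to CIE L*a*b* (D65 white).
    a* is the green(-)/magenta(+) axis, b* the blue(-)/yellow(+) axis --
    the same opponent axes that tint and temperature controls move along."""
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = M @ np.asarray(rgb, dtype=float)
    white = np.array([0.95047, 1.0, 1.08883])   # D65 reference white
    t = xyz / white
    delta = 6.0 / 29.0
    f = np.where(t > delta**3, np.cbrt(t), t / (3 * delta**2) + 4.0 / 29.0)
    L = 116.0 * f[1] - 16.0
    a = 500.0 * (f[0] - f[1])
    b = 200.0 * (f[1] - f[2])
    return L, a, b

print(srgb_linear_to_lab([0.5, 0.5, 0.5]))   # neutral grey: a* ~ 0, b* ~ 0
print(srgb_linear_to_lab([0.6, 0.5, 0.3]))   # warm cast: positive b* (toward yellow)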

Also, as far as I know, when I went searching, Canon provides fully functional source code for how they do...everything. It is not well documented, but if you're a coder, and you can get your hands on source code, you should be able to figure out what it's doing.

1324
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 21, 2014, 03:57:11 AM »
I know EXACTLY what I am talking about ...

Hmm. Doesn't look like it. Let's see.

First you say this:

Quote
The photodiode is the light-sensitive part of a pixel. A standard bayer pixel is comprised of a photodiode, at least one microlens layer (sometimes two), and a color filter, as well as the row/column activate wiring, amplifier, and readout transistors.

And then you say:

Quote
In a DPAF pixel, the photodiode has been split in half, with insulating material between the two halves. Each half has an independent readout.

So, a pixel has a photodiode and readout transistors (in addition to the other stuff).
Similarly, a DPAF pixel is a (half) photodiode ... with an independent readout.

Do you even realize that by splitting the photodiode in two, and by providing independent readouts for each half,
you have essentially created two pixels?

They aren't two pixels. It's two photodiodes in a SINGLE pixel. You're trying to reduce a pixel to just the photodiode. That's incorrect. Just because they have independent readout does not make them separate pixels. Both halves are still one color. There is no useful purpose to reading each half out independently for an image read. If you read a red DPAF pixel out as independent halves, you have two red rectangular results...but those results have no meaning independently. They are just two red values, each with half the light, and with twice the read noise in total compared to what you would get if you binned the two halves electronically at readout.
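
A quick back-of-the-envelope comparison, assuming the usual shot-noise plus read-noise model (the electron counts here are made-up illustrative numbers, not measured 70D values):

Code:
import math

# Illustrative numbers only: total signal collected under one color filter
# and per-read read noise. Not measured values for any real camera.
signal_total = 5000.0   # electrons collected by the whole (split) photodiode
read_noise = 5.0        # electrons RMS per readout

# Charge-binned read: both halves summed before a single readout.
snr_binned = signal_total / math.sqrt(signal_total + read_noise**2)

# Two independent half reads summed digitally afterwards:
# same total signal, but two read-noise contributions add in quadrature.
snr_two_reads = signal_total / math.sqrt(signal_total + 2 * read_noise**2)

print(f"binned read SNR:    {snr_binned:.2f}")
print(f"two half reads SNR: {snr_two_reads:.2f}")
# Each individual half on its own carries only signal_total / 2.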

I don't think you do!
That's why you can't grasp that a split photodiode is in fact a separate pixel - etched on the silicon wafer.

The microlenses and color filters are secondary - put on top of the already etched wafer.
You can put a microlens and a color filter on top of multiple pixels.
That's what Canon is doing - and what you call a pixel with a 'split photodiode'.
What you don't grasp, obviously, is that a split photodiode with independent readouts ... is two separate pixels.

If we reduce everything to the most simple sensor, monochrome, with nothing but light-sensitive silicon patches and their companion readout transistors...then you would be correct. A photodiode would then be equivalent to a pixel.

We're not talking about a simple monochrome sensor. We are talking about a bayer sensor. A sensor that has red, green, and blue PIXELS that are interpolated during demosaicing to produce full-color RGB output pixels. Each of these pixels in a bayer sensor is, at the VERY LEAST, comprised of a color filter layered on top of a photodiode. If you split the photodiode...you now have two photodiodes that read from below the same color filter. From an interpolation standpoint...you still have to combine those two halves...either electronically or digitally, to perform demosaicing.

If you completely ignore the fact that the RAW sensor readout in a bayer sensor needs to be demosaiced, then sure...you theoretically have the potential to produce two outputs per color filter. But what does that mean? How is that useful? Spatially, nothing has really changed. Whether you have two half reds, four half greens, and two half blues...SPATIALLY, they are still IDENTICAL to one red, two green, and one blue. SPATIALLY, you've gained nothing. It doesn't matter if you can read them out independently.

You're trying to imply that somehow, this increases your resolution. It does not. Just because two (or more) photodiodes underneath a single color filter can be read out independently does not change the fact that they are all the same color, and they all define the same spatial frequency as a single photodiode under that same filter. The only way those independent photodiodes could actually become useful is if you actually built microlenses to focus a cone of light onto each one independently. THEN you might actually increase luminance resolution, and you might actually have an increase in spatial resolution.

But Canon's patents do not describe a pixel structure wherein multiple microlenses are used to focus a cone of light onto each part of the split photodiode. On the contrary, the patents CAN'T describe such a pixel structure...as then you wouldn't actually be able to perform AF functions with it. The point of DPAF is to add PHASE DETECTION capabilities to a sensor...not increase its resolution.

Quote
The photodiode, despite being split, still exists below the color filter and microlenses. Therefore, there is still ONE pixel...with two photodiodes.

There you go. That's the part that is escaping you.

Read the above.

Quote
You misunderstand shared-pixel designs. Shared pixels do not share the photodiode. Each pixel still has its own independent photodiode.

You mean just like a split photodiode with two independent readouts???

That is the exact OPPOSITE of a shared pixel. Shared pixels SHARE readouts. Canon's DPAF uses INDEPENDENT readouts. They HAVE to use independent readouts, because they are in the same row. You cannot share pixel transistors in the same row, since columns of pixels are read out row-by-row. DPAF is essentially the opposite of a shared pixel sensor design...instead of reducing logic space and sharing logic among larger photodiodes, DPAF increases logic space and isolates logic among smaller photodiodes.

Quote
The photodiodes ARE rectangular! That's EXACTLY what they are! That's exactly how they are described in Canon's patents on the technology!  ::)

O-o-kay. Care to share a link to at least one of these patents? That should settle it, right?
You have the link handy, don't you?

Look, I suggest that you drop the 'I'm the authority' attitude - because you are not an authority.
In fact, it's very clear that you don't even come from a technical background.
So, drop the attitude and let's have a friendly discussion.
That's the reason why we are all here, no? What's with all the bullying?

I have the PDFs for those patents saved on my hard drive. I'm not going to do the legwork for you AGAIN to find the source for those downloads. I shared several links to DPAF patents the LAST time we had this debate. You clearly ignored them. If you want to educate yourself, educate yourself. You can start here (they have the patent number...go dig through the bowels of the internet on your own time to find it):

http://thenewcamera.com/canon-patent-more-sensetive-dual-pixel-cmos-af/

As for the rest, you're making a LOT of assumptions, and piling assumption on top of assumption, then making bold claims about how you've discovered Canon has QPAF technology, all based on nothing but assumption and a misunderstanding of a tiny little image from a single page of the ChipWorks site. If Canon had QPAF technology, they would NOT be keeping it a secret. That would be insane for them, what with the perception in the community at large being that Canon is behind on sensor tech. QPAF would be huge news for Canon. I've spent years on this forum debunking hare-brained theories like that, because all it does is mislead people, give people false hopes, and otherwise confuse the issue about what any given technological advancement REALLY offers, what it REALLY allows, and how it is REALISTICALLY likely to evolve into the future. You're radically confusing the issue about DPAF. It's a very simple technology, designed for a SINGLE purpose, and it serves THAT purpose extremely well. You're trying to inflate it into something it isn't even remotely intended to be. Sorry, but I've never liked it when people make wild uninformed assumptions then boldly claim they know what they're talking about. It's just something that irks me.

1325
Landscape / Re: Deep Sky Astrophotography
« on: June 21, 2014, 02:37:57 AM »
Thanks, guys. :)

Traingineer: I have no plans to mess with the 5D III. I use it for my bird, wildlife, and landscape photography, so I don't want to mess with its ability to produce high quality, accurate color. I plan to buy a cooled mono astro CCD soon enough, with a full set of LRGB and narrow band filters, which will trounce anything a modified 5D III, 6D, or any other modded DSLR could do. I expect, in the long run, to have a few cooled astro CCD cams. Different sensor sizes and types are useful for different things: some have huge sensors with lower sensitivity, great for ultra wide field stuff; others have small sensors with insanely high sensitivity (77-90% Q.E.), great for deep narrow band imaging.

Bean: I use my Canon EF 600mm f/4 L II lens as a telescope right now.

Here is another image from Cygnus. Just took the subs for this last night, and just finished nearly six hours worth of integration/stacking and processing. This is the Sadr region of Cygnus, the close neighboring region to the North America/Pelican nebulas I shared before.



There are multiple objects in this region. The Gamma Cygni region, comprised of IC1318 A, B, and C, also called the Butterfly Nebula by some, is in the lower middle. This one field also contains two open clusters, M29 and NGC6910. The bright star is Sadr, one of the primary stars that make up the constellation Cygnus itself. A big dust lane (unnamed, as far as I can tell) stretches through the center. A small double star near the lower right of that dust lane is also a reflection nebula...light from the blue star of the pair (barely discernible here) reflects off the dark dust. Another reflection nebula can be found in the upper left region just on the border of one of the darker areas (again too small to really be seen here).

As with most of my images, I was only able to gather about 1/3rd of the total subs I needed to get the best quality. All of my images have around 35-50 individual frames (subs) integrated. I need at least 100 subs to reduce noise to an acceptable level (100 subs averaged together reduces noise by SQRT(100), or 10x), and these days, with summer nighttime temps in the 70s, I probably need to reduce my noise levels by twice that. Problem is, to reduce noise by 20x, I would need 400 subs!! :P Hope to get some time this weekend to start building my cold box. I received my copper plate a couple days ago...so I can solder it together and put some insulation around it. Peltiers and voltage regulator are still on the way, and I need to see if I can pull apart this indoor/outdoor thermometer to get myself a temperature sensor and readout screen that I can embed into the box. I'm hoping to be able to cool my camera to around -10°C. Compared to the 30-35°C it runs at right now, I should be able to reduce dark current noise by 6-7x.
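
For reference, the sub-count arithmetic is just the square-root law:

Code:
import math

def noise_reduction(subs):
    """Averaging N equal-quality subs reduces random noise by sqrt(N)."""
    return math.sqrt(subs)

def subs_needed(target_reduction):
    """Subs required to hit a target noise-reduction factor."""
    return math.ceil(target_reduction**2)

print(noise_reduction(100))   # 10.0 -> 100 subs give a 10x reduction
print(subs_needed(20))        # 400  -> a 20x reduction needs 400 subs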

1326
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 21, 2014, 02:13:18 AM »
No, that is fundamentally incorrect. You start with a 20mp sensor, which has 40mp PHOTODIODES.

Jrista, you are just assuming that Canon's dual-pixel tech is in fact a dual-photodiode tech.
My assumption is that it's already a quad-photodiode tech - and it's equally valid, as neither
one of us has info on the actual implementation.

Sorry, but I am NOT assuming. I've actually read Canon's own patents. Those patents describe a system where the photodiode for each pixel has been divided in half. This stuff isn't a mystery. Patents are ESSENTIAL for the protection of intellectual property. Canon has been filing patents for DPAF for quite some time, a couple of years at least now, with the most recent ones being near the end of last year.

My assertions are based on concrete fact as described by the DPAF engineers at Canon themselves. Your assumptions are just that, assumptions based on an extremely TINY image posted on the ChipWorks page of the BACKSIDE of some sensor, an image which you have gravely misinterpreted, and an image we all can only assume is even of a Canon sensor, let alone one with DPAF technology (although it certainly is not of a Canon sensor with QPAF technology...since such a sensor doesn't exist yet.)


In general, before making any claims for photodiodes and pixels, consider the following:
A 'classic' pixel design has a photodiode plus three transistors (you can read about it on Wikipedia):
  • a reset transistor for resetting the photodiode voltage
  • a source-follower transistor for signal amplification
  • a row select transistor

So, one definition of a pixel is a photodiode with three transistors.

Sure. An extremely basic kind of "pixel" that you might find in an entry level course on image sensor design. Modern sensors often have a lot more logic than that per pixel. That logic usually involves some level of noise reduction, potentially charge bucketing for global shutter sensors, anti-blooming gates and shift registers in CCDs, extra logic to allow the selection of which photodiode to read in shared-pixel designs (which most smaller-pixel sensor designs are these days), etc.

The thing is, to improve fill factor and for other design considerations, modern sensors are using transistor sharing.
That is, a single set of the 'classic' transistors is shared between multiple photodiodes.

Transistor sharing is widely used in small sensors.
In the case of these sensors, though, each photodiode has its own microlens.
Thus, the photodiode is the pixel in these designs.

The photodiode is the light-sensitive part of a pixel. A standard bayer pixel is comprised of a photodiode, at least one microlens layer (sometimes two), and a color filter, as well as the row/column activate wiring, amplifier, and readout transistors.

In a DPAF pixel, the photodiode has been split in half, with insulating material between the two halves. Each half has an independent readout. The photodiode, despite being split, still exists below the color filter and microlenses. Therefore, there is still ONE pixel...with two photodiodes. Canon did not increase the pixel count...they increased the photodiode count.

In short, depending on the implementation, a photodiode and a pixel could mean the same thing.

Your interpretation is wrong. ;P Sorry. Go read the darn patents, and stop making assumptions.

Canon's 'dual-pixel' tech is assumed to be based on a shared-transistor design.
That is, it is a multi-photodiode design.
But since in a shared-transistor design photodiodes are effectively equivalent to pixels (as explained),
Canon's tech could be called multi-pixel design as well.

But photodiodes and pixels are not effectively equivalent. A pixel is more complex than a photodiode. A photodiode is simply a PART of a pixel. You're conflating the two for the sake of your argument, but that does not mean your conflation is valid.

So, you can stop correcting people who use dual/quad-pixel terminology, as these could in fact be used interchangeably.
The line between a pixel and a photodiode is blurred in shared-pixel designs.
And the fact that the two photodiodes are read independently for auto-focus further
indicates that these could very well be independent pixels - if they didn't share the
same microlens and color filter.

You misunderstand shared-pixel designs. Shared pixels do not share the photodiode. Each pixel still has its own independent photodiode. What's shared in a shared-pixel design is the readout logic...transistors. Usually, the sharing is diagonal, although some prototypical designs share directly neighboring pixels. Green pixels usually share their readout logic diagonally. Those two green pixels, however, each still have their OWN photodiode. The purpose of a shared pixel design is not to share the light-sensitive charge collector...that would be useless, since it would share each pixel's charge in one bucket, meaning you couldn't actually read them out independently.

The purpose of a shared pixel design is to save die space FOR the photodiode by reusing transistors and wiring for more than one pixel. The use of shared transistors to activate, amplify, and read the pixel has nothing to do with blurring the line between pixel and photodiode. The pixel is a vertical stack of layers of silicon materials. The photodiode is (usually) at the bottom of a physical well...it's the bit of silicon that is actually sensitive to light and converts some ratio of incident photons to free electrons (charge). Above that is a layer of translucent silicon material, usually silicon dioxide. Above that is often a microlens, and above that is a color filter array. There are sometimes buffer materials in between these layers, on top of which you finally have the primary microlens. THAT is a "pixel". The photodiode is just one part of the whole pixel. If you split the photodiode underneath all those other layers...you still have just one pixel. You have a pixel that is now capable of detecting phase, but it's still just one pixel, not two pixels. Regardless of what kind of readout logic it has...a pixel is a pixel, independent and atomic, and a photodiode is just a part of a pixel.

Also, your claim that there are exactly TWO PHOTODIODES (and that's it!) is not based on fact.
We don't know for sure if Canon's design is a dual-pixel design (your assumption) or a quad-pixel design
(my assumption).

You're assuming that I am assuming. Your assumption is, once again, wrong. You are also assuming that "we" don't know anything "for sure" about Canon's sensor designs. Sorry, but again, your assumption there is WRONG. Canon has filed patents for all of their DPAF designs. Those patents are the basis for their technology...the technology that actually exists in the 70D, for example. I am not assuming. My assertions are based on actual fact as clearly and definitively defined by Canon engineers themselves.

You can go look up these patents for yourself. They aren't hard to find. Many of them have been posted right here on CR in the past. This stuff isn't some mysterious, mystical, magical sensor technology that Canon is keeping obfuscated. Obfuscation and secrecy are the worst form of protection for technology. By filing and receiving patents, Canon LEGALLY protects their work from theft by other manufacturers...they have no reason to hide or obfuscate anything.

Canon's marketing is selling it as a 'dual-pixel' tech likely because it's easier this way to communicate
the concept to the general public.
But we don't know for a fact what the actual implementation is.

We DO know what the actual implementation is. Not only that, we know EXACTLY what it is. See my prior comment.

So, your TWO PHOTODIODES claim is based on marketing materials, really.
If I were you, I wouldn't put too much weight into these  8).

Again, wild assumption, and a wrong one. You assume WAY too much. You might want to verify your facts first, before putting yourself out like that. I have never based anything I've said about Canon sensor technology on marketing materials. I read patents, of which there are many thousands filed by Canon every year, and many thousands more filed by all the other entities involved in sensor research and design. I know EXACTLY what I am talking about, and it's based on actual sensor designs that have either been manufactured for commercial use, or have been prototyped and thoroughly demonstrated at one of the numerous ICS conferences around the world every year.

The only person who puts weight into something they shouldn't is you...putting a lot of weight into the validity of your assumptions.

My assumption for a quad-pixel design is based on simple geometry.
If there are just two photodiodes per pixel, these photodiodes need to be rectangular.
This would be uncommon - if not even a first in the industry.
But with a quad-pixel design, the photodiodes are square just like in any other sensor.

Considering the potential future advantages of a quad-pixel design (e.g. for a non-Bayer sensor),
I'd speculate that Canon would have invested in a quad-pixel design from the start - rather than
designing rectangular photodiodes that later would need to be made square anyway.

Just a speculation, of course - but based on some informed assumptions.

The photodiodes ARE rectangular! That's EXACTLY what they are! That's exactly how they are described in Canon's patents on the technology!  ::) It's not a first in the industry...for decades, there have been sensors with non-square photodiodes, even non-square pixels. There have been hexagonal pixels (Fuji first released sensors with hexagonally shaped pixels with extra small "white" pixels filling in the diagonal spaces between them many years ago), triangular pixels (Sony has a prototype 50mp sensor with triangular pixels), even pixels with non-uniform sizes and layouts (some sensor designs, usually from Fuji, have had large rectangular white pixels, along with a non-standard layout of smaller rectangular red, green, and blue pixels). I currently use a CCD camera for guiding my astrophotography that uses rectangular pixels, due to the use of an anti-bloom gate. Again...you're making some wild assumptions that have absolutely no basis in fact. Your assumptions are FAR from informed, as well. I don't know where you think you're "informing" yourself, but you really need to go right to the source...patents. You seem to think that all this technology is kept secret and obfuscated and hidden away within the bowels of "Canon the over-protective corporation". That is, once again, an assumption. Canon has decades of sensor technology filed legally as patents in countries around the world. Those patents are fully available, in complete detail, with abstracts, technical diagrams, and full-blown conceptual and functional dissection and breakdown, for review by anyone who wishes to spend the time looking them up. If patents weren't freely available, then they would be useless. Competitors have to be able to investigate what technology their competitors have already invented and patented, so they don't try inventing the same exact thing to patent themselves...that would be a patent violation. Potential licensees of patented technology need to know how the technology is implemented, so they may implement it themselves in their own products, with the added requirement of a royalty fee.

This technology is WELL KNOWN, because it has to be. "We" know EXACTLY how DPAF is designed...and it is not quad-pixel. It's, quite literally, dual-photodiode. There are now multiple patents that PROVE that FACT.

1327


You are personally concerned about total noise levels, and specifically total noise levels in a normalized context. That is COMPLETELY VALID! I'm not debating that. I don't think ANYONE has ever debated that. It's just a different context. Evaluating the total amount of noise in a downsampled image is different than evaluating the editing latitude of a RAW file.

Then how come you started up with all your "DxO is misleading BS with their Print screens, you can't use that nonsense, it has no connection and no use whatsoever outside of the webpage," etc., etc.?

Print DR only has meaning when you are comparing cameras within the context of DXO. That is the sole valid use of Print DR. Because that is the only place where DXO's specific algorithm for determining Print DR is applied, and where an image size for an 8x12" 300ppi print is the target of that scaling algorithm. That 8x12" 300ppi image, scaled with DXO's algorithm, is the ONLY way to get the dynamic range numbers DXO spits out for Print DR. Outside of that context...Print DR is meaningless.

IF you are on DXO's site comparing cameras, then it's valid. But to refer to Print DR as THE dynamic range that anyone, anywhere, at any time has when they use a camera - regardless of their processing, output image size, or a myriad of other factors that go into making an image - is WRONG. That is NOT the dynamic range of the camera. You cannot know how people are going to process, or what dimensions they may scale to. The only thing you can really know is that people are either shooting JPEG, in which case every DR number that DXO spits out is irrelevant...or they are editing RAW images in ACR, LR, or maybe Aperture/Darktable/RawTherapee. EDITING RAW. Print DR has no meaning OUTSIDE of the context of DXO. Such as, oh, I dunno...here on CR?!? Print DR only has meaning when you check off three cameras for direct comparison on DXO's web site.
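
For what it's worth, the way that normalization is usually described from the outside (DXO hasn't published the exact algorithm, so treat this as the commonly cited approximation, not their actual code): downsampling N pixels to the ~8MP reference averages away noise, adding roughly log2(sqrt(N/8MP)) stops over the per-pixel Screen figure.

Code:
import math

REFERENCE_MP = 8.0   # an 8x12" 300ppi print is roughly an 8MP image

def print_dr_estimate(screen_dr_stops, sensor_mp):
    """Commonly cited outside approximation of the "Print" normalization:
    downsampling averages noise, gaining ~log2(sqrt(N / Nref)) stops.
    Not DXO's published algorithm; treat the result as an estimate."""
    return screen_dr_stops + math.log2(math.sqrt(sensor_mp / REFERENCE_MP))

# Hypothetical 22MP camera measured at 11.0 stops per-pixel ("Screen"):
print(round(print_dr_estimate(11.0, 22.0), 2))   # ~11.73 stops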

So, here on CR...where the discussions about DR always revolve around lifting shadows with RAW (you're one of maybe two people who ever bring up noise frequencies specifically, in which case the context is different)...here on CR (and pretty much any other site, like DPR) where DR is interpreted to mean shadow lifting latitude, I refer to Screen DR. It's contextually valid, and it has a direct application to what people actually do with the actual images they actually get out of actual cameras in actual life.

1328
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 20, 2014, 08:54:00 PM »
Yes and no?

In the theoretical perfect transition, with no AA filter you get a "black-white" transition, and then you have a "black-grey-grey-white" transition with the AA filter.
Ideally with the AA filter the line would land directly in the middle of the pixels, which would actually produce exactly the same result as having no AA filter, a "white-grey-black" transition. But that is equally improbable as the perfect pixel transition, you're practically going to end up with a transition going from white, to two pixels of varying shades of grey, to black. Whereas without your worst case scenario is one line of grey pixels.

So the best case scenario with an AA filter is somewhere between a 50%-100% increase in blur, whereas without it you go from a 0% to a 50% increase in blur. On a theoretical perfect line.
Yes, it's nitpicking, but that's what makes the forums so much fun.

You're talking about the scientifically ideal situation. Those only exist in textbooks. They don't exist in reality, not with the countless other factors that go into resolving an image accounted for. That would be like saying that you could create the ideal frictionless surface often referred to in physics textbooks. You can greatly reduce the coefficient of friction, but you cannot eliminate it. You cannot actually achieve the perfectly ideal.

So there is the theory of resolving line pairs, and then there is the 100% perfectly ideal exemplar. Yes, in the ideal exemplar case, theoretically you could line up black and white lines perfectly on top of rows of pixels, and they would end up perfectly sharp. Perfect is unattainable.

When you account for other factors, such as the statistical improbability that you would EVER be able to line up alternating white and black lines perfectly on the sensor, the difference isn't blur...it's aliasing. You either end up with aliased results, which means you have "nonsense" information...or VERY SLIGHTLY blurred results for high frequency oscillations. It really isn't even blurring, it's frequency stretching or spreading, which effectively stretches high frequencies and makes them a slightly lower frequency, which actually represents the real information much more accurately than the nonsense. That is not a reduction in resolution, it's the elimination of useless data. Anti-aliasing is not designed to destroy information...it is actually designed to PRESERVE information, by throwing away what you cannot resolve accurately anyway.
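
Here's a little 1D toy to illustrate that point (a numerical sketch, not an optical simulation): a line pattern just above the pixel Nyquist frequency gets recorded as a pattern at a lower, false frequency, and a small pre-blur standing in for the OLPF knocks that false detail way down.

Code:
import numpy as np

# 1D toy: a striped pattern slightly above the pixel Nyquist frequency.
pixel_pitch = 1.0                      # arbitrary units
n_pixels = 256
oversample = 32                        # fine grid standing in for the optical image
x = np.arange(n_pixels * oversample) * (pixel_pitch / oversample)

f_pattern = 0.6 / pixel_pitch          # Nyquist is 0.5 cy/px, so this will alias
optical = 0.5 + 0.5 * np.sin(2 * np.pi * f_pattern * x)

def sample(img, blur_pixels=0.0):
    """Box-average over each pixel; optional pre-blur as a crude OLPF stand-in."""
    if blur_pixels > 0:
        k = int(blur_pixels * oversample)
        img = np.convolve(img, np.ones(k) / k, mode="same")
    return img.reshape(n_pixels, oversample).mean(axis=1)

recorded = sample(optical)
spectrum = np.abs(np.fft.rfft(recorded - recorded.mean()))
freqs = np.fft.rfftfreq(n_pixels, d=pixel_pitch)

print("scene frequency:   ", f_pattern)                  # 0.6 cy/px in the scene
print("recorded frequency:", freqs[np.argmax(spectrum)]) # ~0.4 cy/px: false detail
print("false-pattern contrast, no pre-blur: ", recorded.std().round(3))
print("false-pattern contrast, 2px pre-blur:", sample(optical, 2.0).std().round(3))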

The removal of an AA filter does not mean you're producing more accurate images. You're producing less accurate images that have higher acutance. That's it. The thing about acutance is...it's easy to replicate, to the small degree necessary at high frequencies...with software. A simple unsharp mask will improve the acutance of an image taken with a camera that has an AA filter. The only difference between the two images at that point is that the anti-aliased image is accurate AND sharp, whereas the aliased image is sharp but not accurate. There is really no benefit to removal of the AA filter unless you're imaging highly random information. There are very few things like that. Landscapes come to mind as one of the primary, and very few, situations where removal of an AA filter could potentially be useful. I wouldn't even say macro photography would be better without an AA filter...when you magnify small subjects so significantly, there tends to be a LOT of high frequency data, and you would be surprised how often there are repeating patterns at the microscopic scale. Even without repeating patterns, certain natural features, such as the cells of an insect eye, end up looking more jagged and harsh than they do when you use a camera with an AA filter.

Sharpness isn't the supreme indicator of IQ. Too much sharpness is often the hallmark of significant overprocessing...a slight amount of softening of very high spatial frequencies is usually the hallmark of a skilled processor. Some of THE BEST landscape photography I admire the most has a very soft aesthetic, with a specific amount of slightly lower contrast in the high frequencies. These kinds of landscapes are the ones that really stand out from the throngs of landscape photos as being exceptional.

1329
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 20, 2014, 08:37:50 PM »
As for the double layer of microlenses...sure, you could read a full RGBG 2x2 pixel "quad" and have "full color resolution". Problem is, that LITERALLY halves your luminance spatial resolution...

Thus you start with an 80MP sensor to get a nice 20MP image.

No, that is fundamentally incorrect. You start with a 20mp sensor, which has 40mp PHOTODIODES. The two are not the same. Pixels have photodiodes, but photodiodes are not pixels. Pixels are far more complex than photodiodes. DPAF simply splits the single photodiode for each pixel, and adds activate wiring for both. That's it. It is not the same as increasing the megapixel count of the sensor.

And, once again...I have to point out. There is no such thing as QPAF. The notion that Canon has QPAF is the result of someone seeing something they did not understand. Canon does not have QPAF. Their additional post-DPAF patents do not indicate they have QPAF technology yet...however there have been improvements to DPAF.

BTW, what you're describing is called super-pixel debayering. That, too, is a common option in astrophotography image stacking...instead of basic or AHD debayering, you usually have the option to either super-pixel debayer, or "drizzle" (which, if you have enough subs...such as a couple hundred...is a means of achieving superresolution, and can increase your output image resolution by two to three fold.) You don't even need another microlens layer to do super-pixel debayering...you could use a tool like Iris or maybe even DarkTable/RawTherapee to do it on any image you want.

Finally, even if you do super-pixel debayering, you're not ever going to have "hard edges". Statistically speaking, the chance of a white/black line pattern you wish to photograph perfectly lining up with your pixels, regardless of how large or small they are, is so excessively remote that it is practically impossible. Not in any real-world situation. You might be able to build some kind of contraption and AI software to eventually achieve it, but that is well beyond the realm of practicality. If you remove the AA filter and use super-pixel debayering, you might have larger pixels with full color fidelity...but you're going to have a massive amount of aliasing. Those white and black lines would have some nasty stair-stepped edges; they would just look atrocious.

Wow, it looks like superpixel debayering (http://pixinsight.com/doc/tools/Debayer/Debayer.html) is exactly what I'm after. Make a 128MP sensor and use superpixel debayering and you'll have a nice compact, super accurate 32MP image.
Again, really, I'm fine with just shooting on a 128MP sensor and dealing with 100MB+ RAW files, the trick is to get a similar result in a format that's going to be acceptable to the majority of photographers who refuse to deal with large file sizes.

As long as your final image is around 32MP I don't think people are going to notice the stair stepping, unless you're standing right next to something like a 40" high quality print.

Well, someday we may have 128mp sensors...but that is REALLY a LONG way off. DPAF technology, or any derivation thereof, isn't going to make that happen any sooner.

1330
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 20, 2014, 06:18:44 AM »
Right.
What would be really cool to see is some sort of hardware level binning process that maintains the integrity of the RAW file.

Half the reason I'm so anxious for super high resolution cameras is that I haven't been terribly impressed with the image quality off my 5D2. That nasty AA filter (which I'm pretty sure is especially bad on the 5D2) effectively cuts resolution in half. When I first saw my pictures on a decent 4MP monitor I was amazed at how little detail loss there was vs. looking at the image zoomed to 100%. My bet is that a good 4K (8MP) monitor is going to display your images with just as much detail as a high quality print... Because the detail actually isn't there in the first place.

One option is just quadrupling resolution and getting rid of the AA filter (which I'm actually fine with), but if they could bin the full per-pixel RGB signal on the sensor it should effectively deal with moire, and we get to keep our current file size, and it should produce an actual 20MP image instead of the blurred out fake we currently end up with.

The last thing I really want to see is the integration of clear microlenses. Even the heavily faded green pixels that we have right now still block a lot of light. Given how advanced interpolation is I doubt that eliminating the colour value for one of the pixels would have a significant impact on image quality.

Sorry, but that (bolded) is such a ludicrous, laughable comment, I'm just flabbergasted. An AA filter DOES NOT cut resolution "in half". That is blowing things SO FAR out of proportion it may be one of the most ludicrous things I've read on these forums. OLPFs, optical low pass filters, are designed to affect high frequencies only, and only around the Nyquist limit at that. You lose a TINY amount of resolution...but it doesn't matter, because the "resolution" you're losing just contains nonsense anyway. OLPFs blur very high frequency data that nearly or exactly matches the spatial frequency of the sensor's pixels just enough that the information doesn't alias. That's it. Aliased information is a REAL loss of information. Technically speaking, OLPFs PRESERVE information...they save information that can be saved, and discard information that cannot be correctly interpreted by the sensor anyway. On top of that, a very light application of unsharp masking can effectively reverse the blurring, and improve the resolution of that high frequency data, without actually bringing back all the nonsense.
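
For example, a basic unsharp mask is just adding back a fraction of the high-frequency difference; the radius and amount below are arbitrary example values, not a recommended recipe.

Code:
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, radius=1.0, amount=0.5):
    """Classic unsharp mask: add back a fraction of the high-frequency
    detail (image minus its blurred copy). Radius/amount are taste values."""
    blurred = gaussian_filter(image, sigma=radius)
    return np.clip(image + amount * (image - blurred), 0.0, 1.0)

# Toy usage on random "image" data in [0, 1]:
img = np.random.default_rng(1).random((64, 64))
sharpened = unsharp_mask(img, radius=1.0, amount=0.5)
print(img.std(), sharpened.std())   # sharpening raises local contrast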

Quadrupling resolution and removing the AA filter is only an option if your lenses cannot resolve that much detail. With the resolving power of Canon's current lens lineup at faster apertures, I'm not so sure that cutting pixels into quarters is actually enough to avoid any kind of aliasing. At narrower apertures, like f/8, diffraction already blurs information enough that it can't alias, but that's a really narrow aperture for a lot of work, not everyone uses it. There are very few applications where removal of an AA filter will not cause aliasing of some kind, and pretty much anything artificial is going to have repeating patterns that, depending on distance to camera, can create interference patterns (moire).
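
For a sense of scale on the diffraction point, the usual Airy-disk arithmetic (λ ≈ 550nm green light; the pixel pitch here is just an example value, not a specific camera's spec):

Code:
import math

wavelength_um = 0.55      # ~green light
f_number = 8.0
pixel_pitch_um = 4.1      # example pitch, roughly 20MP APS-C territory

airy_diameter_um = 2.44 * wavelength_um * f_number   # first-zero diameter of the Airy disk
print(round(airy_diameter_um, 1))                     # ~10.7 um at f/8
print(round(airy_diameter_um / pixel_pitch_um, 1))    # spans ~2-3 pixels of this pitch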

This whole "Remove the AA filter" craze is just that...a craze. It's a "thing" Nikon started doing to be different, to get some "wows", and maybe bring in some more customers. Ironically, given that removal of an AA filter is really NOT a good thing...it's worked. Nikon's marketing tactics have sucked in a whole lot of gullibles who don't really know what an AA filter does or how it works, or how to work WITH it, and now we have a whole army of "photographers" who want AA filters removed from all cameras. Personally, I REALLY, TRULY, HONESTLY DO NOT want Canon to remove the AA filter. It is NECESSARY, it PRESERVES preservable data and eliminates useless data, and I LIKE THAT.

And anything that is lost? It's MINIMAL. In the grand scheme of how much resolution you have...you maybe lose a percent or two of really high frequency information...but you really don't have that information anyway because it is similar in frequency to noise...so again, moot.

Given that the filter makes it physically impossible to record a repeating pattern of stripes at the same frequency as the pixel grid, so that you cannot have a perfect transition from black pixels to white, I'd say that is cutting resolution in half. That is, compared to some magical thing that accurately reads the full RGB spectrum on each pixel.

You are right about the necessity of the AA filter though.
I was thinking that if the interpolation algorithm only sampled each pixel within a specific cluster of four pixels and not every pixel around it that it would solve the moire problem. Really that would just give you different colour banding instead.
Now, if we added a second layer of microlenses on top of the first to direct light only at individual groups of pixels, that would guarantee the full RGB read on each cluster, and allow hard transitions...

On second thought I guess that sounds a little excessive just to gain the ability to have large pixels with a hard transition instead of twice as many pixels with a row of grey pixels that's half as big. You can bin the smaller pixels with a normal AA filter just the same, we just need a way of doing that without destroying the flexibility of RAW (otherwise I assume people would have been using compressed formats a long time ago).

I think you're conflating the CFA with the AA filter. The CFA, color filter array, is what produces the RGBG pixel pattern. That is ENTIRELY different from the AA filter, which does optical blurring only at high spatial frequencies near the spatial frequency of the sensor pixels.

The CFA doesn't cut resolution in half either. It has a minor impact on luminance resolution; it's mostly color resolution that is affected by the CFA. But since we pick up detail primarily due to luminance, a bayer sensor doesn't lose anywhere remotely close to as much resolution as Sigma would have you believe with their Foveon marketing, for example. The luminance resolution, the detail resolution, of a bayer still trounces anything else. It's your color fidelity and color resolution that suffer. We're not as sensitive to color spatial resolution as we are to luminance, though, especially when the luminance is combined. (It's actually a pretty standard practice in astrophotography to generate an artificial luminance channel, blur the RGB data a bit (which practically eliminates noise and actually improves color fidelity a bit by reducing color noise), process the luminance channel for detail, then combine the L with the blurred RGB, as sketched below. The end result is a highly detailed image that has great color fidelity.)
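
Here's a minimal sketch of that L + blurred-RGB trick, assuming simple Rec.709 luminance weights and a ratio-based recombination; real astro tools do this far more carefully.

Code:
import numpy as np
from scipy.ndimage import gaussian_filter

def lrgb_combine(rgb, chroma_blur=2.0, eps=1e-6):
    """Sketch of the L + blurred-RGB workflow: a synthetic luminance keeps
    the detail, a blurred copy of the color data supplies low-noise chroma,
    and the blurred RGB is rescaled to carry the detailed luminance."""
    weights = np.array([0.2126, 0.7152, 0.0722])       # Rec.709 luminance, an assumption
    lum_detail = rgb @ weights                          # synthetic L from the original data
    rgb_smooth = gaussian_filter(rgb, sigma=(chroma_blur, chroma_blur, 0))
    lum_smooth = rgb_smooth @ weights
    scale = lum_detail / (lum_smooth + eps)
    return np.clip(rgb_smooth * scale[..., None], 0.0, 1.0)

stack = np.random.default_rng(3).random((64, 64, 3))    # stand-in for a stacked image
out = lrgb_combine(stack)
print(out.shape)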

As for the double layer of microlenses...sure, you could read a full RGBG 2x2 pixel "quad" and have "full color resolution". Problem is, that LITERALLY halves your luminance spatial resolution...so you actually don't gain squat from a resolution standpoint by doing that. Doing that, you would lose significantly more resolution than either the CFA or the AA filter cost you...both of which are trivial in comparison to doing what you're asking for. BTW, what you're describing is called super-pixel debayering (see the sketch below). That, too, is a common option in astrophotography image stacking...instead of basic or AHD debayering, you usually have the option to either super-pixel debayer, or "drizzle" (which, if you have enough subs...such as a couple hundred...is a means of achieving superresolution, and can increase your output image resolution by two to three fold.) You don't even need another microlens layer to do super-pixel debayering...you could use a tool like Iris or maybe even DarkTable/RawTherapee to do it on any image you want.
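
A minimal sketch of super-pixel debayering, assuming an RGGB layout (real tools handle the other CFA orders too):

Code:
import numpy as np

def superpixel_debayer(mosaic):
    """Super-pixel debayer for an RGGB mosaic: each 2x2 block (R, G, G, B)
    becomes one RGB output pixel, averaging the two greens.
    Output is half the width and height of the input mosaic."""
    r  = mosaic[0::2, 0::2]
    g1 = mosaic[0::2, 1::2]
    g2 = mosaic[1::2, 0::2]
    b  = mosaic[1::2, 1::2]
    return np.dstack([r, (g1 + g2) / 2.0, b])

mosaic = np.random.default_rng(2).random((8, 8))   # stand-in for raw mosaic data
rgb = superpixel_debayer(mosaic)
print(rgb.shape)   # (4, 4, 3): half the linear resolution, full color per site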

Finally, even if you do super-pixel debayering, you're not ever going to have "hard edges". Statistically speaking, the chance of a white/black line pattern you wish to photograph perfectly lining up with your pixels, regardless of how large or small they are, is so excessively remote that it is practically impossible. Not in any real-world situation. You might be able to build some kind of contraption and AI software to eventually achieve it, but that is well beyond the realm of practicality. If you remove the AA filter and use super-pixel debayering, you might have larger pixels with full color fidelity...but you're going to have a massive amount of aliasing. Those white and black lines would have some nasty stair-stepped edges; they would just look atrocious.

1331
Landscape / Re: jrista et al, Why Astrophotography?
« on: June 20, 2014, 06:07:39 AM »
Because it's really hard to do well, I like a challenge.

Aye! Astrophotography is the most challenging photography I do. It takes so much time, with careful planning, careful management of gear and tracking, and hours of processing, to create one image. In comparison, my bird and wildlife photography is a cakewalk.

1332
Landscape / Re: Deep Sky Astrophotography
« on: June 20, 2014, 06:05:51 AM »
Bradbury and emag, great images! I love the veil, wonderful detail there.

1333
Landscape / Re: Deep Sky Astrophotography
« on: June 20, 2014, 06:04:08 AM »
More Cygnus. I really love this region of sky, it's amazing. Tonight I've been getting image time on IC1318, IC1318B which are large nebulous regions, and NGC6910 which is a nice little open cluster nearby. The full frame of the 5D III is JUST AMAZING. It's more than twice as big as the 7D frame, and the images, once processed, are pretty stunning.

This is my first pass at processing a single-frame image of North America and Pelican nebulas in Cygnus, near the top star. Not entirely satisfied with it...I'd like to stretch it more, bring out some more detail, but I need to get a better handle on noise and color correction (a lot of the color correction routines end up making things noisier as they end up nuking most of the green color channel.)



Usually, getting this entire region requires a 4-panel mosaic with the smallish CCD sensors you can usually find for a reasonable price. Only those with the big money can get comparable full frame CCD cameras...which usually cost about $10,000 or more. I've got a cold box in the works for the 5D III, which should help get my dark current levels under control, and help me get better, deeper, less noisy subs (although still not as good as a cooled CCD...my cold box will probably only get me down to around -10°C, whereas a good CCD can get you down to -25°C). With dark current doubling/halving every 5.8°C, a CCD is going to be about 2.6x less noisy (and even better than that, really, as a mono CCD has a higher fill factor, no sparse color spacing, and CCDs designed for astro tend to have lower dark current to start with...)
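
The doubling relation works out like this (using the 5.8°C figure quoted above, which varies from sensor to sensor, and remembering that dark-current shot noise scales as the square root of the dark current):

Code:
import math

DOUBLING_TEMP_C = 5.8   # figure quoted above; varies from sensor to sensor

def dark_current_ratio(t_warm_c, t_cold_c):
    """How much more dark current the warmer sensor accumulates."""
    return 2 ** ((t_warm_c - t_cold_c) / DOUBLING_TEMP_C)

ratio = dark_current_ratio(-10.0, -25.0)   # cold box vs. a cooled CCD
print(round(ratio, 1))                     # ~6x more dark current at -10C than at -25C
print(round(math.sqrt(ratio), 1))          # ~2.5x more dark-current shot noise, in the
                                           # ballpark of the figure above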

1334
EOS Bodies / Re: DSLR vs Mirrorless :: Evolution of cameras
« on: June 20, 2014, 03:57:48 AM »
On the first day I took my new 5D III out to photograph birds, another bird photographer came up (discourteously, I might add...I'd just spent 35 minutes getting VERY close (like, less than 15 feet to the closest one...in Colorado, where birds are jittery, that is REALLY close) to some 6 or 7 Night Herons...and he stomped right up and scared the whole lot off, along with a couple egrets and a great blue...and I think a couple ducks). Well, he persisted, stomped right up and sat down next to me. Turned out there was one younger BCNH left, and he hopped a couple trees and ended up right in front of us, about 45 feet out.

This other photographer had two cameras, both mirrorless, one a small Panasonic Lumix and one a Fuji. He chitchatted about how much he loved 'em, how great they were, etc. We both had 600mm focal lengths, me with my EF 600mm f/4, and the other guy with a small zoom lens that had an FF-effective 600mm focal length, or thereabouts.

In the end, the smaller sensors of his mirrorless cameras didn't stand a chance against the 5D III. The slower frame rates, which were between 4-5 fps, did not do as well. The AF system did not lock on remotely as fast (it's almost instantaneous with the 600/4 II and 5D III), and in both cases with both cameras, he seemed to be using some kind of contrast-detection AF, or perhaps hybrid contrast/phase detection? Either way...it was quite slow, and while decently accurate, not as accurate as the 5D III seemed to be (although I guess that could boil down to technique.)

The only advantage I could really see in the mirrorless was their near-microscopic size...they were both TINY, and in comparison they almost looked like toys next to the system I was using. The guy got antsy pretty quick, and was unwilling to stick around...within about 5 minutes he got up and left, but before he did, he mentioned the dozen or so other bird spots he'd tromped through in the park on the way to me. I suspect he tromped through a dozen more, and scared off another couple dozen beautiful subjects, before he finally called it quits. (The guy missed out, too...on his way out he finally did scare off that one last BCNH, but within about 10 minutes after he left, a snowy and a couple more of the night herons came back, and within another 15 minutes they proceeded to fish. Mirrorless vs. DSLR...Mirrorless: 0, DSLR: 1)

The moral of the story? If you're a discourteous, tromping wannabe who has to keep on the move because you're too impatient to set up, sit, and wait for nature's beauty to come to you in comfort...then a tiny, lightweight mirrorless with a tiny, lightweight lens is probably for you. You won't get the same action-grabbing performance, you won't have the same ergonomics (those mirrorless cams and lenses are TI-NY...like, toy tiny, like, barely fits in your hands tiny...like, WTF am I doing with a TOY with that BEAUTIFUL BIG BIRD in front of me?!?!? OMG!), and your IQ won't be as good (or maybe it will if you drop some dough on the FF A7r, but then you'll really be suffering on the AF and ergonomics front).

Anyway...mirrorless has its place. They have their uses and their benefits. But, every time I encounter a die-hard mirrorless user, my experiences tend to be similar to the above. Mirrorless users are ALWAYS on the move. Moving moving moving moving. No patience, no time to wait and let things just happen around you. MOVING. I totally understand why they are fanatics about mirrorless...but wow...slow down and enjoy something, enjoy life happening around you every once in a while! :P

1335
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 20, 2014, 03:33:16 AM »
Further, everyone else who continues to perpetuate the myth that somehow the two halves of the pixels, which are under not only one microlens but also under one color filter block, could somehow magically be used to expand dynamic range "for free" is fooling themselves, and anyone who listens to them. Magic Lantern either uses two FULL sensor reads (vs. half sensor reads), or they do line interpolation for half the resolution, to achieve their dynamic range. There is no free increase to dynamic range, and DPAF isn't going to somehow allow more dynamic range for free. The problem with the idea of using one half of the AF photodiodes for an ISO 100 read, and the other half for an ISO 800 read, is that is HALF the light! That is not the same as what ML does, which involves the FULL quantity of light, or else half the light AND half the resolution.

Huh?  Please explain how reading both halves at the same gain gets you all the light but reading them at different gains gets you only half the light?  How different do the gains have to be to cut the light in half?  Is 1% enough?

What you said makes no sense to me.

The photodiodes are SPLIT. Each half gets half the light coming through the lens. It doesn't matter what ISO you read them at...if you read "half"...it's half the light. So you're reading half the light at ISO 100, and half the light at ISO 800...well, you really aren't gaining anything. The only way to increase dynamic range by any meaningful amount is to either gather MORE light IN TOTAL...or reduce read noise by a significant degree (i.e. drop it from ~35e- to ~3e-). Assuming it ever even becomes possible to read the photodiodes for image purposes like that, you might gain an extremely marginal improvement...but overall, there really isn't any point. It isn't the same as what ML is doing. They are either reading alternate lines of the sensor at two different ISOs, then combining them at HALF THE RESOLUTION, or they are doing two full reads of the sensor. Either way, for the given output size, they double the quantity of light. Reading two HALVES of a SPLIT photodiode gets you...ONE full quantity of light.
