

Messages - jrista

1156
EOS Bodies / Re: DSLR vs Mirrorless :: Evolution of cameras
« on: June 23, 2014, 10:14:53 PM »

The moral of the story? If you're a discourteous, tromping wannabe who has to keep on the move because you're too impatient to set up, sit, and wait for nature's beauty to come to you in comfort...then a tiny lightweight mirrorless with a tiny lightweight lens is probably for you. You won't get the same action-grabbing performance, you won't have the same ergonomics (those mirrorless cams and lenses are TI-NY...like, toy tiny, like, barely fits in your hands tiny...like, WTF am I doing with a TOY with that BEAUTIFUL BIG BIRD in front of me?!?!? OMG!), your IQ won't be as good (or maybe it will if you drop some dough on the FF A7r, but then you'll really be suffering on the AF and ergonomics front).

Anyway...mirrorless has its place. They have their uses and their benefits.  But, every time I encounter a die-hard mirrorless user, my experiences tend to be similar to the above. Mirrorless users are ALWAYS on the move. Moving moving moving moving. No patience, no time to wait and let things just happen around you. MOVING. I totally understand why they are fanatics about mirrorless...but wow...slow down and enjoy something, enjoy life happening around you every once in a while! :P

makes me smile.....

my birding setup includes a camping chair and a good book :) and while I was reading today Harry came past to check me out...

Nice shot. :)

My birding setup included my ass and the ground. :D And the camera+lens on a tripod, of course. And maybe my phone...on which I have good books, good music, good games, lots of good stuff.

1157
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 23, 2014, 02:38:24 AM »
The fundamental problem arises when you encode the color information. It doesn't seem to matter if it's a chrominance pair, or an RGB triplet, or anything else. Once you encode the color information...take it out of its separate storage values, and bind those discrete red, green, and blue values together into a conjoined value set (i.e. RGB sub-pixel values for a full TIFF pixel, for example)...you lose editing latitude.


That sounds like it's just a problem with the way the software is handling the data. A TIFF is still assigning full RGB values to each pixel, the debayering is done, and I'm assuming the original values are lost. Whereas if you store the information in a pre-debayered state, even with one pixel averaged out (which was next on the to-do list anyway) it shouldn't be any different from reading the original RAW... with the slight exception that adjusting the value after averaging would be different than adjusting the values of two pixels and then averaging them (I assume that when you adjust things in post it's playing with the RAW numbers before debayering).
But that still sounds like a fairly inconsequential concession to make compared to storing data after debayering.

There are some things in a RAW editor that must be done before debayering (e.g. white balance), and some that are usually done after debayering. It's just that some things are more effectively performed with the original digital signal, and others with a full RGB color image. Exposure and white balance are the two main things that benefit most from being processed in the original RAW, where the signal information is pure and untainted by any error introduced by conversion to RGB.

You also have to realize that RGB binds the three color components together....they cannot be shifted around much in an independent way, not like you can with RAW, without introducing artifacts. At least, not with real-time algorithms. There are other tools, like PixInsight (astrophotography editor) that have significantly more powerful, mathematically intense, and often iterative processes that put most of the tools in something like Lightroom to shame. One example is TGVDenoise...which is capable of pretty much obliterating noise without affecting larger scale structures or stars at all. Problem is, at an ideal iteration count (usually around 500) on a full RGB color image, running TGVDenoise can take several minutes to complete. And that is just one small step in processing a whole image.

So sure, with the right tools, you can probably do anything with a 16-bit TIFF. It's just that with the lower-precision but significantly faster algorithms often found in standard tools like Lightroom, you either end up with artifacts, or run into limitations with the data or the algorithm that won't let you push the data around as much.
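The earlier point about adjustment order can be made concrete with a toy sketch (pure Python; the power-law `gamma` function here is a made-up stand-in for any nonlinear edit, not any tool's actual tone curve): applying a nonlinear adjustment after averaging two samples gives a different result than averaging two already-adjusted samples.

```python
# Toy illustration: nonlinear edits do not commute with averaging.
# "gamma" is a hypothetical power-law tone adjustment on [0, 1] values.

def gamma(v, g=2.2):
    """Simple power-law tone adjustment on a normalized [0, 1] value."""
    return v ** (1.0 / g)

g1, g2 = 0.10, 0.90          # two green samples from a Bayer quad

avg_then_adjust = gamma((g1 + g2) / 2)          # average first, edit after
adjust_then_avg = (gamma(g1) + gamma(g2)) / 2   # edit first, average after

print(avg_then_adjust)   # ≈ 0.730
print(adjust_then_avg)   # ≈ 0.652
```

The two results differ, which is why it matters whether an edit is applied to the original per-photosite values or to values that have already been combined.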

1158
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 22, 2014, 09:10:41 PM »
Superpixel sounds like what you want. I actually wish that mainstream RAW editors like Lightroom would offer that as an option, honestly. Some people care more about color fidelity and tonal range than resolution, and having LOTS of pixels with superpixel debayering would be a huge bonus for those individuals.

Using it in post sounds nice, but what we're after is a way to save space on the memory card. Could a camera use superpixel debayering as a part of the image capture process and still save the file in RAW format?

Nope. Once you debayer, or do any kind of processing to the data, you're no longer RAW. Canon does offer the sRAW and mRAW settings. Those are what, at best, you could call semi-RAW. They are closer to a JPEG in terms of actual storage format (YCbCr encoding, or luminance + chrominance blue + chrominance red), but everything is stored in 14-bit precision. It's also encoded such that you have full luminance data, basically a luminance value for every single OUTPUT pixel, but the Cb and Cr data is sparse: it's encoded from multiple pixels (I forget if it is a 1x2 short row, or a full 2x2 quad), and that encoded value is stored as a single pair of 14-bit Cb/Cr values for every 2 or 4 luminance pixels (I think exactly how many color pixels are encoded per luminance pixel depends on whether you're using sRAW or mRAW). Now, the luminance is encoded per output pixel. If you're using mRAW, I think that's basically 1/2 the area of the full sensor, and for sRAW it's basically 1/4 the area of the full sensor. So your luminance information is encoded from however many source pixels are necessary to produce the right output pixels. I think 2x2 for sRAW, something along the lines of 1.5x1.5 for mRAW. (There is a spec on the formats somewhere; it's been a long time since I've read it...my description above is not 100% accurate, but that's the general gist...basically, a 4:2:0 or 4:2:2 encoding of the image data.)

You definitely save space with these formats, but I have experimented with them on multiple occasions, and your editing latitude is nowhere remotely close to that of a full RAW. You can shift exposure around a moderate amount, but you have limits to how far down you can pull highlights, how far up you can push shadows, how far you can adjust white balance, etc.
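The sparse-chroma storage described above can be sketched roughly like this (a toy illustration of full-resolution luminance plus one Cb/Cr pair per 2x2 block; this is NOT Canon's actual sRAW/mRAW layout, and `encode_sparse_ycbcr` is an invented helper):

```python
# Rough sketch of "full luminance, sparse chrominance" storage, assuming
# standard JPEG-style RGB -> YCbCr conversion coefficients.
import numpy as np

def encode_sparse_ycbcr(rgb):
    """rgb: (H, W, 3) float array, H and W even. Returns (Y, CbCr)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b            # luminance per pixel
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b
    cr = 0.5 * r - 0.4187 * g - 0.0813 * b
    # Average each chroma plane over 2x2 blocks -> one Cb/Cr pair per quad.
    h, w = y.shape
    cb4 = cb.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    cr4 = cr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, np.stack([cb4, cr4], axis=-1)

rgb = np.random.rand(4, 4, 3)
y, cbcr = encode_sparse_ycbcr(rgb)
# Full RGB stores 48 values for a 4x4 image; this layout stores 16 + 8 = 24.
print(y.shape, cbcr.shape)   # (4, 4) (2, 2, 2)
```

This halves the stored value count, which is where the space savings come from, and also shows why the discarded per-pixel chroma can't be recovered later in post.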


Darn.
After reading a bit about the various file formats (TIFF is high fidelity, but both huge and still damaging even in 16-bit), it sounds like the best that could be done would be just to "prep" the RAW file for superpixel debayering by saving it with the two green pixels averaged already. It would be useless for anything else, but you'd use 25% less space. Not nothing, but not great.
People just need to get used to handling large files.
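The green-averaging idea above is essentially a minimal superpixel debayer. A toy sketch (`superpixel_debayer` is a made-up helper, assuming an RGGB mosaic layout):

```python
# Minimal superpixel-debayer sketch: collapse each RGGB 2x2 quad into one
# RGB output pixel, averaging the two green samples.
import numpy as np

def superpixel_debayer(raw):
    """raw: (H, W) mosaic with R at (0,0), G at (0,1)/(1,0), B at (1,1)."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)  # half-res RGB

raw = np.arange(16, dtype=float).reshape(4, 4)
rgb = superpixel_debayer(raw)
print(rgb.shape)   # (2, 2, 3) -- a 4x4 mosaic becomes a 2x2 RGB image
```

Each output pixel is built only from samples that were actually measured at (or adjacent to) that location, which is why superpixel debayering avoids the interpolation artifacts of full-resolution demosaicing at the cost of half the linear resolution.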

The fundamental problem arises when you encode the color information. It doesn't seem to matter if it's a chrominance pair, or an RGB triplet, or anything else. Once you encode the color information...take it out of its separate storage values, and bind those discrete red, green, and blue values together into a conjoined value set (i.e. RGB sub-pixel values for a full TIFF pixel, for example)...you lose editing latitude.

1159
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 22, 2014, 07:35:05 PM »
Superpixel sounds like what you want. I actually wish that mainstream RAW editors like Lightroom would offer that as an option, honestly. Some people care more about color fidelity and tonal range than resolution, and having LOTS of pixels with superpixel debayering would be a huge bonus for those individuals.

Using it in post sounds nice, but what we're after is a way to save space on the memory card. Could a camera use superpixel debayering as a part of the image capture process and still save the file in RAW format?

Nope. Once you debayer, or do any kind of processing to the data, you're no longer RAW. Canon does offer the sRAW and mRAW settings. Those are what, at best, you could call semi-RAW. They are closer to a JPEG in terms of actual storage format (YCbCr encoding, or luminance + chrominance blue + chrominance red), but everything is stored in 14-bit precision. It's also encoded such that you have full luminance data, basically a luminance value for every single OUTPUT pixel, but the Cb and Cr data is sparse: it's encoded from multiple pixels (I forget if it is a 1x2 short row, or a full 2x2 quad), and that encoded value is stored as a single pair of 14-bit Cb/Cr values for every 2 or 4 luminance pixels (I think exactly how many color pixels are encoded per luminance pixel depends on whether you're using sRAW or mRAW). Now, the luminance is encoded per output pixel. If you're using mRAW, I think that's basically 1/2 the area of the full sensor, and for sRAW it's basically 1/4 the area of the full sensor. So your luminance information is encoded from however many source pixels are necessary to produce the right output pixels. I think 2x2 for sRAW, something along the lines of 1.5x1.5 for mRAW. (There is a spec on the formats somewhere; it's been a long time since I've read it...my description above is not 100% accurate, but that's the general gist...basically, a 4:2:0 or 4:2:2 encoding of the image data.)

You definitely save space with these formats, but I have experimented with them on multiple occasions, and your editing latitude is nowhere remotely close to that of a full RAW. You can shift exposure around a moderate amount, but you have limits to how far down you can pull highlights, how far up you can push shadows, how far you can adjust white balance, etc.

I am guessing it is more than that. Let's say Canon's next move would be to 3.5µm pixels. With a 500nm process, the actual photodiode, assuming a non-shared pixel architecture, would then actually be barely 2.5µm in size at most (once you throw wiring and readout logic transistors around it.) With a shared pixel architecture you might be able to make it a little larger. On the other hand, if you drop from a 500nm process to a 180nm process, the photodiode could be close to 3.14µm across. (This assumes that wiring and transistors only require a single transistor's-width border around the photodiode...it's usually not quite that simple, at least based on micrograph images of actual sensors and patent diagrams.) With a 90nm process, the photodiode could be up to 3.3µm.

I think the 500nm process is really limiting for Canon now. They COULD do it; there is nothing that prevents them from creating a 3.5µm pixel sensor with 2.5µm photodiodes...but I don't think it would be competitive. The smaller photodiode area wouldn't gather as much light as competitors' sensors that are fabricated with 180nm or 90nm processes, and they would just be a lot noisier.
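The back-of-envelope numbers above can be reproduced with a quick sketch (using the same simplifying assumption stated above: wiring and transistors need roughly one process-node-sized border on each side of the photodiode):

```python
# Rough photodiode-size arithmetic for a hypothetical 3.5µm pixel pitch.
PIXEL = 3.5  # µm pitch

for node_nm in (500, 180, 90):
    border = node_nm / 1000.0              # border width in µm
    diode = PIXEL - 2 * border             # usable photodiode width, µm
    # Light gathered scales roughly with photodiode area (width squared).
    light_gain = (diode / (PIXEL - 2 * 0.5)) ** 2
    print(f"{node_nm}nm process: ~{diode:.2f}µm photodiode, "
          f"~{light_gain:.2f}x the light of the 500nm case")
```

Under this (very crude) model, moving from a 500nm to a 180nm process grows the photodiode from 2.5µm to about 3.14µm, roughly 1.6x the light-gathering area per pixel.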

I am really, truly hoping Canon has moved to a significantly more modern fabrication process with the 7D II sensor. I think that alone would improve things considerably for Canon's IQ.

I'm guessing the only reason you mention 90nm and not 30nm is that in this application the cost/benefit ratio favours slightly larger circuits rather than smaller? (you'd only gain minimal surface area but potentially make production much more difficult)

Well, I mention 180nm and 90nm because I am pretty sure Canon has the fab capability to manufacture transistors that small. In the smallest sensors, transistor sizes are a lot smaller than that...I think they are down to 32nm for the latest stuff, with pixels around 1µm (1000nm) in size. I think that some of Canon's steppers and scanners can handle smaller transistors, 65nm using subwavelength etching, but I don't know if that stuff has been/can be used for sensor fabrication. I know for a fact that Canon already uses a 180nm Cu fab process for their smaller sensors, so I know for sure they are capable of that. Their highest resolution fabs are around 90nm natively, but again, most of what I've read about them indicates IC fabrication...I've never heard of them being used to manufacture sensors (but there honestly isn't that much info about Canon's fabs...nor who owns them...)

1160
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 22, 2014, 06:40:49 PM »
As for the double layer of microlenses...sure, you could read a full RGBG 2x2 pixel "quad" and have "full color resolution". Problem is, that LITERALLY halves your luminance spatial resolution...

Thus you start with an 80MP sensor to get a nice 20MP image.

No, that is fundamentally incorrect. You start with a 20mp sensor, which has 40 million PHOTODIODES. The two are not the same. Pixels have photodiodes, but photodiodes are not pixels. Pixels are far more complex than photodiodes. DPAF simply splits the single photodiode for each pixel, and adds separate activation wiring for both halves. That's it. It is not the same as increasing the megapixel count of the sensor.
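The pixel-vs-photodiode distinction above can be modeled with a toy sketch: the two half-photodiode signals are read separately for phase-detect AF but summed into a single value per pixel for the image, so the image's pixel count is unchanged.

```python
# Toy model of DPAF readout: two photodiode halves per pixel, one image value.
import numpy as np

left  = np.random.rand(20, 30)   # signal from each pixel's left half
right = np.random.rand(20, 30)   # signal from each pixel's right half

image = left + right             # image readout: one value per pixel
phase_a, phase_b = left, right   # AF readout: two samples per pixel

print(image.shape)               # (20, 30) -- still a 20x30-pixel image
```

Doubling the photodiode count gives the AF system extra information, but the summed image has exactly as many pixels as before.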

And, once again...I have to point out. There is no such thing as QPAF. The notion that Canon has QPAF is the result of someone seeing something they did not understand. Canon does not have QPAF. Their additional post-DPAF patents do not indicate they have QPAF technology yet...however there have been improvements to DPAF.

Sorry, maybe I should have communicated that better. I wasn't referring to dual pixel technology, just normal sensors at high resolution.
(If superpixel debayering is really as simple as it sounds, dual pixel technology is completely unnecessary in this context.)

Superpixel sounds like what you want. I actually wish that mainstream RAW editors like Lightroom would offer that as an option, honestly. Some people care more about color fidelity and tonal range than resolution, and having LOTS of pixels with superpixel debayering would be a huge bonus for those individuals.

Well, someday we may have 128mp sensors...but that is REALLY a LONG way off. DPAF technology, or any derivation thereof, isn't going to make that happen any sooner.

http://www.gizmag.com/canon-120-megapixel-cmos-sensor/16128/

I'm still of the opinion that Canon is only limiting resolution because of either the lack of user infrastructure (flash memory needs to drop in price), or the lack of a practical processor to pair with the sensor (problems with size, battery life, heat, etc...).

My bet is they will ramp up resolution as surrounding technology allows.

I am guessing it is more than that. Let's say Canon's next move would be to 3.5µm pixels. With a 500nm process, the actual photodiode, assuming a non-shared pixel architecture, would then actually be barely 2.5µm in size at most (once you throw wiring and readout logic transistors around it.) With a shared pixel architecture you might be able to make it a little larger. On the other hand, if you drop from a 500nm process to a 180nm process, the photodiode could be close to 3.14µm across. (This assumes that wiring and transistors only require a single transistor's-width border around the photodiode...it's usually not quite that simple, at least based on micrograph images of actual sensors and patent diagrams.) With a 90nm process, the photodiode could be up to 3.3µm.

I think the 500nm process is really limiting for Canon now. They COULD do it; there is nothing that prevents them from creating a 3.5µm pixel sensor with 2.5µm photodiodes...but I don't think it would be competitive. The smaller photodiode area wouldn't gather as much light as competitors' sensors that are fabricated with 180nm or 90nm processes, and they would just be a lot noisier.

I am really, truly hoping Canon has moved to a significantly more modern fabrication process with the 7D II sensor. I think that alone would improve things considerably for Canon's IQ.

1161
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 22, 2014, 03:52:54 AM »
After reading a number of (what else?) rumors, albeit convincing ones, on a few other websites, I am highly inclined to believe that there will NOT be a 7D Mark II. I am very convinced that Canon will unveil this new body, which we are referring to as the 7D Mark II, as a completely "unrelated" camera line/series/what have you. It could potentially have some things in common with the 7D, and could potentially be considered somewhat of a follow-up to it, but I really think this is meant to be the next level of body, something new, different, and not really meant to be a 7D Mark II. This is completely my own feeling, based on what I've read here and on other sites. All I can do, personally, is pray that it is geared toward the type of shooting that I do, and my needs. If not, I will be beyond disappointed.

EDIT: After more 100% pure speculative thinking, I have decided that I would be willing to bet money (if I had any money) that this camera will have 30+ MP, and that, in Q1 of 2015, a camera will be released from Canon which will have 40-50 MP. Call me crazy.

If the thing that is released is heavily video-based, then I think it'll probably be something else. If it is still primarily a stills DSLR geared for semi-pro action shooters, I think it will still get the 7D moniker.

1162
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 22, 2014, 03:48:11 AM »
And back to our regularly scheduled programming.

The 7D II's new technology. What will it be? Here are my thoughts, given past Canon announcements, hints about what the 7D II will be, interviews with Canon uppers, etc.

 1. Megapixels: 20-24mp
 2. Focal-plane AF: Probably DPAF, with the enhancements in the patents Canon was granted at the end of 2013. *
 3. New fab process: Probably 180nm, maybe on 300mm wafers. **
 4. A new dedicated AF system: I don't know if it will get the 61pt AF, probably too large for the APS-C frame. A smaller version...41pts would be my hope. Same precision & accuracy of 61pt system on 5D III, with same general firmware features.
 5. Increased Q.E.: Canon has been stuck below 50% Q.E. for a long time now. Their competitors have pushed up to 56% and beyond, a couple have sensors with 60% Q.E. at room temperature. Higher Q.E. should serve high ISO very well.
 6. Faster frame rate: I suspect 10fps. I don't think it will be faster than that, 12fps is the reserved territory of the 1D line.
 7. Dual cards: CF (CFast2) + SD. I hate that, personally, but I really don't see the 7D line getting dual CF cards. (I'll HAPPILY be proven wrong here, though!)
 8. No integrated battery grip. Just doesn't make sense, the 7D was kind of a smaller, lighter, more agile alternative to the 1D, a grip totally kills that.
 9. New 1DX/5DIII menu system. Personally, I would very much welcome this! LOVE the menu system of the 5D III.
10. GPS and WiFi: I think both should find their way into the 7D II, what with the 6D having them. Honestly not certain, though...guess it's a tossup.
11. Video features: Video has always been core to the 7D II rumors. 60fps 1080p; 120fps 720p (?); HDMI RAW output; External mic jack; 4:2:2; I think DIGIC 7 would probably arrive with enhancements on the DIGIC 6 image and video processing features. Maybe on par with Sony's Bionz X chip.

* Namely, split photodiodes, but with different sizes...one half is a high sensitivity half, the other half is a lower sensitivity half. The patents are in Japanese, and the translations are horrible, so I am not sure exactly WHY this is good, but Canon's R&D guys seem to think it will not only improve AF performance and speed, but "reduce the negative impact to IQ"....which seems to indicate that the use of dual photodiodes has some kind of impact on IQ, a negative impact.

** We know Canon has been using a 180nm process for their smaller form factor sensors for a while. Not long ago, a rumor came through, I think here on CR, indicating Canon was building a new fab and would be moving to 300mm wafers. That should greatly help Canon's ability to fabricate large sensors with complex pixels for a lot cheaper. A smaller process would increase the usable area for photodiodes, as transistors and wiring would be a lot smaller than they are today on Canon's 500nm process. That would be a big benefit for smaller-pixel sensors. If they moved to a 90nm process, all the better. I don't suspect we'll see any kind of BSI in the 7D II...but, who knows.

41 AF points would be excellent, especially if they were all cross-type!  Still, Nikon has crammed 51 into its cropped frame, so it might be possible for Canon to put 61 into a crop frame.  Based on what I've read (and the little on the few minutes I actually handled a 1DX at a camera show), the 61 point systems on the 1DX/5D3 are quite concentrated in the centre.  Is it possible for Canon to adapt a 61 point system for the 7D2 that fills more of the frame, given that it's a crop?  Would that be a possible adaptation of the existing Full Frame AF to a Crop Frame analogue--same density over a "larger" area of the frame given the tighter field of view?

As to the frame rate, the initial specs all said 10 fps.  When they suddenly began to suggest it might be 12, I was rather surprised, but I'd be pleased if it were true--but 10 will be awesome too (it definitely needs to outperform the 8 fps of the current model!).  I suppose it depends on what they think Nikon will do, and also what the next 1D series will do. I also think it will probably be 10, but 12 would give it more time on top of the competition and they might just want to overshoot in order to make this a standard for crop-frames for the next several years. 

Either way, I'm excited! :)

The Canon 61pt system was actually ground-breaking for covering the widest spread of frame ever. Based on what I see in my 5D III viewfinder, I think the AF point spread is larger than the entire 7D frame! :P It's actually rather incredible.

That said, the dedicated phase-detect system is comprised of a little unit embedded in the base of the mirror box. Part of the unit is a small lens that splits the light and redirects it to each line sensor. I don't see why that little lens couldn't be redesigned for a smaller frame, but then I don't know if the size of the AF sensor itself would be too large (require too much bending of light that you could no longer accurately detect phase at the extremes?)

I think a 10fps frame rate is more than reasonable. I was pretty happy with 8fps on the 7D, 10fps would just be a bonus. I think 12fps would require some fairly significant engineering...that was another one of the ground-breaking things with the 1D X. They had to completely redesign the mirror apparatus to handle that kind of frame rate. It really sounds like a machine gun, too, and I'm actually not sure I like that. The 5D III is so quiet, it's wonderful for wildlife (it has more of a soft chi-ching sound, vs. the 7D's cha-thuck, and the 1D's Cha!-Cha!-Cha!-Cha!-Cha!-CHA!)

1163
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 22, 2014, 12:10:11 AM »
And back to our regularly scheduled programming.

The 7D II's new technology. What will it be? Here are my thoughts, given past Canon announcements, hints about what the 7D II will be, interviews with Canon uppers, etc.

 1. Megapixels: 20-24mp
 2. Focal-plane AF: Probably DPAF, with the enhancements in the patents Canon was granted at the end of 2013. *
 3. New fab process: Probably 180nm, maybe on 300mm wafers. **
 4. A new dedicated AF system: I don't know if it will get the 61pt AF, probably too large for the APS-C frame. A smaller version...41pts would be my hope. Same precision & accuracy of 61pt system on 5D III, with same general firmware features.
 5. Increased Q.E.: Canon has been stuck below 50% Q.E. for a long time now. Their competitors have pushed up to 56% and beyond, a couple have sensors with 60% Q.E. at room temperature. Higher Q.E. should serve high ISO very well.
 6. Faster frame rate: I suspect 10fps. I don't think it will be faster than that, 12fps is the reserved territory of the 1D line.
 7. Dual cards: CF (CFast2) + SD. I hate that, personally, but I really don't see the 7D line getting dual CF cards. (I'll HAPPILY be proven wrong here, though!)
 8. No integrated battery grip. Just doesn't make sense, the 7D was kind of a smaller, lighter, more agile alternative to the 1D, a grip totally kills that.
 9. New 1DX/5DIII menu system. Personally, I would very much welcome this! LOVE the menu system of the 5D III.
10. GPS and WiFi: I think both should find their way into the 7D II, what with the 6D having them. Honestly not certain, though...guess it's a tossup.
11. Video features: Video has always been core to the 7D II rumors. 60fps 1080p; 120fps 720p (?); HDMI RAW output; External mic jack; 4:2:2; I think DIGIC 7 would probably arrive with enhancements on the DIGIC 6 image and video processing features. Maybe on par with Sony's Bionz X chip.

* Namely, split photodiodes, but with different sizes...one half is a high sensitivity half, the other half is a lower sensitivity half. The patents are in Japanese, and the translations are horrible, so I am not sure exactly WHY this is good, but Canon's R&D guys seem to think it will not only improve AF performance and speed, but "reduce the negative impact to IQ"....which seems to indicate that the use of dual photodiodes has some kind of impact on IQ, a negative impact.

** We know Canon has been using a 180nm process for their smaller form factor sensors for a while. Not long ago, a rumor came through, I think here on CR, indicating Canon was building a new fab and would be moving to 300mm wafers. That should greatly help Canon's ability to fabricate large sensors with complex pixels for a lot cheaper. A smaller process would increase the usable area for photodiodes, as transistors and wiring would be a lot smaller than they are today on Canon's 500nm process. That would be a big benefit for smaller-pixel sensors. If they moved to a 90nm process, all the better. I don't suspect we'll see any kind of BSI in the 7D II...but, who knows.

1164
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 21, 2014, 10:45:53 PM »
Ah...so, we have a sensor now, without any light sensing components? But we have sub-pixels, and that's VEEERY IMPORTANT, ppls! Interesting...

1165
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 21, 2014, 09:09:53 PM »
You've somehow equated the term "sub-pixel" with "pixel". Why use a different term, sub-pixel, if it's the same thing?

So, the crux of this argument, really, is whether a sub-pixels is a photodiode or a pixel.

Let me just state the Canon's DPAF patent doesn't even mention the word photodiode.
Instead, it is using the wording 'sub-pixel'.

You are only assuming that by a 'sub-pixel' Canon actually means a photodiode.
This is just an assumption, however, as no statement/fact from the patent supports it.
Let's be very clear about this.

I, on the other hand, am assuming that a sub-pixel is in fact a full-blown pixel.
This is another assumption, however, as the patent doesn't define what a sub-pixel really is.

In other words, this tiresome, pointless debate is a debate about two assumptions.

I am perfectly fine that I'm making an assumption. But yours is an assumption too, mind you.
Neither your assumption, nor mine, is any more valid than the other, though, as what you've
been saying is no more based on facts than what I've been saying.

It does state photoelectric converter, though, in the abstract (a simplification into fewer words of the extremely wordy breakdown) which most definitely IS a photodiode. The abstract also clearly states that there are two "photoelectric converters" per "pixel". You've handily ignored the abstract, but it is still an entirely valid description, and is still a part of the patent. The description of a sub-pixel, in combination with how they are portrayed in the diagrams, also indicates they are photodiodes. Sure, there is readout logic, as there is binning and readout logic for the whole pixel. Is the readout logic part of the pixel, or the photodiodes? We could debate that round and round as well, I'm sure. Again...all just words used to describe concepts. We can mince words all day and all night, which is why I'm going to go back to working in my yard. Ta! Ta!

1166
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 21, 2014, 06:52:45 PM »
"... an image sensor including pixels each having a pair of photoelectric conversion units (photodiodes) capable of outputting the pair of image signals obtained by independently receiving a pair of light beams...

Btw, the word 'photodiode' is not even mentioned in the patent. So, you basically forged this quote.

The patent talks about sub-pixels, not photodiodes.
Your entire stance is based on forged/misinterpreted information.

Photoelectric converter == photoconductor == photodiode.

It's all the same thing. I haven't forged (LOL ???) or misinterpreted anything. As for "sub-pixel"...again, the same thing. It's all a photodiode. It's a concept we're talking about here...an anode and a cathode plugged into a bit of silicon that has the ability to convert incident photons to free electrical charge. That's what a sub-pixel in Canon's patent is! I've always used the term photodiode (because that's what it is, if you look at the symbol on an electrical diagram, and there are two other patents in Japanese that have actual electrical diagrams; it's a light-sensitive diode). Some of the other patents actually clearly show two "photoelectric converters" per pixel in multiple diagrams. If you had read my earlier comments about SPATIAL RESOLUTION, you would understand WHY it's the same thing, and why it doesn't matter if there are two, four, or N number of them contained within a single "pixel" (a multi-layered structure containing one or more photodiodes, a CFA, and a microlens).

It clearly states TWO sub-pixels for each pixel, not four as you've been claiming all along. 

Good. At least we've established that these are sub-pixels, not photodiodes.

But I guess that also makes Jrista's hundreds of misleading claims about photodiodes all false.
It seems that he's the one who's been continuously embarrassing himself - together with the small
gang lining up in support of an imposter.

Me? I just made a speculation that instead of two sub-pixels, Canon is using four.
That's not the least bit embarrassing.

LOL. I'm having so much fun.

Hmm. Hundreds, eh? Care to, um, enumerate all several hundred for us? I'd...really like to see that.

You've somehow equated the term "sub-pixel" with "pixel". Why use a different term, sub-pixel, if it's the same thing? Sub-pixel == photodiode == photoconductor == photoelectric converter. Conceptually, in the context of CIS, these things are identical. They are all represented by the same symbol in an electrical circuit diagram (a photodiode:

[image: photodiode circuit symbol]

). Conceptually, in the context of CIS, a pixel and a photodiode are not the same thing. This is a pixel:

[image: pixel cross-section diagram]

As it so happens, this is a pixel with two photodiodes (the N-type silicon dropped into the P-type substrate.) I really don't care what terms are used; ultimately, in the end, the term used to describe the thing isn't what matters, it's the concept the term encapsulates that matters...sub-pixel, photodiode, photoelectric converter...pick your poison. A (full, discrete, atomic) pixel and a photodiode are different conceptual things. You can mince words all you want, but now you're obfuscating and dancing around the original point: You have claimed, in multiple threads for a good while now, that not only does Canon have QPAF, but that somehow QPAF/DPAF leads, probably with ML (although I don't remember if you actually said that exactly), to better resolution. THOSE are the points at debate. Try all you want to play me for a fool, it isn't going to faze me, I don't care. Be as happy as you want that you discovered the term "sub-pixel" in the patent. To me, it's the same freakin thing, the same exact concept...a photodiode. I don't equate sub-pixel with pixel, as one is a complex multi-layered structure, and the other is a bit of doped silicon with an anode and a cathode tacked onto the ends that is part of a pixel.

From a spatial resolution standpoint, DPAF doesn't bring anything to the table. You would need to redesign THE PIXEL, that complex multilayered structure built into the silicon substrate, before DPAF, QPAF, or any other way of dicing up a piece of silicon could become something more than separate photodiodes/photoconductors/photoelectric-converters/sub-pixels under one microlens/color filter...something that could meaningfully separate spatial frequencies, resolve them independently, and ultimately represent more detail in two-dimensional spatial frequency terms (in other words, an image.)
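To put the "no extra spatial resolution" point in concrete terms, here's a minimal numerical sketch. This is my own toy model, not Canon's design: summing the two photodiode halves of each pixel yields exactly one image sample per microlens, the same as a conventional single-photodiode pixel.

```python
import numpy as np

# Toy illustration: a 1-D row of DPAF pixels. Each pixel's two photodiode
# halves (A and B) sit under ONE microlens and ONE color filter, so they
# sample the same point of the image projected by that microlens.
rng = np.random.default_rng(0)
scene = rng.random(8)            # light arriving at 8 pixels (one microlens each)

# Each half collects roughly half the light from the same microlens; any A/B
# difference encodes defocus phase, not additional scene detail.
half_a = scene * 0.5
half_b = scene * 0.5

image_readout = half_a + half_b  # binned for imaging: 8 samples, not 16

assert image_readout.shape == scene.shape   # same spatial sample count
assert np.allclose(image_readout, scene)    # no new detail recovered
```

The point of the sketch: doubling the photodiode count under a fixed microlens grid doubles the AF signal count, not the number of independent image samples.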



Anyway...I'd like to see the list of hundreds of things I've made misleading claims about. That's hundreds, plural. Get to work, bub!

As for me, as Don has said, this conversation has just gone way off the tracks and become pointless. I'm not embarrassed by anything I've said here, I'm confident in my knowledge and assessments, but the conversation itself is becoming embarrassing. It's clear you're not interested in any of the facts; this has devolved into an "I'm right, you're wrong" spitfest. I'm not interested in that. I proved my points, I debunked the myth I wanted to debunk, and not even for your sake...for everyone else's sake (although I don't think they care any longer): There is no QPAF. DPAF does not enhance resolution, and likely never will (not without PIXEL redesign). You've retreated, and contracted your argument into the most basic, minimally attackable position possible: I was just speculating and having fun!  ::)  ::) Fine by me. I'm clearly not alone in my assessments, others back me up, so I'm happy to exit the conversation here. I don't like to keep debating once the locals get fed up with the conversation. :P (Sorry, Don!)

1167
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 21, 2014, 03:34:43 PM »
Anybody think it's possible Canon could drop a dedicated processor into the 7D2 to handle AF subject recognition, like the 1DX does?

Better noise handling is my biggest want but a superb AF system is a close second.

It depends. If the 7D II hits with a new DIGIC processor, they may not need to resort to a dedicated AF processor. Each generation of DIGIC chips gets considerably more powerful than their predecessors. I'd imagine a DIGIC 7 would be quite considerably more capable than the DIGIC 6 used in some of the more compact cameras. DIGIC 6 does a LOT of image processing (It's more like Sony's Bionz X chip than the DIGIC 5), with high quality noise reduction, high quality video processing, etc. If Canon created a DIGIC 7 with some 7x or so more processing power than the DIGIC 6, they could easily handle high frame rates as well as high end AF capabilities all in the one chip (or two chips, as it's probably likely to be.)

1168
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 21, 2014, 01:55:57 PM »
I'm no electrical engineer with a Ph.D ...

Yup, that was pretty obvious ... and I was right about that (so, I was definitely right at least about one thing  8) ).

Kudos for having the courage to admit it, though. Seriously.

Of course now all the people on your side of this argument should have second thoughts on
why they should be listening to you.

And btw, I happen to be an electrical engineer by training. Not a Ph.D., though.

LOL. I have not admitted anything that diminishes my position here. It doesn't matter that I don't have a Ph.D. You missed the point entirely. You don't need a Ph.D. to understand this stuff, if you're otherwise well educated enough to understand the technology involved.

Even if you are an electrical engineer, you are not educated as to how DPAF works. You're lacking a basic knowledge set, which has allowed you to wildly speculate without any sound basis for your speculation. Here is what Canon's ACTUAL patent, from the US Patent office, actually states:

Quote
United States
Patent Application Publication
Yoshimura et al.


Pub. No.: US 2013/0147998 A1
Pub. Date: Jun. 13, 2013


IMAGE CAPTURING APPARATUS AND FOCUS DETECTION METHOD
Applicant: Canon Kabushiki Kaisha, Tokyo (JP)
Inventors: Yuki Yoshimura, Tokyo (JP); Koichi Fukuda, Tokyo (JP)
Assignee: CANON KABUSHIKI KAISHA, Tokyo (JP)
Appl. No.: 13/692,173
Filed: Dec. 3, 2012

  Foreign Application Priority Data
Dec. 13, 2011 (JP) ...................... 2011-272746

  Publication Classification
Int. Cl.
H04N 5/232

ABSTRACT

An image capturing apparatus performs focus detection based on a pair of image signals obtained from an image sensor including pixels each having a pair of photoelectric conversion units capable of outputting the pair of image signals obtained by independently receiving a pair of light beams that have passed through different exit pupil regions of an imaging optical system. In the focus detection, an f-number of the imaging optical system is acquired, the pair of image signals undergo filtering using a first filter formed from a summation filter when the f-number is less than a predetermined threshold, or using a second filter formed from the summation filter and a differential filter when the f-number is not less than the threshold, and focus detection is performed by a phase difference method based on the pair of filtered image signals.

Note the words used here. This is DIRECTLY from the patent:

"An image capturing apparatus performs focus detection based on a pair of image signals from an image sensor including pixels each having a pair of photoelectric conversion units (photodiodes) capable of outputting the pair of image signals obtained by independently receiving a pair of light beams...

Performs focus detection. The intent of DPAF is to perform focus detection. There is no mention anywhere in this abstract that describes an increase in imaging resolution.

Pair. Pair is used repeatedly in this abstract, indicating that there are only TWO photodiodes.

The abstract quite explicitly describes a "pixel" as having a "pair" of photoelectric conversion units. In other words, a pair of photodiodes.
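To illustrate what focus detection "by a phase difference method" on a pair of image signals means in practice, here's a hedged Python sketch. This is my own toy example (integer shifts only, simple sum-of-absolute-differences matching), not the patent's actual implementation: when defocused, the A and B signals are displaced relative to each other, and the detected displacement maps to a defocus amount.

```python
import numpy as np

def phase_shift(sig_a, sig_b, max_shift=8):
    """Find the integer displacement of sig_b relative to sig_a by
    minimizing the sum of absolute differences (SAD) over candidate shifts."""
    n = len(sig_a)
    best_shift, best_sad = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = sig_a[:n - s], sig_b[s:]
        else:
            a, b = sig_a[-s:], sig_b[:n + s]
        sad = float(np.abs(a - b).sum())
        if sad < best_sad:
            best_shift, best_sad = s, sad
    return best_shift

# Synthetic defocused feature: the B image is the A image displaced 3 samples.
x = np.linspace(0.0, 1.0, 64)
a_img = np.exp(-((x - 0.5) / 0.05) ** 2)
b_img = np.roll(a_img, 3)
print(phase_shift(a_img, b_img))  # → 3; in a camera this maps to a defocus amount
```

In focus, A and B coincide (shift 0); the sign and magnitude of the shift tell the camera which way, and roughly how far, to drive the lens.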

Everything I've said before was based not on assumption, nor on my own personal ignorance for want of being a Ph.D.-wielding electrical engineer. Everything I said was based directly on Canon's own patent, invented by Yoshimura and Fukuda. Invented, actually, quite a while before the 70D ever used the technology in an actual commercial product...the earliest date of foreign application (application for the patent in Japan, I assume) was Dec. 13, 2011!!! The patent in the US office was filed Dec. 3, 2012. This technology is not particularly new, and it has been well described for years. We aren't lacking knowledge about it.

Here is some more of the patent. The summary of the invention (just the first part...this section of the patent is a couple pages long):

Quote
SUMMARY OF THE INVENTION

[0007] The present invention has been made in consideration of the above situation, and improves defocus direction detection performance in focus detection of a pupil division phase difference method even when the pupil division performance is insufficient, and the influence of vignetting is large.

[0008] According to a first aspect of the present invention, there is provided an image capturing apparatus comprising: an image sensor including a plurality of two-dimensionally arranged pixels including pixels each having a pair of photoelectric conversion units arranged to output a pair of image signals obtained by independently receiving a pair of light beams that have passed through different exit pupil regions of an imaging optical system; a first filter formed from a summation filter; a second filter formed from the summation filter and a differential filter; an acquisition unit arranged to acquire an f-number of the imaging optical system; a filtering unit arranged to perform filtering of the pair of image signals by selecting the first filter when the f-number is less than a predetermined threshold and selecting the second filter when the f-number is not less than the threshold; and a focus detection unit arranged to perform focus detection by a phase difference method based on the pair of image signals that have undergone the filtering by the filtering unit.
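The f-number-dependent filter selection in [0008] could be sketched like this. The kernel values and the threshold below are my own placeholders purely for illustration; the patent text quoted here doesn't disclose the actual taps.

```python
import numpy as np

# Illustrative placeholder kernels -- NOT Canon's actual filter taps.
SUMMATION = np.array([1.0, 2.0, 1.0]) / 4.0      # low-pass "summation" filter
DIFFERENTIAL = np.array([1.0, 0.0, -1.0]) / 2.0  # "differential" filter

def filter_pair(sig_a, sig_b, f_number, threshold=5.6):
    """Select the first filter (summation only) below the f-number threshold,
    or the second filter (summation combined with differential) at or above
    it, then apply the selected filter to both image signals."""
    if f_number < threshold:
        kernel = SUMMATION
    else:
        # Applying two linear filters in series == convolving their kernels.
        kernel = np.convolve(SUMMATION, DIFFERENTIAL)
    return (np.convolve(sig_a, kernel, mode="same"),
            np.convolve(sig_b, kernel, mode="same"))

a = np.sin(np.linspace(0.0, 6.28, 32))
fa_fast, _ = filter_pair(a, a, f_number=2.8)  # first-filter branch
fa_slow, _ = filter_pair(a, a, f_number=8.0)  # second-filter branch
```

The design intent, as [0007] frames it, is that the differential component helps recover defocus direction when pupil division is poor and vignetting is heavy, which is the stopped-down case.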

I don't assume when I have the option of actually knowing. I like to actually KNOW...know everything I can about as many things as I can. I have based my assertions in this debate on actual facts, derived from a patent that clearly and explicitly specifies exactly what DPAF is: What it's purpose is, how it works, why it works that way.

You now have the patent number. You have what you need to go look the patent up yourself and read it.

1169
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 21, 2014, 12:43:39 PM »
Therefore, you cannot share the readout transistors, because both are read out simultaneously. Therefore, DPAF does NOT use a shared pixel design. There are, literally, two independent sets of transistors to read out each half of the pixel when the row is activated...and twice as many columns.

In other words, the two halves are read as ... independent pixels.
I feel that we are getting somewhere.
Oh, wait! That's what I've been saying all along.

The shared-transistor part is a different story.
I've never said explicitly which transistors are shared - and that's the key here.

You seem to think you're getting somewhere...convincing me, or anyone, that you know what you're talking about. I don't really have the heart to continue the conversation, because the clueless one here is not me...and this is just becoming disheartening. Go educate yourself. Please. Once you have actually seen the actual electrical diagram for at least one of Canon's DPAF patents, AND preferably read the accompanying description of how it works, maybe then we can have a coherent discussion.

Quote
If you want your claim to be in any way believable, you need actual evidence.  Read the Canon patents on dual-pixel AF - show us where a quad pixel design is mentioned.  Show us a verifiable image of the actual photodiodes of the 70D sensor (not the dead area at the very edge of the sensor in the Chipworks teaser). 

Fair enough.

I've never presented my speculations as facts. 
But if one day I decide to do that, I'll make sure that I have very solid evidence.

The problem is that you keep "speculating" about the same thing, even though it's been proven wrong on multiple occasions. I DID link you several detailed pages that had the full patent information the last time we had this debate. It took a while to find a working link as well, but apparently you never read it. I'm not going to go digging through the internet, spending all that time, to find something that you're likely never going to read (most of the time, these patents are translated from Japanese, as they can be notoriously hard to find in the US Patent Office's database...and therefore common search terms don't necessarily bring up what you're looking for.) I'm not repeating that effort for someone who seems more interested in literally and intentionally ignoring the FACTS he's been presented with, and "speculating" about a falsehood that he himself initiated even though it's demonstrably false, invalid, incorrect, not real, not happening, and otherwise moot.

It's fine to speculate...when you ACTUALLY DON'T KNOW anything about whatever it is you're speculating/rumormongering about. When it comes to DPAF...there is nothing to speculate about. Canon has filed patents. That's the end of the story. WE KNOW. Your notion that "it's not guaranteed to be 100% revealing" is 100% wrong...patents MUST, BY DESIGN, INTENT AND FUNCTION, be 100% revealing. Otherwise they wouldn't be granted in the first place...specificity in a patent is key. Your ignorance of that only demonstrates you don't know much about patents, nor the technologies they can potentially describe. I'm no electrical engineer with a Ph.D., but that doesn't mean people who don't have degrees in electrical engineering are incapable of understanding an electrical circuit diagram or the terminology that describes it. I do my fair share of dabbling in electronics. (Right now I'm building a Peltier-cooled DSLR cold box for my 5D III, which is involving a growing amount of electrical gadgetry and know-how to get working the way I want.) Go educate yourself, read the actual patents, or in lieu of that, read anything you can find about DPAF that isn't on Canon's site (so you can stop worrying that it's "just Canon marketing material"...although I'd offer that Canon is very up front about their technology; they have no reason to be misleading about DPAF)...and stop digging yourself into your hole. You're halfway to China right now.

Anyway, I think the myth of QPAF in the 70D or any other current Canon camera has been successfully debunked. The myth that DPAF or QPAF could be used to improve resolution in and of themselves, simply because they have independent readout, probably still needs to be revisited, however that's a debate for another day.

1170
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 21, 2014, 04:22:40 AM »
That is the exact OPPOSITE of a shared pixel. Shared pixels SHARE readouts. Canon's DPAF use INDEPENDENT readouts.

Heh. You share readout circuitry between photodiodes ... to read their output independently.
What a paradox. And yet, that's exactly what the industry has been doing for a decade now (or more?).
Fascinating stuff. LOL.

You're still misunderstanding. Pixels are activated row-by-row, and all columns are read out SIMULTANEOUSLY. Every column of an activated row of pixels has the charge stored in the photodiode read, amplified, and shipped down the column line AT ONCE. Because the photodiode is split per-pixel, both halves occupy the same row. Therefore, you cannot share the readout transistors, because both are read out simultaneously. Therefore, DPAF does NOT use a shared pixel design. There are, literally, two independent sets of transistors to read out each half of the pixel when the row is activated...and twice as many columns. During an image read, additional binning transistors combine the charge of each photodiode half, that total charge is amplified, and only half of the columns are used to move the charge down to the CDS units and column outputs.

Shared pixel designs usually share DIAGONALLY (I already said this, but apparently the reason did not sink in.) By sharing diagonally, you avoid the concurrent row problem. The first row is activated, and the first set of pixels sharing readout logic is read. The next row is activated, and this second set of pixels uses the same set of transistors to read out as their DIAGONAL counterparts in the row above. I've also read about patents that share pixels vertically, which achieves the same result, but results in mixed color output for every set of transistors...green/blue, red/green, etc.

It isn't possible to share anything in the same row, though...because once a row is activated, everything in it has to be read out...and by nature, DPAF photodiodes for any given color filter share the same row.
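Here's a toy model of that readout constraint (my own illustration, not Canon's circuit): with a row activated, AF mode reads each photodiode half on its own column line, while image mode bins the two halves first, so the image readout never gains samples.

```python
import numpy as np

# Toy model: a 4-row sensor whose pixels each hold two photodiode halves.
# Charges stored as a (rows, cols, 2) array: [..., 0] is the left half,
# [..., 1] the right half.
rng = np.random.default_rng(1)
charges = rng.random((4, 6, 2))

def read_row_af(row):
    """AF readout: the whole row is read at once, each half on its own
    column line -> 2 * cols independent values per row."""
    return charges[row].reshape(-1)     # 12 values for 6 pixels

def read_row_image(row):
    """Image readout: binning sums the halves first, so only cols column
    lines carry charge -> no resolution gain in the image."""
    return charges[row].sum(axis=1)     # 6 values for 6 pixels

assert read_row_af(0).size == 12
assert read_row_image(0).size == 6
```

Because both halves of every pixel sit in the same row and must be read in the same activation, the AF path needs its own readout transistors and column lines rather than a diagonal shared-pixel scheme.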
