Messages - jrista

1471
Software & Accessories / Re: Normal RAW vs Dual ISO Raw Example Video
« on: July 17, 2013, 07:28:09 PM »
Well, the colour looks nice in dual ISO, but that's some crazy bad moire on the cushion.

+1 Yeah, it makes it perform like an Exmor or better for shadow pulling, but then again the resolution is cut in half in each direction, and the moire and aliasing are so bad that it looks more or less unusable.

Cut in half only vertically, full resolution horizontally.
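
To make that tradeoff concrete, here is a minimal numpy sketch of the idea (the line-pair interleaving pattern and the two gain values are illustrative assumptions, not Magic Lantern's actual implementation):

```python
import numpy as np

# Hypothetical sketch of why Dual ISO halves only vertical resolution:
# the sensor alternates the analog gain of scanline pairs, so each ISO's
# sub-image keeps every column but only half the rows.

scene = np.random.rand(16, 16)        # stand-in for incoming light
raw = scene.copy()

rows = np.arange(raw.shape[0])
low_mask = (rows % 4) < 2             # rows 0,1, 4,5, ... at low ISO
raw[low_mask] *= 1.0                  # e.g. ISO 100 gain
raw[~low_mask] *= 16.0                # e.g. ISO 1600 gain (4 stops up)

low_iso, high_iso = raw[low_mask], raw[~low_mask]
print(low_iso.shape, high_iso.shape)  # (8, 16) (8, 16): full width, half
                                      # height -- hence the aliasing/moire
```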

1472
Animal Kingdom / Re: Show your Bird Portraits
« on: July 17, 2013, 05:56:36 PM »
Bar-winged Prinia (Prinia familiaris)
60D, 70-300mm L - 1/320s, f/5.6, 300mm, ISO 320

IMG_3648cropped by sleon_falconity, on Flickr


Very interesting. That bird looks almost identical to the Western Kingbirds we have here in the US. I think the Prinia has a lighter throat than the Kingbird, but outside of that...they are extremely similar. I'm curious....how big is the Prinia? A Kingbird (western or eastern) is just slightly smaller than an American Robin...I wonder if a Prinia is similar in size.

1473
Actually they are all sort of bad. It's a shame stuff like AmigaOS and such are forgotten and stuff like Windows hangs around.

Anyway, the above list is sort of accurate, but they also never did anything as radically silly as deciding that a tablet interface is ideal for desktop usage. Like we really want to smear greasy fingers all over photo-editing monitors, or hold our arms up and lean forward to reach 24-36" monitors (or worse, if you hook it to an HDTV too).

I'm curious whether the assumption that you MUST use touch to use the start screen is a common one. The start screen is not inherently touch-only: you can use Windows 8 without touch, and it works just fine. There is no reason to touch a screen to use the new start screen. If that is what most people think, then I guess it is no wonder people aren't buying Windows 8.

I'd also point out that it works even better on an HDTV. I have Win8.1 on my Media PC, attached to a 46" Samsung. I use the standard Media PC remote to control it, along with a companion Logitech T650 touchpad for supporting any of the gestures (which, I'd add, is fully compatible with any desktop, allowing you to take advantage of the touch interaction without ever needing to touch a screen, if that kind of thing irks you.)

1474
I'm not very interested in what you think when I have had a dialog with Eric Fossum, Emil Martinec, Bobn2, John Sheehy and several others about the benefits of BSI at DPReview years back, and also in private.

Well...good to see you're keeping the culture of obfuscation and misinformation alive.  ::) Good day, Mikael.

1475
I actually like my HTC Windows Phone. The interface is consistent and simple, and it is much easier to use than an Android phone (try giving an Android phone to somebody 60 and over; I feel Android phones are more for the tech-oriented, customization-minded crowd). I like the simplicity of iPhones as well, but I think Windows phones are quite easy to use. My HTC phone is also a lot thinner and lighter than the iPhone, and the LCD is fantastic. Battery life, admittedly, is so-so, but I've just learned to have a lot of USB chargers lying around. Also, their speech recognition still needs some work (as compared to Android - man, it is good).

As far as camera capabilities go, I think most people will admit that iPhone cameras can produce very good pics, and even the camera in my HTC phone can produce great colors and picture quality - good enough for me to put shots up on a monitor and ask people which one was taken by a DSLR and which one by a phone (obviously we're talking outdoor pics). Sometimes I even use my phone as a lighting source when I want to take pics inside a dimly lit place. I just turn on the flashlight app, get a white paper napkin for diffusion, and there you go.

I've always wondered why we don't have phones with the thickness of a Canon S95 and a proper zoom lens. I'd put that in a small case on my belt. It'd just be a multi-purpose device. Sorta like a p&s but you can have it upgraded to be a phone - linked to your regular cellular carrier.

Not sure why we would pick one brand over another - most cellphone makers wouldn't be in biz if they didn't perform to a minimum. Most of us switch cellphones every 6 months to a year, so I just pick the model that has the right OS, price point, and features.

Speech recognition in Windows Phone 8 is phenomenal. It isn't as interactive as Siri, but it is flawless, and even works in noisy environments now. If you haven't tried it, it's worth messing with a Windows Phone 8 device in a store somewhere...the voice control, voice texting, etc. is pretty nice.

1476
I have to concur that Win-8 sucks big time on a desktop. Metro has no business on a PC machine...

but on a tablet or a phone, it's far better than my iPhone. The only reason I haven't jumped ship is that the more stuff you buy on iTunes and the App Store, the more it ties you to the ecosystem.  :P I couldn't switch if I wanted to, with all my purchases.


It did, I agree. I think Windows 8.1 fixes most of that, though. It still isn't ideal, but it's a hell of a lot better than it was. That's Microsoft's MO, though: it always takes a couple of versions for the quirks to get ironed out. Also, keep in mind, people utterly HATED Windows XP when it first hit (I remember reading scathing, hateful articles months and months after its initial release), and it was over a year before it became the most used and most loved Windows OS ever. I don't suspect things will be any different for Windows 8...and it is a hell of a lot better release than Windows Vista was (so the next major release should be a pretty significant improvement even over Win8.1).

Microsoft has a different release MO. Apple builds up an unquenchable fervor by not releasing ANY details about its products until the day they unveil. (Well, they did....it seems that may change under Cook, and I guess we'll see whether that is to the detriment of Apple in the long term.) Microsoft has always approached releases with lots of software leaks, beta versions, community technology previews, etc. I think that can be good and bad, but these days, it seems it gives people too much time to play with new products before they are even released, encounter all the pre-release bugs, and decide they don't like the product. I would prefer Microsoft take the old Apple/Jobs approach: don't release anything until it's done, and when it's released, make sure it's solid, and make it a big party. They wouldn't lose people in the beta and CTP phase that way, they wouldn't get a bunch of pre-release bad press, and they would gain the benefit of people being antsy and excited to see and use the next greatest Microsoft thing. As it is, people just end up bored with the bugs before new Windows versions are actually released, the excitement is gone, so the release suffers, and it takes longer to build momentum.

Maybe the MS reorg will change things...but I don't really trust Ballmer to be anything other than a raging tool...so....


Windows 2000/NT - Good
Windows ME - Bad
Windows XP - Good
Windows Vista - Bad
Windows 7 - Good
Windows 8 - Bad
Windows 9 - ? Fill in the blank.

I love M$ products but not when they revamp something the first time. The second attempt is usually perfect.


Yup, that's pretty much it! :D It would be nice if it became:

Windows 9: Good
Windows 10: Good
   .
   .
   .
Windows N: Good

I get the feeling it will probably be more along the lines of:

Windows 8: So-So
Windows 8.1: Better
Windows 8.2: Even Better
Windows 8.5: Good
Windows 9: Better than Good
Windows 9.1: Even Better than Good

And if there are six to eight months between each release, then reaching Even Better than Good could take years. Assuming they don't keep flip-flopping.


I think Windows 8.x would likely be received MUCH better if they gave the user the choice, especially on a real computer (laptop/tower), to completely divorce the Metro UI from the system and go fully and ONLY into a classic desktop paradigm.


Why? Seriously, Why?

There are several reasons I don't see the point of that. First, if you don't want the new start menu, then you might as well stick with Windows 7. If you want that kind of desktop, there is very little benefit to moving to Windows 8. Aside from doing away with a lot of the fancier glass effects, which improves performance a smidge, and a slightly faster boot time, Windows 8 in desktop mode is nearly identical to Windows 7 in desktop mode. There isn't any compelling reason to move to Windows 8 if you loathe the new Metro UI that much.

Second, Microsoft has long had the desire to move to a 2D immersive, interactive experience. They started with the "Office 2019" videos a few years ago, and have recently released a few new ones. For the latest, see the following link and click "Future Vision":

http://www.microsoft.com/office/labs/index.html

You can see the older videos here:

http://blogs.technet.com/b/next/archive/2011/10/25/looking-back-on-2019.aspx#.Ueb_kI3bPzw

There are some awesome concepts in those videos: ubiquitous touch integration on all 2D surfaces, phones, tablets, and other devices that work in harmony and allow instant transfer of data and responsibility. I really want that. I can't wait until I can tap my phone on the glass coffee table in my living room and see the day's photos, my schedule, etc., all in a clean, pristine 2D touch interface. Windows 8 is the first real stepping stone I've seen towards this vision from Microsoft...and they first started releasing the Office 2019 videos at least five or six years ago.

The way people complain about the new 2D UI...I think it's just the fact that it is new and uncomfortable. I know too many people who have seen Corning's videos of ubiquitous touch computing and said they would love such a thing...then say they hate Windows 8. Well, hey...pretty much ANY of the Office future concept UIs could be created on Windows 8 today. The parallax scrolling seen in the corporate agriculture apps could be done right now (actually, it's already done on the Office Future Vision site I linked above, and it performs like fluid glass on IE10). The transfer of responsibility can already be done today with Xbox SmartGlass, which allows you to either remotely control an Xbox, or transfer responsibility for playback of music or video from a tablet to a TV. And all people can do is complain about it. Sorry, but that just boggles my mind.

It's just a matter of time before better apps find their way into the Windows Store. Some already have...some of the fitness apps are already getting quite good (e.g. FitBit), and offer very advanced interactive UIs. I think it is also just a matter of time before those kinds of advanced manufacturing and agriculture apps find their way into real-world corporations. They just need people with the vision for them, and to develop them. I don't know when ubiquitous touch computing will find its way into tabletops, windows, walls, etc., but I can't wait.

I think that's largely the main gripe about it: trying to use a tablet UI on a desktop (even with touch, not many people want to keep their arms up off the desk, constantly touching the screen)...


I agree a bit here; touch shouldn't be the primary mode of interaction for a desktop. I think Windows 8.1 has already fixed a lot of that, and even so, Windows 8 started out with very good mouse and keyboard support. You never HAD to use touch for the start screen...it has always supported mouse and keyboard. For that matter, it also has great support for a remote control...I use Windows 8 on my Media PC, and use the remote to move around the tiles and run programs. The remote works well in most Microsoft Win8 apps too...for example, I just hit the left or right buttons to scroll through news articles, up and down to scroll through emails, etc.

I think the complaints about the new start screen not working on a desktop are overblown. I also think that Microsoft has done a poor job educating new users on how easy it is to use the mouse to control the new UI. Closing an app, for example, often baffles people. It is actually a simple gesture: point to the top of the screen, click and hold, drag to the bottom. It's a simple, fluid motion once you know it exists...most people don't, and that's the real problem. It's one of Microsoft's fundamental problems...they have a severe gap in helping their users KNOW how to use their OS.

I hear Win8 is pretty snappy and does good things with memory management, but if they don't allow classic computer users to turn Metro OFF, I think they're gonna lose business. People are NOT in a rush to migrate off Win7, and businesses certainly aren't going to migrate; heck, in many cases they're just now coming off XP.


In Windows 8.1, you can't entirely decouple yourself from Metro, but you can get pretty close. You can boot directly into the classic desktop now, and do everything you used to do...with the exception that the restored start button still brings up the start screen, rather than the classic start menu. There are myriad third-party tools (really just registry hacks) that restore the classic start menu. You can do that, if that's what you want...but again, why? You might as well stick with Windows 7 until it EOLs if that's really how you feel. If you are entirely uninterested in moving into a new era of computing, it isn't like Microsoft is holding a gun to your head. ;)

I mean, look at Apple...they don't have the same OS on the tablet/phone that they have on the computer...iOS vs OSX...different beasts. Sure, they are converging to some extent, but not to the same extent MS tried with Win8.

My $0.02,

cayenne


I guess I think that the dual-platform nature of Windows 8 is its strength against iOS, and I think that is how most people feel as well. Windows RT is largely a flop; people don't WANT just Windows Metro, even on a tablet. People, including myself, explicitly held out for the Surface Pro, because we WANT that dual nature. I really love it. I waited years for the ability to run Lightroom on a tablet out in the middle of nowhere, Colorado, where I can tether my DSLR to a fully mobile, fully featured PC that neatly rests in the palms of my hands, and effectively get a large-screen view camera out of a lowly Canon 7D. I didn't need any extra accessories, custom cables, or anything like that to get it working, either. Personally, I think that is a highly valuable thing. That's something no other company has offered me yet, not even Apple.

1477
Read what I write: the real improvements are around 1.1 to 1.4 µm sensel size.

And there are no APS-C or 24x36 sensors from Canon or others yet with that small a pixel size.

BSI costs about 30% more than FSI.

Eric Fossum:

Improvements like BSI typically improve image quality mathematically and from a perception point of view, by increasing QE and reducing effects originating from pixel stack height, when comparing two pixels of equal size. At 1.4 µm pixel pitch the improvement offered by BSI is small. By 1.1 µm pixel pitch, BSI offers a substantial advantage, unless some FSI breakthrough is made. BSI costs more to make, so there is motivation for the FSI breakthrough.

It really depends on the photodiode size. A 7D has 4.3 micron pixels, but the actual photodiode is smaller than that. The entire pixel is surrounded by 500nm (.5 micron) transistors and wiring, which would mean the photodiode...the actual light sensitive area embedded in the silicon substrate, is only about 3.3 microns at best (and usually, the photodiode has a small margin around it...so closer to 3 microns). A 24.4mp sensor would have pixels in the range of 3.2 microns, however with a 500nm process, the actual photodiode pitch is closer to 2 microns.

Canon has already demonstrated that larger pixels can be huge for overall SNR (and therefore actual light sensitivity) with the 1D X. Despite the fact that the 1D X is a FF sensor, it benefits greatly from a larger pixel, and thus a larger photodiode size...as the gain is relative to the square of the pixel pitch. Production of a BSI APS-C 24.4mp sensor would mean that it could have 3.1 micron photodiodes that perform at least as well as the 7D's 18mp sensor, as total electron capacity is relative to photodiode area. A 24.4mp BSI 7D II could then be roughly as capable (~21,000 electron FWC @ ISO 100) as an 18mp FSI 7D.

Personally, I find that to be quite a valuable thing. Especially given that the 7D currently performs about as poorly as one could expect by today's standards. A 2 micron photodiode in the 7D II would mean SNR suffers even more, which is going to have an impact on IQ, especially for croppers, so I can't imagine Canon doing that.
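
A back-of-the-envelope sketch of that scaling argument, assuming only that full well capacity scales with photodiode area (the FWC value and wiring margins below are rough estimates taken from the figures above, not published specs):

```python
# FWC assumed proportional to photodiode AREA; all pitches in microns.

fwc_7d = 21_000   # e-, approximate 18 MP 7D full well at ISO 100
pd_7d = 3.3       # ~4.3 um pixel minus 500nm-process wiring margins

candidates = {
    "24.4 MP FSI (500nm)": 2.0,  # ~3.2 um pixel, ~1.2 um lost to wiring
    "24.4 MP BSI":         3.1,  # nearly the whole pixel stays sensitive
}
for label, pd in candidates.items():
    fwc = fwc_7d * (pd / pd_7d) ** 2
    print(f"{label}: ~{pd} um photodiode -> ~{fwc:,.0f} e- FWC")
# 24.4 MP FSI (500nm): ~2.0 um photodiode -> ~7,714 e- FWC
# 24.4 MP BSI:         ~3.1 um photodiode -> ~18,531 e- FWC
```

The BSI variant lands in the same ballpark as the 18 MP 7D, while the FSI variant gives up well over a stop.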

You are mixing things up. Why do you think I'm saying that Canon needs 180nm or smaller tech?

The real benefits of BSI are found in very small sensels, and I do not think it is a good idea to talk too much about what you believe or think when Eric Fossum has shown where the benefits of a BSI construction start. And that is around 1.4 micron and smaller:

"A tipping point for BSI will be the 1.1 micron pixel node where FSI will likely be unable to achieve the market-required performance – necessitating a transition to BSI for applications that require this smaller pixel."

There are some benefits of BSI with larger pixels too, and that is with wide angle lenses and corners, for example an SLR + wide angle lens and the incident light angle.

I am not mixing anything up. The primary benefit of 180nm is that you have more area per pixel to dedicate to the photodiode. In the case of 1.4 micron pixels, use of a 500nm process is already a non-option...you would have already passed the limit you claim would be reached with 1.1 micron pixels on a 180nm process...the photodiode of a 1.4 micron pixel on a 500nm process would be maybe .3 microns (300nm). You have to translate from a 180nm 1.4 micron pixel to a 500nm 3.2 micron pixel. The wiring and transistors in a 500nm process take up a lot of space. That space could be put to better use...and assuming one does not change from a 500nm process....well, then BSI DOES have value.

Instead of taking up ~1 micron of pixel pitch for wiring and other logic, you would take up a quarter of a micron if you moved to a 180nm process on APS-C. That means, for a 4.3 micron pixel pitch, the actual photodiode could be ~3.95 microns, rather than 2.1 microns. That increase in area is where you gain the greatest potential for an improvement in IQ. Now, with 180nm transistors, you can pack more of them in. Canon could stick with a 2.1 micron photodiode, and have a lot more logic circuitry around it with a 180nm process. That would allow them to add more sophisticated noise reduction logic, maybe drop in some on-die ADC, etc....simply because each transistor and all the wiring consumes less space. But fundamentally, photodiode area is the key thing from an SNR standpoint, and a higher SNR leads to less noisy images.

When it comes to Canon's read noise, the primary issue there is high frequency components and binned pixel processing on an off-die component. The longer the signal remains analog, and the closer any pixel processing is to a high frequency component (a DIGIC processor is a CPU...the whole thing is a high frequency component), the greater the chance that read noise will interfere with shadow detail. It doesn't matter what the fabrication process is...Canon could move to 180nm, and keep using their Digic processors with off-die ADC. They will continue to have shadow noise problems, despite the move to a better process. If they move the ADCs on-die, and do something akin to what Exmor does, by moving the PLL, Clock, and other high frequency components to an isolated area away from those ADC units, then Canon could reduce their shadow noise.
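
A toy model of that read-noise argument (all noise figures are invented for illustration; the only structural point is where the noise enters the chain):

```python
import math

# Noise added downstream of the programmable gain (long analog path,
# off-die ADC) is divided by that gain when referred back to the pixel,
# so it dominates at low ISO and fades at high ISO.

def dr_stops(fwc_e, pixel_noise_e, downstream_noise_e, gain):
    read_noise = math.hypot(pixel_noise_e, downstream_noise_e / gain)
    return math.log2((fwc_e / gain) / read_noise)

for gain in (1, 4, 16):  # roughly ISO 100, 400, 1600
    off_die = dr_stops(60_000, 3.0, 30.0, gain)  # noisy off-die readout
    on_die  = dr_stops(60_000, 3.0, 3.0, gain)   # quiet on-die ADC
    print(f"{gain:2d}x: off-die {off_die:.1f} stops, on-die {on_die:.1f} stops")
# The gap is ~3 stops at base ISO and nearly vanishes by 16x gain.
```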

That only affects low ISO, however, and a lot of Canon users care more about high ISO. Using a BSI design, even in APS-C, allows photodiode area to remain large. Canon could also still add more advanced per-pixel logic in a BSI design even if they stay on a 500nm process, as they would have the full photodiode area on the front side to utilize for logic (i.e. additional noise reduction circuitry...one of their patents described a power-source free CDS system that decoupled the power input while performing CDS, as keeping the power coupled continued to add dark current noise.)

It is not NECESSARY for Canon to move to a 180nm process, or to only use BSI with small form factor sensors having 1.4 micron pixels or smaller, in order to continue innovating and improving IQ. As far as I am concerned, for the kind of high ISO work I do, I would LOVE to see Canon produce a FF BSI sensor. That would allow them to increase photodiode area, particularly in a shared pixel architecture, by another micron. Right now, in the 1D X, photodiode pitch is around 5.8 microns, while the actual pixel pitch is 6.95. I think it would be awesome to see a 1D X successor with a BSI design that had 6.95 micron photodiodes. That is a 43% increase in total photodiode area, an increase that would bring a measurable improvement in high ISO performance (imagine an actually usable ISO 25600, and maybe 51200, for wildlife and birds).

Again, Canon could move to a 180nm process, and either pack more logic into each pixel and improve readout NR (i.e. CDS), or reduce the logic, increase photodiode area, and move the ADC on-die, which at the very least should increase the maximum readout rate and possibly improve read noise performance. There are a whole lot of options...Eric Fossum isn't the only source of CIS innovation, nor the bible of what is and is not possible with CIS devices. Eric Fossum has done a lot of research in the area, but so has Canon (remember, it wasn't that long ago that Canon had the best sensors in the digital camera arena...they certainly have the knowledge and know-how...I think their current reliance on 500nm is more of a business and financial matter than a lack of ability).

I think moving to BSI, even if Canon sticks to 500nm, is a better option. It frees up the entire front side for logic, and the entire back side to light sensitive photodiodes. It is something Canon could do with their current process, potentially freeing up a billion dollars for other purposes (R&D, greater production capacity, whatever.)
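
For what it's worth, the area arithmetic behind those two claims checks out (the pitches are the estimates used above, not published specs):

```python
area_gain = lambda new_um, old_um: (new_um / old_um) ** 2 - 1

# APS-C photodiode: ~2.1 um with 500nm wiring vs ~3.95 um with 180nm:
print(f"{area_gain(3.95, 2.1):+.0%}")  # +254% light-sensitive area
# 1D X: ~5.8 um FSI photodiode vs ~6.95 um full-pixel BSI photodiode:
print(f"{area_gain(6.95, 5.8):+.0%}")  # +44%, the ~43% figure above
```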


1480
Yeah, I saw the info regarding the patent. Seems like they have the know-how, or even have had it for some time. I guess they are just waiting for the right time; they can keep up with current market trends just fine. If things change drastically, then they'll probably step it up. I have faith. And in the meantime, there's always Magic Lantern! Hey hey!

The Magic Lantern 14-stop DR thing is interesting. Certainly not the same as what you get with a D800 and its Exmor...you lose vertical resolution. To me, the point of having additional native hardware DR is the ability to recover shadow DETAIL. You can always downsample, which will improve image DR, but at the cost of detail...so to me that is kind of a net-zero tradeoff (at least when printing...it doesn't matter if you're uploading online.)

I guess for web publishers, the trick will be quite handy, and will certainly be better than the banding you get now on a Canon sensor.
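
A quick numpy illustration of that tradeoff, with made-up noise: averaging an NxN block cuts random noise by N (about log2(N) stops of extra usable shadow range) while costing N× resolution in each direction.

```python
import numpy as np

rng = np.random.default_rng(0)
shadows = 100.0 + rng.normal(0.0, 25.0, size=(2000, 2000))  # noisy patch

for n in (1, 2, 4):
    h, w = shadows.shape
    down = shadows.reshape(h // n, n, w // n, n).mean(axis=(1, 3))
    gain = np.log2(25.0 / down.std())
    print(f"{n}x downsample: noise ~{down.std():5.2f}, ~{gain:+.1f} stops")
# 1x: ~25.0 (+0.0), 2x: ~12.5 (+1.0), 4x: ~6.25 (+2.0) -- DR up, detail down
```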

1481
Thanks jrista and Driz for the info. I couldn't quite picture a BSI sensor, so I Wikipedia'd it and found some links that were helpful. I learned something today! This is why I love CR!

Let me see if I have this right -

So FSI is cheaper, as only one side needs to be treated in the manufacturing process; it's more common, and it's what Canon uses. However, light can be reflected by the metal layer that sits in front of the photodiode. One way to get around that would be to make the transistors and metal logic parts smaller, right? Or just have fewer pixels. See the 1D X.

And BSI is more expensive to make due to both sides of the wafer being treated; however, it essentially captures more light and is better for low-light photography, as light hits the silicon layer directly. So this has, up until recently, only been used in very small sensors, right? I read Sony was putting a 1-inch BSI sensor in the RX100 II.

Some conflicting info though. Have Sony found a way to reduce the cost of producing a BSI sensor, then? And are there any other disadvantages to BSI?

I would imagine that the equipment used to make BSI sensors also costs more than for FSI, and that for Canon to switch they would have to spend a boatload of money, which in turn would mean more expensive cameras? Or can it be done relatively easily, and is Canon working on this for the big megapixel body next year?

I think Sony quite simply just adds more debt in order to manufacture their sensors. They have tens of billions in debt, in no small part due to the creation of their highly modern fabs. Sony does bring in revenue, but last I heard, their operating expenses were higher, so they are losing money to the tune of several hundred billion yen a year. I can't say whether they have found ways to make BSI fabrication cheaper or not...although I suspect they can certainly refine the process over time.

Canon is capable of producing sensors using more advanced processes. Currently, they use 8" wafers for fabricating smaller CMOS devices, sensors for small cameras. An 8" wafer doesn't offer as much surface area, so it is more expensive to fabricate larger sensors, like APS-C and FF, on them. They build their own fabs, so I see no reason they couldn't build a fab capable of 180nm on 12" wafers.

I think it is probably more likely that Canon is using some kind of BSI 500nm process for their high density APS-C and FF sensors. They actually have a patent for such a thing, and it wouldn't require them to build a new fab...and it would really be the only way to continue using a 500nm process and still make sensors with even smaller pixels produce IQ that is on par with their past and current generation sensors. I haven't heard even a rumor of anything indicating they have created new fabs or anything like that (although I certainly hope they have...I don't see how Canon can remain competitive moving forward without jumping to a 180nm process, while the rest of the world is already there or even moving beyond. Canon has certainly been able to remain competitive with 500nm...but they have to be well into the realm of diminishing returns now.)
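
The wafer-size point is easy to quantify with a naive gross-die estimate (this ignores edge loss, scribe lines, and yield, so real counts are lower):

```python
import math

def gross_dies(wafer_mm, die_w_mm, die_h_mm):
    # Optimistic estimate: wafer area divided by die area.
    return int(math.pi * (wafer_mm / 2) ** 2 // (die_w_mm * die_h_mm))

for wafer in (200, 300):  # 8" vs 12" wafers
    print(f"{wafer} mm wafer: ~{gross_dies(wafer, 36, 24)} FF dies, "
          f"~{gross_dies(wafer, 22.3, 14.9)} APS-C dies")
# 200 mm wafer: ~36 FF dies, ~94 APS-C dies
# 300 mm wafer: ~81 FF dies, ~212 APS-C dies
```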

1482

I was never quite sure about this topic; it seemed very electrical-engineering related, and there were a lot of acronyms and stuff that confused me and made my brain hurt, but this post by jrista is the first time I kinda understand what you guys are talking about! Thanks!

Rookie question - what do BSI and FSI stand for?


Glad it was helpful. Any engineering stuff aside, an image sensor is really just a circuit board with photosites that generate electric charge in response to light, surrounded by a bunch of electronic logic (transistors, capacitors/resistors, and wiring) designed to make it possible to "read" out the charge of each pixel when told to do so. Generally, as a matter of physics, the larger the area of the sensor, the more light can be detected and converted into charge.

BTW, BSI stands for Back Side Illuminated; it has to do with the specifics of how the sensor is manufactured. These nano-scale circuit boards are "etched" onto the surface of highly polished, high-grade silicon wafers. Etching occurs via light, which is beamed through a much larger scale "circuit board template" and onto the surface of the silicon (it's a lot more complicated than that, as etching a CMOS device is usually done in layers, with depositions of various materials for each layer, and further etchings with different templates...but that's the gist). The "front" side is the side that is etched. Usually, all the logic is etched onto the front side, and the photodiode itself is simply appropriately doped silicon in a grid at the bottom of the "well" created by all the transistors and wiring. Sensors etched in such a way are FSI, or Front Side Illuminated.


Fig 1: You can see the photosite well in this image. The "pixel cathode" is the photodiode. Various wiring surrounds the photodiode. Above the pixel is a color filter and a microlens.


Fig 2: You can see the grid layout of pixels in this image.

A newer technique puts the photodiode on the back of the silicon wafer, with the wiring then etched on the front side and connected to the previously etched photodiodes; it was originally designed to support the increasingly small photodiode area left available in small form factor sensors (the ones a fraction of a fingernail in size, for cell phone cameras, cheap point & shoots, etc.). Color filters and microlenses are usually etched into the back side as well, above the photodiode itself. Sensors made this way are BSI, or Back Side Illuminated. The process is more expensive because, normally, only one side of the wafer needs to be etched or doped; the back side is usually just part of the "substrate", where defects (scratches, pits, other marks, or even particulate embedded in the surface) do not matter. Since both sides of the wafer are important in a BSI design, both sides must be not only polished, but kept nearly defect-free. Hence it is more expensive and harder to manufacture.


Fig 3: A Sony BSI sensor design. You can see all of the logic on top (front side), and the microlenses, color filters, and photodiode on the bottom (back side). You can see where the photodiode for each pixel connects to its logic in the middle.

An alternative to the BSI design is the LightPipe design. Canon has patents, as well as prototype (and possibly production...not sure) designs, for a 180nm Cu (copper wiring) LightPipe sensor with a double layer of microlenses. LightPipes fill in the well with a high refractive index material. Normally, any light not directly incident on the photodiode itself will convert to heat or possibly reflect; that results in a loss of light energy, reducing the sensitivity of the sensor. A lightpipe guides that light down to the photodiode instead.


Fig 4: Canon's 180nm Cu LightPipe sensor cross section. This is for a very small sensor, possibly with pixels less than 2 microns in size (as evidenced by the very large wiring blocks next to each pixel, which on a 180nm process, means these pixels are quite small.)
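
To tie the FSI/BSI/lightpipe discussion together, here is a toy sensitivity model (fill factor times quantum efficiency, with entirely invented numbers; only the trend is the point) showing why BSI matters most at tiny pixel pitches, as the Fossum quote earlier in the thread says:

```python
# Toy model: effective sensitivity ~ fill factor x QE.

def sensitivity(pixel_um, wiring_um, qe):
    aperture = max(pixel_um - wiring_um, 0.0)   # light-sensitive pitch
    return (aperture / pixel_um) ** 2 * qe      # fill factor x QE

print(f"4.3 um FSI: {sensitivity(4.3, 1.0, 0.45):.2f}")  # ~0.27: fine
print(f"1.4 um FSI: {sensitivity(1.4, 1.0, 0.45):.2f}")  # ~0.04: the well
                                                         # swallows the light
print(f"1.4 um BSI: {sensitivity(1.4, 0.0, 0.50):.2f}")  # ~0.50: whole back
                                                         # side collects
```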

1483
Lenses / Re: Dxo tests canon/nikon/sony 500mm's
« on: July 16, 2013, 07:31:47 PM »
Simple fact of the matter is a better lens will perform better on ALL sensors, 20mp, 30mp, or 50mp. The problem with DXO's tests is they quite simply don't give you a reasonable camera-agnostic basis from which to compare lenses.


Actually, they do. There is a way to extract the pure lens resolution from the data they used to publish (full MTF curves, not the nonsense they publish now).

Umm, no...sorry. The final image is a convolved result...one could not extract a "pure" lens resolution...you could only approximate it. (For the very same reason one cannot perfectly extract noise from a noisy image...it is part of a convolution produced by a complex real-world system. Too much uncertainty and loss of information prevents perfect noise removal.)

You are wrong on that. I am not saying that you can remove the AA filter/sensor blur from the image. I am saying that you can find (estimate, if you wish) the strength of the sensor blur. If you are interested in the math, go to my profile, click on the link, etc. Deconvolution is a very different process, very unstable, but you do not need to deconvolve to estimate the effect of the sensor blur. You can get instability only if you use sensors with such a low resolution that the lenses you want to compare look the same (and they are not).

The problem with all that is that even if you are going to get the pure lens resolution somehow, you still need to consider the blurring effect of a future sensor, and compute the combined resolution again. So my question stands: are you sure you know how to do that?

It doesn't matter what kind of sensor you have: low resolution, high resolution, or tomorrow's resolution. A convolved result is a convolved result, and in this case stability (or the lack thereof) doesn't really apply like it might when trying to denoise or deblur. You are talking about reverse engineering the actual lens PSF from an image produced by a grid of spatially incongruent red, green, and blue pixels (likely covered by microlenses), then further interpolated by software to produce the kind of RGB color pixels we see on a screen and analyze with tools like Imatest (or DXO's software). The moment you bring the sensor into play, there are significant losses of data, and you can only, at best, guess at what those losses are (unless you have some detailed inside knowledge about whatever sensor it is you're testing with).

Your article is an interesting start, but you are assuming a Gaussian PSF. An actual PSF is most definitely not Gaussian, nor is it constant across the area of the lens (i.e. it changes as you leave the center and approach the corners...do a search for "spot diagram" to see actual lens PSFs produced mathematically from detailed and accurate lens specifications; even for the best of lenses, outside of the most central on-axis results, a PSF can be wildly complicated). Not to mention the fact that you have to guess the kernel in the first place, so whatever your result, it is immediately affected by what you think the lens is capable of in the first place.

Personally, I wouldn't trust any site that provided "lens resolution" results reverse engineered from an image produced by any sensor. I would actually rather take the "camera system" tests than have someone telling me what their best guess is for lens performance.
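
A small numpy sketch of the point; note that the Gaussian PSF here is exactly the assumption being criticized, which is why the recovered "pure lens" number depends entirely on it:

```python
import numpy as np

# Under a Gaussian-PSF ASSUMPTION, blur widths add in quadrature, so a
# "pure lens" blur can be back-computed from a measured system blur --
# but only as well as you guessed the sensor's contribution.

sigma_lens, sigma_sensor = 1.2, 0.9         # px, made-up values
sigma_measured = np.hypot(sigma_lens, sigma_sensor)  # what a chart test sees

sigma_sensor_guess = 0.8                    # slightly wrong guess
lens_estimate = np.sqrt(sigma_measured**2 - sigma_sensor_guess**2)
print(f"true lens blur {sigma_lens:.2f} px, estimated {lens_estimate:.2f} px")
# A 0.1 px error in the assumed sensor blur already skews the answer, and
# the subtraction blows up as the guess approaches the measured blur.
```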

Quote
One wouldn't, necessarily. But you're missing the point. The point is to call out DXO's BS approach to performing lens tests. The point is to clearly note that those tests are "camera system" tests...they are neither lens tests nor sensor tests. I wouldn't go so far as to say that is 100% useless, but it is certainly biased the way DXO does it, and there is a suspiciously long-term bias towards a particular manufacturer by DXO. (Not just away from Canon, either...even the Sony lens, which actually has better transmission, should have scored better...but it was limited by a sensor!)

Of course those are lens+camera tests, and DXO never said otherwise.

Hmm, DXO's own description on the lens tests page begs to differ:

Quote
DxOMark's comprehensive camera lens test result database allows you to browse and select lenses for comparison based on its characteristics, brand, type, focal range, aperture and price.

Nowhere in there do they state that the camera sensor is a factor in your ability to select and compare lenses. They only state that the lens characteristics, brand, type, focal range, aperture, and price are the applicable factors.

1484
Lenses / Re: Dxo tests canon/nikon/sony 500mm's
« on: July 16, 2013, 06:23:16 PM »
Simple fact of the matter is a better lens will perform better on ALL sensors, 20mp, 30mp, or 50mp. The problem with DXO's tests is they quite simply don't give you a reasonable camera-agnostic basis from which to compare lenses.


Actually, they do. There is a way to extract the pure lens resolution from the data they used to publish (full MTF curves, not the nonsense they publish now).

Umm, no...sorry. The final image is a convolved result...one could not extract a "pure" lens resolution...you could only approximate it. (For the very same reason one cannot perfectly extract noise from a noisy image...it is part of a convolution produced by a complex real-world system. Too much uncertainty and loss of information prevents perfect noise removal.) A mathematically generated MTF that takes into account the real mathematical point spread function of the entire lens is really the only way to get any realistic idea of how a lens will actually perform. The moment that convolution is further convolved by a sensor, you lose the ability to "perfectly" (or purely) revert to the prior result...there is too much uncertainty and loss of information.
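
In linear-systems terms (standard optics, not anything DXO publishes), the relations behind that argument are:

```latex
\[
  I_{\mathrm{measured}}
    = \mathrm{PSF}_{\mathrm{lens}} * \mathrm{PSF}_{\mathrm{sensor}} * I_{\mathrm{scene}}
  \;\Longrightarrow\;
  \mathrm{MTF}_{\mathrm{system}}(f)
    = \mathrm{MTF}_{\mathrm{lens}}(f)\,\mathrm{MTF}_{\mathrm{sensor}}(f)
\]
\[
  \sigma_{\mathrm{system}}^{2}
    \approx \sigma_{\mathrm{lens}}^{2} + \sigma_{\mathrm{sensor}}^{2}
  \quad\text{(Gaussian-PSF approximation only)}
\]
```

Dividing out an assumed sensor MTF, or subtracting an assumed sensor blur in quadrature, therefore yields only an estimate of the lens term, conditioned entirely on those assumptions.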

Quote
The Nikon 500/4 performs "on par" (tongue in cheek) with the Canon 500/4 solely because of the higher resolution sensor. That sort of tells you that the Canon lens is particularly good, because it is performing so well on a worse sensor...but you don't really have any exact way of comparing. You only get a "feeling" that it performs so well.

Why in the world would you want to know how a Canon compares to a Nikon without a body? For bragging rights? They tell you what is achievable with the current bodies on which the lens works, the way it is designed to work. A better lens on one body will be better on future bodies as well.

One wouldn't, necessarily. But you're missing the point. The point is to call out DXO's BS approach to performing lens tests. The point is to clearly note that those tests are "camera system" tests...they are neither lens tests nor sensor tests. I wouldn't go so far as to say that is 100% useless, but it is certainly biased the way DXO does it, and there is a suspiciously long-term bias towards a particular manufacturer by DXO. (Not just away from Canon, either...even the Sony lens, which actually has better transmission, should have scored better...but it was limited by a sensor!)

1485

Have you actually used a Windows Phone 8 device? They are certainly not a joke, and after owning several generations of iPhone, I much prefer the Metro experience. The app gap is shrinking fast; most of the apps I want are already available, and those that aren't are either coming, or I can write them myself. I'd also point out that as the Android vs. iPhone battle has raged, iPhone has been losing, while Android and Windows have been gaining. Windows market share is about doubling every year, particularly with the Nokia Lumia phones. Again, I think people who skip past a Lumia just because it's Nokia or just because it's Windows are short-changing themselves.

No point in having that argument really; nobody is going to win anything.
I find the 'app gap' irrelevant; in about 5 minutes I'd downloaded (free) every app I'm likely to need on my phone. (Nokia 925, Win8)

I think what Nokia are doing is fascinating; apart from IQ, what I want to see improve substantially is focus and shutter lag.

Shutter lag on an electronic shutter has always been an oddity to me. Is it simply because most smartphone cameras (and, for that matter, P&S cameras) insist on making a cutesy and unbelievably annoying little fake shutter click when people press the button? I figure that, assuming the lens is focused, taking a picture should be near instantaneous...
