Messages - jrista

1516
EOS Bodies / Re: Will the next xD cameras do 4k?
« on: February 20, 2014, 12:27:37 PM »
I would even be willing to bet that Canon could add new 4k features and functionality to the 1D X and 5D III as well, if the time comes that consumers really demand it.

If I were a videographer right now, using either the 1DX or the 5D3 since their respective availability, and Canon releases firmware updates enabling 4K recording with these cameras, then ...
  • it would mean that the camera was capable of this functionality right from the start, but Canon chose to "cripple" the camera for some obscure and probably financial reason; and
  • I'd have serious doubts about ever using Canon products again.

But, hey, that's just silly old me!  ;)
I have no doubts that a 1DX or a 7D2 would be capable of capturing 4K video with a firmware update, but unless there is faster storage, you will have slow frame rates and heavy compression.  Ask yourself: what is the good of having 4K video that is so heavily compressed it looks like upsampled 2K video?

You need fast storage before you will get decent 4K video.

You seem to keep forgetting that the cinematic frame rate is 24fps. Anything higher, and you have "high speed recording for slow motion video". It is not yet standard to play back videos on your TV at anything faster than a 24fps frame rate. The TV REFRESH rate may be higher, as high as 240Hz, but refresh rate and frame rate are very different things. Hollywood itself is just now starting to EXPERIMENT with 48fps frame rates; when it comes to any higher frame rates, the purpose is to record at a higher rate that is still played back at the standard...that's what produces slow motion. For all intents and purposes, for consumers, THE frame rate is 24fps. That isn't low, it's just standard. And it's what most people's videos will be played back at.

At 24fps, with standard forms of video compression (which do a remarkable job of not making video look like crap while reducing its size by as much as 72%), Canon's current cameras should be more than capable of handling 24fps 4k video. They just wouldn't be able to do RAW 4k video (though that really is a high-end feature; most consumers wouldn't even know what to do with RAW 4k video, and probably wouldn't have the tools to do anything with it if they did know what it was).
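
To put rough numbers on that, here's a quick back-of-envelope sketch. The resolution, bit depth, and the ~72% compression figure are just the illustrative assumptions from this post, not Canon specs:

Code:
# Back-of-envelope: uncompressed vs. compressed 4k data rates.
# All values are illustrative assumptions, not Canon specifications.

width, height = 3840, 2160    # UHD 4k frame (assumed)
fps = 24                      # cinematic frame rate discussed above
bits_per_pixel = 12           # 4:2:0 8-bit works out to 12 bits/pixel
compression = 0.72            # ~72% size reduction, per the figure above

raw_rate = width * height * bits_per_pixel * fps / 8 / 1e6      # MB/s
compressed_rate = raw_rate * (1 - compression)

print(f"Uncompressed 4:2:0 stream: ~{raw_rate:.0f} MB/s")
print(f"After ~72% compression:    ~{compressed_rate:.0f} MB/s")

That's the difference between needing exotic storage and something a fast memory card can keep up with.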

1517
EOS Bodies / Re: Will the next xD cameras do 4k?
« on: February 20, 2014, 02:16:19 AM »
...
Um, first off, the PS4 isn't 4k capable. Both Microsoft and Sony have been rumored to be investigating support, but neither has 4k support yet. If 4k was as close as you say, they would both have been released with 4k OOTB.

Sony talking about 4K support in the PS4:
http://crave.cnet.co.uk/gamesgear/ps4-can-handle-4k-output-but-not-for-games-50010491/

... so it will play 4K content on the PS4. No support for 4K games, but that's different from 4K movies/video.

Are you sure that is 4k BluRay playback? Or is it just high resolution online video playback (which is certainly a step in the right direction)?

Quote
Furthermore, 4k upscaling is not 4k. You can upscale to any resolution, but that doesn't make it actually that resolution. If 4k was just around the corner, you would have wide scale support for 4k natively in existing bluray players.

Upscaling to 4k is producing 4k output. Just the same as upscaling 576p or 720p to 1080p produces 1080p output. The detail might not be there as it would with native 4k, but that's another story. The point here is that the generation of 4k video output is appearing in consumer devices. How it's produced is not important. The market is moving in the direction of 4k. Either get on the train or get left behind.

http://www.samsung.com/us/video/blu-ray-dvd/BD-F7500/ZA

http://store.sony.com/sony-dual-core-blu-ray-disc-player-4k-upscaling-zid27-BDPS6200/cat-27-catid-All-Blu-ray-DVD-Players

and so on. It has started and will keep coming and filtering down. Above a certain price point, it is present in all new BluRay players.

Denon AVR doing 4K pass-through and upscaling:
http://usa.denon.com/us/Product/Pages/ProductDetail.aspx?PCatId=AVSolutions(DenonNA)&CatId=AVReceivers(DenonNA)&Pid=AVRE400(DenonNA)

On nearly all products except for the cheapest:
http://www.avsforum.com/t/1465528/the-official-2013-denon-e-series-x-series-avr-model-owners-thread-faq

Note that all of these Denon products were announced almost 12 months ago.

4K BluRay disc spec by end of *this* year:
http://www.hdtvtest.co.uk/news/bda-4k-201401093581.htm

HDMI 2.0:
http://www.hdmi.org/manufacturer/hdmi_2_0/
Was finalised last September:
http://www.electronista.com/articles/13/09/04/hdmi.moves.to.version.20.boost.performance.for.ultra.hd.tvs/
Expect all new AVR style products to be HDMI 2.0 next year if not this year. After this year, the expectation will be for everything supporting HDMI to be delivered with HDMI 2.0 support.

Sorry, but upscaling is NOT the same thing. It was NEVER the same thing with 1080p either. Either you haven't really watched many movies from actual BluRay discs, or you're just making an argument for the sake of argument here. Upscaled 480p & 720p to 1080p is NOT 1080p, and neither is 1080p upscaled to 4k actually 4k. Upscaling doesn't even bring you close, especially as the scaling can be done at the wire (meaning it doesn't require nearly the processing power or bandwidth of true 4k).

Upscaling is the stopgap measure you implement between technologies, before the new technology has hit the mainstream. It's for the early adopters who buy a $40,000 80" 4k 3D TV.

Sorry, but I still don't buy this argument. I'll happily admit if I'm wrong about the PS4 playing 4k BluRay discs, but there is no way you can convince me that the advent of 4k upscaling heralds the imminent explosion of 4k in everyone's homes, and the imminent demand for 4k video recording in every device.

Quote
None of these things have come to pass, which indicates that the beginning of broad 4k adoption is not just around the corner.

I don't think you're looking in the right place(s).

If you're announcing a new product that does video next year, it'll be old/out-dated technology before it hits the shelf if it doesn't do 4k. If Canon do a 7D Mark II this year without 4k and the successor to that is some 3 or 4 years away then its life span as a modern and relevant tool in video production will be quite short. If I were Canon and it was my xD DSLR being launched this year without HDMI 2.0 or 4K, I'd hold it off another 12 to 18 months to include it so that it has a reasonable chance of having a life expectancy of more than 12 to 18 months in the market place. But that's just me.

Heh, so I agree with Neuro here. This sounds like the successor to the "DR is everything" argument. Boy, you anti-fanboys really know how to pick 'em and play 'em. The one feature you want that you can't get on Canon, and it's immediate grounds for the doomsday arguments. Good grief. I would offer that Canon producing higher DR sensors and making that a ubiquitous feature in all their DSLRs is VASTLY more important than them suddenly putting 4k video output in all of their DSLRs "right now". And you know where I stand on the DR argument.

So yes, it really is just you (and maybe a handful of your anti-fan brothers in arms) who DEMAND that Canon jump on the bandwagon the moment it's out of the gate, before more than a handful of consumers even knows there's a race going on.

BTW, Canon has done some pretty amazing things in the past with nothing but firmware. If they can support processing full resolution 10fps RAW right off the sensor, then Canon is MORE than capable of handling a 4k pulldown. If they don't release the 7D II with 4k support right out of the gate, I'd be willing to bet good money they add it later on with a firmware update. I would even be willing to bet that Canon could add new 4k features and functionality to the 1D X and 5D III as well, if the time comes that consumers really demand it. All of those sensors have more than enough resolution...the 5D III is the only one that might not have the bandwidth and processing power, but then again, with a proper pulldown, handling 4k video probably wouldn't need as much bandwidth as 6fps RAW (which requires about 270MB/s; with a 4:2:2 YCbCr type pulldown, you would have a 14-bit full-precision luma (Y) channel and lower-precision color channels, in which case 270MB/s is probably just about right).
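
For reference, here is a minimal sketch of where a figure in that ballpark comes from; the ~22MP resolution and 14-bit depth are assumptions for illustration, not Canon's published numbers:

Code:
# Where a ~270MB/s-class figure for 6fps full-res RAW roughly comes from.
# Sensor resolution and bit depth are assumptions for illustration only.

pixels = 5760 * 3840      # ~22MP full-frame sensor (assumed)
bit_depth = 14            # 14-bit RAW samples
fps = 6

pixel_data = pixels * bit_depth * fps / 8 / 1e6   # MB/s of raw pixel data
print(f"Raw pixel data alone: ~{pixel_data:.0f} MB/s")
# File headers, metadata, and readout overhead push the real-world
# figure somewhat higher than this floor.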

Finally, if you are so dead set on having more DR and 4k video right now...I honestly don't know why you're still here. There are other products out there that offer what you want. You clearly don't seem to like Canon. Even if you like their ergonomics, you put so much weight on all the things Canon doesn't have...go out there and get the functionality you want, and be done with it.

There isn't any point in all the doomsaying about Canon and the very few features they lack in their otherwise exceptionally feature-rich cameras. Canon will release that functionality when they release it. We really don't have any control over that. Not when the technology is as complex as it is...it takes too much time for new technology to be researched and developed for customers to really have any say over what Canon does or does not include in the next camera. By the time we all start hollering for something, the next product is usually so far along in the pipeline that it's too late to change. I think it is actually smart for Canon to kind of stagger their key model releases...5D III, 1D X, 6D, then finally 7D. It spreads things out enough that by the time they start putting the finishing touches on their first 1D X II prototypes, they will be able to factor in some of their customers' feedback.

1518
EOS Bodies / Re: Will the next xD cameras do 4k?
« on: February 20, 2014, 12:07:10 AM »
let's have a look at a video from guys who actually shoot video.

enough boring talk from guys in this thread who don't know much about video but talk like they have invented photography and videography.

just for a change...

http://www.canonwatch.com/moving-portraits-fashion-industrys-famous-people-shot-4k-canon-eos-c500/

As far as my own perspective goes, I am not arguing with you about the utility of 4k nor am I arguing with you about the fact that it will absolutely be the standard at some point.

My main issue is with people who think that it is something Canon SHOULD or will be doing very soon based on the fact that the GH4 is out. It is NOT the standard for the market as a whole currently.

Furthermore, the GH4 is in a price range that would require Canon to put 4k in their consumer lines in order to compete directly (right now). THAT is not something I believe will happen (right now), and it is truly the only point I have been trying to convey.

If we are not talking about Canon being forced to compete directly, then the issue is moot as they already offer 4k in their lineup.

+1. Totally agree. This has been the argument all along, that the notion Canon would "have" to respond to the GH4 is wrong. Canon will make 4k a standard feature when 4k is a standard feature in general.

Well Panasonic has effectively introduced 4k as the standard HD video on their digital cameras.

Given that, I'd expect Olympus to follow suit. Not sure about Sony or Nikon. But within the year, I expect 4k to be a tick-box feature that is standard across new digital cameras above $1000. Well all except Canon of course because nobody with a Canon DSLR wants 4k video. *shrug*

You're missing the point. When I say "a standard feature in general", I really mean "IN GENERAL". As in, the general public will be fully aware of 4k, will understand, at least to a basic level, what 4k is, will have 4k TVs, and will generally want to shoot their home videos of their doggies and babies in 4k.

Ok, so you're using a different definition of "in general" than I am. I don't see the "IN GENERAL" as needing to refer to general availability to the general public in all related products.

FWIW, BluRay players and AV receivers are already offering 4K processing and up-scaling (this was news at CES in January 2013.) My BluRay player can upscale content to 4K, my AV receiver can process or upscale content for 4K. The only part of the equation that is missing is the TV screen or projector. The PS4 is 4k capable. 4k is already a tick-box item when comparing various products when shopping for AV goodies.

I don't think that Canon will have the choice of waiting for general adoption of 4k. Those in video circles will go, "Well, I can buy a $3000 Black Magic camera or a $10,000 Canon camera. Hard choice." The obstacle for many people with the Nikon D800 and D610 is that a change of lenses is required.

If I want to shoot 4k video, I can spend $3000 on Black Magic and use my EF lenses or spend $10,000 on Canon and use my EF lenses. Heck, the Black Magic MSRP is now also cheaper than the Canon 5D Mark III MSRP. So even if I want to shoot 1080p now and move to 4k in the future, I'd have to be crazy to buy a 35mm FF Canon DSLR. Same for anyone else going into digital video production.

Quote
Just because Panasonic has made 4k "standard" on THEIR products does not mean that 4k is a general consumer technology.

Maybe not now, but I'm tipping that it will become so quicker than I think you want to publicly admit.

Quote
It isn't. It won't be for a while. When it is, then Canon will be there with 4k video in all their latest DSLRs and mirrorless cameras. That probably won't be for another four to five years yet, though.

How do you know all this?

You say this with such certainty that it sounds like you're privy to inside information of Canon.

Or are you just making educated guesses like the rest of us?

If you are just making educated guesses (using your logic) then you might want to say that these are your opinions rather than a statement of fact.

Um, first off, the PS4 isn't 4k capable. Both Microsoft and Sony have been rumored to be investigating support, but neither has 4k support yet. If 4k was as close as you say, they would both have been released with 4k OOTB.

Furthermore, 4k upscaling is not 4k. You can upscale to any resolution, but that doesn't make it actually that resolution. If 4k was just around the corner, you would have wide-scale support for 4k natively in existing bluray players. The 100GB BluRay disc update would be far more widespread, and the cost would already be approaching affordable (as it stands, a single 100GB BluRay costs $45-80...yes, that's EACH). You would have more players that support 100GB BluRay discs. You would have $500 4k TVs showing up in stores like Best Buy and Walmart.

None of these things have come to pass, which indicates that the beginning of broad 4k adoption is not just around the corner. It's a few years down the road yet. Canon releases a new Rebel and other more compact cameras every year. When the time comes for them to add 4k, they will add 4k. I don't NEED insider knowledge to know that. It is plain, simple logic. Canon isn't run by a bunch of idiots. They know their market. Canon makes oodles of money, and their photography division is one of their largest. They have a hell of a lot of shareholders, and they have a critical reputation to keep in the photography market, and imaging markets at large. They aren't going to screw that up. When the time comes for 4k to be in every Canon camera, it'll be in every Canon camera.

The trigger that marks "the time" for 4k...it isn't the moment Panasonic released the GH4. I would be willing to bet that on the list of competitors to worry about most, Panasonic doesn't even make the top five.

1519
http://www.macworld.com/article/2039427/how-fast-is-usb-3-0-really-.html
Thanks for sharing that.  It reinforces in my mind the unrealistic image sold to the public on theoretical transfer rates, when an upgrade from USB2 to USB3 yields a 5x practical speedup instead of the theoretical 10x speedup.  That is only 50% of the promised speedup, which is nothing to sneeze at.

It's still early for USB 3.0. It takes time for manufacturers to eke out the best performance from a protocol. You also have to understand that the 5gbit/s throughput rate is the RAW throughput rate, agnostic of any specific use case. EVERY use case has a certain amount of overhead. That overhead, such as framing and control headers and communications-specific control packets and whatnot, requires some of those bits. If a certain device uses a chatty communications mechanism, then that overhead becomes more costly, as every communications packet sent across a USB3 channel includes that overhead. There are also error correction bits to ensure that communication is stable and reliable, etc.

Unless you are simply streaming raw bits, no overhead, no error correction, nothing...then and only then could you actually achieve the maximum throughput rate. Even those older USB 2.0 devices, which would have been about as refined as USB 2.0 devices can get, only achieved ~40MB/s. That is 66% of the theoretical maximum 60MB/s that USB 2.0 offers. To complain that early USB 3.0 devices are getting "only" 50% is to miss the fact that they really aren't all that far off the mark from USB 2.0 throughput...a mere 16 percentage points, to be exact.

When you throw in overhead and error correction with smaller block sizes (and even for a 10GB data test, the average block size on hard drives is still quite tiny, 4k at most!), you have to deal with the overhead and error correction for each and every transfer. In some respects, overhead can be reduced, especially when a new standard is involved. SSDs are newer than older hard drives. Even though I have some older hard drives with high density platters that can achieve 180MB/s burst rates, there is more overhead involved when reading and writing to an older platter drive with standard algorithms. SSDs have less overhead, so while slower SSDs (earlier models, where latency, while better than platter drives, was still not nearly as good as it is today) in general offer similar burst rates, they eke out a higher maximum because of that lower overhead.

Finally, a standard approach to transferring data across any communications channel, in order to minimize the error rate, is to purposely avoid 100% saturation. Error correction can reduce throughput by delaying certain data packets, so if you try to send data (including overhead and error checking codes) at 100% of the channel's theoretical maximum, you can end up flooding the channel. That forces application code to back off, which can cause dramatic drops in throughput and actually reduces average throughput over the duration of the transfer.

I would expect USB 3 data transfer rates to reach somewhere between 60-70% for "user or app level data", plus another 10-20% for overhead, plus maybe a 10% buffer level to avoid saturating the channel to 100% and allow for error correction.

In the long run, once manufacturers have fully optimized their USB 3.0 products, I wouldn't expect it to achieve any better than USB 2.0 does...an average 66% app-level data throughput rate.
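
As a rough sketch of that kind of budget (the payload percentages here are the guesses from this post, not benchmark results):

Code:
# Rough USB throughput budget using the guesses above -- not benchmarks.

def effective_mb_per_s(line_rate_gbps, payload_fraction):
    """Convert a nominal line rate (Gbit/s) into an app-level MB/s estimate."""
    return line_rate_gbps * 1e9 * payload_fraction / 8 / 1e6

usb2 = effective_mb_per_s(0.48, 0.66)  # USB 2.0: 480Mbit/s, ~66% payload
usb3 = effective_mb_per_s(5.0, 0.65)   # USB 3.0: 5Gbit/s, ~60-70% payload guess

print(f"USB 2.0 app-level estimate: ~{usb2:.0f} MB/s")   # ~40 MB/s
print(f"USB 3.0 app-level estimate: ~{usb3:.0f} MB/s")   # ~400 MB/s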

1520
EOS Bodies / Re: Will the next xD cameras do 4k?
« on: February 19, 2014, 01:46:13 PM »
But where are the 4k computer screens and 4k TVs that people need to view those videos on?

In the shops would be my guess ... http://en.wikipedia.org/wiki/4K_resolution#List_of_4K_monitors.2C_TVs_and_projectors ...  :D

LOL!

Sure, at $40,000 a TV and $3000 a computer screen. That isn't consumer-level pricing. When TVs get down to $2000 and screens get down to $200, then we're talking broad consumer consumption. We aren't even close to that yet.

1521
EOS Bodies / Re: What's Next from Canon?
« on: February 19, 2014, 01:43:52 PM »
The technology had to be in development for more than 6 months before the 70D hit the streets. Canon has patents on the technology. If someone had been digging, they would have found them (quite possibly LONG before the 70D hit the streets, as patent applications have to be filed quite some time before a patent is granted. You don't know about the application at first, but once it is published, it's all public knowledge...you can find it if you want to. I used to go digging through CIS patents...I don't have enough time to do that any more, but I don't doubt that the patents were out there before the 70D hit the streets).

In the US, it's 18 months between filing a patent application and the publication of that application.  So, the patent was filed at least 18 months before it was published.  Most of the research had to be completed before the patent was filed, so that gives you an idea of the lead time (there are provisional patents, too, which give you an extra year to fully develop the invention, but I don't know that Canon uses that mechanism all that much).

The average time for a patent to move all the way through the process is ~24 months. The 18 month period is from the time they actually start the process to when it's published. There can be some "queue time" as well, and on average, that time seems to be 6 months (although a highly innovative company like Canon may get special treatment; I honestly can't say). Canon had to have had DPAF technology years ago. People just don't understand the effort and time it takes to bring research to a consumer product. It doesn't happen overnight. It takes years, many years, and a lot of effort.

People also don't seem to understand that technologies like QIS are heavily dependent upon other technological innovations occurring elsewhere in the industry to support some of their needs. I've never gathered from his papers that Fossum is a genius in data transfer concepts. He knows sensor design like the back of his hand, and he is one of the most innovative forces in the image sensor world, but a lot of his technology builds on or relies on other technology. Before QIS can happen, we need to know how to transfer data at 100 terabits per second, and do that in such a way as to not melt the sensor or the data channel or the DSP. That is a LOT of information to move around per second. You need massive processing, orders of magnitude more processing power than current DSLRs have. You need significant cooling as well, even if we're not talking about a superconductive device.

To think that a QIS sensor is "in the bag", as Diko's hopes indicate, is simply naive about the realities of some of the assumptions Fossum is making.

1522
EOS Bodies / Re: Will the next xD cameras do 4k?
« on: February 19, 2014, 01:34:28 PM »
You're missing the point. When I say "a standard feature in general", I really mean "IN GENERAL". As in, the general public will be fully aware of 4k, will understand, at least to a basic level, what 4k is, will have 4k TVs, and will generally want to shoot their home videos of their doggies and babies in 4k.

Just because Panasonic has made 4k "standard" on THEIR products does not mean that 4k is a general consumer technology. It isn't. It won't be for a while. When it is, then Canon will be there with 4k video in all their latest DSLRs and mirrorless cameras. That probably won't be for another four to five years yet, though.

Reportedly the Samsung Galaxy S5 will record 4K video.

Sure. But where are the 4k computer screens and 4k TVs that people need to view those videos on? The technology has to be mainstream for people to really want it. Until that point, it's just a feature, a sales gimmick (especially in something like the Galaxy S5!!)

1523
EOS Bodies / Re: Will the next xD cameras do 4k?
« on: February 19, 2014, 01:20:47 PM »
let's have a look at a video from guys who actually shoot video.

enough boring talk from guys in this thread who don't know much about video but talk like they have invented photography and videography.

just for a change...

http://www.canonwatch.com/moving-portraits-fashion-industrys-famous-people-shot-4k-canon-eos-c500/

As far as my own perspective goes, I am not arguing with you about the utility of 4k nor am I arguing with you about the fact that it will absolutely be the standard at some point.

My main issue is with people who think that it is something Canon SHOULD or will be doing very soon based on the fact that the GH4 is out. It is NOT the standard for the market as a whole currently.

Furthermore, the GH4 is in a price range that would require Canon to put 4k in their consumer lines in order to compete directly (right now). THAT is not something I believe will happen (right now), and it is truly the only point I have been trying to convey.

If we are not talking about Canon being forced to compete directly, then the issue is moot as they already offer 4k in their lineup.

+1. Totally agree. This has been the argument all along, that the notion Canon would "have" to respond to the GH4 is wrong. Canon will make 4k a standard feature when 4k is a standard feature in general.

Well Panasonic has effectively introduced 4k as the standard HD video on their digital cameras.

Given that, I'd expect Olympus to follow suit. Not sure about Sony or Nikon. But within the year, I expect 4k to be a tick-box feature that is standard across new digital cameras above $1000. Well all except Canon of course because nobody with a Canon DSLR wants 4k video. *shrug*

You're missing the point. When I say "a standard feature in general", I really mean "IN GENERAL". As in, the general public will be fully aware of 4k, will understand, at least to a basic level, what 4k is, will have 4k TVs, and will generally want to shoot their home videos of their doggies and babies in 4k.

Just because Panasonic has made 4k "standard" on THEIR products does not mean that 4k is a general consumer technology. It isn't. It won't be for a while. When it is, then Canon will be there with 4k video in all their latest DSLRs and mirrorless cameras. That probably won't be for another four to five years yet, though.

1524
EOS Bodies / Re: What's Next from Canon?
« on: February 19, 2014, 01:16:18 PM »
;D ;D ;D ;D ;D

You remind me of this:

That's not what I'm saying at all. The 640k deal was a choice, not due to physical limitations. Building a system that can scan incoming light at subwavelength scales and transfer information 1000 to 100000 times faster than the fastest we've ever been able to achieve boils down to physics. MAYBE we can do it. MAYBE, if we produce the necessary technology by 2015 (which is when Fossum targets it, based on that latest paper you linked, which I've already read, BTW). Sorry, but I truly do not believe that even by 2017 or 2020, we will have the technology to transfer data at 100tbit/s. We won't even be close.

Your knowledge is adorable... And yet quite interesting that you DO continue to fight the idea that he may have done it.

He hasn't done it. It's a theory. It's a concept. It isn't an actual prototype. If it was, I GUARANTEE YOU it would make waves. It would be on every sensor-related news site and probably every technology site everywhere. Fossum wouldn't keep it under wraps. Not a chance. (You clearly don't read ImageSensorsWorld...this kind of technology, if it reaches prototype stage, will be HUGE.)

http://m.eet.com/media/1081272/SARGENT1433_PG_46.gif
If you stop just for a second and use your knowledge here, whereby this is an improvement of CMOS:

I'm not sure what that has to do with anything. It is simply an alternative approach to making photosensitive electronics. Your readout logic is still built with standard silicon at standard sizes, transferring information at standard speeds. And the sensitivity improvement, from what I know about it, doesn't allow photon counting. There have been other improvements that increase the light gathering capacity of silicon without resorting to wet tech, such as black silicon. Even black silicon isn't going to solve the data transfer rate issue, or allow photon counting, though.

Quote
And the regular physics knowledge... Yeah, I had that moment with the jots and the wavelength just as you did. But I do know that he has been in this business far longer than you and me put together ;-)

Do you believe that he will reveal every detail of his study to the world so someone could steal it from him? ;-) And do you believe that he would continue researching in that field for 10 years within a reputable university and with some sponsor (it could even be Samsung)?

Fossum is a researcher. He produces patents. It's what he does. So absolutely, I do believe he will reveal every detail of his research. I believe he has already revealed everything he knows. Besides, due to things like prior art, no one can really steal it from him. It's his work. He has the prior art. Even if someone tried to patent it, he could prevent it in court. He has about a decade of research and documentation to clearly prove the concept is his, and therefore not the unique invention of someone else. So yes, he absolutely would reveal all the details. He reveals details all the time, and again, if you read ImageSensorsWorld, you would know this.

Fossum doesn't HAVE the specific details for things like high speed tbit/s data transfer. NO ONE DOES. There are undoubtedly people researching it. People have been researching 500gbit and tbit data transfer rates since the 80s. I remember a Byte Magazine article from the late 80s that talked about organic memory and hundreds of gigabits per second data transfer rates. Well, we're some 25 years on, and it still hasn't happened. The organic memory concept died; it just wasn't viable, and SSDs offered realistic, tangible gains in performance without being unrealistically hopeful.

Quote
IF scientists were as sure as you, and as negative... we would still be in the medieval age, no offence.

I'm not negative. I'm realistic. You can be as hopeful as you want, but hopefulness doesn't guarantee success. Your hopefulness is simply unrealistic given how far technology has come, and how close the walls of physics are. This isn't the 90's. Back then we couldn't even see those walls. It's now the 2010's. Two decades on, at the relentless rate we've been pushing technological advancement, those walls are right in front of us now. And not just in the case of QIS...technological advancement via traditional means (i.e. primarily via reduction in size) is going to come to a crushing halt relatively soon. Certain problems have already forced some radical changes to how we manufacture CPUs, for example, and all that's done is stave off the inevitable for a little while longer.

You also can't forget, Fossum has been trying for a decade to design this type of sensor. A DECADE. That is a very long time to even prove a concept can work. A LOT of CIS patents filed in the '80s were viable, and we knew we could eventually shrink die sizes and increase data transfer speeds to the levels needed to achieve them. Many of the new technologies being implemented today were actually discovered decades ago. However, back then, transistor sizes were measured in whole microns, and data transfer rates were so low we had absolutely no question we could improve them.

Today, we've been riding the limits of Moore's Law on a continuous basis. The effort involved in developing new advances costs an order of magnitude more money each time we develop a new fabrication process (i.e. it used to cost a few hundred million to build a CMOS fab; today it costs tens of billions). Transistor sizes are approaching physical limits...the next die shrink is 14nm, and the one after that is 7nm. Gate features, even with 3D/finFET, are now only a few dozen ATOMS across. Even with stretching, that poses a real problem for current flow, hence finFET, and that is only a stopgap measure (and it imposes its own limitations as well). The hard, impassable physical WALL is looming very close. There are a couple generations left before there is no such thing as a die shrink anymore. We have a decade, two at most (assuming two to four years between die shrinks) before efforts begin in earnest to develop fully multi-layered CPUs and the like, because that will be the only remaining option.

You call me negative. I'm just a realist. There are significant physical limitations that computer technology is already riding close to. If Fossum said he would need a 100gbit/s transfer rate, I'd say "When does the technology hit?!?" Why? Because 100gbit/s is only 10x (or less) faster than the fastest transfer speeds we already have today. It's realistic, it's doable, there is already research that indicates it's possible, and it could be ready by late 2015/early 2016. Fossum said he needs a 100tbit/s transfer rate, and his timetable showed 2015-2016 as his target date for QIS. Sorry, but you don't suddenly go from 10gbit/s (the fastest Ethernet, and also the transfer speed of Thunderbolt) to 100tbit/s (100,000gbit/s) overnight. It just ain't gonna happen. Maybe a decade or so from now. But not by 2016. It simply isn't realistic.

There is a PDF (if not included in those presentations), dated back to 2010 if I recall correctly, where Fossum suggests that they are to try to implement that technology (of course quite far from a Q.E. of 100%) in 3 stages....
The first one on regular CMOS. The last one on a new superconducting material... so there you go...

I believe he said the same thing in 2005. In 2010, the new superconducting material was actually just announced as something called a superinsulator. Superinsulators had been hypothesized for years, and we knew they had to exist if superconductors existed. We just didn't know how they worked. Ironically, they really don't work all that differently than superconductive materials...just in the opposite direction (instead of encouraging Cooper pairs to attract, superinsulators cause them to repel). The other sensor I've been talking about, the Titanium Nitride sensor, uses both superinsulating and superconducting properties. TiN IS the new superconducting/superinsulating material. However, for those properties to manifest, the sensor has to be cooled to near absolute zero.

Again, not being negative. Being realistic. We won't have DSLR-sized cameras with supercooled sensors by 2016. Not a chance. The power and material requirements necessary to cool anything to near absolute zero are immense, not to mention rather unique.

As for CANON.. Officially 200mm is what I have heard as well about CANON... but you have to admit that if you were CANON you wouldn't reveal if you are already on a 300 or 450mm wafer, now would you?

Of course they would! You really don't understand either the technologies involved here, nor the economics. Canon moving to 300mm wafers for their FF and APS-C fabrication would be a huge boon to their stock price. OF COURSE THEY WOULD OFFICIALLY ANNOUNCE IT! It's ludicrous to think otherwise, and exceptionally naive.

A proof of that is this very topic here - we aren't even sure what the new 1DX Mark II and 7D Mark II will look like... we are pretty confident they will include Dual Pixel AF, though....

Canon already has 300mm fabs for their small form factor sensors, which are built on a 180nm process rather than a 500nm process. Canon has had those fabs for years. There has been speculation that the 70D DPAF pixels would need a smaller process in order to be produced. The pixel size shrink on the 70D, however, is not actually that great. At first I thought the shrink was more significant; however, the size of the 70D sensor also grew. Prior sensors were around 22.2x14.8mm in size. The 70D is 22.5x15.1mm. The increase in size means that instead of 3.9µm pixels, they actually have 4.1µm pixels. The pixels are only 0.2µm smaller than the 7D's. There isn't any need for Canon to reduce their process size to split those pixels in two...they are still more than large enough. Even at 3.9µm, they would still be large enough.
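
A quick sanity check on those pitch numbers, assuming the published output resolutions (5184x3456 for the 7D, 5472x3648 for the 70D):

Code:
# Pixel pitch = sensor width / horizontal pixel count.
# Uses published output resolutions; close enough for this estimate.

sensors = {
    "7D  (22.2mm wide, 5184px)": (22.2, 5184),
    "70D (22.5mm wide, 5472px)": (22.5, 5472),
}

for name, (width_mm, px) in sensors.items():
    pitch_um = width_mm / px * 1000
    print(f"{name}: ~{pitch_um:.1f} um pixel pitch")

That lands right at the ~4.3µm and ~4.1µm figures above.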

As much as I personally hoped Canon would move the 70D to a smaller process, there just isn't any evidence to support that theory. Physically, Canon still has space to use a 500nm process, even with dual photodiodes. When you think about it, doping the photodiodes really is not that big of a deal, as the photodiodes themselves are a couple thousand nanometers in size, which is about four times larger than the smallest 500nm etching possible with a 500nm process.

If Canon had moved FF and APS-C manufacturing to a 300mm wafer fab, they would have announced it. It would be a massive move, and a move for the better, for Canon as a company, for their shareholders, for their customers. A move to 300mm means more FF sensors manufactured faster with less waste, reducing cost, allowing more electronics on-die at a smaller transistor size, etc. It would be big news, for everyone. No way in hell would Canon hide that fact.

Which reminds me that we didn't even know about DUAL PIXEL just before the release of the 70D - a few months before that we had some rumor about new focus tech... And you, as well as the others, are quite aware that Dual Pixel AF didn't just emerge in the last 6 months before the 70D, now did it? ;-)

The technology had to be in development for more than 6 months before the 70D hit the streets. Canon has patents on the technology. If someone had been digging, they would have found them (quite possibly LONG before the 70D hit the streets, as patent applications have to be filed quite some time before a patent is granted. You don't know about the application at first, but once it is published, it's all public knowledge...you can find it if you want to. I used to go digging through CIS patents...I don't have enough time to do that any more, but I don't doubt that the patents were out there before the 70D hit the streets).

1525
EOS Bodies / Re: What's Next from Canon?
« on: February 18, 2014, 06:46:15 PM »

The Q.E. is indeed high. I don't know about 90%; even with a BSI design, unless he is supercooling, there is going to be a certain amount of loss due to dark current.
Actually that is the idea: for the Q.E. to be almost 100%. Here is an extract from some more recent materials about QIS:

Fossum writes:
QIS "vision" is to count every photon that hits the sensor, recording its location and arrival time, and create pixels from bit-planes of data.

That sounds to me like 100% Quantum Efficiency ;-) No?

There is a very significant difference between "vision" and "reality". The vision may indeed be to count every photon. The reality is, in order to achieve that, the sensor would have to be superconducting. That's the only way you can count every photon. The Titanium Nitride video sensor I linked is a photon counting, position recording, exact wavelength detecting sensor. It is about as close to Fossum's vision as modern technology gets. The only difference is it doesn't use jots and dynamic grains. The reality is, that TiN sensor is cooled to superconducting temperatures.

If you're thinking that someday Fossum's QIS is going to pan out to a hand-holdable photon-counting DSLR (or for that matter even a DSLR with 90% Q.E.), you're gravely mistaken. It isn't possible to cool electronics to a fraction of a degree above absolute zero in a hand-holdable package, and room-temperature superconductive materials, as much as they are the holy grail of the electronics industry, simply haven't been discovered, and the more time passes, the more the likelihood of finding a room-temperature superconductor diminishes (research has been ongoing for decades, and every time someone "discovers" a room temp superconductor, it always turns out to be false). This is all assuming that actual photon counting is possible with any superconducting material above absolute zero...dark current is the photon-counting killer.

Having a high Q.E., however, does not change the notion of digital grains. In the presence of low light, you have low incident photon counts. The entire DFS/QIS design is based not just on jots, but on the fact that jots are organized into dynamic grains. In low light, all it takes is ONE jot in a grain to receive a photon for the ENTIRE grain to be activated. Let's say grains start out containing 400 jots each (20x20, a 16µm pixel...HUGE). Let's say we're shooting in very low light, starlight. The moment one jot in each 20x20 grain receives a few photons (let's say 50% Q.E., so two photons), all 400 of those jots are marked as active! So, under low light, it might seem as though you actually received 800 photons, rather than just two! Big difference...you are now simulating the reception of a lot of light, however it comes at the cost of resolution. At 16µm a grain, your resolution is going to be pretty low by modern standards...roughly 3.375mp.

Now, let's say a crescent or half moon comes out, and we take the same picture again. We have about two to three more stops of light. Instead of two incident photons, we now have ~8 incident photons per grain. Let's say the dynamic grain division threshold is set at 8 photons. Once our jots receive and convert eight photons, our grains all split. We now have four times the resolution (10x10 grains, or 100 jots per grain, four times as many grains). We have a stronger signal overall, but roughly the same signal per grain as we did before. However, we now have an image with four times as many megapixels, 13.5mp to be exact.

Now a full moon is out, and we take the same picture. We have another two stops of light. We get about 32 incident photons. Our grain size is now 5x5, or 25 jots per grain. Our resolution has quadrupled again. Same overall SNR, but our image resolution is 54mp.

This is what Eric Fossum has designed. A totally dynamic sensor that adjusts itself based on the amount of incident light, maintaining relative signal strength and SNR regardless of how much light is actually present. It does this by dynamically reconfiguring the actual resolution of the device...very low light, very low resolution; low light, low resolution; adequate light, good resolution; tons of light, tons of resolution. Technologically it is pretty advanced; conceptually it is relatively straightforward.

I've greatly exaggerated the scenario above...you wouldn't be able to have 54mp under moonlight. You would probably have something closer to 0.8mp under starlight, maybe 3mp under full moonlight, 13.5mp under morning or evening light, and maybe finally be able to achieve 54mp under full midday sunlight.
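
Here is a tiny toy model of that idea; the grain sizes, split threshold, and photon counts are the made-up numbers from the example above, not values from Fossum's papers:

Code:
# Toy model of "dynamic grains": start with large groups of jots and
# split a grain into four smaller grains each time it collects enough
# photons.  All thresholds and sizes mirror the illustrative example above.

SENSOR_MM = (36, 24)   # full-frame sensor (assumed)
JOT_UM = 0.8           # assumed jot pitch, so 20x20 jots = a 16um grain
SPLIT_AT = 8           # photons collected before a grain subdivides

def resolution_mp(grain_jots):
    """Megapixels if every grain is grain_jots x grain_jots jots."""
    grain_um = grain_jots * JOT_UM
    cols = SENSOR_MM[0] * 1000 / grain_um
    rows = SENSOR_MM[1] * 1000 / grain_um
    return cols * rows / 1e6

for photons in (2, 8, 32):              # starlight, half moon, full moon
    grain, remaining = 20, photons      # start at 20x20 jots per grain
    while remaining >= SPLIT_AT and grain > 1:
        grain //= 2                     # 4x as many grains...
        remaining //= 4                 # ...each catching 1/4 of the photons
    print(f"{photons:>2} photons/grain -> {grain}x{grain} jots, "
          f"~{resolution_mp(grain):.1f}mp")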

Actually he intends to put more than 4K jots in 1 pixel :D :D :D  However I believe:

At 4k jots per "pixel" (pixels don't really exist in the DFS concept, not sure about any more recent QIS papers), if we assume he is using an 800nm jot pitch, that would mean you have 45,000 jots across and 30,000 jots down in a 36x24mm sensor. That makes a grain/pixel with 4000 jots (63x63 jots per grain) over 50µm in size. I mean, there are physical limitations here. We can't break the laws of physics, not even if we are Eric Fossum. Make jot size much smaller than 800nm, and you'll start filtering out red light. You can't have a color accurate sensor if you do that...not at room temperature anyway.

(Based on one of the papers you linked, it isn't actually 4096 jots per pixel. It is 16x16 jots read 16 times to produce one frame, 16 times 16 (physical dimensions) times 16 (time) is 4096. Based on other information in the article, it sounds like his jots are about 1µm in size, or 1011nm to be exact.)

Now, if we move back into the realm of superconductive TiN sensors at near absolute zero, you could probably make jots 100nm, maybe even 10nm in size, and have near-perfect positional measurement accuracy by measuring broken Cooper pairs. Since you're measuring the exact energy of each incident photon, your jots are recording the exact wavelength of the disturbance in the superconductor. The TiN sensor I linked here uses the same general concept Fossum put forward in 2005...taking minuscule time-slices of an exposure by reading the sensor hundreds or thousands of times per second, and integrating the result. That gives it essentially infinite dynamic range if you expose for long enough (although you would still be limited if you needed shorter exposures; however, that limit would still be considerably higher than modern day sensors...18-20 stops maybe).

There are other physical problems that have to be solved before this technology would even be viable. According to another one of Fossum's more recent papers, we're talking about excessive data throughput rates. The fastest data throughput rates we have today for storage are on the order of gigabits per second. A high end PCI-E type SSD can move around a gigabyte and a half per second or so, which is about 12 gigabits per second. Fossum's QIS concept requires 100 TERABIT per second throughput. That is 12.5 terabytes per second!!! That is unprecedented transfer speed. I mean, MASSIVELY UNPRECEDENTED. I don't even know that I've heard of 1/100th of those kinds of throughput rates for single supercomputer throughput channels. You would have to bundle hundreds if not thousands of the high speed Fibre Channel connections usually used with supercomputing in order to achieve a terabit of throughput. Fibre Channel, one of the fastest transfer channels, tops out at around 25 gigabits per second, or about 0.00025x as fast as would be necessary for a QIS sensor to operate effectively. To be really clear about this, the fastest data channel on earth can currently only achieve 0.00025x what is necessary to support Fossum's QIS concept. It is still only 0.025x the throughput rate necessary to achieve even 1 terabit/second. Even the on-chip data cache of a modern CPU is still only pushing a couple hundred gigabits/second of throughput to the CPU registers, and that is thanks to exceptionally short physical data paths. In a digital camera, the image information has to be shipped off-sensor to a processor powerful enough to integrate hundreds of individual jot frames per second. Even if we're talking about an integrated stacked sensor and DSP package, the distance those frames have to travel would make achieving 1-100tbit/s throughput difficult without some radical new breakthrough in bit transfer technology.
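
To make the scale of that concrete, a minimal sketch; the jot pitch, the per-second readout count, and one bit per jot are assumptions pulled from the rough figures in this thread, not numbers from the QIS papers:

Code:
# Rough scale of the QIS readout problem.  Jot pitch, fields per second,
# and bits per jot are assumptions based on this thread's rough figures.

SENSOR_MM = (36, 24)
JOT_PITCH_UM = 0.8        # ~800nm jots (assumed)
FIELDS_PER_SEC = 1000     # whole-sensor reads per second (assumed)
BITS_PER_JOT = 1          # single-bit jots

jots_x = SENSOR_MM[0] * 1000 / JOT_PITCH_UM
jots_y = SENSOR_MM[1] * 1000 / JOT_PITCH_UM
tbit_per_s = jots_x * jots_y * BITS_PER_JOT * FIELDS_PER_SEC / 1e12

print(f"Jots on sensor     : {jots_x * jots_y / 1e9:.2f} billion")
print(f"Raw readout demand : ~{tbit_per_s:.2f} Tbit/s")
print(f"vs. a 25Gbit/s link: ~{tbit_per_s * 1e12 / 25e9:.0f}x more needed")

Even with these conservative assumptions you're already at terabit-class readout before any processing happens.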

There are significant challenges in order to make Fossum's DFS/QIS concept a reality. Which is why, even after at least nine years, it is still just a concept.

1/ your info might be a little out-of-date.

2/ Fossum knows what he is doing if he has been doing it for more than 10 years now. And he has already created something before (the CMOS sensor).

3/ I hope you will agree that we both are a little bit behind - no matter how much we know - in our understanding of this TO-EMERGE technology ;-)

MAY-EMERGE technology. As I said above, there are some pretty massive physical and technological issues to overcome. I've never heard of a photon-counting sensor that used sensing elements as small as a jot that wasn't supercooled. A data throughput rate of 1-100tbit/s is not only unprecedented, but could very possibly be impossible without integrating the entire concept onto a single die, and even then, at tbit/s throughput, that little sensor+DSP die is going to get exceptionally hot (even supercooled, you're producing a hell of a lot of energy in an extremely concentrated space, meaning you run a high risk of heating the sensor above the point where it can behave as a superconducting device...either that, or you need orders of magnitude more power to keep it all cool).

You don't need to know what technological advances may come down the road in the future to know that the QIS concept is running up really close to the laws of physics. It's very likely it is bending them as far as they can go, and it may not be enough.

...
Absolutely. I'm 100% sure. It makes no sense for Canon to try to break into a niche market that already has not only its dominant players, but dominant players with a HELL of a LOT of loyalty among their customers. There have been Canon MF rumors for years. I remember reading MF rumors here back in the 2005 era. Nothing has ever come of them, despite how often Northlight tends to drag the subject back out.

The only way Canon could make a compelling entry into MF is if they launched an entire MFD system. Cameras with interchangeable backs, image sensors that at least rival but preferably surpass the IQ of the Sony MF 50mp, a wide range of extremely high quality glass (they are certainly capable here, but it still is a MASSIVE R&D effort), and a whole range of necessary and essential accessories like flash. Canon would have to do all of this UP FRONT, on their own dime, to cover the massive R&D effort to build an entirely new system of cameras that can compete in an already well established market.

Now, they've done that once. They did it with Cinema EOS. But the cinema market is a lot broader with more players, and is a significant growth market with the potential for significant long-term gains, even for a new entrant like Canon. The medium format market is not a growth market. It's a relatively steady market that has its very few players and its loyal customers. Since there are so few players who already dominate the market, breaking in would be a drain on resources for a new player like Canon, and there is absolutely zero guarantee of any long-term payoff.

So, yes, I'm sure. Canon won't be offering a medium format camera any time soon.
OK.
 - Yes about SYSTEM, of course. I have never imagined CANON selling digital backs, or sensors, to anyone :-))))
 - Yes about glass
 - No about light
 - Perhaps CANON has been in MF R&D since 2001 with the introduction of the 12" Si wafer
Let us not forget the BIG SENSOR or the BIG 120 MP APS-H sensor - the 2007 success?

If Canon had been developing an MF system for 13 years, then they have already failed. They've failed multiple times over. Sorry, but I find that idea exceptionally unlikely.

The "BIG SENSOR" was designed for an entirely different division of Canon for use in scientific grade imaging. It will never be used in a DSLR type camera. The 120mp APS-H was an APS-H sensor...that is smaller than FF, and significantly smaller than MF (MF sensors start around 44x33, and get as large as 60x70...anything smaller, and were not talking medium format.)


Silicon Wafer Sizes Trend: The picture I provide is more relevant to Intel than to SONY or CANON, and yet it is a trend:


I know all about wafer size trends. Canon still uses 200mm wafers for their APS-C and FF sensors as far as I know. Another indication that they haven't been and are not moving into MF any time soon. On a 200mm wafer, you can fit 24 44x33mm sensors. On a 300mm wafer, you can fit 54 sensors, with less waste. Assuming Canon somehow skipped 300mm and went straight to 450mm (highly unlikely, as the 450mm wafer size still seems to be somewhat fragile), you could fit 130 sensors, with even less waste. I don't see Canon manufacturing MF sensors on 200mm wafers.
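
The area scaling alone shows why wafer size matters so much for big dies; the exact sensors-per-wafer figures above depend on rectangular die layout and edge loss, so this sketch just shows the relative wafer areas:

Code:
import math

# Relative wafer areas.  Exact sensors-per-wafer depends on die layout
# and edge loss; this only illustrates how the available area scales.

areas = {d: math.pi * (d / 2) ** 2 for d in (200, 300, 450)}
for d, a in areas.items():
    print(f"{d}mm wafer: {a / 100:.0f} cm^2 ({a / areas[200]:.2f}x a 200mm wafer)")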

Let me make another comparison, exactly with the small Cinema EOS success. It's like an early bird. An FF sensor from DSLR equipment against ARRI, RED & SONY APS-C Cinema solutions..... Hmmm... Who knows... ;-) An extra dollar is always welcomed. Even if it is from 0.5% market share. If CANON succeeds in selling 2k MF bodies in 3 years, let's say at 10K$ each.... 20 million extra dollars... I ask

You don't seem to understand the market dynamics involved here. Cinema is a big, huge growth market into the future. There is massive potential for Canon, who already has an exceptional reputation in photography, to make big inroads into the Cinema market space as it grows, grabbing up new customers, many of whom are already familiar with Canon video from using their video-capable DSLRs. Canon already has a name in that industry thanks to the 5D II, which has been used in a number of relatively big name productions for TV and even a few big name movies.

Canon's break into the cinema market is easy. It cost them little to integrate video into the 5D II, which gave them their initial foothold, paving the way for them to expand that foothold into a legitimate presence. Since the market is a growth market, the risk is relatively low compared to an entry into MF, because you can grab new customers who are just moving to digital cinema cameras and have yet to buy into a system.

This is in contrast with the medium format market. Its growth opportunities, such as they are, are small. That means the primary source of market share gain has to come from existing dominant players. The market is relatively closed, with relatively few needs and a small base of customers to start with.

You could picture Cinema EOS as sprinting up a gentle slope, the wind of the 5D II success at their backs, and Canon MFD as climbing up an ice cliff with the wind buffeting them around their precarious perch. Medium format is a lose/lose for Canon. Excessive up-front cost, and an uphill climb once they finally enter a market with very few customers and low growth.

Sorry...still don't believe it's going to happen. Not with Canon, anyway, not right now, not with global economies still in the pits relative to their 2007 peaks.

WHY NOT? ;-)

See above. :P

1526
Don & jrista: You both make excellent points, but I'm still really impressed with the Tamron. If wildlife photography wasn't my primary interest, it would be the perfect choice.  I don't have any regrets over my 300 and while it's not the ideal choice for birding, it's great for mammals, alligators, a lots of other critters.  Not to mention that it's awesome for portraits, sports, low light, and takes the extenders and a drop in C-PL as needed. I haven't tried the 25mm macro tube with it quite yet, but have seen excellent near-macro shots with it as well.

I think that Canon's big white sales are safe, but they are going to lose a ton of 300 f/4 IS, 400 f/5.6, and 100-400 sales to the Tamron, which might force Canon to finally update at least one of those models.

I'm not saying the Tamron isn't impressive. For its price, it is. It's just that if you already have the 300 f/2.8 L II, there is absolutely ZERO reason to doubt the decision to buy it. It is still and will always be a superior piece of equipment. It doesn't just needlessly cost that much more...the cost is well justified (especially once you understand the manufacturing process...making those huge glass and fluorite elements requires high grade, costly materials and extremely precise manufacture).

I do agree about their lower-grade telephotos, though. The 100-400 sales, which have always been good, will probably suffer quite a bit. Hard to beat 200mm of extra focal length and 2.25x the detail. (And there is NO WAY the Tamron is "soft"...compared to other lenses in its class, it seems to be excellent.) I think Canon would even have a hard time maintaining 100-400 sales with a new version...400mm just doesn't compare to 600mm.

1527
Just having my morning read at CR and wow CarlTN and others, thanks for creating some humor.  Better than the funny paper.

Having spent money a year ago on the 300 II, I have a slight uneasiness now, but it's history, and what I have is a great lens, so I'm not sure why I keep reading about this great deal with the Tammy.

Anyway, it's prompted me to think about what I should be thankful for with my lens and two converters.  Jrista, yes, bokeh.  One thing that came to mind that I really love on the 300 is the smooth rotation from vertical to horizontal when on the gimbal, and the detent that tells you you've gone 90 degrees.

I also loosen that knob which allows the camera to swivel similarly when I'm shooting hand held.  This works great with my preference for a very short strap that goes under my right arm (strap is snug as I fire).

Nevertheless, if I were buying today I probably would have looked at the 300 as just too expensive.  Thankfully my wife wouldn't hear such talk - hard to believe, isn't it! :)

Jack

Don't let yourself be discouraged. There is no way the IQ of the Tamron will rival your 300/2.8 II, even with a 2x TC. You should not underestimate the value of the large entrance pupil, the better barrel build, the vastly superior firmware chip, or the full time manual USM focus ring with its wide throw (excellent when you need to manually focus, such as with astrophotography...total godsend!!) You aren't just paying for "glass" when you buy a Canon supertele. You're paying for "the best" LENS. It's a whole package deal. It isn't just the optical quality. The AF USM drive and firmware are the best available for Canon. When coupled with a 5D III or 1D X, you get superior AF precision and accuracy (I'll find the LensRentals blog post that proves this.)

Also, don't underestimate the value of that bokeh. Given how many professional bird photos I've seen, I think it's one of the key things that sets professional-quality bird photography apart from all the rest. Bokeh is your subject isolator. When you take a photo of a bird, or for that matter of wildlife in general, your subject isn't the background...it's the bird, or the deer, or the coyote or wolf or bear. You don't want the background to intrude on your subject much...just the faintest idea of the general structure of what's there is the most you ever really want, and when it comes to birds, having the background completely blurred into a smooth, creamy backdrop is usually the most desirable result.

Entrance pupil diameters under 100mm generally don't quite cut it. The Tamron is just on the edge, but so far I've only seen a couple of photos taken with it that truly show that kind of creamy background blur for birds (and then, only from very skilled photographers who have the talent to get appreciably close, and who also already own a 600mm prime of some kind.) The 72mm entrance pupil of your average entry-level birder's lens (400mm f/5.6) is just not enough, and the same goes for the 75mm entrance pupil of 300mm f/4 lenses.
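If you want to sanity-check those numbers yourself, here's a minimal sketch (plain Python, not from any of the posts above) using the standard approximation that the entrance pupil diameter is roughly the focal length divided by the f-number; the lens list is just the lenses discussed in this thread:

```python
# Entrance pupil diameter (mm) is approximately focal length / f-number.
# Lens list is illustrative; apertures are the maximum for each lens discussed.
lenses = {
    "Canon 400mm f/5.6L":      (400, 5.6),
    "Canon 300mm f/4L IS":     (300, 4.0),
    "Tamron 150-600 @ 600mm":  (600, 6.3),
    "Canon 300mm f/2.8L II":   (300, 2.8),
    "Canon 600mm f/4L IS II":  (600, 4.0),
}

for name, (focal_mm, f_number) in lenses.items():
    pupil_mm = focal_mm / f_number
    print(f"{name:25s} entrance pupil ~ {pupil_mm:5.1f} mm")
```

That puts the 400 f/5.6 and 300 f/4 in the low-to-mid 70s, the Tamron at roughly 95mm (just under that 100mm rule of thumb), and the big whites comfortably over it.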

You have one of the best lenses for bird and wildlife photography that you can get on planet earth. The release of the Tamron doesn't change that, despite how good it is.

1528
EOS Bodies / Re: Will the next xD cameras do 4k?
« on: February 18, 2014, 01:59:40 PM »
Let's have a look at a video from guys who actually shoot video.

Enough boring talk from guys in this thread who don't know much about video but talk like they invented photography and videography.

just for a change...

http://www.canonwatch.com/moving-portraits-fashion-industrys-famous-people-shot-4k-canon-eos-c500/

As far as my own perspective goes, I am not arguing with you about the utility of 4k nor am I arguing with you about the fact that it will absolutely be the standard at some point.

My main issue is with people who think that it is something Canon SHOULD or will be doing very soon based on the fact that the GH4 is out. It is NOT the standard for the market as a whole currently.

Furthermore, the GH4 is in a price range that would require Canon to put 4k in their consumer lines in order to compete directly (right now). THAT is not something I believe will happen (right now), and it is truly the only point I have been trying to convey.

If we are not talking about Canon being forced to compete directly, then the issue is moot as they already offer 4k in their lineup.

+1. Totally agree. This has been the argument all along, that the notion Canon would "have" to respond to the GH4 is wrong. Canon will make 4k a standard feature when 4k is a standard feature in general.

1529
EOS Bodies / Re: What's Next from Canon?
« on: February 18, 2014, 12:47:43 PM »
there is already lossless JPEG (version 9 or so)... it could be implemented in any of the next Canon bodies (DIGIC 7 or 8, perhaps)
There is also a rumour that Canon has a lossless RAW file format. :) The problem with any lossless compression is that you end up with large file sizes... If lossless JPEG isn't much of a savings over RAW, why bother?
Good to know :D :D :D I mentioned it only due to the chit chat about DR and JPG.



If by QIS you are referring to Eric Fossum's Digital Film Sensor (DFS), that is a very old concept. Almost a decade old now, given this original paper: http://ericfossum.com/Publications/Papers/Gigapixel%20Digital%20Film%20Sensor%20Proposal.pdf

I read that many, many years ago. Very intriguing concept...however it doesn't mean that you actually have a gigapixel sensor. The notion of a digital film is that the sensor works more like actual film which has silver halide grains, wherein the "jots" combine to make up large digital sensor "grains". Under lower illumination where there are fewer incident photons, one jot strike within the region of a grain would "illuminate" the entire grain as if each jot had received a photon. Grains remain large, resolution remains low, SNR is high, noise is low. Technically speaking, this isn't all that different from downsampling a high ISO image in post.

Under high illumination, where photon strikes are frequent, most jots would receive photon strikes. By employing a mechanism to "divide" digital grains, one could dynamically increase resolution, since smaller grains with fewer jots could still achieve a higher SNR. It's most definitely an intriguing concept, but it also requires technological capabilities beyond what we are currently capable of (at least, as far as image sensor fabs go). Jots are considerably smaller than your average pixel...they would have to be close to a deep red wavelength (somewhere between 750 and 800nm...current APS-C pixels are 4000-5000nm, current full frame pixels are 6000-9000nm).

To make an ideal Digital Film Sensor, I'd combine the jot concept with the titanium nitride superconducting material and microwave comb readout to produce a sensor with effectively infinite dynamic range, exact color replication, and the highest resolution practically possible for an image sensor. The TiN technology is still pretty new, its pixel size is much larger than a jot (the only existing sensor is 44x46 pixels in size), and it still requires cooling in a Dewar flask. But it would probably be the best sensor on earth. ;)

Yes, I do mean the same concept. These days Fossum calls it QIS. As for the thin film added to APS CMOS, the innovation from Canada, IMO you have the whole concept wrong... No matter what he calls his pixels, he claims to gather 90% of the incident light, and, more importantly, his intent is NOT to put it through a discrete ADC process, ergo the gigabytes. But even I could be wrong, since he and his colleagues are researching it as we speak.

The Q.E. is indeed high. I don't know about 90%, though; even with a BSI design, unless he is supercooling, there is going to be a certain amount of loss due to dark current.

Having a high Q.E., however, does not change the notion of digital grains. In the presence of low light, you have low incident photon counts. The whole DFS/QIS design is based not just on jots, but on the fact that jots are organized into dynamic grains. In low light, all it takes is ONE jot receiving a photon within a grain for the ENTIRE grain to be activated. Let's say grains start out containing 400 jots each (20x20 jots, a 16µm pixel...HUGE). Let's say we're shooting in very low light, starlight. The moment one jot in each 20x20 grain receives a few photons (let's say 50% Q.E., so two photons), all 400 of those jots are marked as active! So, under low light, it might seem as though you actually received 800 photons, rather than just two! Big difference...you are now simulating the reception of a lot of light, but at the cost of resolution. At 16µm per grain, your resolution is going to be pretty low by modern standards...roughly 3.375mp on a full frame sensor.

Now, let's say a crescent or half moon comes out, and we take the same picture again. We have about two to three more stops of light. Instead of two incident photons, we now have ~8 incident photons per grain. Let's say the dynamic grain-division threshold is set at 8 photons. Once our jots receive and convert eight photons, our grains all split. We now have four times the resolution (grains of 10x10 jots, or 100 jots per grain, and four times as many grains). We have a stronger signal overall, but roughly the same signal per grain as we did before. However, we now have an image with four times as many megapixels, 13.5mp to be exact.

Now a full moon is out, and we take the same picture. We have another two stops of light. We get about 32 incident photons. Our grain size is now 5x5, or 25 jots per grain. Our resolution has quadrupled again. Same overall SNR, but our image resolution is 54mp.

This is what Eric Fossum has designed: a totally dynamic sensor that adjusts itself based on the amount of incident light, maintaining relative signal strength and SNR regardless of how much light is actually present. It does this by dynamically reconfiguring the actual resolution of the device...very low light, very low resolution; low light, low resolution; adequate light, good resolution; tons of light, tons of resolution. Technologically it is pretty advanced; conceptually it is relatively straightforward.

I've greatly exaggerated the scenario above...you wouldn't be able to get 54mp under moonlight. You would probably have something closer to 0.8mp under starlight, maybe 3mp under full moonlight, 13.5mp under morning or evening light, and maybe finally be able to achieve 54mp under full midday sunlight.
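Just to make the grain-splitting arithmetic above concrete, here's a toy sketch in Python. It is not Fossum's actual design: the starting grain size (16µm), the 8-photon split threshold, and the photon counts come from the worked example above, while the 2x2 split rule and the ~0.8µm jot-pitch floor are my own simplifying assumptions.

```python
# Toy model of the dynamic-grain idea described above (my assumptions, not Fossum's design):
# grains start large; each time a grain collects enough photons it splits 2x2,
# quadrupling resolution while keeping roughly the same signal per grain.

SENSOR_W_UM, SENSOR_H_UM = 36_000, 24_000   # full-frame sensor dimensions in microns
JOT_PITCH_UM = 0.8                           # assumed jot pitch (~deep-red wavelength)
SPLIT_THRESHOLD = 8                          # assumed photons per grain before a split

def grains_to_megapixels(grain_um):
    """Resolution if every grain_um x grain_um grain becomes one output pixel."""
    return (SENSOR_W_UM / grain_um) * (SENSOR_H_UM / grain_um) / 1e6

def simulate(photons_per_initial_grain, start_grain_um=16.0):
    """Halve the grain side (2x2 split) while photons per grain stay at or above the threshold."""
    grain_um, photons = start_grain_um, photons_per_initial_grain
    while photons >= SPLIT_THRESHOLD and grain_um / 2 >= JOT_PITCH_UM:
        grain_um /= 2          # 2x2 split: four times as many grains
        photons /= 4           # the same light is now shared among four grains
    return grain_um, grains_to_megapixels(grain_um)

for label, photons in [("starlight", 2), ("half moon", 8), ("full moon", 32)]:
    grain, mp = simulate(photons)
    print(f"{label:10s}: grain ~ {grain:4.1f} um -> ~ {mp:5.1f} MP")
```

Running it reproduces the three numbers from the walkthrough: ~3.4mp for starlight, 13.5mp for the half moon, and 54mp for the full moon.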

Canon is not in the image sensor market. Canon is in the photography market.

....

Canon won't be doing any kind of medium format anything any time soon.

Now , about the last statement are you sure?

Absolutely. I'm 100% sure. It makes no sense for Canon to try to break into a niche market that already has not only its dominant players, but dominant players with a HELL of a LOT of loyalty among their customers. There have been Canon MF rumors for years. I remember reading MF rumors here back in the 2005 era. Nothing has ever come of them, despite how often Northlight tends to drag the subject back out.

The only way Canon could make a compelling entry into MF is if they launched an entire MFD system: cameras with interchangeable backs, image sensors that at least rival but preferably surpass the IQ of the Sony MF 50mp, a wide range of extremely high quality glass (they are certainly capable here, but it is still a MASSIVE R&D effort), and a whole range of necessary and essential accessories like flash. Canon would have to do all of this UP FRONT, on their own dime, covering the massive R&D effort of building an entirely new system of cameras that can compete in an already well established market.

Now, they've done that once. They did it with Cinema EOS. But the cinema market is a lot broader, with more players, and is a significant growth market with the potential for significant long-term gains, even for a new entrant like Canon. The medium format market is not a growth market. It's a relatively steady market that has its very few players and its loyal customers. Since so few players already dominate the market, breaking in would be a drain on resources for a new entrant like Canon, and there is absolutely zero guarantee of any long-term payoff.

So, yes, I'm sure. Canon won't be offering a medium format camera any time soon.

1530
EOS Bodies / Re: What's Next from Canon?
« on: February 17, 2014, 09:36:05 PM »
Hi....

1/ there is already lossless JPEG (version 9 or so)... it could be implemented in any of the next Canon bodies (DIGIC 7 or 8, perhaps)

2/ the quantum image sensor (QIS) is on the way, from the father of APS CMOS. The main obstacle to completing the research is, IMO, the current lack of processing power capable of handling raw data from the sensor in the terabytes

3/ by the time QIS is a reality, most of us will have lost about half a kilo of brain matter (so really, no folio needed) and will either suffer some neuro-related sickness like Parkinson's disease or will be stupid enough to feel overwhelmed by the QIS menu.

If by QIS you are referring to Eric Fossum's Digital Film Sensor (DFS), that is a very old concept. Almost a decade old now, given this original paper: http://ericfossum.com/Publications/Papers/Gigapixel%20Digital%20Film%20Sensor%20Proposal.pdf

I read that many, many years ago. Very intriguing concept...however it doesn't mean that you actually have a gigapixel sensor. The notion of a digital film is that the sensor works more like actual film which has silver halide grains, wherein the "jots" combine to make up large digital sensor "grains". Under lower illumination where there are fewer incident photons, one jot strike within the region of a grain would "illuminate" the entire grain as if each jot had received a photon. Grains remain large, resolution remains low, SNR is high, noise is low. Technically speaking, this isn't all that different from downsampling a high ISO image in post.

Under high illumination, where photon strikes are frequent, most jots would receive photon strikes. By employing a mechanism to "divide" digital grains, one could dynamically increase resolution, since smaller grains with fewer jots could still achieve a higher SNR. It's most definitely an intriguing concept, but it also requires technological capabilities beyond what we are currently capable of (at least, as far as image sensor fabs go). Jots are considerably smaller than your average pixel...they would have to be close to a deep red wavelength (somewhere between 750 and 800nm...current APS-C pixels are 4000-5000nm, current full frame pixels are 6000-9000nm).

To make an ideal Digital Film Sensor, I'd combine the jot concept with the titanium nitride superconducting material and microwave comb readout to produce a sensor with effectively infinite dynamic range, exact color replication, and the highest resolution practically possible for an image sensor. The TiN technology is still pretty new, its pixel size is much larger than a jot (the only existing sensor is 44x46 pixels in size), and it still requires cooling in a Dewar flask. But it would probably be the best sensor on earth. ;)

And yet here we are, and I wonder why not a single person has mentioned anything about FF or MF, which IMO is also partially the topic.

SONY made its epic MF debut, and it wasn't with Nikon. Hasselblad and Phase One... additionally, there are rumors about the ex-Pentax with their 645 (mark 2) ;) I bet that 44x33mm CMOS is BSI.

BTW SONY's contract with Nikon is about to end this month. Any updates on that one?

So I wonder whether the Canon MF beast will come out this year or next... What do you think?

Canon is not in the image sensor market. Canon is in the photography market. Canon doesn't sell sensors, so they wouldn't be selling sensors to Hasselblad or Phase One. Canon would have to bring a compelling medium format camera SYSTEM to market in order to compete with Hasselblad or Phase One. And given Canon's tentative foray into the low-end mirrorless market with a single camera and less than a handful of lenses, they weren't exactly successful there.

It takes considerably more upfront resources to develop a complete, competitive ecosystem when you are trying to break into an existing market that already has its dominant players. That's a HUGE risk for Canon to take in order to enter the medium format market. We have already been through the fact that Canon is a conservative company, and they won't take a risk unless they have the means to reduce it. They also won't take a risk unless the long-term payoff is significant. I see no reason to believe Canon should risk an entry into the medium format market right now. Especially now that those big manufacturers are utilizing Sony's currently superior sensor technology...that's even more up-front effort to develop something better than what Sony offers. The sensors, and the optics, would all have to be enough better than the competition (which is already producing solely high-end, top-grade products that will be quite difficult to beat as it is) in order to steal sales away.

Canon won't be doing any kind of medium format anything any time soon.

Ps: I hate tablets :))))))

Hmm. Good for you.  ???
