August 30, 2014, 12:22:55 AM

Show Posts



Messages - jrista

1141
EOS Bodies / Re: Will the next xD cameras do 4k?
« on: February 19, 2014, 01:34:28 PM »
You're missing the point. When I say "a standard feature in general", I really mean "IN GENERAL". As in, the general public will be fully aware of 4k, will understand, at least at a basic level, what 4k is, will have 4k TVs, and will generally want to shoot their home videos of their doggies and babies in 4k.

Just because Panasonic has made 4k "standard" on THEIR products does not mean that 4k is a general consumer technology. It isn't. It won't be for a while. When it is, then Canon will be there with 4k video in all their latest DSLRs and mirrorless cameras. That probably won't be for another four to five years yet, though.

Reportedly the Samsung Galaxy S5 will record 4K video.

Sure. But where are the 4k computer screens and 4k TVs that people need to view those videos on? The technology has to be mainstream for people to really want it. Until that point, it's just a feature, a sales gimmick (especially in something like the Galaxy S5!!)

1142
EOS Bodies / Re: Will the next xD cameras do 4k?
« on: February 19, 2014, 01:20:47 PM »
Let's have a look at a video from guys who actually shoot video.

Enough boring talk from guys in this thread who don't know much about video but talk like they invented photography and videography.

Just for a change...

http://www.canonwatch.com/moving-portraits-fashion-industrys-famous-people-shot-4k-canon-eos-c500/

As far as my own perspective goes, I am not arguing with you about the utility of 4k nor am I arguing with you about the fact that it will absolutely be the standard at some point.

My main issue is with people who think that it is something Canon SHOULD or will be doing very soon based on the fact that the GH4 is out. It is NOT the standard for the market as a whole currently.

Furthermore, the GH4 is in a price range that would require Canon to put 4k in their consumer lines in order to compete directly (right now). THAT is not something I believe will happen (right now), and it is truly the only point I have been trying to convey.

If we are not talking about Canon being forced to compete directly, then the issue is moot as they already offer 4k in their lineup.

+1. Totally agree. This has been the argument all along, that the notion Canon would "have" to respond to the GH4 is wrong. Canon will make 4k a standard feature when 4k is a standard feature in general.

Well Panasonic has effectively introduced 4k as the standard HD video on their digital cameras.

Given that, I'd expect Olympus to follow suit. Not sure about Sony or Nikon. But within the year, I expect 4k to be a tick-box feature that is standard across new digital cameras above $1000. Well all except Canon of course because nobody with a Canon DSLR wants 4k video. *shrug*

You're missing the point. When I say "a standard feature in general", I really mean "IN GENERAL". As in, the general public will be fully aware of 4k, will understand, at least at a basic level, what 4k is, will have 4k TVs, and will generally want to shoot their home videos of their doggies and babies in 4k.

Just because Panasonic has made 4k "standard" on THEIR products does not mean that 4k is a general consumer technology. It isn't. It won't be for a while. When it is, then Canon will be there with 4k video in all their latest DSLRs and mirrorless cameras. That probably won't be for another four to five years yet, though.

1143
EOS Bodies / Re: What's Next from Canon?
« on: February 19, 2014, 01:16:18 PM »
;D ;D ;D ;D ;D

You remind me of this:

That's not what I'm saying at all. The 640k deal was a choice, not due to physical limitations. Building a system that can scan incoming light at subwavelength scales and transfer information 1,000 to 100,000 times faster than the fastest we've ever achieved boils down to physics. MAYBE we can do it. MAYBE, if we produce the necessary technology by 2015 (which is Fossum's target date, based on that latest paper you linked, which I've already read, BTW). Sorry, but I truly do not believe that even by 2017 or 2020 we will have the technology to transfer data at 100tbit/s. We won't even be close.

Your knowledge is adorable... And yet it's quite interesting that you DO continue to fight the idea that he may have done it.

He hasn't done it. It's a theory. It's a concept. It isn't an actual prototype. If it was, I GUARANTEE YOU it would make waves. It would be on every sensor-related news site and probably every technology site everywhere. Fossum wouldn't keep it under wraps. Not a chance. (You clearly don't read ImageSensorsWorld...this kind of technology, if it reaches prototype stage, will be HUGE.)

http://m.eet.com/media/1081272/SARGENT1433_PG_46.gif
If you stop just for a second and use your knowledge from here, whereby this is an improvement of the CMOS:

I'm not sure what that has to do with anything. It is simply an alternative approach to making photosensitive electronics. Your readout logic is still built with standard silicon at standard sizes, transferring information at standard speeds. And the sensitivity improvement, from what I know about it, doesn't allow photon counting. There have been other improvements that increase the light gathering capacity of silicon without resorting to wet tech, such as black silicon. Even black silicon isn't going to solve the data transfer rate issue, or allow photon counting, though.

And the regular physics knowledge... Yeah, I had that moment with the jots and the wavelength just as you did. But I do know that he has been in this business quite a bit longer than you and me put together ;-)

Do you believe that he will reveal every detail of his study to the world so someone could steal it from him? ;-) And do you believe that he would continue for 10 years to research in that field within a reputable university and with some sponsor (it could even be Samsung)?

Fossum is a researcher. He produces patents. It's what he does. So absolutely, I do believe he will reveal every detail of his research. I believe he has already revealed everything he knows. Besides, due to things like prior art, no one can really steal it from him. It's his work. He has the prior art. Even if someone tried to patent it, he could prevent it in court. He has about a decade of research and documentation to clearly prove the concept is his, and therefore not the unique invention of someone else. So yes, he absolutely would reveal all the details. He reveals details all the time, and again, if you read ImageSensorsWorld, you would know this.

Fossum doesn't HAVE the specific details for things like high speed tbit/s data transfer. NO ONE DOES. There are undoubtedly people researching it. People have been researching 500gbit and tbit data transfer rates since the 80s. I remember a Byte Magazine article from the late 80s that talked about organic memory and hundreds of gigabits per second data transfer rates. Well, we're some 25 years on, and it still hasn't happened. The organic memory concept died, it just wasn't viable, and SSD offered realistic, tangible gains in performance without being unrealistically hopeful.

If scientists were as sure and as negative as you... we would be in the medieval age, no offence.

I'm not negative. I'm realistic. You can be as hopeful as you want, but hopefulness doesn't guarantee success. Your hopefulness is simply unrealistic given how far technology has come, and how close the walls of physics are. This isn't the '90s. Back then we couldn't even see those walls. It's now the 2010s. Two decades on, at the relentless rate we've been pushing technological advancement, those walls are right in front of us now. And not just in the case of QIS...technological advancement via traditional means (i.e. primarily via reduction in size) is going to come to a crushing halt relatively soon. Certain problems have already forced some radical changes to how we manufacture CPUs, for example, and all that's done is stave off the inevitable for a little while longer.

You also can't forget, Fossum has been trying for a decade to design this type of sensor. A DECADE. That is a very long time to even prove a concept can work. A LOT of CIS patents filed in the '80s were viable, and we knew we could eventually shrink die sizes and increase data transfer speeds to the levels needed to achieve them. Many of the new technologies being implemented today were actually discovered decades ago. Back then, however, transistor sizes were measured in microns, and data transfer rates were so low we had absolutely no question we could improve them.

Today, we've been riding the limits of Moore's Law on a continuous basis. The effort involved in developing new advances costs an order of magnitude more money each time we develop a new fabrication process (i.e. it used to cost a few hundred million to build a CMOS fab; today it costs tens of billions.) Transistor sizes are approaching physical limits...the next die shrink is 14nm, and the one after that is 7nm. Gate sizes, even with 3D/finFET, are now only a handful of atoms across. Even with stretching, that poses a real problem for current flow, hence finFET, and that is only a stopgap measure (and it imposes its own limitations as well.) The hard, impassable physical WALL is looming very close. There are a couple of generations left before there is no such thing as a die shrink anymore. We have a decade, two at most (assuming two to four years between die shrinks), before efforts begin in earnest to develop fully multi-layered CPUs and the like, because that will be the only remaining option.

You call me negative. I'm just a realist. There are significant physical limitations that computer technology is already riding close to. If Fossum said he would need a 100gbit/s transfer rate, I'd say "When does the technology hit?!?" Why? Because 100gbit/s is only 10x (or less) faster than the fastest transfer speeds we already have today. It's realistic, it's doable, there is already research that indicates it's possible, and it could be ready by late 2015/early 2016. Fossum said he needs a 100tbit/s transfer rate, and his timetable showed 2015-2016 as his target date for QIS. Sorry, but you don't suddenly go from 10gbit/s (the fastest Ethernet, and also the transfer speed of Thunderbolt) to 100tbit/s (100,000gbit/s) overnight. It just ain't gonna happen. Maybe a decade or so from now. But not by 2016. It simply isn't realistic.

There is a PDF (if not included in those presentations), dated back to 2010 if I recall correctly, where Fossum indicates that they will try to implement that technology (of course quite far from a Q.E. of 100%) in 3 stages....
The first one on a regular CMOS. The last one on a new superconducting material... so there you go...

I believe he said the same thing in 2005. In 2010, the new superconducting material was actually just announced as something called a superinsulator. Superinsulators had been hypothesized for years, and we knew they had to exist if superconductors existed. We just didn't know how they worked. Ironically, they really don't work all that differently than superconductive materials...just in the opposite way (instead of encouraging Cooper pairs to attract, superinsulators cause them to repel). The other sensor I've been talking about, the Titanium Nitride sensor, uses both superinsulating and superconducting properties. TiN IS the new superconducting/superinsulating material. However, for those properties to manifest, the sensor has to be cooled to near absolute zero.

Again, not being negative. Being realistic. We won't have DSLR-sized cameras with supercooled sensors by 2016. Not a chance. The power and material requirements necessary to cool anything to near absolute zero are immense, not to mention rather unique.

As for CANON... Officially, 200mm is what I have heard as well about CANON... but you have to admit that if you were CANON you wouldn't reveal if you are already on a 300 or 450mm wafer, now would you?

Of course they would! You really don't understand either the technologies involved here or the economics. Canon moving to 300mm wafers for their FF and APS-C fabrication would be a huge boon to their stock price. OF COURSE THEY WOULD OFFICIALLY ANNOUNCE IT! It's ludicrous to think otherwise, and exceptionally naive.

Proof of that is this very topic here - we are not even sure what the new 1Dx m2 and 7D m2 will look like... we are pretty confident they will include dual pix, though....

Canon already has 300mm fabs for their small form factor sensors, which are built on a 180nm process rather than a 500nm process. Canon has had those fabs for years. There has been speculation that the 70D DPAF pixels would need a smaller process in order to be produced. The pixel size shrink on the 70D, however, is not actually that great. At first I thought the shrink was more significant; however, the size of the 70D sensor also grew. Prior sensors were around 22.2x14.8mm in size. The 70D is 22.5x15.1mm. The increase in size means instead of a 3.9µm pixel, they actually have 4.1µm pixels. The pixels are only 0.2µm smaller than the 7D's. There isn't any need for Canon to reduce their process size to split those pixels in two...they are still more than large enough. Even at 3.9µm, they would still be large enough.
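The pixel-pitch figures above can be sanity-checked in a couple of lines. A rough sketch, assuming the published horizontal resolutions of 5472 px (70D, 20.2 MP) and 5184 px (7D, 18 MP); the sensor widths are the ones quoted in the text:

```python
# Rough pixel-pitch check for the figures discussed above.
# Assumed horizontal pixel counts: 5472 px (70D), 5184 px (7D).
sensors = {
    "70D": (22.5, 5472),   # (sensor width in mm, horizontal pixels)
    "7D":  (22.3, 5184),
}
for name, (width_mm, px) in sensors.items():
    pitch_um = width_mm * 1000 / px   # mm -> um, divided by pixel count
    print(f"{name}: {pitch_um:.2f} um pixel pitch")
```

This reproduces the ~4.1µm pitch for the 70D and shows it is only about 0.2µm smaller than the 7D's.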

As much as I personally hoped Canon would move the 70D to a smaller process, there just isn't any evidence to support that theory. Physically, Canon still has space to use a 500nm process, even with dual photodiodes. When you think about it, doping the photodiodes really is not that big of a deal, as the photodiodes themselves are a couple thousand nanometers in size, which is about four times larger than the smallest 500nm etching possible with a 500nm process.

If Canon had moved FF and APS-C manufacturing to a 300mm wafer fab, they would have announced it. It would be a massive move, and a move for the better, for Canon as a company, for their shareholders, for their customers. A move to 300mm means more FF sensors manufactured faster with less waste, reducing cost, allowing more electronics on-die at a smaller transistor size, etc. It would be big news, for everyone. No way in hell would Canon hide that fact.

Which reminds me that we even didn't know about the DUAL PIX just before the release of 70D - a few months before that we had some rumor about new focus tech... And you, as well as the others are quite aware that Dual PIX AF didn't emerge like that in the last 6 months before the 70D, now did it? ;-)

The technology had to be in development for more than 6 months before the 70D hit the streets. Canon has patents on the technology. If someone was digging, they would have found them (quite possibly LONG before the 70D hit the streets, as patents have to be filed quite some time before they are granted. You don't know about an application at first, but once it is published, it's all public knowledge...you can find it if you want to. I used to go digging through CIS patents...I don't have enough time to do that any more, but I don't doubt that the patents were out there before the 70D hit the streets.)

1144
EOS Bodies / Re: What's Next from Canon?
« on: February 18, 2014, 06:46:15 PM »

The Q.E. is indeed high. I don't know about 90%, even with a BSI design unless he is supercooling, there is going to be a certain amount of loss due to dark current.
Actually, that is the idea: for the Q.E. to be almost 100%. Here is an extract from some more recent materials about QIS:

Fossum writes:
QIS "vision" is to count every photon that hits the sensor, recording its location and arrival time, and create pixels from bit-planes of data.

That sounds to me like 100% Quantum Efficiency ;-) No?

There is a very significant difference between "vision" and "reality". The vision may indeed be to count every photon. The reality is, in order to achieve that, the sensor would have to be superconducting. That's the only way you can count every photon. The Titanium Nitride video sensor I linked is a photon counting, position recording, exact wavelength detecting sensor. It is about as close to Fossum's vision as modern technology gets. The only difference is it doesn't use jots and dynamic grains. The reality is, that TiN sensor is cooled to superconducting temperatures.

If you're thinking that someday Fossum's QIS is going to pan out to a hand-holdable photon-counting DSLR (or for that matter even a DSLR with 90% Q.E.), you're gravely mistaken. It isn't possible to cool electronics to a fraction of a degree above absolute zero in a hand-holdable package, and room-temperature superconductive materials, as much as they are the holy grail of the electronics industry, simply haven't been discovered, and the more time passes, the more the likelihood of finding a room-temperature superconductor diminishes (research has been ongoing for decades, and every time someone "discovers" a room temp superconductor, it always pans out to be false.) This is all assuming that actual photon-counting is possible with any superconducting material above absolute zero...dark current is the photon counting killer.

Having a high Q.E., however, does not change the notion of digital grains. In the presence of low light, you have low incident photon counts. The whole DFS/QIS design is based not just on jots, but on the fact that jots are organized into dynamic grains. In low light, all it takes is ONE jot in a grain receiving a photon for the ENTIRE grain to be activated. Let's say grains start out containing 400 jots each (20x20, a 16µm pixel...HUGE). Let's say we're shooting in very low light, starlight. The moment one jot in each 20x20 grain receives a few photons (let's say 50% Q.E., so two photons), all 400 of those jots are marked as active! So, under low light, it might seem as though you actually received 800 photons, rather than just two! Big difference...you are now simulating the reception of a lot of light, but at the cost of resolution. At 16µm a grain, your resolution is going to be pretty low by modern standards...roughly 3.375mp.

Now, let's say a crescent or half moon comes out, and we take the same picture again. We have about two to three more stops of light. Instead of two incident photons, we now have ~8 incident photons per grain. Let's say the dynamic grain division threshold is set at 8 photons. Once our jots receive and convert eight photons, our grains all split. We now have four times the resolution (10x10 grains, or 100 jots per grain, and four times as many grains). We have a stronger signal overall, but roughly the same signal per grain as we did before. However, we now have an image with four times as many megapixels, 13.5mp to be exact.

Now a full moon is out, and we take the same picture. We have another two stops of light. We get about 32 incident photons. Our grain size is now 5x5, or 25 jots per grain. Our resolution has quadrupled again. Same overall SNR, but our image resolution is 54mp.

This is what Eric Fossum has designed. A totally dynamic sensor that adjusts itself based on the amount of incident light, maintaining relative signal strength and SNR regardless of how much light is actually present. It does this by dynamically reconfiguring the actual resolution of the device...very low light, very low resolution; low light, low resolution; adequate light, good resolution; tons of light, tons of resolution. Technologically it is pretty advanced; conceptually it is relatively straightforward.

I've greatly exaggerated the scenario above...you wouldn't be able to have 54mp under moonlight. You would probably have something closer to 0.8mp under starlight, maybe 3mp under full moonlight, 13.5mp under morning or evening light, and maybe finally be able to achieve 54mp under full midday sunlight.
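The grain-splitting walkthrough in the paragraphs above can be sketched as a toy simulation. This is my own illustration of the idea as described, not Fossum's actual algorithm; the 800nm jot pitch, the 20x20 starting grain, and the 8-photon split threshold are the assumptions used in the example:

```python
# Toy model of dynamic grains: a grain keeps splitting (side halves,
# grain count quadruples) while the photons it has collected meet a
# fixed split threshold. All figures are illustrative assumptions.
SENSOR_MM = (36, 24)     # full-frame sensor
JOT_UM = 0.8             # assumed 800 nm jot pitch
SPLIT_AT = 8             # photons per grain needed to trigger a split

def grain_side_jots(photons: float, start_side: int = 20) -> int:
    """Halve the grain side while enough photons have been collected;
    each split leaves 1/4 of the photons in each new, smaller grain."""
    side = start_side
    while side > 1 and photons >= SPLIT_AT:
        side //= 2
        photons /= 4
    return side

def resolution_mp(side: int) -> float:
    """Megapixels if every grain of the given side becomes one pixel."""
    jots_x = SENSOR_MM[0] * 1000 / JOT_UM
    jots_y = SENSOR_MM[1] * 1000 / JOT_UM
    return (jots_x / side) * (jots_y / side) / 1e6

for scene, photons in [("starlight", 2), ("half moon", 8), ("full moon", 32)]:
    side = grain_side_jots(photons)
    print(f"{scene:9s}: {side:2d}x{side:2d} jots/grain -> "
          f"{resolution_mp(side):.3f} MP")
```

Under the numbers in the text this reproduces the 3.375, 13.5, and 54 MP steps as the light level rises.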

Actually he intends to put more than 4K jots in 1 pixel :D :D :D  However I believe:

At 4k jots per "pixel" (pixels don't really exist in the DFS concept, not sure about any more recent QIS papers), if we assume he is using an 800nm jot pitch, that would mean you have 45,000 jots across and 30,000 jots down in a 36x24mm sensor. That makes a grain/pixel with 4000 jots (63x63 jots per grain) over 50µm in size. I mean, there are physical limitations here. We can't break the laws of physics, not even if we are Eric Fossum. Make jot size much smaller than 800nm, and you'll start filtering out red light. You can't have a color accurate sensor if you do that...not at room temperature anyway.

(Based on one of the papers you linked, it isn't actually 4096 jots per pixel. It is 16x16 jots read 16 times to produce one frame, 16 times 16 (physical dimensions) times 16 (time) is 4096. Based on other information in the article, it sounds like his jots are about 1µm in size, or 1011nm to be exact.)
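The jot-geometry numbers in the paragraphs above reduce to simple arithmetic; here is a quick check (the 800nm pitch and the ~4000-jot grain are the figures from the text):

```python
import math

jot_nm = 800                            # assumed jot pitch from the text
jots_across = 36_000_000 // jot_nm      # 36 mm sensor width in nm / pitch
jots_down = 24_000_000 // jot_nm        # 24 mm sensor height in nm / pitch
grain_side = math.isqrt(4000)           # side of a square ~4000-jot grain
grain_um = grain_side * jot_nm / 1000   # grain size in microns
print(jots_across, jots_down, grain_side, grain_um)
```

That gives 45,000 x 30,000 jots and a roughly 63x63-jot grain a little over 50µm across, matching the figures quoted above.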

Now, if we move back into the realm of superconductive TiN sensors at absolute zero, you could probably make jots 100nm, maybe even 10nm, in size, and have near-perfect positional measurement accuracy by measuring broken Cooper pairs. Since you're measuring the exact energy of each incident photon, your jots are recording the exact wavelength of the disturbance in the superconductor. The TiN sensor I linked here uses the same general concept Fossum put forward in 2005...taking minuscule time-slices of an exposure by reading the sensor hundreds or thousands of times per second, and integrating the result. That gives it essentially infinite dynamic range if you expose for long enough (although you would still be limited if you needed shorter exposures; however, that limit would still be considerably higher than modern day sensors...18-20 stops maybe.)

There are other physical problems that have to be solved before this technology would even be viable. According to another one of Fossum's more recent papers, we're talking about excessive data throughput rates. The fastest data throughput rates we have today for storage are on the order of gigabits per second. A high-end PCI-E type SSD can move around a gigabyte and a half per second or so, which is about 12 gigabits per second. Fossum's QIS concept requires 100 TERABIT per second throughput. That is 12.5 terabytes per second!!! That is unprecedented transfer speed. I mean, MASSIVELY UNPRECEDENTED.

I don't even know that I've heard of 1/100th of those kinds of throughput rates for single supercomputer throughput channels. You would have to bundle hundreds if not thousands of the high-speed Fibre Channel connections usually used with supercomputing in order to achieve a terabit of throughput. Fibre Channel, one of the fastest transfer channels, tops out at around 25 gigabits per second, or about 0.00025x as fast as would be necessary for a QIS sensor to operate effectively. To be really clear about this, the fastest data channel on earth can currently achieve only 0.00025x what is necessary to support Fossum's QIS concept, and still only 0.025x of even 1 terabit/second of throughput.

Even the on-chip data cache of a modern CPU is still only pushing a couple hundred gigabits/second to the CPU registers, and that is thanks to exceptionally short physical data paths. In a digital camera, the image information has to be shipped off-sensor to a processor powerful enough to integrate hundreds of individual jot frames per second. Even if we're talking about an integrated stacked sensor and DSP package, the distance those frames have to travel would make achieving 1-100tbit/s throughput difficult without some radical new breakthrough in bit transfer technology.
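The bandwidth gap described above is easy to restate numerically (the 100 Tbit/s requirement and the ~25 Gbit/s fast-channel figure are the ones quoted in the text):

```python
required_bps = 100e12          # 100 Tbit/s, the QIS figure quoted above
link_bps = 25e9                # ~25 Gbit/s, the fast-channel figure above

print(required_bps / 8 / 1e12)   # -> 12.5 (terabytes per second required)
print(link_bps / required_bps)   # -> 0.00025 (fraction of the requirement)
print(link_bps / 1e12)           # -> 0.025 (fraction of even 1 Tbit/s)
```

The three printed values are exactly the 12.5 TB/s, 0.00025x, and 0.025x figures used in the paragraph above.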

There are significant challenges in order to make Fossum's DFS/QIS concept a reality. Which is why, even after at least nine years, it is still just a concept.

1/ your info might be a little out-of-date.

2/ Fossum knows what he is doing if he has been doing it for more than 10 years now. And he has already created something before (the CMOS sensor).

3/ I hope you will agree that we both are a little bit behind - no matter how much we know, with our understanding of this TO-EMERGE technology ;-)

MAY-EMERGE technology. As I said above, there are some pretty massive physical and technological issues to overcome. I've never heard of a photon-counting sensor that used sensing elements as small as a jot that wasn't supercooled. A data throughput rate of 1-100tbit/s is not only unprecedented, but could very possibly be impossible without integrating the entire concept onto a single die, and even then, at a tbit/s throughput, that little sensor+DSP die is going to get exceptionally hot (even supercooled, you're producing a hell of a lot of energy in an extremely concentrated space, meaning you run a high risk of heating the sensor above the point where it can behave as a superconducting device...either that, or you need orders of magnitude more power to keep it all cool).

You don't need to know what technological advances may come down the road in the future to know that the QIS concept is running up really close to the laws of physics. It's very likely it is bending them as far as they can go, and it may not be enough.

...
Absolutely. I'm 100% sure. It makes no sense for Canon to try to break into a niche market that already has not only its dominant players, but dominant players with a HELL of a LOT of loyalty among their customers. There have been Canon MF rumors for years. I remember reading MF rumors here back in the 2005 era. Nothing has ever come of them, despite how often Northlight tends to drag the subject back out.

The only way Canon could make a compelling entry into MF is if they launched an entire MFD system. Cameras with interchangeable backs, image sensors that at least rival but preferably surpass the IQ of the Sony MF 50mp, a wide range of extremely high quality glass (they are certainly capable here, but it is still a MASSIVE R&D effort), and a whole range of necessary and essential accessories like flash. Canon has to do this all UP FRONT, on their own dime, to cover the massive R&D effort to build an entirely new system of cameras that can compete in an already well established market.

Now, they've done that once. They did it with Cinema EOS. But the cinema market is a lot broader with more players, and is a significant growth market with the potential for significant long-term gains, even for a new entrant like Canon. The medium format market is not a growth market. It's a relatively steady market that has its very few players and its loyal customers. Since there are so few players who already dominate the market, breaking in would be a drain on resources for a new player like Canon, and there is absolutely zero guarantee of any long-term payoff.

So, yes, I'm sure. Canon won't be offering a medium format camera any time soon.
OK.
 - Yes about SYSTEM, of course. I have never imagined CANON selling digital backs, or sensors to anyone :-))))
 - Yes about glass
 - No about light
 - Perhaps CANON has been in MF R&D since 2001, with the introduction of the 12" Si wafer
Let us not forget the BIG SENSOR, or the BIG 120 MP APS-H sensor - the 2007 success?

If Canon had been developing an MF system for 13 years, then they have already failed. They've failed multiple times over. Sorry, but I find that idea exceptionally unlikely.

The "BIG SENSOR" was designed for an entirely different division of Canon for use in scientific grade imaging. It will never be used in a DSLR type camera. The 120mp APS-H was an APS-H sensor...that is smaller than FF, and significantly smaller than MF (MF sensors start around 44x33, and get as large as 60x70...anything smaller, and we're not talking medium format.)


Silicon Wafer Sizes Trend: The picture I provide is more relevant to Intel than to SONY or CANON, and yet it is a trend:


I know all about wafer size trends. Canon still uses 200mm wafers for their APS-C and FF sensors as far as I know. That's another indication that they are not moving into MF any time soon. On a 200mm wafer, you can fit 24 44x33mm sensors. On a 300mm wafer, you can fit 54 sensors, with less waste. Assuming Canon somehow skipped 300mm and went straight to 450mm (highly unlikely; the 450mm wafer size still seems to be somewhat fragile), you could fit 130 sensors, with even less waste. I don't see Canon manufacturing MF sensors on 200mm wafers.
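As a sanity check on the wafer arithmetic above, here is the standard first-order gross-die-per-wafer approximation. It comes out more conservative than the counts quoted above (edge loss is severe for dies this large), but the scaling argument, several times more sensors per wafer as the diameter grows, holds either way:

```python
import math

def gross_die_per_wafer(wafer_d_mm: float, die_w_mm: float,
                        die_h_mm: float) -> int:
    """First-order estimate: wafer area / die area, minus an edge-loss
    term proportional to the wafer circumference."""
    s = die_w_mm * die_h_mm
    return int(math.pi * (wafer_d_mm / 2) ** 2 / s
               - math.pi * wafer_d_mm / math.sqrt(2 * s))

for d_mm in (200, 300, 450):
    print(f"{d_mm} mm wafer: ~{gross_die_per_wafer(d_mm, 44, 33)} "
          f"44x33 mm sensors")
```

The estimate roughly triples going from 200mm to 300mm wafers, and roughly triples again at 450mm, which is the point being made above.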

Let me make another comparison, exactly with the small Cinema EOS success. It's like an early bird. An FF sensor from DSLR equipment against ARRI, RED & SONY APS-C cinema solutions..... Hmmm... Who knows... ;-) An extra dollar is always welcomed. Even if it is from 0.5% market share. If CANON succeeds in selling 2k MF bodies in 3 years, at let's say 10K$ each.... 20 million extra dollars... I ask

You don't seem to understand the market dynamics involved here. Cinema is a big, huge growth market into the future. There is massive potential for Canon, who already has an exceptional reputation in photography, to make big inroads into the Cinema market space as it grows, grabbing up new customers, many of whom are already familiar with Canon video from using their video-capable DSLRs. Canon already has a name in that industry thanks to the 5D II, which has been used in a number of relatively big name productions for TV and even a few big name movies.

Canon's break into the cinema market is easy. It cost them little to integrate video into the 5D II, which gave them their initial foothold, paving the way for them to expand that foothold into a legitimate presence. Since the market is a growth market, the risk is relatively low compared to an entry into MF, because you can grab new customers who are just moving to digital cinema cameras and have yet to buy into a system.

This is in contrast with the medium format market. Its growth opportunities, such as they are, are small. That means the primary source of market share gain has to come from existing dominant players. The market is relatively closed, with relatively few needs and a small base of customers to start with.

You could compare Cinema EOS to sprinting up a gentle slope, the wind of the 5D II's success at their backs, and Canon MFD to climbing up an ice cliff with the wind buffeting them around their precarious perch. Medium format is a lose/lose for Canon: excessive up-front cost, and an uphill climb once they finally enter a market with very few customers and low growth.

Sorry...still don't believe it's going to happen. Not with Canon, anyway, not right now, not with global economies still in the pits relative to their 2007 peaks.

WHY NOT? ;-)

See above. :P

1145
Don & jrista: You both make excellent points, but I'm still really impressed with the Tamron. If wildlife photography wasn't my primary interest, it would be the perfect choice. I don't have any regrets over my 300, and while it's not the ideal choice for birding, it's great for mammals, alligators, and lots of other critters. Not to mention that it's awesome for portraits, sports, and low light, and takes the extenders and a drop-in C-PL as needed. I haven't tried the 25mm macro tube with it quite yet, but have seen excellent near-macro shots with it as well.

I think that Canon's big white sales are safe, but they are going to lose a ton of 300 f/4 IS, 400 f/5.6, and 100-400 sales to the Tamron, which might force Canon to finally update at least one of those models.

I'm not saying the Tamron isn't impressive. For its price, it is. It's just that if you already have the 300 f/2.8 L II, there is absolutely ZERO reason to doubt the decision to buy it. It is still, and will always be, a superior piece of equipment. It doesn't just needlessly cost that much more...the cost is well justified (especially once you understand the manufacturing process...making those huge glass and fluorite elements requires high grade, costly materials and extremely precise manufacture.)

I do agree about their lower-grade telephotos, though. The 100-400 sales, which have always been good, will probably suffer quite a bit. Hard to beat an extra 200mm of focal length and 2.25x the detail. (And there is NO WAY the Tamron is "soft"...compared to other lenses in its class, it seems to be excellent.) I think Canon would even have a hard time maintaining 100-400 sales with a new version...400mm just doesn't compare to 600mm.

1146
Just having my morning read at CR and wow CarlTN and others, thanks for creating some humor.  Better than the funny paper.

Having spent money a year ago on the 300 II I have a slight uneasiness now but it's history and what I have is a great lens so I'm not sure why I keep reading about this great deal with the Tammy.

Anyway, it's prompted me to think about what I should be thankful for with my lens and two converters.  Jrista, yes bokeh.  One thing that came to mind that I really love on the 300 is the smooth rotation from vertical to horizontal when on the gimbal, and the detent that tells you you've gone 90 degrees.

I also loosen that knob which allows the camera to swivel similarly when I'm shooting hand held.  This works great with my preference for a very short strap that goes under my right arm (strap is snug as I fire).

Nevertheless, if I were buying today I probably would have looked at the 300 as just too expensive.  Thankfully my wife wouldn't hear such talk - hard to believe, isn't it! :)

Jack

Don't let yourself be discouraged. There is no way the IQ of the Tamron will rival your 300/2.8 II, even with a 2x TC. You shouldn't underestimate the value of the large entrance pupil, the better barrel build, the vastly superior firmware chip, or the full-time manual USM focus ring with its wide throw (excellent when you need to manually focus, such as with astrophotography...total godsend!!) You aren't just paying for "glass" when you buy a Canon supertele. You're paying for "the best" LENS. It's a whole package deal. It isn't just the optical quality. The AF USM drive and firmware are the best available for Canon. When coupled with a 5D III or 1D X, you get superior AF precision and accuracy (I'll find the LensRentals blog post that demonstrates this.)

Also, you can't overstate the value of that bokeh. Given how many professional bird photos I've seen, I think it's one of the key things that sets professional-quality bird photography apart from all the rest. Bokeh is your subject isolator. When you take a photo of a bird, or of wildlife in general, your subject isn't the background...it's the bird, or the deer, or the coyote or wolf or bear. You don't want the background to intrude on your subject much...just the faintest idea of the general structure of what's there is the most you ever really want, and when it comes to birds, having the background completely blurred into a smooth, creamy backdrop is usually the most desirable result.

Entrance pupil diameters <100mm generally don't quite cut it. The Tamron is just on the edge, but so far I've only seen a couple of photos taken with it that truly show that kind of creamy background blur for birds (and then only from very skilled photographers who have the talent to get appreciably close, and who also already own a 600mm prime of some kind.) The 72mm entrance pupil of your average entry-level birder's lens (400mm f/5.6) is just not enough, and the same goes for the 75mm entrance pupil of 300mm f/4 lenses.

You have one of the best lenses for bird and wildlife photography that you can get on planet earth. The release of the Tamron doesn't change that, despite how good it is.

1147
EOS Bodies / Re: Will the next xD cameras do 4k?
« on: February 18, 2014, 01:59:40 PM »
lets have a look at a video from guys who actually shot video.

enough boring talk from guys in this tread who know not much about video but talk like they have invented photography and videography.

just for a change...

http://www.canonwatch.com/moving-portraits-fashion-industrys-famous-people-shot-4k-canon-eos-c500/

As far as my own perspective goes, I am not arguing with you about the utility of 4k nor am I arguing with you about the fact that it will absolutely be the standard at some point.

My main issue is with people who think that it is something Canon SHOULD or will be doing very soon based on the fact that the GH4 is out. It is NOT the standard for the market as a whole currently.

Furthermore, the GH4 is in a price range that would require Canon to put 4k in their consumer lines in order to compete directly (right now). THAT is not something I believe will happen (right now), and it is truly the only point I have been trying to convey.

If we are not talking about Canon being forced to compete directly, then the issue is moot as they already offer 4k in their lineup.

+1. Totally agree. This has been the argument all along, that the notion Canon would "have" to respond to the GH4 is wrong. Canon will make 4k a standard feature when 4k is a standard feature in general.

1148
EOS Bodies / Re: What's Next from Canon?
« on: February 18, 2014, 12:47:43 PM »
there is already  lossless jpg ver9 something... could be implemented in any of the next canon bodies (DIGIC 7 or 8 perhaps)
There is also a rumour that Canon has lossless RAW file format. :) The problem with any lossless compression is that you end up with large file sizes... If lossless jpg isn't much of a savings over RAW, why bother?
Good to know :D :D :D I mentioned it only due to the chit chat about DR and JPG.



If by QIS you are referring to Eric Fossum's Digital Film Sensor (DFS), that is a very old concept. Almost a decade old now, given this original paper: http://ericfossum.com/Publications/Papers/Gigapixel%20Digital%20Film%20Sensor%20Proposal.pdf

I read that many, many years ago. Very intriguing concept...however it doesn't mean that you actually have a gigapixel sensor. The notion of a digital film is that the sensor works more like actual film which has silver halide grains, wherein the "jots" combine to make up large digital sensor "grains". Under lower illumination where there are fewer incident photons, one jot strike within the region of a grain would "illuminate" the entire grain as if each jot had received a photon. Grains remain large, resolution remains low, SNR is high, noise is low. Technically speaking, this isn't all that different from downsampling a high ISO image in post.

Under high illumination, where photon strikes are frequent, most jots would receive photon strikes. By employing a mechanism to "divide" digital grains, one could dynamically increase resolution, since smaller grains with fewer jots could still achieve a higher SNR. It's most definitely an intriguing concept, but it also requires technological capabilities beyond what we are currently capable of (at least as far as image sensor fabs go). Jots are considerably smaller than your average pixel...they would have to be close to a deep red wavelength (somewhere between 750 and 800nm...current APS-C pixels are 4000-5000nm, current full frame pixels are 6000-9000nm).

To make an ideal Digital Film Sensor, I'd combine the jot concept with the titanium nitride superconducting material and microwave comb readout to produce a sensor with infinite dynamic range, exact color replication, and effectively the highest resolution possible for an image sensor. The TiN technology is still pretty new, its pixel size is much larger than a jot (the only existing sensor is 44x46 pixels in size), and it still requires cooling in a Dewar flask. But it would probably be the best sensor on earth. ;)

Yes, I do mean the same concept. These days Fossum calls it QIS. As for the thin-film-on-APS-CMOS innovation from Canada, IMO you have the whole concept wrong... Whatever he calls his pixels, he claims to gather 90% of the incident light, and what is more important, his intent is NOT to put it through a discrete ADC process, ergo the gigabytes. But even I could be wrong, since he and his fellows are researching it as we speak.

The Q.E. is indeed high. I don't know about 90%, though; even with a BSI design, unless he is supercooling, there is going to be a certain amount of loss due to dark current.

Having a high Q.E., however, does not change the notion of digital grains. In the presence of low light, you have low incident photon counts. The entire DFS/QIS design is based not just on jots, but on the fact that jots are organized into dynamic grains. In low light, all it takes is ONE jot in a grain to receive a photon for the ENTIRE grain to be activated. Let's say grains start out containing 400 jots each (20x20, a 16µm grain...HUGE). Let's say we're shooting in very low light, starlight. The moment one jot in each 20x20 grain receives a few photons (let's say 50% Q.E., so two photons), all 400 of those jots are marked as active! So, under low light, it might seem as though you actually received 800 photons, rather than just two! Big difference...you are now simulating the reception of a lot of light, but at the cost of resolution. At 16µm a grain, your resolution is going to be pretty low by modern standards...roughly 3.375mp.

Now, let's say a crescent or half moon comes out, and we take the same picture again. We have about two to three more stops of light. Instead of two incident photons, we now have ~8 incident photons per grain. Let's say the dynamic grain division threshold is set at 8 photons. Once our jots receive and convert eight photons, our grains all split. We now have four times the resolution (grains of 10x10 jots, or 100 jots per grain, and four times as many grains). We have a stronger signal overall, but roughly the same signal per grain as we did before. However, we now have an image with four times as many megapixels...13.5mp, to be exact.

Now a full moon is out, and we take the same picture. We have another two stops of light. We get about 32 incident photons. Our grain size is now 5x5, or 25 jots per grain. Our resolution has quadrupled again. Same overall SNR, but our image resolution is 54mp.

This is what Eric Fossum has designed: a totally dynamic sensor that adjusts itself based on the amount of incident light, maintaining relative signal strength and SNR regardless of how much light is actually present. It does this by dynamically reconfiguring the actual resolution of the device...very low light, very low resolution; low light, low resolution; adequate light, good resolution; tons of light, tons of resolution. Technologically it is pretty advanced; conceptually it is relatively straightforward.

I've greatly exaggerated the scenario above...you wouldn't be able to have 54mp under moonlight. You would probably have something closer to 0.8mp under starlight, maybe 3mp under full moonlight, 13.5mp under morning or evening light, and maybe finally be able to achieve 54mp under full midday sunlight.
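The grain arithmetic above can be sketched in a few lines. This is a toy model only...the 0.8µm jot pitch, the 20x20-jot starting grain, and the 8-photon split threshold are the illustrative numbers from my scenario, not anything from Fossum's actual design:

```python
# Toy model of the dynamic "digital grain" idea: grains of binary jots
# split into smaller grains as the per-grain photon count rises,
# trading SNR for resolution. All numbers are illustrative only.

def grain_size(photons_per_grain, start=20, split_at=8):
    """Halve the grain edge (in jots) each time the per-grain photon
    count reaches the split threshold, down to a single jot."""
    size = start
    while size > 1 and photons_per_grain >= split_at:
        size //= 2
        photons_per_grain /= 4  # same photons spread over 4x the grains
    return size

def resolution_mp(grain_um, sensor_mm=(36, 24)):
    """Megapixels for a full-frame sensor at a given grain pitch."""
    w, h = sensor_mm
    return (w * 1000 / grain_um) * (h * 1000 / grain_um) / 1e6

jot_um = 0.8  # hypothetical jot pitch: a 20x20 grain is then 16µm
for photons in (2, 8, 32):  # starlight, half moon, full moon
    size = grain_size(photons)
    print(photons, size, resolution_mp(size * jot_um))
```

Running it reproduces the 3.375mp, 13.5mp, and 54mp figures from the scenario above.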

Canon is not in the image sensor market. Canon is in the photography market.

....

Canon won't be doing any kind of medium format anything any time soon.

Now , about the last statement are you sure?

Absolutely. I'm 100% sure. It makes no sense for Canon to try to break into a niche market that already has not only its dominant players, but dominant players with a HELL of a LOT of loyalty among their customers. There have been Canon MF rumors for years. I remember reading MF rumors here back in the 2005 era. Nothing has ever come of them, despite how often Northlight tends to drag the subject back out.

The only way Canon could make a compelling entry into MF is if they launched an entire MFD system: cameras with interchangeable backs, image sensors that at least rival but preferably surpass the IQ of the Sony MF 50mp, a wide range of extremely high quality glass (they are certainly capable here, but it is still a MASSIVE R&D effort), and a whole range of necessary and essential accessories like flash. Canon would have to do all of this UP FRONT, on their own dime, to cover the massive R&D effort of building an entirely new system of cameras that can compete in an already well-established market.

Now, they've done that once. They did it with Cinema EOS. But the cinema market is a lot broader with more players, and is a significant growth market with the potential for significant long-term gains, even for a new entrant like Canon. The medium format market is not a growth market. It's a relatively steady market that has its very few players and its loyal customers. Since those few players already dominate the market, breaking in would be a drain on a new player's resources, and there is absolutely zero guarantee of any long-term payoff.

So, yes, I'm sure. Canon won't be offering a medium format camera any time soon.

1149
EOS Bodies / Re: What's Next from Canon?
« on: February 17, 2014, 09:36:05 PM »
Hi....

1/ there is already  lossless jpg ver9 something... could be implemented in any of the next canon bodies (DIGIC 7 or 8 perhaps)

2/ quantum image sensor (QIS) is on the way from the father of APS CMOS. The main obstacle to finishing the research is, IMO, the current lack of processing power able to handle raw data from the sensor in the terabytes

3/ by the time QIS is reality, most of us will have lost about half a kilo of brain matter (so really no folio needed) and will either suffer some neuro-related sickness like Parkinson's disease or will be stupid enough to feel overwhelmed by the QIS menu.

If by QIS you are referring to Eric Fossum's Digital Film Sensor (DFS), that is a very old concept. Almost a decade old now, given this original paper: http://ericfossum.com/Publications/Papers/Gigapixel%20Digital%20Film%20Sensor%20Proposal.pdf

I read that many, many years ago. Very intriguing concept...however it doesn't mean that you actually have a gigapixel sensor. The notion of a digital film is that the sensor works more like actual film which has silver halide grains, wherein the "jots" combine to make up large digital sensor "grains". Under lower illumination where there are fewer incident photons, one jot strike within the region of a grain would "illuminate" the entire grain as if each jot had received a photon. Grains remain large, resolution remains low, SNR is high, noise is low. Technically speaking, this isn't all that different from downsampling a high ISO image in post.

Under high illumination, where photon strikes are frequent, most jots would receive photon strikes. By employing a mechanism to "divide" digital grains, one could dynamically increase resolution, since smaller grains with fewer jots could still achieve a higher SNR. It's most definitely an intriguing concept, but it also requires technological capabilities beyond what we are currently capable of (at least as far as image sensor fabs go). Jots are considerably smaller than your average pixel...they would have to be close to a deep red wavelength (somewhere between 750 and 800nm...current APS-C pixels are 4000-5000nm, current full frame pixels are 6000-9000nm).

To make an ideal Digital Film Sensor, I'd combine the jot concept with the titanium nitride superconducting material and microwave comb readout to produce a sensor with infinite dynamic range, exact color replication, and effectively the highest resolution possible for an image sensor. The TiN technology is still pretty new, its pixel size is much larger than a jot (the only existing sensor is 44x46 pixels in size), and it still requires cooling in a Dewar flask. But it would probably be the best sensor on earth. ;)

And yet here we are, and I wonder why there is not even a single person mentioning anything about FF or MF, which IMO is also partially the topic.

SONY made its MF epic debut, and it wasn't with Nikon. Hasselblad and Phase One... additionally there are rumors about the ex-Pentax with their 645 (mark 2) ;) I bet that 44x33 mm CMOS is BSI.

BTW SONY's contract with Nikon is about to end this month. Any updates on that one?

So I wonder if the Canon MF beast will come out this year or next... What do you think?

Canon is not in the image sensor market. Canon is in the photography market. Canon doesn't sell sensors, so they wouldn't be selling sensors to Hasselblad or Phase One. Canon would have to bring a compelling medium format camera SYSTEM to market in order to compete with Hasselblad or Phase One. And given Canon's tentative foray into the low-end mirrorless market with a single camera and less than a handful of lenses, they weren't exactly successful there.

It takes considerably more up-front resources to develop a complete, competitive ecosystem when you are trying to break into an existing market that already has its dominant players. That's a HUGE risk for Canon to take in order to enter the medium format market. We have already been through the fact that Canon is a conservative company; they won't take a risk unless they have the means to reduce it, and they won't take a risk unless the long-term payoff is significant. I see no reason Canon should risk an entry into the medium format market right now. Especially now that those big manufacturers are utilizing Sony's currently superior sensor technology...that's even more up-front effort to develop something better than what Sony offers. The sensors, and the optics, would all have to be sufficiently better than the competition (which is already producing solely high-end, top-grade products that will be quite difficult to beat as it is) in order to steal sales away.

Canon won't be doing any kind of medium format anything any time soon.

Ps: I hate tablets :))))))

Hmm. Good for you.  ???

1150
EOS Bodies - For Stills / Re: 7D vs. 70D: Which has better image quality?
« on: February 17, 2014, 08:12:43 PM »
...but assuming these are two different samples of the 200 f/2L, that would be the biggest factor in the difference, by far...

While Bryan often tests multiple copies of lenses (presumably keeping the best one, he knows about sample variation, too), the 200/2 is part of his personal collection and AFAIK it's the same copy of the lens used for all tests.  I could ask him (or you could), if you're curious...

I am pretty sure it's the same lens he uses for the 200mm samples. The entire point was that they all used exactly the same lens so that lens variation could be eliminated as a variable. Bryan has always been rather meticulous, I know he's mentioned in past reviews how many times he had to send a lens back to get a good copy.

1151
Landscape / Re: Waterscapes
« on: February 17, 2014, 07:38:49 PM »
Pacific coast at Victoria, B.C.

I like this one. All that driftwood is a nice touch.

1152
Alan, sadly it's never clear cut.  If I never used 300 2.8 or 420 4 it'd be a different story. OTOH I use 300 X2 a lot which suggests I need a 600!  However, I very often am trying to pull my ISO down to 1250 in not the greatest light because I don't have the reach I really need, but I still want decent shutter speed.  Then I think 7D2 for reach but I know the ISO capability is just not going to be that great compared to FF so ...... :(

What's your feeling on the Tam bokeh?

Jack

Your 300+2x will allow you to use lower shutter speeds than the Tamron, which is f/6.3 at 600mm. Your lens is f/5.6 at 600mm. That said...you use the aperture you need to use in order to get the depth of field you need. If you need to use f/8, you need to use f/8, regardless of the lens you're using.

The smoothness of bokeh is ultimately determined by the size of the entrance pupil (the aperture as viewed through the front of the lens from a distance of "infinity"...in other words, a distance great enough that the light rays are collimated). The Tamron, regardless of how well it's been designed, still won't compare to your 300mm f/2.8's 107mm entrance pupil. Entrance pupil size was one of the reasons I chose the EF 600/4 II...its entrance pupil diameter is 150mm...which is why the bokeh from that lens is exquisitely creamy and smooth.

There are few other lenses on earth that can produce the kind of background bokeh that the 600mm f/4 L II does, and not many more that can produce the kind of bokeh that the 300mm f/2.8 L does.
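For anyone who wants to check these figures, the entrance pupil diameter is just the focal length divided by the f-number. A quick sketch (plain arithmetic, nothing lens-specific):

```python
# Entrance pupil diameter = focal length / f-number. These reproduce
# the figures quoted in this thread (400/5.6 works out to ~71.4mm,
# which the thread rounds to 72mm).
def entrance_pupil_mm(focal_mm, f_number):
    return focal_mm / f_number

for focal, f in [(300, 2.8), (600, 4.0), (400, 5.6), (300, 4.0), (600, 6.3)]:
    print(f"{focal}mm f/{f}: {entrance_pupil_mm(focal, f):.1f}mm")
```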

1153
Landscape / Re: Waterscapes
« on: February 16, 2014, 07:49:24 PM »
Hey guys, not to be a prick, but the sports stuff really isn't a "waterscape". I originally started this thread for waterscapes...landscapes of water subjects, rivers, creeks, lakes, oceans, etc. I think it would be best to start another thread for the waterSPORTS, which is quite different.

Thanks!

1154
EOS Bodies / Re: What's Next from Canon?
« on: February 16, 2014, 07:31:36 PM »
... (rather than 8-bit JPEG DR, which tops out at 8 stops at best).

Straying a bit off topic here; but I thought the DR of a JPEG image was a function of the tone curve / colour space of the image file.
I thought the JPEG format was quite capable of recording 12 or more stops - the main problem with having 8 bits per channel is banding (and compression artefacts of course).

Phil.

I get what you're saying, but technically speaking, reality is a bit different. The raw data can have a tone curve applied before being saved to JPEG. That tone curve, if designed correctly, can compress the camera's real dynamic range (say 12 to 14 stops) into the 8-stop dynamic range allowed by 8-bit data. But you don't still have 12 or 14 stops of DR. You compressed the original dynamic range into a smaller space, meaning you discarded some of the original information.

With an actual RAW, you can push the exposure around in a tool like Lightroom and recover information that doesn't fit within the dynamic range of your computer screen (which is also likely 8-bit). When you first load up a raw, it might appear that the highlights are blown and the blacks are crushed...even if you apply, say, the Canon Neutral camera profile (the initial tone curve), that is still likely to be the case in a high-DR photo. You can recover those highlights and shadows, though, because the underlying RAW has enough bit depth to preserve all that information.

With a JPEG, the tone curves applied by the camera are usually more likely to clip highlights and crush blacks. The most preservative "Picture Style" in Canon cameras is Neutral or Standard, but neither is actually capable of preserving the full original 14 stops of dynamic range. All those tone curves are doing is shifting higher-precision information into a space capable of less precision. No matter how you slice it, you're losing some information. When the JPEG is saved, you have 8 stops of dynamic range. Clipped highlights are clipped, permanently; there is no recovery. Crushed blacks are black; there is no lifting them. Furthermore, because JPEG is a lossy compression format, you lose EVEN MORE information.

While the camera, if it's using a standard picture style rather than say landscape or faithful or something like that, can indeed preserve some of the original dynamic range, it's still doing so by discarding some of the information that already fit in that 8 bits of data anyway (i.e. the high shadows, midtones and lower highlights). You don't have 12 or 14 stops of editing latitude...you have at most 8 stops of editing latitude (however, because the 8 bit pixel information already fits within the confines of your 8 bit computer screen, there really isn't any NEED for that editing latitude.)

Bits and stops, in a digital image signal, are synonymous in many ways. Both are base 2/powers of 2, in that every additional bit doubles the digital number space for storing light level information, which corresponds directly with every additional stop. If you have one bit, you can store one stop of luminance information. If you have two bits, you can store two stops. Since stops are powers of two, moving up to two stops means you are sensitive to four times the range of light. That, too, corresponds to having two bits...00, 01, 10, 11. Bit depth implicitly limits the dynamic range of the information stored in the file. Even if you are using a tone curve to compress a larger dynamic range into that smaller bit depth, the dynamic range of a JPEG once saved is 8 stops, your editing latitude is limited to 8 stops, and any information you discarded in order to pack the additional stops into that 8-stop range is lost forever; it is not recoverable.
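To make the tone curve point concrete, here's a toy round-trip: a 14-bit linear value pushed through a simple gamma-style curve into 8 bits, then inverted. The gamma-2.2 curve is purely illustrative, not Canon's actual picture style processing...the point is just that many distinct 14-bit values collapse onto one 8-bit code and cannot be told apart afterward:

```python
# Illustrative only: compress a 14-bit linear value into 8 bits with a
# gamma-style tone curve, then invert. Values that land on the same
# 8-bit code are indistinguishable afterward, and anything clipped
# above the 14-bit ceiling is gone for good.

def to_jpeg8(linear14, gamma=2.2):
    x = max(0, min(linear14, 16383)) / 16383.0  # clip to the 14-bit range
    return round((x ** (1 / gamma)) * 255)      # compress into 8 bits

def from_jpeg8(code8, gamma=2.2):
    return round(((code8 / 255.0) ** gamma) * 16383)

print(to_jpeg8(16383))               # full scale maps to 255
print(to_jpeg8(40), to_jpeg8(41))    # deep-shadow values collapse together
print(from_jpeg8(to_jpeg8(40)))      # the round trip does not return 40
```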

1155
EOS Bodies / Re: What's Next from Canon?
« on: February 16, 2014, 05:39:57 PM »
The EVFs are showing what is in essence the out-of-camera JPEG, with about 1 stop clipped from each end.  And as we all know, the out-of-camera JPEG contains several stops less DR than is available in the raw data.

Interesting. So the camera is essentially generating 60 or more JPEG images per second, plus adding overlay data to it. Amazing.

Not even close, at least on Canon P&S cameras.

For live view (either on the LCD, or EVF), the sensor image is converted to an 8 bit, 4:1:1 YUV image.
This has a luminance (Y) resolution of 720 x 240 pixels on most Canon cameras; but only 180 x 240 resolution for each of the chrominance channels (U & V).

Some cameras, such as the G12 & G1X, double the vertical resolution to 480 lines.

My understanding is this is done using a special read-out mode on the sensor; but I may be wrong there.

The live view is normally done at 30 frames per second, except in low light when the camera will lower the refresh rate to capture more light per frame.

There is no JPEG processing being done.

Phil.

As far as I know, Phil, you are correct. I believe that Canon cameras use sRAW readout (when available) or something very similar. The sRAW format is a YCbCr-style format: full resolution luminance, half resolution color-difference channels (Cr and Cb). Fundamentally, sRAW and its variants are technically the same kind of format as JPEG, except that JPEG is 8-bit, whereas sRAW and its variants are 12- or 14-bit. Since it is a sensor pulldown (basically a real-time stream off the sensor), things aren't really compressed much (outside of lowering the resolution of the Cr and Cb channels.) The thing about this kind of pulldown is you are affected by rolling shutter...as that's basically what's going on. Again, not particularly ideal; however, you are indeed limited initially by sensor DR (rather than 8-bit JPEG DR, which tops out at 8 stops at best).
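To put rough numbers on the 4:1:1-style stream Phil describes, here's a quick byte-budget sketch (8 bits per sample assumed; the 720x240 luma and 180x240 chroma figures are from his post):

```python
# Byte budget for an 8-bit 4:1:1-style frame: full-resolution luma (Y)
# plus two chroma channels at a quarter of the horizontal resolution.
def yuv411_bytes(width, height):
    luma = width * height               # Y: one byte per pixel
    chroma = 2 * (width // 4) * height  # U and V: 1/4 horizontal res
    return luma + chroma

frame = yuv411_bytes(720, 240)  # 172800 luma + 86400 chroma bytes
rgb = 720 * 240 * 3             # the same frame as packed 8-bit RGB
print(frame, rgb, frame / rgb)  # the subsampled stream is half the size
```

That halving is why subsampled YUV is such a cheap way to feed a live view: the eye is far less sensitive to chroma resolution than to luma resolution, so little is visibly lost.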
