

Messages - jrista

1036
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 26, 2014, 10:49:43 PM »
And finally, I think it will be a dual processor.... but I am wondering if the time has come for one processor to be optimized for stills and the other processor optimized for video.

That's an interesting thought. I guess it depends on the frame rate. If they only bump up to 10fps, I think a single DIGIC 7 could handle it (easily...with room to spare for a whole ton of other stuff). That would ESPECIALLY be true if they move the ADC onto the sensor die and make it column-parallel. They do have a patent for CP-ADC with a dual-scale ramp ADC (the dual-scale just allows the ADC to operate at different rates based on some trigger factor...say sensor heat. More heat means higher noise; slower readout means less noise. Switch from high-speed readout to low-speed readout when possible under higher heat, and you could counteract the increase in dark current noise. I have no idea what the trigger factor would be to switch from the higher speed to the lower speed, or vice versa, in an actual product, though.) Parallelizing the ADC and putting that logic on the die also reduces the load on the DIGIC itself...it would then solely be responsible for digital pixel processing, in which case a DIGIC 7 with no ADC units could theoretically have even more processing power than a DIGIC 7 that did include them. So instead of being 7x faster than a DIGIC 6, it might end up being 12x or 14x faster.
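To make the dual-scale idea concrete, here is a toy sketch of that kind of temperature-triggered rate switching. Everything here is an assumption: the threshold, the ramp timings, and the noise figures are invented for illustration, since the patent doesn't spell out the trigger.

```python
# Toy model of a dual-scale ramp ADC policy (illustrative numbers only):
# a fast ramp with higher read noise, and a slow ramp with lower read
# noise, selected by a hypothetical die-temperature trigger.

FAST = {"ramp_us": 10, "read_noise_e": 4.0}   # high-speed readout
SLOW = {"ramp_us": 40, "read_noise_e": 2.0}   # low-speed, quieter readout

def pick_ramp(die_temp_c, threshold_c=45.0):
    """When the die runs hot (dark current climbing), trade readout speed
    for lower read noise; when cool, keep the fast ramp."""
    return SLOW if die_temp_c >= threshold_c else FAST

for temp in (30, 44, 45, 60):
    ramp = pick_ramp(temp)
    print(f"{temp}C -> {ramp['ramp_us']}us ramp, ~{ramp['read_noise_e']}e- read noise")
```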

If you had one of those dedicated to stills/AF/metering, and one dedicated to video, you could really do a hell of a lot with the video. Canon should easily be able to surpass what the Bionz X in the A7s does, achieving ultra-low-noise ISO 400k, maybe even 800k.
Every so often there needs to be a "reset"

It happened in lenses when AF started to come out. Canon took the brave step of completely redesigning the interface to handle the requirements of digital communication between lens and body and we jumped from FD to EOS... this might be the time for a similar jump on the inside of the camera.

15 years ago, the innards of digital cameras were a collection of ICs with specific functions, and the communication between them was complex and, at the same time, fairly limited. At the moment, Canon is doing a major redesign to fit DPAF onto its sensors and, presumably, onto a smaller fabrication technique. This gives us four big possibilities on the sensors. First, with less wasted surface area, the amount of captured light goes up, and with that, so does the performance. The second is that smaller fabrication means smaller transistors, which means shorter electrical paths, and that gives us higher speed. Third, with smaller transistors and lower voltages, we get less heat, and that means lower noise and longer battery life. Fourth, with smaller transistors the A/D can be integrated into the sensor, and that gives us less noise and a cleaner design.

With all the analog confined to the sensor, one can now communicate digitally between the sensor, processor(s), display devices, and storage media. Gigabyte-per-second transfer rates are easy to achieve.

15 years ago, there was no video on DSLRs.... now it is a standard feature.... yet we have the same general purpose DIGIC doing both..... it might be time for a split into a dedicated stills processor and a dedicated video processor.

This could be the time for a "reset" of the internal architecture of the Canon DSLRs.....

Regarding the shorter electrical paths bit...I think you may be conflating CPUs with sensors. In a CPU, you can shrink the whole package...and yes, that results in shorter distances for electrons to travel. In a sensor, it isn't really the same. There is a very minimal amount of logic that occurs at the pixel...basically amplification. The distance that amplified charge travels is the same for any given sensor size, though...the top row of pixels is going to have to travel the full 24mm height of the light-sensitive area, plus the additional millimeter or so of masked and calibration pixels around the border, plus the extra millimeter to the CDS unit, plus another short distance to some voltage output (or, in the case of CP-ADC, to the ADC units). Shrinking the transistor size doesn't really change charge travel distance in CIS devices, since the whole die generally remains the same size. Even if you shortened the distances in the logic around the pixel itself, that is such a trivial distance compared to the total readout distance that I don't think it would be meaningful.

As for lower voltages: for cameras used for normal photography, again, I don't think the difference in noise from using a lower voltage on the sensor itself is going to matter much. In Canon's current setup, the sensor itself is actually very low noise. Based on Roger Clark's work, some of Canon's more recent sensors have as little as 1.5e- worth of electronic noise introduced by the sensor itself (or maybe it was 1.2e-). When your FWC is around 30ke- for APS-C and around 67ke- to 92ke- for FF, that is effectively meaningless. The primary read noise comes from the downstream (off the sensor die) electronics: the off-sensor bus, the secondary downstream amp, and the high-frequency ADC units. Those suckers are adding up to 35e- or so worth of noise.
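Independent noise sources add in quadrature, which is why the on-die 1.5e- barely registers next to the ~35e- downstream chain. A quick check, using the figures above:

```python
import math

sensor_noise_e = 1.5       # on-die electronic noise (figure quoted above)
downstream_noise_e = 35.0  # bus + secondary amp + high-frequency ADCs

total = math.sqrt(sensor_noise_e**2 + downstream_noise_e**2)
print(f"total read noise: {total:.2f} e-")  # ~35.03 e-
print(f"the sensor itself adds only {total - downstream_noise_e:.3f} e- to the total")
```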

I do believe moving the ADC onto the sensor die is the biggest move that could reduce noise. Eliminating the bus, secondary amp, and massively increasing the parallelism of the on-die ADC units would allow them to operate at a significantly lower frequency. The reduction in operating frequency would have the most significant noise reduction effect. If Canon follows Sony's design, moving the clock and other high frequency components off to a remote corner of the die, away from the ADCs, would practically eliminate the noise they cause.
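The arithmetic behind that is simple: the same total pixel throughput, spread across thousands of column ADCs instead of a handful of off-die converters, lets each converter run orders of magnitude slower. Rough numbers (the column count and channel count here are just plausible assumptions):

```python
pixels = 20_000_000        # hypothetical 20mp sensor
fps = 10                   # target frame rate
offchip_channels = 8       # typical few-channel off-die readout
column_adcs = 5472         # one ADC per column (assumed column count)

total_rate = pixels * fps  # samples per second that must be converted
print(f"off-die, {offchip_channels} channels: {total_rate / offchip_channels / 1e6:.0f} MS/s per ADC")
print(f"column-parallel: {total_rate / column_adcs / 1e3:.1f} kS/s per ADC")
```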

I also agree that once the analog signal is converted to a digital signal, then you have the freedom to move it around at high speed. You can use error-corrected transfer, and sure, ship the information around at gigahertz error-corrected speeds.

I think the DIGIC is still handling both stills and video because, to date, the DIGIC houses the ADC units. Once the ADC units are on the sensor, then sure, I think it would be a lot easier to have some kind of switching unit on the digital bus between the sensor and the processors, and you could dedicate one to stills processing and another to video processing. However, if you have a sufficiently powerful DSP with enough horsepower, it would probably be more energy efficient to house both of those dedicated processors in a single unit. Especially if the ENTIRE unit can be dedicated to image processing, since the ADC units wouldn't be taking up die space or power.

1037
EOS Bodies / Re: 7D mark 2 crop vs full frame
« on: June 26, 2014, 10:31:30 PM »
You're comparing general-purpose processors to special-purpose DSPs designed to handle, in hardware, the specific processing needs of a specific camera, or a small set of cameras. The two, a general-purpose ARM and a DIGIC DSP, are NOT directly comparable. The clock rate of a DIGIC may be "abysmally slow", however its IPC is extremely high compared to the ARM.

I'm aware of the difference between normal CPUs and dedicated DSPs.  I'm also aware that modern 64-bit ARM CPUs have vector engines and graphics chips that are fast enough to almost certainly make that DSP hardware completely unnecessary.  The reason you use DSPs is because the CPUs can't handle the processing.  The CPU in an iPhone 5S is faster than a single-processor 2 GHz G5 Mac from just a few years ago.  And Canon RAW image rendering seems to be pretty close to instant on an iPhone 5 using just the CPU, as far as I can tell from my Safari experiments, so I would expect that a CPU comparable to the one in the 5S ought to be able to handle a DSLR's image processing without breaking a sweat.  Granted, converting to JPEG takes extra work, but not that much extra work.

You should be able to get by with a small amount of dedicated hardware to control the ADC sweep across the CMOS part and shove the data into a small chunk of dual-port RAM so the CPU can then copy it into normal RAM using NEON instructions.  Mind you, I could be wrong—I'd have to actually write the code before I could say with absolute certainty—but I'm pretty sure we're either past the point where those DSPs become unnecessary or at least rapidly approaching it.

BTW, I'm not sure what you mean by "IPC".  To me, that means interprocessor communication, which isn't relevant here.  Do you mean IOPS?

IPC = Instructions Per Clock. It's a well-known term when describing how much work any kind of processor does in one clock cycle. Longer, more complex pipelines (such as in a purpose-built DSP) usually have much higher IPC. So, while they often have a lower clock rate, they do just as much if not more work than a general purpose device.
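In other words, effective throughput is roughly clock rate times IPC, so a slow-clocked part with wide hardware pipelines can out-work a fast-clocked general-purpose core. A trivial illustration (the numbers are made up, not measured figures for any real chip):

```python
def throughput_mops(clock_mhz, ipc):
    """Effective work rate: millions of operations retired per second."""
    return clock_mhz * ipc

dsp = throughput_mops(clock_mhz=500, ipc=16)  # low clock, high IPC
arm = throughput_mops(clock_mhz=2000, ipc=2)  # high clock, low IPC

print(f"DSP: {dsp} MOPS vs ARM: {arm} MOPS")  # 8000 vs 4000
```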

I don't think we'll see DSPs disappear from DSLRs any time soon. Purpose-built processors allow manufacturers to very finely and precisely tune the processor to the capabilities of the camera, optimize power usage, etc. If anyone was going to jump the DSP ship first, it would have been Sony. Instead, they built the Bionz X DSP for the A7s, and it is pretty kick-ass.

Regarding memory, I wouldn't say that it's just the memory that consumes power...because the IPC of the DIGIC chips is high, they ARE doing a LOT of work, regardless of the clock rate. Despite that, the primary power consumer is unlikely to be either the memory or the DSP. Moving physical components requires more power...flapping a mirror @ 12fps and moving large focus groups in lenses are going to consume more power. If you're shooting action, those things are going to consume a lot more power. With tiny transistors these days, it's easy to build low-power electronics...but the force required to move a physical object will always be the same.

35mm cameras used to run for months on a tiny button cell.  So I'd expect that, compared with the CPUs, LCD panels, and RAM, the mechanical bits should pretty much be lost in the noise, power-wise (though that may not be true with mirror lock-up—not sure).  Then again, people took fewer shots in those days, so maybe that's not a fair comparison.

I don't think any film SLR cameras were ever cranking out 12fps, nor moving the lens focus group of the great whites like the 300mm through 600mm lenses that frequently. The 1D X requires a higher voltage to supply the necessary power to the lens to support fast AF. Moving the mirror that fast, reliably and accurately on a consistent basis takes the right kind of power/signal.

I also don't think that a large pro-grade film SLR was running off of a button battery. IIRC, Canon has a specific NiMH battery used in the 1V and EOS 3 cameras...last I saw, it looked pretty similar to the 1D X batteries in terms of shape and size.

I don't think "throwing an ARM at the problem" is a solution. The IPC of an arm is low, they are GENERAL purpose processors, so they will require far more cycles to perform the kind of image processing necessary to handle the information coming off the sensor. A specially-designed DSP that has the necessary logic built into the hardware will perform image processing a lot faster for less power, as it's a SPECIAL purpose device. That's why we have GPUs in our computers...they are specially designed to tackle the problem of pixel processing in a more efficient manner than a CPU ever could.

It doesn't really matter how many cycles the processing takes.  What matters is the clock time and, to a lesser extent, the power consumption.  If the general-purpose CPUs can handle the processing in the required time, it makes a lot more sense to use those rather than custom DSP hardware, because in the downtime between photos, you can repurpose that extra CPU power for other useful tasks, unlike DSP hardware, which is pretty much a one-trick pony.

What matters is how much work is done in any given unit of time. That's what affects power consumption, and that's where IPC comes into play. A high-IPC/low-clock part can do the same amount of work or more, at the same power consumption, as a low-IPC/high-clock part.

As for repurposing extra processing power for other "useful tasks"...what useful tasks? We're talking about cameras here, not smartphones. I am happy that my immensely capable smartphone has a camera, but it's also basically a general-purpose pocket PC. It's a small, mobile computer. My DSLR is just a camera...it only really has one specific task. I'm not going to be doing image editing on its microscopic 3.2" screen, I'm not going to be dialing up my buddies or playing games or listening to music out in the field. I bought a DSLR so I would have a very powerful, very capable camera with a high frame rate, highly accurate focus, and the ability to use a wide variety of lenses. That's its purpose. It's a SPECIAL purpose. I don't know what I'd use a whole lot of extra processing horsepower for in a camera... I use my Lumia for general-purpose tasks, as it is far better suited to them.

1038
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 26, 2014, 08:48:44 PM »
And back to our regularly scheduled programming.

The 7D II's new technology. What will it be? Here are my thoughts, given past Canon announcements, hints about what the 7D II will be, interviews with Canon uppers, etc.

 1. Megapixels: 20-24mp
 2. Focal-plane AF: Probably DPAF, with the enhancements in the patents Canon was granted at the end of 2013. *
 3. New fab process: Probably 180nm, maybe on 300mm wafers. **
 4. A new dedicated AF system: I don't know if it will get the 61pt AF, which is probably too large for the APS-C frame. A smaller version...41pts would be my hope. Same precision & accuracy as the 61pt system on the 5D III, with the same general firmware features.
 5. Increased Q.E.: Canon has been stuck below 50% Q.E. for a long time now. Their competitors have pushed up to 56% and beyond; a couple have sensors with 60% Q.E. at room temperature. Higher Q.E. should serve high ISO very well.
 6. Faster frame rate: I suspect 10fps. I don't think it will be faster than that, 12fps is the reserved territory of the 1D line.
 7. Dual cards: CF (CFast2) + SD. I hate that, personally, but I really don't see the 7D line getting dual CF cards. (I'll HAPPILY be proven wrong here, though!)
 8. No integrated battery grip. Just doesn't make sense; the 7D was kind of a smaller, lighter, more agile alternative to the 1D, and a grip totally kills that.
 9. New 1DX/5DIII menu system. Personally, I would very much welcome this! LOVE the menu system of the 5D III.
10. GPS and WiFi: I think both should find their way into the 7D II, what with the 6D having them. Honestly not certain, though...guess it's a tossup.
11. Video features: Video has always been core to the 7D II rumors. 60fps 1080p; 120fps 720p (?); HDMI RAW output; External mic jack; 4:2:2; I think DIGIC 7 would probably arrive with enhancements on the DIGIC 6 image and video processing features. Maybe on par with Sony's Bionz X chip.

* Namely, split photodiodes, but with different sizes...one half is a high sensitivity half, the other half is a lower sensitivity half. The patents are in Japanese, and the translations are horrible, so I am not sure exactly WHY this is good, but Canon's R&D guys seem to think it will not only improve AF performance and speed, but "reduce the negative impact to IQ"....which seems to indicate that the use of dual photodiodes has some kind of impact on IQ, a negative impact.

** We know Canon has been using a 180nm process for their smaller form factor sensors for a while. Not long ago, a rumor came through, I think here on CR, indicating Canon was building a new fab and would be moving to 300mm wafers. That should greatly help Canon's ability to fabricate large sensors with complex pixels for a lot cheaper. A smaller process would increase the usable area for photodiodes, as transistors and wiring would be a lot smaller than they are today on Canon's 500nm process. That would be a big benefit for smaller-pixel sensors. If they moved to a 90nm process, all the better. I don't suspect we'll see any kind of BSI in the 7D II...but, who knows.
I agree with everything... but I would like to make three comments.

It's not very cost effective to just upgrade fabrication by one step.... they are going to have to live with the new facility for a long time. I would expect that they would jump over 180 to 90nm... or even smaller.

I think that for larger sensors, 180nm is still used by other major manufacturers, like Sony. It's only with the much smaller sensors that have pixels smaller than 2µm that you start seeing smaller transistors. Even a move to 180nm for large (APS-C, FF) sensors would be HUGE. I mean, compared to 500nm, that's about 1/3rd the feature size. If the pixel count only goes up to 24mp in the 7D II, a move to 180nm would mean that the light gathered per pixel actually increases relative to, say, 20mp at 500nm. That's how significant it would be.
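A rough way to see that claim, using the same crude assumption as the photodiode math elsewhere on this page (wiring eats one process-width of border on each side of the photodiode; the sensor width and pixel counts are illustrative):

```python
def aperture_um(sensor_width_mm, h_pixels, process_nm):
    """Photodiode width under a crude model: pixel pitch minus one
    process-width of wiring on each side."""
    pitch_um = sensor_width_mm * 1000 / h_pixels
    return pitch_um - 2 * process_nm / 1000

old = aperture_um(22.3, 5472, 500)  # ~20mp APS-C on a 500nm process
new = aperture_um(22.3, 6000, 180)  # ~24mp APS-C on a 180nm process

print(f"20mp@500nm: {old:.2f}um, 24mp@180nm: {new:.2f}um photodiode width")
print(f"light per pixel, despite MORE pixels: {(new / old) ** 2:.2f}x")
```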

I'd have to check, but I would expect that Canon is probably already on a smaller process, 90nm or even 65nm, for the 1/2", 1/3", and smaller sensors.


And video. I think 4K video on this camera is a flip of the coin... I wouldn't bet one way or the other, but if they do use CFast cards, the odds of 4K video go up.

Agreed. I really hope they do move to CFast, and I also hope they move to USB 3.0. If we see both of those, then 4K should be a given.

And finally, I think it will be a dual processor.... but I am wondering if the time has come for one processor to be optimized for stills and the other processor optimized for video.

That's an interesting thought. I guess it depends on the frame rate. If they only bump up to 10fps, I think a single DIGIC 7 could handle it (easily...with room to spare for a whole ton of other stuff). That would ESPECIALLY be true if they move the ADC onto the sensor die and make it column-parallel. They do have a patent for CP-ADC with a dual-scale ramp ADC (the dual-scale just allows the ADC to operate at different rates based on some trigger factor...say sensor heat. More heat means higher noise; slower readout means less noise. Switch from high-speed readout to low-speed readout when possible under higher heat, and you could counteract the increase in dark current noise. I have no idea what the trigger factor would be to switch from the higher speed to the lower speed, or vice versa, in an actual product, though.) Parallelizing the ADC and putting that logic on the die also reduces the load on the DIGIC itself...it would then solely be responsible for digital pixel processing, in which case a DIGIC 7 with no ADC units could theoretically have even more processing power than a DIGIC 7 that did include them. So instead of being 7x faster than a DIGIC 6, it might end up being 12x or 14x faster.

If you had one of those dedicated to stills/AF/metering, and one dedicated to video, you could really do a hell of a lot with the video. Canon should easily be able to surpass what the Bionz X in the A7s does, achieving ultra-low-noise ISO 400k, maybe even 800k.

1039
EOS Bodies / Re: 7D mark 2 crop vs full frame
« on: June 26, 2014, 01:26:48 PM »
I posted this in another 7D mk II thread but wanted to mention it here as well. I have decided that I would be willing to bet money (if I had any money) that this camera will have 30+ MP, and that, in Q1 of 2015, a camera will be released from Canon which will have 40-50 MP. Call me crazy.

It might go 23-24mp; anything higher in an APS-C would cause big technical issues trying to achieve a 10 fps speed.  The amount of data to process is too much.

Only because Canon's custom silicon is abysmally slow.  If the specs from Magic Lantern are correct (and if I read the spec sheets correctly), then the top-of-the-line, current-generation 64-bit ARM chips are at least 200x as fast as the DIGIC 5, and maybe an order of magnitude more than that.  (I don't know which version of the CPU they based their chip on.)  All their custom DSP hardware is just working around the CPU itself being a total dog.

The FPS of DSLRs is not primarily limited by the CPU.  It is primarily limited by the speed of the mechanical shutter and secondarily by the size of the buffer and the speed at which the flash parts can write the data usefully.  The CPU speed is almost certainly chosen to be just fast enough to handle the needed throughput.


My comment had to do with the internal readout of the sensor and the processor.  That's why the 7D and 1D series use dual processors, to get the data flow that's needed.

I suspect they use dual processors because it's cheaper to add more existing, slower parts than it is to design newer, faster ones.  :)


A 50MP raw image before it is compressed by the processor will be well over 200MB.  Jpeg images are very highly compressed, but the processor must work very hard to compress a 200MB+ image to jpeg size.  After that, write speeds are not the issue.

The big headache with data that big is actually the power consumption of the RAM needed to store it.  JPEG processing is borderline trivial, CPU-wise, and is almost infinitely parallelizable.  You could throw a separate CPU core at each DCT block if you were so inclined, or even parallelize the DCT itself (though I don't remember precisely how you do the latter without duplicating parts of the computation).  Even if you're doing scaling to obtain a lower output resolution, it's still trivially parallelizable.  Just throw a decent 64-bit, 4-core ARM CPU at the problem, and you're likely in the ballpark.
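The "separate core per DCT block" idea looks like this in miniature: tile the image into 8x8 blocks and transform them independently. A minimal sketch (scipy's dctn is real; the image size and pool settings are arbitrary):

```python
import numpy as np
from scipy.fft import dctn
from concurrent.futures import ProcessPoolExecutor

def dct_block(block):
    # Each 8x8 block's 2D DCT depends on nothing outside the block,
    # which is what makes this embarrassingly parallel.
    return dctn(block, type=2, norm="ortho")

img = np.random.rand(512, 512)  # stand-in for one channel of a photo
blocks = [img[y:y + 8, x:x + 8]
          for y in range(0, 512, 8) for x in range(0, 512, 8)]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        coeffs = list(pool.map(dct_block, blocks, chunksize=256))
    print(f"transformed {len(coeffs)} blocks independently")
```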

But write speeds are going to be a big part of the problem.  Lots of people shoot RAW, and a 200 MB file size represents nearly a factor of ten increase.  And JPEG, assuming the quality setting remains the same, and assuming you're shooting at full resolution, is likely to also increase in size roughly proportional to the number of pixels, so that's going to balloon by close to a factor of ten as well.

You're comparing general-purpose processors to special-purpose DSPs designed to handle, in hardware, the specific processing needs of a specific camera, or a small set of cameras. The two, a general-purpose ARM and a DIGIC DSP, are NOT directly comparable. The clock rate of a DIGIC may be "abysmally slow", however its IPC is extremely high compared to the ARM. The ARM may have a ridiculously high clock rate, but its IPC is very low. Canon's DSPs really aren't any slower than the competition's. DIGIC 5/5+ is already a bit older now; DIGIC 6 is really what should be compared to the competition. In that respect, DIGIC 6 is on par with the Sony BIONZ X (the DSP used in their A7s), as they both do similar processing, all in hardware (although Canon's is currently only used for some of their compact cameras.)

Regarding memory, I wouldn't say that it's just the memory that consumes power...because the IPC of the DIGIC chips is high, they ARE doing a LOT of work, regardless of the clock rate. Despite that, the primary power consumer is unlikely to be either the memory or the DSP. Moving physical components requires more power...flapping a mirror @ 12fps and moving large focus groups in lenses are going to consume more power. If you're shooting action, those things are going to consume a lot more power. With tiny transistors these days, it's easy to build low-power electronics...but the force required to move a physical object will always be the same.

I don't think "throwing an ARM at the problem" is a solution. The IPC of an arm is low, they are GENERAL purpose processors, so they will require far more cycles to perform the kind of image processing necessary to handle the information coming off the sensor. A specially-designed DSP that has the necessary logic built into the hardware will perform image processing a lot faster for less power, as it's a SPECIAL purpose device. That's why we have GPUs in our computers...they are specially designed to tackle the problem of pixel processing in a more efficient manner than a CPU ever could.

1040
Photography Technique / Re: Shallow DOF vs lighting
« on: June 26, 2014, 12:21:28 AM »
Hi all, my first post at canonrumours!

For portraits where you have time to mess around a bit, if you had to pick between shallow DOF or off camera lighting, which would you choose? Which technique alone do you think makes better portraits?

I choose both!

why force yourself to choose between one or the other?

I like this answer! I totally agree...if you have the option to control both, control both! Simple as that. Your photos are superbalicious, btw. LOVE that DOF, and the lighting is excellent! :D

1041
- faster SD card slot
- dual SD (get rid of that ancient and expensive CF)

Your position. Hopefully they won't do that.

- and yes.. MORE DR please.. kill that banding in the shadows, buy sensors from Sony for Christ's sake, as Nikon does.
- more mp, like 40mp (and 20mp mRAW), which those great lenses such as the 24-70 2.8 II, 70-200 2.8 II, and 16-35 f/4 can handle!

Why do you need a Canon? Just grab the latest Nikon with your fancy Toshiba/Sony/whatever sensor and be happy. I don't understand why you bought a 5DM3 when everything inside is wrong for you, even the memory card.

You'll get an equally good 24-70 or 70-200 at Nikon. And who needs 40MP? Get a pano tool. Understand it. Use it.

P.S. I'm glad that there are companies out there who invent their own sensors, like Sigma or Canon.

Are you still glad when you're pushing up the shadows in post-processing, knowing that you've spent a fortune on a great camera with almost the same sensor as the previous model?

I'm not a Nikon guy, but honestly I don't get it!
Don't Canon sensors need improvement?
I don't care who makes it; I care about doing my job better.
But I think it's funny to talk about USB3 and highlighted buttons in the next models.

Don't complain if Canon re-sells the same sensor in the next model with USB3 and 4K as new features, which are useless for photographers.

How many times have you lifted the shadows of your photos by MORE than four stops? Be honest, now, with yourself, because that's the cutoff point. How often do you lift shadows by 4.5, 5, 5.3 stops? That's what the additional DR of the D800 gets you...those extra stops of shadow-lifting power (assuming you expose to preserve the highlights). There are very few situations where you actually need to lift shadows by that much. Sunset landscapes where the sun is setting behind mountains, or something like that, are among the few cases where you actually NEED that kind of dynamic range...and then, having barely more than two additional stops of DR is almost meaningless, as the dynamic range of the scene is likely closer to 20 stops. You're still going to need GNDs.

Sure, it's nice having cleaner shadows, I want cleaner shadows myself...but more DR is a feature you'll rarely use. On the flip side, USB3.0? You'll use that EVERY time you import. Dual CF cards? You'll use that EVERY time you shoot anything. Better/faster AF? You'll use that EVERY time you shoot. There are more important features on a camera than the dynamic range, unless you primarily shoot scenes that have a ton of DR in them, or don't like having any contrast in your final images.
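For what it's worth, that "barely more than two stops" figure falls out of the usual engineering definition, DR = log2(FWC / read noise). Plugging in illustrative numbers along the lines of those quoted elsewhere on this page (the 7e- figure for an on-die, column-parallel design is my assumption):

```python
import math

def dr_stops(full_well_e, read_noise_e):
    """Engineering dynamic range in stops: log2(FWC / read noise)."""
    return math.log2(full_well_e / read_noise_e)

offdie = dr_stops(67000, 35)  # FF sensor read through an off-die ADC chain
ondie = dr_stops(67000, 7)    # same FWC with on-die column-parallel ADC

print(f"{offdie:.1f} vs {ondie:.1f} stops -> {ondie - offdie:.1f} extra stops of shadow lift")
```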

1042
If this is true, and Canon feels that this "new technology" about to be debuted in the 7D2 is so good that the rest of the lineup must have it, it must be something big!

That is quite optimistic. 

Launch price for the MkIII was $3500... so I suppose $4000 seems like a reasonable bump up in price... with 1DXs selling at $5000... it really raises the question of whether a 1DX for an extra grand is worth it over a new MkIV.  Interesting question.

There are many reasons to buy the 5D line over the 1D line, regardless of how much better the 1D line may be. No battery grip (a BIG reason for many people), more megapixels, less read noise, quieter mirror, cheaper (even a thousand bucks is more than enough of a difference), etc.

1043
Living in Colorado, I think I can help. Colorado has a LOT of wonderful places to visit, for landscapes, wildlife, and more.

The most well known would be Rocky Mountain National Park, just north of the town of Estes Park. From there, you have everything you're looking for...lots of wildlife, lots of birds, lots of nature, tons of wildflowers (at least now...they are blooming and will be done soon), and some truly amazing landscapes. You will want to take Trail Ridge Road up to the continental divide...absolutely spectacular! You should also visit Bear Lake, from which a whole host of trailheads spin off. Bear Lake itself is beautiful, but you can take trails up to Dream and Emerald Lakes below Hallett and Flattop Peaks. You can also head off in another direction and climb to the basins just below Longs Peak (a 14,000+ foot mountain) at the head of the divide. There are also dozens more trails.

You can take longer hikes up to other truly amazing vistas in other areas of the park. Most of those trails are longer than the ones around Bear Lake, so you'll have to prepare. If you take Trail Ridge Road up to the divide, you can continue along it and go over the divide to the western side. More beautiful vistas there. There is also Lake Granby, the second-largest lake in Colorado, which has some amazing water vistas with the divide behind it. Just to the south of RMNP is the Indian Peaks region. I actually spent the last few years of my teens growing up in this area. It's truly amazing, another huge range of incredible peaks stretching down parallel to the Peak to Peak Highway. There are lots of little county roads that take you west towards the mountains, and dozens of trails to follow. I recommend hitting up Brainard Lake and taking the trail to Long Lake. Beautiful landscape up there, a wonderful place for wildflowers, beautiful creeks and rivers, and some more unique birds (such as the Gray Jay...truly beautiful birds!) Later in the year, it's also a wonderful spot to get macro photos of mushrooms, of which there is a pretty wide variety.

Another wonderful place is the Mt. Evans Wildlife area, and Mt. Evans itself. This is one of a few mountains you can drive right up to the peak of. The vistas from up there are amazing...you can see for miles in every direction, and it's stunning at sunset. Last time I was up there, I was a noob of a photographer, so I don't really have any nice photography to share. But I do remember the alpenglow...dozens of peaks all glowing at the same time right at sunset...just absolutely amazing. There are a few mountain ridge trails that you can travel to get to some neighboring peaks around Evans, and there are a number of lakes dotted around.

One of the more iconic spots in Colorado is the Maroon Bells. These are highly recognizable because of John Fielder's work, and he has photos of the Bells in every season. The Bells themselves take a while to get to...it's a good long drive from pretty much any of the major staging towns (mainly Aspen, CO IIRC). If you are willing to hike, the high plateaus above and behind the Maroons are also just phenomenally beautiful. A large format film photographer, Jack Brauer, has some stunning photos from the high altitude areas behind the bells.

The continental divide stretches the entire length of Colorado. You can find 14ers all along the central ridge, including Mt. Elbert (tallest in Colorado), Mt. Massive, La Plata, Mts. Oxford, Harvard, Columbia and Yale, as well as a host of others.

Just to the west of this lower region of the divide are Aspen, Glenwood Springs, etc. These towns are beautiful, and can be staging grounds for visiting the other mountainous regions nearby. Glenwood Springs is home to some hot-spring-fed pools. If you can hit them on a weekday, it's better...as there are fewer people...but the pools can become really packed. There are also some caves that you can visit near Glenwood as well, if you're interested in that kind of thing (there are actually a number of interesting cave systems around Colorado, something to keep an eye out for.)

There are a couple of other amazing regions as well. Colorado has a small desert, Great Sand Dunes National Park. This is home to the tallest sand dunes in North America, nestled just behind (to the west of) Pikes Peak. I haven't been down there since the 1990's, before I moved to Colorado, but a photographer friend of mine, author of a photobook for the park, has some amazing photos of the area.

Finally, if you have the time, the southwestern quadrant of Colorado has some truly stunning landscapes. Again, Jack Brauer has truly amazing photos of this region, as it's where he lives. Southwestern Colorado is home to the Uncompahgre and San Juan mountain regions. Littered with aspens and many high open meadows, it's just stunning. I can't describe it...just check out Brauer's work. It's a long way from the populated "center" of Colorado, but that region is definitely the "heart" of Colorado. It's out of the way, there aren't many towns down there, certainly no real big cities...but if you have the chance, it's worth a visit.

There are plenty of other interesting places to visit. One of my favorite places near the urban and suburban sprawl of the greater metro areas around Denver is Roxborough State Park, which is full of giant red slabs of rock that jut up from the earth. Another similar, but also distinctly different, place is Garden of the Gods near Colorado Springs. Small places, but worth a visit if you have the chance. Well, that is all barely scratching the surface. There are thousands of incredibly picturesque places to visit in Colorado...impossible to list them all. I think I've named the major regions for you, though, which should get you started.

1044
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 25, 2014, 02:40:42 AM »
It could as well be a global shutter based on this technology.
No more X-sync barrier, no more rolling shutter problems.

I don't think so. There are two modes...fully transparent, and "colored". The material does not become completely opaque; it takes on a lumpy transmission curve that peaks near the blue end and ultimately trails off by IR. The material would need to be 100% opaque to operate as a global shutter. On the other hand, you wouldn't necessarily want it 100% opaque when using it as a color filter...you would want it to have some kind of response curve that peaks in a given range of wavelengths and bottoms out in the others. Even a standard color filter is not 100% opaque to other colors; at the very least there is usually a few percent of red and green getting through a blue filter, a few percent of blue getting through a red filter, etc. As a filter material, it sounds pretty amazing.

As a shutter, I don't know that it's capable of becoming opaque enough...even if Canon ultimately got transmittance down to 0.1% or even 0.01%, that is still allowing light through. Even when you weren't taking pictures, at that low transmittance level you would actually be exposing the sensor...which would ultimately lead to higher levels of noise.
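To put a number on that leakage: what matters is the leaked light relative to the intended exposure, and the "shutter" sits closed far longer than the exposure lasts. All of these figures are assumptions for illustration:

```python
exposure_s = 1 / 1000      # intended exposure time
closed_s = 0.5             # time the "shutter" sits closed but illuminated
leak_transmittance = 1e-4  # 0.01% of light still getting through

leak_vs_exposure = (closed_s * leak_transmittance) / exposure_s
print(f"leaked light = {leak_vs_exposure:.1%} of the intended exposure")  # 5.0%
```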

1045
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 25, 2014, 12:47:50 AM »
Interesting article about new canon patents:
http://www.photographybay.com/2014/06/23/canon-is-developing-a-new-organic-compound-for-sensor-and-optics-technology/

The organic compound patent is only patenting the molecule, an electrochromic molecule (molecules or polymers that change color in the presence of an electric current), but the implications are very interesting. It's an adjustable organic filter...which has very high transmittance in one mode, then lower transmittance in the other, "color" mode. In color mode, it looks like the color would be more blueish or cyan, as some light is transmitted but most IR is blocked. If they could refine the capability...it might lead to adjustable color filters. This was something I hypothesized a couple of years ago...sensors with a high refresh rate during exposure, with dynamic color filters. You would get 100% fill factor for all colors, without the need to layer the sensor. That means you get higher sensitivity.

(Note, this is not what the patent describes...however it is something it could imply:) Assuming this compound leads to a more controllable transmittance, it might be possible for Canon to create a full-color sensor with a single layer of organic film that changes between red, green, and blue channels during the length of the exposure. You could theoretically even switch the filter to full-transparency mode for a "luminance" channel. This would be WAY better than a Foveon-style sensor, which has problems with noise in the red channel, and a bit in the green, due to their depth within the silicon. With a dynamic color filter, especially one with high transmittance, you could gather far more light. You could even shorten the sub-exposures in each color channel and expose for longer in luminance to get more detail. (For any given duration of exposure...say you choose a 1/500th second exposure...the RGB sub-exposures might be 1/3000th of a second long, while the luminance sub-exposure might be 1/1000th long. Amplify, then slightly blur, the color channels...that reduces noise...then integrate with the luminance, which adds back the detail.)
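A quick sanity check that those sub-exposures actually fill the 1/500s frame (one sub-exposure per filter state, as described above):

```python
from fractions import Fraction

rgb_sub = Fraction(1, 3000)  # one sub-exposure each for R, G, and B
lum_sub = Fraction(1, 1000)  # transparent "luminance" sub-exposure

total = 3 * rgb_sub + lum_sub
print(total, total == Fraction(1, 500))  # 1/500 True: the budget works out
```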

No idea if such a sensor will ever materialize, but it's the ultimate implication of electrochromic compounds.

1046
EOS Bodies / Re: DSLR vs Mirrorless :: Evolution of cameras
« on: June 23, 2014, 10:14:53 PM »

The moral of the story? If you're a discourteous, tromping wannabe who has to keep on the move because you're too impatient to set up, sit, and wait for nature's beauty to come to you in comfort...then a tiny, lightweight mirrorless with a tiny, lightweight lens is probably for you. You won't get the same action-grabbing performance, you won't have the same ergonomics (those mirrorless cams and lenses are TI-NY...like, toy tiny, like, barely fits in your hands tiny...like, WTF am I doing with a TOY with that BEAUTIFUL BIG BIRD in front of me?!?!? OMG!), and your IQ won't be as good (or maybe it will if you drop some dough on the FF A7r, but then you'll really be suffering on the AF and ergonomics front).

Anyway...mirrorless has its place. They have their uses and their benefits. But every time I encounter a die-hard mirrorless user, my experiences tend to be similar to the above. Mirrorless users are ALWAYS on the move. Moving moving moving moving. No patience, no time to wait and let things just happen around you. MOVING. I totally understand why they are fanatics about mirrorless...but wow...slow down and enjoy something, enjoy life happening around you every once in a while! :P

makes me smile.....

my birding setup includes a camping chair and a good book :) and while I was reading today Harry came past to check me out...

Nice shot. :)

My birding setup included my ass and the ground. :D And the camera+lens on a tripod, of course. And maybe my phone...on which I have good books, good music, good games, lots of good stuff.

1047
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 23, 2014, 02:38:24 AM »
The fundamental problem arises when you encode the color information. It doesn't seem to matter if it's a chrominance pair, or an RGB triplet, or anything else. Once you encode the color information...take it out of its separate storage values, and bind those discrete red, green, and blue values together into a conjoined value set (i.e. RGB sub-pixel values for a full TIFF pixel, for example)...you lose editing latitude.


That sounds like it's just a problem with the way the software is handling the data. A TIFF is still assigning full RGB values to each pixel, the debayering is done, and I'm assuming the original values are lost. Whereas if you store the information in a pre-debayered state, even with one pixel averaged out (which was next on the to-do list anyway) it shouldn't be any different from reading the original RAW... with the slight exception that adjusting the value after averaging would be different than adjusting the values of two pixels and then averaging them (I assume that when you adjust things in post it's playing with the RAW numbers before debayering).
But that still sounds like a fairly inconsequential concession to make compared to storing data after debayering.

There are some things in a RAW editor that must be done before debayering (i.e. white balance), and some that are usually done after debayering. It's just that some things are more effectively performed on the original digital signal, and others on a full RGB color image. Exposure and white balance are the two main things that benefit most from being processed in the original RAW, where the signal information is pure and untainted by any error introduced by conversion to RGB.

You also have to realize that RGB binds the three color components together....they cannot be shifted around much in an independent way, not like you can with RAW, without introducing artifacts. At least, not with real-time algorithms. There are other tools, like PixInsight (astrophotography editor) that have significantly more powerful, mathematically intense, and often iterative processes that put most of the tools in something like Lightroom to shame. One example is TGVDenoise...which is capable of pretty much obliterating noise without affecting larger scale structures or stars at all. Problem is, at an ideal iteration count (usually around 500) on a full RGB color image, running TGVDenoise can take several minutes to complete. And that is just one small step in processing a whole image.

So sure, with the right tools, you can probably do anything with a 16-bit TIFF. It's just that with the lower-precision but significantly faster algorithms often found in standard tools like Lightroom, you either end up with artifacts, or run into limitations of the data or the algorithm that won't let you push the data around as much.

1048
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 22, 2014, 09:10:41 PM »
Superpixel sounds like what you want. I actually wish that mainstream RAW editors like Lightroom would offer that as an option, honestly. Some people care more about color fidelity and tonal range than resolution, and having LOTS of pixels with superpixel debayering would be a huge bonus for those individuals.

Using it in post sounds nice, but what we're after is a way to save space on the memory card. Could a camera use superpixel debayering as a part of the image capture process and still save the file in RAW format?

Nope. Once you debayer, or do any kind of processing to the data, you're no longer RAW. Canon does offer the sRAW and mRAW settings. Those are what, at best, you could call semi-RAW. They are closer to a JPEG in terms of actual storage format (YCbCr encoding, or luminance + chrominance blue + chrominance red), but everything is stored at 14-bit precision. It's also encoded such that you have full luminance data, basically a luminance value for every single OUTPUT pixel, but the Cb and Cr data is sparse: it's encoded from multiple pixels (I forget if it is a 1x2 short row, or a full 2x2 quad), and that encoded value is stored as a single pair of 14-bit Cb/Cr values for every 2 or 4 luminance pixels (I think exactly how many color pixels are encoded per luminance pixel depends on whether you're in sRAW or mRAW). Now, the luminance is encoded per output pixel. For mRAW, I think that's basically 1/2 the area of the full sensor, and for sRAW it's basically 1/4 the area of the full sensor. So your luminance information is encoded from however many source pixels are necessary to produce the right output pixels. I think 2x2 for sRAW, something along the lines of 1.5x1.5 for mRAW. (There is a spec on the formats somewhere; it's been a long time since I've read it...my description above is not 100% accurate, but that's the general gist...basically, a 4:2:1 or 4:2:2 encoding of the image data.)
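The general gist of that layout, in a few lines: keep luminance at full resolution, store one Cb/Cr pair per group of pixels. To be clear, this is a sketch of the idea of sparse chroma encoding, not Canon's actual bitstream; the 2x2 grouping is just one of the possibilities mentioned above.

```python
import numpy as np

def encode_sparse_ycbcr(y, cb, cr):
    """Full-resolution luma, one averaged Cb/Cr pair per 2x2 quad."""
    h, w = cb.shape
    cb_q = cb.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    cr_q = cr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, cb_q, cr_q

y, cb, cr = (np.random.rand(4, 4) for _ in range(3))
y_out, cb_out, cr_out = encode_sparse_ycbcr(y, cb, cr)
print(y_out.shape, cb_out.shape)  # (4, 4) luma kept, (2, 2) shared chroma
```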

You definitely save space with these formats, but I have experimented with them on multiple occasions, and your editing latitude is nowhere remotely close to a full RAW. You can shift exposure around a moderate amount, but you have limits to how far down you can pull highlights, how far up you can push shadows, how far you can adjust white balance, etc.


Darn.
After reading a bit about the various file formats (TIFF is high fidelity, but both huge and still damaging even in 16bit) It sounds like the best that could be done would be just to "prep" the raw file for superpixel debayering by saving it with the two green pixels averaged already. It would be useless for anything else, but you use 25% less space. Not nothing, but not great.
People just need to get used to handling large files.

The fundamental problem arises when you encode the color information. It doesn't seem to matter if it's a chrominance pair, or an RGB triplet, or anything else. Once you encode the color information...take it out of its separate storage values, and bind those discrete red, green, and blue values together into a conjoined value set (i.e. RGB sub-pixel values for a full TIFF pixel, for example)...you lose editing latitude.

1049
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 22, 2014, 07:35:05 PM »
Superpixel sounds like what you want. I actually wish that mainstream RAW editors like Lightroom would offer that as an option, honestly. Some people care more about color fidelity and tonal range than resolution, and having LOTS of pixels with superpixel debayering would be a huge bonus for those individuals.

Using it in post sounds nice, but what we're after is a way to save space on the memory card. Could a camera use superpixel debayering as a part of the image capture process and still save the file in RAW format?

Nope. Once you debayer, or do any kind of processing to the data, you're no longer RAW. Canon does offer the sRAW and mRAW settings. Those are what, at best, you could call semi-RAW. They are closer to a JPEG in terms of actual storage format (YCbCr encoding, or luminance + chrominance blue + chrominance red), but everything is stored at 14-bit precision. It's also encoded such that you have full luminance data, basically a luminance value for every single OUTPUT pixel, but the Cb and Cr data is sparse: it's encoded from multiple pixels (I forget if it is a 1x2 short row, or a full 2x2 quad), and that encoded value is stored as a single pair of 14-bit Cb/Cr values for every 2 or 4 luminance pixels (I think exactly how many color pixels are encoded per luminance pixel depends on whether you're in sRAW or mRAW). Now, the luminance is encoded per output pixel. For mRAW, I think that's basically 1/2 the area of the full sensor, and for sRAW it's basically 1/4 the area of the full sensor. So your luminance information is encoded from however many source pixels are necessary to produce the right output pixels. I think 2x2 for sRAW, something along the lines of 1.5x1.5 for mRAW. (There is a spec on the formats somewhere; it's been a long time since I've read it...my description above is not 100% accurate, but that's the general gist...basically, a 4:2:1 or 4:2:2 encoding of the image data.)

You definitely save space with these formats, but I have experimented with them on multiple occasions, and your editing latitude is nowhere remotely close to a full RAW. You can shift exposure around a moderate amount, but you have limits to how far down you can pull highlights, how far up you can push shadows, how far you can adjust white balance, etc.

I am guessing it is more than that. Let's say Canon's next move would be to 3.5µm pixels. With a 500nm process, the actual photodiode, assuming a non-shared pixel architecture, would actually be barely 2.5µm in size at most (once you throw wiring and readout logic transistors around it). With a shared pixel architecture you might be able to make it a little larger. On the other hand, if you drop from a 500nm process to a 180nm process, the photodiode could be close to 3.14µm across. (This assumes that wiring and transistors only require a single transistor's width of border around the photodiode...it's usually not quite that simple, at least based on micrograph images of actual sensors and patent diagrams.) With a 90nm process, the photodiode could be up to 3.3µm.

I think the 500nm process is really limiting for Canon now. They COULD do it; there is nothing that prevents them from creating a 3.5µm pixel sensor with 2.5µm photodiodes...but I don't think it would be competitive. The smaller photodiode area wouldn't gather as much light as competitors' sensors that are fabricated with 180nm or 90nm processes, and they would just be a lot noisier.

I am really, truly hoping Canon has moved to a significantly more modern fabrication process with the 7D II sensor. I think that alone would improve things considerably for Canon's IQ.

I'm guessing the only reason you mention 90nm and not 30nm is that in this application the cost/benefit ratio favours slightly larger circuits rather than smaller? (you'd only gain minimal surface area but potentially make production much more difficult)

Well, I mention 180nm and 90nm because I am pretty sure Canon has the fab capability to manufacture transistors that small. In the smallest sensors, transistor sizes are a lot smaller than that...I think they are down to 32nm for the latest stuff, with pixels around 1µm (1000nm) in size. I think that some of Canon's steppers and scanners can handle smaller transistors, 65nm using subwavelength etching, but I don't know if that stuff has been, or can be, used for sensor fabrication. I know for a fact that Canon already uses a 180nm Cu fab process for their smaller sensors, so I know for sure they are capable of that. Their highest-resolution fabs are around 90nm natively, but again, most of what I've read about them indicates IC fabrication...I've never heard of them being used to manufacture sensors (but there honestly isn't that much info about Canon's fabs...or who owns them...)

1050
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 22, 2014, 06:40:49 PM »
As for the double layer of microlenses...sure, you could read a full RGBG 2x2 pixel "quad" and have "full color resolution". Problem is, that LITERALLY halves your luminance spatial resolution...

Thus you start with an 80MP sensor to get a nice 20MP image.

No, that is fundamentally incorrect. You start with a 20mp sensor, which has 40 million PHOTODIODES. The two are not the same. Pixels have photodiodes, but photodiodes are not pixels. Pixels are far more complex than photodiodes. DPAF simply splits the single photodiode for each pixel, and adds the wiring to activate both halves. That's it. It is not the same as increasing the megapixel count of the sensor.

And, once again...I have to point out. There is no such thing as QPAF. The notion that Canon has QPAF is the result of someone seeing something they did not understand. Canon does not have QPAF. Their additional post-DPAF patents do not indicate they have QPAF technology yet...however there have been improvements to DPAF.

Sorry, maybe I should have communicated that better. I wasn't referring to dual pixel technology, just normal sensors at high resolution.
(If superpixel debayering is really as simple as it sounds, dual pixel technology is completely unnecessary in this context.)

Superpixel sounds like what you want. I actually wish that mainstream RAW editors like Lightroom would offer that as an option, honestly. Some people care more about color fidelity and tonal range than resolution, and having LOTS of pixels with superpixel debayering would be a huge bonus for those individuals.
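For anyone curious what superpixel debayering amounts to, it's about the simplest "demosaic" there is: collapse each RGGB quad into one RGB pixel, trading half the linear resolution for zero interpolation. A minimal sketch, assuming an RGGB mosaic order:

```python
import numpy as np

def superpixel_debayer(raw):
    """Collapse each 2x2 RGGB quad into a single RGB pixel: no
    interpolation, half the linear resolution, full color per pixel."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.dstack([r, (g1 + g2) / 2.0, b])

raw = np.random.rand(8, 8)  # stand-in for a tiny Bayer mosaic
print(superpixel_debayer(raw).shape)  # (4, 4, 3): quads became pixels
```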

Well, someday we may have 128mp sensors...but that is REALLY a LONG way off. DPAF technology, or any derivation thereof, isn't going to make that happen any sooner.

http://www.gizmag.com/canon-120-megapixel-cmos-sensor/16128/

I'm still of the opinion that Canon is only limiting resolution because of either the lack of user infrastructure (flash memory needs to drop in price), or the lack of a practical processor to pair with the sensor (problems with size, battery life, heat, etc...).

My bet is they will ramp up resolution as surrounding technology allows.

I am guessing it is more than that. Let's say Canon's next move would be to 3.5µm pixels. With a 500nm process, the actual photodiode, assuming a non-shared pixel architecture, would actually be barely 2.5µm in size at most (once you throw wiring and readout logic transistors around it). With a shared pixel architecture you might be able to make it a little larger. On the other hand, if you drop from a 500nm process to a 180nm process, the photodiode could be close to 3.14µm across. (This assumes that wiring and transistors only require a single transistor's width of border around the photodiode...it's usually not quite that simple, at least based on micrograph images of actual sensors and patent diagrams.) With a 90nm process, the photodiode could be up to 3.3µm.

I think the 500nm process is really limiting for Canon now. They COULD do it; there is nothing that prevents them from creating a 3.5µm pixel sensor with 2.5µm photodiodes...but I don't think it would be competitive. The smaller photodiode area wouldn't gather as much light as competitors' sensors that are fabricated with 180nm or 90nm processes, and they would just be a lot noisier.

I am really, truly hoping Canon has moved to a significantly more modern fabrication process with the 7D II sensor. I think that alone would improve things considerably for Canon's IQ.
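Those photodiode widths follow directly from the single-border model described above (photodiode width = pixel pitch minus one process feature on each side), and the area math shows how much light is at stake:

```python
# Reproducing the figures above: photodiode width = pitch - 2 * feature size.
pitch_um = 3.5
baseline = pitch_um - 2 * 500 / 1000  # 2.5um photodiode on the 500nm process

for process_nm in (500, 180, 90):
    pd = pitch_um - 2 * process_nm / 1000
    print(f"{process_nm}nm: ~{pd:.2f}um photodiode, "
          f"{(pd / baseline) ** 2:.2f}x the 500nm light-gathering area")
```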
