Canon EOS-1D X Mark II Studio Tests

Sporgon

5% of gear used 95% of the time
CR Pro
Nov 11, 2012
4,722
1,542
Yorkshire, England
LetTheRightLensIn said:
Refurb7 said:
LetTheRightLensIn said:
Refurb7 said:
Is that meaningful to anyone? If you're doing a 4EV push (or even 3EV), it means you really messed up the exposure. It means you have no clue about metering and probably suck as a photographer.

Who goes through life doing 4EV (or higher) pushes on their digital files? Anyone? Can you stop or is this a chronic condition?

Actually, your statements show that it is you who knows little about how exposure and sensors work, and who apparently doesn't actually get out and shoot much, or only sticks to highly controlled lighting scenarios.

I shoot weddings, events, portraits, corporate, kids sports (indoor & outdoor) and occasional personal stuff (landscape & street). All on location using whatever each location offers (no studio). All hours of the day or night, with available light and/or flash. I've delivered ~ 30K edited photos to clients every year for the past 15 years or so. I've never done a 4EV push — never needed to. I don't claim to be Ansel Adams or Annie Leibovitz, but I can manage exposure. What do you shoot?

I've done top level NCAA D1 sports, a touch of pro sports, general shooting of all sorts for newspapers, a little bit of wildlife, tons of landscape (including lots of interior forest shooting), etc.

It's a bit surprising that you shoot 30k shots a year and have never once needed to pull shadows up 4 stops. You must be perfect.

I've never done +4 stops either... and I'm definitely not perfect, despite what my wife says :)
 
Mar 2, 2012
3,188
543
dilbert said:
Will the A9 use the sensor in the K1?

If there is an "A9," I kinda doubt it.

There are competing rumors for a camera with that name:
1) super high res studio camera (on the order of 80MP)
2) attempt to break into the 1Dx/D5 market

In either case, I find it unlikely Sony would use a sensor from 2013 in a 2016+ flagship model.
 
StudentOfLight said:
Neutral said:
Of all the 1DXm2 studio shot comparisons on DPR, there is one that is the most interesting.
It is the 1DXm2 vs the Pentax K-1 in standard and pixel shift modes.
It clearly shows where the technology is heading (the evolution trend) and what we could expect from the Sony A9, which is expected to demonstrate the latest sensor technology advances combined with the latest IBIS developments, one of which could be several different pixel shift modes for different applications.
Then the overall game will become very interesting.

As an owner of the 1DX and a7r2, I am going to get both the 1DXm2 and the A9 (or whatever it ends up being called).
DPR's studio comparison test shots show that the 1DXm2 sensor performs better than the 1DX in every respect: low-ISO DR and high-ISO performance.
The 1DXm2 at ISO 6400 and ISO 12800 looks noticeably better than the 1DX files.
Thanks to DPR, I downloaded all the RAW files and compared them in LR, and the differences are obvious.
As for the 1DXm2 vs the a7r2, visually the 1DXm2 files look a bit better despite Rishi's noise measurements showing the opposite. This might be due to the 1DXm2's noise pattern, which seems more uniform and more pleasant to the eye.
Jrista demonstrated FFTs of noise for different sensors some time ago; he might do the same exercise for these two bodies and share the resulting noise distributions.
I should also mention that Rishi was right about the color artefacts in the a7r2 snapshots I posted earlier: it was moiré, not a problem with rendering low-contrast areas.
That became clear after downloading the RAW files from DPR and checking them in LR.
So I have one point of criticism for DPR: the quality of image rendering in their studio shot comparison could be a bit better.
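As an aside, the kind of FFT noise comparison attributed to jrista above can be sketched in a few lines of Python. This is a toy illustration with synthetic data, not anyone's actual workflow: it shows how correlated noise (banding) stands out in the 2-D spectrum where white noise does not.

```python
import numpy as np

def noise_spectrum(patch):
    """Centered 2-D amplitude spectrum of a flat-field noise patch.

    White (spatially uncorrelated) noise gives a roughly flat spectrum;
    banding or correlated read noise shows up as spikes or streaks.
    """
    patch = patch - patch.mean()  # drop the DC term so it doesn't dominate
    return np.fft.fftshift(np.abs(np.fft.fft2(patch)))

# Synthetic example: white noise vs. the same noise with row banding added.
rng = np.random.default_rng(0)
white = rng.normal(0.0, 1.0, (256, 256))
banded = white + 0.5 * np.sin(0.5 * np.arange(256))[:, None]

flat_spec = noise_spectrum(white)
banded_spec = noise_spectrum(banded)
# banded_spec concentrates energy in a spike at the banding frequency,
# while flat_spec spreads energy evenly. Comparing two cameras'
# dark-frame spectra this way shows how "uniform" their noise is.
```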
Pixel shift is a multiple exposure which you are comparing to a single shot. Perhaps a more comparable approach would be to auto-bracket (or use multiple exposure mode) with the one camera vs pixel-shift with the other.

You are correct: conceptually, pixel shift is a multiple exposure combined into one shot, but it is also a much more advanced way of doing ME.
First of all, each resulting pixel is full RGB, which totally eliminates moiré.
The resulting resolution is also the full pixel count, as there is no need to interpolate between pixels.
Total resolution is the same as a B&W sensor with the same pixel count.
If you compare the Pentax K-1 in PS mode with the 645Z, you can see that the K-1's resulting image quality is better.
There are several ways of doing pixel shift; it all depends on what is required at the end.
The K-1's method is just one of them.

And finally, this is all done instantly rather than manually.
Several years back I described how this could be done with the 1DX to get one RAW file with significantly improved SNR, but that was not very convenient and also required a tripod. I used it from time to time until I got the a7R and then the a7r2.
Now, with the a7r2 and fast primes, I can shoot at 1/10 with a 35mm f/1.4 and get excellent results in very dark conditions; no need for the 1DX for still shots.

What the Pentax K-1 does is much better: you can shoot handheld with PS at one press of a button.

The other interesting thing is that although they are using the almost 3-year-old a7R Sony sensor, they are getting better results than Sony itself.
Compare the 1DXm2 and K-1 in standard shutter mode: the K-1 gives almost the same results, just a bit behind the 1DXm2.
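The "full RGB per pixel, no interpolation" point is easy to verify with a toy simulation. Everything below is hypothetical (a simple RGGB mosaic and a one-pixel-square shift sequence, not Pentax's actual firmware), but it shows why four shifted captures recover exact color at every photosite:

```python
import numpy as np

# Hypothetical RGGB colour-filter array: channel sampled at each photosite
# parity (row % 2, col % 2). 0 = R, 1 = G, 2 = B.
CFA = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 2}

# Cumulative sensor offsets (dy, dx) of a one-pixel square path:
# start, after moving right, after moving down, after moving left.
OFFSETS = [(0, 0), (0, 1), (1, 1), (1, 0)]

def capture(scene, dy, dx):
    """Simulate one noiseless Bayer exposure with the sensor shifted by (dy, dx)."""
    h, w, _ = scene.shape
    raw = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            ch = CFA[((y + dy) % 2, (x + dx) % 2)]
            raw[y, x] = scene[y, x, ch]
    return raw

def combine(frames):
    """Merge the four shifted captures into a full-RGB image.

    Across the four offsets every scene pixel is sampled through R once,
    G twice and B once, so each output pixel is built from measured
    values only; no demosaicing (interpolation) is needed.
    """
    h, w = frames[0].shape
    rgb = np.zeros((h, w, 3))
    g_count = np.zeros((h, w))
    for raw, (dy, dx) in zip(frames, OFFSETS):
        for y in range(h):
            for x in range(w):
                ch = CFA[((y + dy) % 2, (x + dx) % 2)]
                if ch == 1:
                    rgb[y, x, 1] += raw[y, x]
                    g_count[y, x] += 1
                else:
                    rgb[y, x, ch] = raw[y, x]
    rgb[:, :, 1] /= g_count  # average the two green samples
    return rgb

scene = np.random.default_rng(1).random((8, 8, 3))
frames = [capture(scene, dy, dx) for dy, dx in OFFSETS]
merged = combine(frames)
assert np.allclose(merged, scene)  # exact recovery: no interpolation needed
```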
 
3kramd5 said:
dilbert said:
Will the A9 use the sensor in the K1?

If there is an "A9," I kinda doubt it.

There are competing rumors for a camera with that name:
1) super high res studio camera (on the order of 80MP)
2) attempt to break into the 1Dx/D5 market

In either case, I find it unlikely Sony would use a sensor from 2013 in a 2016+ flagship model.

Sony is definitely not going to use the 3-year-old a7R sensor in a new pro-class camera.
I believe they are working on a sensor that should significantly outperform anything existing on the market in the pro segment. That is the source of the delays with this camera, and the a7rm2 was an intermediate solution to keep customers waiting for Sony's new pro body.
They are ambitious, and I think they clearly understand that without a breakthrough in sensor tech (let alone the camera itself) they are unlikely to bite a big piece out of the Canon/Nikon cake.
 
dilbert said:
3kramd5 said:
dilbert said:
Will the A9 use the sensor in the K1?

If there is an "A9," I kinda doubt it.

There are competing rumors for a camera with that name:
1) super high res studio camera (on the order of 80MP)
2) attempt to break into the 1Dx/D5 market

In either case, I find it unlikely Sony would use a sensor from 2013 in a 2016+ flagship model.

Do we know that it is a 2013 sensor?

It definitely seems better than what is in the A7R, D800 & D810.

The only full frame 36.4MP sensor listed on their products page is IMX094. If they designed a special sensor for Ricoh that matches exactly the form factor and resolution of the A7R's, they're keeping it a secret.

http://www.sony.net/Products/SC-HP/IS/sensor2/products/

Note the 42MP sensor is not listed; I assume that's because it's not currently on the market (i.e. available outside of Sony).
 
dilbert said:
Neutral said:
...
The other interesting thing is that although they are using the almost 3-year-old a7R Sony sensor, they are getting better results than Sony itself.
...

Does anyone know that for a fact or is everyone assuming it because the A7R has 36MP and the K-1 has 36MP?

There was a lot of talk almost a year back about whether Sony would give them the new 42MP sensor from the a7r2 or the old 36MP one from the a7R.
It seems Pentax did the same thing Nikon has been doing with Sony sensors.
Do you believe Sony would develop a new sensor for Pentax?
They just need to get as much profit as they can from what they already have.
Maybe some minor adjustments were made that don't much affect the manufacturing process, but even that is very unlikely.
 

StudentOfLight

I'm on a life-long journey of self-discovery
Nov 2, 2013
1,442
5
41
Cape Town
Neutral said:
StudentOfLight said:
Neutral said:
Of all the 1DXm2 studio shot comparisons on DPR, there is one that is the most interesting.
It is the 1DXm2 vs the Pentax K-1 in standard and pixel shift modes.
It clearly shows where the technology is heading (the evolution trend) and what we could expect from the Sony A9, which is expected to demonstrate the latest sensor technology advances combined with the latest IBIS developments, one of which could be several different pixel shift modes for different applications.
Then the overall game will become very interesting.

As an owner of the 1DX and a7r2, I am going to get both the 1DXm2 and the A9 (or whatever it ends up being called).
DPR's studio comparison test shots show that the 1DXm2 sensor performs better than the 1DX in every respect: low-ISO DR and high-ISO performance.
The 1DXm2 at ISO 6400 and ISO 12800 looks noticeably better than the 1DX files.
Thanks to DPR, I downloaded all the RAW files and compared them in LR, and the differences are obvious.
As for the 1DXm2 vs the a7r2, visually the 1DXm2 files look a bit better despite Rishi's noise measurements showing the opposite. This might be due to the 1DXm2's noise pattern, which seems more uniform and more pleasant to the eye.
Jrista demonstrated FFTs of noise for different sensors some time ago; he might do the same exercise for these two bodies and share the resulting noise distributions.
I should also mention that Rishi was right about the color artefacts in the a7r2 snapshots I posted earlier: it was moiré, not a problem with rendering low-contrast areas.
That became clear after downloading the RAW files from DPR and checking them in LR.
So I have one point of criticism for DPR: the quality of image rendering in their studio shot comparison could be a bit better.
Pixel shift is a multiple exposure which you are comparing to a single shot. Perhaps a more comparable approach would be to auto-bracket (or use multiple exposure mode) with the one camera vs pixel-shift with the other.

You are correct: conceptually, pixel shift is a multiple exposure combined into one shot, but it is also a much more advanced way of doing ME.
First of all, each resulting pixel is full RGB, which totally eliminates moiré.
The resulting resolution is also the full pixel count, as there is no need to interpolate between pixels.
Total resolution is the same as a B&W sensor with the same pixel count.
If you compare the Pentax K-1 in PS mode with the 645Z, you can see that the K-1's resulting image quality is better.
There are several ways of doing pixel shift; it all depends on what is required at the end.
The K-1's method is just one of them.

And finally, this is all done instantly rather than manually.
Several years back I described how this could be done with the 1DX to get one RAW file with significantly improved SNR, but that was not very convenient and also required a tripod. I used it from time to time until I got the a7R and then the a7r2.
Now, with the a7r2 and fast primes, I can shoot at 1/10 with a 35mm f/1.4 and get excellent results in very dark conditions; no need for the 1DX for still shots.

What the Pentax K-1 does is much better: you can shoot handheld with PS at one press of a button.

The other interesting thing is that although they are using the almost 3-year-old a7R Sony sensor, they are getting better results than Sony itself.
Compare the 1DXm2 and K-1 in standard shutter mode: the K-1 gives almost the same results, just a bit behind the 1DXm2.
I agree.
 
dilbert said:
Whilst the K-1 can achieve amazing results at ISO 100 & 200, in terms of motion, to get the equivalent of 1/250 you want each frame shot at 1/1000. To maintain light levels, you would need to raise the ISO by two stops.

When using pixel shift on the K-1, it would make more sense to compare 1/250 @ ISO 100 on the 1DX with 1/1000 @ ISO 400 on the K-1. Of course that's only for shots where there is motion... cars, people, planes, animals, waves, wind...

So if you have pixel shift set and, say, an exposure time of 1/250, does it take four 1/250 captures, as opposed to a total exposure time of 1/250?

If so (i.e. the shifted/composite capture represents a total of 4/250 exposure time), then to compare sensor performance given equal light input (if that kind of thing tickles your fancy), you either have to stop down the lens on the Pentax by two stops, or set each individual capture on the Pentax to 25% of the exposure time of the single shot from the compared camera. Not just for motion; for anything.
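The arithmetic behind that equal-light comparison, using the example numbers from this exchange (the variable names are just for illustration):

```python
import math

# A 4-frame pixel-shift composite collects 4x the light of one frame, so
# for an equal-light comparison each sub-exposure should receive 1/4 of
# the single-shot exposure, i.e. two stops less, via shutter or aperture.
single_shot_time = 1 / 250          # seconds: the reference single exposure
frames = 4                          # captures in the pixel-shift composite

equal_light_sub_exposure = single_shot_time / frames  # 1/1000 s per frame
stops_difference = math.log2(frames)                  # 2.0 stops
```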
 
Feb 8, 2013
1,843
0
dilbert said:
Foveon gives you RGB for each pixel by vertically stacking the photodiodes, i.e. no Bayer matrix.

http://www.dpreview.com/articles/3118159533/sigma-unveils-radical-dp2-quattro-with-re-thought-19-6mp-foveon-sensor

Technically it's not full RGB per pixel anymore.
It might have been in earlier models, but right now the red and green channels run at 1/4 the resolution of the top blue channel; otherwise they can't get much further than ISO 800.
I have to wonder if Foveon is ever going to work well in crop-sensor formats. If the loss of signal strength on red and green is inherent to the design, then it should probably be left to 35mm sensors and up.
 
Jul 21, 2010
31,234
13,096
While we're off-topic, I haven't looked in detail at Pentax's pixel shift technology, but why four images and not just three? You should only need to image each position once for each color in the CFA. In Zeiss' implementation (which is >15 years old), only three images are needed to eliminate the need for color interpolation (although Zeiss actually moved the sensor relative to the CFA).
 
neuroanatomist said:
While we're off-topic, I haven't looked in detail at Pentax's pixel shift technology, but why four images and not just three? You should only need to image each position once for each color in the CFA. In Zeiss' implementation (which is >15 years old), only three images are needed to eliminate the need for color interpolation (although Zeiss actually moved the sensor relative to the CFA).

Maybe for the same reason there are twice as many greens as reds or blues in a Bayer pattern? Just a thought.

Plus, the more frames you have for superresolution algorithms, the better your noise reduction, so there is an advantage beyond colors per photo site.
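That stacking benefit is easy to demonstrate numerically; averaging N frames with independent noise cuts the noise standard deviation by about sqrt(N). This is a generic simulation with made-up numbers, not a model of any particular camera:

```python
import numpy as np

rng = np.random.default_rng(42)
scene = np.full((64, 64), 100.0)    # flat grey target (arbitrary units)
sigma = 10.0                        # per-frame noise standard deviation

def stacked_noise(n_frames):
    """Residual noise std. dev. after averaging n_frames noisy captures."""
    frames = scene + rng.normal(0.0, sigma, (n_frames, 64, 64))
    return frames.mean(axis=0).std()

noise_1 = stacked_noise(1)  # roughly sigma
noise_4 = stacked_noise(4)  # roughly sigma / 2: a 4-frame stack halves
                            # the noise on top of the full-colour sampling
```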
 
dilbert said:
3kramd5 said:
neuroanatomist said:
While we're off-topic, I haven't looked in detail at Pentax's pixel shift technology, but why four images and not just three? You should only need to image each position once for each color in the CFA. In Zeiss' implementation (which is >15 years old), only three images are needed to eliminate the need for color interpolation (although Zeiss actually moved the sensor relative to the CFA).

Maybe for the same reason there are twice as many greens as reds or blues in a Bayer pattern? Just a thought.

Plus, the more frames you have for superresolution algorithms, the better your noise reduction, so there is an advantage beyond colors per photo site.

The sensor can be moved in only one direction at a time, left/right or up/down; no diagonal movements. So it is necessary (for example) to go right, down, left, up. For some pixels the sequence will be RGBG, but for others it will be GBGR. It's easier to capture the light at all 4 locations than to selectively throw away the second green.

Ah, yes that makes sense.
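The right/down/left/up sequence can be enumerated in a few lines. The RGGB layout here is an assumption for illustration, but it reproduces the RGBG-vs-GBGR difference between photosites described above:

```python
# Colour at each photosite parity (row % 2, col % 2) for a hypothetical
# RGGB filter array.
CFA = [["R", "G"],
       ["G", "B"]]

# Cumulative sensor positions along the one-pixel square path:
# start, after moving right, after moving down, after moving left
# (the final "up" just returns to the start).
PATH = [(0, 0), (0, 1), (1, 1), (1, 0)]

# Which colour each of the four photosite parities sees at each position.
sequences = {}
for row in (0, 1):
    for col in (0, 1):
        seq = "".join(CFA[(row + dy) % 2][(col + dx) % 2] for dy, dx in PATH)
        sequences[(row, col)] = seq
        print(f"photosite parity ({row},{col}) sees: {seq}")
# Every photosite collects R once, G twice and B once, but in a different
# order depending on where it sits in the 2x2 CFA tile.
```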
 
dilbert said:
3kramd5 said:
neuroanatomist said:
While we're off-topic, I haven't looked in detail at Pentax's pixel shift technology, but why four images and not just three? You should only need to image each position once for each color in the CFA. In Zeiss' implementation (which is >15 years old), only three images are needed to eliminate the need for color interpolation (although Zeiss actually moved the sensor relative to the CFA).

Maybe for the same reason there are twice as many greens as reds or blues in a Bayer pattern? Just a thought.

Plus, the more frames you have for superresolution algorithms, the better your noise reduction, so there is an advantage beyond colors per photo site.

The sensor can be moved in only one direction at a time, left/right or up/down; no diagonal movements. So it is necessary (for example) to go right, down, left, up. For some pixels the sequence will be RGBG, but for others it will be GBGR. It's easier to capture the light at all 4 locations than to selectively throw away the second green.

I do not think that is a correct statement.
With such limitations, IBIS would not work correctly.
For IBIS to work correctly, the sensor needs to be able to move at any angle/direction from the central point.
The movement direction is the vector Vd resulting from the sum of the two sensor movement vectors, Vx and Vy.
Vd's angle depends on the magnitudes and directions of Vx and Vy.
Simple school math.

Pixel shift is just a side application of IBIS technology and could be used in different ways.
And pixel shift is just a temporary workaround for Bayer sensor limitations.
For future sensors where each pixel is a full RGB pixel, it will not be required any more.
The same goes for a Bayer sensor with electrically switched color layers (per pixel or for the whole sensor).
I think I saw a patent for that from Sony a couple of years back which allows doing this super fast.
Maybe we will see it in the A9.
 
Neutral said:
[quote author=the guy who thinks lenses are cameras and the 1D C isn't a dSLR]
The sensor can be moved in only one direction at a time, left/right or up/down; no diagonal movements. So it is necessary (for example) to go right, down, left, up. For some pixels the sequence will be RGBG, but for others it will be GBGR. It's easier to capture the light at all 4 locations than to selectively throw away the second green.

I do not think that is a correct statement.
With such limitations, IBIS would not work correctly.
For IBIS to work correctly, the sensor needs to be able to move at any angle/direction from the central point.
The movement direction is the vector Vd resulting from the sum of the two sensor movement vectors, Vx and Vy.
Vd's angle depends on the magnitudes and directions of Vx and Vy.
Simple school math.
[/quote]

Well, given the source of the statement, its incorrectness shouldn't surprise anyone.

Simple school math is beyond some people.
 