July 22, 2014, 07:49:07 PM

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - jrista

1036
Software & Accessories / Re: Adobe Lightroom for iPad Coming Soon
« on: January 24, 2014, 10:37:34 PM »
I'm a huge fan of Lightroom, I think it's fantastic. I would hate to see it go solely subscription-based, I was waiting until LR6 to upgrade from 4...... hopefully adobe still has those like me in mind  :-\

Might want to prepare yourself...by the time LR6 rolls around, I suspect it will be fully subscription based. :(

1037
EOS Bodies / Re: Will Canon Answer the D4s? [CR2]
« on: January 24, 2014, 09:49:33 PM »

Quote from: jrista

This is the most advanced imaging device I know of:

http://spectrum.ieee.org/tech-talk/at-work/test-and-measurement/superconducting-video-camera-sees-the-universe-in-living-color


Thanks, I liked reading this.  A bit troubling that the galaxy on the left does not have a remotely resolved "nucleus" when compared to the Hubble image, but perhaps some of that is due to an unfiltered capture of a broader spectrum of light (where the Hubble's is meant to only portray visible light)??


Remember, the test sensor only has 2024 pixels. The Hubble image is taken with a 15 megapixel sensor. BIIIG Difference. Once this kind of technology finds its way into megapixel sensors, you'll be able to tell the difference.

1038
EOS Bodies / Re: 7D Mark II on Cameraegg
« on: January 24, 2014, 08:02:30 AM »
...
I upgraded from my xs to the 60D primarily because of it's video function.  I didn't want to buy a video camera that cost $300  ...
...
For me... I can't have a single body in my house (only body in my house) that doesn't have video... because I do need it... not a ton... but I do need it to capture my 5 month old and my 125 month old.  So if Canon were to drop video entirely... I would probably have to jump ship in 5 years when I out grow my mkiii... I know it isn't likely that they will drop it... but it is what it is.

This is exactly why I would like all DSLRs to come in a "basic" stills-only version without video capturing capability [hardware disabled, easy to do]. And those who really want or "need" stills and video in one single device should be offered a video-enabled version of those cameras ... of course at a surcharge. Maybe 10% more, maybe 20% more or any other reasonable number that would still make "one dual-use camera" a better deal than "two single-use cameras" (or rather camera systems).

For every other product on earth the principle is clear: more features and/or more convenience = higher price.   
We can order cars in a basic, "no frills version" or "fully loaded". "2 wheel drive" or "all-wheel drive". Stronger engine, more "extras" ... no problem. But ... not for free. 
You want it  ... you select it ... you pay for it ... you get it.

Only video-users clamor for their extra video-capability and single-device convenience in EVERY camera ... and they DEMAND it "FOR FREE".

Now, as that demand shifts to ever more advanced video capturing (4k, 8k, 60fps, 120fps, 1000fps?) ... it becomes very evident that video capability in DSLRs does NOT come for free, but causes rather significant extra cost: extra R&D effort, more CPU power, stronger hardware, larger and faster storage media, additional firmware and software ... all of this has to be designed, developed, tested, manufactured, implemented and serviced. It requires extra capital and extra labor from (highly skilled) humans, who certainly do not work "for free". But the extra feature wanted only by a minority of buyers is supposed to be "free of charge", "all inclusive" ... paid for by the majority of stills shooters, who neither need nor want video capability in their stills cameras, but are not given a choice. Unlike cars, we only get our cameras "fully video loaded", and have to swallow the price for it.

This is the single reason why the topic of "video-capable DSLRs" is "emotional". Because the way (all) camera makers are currently dealing with the market demand for "dual-use cameras" is very UNFAIR towards those wanting cameras that are fully optimized for the one single use scenario that DSLRs were really designed for: capturing still images.

The argument will be less pronounced when the shift to mirrorless cameras has happened, since these cameras are video-enabled by their very design [for the viewfinder & backscreen image] without mechanical mirrors blocking the light path. Nevertheless, implementing video CAPTURE and video OUTPUT causes extra cost and is an extra feature and extra convenience. It should therefore come as a choice for those who want or need it AND ARE WILLING TO PAY at least a modest surcharge for it.  :)

I feel as though I've had this conversation before.  Video is software... so it doesn't really cost much to implement into a body with sufficient capabilities.  So you create a body capable of capturing 8 fps at 18mp... and the video capabilities are already built in.

Now... if the manufacturers HAVE to improve the specs to achieve high end video... then there's an argument... but having a stills-only camera vs. the competition (not the other cameras in the lineup) will severely hurt the stills-only market.

If Canon was the only one making cameras... sure... but they aren't.  Sony will include it... nikon, etc... and if the price is the same and offers what many would consider a significant upgrade of capability, then the stills only will lose. 

I don't think you can make a camera that doesn't provide some level of video.  When I got my XS (I was stupid), I didn't realize it didn't do video.  My old fuji 3mp camera did crappy video... so I was surprised my new slr didn't.  I made do... but if I had known that the nikon d3000 did video and it was the same price as the XS kit I got... then I probably would be at nikon rumors right now.

We need a poll... but not of the people at canon rumors... but of just everyday soccer mom types who drive the entry level market.  And

Hmm, I think you're vastly oversimplifying the hardware requirements of video. Sure, it is POSSIBLE to do video as a "software only" thing. I do not believe that is actually how it is done these days, though. First off, if video were purely software based, there would be a hard limit at how fast modern high resolution sensors can be read out. This limit is on the FRONT END of the image processing pipeline...there is only so much bandwidth in a DIGIC 5+ chip, for example...and we know that in the 1D X, say, the total bandwidth is about 500 MB/s (250 MB/s per DIGIC 5+). That bandwidth ONLY allows for 14fps readout at full resolution.
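The bandwidth point can be sanity-checked with back-of-envelope arithmetic (a sketch only; 18.1 MP, 14-bit RAW, and 14 fps are the published 1D X specs, and the real pipeline carries framing and metadata overhead, so the numbers are approximate):

```python
# Rough readout-throughput estimate for a 1D X-class sensor.
# Published specs: ~18.1 MP, 14-bit samples, 14 fps burst.
megapixels = 18.1e6
bits_per_sample = 14
fps = 14

bytes_per_frame = megapixels * bits_per_sample / 8   # ~31.7 MB of raw data
throughput_mb_s = bytes_per_frame * fps / 1e6        # ~443 MB/s

print(f"{throughput_mb_s:.0f} MB/s")
```

That lands in the same ballpark as the ~500 MB/s total quoted above once overhead is added, which is the point: full-resolution readout alone nearly saturates the pipeline at 14 fps.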

So, given that we can read the full sensor out at 14fps, something else must be going on at the hardware level during readout to support 30fps and 60fps readout rates. There are a few options here. First is line skipping, or even row and column skipping, where only certain pixels of each row and/or column are read out. This reduces the time necessary to perform a read, therefore supporting faster readout. Line skipping results in pretty terrible quality, especially on the aliasing/moire front...this is what the D800 does, and moire is pretty bad on the D800.

The more effective approach to supporting a high speed readout without losing quality is some kind of hardware-level pixel binning. By blending pixel data together at the point of readout, you reduce the data throughput from sensor to DSP, potentially considerably if you bin say an 18mp sensor down to the 2mp necessary for 1080p video. This is what Canon does...they bin the pixel readout, and perform some form of 4:x:y pulldown and processing.
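As a rough sketch of why binning at readout helps (hypothetical numbers: a 5184x3456 18 MP sensor feeding a 1920x1080 stream; Canon's actual readout scheme is not public):

```python
# Data-rate reduction from feeding 1080p out of an 18 MP mosaic.
sensor_w, sensor_h = 5184, 3456   # 18 MP, 7D-class sensor
video_w, video_h = 1920, 1080     # one 1080p frame

sensor_px = sensor_w * sensor_h
video_px = video_w * video_h
reduction = sensor_px / video_px  # ~8.6x fewer samples leave the sensor

print(f"{reduction:.1f}x")
```

If that merging happens on-chip, the DSP sees roughly an eighth of the full-resolution data per frame, which is what makes 30/60 fps readout plausible within the same bandwidth budget. Note the per-axis factors (5184/1920 = 2.7, 3456/1080 = 3.2) are not integers, so real implementations would combine binning with some cropping or scaling.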

Regardless of whether line skipping or binning is employed, the HARDWARE has to support it. It isn't a purely firmware based process. Video requires hardware changes to be effective and efficient. So there IS a drag on R&D, and that does ultimately impact the number and quality of the improvements for stills photography.

1039
EOS Bodies / Re: Will Canon ditch the AA Filter?
« on: January 24, 2014, 04:53:10 AM »
Hmm. I can't imagine that such a thing is a huge problem. It's not all that different from Sony's "Emerald Green" CFA that they introduced many years ago (they called it RGBE). Their "Emerald" had more blue in it than the standard green. Based on all the sample images at the time, it actually produced better color accuracy...but it would be the exact same situation as what you're describing with the 7D.

When you have two similarish but different greens a typical de-bayer will get tricked and create maze patterns.

Quote
I also can't imagine that it would cause a loss in resolution. I mean, the crux of any bayer demosaicing algorithm is interpolating the intersections between every (overlapping) set of 2x2 pixels. Because there is reuse of sensor pixels for multiple output pixels, there is an inherent blur radius. But it is extremely small, and it wouldn't grow or shrink if one of the pixel colors changed. You would still be interpolating the same basic amount of information within the same radius. I remember there being a small improvement in resolution with my 7D between LR 3 and 4, and things seem a bit crisper again moving from LR 4 to 5. I suspect any supposed loss in resolution with the 7D was due to the novelty of Adobe's implementation of support for the 7D, not anything related to having two slightly different colors for the green pixels. The quality with which LR renders my 7D images only seems to get better and better with time and each subsequent version, so as Adobe optimizes their demosaicing implementation, any inherent error is clearly diminishing. BTW, there is no way anything Adobe has ever done that could possibly "knock off 1-2mp worth of resolution" from the 7D.

Well, if you compared the first ACR that handled the 7D, which also mazed a lot, with the version that came out right after, where they fixed the 7D mazing, there was a subtle lowering of micro-contrast. My 1-2MP was just a wild guesstimate. (1MP off of 18MP really is not much when you think about it, especially for what it would mean as a linear resolution change, although maybe calling it 1-2MP was overdoing it.)

Ah, I see what you are saying. Well yes, you do need to properly integrate the specific bayer pattern. If there are two shades of green, that does need to be taken into account. It won't reduce resolution, though. As I said, spatially it is the exact same source resolution; use of an alternative green is not going to reduce microcontrast or anything like that.
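To make the "same spatial grid" point concrete, here is a minimal bilinear interpolation of the green plane from an RGGB mosaic (a toy sketch, not any vendor's actual algorithm; a second green dye would simply be normalized to a common response before this step, leaving the sample positions, and thus the resolution, unchanged):

```python
import numpy as np

def green_plane(raw):
    """Bilinear green interpolation for an RGGB mosaic (interior pixels only)."""
    h, w = raw.shape
    green = np.zeros((h, w))
    # Green photosites in RGGB: (even row, odd col) and (odd row, even col).
    gmask = np.zeros((h, w), dtype=bool)
    gmask[0::2, 1::2] = True
    gmask[1::2, 0::2] = True
    green[gmask] = raw[gmask]
    # At red/blue sites, average the four green neighbours.
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not gmask[y, x]:
                green[y, x] = (raw[y-1, x] + raw[y+1, x]
                               + raw[y, x-1] + raw[y, x+1]) / 4.0
    return green

flat = np.full((6, 6), 100.0)   # uniform grey scene
g = green_plane(flat)
print(g[2, 2])                  # interpolated red site on a flat field
```

On a flat field every interpolated site recovers the scene value exactly; swapping in a second green shade changes the per-channel scaling, not the geometry of this interpolation.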

Quote
DPP will produce a fairly jagged result, ACR/LR produce a very clean result. Based on the sample below, ACR is actually sharper and supports even finer detail resolution:

It depends, I find ACR can get pretty jaggy in some cases, at least with non 7D cameras. I do find I can pull a bit more finer detail with it than DPP though.

I'd like to see some examples of ACR being jaggy with a camera that has an AA filter. If your camera lacks an AA filter, then certainly...but a sensor with a proper OLPF produces remarkably sharp results with LR's demosaicing algorithm. Sharper than DPP and most of the demosaicing options in RawTherapee. Some demosaicing options in RawTherapee operate more on a "super-pixel" type demosaicing algorithm, so you lose output resolution, but you gain color fidelity and lower noise...kind of hard to compare LR with those options.

1040
Animal Kingdom / Re: Show your Bird Portraits
« on: January 24, 2014, 01:58:38 AM »
Thanks Skatol,

That seems to be the consensus from a couple others close to home.  I'm thrilled with the quick change aspect now that a tedious two days of work is over.  Have added screw clamps to three receptacles so branches stay in place and removed the excess height of the vertical tube.  My projects take form based on scrap I have (nothing purchased) and my welding experience, but I'm sure the same idea could be accomplished by a handyman another way.  Sure has made me more enthusiastic - all thanks to CR folk! ;)

Here's one of the side props and a shot of the clamp.

Jack

Sweet stuff, man! I love those little branch holders. That welding skill of yours is incredibly handy! :)

One recommendation...try creating some of those holders such that you can hold branches parallel to the ground. Well, not exactly parallel...pointed upwards by a small angle, maybe 10-20°. The general idea with setups is that you create the perch the birds will rest on, then you surround that one perch with more stuff to create an interesting depth of field.

Let's take a pine branch with multiple fronds, held maybe 12° above parallel to the ground, a few feet up. You then either place that maybe 10-15 feet or so in front of actual pine trees for a nice blurry pine-green backdrop, or if you don't have that option, you could use your vertical branch holders and a bunch more pine branches to create that backdrop. Array them out in a cone behind your branch, at a great enough distance to appropriately blur into creamy pine-green bokeh...and there you have it. Perfect chickadee perch! :D

Oh, and the real trick is to get the chickadees to land just on your perch. There are a few things you need to do in order to encourage that. First, close off or take indoors ALL your other feeders. Then, set up some small open tray feeders a foot or so below your horizontal branch(es). You might want to set out a few feeders with different types of seed underneath multiple branches of differing kinds, with different background setups, to attract a wider variety of birds to a greater variety of perch setups. Give the small trays covers with a hole cut into them just big enough that only one bird at a time can find and pull out a seed...that forces the birds to queue up on the branches, waiting their turn. THOSE are your moments, when the birds are all queued up.

(It is actually quite amazing, birds are EXCELLENT at waiting patiently for their turn and sharing! Of course, every so often a squabble erupts, but then they go right back to patiently waiting on your setup branches until they have their turn at the feeder trays. :D)

1041
Photography Technique / Re: How To Remove Weird Colours
« on: January 23, 2014, 03:06:58 PM »
Glad to be of service. :D Most people are surprised when they discover how powerful LR's color channel editing is...so you aren't alone. You're just part of the masses. ;P

1042
EOS Bodies / Re: 7D Mark II on Cameraegg
« on: January 23, 2014, 10:25:15 AM »
There have been several mentions of " new and innovative video features" on the 7d II. I would like to see a better in camera mic, focus peaking, bigger LCD screen, and possibly an improved upon variation of the dual pixel af found in the 70d.

Other than that I would like the 7d II to have a 24MP sensor , dual digic 5+ processors, and 10 fps burst rate.
Hope they announce it soon! :)
+1
Although I also hope for a better in-camera mic, you will always be better off with an external mic. An external mic is sort of like changing lenses.... you can go for directional mics, omni mics, and remote mics... far more versatile than a built-in mic, but not nearly as convenient.

You can also spend money, and I mean SPEND MONEY, on external mics like you can on lenses. There is a whole range of quality, some are ok, some are good, some are absolutely phenomenal. You get what you pay for there, but pretty much any external mic is better than the built-in one. Kind of like replacing that 18-55mm toy with a "real lens" first thing after you buy your camera. ;)

1043
Software & Accessories / Re: Screen gamut
« on: January 23, 2014, 09:40:32 AM »
Since we're all over the color management discussion, I'm wondering if anyone else has had the experience I've had since upgrading their color managed gear.  I'm running a full 10/30-bit set up with color managed everything including lighting - just about everything except the paint on my walls which are white, but not neutral gray.  Since doing this, I've discovered that I'm much more sensitive to my white balance than ever before, to the point that I'm thinking of using my ColorChecker Passport or other reference cards for as many of my shoots as I can.  I'm also noticing much smaller variations in my prints than ever before.  It's great and annoying all at the same time :)

*Me Too*

I have developed the "problem" of being able to always see the color balance of lighting. Light used to just be "white"...it didn't matter if it was the deep orange 2700 K of tungsten or the 3300 K of halogen or the 5500 K of sunlight or the 6500 K of daylight. Now, I really hate standard tungsten light...it is just WAAAAY TOOOO OOOOOORAAAAANNNGE. I prefer at least 3300-3500 K lighting for my upstairs rooms, and I've moved to 5000 K white light for my downstairs living area. Even if I turn off the downstairs lighting, go up the stairs in the dark, and turn on the upstairs lighting, I can still always tell that it is much more yellow-orange than the downstairs light, even after having hours to "adjust".

Because I obsessed over tuning my computer's calibration, I somehow seem to have disabled the bit in my brain that automatically adjusts white point for me mentally. It's interesting in one sense...I can have different lighting of different temperatures all throughout the house, and I can recognize each and every one as distinct....even without needing to compare them simultaneously. In the other sense...it's kind of annoying, as I really hate the orange light that illuminates most people's homes now....  :-\

1044
Photography Technique / Re: How To Remove Weird Colours
« on: January 23, 2014, 01:38:26 AM »
@ distant.star: agree on the mike, I have got plenty of other pics with his face visible, but happened to pick a bad example for this thread.

@ fuhrtographer: I did try with the eye dropper, first thing, but it kept telling me that this was not a proper place to obtain the reading for the white balance from. I went back and tried some more, and after ten tries I hit a spot where LR no longer complained. Here's the result, with just a touch of dimming the lights and reducing the overall exposure.

@ Sella174: exif-data are the following: f/1.2, 1/80 s, ISO 1250

[edit:] Thanks for the tips. This looks much better now, better than everything but the b/w one, actually :)

You need to do direct color channel editing to fix this. I assume you have Lightroom. Lightroom has extensive color editing capabilities. You should be able to pretty narrowly define just the range of purples that you want to adjust, and tweak them.

1045
Software & Accessories / Re: Screen gamut
« on: January 23, 2014, 01:24:38 AM »
jrista,

You mentioned the NEC PA302W-BK-SV monitor.  When do you expect to purchase it?  I believe that you will find it an excellent piece of equipment.

I have had mine for five or six days, and it is tremendous.

Well, congrats on the purchase! :D I'm a bit envious...I've read a LOT about what monitor to replace my current one with (an Apple Cinema Display 30"...one of the older ones, with the single-piece injection-molded aluminum body). The best on the market are of course the Eizos, which have built-in calibration so you don't even need a device...and the NECs. I'd certainly prefer a 16-bit hardware LUT and built-in automatic calibration, but I can't justify spending three grand on such a screen, not for what I do. From all the raving reviews, the NEC sounds quite amazing. I'm sure you'll love your screen for a long time to come.

For what it's worth, in the context of this thread, what I see on this monitor is a vast improvement over a more generic 23-inch monitor I was using previously  (an Acer S231HL).  To give a mundane example, the red color of the word "canon" in "canonrumors" at the top of this webpage stands out to me now.  I never noticed it before as a red distinct from the other reds on this site.  In fact, I notice differences in reds across many websites that were not so apparent before I had this monitor, and seeing wide ranges in reds in everyday use is at this point what is most striking to me.

You are using a true wide-gamut screen now, so deep reds will definitely be richer. Deep colors in general, as well as colors in the AdobeRGB gamut that extend beyond the reach of sRGB, will be more vibrant. Greens in particular should really start to pop with the NEC.

Differences in reds as they appear in photos are noticeable as well, relative to what they were with my previous monitor.  The banding in the red background of the attached JPG file is evident, whereas to see the banding with the Acer monitor I was using, one would have to pixel-peep the original image at full size (in the original RAW file, banding was not visible).

That would be the screen's hardware 14-bit LUT. Even though internally the operating system and most software are only capable of 8-bpc color, with the NEC screen you actually have 14-bpc (or 42-bit) color. When working with images that have a higher bit depth than 8-bpc, such as 16-bit TIFF or RAW images in the AdobeRGB or ProPhotoRGB gamut, instead of multiple similar colors basically being "binned" into the same 8-bit color, the screen is actually able to differentiate those colors, and the correct colors will be looked up via the LUT built into the monitor. You should RARELY see posterization and banding anymore, as that is usually caused by that binning effect, where similar colors are merged by ICM (which can occur with both Relative Colorimetric and Perceptual intents) when there isn't enough precision in the display space to account for all actual source colors.
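The binning effect is easy to demonstrate (a sketch: count how many distinct display levels a narrow, smooth tonal ramp survives into at 8 bits versus 14 bits):

```python
import numpy as np

# A smooth ramp covering 5% of the tonal range, like a subtle
# background gradient in a photo.
ramp = np.linspace(0.40, 0.45, 4096)

# Distinct integer levels after quantizing to each display depth.
levels_8bit = np.unique(np.round(ramp * (2**8 - 1))).size
levels_14bit = np.unique(np.round(ramp * (2**14 - 1))).size

print(levels_8bit, levels_14bit)
```

A few thousand distinct source values collapse into roughly a dozen 8-bit steps, which is exactly what shows up as visible banding; at 14 bits the same ramp keeps hundreds of steps.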

1046
Software & Accessories / Re: Screen gamut
« on: January 23, 2014, 01:17:06 AM »
A simple acknowledgement you made a small mistake and that you have subsequently edited all your posts to rewrite what you lambasted me for would have been sufficient.

But, whatever.

Tit for tat. Two way roads. We both made mistakes. I apologize.

We both have good input to make to a thread about colour management; you are very theory-based, I am more practical and pragmatic. Having printed commercially for other photographers I have a slightly unusual perspective for this forum, but it closely aligns with industry experts. I would never argue pure technicalities with you (apart from perspective and compression  :) ); indeed, I often enhance my understanding of "why" because of posters like you. Of course consistency should be a given in an advanced thread like this; my input was merely to clarify a small point (which you vehemently denied but then edited) and to caution against obsessiveness and diminishing returns in a section of the workflow from capture to output viewing. Of course screen calibration and profiling are important, as are camera and printer profiles, as are final output viewing condition considerations. Obsessing over one or two and ignoring the rest is about as good for ultimate accuracy as not bothering at all.

I completely agreed with you in the end about perspective and compression. ;P I was trying to extend the meaning beyond what you and Neuro were insisting it was limited to...I failed.  ::) I am happy to accept the more limited description.

I edited the post to eliminate future confusion, nothing else. And I still wish you had read everything...there were multiple posts between my first, and the point at which you decided to ignore everything I said about ColorMunki Design and claim I was talking about the ColorMunki Display (and I DID use the word Design in my subsequent posts, many times, before you made an issue about it. I only edited the first post, to avoid further confusion of anyone else who came along and read it.)

As for output scenarios, as you said, as an industry printer, you have a rather unique perspective. For the average photographer, what matters is their own workflow. We can even eliminate the print context, and just deal with the web context. Most photographers publish their work on the web. I do myself, more than I print (although I do print quite a lot.) The thing that is most irksome about DataColor's system is the inconsistencies between calibration and subsequent recalibrations (which the software wants you to do pretty frequently, no longer than two months at the most, and more frequently than that most of the time.) When I did recalibrate often, the biggest issue was the tonal range in the shadows. It swings widely...sometimes blacks seem completely crushed, and at other times they are wide open and rich.

That becomes a serious problem for publishing to the web. A calibrated screen is also usually readjusted so that its backlight is dimmer. A brightness of 120 cd/m² is pretty standard, and in some cases as low as 80 cd/m² is even better for workflows that involve print. Editing images with crushed blacks and shadows, especially if you don't know they are crushed, inevitably results in you editing the shadows to be brighter. It was a while before I noticed this inconsistency; however, when I would view my work on other people's computers, I'd easily notice that some looked pretty good, while others were clearly too bright in the shadows, often too noisy in the shadows, and tonal gradients were really poor. I've actually kept an old Sony Vaio laptop around that has this huge 18.4" screen that is one of the worst screens I've ever used. Its sole value to me is to check my post-processing and make sure that I edited the shadows properly.

This was the heart of what I was trying to write about. The inconsistencies in the DataColor system are a problem. Less so, really, for print...and you make some great points there. The inconsistencies in the DataColor system that frustrate me so much are actually most important for my web publishing workflow. I've actually avoided recalibrating for...months, at this point, maybe over five months...because I kept recalibrating last time until I managed to get perfect shadow tonality, along with the best green and red reproduction I could manage. I'm truly afraid to recalibrate with the Spyder system, as it could result in hours of fiddling and tweaking and fiddling more to avoid the crushed blacks problem.

I also believe I am not the only one who has noted such issues with DataColor's products. In my research about what screen to replace my current one with (the backlight on this one is going to go in the not too distant future, as I've had this screen for at least seven years now), many reviewers of products like NEC's PA301 and PA302 screens noted that they experienced much more consistent and accurate results with either any of the i1 products from X-Rite, or the bundled SpectraView II device. I've read about similar issues with people who use Dell's UltraSharp screens. Even if "perfect" calibration isn't and maybe shouldn't be "the goal", I do believe "consistent" calibration IS and SHOULD be a goal.

1047
Software & Accessories / Re: Screen gamut
« on: January 23, 2014, 12:19:07 AM »

You can't concern yourself too much about external viewing circumstances.

Nonsense, final output viewing is the goal, not some isolated and insular workspace that we create. Though the output is of course not limited to printing, there are many great printers who write on this subject regularly, Ctein and John Paul Caponigro being my two favourites, and though I don't agree with all their ideas, at the foundation of every great printer's workflow is a clear understanding of where an image will be displayed. It is the first question any serious printer will ask; sure, if you don't know then a "regular" print can be made, but that isn't the complete way, nor is it going to get you anything like optimal results.

I agree in certain circumstances. For example, if you are going to be exhibiting your work at a gallery, and you can gather the specifics of the kind of lighting they use, then you absolutely want to print according to those specific viewing circumstances.

I am thinking more of the general circumstance. You, or maybe a lab, print on a regular basis for potentially thousands of customers. The average customer couldn't give you any meaningful information about what kind of light is going to fall on the print when it is being viewed, and even if they did, it would only be correct some of the time. Even if you did know, you still aren't going to be recalibrating your system every time for different output circumstances. You do something like what DataColor SpyderPrint does...generate a profile, then extrapolate potential alternative light sources and white points from that. And you simply select the profile you need for those special prints. However, you calibrate your system to one single baseline...you don't keep recalibrating it for each of potentially countless output circumstances.

And I was just trying to put in perspective the comparative futility of obsessing about the last few percent of accurate profiling and calibration. A robust colour managed workflow is a good general practice, but spending excessive time and money on "better" is rarely worth it, especially when you consider the workstation to be nothing more than an intermediate step from capture to final output viewing, and the wide gamut of conditions that might encompass.

It is less about accuracy and more about consistency. I thought I made that relatively clear in all my words...but perhaps you are not really reading everything I've written. I wrote a lot about the inconsistencies of the DataColor products, vs. the consistency and accuracy of the ColorMunki Design...the fundamental issue that I've had (after years of use with multiple devices, starting with a Spyder2, actually) is lack of consistency with DataColor's products from calibration to calibration.

Anyway, there has been more than enough context in everything I've written to avoid any of the ambiguities you seem to be picking up. Even if you pulled that quote from a prior answer, if you had actually read everything, you would have known I was referring to the ColorMunki Design by the time you made an issue of which device I was talking about.

We obviously disagree here, and I'd rather not continue to hijack IMG_001's thread with another endless debate that goes exactly nowhere...so, TTFN!  :P

1048
Lenses / Re: Canon 300mm f4 L lens for sports photography?
« on: January 22, 2014, 11:53:58 PM »
On the side I am saving up for a Canon 7D.

Why?  Hopefully only as a backup to your 5DIII.  The only reason I can see to choose a 7D over a 5DIII is if the latter is broken.

The other reason would be the obvious crop factor advantage. I mean, that is basically WHY people pick the 7D for pretty much anything...REACH! Although, I would personally wait for the 7D II rather than buy the 7D, as it should bring APS-C IQ up a notch...not as good as the 5D III, but certainly better than the 7D.

Reach is another... So the 300 becomes a 480

I still like the 5DIII but some like the crop sensor for length

The 'reach benefit' is mostly an illusion.  It's a 'crop factor' not a 'magnification factor'.  A 300mm lens doesn't become 480mm on APS-C, you're just cropping away the outer part of the frame. 

A 'reach benefit' for a 7D over a 5DIII only exists if you need more than 8.6 MP for your desired output.  If 8.6 MP is sufficient (up to 16x24" / A2 prints), then the 5DIII image cropped to the FoV of the APS-C sensor will give you equivalent IQ at up to around ISO 800 (on the 7D), and progressively better IQ as ISO increases from there (at some point, probably ISO 3200 but certainly ISO 6400, the 7D's noise is so bad that an up-res'd 5DIII image would be better even if you do need all 18 MP).

Some might also cite the 2 extra fps as an advantage, and it is one in theory…but in practice, I believe the superior AF performance of the 5DIII will yield a higher overall keeper rate despite the lower fps.

The reach advantage is a pixel size thing as much as it is a crop factor thing. If the 7D had the same pixel size as the 5D III, then you would be correct, "reach benefit" would be an illusion. But the 7D has pixels that are quite a bit smaller than the 5D III's. If we only factor in pixel size, the 7D has a 1.5x "reach benefit", so a 300mm lens is quite literally like a 450mm lens on FF. That is no myth, that is ACTUAL advantage thanks to the pixel size difference.
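The pixel-pitch claim above is easy to check with a few lines of arithmetic (a quick sketch; the pixel counts and sensor dimensions are the published specs for each body — the "reach" ratio here is simply the ratio of pixel pitches):

```python
# Published sensor specs (approximate):
#   7D:     18.0 MP (5184 x 3456) on a 22.3 x 14.9 mm APS-C sensor
#   5D III: 22.3 MP (5760 x 3840) on a 36.0 x 24.0 mm full-frame sensor

def pixel_pitch_um(sensor_width_mm, pixels_across):
    """Pixel pitch in micrometers, assuming square pixels."""
    return sensor_width_mm * 1000.0 / pixels_across

pitch_7d  = pixel_pitch_um(22.3, 5184)   # ~4.30 um
pitch_5d3 = pixel_pitch_um(36.0, 5760)   # ~6.25 um

ratio = pitch_5d3 / pitch_7d             # ~1.45x
print(f"7D pitch:    {pitch_7d:.2f} um")
print(f"5D3 pitch:   {pitch_5d3:.2f} um")
print(f"pitch ratio: {ratio:.2f}x -> 300mm acts like ~{300 * ratio:.0f}mm")
```

The computed ratio comes out to about 1.45x (300mm behaving like roughly 435mm), close to the ~1.5x/450mm rounded figure in the post.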

I think that pretty much everyone knows that you get 18mp rather than 8.6mp with the 7D. Even if they don't know the exact numbers, the difference between the 7D and 5D III in terms of relative crop area is quite large. And it isn't just about printing large, either. The 7D offers you EVEN MORE cropping ability...so if you need to crop the center 50% out of the 7D image, you're talking about maybe having 10mp vs. 4mp, at which point the argument for the 5D III almost becomes irrelevant. Don't forget the downsampling advantage, either. For those who primarily publish on the web, downsampling the 7D to web size has the effect of GREATLY reducing noise levels. Since you're downsampling more than twice the number of pixels as the 5D III's 8.6mp, noise levels normalize quite nicely...such that you generally cannot tell any difference between the two, with the exception that the 7D images are more detailed, sharper, and crisper.
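The megapixel bookkeeping in the crop comparison above can be sketched the same way (published megapixel counts; the "center 50%" crop is the scenario described in the post, and the results land close to its rounded 10mp-vs-4mp figures):

```python
crop = 1.6                  # Canon APS-C crop factor
mp_5d3_full = 22.3          # 5D III megapixels
mp_7d_full  = 18.0          # 7D megapixels

# 5D III image cropped to the APS-C field of view loses crop^2 of its area:
mp_5d3_cropped = mp_5d3_full / crop**2        # ~8.7 MP

# Cropping the central 50% (by area) out of each:
print(f"5D III at APS-C FoV: {mp_5d3_cropped:.1f} MP")      # ~8.7 MP
print(f"7D, center 50%:      {mp_7d_full / 2:.1f} MP")      # 9.0 MP
print(f"5D III, same crop:   {mp_5d3_cropped / 2:.1f} MP")  # ~4.4 MP
```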

The general advantage of the 5D III is that you can use longer lenses with narrower apertures and achieve similar IQ for identical framing. For example, one could use the 500mm f/4 on a 5D III at f/8 with ISO 6400 and get similar IQ as a 300mm f/4 on a 7D at f/4 with ISO 1600. But, that assumes you can afford the 500mm f/4 lens, and need to stop it down to f/8 to get the DOF you want. In all other circumstances, the 7D + 300mm f/4 is the better option, because it puts more pixels on subject than the 5D III + 300mm f/4.
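The textbook equivalence math behind that comparison is: to match an APS-C shot on full frame (same field of view, depth of field, and total light gathered), multiply focal length and f-number by the crop factor, and ISO by its square. A minimal sketch (the 500mm/f8/ISO 6400 figures in the post are rounded up from these values):

```python
crop = 1.6  # Canon APS-C crop factor

def ff_equivalent(focal_mm, f_number, iso):
    """Full-frame settings equivalent to the given APS-C settings:
    same field of view, depth of field, and total light gathered."""
    return focal_mm * crop, f_number * crop, iso * crop**2

focal, f_num, iso = ff_equivalent(300, 4.0, 1600)
print(f"{focal:.0f}mm f/{f_num:.1f} ISO {iso:.0f}")  # 480mm f/6.4 ISO 4096
```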

I completely agree about the frame rate. The 7D has its AF issues, and the intrinsic jitter basically negates about 2fps. That still doesn't make the 7D "worthless" relative to the 5D III. If you have the funds to buy a great white lens, then indeed, the 7D could really only serve as a seldom-used backup body. But if all you have is the 300mm f/4 L, then the 7D, even though it only has an "effective frame rate" of about 5-6fps from a keeper-rate standpoint, still most definitely offers the reach benefit over the 5D III.

While you and I may be able to enjoy the benefits that Canon's Great White Mark II lenses have to offer, they are still a rather rare commodity. Having owned the 600/4 II for only about eight months, I still remember quite clearly why I originally picked the 7D rather than waiting for the 5D III. You can't deny the reach benefit with shorter lenses...it's real, and it's meaningful.

1049
Software & Accessories / Re: Screen gamut
« on: January 22, 2014, 11:51:15 PM »

The only reason I mentioned the ColorMunki, is because, as has been said, X-Rite's naming policy is very confusing and you didn't differentiate between the ColorMunki Display, a colorimeter, and the ColorMunki Design, a spectrophotometer.

Actually, I clearly stated it was the ColorMunki Design, and I even linked to the ColorMunki Design web site from where I took the quote about it being a spectro...
No, you didn't.

If you want to truly test out your screen and see what it is truly capable of, you should probably pick up an X-Rite ColorMunki. ColorMunki is a true spectrophotometer, which is a scientific-grade high precision device that will perform a much more realistic and accurate calibration of your screen, and should theoretically work for any gamut.

You're taking that out of context. Click the damn link in my quote about the ColorMunki...it takes you directly to the ColorMunki Design web page. Everything I put after my quote assumes you actually READ the quote and clicked the link...since it came BEFORE.

1050
Software & Accessories / Re: Screen gamut
« on: January 22, 2014, 11:48:49 PM »

You can't concern yourself too much about external viewing circumstances.

Nonsense, final output viewing is the goal, not some isolated and insular workspace that we create. Though output is of course not limited to printing, there are many great printers who write on this subject regularly, Ctein and John Paul Caponigro being my two favourites (though I don't agree with all their ideas). At the foundation of every great printer's workflow is a clear understanding of where an image will be displayed. It is the first question any serious printer will ask; sure, if you don't know, then a "regular" print can be made, but that isn't the complete way, nor is it going to get you anything like optimal results.

I agree in certain circumstances. For example, if you are going to be exhibiting your work at a gallery, and you can gather the specifics of the kind of lighting they use, then you absolutely want to print according to those specific viewing circumstances.

I am thinking more of the general circumstance. You, or maybe a lab, print on a regular basis for potentially thousands of customers. The average customer couldn't give you any meaningful information about what kind of light is going to fall on the print when it is being viewed, and even if they did, it would only be correct some of the time. Even if you did know, you still aren't going to be recalibrating your system every time for different output circumstances. You do something like what DataColor SpyderPrint does...generate a profile, then extrapolate potential alternative light sources and white points from it, and simply select the profile you need for those special prints. However, you calibrate your system to one single baseline...you don't keep recalibrating it for each of potentially countless output circumstances.
