Outgoing link to Luminous Landscape, article contributed by Nick Devlin.
Being from LL, it's no surprise that the article is about ergonomics. Surprisingly, instead of reading like a laundry list of lesser-known camera problems, it can actually be split into two large, rather exotic areas: voice recognition and tablet functions.
# 1. Voice recognition
It might be useful for some photographers working in kind enough conditions (though many of us are tactile-oriented, or typically need absolute quiet, so it wouldn't be a big boon). I won't say it's a dumb idea, and it deserves some focus for the future; for whatever my cell phone (c. 2006) is worth, it isn't too cheap or too old to have its own voice recognition. However, I have to question Mr. Devlin's argument that this is an area where things could be simplified: while I'm sure new users could be expected to use voice recognition to take pictures of themselves in group shots or to switch to "running guy mode," the scenario he gives is pretty much normal camera use...sans menus. My quarrel isn't with the idea of reducing the use of menus, or even with offering an alternative to physical controls (ever gone hunting for BULB exposure mode and forgotten which end of the dial it's on?); it's with the problems this poses for feedback. It just about calls for the camera to talk back and confirm what you've told it, especially for those times when you aren't behind the viewfinder. I also question how it would help stability, or even be usable with the usual microphone placement, if you were talking under the camera body while the viewfinder was up to your eye.
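To make the feedback point concrete, here's a minimal sketch of the confirm-before-acting loop I mean. Everything here is hypothetical - the command names, the `confirm` callback (standing in for the camera speaking back and hearing "yes"/"no"), and the two-command vocabulary are all made up for illustration:

```python
# Hypothetical voice-command handler: the camera echoes its interpretation
# back and acts only on explicit confirmation, so you get feedback even
# when you aren't looking at the screen or viewfinder.

COMMANDS = {
    "shutter": "Releasing shutter",
    "bulb": "Switching to bulb mode",
}

def handle_utterance(utterance, confirm):
    """Parse an utterance; act only after the confirm() callback agrees.

    `confirm` stands in for the camera speaking its interpretation aloud
    and listening for a yes/no answer.
    """
    word = utterance.strip().lower()
    if word not in COMMANDS:
        return "Unrecognized command"  # stay safe: do nothing on noise
    if confirm(f"Did you say '{word}'?"):
        return COMMANDS[word]
    return "Cancelled"
```

The point of the extra round trip is exactly the prankster/background-noise problem: a misheard command costs you a question, not a ruined exposure.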
I'm sure the camera could be taught to filter out even pranksters, but the last I heard, bulletproof voice recognition software that reliably filters out background signals (be it some strange frequency that drowns out your voice, or somebody else yelling commands to fool your camera) is expensive, both in dollars and, most likely, in processing power. It is getting better, though. For a pretty current example, Raytheon's "Boomerang Warrior-X" sniper detection system is small enough to be worn on a vest, and probably has decent battery life to boot. Elsewhere I've read that the vehicle-mounted Boomerang averaged "just one" false positive in 1,000 hours of field use - not bad at all. Like our hypothetical camera system, the Boomerang has to distinguish correct information (incoming sniper bullets) from false (firecrackers, backfiring car exhausts). Unlike the camera system, though, a lot of Boomerang's problems stem from the extra utility it provides: tracking and reporting the actual direction of the sound - something camera systems likely don't need. Still, we could expand on Nick Devlin's wish list here and say that a direction-activated sound trigger would be exceptionally handy for wildlife photographers, like this guy. Worth the development time? Eh, maybe not. Of course, Nick Devlin's request isn't nearly that complicated - but I do think the problems (ergonomics and camera design, mainly) may not be trivial to overcome.
# 2. "Live View Focus Masking"
Definitely requires a toggle control. Sounds interesting, though. However, I'm doubtful that even better AF systems can provide a bulletproof indication of the exact center - which is darn important - of the zone of sharp focus. Try using AF confirmation and the viewfinder of a pentamirror camera to compose a few different kinds of photos, and you'll see what I mean. For ENG and other camcorder-style use, I'm sure this is "good enough," but I have my doubts it even approaches the "built-in loupe" abilities of Live View. More on this thought in a moment.
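For reference, the camcorder-style masking being discussed is usually some form of "focus peaking": highlight pixels with high local contrast as a proxy for sharpness. A minimal sketch, with an entirely made-up image format (a 2D list of grayscale values) and an arbitrary threshold - real implementations use more robust sharpness measures:

```python
# Crude focus-peaking sketch: mark pixels whose horizontal contrast
# exceeds a threshold, treating high local contrast as "in focus".
# The threshold and image representation are illustrative only.

def focus_mask(gray, threshold):
    """Return a same-sized boolean mask of high-contrast pixels."""
    h, w = len(gray), len(gray[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(1, w):
            # absolute gradient against the left neighbor as a sharpness proxy
            if abs(gray[y][x] - gray[y][x - 1]) > threshold:
                mask[y][x] = True
    return mask
```

Note how this illustrates the objection above: the mask lights up wherever contrast crosses the threshold, which tells you an edge is sharp-ish but not where the exact center of the zone of sharp focus lies.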
# 3. "Expose to the right" mode
A worthwhile suggestion, and one that really fits under "photographer pet peeves." Perhaps it's not as sweeping a suggestion as it might seem, since the camera already has warning flashers for clipped (blown) highlight areas in the image. However, it fits pretty nicely with the other suggestions in the article.
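The arithmetic behind an ETTR hint is simple enough to sketch. Assuming (hypothetically) that the camera can report the brightest raw value in the frame and knows its clipping level, the remaining highlight headroom in stops is just a base-2 logarithm:

```python
import math

# ETTR sketch: how many stops of positive exposure compensation remain
# before the brightest pixel clips. The 12-bit clip level (4095) is an
# illustrative assumption, not any particular camera's value.

def ettr_headroom_stops(brightest, clip_level=4095):
    """Stops of exposure that can be added before `brightest` clips."""
    if brightest <= 0:
        return float("inf")  # an all-black frame has unlimited headroom
    return math.log2(clip_level / brightest)
```

So a frame whose brightest value sits two stops below clipping would prompt a "+2 EV" suggestion - which is exactly the kind of guidance the existing blinking-highlights warning almost, but not quite, provides.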
I'm not really going to cover the #5 "zone system" point, other than to say I got the impression Nick Devlin is asking for something like the ability to do Ansel-style darkroom work on the iPad during composition - perhaps dodging and burning exposure areas isn't going too far here. Mostly, though, I'm reminded how little I know about the Zone System (next to nothing beyond the name).
The part I found interesting was the iPad. I'm not so big on Vendor X's tablet computer versus the next, but we should soon be getting to the point where video feeds will be "good enough" to give a reasonably quick HD-resolution output for movie makers - and by extension it doesn't seem absurd to say that one could also have a much slower-updating version for still images. Overheating and power use pose a problem, but I think the market will eventually coalesce around a more or less standard (but optional) solution for plugging, perhaps even wirelessly, into a tablet or other ultralight notebook for additional ease of composition - it will finally become the new version of the ground glass. The tablet option, being a computer itself, provides the muscle to do whatever you want, as long as some programmer shares your opinion - thus taking care of the zebra-stripes option (and many other possibilities), possibly (hopefully) even via third-party software.
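Zebra stripes are a good example of how cheap these overlays are once a computer has the feed. A minimal sketch, assuming a made-up frame format (2D list of luma values normalized to 0-1) and the common-but-arbitrary 95% threshold:

```python
# Zebra-stripe sketch: flag pixels at or above a luma threshold with an
# alternating diagonal pattern, the classic camcorder clipping warning.
# Frame format and threshold are illustrative assumptions.

def zebra_mask(luma, threshold=0.95):
    """Mark over-threshold pixels with an alternating diagonal pattern."""
    out = []
    for y, row in enumerate(luma):
        out.append([(v >= threshold) and ((x + y) % 2 == 0)
                    for x, v in enumerate(row)])
    return out
```

A per-pixel threshold test and a checker pattern is all it takes - exactly the kind of feature a third-party app on a tethered tablet could add without the camera maker's involvement.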
At the same time, I would say this: touchscreen zooming, both on tablets and on the back screen of the camera itself, is pretty important. (Of course, back to point #1, one could also opt for the Blade Runner style of voice-activated zooming and panning - except while composing instead of on the finished image, and minus the light-gathering magic that lets the camera see around corners in that movie.) Especially on entry-level cameras, using the single zoom button and the crosspad controller to step through the three zoom levels and slowly navigate across the frame (all while losing your sense of the overall framing, and without being able to zoom into multiple areas simultaneously) is very limiting. Current screens (and industrial design as a whole; see: iPhones) are pretty bad about fingerprints, but I think it can be done.
To boil my additional ideas down: standardize not just video feeds but also remote control (already possible in a limited fashion for tethered studio users, or for somebody with a compatible tablet computer running EOS Utility); allow multiple areas of focus (at certain zoom levels it should be quite possible to stay within the available data bandwidth, though I'd be more worried about binning or line-skipping artifacts, especially if a "full frame" image is to be displayed at the same time - but maybe this is less of a challenge than I imagine); and allow more touchscreen controls in various places, especially for zooming.
Personally, I'm a buttons guy - I don't like touchscreens much and don't trust them far, especially when valuable screen area gets wasted on making touch combos usable. But it's a development worth mulling over, at the least.
Of all these areas, I think the option to "tether" to devices better than notebook (i.e., full-size) computers is the most likely to be implemented - and, of the proposed options, the most likely to bring about the most and best changes.