So here’s some fun stuff
Want a 3D Camera?
Dean Francis has shown us his version of a Canon 3D camera!
http://www.deanfrancisstudios.com/index2.php?v=v1
House Tonight
Don’t forget to watch House tonight. It was shot 100% with the Canon 5D Mark II.
http://www.fox.com/house/index1.htm
Sony NEX Editorial
I played with and fell in love with the Sony NEX cameras at the Henry’s show in Toronto.
I much prefer the feel of the camera over the Micro 4/3 stuff I have used. I cannot wait to get my hands on one for a longer time.
I’d definitely buy the NEX 3
cr
First! And yeah, thinking of getting the NEX-3… hmm, can it fit in a coat pocket?
With the 16mm, yeah it can.
So I guess this is the rumored 5/17 announcement.
Those Sony NEX cameras do look nice, was the NEX 5 just too thin/smaller for you? I was hoping Sony would at least release a 2.0 nex prime over a 16 2.8.
Yay – something *less* likely from Canon than that square sensor!
For his sake, I hope Dean bothered to patent that idea before publishing that, because I do think RED will take the idea and run with it.
I am sooooo excited for House tonight! Can’t wait to see what they’ve been able to do with these HDSLR’s…
what is it?
the video is an ugly joke
Click past the initial House video preview and you can get to the good previews…
where are 720p previews?
the shots I’ve seen in low quality were very nice ;)
As you wait for tonight’s episode and season finale of House MD to air – you may find my interview with Gale Tattersall – the DP and Cinematographer of House – of interest. Gale Tattersall talks about why and how they used the Canon 5D Mark II for this episode. http://bit.ly/bfUcPA
A bit off topic: what’s the title of the music in the 1st link? ;)
Artist: Snow Patrol
Song: Just Say Yes
The images from the sony NEX really impressed me. Maybe they are getting their act together. Some stiff competition would help Canon show their stuff. Nikon farms out production of their point and shoots, and isn’t very interesting.
I loved my Nikon CP-990, I wish they would produce cameras of that quality again. I gave mine to my daughter when I purchased my first Canon DSLR 8 or 9 years ago, and she still uses it.
Great and very interesting information.
Thanks a lot for posting and sharing it!
P.S. to all 5D Mark 2 users regarding the malfunction found on Firmware 2.0.4.
We got real confirmation that Canon is aware of the “iris issue” on Firmware 2.0.4 on the 5D Mark II, and they are discussing/planning improvements for the next firmware update.
So we encourage all users experiencing this issue to REPORT it to Canon AND at the same time request the features you still need most.
– Q: Why report it if Canon is already aware of it?
– A: To show them a lot of people are experiencing it, to ensure they fix it sooner rather than later, and to take the opportunity to request those features you still need that could be implemented via firmware update (not to mention easy improvements like more frames in bracketing mode, Auto ISO limiters, etc.)
– The iris issue is VERY easy to reproduce, as we explain here: http://5dmark2.wordpress.com/2010/04/08/malfunction-in-firmware-2-0-4-update/
We also posted a “List of Lenses Affected” at our blog. Please note it is a firmware issue and NOT a lens issue. The lenses are great.
Cheers.
-5D2Team
From an ergonomics perspective it looks bad, but I haven’t handled one. How’s the handling?
I love the look of the NEX cameras, I really think that they made the right decision to do the High-Quality approach, instead of something like the cheapo MFT or the ugly panasonic gs.
I wish Canon would make such a camera, they would be perfect as a small HQ camcorder or, via an adaptor, as a backup for my 3D;)
My prediction is a PowerShot SX2 IS tomorrow.
wow… the House episode so far is remarkable. I hope it will stop people from saying the footage cannot be used for broadcast.
I think you can already configure a RED camera like that. I’ve seen stereo setups with their cameras before.
It doesn’t matter what the video haters say, it’s all about money. Producers are really hurting for money and looking for ways to cut costs. The cost of 35mm film and the film cameras/lenses is extremely expensive. Lots of $$$ can be saved, and that’s the bottom line.
Can I watch it online?
Sure, on FOX.com probably. Depends where you are in the world. If your IP is US then you should be fine.
that does look great, but having it as a battery pack would be more sensible, comfortable, and easier to implement.
Just click on the second body instead of a battery pack, have the shutter release synched, and have a storage sync in the second body.
I think it will come (faster primes)..
Sony has released this new lens line (E mount) that would fit the NEX line and their camcorder line as well..
I think they would cater more to what the video community would want next..
if you can’t wait for faster primes, buy an adapter and fit it with the Alpha lens..
I would agree with CR guy..
m4/3 time is coming to an end (good riddance)..
but I choose the NEX 5 with the pancake lens over the NEX 3..
I don’t think Sony intends to compete with the DSLR, m4/3 nor the P&S market..
they released the NEX cameras because they saw an opening in the market (possibly inspired by the Leica M8.2 and M9 seeing how they managed to fit their sensors in the M body)..
for $650, you get a 1.6x crop camera with 16mm 2.8 that does AVCHD 1080i.. yes it’s not a Leica, nor is it a 5D2, not even a 50D nor D300..
but that’s the point exactly..
it’s not in any of the above mentioned camera’s category.. which makes it interesting to say the least..
Sony just reacted to the market; otherwise it would have been too late. Sony had all the resources to jump into this before Panasonic or Olympus, but they didn’t want to.
Sony never liked the idea of turning the SLR into a movie-making tool – for the reason, check the prices of their movie cameras. Now the game has changed and their real competition is RED. Things are gonna be interesting next year.
House was most likely shot with a 5D MkII because they were in close quarters or for some other practical reason, not because of money.
Big time TV shows have a lot of money and a reputation to maintain. I know I worked on them lol
Triple flash mount on the “3d” 3D? Brilliant! :) The one-has-it-all strobist solution! Bounce the flash off of all the walls around you! :)
3d?…oh I get it! haha! two lenses!
What is a “HDSLR”?
How about some 60D news! I want :D
No 17th. May announcement, too bad..
Funny Idea with the 3D though.
But that’s again another reason why they will never make a “3D”, it would be confusing.
what the hell is this TEAM that is always posting here?!
http://5dmark2.wordpress.com ;)
Why are you guys interested in the NX-3 over the NX-5?
I want to preorder a NX-5.
“always posting here” is very exaggerated…
Do not confuse us with other people who have begun to use “II team” or derivatives in their nicknames…
We have nothing to do with them.
Stereoscopic video, and two internal microphones for stereo audio pickup. But, would Canon really be dumb enough to put a mic right under where your little finger would grip the camera? I think not…
In the case of this episode of House, money was not the consideration. It had to do with choosing a camera that could work in very tight quarters and in low light. For the most part they will return to using film for most episodes except where using the 5DM2 makes sense.
I am American!
HD video DSLRs. Basically DSLRs that can shoot video.
I really think that’s the future direction of even professional cameras. Not for a while, mind you – but in 15 – 20 years I think we’ll view analog viewfinders much the same way as we do analog sensors (i.e. film) now.
Canon will eventually make one – the only question is when.
Unless it really is “3D” :D
If and when they ever do make a binocular camera (obviously not happening anytime soon) they may have a hard time resisting calling it the 3D.
Also: shouldn’t that have two viewfinders, one for each eye? :)
Yea of course for a real 3D camera, “3D” would be THE name. But since this won’t come anytime soon, this name will never be used.
2 Viewfinders?
I wonder how it would work. Would it take one or two pictures?
Because if it took two, then you could shoot every situation twice with different settings, including lenses!
But then you would still have to make it 3D on your own. But then again, there could be a mode for both…
Oh hell, that thing would be funny I guess, but also incredibly expensive.
Sure you need two viewfinders, one for each eye, and a big hole for the nose in between them.
For closeup work you would even have to select different AF points ;-)
Forget two viewfinders, this needs a direct print button on each of the 3 components; if all are pushed simultaneously and used with a new modelPROGRAF (as opposed to imagePROGRAF) 3D 128000 Mark II printer, it will render a solid model of your image!
too big and bulky, and the adapter alone is 200 bucks more.
Things have changed.
Although House has a lot of money, most productions are suffering. That’s why MGM is going under, and many others are hanging on by their fingernails.
A hot shoe on each of the 3 pieces? what for? I understand one on top of each lens but not on the grip.
Interesting concept though. Real stereo photos and 3D video. Not bad.
first off, it’s NEX with the ‘E’ in between..
second, the NEX 3 has the same specs as the NEX 5 aside from the pull out LCD.. if you want to spend extra $150 to get that, be my guest..
it’s not like it’ll make your pictures any better.. lol..
No, the NEX-5 captures Full HD Video, the NEX 3 does not.
More Is Better
Binary Representation of Pictures
The fact that color can be broken down into individual components is extremely important to digital imaging – the process of breaking down a picture into the 1s and 0s of digital communications. Breaking down a picture into individual components can be done in two basic steps:
Breaking down the picture into a pixel grid – For a picture to be described as a series of 1s and 0s, it first must be broken down into a grid, or array. This process simply places a grid over a picture and assigns a single color for each square in the grid. This single color grid square is called a “pixel” (short for picture element).
The number of pixels used for the picture breakdown is called the “spatial resolution” and is usually referred to by its horizontal and vertical number of pixels, such as “640×480”, meaning 640 pixels horizontally and 480 pixels vertically.
For a given picture, the number of pixels determines the quality of the digital picture. That is, the fewer the pixels, the larger each must be and the lower the picture quality; the more pixels, the smaller each pixel is and the better the picture quality:
Low Spatial Resolution: large pixel size, fewer pixels, low picture quality.
High Spatial Resolution: small pixel size, more pixels, high picture quality.
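The grid-assignment step described above can be sketched in a few lines of Python. The `pixelate` helper below is hypothetical (not from the article): it averages each block of samples into one grid square, showing how fewer, larger pixels mean lower spatial resolution.

```python
def pixelate(image, block):
    """Reduce an image to a coarser pixel grid: each output pixel
    is the average of a block x block square of input samples."""
    h, w = len(image), len(image[0])
    grid = []
    for y in range(0, h, block):
        row = []
        for x in range(0, w, block):
            samples = [image[yy][xx]
                       for yy in range(y, min(y + block, h))
                       for xx in range(x, min(x + block, w))]
            row.append(sum(samples) // len(samples))
        grid.append(row)
    return grid

# A 4x4 picture reduced to a 2x2 grid: lower spatial resolution,
# larger (coarser) pixels.
image = [[0, 0, 255, 255],
         [0, 0, 255, 255],
         [10, 10, 200, 200],
         [10, 10, 200, 200]]
print(pixelate(image, 2))  # → [[0, 255], [10, 200]]
```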
Standard Spatial Resolutions
There are a number of standard resolutions, or arrays, in the sensor industry. Most of these formats come from the display monitor industry, which drives the number of pixels you see on computer monitors. Since sensor output is typically displayed on monitors, sensors commonly match monitor resolutions. Terms one will hear in regard to spatial format include:
CIF – Common Intermediate Format – 352 x 288 pixels for a total of 101,376 pixels (commonly rounded to 100,000 pixels). This format was developed for PC video conferencing. The number of pixels is fairly small, a compromise needed to achieve full-motion video at 30 frames per second.
QCIF – Quarter CIF – One quarter of the CIF format, so 176×144 for a total of about 25,000 pixels.
VGA – Video Graphics Array – 640×480 pixels for a total of 307,200 pixels. The VGA format was developed for computer monitors by IBM and became the standard for monitors for many years. Although monitor resolutions today are higher, VGA is still the lowest “common” display resolution that all PCs support.
SVGA – Super VGA – 800×600 pixels for a total of 480,000 pixels. The next highest monitor resolution developed for PCs.
XGA – “Xtended” Graphics Array – 1024×768 for a total of 786,432 pixels. Another monitor standard.
If a sensor does not match one of these standards, its resolution is simply given as horizontal by vertical (300×200, for example). Typically, a sensor with more than 1 million total pixels (anything more than 1000×1000 pixels) is termed a “megapixel” sensor.
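The standard formats and the megapixel threshold above can be tabulated directly. A small sketch (the `describe` helper is hypothetical):

```python
# Standard spatial resolutions from the text, with total pixel counts.
FORMATS = {
    "QCIF": (176, 144),
    "CIF":  (352, 288),
    "VGA":  (640, 480),
    "SVGA": (800, 600),
    "XGA":  (1024, 768),
}

def describe(width, height):
    """Total pixel count plus the 'megapixel' classification."""
    total = width * height
    label = "megapixel sensor" if total > 1_000_000 else "sub-megapixel sensor"
    return total, label

for name, (w, h) in FORMATS.items():
    print(name, describe(w, h))
# e.g. CIF → (101376, 'sub-megapixel sensor')
```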
Digital Representation of Pixels
Now that the picture is represented as an array of pixels, each pixel needs to be described digitally. To do this, each pixel is assigned two main components: its location in the picture and its color. Its location is usually just represented by its “x and y” coordinate in the grid. Its color is represented by its color resolution, which is the method of describing a color digitally.
Using the RGB method of color representation, a color can be divided into an arbitrary number of levels of that color. For example, red can be broken down from total red to no red (or white):
No Red → Total Red
Each step in the arbitrary breakdown is called a “gray level” (even though the color is not gray). The same breakdown can be done for green and blue.
By experiment, the naked eye can distinguish about 250 shades of each color. Using binary math, the closest binary number is 256, which is 2^8, so 256 gray levels are used for each color. This means 8 bits are used for each R, G, B component, for a total of 24 bits of color representation. The complete R, G, B breakdown of 2^24 colors represents about 16.7 million colors that can be represented digitally. The number of colors represented by a pixel is called its “tonal resolution” or its “color dynamic range”. If fewer bits are used, fewer colors can be represented, so the dynamic range is smaller.
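The 8-bits-per-channel arithmetic can be checked in a couple of lines; `pack_rgb`/`unpack_rgb` are hypothetical helpers illustrating how the three 8-bit components fit in one 24-bit value:

```python
# 8 bits per channel → 256 gray levels per color, 24 bits total.
levels_per_channel = 2 ** 8        # 256
total_colors = 2 ** 24             # 16,777,216 ≈ 16.7 million

def pack_rgb(r, g, b):
    """Pack an (R, G, B) triple into one 24-bit integer."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(value):
    """Recover the three 8-bit components from a 24-bit value."""
    return (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF

assert total_colors == 16_777_216
assert unpack_rgb(pack_rgb(200, 100, 50)) == (200, 100, 50)
```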
Image Sensors
Image sensors are devices that take an image and directly convert it to a digital image. Referred to in marketing literature as “silicon film” or “silicon eyes”, these devices are made of silicon because silicon is both sensitive to light in the visible spectrum and able to have circuitry integrated on-board. Silicon image sensors come in two broad classes:
Charge-Coupled Devices (CCD) – Currently the most commonly used image sensor, CCDs capture light onto an array of light-sensitive diodes, each diode representing one pixel. For color imagers, each pixel is coated with a film of red, green, or blue (or complementary color scheme) so that each particular pixel captures that one particular color.
The pixel, made up of a light-sensitive diode, converts light photons into a charge, and the value of that charge is moved to a single location in a manner similar to a row of people passing buckets of water. At the end, the charge is amplified. Since this “bucket brigade” is accomplished by applying different voltages to the pixels in succession, the process is called charge-coupling. Because the value in the pixel is moved by applying different voltages, CCD sensors must be supported by several external voltage generators. In addition, CCDs require a specialized manufacturing process that cannot be used by any other device.
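The “bucket brigade” readout can be modeled as repeatedly shifting every charge one step toward a single output amplifier. This is a toy model of the data flow, not real device physics, and the gain value is an arbitrary assumption:

```python
def ccd_readout(row, gain=2.0):
    """Shift charges out of a CCD row one step at a time toward the
    single output node, amplifying each charge as it arrives.
    Returns values in readout order (pixel nearest the output first)."""
    row = list(row)
    out = []
    while row:
        # the charge nearest the output reaches the amplifier...
        out.append(row.pop(0) * gain)
        # ...and every remaining charge shifts one position closer
    return out

print(ccd_readout([1, 2, 3]))  # → [2.0, 4.0, 6.0]
```

Note there is only one amplifier for the whole array; the per-pixel amplifiers of CMOS imagers, described next, are the key structural difference.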
Graphical representation of a CCD (image source: Digital Photography Review)
CMOS Imagers – Like CCDs, these imagers are made from silicon, but as the name implies, the process they are made in is called CMOS, which stands for Complementary Metal Oxide Semiconductor. This process is today the most common method of making processors and memories, meaning CMOS Imagers take advantage of the process and cost advancements created by these other high-volume devices.
Like CCDs, CMOS imagers include an array of photo-sensitive diodes, one diode within each pixel. Unlike CCDs, however, each pixel in a CMOS imager has its own individual amplifier integrated inside. Since each pixel has its own amplifier, the pixel is referred to as an “active pixel”. (Note: there are also “passive pixel sensors” (PPS) that do not contain this amplifier.) In addition, each pixel in a CMOS imager can be read directly on an x-y coordinate system, rather than through the “bucket brigade” process of a CCD. This means that while a CCD pixel always transfers a charge, a CMOS pixel detects a photon directly, converts it to a voltage, and transfers the information directly to the output. This fundamental difference in how information is read out of the imager, coupled with the manufacturing process, gives CMOS imagers several advantages over CCDs.
CMOS Sensor Array
CMOS vs. CCD
Due to both design and manufacturing considerations, there are a number of advantages that CMOS Imagers have over CCD:
Integration – Because CMOS imagers are created in the same process as processors, memories and other major components, CMOS imagers can be integrated with these same components onto a single piece of silicon. In contrast, CCDs are made in a specialized process and require multiple clocks and inputs. This limits CCDs to discrete systems, which in the long run will give CMOS imagers a cost advantage, as well as limit what kinds of portable devices CCDs can be integrated into.
Reduced Power Consumption – Because of all the external clocks needed to “bucket brigade” each pixel, CCDs are inherently power hungry: every clock is essentially charging and discharging large capacitors in the CCD array. In contrast, CMOS imagers require only a single voltage input and clock, meaning they consume much less power than CCDs, a feature critical for portable, battery-operated devices.
Pixel Addressability – CCDs’ use of the bucket brigade to transfer pixel values means individual pixels in a CCD cannot be read individually. CMOS imagers, on the other hand, arrange pixels in an x-y grid, allowing pixels to be read individually. This means CMOS imagers can internally perform functions such as “windowing” (where only a small region of the imager is read), image stabilization to remove jitter from camcorders, motion tracking and other advanced imaging techniques that CCDs cannot do.
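Windowing is trivial once pixels are x-y addressable. A small sketch (the `window` helper is hypothetical) of reading only a sub-region, which a CCD's serial readout cannot do:

```python
def window(sensor, x0, y0, width, height):
    """Read only a sub-window of an x-y addressable CMOS array.
    A CCD would have to clock out the entire frame instead."""
    return [row[x0:x0 + width] for row in sensor[y0:y0 + height]]

# A 4x4 sensor whose pixel values encode their coordinates:
sensor = [[y * 10 + x for x in range(4)] for y in range(4)]
print(window(sensor, 1, 1, 2, 2))  # → [[11, 12], [21, 22]]
```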
Manufacturing Cost – Since CMOS imagers are manufactured in the same process as memories, processors and other high-volume devices, CMOS imagers can take advantage of process improvements and cost reductions these devices drive throughout the industry.
CMOS Imager Characteristics
There are a number of phrases and terms for describing the functional capability, physical features or competitive characteristics of an imager:
Active Pixel Sensor (also APS) – As explained above, an active CMOS Imager pixel has its own amplifier for boosting the pixel’s signal. Active Pixels are the dominant type of CMOS Imagers in the commercial market today. The other type of CMOS Imager, a passive pixel sensor (PPS), consists of only the photo detector without a local amplifier. While very sensitive to low light conditions, these types of sensors are not suitable for commercial applications due to their high amount of noise and poor picture quality when compared to active pixels.
Fill Factor – The amount of a CMOS Pixel that is actually capturing light. In an active pixel, both the photo detector and the amplifier take up “real estate” in the pixel. The amplifier is not sensitive to light, so this part of the pixel area is lost when taking a picture.
The fill factor is simply the percentage of the pixel area that is sensitive to light. In the example pixel pictured, this is about 40%. As semiconductor process technologies shrink, the amplifier takes up less space, so low fill factors are becoming less of an issue with active pixels. Note that in passive pixels – where there is no amplifier at all – fill factors typically reach over 80%. The reason they do not reach 100% is the routing and pixel-selection circuitry that is also needed in a CMOS imager.
Microlenses – In some pixel designs, the fill factor becomes too small to be effective. For example, if a fill factor in an imager were 25%, this would mean that 75% of the light falling on a pixel would be lost, reducing the pixel’s capability. To get around this situation, some CMOS imagers have small lenses manufactured directly above the pixel to focus the light towards the active portion that would otherwise fall on the non-light sensitive portion of the pixel. Microlenses typically can increase the effective fill factor by two to three times.
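The fill-factor and microlens arithmetic above reduces to one formula. A sketch using the article's 25% example and a 3x microlens boost (`effective_fill_factor` is a hypothetical helper):

```python
def effective_fill_factor(photo_area, pixel_area, microlens_gain=1.0):
    """Fill factor = light-sensitive fraction of the total pixel area,
    optionally boosted by a microlens (typically 2-3x), capped at 100%."""
    ff = photo_area / pixel_area
    return min(ff * microlens_gain, 1.0)

# A pixel whose photodiode covers 1 of 4 area units: 25% fill factor.
print(effective_fill_factor(1.0, 4.0))        # → 0.25
# The same pixel with a 3x microlens: 75% effective fill factor.
print(effective_fill_factor(1.0, 4.0, 3.0))   # → 0.75
```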
Color Filter Array (also CFA or just “color filter”) – CMOS pixels are sensitive to light photons but are not, by themselves, sensitive to color. Unaided, the pixels capture any kind of light, creating a black-and-white image. In order to distinguish between colors, filters are put on top of each pixel to allow only certain colors to pass, turning the “rods” of the array into “cones”. Since all colors can be broken down into an RGB or CMYK pattern, individual primary or complementary color filters are deposited on top of the pixel array. After being read from the sensor, software takes the different values of the pattern and recombines the colors to match the original picture. There are a variety of different filters, the most popular being the Bayer filter pattern (also known as RGBG). Note the large amount of green in the pattern, due to the fact that the eye is most sensitive to color in the green part of the spectrum.
Bayer Color Filter Pattern
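The Bayer mosaic tiles a 2x2 cell across the array, with green on the cell's diagonal. A sketch of one common orientation of the pattern (`bayer_pattern` is a hypothetical helper):

```python
def bayer_pattern(width, height):
    """Generate a Bayer color filter layout: green on the diagonal of
    each 2x2 cell, red and blue on the off-diagonal, so half of all
    filter sites are green."""
    tile = [["R", "G"], ["G", "B"]]
    return [[tile[y % 2][x % 2] for x in range(width)]
            for y in range(height)]

for row in bayer_pattern(4, 2):
    print(row)
# → ['R', 'G', 'R', 'G']
#   ['G', 'B', 'G', 'B']
```

Half the sites are green, matching the eye's peak sensitivity; software later interpolates (demosaics) full RGB values for every pixel.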
Noise – The same as static in a phone line or “snow” in a television picture, noise is any unwanted electrical signal that interferes with the image being read and transferred by the imager. There are two main types of noise associated with CMOS Sensors:
Read Noise (also called temporal noise) – This type of noise occurs randomly and is generated by the basic noise characteristics of electronic components. This type of noise looks like the “snow” on a bad TV reception.
Fixed Pattern Noise (also FPN) – This noise is a result of each pixel in an imager having its own amplifier. Even though the design of each amplifier is the same, when manufactured, these amplifiers may have slightly different offset and gain characteristics. This means that pixels whose amplifiers deviate will distort the signal identically in every picture taken, creating the same pattern again and again, hence the name.
Blooming – The situation where more photons arrive at a pixel than it can hold. The pixel overflows, causing charge to spill into adjacent pixels. Blooming is similar to overexposure in film photography, except that in digital imaging the result is a number of vertical and/or horizontal streaks radiating from the light source in the picture.
This photo illustrates two undesirable characteristics: blooming (the slight vertical line running from the top to the bottom of the picture) and lens flare (the star-shaped light, which is a function of the lens and not the imager).
Optical Format – is a number in inches that is calculated by taking the diagonal measurement of a sensor array in millimeters and dividing by 16. For example, a CMOS Imager that has a diagonal measurement of 4mm has an optical format of 4/16, or ¼”.
What Optical Format determines is the type of lens system that must be used with the imager. In the lens industry, there are standard sets of ¼″, ½″, ¾″, etc. lens systems. By using Optical Format, a user of imagers can use standard, mass-produced (and inexpensive) lens systems rather than having to design and custom-build a special lens system. The term and measurement come from the days of electron tubes and pre-date solid-state electronics. Generally speaking, larger optics are more expensive, so a ¼″ lens system costs less than a 1/3″ lens system.
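The diagonal-over-16 rule is a one-liner. A sketch reproducing the article's 4 mm example (`optical_format_inches` is a hypothetical helper):

```python
import math

def optical_format_inches(width_mm, height_mm):
    """Optical format = sensor diagonal in mm divided by 16, in inches."""
    diagonal = math.hypot(width_mm, height_mm)
    return diagonal / 16

# A 3.2 x 2.4 mm array has a 4 mm diagonal → 4/16 = 1/4" optical format.
print(optical_format_inches(3.2, 2.4))  # → 0.25
```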
Aspect Ratio – The ratio between the height and width of a sensor or display. It is found by dividing the vertical number of pixels (height) by the horizontal number of pixels (width), leaving it in fractional form.
For example, a sensor with a resolution of 640×480 would have an aspect ratio of 480/640 = ¾.
The most common aspect ratios are ¾ and 9/16. The ¾ ratio is used for computer monitors and TVs; the newer 9/16 ratio is used for High Definition Television (HDTV).
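The height-over-width convention above can be computed directly with Python's exact fractions:

```python
from fractions import Fraction

def aspect_ratio(width_px, height_px):
    """Height over width, reduced to lowest terms (the text's convention)."""
    return Fraction(height_px, width_px)

print(aspect_ratio(640, 480))    # → 3/4  (monitors, standard TV)
print(aspect_ratio(1920, 1080))  # → 9/16 (HDTV)
```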
Quantum Efficiency (or QE) – Imagers create digital images by converting photon energy to electrical energy. The efficiency in which each photon is converted to an electron is the imager’s quantum efficiency. The number is calculated by simply dividing electrons by photons, or E/P. If no electrons are created, the efficiency is obviously zero, while if each photon creates one electron the efficiency is 100%. Typically, a sensor has different efficiency at different light frequencies, so a graph of the quantum efficiency over the different wavelengths is typically shown:
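The per-wavelength graph aside, the basic E/P calculation the paragraph defines is just a ratio (`quantum_efficiency` is a hypothetical helper):

```python
def quantum_efficiency(electrons, photons):
    """QE = electrons generated per incident photon (E/P)."""
    if photons == 0:
        raise ValueError("no incident photons")
    return electrons / photons

assert quantum_efficiency(0, 1000) == 0.0     # no conversion: QE = 0%
assert quantum_efficiency(1000, 1000) == 1.0  # perfect conversion: QE = 100%
print(quantum_efficiency(450, 1000))          # → 0.45, i.e. 45% QE
```

A real sensor's QE varies with wavelength, which is why the datasheet figure is a curve rather than a single number.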
Dark Current – A phenomenon in CMOS imagers where pixels accumulate thermally generated electrons even without any illumination. It is a function of the manufacturing process and layout, and it increases with temperature.
http://www.siliconimaging.com/ARTICLES/CMOS%20PRIMER.htm
Hilarious! Too bad I don’t have time to mock up a view of the back. Maybe include a dead kitten (fuzzy mic) for that third hot shoe.
Dead Kitten (fuzzy microphone) in the middle shoe, flashes on the outside shoes.
like this blog, but: sorry, why do you keep referring to this lame “Dean Francis”? Also remember that lame square-sensor article – I still don’t get why you linked to it
Here’s what I’m expecting to see roll out:
PowerShot SX2 IS and PowerShot SX30 IS
12.1-megapixel backside-illuminated 1/2.33″ CMOS (SX2)/14.1 megapixel 1/2.33″ CCD (SX30)
24mm-720mm equivalent lens; f/2-5.6 minimum aperture
LP-E8 battery pack (found in EOS Rebel T2i)
Compatible with SDXC memory cards
10fps (SX2)/1.1fps (SX30) burst rate
RAW (SX2)
1080p@30fps/720p@60fps (SX2); 720p@30fps (SX30)
1,555k EVF (SX2)/920k EVF (SX30)
920k 3″ Vari-angle LCD (SX2)/460k 2.7″ Vari-angle LCD (SX30)
slow two days
What’s the relevance here?
Got any more predictions?
that is the reason sony hates RED
more relevant than your useful post
it’s been slow for a really long while. hell, it’s been slow since the 550D came out. a few similar-sounding 60D rumors and 1Ds4 square-sensor ones… that’s about it
slow three days
Please give us a new rumor, we need our rumor fix!!
Is this a rumour or is it true? If a rumour, is it CR1, CR2 or CR3. If it is true, thanks for this great post.
We should start an online betting competition
Variables to bet on
1) Model name of next camera to be released
2) No. of said camera’s MPs
3) Frame rate
4) previously unavailable features in same category
Winners pocket cash hahaha
it’s always been slow here..
even when there’s news to be had, the other rumor sites are first to publish it..
nothing new..
that’s why CR guy had the chance to open a rental store..
he’s got a LOT of time between rumors..
rotfl..
If Sony hated Red so much they would just buy them. Red is no competition to players like Sony or Canon.
Not 100% shot on the 5D MkII – a small bit of film was used for the slow-motion shot.
Just being 100% correct :)
We all need day jobs. I just find the content on 4/3 rumors for instance to be more compelling. He puts up all kinds of stuff. CR guy limits it to lenses and bodies and when he throws in a non-rumor everyone gets upset because it’s usually when we’ve gone 4-5 days without content. Need moar information.
Sadly not cutting it for me lately. Blame Canon, CR guy, or whomever, but just not much fun to come here lately. Oh well. Maybe next week.
AHAHHAHAHAhahahaa
yea..a new camera!
I’m going to be saving up just for this camera!!!
Yes, finally upgrading my point-and-shoot!
Thanks canonrumors.com!
Also, the NEX-5 has a magnesium body, versus the NEX-3 with its plastic body.
you think this site is boring..
don’t even try to go at LeicaRumors.com..
it’s dreadfully stagnant..
and Leica guy actually has the guts to post ‘Leica news before it happens’ at the top.. when all the ‘rumors’ he posts are of things that actually happened a day before.. not only that, he tries to pass off Leica eBay listings as ‘rumors’.. quite pathetic..
which makes me somewhat happy with what CR guy is able to post here.. at least there are real rumors with varying levels of truthfulness to them..
The NEX-5 is smaller and lighter too.
If you are willing to spend $650 for a point and shoot camera, what is an extra $150 for a better camera?
Next Tuesday, then.
Canon U.S.A. doesn’t show the SX1 IS on the main PowerShot page.
uhm, I don’t think Sony would appreciate it when you compare the NEX camera to a P&S.. I wouldn’t classify it as a P&S..
why would I spend $150 more for a ‘better’ camera?
like I said, spending more on a camera doesn’t make your pictures any better.. :-P
It’s RED, not Red.
In what sense?
Not if you are Sony or Canon it isn’t…….
3D… why bother with a new model. I think Canon should make it a peripheral for all current and newer models, they’ll make a killing without many people selling off their current bodies to get the only one that is capable of shooting 3D. They’d get the rebel, XXD and XD crowd all at once.
I like the image Dean made.
On the note about the duo-viewfinder, I would assume that wouldn’t be done; 3D displays would be used instead, to be more ergonomic. Personally I like using one eye for seeing the rest of the scene, to see what I’m missing.
Quick question: since there is software to improve the image quality of a scene from multiple exposures of it, bumping up the size of the image, would the same apply to a 3D image (i.e., two 18MP sensors with their own lenses combining to make 36MP), and would HDR have more dynamic range?
I would love to play around with separate manual settings for each lens to see the effect on the image and different colored filters/gels on each lens (I wonder what that would do). I’m not too much into gels yet at this time, but would love to try it out.