This is an S35-sensor cine camera. This is NOT a photo camera.
This is a pro body. It'll have 16 stops of dynamic range, whereas the R5 has 14 stops (about 12 usable with C-Log).
As for competition, this is Komodo competition more than anything else, and it has built-in ND filters, which the Komodo doesn't have.
The DGO S35 sensor has amazing IQ.
This will use Canon's pro codec, 10-bit XF-AVC Intra. That codec is NOT on the R5.
The big difference between the R5 and the A7S III is that Sony put the FX9's cine codec, H.264 XAVC Intra, into the A7S III.
Canon only used an H.265 codec. Sony uses H.265 too, but also offers a pro codec for easy editing.
This will have pro XLR inputs, internal NDs, and a pro cine body. No overheating, no record limits, and AF that works: Dual Pixel AF. RED has no AF yet for RF glass, let alone lens communication.
It takes years and millions to develop an AF system, unless you steal it by hacking or reverse-engineering the software. PDAF takes millions in development. Canon recently got hacked; hopefully no sensitive AF info was stolen.
----
"......It takes years and millions to develop an AF system, unless you steal it by hacking or reverse-engineering the software. PDAF takes millions in development. Canon recently got hacked; hopefully no sensitive AF info was stolen. ......"
---
NO! It does NOT take that much time to create an autofocus system! It took me barely THREE weeks to create a 2D-XY AND 3D-XYZ SOBEL edge detector that runs a pixel-to-line/curve vector conversion process and THEN compares those lines and curves against a database of over 200,000 common objects, persons, animals, vessels, vehicles, aerospace systems, weapons systems, buildings, structures and terrain.
To autofocus using edge detection only, you measure the amount of natural softness (i.e. aliasing) on the pixels neighbouring any detected line or curve, and THEN convert that to a percentage of the current sensor's width and height, both in pixels/photosites AND in millimetres. That lets me account for the likely refraction and diffraction of specific light rays entering the sensor, based upon the KNOWN characteristics of a single lens element and of all the lens elements in series. From there I can calculate 3D-XYZ light vectors (i.e. ray-tracing), which THEN lets me calculate the distance, orientation and motion of any object.
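The core of that idea, stripped of the ray-tracing, is just contrast-style autofocus: score each candidate lens position by how strong the frame's local gradients are, and pick the sharpest. Here's a minimal toy sketch of that (my own illustration, not the poster's code or Canon's algorithm); `blur` is a crude stand-in for defocus at a given lens error:

```python
import numpy as np

def sharpness(img):
    """Focus metric: mean squared finite-difference gradient.
    In-focus frames have stronger local gradients than defocused ones."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return float((gx**2).mean() + (gy**2).mean())

def blur(img, radius):
    """Crude horizontal box blur standing in for defocus of a given size."""
    out = img.copy()
    for _ in range(radius):
        out = (out + np.roll(out, 1, axis=1) + np.roll(out, -1, axis=1)) / 3
    return out

def autofocus(scene, positions, true_focus):
    """Sweep lens positions; return the one whose frame scores sharpest."""
    scores = {p: sharpness(blur(scene, abs(p - true_focus))) for p in positions}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
scene = rng.random((32, 32))                     # stand-in for a detailed scene
best = autofocus(scene, positions=range(10), true_focus=4)
print(best)  # the sweep lands on the true focus position, 4
```

A real camera hill-climbs instead of sweeping every position, but the metric-and-search structure is the same.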
I send the differences in position, orientation and motion from previous frames as a signal to piezoelectric motors (or in our case, MASSIVE hydraulic systems!) to move a lens assembly in mere milliseconds!
In our parent company's case, I'm imaging very fast-moving objects in real time using a multi-spectral system of four optical-band and IR/UV cameras and four RADAR/LIDAR imagers, each running at 4096 × 2160 pixels (64-bit RGBA) at 10,000 frames per second, so my latency is only three frames to work out the current and estimated position, orientation and motion of ANY object. (We can detect, recognize, track and target 65,000+ different moving objects PER SECOND!)
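Estimating motion from a couple of frames of detections is the simple part of such a tracker. A bare-bones sketch (my own hypothetical illustration, not the poster's system): take the centroid of a detection mask in two consecutive frames, treat the difference as a per-frame velocity, and extrapolate:

```python
import numpy as np

def centroid(mask):
    """Centroid (y, x) of the 'on' pixels of a boolean detection mask."""
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

def predict(p_prev, p_curr, frames_ahead=1):
    """Constant-velocity prediction: extrapolate the frame-to-frame motion."""
    velocity = p_curr - p_prev            # pixels per frame
    return p_curr + frames_ahead * velocity

# An object moving 2 pixels/frame, seen in two consecutive frames.
f0 = np.zeros((16, 16), dtype=bool); f0[4, 4] = True
f1 = np.zeros((16, 16), dtype=bool); f1[4, 6] = True
p = predict(centroid(f0), centroid(f1), frames_ahead=1)
print(p)  # → [4. 8.]
```

A production tracker would use a Kalman filter and data association across many targets, but the latency argument (you need at least two frames to estimate velocity) is visible even in this toy.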
I then send motor-control signals to a large combined electro-hydraulic system that can move a multi-tonne railgun in under twenty milliseconds to any 3D-XYZ position I desire, for accurate targeting and fire control against any incoming projectile with a velocity of up to 160,000 km/h (100,000 mph). (i.e. for MIRVed ICBM and hypersonic cruise-missile defence!)
Canon does something similar, but at much slower frame rates and using piezoelectric motors to move a much smaller lens assembly. Autofocus is NO BIG DEAL these days because of 2D-XY SOBEL edge detection. Like I said, it took me barely three weeks to code it and test it!
Autofocus is TRIVIAL these days when you use a proper edge detector on a modern Qualcomm 845-series ARM (or better) or any AMD RYZEN-3/5/7/9 CPU. These CPUs are so fast that I could easily do 120 fps edge detection AND face/eye recognition for most cameras in real time, with only one to three frames of latency depending upon the processor.
The lower-horsepower DIGIC 8 and above ARM-based CPUs in most modern Canon cameras can EASILY do 30 fps up to 60 fps in DCI 4K in real time with the right type of microcoding (i.e. pure hand-coded assembler). Doing full autofocus at 120 fps is a tad hard using just edge detection, BUT it could be refined over time to eventually get there.
SO again, NO! It does NOT take YEARS and millions of dollars to create an autofocus system! ANY competent graphics programmer can do it in mere weeks, or maybe three months if they're inexperienced. The ONLY issue is getting the refraction/diffraction and lens-position data for ALL the lens elements in a typical lens assembly, which gets put into a lookup table to let the programmer determine the 2D-XY and 3D-XYZ angles and distances of incoming light rays. SOMEONE has to figure all that out, and it ain't the programmer but rather the optical engineer, who will be making the spreadsheets of numbers that get converted into a lookup-table array.
With Canon, the sensor uses multiple photosites to detect minute changes in luminance and chroma between neighbouring photosites, which lets the DPAF system INFER changes in the orientation, angle and velocity of an object. A faster, simpler edge detector (i.e. PROBABLY the Canny edge detector) is used to find faces and eyes on any ARM-based Canon DIGIC-6 and above processor.
Canny Edge Detector:
en.wikipedia.org
SOBEL operators: (i.e. combined for 2D-XY and 3D-XYZ edge detection)
en.wikipedia.org
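For reference, the 2D Sobel operator the links above describe is just a pair of 3×3 convolution kernels whose responses combine into a gradient magnitude. A self-contained numpy sketch (my illustration of the standard operator, not anyone's production code):

```python
import numpy as np

# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(img, kernel):
    """Naive 'valid' 2D convolution (no padding) -- enough for a demo."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * flipped)
    return out

def sobel_edges(img):
    """Gradient-magnitude map of a grayscale image."""
    gx = convolve2d(img, SOBEL_X)
    gy = convolve2d(img, SOBEL_Y)
    return np.hypot(gx, gy)

# A vertical step edge: left half dark, right half bright.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)
print(edges.max())  # → 4.0 (peak response on the step, for these kernels)
```

In practice you'd use a library routine (e.g. `scipy.ndimage.sobel` or OpenCV) rather than the naive loop, but the operator itself really is this small, which is the poster's point.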
Since I have access to much higher-end processors than even Intel, AMD and IBM (i.e. we make our own 575-TeraFLOP GaAs combined CPU/GPU/DSP processors), I can run a FULL 2D-XY and 3D-XYZ SOBEL edge-detector system on any incoming imagery AND run it at up to 10,000 fps for the ULTIMATE in accuracy!
That said, ANY modern AMD Ryzen 3/5/7/9, Intel Core i5/i7, or Qualcomm 845-series (or better) CPU can run TRULY ACCURATE face and eye detection at 60 fps to 120 fps on DCI 4K video using just SOBEL edge detectors.