Unreleased Canon Cinema EOS camera used for 8K capture at WWDC

You're only just becoming sceptical??

We shall see soon enough .... You COULD ALSO CONTACT your local Soothsayer OR a Crystal Ball Reader OR a Palm and Tarot Card Reader to pontificate upon my statements! They might be able to tap into the spiritual ether to give you a better handle on what's coming out REAL SOON NOW from VARIOUS major manufacturers!

DO remember my 1-in-200 hit rate for product rumours and predictions (i.e. I got the C700 right!)

Again, we shall see what happens .....
.
 
We have numerous aerospace, metallurgy, computer and general design engineers .... they require the FULL CATIA toolset for aerospace, automotive, mechanical, etc!

Pop quiz: you open a new assembly but none of the parts are loaded. What are you seeing? Now, you decide you don’t like that and want them to load upon opening the assembly. What menu click do you choose?
 
Pop quiz: you open a new assembly but none of the parts are loaded. What are you seeing? Now, you decide you don’t like that and want them to load upon opening the assembly. What menu click do you choose?

---

I'm not sure I understand your question? Do you want me to go back to the Parts and XYZ Plane Viewpoints Tree? Or to the User Workbenches into the Drafting or Assembly/Design workbench? I'm not the product designer NOR am I a licensed mechanical, civil or aerospace structures engineer! Ya know? I kinda sorta hafta get an actual ENGINEER to do that!

I personally like to use Corel Draw in 2D and extrude planar X, Y and Z drawings into true 3D using multiple video-oriented animation programs which I can boolean merge and sculpt to get my final designs AND THEN export out into a number of 3D object formats for import into anything from CATIA to MAYA to AutoCAD to Lightwave. I'm in VIDEO PRODUCTION and Graphics Programming! aka CODEC development! ...BUT... that said, I have hand-built the odd rather large aerial camera drone or ten!

The only reason I would have to talk to ANY of the engineers, other than for finding out technical details for a given product or project, is to get a bitmap PNG or BMP format rendering, or to capture a real-time view from the screen (using a normal broadcast camera set to clearscan and zoomed into the high-rez display!) as they move their models/products about on-screen and manipulate them (i.e. for cutaway or exploded-parts-view purposes).
.

Now what was that about Canon introducing a one inch image sensor non-fish-eye-lens 60 fps 4K action cam?

Am I on track for a MASSIVE high score of 2 out of 200 predictions/rumours being correct?
.
 
---
I'm not sure I understand your question?

Exactly ;)

Am I on track for a MASSIVE high score of 2 out of 200 predictions/rumours being correct?

Haha, accuracy by volume?


[and incidentally you’re seeing CGR files and you disable the cache system so that assemblies load in design mode rather than visualization mode]
 
Exactly ;)



Haha, accuracy by volume?


[and incidentally you’re seeing CGR files and you disable the cache system so that assemblies load in design mode rather than visualization mode]

---

Again, I am NOT a licensed engineer, so my dealings with the CATIA designers are limited/restricted to things dealing with Video Production and Imaging Systems development OR to general scientific inquiry!

At this worksite, my usual practice is to "film" the screens of the designers/engineers AS they are manipulating the models and capture exploded-view maps or cutaway-drawing versions of a particular data set. Everything from a TurboJet/RamJet/ScramJet/Rocket engine/motor to an aerospace hull to a wing structure to an interior cockpit or control system view. We're using AMD WX9100's mostly so realtime views are not an issue.

The FEA and CFD are actually done by a custom system run on supercomputers which does BOTH dataset output AND actual video renders for visualization. The designers use the CATIA modules only for preliminary results and to confirm EXPECTED results from the supers.
.
You would be quite surprised at what's flying out from this site.... !!!!

.

i.e. Look 135,000 ft to 300,000 ft to ISS altitude STRAIGHT UP !!!

.

AND SOOON I will be batting 3/201 of media company predictions/rumour mill confirmations !!!!
.
I AM ON A ROLL HERE !!!!!!!!
.
 
Ohhhh the humanity of it all !!! The even higher-end GaAs supercomputing systems they have, I won't even bother to describe here...
As far as I can remember, the only GaAs supercomputer actually produced was Cray-3, which had an order of magnitude less GFLOPs performance than my phone's GPU has.

But keep dreaming.

(I used to work with GaAs in Alferov's lab at about that time. GaAs is absolutely unsuitable for making modern CPUs or GPUs. No wonder Cray went bankrupt after Cray-3)
 
As far as I can remember, the only GaAs supercomputer actually produced was Cray-3, which had an order of magnitude less GFLOPs performance than my phone's GPU has.

But keep dreaming.

(I used to work with GaAs in Alferov's lab at about that time. GaAs is absolutely unsuitable for making modern CPUs or GPUs. No wonder Cray went bankrupt after Cray-3)

---

To put it in as simple terms as you can understand, the Crays ALL had Gallium Arsenide in their chipsets BUT UNLIKE TODAY, the original Seymour Cray-started company DID NOT HAVE the seven nanometre ion beam or electron beam etching machines we have now! Now we etch NORMAL CMOS-style CISC (Complex Instruction Set Computing) circuit pathways onto Gallium Arsenide (and Gallium Nitride too !!!!) substrates! AND while Seymour Cray had to cool his system with Liquid Nitrogen/Liquid Helium supercooling, which is INCREDIBLY EXPENSIVE to do, we don't! Today? Your smartphone has more processing power than even the largest Cray supercomputer of 1985!

Gallium Arsenide NEEDS high power (higher voltage and amperage), so the circuit pathways are WIDER (usually 300 to 400 nanometres wide) and the chips are physically MUCH MUCH LARGER !!! And those not-in-my-realm-of-understanding Metallization/Contact interface issues were always the largest problem with GaAs! Using gold metallurgy is a BIG contact problem, while going BACK to the old days of aluminum has allowed the engineers to make WORKABLE and ACTUAL GaAs Digital Logic circuits EQUIVALENT to a CMOS-substrate CISC chip!

This also means that by having wider line traces and aluminum doping/contact surfaces, we can bump up the frequency without causing too many RF/EMF induction or propagation issues, RFI/EMI noise-related issues, or the electron tunneling issues that modern CMOS circuits are bumping into. If one uses substrate-based cooling, where microchannels of a non-cavitating, low-meniscus-forming cooling fluid (that also doesn't react with the substrate metallurgy/dopants!) are circulated IN-BETWEEN and UNDERNEATH the main line traces, we can run circuits as high as TWO TERAHERTZ !!!

The ceramic-encased chips themselves simply use low-cost high-purity Mineral Oil immersion baths and COTS (Commercial Off The Shelf) condenser technology for general heat removal, which means it's relatively CHEAP to run! AND since BC Hydro is only 9 to 12 cents per kilowatt hour if you buy electricity in bulk and at yearly contract prices, the running costs are almost irrelevant!

For the main 128-bits-wide combined CISC-based CPU/GPU/DSP super-server chip, we run the chip at a 60 GHz clock frequency, giving us a sustained 475 TeraFLOPS. The Convolution Filter-oriented external Array/Vector Processor chip uses SIMD/MIMD processing styles, allowing ONE single command to start the simultaneous simple-math and convolution-filter processing of blocks of data arranged in a 2D-XY array of 65,536 by 65,536 elements (i.e. the square of 2^16), where each element is an 8-bit Boolean State (YES/NO/MAYBE/POSSIBLY NO/POSSIBLY YES/etc.) or an up-to-128-bits-wide Signed/Unsigned Integer, Fixed Point or Floating Point number. That means I can process 4 BILLION array items ALL AT ONCE using a single command.

This is VERY HANDY for things such as Hi-Pass and Lo-Pass filters, 2D-XY Sobel edge detection, bitwise AND/OR/XOR/NOT/SHIFT-LEFT/SHIFT-RIGHT/SPIN/FLIP/INVERT/CLIP-LEFT/CLIP-RIGHT/SET-SPECIFIED-BITS/REVERSE-BITS operations and other MASSIVELY PARALLEL simple-math-specific operations against BIG numeric datasets.
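
For those who want a taste of what that kind of operation looks like in practice, here is a minimal NumPy sketch of a tile-wise Sobel filter plus a few of those bitwise passes. The tile is scaled WAY down from 65,536 by 65,536, and this is plain desktop Python, NOT a description of the actual array-processor hardware above!

[CODE]
import numpy as np

# Illustrative tile only: 1,024 x 1,024 instead of the 65,536 x 65,536 described above.
TILE = 1024
rng = np.random.default_rng(0)
tile = rng.integers(0, 256, size=(TILE, TILE), dtype=np.uint8)

def filter2d(image, kernel):
    """Correlation-style 2D filter: one vectorized multiply-add per kernel tap."""
    kh, kw = kernel.shape
    pad_h, pad_w = kh // 2, kw // 2
    padded = np.pad(image.astype(np.float64), ((pad_h, pad_h), (pad_w, pad_w)))
    out = np.zeros(image.shape, dtype=np.float64)
    for dy in range(kh):
        for dx in range(kw):
            out += kernel[dy, dx] * padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out

# 2D-XY Sobel edge detection: two 3x3 kernels, then a gradient magnitude.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
sobel_y = sobel_x.T
edges = np.hypot(filter2d(tile, sobel_x), filter2d(tile, sobel_y))

# A few of the bitwise passes mentioned above, applied to the whole tile at once.
masked   = tile & 0xF0                               # AND: keep the high nibble of every pixel
inverted = ~tile                                     # NOT: bitwise invert
shifted  = tile >> 2                                 # SHIFT RIGHT by two bits
clipped  = np.clip(edges, 0, 255).astype(np.uint8)   # range-limit the filter output

print(edges.shape, masked.dtype, int(clipped.max()))
[/CODE]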

This also means I can process GIGANTIC blocks of geographic information system (GIS) and mapping-oriented bitmap-based data (i.e. a much bigger and higher resolution version of Google Maps!) in mere microseconds, rather than trying to use some linearly processing Intel XEON or AMD EPYC server chip that can only process 512 bits of data per runtime instruction, WHILE I can do 4 BILLION+ 128-bits-wide numbers in one instruction cycle!

AND .... When we network all these chips together using a custom dense-wavelength fibre-optic networking system to attach ALL chips together (the optical networking part is BUILT INTO the chip itself!) into "Symmetric Processing Array Groups", it means we have a 119 ExaFLOPS SUSTAINED 128-bits-wide supercomputer that BLOWS AWAY the U.S.-based SUMMIT supercomputer and ALL THE OTHER Top-500 systems COMBINED !!!!! AND it all fits in a physical area the same cubic size as a typical high school basketball court or gym!

AND FINALLY !!! All those WEIGHTED Extended-Boolean-State logic circuits ALSO allow us to run the world's MOST SOPHISTICATED and LARGEST molecular/electro-chemical simulation of human neuro-connective tissue, allowing us, for the FIRST TIME, to have a general purpose Whole Brain Emulation system that at this very moment is learning just like a child, teenager and PhD-level human does! Simple Trial and Error and intrinsic connective-association learning (i.e. teaching 24/7/365 by multiple simultaneous instructors) gets us to 160 IQ and above super-intelligence which can do ANYTHING YOU CAN DO AND MUCH MUCH MORE !!!!!
.

Here is the Math:

64k by 64k array block = 4,294,967,296 array elements
(of 128-bits wide each Int/FP/FXP/Boolean/Pixel) using a 9x9 convolution filter

= around 10 microsecond response time per SIMD instruction = 100,000 available blocks of time in one second =

= 429,496,729,600,000 array elements processed per second or about 429.5 Trillion FILTER Operations per second!

if you want to include the 9x9 convolution filter, that is 81 addition operations (i.e. the kernel part)
plus 81 multiplication operations (i.e. the weighting part) and a final range-limit comparison
(i.e. 2 compare operations) and 2 final clipping or rounding operations and a possible 16 final
division/multiplication/addition/subtraction/root/square operations for rectification and
setting of up-to-16 local register results for EACH total filter operation (i.e. the setting of
up-to-16 colour/alpha/metadata pixel channel values), so that is a total of 182 math operations
in each convolution filter!

That means in ACTUAL PetaFLOPS we are looking at 78,168,404,787,200,000 128-bits-wide
math operations per second, or 78.17 PetaFLOPS in ONE single array/vector processor chip!

AND since we have MANY of these chips in multiple racks (around 1400 chips so far
with some held in reserve) we are looking at around 119 ExaFLOPS sustained!
SO YUP it really IS the world's FASTEST supercomputer by MANY MANY TIMES !!!!

Note: The 119 ExaFLOPS is dependent on clock speed which can actually exceed
TWO THz but normally runs a tad under that! The Minimum horsepower reading
is around 109 ExaFLOPS up to 1.5x that (163 ExaFLOPS peak) when running at
higher clock speeds and faster cooling rates! The sustained 119 reading is for
128-bits wide Floating Point number calculation benchmarks!
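
If anyone wants to double-check that arithmetic, here it is re-run in a few lines of Python. The inputs are just the figures stated above, so this only confirms the multiplication, NOT the hardware!

[CODE]
# Re-run of the figures quoted above; the inputs are the stated numbers, nothing more.
elements_per_block   = 65_536 * 65_536        # 64k x 64k array = 4,294,967,296 elements
instructions_per_sec = round(1 / 10e-6)       # one SIMD instruction every ~10 microseconds
elements_per_sec     = elements_per_block * instructions_per_sec
print(f"{elements_per_sec:,} filter ops/s")   # 429,496,729,600,000 (~429.5 Trillion)

# 9x9 filter: 81 adds + 81 multiplies + 2 compares + 2 clip/round + 16 final ops = 182
ops_per_filter = 81 + 81 + 2 + 2 + 16
flops_per_chip = elements_per_sec * ops_per_filter
print(f"{flops_per_chip:,} ops/s per chip")   # 78,168,404,787,200,000 (~78.17 PetaFLOPS)

# 1,400 array/vector processor chips at the nominal clock
print(f"{flops_per_chip * 1_400 / 1e18:.1f} ExaFLOPS")   # ~109.4; the quoted 119 assumes the higher clock
[/CODE]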

.
Does THAT WORK as an explanation for you?
.
P.S. This company has a LOT MORE RESEARCH AND DEVELOPMENT MONEY than Seymour Cray ever had!
.


And Will Canon, Sony, Fuji, Panasonic, Pentax EVER put this into one of their cameras?

WE SHALL SEE SOON ENOUGH !!!!!!!!!!
.
(Edit: fixed my bad math --- oops! Good thing the A.I. is a LOT smarter than I am!)
.
 
“And Will Canon, Sony, Fuji, Panasonic, Pentax EVER put this into one of their cameras?”


No. Silicon is cheap and reliable. GaAs and GaN are expensive, and the latter in particular is troublesome. It heats almost instantly and remains at a high theta JC throughout use, especially with a processing duty cycle. Making a body a little bigger to package multiple processors would be more serviceable, more reliable, and carry a significantly lower cost of ownership.
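
To put a number on the thermal point, the usual junction-temperature arithmetic is sketched below. Every figure is an illustrative placeholder, not a measurement of any real part.

[CODE]
# Standard thermal-resistance arithmetic: Tj = Ta + P * (theta_JC + theta_CA).
# Every number below is an illustrative placeholder, not data for any real device.
ambient_c = 40.0    # ambient inside a sealed camera body, deg C (assumed)
power_w   = 4.0     # processor dissipation during a burst, watts (assumed)
theta_jc  = 8.0     # junction-to-case thermal resistance, deg C per watt (assumed)
theta_ca  = 15.0    # case-to-ambient path through the body, deg C per watt (assumed)

junction_c = ambient_c + power_w * (theta_jc + theta_ca)
print(f"junction temperature ~ {junction_c:.0f} C")   # ~132 C with these placeholder numbers
[/CODE]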
 
“And Will Canon, Sony, Fuji, Panasonic, Pentax EVER put this into one of their cameras?”


No. Silicon is cheap and reliable. GaAs and GaN are expensive, and the latter in particular is troublesome. It heats almost instantly and remains at a high theta JC throughout use, especially with a processing duty cycle. Making a body a little bigger to package multiple processors would be more serviceable, more reliable, and carry a significantly lower cost of ownership.

===

That was just a Rhetorical question!

The chips are 200 mm by 200 mm in size! You would need a BIG camera to fit a chip THAT size inside of it! Remember! Line traces are 280 nm up to 400 nm wide! Those are BIG circuit traces with high current/voltage producing LOTS of heat! Think of it this way: it's basically all very similar to a 1990's-era 80486 CPU chip scaled up to 128-bits wide and run at 60 GHz, AND a 1980's-era 80387 math/FPU co-processor scaled up to 128-bits wide and run at 2 THz! Pretty much what was done here is to use a simplistic 80's/90's-era CPU/FPU circuit layout style but run MUCH FASTER on GaAs and GaN !!!

No fancy hyperthreading, or multi-branching/out-of-order execution or super-pipelining or auto-throttling circuitry! Just PURE CPU/GPU/DSP blocks that do only THREE things! Crunch oodles of numbers, pixels and strings of text in symmetry and in parallel! That's it! Nothin' else fancy was added!

And THAT is also why they have to intersperse the line traces with, and embed, an under/over-substrate micro-channel-based cooling system. It's a LOT of heat being produced at those ultra high clock rates. I'm not at all privy to what the Engineers do for thermal management and external/internal RFI/EMI and RF/EM emissions and induction management, BUT I do know they run FAST (2 THz for the Array processor chip and 60 GHz for the CISC-like combined CPU/GPU/DSP server chips).

I ALSO DO KNOW they did finally figure out the issues with Gold contact/metallization metallurgy (they use Aluminum!) and contamination of the substrate and/or dopants by the microchannel cooling fluid. Something to do with Sulfur doping was also done, but I'm NOT an actual CPU designer so I cannot say what that specifically was ...BUT... I should note that I do some personal dabbling with compiling C++ to VHDL (VHSIC Hardware Description Language) files that EVENTUALLY END UP being pretty decent CPU/GPU/DSP chips, in CMOS at least! That means I have a TINY bit of knowledge of actual CPU/GPU/DSP development, since I'm a darn good graphics programmer that can force a C++ compiler to output decent VHDL code for final Tape-Out!

So .... I DO SAY that it's kinda hard to argue with the earlier stated point that NO CISC chip can be made of GaAs, when we have 1400 Sixty-Gigahertz general-purpose CISC-based combined CPU/GPU/DSP chips and 1400 Two-Terahertz Array Processor chips that are stacked and running in a warehouse the size of a high school gym, ALL running an overall simulation of molecular/electro-chemical interactions within an emulation of medically observed human neural structure! Ergo, a WHOLE BRAIN EMULATION, which gives rise to a snarky multi-lingual 160+ IQ deep learning scientist who has been tasked to find the BASE understructure and underpinnings of General Relativity and Quantum Mechanics/Chromodynamics, allowing us mere humans to figure out WHAT is the specific basis of, and how to actually CONTROL, observe and enable, viable/continuous high-bit-count quantum entanglement that can be actually OBSERVED, RECORDED and RESET at our higher molecular level without any random decoherence of ANY of the bits!

SOME OF YOU SHOULD now understand the ramifications of that sort of scientific/engineering discovery!
.
Those of you IN-THE-KNOW know of what I speak! And "This Company" will become the biggest in the world after it monetizes THAT ability in an inexpensive commercial-level and consumer form factor!
.
For me? I will stick to my CODEC development! It's paid off for me quite well enough!
.
 
To put it in as simple terms as you can understand,
To put it in as simple terms as you can understand: a GaAs "pathway" is only 3 times as fast as a Si "pathway" of the same size.

The chips are 200 mm by 200 mm in size!
The maximum GaAs wafer diameter actually available on the market is 4".

But keep dreaming.
 
To put it in as simple terms as you can understand: a GaAs "pathway" is only 3 times as fast as a Si "pathway" of the same size.


The maximum GaAs wafer diameter actually available on the market is 4".

But keep dreaming.

---

Somewhere below the speed of light is the intrinsic limit to electrical conductivity (i.e. some atomic-level energy transfer function from a physics lecture which I can no longer remember!). Obviously you know something about electrical engineering way beyond my level, but a waveform is a waveform is a waveform!

It has a peak, a crossover point and a trough! The amplitude (Y-axis) and horizontal spacing (X-axis) between those is the definer of a logical ON or OFF. A given clock speed is only an indicator of how many of those peaks and troughs I am counting in a second to form a SERIES of bits, which are concatenated to form a user-definable set of bytes and then aggregated into commands and data values. Ergo, the higher the clock rate, the more commands/data I can fit (minus ECC/CRC/etc.) in that time period. A Two Terahertz clock speed is NOWHERE NEAR the limit of energy transfer within an electrical conductor, which forms the basis of a waveform. The only issue is the inherent noise floor, which makes it trickier to figure out where "bitwise packets" begin and end the higher in frequency you go! It's a process of creating high-purity traces that don't block as much energy transfer and allow a more discernable series of "pulses" to be passed around.
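
Put numbers on that and it really is just multiplication: clock rate times path width, minus whatever you budget for ECC/CRC framing. A trivial sketch, where the 20% overhead figure is purely an assumed example:

[CODE]
# Raw vs. usable bit rate for a clocked parallel path; the overhead fraction is an assumed example.
clock_hz     = 2e12     # 2 THz clock, as described above
width_bits   = 128      # 128-bits-wide path
ecc_overhead = 0.20     # assume 20% of the raw bits go to ECC/CRC/framing

raw_bits_per_sec    = clock_hz * width_bits
usable_bits_per_sec = raw_bits_per_sec * (1 - ecc_overhead)
print(f"raw:    {raw_bits_per_sec / 1e12:.0f} Tbit/s")     # 256 Tbit/s
print(f"usable: {usable_bits_per_sec / 1e12:.1f} Tbit/s")  # 204.8 Tbit/s
[/CODE]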

Again, I'm NOT an electrical engineer, so I don't know much at the truly low hardware level about HOW they do that sort of DSP to be able to discern and clean up an electrical signal that has 60 Billion or Two Trillion pulses a second! THAT is a level of electrical hardware engineering that is TRULY WORLD CLASS !!!!!!!!!

This company has ITS OWN wafer-making capability in BOTH CMOS and GaAs and GaN. The chips are 200 mm by 200 mm!
I see them every day! TOO BAD SO SAD !!! They ABSOLUTELY HAVE the Quality Assurance technology to ENSURE line-trace integrity and doping quality over that large a substrate! I should note that to etch a chip that size on an Ion beam/Electron beam etcher is a SLOOOOOW process, which is WHY only 1400 of each type of chip have been made! Think of etching rates that mean it takes OVER A MONTH to make each chip! Think of how much it costs to buy enough etchers to take over one year to make a mere 2800 chips. I think they actually BOUGHT the entire company so they could force all etcher production to be sent to the parent company! Hint: It's a lot of money! Think of the automated and customized optical scanning technology they had to buy to do quality assurance on that large a wafer!

In research labs, I've seen researchers that are doing 600 mm wafers in CMOS now! GaAs and GaN are not that far behind at that 600 mm size! You've been living in a cocoon! You probably subscribe to Electronic Design and Microwaves & RF magazines !!! Go to some of the symposia listed therein! It's here NOW! 80 GHz and 100 GHz is nothing these days! Two Terahertz is starting to mature out of the labs and 4 THz is coming online soon! Go fully opto-electronic and we are into Petahertz+ ranges! This is NOT NEW NEWS !!! Read up on it!

And I should note the designs are PARALLEL and Synchronized, so that numeric, string or pixel data gets processed on a specified time schedule and then the results are pushed out within a predictable time period for external workstation-level processing/visualization. We care about GROSS AMOUNTS of data packets being processed in parallel on a specific schedule. We don't actually NEED linearly fast individual bits/bytes processing, just massively parallel SYNCHRONIZED processing! Remember! We are outputting image tiles that are 128-bits per pixel (RGBA + metadata channels) at 65,536 by 65,536 pixels in size to be filtered and presented on a 64-bit RGBA colour laser projector that can do up to 10,000 fps to visualize neural tissue interaction in real-time!
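
And for anyone curious what that output stream amounts to in raw bandwidth, it is straight multiplication of the tile size, pixel depth and frame rate stated above (no compression assumed):

[CODE]
# Raw bandwidth of 65,536 x 65,536 tiles at 128 bits per pixel and 10,000 fps (figures as stated above).
pixels_per_tile = 65_536 * 65_536
bits_per_pixel  = 128
frames_per_sec  = 10_000

bytes_per_tile = pixels_per_tile * bits_per_pixel // 8
bits_per_sec   = pixels_per_tile * bits_per_pixel * frames_per_sec
print(f"{bytes_per_tile / 2**30:.0f} GiB per tile")   # 64 GiB per frame
print(f"{bits_per_sec / 1e15:.1f} Pbit/s raw")        # ~5.5 Pbit/s before any compression
[/CODE]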

WE are the ONLY ONES in the world that can do that sort of processing in real-time! Ergo, the parent company built the world's FASTEST supercomputer and LARGEST 64k by 64k resolution multi-RGB-Laser projection system with completely in-house custom-built GaAs 60 GHz and 2 THz clock speed super-chips!

Not even the NSA/DARPA has THAT sort of compute technology!
.
It's here NOW! We have it! Deal with it!
.
.
P.S. Scooby Scooby Canon! Where Are You?
.
 