I'm interested in how long a "supercomputer" remains relevant and cost-effective. I'm sure Canon have done the cost-benefit analysis, but aren't these things usually wheeled out 10-20 years later in memes showing that we now put that much speed into our phones?
I think it's a terrible idea. I test semiconductor equipment for work, mostly against simulated models, and more often than not there are significant differences between the models and the actual parts. Supercomputers can simulate designs very well, but the deviations mostly come from engineers not accounting for certain factors in their models. We already see the issues with the R5, which was tested on real products. That won't get better if you go the no-prototype route.
It depends on the configuration. "Fugaku" is the $US1.2B flagship machine at RIKEN, which has ~160,000 nodes.
Their two-node FX700 rack server costs about $US40K (which you'll eventually be able to buy from Penguin Computing).
From the article it appears that Canon bought a single 650 TFLOPS FX1000 rack, which has 192 nodes.
So if you ballpark $20K per node then you're looking at ~$US3.8 million.
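The ballpark above can be sketched as a quick back-of-the-envelope calculation. The per-node price is an assumption extrapolated from the FX700's pricing, and the services markup is the rough 1/3 figure mentioned below:

```python
# Hedged ballpark of a single FX1000 rack, under two assumptions:
# ~$20K/node (extrapolated from the ~$40K two-node FX700) and
# ~1/3 of the hardware price again for setup & maintenance services.
nodes = 192                  # nodes in one FX1000 rack (from the article)
price_per_node = 20_000      # assumed, extrapolated from FX700 pricing
hardware = nodes * price_per_node
services = hardware / 3      # assumed services markup
print(f"hardware ≈ ${hardware / 1e6:.2f}M")
print(f"with services ≈ ${(hardware + services) / 1e6:.2f}M")
```

Which lands on roughly $3.84M for the hardware alone, consistent with the ~$3.8M figure above.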
Given that Canon is the flagship commercial customer for FX1000, you can bet they didn't pay that much for it.
However they probably paid at least 1/3 again the purchase price for setup & maintenance services. Supercomputer clusters are not exactly turnkey commodity IT devices (although Fujitsu is angling to make the "little" FX700 pretty easy to get up & running).
To put things into perspective, for pure CPU compute (talking apples to apples, not GPUs here): a cluster of traditional 1U dual-EPYC 7742 amd64 servers crams about 280 TFLOPS into a similar rack, so Fujitsu's A64FX has roughly 2.3x the compute density of the best amd64 CPUs you can get your hands on at the moment.
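The density comparison is just the ratio of the two per-rack figures quoted above (the 280 TFLOPS EPYC number is my own estimate, not a vendor spec):

```python
# Per-rack CPU compute density, A64FX vs dual-EPYC 7742 1U servers.
fx1000_rack_tflops = 650   # single FX1000 rack, from the article
epyc_rack_tflops = 280     # estimated rack of 1U dual-EPYC 7742 servers
ratio = fx1000_rack_tflops / epyc_rack_tflops
print(f"A64FX rack ≈ {ratio:.1f}x the compute density")  # ≈ 2.3x
```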