Butterflies, Moths and Assorted Insects...
- By Nemorino
I took those pictures Wednesday in western Germany. Yes, very early!
> You're engaging in the exact conflation that billions of dollars of marketing for generative AI have intended you to make.
> Generative AI and classical ML approaches with actual design intent are not analogous or appropriately lumped together.
> Classical deep learning applications like noise reduction, subject recognition, etc. do work, and they are used to defend "AI" generally, meaning generative AI, which is slop that has extremely little value outside of generating spam.
> It is a classic motte and bailey. "AI" is a completely worthless label that doesn't reflect any of the technologies allegedly included under it.

Sorry if I didn't use the generative vs. ML labels, but I did point out that I have a pretty cheap machine in my office that will do "generative AI", the consequence being that there is not likely a monetization option for data centers doing that kind of "work". Data centers built to do generative AI are really just an extension of the whole "cloud" mindset, which is all about control (marketed as convenience and safety). The number of people compromised when a "cloud" facility gets hacked keeps getting bigger. At some point, customers will figure out that safety is not really one of the features of the cloud. This is particularly true of generative AI, which constantly integrates your every interaction with it into its database. At some point, your entire persona becomes public. Not a desirable outcome for most folks.
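The "classical deep learning applications do work" point in the quoted post is easy to show concretely: a task-specific model is designed and trained for one narrow job, such as recognizing the subject in a frame. Below is a minimal, purely illustrative sketch using a pretrained torchvision classifier; the model choice and image path are assumptions, not anyone's actual workflow.

```python
# Hedged illustration: off-the-shelf subject recognition with a pretrained
# image classifier, as an example of task-specific ("classical") deep learning.
# Model choice and image path are placeholders for illustration only.
import torch
from torchvision.models import resnet50, ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT            # pretrained ImageNet weights
model = resnet50(weights=weights).eval()      # discriminative model: image -> class scores
preprocess = weights.transforms()             # matching resize/crop/normalize pipeline

img = Image.open("kestrel.jpg").convert("RGB")  # placeholder image path
batch = preprocess(img).unsqueeze(0)            # shape: (1, 3, H, W)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top = probs[0].argmax().item()
print(weights.meta["categories"][top], f"{probs[0, top].item():.2f}")
```

Nothing here generates content; the network only assigns scores to a fixed set of labels, which is exactly the design-intent distinction the quoted post is drawing.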
> First hummingbird moth of the year!
> R5m2 + RF 100L @ 1/3200s, f/13, ISO 4000/2500
> [attachments 228461 and 228462]

That's very early. I saw them first at the end of July last year. Are you home in Germany or travelling somewhere warm?
> You're engaging in the exact conflation that billions of dollars of marketing for generative AI have intended you to make.
> Generative AI and classical ML approaches with actual design intent are not analogous or appropriately lumped together.

Indeed. I use ML routinely for scientific image analysis, virtual chemical library screening, etc. Very much value added. But even the companies that provide the software/services have taken to calling it AI. Customers expect it, venture backers ask if companies are using ‘AI-based drug discovery’, and the misnomer perpetuates itself.
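As a concrete (and purely illustrative) picture of the classical, task-specific ML mentioned above, here is a minimal virtual-screening sketch: featurize molecules as fingerprints, fit a simple classifier on known actives and inactives, and rank a small library by predicted activity. The SMILES strings and labels are toy placeholders, not real assay data, and it assumes RDKit and scikit-learn are installed.

```python
# Illustrative sketch of classical ML for virtual chemical library screening.
# All molecules and labels below are toy placeholders, not real assay data.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def fingerprint(smiles, n_bits=2048):
    """Morgan (ECFP4-like) bit-vector fingerprint as a numpy array."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Tiny toy training set: "active" (1) vs "inactive" (0) molecules.
train = [("CCO", 0), ("c1ccccc1O", 1), ("CC(=O)Oc1ccccc1C(=O)O", 1), ("CCCCCC", 0)]
X = np.array([fingerprint(smi) for smi, _ in train])
y = np.array([label for _, label in train])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# "Screen" a small library: rank candidates by predicted probability of activity.
library = ["c1ccccc1C(=O)O", "CCN", "Oc1ccc(O)cc1"]
scores = clf.predict_proba(np.array([fingerprint(smi) for smi in library]))[:, 1]
for smi, score in sorted(zip(library, scores), key=lambda t: -t[1]):
    print(f"{smi}\t{score:.2f}")
```

The point of the example is the design intent: the model answers one narrow, well-posed question (is this molecule likely active?) rather than generating anything.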
It depends on what the AI is being used for. All the hype is around artificial image creation and getting answers from the likes of ChatGPT. Neither of those is worth much, nor likely to monetize well. I have a fairly small computer with an Intel B50 graphics card in it that I set up to do local AI. It will do most of the image generation that the big dogs do, and Intel AI Playground is free for the loading, which really challenges the likelihood of huge data center monetization.

Now for the places where AI is useful. Nearly all modern cameras use some form of AI to assist with autofocus, and the relative improvement in AF over the last few years has been huge. If you do serious processing of photos, the Topaz suite (which uses AI extensively) is pretty much a must-have, particularly if you shoot in difficult conditions at high ISO. The noise reduction and sharpening tools are unmatched. Even Adobe is using AI for noise reduction and object removal in Lightroom Classic. And that is just in our little neck of the woods. You can question the value of the feature, but Tesla self-driving cars actually do work remarkably well. At the other end of the spectrum (and maybe the only place the data centers could see real revenue), AI-driven warfare is currently being demonstrated and will only increase in capability in the future. That one is more than a little scary.

In the end, I don't see any applications for AI that are going to pay for the enormous capex being thrown at data centers, not to mention the power bills. Virtually all the useful applications for AI to date are distributed functions that don't need data centers, except possibly once for training.
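To make "local AI" image generation concrete: on a consumer GPU the whole pipeline runs on your own machine, which is the point being made above about data-center monetization. Below is a minimal sketch using the Hugging Face diffusers library; it is only an illustration of the general idea, not what Intel AI Playground does internally, and the model name, prompt, and device handling are assumptions (an Intel Arc card would typically use an XPU-enabled PyTorch build rather than CUDA).

```python
# Hedged sketch of local text-to-image generation with Hugging Face diffusers.
# Model, prompt, and device selection are illustrative assumptions only.
import torch
from diffusers import StableDiffusionPipeline

model_id = "stabilityai/stable-diffusion-2-1"   # assumed checkpoint; others work similarly

# Use whatever accelerator is present; an Intel Arc GPU would instead use an
# "xpu" device with an XPU-enabled PyTorch build.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=dtype).to(device)

image = pipe(
    "a hummingbird hawk-moth feeding on lavender, macro photo",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("moth.png")   # everything above ran on the local machine
```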
This AI bullshit has to be stopped. Governments have to step in and regulate this sector so that ordinary people can buy memory.
This creates a problem for junior developers, but also for the future, as eventually there will be fewer senior developers, who are still needed... but it is unfashionable to discuss such unsavory topics!
Time to switch to new Chinese brands. These companies are making a killing at the expense of consumers. Time to boycott these greedy companies.
This may be odd, but I feel bad for the kids that want to build gaming rigs. I remember spending every penny I made one summer building my first one. Some of the PC builder groups are therapy sessions at this point.
> Does anybody actually think AI is worth it? I don't, for the most part. I'm sure there's some good that can come from it but it seems like a lot of bad stuff. That being said, I don't know all of the uses, so I'm curious of others' opinions or knowledge on the matter.

It also depends on which type of AI we are talking about.
> It still seems entirely possible that Canon will feel that a partially stacked sensor with 13-15ms readout is more than enough to meet their requirements.

Partially stacked? I like your optimism.
> Following on from @becceric's comment, which of the three shots of the Kestrel do people think is the best? (Taken this morning with the R5ii and RF 200-800mm.)
> [attachment 228460]

IMO: the third. The trees in the background are more interesting than the blue sky of the first, and the white contour of the bird separates it from the trees. The branches in the second picture 'touch' the falcon and distract my view from it.
> It seems pretty certain that '39MP' means 8K UHD. But I think it raises as many questions as it answers. If it's truly a stacked sensor with 10ms-ish readout, it effectively has the capabilities of the R5C but as a Super35. What does Canon see as the market for an 8K Super35 video camera when they've just introduced the ~$4k C50 that doesn't have 8K? Would they dare position it above the C50? If it's below, do they gimp it? What does it say about the price point of the R7 II, and how much of that capability do they allow to surface in the R7 II? So many possibilities.

I'd imagine that heat/recording time would gimp any 8K ability the R7 II might have (and 6K or 4K for that matter). The R50 has a built-in fan for long recording times, as does the R5C. If the R7 II can shoot 8K, I'm pretty sure you'd be limited to short clips due to heat. But yeah, Canon still might give it a couple of other whacks with the cripple hammer as well.
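A quick back-of-the-envelope check on the "'39MP' means 8K UHD" reading, and on what those readout figures imply for rolling shutter. The 3:2 aspect ratio and the uniform line-by-line readout model are assumptions made purely for illustration.

```python
# Rough pixel and rolling-shutter arithmetic for the rumored sensor.
# Assumes a 3:2 stills aspect ratio and uniform top-to-bottom line readout.

# 8K UHD frame and its pixel count.
uhd_w, uhd_h = 7680, 4320
print(f"8K UHD = {uhd_w * uhd_h / 1e6:.1f} MP")                     # ~33.2 MP

# A 3:2 sensor exactly 8K UHD wide would be 7680 x 5120.
w = 7680
h = round(w * 2 / 3)
print(f"3:2 sensor at 8K width: {w} x {h} = {w * h / 1e6:.1f} MP")  # ~39.3 MP, i.e. '39MP'

# Rolling-shutter skew for a subject panning across the full frame in 1 s:
# the top and bottom of the frame are exposed readout_time apart, so the
# skew is roughly readout_time as a fraction of the crossing time.
for readout_ms in (10, 13, 15):
    print(f"{readout_ms} ms readout -> ~{readout_ms / 10:.1f}% of frame width of skew")
```

So a ~39MP 3:2 sensor lines up neatly with an 8K-UHD-wide pixel array, and the difference between ~10 ms and 13-15 ms readout is real but modest in this simple model.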
> This AI bullshit has to be stopped. Governments have to step in and regulate this sector so that ordinary people can buy memory.

That is never going to happen.
> Does anybody actually think AI is worth it? I don't, for the most part. I'm sure there's some good that can come from it but it seems like a lot of bad stuff. That being said, I don't know all of the uses, so I'm curious of others' opinions or knowledge on the matter.

I use AI daily for work. It has valid & valuable use cases. It's also still in its infancy and will continue to get better over time. Where that leads us (good or bad) is anyone's guess, but it's not going away.
> The third wheel on the R7 II makes a lot of sense. Frankly, if I were Canon I would use the three wheel design on all cameras if physically possible because it would standardize the controls across the brand.

For us customers: Yes. But at the moment I have a mix of cameras which have unique properties due to the different ergonomics and features.