Messages - jrista

1456
Multiply the D800's pixel count by its FWC and you get the same as the 1D X.

(a new question to discuss?)
Then please keep yourself to the subject, so it will not become another headroom or BSI/FSI discussion.

There is no such thing as multiplying pixels by FWC to get a sensor-level figure. Each pixel is independent, and is read out independently. Full well capacity is a hardware trait...image size normalization has nothing to do with it. Once the analog signal (a series of signal values of electric charge) is converted into a digital image (which compresses or expands the analog signal into a fixed digital range...i.e. 10, 12, or 14 bits), FWC is no longer a valid term.
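To put numbers on that: per-pixel engineering DR is usually estimated as log2(FWC / read noise). A minimal sketch, using the FWC figures quoted in this thread and read noise values I'm assuming purely for illustration:

[code]
import math

# FWC figures are the ones quoted in this thread; read noise values
# are assumed for illustration and vary with ISO and measurement method.
pixels = {
    "D800": {"fwc": 44972, "read_noise": 4.8},
    "1D X": {"fwc": 90367, "read_noise": 38.0},
}
adc_bits = 14  # the ADC's bit depth caps what can be recorded per pixel

for name, p in pixels.items():
    dr = math.log2(p["fwc"] / p["read_noise"])  # stops, per physical pixel
    print(f"{name}: per-pixel DR ~{min(dr, adc_bits):.1f} stops")
[/code]

Note that nothing in that calculation involves more than one pixel; summing FWC across pixels produces a number with no physical meaning.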


Indeed.  But let's not allow facts to get in the way of the same restated DRivel from the DRoll DRones who DRool over DR.   ::)

D800/E is already 36Mp with 14.5 stop DR... without the ISO tricks

You fell victim to one of the classic blunders - The most famous of which is "never get involved in a land war in Asia" - but only slightly less well-known is this: DxOMark's Scores are useless, biased Bovine Scat.


From the whole sensor area you get exactly the same FWC from the 1D X and the D800.


Full Well Capacity has to do with the physical pixel, nothing else. There is no "whole sensor area" when talking about FWC. Simply put, the maximum charge accumulation possible in the photodiode of each D800 pixel is 44972e-, while for each 1D X pixel it is 90367e-. There is no changing that, it is a fixed attribute of the hardware. If you downsample a digital image in post, you are normalizing noise across digital pixels, not physical pixels, and that is an entirely different process. Your numeric range is the same...either 8 bits or 16 bits, regardless of whether you have a D800 or 1D X, and for any levels above the noise threshold on the 1D X, the gains would be the same as for the D800. The primary gain with the D800 has to do with shadow levels, where you have more usable detail straight out of the camera than with the 1D X (however, that still has nothing to do with FWC).
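For what it's worth, the digital-pixel normalization is easy to demonstrate. A quick synthetic sketch (illustrative numbers only): averaging blocks of N digital pixels cuts random noise by sqrt(N), which is why downsampled measurements look cleaner.

[code]
import numpy as np

rng = np.random.default_rng(42)

# Flat synthetic patch: mean signal 100 with RMS noise 10 per digital pixel.
img = rng.normal(100.0, 10.0, size=(1000, 1500))

# 2x2 box downsample: each output pixel averages 4 input pixels.
small = img.reshape(500, 2, 750, 2).mean(axis=(1, 3))

print(f"full-res SNR:    {img.mean() / img.std():.1f}")     # ~10
print(f"downsampled SNR: {small.mean() / small.std():.1f}")  # ~20, a sqrt(4) gain
[/code]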

1458
I'm not very interested in what you think, when I have had dialogs with Eric Fossum, Emil Martinec, BOBn2, John Sheehy, and several others about the benefits of BSI at DPReview years back, and also in private.

Well...good to see you're keeping the culture of obfuscation and misinformation alive.  ::) Good day, Mikael.

At least he is very consistent. He always goes on and on, making unsupported statements and assertions, often claiming to have years of inside knowledge. I've gotten to the point where, if a post says ankorwatt at the top, I probably won't read it.

I only respond so that other readers don't take his information at face value. It's always twisted in some way or another...I think people should have at least some of the facts.

1459
Dynamic range is probably more important to me than noise reduction or pixel count.
I think the way to do this is to split the sensor into highlights and shadows...
I'd rather have an 8mp 16-stop split sensor than a 16mp one with less DR.
It can be done.
It is interesting that most HDR cameras take 3 exposures, but nobody thought about changing the ISO instead.
I believe in pushing things to the limit and opening up the possibilities of a device...
Like my jailbroken iPhone 5, which is amazingly more useful and fun than the limited iOS from Apple.


Kudos to magic lantern!
Keep up the great work!

Why not 36Mp and the best DR?

D800/E is already 36Mp with 14.5 stop DR... without the ISO tricks

Again, more misinformation. The D800 has 13.2 stops of native dynamic range (the dynamic range at its full resolution). It is only able to achieve 14.3 stops (not 14.5) when downscaling from 36.3mp to 8mp (a greater than fourfold reduction in pixel count!!!)

This is why DXO's reports are so misleading. From a NATIVE CAPABILITY standpoint, the D800 is a 13.2 stop camera. Depending on how much you downscale, you might gain DR via a reduction in noise, at the cost of original detail. In other words, it is impossible to get 14.3 stops of DR at the native resolution of 7360 x 4912 pixels. Given that when editing a RAW photo, you ALWAYS edit at full resolution (i.e. the extreme shadow pushing we see in something like Lightroom), it is only valid to say that the D800 is a 13 stop camera, not a 14 stop camera.
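The arithmetic behind that screen-vs-print gap is simple, assuming the normalization behaves like ideal pixel averaging:

[code]
import math

native_dr = 13.2              # D800 per-pixel ("screen") DR, as stated above
native_mp, print_mp = 36.3, 8.0

# Averaging N pixels into one raises SNR by sqrt(N), i.e. 0.5*log2(N) stops.
gain = 0.5 * math.log2(native_mp / print_mp)
print(f"normalization gain: {gain:.2f} stops")         # ~1.09
print(f"normalized DR: {native_dr + gain:.1f} stops")  # ~14.3
[/code]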

In that respect....if ML has actually managed to extract the full 14 stops of dynamic range from the 5D III and 7D (which, given that they are effectively doing two-frame HDR, I believe is highly likely...you have well more than 14 stops of original data to work with, and are only limited by the bit depth of the ADC), that means a 5D III with ML is actually capable of almost a stop more DR than the D800...at a lower native resolution.
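For the curious, here is a toy model of how a dual-ISO capture buys that range. This is my own simplification for illustration, not Magic Lantern's actual algorithm, and every number in it is assumed: alternate lines are sampled at two ISOs, and the merge keeps the clean shadows of the high-ISO lines and the highlight headroom of the low-ISO lines.

[code]
import numpy as np

rng = np.random.default_rng(0)

scene = np.geomspace(1, 60000, 8)        # electrons, spanning ~16 stops
rn_lo, rn_hi = 33.0, 3.0                 # assumed input-referred read noise
clip_lo, clip_hi = 60000.0, 3750.0       # the high ISO clips 4 stops earlier

def sample(scene, read_noise, clip):
    shot = rng.normal(0.0, np.sqrt(scene))            # photon shot noise
    read = rng.normal(0.0, read_noise, scene.shape)   # read noise
    return np.clip(scene + shot + read, 0.0, clip)

iso100 = sample(scene, rn_lo, clip_lo)    # keeps highlight headroom
iso1600 = sample(scene, rn_hi, clip_hi)   # cleaner shadows, clips sooner

# Merge rule: trust the high-ISO samples unless they clipped.
merged = np.where(iso1600 < 0.95 * clip_hi, iso1600, iso100)
[/code]

In the real interleaved RAW, the two ISOs live on alternating line pairs, which is why the vertical resolution takes the hit discussed in the Dual ISO posts below.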

1460
If you really want to have 14 stops of DR... just buy a D800.

Not with those tiny pixels, thank you. I edited 1500 images from one the other day and I wasn't impressed.

why not, d800 has the same FWC as 1dx but 36 Mp
The best  SLR sensor on the market  today

I'm sorry, but that is a bald-faced lie! The D800 has a full well capacity of 44k, while the 1D X has a full well capacity of 90k. That is a TWOFOLD difference between the two, bub!

1461
Software & Accessories / Re: Normal RAW vs Dual ISO Raw Example Video
« on: July 17, 2013, 07:29:26 PM »
How do displays and prints compare with 14 stops dynamic range?  Are displays just compressing the range back to some lower level?

The general idea is that if you don't have noisy shadows, you can lift them to bring them within the DR of a screen or print. Increased dynamic range supports editing latitude...you get to push the exposure around more. Keep in mind, in these examples, the shadows WERE lifted...if they were left as-is, those shadows would be nearly black. The normal part is noisy because of the lift.
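A quick sketch of why the lift exposes noise (numbers are illustrative): a push in post multiplies signal and noise alike, so the shadows' SNR is exactly whatever the sensor delivered.

[code]
import numpy as np

rng = np.random.default_rng(1)

# Deep-shadow patch: mean 20e- of signal with 8e- of read noise.
patch = rng.normal(20.0, 8.0, 10000)

lifted = patch * 2**4   # a +4 EV shadow push in the RAW editor

print(f"SNR before lift: {patch.mean() / patch.std():.2f}")
print(f"SNR after lift:  {lifted.mean() / lifted.std():.2f}")  # unchanged
[/code]

The lift makes the shadows bright enough to see, but their SNR is unchanged, which is why the normal exposure looks noisy after the push while the dual ISO version holds up.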

1462
Software & Accessories / Re: Normal RAW vs Dual ISO Raw Example Video
« on: July 17, 2013, 07:28:09 PM »
Well the colour looks nice in dual-iso, but that's some crazy bad moire on the cushion.

+1 Yeah, it makes it perform like an Exmor or better for shadow pulling, but then again the resolution is cut in half in each direction, so.... and the moire and aliasing are so bad that it looks more or less unusable.

Cut in half only vertically, full resolution horizontally.

1463
Animal Kingdom / Re: Show your Bird Portraits
« on: July 17, 2013, 05:56:36 PM »
Bar-winged Prinia (Prinia familiaris)
60D, 70-300mm L - 1/320 f5.6, 300mm, ISO 320

IMG_3648cropped by sleon_falconity, on Flickr


Very interesting. That bird looks almost identical to the Western Kingbirds we have here in the US. I think the Prinia has a lighter throat than the Kingbird, but outside of that...they are extremely similar. I'm curious....how big is the Prinia? A Kingbird (western or eastern) is just slightly smaller than an American Robin...I wonder if a Prinia is similar in size.

1464
Actually they are all sort of bad. It's a shame stuff like AmigaOS and such are forgotten and stuff like Windows hangs around.

Anyway, the above list is sort of accurate, but they also never did anything as radically silly as thinking that a tablet interface is ideal for desktop usage. Like we really want to smear greasy fingers all over photo-editing monitors, or hold our arms up and lean forward to reach 24-36" monitors (or worse, if you hook it to an HDTV too).

I'm curious if the assumption that you MUST use touch to use the start screen is a common one. The start screen is not inherently touch-only. You can use Windows 8 without touch, and it works just fine. There is no reason to touch a screen in order to use the new start screen. If that is what most people think, then I guess it is no wonder that people aren't buying Windows 8.

I'd also point out that it works even better on an HDTV. I have Win8.1 on my Media PC, attached to a 46" Samsung. I use the standard Media PC remote to control it, along with a companion Logitech T650 touchpad for supporting any of the gestures (which, I'd add, is fully compatible with any desktop, allowing you to take advantage of the touch interaction without ever needing to touch a screen, if that kind of thing irks you.)


1466
I actually like my HTC Windows Phone. The interface is consistent and simple, and it is much easier to use than an Android phone (try giving an Android phone to somebody 60 and over; I feel Android phones are more for tech-oriented people who like to customize). I like the simplicity of iPhones as well, but I think the Windows phones are quite easy to use. My HTC phone is also a lot thinner and lighter than the iPhone, and the LCD is fantastic. Battery life, admittedly, is so-so, but I've just learned to have a lot of USB chargers lying around. Also, their speech recognition still needs some work (as compared to Android - man, it is good).

As far as the camera capabilities, I think most people will admit that iPhone cameras can produce very good pics, and even the camera in my HTC phone can produce great colors and picture quality - good enough for me to put up on a monitor and ask people which one was taken by a DSLR and which one by a phone (obviously we're talking outdoor pics). Sometimes I even use my phone as a lighting source when I want to take pics inside a dimly lit place. I just turn on the flashlight app, get a white paper napkin for diffusion, and there you go.

I've always wondered why we don't have phones with the thickness of a Canon S95 and a proper zoom lens. I'd put that in a small case on my belt. It'd just be a multi-purpose device. Sorta like a p&s but you can have it upgraded to be a phone - linked to your regular cellular carrier.

Not sure why we would pick one brand over another - most cellphone makers wouldn't be in biz if they didn't perform to a minimum. Most of us switch cellphones every 6 months to a year, so I just pick the model that has the right OS, price point, and features.

Speech recognition in Windows Phone 8 is phenomenal. It isn't as interactive as Siri, but it is flawless, and even works in noisy environments now. If you haven't tried it, it's worth messing with a Windows Phone 8 device in a store somewhere...the voice control, voice texting, etc. is pretty nice.

1467
I have to concur that Win-8 sucks big time on a desktop. Metro has no business on a PC machine...

but on a tablet device or a phone, it's far better than my iPhone. The only reason I haven't jumped ship is that the more stuff you buy on iTunes and the App Store, the more it ties you to the system.  :P I couldn't switch if I wanted to, with all my purchases.


It did, I agree. I think Windows 8.1 fixes most of that, though. It still isn't ideal, but a hell of a lot better than what it was. That's Microsoft's MO, though. It always takes a couple versions for quirks to iron out. Also, keep in mind, people utterly HATED Windows XP when it first hit (I remember reading scathing, hateful articles months and months after its initial release), and it was over a year before it became the most used and most loved Windows OS ever. I don't suspect things will be any different for Windows 8...and it is a hell of a lot better release than Windows Vista was (so the next major release should be a pretty significant improvement even over Win8.1).

Microsoft has a different release MO. Apple builds up an unquenchable fervor by not releasing ANY details about its releases until the day they unveil. (Well, they did....it seems that may change under Cook, and I guess we'll see whether that is to the detriment of Apple in the long term.) Microsoft has always approached releases with lots of software leaks, beta versions, community technology previews, etc. I think that can be good and bad, but these days, it seems it gives people too much time to play with new products before they are even released, encounter all the pre-release bugs, and decide they don't like the product. I would prefer Microsoft take the old Apple/Jobs approach. Don't release anything until it's done, and when it's released, make sure it's solid, and make it a big party. They wouldn't lose people in the beta and CTP phase that way, they wouldn't get a bunch of pre-release bad press, and they would gain the benefit of people being antsy and excited to see and use the next greatest Microsoft thing. People just end up bored with the bugs before new Windows versions are actually released, the excitement is gone, so the release suffers, and it takes longer to build momentum.

Maybe the MS reorg will change things...but I don't really trust Ballmer to be anything other than a raging tool...so....


Windows 2000/NT - Good
Windows ME - Bad
Windows XP - Good
Windows Vista - Bad
Windows 7 - Good
Windows 8 - Bad
Windows 9 - ? Fill in the blank.

I love M$ products but not when they revamp something the first time. The second attempt is usually perfect.


Yup, that's pretty much it! :D It would be nice if it became:

Windows 9: Good
Windows 10: Good
   .
   .
   .
Windows N: Good

I get the feeling it will probably be more along the lines of:

Windows 8: So-So
Windows 8.1: Better
Windows 8.2: Even Better
Windows 8.5: Good
Windows 9: Better than Good
Windows 9.1: Even Better than Good

And if there are six to eight months between each release, then reaching Even Better than Good could take years. Assuming they don't end up continuing to flip-flop.


I think Windows 8.x would likely be received MUCH better if they gave the user the choice, especially on a real computer (laptop/tower), to completely divorce the Metro UI from the system and go fully and ONLY into a classic desktop-only paradigm.


Why? Seriously, Why?

I don't see any reason to do that. First, if you don't want the new start menu, then you might as well stick with Windows 7. If you want that kind of desktop, there is very little benefit to moving to Windows 8. Aside from doing away with a lot of the fancier glass effects, which improves performance a smidge, and a slightly faster boot time...Windows 8 in desktop mode is nearly identical to Windows 7 in desktop mode. There isn't any compelling reason to move to Windows 8 if you loathe the new Metro UI that much.

Second, Microsoft has long had the desire to move to a 2D immersive, interactive experience. They started with the "Office 2019" videos from a few years ago, and recently have a few new ones. For the latest, see the following link and click "Future Vision":

http://www.microsoft.com/office/labs/index.html

You can see the older videos here:

http://blogs.technet.com/b/next/archive/2011/10/25/looking-back-on-2019.aspx#.Ueb_kI3bPzw

There are some awesome concepts in those videos. The ubiquitous touch integration on all 2D surfaces, phones, tablets, and other devices that work in harmony and allow instant transfer of data and responsibility, etc. I really want that. I can't wait until I can tap my phone on the glass coffee table in my living room, and see the day's photos, my schedule, etc. all in a clean, pristine 2D touch interface. Windows 8 is the first real stepping stone I've seen towards this vision from Microsoft...and they first started releasing the Office 2019 videos at least five or six years ago.

The way people complain about the new 2D UI...I think it's just the fact that it is new and uncomfortable. I know too many people who have seen Corning's videos of ubiquitous touch computing and said they would love such a thing...then say they hate Windows 8. Well, hey....pretty much ANY of the Office future concept UIs could be created on Windows 8 today. The parallax scrolling seen in the corporate agriculture apps could be done right now (actually, it's already done on the Office Future Vision site I linked above, and it performs like fluid glass on IE10). The transfer of responsibility can already be done today with Xbox SmartGlass, which allows you to either remotely control an Xbox, or transfer responsibility for playback of music or video from a tablet to a TV. And all people can do is complain about it. Sorry, but that just boggles my mind.

It's just a matter of time before better apps find their way into the Windows Store. Some already are...some of the fitness apps are already getting quite good (i.e. FitBit), and offer very advanced interactive UIs. I think it is also just a matter of time before the kind of advanced manufacturing and agriculture apps find their way into real-world corporations. They just need people to have the vision for them, and to develop them. I don't know when ubiquitous touch computing finds its way into tabletops, windows, walls, etc., but I can't wait.

I think that's largely the main gripe about it, trying to use a tablet UI on a desktop (even if it had touch, not many want to keep their arms up off the desk a lot, constantly touching the screen)....


I agree a bit here; touch shouldn't be the primary mode of interaction for a desktop. I think Windows 8.1 has already fixed a lot of that, and even so, Windows 8 started out with very good mouse and keyboard support. You never HAD to use touch for the start screen...it has always supported mouse and keyboard. For that matter, it also has great support for a remote control...I use Windows 8 on my Media PC, and use the remote to move around the tiles and run programs. The remote works well in most Microsoft Win8 apps as well...for example, I just hit the left or right buttons to scroll through news articles, up and down to scroll through emails, etc.

I think the complaints about the new start screen not working on a desktop are overblown. I also think that Microsoft has done a poor job educating new users on how easy it is to use the mouse to control the new UI. Closing an app, for example, often baffles people. It is actually a simple gesture...point to the top of the screen, click and hold, drag to the bottom. It's a simple, fluid motion once you know it exists...most people don't...and that's the real problem. It's one of Microsoft's fundamental problems...they have a severe gap in helping their users KNOW how to use their OS.

I hear Win8 is pretty snappy and does good things with memory management, but if they don't allow classic computer users to turn Metro OFF, I think they're gonna lose business. People are NOT in a rush to migrate off Win7, businesses certainly aren't going to migrate, heck, they're just now coming off XP still in many cases.


In Windows 8.1, you can't entirely decouple yourself from Metro, but you can get pretty close. You can boot directly into the classic desktop now, and do everything you used to do...with the exception that the restored start button still brings up the start screen, rather than the classic start menu. There are a myriad of third-party tools (really just registry hacks) that restore the classic start menu. You can do that, if that's what you want...but again, Why? You might as well stick with Windows 7 until it EOLs if that's really how you feel. If you are entirely uninterested in moving into a new era of computing, it isn't like Microsoft is holding a gun to your head. ;)

I mean, look at Apple...they don't have the same OS on the tablet/phone that they have on the computer...iOS vs OSX...different beasts. Sure, they are converging to some extent, but not to the same extent MS tried with Win8.

My $0.02,

cayenne


I guess I think that the dual-platform nature of Windows 8 is its strength against iOS. I think that is how most people feel as well. Windows RT is largely a flop. People don't WANT just Windows Metro, even just on a tablet. People, including myself, explicitly held out for Surface Pro, because we WANT that dual nature. I really love it. I waited years for the ability to run Lightroom on a tablet out in the middle of nowhere Colorado, where I can tether my DSLR to my fully mobile, fully featured PC that neatly rests in the palms of my hands, and effectively get a large screen view camera out of a lowly Canon 7D. I didn't need any extra accessories, custom cables, or anything like that to get it working, either. Personally, I think that is a highly valuable thing. That's something no other company has offered me yet, not even Apple.

1468
Read what I write: the real improvements are around 1.1 to 1.4 um sensel size,

and there are no APS or 24x36 sensors from Canon or others yet with that small a pixel size.

BSI costs about 30% more than FSI.

Eric Fossum:

Improvements like BSI typically improve image quality, mathematically and from a perception point of view, by increasing QE and reducing effects originating from pixel stack height, when comparing two pixels of equal size. At 1.4 um pixel pitch the improvement offered by BSI is small. By 1.1 um pixel pitch, BSI offers a substantial advantage, unless some FSI breakthrough is made. BSI costs more to make, so there is motivation for the FSI breakthrough.

It really depends on the photodiode size. A 7D has 4.3 micron pixels, but the actual photodiode is smaller than that. The entire pixel is surrounded by 500nm (0.5 micron) transistors and wiring, which would mean the photodiode...the actual light-sensitive area embedded in the silicon substrate...is only about 3.3 microns at best (and usually the photodiode has a small margin around it, so closer to 3 microns). A 24.4mp sensor would have pixels in the range of 3.2 microns; however, with a 500nm process, the actual photodiode pitch is closer to 2 microns.

Canon has already demonstrated with the 1D X that larger pixels can be huge for overall SNR (and therefore actual light sensitivity). Despite the fact that the 1D X is a FF sensor, it benefits greatly from a larger pixel, and thus a larger photodiode size...as the gain is relative to the square of the pixel pitch. Production of a BSI APS-C 24.4mp sensor would mean that it could have 3.1 micron photodiodes that perform at least as well as the 7D's 18mp sensor, as total electron capacity is relative to photodiode area. A 24.4mp BSI 7D II could then be roughly as capable (~21,000 electrons charge FWC @ ISO 100) as an 18mp FSI 7D.
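As back-of-envelope arithmetic, with the margins and the 7D FWC being round numbers I'm assuming rather than measured values:

[code]
def photodiode_pitch(pixel_pitch_um, process_um):
    # Assume front-side wiring/transistors claim roughly one process
    # width on each side of the photodiode.
    return pixel_pitch_um - 2 * process_um

pd_7d = photodiode_pitch(4.3, 0.5)   # ~3.3 um for the 18mp FSI 7D
pd_bsi = 3.1                         # hypothetical 24.4mp BSI: nearly full pitch

# FWC scales roughly with photodiode area.
fwc_7d = 24000                       # assumed ~24k e- for the 7D at ISO 100
fwc_bsi = fwc_7d * (pd_bsi / pd_7d) ** 2

print(f"7D photodiode ~{pd_7d:.1f} um; BSI 24.4mp FWC ~{fwc_bsi:.0f} e-")
# ~21k e-, in line with the estimate above
[/code]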

Personally, I find that to be quite a valuable thing. Especially given that the 7D currently performs about as poorly as one could expect by today's standards. A 2 micron photodiode in the 7D II would mean SNR suffers even more, which is going to have an impact on IQ, especially for croppers, so I can't imagine Canon doing that.

You are mixing things up. Why do you think I'm saying that Canon needs 180nm or smaller tech?

The real benefits of BSI are found in very small sensels, and I do not think it is a good idea to talk too much about what you believe or think when Eric Fossum has shown where the benefits of a BSI construction start.
And that is around 1.4 micron and smaller:
"A tipping point for BSI will be the 1.1 micron pixel node where FSI will likely be unable to achieve the market-required performance – necessitating a transition to BSI for applications that require this smaller pixel."

There are some benefits of BSI with larger pixels too, with wide angle lenses and corners, for example an SLR + wide angle lens and the incident light angle.

I am not mixing anything up. The primary benefit of 180nm is that you have more area per pixel to dedicate to the photodiode. In the case of 1.4 micron pixels, use of a 500nm process is already a non-option...you would have already passed the limit you claim would be reached with 1.1 micron pixels on a 180nm process...the photodiode of a 1.4 micron pixel on a 500nm process would be maybe 0.3 microns (300nm). You have to translate from a 180nm 1.4 micron pixel to a 500nm 3.2 micron pixel. The wiring and transistors in a 500nm process take up a lot of space. That space could be put to better use...and assuming one does not change from a 500nm process....well, then BSI DOES have value.

Instead of taking up ~1 micron of pixel pitch for wiring and other logic, you take up a quarter of a micron if you moved to a 180nm process on APS-C. That means, for a 4.3 micron pixel pitch, the actual photodiode could be ~3.95 microns, rather than 2.1 microns. That increase in area is where you gain the greatest potential for an improvement in IQ. Now, with 180nm transistors, you can pack more of them in. Canon could stick with a 2.1 micron photodiode, and have a lot more logic circuitry around it with a 180nm process. That would allow them to add more sophisticated noise reduction logic, maybe drop in some on-die ADC, etc....simply because each transistor and all the wiring consumes less space. But fundamentally, photodiode area is the key thing from an SNR standpoint, and a higher SNR leads to less noisy images.

When it comes to Canon's read noise, the primary issue there is high frequency components and binned pixel processing on an off-die component. The longer the signal remains analog, and the closer any pixel processing is to a high frequency component (a DIGIC processor is a CPU...the whole thing is a high frequency component), the greater the chance that read noise will interfere with shadow detail. It doesn't matter what the fabrication process is...Canon could move to 180nm, and keep using their DIGIC processors with off-die ADC. They will continue to have shadow noise problems, despite the move to a better process. If they move the ADCs on-die, and do something akin to what Exmor does, by moving the PLL, clock, and other high frequency components to an isolated area away from those ADC units, then Canon could reduce their shadow noise.
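A toy model of that argument (all numbers assumed for illustration): downstream noise adds in quadrature with the on-chip noise, and the total is what sets how deep into the shadows you can dig.

[code]
import math

fwc = 68000            # e-, ballpark full-frame pixel at base ISO
pixel_noise = 3.0      # e-, assumed on-chip contribution

for label, downstream in [("on-die ADC, isolated clocks", 1.5),
                          ("off-die ADC near a DIGIC", 32.0)]:
    total = math.sqrt(pixel_noise**2 + downstream**2)
    dr = math.log2(fwc / total)  # shadow-limited DR, in stops
    print(f"{label}: read noise ~{total:.1f}e-, DR ~{dr:.1f} stops")
[/code]

Same sensor, same fabrication process; the only thing that changed is where the signal gets digitized.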

That only affects low ISO, however, and a lot of Canon users care more about high ISO. Using a BSI design, even in APS-C, allows photodiode area to remain large. Canon could also still add more advanced per-pixel logic in a BSI design even if they stay on a 500nm process, as they would have the full photodiode area on the front side to utilize for logic (i.e. additional noise reduction circuitry...one of their patents described a power-source free CDS system that decoupled the power input while performing CDS, as keeping the power coupled continued to add dark current noise.)

It is not NECESSARY for Canon to move to a 180nm process, or only use BSI with small form factor sensors having 1.4 micron pixels or smaller, in order to continue innovating and improving IQ. As far as I am concerned, for the kind of high ISO work I do, I would LOVE to see Canon produce a FF BSI sensor. That would allow them to increase photodiode area, particularly in a shared pixel architecture, by another micron. Right now, in the 1D X, photodiode pitch is around 5.8 microns, while the actual pixel pitch is 6.95. I think it would be awesome to see a 1D XI with a BSI design that had 6.95 micron photodiodes. That is a 43% increase in total photodiode area, an increase that would have a measurable improvement in high ISO performance (imagine an actually usable ISO 25600 and maybe 51200 for wildlife and birds.) Again, Canon could move to a 180nm process, and either pack more logic into each pixel and improve readout NR (i.e. CDS), or reduce the logic, increase photodiode area, and move the ADC on-die, which at the very least should increase the maximum readout rate and possibly improve read noise performance. There are a whole lot of options...Eric Fossum isn't the only source of CIS innovation, nor the bible of what is and is not possible with CIS devices. Eric Fossum has done a lot of research in the area, however so has Canon (remember, it wasn't that long ago that Canon had the best sensors in the digital camera arena...they certainly have the knowledge and know-how...I think their current reliance on 500nm is more of a business and financial matter than a lack of ability.)

I think moving to BSI, even if Canon sticks to 500nm, is a better option. It frees up the entire front side for logic, and the entire back side for light-sensitive photodiodes. It is something Canon could do with their current process, potentially freeing up a billion dollars for other purposes (R&D, greater production capacity, whatever.)
