dgatwood said:
jrista said:
zim said:
Why do the etchings always have to go in the same direction?
I guess it's how they are cut out? What do they use, a saw? ;D
Well, there is no specific reason why they couldn't etch some additional sensors in the perpendicular direction, but it would be costly. Sensor fabrication works by exposing the silicon wafer with deep or extreme UV light through a template (a mask). The template is oriented in a single direction, and the wafer is moved underneath the light beam so that multiple sensors can be exposed. Fabricating a single sensor is a multi-step process, with various steps involving masking, etching, dissolution of masks, more etching, doping and layering of new materials, then masking and etching again. This stuff has to be precise to within a few nanometers at most, so it is entirely automated. Rotating the wafer to etch additional sensors in a different direction introduces a source of alignment error that could hurt yield.
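For a rough sense of why rotation is scary, here's a quick back-of-the-envelope Python sketch (all numbers are illustrative assumptions, not any fab's actual specs): a point near the edge of a 300mm wafer shifts by roughly r x theta when the wafer is rotated by a small angle theta, so even a tiny angular error dwarfs a nanometer-scale overlay budget.

```python
import math

# Illustrative numbers, not any fab's specs: a 300 mm wafer has a
# 150 mm radius; overlay budgets on mature nodes are tens of nm.
wafer_radius_nm = 150e6    # 150 mm expressed in nanometers
overlay_budget_nm = 50     # assumed allowable layer-to-layer error

for angle_deg in (0.01, 0.001, 0.0001):
    # A small rotation theta displaces a point at radius r by ~ r * theta.
    theta_rad = math.radians(angle_deg)
    shift_nm = wafer_radius_nm * theta_rad
    print(f"{angle_deg:>7} deg -> {shift_nm:,.0f} nm shift at the wafer edge "
          f"({shift_nm / overlay_budget_nm:,.0f}x the assumed budget)")
```

Even a ten-thousandth of a degree of rotation error puts the edge dies a couple hundred nanometers off, several times the assumed budget, so every layer after a rotation would risk misalignment on the rotated dies.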
I was under the impression that chip vendors typically used a single mask (template) for the entire wafer, though. If so, then the additional work to add a sensor in the other direction would be limited to adding another set of clear spots to the mask for that chip's features, modifying the cutting program slightly, and then modifying the picker to grab that one chip and rotate it ninety degrees.
If they aren't using one mask per wafer, then I suspect they're in a world of hurt, yield-wise, because the alignment of the mask would have to be perfect twenty to eighty times per pass across a given wafer, whereas with a single mask, it only has to be perfect once per pass across the wafer.
If Canon hasn't done this already, they should probably sit down, do the math on what percentage of chips are full-frame, and then design masks to etch the full-frame sensors at the center of the wafer, and surround them with crop sensors to maximize the surface coverage. In theory, they could also mask the DIGIC chips, lens microcontrollers, etc. in the borders, so that only a tiny bit of the silicon wafer is wasted (because I'm pretty sure the robots have to have some bare spots near the edge of the wafers so that they can safely grab them).
Granted, you can't do that for every combination of chips—IIRC, some silicon parts likely require significantly different doping—but for parts that are fairly similar, you should be able to do so. At a bare minimum, I would expect that you could combine different sizes of sensors almost arbitrarily, including not only full-frame and crop sensors, but also smaller sensors for use in camera phones and point-and-shoot cameras.
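Out of curiosity, I threw together a rough Python sketch of that math. The die sizes (36x24mm full frame, roughly 22.3x14.9mm APS-C) and the scribe-lane width are assumptions, and it ignores edge exclusion zones, alignment marks, and reticle field limits, so treat the counts as ballpark upper bounds:

```python
import math

def gross_dies(wafer_d_mm, die_w_mm, die_h_mm, scribe_mm=0.1):
    """Count die sites on a square grid that fit entirely inside the
    wafer circle. Rough upper bound: ignores edge exclusion, alignment
    marks, and reticle field limits."""
    r = wafer_d_mm / 2
    pw = die_w_mm + scribe_mm   # horizontal pitch incl. scribe lane
    ph = die_h_mm + scribe_mm   # vertical pitch incl. scribe lane
    nx = int(r // pw) + 1
    ny = int(r // ph) + 1
    count = 0
    for i in range(-nx - 1, nx + 1):
        for j in range(-ny - 1, ny + 1):
            # All four corners of the die site must lie inside the wafer.
            corners = [(i * pw, j * ph), ((i + 1) * pw, j * ph),
                       (i * pw, (j + 1) * ph), ((i + 1) * pw, (j + 1) * ph)]
            if all(math.hypot(x, y) <= r for x, y in corners):
                count += 1
    return count

# Assumed die sizes: 36 x 24 mm full frame, ~22.3 x 14.9 mm APS-C.
print("Full frame per 300 mm wafer:", gross_dies(300, 36, 24))
print("APS-C per 300 mm wafer:     ", gross_dies(300, 22.3, 14.9))
```

Extending it to pack crop dies into the leftover ring around the full-frame dies would show how much of that wasted edge area a mixed layout could recover.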
If you are assuming they use a single mask in a single exposure to generate an entire wafer of sensors, then you would be incorrect. Remember that one of the points of using a projection mask with deep or extreme ultraviolet light is that the optics shrink the mask image down onto the wafer, so the mask (the reticle) and its features can be larger than the actual CMOS device being fabricated...typical reduction ratios are 4x or 5x, which makes the mask itself far easier to manufacture to spec. To make a mask large enough to expose an entire wafer at once would be...immense. Generating and focusing the light beam would be an equally immense undertaking (assuming it's even possible to bend light enough to do it). You seem to think that making a single mask to expose the wafer in one shot would be easier...if it were, I'm sure everyone would have moved to that approach decades ago. Fabbing one die at a time is how it's done in all industries, including CPUs, GPUs, etc. (which are considerably more complex devices than an image sensor, and use smaller processes as well).
Fabricating a sensor is a multi-step, multi-layer process, per-sensor (or per-CPU, per-GPU, per-IC), not per-wafer. They design a sensor, generate the templates necessary to etch and layer the materials for all of the transistors, wiring, and other components involved in that sensor, then use those templates again and again to fabricate multiple sensors per wafer. For each pass, the wafer is coated with a photoresist, which, when exposed to DUV or EUV light, changes its chemical structure. Every die on the wafer is exposed one after the other with the first template, then the entire wafer is bathed in chemicals to remove the exposed photoresist, etch away the exposed silicon, and dope the remaining silicon if necessary. The rest of the photoresist from the first pass is removed, a new layer of silicon or silicon-based material is added, another layer of photoresist is applied, and the wafer is sent through the stepper again. Rinse, repeat, etc.
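To make the flow concrete, here's a minimal Python sketch of that step-and-repeat loop. The layer names, grid size, and step functions are illustrative placeholders, not an actual process recipe; a real flow has dozens of layers and far more chemistry per layer:

```python
# A minimal sketch of the step-and-repeat flow described above.
# Layer names, grid size, and steps are illustrative, not a real recipe.

def coat_photoresist():        print("  coat resist")
def develop():                 print("  develop exposed resist away")
def etch_or_implant(layer):    print(f"  etch/implant for {layer}")
def strip_resist():            print("  strip remaining resist")
def deposit_next_layer(layer): print(f"  deposit material after {layer}")

def move_stage_to(site): pass  # the wafer moves; the optics stay fixed
def expose(layer, site): pass  # DUV/EUV flash through the reticle

LAYERS = ["active", "poly", "contact", "metal1", "metal2"]  # assumed
DIE_SITES = [(x, y) for x in range(8) for y in range(12)]   # assumed grid

def process_wafer():
    for layer in LAYERS:           # one reticle per layer...
        print(f"layer: {layer}")
        coat_photoresist()
        for site in DIE_SITES:     # ...reused at every die site, same orientation
            move_stage_to(site)
            expose(layer, site)
        develop()
        etch_or_implant(layer)
        strip_resist()
        deposit_next_layer(layer)

process_wafer()
```

Note the structure: the expensive, alignment-critical part (exposure) repeats per die site per layer, which is exactly why every die is exposed in the same orientation.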
There are steppers, and there are scanners. Some large CMOS devices (like the very large ultra-sensitive CMOS sensor Canon developed a few years ago) cannot even be exposed in a single shot; to keep proper focus, the beam has to be smaller than the full size of the template, so photolithography scanners allow larger devices to be fabricated via a longer exposure, moving the wafer and the UV reticle in opposite directions during exposure. Canon manufactures both photolithography steppers and scanners, and according to their site, these devices support 200mm and 300mm wafers; their latest devices can apparently use some techniques to image below the 90nm diffraction limit of the DUV light they use (so Canon is more than capable of fabricating sensors on a 180nm process with their own photolithography technology, and on 300mm wafers at that).
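You can sanity-check that 90nm figure with the Rayleigh criterion, R = k1 x wavelength / NA. A quick Python sketch, assuming KrF 248nm DUV illumination; the NA and k1 values are my assumptions, not Canon's published numbers, but they land right around the quoted limit:

```python
# Rayleigh criterion: minimum printable feature R = k1 * wavelength / NA.
# Assumed values: KrF DUV at 248 nm; the NA and k1 are illustrative,
# not Canon's published specs.
wavelength_nm = 248        # KrF excimer laser, a common DUV source
numerical_aperture = 0.82  # assumed projection-lens NA
k1 = 0.30                  # assumed process factor (hard floor ~0.25)

resolution_nm = k1 * wavelength_nm / numerical_aperture
print(f"minimum feature ~ {resolution_nm:.0f} nm")  # ~91 nm
# A 180 nm design rule sits comfortably above that limit.
```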
It's all automated and computerized; human hands aren't directly involved in moving the wafer or anything like that (at least not until it's done), so redirecting the beam or moving the wafer can be exceptionally precise. There has to be some negative space around each sensor anyway to allow the dies to be cut apart, but that's a very careful balance of exactly the right amount of space...not so little that you risk damaging dies during cutting, and not so much that you waste space.

The thing is, it all works in one orientation. While the wafer and reticle can be moved horizontally, from the things I've read about photolithography devices, there is no rotation of the wafer or template or anything like that. The wafer moves under the template and UV beam, out to the chemical bath for etching and processing, on to have another layer of silicon deposited, back under the template, and so on. It is probably possible to build a fab that could fabricate devices in multiple orientations; however, I'm certain there are multiple challenges to making that possible, and it would likely increase costs dramatically. It wouldn't just be changes to the stepper or scanner...you would have to make sure the entire manufacturing pipeline could handle devices of differing orientations, including the steps involved in cutting the wafer and separating out each die, packaging each die (which involves adding pins or a land grid array and the like), etc.
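To put a rough number on that scribe-lane balance, here's a tiny Python sketch of the wafer area fraction lost to the lanes as their width grows (the die size and lane widths are assumed purely for illustration):

```python
# Fraction of wafer area lost to scribe lanes at a few assumed widths.
die_w_mm, die_h_mm = 22.3, 14.9   # assumed APS-C-sized die
for scribe_mm in (0.05, 0.10, 0.20):
    pitch_area = (die_w_mm + scribe_mm) * (die_h_mm + scribe_mm)
    lost = 1 - (die_w_mm * die_h_mm) / pitch_area
    print(f"{scribe_mm * 1000:.0f} um lanes -> {lost:.1%} of wafer area lost")
```

Doubling the lane width roughly doubles the lost area, which is why that spacing gets tuned so carefully.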