There isn't any feature of the transistors that is 7nm wide; the smallest feature, IIRC, is the interconnect, which is about 30-40nm. "7nm" is marketing wankery. https://en.wikichip.org/wiki/7_nm_lithography_process
If you look at the SRAM bit cells, they halved in area from 28nm to 14nm, and halved again from 14nm to 7nm. This is likely because metal interconnects or other features only halved in spacing going from the 28nm process to 7nm. So the scaling appears to apply to only one dimension (full linear scaling of both dimensions would have quartered the cell area at each step, for a 16-fold total reduction). But it is still scaling nevertheless.
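A quick back-of-the-envelope sketch of that math (plain Python, with a normalized unit area just to show the ratios):

    # Compare ideal 2D scaling against the observed ~2x-per-node SRAM area scaling.
    nodes = [28, 14, 7]
    for i, node in enumerate(nodes):
        ideal = 1.0 / 4**i      # linear shrink in both X and Y -> area quarters per step
        observed = 1.0 / 2**i   # the halving per step described above
        print(f"{node}nm: ideal {ideal:.4f}x, observed {observed:.4f}x")
    # After two steps: ideal scaling gives 1/16 of the 28nm cell area,
    # while the observed halving gives only 1/4.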
There are different feature sizes in the Z direction and different feature sizes in the XY plane. In XY, the smallest feature is probably the gate length. In Z, the gate oxide thickness is probably the smallest, at 1-2nm. We have something called atomic layer deposition, which gives control down to single atomic layer thicknesses.
The smallest feature size no longer correlates with transistor density, so in recent years manufacturers have just used the number to convey increases in transistor density.
That's a problematic metric as well, because we don't know how much area the assist circuitry takes up. Modern high-density SRAM cells cannot operate as-is; they need assist circuits to compensate for variations. For example, for Intel's 10nm SRAM, they claim 77% area efficiency (https://fuse.wikichip.org/news/525/iedm-2017-isscc-2018-inte...). Without those values, a plain bits/mm2 figure is problematic.
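To illustrate why that overhead matters, here's a rough sketch using the Intel 10nm figures quoted elsewhere in this thread (the exact numbers are just for illustration):

    # Effective SRAM density once array overhead (assist circuits, etc.) is counted.
    bitcell_area_um2 = 0.0312   # Intel 10nm high-density bitcell, per the thread below
    array_efficiency = 0.77     # Intel's claimed 77% area efficiency
    raw_mb_per_mm2 = 1.0 / bitcell_area_um2   # bits/um^2 is numerically Mb/mm^2
    effective_mb_per_mm2 = raw_mb_per_mm2 * array_efficiency
    print(f"raw: {raw_mb_per_mm2:.1f} Mb/mm^2, effective: {effective_mb_per_mm2:.1f} Mb/mm^2")
    # ~32 Mb/mm^2 on paper shrinks to ~24.7 Mb/mm^2 once overhead is included.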
For a while I think it was the other way around: Intel's processes were, in practice, actually substantially less dense than their competitors at the same nominal process node size.
I've always assumed that "XX nm" doesn't represent the geometry of any particular feature of the end product, but something related to the wavelength of the radiation used for the imaging process.
Maybe it corresponds to the principal emission line of the light source (synchrotron?). In spectral terms, 7nm is near the border between hard UV radiation and soft X-rays.
Most chips made today are created with multipatterning processes using 193nm lasers and optical masks, known in the field as "Deep Ultraviolet Lithography" (DUV(L)). The industry is pushing towards replacing DUV (which has been pushed to its extremes) with "Extreme Ultraviolet Lithography" (EUV(L)), which uses a 13.5nm light source (just watch https://www.youtube.com/watch?v=5yTARacBxHI - it's both fascinating and terrifying the trouble that EUV brings) and mirrors, since pretty much all matter is opaque to EUV light.
It's a bit maddening to think that features that much smaller than the wavelength of light used can be patterned with that light source, but we've made a science out of it over the past decade with multipatterning and immersion lithography.
Not just science, but working, high-volume, commercially viable production processes. The science itself is extremely impressive, but then they layer commercial requirements on top and still pull it off. Over and over again.
>- it's both fascinating and terrifying the trouble that EUV brings
It is only peanuts in comparison to what is to come. The "nuclear option" on the table is to build a whole fab around a freaking synchrotron light source.
I watched the video recommended by awalton, and it's definitely as insane as he says it is. I can't believe a synchrotron would be that much more expensive, considering that a single storage ring can have an arbitrary number of output ports.
The main issue with using a synchrotron or Free Electron Laser EUV source might not be so much about the technology, but more about the mindset of the clients (Samsung, TSMC, Global Foundries, Intel). Up until now lithography has been something they would buy as a "box" which would be shipped to their fab, "plugged in" and commissioned.
A Free Electron Laser EUV source would be a facility on its own, similar in size to a small power plant built adjacent to your fab and multiplexed to a dozen or so EUV wafer scanners. That's quite a different endeavor.
And after that, and a few EUV multi-patterning litho generations, lies a pitch-black abyss called Deep X-Ray Lithography, the only thing that can push things closer to 1nm.
Great video. Those machines are monsters: a 2m-high machine that sits inside the ASML machine and is fed laser light from something the size of a shipping container in the basement.
So, what does determine the "marketing" nomenclature for a given process node, if not a specific feature size or wavelength?
I find it impossible to believe that some obscure semiconductor industry people get together in a hidden smoke-and-particulate-matter-free room, come up with a completely-random number, and name their process after it.
It's worse than that... there's not just an industry group (ITRS), there's also the foundries making it up themselves. They're buying similar equipment and selling similar transistor densities (except for Intel, which has finer geometries and higher densities for a given node), but few are directly comparable on density, clock rate, or power consumption after 28nm.
Scaling really started falling apart in the late 90s around 0.25µm, and then (incidentally) about the time CPU MHz stopped scaling... by 65nm both gate and transistor length got wonky. Then after 28nm they moved to FinFETs and multi-patterning, making comparisons even more difficult.
It used to be a specific feature size: the length of the gate. But even as the wheels came off of that pretty quickly, the convention stayed simple and familiar: each process node halves the size of the one before it - you can print double the number of transistors at the smaller node. To make the math work out, each process node "name" is sqrt(1/2) ~= 0.7x the previous one, which gives you the easy-to-follow node names from 3um down to about 3nm (at which point people tend to switch to ångströms or picometers, since that keeps the decimal point in a friendlier place).
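For the curious, the 0.7x-per-node convention is easy to reproduce (a minimal sketch; the printed values land close to the familiar node names):

    import math

    # Each full node doubles density, so linear dimensions shrink by sqrt(1/2) ~= 0.707.
    shrink = math.sqrt(0.5)

    node_nm = 3000.0  # start at the 3 um node
    for _ in range(20):
        print(f"{node_nm:7.1f} nm")
        node_nm *= shrink
    # Prints ~3000, 2121, 1500, 1061, 750, 530, 375, 265, 188, 133, 94, 66,
    # 47, 33, 23, 17, 12, 8.3, 5.9, 4.1 - tracking the familiar 3um, 1.5um,
    # 0.25um, 0.18um, 0.13um, 90nm, 65nm, 45nm, 32nm, 22nm, 16nm, 10nm, 7nm...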
Even as things got sticky due to advancements in transistor construction meaning that old metrics like gate length were obsolete, we were still roughly following the trend laid out ahead of us for decades. A new process would double your density, letting you roughly cut the size of your old chip in half.
So pretty quickly, TSMC decided they'd just advance the node table despite not actually doubling density as you'd expect. "Next generation" 20nm processes became "16nm" and "14nm" in marketing docs, despite the process capabilities not changing that much (or even at all, in some cases), with the only thing close to a justification being that "FinFETs are different. They perform better than planar FETs, so we should be able to give them a new node name." GloFo and Samsung quickly followed their lead as they began FinFET manufacturing.
And apparently, since nobody blinked an eye or set off alarm bells about these fabs basically lying about their capabilities, they got away with it and are now continuing the trend downwards. "10nm" processes from TSMC, Samsung and GloFo measure up to Intel's 14nm, and now "7nm" processes measure up to Intel's 10nm. It's actually pretty surprising Intel hasn't thrown up its hands and joined in on the fun, or even come up with their own marketing spin on it yet. "Intel's new 7nm-xtreme manufacturing process (actually it's just 10nm+)" or whatever.
Who exactly would be responding to these alarm bells?
If I'm making a chip, I want to use the node that best fits my product - it might not even be the latest one. But if they offer me 2x the memory density and 1.6x the logic density, all at the same or lower power, I'll take it! Sure, the tracks are huge and I need a huge, tall stack-up, but that's not really my problem. I really don't care what marketing speak they use to refer to it. I have zero interest in how long the gate is; I care about what chip I can make with this.
And Intel can do what they want. Their fab offering is very uncompetitive.
With current technology, a wafer is 300mm (about 12") in diameter, and the aperture size limiting the die size is around 900 mm^2, so a chip is at most about 30x30mm. That works out to roughly 60-70 chips per wafer at maximum chip size (e.g. a high-end GPU). Most chips are a lot smaller, but there's a ballpark figure for you.
Chips are square but wafers are round, so there's a lot more wasted area with large chips.
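If you want to play with the numbers, the classic first-order gross-die-per-wafer estimate subtracts a term for the partial dies lost around the circular edge (a sketch; edge exclusion and scribe lines are simplified away):

    import math

    def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
        """First-order estimate: wafer area over die area, minus an edge-loss term."""
        d = wafer_diameter_mm
        s = die_area_mm2
        return int(math.pi * d * d / (4 * s) - math.pi * d / math.sqrt(2 * s))

    print(dies_per_wafer(300, 900))  # reticle-limited ~30x30mm die -> ~56, near the ballpark above
    print(dies_per_wafer(300, 100))  # small 10x10mm die -> ~640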
> Semiconductor manufacturing improvements like this really have enabled the whole tech world improvements of the last few decades.
I would argue that for software it had the opposite effect, and has led to layers upon layers of crap. No need to ever fix that when you can rely on the next cpu being twice as fast.
Don't forget that optimization also costs time. Performance improvements just shifted the equilibrium between optimization and output. In the world where every programmer was forced to optimize their code, we'd have a lot less code.
Fun stuff. I've got a key chain with an 80186 die, and a couple of the binders from Microprocessor Forum when they had dies on the cover. The features on those chips are "huge" compared to these things. I suspect a die using one of these processes that was big enough (say 100 sq mm) would just look like a mirror with chromatic interference lines running across it.
Ah yes you're right of course. Has there been anything in the past few years made on modern processes with the die actually visible? Desktop/laptop CPUs and GPUs have all been flip chip for at least the past decade, mobile SoCs are package on package, most other stuff is encased in plastic.
Some of the RF processes. The HMC6300 is a mmWave flip chip with visible (to the eye) structures. Even then, it’s limited to passive structures, and maybe the PA output transistors.
when we went above ~3 metal layers, upper layers started to be used as power and ground planes - that tends to mean you can't really see much any more
Good question. Node size used to have a pretty firm definition, but it's malleable these days.
Regardless of the absolute number, you can think of it as the width of the finest pencil you have available to make a drawing with. You can still use fatter pencils, and in many cases you'd want to... shading in a large area, for example. But having the finer pencil lets you put in finer details.
Some things just won't work if they are too small, though, so even though you have a fine, sharp pencil available to use, some things will not change.
Regarding reliability over time, electromigration is the only thing I know of in a typical IC that causes degradation. It is affected by the size of conductors, so potentially things could get worse. It's a well understood phenomenon though so it's usually mitigated by design rules.
Non volatile memories have their own degradation problems.
So one of the most important questions when producing a >10nm process is yield. I can't really find yield numbers in the article; the only thing they mention is the SRAM chips yielding "consistent double digits" - so, pessimistically, a consistent 10% yield. That's not a "good" yield.
Also, IIRC, SRAMs are just about the simplest blocks you can make, which makes them the simplest lithography-wise. Simple blocks to make, so they're usually good candidates for exploring a new process...
7nm is hard (as I've said in a previous post, you're just fighting physics at that point, never mind all the issues you start facing with crosstalk etc.), so color me skeptical that they've really nailed down a "volume" process for it just yet.
This is not surprising - this is trade secret information as it directly translates to profit margin for the process. AFAIK nobody publishes process yield information other than vague handwavy percentages.
You should also expect percent yields of most processes these days to be pretty poor (numerically): each step brings its percent breakage along, so at 100+ steps for these current processes, you're losing a significant fraction of chips. (Even if all of your steps yield 99%, you're down to 36.6% yield after 100 steps, so it's really important to reduce the number of steps as well as their complexity.)
For a process to be profitable it doesn't need to have a perfect, or even "good", yield though (and so-called "perfect wafers" have been vanishingly rare since about the 14nm node). I've read documentation about very old processes with yields of 60% that were considered "good" at the time, so 10% might not be a terrible overall yield at this process node (e.g. taking our 36.6% example above, 10% total yield would be 27% of theoretical - certainly room for improvement, but better than many pharmaceutical processes).
This underlines the importance of the switch to EUV - a dozen or more DUV multi-patterning steps can be collapsed into a single EUV step, which eliminates the losses along those multi-patterning steps.
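The arithmetic behind both points is simple enough to sketch (hypothetical per-step yields, just to show the compounding):

    def line_yield(per_step_yield: float, steps: int) -> float:
        """Compound yield across a sequence of process steps."""
        return per_step_yield ** steps

    # 100 steps at 99% each compounds down to ~36.6%, as in the example above.
    print(f"{line_yield(0.99, 100):.1%}")           # 36.6%

    # Collapsing a dozen DUV multi-patterning steps into one EUV step
    # (same hypothetical 99% per step) claws back a chunk of that loss:
    print(f"{line_yield(0.99, 100 - 12 + 1):.1%}")  # ~40.9%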
Agreed. About the only number I'd expect from a foundry is a defect density per unit area, which can be used to derive a yield estimate. It's die-size and design-quality dependent, so the quoted D0 is going to be a best case.
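A common way to turn a quoted D0 into a die-yield estimate is a Poisson model (a sketch with a made-up defect density; real foundries use fancier models such as Murphy or negative binomial):

    import math

    def poisson_yield(d0_per_cm2: float, die_area_mm2: float) -> float:
        """Poisson yield model: Y = exp(-D0 * A)."""
        return math.exp(-d0_per_cm2 * die_area_mm2 / 100.0)  # mm^2 -> cm^2

    d0 = 0.5  # hypothetical defects/cm^2
    print(f"{poisson_yield(d0, 100):.1%}")  # 100 mm^2 die -> ~60.7%
    print(f"{poisson_yield(d0, 600):.1%}")  # 600 mm^2 die -> ~5.0%: big dies hurt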
10% most certainly is a terrible yield. If TSMC is kicking off volume production, then they have a mostly-SRAM test vehicle somewhere, probably 288Mb or larger, yielding at 65% or better on a consistent basis. A 10% wafer would be a financial disaster.
Does it matter though? What matters is cost, which is the result of yield and directly relates to how many customers will use 7nm. Yield is not a constant; it changes and improves over time.
60% Chance Apple will have a new iPad Pro in WWDC using 7nm, and 90% Chance Apple will use 7nm in their next iPhone.
Whatever the yield is, it is good enough for ~200M iPhones next year, and the cost is acceptable to close to 50 customers in the next few months. And assuming Apple is going to go 7nm (I don't see why not), this volume will likely be more than double what Intel will ship on 10nm in the next 12 months.
Yields will suffer with the current DUV process, which requires heavy multi-patterning. The mentioned switch to EUV reduces multi-patterning and should improve yield. But EUV has had its own challenges.
Good catch, I definitely meant to write <10nm, but I guess it also applies to >10nm too - basically it always applies. It doesn't matter if you can make an Xnm process if it's not mass-producible (unless your market is specialty chips, of course).
Yeah, you should take the fact they're not using EUV as the tremendous boulder of salt that it is: these are actually 10nm parts. Node names continue to be washed into Megahertz War-style naming schemes.
TSMC's process lines up pretty well with Intel's published numbers for their P1274 10nm process (contacted gate pitch: 54nm for both; minimum metal pitch: TSMC 40nm vs. Intel 36nm; high-density SRAM bitcell size: TSMC 0.027 µm^2 vs. Intel 0.0312 µm^2; etc.).
What we've learned out of all of this is that Intel's struggles to push tooling towards EUV have benefited the industry at large, as everyone has spent so much effort there that existing processes and tooling have become cheaper and faster to iterate. Intel has certainly slipped from its all-time lead of almost two process generations, but they still appear to be about a generation (18-ish months) ahead of TSMC by published numbers.
But, who actually cares about the numbers, Marketing says 7nm so it's 7nm.
It’s not 7nm in any feature size; it’s several times that.
It’s achievable, usually with multiple patterning and 193nm immersion lithography: the 193nm vacuum wavelength comes down to about 145nm in water, and they may well be using something other than water; temperature also affects the refractive index.
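The wavelength arithmetic is just lambda over the fluid's refractive index (a sketch; water's visible-light index is ~1.33, while at 193nm it is closer to 1.44):

    wavelength_vacuum_nm = 193.0
    print(wavelength_vacuum_nm / 1.33)  # ~145 nm, the figure quoted above
    print(wavelength_vacuum_nm / 1.44)  # ~134 nm using water's index at 193nm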
Well... the article didn't actually say what the line width is. It just said that the process is labeled 7nm, which doesn't necessarily correspond to any feature actually being 7nm in size. It just means it's that much smaller than the previous process.
Don't get me wrong, it's still amazing, and the line widths are still going to be much less than 193 nm. I just wish they'd say what some standard feature size actually is.
I assume the production is already sold out for the next year for Bitcoin/Ethereum miner ASICs...
I am wondering how long we will be stuck with 7nm/10nm (Intel) tech; maybe we will see the last silicon-based improvements in CPUs & GPUs within the next two decades...
This makes me hope that maybe all those chipmakers will finally move on from that extremely long-lived 28nm node. It has been heavily used since 2011-2012 and even companies like Qualcomm have been using it as recently as last year in the lower-end range of products. Companies like Allwinner, Rockchip, Amlogic have yet to release a single chip on newer nodes.
What this means is that the 28nm node has been the cheapest now for 7 years running. Usually, when a new node came up, the cost per transistor was initially higher while yields slowly improved, but after ~1-2 years the new node was always more cost-effective per transistor than older nodes.
This wasn't the case with 28nm, which has stuck around for 7 whole years and still seems to be the most cost-effective. But surely, now that 7nm is available, that won't be the case for much longer. Good riddance.
28nm will be around for a long time. There are SoCs still at 130 or 180nm because, for what they need, those are the cheapest way to go. 28nm is as far as you go with planar transistors and single (or is it double?) patterning. The 20-22nm node is not popular because it kind of sucks for both planar and FinFET designs. I think 14-16nm will be around for a long time too, because of the increased complexity of going lower.
While the end of scaling is a bummer, it will be nice to have the industry settle on a few common nodes that are mature and very well understood.
I have a question: why does the lithography machine print half a chip at the edge of the wafer, like in the article's picture? I presume the wafers are a standard 300mm size, so is it just that the designers were too lazy to remove the half chips on the edges from their mask template designs?
It makes it so all of the chips they will use have all of their neighbors, so the process (and thus electrical properties) will be more uniform between chips near the edge and chips near the center.
You could slice a cylinder the other way. That would give you rectangles, with the short dimension varying. I'm not so sure you'd save any space, but recycling the unused portion might be more reasonable to do.
The problem is that it is a single crystal of silicon. To keep it atomically flat, you cannot cut it at an arbitrary angle. You must cut it across the axis the crystal grows along, or the surface will be rough and the wafer will likely crack in half in a stiff breeze.
I suppose you could go for some more on-axis cuts, but the crystal planes are pretty restrictive on what you can do. You'd end up having super long and narrow pieces and such.
The orientation is determined by the seed crystal. You can choose whatever you want.
There will of course be long and skinny pieces. These will be much easier to recycle than the unused portions around the edge of a circular wafer; they need not leave the facility that cuts the wafers.
The end result is rectangles of a standard size in one dimension and varying size in the other dimension.
The manufacturing process for semiconductors uses a standardized wafer size (in a given fab) so everything can be automated and simplified as far as possible. Having even something like just two wafer sizes makes a lot of the tooling/storage/transfer more complicated and expensive, so it's best to stick to one size and shape.
In order to have uniform rectangular slices of the silicon crystal, you'd have to slice off horizontal cylinder segments. And that would defeat the purpose, because if you had just used circular wafers you'd have been able to get a couple of dies out of that area.
If your chips aren't very large, you can fit multiple copies in the reticle. E.g., if the litho machine has a 40mm by 50mm reticle, and you're making a 10mm by 10mm die, you can expose 20 dice at a time. It's usually worth it to go out to the edges even if you only get a few complete chips from the exposure.
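The reticle packing is simple division (using the hypothetical 40mm by 50mm field from the example above):

    reticle_w, reticle_h = 40, 50   # mm; hypothetical field size from the example above
    die_w, die_h = 10, 10           # mm
    print((reticle_w // die_w) * (reticle_h // die_h))  # 20 dice per exposure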
I can't find a wafer picture for Nvidia's GV100, but presumably a giant chip like that uses the whole reticle, so I don't expect to see partial chips on its wafer.
Semiconductor manufacturing improvements like this really have enabled the whole tech world improvements of the last few decades.