TSMC Kicks Off Volume Production of 7nm Chips (anandtech.com)
237 points by dbcooper on April 24, 2018 | hide | past | favorite | 102 comments


Wow. I still remember being amazed when micron widths came out. 7nm is close to a miracle.

Semiconductor manufacturing improvements like this really have enabled the whole tech world improvements of the last few decades.


There isn't any feature of the transistors that is 7nm wide; the smallest feature IIRC is the interconnect, which is about 30-40nm. "7nm" is marketing wankery. https://en.wikichip.org/wiki/7_nm_lithography_process


Not quite accurate. There are a number of dimensions involved. Interconnect is definitely not the smallest.

This article shows some examples: https://www.semiwiki.com/forum/content/3046-new-frontiers-sc...


If you look at the SRAM bit cells, they halved in area from 28nm to 14nm, and halved again from 14nm to 7nm. This is likely due to metal interconnects or other features only halving in spacing going from the 28nm process to 7nm. So the scaling almost appears to apply to only one dimension (full linear scaling of both dimensions would have quartered the cell area at each step, resulting in a 16-fold total reduction). But it is still scaling nevertheless.
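The scaling arithmetic here can be sketched in a few lines (relative areas only; these are illustrative normalized numbers, not real cell dimensions):

```python
# Relative SRAM cell area going 28nm -> 14nm -> 7nm (two node steps).
start_area = 1.0  # area at 28nm, normalized

# Ideal 2D scaling: both dimensions halve per step, so area quarters.
ideal = start_area * (1 / 4) ** 2      # 16-fold reduction over two steps

# Observed: area only halves per step, as if only one dimension scaled.
observed = start_area * (1 / 2) ** 2   # 4-fold reduction over two steps

print(ideal, observed)  # 0.0625 0.25
```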


Wait for 3d sram


There are different feature sizes in the Z direction and different feature sizes in the XY plane. When it comes to XY, the smallest feature size is probably the gate length. When it comes to Z, the gate oxide thickness is probably the smallest at 1-2nm. We have something called atomic layer deposition, which gives control down to single atomic layer thicknesses.


Well that's annoying. How is anyone supposed to judge anything if they just make numbers up?


Smallest feature size no longer correlates with transistor density in recent years. So manufacturers just use the number to convey increases in transistor density.

See https://en.wikichip.org/wiki/technology_node#History


That's why they do it. The feature sizes of Intel's 10nm process are similar to or slightly smaller than everyone else's 7nm. Compare https://en.wikichip.org/wiki/7_nm_lithography_process and https://en.wikichip.org/wiki/10_nm_lithography_process .


I think bits of SRAM per unit of area is a benchmark that some people use, as it corresponds to something that's meaningful and measurable.


The problem is even that is flawed as things are going three dimensional.


So just start measuring in bits of SRAM per unit of volume?


That's a problematic metric as well, because we don't know how much area the assist circuitry takes up. Modern high-density SRAM cells cannot operate as-is; they need an assist circuit to compensate for variations. For example, for Intel's 10nm SRAM they claim 77% area efficiency (https://fuse.wikichip.org/news/525/iedm-2017-isscc-2018-inte...). But without those values, just bits/mm2 or so is problematic.


Hopefully their customers are more sophisticated than designing their chip based on a single number.


Well, after a while it is sort of public knowledge that 10nm Intel is about the same as 12nm AMD, etc.

But yes, the industry needs standardized advertising practices badly.


Uh, it's the other way around. 10nm Intel is 7nm GlobalFoundries. 22nm Intel is superior to 14nm GlobalFoundries.

Intel is the one that doesn't fudge their numbers.


Intel still fudges their numbers. They just fudge their numbers less than everyone else.


How so?

The numbers don't seem to have much relation at all to the real process dimensions.


Even Intel's numbers are just marketing, however.

Everyone is fudging the numbers...

SemiWiki has a good overview of Intel 10nm vs GF 7nm:

https://www.semiwiki.com/forum/content/7191-iedm-2017-intel-...


For a while I think it was the other way around: Intel's processes were, in practice, actually substantially less dense than their competitors at the same nominal process node size.


>standardized advertising practices

Oh man, I wish this would happen in any industry...sadly, advertising doesn't seem to see any benefit. Why standardize when you can differentiate?


Thanks, kinda ruined my day but I learned something.


I've always assumed that "XX nm" doesn't represent the geometry of any particular feature of the end product, but something related to the wavelength of the radiation used for the imaging process.

Maybe it corresponds to the principal emission line of the light source (synchrotron?). In spectral terms, 7 nm is near the border between hard UV radiation and soft X-rays.


This is not correct at all.

Most chips made today are created with multipatterning processes using 193nm lasers and optical masks, known in the field as "Deep Ultraviolet Lithography" (DUV(L)). The industry is pushing towards replacing DUV (which has been pushed to its extremes) with "Extreme Ultraviolet Lithography" (EUV(L)), which uses a 13.5 nm light source (just watch https://www.youtube.com/watch?v=5yTARacBxHI - it's both fascinating and terrifying the trouble that EUV brings) and mirrors since pretty much all matter is opaque to EUV light.

It's a bit maddening to think that features that much smaller than the wavelength of light used can be patterned with that light source, but we've made a science out of it over the past decade with multipatterning and immersion lithography.


"but we've made a science out of it"

Not just science, but working, high-volume, commercially viable production processes. The science itself is extremely impressive, but then they added commercial requirements on top and pulled it off. Over and over again.


Imagine presenting the idea for that light source. "How about letting droplets of molten tin fall through a vacuum and then blasting them with CO2 lasers."

BTW: The light must be blinking, right? In time with the frequency of the droplets.


>- it's both fascinating and terrifying the trouble that EUV brings

It is only peanuts in comparison to what is to come. The "nuclear option" on the table is to build a whole fab around a freaking synchrotron light source.


I watched the video recommended by awalton, and it's definitely as insane as he says it is. I can't believe a synchrotron would be that much more expensive, considering that a single storage ring can have an arbitrary number of output ports.


The main issue with using a synchrotron or Free Electron Laser EUV source might not be so much about the technology, but more about the mindset of the clients (Samsung, TSMC, Global Foundries, Intel). Up until now lithography has been something they would buy as a "box" which would be shipped to their fab, "plugged in" and commissioned.

A Free Electron Laser EUV source would be a facility of its own, similar in size to a small power plant built adjacent to your fab and multiplexed to a dozen or so EUV wafer scanners; that's quite a different endeavor.


And after that, and a few EUV multiple-patterning litho generations, lies a pitch-black abyss called Deep X-Ray Lithography, the only thing that can push things closer to 1nm.


Great movie. Those machines are monsters. A 2m high machine that sits inside the ASML machine. And is fed laser light from something like a shipping container in the basement.


So, what does determine the "marketing" nomenclature for a given process node, if not a specific feature size or wavelength?

I find it impossible to believe that some obscure semiconductor industry people get together in a hidden smoke-and-particulate-matter-free room, come up with a completely-random number, and name their process after it.


It's worse than that... it's not just an industry group (ITRS) making it up, the foundries are doing it themselves too. They're buying similar equipment and selling similar transistor densities (except for Intel, which has finer geometries and higher densities for a given node), but few are directly comparable on density, clock rate, or power consumption after 28nm.

https://en.m.wikipedia.org/wiki/International_Technology_Roa...

Scaling really started falling apart in the late 90s around .25u, and then (incidentally) about the time CPU MHz stopped scaling... by 65nm both gate and transistor length got wonky. Then after 28nm they moved to FinFETs and multi-patterning, making comparisons even more difficult.

https://semiengineering.com/a-node-by-any-other-name/


It used to be a specific feature size: the length of the gate. But even as the wheels came off of that pretty quickly, the convention stayed pretty simple and familiar: each process node halves the area of the one before it - you can print double the number of transistors at the smaller process node. To make the math work out, that means each process node "name" is sqrt(1/2) ~= 0.7x the last, which gives you the easy-to-follow node names from 3um to about 3nm (when people tend to switch to ångströms or picometers since it moves the decimal kindly).
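As a sketch of that naming convention, repeatedly multiplying by sqrt(1/2) generates roughly the familiar sequence (the rounding here is naive, so the values only approximate the historical marketing names):

```python
import math

shrink = math.sqrt(0.5)  # ~0.707x per node name

node = 3000.0  # 3um, expressed in nm
names = []
while node >= 3:
    names.append(round(node))
    node *= shrink

# Every second entry halves the linear number,
# i.e. nominal density doubles per node name.
print(names)
```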

Even as things got sticky due to advancements in transistor construction meaning that old metrics like gate length were obsolete, we were still roughly following the trend laid out ahead of us for decades. A new process would double your density, letting you roughly cut the size of your old chip in half.

...Until a few fab companies just up and decided "You know what, fuck it, we can't actually catch up to Intel, but if what if we just... say that we did?" (...and I wish I was kidding. Take a look: https://m.eet.com/images/eetimes/2013/10/1319679/20-Value.jp... vs https://m.eet.com/images/eetimes/2013/10/1319679/16-Value.jp...).

So pretty quickly, TSMC decided that they'd just advance the node table, despite not actually doubling the density as you'd expect. "Next generation" 20nm processes became "16nm" and "14nm" on marketing docs, despite the process capabilities not changing that much (or even at all in some cases), with the only thing close to a justification being that "FinFETs are different. They perform better than planar FETs so we should be able to give them a new node name." GloFo and Samsung quickly took the bait and followed their lead as they began FinFET manufacturing.

And apparently since nobody blinked an eye or set off alarm bells about these fabs basically lying about their capabilities, they got away with it and are now continuing the trend downwards. "10nm" processes from TSMC, Samsung and GloFo measure up to Intel's 14nm, and now "7nm" processes measure up to Intel's 10nm. It's actually pretty surprising Intel hasn't thrown up its hands and joined them on the fun, or even come up with their own marketing spin on it yet. "Intel's new 7nm-xtreme manufacturing process (actually it's just 10nm+)" or whatever.


Who exactly would be responding to these alarm bells?

If I'm making a chip I want to use the node that best fits my product. Might not even be the latest one. But if they offer me 2x the memory density, 1.6x the logic density, all at the same/lower power - I'll take it! Sure, the tracks are huge and I need a huge tall stack-up but that's not really my problem. I really don't care what marketing speak they use to refer to it. I have zero interest in how long the gate is, I care about what chip I can make with this.

And Intel can do what they want. Their fab offering is very uncompetitive.


Intel's nm numbers are also pure marketing, I believe.


He says they hope to do 100 wafers per hour. I wonder how many dies will be on a wafer.


With current technology, a wafer is 300mm (12") in diameter and the reticle limits the die size to around 900 mm^2, so a chip is at most about 30x30mm. That works out to roughly 60-70 chips per wafer at maximum chip size (e.g. a high-end GPU). Most chips are a lot smaller, but there's a ballpark figure for you.

Chips are square but wafers are round, so there's a lot more wasted area with large chips.
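A back-of-the-envelope version of that estimate, using a common first-order die-per-wafer approximation (the die size is illustrative, taken from the reticle limit above):

```python
import math

wafer_diameter = 300.0  # mm, standard wafer size
die_side = 30.0         # mm, near the ~900 mm^2 reticle limit
die_area = die_side ** 2

# Naive upper bound: wafer area / die area (ignores edge loss).
upper_bound = math.pi * (wafer_diameter / 2) ** 2 / die_area

# First-order correction for partial dies lost at the round edge:
# N ~= pi*(d/2)^2/A - pi*d/sqrt(2*A)
estimate = upper_bound - math.pi * wafer_diameter / math.sqrt(2 * die_area)

print(round(upper_bound), round(estimate))  # 79 56
```

Large dies waste proportionally more of the round wafer, which is why the corrected number lands well below the naive bound.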


In a sense it has been the "free lunch" of the industry, much like stored energy in hydrocarbons has been for manufacturing in general.

But the supposed exponential curve that stock markets love to salivate over seems to once again be turning into an S-curve.


> Semiconductor manufacturing improvements like this really have enabled the whole tech world improvements of the last few decades.

I would argue that for software it had the opposite effect, and has led to layers upon layers of crap. No need to ever fix that when you can rely on the next cpu being twice as fast.


Don't forget that optimization also costs time. Performance improvements just shifted the equilibrium between optimization and output. In the world where every programmer was forced to optimize their code, we'd have a lot less code.


From your lips to God's ears.


The host of this talk said he doesn't believe 5nm will be cracked (barring a major electronics breakthrough):

https://www.youtube.com/watch?v=JpgV6rCn5-g&t=1136s


Fun stuff. I've got a key chain with an 80186 die, and a couple of the binders from Microprocessor Forum when they had dies on the cover. The features on those chips are "huge" compared to these things. I suspect a die using one of these processes that was big enough (say 100 sq mm) would just look like a mirror with chromatic interference lines running across it.

This announcement also bookends nicely with the first home made IC one (https://hackaday.com/2018/04/24/first-lithographically-produ...)


ICs have just looked like mirrors for a while now.

See this shot of a 14nm Vega die on the left and 28nm Fury die on the right: https://www.techpowerup.com/reviews/AMD/Radeon_RX_Vega_Previ...


I think you linked to a flip chip---you would expect a mirror finish from the bulk silicon side.


Ah yes you're right of course. Has there been anything in the past few years made on modern processes with the die actually visible? Desktop/laptop CPUs and GPUs have all been flip chip for at least the past decade, mobile SoCs are package on package, most other stuff is encased in plastic.


der8auer flipped the Zen dies in a Threadripper over and ground them down to see what's inside.

The video gives a good impression of the dimensions of the die and the structures inside.

https://youtu.be/N-uKQ6RfUdk


Some of the RF processes. The HMC6300 is a mmWave flip chip with visible (to the eye) structures. Even then, it’s limited to passive structures, and maybe the PA output transistors.



When we went above ~3 metal layers, the upper layers started to be used as power and ground planes - that tends to mean that you can't really see much anymore.


I hadn't thought of that. It will make the chip-decapping people sad as well, because you really need X-rays to see through the metal layers.


I have an IBM pin with a 1 Mbit chip. It was a big deal in the 80’s


I wonder how reliable and durable 7nm chips will be. We might start to see chip failures even after a few years of use.

https://semiengineering.com/transistor-aging-intensifies-10n...


Good question. Node size used to have a pretty firm definition, but it's malleable these days.

Regardless of the absolute number, you can think of it as defining the smallest width of pencil you have available to make a drawing with. You can still use fatter pencils, and in many cases you would want to... shading in a large area, for example. But having the smaller pencil lets you put in finer details.

Some things just won't work if they are too small though, so even though you have a fine sharp pencil available to use, some things will not change.

Regarding reliability over time, electromigration is the only thing I know of in a typical IC that causes degradation. It is affected by the size of conductors, so potentially things could get worse. It's a well understood phenomenon though so it's usually mitigated by design rules.

Non volatile memories have their own degradation problems.



Note that the TSMC 7nm process is similar in feature size to Intel 10nm.

https://en.m.wikipedia.org/wiki/7_nanometer#7_nm_process_nod...


Strange that there's no citation for that claim.. do you have one?


Both Transistor Gate Pitch (nm) and Interconnect pitch (nm) are same or smaller for Intel's 10nm process compared to TSMC.

https://en.wikipedia.org/wiki/10_nanometer


Here’s a comparison of the GF 7nm and Intel 10nm. It’s the same for TSMC afaik https://www.semiwiki.com/forum/content/7191-iedm-2017-intel-...


Looks like Intel isn't having much luck there.


So one of the most important questions when producing >10nm process is yield. In the article I can't really find yield numbers, and the only thing they mention is the SRAM chips getting "consistent double digits" -- so pessimistically consistently 10% yield. That's not "good" yield.

Also, IIRC SRAMs are just about the simplest blocks you can make, which means they're the simplest lithography-wise. Simple blocks to make, so they're usually good candidates for exploration of a new process...

7nm is hard (I've said in a previous post, but you're just fighting physics at that point, nevermind all the issues you start facing with crosstalk/etc), color me skeptical that they've really nailed down a "volume" process for doing it just yet.


> I can't really find yield numbers

This is not surprising - this is trade secret information as it directly translates to profit margin for the process. AFAIK nobody publishes process yield information other than vague handwavy percentages.

You should also expect percent yields of most processes these days to be pretty poor (numerically): each step brings its percent breakage along, so at 100+ steps for these current processes, you're losing a significant fraction of chips. (Even if all of your steps yield 99%, you're down to 36.6% yield after 100 steps, so it's really important to reduce the number of steps as well as their complexity.)

For a process to be profitable it doesn't need to have a perfect, or even "good", yield though (and so-called "perfect wafers" have been vanishingly rare since about the 14nm process step). I've read documentation about very old processes with yields of 60% that were considered "good" at the time, so 10% might not be a terrible overall yield at this process node (e.g. if we take our 36.6% yield example above, a 10% total yield would be 27% of theoretical - certainly room for improvement, but better than many pharmaceutical processes).
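The compounding arithmetic in that example, spelled out:

```python
# Per-step yield compounds multiplicatively across all process steps.
per_step_yield = 0.99
steps = 100

overall = per_step_yield ** steps          # ~0.366 after 100 steps
fraction_of_theoretical = 0.10 / overall   # ~0.27 for a 10% total yield

print(f"{overall:.3f} {fraction_of_theoretical:.2f}")  # 0.366 0.27
```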

This underlines the importance for the switch to EUV - a dozen or more DUV multi-patterning steps can be dropped down to a single EUV step, which eliminates losses along those multi-patterning steps.


Agreed. About the only number I'd expect from a foundry is a defect density per unit area, which can be used to derive a yield estimate. It's die-size and design-quality dependent, so the quoted D0 is going to be a best case.

10% most certainly is a terrible yield. If TSMC is kicking off volume production, then they have a mostly-SRAM test vehicle somewhere, probably 288Mb or larger, yielding at 65% or better on a consistent basis. A 10% wafer would be a financial disaster.


Does it matter though? What matters is cost, because that is the result of yield and directly relates to how many customers will use 7nm. Yield is not a constant and changes/improves over time.

60% Chance Apple will have a new iPad Pro in WWDC using 7nm, and 90% Chance Apple will use 7nm in their next iPhone.

Whatever the yield is, it is good enough for ~200M iPhones next year, and the cost is acceptable to close to 50 customers in the next few months. And assuming Apple is going to go 7nm (I don't see why not), this volume will likely be more than double what Intel ships on their 10nm in the next 12 months.


Yields will suffer with the current DUV process, which requires many multi-patterning steps. The mentioned switch to EUV reduces multiple patterning and should improve yield. But EUV has had its own challenges.


I think you mean <10 nm?


Good catch, I definitely meant to write <10nm but I guess it also applies to >10nm too, basically it always applies. Doesn't matter if you can make Xnm process if it's not mass producable (unless your market is specialty chips of course)


It's just stunning how they have been able to achieve 7nm features. With a 193nm wavelength laser no less.


Yeah, you should take the fact they're not using EUV as the tremendous boulder of salt that it is: these are actually 10nm parts. Node names continue to be washed into Megahertz War-style naming schemes.

TSMC's process lines up pretty well with Intel's published numbers for their P1274 10nm process (Contacted Gate Pitch T & I: 54 nm, Minimum Metal Pitch T: 40nm I: 36 nm, High-Density SRAM bitcell size T: 0.027 µm^2 I: 0.0312 µm^2, etc).

What we've learned out of all of this is that Intel's struggles to push tooling towards EUV have benefited the industry at large, as everyone's spent so much effort there that existing processes and tooling have become cheaper and faster to iterate. They've certainly fallen behind their all-time lead of almost two process generations, but they still appear to be about a generation (18-ish months) ahead of TSMC by published numbers.

But, who actually cares about the numbers, Marketing says 7nm so it's 7nm.


There's a big difference when you compare making 7nm features with about 100 process steps versus EUV doing it in something like 4 steps.


EUV is still going to be 60+ steps.


7nm is really just a node name. Gate length is probably about 20nm. Still very impressive though!


The gate pitch is 54mm for TSMC and 56nm for Global Foundries @ 7nm.


That's the distance between gate midpoints, not their characteristic dimension.


> 54mm

Typo for nm, presumably :-).


The funny thing about 54mm is that this is pretty close to the original gate pitch, back when we had to use vacuum tubes or relays.


It’s not 7nm in any feature size; it’s several times that.

It’s achievable usually with multiple patterning and immersion lithography: the 193nm wavelength (in vacuum) shortens to about 134nm in water, they likely are using something other than water, and you also have temperature, which affects the refractive index.


Well... the article didn't say what the line width, say, actually is. It just said that the process is labeled 7nm, which doesn't necessarily correspond to any features actually being 7nm in size. It just means it's that much smaller than the previous process.

Don't get me wrong, it's still amazing, and the line widths are still going to be much less than 193 nm. I just wish they'd say what some standard feature size actually is.


1. Not really 7nm, only the fin width is ~7nm, metals, etc are more like 54nm.

2. Immersion lithography (water raises the effective refractive index) + multiple patterning + computational lithography.


I assume the production is already sold out for the next year for Bitcoin/Ethereum miner ASICs...

I am wondering for how long we will be stuck with 7nm/10nm (Intel) tech; maybe we will see the last silicon-based improvements in CPUs & GPUs in the next two decades...


ASICs usually trail leading edge nodes by several years.


According to what evidence? Due to yield tolerance, they tend to be among the first made, even getting made during risk production.


This makes me hope that maybe all those chipmakers will finally move on from that extremely long-lived 28nm node. It has been heavily used since 2011-2012 and even companies like Qualcomm have been using it as recently as last year in the lower-end range of products. Companies like Allwinner, Rockchip, Amlogic have yet to release a single chip on newer nodes.

What this means is that the 28nm node has been the cheapest now for 7 years running. Usually, when a new node came up, its cost per transistor was initially higher while the yields slowly improved. But after ~1-2 years, the new node was always more cost-effective than older nodes, per transistor.

This wasn't the case with the 28nm, which stuck around for 7 whole years, and still seems to be the most cost-effective. But surely, now with 7nm available that won't be the case for much longer. Good riddance.


28nm will be around for a long time. There are SoCs up at 130 or 180nm still because for what they need those are the cheapest way to go. 28 is as far as you go with planar transistors and single (or is it double?) patterning. The 20-22 node is not popular because it kinda sucks for both planar and FinFET designs. I think 14-16 will be around for a long time too because of the increased complexity of going lower.

While the end of scaling is a bummer, it will be nice to have the industry settle on a few common nodes that are mature and very well understood.


I have a question - why does the fab's lithographic printing expose half a chip at the edge of the wafer, like in the article picture? I presume the wafers are a standard 300mm size, so is it just because the designers were too lazy to remove the half chips at the edges from their mask template designs?


It makes it so all of the chips they will use have all of their neighbors, so the process (and thus electrical properties) will be more uniform between chips near the edge and chips near the center.


Aha, I had a question too: why are wafers round? The answer is that the crystal forms in a cylinder, which is then sliced. https://en.wikipedia.org/wiki/Wafer_(electronics)#Formation. Sorry it doesn't answer your question, but it helped me. :)


You could slice a cylinder the other way. That would give you rectangles, with the short dimension varying. I'm not so sure you'd save any space, but recycling the unused portion might be more reasonable to do.


The problem is that it is a single crystal of silicon. To keep it atomically flat you cannot cut it at any angle. You must cut it across the axis the crystal grows in or the surface will be rough and the wafer will likely crack in half in a stiff breeze.

I suppose you could go for some more on-axis cuts, but the crystal planes are pretty restrictive on what you can do. You'd end up having super long and narrow pieces and such.

http://www.crystal-scientific.com/xtal_orientation.html


The orientation is determined by the seed crystal. You can choose whatever you want.

There will of course be long and skinny pieces. These will be much easier to recycle than the unused portions around the edge of a circular wafer; they need not leave the facility that cuts the wafers.

The end result is rectangles of a standard size in one dimension and varying size in the other dimension.


The manufacturing process for semiconductors uses a standardized wafer size (in a given fab) so everything can be automated and simplified as far as possible. Having even something like just two wafer sizes makes a lot of the tooling/storage/transfer more complicated and expensive, so it's best to stick to one size and shape.

In order to have uniform rectangular slices of the silicon crystal, you'd have to slice off horizontal cylinder segments. And that would defeat the purpose because if you just used circular wafers you'd have been able to get a couple dies out of that area.


If your chips aren't very large, you can fit multiple copies in the reticle. E.g., if the litho machine has a 40mm by 50mm reticle, and you're making a 10mm by 10mm die, you can expose 20 dice at a time. It's usually worth it to go out to the edges even if you only get a few complete chips from the exposure. I can't find a wafer picture for Nvidia's GV100, but presumably a giant chip like that uses the whole reticle, so I don't expect to see partial chips on its wafer.
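The reticle arithmetic from that example (numbers as given in the comment: 40mm x 50mm reticle, 10mm x 10mm die):

```python
# How many whole dies fit in one reticle exposure?
reticle_w, reticle_h = 40, 50  # mm
die_w, die_h = 10, 10          # mm

per_exposure = (reticle_w // die_w) * (reticle_h // die_h)
print(per_exposure)  # 20
```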


Did any chip vendor announce 7nm chips yet? Or is this still secret?


AMD's Zen2, which should come in or around 2019, will be on GloFo's 7nm.

I believe TSMC will use 7nm for some memory and mobile applications.

As for Intel their largely accepted equivalent to other foundries' 7nm is their 10nm, which is introduced in Cannon Lake/Ice Lake.


Also "Apple's 'A12' chip reportedly in production using 7nm"... "destined for 2018 iPhone models."


It's widely reported that the Snapdragon 855 will be on 7nm.


There are quite a few of them already - just google it:

https://www.google.com/search?q=7nm+tapeout+press+release&cl...

AMD, GPU, FPGA, etc.


Great. I hope Esperanto Technologies can come out with their RISC-V chip soon.


Deep into science fiction here with EUV...


Interesting time. This is the first time TSMC leads Intel in terms of node features if I remember correctly.


Wake me up when EUV's here...



