
What's a more realistic config?




8x GPUs per box. This has been the data center standard for the last 8-ish years.

Furthermore, they're usually NVLink-connected within the box (SXM modules instead of PCIe cards, although the host-facing data link is still PCIe).

This is important because the daughterboard provides PCIe switches that typically connect NVMe drives, NICs, and GPUs together, such that within that subcomplex there isn't any PCIe oversubscription.
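To make "no oversubscription within the subcomplex" concrete: traffic between devices under the same PCIe switch (e.g. NIC-to-GPU or NVMe-to-GPU DMA) stays local to the switch and never touches the switch's uplink to the host. A minimal sketch of the bandwidth arithmetic, with illustrative lane counts and PCIe generation rather than any specific server's real topology:

```python
# Hedged sketch: compare aggregate downstream device bandwidth behind one
# PCIe switch to that switch's uplink. The device list and link widths are
# assumptions for illustration, not a real machine's topology.

# Approximate usable bandwidth per PCIe lane, GB/s per direction
GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bw(gen: int, lanes: int) -> float:
    """Per-direction bandwidth of a PCIe link in GB/s."""
    return GBPS_PER_LANE[gen] * lanes

# Devices hanging off one switch in a hypothetical "slice"
devices = {
    "gpu":  link_bw(4, 16),   # Gen4 x16
    "nic":  link_bw(4, 16),   # Gen4 x16
    "nvme": link_bw(4, 4),    # Gen4 x4
}

uplink = link_bw(4, 16)  # switch uplink toward the host
total_downstream = sum(devices.values())
ratio = total_downstream / uplink
print(f"downstream {total_downstream:.1f} GB/s vs uplink {uplink:.1f} GB/s "
      f"-> {ratio:.2f}x oversubscribed *if* all traffic crossed the uplink")
```

The uplink is oversubscribed on paper, but that's fine: the design intent is that the heavy flows (NIC↔GPU, NVMe↔GPU) are peer-to-peer through the switch, so the uplink only carries host-bound traffic.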

Since last year, I'd argue the standard for a lot of providers is the GB200.


Fascinating! So each GPU is partnered with disks and NICs such that there's no bandwidth oversubscription within its 'slice' (idk what the word is)? And each of these 8 slices wires up via NVLink back to the host?

Feels like there's some amount of (software) orchestration for making data sit on the right drives or traverse the right NICs. Guess I never really thought about the complexity of this kind of scale.
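On Linux, that orchestration usually starts with locality bookkeeping: sysfs reports which NUMA node each PCI device (GPU, NIC, NVMe) sits on, so a scheduler can place workers and buffers near the right slice. A small sketch, assuming a Linux host; the PCI addresses used in the example are made up:

```python
# Hedged sketch of device-locality discovery via Linux sysfs.
# /sys/bus/pci/devices/<addr>/numa_node holds the device's NUMA node
# (-1 when the kernel doesn't know).
from pathlib import Path

def numa_node_of(pci_addr: str) -> int:
    """Return the NUMA node for a PCI device, or -1 if unknown/missing."""
    node_file = Path(f"/sys/bus/pci/devices/{pci_addr}/numa_node")
    try:
        return int(node_file.read_text().strip())
    except (FileNotFoundError, ValueError):
        return -1

def group_by_node(pci_addrs: list[str]) -> dict[int, list[str]]:
    """Bucket devices by NUMA node so co-located GPU/NIC/NVMe can be paired."""
    groups: dict[int, list[str]] = {}
    for addr in pci_addrs:
        groups.setdefault(numa_node_of(addr), []).append(addr)
    return groups

if __name__ == "__main__":
    # Example (hypothetical) PCI addresses for a GPU and a NIC
    print(group_by_node(["0000:17:00.0", "0000:65:00.0"]))
```

From there, a job launcher would pin each worker's CPU affinity and allocate its buffers on the node its GPU and NIC share, so data takes the short path through the local PCIe switch.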

I googled GB200; it's cool that Nvidia sells you a complete unit rather than expecting you to DIY a PC yourself.



