And robust air conditioning, too. Source: worked in an office full of dual-GPU deep learning workstations. We had to bring in those gigantic rental portable conditioners in the summer, and redo pretty much all of the existing wiring in the office, because the electricians who installed the first iteration could not conceive of two dozen ~900W workstations cranking away 24/7 in a single open-plan office.
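A rough back-of-the-envelope on that heat load (assuming all ~900 W per box ends up as heat in the room, which it effectively does):

    # Rough heat-load estimate for two dozen ~900 W workstations
    workstations = 24
    watts_each = 900

    heat_watts = workstations * watts_each      # ~21,600 W dumped into the room
    btu_per_hour = heat_watts * 3.412           # 1 W ~= 3.412 BTU/hr
    tons_of_cooling = btu_per_hour / 12_000     # 1 "ton" of AC = 12,000 BTU/hr

    print(f"{heat_watts} W ~= {btu_per_hour:.0f} BTU/hr ~= {tons_of_cooling:.1f} tons of cooling")

That's on the order of 6 tons of continuous cooling plus ~22 kW of continuous electrical load, which is well beyond what ordinary office circuits and HVAC are sized for.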


Why do you have the GPUs inside the office with the people rather than in a server room under temperature control?


> Why do you have the GPUs inside the office with the people rather than in a server room under temperature control?

Because Nvidia, the 800 lb / 400 kg gorilla of the GPU ML/AI space, has decreed that only "data centre class" GPUs can be used in a server room, and those are much more expensive than the GeForce and Titan cards:

* https://www.theregister.co.uk/2018/01/03/nvidia_server_gpus/

* https://www.cnbc.com/2017/12/27/nvidia-limits-data-center-us...

So instead of spending US$1,000 on a 2080 or US$3,000 on a Titan, per the EULA you have to spend US$8,000 on a Tesla.


There's _also_ a sizable server room with temperature control (which was also woefully underpowered when we moved into the office). It's just more convenient for researchers to have a couple of GPUs locally.


You can remote GPUs pretty far these days - it's possible to have a local system with the cards still attached over PCIe, just on the other side of a wall.


You'd need a 100GbE NIC and network fabric for that not to be a waste of time, though. That costs more than renting an industrial portable air conditioner.
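Rough numbers on why the fabric has to be that fast (this assumes a PCIe 3.0 x16 slot, which is what those consumer cards use; raw link rate, ignoring protocol overhead):

    # Rough bandwidth comparison: PCIe 3.0 x16 vs. Ethernet links
    pcie3_x16_gbps = 16 * 8     # 16 lanes * ~8 Gb/s usable per lane ~= 128 Gb/s
    gbe_100 = 100               # 100GbE
    gbe_10 = 10                 # common office 10GbE

    print(f"PCIe 3.0 x16 ~= {pcie3_x16_gbps} Gb/s")
    print(f"100GbE is ~{100 * gbe_100 / pcie3_x16_gbps:.0f}% of that")
    print(f"10GbE is  ~{100 * gbe_10 / pcie3_x16_gbps:.0f}% of that")

So 100GbE roughly keeps pace with a single x16 card, while anything slower leaves the remoted GPU badly starved.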



