And robust air conditioning, too. Source: worked in an office full of dual-GPU deep learning workstations. We had to bring in those gigantic rental portable air conditioners in the summer, and redo pretty much all of the existing wiring in the office, because the electricians who installed the first iteration could not conceive of two dozen ~900W workstations cranking away 24/7/365 in a single open-plan office.
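Back-of-the-envelope, the numbers above already explain both the AC rentals and the rewiring. A minimal sketch, assuming two dozen machines at ~900 W each (figures taken from the comment; everything else is standard unit conversion):

```python
# Heat load and circuit load for ~24 workstations at ~900 W each.
# Nearly all electrical input ends up as heat in the room.
WORKSTATIONS = 24
WATTS_EACH = 900

heat_watts = WORKSTATIONS * WATTS_EACH       # total heat output, in watts
btu_per_hr = heat_watts * 3.412              # 1 W ~ 3.412 BTU/hr
cooling_tons = btu_per_hr / 12_000           # 1 "ton" of cooling = 12,000 BTU/hr
amps_at_120v = heat_watts / 120              # naive single-circuit draw at 120 V

print(f"{heat_watts} W ~ {btu_per_hr:,.0f} BTU/hr ~ {cooling_tons:.1f} tons of cooling")
print(f"~{amps_at_120v:.0f} A total at 120 V")
```

That's roughly six tons of cooling running continuously, and a total draw that needs on the order of a dozen dedicated 20 A circuits, which ordinary office wiring is never provisioned for.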
> Why do you have the GPUs inside the office with the people rather than in a server room under temperature control?
Because Nvidia, the 800 lb / ~360 kg gorilla of the GPU ML/AI space, has decreed that only "data centre class" GPUs can be used in a server room, and those are much more expensive than the GeForce and Titan cards.
There's _also_ a sizable server room with temperature control (which was also woefully underpowered when we moved into the office). It's just more convenient for researchers to have a couple of GPUs locally.
You can remote GPUs pretty far these days - it's possible to have a local system and have the cards still connected over PCIe but on the other side of a wall.
You'd need a 100GbE NIC and network fabric for that not to be a waste of time, though. That costs more than renting an industrial portable air conditioner.