Presumably that's with the dual-GPU, 28-core processor model under full load (and the MPX module drawing its full 500W, among other peripherals). An office with this kind of workload is going to require robust wiring regardless of which particular workstations are installed.
And robust air conditioning, too. Source: worked in an office full of dual-GPU deep learning workstations. We had to bring in those gigantic rental portable air conditioners in the summer, and re-do pretty much all of the existing wiring in the office, because the electricians who installed the first iteration could not conceive of two dozen ~900W workstations cranking away 24/7/365 in a single open-plan office.
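A quick back-of-the-envelope sketch of why (the workstation count and wattage are from above; the 120V supply, 80% continuous-load derating, and BTU conversion are my assumptions):

```python
# Rough heat and power load for two dozen ~900W workstations running flat out.
# Assumptions (mine, not from the post): 120V circuits, NEC 80% derating for
# continuous loads, 12,000 BTU/hr per "ton" of cooling, 1 W ~= 3.412 BTU/hr.
workstations = 24          # "two dozen"
watts_each = 900           # ~900W per workstation

total_watts = workstations * watts_each            # 21,600 W
btu_per_hr = total_watts * 3.412                   # ~73,700 BTU/hr of heat
cooling_tons = btu_per_hr / 12_000                 # ~6 tons of A/C

# Dedicated 20A/120V circuits needed, derated to 80% for continuous loads
# (20A * 120V * 0.8 = 1,920W usable per circuit):
circuits_needed = total_watts / (20 * 120 * 0.8)   # ~11.25 -> 12 circuits

print(f"{total_watts} W, ~{btu_per_hr:,.0f} BTU/hr, ~{cooling_tons:.1f} tons of cooling")
print(f"~{circuits_needed:.1f} dedicated 20A circuits just for the workstations")
```

That's on the order of a dozen dedicated circuits and several tons of cooling just for the machines, which is well beyond what an ordinary open-plan office's convenience outlets and HVAC are sized for.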
> Why do you have the GPUs inside the office with the people rather than in a server room under temperature control?
Because Nvidia, the 800 lb / 400 kg gorilla of the GPU ML/AI space, has decreed that only "data centre class" GPUs can be used in a server room, and those are much more expensive than the GeForce and Titan cards.
There's _also_ a sizable server room with temperature control (which was also woefully underpowered when we moved into the office). It's just more convenient for researchers to have a couple of GPUs locally.
You can remote GPUs pretty far these days - it's possible to have a local system with the cards still attached over PCIe but physically sitting on the other side of a wall.
You'd need a 100GbE NIC and network fabric for that not to be a waste of time, though. That costs more than renting an industrial portable air conditioner.
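The bandwidth argument roughly checks out: a PCIe 3.0 x16 slot carries about 15.75 GB/s (~126 Gbit/s) after encoding, so anything much slower than 100GbE starves the remote card. A rough comparison of nominal link rates (real-world throughput will be lower once protocol overhead is included):

```python
# Nominal link-rate comparison for remoting a GPU over a network fabric.
pcie3_x16_gbps = 15.75 * 8     # ~126 Gbit/s usable (985 MB/s per lane * 16 lanes)
eth_100g_gbps = 100            # 100GbE line rate
eth_10g_gbps = 10              # a common office uplink, for contrast

print(f"PCIe 3.0 x16 : ~{pcie3_x16_gbps:.0f} Gbit/s")
print(f"100GbE       : {eth_100g_gbps} Gbit/s  (~{eth_100g_gbps / pcie3_x16_gbps:.0%} of the slot)")
print(f"10GbE        : {eth_10g_gbps} Gbit/s  (~{eth_10g_gbps / pcie3_x16_gbps:.0%} of the slot)")
```

100GbE gets you within striking distance of a full x16 slot; a 10GbE office network doesn't come close.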
That'll be a peak amperage; it won't draw 12 amps by itself.
Even if it did, most displays pull less than 1A, so theoretically you have room for 3 more displays on your standard 15A North American residential circuit. Any office should be on 20A circuits, anyways.
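Rough circuit math, taking the ~12A peak figure and the <1A-per-display figure from this thread (the 120V assumption and the 20A comparison row are mine):

```python
# Spare capacity on a circuit after the workstation's ~12A peak draw.
# Assumes 120V; 12A peak and <1A per display are figures from the thread above.
peak_load_amps = 12
display_amps = 1          # generous upper bound per display

for breaker_amps in (15, 20):
    spare = breaker_amps - peak_load_amps
    print(f"{breaker_amps}A circuit: {spare}A spare -> room for ~{spare // display_amps} displays")
```

Strictly, the NEC's 80% rule for continuous loads would trim that headroom further, but since 12A is a peak figure rather than a sustained draw, the rough count of three spare displays on a 15A circuit holds.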