
It is kind of surprising that CRTs have been dead for so long, yet we're still driving our displays in pretty much the same fashion, just with reduced blanking intervals. We still treat video connections as constant data-rate streams, when we should be bursting frames to the monitor as soon as they finish rendering and letting the monitor worry about when it will be able to update which part of the screen.
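A minimal sketch of that model in Python, where Link and Panel are made-up stand-ins for the video link and the monitor's controller (none of these names come from any real API):

    # Hypothetical "burst on completion" presentation loop.
    def present_loop(render_frame, link, panel):
        while True:
            frame = render_frame()   # finishes whenever the GPU is done,
                                     # not on a fixed refresh cadence
            link.burst(frame)        # push the whole frame at full link
                                     # speed the moment it's ready
            # From here on, timing is the monitor's problem: its
            # controller decides when, and which regions of, the panel
            # get rewritten from the received framebuffer contents.
            panel.schedule_update(frame)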


That's how it would work with OLED, right? So we should have that in a few years once density and cost catch up to traditional LED-backlit LCDs.


That's how it could work on any of the common display technologies that don't use a single electron beam tracing across the screen. Active-matrix displays - be they OLED or TFT LCD - don't require that pixels be updated in any particular order or at any specific frequency, save for a minimum refresh rate analogous to DRAM refresh.

The way we currently send pixel data to monitors is optimized to make the monitor's electronics as simple as possible and to minimize bandwidth use at all times, even when the hardware is capable of communicating at higher speeds. Simply changing DisplayPort to always send pixel data at the link's highest speed, even when operating below the maximum resolution that link supports, would yield a significant latency reduction by no longer taking 16ms to send each frame (which almost all monitors fully buffer in order to apply color transformations or scaling). The next step after that would be to allow frames to start at irregular intervals, which is apparently what NVidia's implementing. But it's still all just about how the contents of the GPU's framebuffer are transmitted to the monitor's framebuffer, and is in no way dependent on what kind of display technology sits downstream of the monitor's framebuffer.
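To put rough numbers on that claim (the link rate and overhead figures below are back-of-the-envelope assumptions about a 4-lane HBR2 DisplayPort link, not measurements):

    # Rough arithmetic: time to transmit one 1920x1080, 24bpp frame.
    frame_bits = 1920 * 1080 * 24            # ~49.8 Mbit per frame

    # Paced: pixel data spread evenly over a 60Hz refresh interval,
    # so every frame takes the full interval to arrive.
    paced_ms = 1000 / 60                      # ~16.7 ms

    # Burst: send at the link's full payload rate instead. A 4-lane
    # HBR2 DisplayPort link carries roughly 17.28 Gbit/s of video
    # payload after 8b/10b coding overhead (assumed figure).
    burst_ms = frame_bits / 17.28e9 * 1000    # ~2.9 ms

    print(f"paced: {paced_ms:.1f} ms, burst: {burst_ms:.1f} ms")
    # A monitor that fully buffers each frame could start working on it
    # roughly 14 ms sooner in the burst case.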



