I think one interesting aspect of this is that it couples spatial and temporal interpolation. This means you get a higher resolution as well as a higher framerate, but on the downside it seems to introduce additional artifacts, depending on how the two interpolations interact.
I have not yet read the technical paper and only watched the video without sound, but from the video it seems that moving sharp edges introduce additional artifacts (this can be seen when looking at the features of the houses in peripheral vision at 5:11 in the video). This is roughly what you would expect to happen if both pixel grids try to display a sharp edge: due to their staggered updates, one of the two edges is always at the wrong position.
This problem could probably be somewhat alleviated by an algorithm that has some knowledge of the next frames, but that would introduce additional lag (bad for interactive content, horrible for virtual reality, not so bad for video); a rough sketch of the tradeoff follows below.
I intend to read the paper later, but can anyone who has already read it comment on whether the shown examples already need knowledge of the next frame or half-frame?
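
To make that tradeoff concrete, here is a toy 1-D sketch (my own illustration, not the paper's algorithm; the frame data and function names are made up): interpolating the in-between sub-frame from the next source frame places a moving edge correctly but costs one source frame of lag, while extrapolating from past frames only is lag-free but has to guess where the edge goes.

    # Toy sketch, not the paper's method. A bright edge moves one pixel per
    # source frame; we build the in-between sub-frame two different ways.

    def interpolate_with_lookahead(frames, t):
        # Blend source frames t and t+1; frame t+1 must already exist,
        # so this sub-frame can only be shown one source frame late.
        return [(a + b) / 2 for a, b in zip(frames[t], frames[t + 1])]

    def extrapolate_without_lookahead(frames, t):
        # Estimate the sub-frame from frames t-1 and t only; no added lag,
        # but naive per-pixel extrapolation leaves the edge where it was.
        return [min(1.0, max(0.0, b + (b - a) / 2))
                for a, b in zip(frames[t - 1], frames[t])]

    # Four 8-pixel source frames with the edge advancing one pixel per frame.
    frames = [[1.0 if i <= t else 0.0 for i in range(8)] for t in range(4)]

    print(interpolate_with_lookahead(frames, 1))     # correct half-step edge, +1 frame of lag
    print(extrapolate_without_lookahead(frames, 1))  # shown immediately, edge lands in the wrong place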
> on the downside it seems to introduce additional artifacts
It definitely introduces a form of ghosting visible near the rear end of the motorcycle.
As for lag, I can already see John Carmack cringing! There may be an interesting effect, though, in that the increase in apparent resolution is quadratic while the increase in computation is only linear. Hardware-wise, this could possibly be done straight in the double-buffering phase without additional lag, if it can be made to race the beam.
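
To sketch what I mean by racing the beam (purely speculative on my part, with made-up function names, not anything from the paper): once the next frame is sitting in the back buffer, each in-between scanline could be blended just before scanout reaches it, so no extra whole-frame buffer, and hence no extra frame of latency, is needed.

    # Speculative sketch of per-scanline interpolation during scanout.

    def blend_line(prev_line, next_line):
        # Simple 50/50 blend standing in for whatever the real interpolation does.
        return [(a + b) / 2 for a, b in zip(prev_line, next_line)]

    def scanout_interpolated(front_buffer, back_buffer, emit_scanline):
        # emit_scanline is a stand-in for handing a finished line to the display;
        # each line is produced just ahead of the "beam" instead of buffering a
        # whole extra sub-frame.
        for line_no, (prev_line, next_line) in enumerate(zip(front_buffer, back_buffer)):
            emit_scanline(line_no, blend_line(prev_line, next_line))

    # Toy 4-line buffers: front = frame currently shown, back = freshly rendered next frame.
    front = [[0.0, 0.0, 1.0, 1.0]] * 4
    back  = [[0.0, 1.0, 1.0, 0.0]] * 4
    scanout_interpolated(front, back, lambda n, line: print(n, line))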