I got a live demo of this at a conference a few years back. There was a line to see the demo, so I first waited while others tried it. They first showed the full-resolution image, then enabled the foveated rendering. The latter looked terrible - a tiny patch of hi-res imagery and a massive blur for everything else.
When it was my turn, they started with the full resolution image. I was waiting for them to enable the foveated rendering - until they told me it was already running. I could not tell at all that it was not all rendering at full resolution. Really impressive.
The main presenter mentioned that the optic nerve/brain processing 'shuts down' for up to 40ms during a saccade, so they have that long to render the small region at full resolution between the time when your eye has its new target and the optic nerve comes back online.
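Back-of-the-envelope, that 40ms window is surprisingly generous. Here's a minimal sketch (names and numbers are illustrative, not from any real eye-tracking API) of how many frames a renderer gets to relocate the high-resolution patch during saccadic suppression:

```python
# Hypothetical sketch: once the eye tracker detects a saccade, vision is
# suppressed for up to ~40 ms (the figure quoted by the presenter), so the
# renderer has that long to move the hi-res foveal patch to the new gaze
# target before the optic nerve "comes back online".

SACCADE_SUPPRESSION_MS = 40  # upper bound; actual suppression varies

def frames_available(frame_time_ms: float,
                     suppression_ms: float = SACCADE_SUPPRESSION_MS) -> int:
    """How many whole frames fit inside the suppression window."""
    return int(suppression_ms // frame_time_ms)

# At a 90 Hz refresh rate (~11.1 ms per frame), roughly three full frames
# fit inside the window - plenty of time to re-render the foveal region.
print(frames_available(1000 / 90))
```

So even after accounting for eye-tracker latency, the renderer plausibly has a couple of frames of slack, which is presumably why the trick works so well in practice.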
I'm so glad to hear this! Do you know of any progress since then? That very closely reminds me of my first time trying VR (last week). I sat and watched on the 2D monitor while someone else did it, and thought it looked incredibly stupid. Then I put on the headset...
I don't know of any explicit details of progress, but everyone I've mentioned it to in the VR space nods knowingly when I mention it, so I take that to mean that it is an area of active research and development.