After experimenting with Neural Nets for style transfer and other image generation, I've come to the following conclusion:
The future of fast rendering for immersive environments lies in training a network to take blocky voxel renders (handy because the model stays simple) and do the time-consuming stuff, like ambient occlusion and god rays, in one big post-processing sweep.
Where traditional post-processing cannot change shapes or add detail, neural nets are very capable of doing exactly that.
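
As a rough illustration, here is a minimal PyTorch sketch of what such a post-processing network could look like. Everything in it is an assumption of mine rather than a tested design: the `VoxelToShaded` name, the layer sizes, and the random tensors standing in for real (voxel render, fully lit target) training pairs. In practice a U-Net with skip connections would be the more usual choice for this kind of image-to-image translation.

```python
import torch
import torch.nn as nn

class VoxelToShaded(nn.Module):
    """Image-to-image net: cheap blocky voxel render in, shaded frame out."""
    def __init__(self):
        super().__init__()
        # Encoder: compress the voxel render into a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back, filling in ambient occlusion,
        # soft lighting and surface detail in one pass.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Dummy batch standing in for (blocky render, expensively lit target) pairs.
voxel_frame = torch.rand(4, 3, 256, 256)
target_frame = torch.rand(4, 3, 256, 256)

model = VoxelToShaded()
opt = torch.optim.Adam(model.parameters(), lr=2e-4)
loss = nn.L1Loss()(model(voxel_frame), target_frame)  # L1 blurs less than MSE
loss.backward()
opt.step()
```

The appealing part is that the engine itself could generate the training data: render each scene once cheaply in blocky mode and once offline with all the expensive effects switched on.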
Objects can also be detected, so that different regions of the frame can be treated differently.
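
One hedged sketch of how that region-specific treatment could work: a voxel engine already knows which object every pixel belongs to, so that ID map could be fed to the network as extra input channels instead of running a separate detector. The class list, the channel layout, and the `with_region_channels` helper below are all hypothetical.

```python
import torch
import torch.nn.functional as F

NUM_CLASSES = 8  # e.g. sky, water, foliage, stone... (assumed label set)

def with_region_channels(voxel_frame, object_ids):
    """Concatenate a one-hot object-ID mask onto the RGB voxel render,
    so the network can shade e.g. water and foliage differently."""
    # object_ids: (B, H, W) integer map supplied by the voxel engine
    masks = F.one_hot(object_ids, NUM_CLASSES)     # (B, H, W, C)
    masks = masks.permute(0, 3, 1, 2).float()      # (B, C, H, W)
    return torch.cat([voxel_frame, masks], dim=1)  # (B, 3 + C, H, W)

# The network's first Conv2d then takes 3 + NUM_CLASSES input channels.
x = with_region_channels(torch.rand(1, 3, 256, 256),
                         torch.randint(0, NUM_CLASSES, (1, 256, 256)))
print(x.shape)  # torch.Size([1, 11, 256, 256])
```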
Finally, this will probably also mean a (very welcome) break from the efforts to go photo-realistic, and add back some Charm into the imagery.
By this I mean that we will make use of the enormous pattern-matching abilities of our built-in Neural Net, the optic mind: the whole process running from light rays hitting our eyes to a mental image.
Kind of like what happens with a good painting. I'd rather walk around in a good painting with lots of atmosphere than in a very detailed but uninspired world, or worse: a simplified, ultra-sharp world.