There is a 1968 paper by Myer and Sutherland, “On the Design of Display Processors”, which described a recurring pattern in the design of computer graphics hardware — the so-called “wheel of reincarnation”. Certain graphics algorithms would turn out to be too slow to run on general-purpose CPUs, so simple specialized hardware was created to speed up those functions. Then the algorithms got more complicated, and so did the specialized hardware, until the hardware had effectively turned into something resembling a general-purpose CPU. At that point it was found worthwhile to offload certain functions onto yet another layer of specialist hardware, and so the circle went round again.
The precise details of those specialist functions are no longer relevant to today’s technology, but I think the general principle still applies.
Before delving further into this, let us first be clear about the two main kinds of 3D graphics rendering: there is real-time rendering, where successive frames have to be output fast enough to keep up with the display, and offline rendering, where each frame may take as long as it needs to compute.