
An alpine landscape.

By greenhybrid
7 Favourites
22 Comments
1K Views
This one will be part of an example set packed with the next release of picogen-wx (I have it ready for upload on my hard disk, for both Windows and GNU/Linux, but I thought it would be better with examples :))

Pure Render Time: 8 minutes
Preparation time (building the quadtree etc.): way too long (an hour? I don't remember exactly)

For those curious about picogen, please visit my gallery and [link], or the new Wikipedia page at [link].

Thanks for watching!
Image details
Image size
1680x1050px 1.82 MB
Published:
© 2008 - 2021 greenhybrid
Comments (22)
beason's avatar
Are you using a constant ambient term for indirect lighting? I assumed you were doing path tracing but that 20k rays/sec figure works out to something like 5 rays/pixel so that can't be right.
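The rays-per-pixel estimate follows from figures elsewhere in the thread (1680x1050 resolution, 8-minute render, ~20k rays/s); a quick arithmetic check:

```python
# Back-of-the-envelope check of the ~5 rays/pixel estimate:
# ~20,000 rays/s over an 8-minute render at 1680x1050.
rays = 20_000 * 8 * 60    # total rays traced during the render
pixels = 1680 * 1050      # image resolution
rays_per_pixel = rays / pixels
print(round(rays_per_pixel, 2))  # ~5.44 rays per pixel
```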

Maybe this is too much technical discussion? :)

I love your sky. Are you hand picking the colors or is it straight Preetham? If the former, nice choice(s) :)
greenhybrid's avatar
Discussions always welcome!

picogen supports brute-force unidirectional path tracing, but these newer images are actually Whitted-style ray traced, for the blessing of speed that people like. I sometimes compare the results of both integrators, and if you look at the picogen to-do list, you will see that the work on the Whitted integrator isn't over; I want to approximate the path-tracer result as closely as possible.

I am not using a constant ambient term; rather, at the point of intersection, I peek into the hemisphere (i.e. in the direction of the surface normal) to see what the Preetham model offers me.

In some future version I will use a blurred version of the Preetham sky (by sampling the hemisphere for each normal in R^3, into a lookup table of course), because at some points the artifacts are visible. A bigger challenge will be to ambient-occlude the points, but from the pure color standpoint, I think my approach is promising.
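A minimal sketch of that hemisphere peek, with a hypothetical `sky_radiance` standing in for the Preetham model (uniform rather than cosine-weighted sampling, for brevity; not picogen's actual code):

```python
import math, random

random.seed(7)  # deterministic for the example

def sky_radiance(direction):
    # Stand-in for the Preetham sky model: brighter toward the zenith.
    return max(direction[2], 0.0)

def ambient_from_sky(normal, samples=64):
    # At the intersection point, peek into the hemisphere around the
    # surface normal and average what the sky offers there.
    total = 0.0
    for _ in range(samples):
        # Uniform direction on the sphere, flipped into the hemisphere.
        while True:
            d = [random.gauss(0.0, 1.0) for _ in range(3)]
            n = math.sqrt(sum(x * x for x in d))
            if n > 1e-9:
                break
        d = [x / n for x in d]
        if sum(a * b for a, b in zip(d, normal)) < 0.0:
            d = [-x for x in d]
        total += sky_radiance(d)
    return total / samples

up_ambient = ambient_from_sky((0.0, 0.0, 1.0))
```

Tabulating `ambient_from_sky` per quantized normal would give the blurred lookup table mentioned above.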

I think the last manual color picking was 2 yrs ago or so :), *looking up* in "Glossy Spheres on Noisy Ground", at [link] :D
kram1032's avatar
from wikipedia:
"pronounced picogen"
well, "c" is inaccurate on its own....
is it pronounced "k" or "z" ?
greenhybrid's avatar
oh, erm, k. i guess. but i'll ask the picogen-crew :S
kram1032's avatar
xD yeah, I thought so^^

"g" is also inaccurate... generation or genesis g? :P

as it *seems* to be English-ish, it's probably generation, though^^
But I always imagine the Genesis-g xD

else, it should be accurate ^^
greenhybrid's avatar
:D
though I don't see the difference between the genesis-g and the generation-g, aren't they both pronounced something like "D'Sh" ... :o hmmm
kram1032's avatar
oh? xD I'd have pronounced Genesis-g more like, wait... THAT'S the word xD gay...
kram1032's avatar
8 looks perfectly fine to me. I'd say, it's "normal" xD
great shaders :D
greenhybrid's avatar
thx,

the shaders are really easy: f=some_perlin, then color=f*superwhite + (1-f)*gray
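A sketch of that shader, with a hash-based value noise standing in for real Perlin noise, and illustrative values for "superwhite" (2.0) and gray (0.5):

```python
import math

def value_noise(x, y):
    # Cheap deterministic stand-in for Perlin noise, mapped to [0, 1).
    h = math.sin(x * 12.9898 + y * 78.233) * 43758.5453
    return h - math.floor(h)

SUPERWHITE, GRAY = 2.0, 0.5  # illustrative constants

def rock_shade(x, y):
    # f = some_perlin, then color = f*superwhite + (1-f)*gray
    f = value_noise(x, y)
    return f * SUPERWHITE + (1.0 - f) * GRAY

shade = rock_shade(3.1, 4.2)
```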
kram1032's avatar
so, a linear blend between 2.0 and 0.5 gray, with the blendfactor, given by perlin noise :)
you might want to make sharper blends ;)
try bicosine or bicubic :)
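Sharper blends can come from remapping the blend factor before mixing; a sketch of two such remaps (cosine and cubic smoothstep, both steeper than linear around the midpoint):

```python
import math

def cosine_remap(f):
    # Cosine ease: steeper than linear near f = 0.5,
    # flatter near 0 and 1, so the blend transition sharpens.
    return 0.5 - 0.5 * math.cos(math.pi * f)

def cubic_remap(f):
    # Cubic Hermite (smoothstep): same idea in polynomial form.
    return f * f * (3.0 - 2.0 * f)
```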
greenhybrid's avatar
ah yes, that reminds me of the mission where you could tweak the interpolator :]
beason's avatar
Beautiful!

Funny lyc said it was slow. I was going to say that was fast :/
lyc's avatar
i should have qualified that statement better; the building stage seemed kinda slow for the amount of data visible. the rendering itself is pretty quick at just 8 minutes on seb's machine :D
greenhybrid's avatar
Thanks Kevin!

Hmm, now I am confused ... If I am right, then those images have been traced at around 20k rays per second, which is really not that fast, but then I think 8 minutes is okay at the moment, for such a resolution with 2x2 anti-aliasing + simple shaders (which themselves include a Perlin term of 20 octaves).
lyc's avatar
having said that, it does look pretty good :D
greenhybrid's avatar
lyc's avatar
hmm, that's a hell of a render time for such an image; it seems to me that you could cut it down a hell of a lot with some lod!
greenhybrid's avatar
I am not sure where exactly the bottleneck is; maybe the shader system is that slow. But yes, LOD could help, especially with the building phase (or I could stick to intersecting implicits directly, but I haven't found a method yet that satisfies my needs).
lyc's avatar
perhaps by keeping track of screen-space derivatives (basically, you also generate two rays for dx and dy and intersect them with the plane defined by the intersection normal) and which brdf you're using, you can switch between lods based on distance.
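A sketch of that footprint computation, under illustrative names: the dx/dy offset rays are intersected with the tangent plane at the hit point, and the distance they span picks a mip/LOD level.

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)

def hit_plane(origin, direction, point, normal):
    # Intersect a ray with the plane through `point` with `normal`.
    t = dot(sub(point, origin), normal) / dot(direction, normal)
    return add(origin, scale(direction, t))

def lod_level(origin, dx_dir, dy_dir, hit, normal, texel_size=1.0):
    # Footprint spanned by the dx/dy differential rays on the tangent
    # plane; log2 of its size (in texels) gives the LOD level.
    px = hit_plane(origin, dx_dir, hit, normal)
    py = hit_plane(origin, dy_dir, hit, normal)
    footprint = max(math.dist(px, hit), math.dist(py, hit))
    return max(0, int(math.log2(max(footprint / texel_size, 1.0))))

# Looking straight down at a ground plane from two heights:
near = lod_level((0, 0, 10),    (0.001, 0, -1), (0, 0.001, -1), (0, 0, 0), (0, 0, 1))
far  = lod_level((0, 0, 10000), (0.001, 0, -1), (0, 0.001, -1), (0, 0, 0), (0, 0, 1))
```

Distant hits get a larger footprint and thus a coarser level, which is exactly the distance-based switching described above.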

adding lod for a terrain should add only 33% extra storage requirements, and if you use a caching/paging mechanism you'll get good use out of a reasonable fixed amount of memory even for very large and complex (procedurally defined) terrains.
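The 33% figure is the geometric series for a quartering pyramid: each coarser terrain level holds 1/4 the samples of the one below, and 1/4 + 1/16 + ... converges to 1/3. A quick check:

```python
def pyramid_overhead(levels):
    # Extra storage of an LOD pyramid relative to the base level:
    # each level k above the base holds (1/4)**k of its samples.
    return sum(0.25 ** k for k in range(1, levels + 1))

overhead = pyramid_overhead(20)
print(round(overhead, 4))  # ~0.3333, i.e. about 33% extra
```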
greenhybrid's avatar
I think the storage would even decrease in size, as I would build the tree lazily (I have done lazy building before, without LOD; it looked promising).

What I learned when I overused my large_array class was that after wakeup, only a small fraction of the whole terrain was paged back in: around 1-2 GiB of a total of 32 GiB (paging always happens on demand in my class). Though I think "most recently used" could fit quadtrees better than the "most often used" pattern I use now; the basic thought was to stabilise the large_array a bit, but for large screen resolutions and path tracing, MRU could do better; dunno.
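A toy illustration of the MRU eviction policy mentioned above (hypothetical sketch, not picogen's large_array class): when the cache is full, the page touched most recently is the one evicted.

```python
class MRUCache:
    """Toy page cache with most-recently-used eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = {}   # page_id -> page data
        self.last = None  # most recently touched page id

    def get(self, page_id, load):
        if page_id not in self.pages:
            if len(self.pages) >= self.capacity and self.last in self.pages:
                # Evict the most recently used page, not the oldest.
                del self.pages[self.last]
            self.pages[page_id] = load(page_id)  # page in on demand
        self.last = page_id
        return self.pages[page_id]

cache = MRUCache(2)
cache.get(1, lambda p: p)
cache.get(2, lambda p: p)
cache.get(3, lambda p: p)  # evicts page 2 (the MRU one), keeps page 1
```

MRU tends to keep long-resident pages (like upper quadtree levels) in memory, while LRU-style policies would churn them.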
lyc's avatar
for sharp reflections (e.g. off water) you won't gain anything, but the soft light bouncing around should benefit a lot from using a lower res representation. lazy geometry caching sounds like it could work well for you :)

having said that, i am thinking about adding another 6gb of memory, for 12 total ;P