Almost all of my images from the last few years are the result of Julia morphings that occur in the Mandelbrot set. It takes a lot of time and effort to find their locations and render them, so it is extremely beneficial to have a good idea of what the result is going to look like. If you explore the Mandelbrot set a lot, you can run a simulation in your head, then zoom to the location and see if it looks any good in reality. In this article I explain Julia morphings and how to simulate them with a computer instead, to get a better idea of what to expect when zooming in on the Mandelbrot set. I also discuss whether such a simulation can replace real renders of deep locations.

Julia morphings

Mathematically, I define a Julia morphing as a transformation of the complex plane of the form
c -> f(c) + coordinate
where f is a polynomial function. For the Mandelbrot set this is:
c -> c^2 + coordinate

It follows that each coordinate in the complex plane then defines a Julia morphing.

I call such a morphing a Julia morphing because, by definition of the Julia set, when the same Julia morphing is repeated infinitely many times, those c that have not been sent to infinity form the Julia set of f(c) + coordinate. A Julia morphing is like a single iteration.
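To make the connection concrete, here is a minimal sketch of that definition (my own illustration, not code from any program mentioned here): repeating the morphing is exactly the iteration that sorts points into the filled Julia set.

```python
def in_filled_julia(c, coordinate, max_morphings=100, bailout=2.0):
    """Apply the Julia morphing c -> c^2 + coordinate repeatedly;
    True if c has not been sent to infinity after all morphings."""
    z = c
    for _ in range(max_morphings):
        z = z * z + coordinate      # one Julia morphing
        if abs(z) > bailout:
            return False            # sent to infinity
    return True                     # survived: (approximately) in the Julia set

# coordinate = 0 gives the unit disk as the filled Julia set:
print(in_filled_julia(0.5 + 0j, 0))   # True
print(in_filled_julia(1.5 + 0j, 0))   # False
```

With a finite number of morphings this is of course only an approximation of the Julia set, which is exactly the point made above: each additional morphing brings the picture closer to it.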

There's also another way to look at it. This video shows what repeating a Julia morphing does to the plane. It shows the Julia morphings being applied one by one:

As you can see, a Julia set is approached as the number of morphings increases.

Julia morphings in the Mandelbrot set

Julia morphings occur inside the Mandelbrot set. All the variation of shapes that you can find there is caused by them.

First let me say what I mean by a region. In fractal software you always have a particular region visible, usually determined by the center coordinate and the image width or magnification factor. It's really just a rectangular subset of the complex plane.

Let a region with zoom depth 2^n (and that contains a part of the Mandelbrot set, otherwise it's silly) be given and call it R. Then you can choose a minibrot inside it. Let the depth of the minibrot be 2^m. Then, at the location of the minibrot, at depth 2^((n+m)/2) (the average of the exponents), the view looks like the region R with a Julia morphing c -> c^2 + coordinate applied to it, where the coordinate is the coordinate of the minibrot.
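The depth bookkeeping is simple enough to write down explicitly (the function name is mine, just for illustration):

```python
def morphing_depth_exponent(n, m):
    """Given a region at depth 2**n containing a minibrot at depth 2**m,
    the Julia-morphed copy of the region appears at the minibrot's
    location at depth 2**((n + m) / 2): the average of the exponents."""
    return (n + m) / 2

# Example: region at 2^20, minibrot at 2^60 -> the morphing sits at 2^40.
print(morphing_depth_exponent(20, 60))  # 40.0
```

Repeating this with the same minibrot shows the averaging effect discussed later: each morphing pulls the depth exponent halfway toward m.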

What that means in practice: if you see some shape in the Mandelbrot set, anywhere, and you zoom further in on somewhere that leads to a minibrot, then when you're halfway there you get a Julia morphing of the shape. Here's an example image of Julia morphings occurring in the Mandelbrot set, taken from my previous journal Newton-Raphson zooming and Evolution zoom method.

julia_morphing.png (1024×533)
In the left image, two points are labeled 1 and 2. Zooming in on the point labeled 1, which is outside of the "shape", yields the middle image, a doubling. Zooming in on the point labeled 2 yields the right image, a morphing.

Simulating Julia morphings instead of zooming deep!?

I defined two paragraphs ago what a Julia morphing is, with mathematical precision. You can find them in the Mandelbrot set by zooming deep, which requires high-precision calculations and therefore long render times. Does the fact that the morphings seen in the Mandelbrot set correspond to Julia morphings mean that they can be easily simulated without having to zoom deep at all? All you have to do is apply c^2 + a coordinate of a minibrot before iterating c!

Unfortunately the answer is no, because morphings in the Mandelbrot set are not exactly the same. The correspondence is only approximate, and the approximation gets better the deeper you zoom. The problem is that minibrots (including the entire set itself) deform the plane as well. At low depths the effect is so strong that pre-calculating a Julia morphing result is, in fact, completely worthless. Note that repeatedly using the same minibrot to get another Julia morphing effect means that the depth of the result is averaged with the minibrot depth again and again. Eventually the depth of the morphing is roughly the same as that of the minibrot. The morphing is then just a ring around the minibrot, highly deformed by it. Clearly a simulation doesn't work there.

To which extent simulation of Julia morphings can be used to reduce render time is something that still requires research. As long as the minibrot is so small that its influence on the shape of the morphing is negligible it should be possible to use a simulation, but it's not clear to me when that is, and what happens when multiple Julia morphings are simulated.
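For reference, the naive simulation discussed above (which, again, only matches reality when the minibrot's own deformation is negligible) amounts to transforming each pixel coordinate before the normal iteration. All the names here are mine, just to illustrate the idea:

```python
def escape_time(c, max_iter=500, bailout=4.0):
    """Plain Mandelbrot escape-time iteration for one pixel coordinate c."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > bailout:
            return i
    return max_iter

def simulated_morphing_escape_time(c, minibrot_coord, max_iter=500):
    """Simulate one Julia morphing: apply c -> c^2 + minibrot_coord to the
    pixel coordinate first, then iterate the Mandelbrot set as usual."""
    return escape_time(c * c + minibrot_coord, max_iter)
```

Rendering a grid of pixels with simulated_morphing_escape_time instead of escape_time shows the morphed shape at the original (shallow) depth, with no high-precision arithmetic involved.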

Simulating Julia morphings to test ideas

As mentioned, Julia morphing simulation is not a replacement for rendering deep zooms, but it can still give a good impression of what the real result will look like. For that I made my own program.

No other fractal software currently implements Julia morphing simulation. Ultra Fractal is flexible enough to do it with layers of transformations, but that's not user-friendly and doesn't allow fast exploration. I wanted a program that makes testing the effect of Julia morphings as easy as possible: a single click.

You can download my program here:

ExploreFractals.exe   518 KB!Z88Q1QiI!pwRXrL6c3UN…
CRC32: 76357D65
MD5: 2915B31F8CC4C15B0F3EA293AFC63A22

With this program you can test the effect of inflections / Julia morphings on the Mandelbrot set with powers 2, 3, 4 and 5. It works by transforming the plane and then actually rendering the fractal. Each click adds an inflection at the location of the cursor. You can also work with Julia sets: go somewhere in the M-set, then use "Toggle Julia", which uses the center of the screen as the seed for the Julia set.
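The "Toggle Julia" mode swaps the roles of the pixel and the parameter: the seed stays fixed for the whole image and each pixel supplies the starting point. A minimal sketch of that iteration (my own naming, not the program's actual code):

```python
def julia_escape_time(z0, seed, power=2, max_iter=500, bailout=4.0):
    """Julia-set iteration: the seed is fixed for the whole image and
    each pixel supplies the starting point z0. (Mandelbrot mode is the
    other way around: z0 = 0 and the pixel supplies the parameter c.)
    The power parameter mirrors the powers 2-5 supported above."""
    z = z0
    for i in range(max_iter):
        z = z ** power + seed
        if abs(z) > bailout:
            return i
    return max_iter
```

Taking the seed from the current screen center, as the program does, means the rendered Julia set corresponds exactly to the location being explored.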

More information and source code can be found here:…
Also when I release a new version I will post it there.
For more than a year I didn't have any inspiration. Actually I think my last two submissions were even pretty boring. Now I have many new ideas again. I have at least 5 more renders planned right now, deeper and with more iterations than ever before.

I have so much to say related to my latest image submission "Trees revisited" that I decided to write a journal about it. I hope to clarify what I mean by the term evolution; information about it is spread out over comment sections and deviation descriptions. There has also been a breakthrough in computer-assisted zooming, which is what's helping me zoom this deep.

This is "Trees Revisited":
Trees Revisited by DinkydauSet

Evolution zoom method

Maybe the title of "Trees Revisited" is misleading, because it's not really about the trees. It's the same old trees again. Instead, this is a variation of what I have come to call the evolution zoom method. In general, given some shape that lies somewhere in the Mandelbrot set, evolution can be described as:
1. double the shape
2. morph one of the two copies
3. repeat by treating the morphing of step 2 as the new shape

julia_morphing.png (1024×533)
In the left image, two points are labeled 1 and 2. Zooming in on the point labeled 1, which is outside of the "shape", yields the middle image, a doubling. Zooming in on the point labeled 2 yields the right image, a morphing.

Doubling a shape can be done by zooming to 3/4 (a good rule of thumb; it's a little more complex than this) of the depth of a minibrot of choice outside of the shape. The exact result depends on the choice of minibrot. A doubling leads to two copies of the same shape next to each other. That's step one. Morphing one of them involves choosing a minibrot INSIDE that shape. But that minibrot is not inside the second copy of the shape, so the second copy gets doubled, and the result contains both the morphed shape and two copies of the original shape. By iterating the steps, the original shape and every morphing tied to an iteration of the steps are present in the result, all visible at once. That allows one to see how the original shape evolved, iteration by iteration, into the final morphing. That's why I call the result an evolution set.

Here's what's new: at each iteration of the steps we have a morphing and two copies of the previous stage. The way I used to do step 1 in pretty much every previous render where I mentioned the word "evolution" was to morph one of those two copies, but I realized many other ways could be used to double. The only requirement is that the chosen minibrot is outside of the shape to be doubled. I tried a few things and this is the most interesting one I've found, at least so far.

Automated zooming

There is also a lot to be said about the computer-assisted zooming I used to get to this shape. Claude found an algorithm to determine the location and depth of the nearest minibrot inside a bounded region, based on the Newton-Raphson method. Because doubling and morphing shapes is equivalent to choosing a minibrot and zooming to 3/4 of its depth, knowing where the minibrot is and how deep it is gives the coordinate and the depth of the morphing immediately: the coordinate is the same, and the depth (the exponent in the magnification factor required for the minibrot to fill the screen) needs to be multiplied by 3/4. All you need to do is a few manual zooms to make sure the algorithm searches for the correct minibrot, and the computer can do the rest. Kalles Fraktaler has this algorithm implemented and I've been using it a lot. Some links to information about how it works can be found here:…
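I won't reproduce the actual implementation here, but the core idea, as I understand it, is standard Newton-Raphson applied to the equation that a minibrot's center (its nucleus) satisfies: the period-th iterate of z -> z^2 + c, starting from z = 0, equals zero. A sketch, assuming the period of the minibrot is already known (finding the period is a separate step not shown here):

```python
def find_nucleus(c_guess, period, steps=30):
    """Newton-Raphson for the center (nucleus) of a minibrot of known
    period: solve F(c) = 0, where F(c) is the period-th iterate of
    z -> z^2 + c starting from z = 0."""
    c = c_guess
    for _ in range(steps):
        z = 0j
        dz = 0j                 # derivative dz/dc of the orbit
        for _ in range(period):
            dz = 2 * z * dz + 1  # chain rule, using z before updating it
            z = z * z + c
        c = c - z / dz          # one Newton step
    return c

# The period-2 minibrot (the big circle) has its nucleus at c = -1:
print(find_nucleus(-0.9 + 0j, 2))  # converges to -1
```

Real renderers do this with arbitrary-precision numbers; the structure of the iteration is the same. Given the nucleus and its depth, the 3/4 rule above immediately gives the morphing's coordinate and depth.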

This is revolutionary. I think we can call it the best invention since perturbation and series approximation. Zooming manually takes A LOT of time; I have spent days to several weeks just zooming for one image. Once the desired path has been chosen, it's a very simple and boring process of zooming in on the center until the required depth is reached. Note that this is not what the algorithm does: it doesn't need to render any pixels or use any visual reference whatsoever. It's a solid mathematics-based method, and it works if you give it an "accurate enough" guess of where the minibrot is. Note also that it doesn't help in choosing a location to zoom to. You really just tell it "zoom into this center" and it finds the minibrot inside it for you, saving a lot of work.

It's generally fast, usually faster than manual zooming, especially in locations with few iterations. Based on my experience with Newton-Raphson zooming in Kalles Fraktaler so far, it can actually be a lot slower than manual zooming for locations with a high iteration count, but usually that's still more than made up for: you can work, sleep, study and (most importantly, of course) explore other parts of the Mandelbrot set while the computer works for you, 24/7. If you have a processor with many cores you can let it zoom to several locations at once. Effectively that makes it faster in almost every situation.

The evolution zoom method involves a number of iterations of a few steps, and I have found that, generally, the more steps taken, the better the result. The way the result looks converges to a limit as the number of steps goes to infinity. Newton-Raphson zooming allows me to perform more such iterations with less effort than before. I always want to push the limits of what's possible, so I will perform those extra iterations, meaning I will be zooming a lot deeper. That will lead to shapes that are even more refined, with even more symmetries and patterns.

Perturbation for the Mandelbrot set

Perturbation for rendering the Mandelbrot set has been around for a while. I would have written a journal about it before, because it's very awesome, but right from the start there was a fundamental problem: reliability. A recent discovery by Pauldelbrot indicates that perturbation can now be used to render the Mandelbrot set reliably. Is the project approaching completion? "Correctness" now appears to be achieved.


Roughly a year ago, Kevin Martin published a relatively short document about the Mandelbrot set containing some equations that staggered everyone. His idea was to apply the principle of perturbation to rendering the Mandelbrot set, combined with something he called series approximation. Perturbation allows the iteration count of a pixel to be derived from a different, fully calculated pixel "nearby" (called a reference pixel). In practice this means that it's possible to calculate just one single pixel in an image and derive the rest using perturbation. At great depths with millions of iterations, this saves an enormous amount of render time, which is the main result.

Series approximation allows large numbers of iterations to be skipped entirely, good for another enormous speed-up, but it doesn't stop there. In addition, no arbitrary-precision calculations are required to do the "deriving" work: floating-point calculations, which are much faster, are sufficient. Martin concludes his document with the following statement:
Using [the equations] means that the time taken rendering Mandelbrot images is largely independent of depth and iteration count, and mainly depends on the complexity of the image being created.
The implications of this are enormous, and such a theory is of course begging to be implemented. Along with the mathematics, Martin published a simple software implementation of the theory dubbed SuperFractalThing, so that everyone could see that it works. Since then, more software developers have started working on their own implementations.
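The central equation is short enough to show. Write each pixel as c = c_ref + dc and its orbit as z_n = Z_n + d_n, where Z_n is the reference orbit. Substituting into z_{n+1} = z_n^2 + c gives d_{n+1} = 2·Z_n·d_n + d_n^2 + dc: only the reference orbit needs arbitrary precision, while the small differences fit in ordinary doubles. A toy illustration (everything in doubles here, and all function names are mine):

```python
def direct_escape_time(c, max_iter=100, bailout=4.0):
    """Ordinary full-precision-per-pixel iteration, for comparison."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > bailout:
            return i
    return max_iter

def reference_orbit(c_ref, max_iter=100):
    """The orbit Z_0..Z_max_iter of the reference pixel. In real
    renderers, this is the only arbitrary-precision computation."""
    orbit = [0j]
    for _ in range(max_iter):
        orbit.append(orbit[-1] ** 2 + c_ref)
    return orbit

def perturbed_escape_time(dc, orbit, bailout=4.0):
    """Iterate only the small difference d = z - Z:
    d_{n+1} = 2*Z_n*d_n + d_n^2 + dc (Kevin Martin's equation)."""
    d = 0j
    for n in range(len(orbit) - 1):
        d = 2 * orbit[n] * d + d * d + dc
        if abs(orbit[n + 1] + d) > bailout:  # escape test on z = Z + d
            return n
    return len(orbit) - 1
```

The algebra is exact; the speed-up in practice comes from the delta iteration needing only cheap low-precision arithmetic.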

The simple equation of the Mandelbrot set has long been famous for being so computationally intensive that it can bring any supercomputer to its knees, as long as you zoom in deep enough. Although that is still the case even with perturbation, the barrier has been shifted significantly. To get an idea of the speed-up we're talking about, consider the following deviation:
SSSSSurvival of the fittest - Evolution #3 by DinkydauSet
Fractal eXtreme has long been the fastest software to calculate the Mandelbrot set, using traditional optimizations. If the deviation above were rendered in Fractal eXtreme, the render would take roughly 6 months. The actual image was rendered in 6 hours using an implementation of perturbation by Botond Kosa. What you're looking at is something that, without perturbation, would have been totally out of reach for many years, no matter how optimized the software. As Bruce Dawson, the man behind Fractal eXtreme, commented: good algorithms beat optimized code.


Although there is no doubt that perturbation is a "good algorithm", it came with severe problems right from the start that Kevin Martin couldn't solve himself. If you have been paying attention, you may have noticed the requirement for the reference pixel to be "nearby". More specifically, the use of floating-point numbers requires some numbers in the perturbation equation to be "small". Mathematically, this is completely useless, because there's no exact definition of what "small" is. Indeed, the results of the calculations were shown to be unreliable in many cases. It turned out that the results were correct "most of the time", but sometimes not. Incorrect parts of renders have since been called glitches.

An example of such a glitch can be seen in the image below.
rare_glitch.png (1024×533)
Look closely at the largest spirals. The render on the left contains glitches; the render on the right is correct.

Several attempts have been made to get rid of these inaccuracies. Workarounds were made in which the computer was taught what glitches usually look like, so that they can be recognized and solved automatically. One way to do that is to calculate a second reference point inside the glitched area and do the perturbation calculations again; having a new reference point more "nearby" solves the glitch. Karl Runmo made notable contributions to this automated glitch solving in his software implementation, Kalles Fraktaler.

As you may understand, it is very difficult to teach a computer to distinguish visually between correct and incorrect renders, especially because glitches come in such an enormous variety. Even fractal structures can sometimes appear as glitches, which is interesting on its own, but very, very difficult to auto-recognize. As such, manually solving glitches appeared to be a necessity: a very time-consuming process.

It might seem reasonable to spend some time solving the glitches. Considering how many months of render time (and hundreds of euros' worth of electricity) can be saved, spending a day solving glitches doesn't seem so bad. This idea slowly started to change as more difficult types of glitches were found where the extra-reference trick didn't even work. Where does it stop? How many more types of glitches are there, and can workarounds ever be made for all of them? What was needed was more insight into where the inaccuracies come from, so that they could be avoided instead of worked around.

Correctness: now achieved?

Recently, Pauldelbrot published an algorithm to reliably find which pixels of a render are correct and which aren't. This information can then be used to reliably solve the glitches as well. This was somewhat unexpected, because the algorithm doesn't help prevent glitches; instead, it detects them afterwards. This is somewhat similar to the approach of Karl Runmo, except Pauldelbrot detects glitches in a non-visual way. The algorithm has proven to be reliable: it automatically catches all the hard-to-detect glitches, and no counterexample that slips through has been found so far. That is great news!
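As far as I understand it, the criterion compares the magnitude of the pixel's full orbit value with the reference's: when |Z_n + d_n| becomes much smaller than |Z_n|, nearly all significant digits have cancelled and the pixel is flagged. A sketch built on the perturbation iteration (my own naming; the tolerance value is an illustrative guess, not Pauldelbrot's):

```python
def reference_orbit(c_ref, max_iter=100):
    """The reference orbit Z_0..Z_max_iter (doubles stand in for the
    arbitrary-precision numbers a real renderer would use)."""
    orbit = [0j]
    for _ in range(max_iter):
        orbit.append(orbit[-1] ** 2 + c_ref)
    return orbit

def perturbed_with_glitch_check(dc, orbit, bailout=4.0, tol=1e-6):
    """Perturbation iteration d_{n+1} = 2*Z_n*d_n + d_n^2 + dc, with a
    glitch flag: if |Z + d|^2 falls far below |Z|^2, precision has been
    lost and the pixel must be redone with a closer reference."""
    d = 0j
    for n in range(len(orbit) - 1):
        d = 2 * orbit[n] * d + d * d + dc
        z = orbit[n + 1] + d
        if abs(z) ** 2 < tol * abs(orbit[n + 1]) ** 2:
            return None              # glitched pixel: needs a new reference
        if abs(z) > bailout:
            return n                 # escaped after n iterations
    return len(orbit) - 1            # never escaped
```

The point of the non-visual test is exactly what's described above: it triggers on the numerical condition itself rather than on how the glitch happens to look in the image.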

This doesn't mean the project is really finished. There may yet be a better way to get rid of glitches waiting to be discovered, and many of the programs that currently use perturbation are still under development. It may even be possible to extend the method of perturbation to other fractals. A good first candidate would be the Mandelbrot set with a power of 3 (instead of 2), but applications in 3D fractal rendering cannot be excluded in the future. The search continues. Mathematics never ends.

Applications in art

I haven't been sitting idle while these developments went on, so I can now present to you a new video. I once remarked on YouTube that I could do so many more interesting things if only my computer were 1000 times faster. Here you have it: this is one of the things I was thinking of at the time.

More is coming "soon".
The zoom video that I mainly purchased my computer for is here. After 11.5 months of rendering, here is probably the most calculation-intensive Mandelbrot zoom to date.
If you prefer constant zoom speed:…

Render time for this video was 11.5 months, almost a year! I did render other stuff too while this was rendering, but I'm sure it got well over half the CPU time.

Download original file at 60 fps and 50 Mb/s:!s1MX2YTJ!ILjLg0wf…

6.763271516386836374713460138655 E914

And then something else: I saw that there's an HTML edit mode for journals, but it automatically linkifies URLs, automatically adds new lines (no need for br) and doesn't accept YouTube embed code. What's the purpose of HTML edit mode then!?
Real-life fractal animation in the wok! Like Jupiter's storms…

This effect is a combination of oil and soap, which shows very well how the water flows in the pan. It looks very similar to a fractal animation with waves and gnarls.
Damn it! Why can't Apophysis ever finish a render that takes a while? I have the patience to wait a few days for a large render to finish, but not to lose the progress every fucking time! There's something terribly wrong with that renderer. As soon as render time passes a few days, it just stops functioning out of nowhere. This is extremely frustrating and prevents me from posting anything. Should I instead render at lower resolutions and have grain all over the image? I've tried Apophysis 7x, but it accepts far less RAM usage than 2.08 3D. I would try the latest 64-bit 7x by Xyrus, but it doesn't allow plugins, which renders it useless for practically every render (pun intended).