Extended Voronoi Texture support in Blender

A long time ago I reported on my efforts to get a more versatile Voronoi texture node into Blender, but that effort never led to actual inclusion of the code.

But all of a sudden there was renewed interest, and now we have extended Voronoi functionality in the Voronoi texture node! And of course Pablo, in his uniquely enthusiastic style, did a nice demo of it as well; check out the video.

The new functionality is available in the latest build of 2.79 and presumably in the 2.8 branch too. Note that my original patch included Voronoi crackle as well, but that is not available in the new node itself. However, since Voronoi crackle is simply the difference between the distance to the 2nd closest point and the distance to the closest point, this popular pattern is now super easy to implement with the following noodle:
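
In script form the noodle boils down to subtracting the F1 distance from the F2 distance. A minimal sketch (I am assuming the updated node exposes the feature selection as a feature enum with identifiers like 'F1' and 'F2'; the exact identifiers may differ between builds):

import bpy

mat = bpy.data.materials['Material']  # any node based material
nodes = mat.node_tree.nodes
links = mat.node_tree.links

# distance to the closest point
f1 = nodes.new('ShaderNodeTexVoronoi')
f1.feature = 'F1'  # assumed enum identifier
# distance to the 2nd closest point
f2 = nodes.new('ShaderNodeTexVoronoi')
f2.feature = 'F2'  # assumed enum identifier

# crackle = F2 - F1
subtract = nodes.new('ShaderNodeMath')
subtract.operation = 'SUBTRACT'
links.new(f2.outputs['Fac'], subtract.inputs[0])
links.new(f1.outputs['Fac'], subtract.inputs[1])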


Finally an easy way to generate lizard scales :-)


Blender Market 2018 fall sale

Blender Market will feature a sale from September 18 through Friday, September 21. Many products will be 25% off, including of course all my add-ons ;-)
So if you were thinking about TextureWatch, NodeSet Pro, Ortho, IDMapper, WeightLifter or even SpaceTree Pro, this will be your chance to get them at a nice discount in my BlenderMarket shop.
Many other creators of add-ons, as well as models, shaders, etc., will participate, so be sure to visit BlenderMarket's home page as well.

Add-on: export a mesh as an OpenGL display list

Ok, I know this will only benefit very few people but that doesn't stop me from sharing :-)

A cloud of OpenGL Suzannes

What is it?

A tiny add-on that exports a small Python file containing OpenGL code to display a mesh.

What is it good for?

Blender has OpenGL bindings available for use in Python scripts. You can use these OpenGL drawing commands to display, for example, overlays on top of your 3D view with a draw handler.
If you want to draw some sort of complex object, you would have to recreate it with glVertex3f() calls, which is a lot of work as soon as the model is more than a few vertices. This add-on generates that code for you, in the form of a function that creates a display list.

How does it work?

When you select File -> Export -> Export mesh as OpenGL snippet, a file dialog opens and the Python code is written to the selected file.
The add-on writes the vertex coordinates (in object space) and the vertex normals of the active object, triangulating the mesh internally before writing it. There is currently no check whether there is an active object or whether the active object is a mesh.
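
The core of such an exporter might look like this minimal sketch (my reconstruction, not the code from the repository): triangulate a copy of the mesh with bmesh and write one glNormal3f()/glVertex3f() pair per triangle corner:

import bmesh

def write_snippet(obj, filename):
    # work on a triangulated copy of the mesh so we only need GL_TRIANGLES
    bm = bmesh.new()
    bm.from_mesh(obj.data)
    bmesh.ops.triangulate(bm, faces=bm.faces)
    with open(filename, 'w') as f:
        # the object name is assumed to be a valid Python identifier
        f.write("import bgl\n\ndef %s():\n" % obj.name)
        f.write("    shapelist = bgl.glGenLists(1)\n")
        f.write("    bgl.glNewList(shapelist, bgl.GL_COMPILE)\n")
        f.write("    bgl.glBegin(bgl.GL_TRIANGLES)\n")
        for face in bm.faces:
            for vert in face.verts:
                # object space coordinates and vertex normals
                f.write("    bgl.glNormal3f(%f,%f,%f)\n" % tuple(vert.normal))
                f.write("    bgl.glVertex3f(%f,%f,%f)\n" % tuple(vert.co))
        f.write("    bgl.glEnd()\n")
        f.write("    bgl.glEndList()\n")
        f.write("    return shapelist\n")
    bm.free()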

What does the resulting Python code look like?

The export for a plain Suzanne mesh looks like this:

import bgl

def Suzanne():
    shapelist = bgl.glGenLists(1)
    bgl.glNewList(shapelist, bgl.GL_COMPILE)
    bgl.glBegin(bgl.GL_TRIANGLES)
    bgl.glNormal3f(0.9693,-0.245565,-0.011830)
    bgl.glVertex3f(0.4688,-0.757812,0.242188)
    bgl.glNormal3f(0.6076,-0.608505,-0.510393)
    bgl.glVertex3f(0.5000,-0.687500,0.093750)
    bgl.glNormal3f(0.8001,-0.599853,-0.002850)
    ... lots of calls omitted ...
    bgl.glVertex3f(-0.5938,0.164062,-0.125000)
    bgl.glEnd()
    bgl.glEndList()
    return shapelist
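
Using the generated function in a draw handler is then straightforward. A minimal sketch, assuming the export was saved as suzanne.py somewhere on Python's path (the module and variable names here are just examples):

import bpy
import bgl
from suzanne import Suzanne  # the generated module

shapelist = None

def draw():
    global shapelist
    if shapelist is None:
        # compile the display list once, inside a valid OpenGL context
        shapelist = Suzanne()
    bgl.glEnable(bgl.GL_DEPTH_TEST)
    bgl.glCallList(shapelist)

# draw the mesh as an overlay on every 3d view redraw
handle = bpy.types.SpaceView3D.draw_handler_add(draw, (), 'WINDOW', 'POST_VIEW')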

That code is so old fashioned, why?

If you ask any questions on forums like Stack Overflow about older versions of OpenGL (and old in this context essentially means anything before OpenGL 3.0), you will be told over and over again that you shouldn't use it and that investing time in it is wasteful or even stupid.
You will have to live with that :-)
The fact is that even though Blender itself certainly runs on newer versions of OpenGL, the Python bindings it provides are based on version 2.1, and I am not sure when that will change. And yes, even 2.1 supports things like vertex arrays, but those are quite cumbersome to use, not documented in the Blender Python API, and in many situations overkill without being a real speed improvement: if all you want is some fancy overlay, a compiled display list is pretty fast.

Where can I download it?

The add-on is available on GitHub.

Blender add-on: TextureWatch

I am happy to announce that yesterday I published my new TextureWatch add-on on BlenderMarket.

As illustrated in the video, TextureWatch is a small add-on that automatically synchronizes the textures used in your .blend file when they change on disk. This simplifies working with external programs like Gimp or Substance Painter, because TextureWatch can update those textures automatically when you save your files, without the need to go through all the images inside Blender one by one and selecting reload. This saves time and guarantees consistency.


If you find this useful you might want to take a look at my BlenderMarket store.

Graswald vs. Grass Essentials

I recently bought the Graswald add-on because the sample images looked really good and the collection of plant species, variations and leaf debris on offer was quite extensive.

A couple of years ago I also bought Grass Essentials, and although that was (and is) a fine collection of grasses and weeds too, I always found it rather difficult to get naturalistic looks. Both products have options to change things like the patchiness or wetness of the plants, but I feel that Graswald, being an add-on*) with all the configurable options in a toolbar panel, is far easier to work with and offers some extra possibilities, like integration with weight painting for distribution and length, as well as aging a percentage of the plants.

*) It is also available as an asset library without the add-on at a slightly lower price.

Example


To show you what I mean, I spent some time trying to recreate a patchy grass field consisting mainly of Kentucky Rye grass sprinkled with lots of dandelions. It took me about 10 minutes to set up the Graswald patch on the left, and although the Grass Essentials patch on the right took about as long, I lost some time because Grass Essentials, while it does have dandelion flowers and seed-heads, has no particle system for the actual dandelion leaves, so I ended up substituting plantain to get at least some leaves showing.

(Click to enlarge. Image rendered in Cycles with 1000 samples, filmic color management at medium-high contrast, lit by a single HDRI backplate from HDRI Haven. The ground texture barely visible below the plants is a simple dirt texture created in Substance Painter. Note that Grass Essentials bundles several good dirt textures as well.)

Conclusion


Now everybody knows I am not much of an artist, and with some extra time the Grass Essentials version could probably be made to look more varied and patchy, but for me the ease of use and the quality of the end result speak for themselves. Given this quality and ease of use, combined with the slightly lower price, Graswald easily wins on value for money.

[These opinions are my own. I am in no way affiliated with either Graswald or Grass Essentials and paid for both products myself.]

Raytracing concepts and code, part 11, antialiasing


This is an article in a multipart series on the concepts of ray tracing. I am not sure where this will lead but I am open to suggestions. We will be creating code that will run inside Blender. Blender has ray tracing renderers of course but that is not the point: by reusing Python libraries and Blender's scene building capabilities we can concentrate on true ray tracing issues like shader models, lighting, etc.
I generally present stuff in a back-to-front manner: first an article with some (well commented) code and images of the results, then one or more articles discussing the concepts. The idea is that this encourages you to experiment and have a look at the code yourself before being introduced to theory. How well this works out we will see :-)

So far the series consists of several articles, all labeled ray tracing concepts.

Anti aliasing

If we want to diminish the jagged look of edges in our rendered image we can sample a pixel multiple times at slightly different positions and average the result. In this article I show the code to do this.

Code

Once we know how many times we want to sample a pixel, we have to calculate our camera ray that many times, each time with slightly shifted coordinates, and average the results. If we take the original direction of the camera ray to point at the exact middle of an on-screen pixel, we have to pick a point in the square centered on that middle.
For this we first calculate the size of this square (the dx and dy values). Then, as before, we calculate each pixel, but we repeat this for every sample (the outer loop over s). We keep the sum of all our samples in a separate buffer sbuf; each new sample is added to it in the innermost loop. We still want the final buffer that is shown on screen to represent the averaged color calculated so far, so after every scanline we update a whole line of pixels in buf with the current average, dividing the sums by s+1.
We also invert the line just above the one being displayed (the buf[y+1] assignment), so we have some visual feedback of where we are while updating the buffer. This is useful because with an increasing number of samples the contribution of each new sample gets smaller and might be hard to see.
Finally we yield the fraction of the samples calculated so far.
Note that the vdc() function returns a number from a Van der Corput sequence. I have not shown this function; you could use uniformly distributed random numbers instead (shown in the commented-out line), but these quasi-random numbers should give better results. The reasoning behind this is outside the scope of this article, but check Wikipedia for more on this.
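
For reference, a Van der Corput generator can be as small as this (a sketch of my own; the vdc() in the repository may differ in detail). It mirrors the digits of n around the radix point in the given base:

def vdc(n, base=2):
    # reflect the digits of n around the radix point:
    # in base 2: 1 -> 0.5, 2 -> 0.25, 3 -> 0.75, ...
    result, denominator = 0.0, 1
    while n:
        n, remainder = divmod(n, base)
        denominator *= base
        result += remainder / denominator
    return result
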
def ray_trace(scene, width, height, depth, buf, samples, gi):

    ... identical code left out ...

    # running sum of all samples so far
    sbuf = np.zeros(width*height*4)
    sbuf.shape = height,width,4

    # size of the on screen square a pixel occupies
    aspectratio = height/width
    dy = aspectratio/height
    dx = 1/width
    seed(42)

    N = samples*width*height
    for s in range(samples):
        for y in range(height):
            yscreen = ((y-(height/2))/height) * aspectratio
            for x in range(width):
                xscreen = (x-(width/2))/width
                # jitter the ray direction within the pixel square
                #dir = Vector((xscreen + dx*(random()-0.5), yscreen + dy*(random()-0.5), -1))
                dir = Vector((xscreen + dx*(vdc(s,2)-0.5), yscreen + dy*(vdc(s,3)-0.5), -1))
                dir.rotate(rotation)
                dir = dir.normalized()
                sbuf[y,x,0:3] += single_ray(scene, origin, dir, lamps, depth, gi)
            # show the average of the samples so far
            buf[y,:,0:3] = sbuf[y,:,0:3] / (s+1)
            # invert the next line as a progress indicator
            if y < height-1:
                buf[y+1,:,0:3] = 1 - buf[y+1,:,0:3]
            yield (s*width*height+width*y)/N
When we call our new ray_trace() function, we have to determine the number of samples first. Blender provides two properties from the render context for this: scene.render.use_antialiasing and scene.render.antialiasing_samples. For some reason the latter is a string, so we have to convert it to an int before we can use it:
    def render_scene(self, scene):

    ... identical code left out ...

        samples = int(scene.render.antialiasing_samples) if scene.render.use_antialiasing else 1
        for p in ray_trace(scene, width, height, 1, buf, samples, gi):

    ... identical code left out ...

Now we could make the whole existing anti-aliasing panel available by again adding our engine to the COMPAT_ENGINES attribute, but because it has lots of options we don't use, we create our own panel with just the properties we need (something we might do later for other panels as well):
from bpy.types import Panel
from bl_ui.properties_render import RenderButtonsPanel

class CUSTOM_RENDER_PT_antialiasing(RenderButtonsPanel, Panel):
    bl_label = "Anti-Aliasing"
    COMPAT_ENGINES = {CustomRenderEngine.bl_idname}

    def draw_header(self, context):
        rd = context.scene.render

        self.layout.prop(rd, "use_antialiasing", text="")

    def draw(self, context):
        layout = self.layout

        rd = context.scene.render
        layout.active = rd.use_antialiasing

        split = layout.split()

        col = split.column()
        col.row().prop(rd, "antialiasing_samples", expand=True)

Code availability

The revision containing the code changes from this article is available from GitHub.


Raytracing concepts and code, part 10, showing progress


This is an article in a multipart series on the concepts of ray tracing. I am not sure where this will lead but I am open to suggestions. We will be creating code that will run inside Blender. Blender has ray tracing renderers of course but that is not the point: by reusing Python libraries and Blender's scene building capabilities we can concentrate on true ray tracing issues like shader models, lighting, etc.
I generally present stuff in a back-to-front manner: first an article with some (well commented) code and images of the results, then one or more articles discussing the concepts. The idea is that this encourages you to experiment and have a look at the code yourself before being introduced to theory. How well this works out we will see :-)

So far the series consists of several articles, all labeled ray tracing concepts.

Progress

The bits of code we show in this article have less to do with ray tracing per se, but they do show off the options to indicate progress in Blender's RenderEngine class. Not only does this give some visual feedback when rendering takes a long time, it also gives the user the opportunity to cancel the process mid-render.

Code

First we change the ray_trace() function into a generator that yields the y-coordinate after every line. This is accomplished by the yield statement. The function no longer allocates a buffer itself but takes a buf argument.
def ray_trace(scene, width, height, depth, buf):     

    ... identical code left out ...

    for y in range(height):
        yscreen = ((y-(height/2))/height) * aspectratio
        for x in range(width):
            xscreen = (x-(width/2))/width

            ... identical code left out ...            

        yield y
The render_scene() function is changed too: it allocates a suitable buffer and then iterates over every line in the render result as it gets rendered. After each line it calls the update_result() method of the RenderEngine class to show on screen what has been rendered so far, and it calculates the fraction that has been completed and signals that with update_progress().
    def render_scene(self, scene):
        height, width = self.size_y, self.size_x
        buf = np.ones(width*height*4)
        buf.shape = height,width,4
        
        result = self.begin_result(0, 0, self.size_x, self.size_y)
        layer = result.layers[0].passes["Combined"]
        
        for y in ray_trace(scene, width, height, 1, buf):
            buf.shape = -1,4
            layer.rect = buf.tolist()
            self.update_result(result)
            buf.shape = height,width,4
            self.update_progress(y/height)
        
        self.end_result(result)

Code availability

The code for this article is available in this revision on GitHub. (Note that it already contains some non-functional global illumination code that you can ignore.)