Blender add-on: TextureWatch

I am happy to announce that yesterday I published my new TextureWatch add-on on BlenderMarket.

As illustrated in the video, TextureWatch is a small add-on that automatically synchronizes the textures used in your .blend file when they change on disk. This simplifies working with external programs like Gimp or Substance Painter: TextureWatch can update those textures automatically the moment you save your files, without the need to go through all the images inside Blender one by one and selecting reload. This saves time and guarantees consistency.


If you find this useful you might want to take a look at my BlenderMarket store.

Graswald vs. Grass Essentials

I recently bought the Graswald add-on because the sample images looked really good and the collection of plant species, variations and leaf debris on offer was quite extensive.

A couple of years ago I also bought Grass Essentials, and although that was (and is) a fine collection of grasses and weeds too, I always found it rather difficult to get naturalistic looks with it. Both products have options to change things like the patchiness or wetness of the plants, but I feel that Graswald, being an add-on*) with all the configurable options in a toolbar panel, is far easier to work with. It also offers some extra possibilities, like integration with weight painting for distribution and length, as well as aging a percentage of the plants.

*) It is also available as an asset library without the add-on at a slightly lower price.

Example


To show you what I mean I spent some time trying to recreate a patchy grass field consisting mainly of Kentucky rye grass sprinkled with lots of dandelions. It took me about 10 minutes to set up the Graswald patch on the left, and although the Grass Essentials patch on the right took about the same time to set up, I lost some of that time because Grass Essentials, while it does have dandelion flowers and seed-heads, does not have a particle system for the actual dandelion leaves, so I finally substituted plantain to get at least some leaves showing.

(Click to enlarge. The image was rendered in Cycles with 1000 samples, filmic color management at medium high contrast, and lit by a single HDRI backplate from HDRI Haven. The ground texture barely visible below the plants is a simple dirt texture created in Substance Painter. Note that Grass Essentials bundles several good dirt textures as well.)

Conclusion


Now everybody knows I am not much of an artist, and with some extra time the Grass Essentials version could probably be made to look more varied and patchy, but for me the ease of use and the quality of the end result speak for themselves. Given that quality and ease of use, combined with the slightly lower price, Graswald easily wins on value for money.

[These opinions are my own. I am in no way affiliated with either Graswald or Grass Essentials and paid for both products myself.]

Raytracing concepts and code, part 11, antialiasing


This is an article in a multipart series on the concepts of ray tracing. I am not sure where this will lead but I am open to suggestions. We will be creating code that will run inside Blender. Blender has ray tracing renderers of course but that is not the point: by reusing Python libraries and Blender's scene building capabilities we can concentrate on true ray tracing issues like shader models, lighting, etc.
I generally present stuff in a back-to-front manner: first an article with some (well commented) code and images of the results, then one or more articles discussing the concepts. The idea is that this encourages you to experiment and have a look at the code yourself before being introduced to theory. How well this works out we will see :-)

So far the series consists of the articles labeled ray tracing concepts.

Anti-aliasing

If we want to diminish the jagged look of edges in our rendered image we can sample a pixel multiple times at slightly different positions and average the result. In this article I show the code to do this.

Code

Once we know how many times we want to sample a pixel, we have to calculate our camera ray that many times with slightly shifted coordinates and then average the results. If we take the original direction of the camera ray to point at the exact middle of an on-screen pixel, we have to pick a point in the square centered on that point.
For this we first calculate the size of this square (the dx and dy assignments). Then, as before, we calculate each pixel, but we repeat this for every sample (the outer for s in range(samples) loop). We keep the running sum of all our samples in a separate buffer sbuf (allocated at the top of the function; the addition happens in the innermost loop). We still want the final buffer that is shown on screen to represent the averaged color calculated so far, so after each scanline we set a whole line of pixels in buf to the current average.
We also invert the next line of buf (the if y < height-1 block) so we have some visual feedback of where we are while updating the buffer. This is useful because with an increasing number of samples the contribution of each new sample gets smaller and might be hard to see.
Finally we yield the fraction of the samples we have calculated so far.
Note that the vdc() function returns a number from a Van der Corput sequence. You could use uniformly distributed random numbers instead (shown in the line that is commented out), but these quasi-random numbers should give better results. The reasoning behind this is out of scope for this article, but check Wikipedia for more on low-discrepancy sequences.
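The vdc() function itself is not part of the listing below; a minimal sketch of a Van der Corput radical inverse could look like this (the version in the repository may differ):

def vdc(n, base=2):
    """Return the n-th element of the Van der Corput sequence in the given base."""
    result, denom = 0.0, 1.0
    while n:
        n, remainder = divmod(n, base)
        denom *= base
        result += remainder / denom
    return result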
def ray_trace(scene, width, height, depth, buf, samples, gi):

    ... identical code left out ...

    # running sum of all samples so far
    sbuf = np.zeros(width*height*4)
    sbuf.shape = height,width,4

    aspectratio = height/width
    # dx and dy are the dimensions of the square around a pixel center
    # that we pick our sample positions from
    dy = aspectratio/height
    dx = 1/width
    seed(42)

    N = samples*width*height
    for s in range(samples):
        for y in range(height):
            yscreen = ((y-(height/2))/height) * aspectratio
            for x in range(width):
                xscreen = (x-(width/2))/width
                # jitter the ray direction inside the pixel square;
                # vdc() yields quasi-random Van der Corput numbers
                #dir = Vector((xscreen + dx*(random()-0.5), yscreen + dy*(random()-0.5), -1))
                dir = Vector((xscreen + dx*(vdc(s,2)-0.5), yscreen + dy*(vdc(s,3)-0.5), -1))
                dir.rotate(rotation)
                dir = dir.normalized()
                sbuf[y,x,0:3] += single_ray(scene, origin, dir, lamps, depth, gi)
            # show the average of the samples accumulated so far
            buf[y,:,0:3] = sbuf[y,:,0:3] / (s+1)
            # invert the next line as a visual progress cue
            if y < height-1:
                buf[y+1,:,0:3] = 1 - buf[y+1,:,0:3]
            # fraction of the total number of samples done
            yield (s*width*height+width*y)/N
When we call our new ray_trace() function we have to determine the number of samples first. Blender provides two properties in the render context for this: scene.render.use_antialiasing and scene.render.antialiasing_samples. For some reason the latter is a string, so we have to convert it to an int before we can use it:
    def render_scene(self, scene):

    ... identical code left out ...

        samples = int(scene.render.antialiasing_samples) if scene.render.use_antialiasing else 1
        for p in ray_trace(scene, width, height, 1, buf, samples, gi):

    ... identical code left out ...

Now we could make the whole existing anti-aliasing panel available by again adding our engine to its COMPAT_ENGINES attribute, but because it has lots of options we don't use, we create our own panel with just the properties we need (something we might do later for other panels as well):
from bpy.types import Panel
from bl_ui.properties_render import RenderButtonsPanel

class CUSTOM_RENDER_PT_antialiasing(RenderButtonsPanel, Panel):
    bl_label = "Anti-Aliasing"
    COMPAT_ENGINES = {CustomRenderEngine.bl_idname}

    def draw_header(self, context):
        rd = context.scene.render

        self.layout.prop(rd, "use_antialiasing", text="")

    def draw(self, context):
        layout = self.layout

        rd = context.scene.render
        layout.active = rd.use_antialiasing

        split = layout.split()

        col = split.column()
        col.row().prop(rd, "antialiasing_samples", expand=True)
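As with any Panel subclass, this one has to be registered before it shows up; a minimal sketch, assuming it is registered alongside the render engine in the add-on's register()/unregister() functions:

import bpy

def register():
    bpy.utils.register_class(CUSTOM_RENDER_PT_antialiasing)

def unregister():
    bpy.utils.unregister_class(CUSTOM_RENDER_PT_antialiasing)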

Code availability

The revision containing the code changes from this article is available from GitHub.


Raytracing concepts and code, part 10, showing progress


This is an article in a multipart series on the concepts of ray tracing. I am not sure where this will lead but I am open to suggestions. We will be creating code that will run inside Blender. Blender has ray tracing renderers of course but that is not the point: by reusing Python libraries and Blender's scene building capabilities we can concentrate on true ray tracing issues like shader models, lighting, etc.
I generally present stuff in a back-to-front manner: first an article with some (well commented) code and images of the results, then one or more articles discussing the concepts. The idea is that this encourages you to experiment and have a look at the code yourself before being introduced to theory. How well this works out we will see :-)

So far the series consists of the articles labeled ray tracing concepts.

Progress

The bits of code we show in this article have less to do with ray tracing per se, but show off the options Blender's RenderEngine class offers to indicate progress. Not only does this give some visual feedback when rendering takes a long time, it also gives the user the opportunity to cancel the process mid-render.

Code

First we change the ray_trace() function into a generator that returns the y-coordinate after every line. This is accomplished by the yield statement. The function no longer allocates a buffer itself but takes a buf argument instead.
def ray_trace(scene, width, height, depth, buf):     

    ... identical code left out ...

    for y in range(height):
        yscreen = ((y-(height/2))/height) * aspectratio
        for x in range(width):
            xscreen = (x-(width/2))/width

            ... identical code left out ...            

        yield y
The render_scene() function is changed too: it allocates a suitable buffer and then iterates over every line in the render result as it gets rendered (the for loop over ray_trace()). For each line it calls the update_result() method of the RenderEngine class to show on screen what has been rendered so far, and it calculates the fraction of the work that has been completed and signals that with update_progress().
    def render_scene(self, scene):
        height, width = self.size_y, self.size_x
        buf = np.ones(width*height*4)
        buf.shape = height,width,4

        result = self.begin_result(0, 0, self.size_x, self.size_y)
        layer = result.layers[0].passes["Combined"]

        for y in ray_trace(scene, width, height, 1, buf):
            # flatten the buffer to the list of RGBA values layer.rect expects
            buf.shape = -1,4
            layer.rect = buf.tolist()
            self.update_result(result)
            buf.shape = height,width,4
            self.update_progress(y/height)

        self.end_result(result)
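Reporting progress is also what makes cancelling possible: Blender sets a flag when the user aborts, and we can poll that flag with the render engine's test_break() method. A possible refinement of the loop above (a sketch; not part of the repository code):

        for y in ray_trace(scene, width, height, 1, buf):
            if self.test_break():
                # the user cancelled the render: stop early
                break

            ... identical code left out ...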

Code availability

The code for this article is available in this revision on GitHub. (Note that it already contains some non-functional global illumination code that you can ignore.)

Raytracing concepts and code, part 9, adding a background image



This is an article in a multipart series on the concepts of ray tracing. I am not sure where this will lead but I am open to suggestions. We will be creating code that will run inside Blender. Blender has ray tracing renderers of course but that is not the point: by reusing Python libraries and Blender's scene building capabilities we can concentrate on true ray tracing issues like shader models, lighting, etc.
I generally present stuff in a back-to-front manner: first an article with some (well commented) code and images of the results, then one or more articles discussing the concepts. The idea is that this encourages you to experiment and have a look at the code yourself before being introduced to theory. How well this works out we will see :-)

So far the series consists of the articles labeled ray tracing concepts.

When a ray does not hit anything we would like it to intersect with some infinite spherical background.

In Blender it is quite easy to associate a texture with the world background. This texture is used by Blender Internal (but not by Cycles, which uses node-based world settings exclusively):



(The important bits when creating a new texture are to select World texture and of course an appropriate mapping for the image. [Here we use an image from HDRIHaven.])
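The same association can also be made from a script; a minimal sketch, assuming the Blender 2.7x API, a hypothetical image path, and that World.active_texture is assignable the way it is for materials:

import bpy

# load the environment image and wrap it in an image texture
img = bpy.data.images.load("//environment.hdr")  # hypothetical path
tex = bpy.data.textures.new("environment", type='IMAGE')
tex.image = img

# make it the world's active texture so scene.world.active_texture finds it
bpy.context.scene.world.active_texture = tex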

The ray tracing renderer that we are writing will make use of this texture for the background image as well as for some environment lighting (more on this in a future article).

Elevation and azimuth

Any image texture object offers an evaluate() function that takes a coordinate whose x and y components lie in [-1,1] and returns the color at that point. The point (0,0) is exactly in the middle of the image, regardless of its dimensions.
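For example, with tex any image texture (purely illustrative; evaluate() actually takes a 3-component coordinate, the third component of which plays no role for a flat image):

center = tex.evaluate((0.0, 0.0, 0.0))    # color at the exact middle of the image
corner = tex.evaluate((-1.0, -1.0, 0.0))  # color at one of the corners
print(center.xyz)                         # drop the alpha channel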

If we assume this image covers the whole skydome, this means that all we have to do when a ray doesn't hit anything in the scene is find out how high above (or below) the horizon the ray is pointing, and in which direction along the horizon.

The first item is called the elevation (or altitude) and ranges from -90° (-pi/2, pointing straight down) to +90° (+pi/2, i.e. straight up).

The second item is called the azimuth and ranges from -180° (-pi) to +180° (+pi) relative to some fixed direction. We will use the positive x-axis as our reference.

Given a normalized direction in Cartesian (x, y, z) coordinates, finding the elevation (theta) and azimuth (phi) is pretty straightforward.

Code


Calculating theta and phi and mapping them to [-1,1] is shown in the code below:

    elif scene.world.active_texture:
        # elevation mapped to [0,1]: 0 is straight down, 1 is straight up
        theta = 1-acos(dir.z)/pi
        # azimuth in [-1,1] relative to the positive x-axis
        phi = atan2(dir.y, dir.x)/pi
        color = np.array(scene.world.active_texture.evaluate(
                                     (-phi,2*theta-1,0)).xyz)

Because the dir vector is normalized, the z component gives us the cosine of the angle between the vector and the z-axis. The conversions are necessary to relate this angle (theta) to the horizontal plane and to scale it to [-1,1]. The angle phi is calculated relative to the positive x-axis. The color we get back from the evaluate() call may contain an alpha channel, so we make sure we keep only the first three components with the .xyz attribute.
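Factored into a small stand-alone helper (purely illustrative), the mapping reads:

from math import acos, atan2, pi

def background_uv(dir):
    # dir is a normalized Vector; returns the (x, y) pair for evaluate()
    theta = 1 - acos(dir.z)/pi     # 0 = straight down .. 1 = straight up
    phi = atan2(dir.y, dir.x)/pi   # -1 .. 1, measured from the positive x-axis
    return -phi, 2*theta - 1       # note the sign flip on phi, as in the code above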
The code is available on GitHub. It also contains the necessary code to show a panel with options to associate a background texture with a World (it reuses a panel from Blender's internal renderer).



Raytracing concepts and code, part 8, mirror reflections



This is an article in a multipart series on the concepts of ray tracing. I am not sure where this will lead but I am open to suggestions. We will be creating code that will run inside Blender. Blender has ray tracing renderers of course but that is not the point: by reusing Python libraries and Blender's scene building capabilities we can concentrate on true ray tracing issues like shader models, lighting, etc.
I generally present stuff in a back-to-front manner: first an article with some (well commented) code and images of the results, then one or more articles discussing the concepts. The idea is that this encourages you to experiment and have a look at the code yourself before being introduced to theory. How well this works out we will see :-)

So far the series consists of the articles labeled ray tracing concepts.

Mirror reflections

Mirror reflections are an important part of ray traced images; many materials, like glass and metals, have a reflective component. Calculating ray traced reflections is surprisingly simple, as we will see, but we do need some options from the material properties so that we can specify whether a material has mirror reflectivity at all, and in what amount. Again we borrow an existing panel (and ignore most of the options present; later we will create our own panels with just the options we need, but cleanup is not in focus just yet).


Reflecting a ray

To calculate mirror reflection we take the incoming ray from the camera, calculate the new direction the ray will take after the bounce, and see if it hits anything. The direction of the reflected ray depends on the direction of the incoming ray and the surface normal: the angle between the normal and the incoming ray equals the angle between the normal and the reflected ray:

If all vectors are normalized, we can calculate the reflected ray using this expression: dir - 2 * normal * dir.dot(normal).
For a more in-depth explanation you might want to take a look at Paul Bourke's site.
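A quick numeric check of this expression (illustrative only):

from mathutils import Vector

# a ray coming in at 45 degrees onto a floor that faces straight up
dir = Vector((1.0, 0.0, -1.0)).normalized()
normal = Vector((0.0, 0.0, 1.0))

reflected = dir - 2 * normal * dir.dot(normal)
# the ray bounces back up at 45 degrees: Vector((0.7071, 0.0, 0.7071))
print(reflected)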

Code

To deal with mirror reflections the single_ray() function needs a small enhancement: it needs to check the amount of mirror reflectivity specified in the material (if any) and then, if the number of bounces made is still less than the specified depth, calculate the reflected ray and cast it in this new direction to see if it hits something:
        ... identical code left out ...

        mirror_reflectivity = 0
        if len(mat_slots):
            mat = mat_slots[0].material
            diffuse_color = mat.diffuse_color * mat.diffuse_intensity
            specular_color = mat.specular_color * mat.specular_intensity
            hardness = mat.specular_hardness
            if mat.raytrace_mirror.use:
                mirror_reflectivity = mat.raytrace_mirror.reflect_factor

        ...

        if depth > 0 and mirror_reflectivity > 0:
            reflection_dir = (dir - 2 * normal * dir.dot(normal)).normalized()
            # start the bounced ray a tiny distance along the normal to
            # avoid immediately hitting the surface we just left
            color += mirror_reflectivity * single_ray(
                       scene, loc + normal*eps, reflection_dir, lamps, depth-1)

        ...

Code availability

The code for this revision is available from GitHub.

Raytracing concepts and code, part 7, colored lights and refactored ray cast



This is an article in a multipart series on the concepts of ray tracing. I am not sure where this will lead but I am open to suggestions. We will be creating code that will run inside Blender. Blender has ray tracing renderers of course but that is not the point: by reusing Python libraries and Blender's scene building capabilities we can concentrate on true ray tracing issues like shader models, lighting, etc.
I generally present stuff in a back-to-front manner: first an article with some (well commented) code and images of the results, then one or more articles discussing the concepts. The idea is that this encourages you to experiment and have a look at the code yourself before being introduced to theory. How well this works out we will see :-)

So far the series consists of the articles labeled ray tracing concepts.

Refactoring

To make it easier to calculate secondary rays like reflected or refracted rays, we move the actual ray casting to its own function, which also makes the central loop that generates every pixel on the screen much more readable:
    for y in range(height):
        yscreen = ((y-(height/2))/height) * aspectratio
        for x in range(width):
            xscreen = (x-(width/2))/width
            # align the look_at direction
            dir = Vector((xscreen, yscreen, -1))
            dir.rotate(rotation)
            dir = dir.normalized()
            buf[y,x,0:3] = single_ray(scene, origin, dir, lamps, depth)

Colored lights

Within the single_ray() function we also add a small enhancement: we now calculate the amount of light available for reflection from the properties in the lamp data:
        ...
        for lamp in lamps:
            light = np.array(lamp.data.color * lamp.data.energy)
        ...

User interface

To allow the user to set the color and intensity ("energy") of the point lights, we need the corresponding lamp properties panel (we only use color and energy):

This panel is added in the register() function:
    from bl_ui import (
            properties_render,
            properties_material,
            properties_data_lamp,
            )

    ... identical code left out ...

    properties_data_lamp.DATA_PT_lamp.COMPAT_ENGINES.add(CustomRenderEngine.bl_idname)

Code availability

This revision of the code is available from GitHub.