Raytracing concepts and code, part 9, adding a background image



This is an article in a multipart series on the concepts of ray tracing. I am not sure where this will lead but I am open to suggestions. We will be creating code that will run inside Blender. Blender has ray tracing renderers of course but that is not the point: by reusing Python libraries and Blender's scene building capabilities we can concentrate on true ray tracing issues like shader models, lighting, etc.
I generally present stuff in a back-to-front manner: first an article with some (well commented) code and images of the results, then one or more articles discussing the concepts. The idea is that this encourages you to experiment and have a look at the code yourself before being introduced to theory. How well this works out we will see :-)

So far the series consists of several articles labeled ray tracing concepts.

When a ray does not hit anything we would like it to intersect with some infinite spherical background.

In Blender it is quite easy to associate a texture with the world background, which is used by Blender Internal (but not by Cycles, which uses node-based world settings exclusively):



(The important bit when creating a new texture is to select World texture and of course an appropriate mapping for the image. [here we use an image from HDRIHaven])

The ray tracing renderer that we are writing will make use of this texture for the background image as well as for some environment lighting (more on this in a future article).

Elevation and azimuth

Any image texture object offers an evaluate() function that takes two coordinates x and y, each in [-1,1], and returns the color at that point. The point (0,0) is exactly in the middle of the image, regardless of its dimensions.
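For example, a minimal hypothetical snippet (assuming scene.world.active_texture is set; the third coordinate is passed as 0, just like in the article's code below):

    # sample the world's background texture at two known points
    tex = scene.world.active_texture
    center = tex.evaluate((0, 0, 0))   # color at the center of the image
    corner = tex.evaluate((1, 1, 0))   # color at the top right corner
    print(center.xyz, corner.xyz)      # .xyz drops the alpha channel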

If we assume this image covers the whole skydome, this means that all we have to do when a ray doesn't hit anything in the scene is find out how high above (or below) the horizon the ray is pointing and in what direction along the horizon.

The first item is called the elevation (or altitude) and ranges from -90° (-pi/2, pointing straight down) to +90° (+pi/2, i.e. straight up).

The second item is called the azimuth and ranges from -180° (-pi) to +180° (+pi) relative to some fixed direction. We will use the positive x-axis as our reference.

Given a normalized direction in Cartesian (x,y,z) coordinates, finding the elevation (theta) and azimuth (phi) is pretty straightforward.
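In Python this boils down to something like the following minimal sketch (the article's code below arrives at the same angles via acos):

    from math import asin, atan2

    def elevation_azimuth(dir):
        # dir is assumed to be a normalized direction vector
        theta = asin(dir.z)        # elevation in [-pi/2, pi/2]
        phi = atan2(dir.y, dir.x)  # azimuth in (-pi, pi], from the +x axis
        return theta, phi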

Code


How we calculate theta and phi and map them to [-1,1] is shown in the code below:

    elif scene.world.active_texture:
        # angle from the z-axis remapped so that 0 = straight down
        # and 1 = straight up
        theta = 1 - acos(dir.z)/pi
        # azimuth relative to the positive x-axis, mapped to (-1,1]
        phi = atan2(dir.y, dir.x)/pi
        color = np.array(scene.world.active_texture.evaluate(
                                     (-phi, 2*theta - 1, 0)).xyz)

Because the dir vector is normalized, the z component gives us the cosine of the angle between the vector and the z-axis. The conversions are necessary to relate this angle (theta) to the horizontal plane and scale it to [-1,1]. The angle phi is calculated relative to the positive x-axis. The color we get back from the evaluate call may contain an alpha channel, so we make sure we only keep the first three components with the .xyz attribute.
The code is available on GitHub. It also contains the necessary code to show a panel with options to associate a background texture with a World (it reuses a panel from Blender's internal renderer).
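The registration presumably follows the same COMPAT_ENGINES pattern used for the render and material panels in part 3 of this series; a hypothetical example (the exact panel names are in the GitHub code):

    from bl_ui import properties_world
    properties_world.WORLD_PT_context_world.COMPAT_ENGINES.add(
                                          CustomRenderEngine.bl_idname)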



Raytracing concepts and code, part 8, mirror reflections




Mirror reflections

Mirror reflections are an important part of ray traced images. Many materials, like glass and metals, have a reflective component. Calculating ray traced reflections is surprisingly simple, as we will see, but we do need some options from the material properties so that we can specify whether a material has mirror reflectivity at all and, if so, how much. Again we borrow an existing panel (and ignore most of the options present; later we will create our own panels with just the options we need, but cleanup is not the focus just yet).


Reflecting a ray

To calculate mirror reflection we take the incoming ray from the camera, calculate the new direction the ray will take after the bounce, and see if it hits anything. The direction of the reflected ray depends on the direction of the incoming ray and the surface normal: the angle between the normal and the incoming ray equals the angle between the normal and the reflected ray:

If all vectors are normalized, we can calculate the reflected ray using this expression: dir - 2 * normal * dir.dot(normal)
For a more in-depth explanation you might want to take a look at Paul Bourke's site.
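As a standalone helper this could look like the following minimal sketch (reflect is a hypothetical name; dir and normal are assumed to be normalized mathutils Vectors):

    def reflect(dir, normal):
        # mirror the incoming direction around the surface normal
        return (dir - 2 * normal * dir.dot(normal)).normalized()

A quick sanity check: reflect(Vector((0, 0, -1)), Vector((0, 0, 1))), a ray pointing straight down at an upward facing surface, bounces straight back up as (0, 0, 1).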

Code

To deal with mirror reflection the single_ray() function needs a small enhancement: it needs to check the amount of mirror reflectivity specified in the material (if any) and then, if the number of bounces we made is still less than the specified depth, calculate the reflected ray and use this new direction to cast a new ray and see if it hits something:
        ... identical code left out ...

        mirror_reflectivity = 0
        if len(mat_slots):
            mat = mat_slots[0].material
            diffuse_color = mat.diffuse_color * mat.diffuse_intensity
            specular_color = mat.specular_color * mat.specular_intensity
            hardness = mat.specular_hardness
            if mat.raytrace_mirror.use:
                mirror_reflectivity = mat.raytrace_mirror.reflect_factor

        ...

        if depth > 0 and mirror_reflectivity > 0:
            # mirror dir around the normal to get the reflected direction
            reflection_dir = (dir - 2 * normal * dir.dot(normal)).normalized()
            # offset the new origin slightly along the normal to prevent
            # self intersection and recurse with one less bounce available
            color += mirror_reflectivity * single_ray(
                       scene, loc + normal*eps, reflection_dir, lamps, depth-1)

        ...

Code availability

The code for this revision is available from GitHub.

Raytracing concepts and code, part 7, colored lights and refactored ray cast




Refactoring

To make it easier to calculate secondary rays like reflected or refracted rays, we move the actual ray casting to its own function, which also makes the central loop that generates every pixel on the screen much more readable:
    for y in range(height):
        yscreen = ((y-(height/2))/height) * aspectratio
        for x in range(width):
            xscreen = (x-(width/2))/width
            # align the look_at direction
            dir = Vector((xscreen, yscreen, -1))
            dir.rotate(rotation)
            dir = dir.normalized()
            buf[y,x,0:3] = single_ray(scene, origin, dir, lamps, depth)
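The article does not show single_ray() itself at this point, but based on this call and the code in the other parts its skeleton looks roughly like this (a sketch, not the verbatim code):

    def single_ray(scene, origin, dir, lamps, depth):
        # cast a ray and return its color contribution as a numpy array
        hit, loc, normal, index, ob, mat = scene.ray_cast(origin, dir)
        color = np.zeros(3)
        if hit:
            ... shading calculations as in the previous parts ...
        return color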

Colored lights

Within the single_ray() function we also add a small enhancement: we now calculate the amount of light available for reflections from the properties in the lamp data:
        ...
        for lamp in lamps:
            light = np.array(lamp.data.color * lamp.data.energy)
        ...

User interface

To allow the user to set the color and intensity ("energy") of the point lights we need the corresponding lamp properties panel (we only use color and energy):

This panel is added in the register() function:
    from bl_ui import (
            properties_render,
            properties_material,
            properties_data_lamp,
            )

    ... identical code left out ...

    properties_data_lamp.DATA_PT_lamp.COMPAT_ENGINES.add(CustomRenderEngine.bl_idname)

Code availability

This revision of the code is available from GitHub.

Raytracing concepts and code, part 6, specular reflection for lights


Until now the reflections have been a bit dull because our shader model only takes into account the diffuse reflectance, so let's take a look at how we can add specular reflectance to our shader.
The effect we want to achieve is illustrated in the two images below, where the green sphere on the right has some specular reflectance. Because of this it shows highlights for the two point lamps that are present in the scene.


The icosphere shows faceting because we have not yet done anything to implement smooth shading (i.e. interpolating the normals across a face), but we added some smooth subdivisions to show off the highlights a bit better.
In the image below we see that our shader model also takes the hardness of the specular reflections into account, with a high value of 400 giving much tighter highlights than the default hardness of 50.


In both images the red cube has no specular intensity and the blue one has a specular intensity of .15 and a hardness of 50.

Specular color, specular intensity and hardness are properties of a material so we have also exposed the relevant panel to our custom renderer so that the user can set these values.
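Exposing that panel presumably follows the same COMPAT_ENGINES pattern shown in part 3; a hypothetical example (the exact panel name is in the GitHub code):

    from bl_ui import properties_material
    properties_material.MATERIAL_PT_specular.COMPAT_ENGINES.add(
                                          CustomRenderEngine.bl_idname)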

Code

The code changes needed in the inner shading loop of our ray tracer are limited:
if hit:
    diffuse_color = Vector((0.8, 0.8, 0.8))
    specular_color = Vector((0.2, 0.2, 0.2))
    mat_slots = ob.material_slots
    hardness = 0
    if len(mat_slots):
        diffuse_color = mat_slots[0].material.diffuse_color \
                          * mat_slots[0].material.diffuse_intensity
        specular_color = mat_slots[0].material.specular_color \
                          * mat_slots[0].material.specular_intensity
        hardness = mat_slots[0].material.specular_hardness

    ... identical code left out ...

        if not lhit:
            illumination = light * normal.dot(light_dir)/light_dist
            color += np.array(diffuse_color) * illumination
            if hardness > 0:  # phong reflection model
                half = (light_dir - dir).normalized()
                reflection = light * half.dot(normal) ** hardness
                color += np.array(specular_color) * reflection
We set reasonable defaults (lines 2-5), override those defaults if we have a material on this object (lines 6-11) and then, if we are not in shadow, calculate the diffuse component of the lighting as before (lines 16-17) and finally add a specular component (lines 18-21).

For the specular component we use the Phong model (or actually the Blinn-Phong model). This means we look at the angle between the normal (shown in light blue in the image below) and the halfway vector (in dark blue): the smaller the angle, the tighter the highlight. The tightness is controlled by the hardness: we raise the cosine of the angle (which is the dot product we compute on line 20) to the power of this hardness. Note that the halfway vector is the normalized vector that points exactly in between the direction of the light and the camera as seen from the point being shaded. This is why we have a minus sign rather than a plus sign on line 19: dir in our code points from the camera towards the point being shaded.
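To make the difference between the two models explicit, here is a minimal, hypothetical sketch (the function names are mine; all vectors are assumed normalized, with light_dir and view_dir pointing away from the shaded point, so view_dir corresponds to -dir in our code):

    def phong_specular(normal, light_dir, view_dir, hardness):
        # classic Phong: angle between the mirror reflection of the
        # light direction and the direction towards the viewer
        reflection = 2 * normal * normal.dot(light_dir) - light_dir
        return max(reflection.dot(view_dir), 0.0) ** hardness

    def blinn_phong_specular(normal, light_dir, view_dir, hardness):
        # Blinn-Phong: angle between the normal and the halfway vector
        half = (light_dir + view_dir).normalized()
        return max(half.dot(normal), 0.0) ** hardness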

Code availability

The code is available on GitHub.

Raytracing concepts and code, part 5, diffuse color



Our rendered scene so far mainly consists of fifty shades of gray, and while that might be exciting enough for some, it would be better if we could spice it up with some color.

For this we will use the diffuse color of the first material slot of an object (if any). The code so far needs very little change to make this happen:
            # the default background is black for now
            color = np.zeros(3)
            if hit:
                # get the diffuse color of the object we hit
                diffuse_color = Vector((0.8, 0.8, 0.8))
                mat_slots = ob.material_slots
                if len(mat_slots):
                    diffuse_color = mat_slots[0].material.diffuse_color
                        
                color = np.zeros(3)
                light = np.ones(3) * intensity  # light color is white
                for lamp in lamps:
                    # for every lamp determine the direction and distance
                    light_vec = lamp.location - loc
                    light_dist = light_vec.length_squared
                    light_dir = light_vec.normalized()
                    
                    # cast a ray in the direction of the light starting
                    # at the original hit location
                    lhit, lloc, lnormal, lindex, lob, lmat = scene.ray_cast(loc+light_dir*eps, light_dir)
                    
                    # if we hit something we are in the shadow of the light
                    if not lhit:
                        # otherwise we add the distance attenuated intensity
                        # we calculate diffuse reflectance with a pure 
                        # lambertian model
                        # https://en.wikipedia.org/wiki/Lambertian_reflectance
                        color += diffuse_color * intensity * normal.dot(light_dir)/light_dist
            buf[y,x,0:3] = color
In lines 5-8 we check if there is at least one material slot on the object we hit and get its diffuse color. If there is no associated material we keep a default light grey color.
This diffuse color is what we use in line 28 if we are not in the shadow.
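One detail worth noting: the Lambert term normal.dot(light_dir) can go negative for points whose surface faces away from the light. The code above relies on such points being shadowed anyway, but a common safeguard (my addition, not part of the original code) is to clamp the term at zero:

    # hypothetical variant of line 28 with the Lambert term clamped,
    # so surfaces facing away from the light receive no negative light
    lambert = max(normal.dot(light_dir), 0.0)
    color += diffuse_color * intensity * lambert / light_dist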

Code availability

The code is available on GitHub.

Raytracing: concepts and code, part 4, the active camera



In a further bit of code cleanup we'd like to get rid of the hardcoded camera position and look-at direction by using the location of the active camera in the scene together with its rotation.
Fortunately very little code has to change to make this happen in our render method:

    # the location and orientation of the active camera
    origin = scene.camera.location
    rotation = scene.camera.rotation_euler
Using the rotation we can first create the camera ray as if the camera pointed in the default -Z direction and then simply rotate it with the camera's rotation:
    aspectratio = height/width
    # loop over all pixels once (no multisampling)
    for y in range(height):
        yscreen = ((y-(height/2))/height) * aspectratio
        for x in range(width):
            xscreen = (x-(width/2))/width
            # align the look_at direction
            dir = Vector((xscreen, yscreen, -1))
            dir.rotate(rotation)
Later we might even adapt this code to take into account the field of view, but for now at least we can position and aim the active camera in the scene any way we like.
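Such an adaptation could look like this hypothetical sketch, which uses the camera's angle property (Blender's field of view, in radians) and the fact that xscreen spans [-0.5, 0.5]:

    from math import tan

    # scale the -Z component so the ray spread matches the camera's FOV
    fov = scene.camera.data.angle
    dir = Vector((xscreen, yscreen, -0.5/tan(fov/2)))
    dir.rotate(rotation)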

Code availability

The code is available on GitHub.

Raytracing: concepts and code, part 3, a render engine



The code presented in the first article of this series was a bit of a hack: running from the text editor with lots of built-in assumptions is not the way to go, so let's refactor this into a proper render engine that will be available alongside Blender's built-in renderers:

A RenderEngine

All we really have to do is derive a class from Blender's RenderEngine class and register it. The class should provide a single method render() that takes a Scene parameter and fills the render result with RGBA pixel values.
class CustomRenderEngine(bpy.types.RenderEngine):
    bl_idname = "ray_tracer"
    bl_label = "Ray Tracing Concepts Renderer"
    bl_use_preview = True

    def render(self, scene):
        scale = scene.render.resolution_percentage / 100.0
        self.size_x = int(scene.render.resolution_x * scale)
        self.size_y = int(scene.render.resolution_y * scale)

        if self.is_preview:  # we might differentiate later
            pass             # for now ignore completely
        else:
            self.render_scene(scene)

    def render_scene(self, scene):
        buf = ray_trace(scene, self.size_x, self.size_y)
        buf.shape = -1,4

        # Here we write the pixel values to the RenderResult
        result = self.begin_result(0, 0, self.size_x, self.size_y)
        layer = result.layers[0].passes["Combined"]
        layer.rect = buf.tolist()
        self.end_result(result)

Option panels

For a custom render engine all panels in the render and material options will be hidden by default. This makes sense because not all render engines use the same options. We are interested in just the dimensions of the image we have to render and the diffuse color of any material, so we explicitly add our render engine to the list of COMPAT_ENGINES in each of those panels, along with the basic render buttons and the material slot list.
def register():
    bpy.utils.register_module(__name__)
    from bl_ui import (
            properties_render,
            properties_material,
            )
    properties_render.RENDER_PT_render.COMPAT_ENGINES.add(CustomRenderEngine.bl_idname)
    properties_render.RENDER_PT_dimensions.COMPAT_ENGINES.add(CustomRenderEngine.bl_idname)
    properties_material.MATERIAL_PT_context_material.COMPAT_ENGINES.add(CustomRenderEngine.bl_idname)
    properties_material.MATERIAL_PT_diffuse.COMPAT_ENGINES.add(CustomRenderEngine.bl_idname)

def unregister():
    bpy.utils.unregister_module(__name__)
    from bl_ui import (
            properties_render,
            properties_material,
            )
    properties_render.RENDER_PT_render.COMPAT_ENGINES.remove(CustomRenderEngine.bl_idname)
    properties_render.RENDER_PT_dimensions.COMPAT_ENGINES.remove(CustomRenderEngine.bl_idname)
    properties_material.MATERIAL_PT_context_material.COMPAT_ENGINES.remove(CustomRenderEngine.bl_idname)
    properties_material.MATERIAL_PT_diffuse.COMPAT_ENGINES.remove(CustomRenderEngine.bl_idname)

Reusing the ray tracing code

Our previous ray tracing code is adapted to use the height and width arguments instead of arbitrary constants:
def ray_trace(scene, width, height):     

    lamps = [ob for ob in scene.objects if ob.type == 'LAMP']

    intensity = 10  # intensity for all lamps
    eps = 1e-5      # small offset to prevent self intersection for secondary rays

    # create a buffer to store the calculated intensities
    buf = np.ones(width*height*4)
    buf.shape = height,width,4

    # the location of our virtual camera (we do NOT use any camera that might be present)
    origin = (8,0,0)

    aspectratio = height/width
    # loop over all pixels once (no multisampling)
    for y in range(height):
        yscreen = ((y-(height/2))/height) * aspectratio
        for x in range(width):
            xscreen = (x-(width/2))/width
            # get the direction. camera points in -x direction, FOV = approx asin(1/8) = 7 degrees
            dir = (-1, xscreen, yscreen)
            
            # cast a ray into the scene
            
            ... identical code omitted ...

    return buf

Code availability

The code is available on GitHub. Remember that any test scene should be visible from a virtual camera located at (8,0,0) pointing in the -x direction. The actual camera is ignored for now.