Ray tracing: concepts and code, part 1: first code steps


This is an article in a multipart series on the concepts of ray tracing. I am not sure where this will lead, but I am open to suggestions. We will be creating code that runs inside Blender. Blender has ray tracing renderers of course, but that is not the point: by reusing Blender's Python API and scene building capabilities we can concentrate on true ray tracing issues like shading models, lighting, etc.
I generally present stuff in a back-to-front manner: first an article with some (well commented) code and images of the results, then one or more articles discussing the concepts. The idea is that this encourages you to experiment and have a look at the code yourself before being introduced to the theory. How well this works out, we will see :-)

So far the series consists of several articles, all labeled ray tracing concepts.

In this article I will show how to implement a minimal ray tracer inside Blender. Of course Blender has its own very capable renderers, like Cycles, but the point is to illustrate some ray tracing concepts without getting bogged down in tons of code that has nothing to do with ray tracing per se.

By using Blender's built-in data structures like a Scene and an Image and built-in methods like ray_cast(), we don't have to implement difficult algorithms and data structures like ray/mesh intersection tests and BVH trees, and can concentrate on things like primary and secondary rays, shadows, shading models, etc.
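To give a feel for what that buys us, here is the ray casting call in isolation; a minimal sketch using the Scene.ray_cast() signature of the Blender 2.7x API, the same call the full program below is built around:

import bpy

scene = bpy.context.scene

# fire a single ray from (8,0,0) in the -x direction.
# ray_cast() returns everything a ray tracer needs to know about the
# nearest intersection: whether anything was hit at all, the hit
# location, the surface normal there, the face index, the object
# and its world matrix
hit, loc, normal, index, ob, mat = scene.ray_cast((8, 0, 0), (-1, 0, 0))
if hit:
    print('hit', ob.name, 'at', loc)
else:
    print('nothing hit')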

The scene

The scene we will be working with at first looks like this:

Nothing more than a plane, a couple of cubes and an icosphere, plus two point lights to illuminate the scene (not visible in this screenshot).
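If you would rather build a comparable scene from script than by hand, the sketch below does roughly that. It is only an approximation: the object locations are my own guesses, not the ones in the downloadable .blend file mentioned at the end.

import bpy

# a ground plane, two cubes and an icosphere...
bpy.ops.mesh.primitive_plane_add(location=(0, 0, 0))
bpy.context.object.scale = (5, 5, 1)
bpy.ops.mesh.primitive_cube_add(location=(0, -2, 1))
bpy.ops.mesh.primitive_cube_add(location=(0, 2, 1))
bpy.ops.mesh.primitive_ico_sphere_add(location=(2, 0, 1))

# ... plus two point lamps to illuminate it
bpy.ops.object.lamp_add(type='POINT', location=(4, -3, 5))
bpy.ops.object.lamp_add(type='POINT', location=(4, 3, 5))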

In rendered mode the scene looks like this:

Note that we didn't assign any materials, so everything is shaded with a default shader.

Our result


Now the result of the minimal raytracer shown in the next section looks like this:
There are clear differences of course, and we'll work on them in future installments, but the general idea is similar: light areas close to the lights and visible shadows. How many lines of code do you think are needed for this?

The code


The code to implement this is surprisingly compact (and more than half of it is comments):
import bpy
import numpy as np

scene = bpy.context.scene

lamps = [ob for ob in scene.objects if ob.type == 'LAMP']

intensity = 10  # intensity for all lamps
eps = 1e-5      # small offset to prevent self intersection for secondary rays

# this image must be created already and be 1024x1024 RGBA
output = bpy.data.images['Test']
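# (if no image called 'Test' exists yet, one can be created beforehand
#  with bpy.data.images.new('Test', width=1024, height=1024, alpha=True),
#  a one-time setup step that is not part of the ray tracer itself)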

# create a buffer to store the calculated intensities
buf = np.ones(1024*1024*4)
buf.shape = 1024,1024,4

# the location of our virtual camera
# (we do NOT use any camera that might be present)
origin = (8,0,0)

# loop over all pixels once (no multisampling)
for y in range(1024):
    for x in range(1024):
        # get the direction:
        # the camera points in the -x direction and the image plane
        # spans -0.5 .. 0.5 at distance 1, i.e. a FOV of about 53 degrees
        dir = (-1, (x-512)/1024, (y-512)/1024)
        
        # cast a ray into the scene
        hit, loc, normal, index, ob, mat = scene.ray_cast(origin, dir)
        
        # the default background is black for now
        color = np.zeros(3)
        if hit:
            light = np.ones(3) * intensity  # white light, the same for all lamps
            for lamp in lamps:
                # for every lamp determine the direction and the
                # squared distance (used for inverse square falloff)
                light_vec = lamp.location - loc
                light_dist = light_vec.length_squared
                light_dir = light_vec.normalized()
                
                # cast a ray in the direction of the light starting
                # at the original hit location
                lhit, lloc, lnormal, lindex, lob, lmat = scene.ray_cast(loc+light_dir*eps, light_dir)
                
                # if we hit something we are in the shadow of this lamp
                if not lhit:
                    # otherwise we add the distance attenuated intensity;
                    # diffuse reflectance follows a pure Lambertian model
                    # (clamping the cosine term at zero so surfaces facing
                    # away from a lamp receive no light)
                    # https://en.wikipedia.org/wiki/Lambertian_reflectance
                    color += light * max(normal.dot(light_dir), 0.0) / light_dist
        buf[y,x,0:3] = color

# pixels is a flat array RGBARGBARGBA...
# assigning to a single item inside pixels is prohibitively slow, but
# assigning something that implements Python's buffer protocol is
# fast, so assigning a (flattened) 1024x1024x4 numpy array is fast
output.pixels = buf.flatten()
I have added quite a few comments in the code itself and will elaborate on it in future articles. For now the code is given as is. You can experiment with it if you like, but you do not have to type it in nor create a scene that meets all the assumptions in the code: you can download a .blend file that contains the code (see below).
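If you are curious how much the bulk assignment at the end matters, a quick (and rough) way to check is something like the sketch below, again assuming a 1024x1024 RGBA image called 'Test':

import bpy
import numpy as np
from time import time

img = bpy.data.images['Test']
buf = np.ones(1024*1024*4, dtype=np.float32)

start = time()
img.pixels = buf  # one bulk assignment
print('bulk assignment took', time() - start, 'seconds')

# the per-component alternative would look like the loop below;
# it makes millions of individual accesses, so expect it to take
# orders of magnitude longer (best not run on a full size image):
#
# for i in range(1024*1024*4):
#     img.pixels[i] = 1.0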

Code availability

The files are available from my GitHub repository:

The code itself

A .blend file with a full scene and the code embedded as a text file (click Run Script inside the text editor to run it)

