This is an article in a multipart series on the concepts of ray tracing. I am not sure where this will lead, but I am open to suggestions. We will be creating code that runs inside Blender. Blender has ray tracing renderers of course, but that is not the point: by reusing Python libraries and Blender's scene building capabilities we can concentrate on true ray tracing issues like shading models, lighting, etc.
I generally present stuff in a back-to-front manner: first an article with some (well commented) code and images of the results, then one or more articles discussing the concepts. The idea is that this encourages you to experiment and have a look at the code yourself before being introduced to theory. How well this works out we will see :-)
So far the series consists of several articles labeled ray tracing concepts.
In the first article I presented some code to implement a very basic raytracer. I chose Blender as a platform for several reasons: it has everything to create a scene with objects, so we don't have to mess with implementing geometry and intersection functions ourselves. It also provides us with the Image class that lets us store the image we create pixel by pixel when raytracing. Finally, it comes with powerful libraries of math and vector functions, saving us the drudgery of implementing these ourselves. All of this allows us to focus on the raytracing proper.

What is ray tracing?
When ray tracing we divide the image we are creating into pixels and shoot a ray from a point in front of this image (the camera location) into the scene. The first step is then to determine whether this ray hits any object in the scene. If not, we give our pixel the background color; if we do hit an object, we color this pixel with a color that depends on the material properties of the object and on any light that reaches that point.

So once we have calculated the location where our first ray (or camera ray, red in the illustration) hits the closest bit of geometry, the next step is to determine how much light reaches that point and combine this with the material information of the object.
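To give an idea of what that camera ray loop looks like, here is a minimal sketch (not the article's actual code). It assumes a recent Blender (2.91+) where scene.ray_cast() takes a dependency graph; older versions pass a view layer instead. The function name simple_render and the fixed camera position are made up for illustration, and the camera model is deliberately crude.

```python
import bpy
from mathutils import Vector

def simple_render(width=128, height=128):
    scene = bpy.context.scene
    depsgraph = bpy.context.evaluated_depsgraph_get()
    origin = Vector((0, -10, 0))        # camera location in front of the image plane
    background = (0.1, 0.1, 0.1, 1.0)   # RGBA background color

    # the Image class stores our result; pixels is a flat RGBA list
    img = bpy.data.images.new("render_result", width, height)
    pixels = list(background) * width * height

    for y in range(height):
        for x in range(width):
            # map the pixel to a point on an image plane one unit in
            # front of the camera and shoot a ray through it
            target = Vector((x / width - 0.5, -9, y / height - 0.5))
            direction = (target - origin).normalized()
            hit, loc, normal, index, ob, mat = scene.ray_cast(depsgraph, origin, direction)
            if hit:
                color = (1.0, 1.0, 1.0, 1.0)  # placeholder: shading goes here
            else:
                color = background
            offset = (y * width + x) * 4
            pixels[offset:offset + 4] = color

    img.pixels = pixels
```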
There are many ways to approach this, and in our first implementation we cast rays from this point in the direction of each lamp in the scene. These rays are often called shadow rays (green in the illustration) because if we hit something, it means an object blocks the light from this lamp and the point we are shading lies in its shadow. If we are not in shadow, we can calculate the light intensity due to this lamp by taking its original intensity and dividing it by the square of the distance between our point and the lamp.
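A sketch of such a shadow ray test with inverse square falloff might look like the following. Again this is an illustration under assumptions, not the article's code: the helper name light_at is made up, the object type and energy attributes follow the Blender 2.8+ API, and loc and normal are the hit location and surface normal from the camera ray step above.

```python
def light_at(scene, depsgraph, loc, normal, eps=1e-4):
    intensity = 0.0
    for ob in scene.objects:
        if ob.type != 'LIGHT':
            continue
        to_lamp = ob.location - loc
        distance = to_lamp.length
        direction = to_lamp.normalized()
        # offset the start point slightly along the normal to avoid
        # the ray hitting the surface it starts on ("shadow acne")
        hit, *_ = scene.ray_cast(depsgraph, loc + normal * eps,
                                 direction, distance=distance)
        if not hit:  # nothing blocks this lamp: add its contribution
            intensity += ob.data.energy / (distance * distance)
    return intensity
```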
Once we have calculated the intensity we can use it to determine the diffuse reflection. Diffuse reflection scatters light in all directions; the exact behavior is determined by the microstructure of the surface, and there are different models to approximate it.
To start we will use the so-called Lambert model. In a nutshell, this model assumes that incident light is scattered uniformly in all directions, which means that a surface facing a light head-on will look bright and this brightness will diminish as the surface normal diverges from the direction toward the light. The brightness we observe is therefore not dependent on the camera orientation but only on the local orientation of the geometry. Different models exist that take roughness and anisotropy into account, but that is something for later.
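To make the Lambert model concrete: the diffuse contribution is proportional to the cosine of the angle between the surface normal and the direction toward the lamp, which for normalized vectors is simply their dot product, clamped at zero for surfaces facing away from the light. A small sketch, where the function name lambert is made up and lamp_dir and intensity would come from the shadow ray step above:

```python
def lambert(normal, lamp_dir, intensity, diffuse_color):
    # normal and lamp_dir are assumed to be normalized mathutils Vectors;
    # brightness depends only on the surface orientation relative to the
    # lamp, not on the camera orientation
    cos_theta = max(normal.dot(lamp_dir), 0.0)  # clamp: no negative light
    return [c * intensity * cos_theta for c in diffuse_color]
```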