Blender: transparent particles vs geometry

Rendering scenes with lots of particles can be quite expensive. I render trees quite often, and these trees typically have their leaves implemented as a particle system with thousands (or even tens of thousands) of particles.

When creating particles that resemble leaves or twigs you basically have two options. The most common one is to take a photo-realistic texture with an alpha channel and use it on a simple uv-mapped rectangle. Another approach is to model the leaf or twig, with as little geometry as necessary, and use that as a particle. The uv-mapped texture can still be used for the color, but the alpha channel is ignored. The results can be almost indistinguishable:
 

(transparent texture on the left, real geometry on the right.)

Now the big question is: what is faster? Or to be more precise, what renders faster? (Because modeling a leaf or twig takes some time too, even if it is just a flat contour.)

Short answer: using real geometry can save you 20% or more in render time compared to using transparency!

The setup

We used the Cycles renderer in Blender 2.79 throughout, on a midrange system (4-core/8-thread i7 @ 4 GHz, 16 GB RAM, NVidia GTX 970).

We used trees with identical particle systems except for the actual particle objects. The particle objects were either a simple uv-mapped square or a very simple mesh. We made two variants: a single circle and a collection of three smaller circles. These two variants enable us to look into the effect of different transparent fractions: for the full circle (r = 1 on a 2 × 2 square) the transparent fraction is (4 − π r²)/4 ≈ 21%, while for the three small circles (r = ½ each) it is (4 − 3 π r²)/4 ≈ 41%.
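Those percentages are easy to verify (a quick sanity check, assuming a 2 × 2 square and the radii given above):

from math import pi

square = 2 * 2                    # area of the 2 x 2 particle square
full_circle = pi * 1.0**2         # one circle with r = 1
three_circles = 3 * pi * 0.5**2   # three circles with r = 1/2

print((square - full_circle) / square)    # 0.2146... -> about 21% transparent
print((square - three_circles) / square)  # 0.4110... -> about 41% transparent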

The transparent meshes are 1 quad each (2 triangles) while the real meshes are 12 and 36 tris respectively.
The materials were simple:
 
(Note that for the real-geometry variant we still use an image texture for the color, but we ignore the alpha information.)
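For reference, the alpha-mapped material could be set up from Python as follows (a minimal sketch for Cycles in Blender 2.79; the node names 'Diffuse BSDF' and 'Material Output' are the defaults created by use_nodes):

import bpy

mat = bpy.data.materials.new('LeafTransparent')
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

tex = nodes.new('ShaderNodeTexImage')  # the leaf texture with alpha channel
diffuse = nodes['Diffuse BSDF']
transparent = nodes.new('ShaderNodeBsdfTransparent')
mix = nodes.new('ShaderNodeMixShader')
out = nodes['Material Output']

links.new(tex.outputs['Color'], diffuse.inputs['Color'])
links.new(tex.outputs['Alpha'], mix.inputs['Fac'])     # alpha drives the mix
links.new(transparent.outputs['BSDF'], mix.inputs[1])  # alpha = 0 -> transparent
links.new(diffuse.outputs['BSDF'], mix.inputs[2])      # alpha = 1 -> diffuse
links.new(mix.outputs['Shader'], out.inputs['Surface'])

The geometry variant simply links the Color output to the Diffuse BSDF and leaves the Alpha output unconnected.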

The results

For all four variants we used identical trees with 2634, 6294 and 12252 particles and rendered them on the GPU with 32 samples and 8 transparent bounces:

That is a significant difference, almost 50%! The difference when rendering on a CPU is less but still pronounced:
Still up to 20% to gain!

The slope of all the lines is fairly gentle: doubling the number of particles certainly doesn't double the render time. This can be explained when you realize that more and more particles will be obscured by others, so no rays will be wasted on them. The surprising bit is that there is (almost) no divergence between the large and small leaves, even though the small ones will let through double the number of rays. Maybe in a less dense setup this divergence would show up, but here it seems insignificant.

Now the number of transparent bounces is a significant factor here. We left it at the default of 8. For transparent particles we really do need these bounces, otherwise leaves that are obscured by other ones will not be visible through the transparent parts when a ray traverses deep into the tree:
 
(1 bounce on the left, 16 transparent bounces on the right. Note the lack of see-through spots and general darkening for the one with very few transparent bounces)
Now when we render with real geometry the number of transparent bounces makes no difference:
For transparent particles it matters a lot, and we need at least 8 and maybe more bounces if we render dense particle collections like tree crowns. The graph above is for the GPU; the graph for the CPU is similar in the sense that at 4 transparent bounces or more the mesh particles are significantly faster, but at a lower number the picture is a bit muddled:
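If you want to experiment with this yourself, the setting can also be changed from a script (a minimal sketch; these properties are for Cycles in Blender 2.79):

import bpy

scene = bpy.context.scene
scene.cycles.transparent_max_bounces = 8  # the default we left it at
scene.cycles.transparent_min_bounces = 8  # no early termination below this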


Conclusion

Whether rendering on CPU or GPU, it looks like creating simple geometry to model the outline of particles is preferable to using alpha-mapped textures. Your mileage may vary of course (my Blender 2.79 on an Intel i7-4790K @ 4.00GHz / NVidia GTX 970 combo might behave differently from your setup), but I expect the general trend to be the same.

Limitation of true displacement in Blender

I am pretty sure this counts as a limitation and not as a bug of true displacement in Blender, but if normals are not smoothed in some way the displacements may cause gaps in the mesh:


The image shows two identical meshes (created with Shift D) with the same material applied. This material uses true displacement and both objects have a simple adaptive subdivision modifier.
The one on the left has all faces flat shaded while the one on the right has all faces smooth.
The gaps are obvious.

Apparently this is caused by having normals across the edges of adjacent faces that point in different directions, as displacement is along the normals. With smooth faces the calculated normals blend into each other, which causes the displacement near an edge to go in the same direction on both sides.

The same smoothing effect can be achieved by changing the adaptive subdivision modifier from simple to Catmull-Clark (Catmull-Clark effectively interpolates the normals before subdividing).
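Both fixes are easy to apply from a script as well (a minimal sketch; run it with the affected object active):

import bpy

obj = bpy.context.object

# option 1: smooth the normals so displacement directions blend across edges
for poly in obj.data.polygons:
    poly.use_smooth = True

# option 2: let Catmull-Clark subdivision interpolate the normals instead
mod = obj.modifiers.new('Subdivision', 'SUBSURF')
mod.subdivision_type = 'CATMULL_CLARK'  # rather than 'SIMPLE'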

Again, I don't think this counts as a bug, but it is good to know, if only to prevent some head scratching.

Nodeset: better heightmap adjustment

A new version of Nodeset is available that adds a subtract node if you use the option to link a heightmap to the displacement output.
The reason is that most often the values in a heightmap are between 0 and 1, which will result in a bloated mesh unless you subtract 0.5 first.
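What the add-on inserts boils down to the following node setup (a minimal sketch for Blender 2.79; the node names 'Image Texture' and 'Material Output' are assumptions, your material may use different labels):

import bpy

mat = bpy.context.object.active_material
nodes, links = mat.node_tree.nodes, mat.node_tree.links

height = nodes['Image Texture']    # the heightmap texture node
out = nodes['Material Output']

sub = nodes.new('ShaderNodeMath')
sub.operation = 'SUBTRACT'
sub.inputs[1].default_value = 0.5  # recenter the 0..1 range around zero

links.new(height.outputs['Color'], sub.inputs[0])
links.new(sub.outputs['Value'], out.inputs['Displacement'])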

Availability

The current version (201801031529) is available for download (right click and save as ...) from my GitHub repository.

Previous nodeset articles

I wrote several articles on this tiny add-on that you might want to read. They are (newest first):

Free Blender model of a red maple

As something different from all the coding, I'd like to present you with a freebie model of a small red maple (Acer rubrum, wikipedia).

The tree was created with my Space Tree Pro add-on (available on Blender Market). It can be used as is or (if you have the Space Tree Pro add-on) tweaked to your liking.

The tree

The tree is a generated mesh object called Tree and has a particle emitter (called LeafEmitter) parented to it, so if you move it make sure you move the tree mesh and not just its particles. The materials used for the leaves can be tweaked to give an even redder appearance if you like, but I chose to tone it down a bit towards a slightly more late-autumn orange/yellow.

The shape of the tree crown was modeled (roughly) on the 'Autumn Glory Maple' (red maple, Acer rubrum; see e.g. https://goo.gl/images/DUAFw4) and the leaves were taken from a photographic reference (see below), most likely of a North American red maple. The bark material is a simple procedural material.

If you render the scene, be aware that the bark material uses the experimental micro displacement settings and the render device is set to GPU. So depending on your system you might not see all the surface detail in the trunk that you see in the sample rendering. Micro displacement is also heavy on RAM, so you might need to use your CPU to render anyway.
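The relevant settings can be checked or changed from a script if needed (a minimal sketch for Blender 2.79):

import bpy

scene = bpy.context.scene
scene.cycles.feature_set = 'EXPERIMENTAL'  # needed for micro displacement
scene.cycles.device = 'GPU'                # switch to 'CPU' if you run out of memory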

Availability

The .blend file is available for download from GitHub (right click and save as ...). It is fairly large (76MB) because of the packed image files and because the tree mesh and the leaf particle system add up to about 240k tris. It is distributed under a CC BY-SA 4.0 license.

Additional credits

Even though these individuals provided material under a CC0 license, I really appreciate their efforts so I would like to point you to their respective web pages.

The environment HDR used in the scene is from HDRI Haven (https://hdrihaven.com/) by Greg Zaal, specifically the low-res version of the river walk. It was used without changes in this scene. Greg's HDRIs are free (CC0), but you can support his work on Patreon.

The original leaf images are from Dustytoes on Pixabay. They were also provided under a CC0 license and were already isolated from the background. You might want to give her a thumbs up. I created 7 individual textures from the original and converted them to PBR texture maps using Substance Designer's Bitmap to Material node.

Linefit add-on

As a companion add-on to PlaneFit I put together a small add-on that can add a single edge (two connected vertices) that best fits a collection of selected vertices.
After installation it will be available in the 3D View menu, Add → Fit line to selected, and the result of applying it to a vaguely cylindrical point cloud is shown below:

Availability

As usual the add-on is available from my GitHub repository (right-click on the link and select Save As ...).

Extra information

There are many ways to fit a line to a collection of points, but here we use the same eigen decomposition we used for fitting a plane. Instead of selecting the eigenvector with the smallest eigenvalue as the normal of a plane, we now select the eigenvector with the largest eigenvalue. This vector accounts for most of the variance in the positions of the vertices, and is therefore the best fit. (There are other metrics we could use, and this explanation may be a bit too much hand-waving for real mathematicians, but it works :-)
The relevant code is shown below:
import numpy as np

def lineFit(points):
    ctr = points.mean(axis=0)                          # centroid of the point cloud
    x = points - ctr                                   # center the points on the origin
    M = np.cov(x.T)                                    # 3x3 covariance matrix
    eigenvalues, eigenvectors = np.linalg.eig(M)
    direction = eigenvectors[:, eigenvalues.argmax()]  # direction of largest variance
    return ctr, direction
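A quick way to convince yourself this works (a small test, not part of the add-on):

# points scattered mainly along the x-axis
pts = np.random.randn(1000, 3) * np.array([10.0, 0.1, 0.1])
ctr, direction = lineFit(pts)
print(ctr)        # close to the origin
print(direction)  # close to (1, 0, 0), possibly negated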

PlaneFit: Blender add-on, tiny improvement

The PlaneFit add-on wasn't very robust yet, so I added a check for fewer than 3 selected vertices: a plane will now only be fitted when it is sensible to do so, and a proper warning is given otherwise. The menu entry (in the 3D View, Add → Mesh, in edit mode) also has a more readable label.
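The added guard boils down to something like this (a sketch, the actual code in the add-on may differ slightly):

# inside execute(), after gathering the selected status with foreach_get()
if np.count_nonzero(selected) < 3:
    self.report({'WARNING'}, 'Need at least 3 selected vertices to fit a plane')
    bpy.ops.object.editmode_toggle()  # switch back to edit mode
    return {'CANCELLED'}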

Availability

I have updated the add-on in my GitHub repository (right-click on the link and select Save As ... to store the .py file somewhere where you can find it again).

PlaneFit: Blender add-on to fit a plane through a set of vertices

After all the updates to my BlenderMarket add-ons, it is time to spend some time on other add-ons again.

Fitting a plane

Fitting a plane through a collection of selected vertices might be useful in all sorts of scenarios. Currently you can snap a plane to a face with Blender's snap tools, but if we want to fit a plane to a large set of vertices we need to resort to basic linear algebra to do this in a fast and robust manner. And because linear algebra is not everybody's favorite subject, I created a small add-on.

After installation the add-on is available from the Add menu if you have a mesh in edit mode. Clicking on it will create a square plane that is fitted through all selected vertices. A size option lets you scale this new plane interactively. The result might look something like this:
Note that currently we do not check whether the minimum of 3 vertices is selected; you get an error if you try.

Availability

As usual the add-on is available from my GitHub repository (right-click on the link and select Save As ... to store the .py file somewhere where you can find it again).

Source code

Both the linear algebra and the code to add a plane to an existing mesh aren't completely trivial, so let me highlight the interesting bits:

Fitting a plane

The fitting code is quite short and straightforward:
import numpy as np

def planeFit(points):
    ctr = points.mean(axis=0)                       # centroid = midpoint of the plane
    x = points - ctr                                # center the points on the origin
    M = np.cov(x.T)                                 # 3x3 covariance matrix
    eigenvalues, eigenvectors = np.linalg.eig(M)
    normal = eigenvectors[:, eigenvalues.argmin()]  # direction of least variance
    return ctr, normal
Any book on linear algebra can give you a better explanation, but with a fair bit of hand-waving it can be explained as follows: points is a list of 3d-vectors. ctr will be the midpoint of the plane, which is the mean of each of the x, y and z-components.

Next we calculate the eigenvectors of the covariance matrix of the point cloud with np.linalg.eig(). It is a 3d cloud, so we get 3 eigenvectors and 3 corresponding eigenvalues. Each combination of an eigenvalue and an eigenvector can be interpreted as a direction vector and a measure of how well it explains the spread of the points. This means that if the points lie roughly in a plane, the two eigenvectors with the largest eigenvalues lie in the best-fit plane, while the remaining one will be the normal (all eigenvectors are mutually perpendicular). And indeed this smallest one is the one we select with argmin().
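A quick sanity check (a small test, not part of the add-on):

# noisy points roughly in the z = 0 plane
pts = np.random.randn(1000, 3) * np.array([1.0, 1.0, 0.01])
ctr, normal = planeFit(pts)
print(normal)  # close to (0, 0, 1), possibly negated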

Adding a plane to an existing mesh

Now adding a plane consisting of four vertices in a square would be quite simple, yes? Ehh, no: the Mesh object has a from_pydata() function, but it only works correctly when adding to an initially empty mesh. So the second half of the execute() method shown below essentially replicates what that function does: create 4 vertices and 4 loops, compose a polygon out of them and add it to the mesh as well. We could have worked with a BMesh representation, but then we would not have an efficient way to retrieve all vertex coordinates, something we really need when working with thousands of vertices.

The combination of np.empty() and foreach_get() at the start of the method is a very efficient way to get the coordinates of the selected vertices into a Numpy array: we create Numpy arrays to hold all the vertex coordinates and the selected status, then fill them with the fast built-in function foreach_get(). verts[selected] then leaves us with an array of just the coordinates of the selected vertices, which we pass to the planeFit() function we saw earlier.

We then create two vectors perpendicular to our normal with a small helper function orthopoints() and use them to construct four vertex coordinates.
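The definition of orthopoints() is not shown in the listing below; a possible implementation could look like this (a hypothetical sketch, the add-on's actual helper may differ):

def orthopoints(normal):
    # pick a helper vector that is not (nearly) parallel to the normal
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, normal)) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    # construct two perpendicular unit vectors that span the plane
    dx = np.cross(normal, helper)
    dx /= np.linalg.norm(dx)
    dy = np.cross(normal, dx)
    return dx, dy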

def execute(self, context):
    bpy.ops.object.editmode_toggle()  # to object mode, so mesh data is up to date
    me = context.active_object.data
    count = len(me.vertices)
    if count > 0:  # degenerate mesh, but better safe than sorry
        shape = (count, 3)
        verts = np.empty(count*3, dtype=np.float32)
        selected = np.empty(count, dtype=np.bool)
        me.vertices.foreach_get('co', verts)         # fast bulk retrieval of coordinates
        me.vertices.foreach_get('select', selected)  # and of the selected status
        verts.shape = shape
        ctr, normal = planeFit(verts[selected])
        dx, dy = orthopoints(normal)  # two unit vectors perpendicular to the normal
        # can't use mesh.from_pydata here because that won't let us ADD to a mesh
        me.vertices.add(4)
        me.vertices[count  ].co = ctr+dx*self.size
        me.vertices[count+1].co = ctr+dy*self.size
        me.vertices[count+2].co = ctr-dx*self.size
        me.vertices[count+3].co = ctr-dy*self.size
        lcount = len(me.loops)
        me.loops.add(4)
        pcount = len(me.polygons)
        me.polygons.add(1)
        me.polygons[pcount].loop_total = 4
        me.polygons[pcount].loop_start = lcount
        me.polygons[pcount].vertices = [count, count+1, count+2, count+3]
        me.update(calc_edges=True)

    bpy.ops.object.editmode_toggle()  # back to edit mode
    return {'FINISHED'}