Ortho, a mesh manipulation tool: February 2018 update

Just a couple of weeks ago I introduced a new add-on to manipulate mesh parts with the help of a reference plane: Ortho.

Based on user feedback I have added some new features to this February release of the add-on. Besides some tweaks and bug fixes, the most important new feature is the ability to work with more than one reference plane. You can create as many reference planes as you need and a new panel lets you select any of those planes. You can also manipulate the list of reference planes itself, for example by giving them meaningful names.

A short demo of this feature is available in this video:

Availability

Ortho is available from my Blender Market store.

New add-on: Ortho, a mesh manipulation tool

I am happy to announce a new add-on that is designed to make your life as an arch-viz or hard surface modeler a little bit easier.

Ortho, a mesh manipulation tool

Ortho offers a collection of tools that allow you to move, rotate, scale, snap and align selections of a mesh relative to a user defined reference plane. Working relative to a reference plane can greatly simplify the positioning of mesh parts and can help clean up distorted meshes. Blender already offers several tools to transform and snap mesh parts, but they work in the context of predefined coordinates, which makes it difficult to position or align mesh parts in meshes that are transformed with respect to their local coordinates, or in situations where orthogonal coordinates are not sufficient, for example when positioning a window inside a slanted roof.

Ortho offers a simple and interactive way to define a plane that fits a selection of vertices in a mesh, together with a set of tools that operate with respect to this reference plane. You can for example align and snap a selection to the reference plane, or move this selection along its normal or its surface. Scaling is also an option, offering a way to rectify slightly distorted meshes even in situations where the distorted plane is not aligned with any axis and scaling along individual normals with Alt-S gives strange results.
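The idea of fitting a reference plane to a selection of vertices can be sketched with a bit of NumPy: an eigen decomposition of the covariance matrix of the vertex positions (the same approach I use in my PlaneFit add-on). This is only an illustrative sketch; the function name `plane_fit` is mine and not Ortho's actual API:

```python
import numpy as np

def plane_fit(points):
    """Fit a plane to an (n, 3) array of vertex positions.

    Returns the centroid and the unit normal of the least
    squares best fitting plane.
    """
    ctr = points.mean(axis=0)
    # covariance matrix of the centered positions
    M = np.cov((points - ctr).T)
    # eigh handles symmetric matrices like M; the eigenvector
    # belonging to the smallest eigenvalue is the direction of
    # least variance, i.e. the plane normal
    eigenvalues, eigenvectors = np.linalg.eigh(M)
    normal = eigenvectors[:, eigenvalues.argmin()]
    return ctr, normal
```

Given a selection of roughly coplanar vertices this returns a plane through their centroid; all of Ortho's transforms can then be expressed relative to that centroid and normal.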

Availability

Ortho is available from my Blender Market store.

Blender: transparent particles vs geometry

Rendering scenes with lots of particles can be quite expensive. I render trees quite often and these trees typically have their leaves implemented as a particle system with thousands (or even tens of thousands) of particles.

When creating particles that resemble leaves or twigs you basically have two options. The most common one is to take a photo-realistic texture with an alpha channel and use it on a simple uv-mapped rectangle. The other approach is to model the leaf or twig, with as little geometry as necessary, and use that as a particle. The uv-mapped texture can still be used for the color, but the alpha channel is ignored. The results can be almost indistinguishable:
 

(transparent texture on the left, real geometry on the right.)

Now the big question is: what is faster? Or to be more precise, what renders faster? (Because modeling a leaf or twig takes some time too, even if it is just a flat contour.)

Short answer: using real geometry can save you about 20% or more render time compared to using transparency!

The setup

We used the Cycles renderer in Blender 2.79 throughout, on a midrange system (4-core/8-thread i7 @ 4GHz, 16GB RAM, NVidia GTX 970).

We used trees with identical particle systems except for the actual particle objects. The particle objects were either a simple uv-mapped square or a very simple mesh. We made two variants: a single circle and a collection of three smaller circles. These two variants enable us to look into the effect of different transparent fractions: with a unit circle inscribed in a 2×2 square, the single circle variant is (4 − π)/4 ≅ 21% transparent, while the variant with three half-radius circles is (4 − 3π/4)/4 ≅ 41% transparent.
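Those transparent fractions are easy to verify with a few lines of arithmetic, assuming a 2×2 bounding square, a unit radius for the single circle and half that for each of the three small circles:

```python
from math import pi

square = 2 * 2                    # 2x2 bounding square, area 4

full_circle = pi * 1**2           # one circle of radius 1
print(1 - full_circle / square)   # ~0.215, about 21% transparent

small_circles = 3 * pi * 0.5**2   # three circles of radius 1/2
print(1 - small_circles / square) # ~0.411, about 41% transparent
```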

The transparent meshes are 1 quad each (2 triangles) while the real meshes are 12 and 36 tris respectively.
The materials were simple:
 
(Note that the geometry variant still uses an image texture for the color, but we ignore the alpha information.)

The results

For all four variants we used identical trees with 2634, 6294 and 12252 particles and rendered them on the GPU with 32 samples and 8 transparent bounces:

That is a significant difference, almost 50%! The difference when rendering on a CPU is less but still pronounced:
Still up to 20% to gain!

The slope of all the lines is fairly gentle: doubling the number of particles certainly doesn't double the render time. This can be explained when you realize that more and more particles will be obscured by others, so no rays are wasted on them. The surprising bit is that there is (almost) no divergence between the large and small leaves, even though the small ones let through about twice as many rays. Maybe in a less dense setup this would happen, but here it seems insignificant.

Now the number of transparent bounces is a significant factor here. We left it at the default of 8. For renders with transparency we really do need these bounces, otherwise leaves that are obscured by others will not be visible through the transparent parts when a ray traverses deep enough into the tree:
 
(1 bounce on the left, 16 transparent bounces on the right. Note the lack of see-through spots and general darkening for the one with very few transparent bounces)
Now when we render with real geometry the number of transparent bounces makes no difference:
For transparency it matters a lot, and we need at least 8 and maybe more bounces if we render dense particle collections like tree crowns. The image above is for the GPU; the graph for the CPU is similar in the sense that at 4 transparent bounces or more the mesh particles are significantly faster, but at lower numbers the picture is a bit muddled:


Conclusion

Whether rendering on CPU or GPU it looks like creating simple geometry to model the outline of particles is preferable to using alpha mapped textures. Your mileage may vary of course (my Blender 2.79 on an Intel i7-4790K @ 4.00GHz / NVidia GTX 970 combo might behave differently than your setup), but I expect the general trend to be the same.

Limitation of true displacement in Blender

I am pretty sure this counts as a limitation and not as a bug of true displacement in Blender, but if normals are not smoothed in some way the displacements may cause gaps in the mesh:


The image shows two identical meshes (created with Shift-D) with the same material applied. This material uses true displacement and both objects have an adaptive subdivision modifier set to Simple.
The one on the left has all faces flat shaded while the one on the right has all faces smooth.
The gaps are obvious.

Apparently this is caused by the normals of adjacent faces pointing in different directions across a shared edge, as displacement is along the normals. With smooth shading the calculated normals blend into each other, which causes displacement near an edge to go in the same direction.
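The divergence is easy to demonstrate with a bit of NumPy. The two flat face normals below are made up for illustration; the point is that the two displaced copies of a shared edge vertex end up in different places under flat shading, and in the same place under smooth shading:

```python
import numpy as np

# a vertex on the edge shared by two faces that meet at an angle
v = np.array([0.0, 0.0, 0.0])
n1 = np.array([0.0, -0.5, 1.0]); n1 /= np.linalg.norm(n1)
n2 = np.array([0.0,  0.5, 1.0]); n2 /= np.linalg.norm(n2)

d = 0.1  # displacement distance

# flat shading: each face displaces its own copy of the vertex
# along its own normal, so the copies end up apart -> a gap
gap = np.linalg.norm((v + d * n1) - (v + d * n2))
print(gap > 0)  # True

# smooth shading: both faces use the same averaged normal,
# so both copies land on the same point and the mesh stays closed
ns = n1 + n2; ns /= np.linalg.norm(ns)
print(np.linalg.norm((v + d * ns) - (v + d * ns)))  # 0.0
```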

The same smoothing effect can be achieved by changing the adaptive subdivision modifier from Simple to Catmull-Clark (Catmull-Clark effectively interpolates the normals before subdividing).

Again, I don't think this counts as a bug, but it is good to know if just to prevent some head scratching.

Nodeset: better heightmap adjustment

A new version of Nodeset is available that adds a subtract node if you use the option to link a heightmap to the displacement output.
The reason is that the values in a heightmap most often lie between 0 and 1, which will result in a bloated mesh unless you subtract 0.5 first.
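The effect of that subtraction is easy to see on a few sample values (the heights array below is made up for illustration):

```python
import numpy as np

# hypothetical heightmap samples in the usual 0..1 range
heights = np.array([0.0, 0.25, 0.5, 0.75, 1.0])

# used directly every value is >= 0, so the whole surface is
# pushed outward: a bloated mesh
print(heights.mean())       # 0.5

# subtracting 0.5 first centers the displacement on the surface,
# so bumps and dents cancel out on average
centered = heights - 0.5
print(centered.mean())      # 0.0
```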

Availability

The current version (201801031529) is available for download (right click and save as ...) from my GitHub repository.

Previous nodeset articles

I wrote several articles on this tiny add-on that you might want to read. They are (newest first):

free Blender model of a red maple

As something different from all the coding I'd like to present you a freebie model of a small red maple (Acer rubrum, wikipedia)

The tree was created with my Space Tree Pro add-on (available on Blender Market). It can be used as is or (if you have the Space Tree Pro add-on) tweaked to your liking.

The tree

The tree is a generated mesh object called Tree with a particle emitter parented to it. So if you move it, make sure you move the tree mesh and not just its particles (called LeafEmitter). The materials used for the leaves can be tweaked to give an even redder appearance if you like, but I chose to tone it down a bit towards a slightly more late-autumn orange/yellow.

The shape of the tree crown was modeled (roughly) on the 'Autumn Glory Maple' (Red maple, Acer rubrum) (see e.g. https://goo.gl/images/DUAFw4 ) and the leaves were taken from a photographic reference (see below), most likely from a North American red maple. The bark material is a simple procedural material.

If you render the scene, be aware that the bark material uses the experimental micro displacement settings and the scene is set to render on the GPU. So depending on your system you might not see all the surface detail in the trunk that you see in the sample rendering, and because micro displacement is heavy on RAM you might need to use your CPU to render anyway.

Availability

The .blend file is available for download from GitHub (right click and save as ...). It is fairly large (76MB) because of the packed image files and because the tree mesh and the leaf particle system add up to about 240k tris. It is distributed under a CC BY-SA 4.0 license.

Additional credits

Even though these individuals provided material under a CC0 license, I really appreciate their efforts so I would like to point you to their respective web pages.

The environment HDR used in the scene is from HDRI Haven (https://hdrihaven.com/) by Greg Zaal, specifically the low res version of the river walk. It was used without changes in this scene. Greg's HDRIs are free (CC0) but you can support his work on Patreon.

The original leaf images are from Dustytoes on Pixabay. They were also provided under a CC0 license and were already isolated from the background. You might want to give her a thumbs up. I created 7 individual textures from the original and converted them to PBR texture maps using Substance Designer's Bitmap to Material node.

Linefit add-on

As a companion add-on to PlaneFit I put together a small add-on that can add a single edge (two connected vertices) that best fits a collection of selected vertices.
After installation it will be available in the 3D View menu Add → Fit line to selected; the result of applying it to a vaguely cylindrical point cloud is shown below:

Availability

As usual the add-on is available from my GitHub repository (right-click on the link and Save As ...)

Extra information

There are many ways to fit a line to a collection of points, but here we use the same eigen decomposition we use for fitting a plane. Instead of selecting the eigenvector with the smallest eigenvalue as the normal of a plane, we now select the eigenvector with the largest eigenvalue. This vector accounts for most of the variance in the positions of the vertices, and is therefore the best fit. (There are other metrics we could use, and this explanation may be a bit too much hand-waving for real mathematicians, but it works :-)
The relevant code is shown below:
import numpy as np

def lineFit(points):
    # center the point cloud on its mean
    ctr = points.mean(axis=0)
    x = points - ctr
    # covariance matrix of the coordinates
    M = np.cov(x.T)
    # eigh is meant for symmetric matrices like M and is
    # guaranteed to return real eigenvalues
    eigenvalues, eigenvectors = np.linalg.eigh(M)
    # the eigenvector with the largest eigenvalue points in the
    # direction of largest variance, i.e. along the best fitting line
    direction = eigenvectors[:, eigenvalues.argmax()]
    return ctr, direction
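A quick sanity check with a synthetic point cloud (the sample points are made up, and lineFit is redefined so the snippet stands on its own): points scattered along the x-axis should yield a direction close to ±x.

```python
import numpy as np

def lineFit(points):  # as defined above
    ctr = points.mean(axis=0)
    M = np.cov((points - ctr).T)
    eigenvalues, eigenvectors = np.linalg.eigh(M)
    return ctr, eigenvectors[:, eigenvalues.argmax()]

# a synthetic, vaguely line shaped cloud along the x-axis
rng = np.random.default_rng(42)
t = rng.uniform(-5, 5, size=100)
points = np.column_stack((t,
                          rng.normal(0, 0.1, size=100),
                          rng.normal(0, 0.1, size=100)))

ctr, direction = lineFit(points)
print(np.abs(direction))  # dominated by the x component
```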