Stonework - A Blender add-on to create stone walls


blenderaddons-ng



There are other options out there (including my own, probably broken, add-on) to create stone walls, but I wanted to try out how easy it would be to create a new add-on that fits the development and testing framework I am creating. So I created this mesh generator that creates a wall of bricks or stones, with several options to randomize and tweak it.





It creates a clean mesh, although it currently does not add UV coordinates.

It can be downloaded directly from here, or you might want to have a look at the repository and see what it is about.






Description



Brainrot warning: The following description was generated by an LLM

The add-on provides an operator that creates a customizable "stonework" wall mesh made up of rows of randomly sized rectangular stones separated by configurable gaps. The main features and functionality are:

  • Adds a new mesh object to the scene representing a wall made of stones.
  • Customizable wall dimensions: You can set the total width and height of the wall.
  • Configurable stone size: Control the minimum and maximum width of stones, the height of each row, and the width of the first stone in each row.
  • Randomization: Stones in each row have random widths (within user-specified limits), and you can set a random seed to get a different placement of the stones.
  • Gaps between stones: You can specify the width and depth of the gaps between stones, making the wall look more realistic.
  • Half-stone probability: Optionally, some stones can be half-width for a more natural, irregular pattern.
  • Mesh construction: The add-on ensures all faces are connected, merging vertices where stones meet and splitting faces where a vertex lies on another face’s edge.
  • Extrusion: Stones are extruded along the Z-axis by the gap depth, giving the wall a 3D appearance.
  • User interface: The operator appears in the "Add" menu in the 3D Viewport, under the name "Stonework Wall".

This add-on is useful for quickly generating stylized or realistic stone or brick wall meshes for architectural visualization, games, or other 3D projects.
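As an illustration of the row-building idea described above, here is a bpy-free sketch in plain Python. The function name and parameters are made up for this example and are not the add-on's actual API:

```python
import random

def build_row(total_width, min_w, max_w, first_w, half_prob, rng):
    """Fill one row of the wall with randomly sized stone widths."""
    widths = [first_w]
    x = first_w
    while total_width - x > 1e-9:
        w = rng.uniform(min_w, max_w)
        if rng.random() < half_prob:
            w /= 2  # occasionally place a half-width stone
        w = min(w, total_width - x)  # clip the last stone to the wall edge
        widths.append(w)
        x += w
    return widths

rng = random.Random(42)  # a fixed seed gives a reproducible layout
row = build_row(4.0, 0.4, 0.8, 0.3, 0.2, rng)
print(len(row), sum(row))  # stone count varies with the seed; widths sum to the wall width
```

The real operator also stacks rows, merges vertices between stones, and extrudes the result, but the core randomization fits in a loop like this.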


Automatic unit tests for Blender add-ons


Reading time: 10 min

In an ongoing effort to create a GitHub repository / Vscode project that provides an example of a solid development environment for Blender add-ons, I would like to highlight how I set up automated testing.


Blender as a module


We are talking about automated unit tests here, and although in principle it would be possible to run a script with the --python option, that wouldn't provide us with easy test discovery, coverage metrics, and easy integration with continuous integration pipelines (like GitHub Actions).

Fortunately, Blender can be built as a module, and that module is provided as a package on PyPI. This allows us to import the bpy module (and other Blender modules like mathutils and bmesh) just like an add-on would inside Blender, yet still run the code as a stand-alone Python script. An extremely simplified example is provided in the repo. This example creates an add-on and, when run stand-alone, also executes that add-on. That works because importing the bpy module also creates an environment just like the one you get when opening Blender, so you can register add-ons, execute them, or manipulate any kind of Blender data. The only thing that is missing is the UI.

Testing & coverage


Now that we can run scripts that execute add-ons, we can also create automated tests, and with the right setup, they can be automatically discovered and even be run and inspected inside Vscode.

For this I added pytest and pytest-cov to the requirements and configured my Vscode workspace to enable testing (which is pretty simple: just follow the prompts in the Testing panel; more can be found here).

The resulting settings.json looks like this:

{
    "python.testing.pytestArgs": [
        "tests", "--cov=add_ons", "--cov-report=xml", "--benchmark-autosave", "--benchmark-skip"
    ],
    "python.testing.unittestEnabled": false,
    "python.testing.pytestEnabled": true
}



All tests are in a separate directory tests, and we record the coverage just for the add_ons directory (so we do not count the test code itself). Ignore the benchmark options for now; that's for another article perhaps. Any tests that are discovered will then show up in the Test panel:


And if you run the tests with coverage reporting, you get an overview of the coverage (where you can click the individual lines to go to the actual source file and see which lines were covered in the tests).


Anatomy of a Blender unit test


Let's have a look at an example test file to see how we can test our example add-on.

We import pytest, the bpy module and the add-on we would like to test:
import pytest
import bpy
import add_ons.example_simple

and then we create a class that contains all our individual unit tests as methods. The class name starts with Test; that way pytest will automatically discover it and execute the tests in the methods whose names start with test_
class TestExampleSimple:
    @classmethod
    def setup_class(cls):
        # Ensure the operator is registered before tests
        if not hasattr(bpy.types, add_ons.example_simple.OPERATOR_NAME):
            add_ons.example_simple.register()

    @classmethod
    def teardown_class(cls):
        # Unregister the operator after tests
        if hasattr(bpy.types, add_ons.example_simple.OPERATOR_NAME):
            add_ons.example_simple.unregister()

    def test_move_x_operator(self, monkeypatch):
        # Create a new object and set as active
        bpy.ops.mesh.primitive_cube_add()
        obj = bpy.context.active_object
        obj.location.x = 0.0

        # Set the operator amount
        amount = 2.5

        # Call the operator
        result = bpy.ops.object.move_x("INVOKE_DEFAULT", amount=amount)

        # Check result and new location
        assert result == {"FINISHED"}
        assert pytest.approx(obj.location.x) == amount

The important thing here is to have class methods called setup_class and teardown_class that register and unregister our add-on respectively. Before we register, we check whether the add-on is already registered and only register it if it is not (line 5). We do this because there could be other classes defined in this module that all operate in the same environment created when we import bpy, and we don't know what those other tests might be doing, so we play it safe.

An actual test, like test_move_x_operator, is a regular pytest unit test, but we must remember that all those tests operate in the same environment set up by import bpy, so any actions should either be harmless to other tests, or the tests should clean up after themselves. Here we add a Cube in object mode and just leave it lying around after we finish our tests, but for more complex add-ons/operators it would make more sense to remove the Cube again, or even reset to the initial state with bpy.ops.wm.read_factory_settings(use_empty=True).

Summary

We saw how to set up an automated testing environment for Blender add-ons using a GitHub repository and VSCode. By installing Blender as a Python module from PyPI, we can import bpy and related modules to simulate the Blender environment outside of its UI, enabling the use of tools like pytest and pytest-cov for test discovery, coverage reporting, and integration with CI pipelines. With a simple configuration in VSCode, tests can be easily run and inspected, with coverage results clearly visualized. We also explored how to structure test files using setup_class and teardown_class methods to safely register and unregister add-ons, and how to write unit tests that interact with Blender data while maintaining a clean and stable testing environment.

Colinearity tests in Blender meshes using Numpy

I re-implemented the algorithm used in the select_colinear_edges add-on to select all edges that are co-linear with already selected edges, and I thought a little write-up with some details could be useful for some people.

Warning! Long read! (≈ 20 min)

The challenge

If we want to select a path of co-linear edges all we have to do is start from any already selected edge, check if its neighbor is co-linear and if it is, select it and proceed from there. If we are careful not to examine any edges more than once, this algorithm will be quite fast and the time will depend on the number of directly connected edges that prove to be co-linear. And even in a large mesh this is likely to be a small number.
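The path-based approach amounts to a breadth-first walk over edge adjacency. Here is a minimal sketch with a toy adjacency structure; the function and data names are illustrative and the mesh wrangling is omitted:

```python
from collections import deque

def grow_colinear_path(selected, neighbors, is_colinear):
    """Expand a selection along unbroken chains of co-linear edges.

    selected:    set of initially selected edge indices
    neighbors:   dict mapping an edge index to adjacent edge indices
    is_colinear: function(edge_a, edge_b) -> bool
    """
    result = set(selected)
    queue = deque(selected)
    while queue:
        e = queue.popleft()
        for n in neighbors.get(e, ()):
            if n not in result and is_colinear(e, n):
                result.add(n)   # select it and keep walking from here
                queue.append(n)
    return result

# toy example: edges 0-1-2 form a straight chain, edge 3 branches off at 1
neighbors = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}
colinear_pairs = {(0, 1), (1, 0), (1, 2), (2, 1)}
print(grow_colinear_path({0}, neighbors, lambda a, b: (a, b) in colinear_pairs))
# -> {0, 1, 2}
```

Because each edge enters the queue at most once, the cost depends only on the edges actually visited, which is why the path-based variant stays fast even on large meshes.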

But what if we do not require those connected edges to form an unbroken path?

Then for all initially selected edges we would have to test all other edges in the mesh for co-linearity, something that can take a very long time if the mesh contains millions of vertices and everything is implemented in Python using just the mathutils module.

The algorithm

How do you determine if two edges are co-linear?

The first step is to see if they are parallel. This is done by calculating the dot product of the two normalized direction vectors. If this product is very close to 1 or -1 we consider them parallel.


(The dot product of two normalized vectors is the cosine of the angle between them)

Being parallel is a necessary condition but not a sufficient one for two edges to be co-linear. We also need to check whether they lie on the same line. This is done by first calculating the vector from any one of the two vertices in one edge to any one of the vertices in the other edge.

E3 is parallel to E1 but the light blue between vector is not parallel to E1

If the length of this vector is zero, the chosen vertices coincide and the edges are co-linear. If not, we check the angle between this vector and the direction vector of one of the edges; if the cosine of this angle is very close to 1 or -1, the edges are co-linear.

This means that for all edges we need to calculate the normalized direction vector and for all initially selected edges we need to calculate this between vector for all other edges.
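For a single pair of edges, both checks can be combined in a small self-contained sketch (plain Numpy, separate from the add-on's vectorized code; the function name is made up for this illustration):

```python
import numpy as np

def are_colinear(a0, a1, b0, b1, threshold=1e-4):
    """Edges (a0, a1) and (b0, b1) are co-linear when parallel AND on the same line."""
    d1 = np.asarray(a1, dtype=float) - np.asarray(a0, dtype=float)
    d2 = np.asarray(b1, dtype=float) - np.asarray(b0, dtype=float)
    d1 /= np.linalg.norm(d1)
    d2 /= np.linalg.norm(d2)
    if abs(abs(d1 @ d2) - 1.0) > threshold:
        return False  # not even parallel
    between = np.asarray(b0, dtype=float) - np.asarray(a0, dtype=float)
    length = np.linalg.norm(between)
    if length < 1e-6:
        return True   # shared vertex, and already parallel, so co-linear
    # the between vector must be (anti-)parallel to the edge direction too
    return abs(abs(d1 @ (between / length)) - 1.0) < threshold

print(are_colinear((0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)))  # same line
print(are_colinear((0, 0, 0), (1, 0, 0), (2, 1, 0), (3, 1, 0)))  # parallel but offset
```

Calling this pairwise for every selected edge against every other edge is exactly the quadratic work that the vectorized version below amortizes across whole arrays.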

A numpy based solution

Numpy can work efficiently on vast arrays of numbers and is bundled with Blender. By using Numpy we can avoid two notoriously slow things: Python loops and calling functions.

Our function looks like this (See the function colinear_edges() in this file):


def colinear_edges(selected: np.ndarray, indices, coords, threshold):
    colinear = np.zeros_like(selected)

    # calculate direction vectors for each edge
    edge_dirs = coords[indices[:, 1]] - coords[indices[:, 0]]
    edge_dirs = edge_dirs / np.linalg.norm(edge_dirs, axis=1)[:, np.newaxis]

    for e in selected.nonzero()[0]:
        # get the direction vector of the selected edge
        dir1 = edge_dirs[e]
        # check all other edges for colinearity
        angles = np.arccos(np.clip(np.dot(dir1, edge_dirs.T), -1.0, 1.0))
        parallel = (angles < threshold) | (np.abs(angles - np.pi) < threshold)
        v1 = coords[indices[e, 0]]
        w1 = coords[indices[:, 0]]
        # vector between start points
        between = w1 - v1
        # if the vector between start points is zero, they share a vertex, so colinear
        between_length = np.linalg.norm(between, axis=1)
        connected = between_length < 1e-6
        angles_between = np.abs(
            np.arccos(
                np.clip(
                    np.dot(dir1, (between / between_length[:, np.newaxis]).T), -1.0, 1.0
                )
            )
        )
        bparallel = (angles_between < threshold) | (
            np.abs(angles_between - np.pi) < threshold
        )
        # colinear if they are parallel and either share a vertex or the angle between the direction vector and the vector between start points is less than the threshold
        colinear |= (connected | bparallel) & parallel

    return colinear

Let's explain a few important steps.

The function is called with 4 arguments: a boolean array that indicates which edges are currently selected; an array of indices (2 for each edge) that indexes the third argument, an array of vertex coordinates; and a threshold value we'll discuss later. All those arrays come from a Blender Mesh object, and we will see how later in this article.

Line 5+6: Here we calculate all direction vectors from the edge endpoint coordinates in one go, and then normalize them in a single statement by dividing each vector by its norm (i.e. its length).
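The normalization step relies on Numpy broadcasting: dividing an (n, 3) array by an (n, 1) array of lengths divides each row by its own length. A standalone illustration:

```python
import numpy as np

dirs = np.array([[3.0, 0.0, 4.0],
                 [0.0, 2.0, 0.0]])
lengths = np.linalg.norm(dirs, axis=1)  # per-edge lengths: [5.0, 2.0]
unit = dirs / lengths[:, np.newaxis]    # broadcasting: each row / its own length
print(np.linalg.norm(unit, axis=1))     # every row is now unit length
```

Without the `[:, np.newaxis]` reshape, Numpy would try to divide the 3 columns by n lengths, which either fails or silently does the wrong thing.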

Line 8-10: We loop over each selected edge and get its direction vector.

Line 12: Then we calculate the angles with all other direction vectors. This is done by calculating the dot product between the direction vector and all other direction vectors in one go (note that we need to transpose the array of vectors for this to work). We clip the dot products between -1 and 1 to guard against any floating point inaccuracies and then use the arccos() function to calculate the angle (remember that the dot product of two unit vectors is the cosine of the angle between them).

Line 13: then the angle is checked against the threshold, and if it is smaller (or very close to π, because we don't care in which direction the vectors point) we deem the edges parallel.

Line 14-17: then we take a vertex v1 from the first edge and the corresponding vertex w1 from each other edge, and calculate the between vectors.

Line 19+20: we calculate the length of all those between vectors, and for each of them determine whether this length is so small that we consider the vertices coincident.

Line 21-27: then we calculate all angles between the direction vector and the between vectors in the same way we did before.

Line 28-30: we then determine whether the between vectors are parallel with the direction vector (or anti-parallel, because we don't care about the direction here either).

Line 32: Finally we combine the logic and say two edges are co-linear if they are parallel AND (their chosen vertices are coincident OR the angle between the between vector and the direction is near zero). The result is OR-ed into the colinear array because we do this for each edge that was initially selected and want to return the combined set.

Calling colinear_edges()

If we have a Blender Mesh object we can access obj.data.edges and obj.data.vertices. The select_colinear() function takes references to those properties and uses them to efficiently retrieve all the indices, selected statuses, and vertex coordinates with the foreach_get() method (Line 7-9). It stores them in arrays we have created first (Line 4-6).
foreach_get() expects flat arrays, so we reshape them into their expected shapes where needed (Line 10+11), before we call the colinear_edges() function discussed earlier (Line 13).
The result is a flat array with the new selected status of each edge, which we store in the select attribute of the mesh edges with the foreach_set() method (Line 14).
And finally we return the number of selected edges by counting all non-zero values (True is considered non-zero too, so this works fine for arrays of booleans).
def select_colinear(edges, vertices, threshold):
    n_edges = len(edges)
    n_vertices = len(vertices)
    indices = np.empty(2 * n_edges, dtype=int)
    coords = np.empty(3 * n_vertices, dtype=float)
    selected = np.zeros(n_edges, dtype=bool)
    edges.foreach_get("vertices", indices)
    edges.foreach_get("select", selected)
    vertices.foreach_get("co", coords)
    coords = coords.reshape((n_vertices, 3))
    indices = indices.reshape((n_edges, 2))

    colinear = colinear_edges(selected, indices, coords, threshold)
    edges.foreach_set("select", colinear)
    return np.count_nonzero(colinear)

Summary

Using the foreach_get() / foreach_set() methods lets us easily access mesh properties in bulk, which allows us to use Numpy to implement an algorithm that calculates co-linearity without Python loops (except for the loop over all initially selected edges).

In exchange for a modest increase in complexity we gain a lot of performance: Although your mileage may vary of course, I could easily (in < 0.1 second) select all co-linear edges when picking one edge in a default cube that was subdivided 500 times (= around 3.5 million edges). Fast enough for me 😀




New blenderaddons repo aimed at developers

 

I decided to create a new repository for my Blender add-ons. It is called blenderaddons-ng and aims to replace my old repo with a complete, Vscode based solution.

Goals

The primary goal for this new repo is not just to host any add-ons I write, but also to provide an example of a complete development environment based on Vscode.

To facilitate this, I added the following features:

  • A DevContainer to isolate the development environment
  • A complete configuration to enable testing with pytest
  • Options to enable line profiling
  • GitHub actions for CI

A more complete write-up can be found on the GitHub page for the repo, but in short:

DevContainer

Based on Ubuntu and containing all necessary dependencies to develop and test Blender add-ons. It does not contain Blender but provides the bpy module standalone, so we can perform automated tests.


Pytest

We use pytest for automated testing as well as for on-demand testing in Vscode. The coverage and benchmark plugins are provided as well.

Line profiler

For those situations where we would like to take an in-depth look at performance, the line-profiler package is installed as well, and we provide example code so you can see how to use it in such a way that you don't have to alter code before distributing an add-on.

GitHub Actions

Upon each commit (and merged pull request) on GitHub all automated tests are run, and the results and coverage are updated in badges.


Cylinder fit add-on updated for Blender 4.4


In an ongoing effort to check if any of my old add-ons still work on Blender 4.4, I just tested cylinderfit.zip.




It works without any change, so I simply committed it with a comment.

The add-on is available from my GitHub repository and you can read more on it in this article.





Linefit add-on tested on Blender 4.4

 


In an ongoing effort to check if any of my old add-ons still work on Blender 4.4, I just tested linefit.py.


It needed only a minor change because the Numpy version bundled with Blender has changed.

The add-on is available from my GitHub repository and you can read more on it in this article.



Plane fit add-on tested with Blender 4.4


In an ongoing effort to check if any of my old add-ons still work on Blender 4.4, I just tested planefit.py.


It works without any change, so I simply committed it with a comment.

The add-on is available from my GitHub repository and you can read more on it in this article.