**Note:** I switched to highlight.js in combination with showdown.js to create new articles on this blog, so expect some minor weirdness here and there!

When manipulating **large numbers of vertices** in Blender via Python, a **naive approach** — relying on standard Python iteration (loops) and individual property assignment — is computationally slow. Blender's Python layer introduces significant overhead when repeatedly crossing the boundary between the Python interpreter and the core C/C++ data structures for each vertex.

The performance bottleneck can be overcome by using **NumPy** to process the data in **bulk** using **vectorized operations**. Because [NumPy](https://numpy.org/) is bundled with Blender, there is no need to install anything extra; your add-on can simply import it.

The NumPy-based approach involves three main steps: data export, array processing, and data import.

-----

### 1\. Bulk Data Transfer: `foreach_get()` and `foreach_set()`

Instead of looping through `mesh.vertices[i].co` one at a time, Blender's Python API provides high-speed methods for transferring entire blocks of data: **`foreach_get`** and **`foreach_set`**.

These methods directly copy data between Blender's internal data structures (like vertex coordinates, normals, or custom properties) and a flat, contiguous **NumPy `ndarray`**. This minimizes the number of expensive function calls across the Python/C boundary, turning thousands of slow operations into one or two very fast memory copies.

The `foreach_get()` and `foreach_set()` methods are available for any [property collection](https://docs.blender.org/api/latest/bpy.types.bpy_prop_collection.html#bpy.types.bpy_prop_collection.foreach_get), and allow access to any property that has a 'basic' type (i.e. a bool, int, or float).

```python
# Assuming 'mesh' is a Blender mesh data block
import numpy as np  # bundled with Blender, so no need to install it

# 1. Export: Get all vertex coordinates into a flat NumPy array
# 'co' are the vertex coordinates (x, y, z), so the array size is N_verts * 3
verts_co = np.empty(len(mesh.vertices) * 3, dtype=np.float32)
mesh.vertices.foreach_get("co", verts_co)

# The array is now a flat 1D array; reshape for 3D processing
verts_co = verts_co.reshape((-1, 3))
```

-----

Note that the NumPy ndarray needs to be allocated before we can call `foreach_get()`. There is no need to zero out the contents of this new array, so we can use `np.empty()` to allocate the space, saving some time by omitting unnecessary initialization.

### 2\. Vectorized Processing with NumPy

Once the vertex coordinates are in a NumPy **`ndarray`**, all processing should be performed using NumPy's optimized C-backed functions.

#### NumPy Operations

NumPy executes operations on the entire array at once, often using highly optimized C implementations, which is vastly faster than interpreted Python loops.

**Example: Translating all vertices**

```python
# Define a translation vector
translation_vector = np.array([1.0, 0.5, 0.0])

# Perform the addition on all vertices simultaneously
# This is where the speed comes from
verts_co += translation_vector
```

#### The Power of Broadcasting

[Broadcasting](https://numpy.org/doc/stable/user/absolute_beginners.html#broadcasting) is a fundamental NumPy mechanism that automatically handles operations between arrays of different but compatible shapes. It allows the smaller array (the translation vector `[1.0, 0.5, 0.0]`) to be conceptually "stretched" or "broadcast" across the larger array (`verts_co`), eliminating the need to explicitly write a loop to apply the translation to every single vertex row. It is no exaggeration to state that when using NumPy, any explicit loop left in your Python code means you are probably not doing things right.

NumPy performs the operation by aligning the shapes from the trailing dimensions, effectively applying the 1x3 translation vector shown in the example to every row in the Nx3 array of vertex coordinates.

-----

### 3\. Importing Data Back

After all modifications are complete, the resulting NumPy array is copied back into the Blender mesh data using **`foreach_set()`**.

```python
# 3. Import: Flatten the processed array back to 1D
verts_co = verts_co.reshape(-1)

# Set the data back into the mesh
mesh.vertices.foreach_set("co", verts_co)

# Update the mesh data for Blender to display the changes
mesh.update()
```

By utilizing **`foreach_get()`/`foreach_set()`** and NumPy's **vectorized operations**, operations involving hundreds of thousands or even millions of vertices become practical. In a future article I will explore this some more with a real-world example and some performance data.
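The whole round trip can also be tried outside Blender. The sketch below simulates the flat buffer that `foreach_get()` fills with a few made-up vertex coordinates, and shows the reshape, the broadcast addition, and the flatten step:

```python
import numpy as np

# A flat buffer of 3 vertices, laid out as foreach_get("co", ...) would fill it
# (the coordinate values are made up for this illustration)
flat = np.array([0.0, 0.0, 0.0,
                 1.0, 0.0, 0.0,
                 0.0, 1.0, 0.0], dtype=np.float32)

verts_co = flat.reshape((-1, 3))        # view the buffer as N x 3 rows
verts_co += np.array([1.0, 0.5, 0.0])   # broadcast the 1x3 vector over every row
flat_back = verts_co.reshape(-1)        # flatten again, ready for foreach_set()
```

Because `reshape()` returns a view where possible, no data is copied here: the addition modifies the same memory that would be handed back to `foreach_set()`.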
Small Blender Things
Customizing Blender with Python and OSL
Efficient Vertex Manipulation in Blender with NumPy
Updated: Generate a list of Blender object information
A while ago, I created a tiny add-on to generate a list of object info as a comma-separated file.
I found it useful when I wanted to keep track of a large collection of objects where the vertex count was important and I was simplifying them one by one.
However, the add-on only listed this information for meshes, while I also wanted it for curves, i.e. their effective polygon count.
So I updated the add-on to convert any object in the scene to a mesh (ignoring the ones that cannot be converted, like lights and empties) and list the info of these converted objects. After these calculations the converted objects are deleted again, so you won't notice them (although they will briefly use memory).
Installation and use
Download and install the file object_list.py from my GitHub repository, and once enabled it will show up in the Object menu:
If selected, a new text block will be generated that contains a comma-separated list of object info, with a header and one line for each object in the scene that is a mesh or can be converted to one.
For example, in a sample scene with a cube, a cone, a bezier curve, a light and a camera, the following list will be generated (note the absence of camera and light):
```
Name,Type,Tris,Faces,Edges,Verts,Datablock name,Users,Collection 1,Collection 2,Collection 3
Cube A,MESH,12,6,12,8,Cube,1,Collection,,
Cone,MESH,62,33,64,33,Cone,1,Collection,,
BézierCurve,CURVE,672,336,700,364,BézierCurve.001,1,Collection,,
```
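The list itself is plain CSV, so assembling it boils down to writing rows. A minimal sketch using Python's `csv` module (not the add-on's actual code; the object data below is made up to mirror the example) could look like this:

```python
import csv
import io

# Hypothetical per-object info, mirroring the columns of the example above
header = ["Name", "Type", "Tris", "Faces", "Edges", "Verts",
          "Datablock name", "Users", "Collection 1", "Collection 2", "Collection 3"]
rows = [
    ["Cube A", "MESH", 12, 6, 12, 8, "Cube", 1, "Collection", "", ""],
    ["Cone", "MESH", 62, 33, 64, 33, "Cone", 1, "Collection", "", ""],
]

# Write everything to an in-memory buffer, like writing to a Blender text block
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(header)
writer.writerows(rows)
text = buffer.getvalue()
```

In the add-on the rows would of course be filled from the converted mesh data instead of literals.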
IDMapper add-on for Blender is now free
In a previous article I mentioned that I stopped publishing add-ons on Blender Market because basically I was done with it: I didn't earn enough to warrant the amount of effort that goes into maintaining and supporting add-ons.
However, there is still quite some interest in some of the add-ons, so I decided to open source some of them and make them available for free. The first one was Snap!, then it was Weightlifter's turn (my very first add-on published on BlenderMarket, now Superhive, back in 2014!), and now IDMapper.
Technically IDMapper already was open source, at least the source code was, but now I'm also open sourcing the manual, and that means that you can download the source from this GitHub repository (the releases section), and the manual is available too.
This repository is still a work in progress, but the code will run on Blender 4.4. For now, simply download weightlifter.py and use the install-from-file option in the add-ons section. Then read the manual!
To give you some idea of its capabilities, there is a video on YouTube.
Useful? Upgrading an old Blender add-on with ChatGPT
**Warning:** Long read! (15 min)
Preparation
- minimal version number in bl_info
- property definitions
- the menu where we add the operator changed
- the way we create uv and vertex color layers
- vertex colors now have alpha
Prompting
Suggest all steps needed to port this add-on to Blender 4.4
Would you like a review of the drystone.py code for common compatibility issues?
Would you like a code review of drystone.py to identify specific lines that may need updating?
Suggested changes
- Confirm all `bpy.context` and `bpy.ops` usage is compatible with Blender 4.4’s context requirements.
- Check for any new warnings/errors after updating.
| Issue | Old Code Example | Update To |
|---|---|---|
| bl_info version | `(2, 78, 0)` | `(4, 4, 0)` |
| Property definition | `xsub = IntProperty(...)` | `xsub: bpy.props.IntProperty(...)` |
| Menu type | `INFO_MT_mesh_add` | `VIEW3D_MT_mesh_add` |
| UV/vertex color layers | `me.uv_textures.new()`, `me.vertex_colors.new()` | `me.uv_layers.new()`, `me.vertex_colors.new()` |
- Apply above changes.
- Test the add-on in Blender 4.4 and check the console for new warnings or errors.
- Fix any additional API changes found in testing.

Let me know if you want line-by-line code examples for any of these changes!
Observations
Conclusion
Stonework - A Blender add-on to create stone walls
*blenderaddons-ng*
It creates a clean mesh, although currently it does not add uv-coordinates.
It can be downloaded directly from here, or you might want to have a look at the repository and see what it is about.

Description
The add-on provides an operator that creates a customizable "stonework" wall mesh made up of rows of randomly sized rectangular stones separated by configurable gaps. The main features and functionality are:
- Adds a new mesh object to the scene representing a wall made of stones.
- Customizable wall dimensions: You can set the total width and height of the wall.
- Configurable stone size: Control the minimum and maximum width of stones, the height of each row, and the width of the first stone in each row.
- Randomization: Stones in each row have random widths (within user-specified limits), and you can set a random seed to get a different placement of the stones.
- Gaps between stones: You can specify the width and depth of the gaps between stones, making the wall look more realistic.
- Half-stone probability: Optionally, some stones can be half-width for a more natural, irregular pattern.
- Mesh construction: The add-on ensures all faces are connected, merging vertices where stones meet and splitting faces where a vertex lies on another face’s edge.
- Extrusion: Stones are extruded along the Z-axis by the gap depth, giving the wall a 3D appearance.
- User interface: The operator appears in the "Add" menu in the 3D Viewport, under the name "Stonework Wall".
This add-on is useful for quickly generating stylized or realistic stone or brick wall meshes for architectural visualization, games, or other 3D projects.
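To illustrate the row-filling idea, here is a small sketch of how random stone widths for a single row could be generated. This is a hypothetical helper, not the add-on's actual code, and it omits details like half-stone probability; the last stone is simply clamped to whatever width remains:

```python
import random

def stone_row_widths(wall_width, min_width, max_width, first_width, seed=0):
    """Fill one row of a wall with randomly sized stones (illustrative sketch)."""
    rng = random.Random(seed)  # a fixed seed makes the layout reproducible
    widths = [min(first_width, wall_width)]
    remaining = wall_width - widths[0]
    while remaining > 1e-9:
        # pick a random width, but never overshoot the end of the wall
        width = min(rng.uniform(min_width, max_width), remaining)
        widths.append(width)
        remaining -= width
    return widths

row = stone_row_widths(10.0, 1.0, 2.0, 1.5, seed=42)
```

Summing the generated widths always gives back the wall width, so the stones exactly fill the row.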
Automatic unit tests for Blender add-ons
Blender as a module
Testing & coverage
```json
{
    "python.testing.pytestArgs": [
        "tests",
        "--cov=add_ons",
        "--cov-report=xml",
        "--benchmark-autosave",
        "--benchmark-skip"
    ],
    "python.testing.unittestEnabled": false,
    "python.testing.pytestEnabled": true
}
```
Anatomy of a Blender unit test
```python
import pytest
import bpy

import add_ons.example_simple


class TestExampleSimple:
    @classmethod
    def setup_class(cls):
        # Ensure the operator is registered before tests
        if not hasattr(bpy.types, add_ons.example_simple.OPERATOR_NAME):
            add_ons.example_simple.register()

    @classmethod
    def teardown_class(cls):
        # Unregister the operator after tests
        if hasattr(bpy.types, add_ons.example_simple.OPERATOR_NAME):
            add_ons.example_simple.unregister()

    def test_move_x_operator(self, monkeypatch):
        # Create a new object and set as active
        bpy.ops.mesh.primitive_cube_add()
        obj = bpy.context.active_object
        obj.location.x = 0.0

        # Set the operator amount
        amount = 2.5

        # Call the operator
        result = bpy.ops.object.move_x("INVOKE_DEFAULT", amount=amount)

        # Check result and new location
        assert result == {"FINISHED"}
        assert pytest.approx(obj.location.x) == amount
```
Summary
Installing Blender as a module provides `bpy` and related modules to simulate the Blender environment outside of its UI, enabling the use of tools like `pytest` and `pytest-cov` for test discovery, coverage reporting, and integration with CI pipelines. With a simple configuration in VSCode, tests can be easily run and inspected, with coverage results clearly visualized. We also explored how to structure test files using `setup_class` and `teardown_class` methods to safely register and unregister add-ons, and how to write unit tests that interact with Blender data while maintaining a clean and stable testing environment.

Colinearity tests in Blender meshes using Numpy
I re-implemented the algorithm used in the select_colinear_edges add-on to select all edges that are co-linear with already selected edges, and I thought a little write-up with some details could be useful for some people.
**Warning!** Long read! (≈ 20 min)
The challenge
If we want to select a path of co-linear edges all we have to do is start from any already selected edge, check if its neighbor is co-linear and if it is, select it and proceed from there. If we are careful not to examine any edges more than once, this algorithm will be quite fast and the time will depend on the number of directly connected edges that prove to be co-linear. And even in a large mesh this is likely to be a small number.
But what if we do not require those connected edges to form an unbroken path?
Then for all initially selected edges we would have to test all other edges in the mesh for co-linearity, something that can take a very long time if the mesh contains millions of vertices and everything is implemented in Python using just the mathutils module.
The algorithm
How do you determine if two edges are co-linear?
The first step is to see if they are parallel. This is done by calculating the dot product of the two normalized direction vectors. If this product is very close to 1 or -1, we consider them parallel.
(The dot product of two normalized vectors is the cosine of the angle between them)
Being parallel is a necessary condition but not a sufficient one to determine if two edges are co-linear. We also need to check if they are on the same line. This is done by first calculating the vector from any one of the two vertices in one edge to any one of the vertices in the other edge.
*E3 is parallel to E1, but the light blue between vector is not parallel to E1*
If the length of this vector is zero, then the chosen vertices coincide, and the edges are co-linear. If not, we check the angle between this vector and the direction vector of one of the edges, and if the cosine of this angle is very close to 1 or -1, the edges are co-linear.

This means that for all edges we need to calculate the normalized direction vector, and for all initially selected edges we need to calculate this between vector to all other edges.
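For a single pair of edges, the two tests described above can be sketched as follows. This is a plain NumPy illustration of the idea rather than the add-on's code, and it compares cosines directly instead of angles:

```python
import numpy as np

def edges_colinear(a0, a1, b0, b1, eps=1e-6):
    """Return True when segments a0-a1 and b0-b1 lie on the same line."""
    d1 = (a1 - a0) / np.linalg.norm(a1 - a0)  # normalized direction of edge 1
    d2 = (b1 - b0) / np.linalg.norm(b1 - b0)  # normalized direction of edge 2
    if abs(np.dot(d1, d2)) < 1.0 - eps:       # not parallel -> not co-linear
        return False
    between = b0 - a0                         # vector between chosen vertices
    length = np.linalg.norm(between)
    if length < eps:                          # shared vertex -> co-linear
        return True
    # parallel edges are co-linear only if the between vector is parallel too
    return abs(np.dot(d1, between / length)) > 1.0 - eps
```

For example, two segments along the x-axis are reported co-linear, while a parallel segment shifted in y is not.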
A numpy based solution
Numpy can work efficiently on vast arrays of numbers and is bundled with Blender. By using Numpy we can avoid two notoriously slow things: Python loops and calling functions.
Our function looks like this (See the function colinear_edges() in this file):
```python
def colinear_edges(selected: np.ndarray, indices, coords, threshold):
    colinear = np.zeros_like(selected)

    # calculate direction vectors for each edge
    edge_dirs = coords[indices[:, 1]] - coords[indices[:, 0]]
    edge_dirs = edge_dirs / np.linalg.norm(edge_dirs, axis=1)[:, np.newaxis]

    for e in selected.nonzero()[0]:
        # get the direction vector of the selected edge
        dir1 = edge_dirs[e]
        # check all other edges for colinearity
        angles = np.arccos(np.clip(np.dot(dir1, edge_dirs.T), -1.0, 1.0))
        parallel = (angles < threshold) | (np.abs(angles - np.pi) < threshold)
        v1 = coords[indices[e, 0]]
        w1 = coords[indices[:, 0]]
        # vector between start points
        between = w1 - v1
        # if the vector between start points is zero, they share a vertex, so colinear
        between_length = np.linalg.norm(between, axis=1)
        connected = between_length < 1e-6
        angles_between = np.abs(
            np.arccos(
                np.clip(
                    np.dot(dir1, (between / between_length[:, np.newaxis]).T), -1.0, 1.0
                )
            )
        )
        bparallel = (angles_between < threshold) | (
            np.abs(angles_between - np.pi) < threshold
        )
        # colinear if they are parallel and either share a vertex or the angle between the direction vector and the vector between start points is less than the threshold
        colinear |= (connected | bparallel) & parallel
    return colinear
```
Let's explain a few important steps.

The function is called with 4 arguments: a boolean array that indicates which edges are currently selected, an array with indices (2 for each edge) that index into the third argument, an array with vertex coordinates, and a threshold value we'll discuss later. All those arrays come from a Blender Mesh object, and we will see how later in this article.
Line 5+6: Here we calculate all edge direction vectors in one go, and then normalize them in a single statement by dividing each vector by its norm (i.e. length).

Line 8-10: We loop over each selected edge and get its direction vector.

Line 12: Then we calculate the angles with all other direction vectors. This is done by calculating the dot product between the direction vector and all other direction vectors in one go (note that we need to transpose the array of vectors for this to work). We clip the dot products between -1 and 1 to guard against any floating point inaccuracies, and then use the arccos() function to calculate the angle (remember that the dot product of two normalized vectors is the cosine of the angle between them).

Line 13: Then the angle is checked against the threshold, and if it is smaller (or very close to π, because we don't care in which direction the vectors are aligned) we deem the edges parallel.

Line 14-17: Then we take a vertex v1 from the first edge and all vertices w1 from each other edge, and calculate the between vectors.

Line 19+20: We calculate the length of all those between vectors, and for each of them determine if this length is so small that we consider the vertices coincident.

Line 21-27: Then we calculate all angles between the direction vector and the between vectors, in the same way we did before.

Line 28-30: We then determine if the between vectors are parallel to the direction vector (or anti-parallel, because again we don't care about the direction).

Line 32: Finally we combine the logic and say two edges are co-linear if they are parallel AND (their chosen vertices are coincident OR the angle between the direction vector and the between vector is near zero). The result is OR-ed into the colinear array, because we do this for each edge that was initially selected and want to return the combined set.
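The parallel test from lines 12 and 13 can also be seen in isolation with a few hand-picked direction vectors (made-up data for illustration):

```python
import numpy as np

dir1 = np.array([1.0, 0.0, 0.0])           # direction of the selected edge
edge_dirs = np.array([[1.0, 0.0, 0.0],     # parallel
                      [0.0, 1.0, 0.0],     # perpendicular
                      [-1.0, 0.0, 0.0]])   # anti-parallel
threshold = 1e-3  # angular threshold in radians

angles = np.arccos(np.clip(np.dot(dir1, edge_dirs.T), -1.0, 1.0))
parallel = (angles < threshold) | (np.abs(angles - np.pi) < threshold)
# parallel is now [True, False, True]: the anti-parallel edge counts as parallel too
```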
Calling colinear_edges()
```python
def select_colinear(edges, vertices, threshold):
    n_edges = len(edges)
    n_vertices = len(vertices)
    indices = np.empty(2 * n_edges, dtype=int)
    coords = np.empty(3 * n_vertices, dtype=float)
    selected = np.zeros(n_edges, dtype=bool)
    edges.foreach_get("vertices", indices)
    edges.foreach_get("select", selected)
    vertices.foreach_get("co", coords)
    coords = coords.reshape((n_vertices, 3))
    indices = indices.reshape((n_edges, 2))
    colinear = colinear_edges(selected, indices, coords, threshold)
    edges.foreach_set("select", colinear)
    return np.count_nonzero(colinear)
```
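Because colinear_edges() only touches NumPy arrays, it can also be exercised outside Blender with synthetic data. The sketch below repeats a condensed version of the function and feeds it three hand-made edges: two on the same line and one offset in y (the data is made up for illustration; an errstate guard is added to silence the harmless 0/0 for coincident vertices):

```python
import numpy as np

def colinear_edges(selected, indices, coords, threshold):
    # condensed version of the function shown earlier in this article
    colinear = np.zeros_like(selected)
    edge_dirs = coords[indices[:, 1]] - coords[indices[:, 0]]
    edge_dirs = edge_dirs / np.linalg.norm(edge_dirs, axis=1)[:, np.newaxis]
    for e in selected.nonzero()[0]:
        dir1 = edge_dirs[e]
        angles = np.arccos(np.clip(np.dot(dir1, edge_dirs.T), -1.0, 1.0))
        parallel = (angles < threshold) | (np.abs(angles - np.pi) < threshold)
        between = coords[indices[:, 0]] - coords[indices[e, 0]]
        between_length = np.linalg.norm(between, axis=1)
        connected = between_length < 1e-6
        with np.errstate(invalid="ignore", divide="ignore"):
            angles_between = np.abs(np.arccos(np.clip(
                np.dot(dir1, (between / between_length[:, np.newaxis]).T), -1.0, 1.0)))
        bparallel = (angles_between < threshold) | (np.abs(angles_between - np.pi) < threshold)
        colinear |= (connected | bparallel) & parallel
    return colinear

# three edges along the x-axis: e0 and e1 on the same line, e2 offset in y
coords = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                   [2.0, 0.0, 0.0], [3.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
indices = np.array([[0, 1], [2, 3], [4, 5]])
selected = np.array([True, False, False])

result = colinear_edges(selected, indices, coords, threshold=1e-3)
# result: [True, True, False] -- e1 is co-linear with e0, e2 is only parallel
```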
Summary
Using `foreach_get()` / `foreach_set()` gives us easy bulk access to mesh properties, which allows us to use Numpy to implement an algorithm that calculates co-linearity without Python loops (except for the loop over all initially selected edges).
In exchange for a modest increase in complexity we gain a lot of performance: Although your mileage may vary of course, I could easily (in < 0.1 second) select all co-linear edges when picking one edge in a default cube that was subdivided 500 times (= around 3.5 million edges). Fast enough for me 😀