glLineStipple and co are gone, must be replaced by shaders
This popped up when working with the bgl module. There is a good shader-based alternative however, with an example in the docs.
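To give an impression of what a replacement can look like, here is a minimal sketch (the docs example uses a custom GLSL shader; the dashed_line_batch helper below is made up for illustration and simply approximates a stippled line by drawing short segments with a built-in shader, to be used from a draw handler):

import gpu
from gpu_extras.batch import batch_for_shader
from mathutils import Vector

def dashed_line_batch(shader, a, b, dash=0.1, gap=0.1):
    # approximate glLineStipple by splitting the line into short segments
    a, b = Vector(a), Vector(b)
    direction = b - a
    length = direction.length
    direction.normalize()
    coords = []
    t = 0.0
    while t < length:
        coords.append(tuple(a + direction * t))
        coords.append(tuple(a + direction * min(t + dash, length)))
        t += dash + gap
    return batch_for_shader(shader, 'LINES', {"pos": coords})

shader = gpu.shader.from_builtin('3D_UNIFORM_COLOR')
batch = dashed_line_batch(shader, (0, 0, 0), (2, 0, 0))

def draw():
    shader.bind()
    shader.uniform_float("color", (1, 1, 0, 1))
    batch.draw(shader)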
Groups are now collections
which means references to bpy.data.groups must be replaced by bpy.data.collections
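For example, creating a collection and linking an object now looks like this (a minimal sketch, assuming ob refers to some existing object):

# Blender 2.79
# group = bpy.data.groups.new("Walls")
# group.objects.link(ob)

# Blender 2.80
collection = bpy.data.collections.new("Walls")
bpy.context.scene.collection.children.link(collection)
collection.objects.link(ob)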
Setting the active object is different
You assign an object to bpy.context.view_layer.objects.active.
In many circumstances the view_layer is now the ultimate arbiter of what we see. This allows for organizing the visibility of objects in a scene but it also offers the link to the active object.
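In code (assuming ob is the object you want to make active):

# Blender 2.79
# bpy.context.scene.objects.active = ob

# Blender 2.80
bpy.context.view_layer.objects.active = ob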
You must use ob.select_set(<bool>) instead of ob.select = <bool>
This is a breaking change that could have been avoided: if it was necessary to execute some code when the select status changes instead of relying on a simple assignment, a property could have been used. As it stands it is just an annoying change, in my opinion.
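So selection now looks like this (a minimal example):

# Blender 2.79
# ob.select = True

# Blender 2.80
ob.select_set(True)
selected = ob.select_get()  # reading the selection state is a method call too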
INFO_MT_mesh_add menu is now called VIEW3D_MT_mesh_add
Which makes sense because it is part of the VIEW3D area.
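Code that appends an entry to that menu now looks something like this (a minimal sketch; menu_func and the operator used in it are just placeholders):

def menu_func(self, context):
    self.layout.operator("mesh.primitive_cube_add", text="Example entry")

# Blender 2.79
# bpy.types.INFO_MT_mesh_add.append(menu_func)

# Blender 2.80
bpy.types.VIEW3D_MT_mesh_add.append(menu_func)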
use_subsurf_uv no longer available on a subsurf modifier
Too bad, but I guess this wasn't used very often, and the new quality settings of the modifier might well serve the purpose even better.
no bl_category in Panels
So you cannot decide where your Panel will be located in the Toolbar.
That's what I thought earlier, and it is true, but only if your region type is TOOLS (visible with Ctrl-T); you can still have a category if your region type is UI (visible with Ctrl-N).
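A minimal sketch of a panel that does end up in its own tab in the sidebar (the class name and category are made up):

import bpy

class VIEW3D_PT_example_tools(bpy.types.Panel):
    bl_label = "Example Tools"
    bl_space_type = 'VIEW_3D'
    bl_region_type = 'UI'    # the sidebar; with 'TOOLS' bl_category is ignored
    bl_category = "Example"  # the name of the tab in the sidebar

    def draw(self, context):
        self.layout.label(text="Hello from my own tab")

bpy.utils.register_class(VIEW3D_PT_example_tools)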
For quite some time it has been possible to create draw handlers in Blender that use OpenGL functionality to draw overlays in the 3d view. Until now this functionality has been restricted by older bindings that limited the exposed OpenGL functionality to version 2.0 features. The most noticeable effect of this was slow performance in complex drawings due to the lack of vertex buffer objects.
With Blender 2.80 however, new bindings have been added that expose OpenGL version 3.3, and the Python API has been expanded not just to expose those bindings but also to add some convenient extras that greatly help you get started.
In this article I want to highlight some features that are available and that I am using in an add-on that implements some sort of connection editor for game levels. The idea is that on top of an active object you have an overlay where you can point and click to indicate spots where other blocks might connect. This is easier to show than to explain in words so I've put together a small video that illustrates the concept:
Drawing code
I haven't uploaded any code yet but I do want to illustrate the important bits with some code snippets.
The all-important imports are the following:
import bgl  # still needed to set OpenGL state like blending
import gpu
from gpu_extras.batch import batch_for_shader
import numpy as np
The numpy part is actually not relevant to our current discussion, but it speeds up working with lots of geometry quite a bit. More on the gpu module can be found in Blender's online documentation.
With these imports in place the actual drawing is pretty straightforward:
bgl.glEnable(bgl.GL_BLEND)
shader = gpu.shader.from_builtin('3D_UNIFORM_COLOR')
shader.bind()

pos, indices = some_function_generating_geometry()

mat = np.array(ob.matrix_world)
pos = np.array([mat @ v for v in pos], np.float32)[:,:3]

batch = batch_for_shader(shader, 'TRIS', {"pos": pos}, indices=indices)

shader.uniform_float("color", (1,0,0,0.5))
batch.draw(shader)
Because we want to draw transparent faces, we enable blending (line 1). We will need a shader to draw our geometry but because the faces will be a uniform color we can use a built-in shader (line 2).
It is necessary to bind the shader (line 3) so that we can assign a value to its color uniform later.
Next we calculate the vertices and indices that make up our geometry (line 5). The positions of the vertices are in homogeneous coordinates (i.e. 4 components x,y,z,w with w == 1.0), so the world matrix of the object around which we are drawing can be used to transform those coordinates to world space. Because the positions we will pass to our shader need to have 3 coordinates, we remove the 4th coordinate after the transformation (line 8).
Next we create a batch, i.e. all the information necessary to actually draw our geometry. We tell it what shader to use and that we will be drawing a collection of triangles (TRIS), and we pass it a list of indices (three for each triangle) that point to the vertices that make up those triangles (line 10).
Finally we assign a color and do the actual drawing (lines 12 and 13). Note that we didn't do anything with regard to positioning the camera or calculating a view matrix; this is all set up by Blender's draw handler already, so anything we draw in the 3d view will react to camera changes without us having to do anything.
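For completeness, registering such a draw handler could look something like this (a minimal sketch; draw() is assumed to be a function wrapping the drawing code above):

import bpy

def draw():
    # the drawing code from the snippet above goes here
    pass

# run our callback in the 3d view, after the 3d scene itself has been drawn
handle = bpy.types.SpaceView3D.draw_handler_add(draw, (), 'WINDOW', 'POST_VIEW')

# and when the overlay is no longer needed:
# bpy.types.SpaceView3D.draw_handler_remove(handle, 'WINDOW')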
Pointing and clicking
Although in principle OpenGL could be used to determine where a mouse click lands on a bit of drawn geometry, implementing it that way would be rather cumbersome. It is far easier to use other built-in Blender functionality for that.
Before we start interacting with our object we use the same geometry to create a so-called BVH tree that contains all our triangles:
from mathutils.bvhtree import BVHTree

pos, indices = some_function_generating_geometry()
mat = np.array(ob.matrix_world)
pos = np.array([mat @ v for v in pos], np.float32)[:,:3]

indices.shape = -1,3
self.tree = BVHTree.FromPolygons(pos.tolist(), indices.tolist(), all_triangles=True)
A BVH tree is a structure that can be used to determine very efficiently whether a ray intersects any of the triangles in the tree, even if we have thousands of them. Note that the first argument to the FromPolygons constructor (line 8) is a Python list of vertices with 3 coordinates; that is why we again drop the 4th coordinate when we calculate the world coordinates (line 5) and convert our numpy array explicitly to a Python list. Every triangle is defined by 3 indices into this list of coordinates, so we convert the linear list of indices to an Nx3 array. The Blender implementation can work with polygons other than triangles, but it is more efficient if it knows that all polygons are indeed triangles, so that is what we indicate when we create the tree.
With the BVH tree in place it is pretty straightforward to convert mouse coordinates to a direction into the scene that is drawn in the 3d view area, and to use this direction to get the intersected triangle from the BVH tree:
from bpy_extras.view3d_utils import region_2d_to_vector_3d
from bpy_extras.view3d_utils import region_2d_to_origin_3d

def pick_face(self, context, event):
    scene = context.scene
    region = context.region
    rv3d = context.region_data
    coord = event.mouse_region_x, event.mouse_region_y

    direction = region_2d_to_vector_3d(region, rv3d, coord)
    origin = region_2d_to_origin_3d(region, rv3d, coord)

    pos, normal, index, dist = self.tree.ray_cast(origin, direction)
    if pos is not None:
        ...  # we can now do stuff based on the triangle we have hit
To get the origin and direction of the ray that runs from the 3d view viewpoint into the scene through the point where you click the mouse, the view3d_utils module provides us with all the necessary functions; we only need to pass in the mouse coordinates and the region data (lines 10 and 11).
We can then use the ray_cast() method of our BVH tree to see if the ray intersects any triangle. Note that the origin and direction are in world coordinates, which is the reason we constructed our BVH tree in world coordinates in the first place.
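To give an impression of how this fits together, here is a minimal sketch of a modal operator that builds the tree and calls pick_face() on a click (the class itself is made up; pick_face() is assumed to be a method of the operator as in the snippet above, and create_bvhtree() stands in for the BVH tree construction code shown earlier):

import bpy

class VIEW3D_OT_pick_connection(bpy.types.Operator):
    """Click on the overlay to pick a triangle (illustrative sketch)"""
    bl_idname = "view3d.pick_connection"
    bl_label = "Pick Connection"

    # pick_face() from the snippet above is assumed to be a method of this
    # class, and create_bvhtree() a hypothetical helper wrapping the BVH code

    def invoke(self, context, event):
        self.create_bvhtree(context.active_object)
        context.window_manager.modal_handler_add(self)
        return {'RUNNING_MODAL'}

    def modal(self, context, event):
        if event.type == 'LEFTMOUSE' and event.value == 'PRESS':
            self.pick_face(context, event)
            return {'RUNNING_MODAL'}
        if event.type in {'RIGHTMOUSE', 'ESC'}:
            return {'CANCELLED'}
        return {'PASS_THROUGH'}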
Conclusion
It is quite simple to draw OpenGL overlays in the 3d view and to interact with them based on mouse clicks. The new Blender 2.80 Python API for OpenGL provides us with all we need, and drawing can be done very efficiently.
There is of course a lot more to OpenGL and a good place to start might be to look at the examples provided in the gpu module documentation.