Avoiding repetition artifacts with chaos mosaic

Chaos mosaic, or chaos mapping, is a method to extend limited-size textures to huge uv-mapped surfaces while avoiding repetition artifacts.
You might for example have a detailed grass-covered ground texture that maps quite well to a 2 x 2 meter square. If you applied it to a 10 x 10 meter field at its proper scale, obvious repetition artifacts would be visible:

A chaos mosaic on the other hand takes randomly selected squares from the texture, giving the appearance of an endless texture without repetition:

At close range you can still make out the seams, but on large objects seen from a distance this probably won't be noticeable.
The technique only gives good results for non-patterned textures like groundcover, asphalt, plaster etc., but in those cases it might be just what you are looking for, and it is quite fast.
In an older article I showed a chaos mosaic implementation in Open Shading Language, but I like to work on the GPU as much as possible, so I implemented the same technique in just nodes.

Node group


The noodle takes the uv-coordinates and outputs transformed coordinates that you can plug into your texture. The scale can then be adjusted as needed. The rotation adds an extra amount of randomness to the final material, but depending on the texture this might not always improve the visual quality.
The .blend file with the node group is available from my GitHub repository. Just download chaosmap.blend and then, in your own .blend, use File → Append to select the node group Chaosmap. It will then become available in the Add → Group menu of the node editor.
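
If you prefer to script the append step, something along these lines should work (the path is of course a placeholder you need to adapt):

import bpy

# node groups live in the NodeTree section of a .blend file
blendfile = '/path/to/chaosmap.blend'  # wherever you saved the download
bpy.ops.wm.append(directory=blendfile + '/NodeTree/', filename='Chaosmap')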

Improvements

To reduce the visibility of the seams between the tiles you can mix two chaos mosaics: give the second one slightly offset and rotated uv-coordinates, and then mix the two with, for example, a noise texture with a scale comparable to that of the actual texture:

In the result the seams are less visible, but it is also somewhat blurred:

The highlighted area shows a visible seam:



Especially at close range:



Some details

You can examine the details of the node group if you like, but the basic principle is this: take the original uv-coordinate, determine in which grid section it falls, and then map the relative position of the point inside that grid section to the same relative position inside a randomly selected square in the unit uv-map. (The square is randomly selected, but it is always the same square for the same grid section.)
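
For those who prefer code over node noodles, here is a minimal sketch of the same idea in plain Python (the function names and the hash are mine, not the node group's internals):

import math

def cellhash(ix, iy, seed=0):
    # cheap deterministic pseudo-random value in [0, 1); the same
    # grid section (ix, iy) always yields the same value
    h = (ix * 73856093) ^ (iy * 19349663) ^ (seed * 83492791)
    return (h % 65536) / 65536.0

def chaosmap(u, v, grid=4.0):
    # which grid section does this uv fall in, and where inside it?
    ix, iy = math.floor(u * grid), math.floor(v * grid)
    fu, fv = u * grid - ix, v * grid - iy
    # pick a random (but fixed per section) square of size 1/grid
    # somewhere in the unit uv-map
    ru = cellhash(ix, iy, 0) * (1.0 - 1.0 / grid)
    rv = cellhash(ix, iy, 1) * (1.0 - 1.0 / grid)
    # map the relative position inside the section to that square
    return ru + fu / grid, rv + fv / grid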

Tiny Blender Addon: Snap and transform

I was doing some arch-viz the other day, placing a lot of objects in a large scene. The objects were placeholders that I created on the spot, and more often than not I needed to move the origin of the new mesh object to a selected vertex for easy positioning, scaling, rotating etc.

This is of course simple enough: select Snap cursor to selected in edit mode, switch to object mode and then select Transform → Origin to 3D cursor.
But it is also a lot of actions for a simple operation, especially if you are doing this a hundred times in a scene...
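
For reference, the same three steps in script form (a sketch; the snap operator has to be run from the 3d view because of its context requirements):

import bpy

bpy.ops.view3d.snap_cursor_to_selected()         # Snap cursor to selected (edit mode)
bpy.ops.object.editmode_toggle()                 # switch back to object mode
bpy.ops.object.origin_set(type='ORIGIN_CURSOR')  # Transform → origin to 3d cursor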

Another common scenario I encounter is wanting to position the origin at the lowest point of a mesh. This is a little more involved as far as the code is concerned, so there is a small explanation below for those who are interested in doing this fast on large meshes.

Anyway, here is a small add-on: Edit mode origin tools. It does nothing fancy; it just creates two new menu entries in edit mode:
Mesh → Snap → Origin to selected,
Mesh → Snap → Origin to lowest vertex (along z-axis)
and saves you some time :-)

Code availability

Download it from my GitHub repository (right-click on the first link and select save as ...) and in Blender go to File → User preferences → Add-ons → Install from file ... Don't forget to enable it after installation. Note that the downloaded file is called snapandtransform.py while the add-on will appear as Edit mode transform tools.

Finding the location of the lowest vertex (fast)

If we want to get all the vertex coordinates fast, we have to switch to object mode first (line 2), get the number of vertices present (line 4) and then allocate an empty numpy array to hold all the coordinates (line 7). Then we can use the foreach_get() method to get all coordinates (the co attribute of the verts array) in one go (line 8). This yields a flattened array, so we have to reshape it to an array of 3-vectors (line 9).
def execute(self, context):
    bpy.ops.object.editmode_toggle()
    me = context.active_object.data
    count = len(me.vertices)
    if count > 0:  # degenerate mesh, but better safe than sorry
        shape = (count, 3)
        verts = np.empty(count*3, dtype=np.float32)
        me.vertices.foreach_get('co', verts)
        verts.shape = shape
        verts2 = np.ones((count, 4))
        verts2[:, :3] = verts
        M = np.array(context.active_object.matrix_world,
                     dtype=np.float32)
        verts = (M @ verts2.T).T[:, :3]
        min_co = verts[np.argsort(verts[:, 2])[0]]
        context.scene.cursor_location = min_co
        bpy.ops.object.origin_set(type='ORIGIN_CURSOR')
    bpy.ops.object.editmode_toggle()
    return {'FINISHED'}
Now all the coordinates are in object space, so we want to convert them to world space by multiplying each of them with the matrix_world of the object. This is necessary because, due to rotation, the lowest vertex in object space need not be the lowest vertex in world space!

The world matrix is a 4x4 matrix (one that holds not only scale and rotation but translation as well), so we need to extend all our coordinate vectors with a fourth coordinate of 1 (lines 10, 11). We also convert the matrix_world to a numpy array (line 12).

Line 14 is then where all the magic happens: we multiply our numpy world matrix M with our array of extended coordinates using the @ operator (new since Python 3.5 and added specifically to make numpy code more readable). The double transpose is needed to allow matrix multiplication of a 4x4 matrix with a list of 4-vectors and to transform the result back again. The fourth coordinate of the result is dropped by the [:,:3] slice.
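
A standalone example (not part of the add-on) makes the double transpose easier to see: M is 4x4 and the coordinates form an (N, 4) array, so the shapes only line up if we flip the coordinate array on its side first:

import numpy as np

M = np.eye(4, dtype=np.float32)
M[2, 3] = 1.0                       # a world matrix that translates 1 along z

points = np.array([[0, 0, 0, 1],    # N x 4: points extended with a
                   [1, 2, 3, 1],    # fourth coordinate of 1
                   [4, 5, 6, 1]], dtype=np.float32)

# (4,4) @ (4,N) gives (4,N); transpose back to (N,4) and drop the
# fourth coordinate again with the [:,:3] slice
world = (M @ points.T).T[:, :3]
print(world)  # [[0. 0. 1.] [1. 2. 4.] [4. 5. 7.]]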

Now that we have converted all coordinates to world space, all that is left is to find the index of the coordinate with the lowest z-coordinate with argsort() and assign that coordinate to the position of the 3d cursor before calling the origin_set() operator.

Nodeset: bugfix and small new feature


The nodeset add-on I talked about in a previous article had a small bug: if your texture set was something other than a collection of .png files, the extra files that should be loaded automatically were in fact not loaded. This is fixed in the latest version (201707011445).

Code availability

The change has already been committed to the GitHub repository (right click the link to download).

New feature

Because of a bug in Blender, a material with a normal map node will show up as all black if you use the experimental adaptive subdivision / micro-polygon displacement. This does not happen if you change the normal map's space from tangent to object, so I added a user preferences setting to do this automatically.
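
The fix itself boils down to a single property per normal map node; scripted by hand it would look something like this (a sketch for the active material, not the add-on's actual code):

import bpy

# switch every normal map node in the active material from tangent
# to object space
mat = bpy.context.active_object.active_material
for node in mat.node_tree.nodes:
    if node.type == 'NORMAL_MAP':
        node.space = 'OBJECT'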

Converting Blender Filmic LUTs for use in Substance Painter

With the new Filmic Blender color management options getting a lot of attention, I wanted to get the exact same looks when creating textures in Substance Painter.
Substance Painter supports LUTs in a so-called 3D LUT format stored in .exr files. So our challenge is twofold: convert the bundled Blender Filmic LUTs to this format and, of course, get results in Substance that match Blender as closely as possible.
The results of 3 of the filmic LUTs are compared side by side in the image below:

I made this comparison collage by keeping parameters like exposure (1) and gamma (also 1) the same in both Blender and Substance Painter, taking screenshots, and gluing the screenshots together in GIMP. This way I could see all images on the same monitor. That is important: even if the applied profiles are the same and both displays are sRGB, two monitors will still differ in their color response, and it is fiendishly difficult to calibrate them (unless you have a very expensive monitor with the associated color calibration kit). My own monitors are not even the same brand, so comparing images side by side on two different monitors is an easy trap to fall into.
Anyway, even this way the images are close in tone on a close look but not 100% identical, and I am not sure what is causing this. There might be slight differences in camera aperture, focal blur and multiple importance sampling of the environment, and of course Cycles' ray model is not the same as that of Iray or Substance Painter's painting mode renderer. Still, I think this is pretty close and useful for comparing textures in Substance under different looks before transferring them to Blender.

How the LUTs were generated

I used the Python bindings of the OpenColorIO and OpenImageIO libraries to create a small script (code below). These are in fact the libraries Blender uses to work with color conversions.

The script takes a linear-to-linear transform encoded as an .exr image and creates a new .exr for each 'Look' in Blender's OCIO config file that is defined in the 'Filmic Log' process space.
The docs for the Python bindings of both libraries are not an easy read and the APIs have some small inconsistencies, so it took some time to get this working. The code is far from beautiful, but I commented the relevant parts. I am open to any critique that can help improve the transforms.
Note that the code is not plug and play and I have no intention of making it so :-)

The Substance LUTs

They can be downloaded from my GitHub repository. They are bundled in one .zip file and should be unpacked before importing them in Substance Painter.

The code

#export LD_LIBRARY_PATH=/home/michel/ocio/lib
#export OCIO=/home/michel/Downloads/blender-2.78-b94a433ca34-linux-glibc219-x86_64/2.78/datafiles/colormanagement/config.ocio
#python 2.7
#
# run as
# python transformlook.py
#
# expects linear_to_linear.exr in the current directory and will write the transform there too

import OpenImageIO as OIIO  # already installed via the Ubuntu package manager
from sys import path
from array import array
path.append('/home/michel/ocio/lib/python2.7/site-packages/')
import PyOpenColorIO as OCIO

config = OCIO.GetCurrentConfig()

for look in config.getLooks():
    if look.getProcessSpace() == 'Filmic Log':
        lookname = look.getName()
        print(lookname)

        transform = OCIO.DisplayTransform()
        transform.setInputColorSpaceName(OCIO.Constants.ROLE_SCENE_LINEAR)
        transform.setView('Filmic')
        transform.setLooksOverrideEnabled(True)
        transform.setLooksOverride(lookname)
        transform.setDisplay('sRGB')
        processor = config.getProcessor(transform)

        # identity transform from https://support.allegorithmic.com/documentation/display/SPDOC/Color+Profile
        img = OIIO.ImageInput.open('linear_to_linear.exr')
        spec = img.spec()
        spec.set_format(OIIO.FLOAT)  # for some reason this is not extracted from the image
        pixels = img.read_image()
        img.close()

        outfile = lookname + '.exr'
        transformedpixels = processor.applyRGB(pixels)
        imgout = OIIO.ImageOutput.create(outfile)
        ok = imgout.open(outfile, spec, OIIO.Create)
        if not ok:
            print(OIIO.geterror())
            break
        # ImageInput.read_image() returns a list of floats, and applyRGB() returns one
        # too, but ImageOutput.write_image() expects an array of floats and will die
        # when passed a list. A bit inconsistent I think.
        a = array('d')
        a.fromlist(transformedpixels)
        imgout.write_image(a)
        imgout.close()


Nodeset: add a principled shader

Even though there are better paid and free PBR node groups/shaders available for Blender (for example from Jeffrey Hepburn or Remington Graphics), the new Principled BSDF (a.k.a. the Disney shader or PBR shader) will no doubt prove popular with Blenderheads because it is so simple to use and gives decent results.

So I added an option to add this shader along with all the imported texture sets as well as a normal map node, basically giving you an (almost) one-click option to create a PBR material based on a set of textures from your favorite texturing tool. The new functionality is available from the Add → Texture menu in the node editor and sits alongside the original Set of images entry:

The resulting node setup (after selecting a set of textures) will look like this:
Note that this will of course only work with the new Blender 2.79 or with a recent daily build. If the Principled BSDF is not available in your version of Blender, it will simply be omitted.
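
Detecting whether the shader is available is simple; a sketch of the kind of check involved (not nodeset's literal code):

import bpy

# the Principled BSDF node type is only registered in Blender 2.79+
if hasattr(bpy.types, 'ShaderNodeBsdfPrincipled'):
    mat = bpy.data.materials.new('PBR material')
    mat.use_nodes = True
    mat.node_tree.nodes.new('ShaderNodeBsdfPrincipled')
# if the node type is absent, the shader is simply left out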

Code availability

The latest version of the code (201706251223) is available on GitHub (right click and select save as ..., then in Blender File → User preferences → Add-ons → Install from file .... Don't forget to remove the previously installed version first!)

A short video demo:

Previous articles

Previous articles about the Nodeset add-on:
Nodeset: import Substance Painter textures into Blender
Nodeset: tiny update might save even some more time
Nodeset: support for ambient occlusion maps
Nodeset: more flexibility

Substance Painter experiment



Inspired by a real-life flower pot in our garden, but weathered quite a lot more:


Sale: Blender Market turns 3


There is something to celebrate: on Friday June 9 Blender Market turns 3.
To celebrate, many products will carry a 25% discount from that day up until Sunday June 11 (applied at checkout, and remember they are on Chicago time), including all products in my shop :-)