Raytracing concepts and code, part 8, mirror reflections



This is an article in a multipart series on the concepts of ray tracing. I am not sure where this will lead but I am open to suggestions. We will be creating code that will run inside Blender. Blender has ray tracing renderers of course but that is not the point: by reusing Python libraries and Blender's scene building capabilities we can concentrate on true ray tracing issues like shader models, lighting, etc.
I generally present stuff in a back-to-front manner: first an article with some (well commented) code and images of the results, then one or more articles discussing the concepts. The idea is that this encourages you to experiment and have a look at the code yourself before being introduced to theory. How well this works out we will see :-)

So far the series consists of several articles labeled ray tracing concepts.

Mirror reflections

Mirror reflections are an important part of ray traced images. Many materials, like glass and metals, have a reflective component. Calculating ray traced reflections is surprisingly simple, as we will see, but we do need some options from the material properties so that we can specify whether a material has mirror reflectivity at all and to what extent. Again we borrow an existing panel (and ignore most of the options present; later we will create our own panels with just the options we need, but cleanup is not the focus just yet).


Reflecting a ray

To calculate mirror reflection we take the incoming ray from the camera, calculate the new direction the ray will take after the bounce, and see if it hits anything. The direction of the reflected ray depends on the direction of the incoming ray and the surface normal: the angle between the normal and the incoming ray equals the angle between the normal and the reflected ray:

If all vectors are normalized, we can calculate the reflected ray using this expression: dir - 2 * normal * dir.dot(normal)
For a more in-depth explanation you might want to take a look at Paul Bourke's site.
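As a minimal standalone sketch (using Blender's mathutils; both dir and normal are assumed to be normalized Vectors):

    from mathutils import Vector

    def reflect(dir, normal):
        # flip the component of dir that lies along the normal
        return dir - 2 * normal * dir.dot(normal)

    # example: a ray coming in at 45 degrees onto a surface facing +Z
    dir = Vector((1, 0, -1)).normalized()
    normal = Vector((0, 0, 1))
    print(reflect(dir, normal))  # approximately (0.7071, 0, 0.7071)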

Code

To deal with mirror reflection the single_ray() function needs a small enhancement: it checks the amount of mirror reflectivity specified in the material (if any) and then, if the number of bounces made so far is still less than the specified depth, calculates the reflected ray and uses this new direction to cast a new ray and see if it hits something:
        ... identical code left out ...

        mirror_reflectivity = 0
        if len(mat_slots):
            mat = mat_slots[0].material
            diffuse_color = mat.diffuse_color * mat.diffuse_intensity
            specular_color = mat.specular_color * mat.specular_intensity
            hardness = mat.specular_hardness
            if mat.raytrace_mirror.use:
                mirror_reflectivity = mat.raytrace_mirror.reflect_factor

        ...

        if depth > 0 and mirror_reflectivity > 0:
            reflection_dir = (dir - 2 * normal  * dir.dot(normal)).normalized()
            color += mirror_reflectivity * single_ray(
                       scene, loc + normal*eps, reflection_dir, lamps, depth-1)

        ...

Code availability

The code for this revision is available from GitHub.

Raytracing concepts and code, part 7, colored lights and refactored ray cast



This is an article in a multipart series on the concepts of ray tracing. I am not sure where this will lead but I am open to suggestions. We will be creating code that will run inside Blender. Blender has ray tracing renderers of course but that is not the point: by reusing Python libraries and Blender's scene building capabilities we can concentrate on true ray tracing issues like shader models, lighting, etc.
I generally present stuff in a back-to-front manner: first an article with some (well commented) code and images of the results, then one or more articles discussing the concepts. The idea is that this encourages you to experiment and have a look at the code yourself before being introduced to theory. How well this works out we will see :-)

So far the series consists of several articles labeled ray tracing concepts.

Refactoring

To make it easier to calculate secondary rays, like reflected or refracted rays, we move the actual ray casting to its own function, which also makes the central loop that generates every pixel on the screen much more readable:
    for y in range(height):
        yscreen = ((y-(height/2))/height) * aspectratio
        for x in range(width):
            xscreen = (x-(width/2))/width
            # align the look_at direction
            dir = Vector((xscreen, yscreen, -1))
            dir.rotate(rotation)
            dir = dir.normalized()
            buf[y,x,0:3] = single_ray(scene, origin, dir, lamps, depth)
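The single_ray() function itself bundles the ray cast and the shading we developed in the previous parts. In outline (a sketch, not the literal code from the repository) it looks something like this:

    def single_ray(scene, origin, dir, lamps, depth):
        # cast a single ray into the scene and shade the hit point, if any
        color = np.zeros(3)
        hit, loc, normal, index, ob, mat = scene.ray_cast(origin, dir)
        if hit:
            # ... diffuse and specular shading with shadow rays,
            # as developed in the previous articles ...
            pass
        return color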

Colored lights

Within the single_ray() function we also add a small enhancement: we now calculate the amount of light available for reflections from the properties in the lamp data:
        ...
        for lamp in lamps:
            light = np.array(lamp.data.color * lamp.data.energy)
        ...

User interface

To allow the user to set the color and intensity ("energy") of the point lights we need the corresponding lamp properties panel (we only use color and energy):

This panel is added in the register() function:
    from bl_ui import (
            properties_render,
            properties_material,
            properties_data_lamp,
            )

    ... identical code left out ...

    properties_data_lamp.DATA_PT_lamp.COMPAT_ENGINES.add(CustomRenderEngine.bl_idname)

Code availability

This revision of the code is available from GitHub.

Raytracing concepts and code, part 6, specular reflection for lights


This is an article in a multipart series on the concepts of ray tracing. I am not sure where this will lead but I am open to suggestions. We will be creating code that will run inside Blender. Blender has ray tracing renderers of course but that is not the point: by reusing Python libraries and Blender's scene building capabilities we can concentrate on true ray tracing issues like shader models, lighting, etc.
I generally present stuff in a back-to-front manner: first an article with some (well commented) code and images of the results, then one or more articles discussing the concepts. The idea is that this encourages you to experiment and have a look at the code yourself before being introduced to theory. How well this works out we will see :-)

So far the series consists of several articles labeled ray tracing concepts.
Until now the reflections have been a bit dull because our shader model only takes the diffuse reflectance into account, so let's take a look at how we can add specular reflectance to our shader.
The effect we want to achieve is illustrated in the two images below, where the green sphere on the right has some specular reflectance. Because of this it shows highlights for the two point lamps that are present in the scene.


The icosphere shows faceting because we have not yet done anything to implement smooth shading (i.e. interpolating the normals across a face), but we added some smooth subdivisions to show off the highlights a bit better.
In the image below we see that our shader model also takes the hardness of the specular reflections into account, with a high value of 400 giving much tighter highlights than the default hardness of 50.


In both images the red cube has no specular intensity and the blue one has a specular intensity of 0.15 and a hardness of 50.

Specular color, specular intensity and hardness are properties of a material so we have also exposed the relevant panel to our custom renderer so that the user can set these values.

Code

The code changes needed in the inner shading loop of our ray tracer are limited:
if hit:
    diffuse_color = Vector((0.8, 0.8, 0.8))
    specular_color = Vector((0.2, 0.2, 0.2))
    mat_slots = ob.material_slots
    hardness = 0
    if len(mat_slots):
        diffuse_color = mat_slots[0].material.diffuse_color \
                        * mat_slots[0].material.diffuse_intensity
        specular_color = mat_slots[0].material.specular_color \
                        * mat_slots[0].material.specular_intensity
        hardness = mat_slots[0].material.specular_hardness

    ... identical code left out ...

        if not lhit:
            illumination = light * normal.dot(light_dir)/light_dist
            color += np.array(diffuse_color) * illumination
            if hardness > 0:  # phong reflection model
                half = (light_dir - dir).normalized()
                reflection = light * half.dot(normal) ** hardness
                color += np.array(specular_color) * reflection
We set reasonable defaults (lines 2-5), override those defaults if we have a material on this object (lines 6-11) and then, if we are not in the shadow, calculate the diffuse component of the lighting as before (lines 16-17) and finally add a specular component (lines 18-21).

For the specular component we use the Phong model (or actually the Blinn-Phong model). This means we look at the angle between the normal (shown in light blue in the image below) and the halfway vector (in dark blue). The smaller the angle, the tighter the highlight. The tightness is controlled by the hardness: we raise the cosine of the angle (which is the dot product we compute in line 20) to the power of this hardness. Note that the halfway vector is the normalized vector that points exactly in between the direction of the light and the camera as seen from the point being shaded. This is why we have a minus sign rather than a plus sign in line 19: dir in our code points from the camera towards the point being shaded.
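Isolated from the render loop, the specular term could be sketched like this (a sketch only; the cosine is clamped to zero here, and all vectors are assumed to be normalized, with dir pointing from the camera towards the shaded point):

    def blinn_phong(light_dir, dir, normal, hardness):
        # halfway vector between the direction towards the light and the
        # direction towards the camera; dir points towards the surface,
        # hence the minus sign
        half = (light_dir - dir).normalized()
        # cosine of the angle between normal and halfway vector, raised
        # to the hardness: a higher hardness gives a tighter highlight
        return max(0.0, half.dot(normal)) ** hardness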

Code availability

The code is available on GitHub.

Raytracing concepts and code, part 5, diffuse color


This is an article in a multipart series on the concepts of ray tracing. I am not sure where this will lead but I am open to suggestions. We will be creating code that will run inside Blender. Blender has ray tracing renderers of course but that is not the point: by reusing Python libraries and Blender's scene building capabilities we can concentrate on true ray tracing issues like shader models, lighting, etc.
I generally present stuff in a back-to-front manner: first an article with some (well commented) code and images of the results, then one or more articles discussing the concepts. The idea is that this encourages you to experiment and have a look at the code yourself before being introduced to theory. How well this works out we will see :-)

So far the series consists of several articles labeled ray tracing concepts.

Our rendered scene so far mainly consists of fifty shades of gray, and that might be exciting enough for some, but it would be better if we could spice it up with some color.

For this we will use the diffuse color of the first material slot of an object (if any). The code so far needs very little change to make this happen:
            # the default background is black for now
            color = np.zeros(3)
            if hit:
                # get the diffuse color of the object we hit
                diffuse_color = Vector((0.8, 0.8, 0.8))
                mat_slots = ob.material_slots
                if len(mat_slots):
                    diffuse_color = mat_slots[0].material.diffuse_color
                        
                color = np.zeros(3)
                light = np.ones(3) * intensity  # light color is white
                for lamp in lamps:
                    # for every lamp determine the direction and distance
                    light_vec = lamp.location - loc
                    light_dist = light_vec.length_squared
                    light_dir = light_vec.normalized()
                    
                    # cast a ray in the direction of the light starting
                    # at the original hit location
                    lhit, lloc, lnormal, lindex, lob, lmat = scene.ray_cast(loc+light_dir*eps, light_dir)
                    
                    # if we hit something we are in the shadow of the light
                    if not lhit:
                        # otherwise we add the distance attenuated intensity
                        # we calculate diffuse reflectance with a pure 
                        # lambertian model
                        # https://en.wikipedia.org/wiki/Lambertian_reflectance
                        color += diffuse_color * intensity * normal.dot(light_dir)/light_dist
            buf[y,x,0:3] = color
In lines 5-8 we check if there is at least one material slot on the object we hit and get its diffuse color. If there is no associated material we keep a default light grey color.
This diffuse color is what we use in line 28 if we are not in the shadow.

Code availability

The code is available on GitHub.

Raytracing: concepts and code, part 4, the active camera


This is an article in a multipart series on the concepts of ray tracing. I am not sure where this will lead but I am open to suggestions. We will be creating code that will run inside Blender. Blender has ray tracing renderers of course but that is not the point: by reusing Python libraries and Blender's scene building capabilities we can concentrate on true ray tracing issues like shader models, lighting, etc.
I generally present stuff in a back-to-front manner: first an article with some (well commented) code and images of the results, then one or more articles discussing the concepts. The idea is that this encourages you to experiment and have a look at the code yourself before being introduced to theory. How well this works out we will see :-)

So far the series consists of several articles labeled ray tracing concepts.

In a further bit of code cleanup we'd like to get rid of the hardcoded camera position and look-at direction by using the location and rotation of the active camera in the scene.
Fortunately very little code has to change to make this happen in our render method:

    # the location and orientation of the active camera
    origin = scene.camera.location
    rotation = scene.camera.rotation_euler
Using this rotation we can first create the camera ray as if the camera pointed in the default -Z direction and then simply rotate the ray with the camera's rotation:
    aspectratio = height/width
    # loop over all pixels once (no multisampling)
    for y in range(height):
        yscreen = ((y-(height/2))/height) * aspectratio
        for x in range(width):
            xscreen = (x-(width/2))/width
            # align the look_at direction
            dir = Vector((xscreen, yscreen, -1))
            dir.rotate(rotation)
Later we might even adapt this code to take into account the field of view, but for now at least we can position and aim the active camera in the scene any way we like.
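A sketch of how that might work (not part of the current code): the camera's horizontal view angle is available as scene.camera.data.angle, and we could use it to scale the screen coordinates:

    import math

    # scale so that the image plane at distance 1 spans the camera's
    # field of view; xscreen runs from -0.5 to 0.5, hence the factor 2
    scale = 2 * math.tan(scene.camera.data.angle / 2)
    dir = Vector((xscreen * scale, yscreen * scale, -1))
    dir.rotate(rotation)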

Code availability

The code is available on GitHub.

Raytracing: concepts and code, part 3, a render engine


This is an article in a multipart series on the concepts of ray tracing. I am not sure where this will lead but I am open to suggestions. We will be creating code that will run inside Blender. Blender has ray tracing renderers of course but that is not the point: by reusing Python libraries and Blender's scene building capabilities we can concentrate on true ray tracing issues like shader models, lighting, etc.
I generally present stuff in a back-to-front manner: first an article with some (well commented) code and images of the results, then one or more articles discussing the concepts. The idea is that this encourages you to experiment and have a look at the code yourself before being introduced to theory. How well this works out we will see :-)

So far the series consists of several articles labeled ray tracing concepts.

The code presented in the first article of this series was a bit of a hack: running from the text editor with lots of built-in assumptions is not the way to go, so let's refactor it into a proper render engine that will be available alongside Blender's built-in renderers:

A RenderEngine

All we really have to do is derive a class from Blender's RenderEngine class and register it. The class provides a single method render() that takes a Scene parameter and writes a buffer of RGBA pixel values to the render result.
class CustomRenderEngine(bpy.types.RenderEngine):
    bl_idname = "ray_tracer"
    bl_label = "Ray Tracing Concepts Renderer"
    bl_use_preview = True

    def render(self, scene):
        scale = scene.render.resolution_percentage / 100.0
        self.size_x = int(scene.render.resolution_x * scale)
        self.size_y = int(scene.render.resolution_y * scale)

        if self.is_preview:  # we might differentiate later
            pass             # for now ignore completely
        else:
            self.render_scene(scene)

    def render_scene(self, scene):
        buf = ray_trace(scene, self.size_x, self.size_y)
        buf.shape = -1,4

        # Here we write the pixel values to the RenderResult
        result = self.begin_result(0, 0, self.size_x, self.size_y)
        layer = result.layers[0].passes["Combined"]
        layer.rect = buf.tolist()
        self.end_result(result)

Option panels

For a custom render engine all panels in the render and material options will be hidden by default. This makes sense because not all render engines use the same options. We are interested in just the dimensions of the image we have to render and the diffuse color of any material so we explicitly add our render engine to the list of COMPAT_ENGINES in each of those panels, along with the basic render buttons and material slot list.
def register():
    bpy.utils.register_module(__name__)
    from bl_ui import (
            properties_render,
            properties_material,
            )
    properties_render.RENDER_PT_render.COMPAT_ENGINES.add(CustomRenderEngine.bl_idname)
    properties_render.RENDER_PT_dimensions.COMPAT_ENGINES.add(CustomRenderEngine.bl_idname)
    properties_material.MATERIAL_PT_context_material.COMPAT_ENGINES.add(CustomRenderEngine.bl_idname)
    properties_material.MATERIAL_PT_diffuse.COMPAT_ENGINES.add(CustomRenderEngine.bl_idname)

def unregister():
    bpy.utils.unregister_module(__name__)
    from bl_ui import (
            properties_render,
            properties_material,
            )
    properties_render.RENDER_PT_render.COMPAT_ENGINES.remove(CustomRenderEngine.bl_idname)
    properties_render.RENDER_PT_dimensions.COMPAT_ENGINES.remove(CustomRenderEngine.bl_idname)
    properties_material.MATERIAL_PT_context_material.COMPAT_ENGINES.remove(CustomRenderEngine.bl_idname)
    properties_material.MATERIAL_PT_diffuse.COMPAT_ENGINES.remove(CustomRenderEngine.bl_idname)

Reusing the ray tracing code

Our previous ray tracing code is adapted to use the height and width arguments instead of arbitrary constants:
def ray_trace(scene, width, height):     

    lamps = [ob for ob in scene.objects if ob.type == 'LAMP']

    intensity = 10  # intensity for all lamps
    eps = 1e-5      # small offset to prevent self intersection for secondary rays

    # create a buffer to store the calculated intensities
    buf = np.ones(width*height*4)
    buf.shape = height,width,4

    # the location of our virtual camera (we do NOT use any camera that might be present)
    origin = (8,0,0)

    aspectratio = height/width
    # loop over all pixels once (no multisampling)
    for y in range(height):
        yscreen = ((y-(height/2))/height) * aspectratio
        for x in range(width):
            xscreen = (x-(width/2))/width
            # get the direction. camera points in -x direction, FOV = 2*atan(1/2), approx 53 degrees
            dir = (-1, xscreen, yscreen)
            
            # cast a ray into the scene
            
            ... identical code omitted ...

    return buf

Code availability

The code is available on GitHub. Remember that any test scene should be visible from a virtual camera located at (8,0,0) pointing in the -x direction. The actual camera is ignored for now.

Raytracing: concepts and code, part 2, from rays to pixels


This is an article in a multipart series on the concepts of ray tracing. I am not sure where this will lead but I am open to suggestions. We will be creating code that will run inside Blender. Blender has ray tracing renderers of course but that is not the point: by reusing Python libraries and Blender's scene building capabilities we can concentrate on true ray tracing issues like shader models, lighting, etc.
I generally present stuff in a back-to-front manner: first an article with some (well commented) code and images of the results, then one or more articles discussing the concepts. The idea is that this encourages you to experiment and have a look at the code yourself before being introduced to theory. How well this works out we will see :-)

So far the series consists of several articles labeled ray tracing concepts.

In the first article I presented some code to implement a very basic raytracer. I chose Blender as a platform for several reasons: it has everything to create a scene with objects so we don't have to mess with implementing geometry and the functions to calculate intersections ourselves. It also provides us with the Image class that lets us store the image that we create pixel by pixel when raytracing. Finally it also comes with powerful libraries of math and vector functions, saving us the drudgery of implementing this ourselves. All of this allows us to focus on the raytracing proper.

What is ray tracing?

When ray tracing we divide the image we are creating into pixels and shoot a ray from a point in front of this image (the camera location) into the scene. The first step is then to determine whether this ray hits any object in the scene. If not, we give our pixel the background color, but if we do hit an object we color this pixel with a color that depends on the material properties of the object and any light that reaches that point.

So if we have calculated the location where our first ray (or camera ray, red in the illustration) hits the closest bit of geometry, the next step is to determine how much light reaches that point and combine this with the material information of the object.

There are many ways to approach this and in our first implementation we cast rays from this point in the direction of any lamp in the scene. These rays are often called shadow rays (green in the illustration) because if we hit something that means that an object blocks the light from this lamp and that the point we are shading lies in the shadow of the lamp. If we are not in the shadow we can calculate the light intensity due to this lamp by taking its original intensity and dividing it by the squared distance between our point and the lamp.
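In code this amounts to a second call to ray_cast() from the hit location towards the lamp. A sketch (eps is a small offset that prevents the shadow ray from hitting the very surface it starts on):

    light_vec = lamp.location - loc        # from the hit point to the lamp
    light_dist = light_vec.length_squared  # for the inverse square falloff
    light_dir = light_vec.normalized()
    lhit = scene.ray_cast(loc + light_dir * eps, light_dir)[0]
    if not lhit:  # nothing blocks the lamp: add its attenuated intensity
        color += intensity / light_dist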

Once we have calculated the intensity we can use it to determine the diffuse reflectance. Diffuse reflectance causes light to be scattered in all directions. The exact behavior will be determined by the microstructure of the surface and there are different models to approximate this.

To start we will use the so-called Lambert model. In a nutshell this model assumes that incident light is uniformly scattered in all directions, which means that a surface facing a light will look bright and that this brightness will diminish as the surface normal diverges from the light direction. The brightness we observe is therefore not dependent on the camera orientation but only on the local orientation of the geometry. Different models exist that take into account roughness and anisotropy, but that is something for later.
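Extending the sketch above, the Lambert model adds just one factor: the cosine of the angle between the surface normal and the light direction, which for normalized vectors is simply their dot product:

    # lambertian diffuse term: brightest when the surface faces the
    # light, falling off with the cosine of the angle between them
    diffuse = max(0.0, normal.dot(light_dir))
    color += intensity * diffuse / light_dist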

Review: CinemaColour For Blender

Recently I was offered a chance to look at CinemaColour For Blender by FishCake.



CinemaColour offers a large collection of professional color management looks (LUTs) to use with Blender's built-in Filmic color management. Check out the artist's website for some great examples. It comes in the form of an add-on that installs over a hundred looks and also offers a convenient way to browse and select the looks that are available. In this short article I'll share my experiences with the add-on and the looks it offers.

Filmic color management

It has been well over a year since Blender incorporated Troy Sobotka's Filmic color management solution, and everybody, really everybody, has been very enthusiastic about it ever since. This enthusiasm is well deserved because when using Filmic your renders almost immediately gain a lot of realism, mainly by mapping the wide range of light intensities in your scene to the limited range of intensities your monitor can handle in a much better way than before. No more blown out highlights or loss of detail in the deep shadows!

Looks

Filmic uses lookup tables (LUTs) to convert these intensities, and by selecting different lookup tables you can create different looks: not only can you choose looks that give your renders more or less contrast, but because the lookups are done independently for each color channel you can also use looks that add a color tone to your render. These tones can greatly influence the feel of your scene, and because this mapping is done after the result is rendered and composited, it is very easy to experiment with the overall look.

CinemaColour for Blender

Now creating professional looks and making them available for use in Blender is a real craft, and that is where CinemaColour comes in. The add-on installs over a hundred different looks and a panel in the Scene options where you can easily browse through the list of choices and apply them to your scene. The available looks are grouped in several collections, and each look has a name that hints at a major blockbuster film featuring a similar look. The available looks range from subtle contrasts to moody color settings and everything in between. Some examples are shown below:

Look: CS ST Y L 3
Look: B2 Gangster
Look: B2 Sniper Alt
Look: B2 The Phantom Alt
Look: B2 Wade Pool
Look: CC Crush
Look: CM Ice
Note that we did not have to render our scene again to generate these looks, so generating these examples after rendering the scene a single time only took seconds.

Conclusion

A great set of professional looks done by someone who clearly knows their color management. Selecting the provided looks is also intuitive, and every artist who is serious about the overall look and feel of their renders should check this out. CinemaColour is available on BlenderMarket.




Raytracing: concepts and code, part 1, first code steps


This is an article in a multipart series on the concepts of ray tracing. I am not sure where this will lead but I am open to suggestions. We will be creating code that will run inside Blender. Blender has ray tracing renderers of course but that is not the point: by reusing Python libraries and Blender's scene building capabilities we can concentrate on true ray tracing issues like shader models, lighting, etc.
I generally present stuff in a back-to-front manner: first an article with some (well commented) code and images of the results, then one or more articles discussing the concepts. The idea is that this encourages you to experiment and have a look at the code yourself before being introduced to theory. How well this works out we will see :-)

So far the series consists of several articles labeled ray tracing concepts.

In this article I will show how to implement a minimal raytracer inside Blender. Of course Blender has its own very capable renderers, like Cycles, but the point is to illustrate some ray tracing concepts without being bogged down by tons of code that has nothing to do with ray tracing per se.

By using Blender's built-in data structures like a Scene and an Image and built-in methods like ray_cast() we don't have to implement difficult algorithms and data structures like ray/mesh intersection and BVH trees, and we can concentrate on things like primary and secondary rays, shadows, shading models, etc.

The scene

The scene we will be working with at first looks like this:

Nothing more than a plane, a couple of cubes and an icosphere. Plus two point lights to illuminate the scene (not visible in this screenshot).

In render mode the scene looks like this:

Note that we didn't assign any materials so everything is shaded with a default shader.

Our result


Now the result of the minimal raytracer shown in the next section looks like this:
There are clear differences of course, and we'll work on them in the future, but the general idea is similar: light areas close to lights and visible shadows. How many lines of code do you think are needed for this?

The code


The code to implement this is surprisingly compact (and more than half of it is comments):
import bpy
import numpy as np

scene = bpy.context.scene

lamps = [ob for ob in scene.objects if ob.type == 'LAMP']

intensity = 10  # intensity for all lamps
eps = 1e-5      # small offset to prevent self intersection for secondary rays

# this image must be created already and be 1024x1024 RGBA
output = bpy.data.images['Test']

# create a buffer to store the calculated intensities
buf = np.ones(1024*1024*4)
buf.shape = 1024,1024,4

# the location of our virtual camera
# (we do NOT use any camera that might be present)
origin = (8,0,0)

# loop over all pixels once (no multisampling)
for y in range(1024):
    for x in range(1024):
        # get the direction.
        # camera points in -x direction, FOV = 2*atan(1/2), approx 53 degrees
        dir = (-1, (x-512)/1024, (y-512)/1024)
        
        # cast a ray into the scene
        hit, loc, normal, index, ob, mat = scene.ray_cast(origin, dir)
        
        # the default background is black for now
        color = np.zeros(3)
        if hit:
            color = np.zeros(3)
            light = np.ones(3) * intensity  # light color is white
            for lamp in lamps:
                # for every lamp determine the direction and distance
                light_vec = lamp.location - loc
                light_dist = light_vec.length_squared
                light_dir = light_vec.normalized()
                
                # cast a ray in the direction of the light starting
                # at the original hit location
                lhit, lloc, lnormal, lindex, lob, lmat = scene.ray_cast(loc+light_dir*eps, light_dir)
                
                # if we hit something we are in the shadow of the light
                if not lhit:
                    # otherwise we add the distance attenuated intensity
                    # we calculate diffuse reflectance with a pure 
                    # lambertian model
                    # https://en.wikipedia.org/wiki/Lambertian_reflectance
                    color += intensity * normal.dot(light_dir)/light_dist
        buf[y,x,0:3] = color

# pixels is a flat array RGBARGBARGBA.... 
# assigning to a single item inside pixels is prohibitively slow but
# assigning something that implements Python's buffer protocol is
# fast. So assigning a (flattened) 1024x1024x4 numpy array is fast
output.pixels = buf.flatten()
I have added quite some comments in the code itself and will elaborate on them some more in future articles. For now the code is given as is. You can experiment with it if you like, but you do not have to type it in nor create a Scene that meets all the assumptions in the code: you can simply download a .blend file that contains the code.

Code availability

The files are available from my GitHub repository:

The code itself

A .blend file with a full scene and the code embedded as a text file (click Run Script inside the text editor to run it)


Realistic trees in the middle distance

The goal of this article is to document as well as possible what is needed to create realistic trees in the middle distance while using as few render-time resources as possible.
In a previous article we found that using geometry instead of alpha mapped leaves might be faster to render; in this article we investigate this further and look at the geometry and material choices that allow for realistic trees while minimizing render time and memory consumption.

Realism and middle distance defined

What do we consider to be the 'middle distance'? We are not dealing with image-filling close-ups of hero trees, but we need more detail than a back-plate depicting trees hundreds of meters away.
For trees I consider the middle distance to be a distance where we can still see that the tree consists of individual leaves and where, when moving, we are close enough to see a parallax effect (we see different sides of the tree and perceive it as truly three dimensional).
To achieve realism at this distance, then, the tree needs to be a three dimensional mesh with actual 3D branches in its crown. We will also still be able to discern the outline of individual leaves (but not very clearly), but surface detail in the leaves will not make a perceptible difference and neither will small details in the bark texture.
In the sections below I will illustrate the choices I made as I recreate a medium sized red oak (Quercus rubra) from scratch, keeping this image as a reference.

To create the tree mesh I use my Space Tree Pro add-on but the observations apply to any tree mesh of course.

The reference

The reference photo was shot in fairly light conditions in the afternoon in the spring (May). The tree in question is about 8 meters high and has new spring leaves that are (almost) fully grown. These leaves are individually about 10-12 cm long and all fairly even in color because there is almost no insect, mold or wind damage yet.

We will try to emulate the direction and intensity of the light in the original with a suitable HDRi from HDRI Haven and use filmic color management to approximate these fairly high contrast conditions.
Note that the trunk of this tree is a bit obscured by a metal trellis to guard against damage by goats ;-)
A closeup of the bark shows a fairly uniform grayish texture with comparatively few deeper structures (the tree is about 14 years old).

The camera was at roughly 20-25 meters from the tree.

The crown shape

Most trees have a crown shape that is not a perfect sphere and an oak is no exception. The shape is an egg shape, wider at the bottom than at the top. The crown silhouette is still not perfectly smooth: some bumps and dents are noticeable.
In more mature oak trees these irregularities will result in a more cumulus cloud-like silhouette. Even though the branches themselves are hardly visible due to the leaves, from winter photos we know they have a slightly upward bending habit and are not straight. Note that we do not model every little twig; our leaf particles will be modeled to resemble twigs and not just single leaves.

Foliage density

It is a little bit difficult to find information on the actual number of leaves on a tree of a certain age, so we will do this 'by eye'. There are no leaves deep inside the crown, but red oaks do not carry leaves only at the ends of the branches either: twigs with leaves are also present deeper into the crown along the branches.

Twigs instead of individual leaves

Leaves are typically connected to small twigs, so instead of bunching up all the leaves at points along the branches we create particles with multiple leaves, as shown below. Note that at this point we don't bother with the actual shape: the leaves are rectangular and we leave out the actual twig altogether.

Leaf shape

We notice from the initial rendering that at a camera distance of 25m we can see, even at this resolution, that the leaves are rectangular, so we need to shape these leaves a little bit more.

Geometry vs. alpha mapping

Since we cannot see small details, leaf texture maps are not necessary: instanced geometry is rendered very efficiently, and real geometry also gives us the option to add a real crease and some curvature to the leaves. The final shape we chose is shown in the image below. We create some variations that we place in a group that we can use in a particle system. Remember to use smooth shading on the meshes, otherwise you will get sharp reflection boundaries which give a noisy impression.

Leaf material

The leaf material is important but because we cannot see any small details at this distance it will not be necessary to use texture maps, so we will create a simple shader that uses Blender's principled shader node.

Color and roughness

Color and roughness are very important for the look and feel. Because all the leaves on our tree are fresh the color is rather uniform. The leaves are also smooth but not very shiny. The first approximation looks like this:

Variation

Even if we are looking at fairly uniform spring foliage, we might want to add some variation for visual interest or to get a more summer-like look. Because each particle is an object with a unique random number, we can use a simple color ramp to drive this color variation. And because each particle has more than one leaf, we also give each leaf a unique gray-scale vertex color for even more variation.

The node setup we use and the result look like this:


Note that the front and backsides of the leaves are slightly different in real life but we ignore that.
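As a rough sketch of how such a setup could be built with Python (node and socket names are from Blender 2.79's Cycles; the ramp colors are made-up placeholders, and the per leaf vertex color would enter via an additional Attribute node):

    import bpy

    mat = bpy.data.materials.new('Leaf')
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links

    info = nodes.new('ShaderNodeObjectInfo')  # unique Random value per particle
    ramp = nodes.new('ShaderNodeValToRGB')    # maps that random value to a color
    ramp.color_ramp.elements[0].color = (0.10, 0.30, 0.05, 1)  # placeholder greens
    ramp.color_ramp.elements[1].color = (0.25, 0.45, 0.10, 1)
    principled = nodes.new('ShaderNodeBsdfPrincipled')

    links.new(info.outputs['Random'], ramp.inputs['Fac'])
    links.new(ramp.outputs['Color'], principled.inputs['Base Color'])
    links.new(principled.outputs['BSDF'], nodes['Material Output'].inputs['Surface'])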

Translucency

If we look at the image now, we notice that the coloring is still rather flat, even though the contrast is quite high and we added a bit of color and roughness variation. The main reason for this is that the leaves on the outside receive a lot of light from behind and are a bit translucent. If we add a small fraction of translucency, the whole crown gets a far more dynamic coloring:


Transparency

Translucency implies some transparency as well: the light that travels through a leaf and gets through will end up illuminating something else. Transparency results in longer render times however, so we want to use as few bounces as possible. We use the same noodle as before but add 0.1 transparency in the principled shader:


(The images show 0, 1, 2 and 3 bounces respectively: the difference is hardly noticeable once we add some transparency, and more than one bounce is indistinguishable to the human eye.)
So we see that transparency is desirable for slightly more light deeper inside the crown, but the effect of more than one bounce is limited.

Bark material

With the leaves covered we also need to look at the bark.

Color, roughness and normals

The bark is only really visible on the main trunk. Some color variation is visible at this distance, but not much, and the bark structure is invisible.

Displacement

The outline of the trunk looks rather smooth and artificial, so it will benefit from some extra distortion. Even though still experimental in 2.79, micro-displacement is an efficient way to add detail to a low poly mesh and break the artificially smooth outline of the trunk. The settings used are the defaults (modifier on the left, material settings on the right).
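For reference, a sketch of how these settings could be switched on from Python (assuming ob is the trunk object and mat its bark material; property names from Blender 2.79):

    # micro-displacement requires the experimental feature set
    bpy.context.scene.cycles.feature_set = 'EXPERIMENTAL'

    # an adaptive subsurf modifier provides the geometry to displace
    ob.modifiers.new('Subsurf', 'SUBSURF')
    ob.cycles.use_adaptive_subdivision = True

    # displace the surface for real instead of just shading it
    mat.cycles.displacement_method = 'TRUE'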


The shader we use looks like this:

Note that we scale everything with the distance to the origin of the mesh (which is at the foot of the trunk); this way small branches will get almost no visible displacement while the foot of the trunk even flares out a little bit, hinting at some hidden root system.

Render times

Each effect that we add impacts render times, so here is a small summary, all rendered at a resolution of 700x750 pixels with 500 samples and denoising on a GTX 970 (the absolute timings will be different on different hardware of course):

Material effects


Feature                  time (seconds)
3 transparent bounces    174
2 transparent bounces    172
1 transparent bounce     164
no transparency          156
no translucency          144

The timings were generated by simply muting the relevant nodes in the material or setting transparency to 0 in the principled shader. Cycles is smart enough to optimize away any unused nodes in the resulting shader.
Since transparency adds quite a bit to the render time, we might skip it altogether because, as we have seen, it is hardly visible in the end result. The 10 extra seconds for translucency however are certainly worth it.

Leaf geometry

We already know that real geometry is faster than using alpha mapped textures, but what about more particles or more detailed geometry? In the images shown until now we had 14868 leaves on the tree each time, each leaf with 448 square faces. If we vary these numbers by changing the number of particles and subdividing the faces of the leaves, we can compare the render times.
render time (seconds)       10000 particles    14868 particles    20000 particles
448 faces per leaf (1x)     154                174                190
1792 faces per leaf (4x)    162                183                204
7168 faces per leaf (16x)   170                194                226

As we can see, doubling the number of particles does not double the render time. So if we need to create a denser tree crown, adding a few more particles does not hurt much.
This is even more true for the amount of detail in the leaves: 16 times the number of faces only amounts to approximately 10% extra render time.
The impact on peak memory usage during rendering is minimal: at 7168 faces per particle, 10000 particles peak at 647 MB, and 20000 particles at 651 MB, so with the number of particles we need for a tree the memory usage is hardly a concern.

Conclusion

With our choice of a single transparent bounce plus added translucency we get a nice result. Adding some extra particles or refining the geometry of the leaves does not hurt render times much, but there is no need to go overboard as a few extra faces go a long way. No doubt the realism of the image can be improved even more by proper lighting etc., but I am not an artist so I concentrated on the technical aspects :-)

Freebie

If you like the tree and/or want to experiment with it, you can download the .blend file from my GitHub page. The tree was generated with Space Tree Pro (available on BlenderMarket), so if you own that add-on you can even change its parameters to get different oak trees (the tree parameters are not compatible with my old free space tree add-on).