Principled BSDF – Cycles gets a native PBR shader


(The spaceship chairs from Homeward, updated with the new shader)

If you open a build of what will become Blender 2.79 and create a material, you’ll find yourself face to face with a giant, unfamiliar node. If you’ve used the RenderMan plugin, though, or worked in apps like Houdini or Unreal, you might recognize it after a second. It’s an implementation of Disney’s Principled BSDF, and it’s Cycles’ new physical uber shader. The principled shader has become something of a de facto standard “PBR” shader: it’s the viewport shader in Substance Painter/Designer, and it’s used in a number of other tools as well.

This means we’re done with PBR hack node groups. No more simple_pbr_v5 (which was your favorite PBR hack group, rite?), no more fresnel node setups, no more appending nodes every project. It’s over, you’re free. It’s all one built-in node now, and it works just like the shaders in, say, Unreal. Or Substance. Or RenderMan. Or Mantra.

First of all, some background on the Principled Shader:

If you’re just looking in Disney’s paper for what the parameters are, skip over the mumbo jumbo and go down to section 5.2. It even includes helpful example renders.


So how do you hook up base color, roughness, metallic, and normal maps to this shader? Like this:


The normal map still needs to be passed through the Normal Map node, and remember that roughness, metallic, and normal maps are “Non-Color Data”, not color-managed “Color” textures!

As another benefit, the principled shader is also implemented in OpenGL! Cycles material draw mode doesn’t use scene lights, so you’re still stuck with the solid-mode lights, but the result is far more accurate than our old node groups:


And now, answers to questions that are likely to come up:

Does this work on GPUs?

Yes, both CUDA and OpenCL.


Do I need to square roughness? / My roughness doesn’t match the Glossy BSDF

The Principled BSDF squares roughness internally. This is generally easier to control and allows better compatibility with other software. If you need to match a value you had on the Glossy BSDF, just type <value> ^ 0.5 into the roughness field; Blender will calculate the equivalent.
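In code terms, the round trip looks like this (a hedged sketch of the mapping described above, not Cycles source):

```python
import math

def glossy_to_principled(glossy_roughness):
    """To match an old Glossy BSDF value, give Principled its square root."""
    return math.sqrt(glossy_roughness)

def principled_internal(principled_roughness):
    """What the Principled BSDF actually uses after squaring internally."""
    return principled_roughness ** 2

# A Glossy BSDF roughness of 0.25 becomes 0.5 on the Principled slider.
```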


Do I still need to account for roughness in fresnel?

Nope. Roughness is correctly handled within the shader. Just plug in your roughness map and go.


I see there’s a specular input. Do I need to export a specular map from Substance?

No! The specular slider is mainly there to tweak reflection brightness. Generally, you can leave that as 0.5 to start and adjust up or down if you need weaker or stronger reflections. If you want to match Substance, leave it at 0.5 (that’s what Substance’s viewport does).

It actually corresponds to IOR where 0-1 is equivalent to an IOR range of 1.0-1.8. The default is 0.5, which corresponds to an IOR of 1.5. (for reference, in the typical PBR Specular workflow IOR=1.5 corresponds to a specular value of 0.04).
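That mapping is easy to check numerically. A hedged sketch (the 0.08 scale factor is the standard PBR-specular convention, not pulled from Cycles source):

```python
def specular_to_f0(specular):
    """Facing-angle reflectance: specular 0.5 -> F0 of 0.04."""
    return 0.08 * specular

def specular_to_ior(specular):
    """Invert the Fresnel F0 formula ((ior-1)/(ior+1))^2 to recover the IOR."""
    r = specular_to_f0(specular) ** 0.5
    return (1.0 + r) / (1.0 - r)

# specular_to_ior(0.5) -> 1.5, and specular_to_ior(1.0) -> roughly 1.8
```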


Some other apps allow emission or tinted refraction in the principled shader. Is this not supported in Cycles?

Cycles already has easy and fast ways to do these that still work with the principled shader, so they’re not integrated. For emission, simply combine the Principled BSDF node with an Emission shader node using an Add Shader. For tinted refraction, connect a Volume Absorption node to the material’s Volume output.


What are the limitations of the shader in material draw view (OpenGL) compared to the full render in Cycles?

  • Anisotropy does not respect custom tangents
  • Refraction is not supported
  • SSS is a very limited approximation
  • Only solid-mode lighting (Blender’s “OpenGL lights”) is supported. This is a general limitation of Cycles material draw view


Cycles scene optimizations: Ray visibility

If you want to make something invisible to a certain ray type in Blender/Cycles, most people do something like this:

(Screenshot: mix shader driven by the Light Path node’s “Is Camera Ray” output)


You are making Cycles waste time on worthless calculations. Instead, go over to the Object tab and use the ray visibility options:

(Screenshot: the Ray Visibility panel in the Object tab)

Unchecking “Camera” there produces the exact same result (the object is invisible when viewed directly, but otherwise appears normally), but it goes about it differently. The node setup tells Cycles: when evaluating the material, figure out the current ray type, and if it’s a camera ray, choose the second input on the Mix Shader. That input is a Transparent BSDF at full white, so the ray passes through unchanged.

Ray visibility does it a much more efficient way: it just tells Cycles “if the current ray is a camera ray, don’t even bother checking whether you hit this object, just assume you didn’t”. This is important, as it lets us filter out the object while doing intersection tests (render-nerd speak for “checking to see what the ray hit”). Even better, it lets us avoid running the shader at all, much less realizing the result was a transparent BSDF and having to continue the path and do the whole thing over on the next object we hit.
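As a toy sketch of that filtering (illustrative Python, nothing like Cycles’ actual BVH code), the visibility check happens inside the intersection test itself:

```python
def first_hit(objects, ray_type):
    """Return the nearest object this ray is allowed to hit. Objects invisible
    to the ray type are skipped during intersection; their shaders never run."""
    candidates = [o for o in objects if ray_type in o["visible_to"]]
    return min(candidates, key=lambda o: o["t"], default=None)

scene = [
    {"name": "cube",  "t": 1.0, "visible_to": {"shadow", "diffuse"}},  # camera unchecked
    {"name": "floor", "t": 2.0, "visible_to": {"camera", "shadow", "diffuse"}},
]
# A camera ray sails straight through the cube and lands on the floor.
```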

It gets worse, though. The light path setup includes a Transparent BSDF, and thus it enables transparent shadows. Realtime engines and rasterizers have taught too many people to think alpha maps are their friends. But raytracers are different: to them, alpha maps are evil, because they create a horrible effect called transparent shadows. Raytraced shadows are computed by tracing a path from the shading point (that’s where the initial ray landed) out to a lamp and seeing if anything blocks the path. If something is there, the object is “in shadow” and the lamp contributes nothing. If the ray has a clear shot, the object is not in shadow, and the lamp contributes at full strength. Simple.

Transparent shadows crap all over that simplicity. They cause the shadow ray test to return bullshit like “the ray might be blocked, or maybe it isn’t”. The only way to find out is to run the shader (which we didn’t even have to do in the first place for opaque shadows!) and figure out the actual opacity at the point we hit, then trace another ray from this point to see if we were actually blocked entirely by something else and did all this for nothing (lolz). God help you if that “something else” is also transparent; then we have to do this shit all over again.
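The two shadow tests can be sketched like this (a simplification, with a list of hit opacities standing in for shader evaluations):

```python
def shadow_opaque(hits):
    """Opaque shadow test: any hit at all blocks the lamp. No shaders run."""
    return 0.0 if hits else 1.0

def shadow_transparent(hit_opacities):
    """Transparent shadow test: evaluate the shader at every hit to learn its
    opacity, multiplying the light that survives along the ray."""
    light, shader_evals = 1.0, 0
    for opacity in hit_opacities:
        shader_evals += 1          # each hit costs a shader evaluation
        light *= 1.0 - opacity
        if light == 0.0:
            break                  # fully blocked, finally allowed to stop
    return light, shader_evals
```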

Needless to say, that wastes a lot of time. And now you know why that “transparent shadows” box is in the material options. Unchecking it tells Cycles “I know this material has a Transparent BSDF, but just ignore that and do opaque shadows anyway”, which can save a lot of time if no one is going to notice an object casting solid shadows.

Now, sometimes we don’t have a choice, (alpha maps have their uses), but in the case of 100% transparency to a particular ray type, we do. We can just turn off ray visibility for that type, and have a simple shader with no transparency.

Just to compare, I set up this scene. The cube is invisible to camera rays:

(Screenshot: the test scene)

Regardless of whether we use ray visibility or the light-path shader, the result is the same. But the render times aren’t:

  • Diffuse-only shader, camera ray visibility disabled: 1.73sec
  • Diffuse/transparent mix by “is camera ray”, camera ray visibility enabled: 2.28sec
  • Diffuse/transparent mix by “is camera ray”, camera ray visibility enabled, but “transparent shadows” disabled: 2.10sec

This is obviously a worst-case scenario, but you can see the impact of the extra calculations: about 32% longer to render the exact same image! Using the same setup with transparent shadows disabled (still the same result, because “Is Camera Ray” will never evaluate to 1 for a shadow ray) takes only 2.1 sec. That means roughly a third of the penalty wasn’t even due to our extra shader work; it came from needlessly enabling transparent shadows.
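Checking the arithmetic on those timings:

```python
base, lightpath, no_tshadow = 1.73, 2.28, 2.10  # seconds, from the list above

overhead = (lightpath - base) / base                           # ~0.32: about 32% longer
tshadow_share = (lightpath - no_tshadow) / (lightpath - base)  # ~1/3 of the penalty
```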

So there we have it. Ray visibility is your friend.


“But wait!” you might be saying, “I need to do some ray-switching for a bunch of objects with this one material! Light path is a lot easier in this case!”

Python to the rescue. This will disable camera visibility on all selected objects (select one object, then use “select > linked > material” to grab all objects with your mat):

import bpy

for obj in bpy.context.selected_objects:
    obj.cycles_visibility.camera = False

If you need to flip a different ray visibility switch, you can just change that last line. This version will turn on transmission visibility:

import bpy

for obj in bpy.context.selected_objects:
    obj.cycles_visibility.transmission = True

The Python names for the 6 switches, all attributes of obj.cycles_visibility, are: camera, diffuse, glossy, transmission, scatter, and shadow.


Importing Substance Painter textures to Cycles

You should also check out the latest version of the PBR node group here:


Substance Painter is a powerful physically-based (PBR) texturing tool. It’s primarily aimed at game use (for now…), but you can easily make use of its output in Cycles. In this tutorial, I’m going to show you how to do that.

To start, I’ve given everyone’s favorite monkey an interesting paint job using the default color, height, roughness, and metallic channels.


First order of business is to export these for Blender. Hit File > Export, and just use the default “Document Channels + Normal” preset.

export channels

Now let’s look at our shader in Cycles. We’ll start by making the simplest, most standard physical shader, a diffuse/glossy mix with fresnel.

starting nodes

NOTE: Some people like to use the layer weight node instead of the fresnel node. If that’s you, go right ahead. I’m just using the fresnel node here for simplicity.

Base Color

Our first channel to deal with is base color. There’s not much to this one. We have a base color image, we’ll just load it and plug it into the diffuse BSDF’s color slot.

base color attached


Our next texture is the roughness map. As you might have guessed, we’re connecting it to the roughness inputs on both the diffuse and glossy nodes. However, there are a couple of things we need to set to make sure Cycles interprets the map correctly. The first is to change the image node from “Color” to “Non-Color Data” (as that is what the roughness map is). Any channel that is not set as “sRGB” in Substance Painter should be loaded as non-color data. Here we see that base color is sRGB (and thus should be loaded as color), but the rest of our channels are luminance (marked with an “L”), so they should be loaded as non-color data.

sp channel type

The other thing we need to do is square the roughness value to match the range between SP and Cycles. To do that, add a math node, set it to “power”, connect your roughness map to the top slot, and set the bottom slot to “2.0”.

rough attached
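In other words, the power node is just computing this (a trivial sketch of the conversion):

```python
def sp_to_cycles_roughness(sp_roughness):
    """Square the Substance Painter roughness value to match Cycles."""
    return sp_roughness ** 2.0

# A Substance Painter roughness of 0.5 becomes 0.25 in Cycles.
```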

Let’s check how it looks so far:


Ok, it’s a start. The colors are right, at least. Now let’s deal with that height channel. There are a few different ways to do that:

Height Method 1: Bump Node

One obvious way to handle the height channel is to do what SP shows it as: load the height texture and use it as a bump map. To do that, we pass it through the Bump node and attach that to our normal inputs.

bump node

This works ok, but you often need to fine-tune the bump strength, and if you have a normal map for your object, you have to add more spaghetti (and load another texture) to integrate the two. There’s a better way, namely…

Height Method 2: The Normal OpenGL output

Substance Painter can generate a number of “converted” texture maps. One of these is the “Normal OpenGL” channel, which generates an OpenGL-style tangent-space normal map from the height channel and the object’s baked normal map (if it has one). We can attach it with the Normal Map node, like usual.

normal ogl node

Let’s try that:


Perfect! Works right out of the box! In this case, our object didn’t have a baked normal map in Substance Painter. But if it did (such as baked sculpting detail), the exported normal map would contain that as well as your height painting, so you don’t need to do anything else to connect the two.

Optional Height Method 3: The Displacement Output (Which Doesn’t Work Right)

There’s one more method you can do. It doesn’t really behave the way it should since it relies on an unfinished feature in Cycles, but you can instead attach the height map to the displacement output. If you don’t turn on experimental features, this has basically the same result as the bump node method. However, if you enable experimental and change the displace mode to “both” (aka, auto-bump), Cycles will displace the vertices of the mesh as best it can to fit the height map, then adjust the normals to fit the rest of the way.

It’s kind of cool, but again, this is an UNFINISHED FEATURE of Cycles that DOES NOT WORK QUITE RIGHT. (don’t enable subdiv, trust me). It’s probably best you just forget I mentioned this and use the normal map method.


Ok, one more channel. This one gets a bit complicated. Up until now, our shader has been a simple dielectric shader. We now need to introduce an alternate shading path for metal and use the metallic map to blend between them. To do this, add a Mix Shader node at the end of the chain, attach your metallic map to the mix factor, put an extra glossy BSDF on the second input, and attach your base color, roughness (from the power node), and normal (from the Normal Map node) to this glossy BSDF.

metallic attached

I’ve highlighted our newly added nodes in red. Areas that are dark in the metallic map will use our diffuse/glossy fresnel mix we created earlier. Areas that are bright in the metallic map will shift over to this glossy-only shader. Since metals have colored reflections, we attach the base color to the glossy-color input here, unlike our other glossy BSDF on the dielectric side. Since roughness and normals are shared, we use the squared-roughness and normals just like on our other BSDF nodes.
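Conceptually, the Mix Shader is doing nothing fancier than a per-channel linear blend driven by the metallic map (a sketch, not actual shader code):

```python
def mix_shader(dielectric_rgb, metal_rgb, metallic):
    """Blend the dielectric and metal shading results by the metallic value."""
    return tuple((1.0 - metallic) * d + metallic * m
                 for d, m in zip(dielectric_rgb, metal_rgb))

# metallic = 0 gives the dielectric path, 1 gives the all-glossy metal path.
```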

Wrapping Up

Here is our final monkey!


This shader setup is just the very basic elements required to translate the shader. There are numerous other effects you can add. For example, some people like the look of metal shaders that blend two glossy BSDFs at slightly different roughness. You can add that in to your shader (use an extra math node to scale your roughness map). Some people prefer to use the layer weight node instead of fresnel. It’s all up to you, feel free to experiment!

You can make a node group out of your shading nodes here (minus the image textures) so you don’t have to set this up all over again for every material. Or you can just use mine. Grab it in this post:

Zbrush, Cycles, normal map channels, and YOU.

I’ve mentioned before that I’m a fan of using Zbrush’s Multi-Map Exporter (hereafter called MME) to pack up a finished sculpt for Blender/Cycles. (I use GoB to send base meshes to Zbrush initially, but I prefer setting things up my way back in Blender.) MME has a normal map config panel like this (spoiler alert: the rest of the post describes why it’s configured the way it is in the picture):

(Screenshot: MME normal map export settings)

Those 6 controls along the bottom determine how the channel arrangement is done. Different 3D packages expect different arrangements, and the internet is pretty vague about which one you want for Blender. I’d previously read somewhere to use flipG only, but that was looking slightly weird. So I decided to consult with my dragon (see previous blog post) about just wtf Cycles wanted here.

Sometimes it’s hard to tell if your map is weird. Some combos (like flipB) are obviously wrong and give black or patchy-shaded meshes, so you KNOW they’re wrong. Other combos give normals that look ostensibly correct, but depict surface features going in the wrong directions. If those features are small enough, they can hide under the displacement and color detail. Not this time, though. I turned off the subsurf and displace modifiers, set a clay shader, and exported normal maps against level 1. We’re going to find out once and for all!

To start, this is the high-res model in Zbrush. Ideally, the surface will look as close as we can to this (shader differences aside. I didn’t bother to match the matcap):


First up: flipG, as I’d read somewhere I can’t remember. This is obviously not it: you can see scales along the side of the thigh shift from raised to sunken!



Ok, so if flipping G alone shifts details from raised to sunken, clearly R and G need to match. What does flipRG give?


Well, shit. Now EVERYTHING is sunken!


So if flipRG gives sane normals, but with sunken details that should be raised, doesn’t that imply no flips at all is correct? Let’s see:


Yep, I think that might be it.


But let’s be thorough. What about that swRG button?


Nope, mix of raised and sunken scales again!


But what if we flipped AND switched??


This one gave me a bit of pause. I THINK this is still wrong; several details along the face seem sunken where they should be raised (and are in the no-flip version). So I’m going with no-flip as the correct way.
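For reference, all these buttons amount to per-channel operations on the map. flipG, for example, is just this (a sketch on 8-bit RGB pixels):

```python
def flip_g(pixel):
    """Invert the green channel of an 8-bit RGB normal-map pixel, which is
    what converts between the two vertical-axis conventions."""
    r, g, b = pixel
    return (r, 255 - g, b)

# Applying it twice gets you back where you started.
```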


To cover some other options on MME while we are here:

FlipV/FlipU – Zbrush and Blender have opposite conventions for the V axis in texture space. Sending a mesh from one to the other will cause your textures to be inverted if you don’t flip your UVs along V. If this needs to be done, switch on flipV. Note that GoB will automatically flip V, so if you are mixing MME and GoB for your roundtripping, make sure you don’t double-flip. Blender and Zbrush (and Maya, incidentally) have the same convention for U, so as long as no other apps have touched this mesh, you can leave flipU disabled.

Tangent – This switch needs to match the “space” option on Cycles’ normal map node. Enable if using tangent space.

SmoothUV – There isn’t necessarily a right or wrong setting for this, but this switch, the corresponding one on the displacement map export, and the “subdivide UV” checkbox on your subsurf modifier back in Blender all need to match. Otherwise your UVs get distorted out from under the texture.
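The flipV option above is just the usual v → 1 − v remap; a quick sketch:

```python
def flip_v(uvs):
    """Mirror UV coordinates along V, for moving meshes between ZBrush
    and Blender without inverted textures."""
    return [(u, 1.0 - v) for u, v in uvs]

# flip_v([(0.5, 0.0)]) puts that UV vertex at (0.5, 1.0)
```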

Cycles – lamp cover optimizations

Does your scene have a light source with a glass cover on it? Headlights, flashlight, lantern, etc? Here’s a little tip for the glass. Use this shader instead of the glass bsdf or a refraction/glossy mix:



This way, the “glass” is considered transparent instead of transmissive. This gives a few nice benefits:

  • The light is not considered a “caustic” and will shine even with no-caustics enabled, and be MUCH easier to sample if you are lighting with it directly (instead of an invisible lamp outside the fixture)
  • The light source is considered direct, meaning it is affected by the “clamp direct” value instead of the harsh, firefly-stomping “clamp indirect” setting
  • Being direct light also means it will appear in the “emit” pass instead of the transmission passes, which makes adding glow effects and adjusting lamp power a lot easier.

There’s also another, even further optimization you can do. This shader, containing the Transparent BSDF, casts transparent shadows. Transparent shadows are bad; they make sampling lamps slower (Cycles has to call the shader, then trace another ray the rest of the distance, instead of simply seeing if it hit something or not). While it won’t usually have a big effect on some small panes of glass, you can squeeze out a little extra speed by disabling shadow casting entirely for your glass object in the Ray Visibility panel.