depth-of-field pass for maya or mentalray

david | rendering,tutorials | Tuesday, December 11th, 2007

A common approach to rendering is to do it in several passes and to composite them using something like after effects or fusion. For me an important pass is a variation on the standard depth pass, which I refer to as the depth-of-field pass. For a few years I have rendered the depth-of-field pass using a wonderful mentalray shader called zDepthDOF by Andreas Bauer. (The link takes you to a good explanation of how the shader works and why it is better than a standard depth render.)

The thing I like most about zDepthDOF is that I can use it with the maya distance tool. I connect the distance attribute to the shader's focus distance parameter. I point-constrain one of the distance tool's locators to the camera. Then I animate the other locator to accurately control the focal point in my depth pass, which can then be used to create some nice focus-change effects using an after effects filter like compound blur.
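If you prefer to build that hookup with script, here is a minimal sketch using maya's python commands. The camera name (renderCam) and the shader attribute (zDepthDOF1.focusDistance) are stand-ins for illustration; check the shader's actual attribute names in the attribute editor.

    import maya.cmds as cmds

    # Two locators drive a distanceDimension node, which outputs a live distance
    camLoc = cmds.spaceLocator(name='camLoc')[0]
    focusLoc = cmds.spaceLocator(name='focusLoc')[0]
    distShape = cmds.createNode('distanceDimShape')
    cmds.connectAttr(camLoc + 'Shape.worldPosition[0]', distShape + '.startPoint')
    cmds.connectAttr(focusLoc + 'Shape.worldPosition[0]', distShape + '.endPoint')

    # Pin one locator to the rendering camera; animate the other by hand
    cmds.pointConstraint('renderCam', camLoc)

    # Feed the measured distance into the shader's focus distance
    cmds.connectAttr(distShape + '.distance', 'zDepthDOF1.focusDistance')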

But zDepthDOF is a mentalray shader, so just for fun I decided to see if I could do the same thing using the standard maya software renderer. It wasn't too difficult and I thought it might be worth sharing. It's a great example of how the maya utility nodes can be used in a shader network.


I'll start with some pictures. I have some cones and a sphere laid out on a ground plane at various distances from the camera, and there is a wall right at the back.

[Image: beautyA.jpg - beauty render of the test scene]

I could use zDepthDOF to render a depth-of-field pass where black means in focus and white means out of focus. If I set the focus distance to be the same as the distance from the camera to the sphere, I would get the following render.

[Image: zDepthDOF pass]

If I take both images into after effects and use the depth-of-field render to control blurriness in the compound blur filter, I get something like this.

[Image: compoundBlurA.jpg - compound blur result]

So what I need is a maya shader that will give me the same result as the mentalray zDepthDOF.

To help visualize what is happening I made the following diagram.

[Image: concept01.jpg - concept diagram]

The gradient extends from the camera's near clipping plane to the far clipping plane. Objects sitting on the focal plane are shaded black. The further an object is from the focal plane the whiter it gets.

I think the best way to understand the shader network I created is to load it into maya and study the connections in the hypershade window.

Download my example scene file, djZDepthDOF.ma, from here.

Here is a snapshot of the shader network.

[Image: shader network snapshot]

To figure out what color an object should be I need to know how far it is from the camera, and for that I used the samplerInfo utility node. This node has an output called pointCameraZ, which is the camera-space depth of the surface point being rendered. The value of pointCameraZ is always a negative number (the camera looks down its negative Z axis), so I used a multiplyDivide utility to multiply it by -1, thus making it a positive number.
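In script form, that part of the network could be built like this (a sketch with maya's python commands; maya will assign its own node names):

    import maya.cmds as cmds

    # samplerInfo supplies the camera-space position of the point being shaded
    sampler = cmds.shadingNode('samplerInfo', asUtility=True)

    # pointCameraZ is negative, so multiply by -1 to get a positive distance
    negate = cmds.shadingNode('multiplyDivide', asUtility=True)
    cmds.setAttr(negate + '.input2X', -1)
    cmds.connectAttr(sampler + '.pointCameraZ', negate + '.input1X')
    # negate.outputX now carries the distance from the camera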

To create the gradient I used two ramp textures: the first runs from white to black, and the second from black to white. The first ramp is used for distances from the camera's near clipping plane to the focal plane. The second ramp is used for distances from the focal plane to the far clipping plane. The distance obtained from the samplerInfo node is used to "look up" the ramp color, and a condition node selects the right ramp depending on the distance.
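Continuing the sketch above, the two ramps and the condition node could be wired like so:

    # Near ramp: white at the near clip plane, black at the focal plane
    nearRamp = cmds.shadingNode('ramp', asTexture=True)
    cmds.setAttr(nearRamp + '.colorEntryList[0].position', 0)
    cmds.setAttr(nearRamp + '.colorEntryList[0].color', 1, 1, 1, type='double3')
    cmds.setAttr(nearRamp + '.colorEntryList[1].position', 1)
    cmds.setAttr(nearRamp + '.colorEntryList[1].color', 0, 0, 0, type='double3')

    # Far ramp: black at the focal plane, white at the far clip plane
    farRamp = cmds.shadingNode('ramp', asTexture=True)
    cmds.setAttr(farRamp + '.colorEntryList[0].position', 0)
    cmds.setAttr(farRamp + '.colorEntryList[0].color', 0, 0, 0, type='double3')
    cmds.setAttr(farRamp + '.colorEntryList[1].position', 1)
    cmds.setAttr(farRamp + '.colorEntryList[1].color', 1, 1, 1, type='double3')

    # Condition: distance less than focal distance -> near ramp, else far ramp
    cond = cmds.shadingNode('condition', asUtility=True)
    cmds.setAttr(cond + '.operation', 4)  # Less Than
    cmds.connectAttr(negate + '.outputX', cond + '.firstTerm')
    cmds.connectAttr(nearRamp + '.outColor', cond + '.colorIfTrue')
    cmds.connectAttr(farRamp + '.outColor', cond + '.colorIfFalse')
    # cond.secondTerm will receive the focal distance from the distance tool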

Since a ramp texture expects a "look up" value between zero and one, I used some setRange nodes to normalize the distance values.

The cameraShape node is part of the network since its farClipPlane value is used as an input to one of the setRange nodes.
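For example, the far-side setRange could remap distances between the focal plane and the far clipping plane into the 0 to 1 range the ramp expects. cameraShape1 is an assumed name, the channel choice (X, Y, or Z) is arbitrary, and the near-side setRange works the same way:

    # Remap [focal distance .. far clip] to [0 .. 1] for the far ramp lookup
    setRangeFar = cmds.shadingNode('setRange', asUtility=True)
    cmds.setAttr(setRangeFar + '.minX', 0)
    cmds.setAttr(setRangeFar + '.maxX', 1)
    cmds.connectAttr('cameraShape1.farClipPlane', setRangeFar + '.oldMaxX')
    cmds.connectAttr(negate + '.outputX', setRangeFar + '.valueX')
    cmds.connectAttr(setRangeFar + '.outValueX', farRamp + '.vCoord')
    # oldMinX is driven by the focal-plane distance (see the math sketch below)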

A distance tool is also used (Create|Measure Tools|Distance Tool). One locator is point-constrained to the camera. The other locator is the one I animate to specify the focal plane. Sometimes I point-constrain this to the object I want to keep focus on, but more often I animate it by hand. The distance output feeds into some plusMinusAverage nodes, and some simple math is done (sketched after the next paragraph) before passing values to the setRange nodes and the condition node.

I also used the second locator as a way of specifying a focus range. This is the width of the black band in the middle of the gradient. It can be increased or decreased by scaling the locator in the z-axis. Objects within this distance range are in focus. To make it easier to visualize in the viewport I parented a cube to the locator to show me approximately where the focus range is (approximate because it should really be a spherical slice - but close enough to work with).
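Here is a plausible sketch of that arithmetic, continuing the earlier snippets. I am assuming the focus band is centred on the measured distance with its width taken from the focus locator's scaleZ; the example file's exact math may differ:

    # Half the focus-range width, taken from the focus locator's scaleZ
    half = cmds.shadingNode('multiplyDivide', asUtility=True)
    cmds.setAttr(half + '.input2X', 0.5)
    cmds.connectAttr(focusLoc + '.scaleZ', half + '.input1X')

    # Near edge of the focus band: distance - width/2
    nearEdge = cmds.shadingNode('plusMinusAverage', asUtility=True)
    cmds.setAttr(nearEdge + '.operation', 2)  # subtract
    cmds.connectAttr(distShape + '.distance', nearEdge + '.input1D[0]')
    cmds.connectAttr(half + '.outputX', nearEdge + '.input1D[1]')

    # Far edge of the focus band: distance + width/2
    farEdge = cmds.shadingNode('plusMinusAverage', asUtility=True)
    cmds.setAttr(farEdge + '.operation', 1)  # sum
    cmds.connectAttr(distShape + '.distance', farEdge + '.input1D[0]')
    cmds.connectAttr(half + '.outputX', farEdge + '.input1D[1]')
    # the edge values feed the setRange old min/max and the condition terms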

The next two images show the viewport snaps.

[Image: sideViewA.jpg - side view]

[Image: perspViewA.jpg - perspective view]

This shader renders quickly and requires no lights in the scene. It can be used with transparent surfaces using the same technique described in the zDepthDOF article by Andreas Bauer. It is not as neat as a single-node mentalray shader like zDepthDOF, but it has the advantage of working with the maya software renderer, and it shows up in the viewport in texture display mode (so there is a degree of interactivity).

Download my example scene file, djZDepthDOF.ma, from here.

The example scene has two render layers: one for the beauty pass and one for the depth pass. You could export the shader and use it in your own scenes - you would just need to use the connection editor to hook up your own rendering camera in place of the one in my scene.
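Scripted, that re-hookup might look like this. The node names here follow my sketches above; check the hypershade for the real names in the scene file, and myCam/myCamShape stand in for your own camera:

    import maya.cmds as cmds

    # Repoint the far clip input at your camera (force breaks the old connection)
    cmds.connectAttr('myCamShape.farClipPlane', 'setRangeFar.oldMaxX', force=True)

    # Re-pin the distance tool's first locator to your camera as well
    cmds.pointConstraint('myCam', 'camLoc')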

13 Comments

  1. very gooood. it helps me very much. lot of thanks

    Comment by debroysibai — August 22, 2008 @ 3:23 pm

  2. thanks so much

    Comment by dican — January 19, 2009 @ 1:18 pm

  3. Of all the depth shaders out there (DepthShader 1.0, zDepthDOF 1.6, and your djDepthPhenomena), this one is the only one that works on my Mac with Maya 2011 64-bit.

    djDepthDOF works very well for me. Many, many thanks! I was burning through possibilities and began to think I would simply have to settle for mentalray's built-in depth channel.

    I have encountered only two problems, which appear to be harmless:

    Whenever I open the example file, I am given this error: [Directory Path]/example.ma line 1370: Unrecognized node type 'vectorRenderGlobals'; preserving node information during this session.

    Perhaps this is because the Vector Renderer is not available in 64-bit for Mac, so I do not have it loaded.

    The second error I receive occurs every time I attempt to render a scene:
    // Error: Cannot find procedure "shave_MRFrameStart". //
    // Error: Cannot find procedure "shave_MRFrameEnd". //

    This error occurs whether I am using Maya Software or mentalray, and it occurs regardless of the render layer that I am rendering.

    I do not know what these are, but they do not seem to affect the output.

    Comment by kienjakenobi — May 29, 2010 @ 7:36 am

  4. I'm glad it works for you. I have to admit that I'm using the p_z shader more often these days. Not sure if there's a mac version though.

    You can probably use optimize scene and remove unknown nodes to avoid the vectorRenderGlobals error. And you can get rid of the shave error by clearing the pre and post render mel fields in render globals.

    Comment by david — June 7, 2010 @ 12:58 am

  5. Hey Dave

    What's the setup for your renderlayers?
    Specifically the DOF render layer... How are you getting it to show the b/w ramp?

    thanks

    Comment by n8skow — November 2, 2010 @ 1:36 am

  6. Also - when I import your shader into my scene, I get a green coloring in my renders from the camera frustum object (even though I see it's set as non-renderable).

    Comment by n8skow — November 2, 2010 @ 1:52 am

  7. 2nd question first: Yes, mentalray seems to ignore some of maya's render stats flags. I usually put the frustum on a display layer which I set to templated when I need to render. Or you can just hide it.

    renderlayers? Pretty much like in the example file. I have a renderlayer where objects have material overrides. The ramp shows up in my viewport in texture display mode. If you are not seeing it, it may be graphics card dependent.

    Comment by david — November 2, 2010 @ 4:25 pm

  8. I'm having a slight problem with the shader when used on a scene where the objects are on a much smaller scale than your example file. I noticed that if I scale your scene and camera down and do a render with the focal point still on the ball in the middle, the cones in the back are still completely in focus. Whereas before, the cones in front and in back were out of focus (white) and the ball was in focus, creating the white-to-black-to-white gradient. Is there something else that needs to be adjusted to get it to work on a smaller scale?

    Thanks!

    Comment by skdzines — February 16, 2012 @ 3:43 am

  9. I have a rule for myself: never scale cameras. Whenever I do, I seem to run into strange problems.

    In this shader network the value of the camera's far clipping plane is being used to drive one of the setRange nodes. When you scale the camera it also scales the location of the clipping plane, but the value of the far clipping plane distance in the camera attributes does not change. This is what caused your problem.

    You could fix it in a few ways, but probably the easiest, assuming you grouped the whole scene and scaled the group, is to remove the camera from the group, and parent constrain it instead. Then adjust the clipping planes to fit the extent of your scene.
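    In script form, the fix might look something like this (sceneGroup and renderCam are stand-in names):

        import maya.cmds as cmds

        # Pull the camera out of the scaled group, then constrain it instead
        cmds.parent('renderCam', world=True)
        cmds.parentConstraint('sceneGroup', 'renderCam', maintainOffset=True)

        # Then adjust the clipping planes to fit the extent of the scene
        cmds.setAttr('renderCamShape.nearClipPlane', 0.1)
        cmds.setAttr('renderCamShape.farClipPlane', 1000)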

    Comment by david — February 16, 2012 @ 10:49 pm

  10. So I guess my question is, and I apologize if your previous response addresses it and I'm just not understanding, how do I simply import your shading network and hook it up to my camera, which is at a much smaller scale than your scene? I didn't scale mine down at all; it's at the default scale 1,1,1 in my scene, which is set to centimeters. However, if I open your example file, your camera is also scaled to 1,1,1, but in comparison to your grid it is about 50 times larger than my default camera. The same goes for your locators.

    Here is an example of your camera and locators after being imported into my scene. Again, I didn't scale mine down at all and my settings are at their default. http://skdzines.com/capture.jpg.

    Other than that, I think I'm hooking your shader up correctly to my camera by connecting the cameraShape.farClipPlane -> setRangeFar.oldMaxZ.

    Thanks again!

    Comment by skdzines — February 17, 2012 @ 7:45 am

  11. Hello David,
    I was just wondering why one would want to render the zdepth in maya as opposed to mental ray? Wouldn't mental ray look better than maya? As always, thanks.

    Comment by smokedogg — February 19, 2012 @ 10:59 am

  12. @ skdzines: If you import my shader network, you should be able to connect your camera into it. Display my camera connections in the hypergraph, then use the connection editor to make the same connections to your camera.

    The size of the camera icon that you see in the viewport can be set using the camera attributes in the "Object Display" tab. The attribute is called "locator scale". This is just a viewport guide and makes no difference at all to the render or the clipping plane values.

    Comment by david — February 19, 2012 @ 12:30 pm

  13. @ smokedogg: The shader network I showed here uses standard maya nodes and can be rendered using mentalray or the standard maya software renderer (and probably other renderers like vray - though I have not actually tried it). The result will be pretty much the same for all of them. I'd probably just use the one that rendered quickest, unless I had other reasons for choosing a different one.

    When I wrote it, one of the things I did like about this setup was the ability to define the "in focus" region and to see the greyscale result in the viewport. The downside to this approach is that you are baking that focus info into the depth render. These days I prefer to use a true depth pass and make those focus decisions in post.

    I originally wrote this post about 4 years ago so I should mention that I switched, almost exclusively, to vray 2 years ago. Now I can simply add the vray depth pass to my renders. And that means nothing extra to set up, and no extra render time.

    Comment by david — February 19, 2012 @ 12:43 pm
