A common approach to rendering is to do it in several passes and to composite them using something like after effects or fusion. For me an important pass is a variation on the standard depth pass, which I refer to as the depth-of-field pass. For a few years I have rendered the depth-of-field pass using a wonderful mentalray shader called zDepthDOF by Andreas Bauer. (The link takes you to a good explanation of how the shader works and why it is better than a standard depth render.)
The thing I like most about zDepthDOF is that I can use it with the maya distance tool. I connect the distance attribute to the shader's focus distance parameter. I point-constrain one of the distance tool's locators to the camera. Then I animate the other locator to accurately control the focal point in my depth pass, which can then be used to create some nice focus-change effects with an after effects filter like compound blur.
But zDepthDOF is a mentalray shader, so just for fun I decided to see if I could do the same thing using the standard maya software renderer. It wasn't too difficult and I thought it might be worth sharing. It's a great example of how the maya utility nodes can be used in a shader network.
I'll start with some pictures. I have some cones and a sphere laid out on a ground plane at various distances from the camera, and there is a wall right at the back.
I could use zDepthDOF to render a depth-of-field pass where black means in focus and white means out of focus. If I set the focus distance to be the same as the distance from the camera to the sphere I would get the following render.
If I take both images into after effects and use the depth-of-field render to control blurriness in the compound blur filter, then I get something like this.
So what I need is a maya shader that will give me the same result as the mentalray zDepthDOF.
To help visualize what is happening I made the following diagram.
The gradient extends from the camera's near clipping plane to the far clipping plane. Objects sitting on the focal plane are shaded black. The further an object is from the focal plane the whiter it gets.
I think the best way to understand the shader network I created is to load it into maya and study the connections in the hypershade window.
Here is a snapshot of the shader network (click the image to open a hires version in a new window).
To figure out what color an object should be I need to know how far it is from the camera, and for that I used the samplerInfo utility node. This node has an output called pointCameraZ, which is the distance from the camera of the surface point being rendered. Strangely, the value of pointCameraZ is always a negative number, so I used a multiplyDivide utility to multiply it by -1, thus making it a positive number.
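The sign flip is trivial, but here it is as a plain Python sketch (the sample value is made up, and `point_camera_z` just stands in for the samplerInfo output):

```python
# samplerInfo.pointCameraZ is negative for points in front of the camera
# (maya's camera space looks down the negative z axis), so a
# multiplyDivide node multiplies it by -1 to get a positive distance.
point_camera_z = -7.5             # hypothetical sample from samplerInfo
distance = point_camera_z * -1.0  # what the multiplyDivide node computes
print(distance)  # 7.5
```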
To create the gradient I used two ramp textures. The first ramp is from white to black, and the second from black to white. The first ramp is used for distances from the camera near clipping plane to the focal plane. The second ramp is used for distances from the focal plane to the far clipping plane. The distance is obtained from the samplerInfo node and is used to "look up" the ramp color and a condition node is used to select the right ramp depending on the distance.
Since a ramp texture expects a "look up" value between zero and one I used some setRange nodes to normalize the distance values.
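Putting those pieces together, here is a rough Python sketch of what the network computes for a single sample (ignoring the focus range for now; `set_range` and the ramp functions are my own stand-ins for the maya nodes, not real API calls):

```python
def set_range(value, old_min, old_max, new_min=0.0, new_max=1.0):
    """What a setRange node does: linearly remap value from
    [old_min, old_max] into [new_min, new_max]."""
    t = (value - old_min) / (old_max - old_min)
    return new_min + t * (new_max - new_min)

def ramp_white_to_black(v):
    return 1.0 - v  # first ramp: white at v=0, black at v=1

def ramp_black_to_white(v):
    return v        # second ramp: black at v=0, white at v=1

def depth_shade(dist, near_clip, focal_dist, far_clip):
    """The condition node: pick a ramp depending on which side of
    the focal plane the sample sits."""
    if dist < focal_dist:
        v = set_range(dist, near_clip, focal_dist)  # normalize to 0..1
        return ramp_white_to_black(v)
    v = set_range(dist, focal_dist, far_clip)
    return ramp_black_to_white(v)

# an object on the focal plane shades black, one at a clip plane white
print(depth_shade(10.0, 0.1, 10.0, 100.0))   # 0.0
print(depth_shade(100.0, 0.1, 10.0, 100.0))  # 1.0
```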
The cameraShape node is part of the network since its farClipPlane value is used as an input to one of the setRange nodes.
A distance tool is also used (Create|Measure Tools|Distance Tool). One locator is point-constrained to the camera. The other locator is the one I animate to specify the focal plane. Sometimes I point-constrain this to the object I want to keep focus on, but more often I animate it by hand. The distance output feeds into some plusMinusAverage nodes and some simple math is done before passing some values to the setRange nodes and the condition node.
I also used the second locator as a way of specifying a focus range. This is the width of the black band in the middle of the gradient, and it can be increased or decreased by scaling the locator in the z-axis. Objects within this distance range are in focus. To make it easier to visualise in the viewport I parented a cube to the locator to show me approximately where the focus range is (approximate because it should really be a spherical slice - but close enough to work with).
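Extending the sketch to include the focus range, the plusMinusAverage nodes just offset the band edges either side of the focal distance (again plain Python with made-up names; `focus_range` stands for whatever distance the locator's z scale works out to):

```python
def depth_shade_with_band(dist, near_clip, focal_dist, far_clip, focus_range):
    """Gradient with a black band of width focus_range centred on
    the focal plane."""
    near_edge = focal_dist - focus_range / 2.0  # plusMinusAverage: subtract
    far_edge = focal_dist + focus_range / 2.0   # plusMinusAverage: sum
    if near_edge <= dist <= far_edge:
        return 0.0  # inside the focus range: in focus, shaded black
    if dist < near_edge:
        # white-to-black ramp from the near clip to the band's near edge
        return (near_edge - dist) / (near_edge - near_clip)
    # black-to-white ramp from the band's far edge to the far clip
    return (dist - far_edge) / (far_clip - far_edge)

# everything within one unit either side of the focal plane is black
print(depth_shade_with_band(10.5, 0.1, 10.0, 100.0, 2.0))  # 0.0
```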
The next two images show the viewport snaps.
This shader renders quickly and requires no lights in the scene. It can be used with transparent surfaces using the same technique Andreas Bauer describes for zDepthDOF. It is not as neat as a single-node mentalray shader like zDepthDOF, but it has the advantage of working with the maya software renderer and it shows up in the viewport in texture display mode (so there is a degree of interactivity).
The example scene has two render layers: one for the beauty pass and one for the depth pass. You could export the shader and use it in your own scenes - you would just need to use the connection editor to hook up your own rendering camera in place of the one in my scene.