I'm going to start by saying that I do not consider myself an expert on this subject, but I'm going to write about how I have adapted my methods to deal with the "linear workflow" thing in my day-to-day work.
If you have never heard the term "linear workflow" then you must have been really busy doing something else for the past year, because it has been discussed over and over in forums and blogs. I've done a lot of reading and managed to confuse my thinking on numerous occasions, but lately it all seems to be falling into place and I feel like some kind of born again "linear workflow" convert. And like most new converts, I feel compelled to spread the word.
First off, some background reading that you may find interesting:
- 3dLight - Andrew Wiedenhammer intros linear workflow
- 3dLight - The full story
- Rob Nederhorst discusses linear workflow and gamma
- CGTalk thread - "Gamma correction - do you care?"
- Floze Light and Shade tutorial (look at "a note on color space")
- mymentalray wiki - gamma
- Bill Spitzak - Digital Domain - linear vs sRGB
Even though each of these links provides expert advice, I still found it confusing. I understand math quite well and I followed the logic as I read it, but 5 minutes after I stopped I was confused again. I think that much of this problem comes from the fact that I have been rendering and compositing CG stuff for a long while and I have become set in my ways. Master Zap wrote this on the CGTalk thread that I linked to above:
If you render on a "standard" computer monitor (i.e. an sRGB monitor which has a practical gamma of 2.2 on average) with no regard to gamma anywhere in your workflow (i.e. pretending the monitor has gamma=1, like unfortunately most software defaults to), then when you think you are making something twice as bright, you are actually making it almost 5 times as bright.
This makes even the most trivial math turn out completely wrong. Basically, your renders come out as if 2 + 2 = 10.
This is why highlights blow out unrealistically, why reflections look wrong, why you can't seem to be able to use physically correct lights with a quadratic falloff, and why you have to save everything in comp with a bunch of horrendous dirty tricks like "screening" your speculars back on (Yuk!).
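Zap's numbers are easy to check with a couple of lines of Python (my own illustration, not from his post):

```python
# On an uncorrected sRGB monitor (practical gamma ~2.2), a pixel value you
# doubled is displayed at roughly 2**2.2 times the light, not 2 times.
displayed_ratio = 2 ** 2.2
print(round(displayed_ratio, 2))  # ~4.59, i.e. "almost 5 times as bright"
```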
That last bit about the highlights and reflections sounds a lot like the way I used to work, but like many of us I had adapted to that mathematically incorrect color space. I would add ambient or diffuse-only lights to brighten up the dark areas. I'd use lights with no decay because they seemed to give me the most control over illumination without blowing out. I would carefully adjust reflectivity settings in my Blinn shaders to get them to look balanced against the specular highlights. And in post I would often add some contrast with the curves filter in a nice familiar "S" shape. If I look back at those renderings I still think they look ok.
One of the things I always say to myself is "it doesn't have to be right; it just has to look right" and I have not changed my opinion on that. However I have learned that by understanding the underlying mysteries of the "linear workflow" and "gamma" I can achieve superior results with less effort.
Ok. Enough history. As it turns out, when I distilled all that information into just the parts I actually needed to know everything became really simple. So here is my non-expert take on the issue.
What we see on a monitor is almost always in the sRGB color space. What comes out of Maya or mental ray by default is in a linear color space. When you display linear data on an sRGB display, the darker parts look too dark. Instead of compensating for this by adding more lights, a better solution is to apply a gamma correction to the output of the renderer to convert it to sRGB.
That's it. Technically I'm not being totally accurate with this, but it doesn't matter; it's close enough. And when I said "That's it" I was kind of fibbing. There are some other things you need to know, but they make more sense if you accept that simple idea. I will now explain how I adjusted my workflow to fit.
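In Python terms, that simple idea boils down to something like this sketch (the function name is mine, and it uses the simple 2.2 approximation rather than the exact sRGB curve):

```python
def linear_to_display(value, gamma=2.2):
    """Encode a linear-light value for display on a ~gamma-2.2 monitor."""
    return value ** (1.0 / gamma)

# A linear mid-grey of about 0.218 displays as roughly 0.5 after correction;
# shown uncorrected, the same pixel would look far too dark.
print(round(linear_to_display(0.218), 3))  # ~0.5
```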
gamma = 2.2
I use the mia_exposure_photographic as a lens shader with its gamma set to 2.2.
(If you read Floze's tutorial you will see that he approaches this slightly differently. I will not go into it here, but it's worth a mention. Both methods are correct. The end result is the same as long as you understand what you are doing.)
gamma = 0.455
If I set my output gamma to 2.2 then I need to degamma my input fileTexture colors so that the gamma is not applied twice. This is because file textures created in Photoshop (or almost any other program or camera) are already in the sRGB color space. A gamma of 0.455 is the inverse of 2.2 (1/2.2 ≈ 0.455).
This is Maya's gammaCorrect node. It gets inserted between the output of my fileTexture node and the original connection. Sometimes I also use one of these gammaCorrect nodes when I just want to enter a color into another attribute, because Maya displays color swatches in sRGB, so they need to be degamma'd too. (Though most of the time I just input a darker color into the swatch, rather than complicating my scene file with too many gammaCorrects.)
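Here is the round trip in Python terms (a sketch with my own function names; the gammaCorrect node set to 0.455 performs the degamma step, since 1/2.2 ≈ 0.455):

```python
def srgb_texture_to_linear(value):
    """Undo the ~2.2 display encoding baked into an sRGB file texture.
    This is the degamma step a gammaCorrect node at 0.455 performs."""
    return value ** 2.2

def linear_to_display(value):
    """Re-apply the 2.2 gamma on the way out (the lens shader's job)."""
    return value ** (1.0 / 2.2)

# Degamma the texture, render linearly, re-encode on output:
# the texture colors end up looking the same, not double-corrected.
v = 0.73
round_trip = linear_to_display(srgb_texture_to_linear(v))
print(abs(round_trip - v) < 1e-9)  # True
```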
OpenEXR for file textures
An alternative approach to using sRGB file textures with the gammaCorrect node in Maya is to save your file textures in a format that uses the linear color space by default. OpenEXR has become a popular choice for floating point image data. Lately I have been saving my file textures as 16-bit half floating point OpenEXR with PIZ compression (have a look at http://www.fnordware.com/ProEXR/). There is a bit of an increase in file size, but not in render time, and there are some benefits to doing blurs and other image processing in a floating point format - but I'll leave that for another discussion. After Effects and Photoshop display the image correctly in the sRGB color space while you work on it, but when it gets saved it is output in linear space. If you enable the OpenEXRLoader.mll plugin in Maya's Plug-in Manager then you can work with OpenEXR just like any other format, and since the image data is linear now, there is no need for that gammaCorrect node.
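If you are wondering how much precision those 16-bit half floats actually keep, Python's struct module can round-trip a value through the half format (my own illustration; OpenEXR's "half" uses the same 16-bit float layout):

```python
import struct

def to_half_and_back(x):
    """Round-trip a float through 16-bit half precision."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Linear-light texture values survive half precision with tiny relative error,
# which is why half EXRs are fine for most texture work.
v = 0.2183
h = to_half_and_back(v)
print(abs(h - v) / v < 0.001)  # True: roughly 3 decimal digits survive
```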
mia_material_x is first choice for shader
This is not actually a linear workflow requirement, but it's in the ballpark, as they say. I'll quote Floze's thoughts on mia_material_x here:
Unlike the regular Maya shaders, and most of the custom mental ray shaders, it implements physical accuracy, greatly optimized glossy reflections, transparency and translucency, built-in ambient occlusion for detail enhancement of final gather solutions, automatic shadow and photon shading, many optimizations and performance enhancers, and the most important thing is that it’s really easy to use.
My thinking is that by adopting a linear workflow I am moving towards a solution that better represents the way light behaves in the real world. Since the mia_material_x is also attempting to be physically accurate it makes sense to use it.
final gather
Once again, not a requirement for linear workflow, but definitely something that benefits greatly from using a linear workflow. This was demonstrated well in Rob Nederhorst's tutorial. And I am often using it now with 2 FG diffuse bounces (as shown in Floze's tutorial) to really get some nice natural looking ambience.
floating point and HDR
I mentioned floating point when I wrote about OpenEXR earlier. It seems a common misconception that to use a linear workflow you must also render to an HDR floating point format (either 32-bit float or 16-bit half), but that is not really the case. You can if you want to, assuming that your compositing application can deal with it. I use After Effects and it will happily import OpenEXR, and I can work in a 32-bit floating point project. But this is something I decide on a project by project basis.
After Effects does not have good tools for working with HDR imagery - it has some, but I don't think they are good enough in most cases - so if I render to floating point I still use an exposure node to keep me in a relatively low dynamic range. And I still use an output gamma of 2.2 (many would say I should render at gamma=1 and apply the gamma correction in After Effects, but I prefer to stick with my gamma=2.2 renders and then just make sure After Effects interprets them correctly).
A welcome side effect of rendering to a low dynamic range is that the adaptive sampling that the renderer performs behaves much better and is much less likely to leave the kind of aliasing artifacts around highlights and high contrast edges that you often see in HDR renders. I set up my render exposure so I get a small degree of overbrights (say 10 to 15%), which still gives me some room for adjustment in After Effects.
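To make that exposure idea concrete, here is a toy version of the exposure-then-gamma step in Python (this is not mia_exposure_photographic's real formula, just the shape of the idea, with made-up numbers):

```python
def simple_tone_map(linear_value, exposure=1.0, gamma=2.2):
    """Toy lens shader: scale by exposure, then gamma-encode for display."""
    return (linear_value * exposure) ** (1.0 / gamma)

# Pick the exposure so the brightest scene values land just above 1.0,
# leaving a small margin of overbrights for grading in comp.
bright = 1.3            # a linear highlight value from the scene
exposure = 1.0 / 1.15   # aim for roughly 15% over white at most
toned = simple_tone_map(bright, exposure)
print(1.0 < toned < 1.15)  # True: a little headroom, but not wildly HDR
```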
But often, when I don't need the extra flexibility of rendering to floating point, I simply render to standard 8-bit IFFs - which means smaller file sizes and more interactive response when working in After Effects.
Well, I tried to explain my own version of a linear workflow without getting into all the technical detail and mathematical explanations that I can never seem to repeat. I know I have probably over-simplified things in places and used some terms very loosely, but I hope the meaning comes through and that somebody will find it useful. At the very least, I have provided some links to people who really do know what they are talking about.