linear workflow and gamma

david | mentalray, opinions, rendering | Saturday, September 13th, 2008

I'm going to start by saying that I do not consider myself an expert on this subject, but I'm going to write about how I have adapted my methods to deal with the "linear workflow" thing in my day-to-day work.

If you have never heard the term "linear workflow" then you must have been really busy doing something else for the past year, because it has been discussed over and over in forums and blogs. I've done a lot of reading and managed to confuse my thinking on numerous occasions, but lately it all seems to be falling into place and I feel like some kind of born again "linear workflow" convert. And like most new converts, I feel compelled to spread the word.

First off, some background reading that you may find interesting:

Even though each of these links provides expert advice, I still found it confusing. I understand math quite well and I followed the logic as I read, but 5 minutes after I stopped I was confused again. I think much of this problem comes from the fact that I have been rendering and compositing CG stuff for a long while and have become set in my ways. Master Zap wrote this on the CGTalk thread I linked to above:

If you render on a "standard" computer monitor (i.e. an sRGB monitor which has a practical gamma of 2.2 on average) with no regard to gamma anywhere in your workflow (i.e. pretending the monitor has gamma=1, like unfortunately most software defaults to), then when you think you are making something twice as bright, you are actually making it almost 5 times as bright.

This makes even the most trivial math turn out completely wrong. Basically, your renders come out as if 2 + 2 = 10.

This is why highlights blow out unrealistically, why reflections look wrong, why you can't seem to be able to use physically correct lights with a quadratic falloff, and why you have to save everything in comp with a bunch of horrendous dirty tricks like "screening" your speculars back on (Yuk!).
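Zap's "2 + 2 = 10" point is easy to check with a couple of lines of arithmetic. Here's my own sketch of it (the numbers are mine, not his):

```python
# A display with gamma 2.2 turns a stored pixel value v (0..1) into
# light proportional to v ** 2.2.
def display_light(pixel_value, gamma=2.2):
    """Light emitted by a gamma-2.2 monitor for a stored pixel value."""
    return pixel_value ** gamma

v = 0.4
doubled = 0.8  # "twice as bright" in pixel values...

ratio = display_light(doubled) / display_light(v)  # == 2 ** 2.2
print(round(ratio, 2))  # 4.59 -- almost 5x the emitted light, not 2x
```

So any shader or compositing math that assumes pixel values are proportional to light is working with badly skewed numbers unless the gamma is accounted for.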

That last bit about the highlights and reflections sounds a lot like the way I used to work, but like many of us I had adapted to that mathematically incorrect color space. I would add ambient or diffuse-only lights to brighten up the dark areas. I'd use lights with no decay because they seemed to give me the most control over illumination without blowing out. I would carefully adjust the reflectivity settings in my blinn shaders to get them to look balanced against the specular highlights. And in post I would often add some contrast with the curves filter in a nice familiar "S" shape. If I look back at those renderings I still think they look ok.

One of the things I always say to myself is "it doesn't have to be right; it just has to look right" and I have not changed my opinion on that. However, I have learned that by understanding the underlying mysteries of the "linear workflow" and "gamma" I can achieve superior results with less effort.

Ok. Enough history. As it turns out, when I distilled all that information into just the parts I actually needed to know everything became really simple. So here is my non-expert take on the issue.

What we see on a monitor is almost always in the sRGB color space. What comes out of maya or mentalray by default is in a linear color space. When you display linear data on an sRGB display, the darker parts look too dark. Instead of compensating for this by adding more lights, a better solution is to apply a gamma correction to the output of the renderer to convert it to sRGB.

That's it. Technically I'm not being totally accurate with this, but it doesn't matter; it's close enough. And when I said "That's it" I was kind of fibbing. There are some other things you need to know, but they make more sense if you accept that simple idea. I will now explain how I adjusted my workflow to fit.
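That simple idea can be sketched in a few lines. This is my own illustration, approximating sRGB as a pure 2.2 power curve (the real sRGB transfer function is piecewise, but this is the "close enough" version I'm describing):

```python
# Convert a linear render value to a display (approximately sRGB) value
# by applying the inverse gamma. This is the correction applied to the
# renderer's output before it hits the monitor.
def linear_to_display(linear, gamma=2.2):
    return linear ** (1.0 / gamma)

# Dark linear values get lifted considerably for display:
print(round(linear_to_display(0.05), 3))  # 0.256
print(round(linear_to_display(0.5), 3))   # 0.73
```

Notice how a linear value of 0.05 displays at around 0.256 once corrected; without the correction, that same shadow detail is crushed toward black, which is exactly why uncorrected renders look too dark.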

gamma = 2.2

I use the mia_exposure_photographic as a lens shader with its gamma set to 2.2.

[image: attributes_exposure.jpg]

(If you read Floze's tutorial you will see that he approaches this slightly differently. I will not go into it here, but it's worth a mention. Both methods are correct. The end result is the same as long as you understand what you are doing.)

gamma = 0.455

If I set my output gamma to 2.2 then I need to degamma my input fileTexture colors so that the gamma is not applied twice. This is because file textures created in photoshop (or almost any other program or camera) are already in the sRGB color space. A gamma of 0.455 is the inverse of 2.2 (0.455 ≈ 1/2.2).

[image: attributes_gamma.jpg]

This is maya's gammaCorrect node. It gets inserted between the output of my fileTexture node and the original connection. Sometimes I also use one of these gammaCorrect nodes when I just want to enter a color into another attribute, because maya displays color swatches in sRGB, so they need to be degamma'd too. (Though most of the time I just input a darker color into the swatch, rather than complicating my scene file with too many gammaCorrects.)
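The arithmetic behind the 0.455 setting can be sketched as follows. The numbers are my own illustration; as I understand it, maya's gammaCorrect node raises the input to the power 1/gamma, so a setting of 0.455 applies the 2.2 linearization:

```python
# Why the 0.455 de-gamma is needed (my own numbers, not from the post).
texture_srgb = 0.5   # a mid-grey painted in photoshop (already gamma-encoded)
gamma = 2.2

linearized = texture_srgb ** gamma          # gammaCorrect at 0.455: de-gamma
redisplayed = linearized ** (1.0 / gamma)   # lens shader gamma 2.2 on output
print(round(redisplayed, 3))                # 0.5 -- round-trips to the painted grey

# Skip the de-gamma and the 2.2 correction gets applied twice,
# so textures wash out:
double_gamma = texture_srgb ** (1.0 / gamma)
print(round(double_gamma, 3))               # 0.73 -- noticeably too bright
```

The round trip is the whole point: de-gamma on the way in, gamma on the way out, and the texture ends up on screen looking the way it did in photoshop.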

OpenEXR for file textures

An alternative approach to using sRGB file textures with the gammaCorrect in maya is to save your file textures in a format that uses the linear color space by default. OpenEXR has become a popular choice for floating point image data. Lately I have been saving my file textures as 16-bit half floating point OpenEXR with piz compression (have a look at http://www.fnordware.com/ProEXR/). There is a bit of an increase in file size, but not in render time, and there are some benefits to doing blurs and other image processing in a floating point format - but I'll leave that for another discussion. After effects and photoshop display the image correctly in the sRGB colorspace while you work on it, but when it gets saved it is output in linear space. If you enable the OpenEXRLoader.mll plugin in maya's plugin manager then you can work with OpenEXR just like any other format, and since the image data is now linear, there is no need for that gammaCorrect node.

mia_material_x is first choice for shader

This is not actually a linear workflow requirement, but it's in the ballpark, as they say. I'll quote Floze's thoughts on the mia_material_x here:

Unlike the regular Maya shaders, and most of the custom mental ray shaders, it implements physical accuracy, greatly optimized glossy reflections, transparency and translucency, built-in ambient occlusion for detail enhancement of final gather solutions, automatic shadow and photon shading, many optimizations and performance enhancers, and the most important thing is that it’s really easy to use.

My thinking is that by adopting a linear workflow I am moving towards a solution that better represents the way light behaves in the real world. Since the mia_material_x is also attempting to be physically accurate it makes sense to use it.

final gather

Once again, not a requirement for linear workflow, but definitely something that benefits greatly from using one. This was demonstrated well in Rob Nederhorst's tutorial. And I often use it now with 2 FG diffuse bounces (as shown in Floze's tutorial) to really get some nice, natural looking ambience.

floating point and HDR

I mentioned floating point when I wrote about OpenEXR earlier. It seems to be a common misconception that to use a linear workflow you must also render to an HDR floating point format (either 32-bit float or 16-bit half), but that is not really the case. You can if you want to, assuming that your compositing application can deal with it. I use after effects and it will happily import OpenEXR and I can work in a 32-bit floating point project. But this is something I decide on a project-by-project basis.

After effects does not have good tools for working with HDR imagery - it has some, but I don't think they are good enough in most cases - so if I render to floating point I still use an exposure node to keep me in a relatively low dynamic range. And I still use an output gamma of 2.2. (Many would say I should render at gamma=1 and apply the gamma correction in after effects, but I prefer to stick with my gamma=2.2 renders and then just make sure after effects interprets them correctly.)

A welcome side effect of rendering to a low dynamic range is that the adaptive sampling the renderer performs behaves much better and is much less likely to leave the kind of aliasing artifacts around highlights and high contrast edges that you often see in HDR renders. I set up my render exposure so I get a small degree of over-brights (say 10 to 15%), which still gives me some room for adjustment in after effects.

But often, when I don't need the extra flexibility of rendering to floating point, I simply render to standard 8-bit IFFs - which means smaller file sizes and more interactive response when working in after effects.

after thoughts

Well, I tried to explain my own version of a linear workflow without getting into all the technical detail and mathematical explanations that I can never seem to repeat. I know I have probably over-simplified things in places and used some terms very loosely, but I hope the meaning comes through and that somebody will find it useful. At the very least, I have provided some links to people who really do know what they are talking about.

14 Comments

  1. Hi David! Nice post. I think you've certainly gotten it, and mentioned a bunch of stuff I didn't! Likewise, I just posted a long-awaited bit on the details of linear workflow on my blog, and I think it works well to supplement this one. Thanks!

    Andrew

    Comment by MrHooper — September 15, 2008 @ 4:40 am

  2. Hi, David! Awesome post and very informative for us Maya users. I have been doing research on linear workflow and found a lot of information dedicated to various applications. Your post and 3dLight's over on his blog really made my weekend by clarifying the linear workflow in Maya with mental ray.

    Comment by jasonhuang1115 — September 16, 2008 @ 12:05 pm

  3. Andrew (or should I say MrHooper) I appreciate your comments. I've just read your latest "linear workflow" update at http://3dlight.blogspot.com/2008/09/linear-workflow-for-maya-mental-ray.html - very well written.

    Jason, I'm pleased to hear you got something out of it. Thanks for your comments.

    Comment by david — September 17, 2008 @ 12:12 am

  4. Hi David,
    You're the man . . .

    - May I ask you, if you have to unplug the mia_exposure_filter from camera when rendering to openexr? I've done so many tests that I'm just confused.

    - Are you able to mix the mia_material_x passes, a reflection dome with a surface shader assigned, and the ibl node to emit fg and light - within the new passes system?
    Besides, I have to reorder my channels in post - it just renders weird results . . .

    thx (y)

    Comment by mayanic — June 24, 2009 @ 6:04 am

  5. mayanic: openEXR is a floating point format that can be used for high dynamic range images, so it is common for people to disconnect the exposure node when rendering. This gives you the ability to make all your exposure and gamma decisions in your compositing application.
    However I prefer to leave the exposure node in place. I make my exposure choices at render time. I can still adjust things in aftereffects without loss of quality since it is still floating point.

    I can't help with your mia passes question, since I have never used them. Maybe someone else will comment here. Otherwise I'm sure you'll find some info on cgtalk.

    Comment by david — June 25, 2009 @ 12:08 am

  6. David,

    Awesome explanation of how to get started working in a linear workflow. I really wish I coulda been taught this stuff while in school, but who am I kidding. They didn't even mention mental ray, which really pisses me off. Anyways, I'm definitely gunna be re-reading this again and also checking out the other links you supplied. It seems that Andrew's will be next since he seems to have a great deal of knowledge on this topic.

    Thank you to all who have shared their knowledge and shed some light onto this workflow. I now feel smarter thanks to all of you and can not wait to run into someone else having problems so I can help them with all this info and pass it on to them and others.

    Comment by smokedogg — July 18, 2009 @ 7:10 am

  7. Nice feedback smokedogg. I appreciate your comment.

    I thought I would add something I read on cgtalk a few weeks back. Leif Pederson (leif3d) was writing about frustrations trying to get colleagues to adopt a linear workflow and he made a nice comparison...

    (quote)
    ..a couple of guys at work think I'm crazy when I tell them that they should map their wacom tablets to the aspect of their monitor, but then they tell me: "But we'll lose tablet space?!" Then I tell them: "Well, you realize that when you draw a circle, it's not a circle, right? It's an oval..." Then they turn to their monitor and draw a perfect circle, while looking at me like I'm a jackass. They had no idea that their hand motion is compensating for what they are seeing on screen.
    At this point I realized that getting used to working a certain way is stronger than what's wrong or right, and depending on your personality and goals, people will accept this reality as correct unless necessity dictates otherwise. For others, being anal comes at a cost of sleepless nights and much more stress.
    I fit in the second category.
    (endquote)

    I'm there with you Leif.

    Full post is at http://forums.cgsociety.org/showpost.php?p=5963988&postcount=21

    Comment by david — July 18, 2009 @ 3:41 pm

  8. David, so what is the "physically correct" way to start (in my case, re-do)? With my output gamma at .45 and the exposure lens at 1 or 2.2? Also, when bringing in a texture from photoshop, is that where I adjust the exposure gamma settings? And by doing that will I not need to connect a gammaCorrect node? I ask because when I try to render a shot of my interior building, my walls and pretty much everything else, excluding the areas where my large windows are, come out completely dark. Or do I just have to adjust the cm2 in the mia photographic exposure? Being relatively new to this workflow, I'm just trying to make sure I'm correct. It just seems like an over-abundance of information to comprehend all at once, making me feel like pulling my hair out.

    Thanks for any input you have given and could give me.

    Comment by smokedogg — July 19, 2009 @ 5:13 am

  9. Sorry, but I forgot to mention something regarding file formats. I believe I read the answer in someone's blog or post, but I'm not really sure. Subject: after creating the desired texture in photoshop, do you want to save it as an .exr? If so, can you then convert it using imf_copy to make it a .map file? Or do you want to keep it as an .exr?
    Thank you.

    Comment by smokedogg — July 19, 2009 @ 7:20 am

  10. smokedogg: 2nd answer first. If you are working on an LDR 8-bit image in photoshop, there is little to be gained by saving as EXR. But if it is a 32-bit HDR then EXR is a good choice. Whether you convert to .map depends on if you are having memory issues with your render. If you are, then the .map format may help, but the file size will increase massively which may cause network slowness depending on your hardware.

    1st answer. Leave your hair where it is. You do not need to learn everything at the start. It is better just to start anyhow and modify your methods as you learn. Part of the problem you face is that there are so many possible workflows. Not all of them are "correct" and even the ones that are may still not work when applied to your production pipeline.

    The best thing you can do is get a good understanding of what gamma is and why it matters. Once you have that understanding, you then have to figure out how to apply it when working with mentalray for maya, where the concept of linear workflow is not well implemented and inconsistent. This may change in a future release, but my guess is that people will still be confused. After all, mentalray is only one piece of the jigsaw puzzle. If you understand how gamma works, then you can logically work through the steps of your production process and apply it where it is needed.

    Good luck.

    Comment by david — July 20, 2009 @ 10:55 pm

  11. Thanks David. Do you know if my monitor(s) / laptop should be color calibrated before I even start working with anything? Is yours? If so, what did you use to calibrate it?

    I also came across a pretty detailed description of another linear workflow. Although it's for the maya software renderer and he hasn't updated his mental ray section, I think it may be pretty useful to some. Check it out if you get a chance. http://egostudios.blogspot.com/2008/12/basic-gamma-corrected-workflow-with.html

    Thanks again!

    Oh, for whatever reason I cannot seem to find how to save OpenEXR in photoshop CS3. Is it a plugin I need to get? ProEXR?

    Comment by smokedogg — July 22, 2009 @ 5:42 am

  12. smokedogg: The link you posted contains good info. The only thing I'd point out is the render-settings Color/Compositing Gamma Correction attribute shown does not exist for mentalray. However there is a render-settings framebuffer gamma which does the same job but, confusingly, you need to set it to 0.455 (not 2.2).

    I only calibrate my monitors approximately - using image based methods - nothing fancy. My LCD displays are not good enough for accurate color work anyway, but I also have a decklink card output to a Sony CRT and this is how I make my final decisions. (My work is for TV - this setup probably would not cut it for film).

    As far as I know, OpenEXR is a standard plugin with recent versions of Photoshop. To save EXR in photoshop you need to be in 32-bit mode.
    ProEXR is worth looking into since it has better compression and other options. The aftereffects plugin is free, so give it a try.

    Comment by david — July 22, 2009 @ 10:11 pm

  13. Hi David,

    I just had a quick question for you; it may be very simple, so I hope I am not wasting your time... When I save out a 32-bit EXR or TIFF the alpha channel is not included in the channels of the file (it resides within the actual image layer when opening it in Photoshop). Now when I go to add a background, the reflection is noisy and doesn't include the same amount of colour as if I were to set up the gamma etc. correctly for previewing in Maya and save the renderview image as a TIFF. Even with "Pass custom alpha" the reflection / alpha isn't keyed properly and no channels exist in Photoshop. I am using Maya 2008 Ext 2...

    Now if I were to import the file into After Effects and pre-multiply the EXR with white, the file is handled fine with the image reflections and alpha being interpreted correctly, however the images are for print and AE isn't that great for handling larger resolutions and exporting etc.

    Perhaps you can shed some light on this problem for me please, I have looked everywhere! My only solution for the time being is to have the FrameBuffer set to 32bit RGBA, degamma buffer to 0.454, add a lens shader of Gamma simple exposure of 2.2 and save the renderview as a TIFF (Obviously this is only 8bit from renderview though). Saving this way and opening in Photoshop all the alpha channels are present and reflections with colour are keyed properly but only using renderview saved TIFF...

    Thank you!
    Domenick

    Comment by Domenick — July 30, 2009 @ 1:08 pm

  14. Domenick: it's a rather long question, and I need to see some example images to be able to help you properly. I have emailed you some preliminary questions to get us started.

    Comment by david — July 30, 2009 @ 10:51 pm
