
Tone mapping

Discussion in 'Racer' started by Mr Whippy, Oct 12, 2010.

  1. I currently think the system used is rather limited.

    It appears to try to give light detail across a massively wide dynamic range, and as such, imo, fails to give sufficient realism.

    Has anyone else played with the tone mapping method?

    I'm currently trying to implement a 4-stop-wide (4 EV), digital-camera-style system of tone mapping, but I'm a little unsure how it all works right now.

    c=colour.rgb*exposure seems to give a linear representation...

    ...what we might see with an SLR camera, but I'm not sure. Contrast seems much higher and realism feels better. It also shows up issues with specular intensities (they become a lot more powerful than may be ideal on most materials).


    I'm thinking that we need a nicely calibrated ToneMap method that tries to copy a real camera's dynamic range, to even start to tweak materials in-game (diffuse, ambient, specular levels etc)...

    The logarithmic method "ToneMapHDR" seemed to be lacking realism. It gave a good presentation of the dynamic range of the scene, but it didn't feel all that real to me...



    Hmmmm, just starting to tinker, but I'd like to maybe have a system where we have about 4 linear stops, then about 4 stops at each end that knee out (exponentially) to black or white, ending up close to the ~13 stops the human eye is capable of seeing at any given 'glance' of a scene...
    The current system appears to be up to 25 stops wide, possibly more, and it just makes the scene feel flat sometimes! Speculars in particular seem to just become very subtle with such a wide dynamic range squished into the 255 levels of intensity we ultimately have...


    Hmmmmm

    Dave
     
  2. You only mention cameras; it's unclear to me whether Racer only simulates cameras or whether it should also simulate the human eye. For example, when driving in cockpit view, I think a human-eye view should be simulated, but when using bumper view, a camera should be simulated. Or are they more or less the same thing?

    As for tone mapping, there is also the desaturation effect at night, which is quite broken because it desaturates regardless of the intensity of the pixel, i.e. a bright red brake light at night becomes unnaturally desaturated.

    Dave, btw, it would be nice if you posted a screen or two of your results (whether good or bad) more frequently - you could make your posts shorter, because a picture is worth a thousand words or so :D (not that your posts are too long).
     
  3. I'll post some pics :D


    As per what you were saying though.

    Most DSLR cameras cover about 4 stops. Video maybe a bit more. They often use a 'knee', or little curves, to soften each end of the dynamic range they cover, so we don't get nice tones suddenly running into white or black, but retain some info going into the blown-out areas.

    The human eye, as said, is probably about 12-13 stops at any given time. It's impossible to say really, because our eyes just build a mental image, and probably do all manner of exposure compensation in our minds :) But I'd say that a dynamic range of about 12 stops squashed into our on-screen 255 intensity levels is probably about right for our eyes. Probably with some knee used at each end too, to soften things, but much softer than what cameras use (ie, knee over a few stops at each end).


    I agree, we should have camera_type= for most cameras in-sim. Then define cameras somewhere (fs_filter perhaps?)...

    That way we can have depth of field, vignetting, eye, blur and all those things defined per camera, and then when we define cams for tracks or cars they can be specific types :)



    Desaturation seems to be OK here at night with high emission on brake lights etc (ie, models with a red texture and emission 30 30 30, for example).
    What I do think is a problem, though, is that it's too aggressive too early. I think it's starting to desaturate twice as soon as it should. OK, maybe technically it's accurate or something, but again my eyes are compensating for that and re-saturating things for me, to a certain degree... of course there is no denying that at night things do get more desaturated!

    Maybe you just don't have enough emission on those lit objects, making them get sampled as low-intensity and thus desaturated? You have to go quite bright to get out of the desaturation range it seems.



    Main issue right now is coding my ideas. I've worked out how to get the scene_intensity to an EV reading, which can then be used to pick an ideal exposure (18% grey), which in turn could centre the 12-stop range (ie, if the ideal exposure is 14 EV, then we render the range 20 EV to 8 EV, with maybe 18 EV to 10 EV handled linearly and the last 2 stops at each end knee'd into the ranges 255>235 and 0>20 RGB, for example).
    Then as the ideal exposure moves up and down based on scene exposure, that linear range moves around too.

    Programmers wouldn't have an issue with it. I do :D :D

    Dave
     
  4. Stereo

    Stereo
    Premium Member

    I think it would be easier to skip the 0~255 thing, and look at output in terms of 0.0~1.0 range (which is what it gets truncated to when rendering anyway).

    I don't know if 18% grey corresponds to 0.18 or if that's post-gamma correction, but if it does, then there's a direct correspondence to base the exposure on.

    In terms of pushing a larger range into the smaller visible one 'linearly', it goes something like
    1) choose a centerpoint to scale down, eg. if you want 18% grey at 8000 lux, 1/8000 = 0.000125 -> I think Racer's using kLux though, so this would actually only be 1/8 = 0.125
    2) multiply all pixels by that, so anything at the centerpoint is = 1.0
    3) since EV is a logarithmic thing, to fit more stops into 0 ~ 1, you need to take the exponent of the values. Eg. exp(pixel, 2). Ideally a monitor would show 1 ~ 256 intensity levels which is 8 EV?, so to stick 16 in you need to double that.
    4) drop the centerpoint down to some "average" grey by just multiplying by the intensity of that colour (0.18?) Now your centered 1.0 is at the exact grey shade you wanted, and the rest extends on either side of it a little exaggerated.

    By putting the centerpoint in like this, you get one reference point that should show up on the monitor exactly as expected. If you used a LDR camera, you'd just skip step 3?

    If you want the elbows on either side, you change step 3, so it's exp(pixel,f(pixel)) where f(pixel) is equal to the linear amount (probably a little less than 2) around 1.0, but tapers upwards on either side (higher numbers = more stops fit in)
     
  5. Haha, you've said what I want to do, just a bit more programmatically. But it's still a mile from usable for me :D

    I think what I'm saying is that the main stop range around the ideal exposure needs to feel linear, say 6 stops. Then knee off over a wide range (so not linear over that range), with the total coverage being around 12 stops maybe.


    The problem with the current system seems to be that it's covering a massive EV range and squashing it ALL into the 1-256 intensity range we have. A digital camera will get around 4 stops, the human eye maybe, perceptually, 12 stops... I'm sure Racer is getting a range of 20 EV in some scenes.


    I'm not sure if running the mapping linearly from the HDR values to LDR ones via just the exposure coefficient gives a linear representation or not.

    Hmmm, just very confused. But I think we need to err toward defining a tonemap to copy an SLR camera (4 stops or so, with a little bit of knee), and then a human eye, basically 12 stops wide with a much wider knee out of a 6-stop linear range in the middle?!


    Simply tinkering with the tonemapping reveals how different materials and lighting setups look, so until we have a tonemap that does something very close to something real, it's hard to pin things down (ie, specular levels can look really intense with other tonemaps, while with the current default they are quite muted!)

    If we could get a tonemap to copy an SLR with 4 stops right now for example, then I can compare photos with my Racer scene and tweak values side by side. Right now I'm tweaking things to look right but if the tone mapping changes then it all looks wrong... not ideal :D


    Dave
     
  6. Stereo

    Stereo
    Premium Member

    I'm still trying to figure out how SLR cameras actually work out tonemapping. I suspect the 'linear' method (multiply by exposure) is in need of gamma correction at 0.45, which is something cameras do internally, to prepare it for the monitor (which puts that to 2.2, and thus gives linear output).

    Took a while to find this so I'm linking it to make sure I don't lose it.
    http://www.covingtoninnovations.com/dslr/Curves.html

    Need to grab my point'n'shoot camera, put some batteries in it, and compare it to my dSLR to see how different they actually are, as well.


    I guess this page points out why the logarithmic method is ok - it's almost the equivalent of the gamma correction anyway. The bases may be different though, which would lead to too many EV stops in Racer I suppose.

    Matching the human eye properly is difficult - it doesn't expose the entire scene uniformly, it's all local differences. In reference to HDR images, tonemapping usually means this - location dependent exposure so the features are visible everywhere even though the camera can't capture them at a single setting. It's not just a matter of bumping up the EV - I think that's creating some of the weird looking images. Not sure how fast these processes are but it might be hard on the FPS.
     
  7. That link is good.

    It's showing a 10 EV range on the first chart, but the linear part is covering about 4-5 stops, probably about right for an SLR... but I didn't know that the knee areas (seemingly called shoulder and toe on still cameras!?) were so wide... but this is talking about film response, which I gather may be wider than that of a digital sensor, softer at each end.


    Looks like yet another big can of worms/learning here. I was just worried that the log method Ruud is currently using as the default may not be ideal.

    It's something a fair way out of my depth of understanding, and something I would probably prefer not to have to learn too much about, when content creation is more my thing. I've found specular settings that work nicely today, but tone mapping has a big impact on them, hence the little bit of pondering, since finding nice values can be a time-consuming element :D


    What *would* be nice is a photomode type thing. I know it sounds silly, but if we can make a full-screen shader that copies the scene to what would be a camera exposure as authentically as possible, it'd be a fantastic check resource to make sure things were the right kinds of values :D

    Dave
     
  8. Ruud

    Ruud
    RACER Developer

    I'm looking into this; the Uncharted function seems to miss something; it does not look like a curve near 0:

    (x*(6.2*x+0.5))/(x*(6.2*x+1.7)+0.06) gives:

    http://www.wolframalpha.com/input/?i=%28x*%286.2*x%2B0.5%29%29%2F%28x*%286.2*x%2B1.7%29%2B0.06%29
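    The quoted curve is easy to check numerically as a C function (note: as usually presented, this particular variant has the display gamma folded into its constants, so no gamma correction is applied afterwards):

```c
/* The filmic approximation quoted above, as a function.
   f(0) = 0, and it rises very steeply near 0 (slope 0.5/0.06),
   which is the behaviour being discussed. */
double filmic(double x)
{
    return (x * (6.2 * x + 0.5)) / (x * (6.2 * x + 1.7) + 0.06);
}
```

    Evaluating it near 0 shows the steep rise: filmic(0.01) is already above 0.07, i.e. the curve shoots up almost linearly from black rather than showing a visible toe on a linear plot.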

    An alternative is atan(), since that is linear around 0. With a bit of shifting vertically (to normalize) and horizontally (exposure point) you get, in hdr.cg:

    float4 ToneMapHDR(float4 color,float exposure)
    // Filmic tonemapping (atan variant)
    {
      float3 c=color.rgb;

      // Filmic black-level subtraction, currently disabled:
      //float3 x=max(float3(0,0,0),c-float3(0.004,0.004,0.004));
      float3 x=c;

      // Marno function; atan() is linear around 0, shifted so the
      // exposure point maps to mid-grey (0.5)
      const float pi=3.14159265;
      float middleCol=exposure;
      c=(atan(0.5*(x-middleCol))/pi)+0.5;

      // Gamma correction?
      const float gamma=2.2;
      //c=pow(c,gamma);
      return float4(c,1);
    }

    Then turn off auto exposure, and tweak 'exposure' directly with the console command ('exposure 0.5' for example). Gives quite a sharp look, see the attached picture (also demonstrating btw the mix shader effect of sand/grass that we use).

    I'm looking into getting the more general toe & shoulder function from Uncharted2 working. That should give a bit of linear around the exposure point, then start squeezing around the black & white overblow regions.
     

    Attached Files:

  9. Ruud

    Ruud
    RACER Developer

    Here's a shader that does Uncharted2; I only get it to look good if I use:
    - time 1930 or around time 0800
    - exposure 10

    using the attached hdr.cg (rename the txt file). See the jpeg for a bit of the look.
    The problem, however, remains how to reduce the lighting when using 'time 1400', for example; I get too much exposure. shadowmapping.txt is attached although I don't think you need it.

    I cross-checked it with the original v0.8.21 shadowmapping tonemapper, and it definitely keeps more detail in highlights & shadows, while keeping the rest a bit more linear.
     

    Attached Files:

  10. That is a good summation of my feelings.

    It looks like you have had some luck here then. I'll try it out and see what happens.

    PS, is there no way to have the auto-exposure determine the ideal scene exposure (18% grey), and then always aim for that? Ie, the linear stretch of exposure before the knee/shoulder, is always around the ideal one for the current scene?

    I also noticed that the auto-exposure step is linked to the exposure used. Ie, if the steps are done 10 times per second, then the exposure changes 10 times per second.
    Could you not check scene exposure every half second, and then just have exposure always aim to move towards the last check value at a certain speed?


    All good though. Would be cool to just simulate a camera here and see if things look right at all :D (a check that will confirm values are half right as we can take a real pic and a Racer pic and assume they should be similar)

    Dave
     
  11. Ruud

    Ruud
    RACER Developer

    I've gotten Uncharted2 tonemapping to work correctly, with a nice link: http://filmicgames.com/archives/75/comment-page-1

    I'll see if I can detach the sampling and integration of the exposure. No attachments yet, since I have to see how this all relates now to new exposure values (and the exposure_factor).

    The link http://imdoingitwrong.wordpress.com/2010/08/19/why-reinhard-desaturates-my-blacks-3/ also tries to undo John Hable's filmic tone operator where Reinhard is improved a bit; I tried that (and that also now works) but the comments state that RGB should be modified independently, not luminance. It does give quite a bit of a different (more saturated) look.
    Interesting reads, a programmer on God of War III comments there for example. Don't you just love the net. :)
     
  12. Ruud

    Ruud
    RACER Developer

    I'll read that in a bit.
    I've attached hdr.cg which does Reinhard and Uncharted2. It's now set to UC2 which I like the best (Reinhard is still a bit flat). I think I'll keep UC2 tonemapping for Racer v0.9.

    You only need the hdr.cg and constants.cg files (both in data/renderer/common) and in racer.ini change auto_exposure.gradient to around 0.4 (I think it's set to 1.0) to avoid exposure getting too high.
    It's a bit flat by default, a bit GT5-ish even I think. Depending on your monitor I now put a 'gamma' in constants.cg which is used by the UC2 tonemapper. Defaulting to 1.0 (gamma correction is a better word), a value around 1.5 will give more contrast. It's all about taste and your monitor; we use Mac 30" and custom 42" screens in our simulators and those have entirely different gamma and colors.
    I do feel that higher gammas go towards the Codemasters look (and then you'd also need to increase saturation a bit or generally color-correct) and that gamma=1.0 (in constants.cg that is; there is more gamma code inside the tonemapper) is a bit more the GT5 look.

    An image is on my blog (the 458 on Carlswood). Things do map colors around a bit in the UC2 tonemapper, but according to what I've read, this is exactly what film does. So it's a bit of a look/style rather than entirely real life, but it still looks good. Then the rest is artistic freedom...

     

    Attached Files:

  13. Looks good here.

    A few shadow tweaks again to make them work ok, and it's spot on!

    Set gamma correction to 1.25 to just give it a bit of a 'boost' in the old contrast and depth, and it feels quite nice through various TOD's...


    Back to work on my terrain!

    Dave
     
  14. @Ruud: The skydome in your image looks really good. Could never get that
    kind of clarity in mine, must be a GPU issue. Wrote a post on it, but seeing your
    image kind of makes that post bogus.. at least to a certain degree..

    The changes to hdr and constants were remarkable.. the light is more
    evenly distributed and not so strong, which also makes the ground texture
    less saturated. VERY good improvement indeed.
     
  15. Looks alright here too, at varied times of day even. Much more natural looking.
     
  16. Ruud:
    The constants fouled up my shadows and made everything way too bright! Had to adjust the variables in constants and racer.ini, still not good. Just try shadows on a long fence that goes East/West and one that goes North/South and see that they produce two different shadows.

    A track should look the same with/without autoexposure, imho!
     
  17. Ruud

    Ruud
    RACER Developer

    Don't be fooled by the screenshots; what you wrote about near-horizon clouds (or rather, texture mapping) is totally right. My Carlswood skydome is a bit awkward; not mapped nicely at the edges. Of course, it depends a bit on the texture you use to map the dome.
    I'm just waiting for Dave to create a nice new default track to semi-drop Carlswood. ;-)

    As for the HDR, it does look softer, nicer. At the moment I think that is about done for the future v0.9.
     
  18. Haha, working as fast as I can, on two tracks!

    Ummm...

    I'll email you tonight to discuss what I am doing and I'm happy for you to give pointers on what you think is worth me pursuing first and foremost to aid the future of Racer for now...
    I'll also probably be best speaking to Boomer and DavidI to get good materials in them, and some of the texturing methodologies applied to their best too!

    I sooooo wish there were 48hrs in a day, then I could have a 'normal' day and then a Racer day at the end of it... :D

    Dave
     
  19. Mr. Whippy,
    While updating an old track I found a way to merge the edges of two textures such as grass to dirt. It gives a good transition from one to the other instead of having straight harsh edges. It enables scaling the textures to match and get a good result.

    I used the mix shader that Ruud used in the "Blend" track he made and has on http://www.racer.nl/tutorial/shading_tracks.htm.