Ambient occlusion, is it possible?

If you want to push in extra textures for testing purposes, TEXUNITs carry over from the most recent thing rendered (the car) if the FS pass is not using them, so you can use layer7 in a car material to put something in the otherwise unused TEXUNIT7, such as a random normal map.

You might be able to do it with layer1 too, but I'm not sure what order Racer renders car materials in. With the velocity map disabled, you get the Lamborghini's bumpmap in there.

I'm checking out some SIGGRAPH papers on SSAO and screen-space GI now.
 
Hahaha, I was wondering why there was a bumpmap popping up randomly when I was trying to find the depth buffer...

That's the GI I was going to try to implement; might have a crack at it after work tonight.
The downside is that it isn't physically accurate, more of a fudge to make things look "right", but I think it looks loads better.
Another downside is that only visible objects bleed colour, but I think in most cases that should be okay.

It also includes some edge-finding stuff for the SSAO which should improve the look. I noticed on my test car that there was some horrible ghosting where some extreme normals were occurring (i.e. the rear diffuser) - I'm hoping this will solve those issues!

Code is here if anyone wants to have a crack within the next 8 or so hours:
http://www.gamedev.net/topic/517130-my-new-ssao--some-help/page__p__4362114#entry4362114
 
It's subtle.

We start getting into physically iffy ground with that implementation. IRL colour bleed is actually a reflection of coloured light, so if we have a reflective surface reflecting a red wall, we're going to get more red via this system too.

It all gets messy fast when mixing lighting systems IMO, which is the last thing we need when Racer looks so nice now, thanks to using real values everywhere we can :)

Right now Racer looks very nice; with AO it'll look excellent and lighten the author workload!

Not sure I care much for GI; my next realism bugbear is getting these envmap cube maps either in HDR, or better yet actual mini live envmaps with configurable res, update rate, track xyz coords, perhaps a blur...


A more elegant lighting system generally would be good for getting GI in there, but mostly I'm thinking of things that speed up an author's content creation, while also reducing texture lookup loads because shading doesn't need to be considered as much...

I.e., I'd prefer AO to use the extra power the "GI" used up and get better AO... or a decent motion blur perhaps, since it's amazing how much that can blur out the need for so much trackside detail!


Hmmmmmm

Dave
 
Heh, you're sort of contradicting yourself - the SSAO has (AFAIK) no physically based variables, only fudged ones (such as radius etc.), so if you wanted to implement only things with physically correct values we'd also need a full BRDF, etc.

Just my 2c, I think even a little light bounce (even if it is just faked) would go a long way to selling the scene.

Though I was watching a video of Forza 4 last night and the motion blur just adds SO much to the perceived realism. That's probably my biggest gripe at the moment. Yes we have the alpha of the framebuffer (which I think I noticed has been disabled in the latest version?) but it's not the same as in a video like this:

OT, however - if Ruud's listening:
Can the replay system PLEASE get an overhaul? That video has such a nice GUI.
 
SSGI is probably a great basic approximation of realtime GI for Lambertian surfaces, but most of the materials Racer renders aren't really that type... especially since we focus on your car, which is one big shiny object with complex reflection properties :D

(i.e., a red car and a black car will both reflect some light at glancing angles, white on the black car and red-tinted on the red car... at direct angles the black car will absorb almost all the light, but the red car will cast off a lot of red light. A different response as the incident angle changes = bad news. So it might look nice some of the time, but totally wrong at others, which makes it dangerous IMO; AO generally doesn't do much that is dangerous, I don't think.)

If we could isolate the GI to certain surfaces then it might be better, but I guess it's just using the AO ray-casting to pick colours and then shade over the top of the scene render?

Hmmmm...




I agree on BRDF; I'd love to have a BRDF and a soft, blurred live envmap for IBL... cars would look photorealistic for the most part then... and it's fast (just look at Forza 4 hehe)

I also agree on the motion blur. GT5 looks great in motion. When you pause it (but not photomode) it looks rather average really.
In motion the blurring adds realism and blurs away the lack of details, and when you stop to take a pic you get photomode to add in extra realism.

Since Racer is all about moving, a great motion blur would make sense IMO.


With decent MB and AO, the effort of making tiny details look perfect as you drive by at 50mph wouldn't matter as much, and the texture space saved by not baking unique AO onto every part of your car/interior/track would easily offset the cost of the MB and AO implementations, while also looking better :D



I guess another potentially fantastic feature: will the AO pass 'see' the normal maps we have applied to our objects too? Or does it only see geometry normals?

Dave
 
Since the screen space normals are generated from the depth map they don't include our nice normal maps, no :(
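(For the curious, here's roughly why: the normals get rebuilt purely from neighbouring depth samples, so only geometry can ever come back out. A CPU-side Python sketch of the reconstruction; the camera constants and helper names here are made up for illustration, not Racer's actual values:)

```python
import math

# Assumed camera constants for the sketch, not Racer's real ones.
NEAR, FAR = 0.1, 500.0       # clip planes
TAN_HALF_FOV = 0.5           # vertical field-of-view term

def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def _normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def linear_depth(d):
    # Invert the standard perspective depth mapping back to eye-space z.
    return NEAR * FAR / (FAR - d * (FAR - NEAR))

def view_pos(x, y, d, w, h):
    # Unproject pixel (x, y) with hardware depth d into eye space.
    z = linear_depth(d)
    ndc_x = 2.0 * x / w - 1.0
    ndc_y = 1.0 - 2.0 * y / h
    return (ndc_x * TAN_HALF_FOV * z * w / h, ndc_y * TAN_HALF_FOV * z, z)

def normal_from_depth(depth, x, y, w, h):
    # Face normal from finite differences of neighbouring positions, the
    # same trick a shader pulls with ddx/ddy. It can only recover geometry
    # normals; tangent-space normal maps are invisible to it.
    p  = view_pos(x,     y,     depth[y][x],     w, h)
    px = view_pos(x + 1, y,     depth[y][x + 1], w, h)
    py = view_pos(x,     y + 1, depth[y + 1][x], w, h)
    return _normalize(_cross(_sub(px, p), _sub(py, p)))
```

In a fragment shader the finite differences come for free via ddx/ddy, but the limitation is the same: flat depth = flat normal, no matter what the bumpmap says.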

True about the GI vs. AO - I don't have any time tonight but in the morning I'll have a crack at that AO method (or the edge detection/garbage matte) and see if it's any better.

I'm puzzled about the motion blur though; it seems we've got the two most widely accepted methods (the velocity map version that's documented in the NVIDIA GPU Gems stuff, and the alpha blending version), but neither really seems to work nicely for us :(

I remember someone did a simple BRDF model a while back; gtpbiz, was it? It used a texture lookup to see what the lighting should do at different angles. I never really understood how to generate the texture though.
 
I'm puzzled about the motion blur though; it seems we've got the two most widely accepted methods (the velocity map version that's documented in the NVIDIA GPU Gems stuff, and the alpha blending version), but neither really seems to work nicely for us :(
For some reason the car doesn't get rendered at the right speed in the velocity map, otherwise it'd work fine.
 
Yeah, generating the BRDF LUTs is the hard bit really, Cam. I guess you need a BRDF generator of some variety where you can tweak the response you want on a sphere or a similar type of 'test' object (maybe even a model of your car),
then fire out the LUT once you're happy.
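Something like this, maybe: a toy Python sketch of baking a 2D LUT indexed by (N·L, N·H) from a Blinn-Phong-ish response. The lobe and every constant here are my own assumptions, not gtpbiz's actual BRDF; the point is just that the table replaces per-pixel lobe evaluation with a texture fetch.

```python
def blinn_phong(n_dot_l, n_dot_h, shininess=32.0, spec=0.5):
    # Toy isotropic response: Lambert diffuse plus a specular lobe,
    # clamped so it fits an 8-bit texture channel.
    diffuse = max(0.0, n_dot_l)
    specular = spec * (max(0.0, n_dot_h) ** shininess)
    return min(1.0, diffuse + specular)

def bake_lut(size=64):
    # 2D table indexed by (N.L, N.H), both swept over [0, 1].
    # A shader would sample this as a texture with those two dot
    # products as UV coordinates instead of evaluating the lobe.
    return [[blinn_phong(i / (size - 1), j / (size - 1))
             for j in range(size)] for i in range(size)]
```

Once you're happy with the response on the test object, you'd dump `bake_lut()` out as a greyscale (or per-channel tinted) texture.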

Yeah gtpbiz implemented a BRDF and a HDR envmap iirc... not sure where he has got to these days :D

Dave
 
My experience with implementing AO is that you will need the screen normals.
Algorithms without the normals are terrible.
Passing screen normals is only useful in a deferred rendering system.
Racer has a forward rendering system, so storing the screen normals doesn't really have a point.

The depth buffer is available, it is used in the particle shaders for soft particles.
But I think it lags one frame though...
 
I thought Racer had some weird mixed rendering system that was basically illogical to me when it was first explained.

It is true though, we need to do a bunch of calculations to get the screenspace normals (which is costly) and they're not even that great IIRC.

So is Racer a fully forward system, or are there parts that are deferred?
 
I think you could say Racer has a fully forward rendering system.
You might say that there are some deferred characteristics, like the bloom maps. Although that might be pushing it.
The renderer has all the advantages (AA, transparency, ...) and drawbacks (a pass per light source) of a forward renderer, and none of the characteristics of a deferred renderer, so I guess that says it all.
 
You can't pass screen normals from the fragment shader along with the regular map? I'm not really familiar with what variables are available in the renderer profile, but it looks like with CSM disabled there's a COLOR1 that could take it.
 
Might be possible. If you're talking about the single alpha channel, that's not enough to store a normal, I think.
How I would personally handle it for now is to store the normals in the colour map. You don't need the colours to see the effect of the AO; it's even better to work without colours to see the slightest changes.
Or store the colours in one channel (obviously losing a lot of detail) and the normals in the other two.
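Two channels are actually enough for a full normal, since screen-space normals always face the camera and z can be reconstructed from x and y. A quick Python sketch of the encode/decode (function names are mine, mirroring what the fragment and FS shaders would each do):

```python
import math

def pack_normal(n):
    # Remap x and y from [-1, 1] to [0, 1] so they survive a colour
    # channel; z is dropped entirely.
    return ((n[0] + 1.0) * 0.5, (n[1] + 1.0) * 0.5)

def unpack_normal(r, g):
    # Recover x and y, then rebuild z. Screen-space normals point
    # toward the camera, so the sign of z is unambiguous.
    x = r * 2.0 - 1.0
    y = g * 2.0 - 1.0
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)
```

That frees the third channel for a (low-detail) luminance if you still want some scene colour while testing.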

Maybe Ruud will make a buffer available when it looks good? We can only hope!

I'm curious to see some results!!
 
Good point, easy to test this stuff by just writing up a track & car that use a shader which passes normal maps.
gMDOh.jpg

(still using world coordinates in this picture, need to find the right matrix rotation.)

I found a pretty straightforward tutorial for SSAO with normal+depth maps so I'll give it a shot. ( http://www.gamedev.net/page/resourc...a-simple-and-practical-approach-to-ssao-r2753 - I implemented it without the random texture lookup)
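For reference, the heart of that tutorial is a per-sample occlusion term along these lines; this Python sketch is my paraphrase of it, the g_* constants mirror the article's uniforms and the exact values are guesses:

```python
import math

# Assumed values; the article exposes these as tweakable uniforms.
G_SCALE, G_BIAS, G_INTENSITY = 1.0, 0.2, 2.0

def occlusion(p, n, q):
    # Occlusion contributed by sample point q to surface point p with
    # normal n (both points in eye space, recovered from the depth map).
    diff = tuple(b - a for a, b in zip(p, q))
    dist = math.sqrt(sum(c * c for c in diff)) + 1e-6
    v = tuple(c / dist for c in diff)
    n_dot_v = sum(a * b for a, b in zip(n, v))
    # The bias stops nearly-coplanar samples from self-shadowing flat
    # surfaces; the 1/(1+d) falloff makes distant samples count less.
    return (max(0.0, n_dot_v - G_BIAS)
            * (1.0 / (1.0 + dist * G_SCALE))
            * G_INTENSITY)
```

The shader just averages this over a handful of nearby samples and subtracts the result from 1 to get the AO factor.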

I think these are the correct screenspace normals:
AB9PE.jpg

(blue faces the camera, red's left/right, green's up/down - for display purposes I'm using absolute value)


EDIT: This is the result I've got from this.
bV44s.jpg


Since this is straight up using the main render texture to pass normals, some artifacts are to be expected - for example, the car's shadow is doing funny things to the normals.

Also, this is without a random map - I'm using relatively large primes to maximize randomness, but it could probably be better. I'm not sure what's causing the interspersed dark pixels, but that's probably going to be a problem.

I also did a fake composite by taking 2 screenshots using the same camera; the effects are mostly pretty subtle (except the headlight, which is transparent and screws things up)
GTUoS.jpg

Bottom is with the AO multiplied with the output result.
GkRVn.jpg

This one has the AO's effect multiplied up 2.5x to make it more prominent. It does, I think, add to the effect of the door opening. But the artifacts are also fairly prominent.

Difference in framerate is 52 -> 37 (16 samples) -> 29 (32 samples)

Tweaking the constants helped out with the artifacts. It still draws a black line around the sky (due to depth, I suppose)
rIsne.jpg


Definitely at least partially losing its impact because the main things it hits are already dark (shadows, the wheels have a baked AO map on them too)


EDIT #5 or so
2Y833.jpg

Added bumpmap shader, which is really just adding the tread pattern.
 
Looks very nice Stereo!

Roggel would perhaps be a better place to see how the AO can benefit the scene... car and scene together.
Carlswood doesn't appear to do AO much justice due to the textures and existing baked vertex lighting and so on.

Also I think with AO we can remove the old style car shadow (underneath one)


I guess AO needs to be multiplied into the ambient pass only, but probably pretty aggressively (i.e. get a matte object on a cloudy day looking correct to find the right AO values for falloff, intensity etc).
Then that can be flooded out where appropriate by the diffuse pass lighting in bright sunny conditions, and it should be as natural as possible :)
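That ambient-only composite works out to something like this trivial sketch (hypothetical names; per-channel albedo times a scalar lighting term):

```python
def shade(albedo, ambient, diffuse, ao):
    # AO darkens only the ambient term. On a cloudy day (diffuse ~ 0)
    # the AO reads at full strength; in direct sun the diffuse term
    # floods it out, which is exactly the "natural" behaviour wanted.
    lighting = ambient * ao + diffuse
    return tuple(c * lighting for c in albedo)
```

So the tuning step really is just the cloudy-day case: with diffuse at zero, the AO factor is the only thing shaping the shading.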


The FPS hit is quite high.

Perhaps with the random normal map you could drop the sample count further to boost FPS?

I guess there may also be some more optimisations to come.


The nice thing is that, in theory, we can just crank the samples value for 'photomode' and get lovely-looking AO, then drop it down to probably half what it's set at in your pics, add in some nice motion blur, and it'll all feel pretty nice when racing around a track :D


Hmmm

Dave
 
Great stuff Stereo.
Definitely looks much better than the algorithm I used.

Curious to see the performance impact on different systems.
What hardware are you using ?
 
My experience with implementing AO is that you will need the screen normals.
Algorithms without the normals are terrible.
Passing screen normals is only useful in a deferred rendering system.
Racer has a forward rendering system, so storing the screen normals doesn't really have a point.

The depth buffer is available, it is used in the particle shaders for soft particles.
But I think it lags one frame though...

It shouldn't lag, but it's currently a #define (OPT_DEPTH_FIRST) in world/renderer.cpp.
That needs to be turned on for the soft particles to work, but that effect was fairly subtle and it does take more time to render that way. I may be able to just turn it into an option though.

But if you need normals, that's a bigger point; storing them with MRT will need all shaders to be tweaked. At some point shader generation must be made for these types of things; there are so many shaders right now!
 
Hey Ruud, the soft particles are working; they use the WPOS semantic. That is why I struck out that remark.

The lag referred to the shader variable "curDepthTex" which passes the depth texture to the shaders and effectively lags one frame. That was the first attempt at creating soft particles and is obsolete.

OPT_DEPTH_FIRST is not necessary since the depth texture is needed in the fullscreen pass. The depth texture is already available by then.
 
Great stuff Stereo.
Definitely looks much better than the algorithm I used.

Curious to see the performance impact on different systems.
What hardware are you using ?
I'm using a 460GTX 1GB and some form of AMD Phenom II 6-core. I haven't really attempted to optimize anything yet, nor have I tried any of the post-methods suggested (resampling for blur etc.)

07X7r.jpg

8, 16, 32 samples: 55, 28, 18 fps approximately. (Image exaggerated from the actual map. No AO, same settings otherwise, is 120fps.) The actual sample radii could be tweaked further - it samples 4 points at each of 2, 4, or 8 distances, with the upper image using the closest and farthest, the middle adding two intermediate distances, and the lowest adding 4 more intermediate values.

12 samples, lying somewhere between the first and second, at about 45 fps, seems like a decent compromise in realtime. If the # could be changed dynamically for screenshots, sure, use 32.
zkEGd.jpg

Or, on youtube,



One oddity in performance is that it's heavily dependent on camera distance to objects. If I'm zoomed right out, like the helicopter cam, it's 100+ fps; at the distance in those screenshots, 45 fps; and the interior cam drops it to 28. When I check the CPU profile on page 6, it says CPU% physics goes from ~20% to ~80% as I zoom in (thus cutting out 3/4 of the CPU gfx render main & post 2D). I don't know what CPU% physics refers to, but that seems strange. The effect is less extreme using a normal FS shader - 160 -> 100 fps.
 
If you guys want to try it out, cg files here.
ao_f.cg: fs shader, put in /fullscreen_shaders_hdr, set as fs_shader in racer.ini
dyn_standard_bump_ao_f.cg: bump fragment shader, put in /shaders, use as fragment shader for materials with bumpmap in layer1.
standard_ao_f.cg: fragment shader, put in /shaders, use as fragment shader for all other materials.
Personally I made an AO-test-copy of the car & track since it required major edits to the shader.

The two fragment shaders write out normals in the regular color map, the fs shader expects those as well as depth map enabled.

To change the # of samples, go into ao_f.cg and uncomment the lines beginning ao +=, and adjust the number it's divided by. Default 3.0 -> 3 uncommented samples. It loops 4 times, so that's 12 total. There are also 4 variables at the top of the file, named g_bias (biases the minimum angle between sample points to cast shadows, so that flat surfaces don't self-shadow as much), g_scale (scales the depth difference necessary to be considered relevant - basically 0 distance is maximal and it falls off at a rate depending on this var), g_intensity (scales output), and g_sample_rad (scales the screen distance samples are taken from).
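For anyone tweaking it, the sampling pattern described above works out to something like this Python sketch: a 4-direction kernel, rotated per pixel, sampled at a few radius steps. The names mirror the shader uniforms, and the per-pixel rotation hash is my own stand-in for the random map, not the shader's actual prime-based jitter:

```python
import math

G_SAMPLE_RAD = 0.03  # assumed value; screen-space radius scale

def sample_offsets(pixel_hash, samples_per_dir=3):
    # Four axis-aligned directions, rotated by a per-pixel angle so
    # neighbouring pixels trade visible banding for noise.
    dirs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    a = pixel_hash * 2.0 * math.pi
    c, s = math.cos(a), math.sin(a)
    offsets = []
    for i in range(1, samples_per_dir + 1):
        r = G_SAMPLE_RAD * i / samples_per_dir
        for dx, dy in dirs:
            # Rotate each direction and scale by the current radius step.
            offsets.append((r * (dx * c - dy * s), r * (dx * s + dy * c)))
    return offsets
```

With the default 3 uncommented samples per direction you get the 12 total offsets mentioned above; uncommenting fewer or more scales that in steps of 4.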
 
