Racer v0.8.37 released

Ruud

RACER Developer
Just in time for X-mas. ;)

Get the new version at http://www.mediafire.com/?hc44aiq5izp4iqi
Then get the racer.exe/pdb which fixes reflections at www.racer.nl/temp/racer0838_alpha.7z

At some point we're going to have to call it v0.9rc1, since this one does a little tweaking to the graphics intensities (energy conserving) and sets cast_shadow to 0 by default. Most shadows will disappear; a huge fps boost until you put shadows back in shader files (.shd).
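For example, putting shadows back for a material could mean adding a cast_shadow line per shader in the .shd file, along these lines (the shader name, base shader and texture here are made up for illustration):

```
; hypothetical track.shd entry -- names are illustrative only
shader_building~vf_standard
{
  cast_shadow=1
  layer0
  {
    map=building.tga
  }
}
```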

Merry Christmas!
Ruud

The changelist for v0.8.37:
- Added views.ini keep_aspect property to maintain aspect ratio for static images. See http://www.racer.nl/tutorial/dials.htm
- dyn_standard_reflect_window_f.cg now uses alpha as reflectiveness as well; opaque parts get less reflection
- TrackEd's AutoAseToDof has improved error messages on bad submaterial usage.
- TrackEd: fixed some (progress) dialogs that never appeared; 'Copy flags to group' didn't work.
- TrackEd now colorizes the Collide and Surface flags in Props mode (F4) for easy tweaking of flags.
- standard_bump_detail_f.cg had a hardcoded 12x scale; now it uses the scale defined in the .shd file
- The garage track is not shown in the 'Select Track' screen anymore.
- Added log.events.host/port settings for remote racer tracking (niche usage).
- Lidar data (track.rld files) can now be sorted and driven on. Pro use only.
- Added car pitch/roll in body AND world coordinates in the Ctrl-9 debug screen.
- Added 'analyse shaders' command to check all shaders (track.shd/car.shd).
- The shader's default 'reflect' value for each layer is now 0 (was 1).
- Final tweaks in the shading system: energy conservation for diffuse and specular. Diffuse colors are
divided by PI (assuming diffuse colors are kept between 0 and 1), and specular colors are corrected with (n+6)/8*PI.
See also http://www.rorydriscoll.com/2009/01/25/energy-conservation-in-games/
- Timers now have microsecond accuracy using QueryPerformanceCounter(). Multicore machine issues?
- Profiling is more accurate due to the better timer accuracy. More graphics divisions added.
- The shader cast_shadow property now defaults to 0; it's better to explicitly define shadow-casting materials,
since only a few shaders need to cast shadows to still get a good image.
You *will* need to add cast_shadow=1 lines for shaders that you want to cast shadows.
- Depth texture setup optimized for shadowmapping; quite a big fps increase.
- Added look left/right controls (look_left, look_right) to dynamically look left or right (Q/W by default)
- In your controller file you can add global.left_right_look_velocity to set left/right look velocity (default=250)
- In your controller file you can add global.left_right_look_max to set the max left/right look angle (default=45)
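In a controller file these settings might look like the following, assuming the usual Racer ini tree syntax (only the key names come from the changelist; the values shown are the stated defaults):

```
; hypothetical controller file fragment
global
{
  left_right_look_velocity=250  ; default=250
  left_right_look_max=45        ; default=45 (degrees)
}
```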
 
Indeed, it's a term coined in the professional industry. I've changed it to 'PHYSICS RUNNING SLOWER THAN REALTIME'; it was indeed previously known as 'SLOMO'.
Sometimes these engineering terms are a bit self-centered to their own area of expertise...

Cheers Ruud,

So is this when the physics runs slower than realtime?

It always seems to run side by side with a graphics slow down...

Is that because graphics suddenly get very heavy and slow the PC down so much it slows physics down, or the other way around?


I'm just trying to understand how to fix it when it occurs.

Dave
 
That is what TrackEd's split function is for; after the track is about done, you split it into big sections of roughly 400x400m. These are generated by shader and cell (geolocation). This often makes tracks a LOT faster. It does produce nasty output of course, with cell_x_y tags attached to dofs, but that would be the case with any machine-generated track. You do have to keep the original, more logically built-up track for future splits in that case.

Hmmm...

The thing is, I get better speed without splitting when most objects within the object to be split will be visible at once.

If for instance, I have a large DOF that is split into 6 smaller DOF, and each one has the same shader, and at one point on the track I can see all 6 smaller DOF, there are 6 render calls.
If at that point all the DOF are one object, there is one render call, which is much faster.

If you do this for ALL the shaders on your track, there is a good chance that a lot of the time you are rendering lots of 'shaders' many times in many DOF, rather than in one pass in one DOF.


I'll probably still split my track manually because I can decide where different parts are visible or not at different times and optimise based upon that.


Splitting will increase render calls = slow. However, too little splitting means geometry you won't even see is loaded = slow too. It's a case of finding the sweet spot.
I just wonder if there are better methods out there which other commercial engines are using!? I.e., it reads like the Unity 3D engine is happy to accept heavily split items, and then 'merge' all alike shaders in any geometry to render them in one pass each frame, which seems much more sensible... assuming it isn't costly in itself!?

Dave
 
Hmmm...

The thing is, I get better speed without splitting when most objects within the object to be split will be visible at once.
Ruud, I agree with Dave here: when I've split large tracks into many dofs from a few large dofs (easier to model together), I've found fps drops off as well. I've been leaving stuff as whole models, basically split into surface meshes & non-surface meshes, apart from stuff like trees that currently need to be individual dofs and which I have many of. But when there's a lot of dofs in view, the fps drops off, and the OGL calls appear high compared to other parts of the track.
 
It is probably worse doing many render calls when you have big atlas textures too (16MB-ish for a 2048x2048), as they are potentially loaded into memory hundreds of times, once per DOF, rather than once. Probably worse to load bigger textures than smaller ones!?

I'm happy to work to whatever standards are best, but my concern is that we can get much better visual quality AND better FPS if we use atlas textures and low render call counts more often, which is why I am pushing forward with building content with as few DOFs as possible, and as few textures/shaders.


I don't think there is anything fundamentally wrong with splitting ourselves more intelligently, or splitting generally if it is required/desired for some content creation or workflows.
However, I think it is at least worth looking at whether a renderer can quickly scan a scene and 'batch' alike shaders into one pass rather than many.
Ie, 1000 dof cones, renderer scans scene, sees 1000 objects all with same shader, combines it in one pass. 999 render calls saved!

Dave
 
It is probably worse doing many render calls when you have big atlas textures too (16MB-ish for a 2048x2048), as they are potentially loaded into memory hundreds of times, once per DOF, rather than once. Probably worse to load bigger textures than smaller ones!

I find it hard to believe any resource is loaded more than once, especially textures. I believe Racer keeps track of what resources it has already loaded, what is needed, etc. In OpenGL, when rendering something with textures, you just bind the texture id. You can have the same id for different shaders/materials if they use the same texture file.

Splitting the track into smaller chunks is good so that the renderer can better determine which meshes are in the view frustum and which are not. No point rendering a big mesh if only a fraction of it is visible.
 
Cheers Ruud,

So is this when the physics runs slower than realtime?

It always seems to run side by side with a graphics slow down...

Is that because graphics suddenly get very heavy and slow the PC down so much it slows physics down, or the other way around?

Both can be the cause; it's a sign that the whole process really isn't going fast enough and the physics is lagging. This might be caused by slow physics (not being able to run faster than realtime), or by graphics taking so long that too much physics would need to be caught up, so that Racer limits the amount of physics it simulates. In racer.ini there's a limit.max_sim_time_per_frame value (100) which defines the maximum number of physics steps Racer will take. This is to make sure that the physics will not block trying to catch up, and will always render a frame now & then, to keep things relatively responsive.
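As a rough sketch of that catch-up logic (not Racer's actual code; the SimClock name and the millisecond units are assumptions for illustration):

```cpp
#include <algorithm>

// Hypothetical sketch: physics advances in fixed steps toward real time,
// but the catch-up per rendered frame is capped (in the spirit of
// racer.ini's limit.max_sim_time_per_frame) so a frame always gets drawn.
struct SimClock
{
  double physicsTime=0;             // how far physics has simulated (ms)
  double stepMs=1.0;                // fixed physics timestep (ms)
  double maxSimTimePerFrame=100.0;  // catch-up cap per frame (ms)

  // Advance physics toward 'realTime'; returns the number of steps taken.
  int CatchUp(double realTime)
  {
    double budget=std::min(realTime-physicsTime,maxSimTimePerFrame);
    int steps=0;
    while(budget>=stepMs)
    {
      physicsTime+=stepMs;          // one fixed physics step would run here
      budget-=stepMs;
      steps++;
    }
    return steps;
  }
};
```

With a 1 ms step and a 100 ms cap, a frame that arrives 500 ms late only runs 100 physics steps, so rendering stays responsive at the cost of the physics lagging behind realtime.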
 
That isn't how Racer works now.

Culling is done even on one giant object before rendering.

Ie, put a donut in scene with half in front and half behind. The FPS are no different if the object is split into 100 pieces, 2 pieces or 1 piece.

The only difference is that ALL the geometry of the object is loaded if it is one object, even if most of it is not filled/rendered/shaded or whatever the process is.

Textures are loaded once into memory, yes, but each time a shader is called it loads the texture somewhere in the GPU/VRAM, deals with all the associated calculations, then moves onto the next shader. The Nvidia docs on this are very clear that atlasing is MUCH preferable for graphics speed, because loading many small textures is much slower than loading one.
However, right now, having our dof split into 10 objects is the same as having 10 shaders. Yes it shares the same texture, and same shader, but it's passed 10 times, and that is no different to the graphics card. It deals with 10 operations of loading the required assets internally and processing them, rather than once!


The issue here ultimately is that if we don't split, we load geometry into the scene that might not be visible, which uses up some performance. Most graphics cards today can handle lots of geometry, but there is little point loading 100,000 polys you won't see, if you can instead load 100,000 polygons in trees or grass planes and throw them in with another graphics pass at almost zero cost but adding loads more visual detail!


I'm mostly working off the Nvidia docs guides which make great reading. Their logic has proven correct in all my tests so far.


I just don't get the splitting logic yet. I'll have to do more testing and push the poly counts really high so they start to drag the FPS down; then I can see if it's better at that stage to have more render calls and less geometry loaded (split into smaller DOFs) to bring FPS back up.



It'd be good to get clarity on whether current mainstream rendering engines do indeed 'batch' per frame or not. http://unity3d.com/unity/engine/rendering

It's reading here like perhaps Unity3D just batches at export.

Hmmm, I wonder if they do indeed just batch EVERYTHING together (ie, one giant geometry object for each shader)... hmmm...

In any case, I get tons more FPS with a few big DOF, one for each shader, than I do for stuff that is split.

Dave
 
My next question would be: would moving to GLSL change this? Does it behave the same as Cg in this respect? I suppose it doesn't really matter; if we create small numbers of large dofs now, TrackEd can split them later anyway.
 
Going back to the question of track cam zoom interpolation (see #106)

k = zoom_edge * radius / 90
fov = 90*k/distance
This formulation has a problem when distance -> 0, in that FOV goes to infinity.
Taking care of this is pretty straightforward, just add to the distance.
fov = 90*k/(distance + .15)
It also has another benefit - zoom_close can now be calculated with distance=0, giving a second FOV to use.
k_edge = zoom_edge * radius / 90
k_close = zoom_close / 90 / .15
fov = 90*(lerp(k_close,k_edge, min(distance/radius,1)))/(distance+.15)
This allows variation in FOV with distance, while still behaving nicely beyond the radius (zoom stays at zoom_edge, approximately car size). I believe this would make the 'old' style track cams far more usable.
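A minimal sketch of this post's formula (the function name is made up; the lerp is written out inline):

```cpp
#include <algorithm>

// Sketch of the proposed zoom curve:
//   fov = 90*lerp(k_close,k_edge,min(d/radius,1))/(d+0.15)
// The 0.15 fudge term keeps the result finite and positive as distance -> 0.
float TrackCamFov(float distance,float radius,float zoomEdge,float zoomClose)
{
  const float fudge=0.15f;
  float kEdge=zoomEdge*radius/90.0f;
  float kClose=zoomClose/90.0f/fudge;
  float t=std::min(distance/radius,1.0f);
  float k=kClose+(kEdge-kClose)*t;   // lerp(kClose,kEdge,t)
  return 90.0f*k/(distance+fudge);
}
```

Beyond the radius, t clamps to 1 and the result settles near zoom_edge (e.g. with radius=100 and zoom_edge=5 it tends to 5*100/100.15), which is the 'constant beyond the radius' behavior described above.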

I think this image demonstrates nicely.
Ydwgu.png

This is with settings
radius=100
zoom_edge=5
zoom_close=60
Red: current method (has an asymptote just past 100m, which is where the camera flips upside down)
Blue: Previous post method (div/0 error if distance were to be zero)
Black: This post's method (always finite positive FOV)
The plot is showing how much of the screen a 4m long car will take up - so at 1.0, the car is exactly as wide as the screen. With this changing so wildly in the current algorithm, a consistent zoom is impossible. With the previous post, there is no zoom_close so it's not possible to have the camera "zoom in" on the car as it gets closer, it just stays the same size while the FOV changes. With this post, though, variation in FOV is possible - and the car smoothly changes size as the zoom happens until it reaches the camera's radius - at which point it's constant.

An alternative would be to cap off distance, when distance=radius. This does of course prevent the current method from inverting. But it's still not a very smooth curve.
ihZx0.png

As you can see, my methods behave the same outside the radius - but are a lot smoother inside it, the car doesn't get hugely smaller or larger all of a sudden.

And if you want the car to be smaller as it gets closer to the camera (zooming out too fast as it approaches, zooming in too quickly as it leaves), it's still doable.
uGUQf.png

This is with zoom_close=120, same radius and zoom_far. Much less rough than the current method.


The FOV's only calculated once per frame, so I'm not too concerned about the additional complexity involved. This is a pretty easy way to make track cams more user friendly, avoiding complicated keyframes while still getting quality zooms.
 
Ruud, I have an ATI card and I get very bad shadows. I posted some screenshots a few pages ago. I have been having this since Racer 15. The shadows have been getting worse for me with each release. I have seen screens of Racer running on Nvidia cards and the shadows look perfect. Any idea what could be causing it?
 
That isn't how Racer works now.

Culling is done even on one giant object before rendering.

Ie, put a donut in scene with half in front and half behind. The FPS are no different if the object is split into 100 pieces, 2 pieces or 1 piece.
Textures are loaded once into memory, yes, but each time a shader is called it loads the texture somewhere in the GPU/VRAM, deals with all the associated calculations, then moves onto the next shader. The Nvidia docs on this are very clear that atlasing is MUCH preferable for graphics speed, because loading many small textures is much slower than loading one.
However, right now, having our dof split into 10 objects is the same as having 10 shaders. Yes it shares the same texture, and same shader, but it's passed 10 times, and that is no different to the graphics card. It deals with 10 operations of loading the required assets internally and processing them, rather than once!

TrackEd's split splits by shader and location, but collects all geometry with the same shader into 1 object per cell, so it should be pretty optimal. Of course, if you don't have a lot of polygons to begin with, it might be advantageous to make cells about 1000x1000m to get a high vertex/face count per DOF. It seems these days that the higher this ratio is, the better.

All textures are uploaded to the card and shared; select another texture means binding another texture, which normally should just flip a pointer on the card itself. Racer currently sorts by shader, then texture, but there's a lot of Cg parameter setting that is done per shader, and this takes a lot of time it seems. Just like setting the same parameter settings for the depth (CSM) textures took a lot of time somehow.

The same goes for DOF files, btw; with use_vbo=1 the vertex/triangle data is uploaded to the card and after that only referenced by an index number.

So splitting the DOF into 10 objects will indeed take 10 draw calls; if they all use the same shader then that is cached/detected as well, but having 10 objects with the same shader in a single view means the split cell size was set really too low, since you normally only want about 3 or so. It might be good to test with cellsize=1000 and see if that makes things better.
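The split described here could be sketched like this (data structures and function names are made up; this is not TrackEd's actual code): each triangle goes to a grid cell by location, and within each cell all geometry sharing a shader becomes one object, i.e. one draw call.

```cpp
#include <cmath>
#include <map>
#include <tuple>
#include <vector>

// Illustrative only: group geometry by (cellX, cellZ, shader) so each
// resulting group can be rendered with a single draw call.
struct Tri { float x,z; int shaderId; };

using CellKey=std::tuple<int,int,int>;  // cellX, cellZ, shaderId

std::map<CellKey,std::vector<Tri>> SplitByCellAndShader(
  const std::vector<Tri>& tris,float cellSize)
{
  std::map<CellKey,std::vector<Tri>> groups;
  for(const Tri& t:tris)
  {
    int cx=(int)std::floor(t.x/cellSize);
    int cz=(int)std::floor(t.z/cellSize);
    groups[std::make_tuple(cx,cz,t.shaderId)].push_back(t);
  }
  return groups;  // groups.size() == number of objects / draw calls
}
```

A larger cellSize (e.g. 1000 instead of 400) yields fewer, bigger groups, i.e. fewer draw calls with more vertices each, which is exactly the trade-off discussed in this thread.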
 
TrackEd's split splits by shader and location, but collects all geometry with the same shader into 1 object per cell, so it should be pretty optimal. Of course, if you don't have a lot of polygons to begin with, it might be advantageous to make cells about 1000x1000m to get a high vertex/face count per DOF. It seems these days that the higher this ratio is, the better.

All textures are uploaded to the card and shared; select another texture means binding another texture, which normally should just flip a pointer on the card itself. Racer currently sorts by shader, then texture, but there's a lot of Cg parameter setting that is done per shader, and this takes a lot of time it seems. Just like setting the same parameter settings for the depth (CSM) textures took a lot of time somehow.

The same goes for DOF files, btw; with use_vbo=1 the vertex/triangle data is uploaded to the card and after that only referenced by an index number.

So splitting the DOF into 10 objects will indeed take 10 draw calls; if they all use the same shader then that is cached/detected as well, but having 10 objects with the same shader in a single view means the split cell size was set really too low, since you normally only want about 3 or so. It might be good to test with cellsize=1000 and see if that makes things better.
Hello Ruud,
yes and no...
I made a simple test...
1) v0.8.36, track unsplit, with Dave's Lambo: ~15fps (measured at the SF line for comparison).
2) v0.8.36, same track split with cellsize=1000, also with the Lambo at the SF line: ~20fps!
3) v0.8.38a (not the env mip one), same track and shaders unsplit, Lambo: ~30fps...
4) v0.8.38a (not the env mip one), same track and shaders split with cellsize=1000, Lambo: ~28fps!!

So yes, with the older version there is a good fps gain,
and no, not with the latest version, where the fps is lower....

...
 
Going back to the question of track cam zoom interpolation (see #106)

k = zoom_edge * radius / 90
fov = 90*k/distance
This formulation has a problem when distance -> 0, in that FOV goes to infinity.
Taking care of this is pretty straightforward, just add to the distance.
fov = 90*k/(distance + .15)
It also has another benefit - zoom_close can now be calculated with distance=0, giving a second FOV to use.
k_edge = zoom_edge * radius / 90
k_close = zoom_close / 90 / .15
fov = 90*(lerp(k_close,k_edge, min(distance/radius,1)))/(distance+.15)
fov=min(fov,zoom_close)

I tried that; it seems to trigger somewhat quickly when crossing normalizedDistance>-1. I've uploaded a Racer version which implements your function, see http://racer.nl/temp/racer0838c.7z (also take Newton.dll, a v3.0 mind you, just for testing, as we get hangups a few times every 24h).

The FOV code looks pretty much like what you wrote (ouch, there's a small bug in that zoomFar is not used anymore here, and this exe breaks the minimap envmap generation):

Code:
void RTrackCam::CalcState()
// Calculate the current situation for this camera, based on the car location
{
  // No need to do this for keyframed cameras
  if(animatedCamera->GetKeyFrames())
    return;
 
  // Calc normalizedDistance
  float distance=ProjectedDistance();
  normalizedDistance=distance/radius;
  d3Limit(normalizedDistance,-1.0f,1.0f);
 
  // Calc zoom value
  if(flags&FIXED)
  {
    // Use fixed view
    zoom=zoomClose;
  } else if(flags&ZOOMDOT)
  {
    // Interpolate from zoomEdge->zoomClose->zoomFar
    float kEdge,kClose;
    const float fudge=0.15f;
 
    kEdge=zoomEdge*radius/90.0f;
    kClose=zoomClose/90.0f/fudge;
    if(normalizedDistance<0)
    {
      // Before the camera
      distance=-distance;
      zoom=90.0f*(d3Lerp(kClose,kEdge,d3Min(distance/radius,1.0f)))/(distance+fudge);
    } else
    {
      // After the camera
      zoom=90.0f*(d3Lerp(kClose,kEdge,d3Min(distance/radius,1.0f)))/(distance+fudge);
    }
    zoom=d3Min(zoom,zoomClose);
  } else
  {
    // Base zoom on car distance to camera (no front/back)
    // Only uses zoomClose and zoomEdge
    rfloat d;
    RCar *car=RMGR->scene->GetCamCar();
    d=car->GetPosition()->SquaredDistanceTo(&pos);
    zoom=zoomClose+(zoomEdge-zoomClose)*d/(radius*radius);
    // Avoid numbers out of range
    float min,max;
    // Only zoomClose & zoomEdge play a role
    if(zoomClose<zoomEdge){ min=zoomClose; max=zoomEdge; }
    else { min=zoomEdge; max=zoomClose; }
 
    d3Limit(zoom,min,max);
  }
}
 
Nice test there Alex.

I do find the FPS hard to use as a gauge of actual speed though, sometimes you can load a track and the FPS are lower than if you load it another time... not sure why that might be, but they do fluctuate (maybe other apps running on your computer?!)

I've generally used the draw calls as a guide to FPS. Generally more draw calls = lower fps.

I think Ruud's advice of splitting per 1000m or so for example, is a good idea as a starting value.

However, I'll likely still manually do jobs like that, then I can leave objects I *know* will be visible a lot of the time all together, in one DOF.


I'm not saying anything is wrong with Racer per se, just wondering if other engines do actually batch many objects with the same shader into one draw call, in real-time (per frame)...?!

If they do, then would Racer benefit from it?

If not, then perhaps I should do a little FAQ with images explaining a good process on how to split the objects/shaders for optimum FPS :)



Now, time for me to go tinker with 100,000 trees and 400,000 polygons of x-trees in one big dof, or 10 40,000 tree dofs :D


Also Ruud, one other thing I wanted to bring up was an issue with the panorama shader. The scale variable didn't seem to work intuitively. I'm not sure if you developed it or just quickly added it. I'll try to do some tests again to show with images why it wasn't working for me. (I made some tweaks to make it work as you'd expect, but I'm not sure if it's right :) )


Cheers

Dave
 
You should find it in racer.ini. I don't have the problem in Win7/64bits, this FMOD is already in use at a customer of ours. Hm.

Ah, I didn't think to look there. I tried it but it made no difference; there's still an increase of about 40% once I pause/unpause the game,
and it affects all cameras, not just the in-car one.
I've had confirmation from others on this forum that they too are having the problem, otherwise I would be looking at my system. Could it be a DirectX problem?

Alex Forbin
 
I tried that; it seems to trigger somewhat quickly when crossing normalizedDistance>-1. I've uploaded a Racer version which implements your function, see http://racer.nl/temp/racer0838c.7z (also take Newton.dll, a v3.0 mind you, just for testing, as we get hangups a few times every 24h).
Oh, sorry, I was mostly using flags=0 (plain distance-to-camera-based) zooms. I'll have to give the zoomdot ones a try.


Here's a set of cams for Roggel, using flags=1 that seems to show it off pretty well. I compared .36 against .38 and there's a fairly big diff.
Code:
cam_default
{
      zoom_far=16.000000
      flags=0
      src
      {
        k=0.000000
        mass=1.000000
      }
      dst
      {
        k=50.000000
        mass=1.000000
      }
}
tcam
{
  count=10
  cam0
  {
    pos
    {
      x=0.000000
      y=300.000000
      z=0.000000
    }
    pos2
    {
      x=0.000000
      y=0.000000
      z=0.000000
    }
    posdolly
    {
      x=0.000000
      y=0.000000
      z=0.000000
    }
    rot
    {
      x=0.000000
      y=0.000000
      z=0.000000
    }
    rot2
    {
      x=0.000000
      y=0.000000
      z=0.000000
    }
    radius=50.000000
    group=0.000000
    zoom_edge=2.000000
    zoom_close=2.000000
    zoom_far=2.000000
    flags=0
    src
    {
      k=0.000000
      mass=1.000000
    }
    dst
    {
      k=0.000000
      mass=1.000000
    }
    cube
    {
      edge=0.000000 0.000000 0.000000
      close=0.000000 0.000000 0.000000
      far=0.000000 0.000000 0.000000
    }
    keyframes
    {
      count=0
    }
  }
  ; additional camera positions
  ; first step: watch video, mark approx. locations of known on-track cams.
  ; okay, done that.  now driving to them in-game.
  ; there are 6 fixed cams and a chopper
  ; leaving the above camera alone for now, since it doesn't hurt anything, number them
  ; from the start: 1 first chicane 2. third corner 3, fourth 4, last 5
  ; 1 is at: 37.5 -27.3 357.8
  ; 1 zoom is from 12 to 30 to 6
    cam1~cam_default
    {
      ; pos is where the camera goes to.  (this is all with flag 8)
      pos
      {
        x=37.50000
        y=-27.300000
        z=357.800000
      }
      flags=1
      radius=30.000000
      zoom_edge=5.00000
      zoom_close=20.000000
      cube
      {
        edge=9.49 -28.56 393.0
        close=42.2 -28.97 362.45
        far=82.47 -29.17 321.24
      }
  }
  ; 2 is at: 174.1 -26.9 221.36
  ; 2 zoom is from 2 to about 25 to maybe 12, since it cuts so quick.
    cam2
    {
      ; pos is where the camera goes to.  (this is all with flag 8)
      pos
      {
        x=174.100000
        y=-25.900000
        z=221.3600000
      }
      pos2
      {
        x=158
        y=-27.2500000
        z=225.740000
      }
      ; posdolly is where the camera starts.  not sure what triggers it.
      posdolly
      {
        x=30.3000
        y=-28.1200000
        z=372.800000
      }
      rot
      {
        x=0.000000
        y=0.000000
        z=0.000000
      }
      rot2
      {
        x=0.000000
        y=0.000000
        z=0.000000
      }
      radius=130.000000
      group=0.000000
      zoom_edge=2.000000
      zoom_close=45.000000 ;10?
      zoom_far=6.000000
      flags=1
      src
      {
        k=0.000000
        mass=1.000000
      }
      dst
      {
        k=0.000000
        mass=1.000000
      }
      cube
      {
        edge=67 -28.7 339.8
        close=158 -27.25 225.74
        far=154 -27 197
      }
      keyframes
      {
        count=0
      }
  }
  ; 2.2: at the bend after the chicane
  ; 2.2 at 221.1 -25.4 35.97
    cam3
    {
      ; pos is where the camera goes to.
      pos
      {
        x=221.100000
        y=-25.400000
        z=35.9600000
      }
      flags=1
      radius=165.000000
      zoom_edge=1.000000
      zoom_close=25.000000
      zoom_far=1.000000
      cube
      {
        edge=158.6 -27.2 183.0
        close=215.4 -27.2 35.0
        far=214.5 -27.4 -138.5
      }
  }
  ; insert 2.5.  doy numbering.  This corner wasn't filmed in footage but it makes some sense.
  ; 2.5 is at: 183 -26.2 -438
  ; 2.5 zoom is from 6 to about 50 to 2
  ; 2.5 should start by about 215, -27, -131 in order prev. cam doesn't flip.
      cam4
      {
        ; pos is where the camera goes to.  (this is all with flag 8)
        pos
        {
          x=183.100000
          y=-26.200000
          z=-438.00000
        }
        flags=1
        radius=150.000000
        zoom_edge=2.000000
        zoom_close=25.000000
        cube
        {
          edge=204.6 -28.0 -317.2
          close=174.0 -29.34 -430.1
          far=4.85 -27.44 -356.9
        }
  }
  ; 3 is at: -167.3 -24 -342
  ; 3 zoom is from 2 to 40 to 2
  cam5
  {
      ; pos is where the camera goes to.  (this is all with flag 8)
      pos
      {
        x=-167.300000
        y=-24.000
        z=-342.000
      }
      flags=1
      radius=100.000000
      zoom_edge=2.000000
      zoom_close=40.000000
      cube
      {
        edge=-69.6 -27.3 -346.3
        close=-162.9 -27.3 -336.8
        far=-149.9 -27.1 -247.6
      }
  }
  ; 3.5, aka the helicopter's 2nd flight
  ; really this should be a dollied one
  ; anyway, start it around -148.5, -24, -247
  cam6
  {
      pos
      {
        x=-214
        y=180
        z=-102
      }
      radius=260
      zoom_edge=1.5
      zoom_close=4
  }
  ; 4 is at: -114.2 -26.5 138.73
  ; 4 zoom is from 2 to about 40, and doesn't turn enough to swap again.
  cam7
  {
      ; pos is where the camera goes to.  (this is all with flag 8)
      pos
      {
        x=-114.2000
        y=-26.500000
        z=138.730000
      }
      flags=1
      radius=115.000000
      zoom_edge=2.000000
      zoom_close=40.000000
      cube
      {
        edge=-121.04 -27.2 27.45
        close=-117.17 -28.23 138.16
        far=-130.88 -28.23 174.92
      }
  }
  ; 5 is at: -214.76 -27.8 329.84  - may nudge up y
  ; 5 zoom is from 2 to about 22 to 2 again.
  cam8
  {
      ; pos is where the camera goes to.  (this is all with flag 8)
      pos
      {
        x=-214.800000
        y=-24.800000
        z=329.84000
      }
      flags=1
      radius=170.000000
      zoom_edge=12.000000
      zoom_close=30.000000
      zoom_far=2
      cube
      {
        edge=-184.25 -28.23 278.76
        close=-207.25 -28.23 328.2
        far=-68.6 -27.55 401.48
      }
  }
  ; 6 is at: -31.15 -26.5 433.9 - may nudge a bit up in y again
  ; 6 should be at: -31.7 -26.6 434.5 to avoid viewing a sign close up
  ; 6 zoom is from 6 to 50 to 6
  cam9
  {
      ; pos is where the camera goes to.  (this is all with flag 8)
      pos
      {
        x=-31.700000
        y=-26.600000
        z=434.500000
      }
      flags=1
      radius=50.000000
      zoom_edge=24.90000
      zoom_close=50.000000
      cube
      {
        edge=-68.6 -27.55 401.48
        close=-33.48 -27.9 429.66
        far=9.49 -28.56 393.0
      }
  }
}
I tried to record a lap but get crashes when I do.

I'll think about how it should work on [-1, 1] to go better with zoom_far.
 
Hello Dave,
the apps in the background are the same every time...
I did it again with a look at the draw calls (OGL?!) and it was the same....
1) v0.8.36, track unsplit, with Dave's Lambo: ~15fps (at the SF line), drawcalls=668
2) v0.8.36, same track split with cellsize=1000, also with the Lambo at the SF line: ~20fps! drawcalls=267
3) v0.8.38a (not the env mip one), same track and shaders unsplit, Lambo: ~30fps... drawcalls=668
4) v0.8.38a (not the env mip one), same track and shaders split with cellsize=1000, Lambo: ~28fps, drawcalls=267

and the fps today was the same as yesterday... hmm....
 
There's in fact a huge difference when setting the TC (track cam) flags to 0 or 1. As shown in the code, to get rid of this ugly TC flipping, the radius needs to be increased. Also, to quickly set all your cams at once in NP++, it helps to place TCs at approximately the same distance from each other, so you can edit all your cams in one go.

Here's some testing with 0.8.36 with TCs flags=0

Code:
radius=65
group=0
zoom_edge=8.5
zoom_close=10
zoom_far=2
flags=0

Set each cam to about 1/2 of the radius indicated in my code and it should animate quite nicely, with smooth fov/zoom interpolations... :)

Also, for the disco cam's automatic 'around the car' camera animation/interpolation, you definitely see that the keyframes need to be placed far away from each other in your Racer code timeline... Interpolating points over short time lapses or short distances is messy! It's like NURBS technology: try duplicating a bunch of non-equidistant curves to finally loft them, ouch...

One other thing: with the projection shader (projected lights), when the fov goes to extreme values < 4, the projection behaves strangely from certain perspective views.
 
