Racer v0.8.34 released


Ruud

RACER Developer
Racer v0.8.34 is out! Get it at http://www.racer.nl/download/racer0.8.34.7z (48Mb, note the 7z extension; you'll need 7-zip from http://www.7-zip.org ).

Enjoy!
Ruud

Changes:
- Bugfix: Director cam didn't work
- Bugfix: Rain didn't affect grip (snow did)
- ENet upgraded to v1.3.3 (small change to avoid unreliable packet regression, and disconnect fix)
- Added input limiters for Pacejka curves for slipangle/ratio/camber/load (see data/cars/default/car.ini,
wheel<n>.pacejka.slipangle_min/max etc, and the conceptual sketch below this changelog); see http://www.racer.nl/reference/pacejka.htm#inputsoutputs
- Bugfix: Replay was never painted (no camera was defined).
- Bugfix: Particles in replays were not animated in reverse mode. Now they are (although not entirely correctly).
- Bugfix: Turning the AI on while offroad would cause a crash.
- Added ai.spring_factor in car.ini to add some magic forces to keep the car on track (use 'ai scale' for example
to add some speed to the AI line; check out 'ai save' and such).
- CurvEd import of ASCII txt and CSV added.
- Bugfix: CurvEd imports produced zeroed points if more than 50 points were imported.
- Added textures.stub in racer.ini to optionally load a single texture for every loaded shader; useful
for checking whether you're using mipmaps effectively (see http://www.racer.nl/tutorial/stub_textures.htm)
- Added data/images/texture_stub.dds as a default image for the feature above
- Added dev.fast_fade in racer.ini to allow quick fades while developing
- Added wheel roadnoise filtering to simulate rubber; see http://www.racer.nl/reference/wheels.htm
- Added ai.rc_factor in car.ini to be able to help AI if you give it a high grip_factor as well. See http://www.racer.nl/reference/carphys.htm#ai
- Added ai.cg_factor in car.ini to lower effective CG height to help AI drive faster through turns without flipping.
See also http://www.racer.nl/reference/carphys.htm#ai
- susp_implicit_integration is now set to 0, i.e. explicit physics. The explicit method seems stable for F1 cars with tire_damping,
and the implicit method created buggy load values in the tires.
- Added ghost.update_if_loaded (defaults to true) to determine whether to update the ghost lap (if you're faster),
even if you've explicitly loaded one with ghost.load.
- Materials without names in DOF files used to get the exact texture name (i.e. 'body.tga'). The extension is now cut, so it becomes 'body'.
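
For those wondering what the Pacejka input limiters actually do: conceptually they just clamp each input to a configured range before the curve is evaluated. A rough sketch of the idea in Python (illustrative only, not Racer's actual code; only the slipangle_min/max setting named above is from the changelog, the other names are made up):

```python
# Conceptual sketch of Pacejka input limiters: clamp each input to a configured
# range before it goes into the magic-formula evaluation.
def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def limited_pacejka_inputs(slip_angle, slip_ratio, camber, load, limits):
    """limits maps each input to a (min, max) pair, e.g. read from car.ini
    settings such as wheel<n>.pacejka.slipangle_min/max (other names hypothetical)."""
    return (
        clamp(slip_angle, *limits["slipangle"]),
        clamp(slip_ratio, *limits["slipratio"]),
        clamp(camber,     *limits["camber"]),
        clamp(load,       *limits["load"]),
    )

# Example: keep slip angle within +/-20 degrees and load between 0 and 10 kN.
limits = {"slipangle": (-20.0, 20.0), "slipratio": (-1.0, 1.0),
          "camber": (-10.0, 10.0), "load": (0.0, 10000.0)}
inputs = limited_pacejka_inputs(35.0, 0.2, -3.0, 4500.0, limits)
```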
 
I know, it sucks, but compared to how it was on BT.. it's much much better. Thankfully fibre-optic is coming here! :)

Recent transfer-speed records by a German high-tech group: approx. 26 Tbit/s... with laser tech, what a coincidence...

#==================================================#

Now, jokes apart, it seems we're hitting a new age... Does anyone know about these links?

We're talking about quantum mechanics, point-cloud technology, voxel engines & stuff that, for sure, industries don't like, because they want you to blindly buy new GPUs, keeping you 'lobotomized'. I have an incredible amount of proof which discredits all current governments' visions/politics/truths & current worldwide statements, in all sectors (energy, politics...), so keep your eyes wide open & never trust/work for them again.

Boycott the system with your current tools, so it will never happen again; free your mind from constraints & always remember that almost everything you've been dreaming or thinking of is now possible. Open your consciousness, your mind & your heart...

In the next few months you will experience humanity's biggest 'consciousness' crisis... & you'll probably remember my words, because of the simple 'insolvent' mathematical equations we're currently living in.



http://atomontage.com/?id=home
http://www.euclideon.com/
http://en.wikipedia.org/wiki/Point_cloud
http://www.nextengine.com/
http://dneg.github.com/dnPtcViewerNode/
http://vterrain.org/Packages/Com/

I really hope you all feel me, because it's time to change things & flush all the old ideas & ways/techniques of thinking about/approaching different problems... It's GAME OVER for them & we shall ENTER a new 'paradise' of infinite ideas; who knows, extracting LIDAR cosmic constellation terrains from other planets & integrating cars into a completely new quantum physical world beyond all expectations, all that with ease? :)

To help you open your consciousness, I invite you to watch the 2011 'Black Whole' documentary from Nassim Haramein, which I discovered recently & which is actually quite a good example of how many people (physicists, researchers, engineers...) now see the physical & infinite world/cosmos as we know it today, from quartz to quasars. An interesting documentary about our physics & cosmos, from a micro to macro POV (point of view), from Newton to quantum mechanics :).


@Ruud/Mitch

I know it hurts, because that means restarting the Racer project from a new POV, one which should this time be 100,000 times more durable... Let us know if this is doable; I guess it's a matter of time right now...
 
What?
I'm sorry, but that first video is full of crap. I didn't even bother to look at the second video. They're comparing 1990s graphics to current graphics; of course current is going to look better. "Polygon" vines compared to "unlimited" vines? Please, add a normal map and you can get exactly the same look.

In the end it's comparing graphics to art, and that's the only difference...some people can make art, others make graphics - point cloud data and that stuff is completely pointless without an artist to make it look good.
 
Recent Bruce Dell interview, 10/08/2011... just ignore the interviewer & skip forward through parts of the movie... at least you'll discover it's actually true :) I was just telling myself what a bunch of idiots they are at ATI/Nvidia, both of them, even if they've shown us some minor improvements over time (15 years I've been in the business), but sorry, it's REVENGE TIME. All over the gaming forums you'll hear/read the truth: people generally mod games because they're sick of not having proper tools + they're sick of always seeing the same types of ideas again & again, tricks to overcome ridiculous polycount constraints (as discussed recently all together, etc.), slow & non-optimized algorithms, etc.


21 trillion polys in realtime; no normal maps/displacement/height maps needed since everything is highly detailed. More info in the video, interesting stuff indeed, a guy with good ideas/approaches. No LODs, no alphas for faking & no ugly Z-fighting issues... 64 points/mm³... you can even go down to micron dimensions...

How would you do 3D quickly? Just take some images & process them through these websites:

http://homes.esat.kuleuven.be/~konijn/3d/


Other similar tools already exist; typically 2 images are needed to 'reverse engineer' the 2D into a highly accurate 3D obj model. I usually do a 'convert Texture To Poly' & set the algorithm in Maya accordingly to get similar results.

http://photosynth.net/

Also, I'm currently extracting & reverse-engineering real LIDAR data (laser-scanned scenery with nice road embossing) as point clouds for some of my test tracks; they're 100,000 times faster to handle & typically contain billions of points. Extraction & real reproduction of the world is actually an easy, automated task. All over the world, governments & militaries have 3D laser-scanned the whole Earth & many tools already exist for post-processing & extraction.

So, yeah, especially in our case, for Racer, all this information (GIS, LIDAR, DEMs...) could be used to get all the worldwide tracks, with all the detail as it really is and without optimization worries (polycount budget...). I know Dave & most of you would be happy if we could simplify & automate things. All I'm saying here is that I've quickly tested some professional point-cloud apps, so yeah, it's really a question of time before everyone sees everyone. :D

That implies that no secret can be held anymore, if you're smart enough to think further ahead. That's why all this revolution stuff, at all levels & in every discipline... Ironically, it's like a modern car, where you have all kinds of sensors; logically speaking, our real cars could process & do, let's say, 10,000 times more than they currently do, but it's not 'default', which is why there's all this customization in all domains. In Japan, tuning the car's CPU is common & done quickly through apps & custom scripts...

Anyway, a lot is happening these days; I wonder how long it can last this way?

If you asked me what car I like, I'd probably say none, since I know a bit about electromagnetism & speed-of-light vehicles (also called anti-gravity vehicles) & stuff that has been classified by governments for too long (since WW2). Many recent events show they're still hiding the truth to keep us locked up & brainwashed, so it's time to discuss real things with family, friends, wives, strangers... & do a global update of what is currently available in durable/efficient/futuristic tech.

Ah, I forgot; you remember we also had that 'hardware vs software' renderer question in Racer & you were telling me there was no way of doing it... now you know :tongue:, Bruce isn't even using his GPU, just imagine... what an insult to ATI/Nvidia! Too much waste, too much incompetence & illogical behaviour which needs to be stopped...
 
I'll believe it when I see it. I'm not trying to be a skeptic, but the cheesy narrator, the unfair comparisons, and you talking about how the government is hiding floating vehicles are making me skeptical.

One other thing: wouldn't this method require stupidly expensive CPUs and GPUs?
 
I feel you, been thru that too...

For now it's using only CPU power, & in the new link he talks about using GPU power too, so let's wait a few months. Some forums are going crazy, and surely the industry is too, because it's 'unfortunately' not in their 'financial' interests/plans... to see something that is efficient & durable.

Somehow he shows/demonstrates all the hidden power we have in our PCs; the same is almost applicable to many current inventions, as we have seen with consoles for example, where you can have a custom system for the PSP which does a lot more than just the default stuff. I remind everyone that humans travelled to the Moon with the equivalent processing power of our desk calculators; imagine how far we would go if we applied the same spirit to today's CPU power?

Probably in our Milky Way Galaxy....:)
 
Point cloud data isn't something new; it's been around for ages. While it's a good tool, it's ridiculous to want to use that data in an engine anyway unless the purpose is to prove the tech. You get your point cloud data (you can turn it into a mesh if you'd like via tools like Photofly), but then you model around it, using your skills as an artist, since that's what differentiates you from everybody else... your modelling skill and style. To use raw point cloud data or messy models in real time might seem amazing at first; however, once you realise everything you're missing out on, it makes the point moot.

Also, about the anti-gravity spaceships...[/tinfoil]

Anyway, how about we get back to a topic like say...v0.8.34

At work they recently revised the shadow system of the engine we use, and they've run into the banding problem we get too. The shadows looked fine earlier and probably ran better; why can't we go back to v8.3 or whenever it was?
I'm baffled at how poorly the game's running these days - I can only draw the same conclusion as others have and suggest it's Newton's fault (at least in part).
I've got so many projects on at the moment that are almost ready to release, but I don't feel it's time yet since they don't run as well as they should. What's going on?
 
Point cloud LUT with no other data per point stored except what, pre-baked lighting?

So an XYZ coord and an LDR colour value?


Now throw in a light that passes through the scene: how do you make that light's value adjust the intensity of the millions of points? You're going to have to check the distance to each point to get the lighting falloff correct.
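
Even the dumbest version of that is a full pass over every point, per light, per frame. A tiny sketch of what I mean (nothing to do with how Racer or the demo actually works; just brute-force inverse-square falloff over an assumed point set):

```python
import numpy as np

# Hypothetical scene: N points, each with an XYZ position and a baked RGB colour.
N = 2_000_000
points = np.random.rand(N, 3) * 100.0    # positions in metres
colours = np.random.rand(N, 3)           # baked LDR colours

def apply_point_light(points, colours, light_pos, light_colour, intensity):
    """Brute-force per-point lighting: one distance check per point, per light."""
    d2 = np.sum((points - light_pos) ** 2, axis=1)   # squared distance to the light
    attenuation = intensity / np.maximum(d2, 1e-6)   # inverse-square falloff
    return np.clip(colours + attenuation[:, None] * light_colour, 0.0, 1.0)

# One light = N distance evaluations; every extra light multiplies the work,
# and that's before normals, specular or shadows even enter the picture.
lit = apply_point_light(points, colours, np.array([50.0, 10.0, 50.0]),
                        np.array([1.0, 0.9, 0.8]), intensity=200.0)
```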



Then what about lighting properties? There's no specular, no reflections, nothing of the sort evident in those demos. How do you calculate the specular properties of the materials as you move around them?

Each point then suddenly has to have a normal, because to calculate a reflection we need to know which way the surface is pointing. Each point needs a value for reflective strength... how about other properties if we want to interact with it, like physics, or deformations?
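
And normals don't come for free either; the usual way to get them for raw points is to estimate each one from its neighbours, something like the sketch below (hypothetical code, just to show it's extra work and extra per-point storage, not anything the demos are known to do):

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16):
    """Estimate a unit normal per point from its k nearest neighbours (local PCA)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)       # indices of each point's k neighbours
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)       # 3x3 covariance of the local patch
        _, eigvecs = np.linalg.eigh(cov)
        normals[i] = eigvecs[:, 0]         # smallest eigenvector ~ surface normal
    return normals

# Tiny demo on random points; a real scene would have millions of them,
# and every normal is another three values per point to store or recompute.
cloud = np.random.rand(10_000, 3)
normals = estimate_normals(cloud)
```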

How about applying damage to items, like good old bullet holes?


Right now it's just an exercise in showing static data graphically. It's a huge data-set yes, but that is all it is. It doesn't look much like an interactive game environment at all.




I'm also curious just how good it can be. The demo talks about 64 points per cubic mm; call it roughly 65 points per mm² of visible surface, i.e. 65 xyz coords per mm².

A human body has about 1.9 m² of surface area (1.9 million mm²), so we need roughly 124 million points to cover a human body model.

Each point will need an RGB value, so that's 24 bits, or 3 bytes, per point.

Each point will also need to be accurate to within about 1/10th of a mm over a range of roughly 10 m. That's 100,000 steps, or about 17 bits, per axis; with 3 axes, call it 6 bytes per point for position, so about 9 bytes per point in total.

So right away we need about 124 million points x 9 bytes = roughly 1.1 GB of data for a single human model at 65 points per mm² of surface area...
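
Putting rough numbers on that (the byte counts and density are my own assumptions, nothing official):

```python
# Back-of-the-envelope size of a raw point cloud for one human model.
SURFACE_MM2 = 1.9e6      # ~1.9 m^2 of body surface, in mm^2
POINTS_PER_MM2 = 65      # assumed surface density
BYTES_COLOUR = 3         # 8-bit RGB
BYTES_POSITION = 6       # ~17 bits per axis (10 m range at 0.1 mm steps), 3 axes

points = SURFACE_MM2 * POINTS_PER_MM2
size_gb = points * (BYTES_COLOUR + BYTES_POSITION) / 1e9

print(f"{points:,.0f} points, ~{size_gb:.1f} GB uncompressed")
# -> 123,500,000 points, ~1.1 GB (colour + position only, no normals or materials)
```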


Something doesn't add up to me. It looks like they dump tons of point data and render just enough to make stuff look ok at a bigger distance, and then add points from their dataset as they get closer to objects. But are they instancing heavily? It looks like they might be in their demo... or using fractal generation to populate their world, because to store that much unique information would take many terabytes for your average game level.



You have to remember, the biggest cost today for GPUs isn't polygons; it's the textures and the sampling at each pixel to do nice-looking things.

I also question the need to have a large world which you can zoom right into a pebble with, and see something that detailed. Unless you are making 'Global Ant Scientist' as a game, where you walk around a huge forest and need to zoom right into the floor to find ants, it's rather useless to have that dynamic range of scale and detail.



They are making out like there is a conspiracy, but why would there be? If their software runs on today's super-fast CPUs, it can run on today's super-fast GPUs just as well...

It's just voxels taken a bit further. There is no inherent better or worse way of doing things. Outcast proved that back in the day. It looked better in some ways, but worse in others, vs polygonal/texture based 3d worlds. We even saw voxel/point cloud presentations in 3D Mark over the years, so it's not like it's been 'hidden' technology.


Give me polygons any day of the week for realtime/interactive use. I guess that is why they prevailed.



If for some reason the GPU giants are worried, I wouldn't know why... they make hardware. If things *could* run better with a different technique, then fine, but give it 6 months and creatives/artists would just demand more and more of the hardware.
They want specular control, anisotropic control, reflection control, emission control, shadows, lighting, radiosity, transparency, solid objects with refractive properties.
Suddenly your super fast point cloud projection is chugging along at 1fps because we realise that apart from simply looking up data and showing it on screen, we need to calculate all these other visual features on the fly.

You can't have an efficient look up table of how a material will look at a certain angle at a certain time of day so it shines right in the sunshine. What about how it looks with a person stood between the sun and the object? Another look up table haha!


It's just madness to compare a well-developed interactive polygonal world with a (probably instanced) static look-up table of points with only RGB info (AO/shadows already baked in too, it seems).



I'm surprised, with 15 years in "the business", QCM, that you have fallen for this without actually realising its limitations for interactive entertainment use as it stands, and that once those requirements are fulfilled, the speed/scope will drop and suddenly it won't look so fantastic on the speed/visual-prowess trade-off.



It *may* have some use soon, I don't doubt it. Outcast showed that you didn't NEED to use polygons. But by the time they made it work for an actual game, it was no faster or better than polygons/textures... just different. Some benefits, some costs.

The same will hold today. By the time they add lighting and material properties to the points, and a way to calculate normals so they can do speculars and fancy effects, it'll be no faster or generally better than polygons, just different.


Dave
 
I just had a thought.
Could the scripting language have access to vars exposed through Cg? Say there were 3 "user" vars available in Cg; these could then be changed through the rsx scripts. Then we could write a depth-of-field shader and tweak the focal planes in (almost) realtime. (Almost, because the script would need to run 'reload gpu shaders' after each change... and that takes time. Speaking of which... surely there's a way to have them update in realtime. I'm sure they do in the engine used at work, though that is DirectX.)

Apologies if that made little to no sense, I'm typing using the on screen keyboard so I'm thinking at least 100 times faster than I'm actually typing.

Another thought: is there a limitation in the DOF format that only allows one map channel per object? If not, can we pass map channel 2 into uv.zw?
It's a bit of a waste having a float2 when you could have a float4 and different textures (say you want to tile a small texture for diffuse and have an AO map covering the whole object... would be nice).
 
@Dave

I understand what you mean, but you don't seem to get the deeper side of what is emerging... You'll probably understand things better soon, & why all the stuff I talk about will happen.

@Cam

Yeah, accessing custom Cg vars through Racer scripts & doing magic with Cg in real time would be great, but since scripting shading systems isn't that easy these days, I doubt we'll see that soon.

Cool, you're finally talking about it & that makes me happy, 'cause you wish for the same things I do. What about UV channels, or map channels as you call them? I really don't know how that works in Racer...

In Shift, we had another UV channel responsible for scratches/collisions, and the same for painting (custom liveries)...
 
Hmmmm... I'm fairly certain it's just big LUTs of pre-baked data and instancing.

The sheer data size of a LiDAR point cloud of a whole track, at one point per 1 mm, for every object in the track's general area, would be terabytes.
Even then, LiDAR scans leave gaps unless you go climbing the trees and behind the fences, so all those places need to be filled in.
And that might only capture static diffuse colour information. What about 'painting' in all those trillions of points for material properties? I.e. how shiny are those points, what colour do they shine, how much do they blur their reflection?
Even at one point per 1 cm, the data set would be gigabytes with just point locations, never mind the other data you'd want.
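
For scale, a quick sketch with made-up track dimensions (a 2 km lap and a 100 m wide scanned corridor are numbers I'm simply assuming):

```python
# Rough size of a LiDAR point cloud for a track corridor (assumed dimensions).
TRACK_LENGTH_M = 2_000    # assumed 2 km of track
CORRIDOR_WIDTH_M = 100    # assumed scanned width around the road
AREA_MM2 = TRACK_LENGTH_M * CORRIDOR_WIDTH_M * 1_000_000   # m^2 -> mm^2

BYTES_POSITION = 6        # xyz only
BYTES_WITH_COLOUR = 9     # xyz + RGB

def dataset_size(spacing_mm, bytes_per_point):
    points = AREA_MM2 / (spacing_mm ** 2)   # one point per spacing-by-spacing cell
    return points * bytes_per_point

print(f"1 mm grid, xyz+RGB: {dataset_size(1, BYTES_WITH_COLOUR) / 1e12:.1f} TB")
print(f"1 cm grid, xyz only: {dataset_size(10, BYTES_POSITION) / 1e9:.0f} GB")
# -> ~1.8 TB at 1 mm, ~12 GB at 1 cm, before materials, normals or anything dynamic
```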

I'm sorry, it's just too pie in the sky for me right now. By the time it's a workable graphical game renderer, it'll be stripped back and optimised down so much that it'll be no different from using polygons on speed/quality; it'll just be a different way of doing the same thing (as Outcast was over a decade ago).





As for Shift 2 using more UV channels, I'd like some way to add more channels/textures for materials too :)

I'm not sure if the DOF format even stores more than one set... Ruud could probably answer that, but I for one would like to be able to use more channels and then use light mapping for tracks (rather than vertex-baked shading, which is less nice).

Hmmmmmm

Dave
 
The sheer data size of a LiDAR point cloud of a whole track, at one point per 1 mm, for every object in the track's general area, would be terabytes.
Spot on (pardon the pun). I've seen point cloud data with a point every 10 m over a few km's area, and that's half a gig in itself. Going from 10 m down to 0.1 m spacing is 100x per axis, so roughly 10,000x the points over the same area - if my maths is correct, I doubt you'd want to load that every time you load a track.
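
Very roughly, taking that half-gig figure at face value and only scaling the grid spacing (same bytes per point assumed):

```python
# Scale an existing point cloud dataset by grid spacing (same bytes per point).
BASE_SIZE_GB = 0.5       # the half-gig dataset at 10 m point spacing
OLD_SPACING_M = 10.0
NEW_SPACING_M = 0.1

factor = (OLD_SPACING_M / NEW_SPACING_M) ** 2   # points scale with area: spacing^-2
print(f"{factor:,.0f}x more points")            # 10,000x
print(f"~{BASE_SIZE_GB * factor / 1000:.0f} TB at 0.1 m spacing")   # ~5 TB
```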

I'm not sure if the DOF format even stores more than one set... Ruud could probably answer that, but I for one would like to be able to use more channels and then use light mapping for tracks (rather than vertex-baked shading, which is less nice).
That's my uncertainty too - I have a feeling DOF only stores the first map channel.
 
I imagine that to make it usable you'd need some good form of compression on the in-memory data (resulting in slower lookups, though putting it on a grid makes certain aspects simple), plus big reductions in point density outside the area where the player can travel. Both are doable, but they make the technique a lot less exciting, and I still wouldn't want to build a point cloud by hand. So it may as well be procedurally generated, and only go to full level of detail near the player.
 
The real goal, imo, is to take advantage of our machines & software, and from there digitize the world or whatever you want; I mean, that's what's actually occurring worldwide.

The best thing about this technology is that since it's dealing with infrared light, it penetrates almost any 3D shape, so you can have nested 3D, which brings 3D to a new level of understanding... :)

============================================
Racer Updates

I was lately thinking about a GPS system inside Racer... has anyone had that thought too?
 
Would be nice to have something like that if we were able to have TDU-sized maps.. but currently that seems unlikely(?)

And of course, there are plenty of more important things for the guys to work on :D
 
Racer Load Error - OS version???

Hi guys! It's been a long time since I've been here! Now I have a problem: with all the new betas I get this error:
errorel.jpg
 