I'm not sure how things work these days, generally or specifically, but how do modern APIs handle graphics memory?
I.e., say I have a large track with 500 MB of textures: do they get loaded into system RAM/virtual memory at track load, and only copied into GPU memory when a render call needs them?
Or does the GPU load as much texture data as possible at track load and keep it resident on the card?
Or does it manage it intelligently somehow?
Can we, for example, have a track that is 10 km long, where the first 5 km is a desert landscape with 500 MB of textures, then we hit a tunnel, and out the other side there is 5 km of cityscape with another 500 MB of textures?
The GPU isn't going to try to load 1 GB of textures into its memory, is it?
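As I understand it, neither extreme is quite right: the driver pages textures between system RAM and VRAM on demand, and engines often add their own streaming layer on top so only nearby content is resident. A minimal sketch of that engine-side idea, using the desert/tunnel/city layout above (all names and numbers are hypothetical, Python just for illustration):

```python
# Hypothetical position-based texture streamer: keep only the texture sets
# near the camera resident, under a fixed VRAM budget.

BUDGET_MB = 512

# Each zone of the track declares its texture payload and extent (km).
ZONES = [
    {"name": "desert", "start_km": 0.0, "end_km": 5.0,  "size_mb": 500},
    {"name": "tunnel", "start_km": 5.0, "end_km": 5.5,  "size_mb": 20},
    {"name": "city",   "start_km": 5.5, "end_km": 10.0, "size_mb": 500},
]

def resident_zones(position_km, lookahead_km=1.0):
    """Return zones that should be resident: anything within lookahead
    distance of the car, nearest-first, trimmed to the VRAM budget."""
    wanted = [z for z in ZONES
              if z["start_km"] - lookahead_km <= position_km < z["end_km"] + lookahead_km]
    # Prefer the nearest zones; evict the farthest if we blow the budget.
    wanted.sort(key=lambda z: abs(position_km - (z["start_km"] + z["end_km"]) / 2))
    resident, used = [], 0
    for z in wanted:
        if used + z["size_mb"] <= BUDGET_MB:
            resident.append(z["name"])
            used += z["size_mb"]
    return resident, used

print(resident_zones(2.5))  # deep in the desert -> (['desert'], 500)
print(resident_zones(5.2))  # in the tunnel; neither 500 MB set also fits -> (['tunnel'], 20)
```

So with a 512 MB budget the desert and city sets are never resident at the same time; the tunnel acts as the natural handover point.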
Obviously the GPU keeps some things resident:
- current frames: say 20 MB each, but with triple buffering that might be 60 MB in frame buffers alone
- shadow maps: there are 4 x 1024px IIRC, so ~35 MB I think
- the HDR envmap is probably a big memory user: 512px at 16-bit floating point is ~4 MB, and 32-bit would be 8 MB
So right off, just running Racer is using about 100 MB of GPU memory for the envmap, shadow maps, frame buffers, etc.
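Those per-buffer figures are easy to sanity-check with width x height x bytes-per-texel; a quick back-of-envelope calculation (the resolutions and formats here are my assumptions, not Racer's actual settings):

```python
MB = 1024 * 1024

def tex_mb(width, height, bytes_per_texel, count=1):
    """Uncompressed footprint of `count` textures/render targets, in MB."""
    return width * height * bytes_per_texel * count / MB

# Triple-buffered HDR frame buffer at 1920x1080, RGBA16F (8 bytes/texel).
frame = tex_mb(1920, 1080, 8, count=3)    # ~47 MB

# Four 1024x1024 shadow maps at 32-bit depth (4 bytes/texel).
shadows = tex_mb(1024, 1024, 4, count=4)  # 16 MB

# 512px cube envmap, RGBA16F, six faces.
envmap = tex_mb(512, 512, 8, count=6)     # 12 MB

print(round(frame), round(shadows), round(envmap))  # 47 16 12
```

Depending on the real resolutions and formats the total shifts around, but the order of magnitude (tens of MB before any scene textures) holds.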
So I'm confused: how much video memory can we actually use?
Can I author with 1 GB of textures but manage the scene so that only 512 MB is seen and used by the graphics card in any one frame, giving me plenty of variance across a large track?
Or do I ideally need to keep all the textures for the entire track within 512 MB, meaning either lower resolutions or fewer unique textures?
In the old days we got AGP to speed up texture transfers from system RAM to VRAM, so we could have faster and nicer graphics; plain PCI at the time was far too slow to move the volumes of data that graphics needed out of system RAM.
So AGP used system memory to supplement video memory.
Does the same still apply today over PCI-E?
Are there ways to prioritise certain texture data into GPU RAM and the rest into system RAM?
Just trying to get a figure for how much memory I have to play with in total... final speed will be determined by actually rendering it and seeing what the FPS is.
I guess I can test this by putting loads of 8192x8192 textures (gigabytes' worth, so they easily overflow my GPU memory) on some boxes, hiding them with LOD, and having the boxes appear one at a time as I drive along... hmmmm...
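For that experiment it helps to know what each test texture costs: an uncompressed 8192x8192 RGBA8 texture is 256 MB before mipmaps, so only a couple would overflow a 512 MB card. A small sizing helper (the 4x4-block DXT figure is the standard compression ratio; whether your test textures are compressed is up to how they're authored):

```python
MB = 1024 * 1024

def texture_mb(size, bytes_per_texel=4, mipmapped=True):
    """VRAM cost of a square texture; a full mip chain adds ~1/3 on top."""
    base = size * size * bytes_per_texel
    return (base * 4 / 3 if mipmapped else base) / MB

print(texture_mb(8192, mipmapped=False))  # 256.0 MB uncompressed, no mips
print(texture_mb(8192))                   # ~341.33 MB with full mip chain
# DXT1 stores 8 bytes per 4x4 block = 0.5 bytes/texel:
print(texture_mb(8192, bytes_per_texel=0.5, mipmapped=False))  # 32.0 MB
```

So if the test textures are uncompressed, three or four boxes are already over a gigabyte; if they get DXT-compressed on load, it takes eight times as many to fill the same budget.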
Cheers
Dave