
Why more than 60 fps

Discussion in 'Sim Racing Hardware' started by Cote Dazur, Feb 9, 2014.

  1. Cote Dazur

    Cote Dazur
    AC Addict

    What are we getting when reaching over 60 fps?
    Why not lock the game at 60 fps from the menu, as AC allows, even if your system can do more? Is it better to have a stable 60 fps, or to let it fluctuate?
    If 60 fps is not reachable, should we lock it at 30 fps?
  2. The higher the framerate, the lower the latency from input to output on the screen.
  3. I am also interested in this.

    I can see that if your monitor can handle a higher refresh rate, then running at this may reduce input lag as the time between sequential frames is less. If your monitor can only achieve 60Hz then will increasing your graphics card output have no effect?

    Also, can your display have a high refresh rate but still have high input lag as the two are not directly related (the screen might be able to display 120Hz, but there be a large lag between the output from the graphics card and the screen showing it, due to processing of some kind)?

  4. Mathematically, 60 fps on average means 1/60 = 0.0167 seconds per frame. In reality, though, 60 fps does not mean a frame time of 0.0167 seconds: your frame times can vary a lot even while the counter reads 60 fps. Ideally you always want your frame draw time to be less than 0.0167 seconds. On a 60 Hz monitor, that means the screen can draw the image as soon as possible after the GPU has finished calculating it. If you lock your fps to 60 you may still get lag and stutter, because when the frame rate fluctuates the screen does not always have a new image to draw when it refreshes.
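    The point that an "average 60 fps" does not mean every frame takes 1/60 s can be shown with a few made-up frame times (the numbers below are purely hypothetical):

```python
# Five frame times (in ms) that average out to 60 fps, even though several
# individual frames miss the 16.7 ms budget a 60 Hz screen gives them.
frame_times_ms = [10.0, 25.0, 12.0, 24.3, 12.0]

avg = sum(frame_times_ms) / len(frame_times_ms)
print(f"average frame time: {avg:.2f} ms -> about {1000 / avg:.0f} fps")

budget = 1000.0 / 60.0  # 16.7 ms per refresh at 60 Hz
late = [t for t in frame_times_ms if t > budget]
print(f"{len(late)} of {len(frame_times_ms)} frames missed the 16.7 ms budget")
```

    So the fps counter can read a steady 60 while two frames out of five were too late for the refresh they were meant for.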

    My understanding is that the GPU keeps a full image stored in memory, which it updates one pixel at a time, running through the screen. It starts from the upper-left corner and then, line after line, calculates every pixel on each line. At 1920x1080 that means one frame is 2,073,600 pixels. If it does this exactly 60 times per second, it creates 124,416,000 pixels per second, exactly the amount the screen can show in one second.
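    The arithmetic above checks out directly (using the same resolution and refresh rate quoted in the post):

```python
# Pixel throughput at 1920x1080, 60 Hz, as in the post above.
width, height, hz = 1920, 1080, 60

pixels_per_frame = width * height          # one full frame
pixels_per_second = pixels_per_frame * hz  # frames per second times pixels per frame

print(pixels_per_frame)    # 2073600
print(pixels_per_second)   # 124416000
```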

    However, if your fps is 120, the image is calculated twice between the moments the 60 Hz screen refreshes and reads it. This means the upper portion of the image the screen draws is new while the bottom part is old, because when the screen reads the image the GPU has already started calculating the next frame and is halfway through it. This also causes screen tearing (not necessarily in the middle of the screen, but one tear somewhere with these numbers). If the refresh rate and fps are multiples of each other (120 is 60x2, or 60 is 30x2), the tear always appears at the same line of the image, which makes it obvious. But if your fps is, for example, 83 on a 60 Hz screen, then 83/60 = 1.38, and the tear does not appear at the same line every time.
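    A rough sketch of why the tear line stays put or drifts, depending on the fps-to-refresh ratio. This is purely illustrative: it assumes each refresh samples whatever fraction of the new frame the GPU has rendered, and it only tracks the fractional offset between the two.

```python
# Approximate scanline of the tear on successive refreshes of a 60 Hz screen.
# The tear sits where the GPU's render position crosses the screen's read
# position; here we track only the fractional offset between the two clocks.

def tear_positions(fps, hz, refreshes=5, lines=1080):
    positions = []
    for n in range(1, refreshes + 1):
        # fraction of a frame the GPU is "ahead" at the n-th refresh
        offset = (n * fps / hz) % 1.0
        positions.append(round(offset * lines))
    return positions

print(tear_positions(120, 60))  # multiple of 60 Hz: tear stays on the same line
print(tear_positions(83, 60))   # 83/60 = 1.38: tear lands on a different line each refresh
```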

    So higher fps = more recent image on screen.

    Another thing is that the GPU and the screen are not in sync. Even if you run a perfectly stable 60 fps on a 60 Hz monitor, there can still be up to 0.0167 seconds (16.7 ms) of lag, simply because the GPU finishes the new image just slightly after the screen draws the previous one. On average, I'd imagine that at 60 fps on a 60 Hz screen the lag is half of that: 16.7 ms / 2 = 8.3 ms. Similarly, at 120 fps on a 120 Hz monitor your average lag would be (1/120)/2 = 4.2 ms.

    The higher the fps, the better the chance your screen shows the image the GPU created most recently. For example, running a 60 Hz monitor at 166 fps reduces this lag further, because the screen always has a very fresh image to show. In other words, your monitor updates every 0.0167 seconds, but the GPU can update every 1/166 = 0.006 seconds. While it does fluctuate, on average your screen sees half of that as delay. Compared to an ideally perfect and stable 60 fps on a 60 Hz screen giving you 8.3 ms of lag in the optimal case, running 166 fps at 60 Hz gives you an average lag of roughly 3 ms.
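    The half-a-frame-time averages discussed above can be computed directly. This uses the same idealised assumptions as the post: no display processing lag, perfectly steady frame times, and the screen sampling at a random moment relative to the GPU.

```python
# Average age of the newest frame available to the screen, assuming the GPU
# finishes frames at a steady rate: on average the newest frame is half a
# frame time old when the screen reads it.

def average_lag_ms(fps):
    frame_time_ms = 1000.0 / fps
    return frame_time_ms / 2.0

for fps in (60, 120, 166):
    print(f"{fps:>3} fps -> ~{average_lag_ms(fps):.1f} ms average frame age")
```

    This reproduces the figures in the post: about 8.3 ms at 60 fps, 4.2 ms at 120 fps, and roughly 3 ms at 166 fps.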

    Now, all of this assumes the screen draws the image instantly, which is not true. Old CRT screens did draw the image essentially instantly, but LCD screens take some time. That lag, however, is simply added on top of the processing lag.
  5. Because the fps tools lie. They don't show how many frames you are getting at every single moment; they just measure the fps once every second, or tenth of a second or so, in order not to eat too much performance.

    Most who have tried SLI know how easy it is to lie with fps. How many here have run SLI, gotten nearly double the fps, and yet somehow the game feels sloppier than ever? SLI introduces more input lag and stutter without exception. You'd really need a racing title with dead zones baked in that you can't do anything about not to notice it.

    Many have also installed a new CPU that seemingly doesn't affect fps, yet somehow the game runs smoother.

    There is also an ancient myth that the human eye can't detect more than 30 frames per second. I don't believe even 60 frames per second is at the limit of the average human eye, but I may be wrong.

    G-sync technology is kind of interesting; I have never experienced how it works. I generally just run with v-sync off to kill input lag and live with the tearing. I am not too sensitive to it.
  6. Cote Dazur

    Cote Dazur
    AC Addict

    Using this ( http://frames-per-second.appspot.com/ ) there is a visible difference between 30 and 60 fps, but over 60 not so much.
    It might not be relevant to what we are experiencing in a sim, though.
    On the side of the app, the red dot shows the different reaction times; it varies from 60 to 120, but if the screen is not fast enough, can we even see it?
    Ghoults, thank you for the explanation, very instructive.
    Last edited: Feb 11, 2014
  7. Thanks guys, couple of follow-on questions:

    Ghoults, I can't quite get my head around why rendering two sequential frames at 60Hz is worse for lag than rendering three at 120Hz on the graphics card but only showing the first and third on your screen (i.e. rendering at a higher refresh rate than your monitor). These two seem analogous to me, with your graphics card just working twice as hard if rendering at 120Hz, but maybe I am misunderstanding.

    Par, I understand that maintaining a steady frame rate is very important with it being better to have a steady 60Hz than fluctuating between 120Hz and 0Hz even though these might both give you the same average frame rate. Isn't this achieved by locking the game to a maximum frame rate assuming that your card can hold this solidly, and not rendering unneeded frames?

  8. Lag essentially means how fresh the images you are seeing are. The less lag you have, the more recent the images. At a higher fps the images you see are newer because they were created more recently. At 60 fps a new frame is created every 1/60 seconds; in other words, the full frame is fully updated by the GPU every 1/60 seconds. At 120 fps a new frame is fully created every 1/120 seconds.

    The frame rates of your monitor and GPU are not synced, so the screen just takes the latest image the GPU is working on. The higher your fps, the newer that image is, because more of it is new.

    Effectively, for low lag you want a couple of things to happen: draw each frame in as short a time as possible, and then get it onto the screen as soon as possible. Higher fps reduces the time to draw a frame and also makes it more likely to appear on the screen sooner.

    Honestly, I don't know why people lock maximum frame rates. Theoretically it could keep the fps more stable, and if the fps stays at the upper limit you can adjust that limit so you get the most fluent fps without screen tearing.

    I don't think a stable fps really has all that much value in itself. Avoiding really low fps is what matters most.
    Last edited: Feb 11, 2014
  9. I understand what input lag is, but I still can't grasp why running at a higher refresh than your screen is of any benefit. OK, here is my thought process:

    Screen refresh (60Hz) in blue, graphics card refresh v-synced to screen in green, so the only lag at each frame here is the inherent hardware lag in the system (so non-zero, but as low as it can be) and each frame is refreshed when it is drawn:


    On the bottom is a graphics card refreshing at 120Hz and drawing every orange frame to the screen, with every yellow one not drawn. As it is not synced it can drift with reference to the screen refresh, and in this example has a median lag of half a frame (so approx. 4ms here, but this could be between zero and nearly 8ms) plus any inherent system lag.

    This would be improved if you rendered all of the 120Hz frames on the 60Hz screen, but then you would only render half a screen at a time and still get worse lag between alternate top halves of the screen being rendered out of phase (I guess this is where tearing comes from?). I think tearing looks awful, but that is just a personal preference.

    Don't get me wrong, I can see why you would want 120Hz with a 120Hz screen, and want to be able to sustain much more than 60Hz so that you could guarantee never dropping below this in game, I just don't fully understand what advantage there is to running your card faster than your screen refresh once set up. Unless the use of v-sync adds overhead to drawing rendered frames, and therefore increases lag?
    Last edited: Feb 11, 2014
  10. 1. V-sync causes lag. Don't use v-sync.
    2. 60 fps does not mean precisely 60 fps, i.e. one frame every 0.0167 seconds.

    Here are two pics I made. Both images show the delay of each screen. At the bottom of each pic you can see what the monitor would actually show in each case, along with the delay of that part of the screen.
    - 60 fps with v-sync
    - 60 fps
    - 120 fps

    In the first image, the fps and Hz are very well synced; by synced I mean that the GPU conveniently starts calculating a new picture just as the screen draws the old one it just finished. In the second pic the situation is more realistic, with the fps and Hz not in sync.

    The thick black line shows the GPU load: black means full load, red means waiting. With v-sync, the GPU waits until it gets the go-ahead signal to start drawing again; it basically waits until the display has copied the full image from its memory before it starts working on the new one. Horizontally, the width of each bar shows how long it takes the GPU to create one frame: a wider bar means a longer time, and so forth.

    So here we can see that the average age of the 120 fps frame is about 8 ms, compared to 17 ms at 60 fps.


    Here we can see what happens without v-sync. Frames are finished and new frames are started before the screen has "looked at" the frame the GPU is working on. The red dots in this image show the point at which the frame is drawn (the "age of the frame", i.e. the lag of that part of the picture on the screen).

    The 60 fps v-sync case stays as it was, because it cannot get unsynced. The 60 fps image gets a tear a little below the middle: the upper part of the image is 9 ms old while the bottom is 26 ms old. At 120 fps we get a tear as well, roughly in the same place: the upper part of the image is 4 ms old while the bottom part is 11 ms old.
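    The exact ages of the two halves depend on the phase offset drawn in the diagrams, but the general pattern can be sketched. The `phase` value below is a hypothetical assumption (how far through a frame the GPU is when the refresh starts), and the screen is assumed to scan out top-to-bottom:

```python
# Approximate age of the image above and below a tear, without v-sync.
# The part above the tear comes from the newest frame; the part below comes
# from the previous frame, so it is one GPU frame time older.

def tear_ages_ms(fps, phase=0.5):
    """phase: assumed fraction of a GPU frame completed when the refresh starts."""
    gpu_frame_ms = 1000.0 / fps
    newest_age = phase * gpu_frame_ms      # age of the fresh (upper) part
    older_age = newest_age + gpu_frame_ms  # lower part is one frame older
    return newest_age, older_age

print(tear_ages_ms(60))   # upper ~8.3 ms, lower ~25.0 ms
print(tear_ages_ms(120))  # upper ~4.2 ms, lower ~12.5 ms
```

    With this (assumed) half-frame phase the numbers come out close to those read off the diagrams: higher fps shrinks the age of both halves of the torn image.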

    Notice that unless the tear is drawn in the same place each time the screen refreshes, the tear is not visible to the human eye.
    Last edited: Feb 11, 2014
  11. Sounds like I need to figure out how to disable v-sync and run at a capped frame rate of 60 fps without getting tearing...