Gran Turismo Sophy - A breakthrough in AI?

Gran Turismo Sophy is a new take on AI within a driving game. The project is a collaboration between Sony AI, Polyphony Digital (PDI), and Sony Interactive Entertainment (SIE).

According to the development team, GT Sophy is the next level of game AI, and recent results suggest they are right. They are calling GT Sophy an AI breakthrough, comparing it to other breakthrough AI systems that have mastered arcade games, chess, and multiplayer strategy games.

Does this mean that dive-bombing the AI into T1 and making up half a dozen places is a thing of the past? Or that bullying the AI out of the way is a tactic we’ll no longer get away with?

The GT Sophy project has been in the works for six years, during which GT Sophy was trained using deep reinforcement learning techniques. The development team didn’t build an AI that cheats the game, as many game AIs do; they developed an AI that learns by performing human-like driving tasks. The reinforcement learning technique “rewards” or “penalizes” the AI for specific actions it takes inside its environment.
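For readers curious what that loop looks like in practice, here is a minimal, purely illustrative sketch of an agent being rewarded and penalized inside a toy environment. The environment, reward values, and random policy below are invented for this article; they are not Sony AI's actual training code.

```python
# Minimal, illustrative reinforcement-learning loop (not Sony AI's actual code).
# The agent observes the car's state, picks throttle/steering actions,
# and is rewarded for forward progress while being penalized for going off track.

import random

class ToyTrackEnv:
    """Hypothetical stand-in for the game environment."""
    def reset(self):
        return {"speed": 0.0, "distance_from_centerline": 0.0}

    def step(self, action):
        # Reward forward progress, penalize running wide.
        progress = action["throttle"] * 0.1
        off_track = abs(action["steering"]) > 0.9
        reward = progress - (1.0 if off_track else 0.0)
        next_state = {"speed": progress * 100,
                      "distance_from_centerline": action["steering"]}
        return next_state, reward, off_track

def random_policy(state):
    # A real agent would use a trained neural network here.
    return {"throttle": random.random(), "steering": random.uniform(-1, 1)}

env = ToyTrackEnv()
state = env.reset()
for _ in range(100):
    action = random_policy(state)
    state, reward, done = env.step(action)
    # A learning algorithm would use (state, action, reward) to update the policy.
    if done:
        state = env.reset()
```

In the real project, the random policy above would be a deep neural network whose weights are gradually updated from those (state, action, reward) samples.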

GT Sophy's learning covered three areas of driving skill:


Race Car Control - Operating at the edge
GT Sophy acquired a deep understanding of car dynamics, racing lines, and precision manoeuvres to conquer challenging tracks.

Racing Tactics - Split-second decision making
GT Sophy showed mastery of tactics including slipstream passing, crossover passes and even some defensive manoeuvres such as blocking.

Racing Etiquette - Essential for fair play
GT Sophy learned to conform to highly refined, yet imprecisely specified, racing etiquette rules including avoiding at-fault collisions and respecting opponent driving lines.
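As a rough, hypothetical illustration of how those three skill areas could be folded into a single training signal, here is a toy reward-shaping function. The terms and weights are invented for this article and are far simpler than whatever the real training used.

```python
# Hypothetical composite reward covering the three skill areas above
# (car control, tactics, etiquette). Weights are illustrative only.

def lap_reward(progress_m, off_track, overtook_opponent, caused_collision, blocked_fairly):
    reward = 0.0
    # Race car control: reward forward progress, penalize leaving the track.
    reward += 0.01 * progress_m
    if off_track:
        reward -= 5.0
    # Racing tactics: reward completed passes.
    if overtook_opponent:
        reward += 2.0
    # Racing etiquette: penalize at-fault contact, allow fair defensive moves.
    if caused_collision:
        reward -= 10.0
    if blocked_fairly:
        reward += 0.5
    return reward

# Example: a clean lap segment with one overtake and no incidents.
print(lap_reward(progress_m=350.0, off_track=False,
                 overtook_opponent=True, caused_collision=False,
                 blocked_fairly=False))  # -> 5.5
```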

On two occasions last year, GT Sophy competed against some of the best Gran Turismo drivers.

In July 2021, GT Sophy set the fastest lap in all three races and won two of the three. Yet the human GT drivers still took the overall win on points.

In the October 2021 rematch, GT Sophy won all three races, with Sophy agents also finishing second in each race.

GT Sophy has achieved a major milestone; however, PDI, SIE, and Sony AI will continue to develop and upgrade the AI’s capabilities, as well as explore how it can be implemented in the Gran Turismo series in the future.

Sony AI is also exploring new partnerships to enhance the gaming experience for players through AI.

What are your thoughts about GT Sophy? Is it about time AI in racing games took the next step?
About author
Damian Reed
PC geek, gamer, content creator, and passionate sim racer.
I live life a 1/4 mile at a time, it takes me ages to get anywhere!

Comments

Since I work for a big corp myself, I know how marketing and sales teams like to oversell buzzwords like AI, blockchain, and whatever is popular in any given year. Quite often "AI" just means some dumb robotics behind it. Having the same physics as the human player is also an important part.
Besides that, as others mentioned, making the AI super perfect is actually the exact opposite of how real people drive. Even the best drivers like Max and Lewis make mistakes.
 
The PS1, PS2 and PS3 versions of GT (I quit at GT6) have had laughable, so-called AI. There wasn't a version of GT where I didn't get drilled from behind in T1.

The blindest, lamest opponents, six editions in a row. So excuse me if I don't buy a PS5, a PS wheel, and the $70 game just to get rammed some more.
 
I have read the research paper out of interest. One thing to note is that Sophy was given the cars' coordinates plus the track boundaries and centerline (X, Y, Z). So my understanding is that it was constantly aware of its own exact track position plus the positions of the other cars on track. To me this is a bit of "cheating", as normal human drivers (with cockpit view) have to rely on just the screen visuals (with mirror(s) and a possibly inaccurate radar) plus sounds from the game, so the data available to humans is much more limited than what was given to Sophy.
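To make that concrete, here is a toy sketch (my own invention, not taken from the paper) of the kind of structured observation I mean: exact own position, centerline samples, and opponent coordinates packed into one feature vector instead of pixels.

```python
# Hypothetical observation builder: the agent gets exact positions rather than
# pixels, which is the "extra information" I'm pointing out above.

import numpy as np

def build_observation(own_xyz, own_velocity, opponent_xyz_list, centerline_points):
    """Flatten precise game-state data into a single feature vector."""
    nearest_centerline = centerline_points[:10]             # next 10 centerline samples
    opponents = np.array(opponent_xyz_list[:4]).flatten()   # up to 4 nearby cars
    return np.concatenate([
        np.asarray(own_xyz),        # exact (X, Y, Z) of own car
        np.asarray(own_velocity),   # velocity vector
        nearest_centerline.flatten(),
        opponents,
    ])

obs = build_observation(
    own_xyz=[120.3, 4.1, -300.7],
    own_velocity=[52.0, 0.0, -3.1],
    opponent_xyz_list=[[118.0, 4.1, -310.0]] * 4,
    centerline_points=np.zeros((10, 3)),
)
print(obs.shape)  # (48,)
```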

Also, someone asked about the reaction times. I think they tested with different "reaction times" to make it operate at the level of race drivers (200-300 ms reaction times), and that did not affect the lap times much.
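A simple way to emulate that kind of reaction time is to delay the agent's actions by a couple of control steps. A toy sketch, assuming a 10 Hz control loop so that 2 steps is roughly 200 ms (my assumption, not from the paper):

```python
# Hypothetical action-delay buffer to emulate human-like reaction time.
# At 10 Hz control, a 2-step delay corresponds to roughly 200 ms.

from collections import deque

class DelayedAgent:
    def __init__(self, agent_fn, delay_steps=2, idle_action=None):
        self.agent_fn = agent_fn
        self.buffer = deque([idle_action] * delay_steps, maxlen=delay_steps + 1)

    def act(self, observation):
        self.buffer.append(self.agent_fn(observation))
        return self.buffer.popleft()   # action chosen delay_steps ago

# Usage: wrap any policy so its decisions take effect 2 steps (~200 ms) later.
agent = DelayedAgent(lambda obs: {"throttle": 1.0, "steering": 0.0},
                     delay_steps=2, idle_action={"throttle": 0.0, "steering": 0.0})
print(agent.act("obs_t0"))  # idle action (still "reacting")
print(agent.act("obs_t1"))  # idle action
print(agent.act("obs_t2"))  # the decision made at t0 now takes effect
```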

I think the killer feature of the AI is the consistency, as the AI can run 1000 laps on the exact same fastest optimal racing line. When the AI has data on the exact car position and distance, it can e.g. brake at exactly the same optimal point over and over again.

But I am sure that as the models and modeling get better, the AI/ML guys can make the AI rely on just the same data as a human (i.e. just the visuals plus possibly sounds). Would really love to see how such a system works in e.g. ACC or a similar game with more advanced physics and track evolution during the race with changing conditions.
 
HAHAHAHAHAHAHAHAAAAA!!!!
So for four generations of console Gran Turismo has had the worst AI in the history of racing games, and now all of a sudden they are going to have the best. Right. I'd rather trust every fart the morning after a night of crazy drinking than this marketing hoopla.
 
Don’t see the point in this. I saw some of the driving and a questionable use of the track limits. I prefer racing against humans myself, although that can be testing at times too, but as long as everyone actually wants to race, I’d take that all day over this Sophy BS.
 
HAHAHAHAHAHAHAHAAAAA!!!!
So for four generations of console Gran Turismo has had the worst AI in the history of racing games, and now all of a sudden they are going to have the best. Right. I'd rather trust every fart the morning after a night of crazy drinking than this marketing hoopla.
I agree with you that Gran Turismo has some of the worst (slowest) AI in the genre, but the deep learning method of Sophy was published as an article in Nature. I don’t think you will get your results published in Nature if your paper just consists of some marketing buzzwords.
 
Would really love to see how such a system works in e.g. ACC or a similar game with more advanced physics and track evolution during the race with changing conditions
As far as I can see on the web page, they are open to collaboration.
Soon every sim will have a Sophy :thumbsup: (Or maybe we will see Sophy on the road in our future IRL cars.)
"In addition to Gran Turismo, Sony AI is also eager to explore new partnerships to enhance the gaming experience for players through AI"
 
Wake me up when Sophy can handle a 1967 F1 car on the Nordschleife in the rain, without reading the physics data on the fly. Sophy is cheating, because she can read all the game data, including the physics. She's not dependent on visual cues or FFB effects on the wheel, like a real driver is.
 
The AI in question will not be implemented in GT7. In any case, not before one or two years, and probably never.
I love racedepartment, the best site I know for car simulation, but you have to be careful with the tone of your press articles; you shouldn't become an advertising vehicle for Sony or others.
The article doesn't say that the AI will be implemented in GT, but pretty much everyone is going to believe it will be, right?
I live in France, where the main video game site has become a thing that does more disguised advertising than quality articles. They may be making money, but they are losing their soul, and many readers are leaving for somewhere else. They just published an article similar to yours; only at the very end of an article that takes five minutes to read do they add that the AI will not be implemented in GT7, or maybe only in a year or two, and almost no one will get that far. Many people, especially young people, do not read whole articles; any journalist knows that.
Frankly, don't be like them; you are better than that.
PS: maybe you didn't do it on purpose; if so, just be careful to be more specific. That's my opinion.
 
The real issue is whether the AI can run on the machine it's on (alongside the physics, gfx, etc.), or whether this is all going to be server-side. As others have mentioned, it needs to be scalable for different player abilities. Perfect AI is no good; I remember racing in GTR2 and I'm certain the AI pace would ebb and flow slightly.
 
I agree with you that Gran Turismo has some of the worst (slowest) AI in the genre, but the deep learning method of Sophy was published as an article in Nature. I don’t think you will get your results published in Nature if your paper just consists of some marketing buzzwords.
Except that the revolutionary AI will not be implemented in GT7, nanana!
 
It is coming to GT7 at a later stage as an update.
Not for several years, from what I read, and it's a maybe; it's not sure at all. So, Chinese proverb: "before you buy an arcade car video game because you are told that it may become a real simulation with a superb revolutionary AI one day, wait for that day to come, young Padawan", word of Lao Tzu in 451 BC, or of Master Yoda a long time ago, in a galaxy far, far away.
 
Except that the revolutionary AI will not be implemented in GT7, nanana!

Not for several years, from what I read, and it's a maybe; it's not sure at all. So, Chinese proverb: "before you buy an arcade car video game because you are told that it may become a real simulation with a superb revolutionary AI one day, wait for that day to come, young Padawan", word of Lao Tzu in 451 BC, or of Master Yoda a long time ago, in a galaxy far, far away.
I think you missed the point here. Whether or not Sophy is implemented in GT7 is not important. What is important is that they managed to create an AI that uses deep reinforcement learning to drive a car around a track faster than humans can. The AI is not programmed, like the current AI, to follow a racing line around a circuit; it has actually LEARNED to race around a track. That is a totally different approach, and apparently they pulled it off, because Sophy defeated top-class sim racers. I don't care if it is implemented in GT7 or not, because GT7 is not my type of game.
 
Wake me up when Sophy can handle a 1967 F1 car on the Nordschleife in the rain, without reading the physics data on the fly. Sophy is cheating, because she can read all the game data, including the physics. She's not dependent on visual cues or FFB effects on the wheel, like a real driver is.
The point is that with deep reinforcement learning you don't need the physics data. The Sophy AI just learns to race around a track and learns which behaviour is good (faster) and bad (slower). Whether the car (a 1967 F1) or the conditions are difficult or not does not matter for the results. After a lot of laps (they mentioned 10,000) Sophy has learned, for instance, that after turn 2, at a certain point on the track, she can push the throttle to 57.3% in first gear to achieve maximum acceleration without losing control of the car. Do you think a human can beat that?
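In toy code (numbers invented, not from the paper), the trained policy ends up being little more than a function that maps the car's state at that point on the track to near-optimal control outputs:

```python
# Toy illustration (invented numbers): after training, the policy is just a
# learned mapping from track position/state to control outputs.

def trained_policy(track_position_m, speed_kmh, gear):
    # A real policy is a neural network; this hard-coded rule only mimics
    # the kind of precise, repeatable output described above.
    if 412.0 <= track_position_m <= 418.0 and gear == 1:
        return {"throttle": 0.573, "brake": 0.0, "steering": 0.12}
    return {"throttle": 1.0, "brake": 0.0, "steering": 0.0}

print(trained_policy(track_position_m=415.0, speed_kmh=92.0, gear=1))
# {'throttle': 0.573, 'brake': 0.0, 'steering': 0.12}
```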
 
The point is that with deep reinforcement learning you don't need the physics data. The Sophy AI just learns to race around a track and learns which behaviour is good (faster) and bad (slower). Whether the car (a 1967 F1) or the conditions are difficult or not does not matter for the results. After a lot of laps (they mentioned 10,000) Sophy has learned, for instance, that after turn 2, at a certain point on the track, she can push the throttle to 57.3% in first gear to achieve maximum acceleration without losing control of the car. Do you think a human can beat that?
Exactly, the AI can take that turn 2 the same way thousands of times, will learn by doing it, and will make small adjustments while driving (like a real driver would). So one can expect that as the models get better, the AI will also adapt to changing conditions.

Still, I would like to see the AI drop the training wheels (exact telemetry plus coordinates for all cars) and rely only on visual data. Then it would be on the same level as humans. And very probably that will happen in the future.
 
Sophy has learned, for instance, that after turn 2, at a certain point on the track, she can push the throttle to 57.3% in first gear to achieve maximum acceleration without losing control of the car. Do you think a human can beat that?
But then surely that defeats the whole point of AI in a computer game; it needs to emulate a human, not be robotically perfect?
 
As an AI and robotics graduate, I think most of you have misunderstood how a reinforcement learning agent actually drives, and how this is a MASSIVE improvement over previous AI systems in racing games:
All of them cheat right now, and not relying only on visual information helps with computational power and reliability: the agent doesn't have to remember that there's someone in its blind spot at each update and try to estimate where they are.
Machine learning works in a statistical way: if there is even just one pseudorandom parameter like tire state, each lap will be different, since the agent will make slightly different decisions (even just 0.001% of steering) that over time accumulate into a different position on the track, and could even cause a mistake. If you add humans into the mix, the chances of it behaving differently increase a lot.
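A tiny toy demo of that cumulative effect, assuming Gaussian noise on the steering decision (numbers invented, just to show the drift):

```python
# Toy demo: tiny per-step randomness in the steering decision accumulates
# into a visibly different track position by the end of a lap.

import random

def lap_offset(steering_noise_std=0.00001, steps_per_lap=6000, seed=None):
    rng = random.Random(seed)
    offset = 0.0
    for _ in range(steps_per_lap):
        offset += rng.gauss(0.0, steering_noise_std)  # slightly different decision each step
    return offset

print(lap_offset(seed=1))  # small net offset, different for each seed/"lap"
print(lap_offset(seed=2))  # -> slightly different positioning lap to lap
```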
The fact that it works on many tracks/conditions/cars without actually knowing how the car behaves until it starts driving is astonishing. It's probably trained over all of the cars (though tuning puts the possible combinations in the billions), and most ML algorithms suffer from "catastrophic forgetting" (they only remember the most recent training), so it's quite an achievement to have it work on all of the cars.
Personally, I sent them my CV immediately after seeing the trailer; it's a really interesting project and I hope more studios go that route in the future, even if development costs may be huge.
 
As an AI and robotics graduate, I think most of you have misunderstood how a reinforcement learning agent actually drives, and how this is a MASSIVE improvement over previous AI systems in racing games:
All of them cheat right now, and not relying only on visual information helps with computational power and reliability: the agent doesn't have to remember that there's someone in its blind spot at each update and try to estimate where they are.
Machine learning works in a statistical way: if there is even just one pseudorandom parameter like tire state, each lap will be different, since the agent will make slightly different decisions (even just 0.001% of steering) that over time accumulate into a different position on the track, and could even cause a mistake. If you add humans into the mix, the chances of it behaving differently increase a lot.
The fact that it works on many tracks/conditions/cars without actually knowing how the car behaves until it starts driving is astonishing. It's probably trained over all of the cars (though tuning puts the possible combinations in the billions), and most ML algorithms suffer from "catastrophic forgetting" (they only remember the most recent training), so it's quite an achievement to have it work on all of the cars.
Personally, I sent them my CV immediately after seeing the trailer; it's a really interesting project and I hope more studios go that route in the future, even if development costs may be huge.
I think you'll find that even though most of us may not understand the specifics, we do understand that it's a major step. The issue I and many others have is that there's too much talk of "perfect lines" and 0.00000001 reaction times, and that's not going to simulate reality at all. Robots don't race cars. Now I know it means no more laborious waypoint setting for an AIW file, as the AI will learn the racing line on its own (and, probably more importantly, the AI will finally use different lines for different classes), but how do you factor in the human element? No human is perfect. But ultimately it's a very exciting time for AI development, even for the unwashed and unlearned ;).
 
