Throughout the 2019 season I kept track of a stat called Expected Points (xPoints). xPoints is the number of points we would expect a driver to earn in a race based on their average track position during that race. The intuition behind xPoints is that crashes, mechanical failures, slow pit stops, and other "bad luck" don't reflect a driver's true skill, yet those sources of bad luck are baked into traditional stats like average finishing position and the overall points table. A driver's true skill is better measured by how they ran throughout the entirety of a race, not just by how they finished, or whether they finished at all.
So, does measuring xPoints add to our IndyCar knowledge?
In short, yes. One hallmark of xPoints is that it tracks the number of points we'd expect a driver to earn in a race and over the course of a season. If a driver is outperforming their xPoints, they've been fairly lucky, and their results are probably better than what we should expect from them going forward. And if a driver is underperforming their xPoints, they've been unlucky, and we'd expect their results to revert toward the mean and pick up in the future.
A natural testing ground for this is full season results. I split the 2019 season into two parts: races 1-8 are the first half of the season and races 9-17 are the second half. I then measured the correlation (how closely related two variables are) between xPoints in the first half of the season and actual points in the second half of the season. For good measure I compared this to the correlation between actual points in the first half of the season and actual points in the second half of the season for each driver. If first half actual points are a better predictor of second half points than xPoints is, then maybe xPoints isn't worth measuring.
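For anyone who wants to replicate the split-half test, here is a minimal sketch of the calculation. It assumes a hypothetical per-driver, per-race table; the file name and column names are illustrative, not the original data set.

```python
# Minimal sketch of the split-half correlation test, assuming a hypothetical
# CSV ("results_2019.csv") with one row per driver per race and columns
# "driver", "race", "xpoints", and "points". These names are placeholders.
import pandas as pd

results = pd.read_csv("results_2019.csv")

# Races 1-8 form the first half of the season, races 9-17 the second half.
first_half = results[results["race"] <= 8]
second_half = results[results["race"] >= 9]

# Sum each driver's xPoints and actual points within each half.
first = first_half.groupby("driver")[["xpoints", "points"]].sum()
second = second_half.groupby("driver")[["points"]].sum().rename(
    columns={"points": "second_half_points"}
)
combined = first.join(second, how="inner")

# Pearson correlation of each first half stat with second half actual points.
print(combined["xpoints"].corr(combined["second_half_points"]))  # xPoints
print(combined["points"].corr(combined["second_half_points"]))   # actual points
```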
So, what were the results?
- Correlation between first half xPoints and second half actual points: .84
- Correlation between first half actual points and second half actual points: .75
The correlation between first half xPoints and second half actual points is higher than the correlation between first half actual points and second half actual points. Over 70% of the variation in second half actual points can be explained by first half xPoints, while only 56% can be explained by first half actual points. Put simply, a driver's xPoints is, on its own, a better predictor of how they will perform in the second half of the season than their actual points total is.
This analysis covered only full-time drivers and only the 2019 season, so the sample size is small, but the results were statistically significant. For additional insight I also checked the correlations between a few other first half variables and second half actual points:
- First half Average Track Position (ATP) and second half actual points: -.80
- First half Average Finish Position (AFP) and second half actual points: -.65
These correlations are negative because a lower average track/finish position is associated with better race performances, but it's really the absolute value of the number we are concerned with here. A driver's first half ATP explains 64% of the variation in their second half actual points and is also a better predictor of second half actual points than first half actual points is. First half AFP explains just 42% of the variation in second half actual points and really isn't that useful to us.
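The "variation explained" percentages above are simply the squares of the correlations; squaring also drops the sign, which is why only the magnitude matters for the negative ATP and AFP numbers. A quick sketch of that arithmetic:

```python
# Squaring each correlation gives the share of variance in second half
# actual points it explains (the sign drops out when squaring).
correlations = {
    "first half xPoints": 0.84,
    "first half actual points": 0.75,
    "first half ATP": -0.80,
    "first half AFP": -0.65,
}
for name, r in correlations.items():
    print(f"{name}: r = {r:+.2f}, variance explained = {r**2:.0%}")
# xPoints ~71%, actual points ~56%, ATP 64%, AFP ~42%
```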
When it comes to evaluating how drivers are doing once this season gets underway, be sure to come back and follow the xPoints stat posted on our page here.
Photo: Joe Skibinski/IndyCar