I've done a bit more work with the Poisson-Logit Model (hereafter PLM), so I thought I would make a post about it that I can reference in the future.

How It Works

As the name implies, it has two stages: the Poisson stage and the logit stage.

The Poisson stage breaks the game into two parts: the scoring at either end. Using goals scored and conceded by each team, total home goals (to estimate a home-ground advantage), and each team's schedule, it outputs a scoring factor and a defending factor for every team. When two teams meet, the average goals a team will score is its scoring factor times the opponent's defending factor, times the home-ground advantage multiplier if it is playing at home.
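The expected-goals calculation can be sketched as follows. All of the numbers and variable names here are illustrative assumptions of mine, not fitted values from the model:

```python
import math

# Hypothetical factors of the kind the Poisson stage would fit
# (illustrative values only, not real fitted parameters).
home_attack  = 1.4   # home team's scoring factor
home_defence = 0.8   # home team's defending factor
away_attack  = 1.1
away_defence = 0.9
home_adv     = 1.3   # home-ground advantage multiplier

# Average goals for each side when these two teams meet.
mu_home = home_attack * away_defence * home_adv
mu_away = away_attack * home_defence

def poisson_pmf(k, mu):
    """Probability of scoring exactly k goals under a Poisson(mu) model."""
    return math.exp(-mu) * mu ** k / math.factorial(k)

print(mu_home, mu_away)
```

From these two averages, a Poisson distribution at each end gives the probability of any particular number of goals for either side.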

The logit stage uses historical data to estimate how often each result (home win, draw, away win) has occurred when two teams meet with the average goals determined by the Poisson stage. So the inputs are the scoring and defending factors for each team as well as the home-ground multiplier, and the outputs are a home-win probability, an away-win probability, and a draw probability. For occasions where scorelines are of interest, I also run a different version which has the same inputs and outputs the probability of every scoreline from 0-0 all the way to 10-9.
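For reference, here is how a grid of scoreline probabilities can be collapsed into the three result probabilities. Note that this sketch assumes independent Poisson goals at each end, which is only the naive baseline; the logit stage described above replaces this aggregation with historically calibrated frequencies:

```python
import math

def poisson_pmf(k, mu):
    return math.exp(-mu) * mu ** k / math.factorial(k)

def outcome_probs(mu_home, mu_away, max_goals=10):
    """Sum independent-Poisson scoreline probabilities into
    home-win, draw, and away-win probabilities (naive baseline)."""
    p_home = p_draw = p_away = 0.0
    for h in range(max_goals + 1):
        for a in range(max_goals + 1):
            p = poisson_pmf(h, mu_home) * poisson_pmf(a, mu_away)
            if h > a:
                p_home += p
            elif h == a:
                p_draw += p
            else:
                p_away += p
    return p_home, p_draw, p_away
```

Truncating the grid at 10 goals per side leaves only a negligible amount of probability mass unaccounted for at realistic scoring rates.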

How Has It Done?

Past results with it have been quite good. I have run it on eight seasons of data from several leagues. One important finding confirmed my suspicions: it worked a lot better when I didn't use it for matches where the home team had played fewer than 10 matches. In other words, I waited 10 weeks into the season for there to be some kind of sample size, so there wouldn't be crazy data or some of the small-sample problems we've seen here (see the Manchester derby predictions).

The first test compares the number of home wins, draws and away wins the model predicted with what actually happened. Here are the results:

| Type | Pred. | Actual |
|------|-------|--------|
| Home | 16456 | 16479  |
| Draw | 9565  | 9444   |
| Away | 8756  | 8853   |

As you can see, it's off by a bit but quite close in percentage terms. Using the chi-square goodness-of-fit test, I got a p-value of 0.267, which gives no evidence of a poor fit.
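The test above is easy to reproduce from the table. With three categories there are two degrees of freedom, and for two degrees of freedom the chi-square p-value has the closed form exp(-x/2):

```python
import math

pred   = [16456, 9565, 8756]   # model's predicted counts
actual = [16479, 9444, 8853]   # observed counts

# Chi-square statistic: sum of (observed - expected)^2 / expected.
chi2 = sum((o - e) ** 2 / e for o, e in zip(actual, pred))

# With k = 2 degrees of freedom, the chi-square survival function
# simplifies to exp(-x / 2), so no stats library is needed.
p_value = math.exp(-chi2 / 2)
print(round(p_value, 3))  # → 0.267
```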

For the next test, I broke the matches up into groups by how often the model predicted the home team to win (I did the same for draws and away wins and the results were similar). For each group I then compared the predictions to how often the home team actually won, to see how accurate the model's predictions were. The results there were quite good. Here's a graph with them. Instead of using bar charts I'm using a line graph, as I think it's easier to see where the (slight) differences are:

The groups are in 5% bands. For example, there were only 26 matches where the home team was predicted to have less than a 5% chance. Those predictions averaged out to 2.96%, but the home team actually won 11.5% of the time, so at the bottom of the graph the actual line sits a fair bit above the prediction line. It comes back toward the prediction line because the model did better in the 5% to 10% band.

Other than the very bottom group, the actual percentages were within a couple of standard deviations of the predicted percentages, usually a lot less. Running a similar goodness-of-fit test on this, I got a p-value of 0.695. This model appears to fit the data incredibly well.
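A grouping of this kind can be sketched as below. The function name and bin width are my own choices for illustration, not taken from the model:

```python
from collections import defaultdict

def calibration_bins(predictions, outcomes, width=0.05):
    """Group matches into bands of predicted home-win probability and
    compare the average prediction to the actual win rate in each band."""
    n_bins = round(1 / width)
    # band -> [sum of predictions, number of wins, number of matches]
    bins = defaultdict(lambda: [0.0, 0, 0])
    for p, won in zip(predictions, outcomes):
        band = min(int(p / width), n_bins - 1)  # clamp p == 1.0 into top band
        bins[band][0] += p
        bins[band][1] += won
        bins[band][2] += 1
    # band -> (average predicted probability, actual win rate, match count)
    return {b: (s / n, w / n, n) for b, (s, w, n) in sorted(bins.items())}
```

Each band then yields an (average predicted, actual win rate, match count) triple, which is exactly what the two lines on the graph plot against each other.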

Is This Model Perfect?

If it were, then quite frankly I would move to Vegas and be rich within a season. Unfortunately for me, it's not that simple. I am extremely happy with the model and the predictions it makes, but it certainly has limitations. The problem is that it doesn't take into account outside factors, most notably injuries. With a huge data set, these things cancel each other out. Say you have a match where the model predicts the home side will win 40% of the time. That 40% is really an average: sometimes the home team will have a star player injured and really be more like 30% to win; other times they will be healthy while the visiting side has injury or suspension issues, and in those spots they might be 50%. Over the whole set the injuries even out, so the home team actually wins 40% of the time, but looking at individual matches the predictions can be off even if on aggregate they are perfect or close to it.

Conclusion

I am quite happy with the model. It is just as simple as the Poisson model and makes predictions that are as accurate as possible given that simplicity. I feel quite comfortable using it as the base model, so for now I will stick with it. Nonetheless, it should be thought of as a baseline and not the be-all and end-all. Some thought is certainly needed on how to adjust for factors outside the model, such as injuries, suspensions, and even things like tactics if some teams match up particularly well against a certain style of opponent.

## Tuesday, October 20, 2009
