Road to Rugby World Cup 2019: Rugby scores decomposition

With the Rugby World Cup 2019 Japan starting on 20th September, I thought I’d take a look at the tournament from a few different statistical angles. For this post I’ll be looking at the problem: given a rugby score, how can we decompose it into possible combinations of tries, conversions, penalties and dropped goals?


I have a dataframe of results for almost all professional and international rugby union scores since the 2012/13 season, more than 10,000 matches. This is nice in terms of ‘breadth’ of the sample – however in terms of depth it’s a bit lacking! For each result I only have home/away team and home/away score, for example:

27/07/2019  New Zealand  South Africa  16  16

I was curious: is it possible to decompose the results into valid combinations of scoring methods? Then, perhaps as a second stage, estimate the probability of occurrence of each combination for a given score? I’ll be looking at the first question in this post, and the second will be next up in the series!

I’ve never seen a match of rugby before! What are the scoring methods you’re referring to?

TRY (5 points): awarded when an attacking player grounds the ball in the area at the end of the pitch (“in-goal area”).

CONVERSION (2 points): the team who has scored a try immediately gets to kick at goal for another 2 points before kick-off restart.

PENALTY GOAL (3 points): when an infringement is made, a penalty may be awarded to the other team who may then choose to take a penalty kick at goal.

DROP GOAL (3 points): a player may, at any time in play, drop-kick the ball over and between the posts.

PENALTY TRY (7 points): if a foul has stopped the attacking team from scoring then a penalty try is awarded, worth a full 7 points. Note: these happen fairly rarely so I include this just for completeness but don’t refer to penalty tries hereon.

I took the videos above from World Rugby Laws of the Game, which is a great resource if you want to learn more about the laws.

Grouping the Elementary Scoring Methods

  • 3 points: penalty goal or drop goal.
  • 5 points: unconverted try (i.e. a try has been scored, but the conversion did not score the extra 2 points)
  • 7 points: converted try (i.e. a try has been scored, and the conversion succeeded in scoring the extra 2 points).

Starting Off the Analysis: Scores 0-7

Score   Penalties or drop goals (3pt ea)   Tries (5pt ea)   Tries (7pt ea)
0       0                                  0                0
3       1                                  0                0
5       0                                  1                0
6       2                                  0                0
7       0                                  0                1
  • 0 is obviously a valid score.
  • 3, 5, 7 are obtained from elementary scores only.
  • 6 is obtained only from two penalties.
  • 1, 2 and 4 are not valid scores, as they cannot be written as sums of 3s, 5s and 7s.

Onwards! Scores 8-10


The only way forward is to score 3, 5, or 7 points! So new valid scores/combinations are {previous scores} + {3,5,7}. In the table above:

  • 8 is the row for 5 points but plus 1 penalty/drop goal.
  • 9 is the row for 6 points but plus 1 penalty/drop goal.
  • 10 is either a converted try plus 1 penalty/drop goal OR an unconverted try plus another unconverted try (hence two rows).


With the general rule established, it is fairly easy to script it:

for each i in 8:150
    if i - 3 in validscores: copy row(s) of i - 3, increment the penalty/drop-goal count by 1 and set score to i
    if i - 5 in validscores: copy row(s) of i - 5, increment the unconverted-try count by 1 and set score to i
    if i - 7 in validscores: copy row(s) of i - 7, increment the converted-try count by 1 and set score to i

I wrote a script in R, which can be found on my github repo along with the results for scores up to 150 points.
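As a hedged sketch (not the repo script itself; the function and column names below are my own), the rule can be implemented in R as a bottom-up table build:

```r
# Build all decompositions of each score into (penalties/drop goals,
# unconverted tries, converted tries), working upwards from score 0.
decompose_scores <- function(max_score = 150) {
  # combos[[i + 1]] holds a matrix with one row per valid combination for score i
  combos <- vector("list", max_score + 1)
  combos[[1]] <- matrix(0, nrow = 1, ncol = 3,
                        dimnames = list(NULL, c("pen_dg", "try5", "try7")))
  steps <- c(pen_dg = 3, try5 = 5, try7 = 7)
  for (i in 1:max_score) {
    rows <- NULL
    for (s in names(steps)) {
      prev <- i - steps[[s]]
      if (prev >= 0 && !is.null(combos[[prev + 1]])) {
        # copy the rows for the previous score and add one more of this method
        new_rows <- combos[[prev + 1]]
        new_rows[, s] <- new_rows[, s] + 1
        rows <- rbind(rows, new_rows)
      }
    }
    # the same combination can be reached via different orders, so de-duplicate
    if (!is.null(rows)) combos[[i + 1]] <- unique(rows)
  }
  combos
}
```

For example, `nrow(decompose_scores()[[17]])` is 2, matching the two combinations for a score of 16; scores 1, 2 and 4 stay `NULL`.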

What about the New Zealand vs South Africa match mentioned at the start?


Both teams scored 16 points, and both got there through one converted try and three penalties – the first of the two possible combinations for 16 points.

Are all scoring combinations equally likely?

No – for a given score, not all scoring combinations are equally likely. Even if the three scoring methods occurred with equal probability (1/3 each), they contribute different numbers of points, so different combinations involve different numbers of scoring events and therefore have different likelihoods.

The table below shows all of the possible scoring combinations relating to the score of 48 points.


It’s pretty unlikely that a team would ‘rack up’ so many points through 16 penalties/drop goals without scoring any tries! Equally, it’s unlikely a team would get there through just scoring tries alone. Intuitively, it would seem that it would be through a mixture that a team would be most likely to get there.
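For a quick check of how many combinations a given score admits, a brute-force count can enumerate the possible numbers of 3- and 5-point scores (this helper is my own illustration, not from the repo script):

```r
# Count the ways to write a score as a*3 + b*5 + c*7 with a, b, c >= 0.
count_combos <- function(score) {
  n <- 0
  for (a in 0:(score %/% 3)) {
    for (b in 0:(score %/% 5)) {
      remainder <- score - 3 * a - 5 * b
      # whatever is left over must be made up of 7-point converted tries
      if (remainder >= 0 && remainder %% 7 == 0) n <- n + 1
    }
  }
  n
}
count_combos(48)  # 15 distinct combinations for 48 points
```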

If we know the relative likelihood of occurrence of the three scoring methods to each other then we can calculate the probability of scoring combinations for a given score. That’s what we’ll be looking at in the next post!

Simulating the Six Nations 2019 Rugby Tournament in R: Final Round Update

In an earlier post I blogged how I had made a Monte Carlo simulation model of the Six Nations Rugby Tournament.  With the final round of the tournament approaching this Saturday, I decided to do a quick update.

Who can win at this stage?
Wales, England, or Ireland can still win.  Scotland, France and Italy do not have enough points at this stage to win.  Quite a good article from the London Evening Standard explains the detail.  The current league table is below.

Actual standings after round 4 out of 5

Who is playing who in the final round?

What is the simulation model based upon?
A random sample from a probability mass function for tries, conversions and penalties, which is combined with a pwin for each team, calculated based on the RugbyPass Index for both home and away teams.  If you want to know more, feel free to look at my previous post (linked above) or the R script (linked at the bottom).

What does the simulated league table look like after the final round?
Running a simulation for the final three games, and adding these results on to the actual points each team has achieved after round 4, we get the distribution of league points shown below.

Apologies: a box plot can be a bit odd for discrete data such as this.  Please forgive me!  If I had the time I would reform this into something like a stacked histogram which would be more accurate 🙂

It should be noted that, whilst the ‘standard’ scoring scheme applies for these final matches, i.e.

  • 4pt for a win, 2pt for a draw, 0pt for a loss.
  • plus 1 bonus pt for scoring 4 tries or more, regardless of win/lose/draw.
  • plus 1 bonus pt if a team has lost but by 7 game points or less.

…there are also 3 additional points awarded if a team wins the ‘Grand Slam’ (wins all of their matches).  Wales are the only remaining candidate for this: they have won every match so far, and if they win their final match the extra points ensure they win the tournament.

This rule avoids the situation where a team loses one match but picks up maximum bonus points elsewhere, finishing with more points overall than a team that has won every match without earning any bonus points.
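As a sketch, the standard per-match scheme plus bonus points can be written as a small helper (the function and argument names are my own; the Grand Slam +3 would be added at tournament level, not per match):

```r
# League points for one team in one match under the standard Six Nations scheme.
league_points <- function(points_for, points_against, tries_for) {
  # 4 for a win, 2 for a draw, 0 for a loss
  pts <- if (points_for > points_against) 4 else if (points_for == points_against) 2 else 0
  # bonus point for scoring 4 tries or more, regardless of result
  if (tries_for >= 4) pts <- pts + 1
  # bonus point for losing by 7 match points or fewer
  if (points_for < points_against && points_against - points_for <= 7) pts <- pts + 1
  pts
}
league_points(24, 27, 4)  # narrow loss with 4 tries: 0 + 1 + 1 = 2
```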

So then, what are the final standings likely to look like?
After having run a simulation of the final round, the results are below.


Wales are “firm favourites” to win the tournament.  England have a “reasonable chance”.  Ireland retain an “outside chance”.

How does all of this compare to expectations before the start of the tournament?

Ireland, England, and Wales were predicted to be in close contention.  Wales have outperformed the prediction (mainly due to beating England).  England have outperformed the prediction (mainly due to beating Ireland, and to amassing a lot of bonus points).  Ireland have underperformed against the prediction (mainly due to losing to England, and then narrowly missing out on bonus points: they scored only 3 tries against Scotland, and lost to England by 12 points).

Scotland beat Italy with a bonus-point victory, but have only managed to pick up one bonus point in their other games.  Picking up points against England in their final match will be tough, so they are likely to underperform.  France will likely beat Italy and perform roughly as expected.  Italy look firmly on course for the predicted bottom-place finish again this year (although imho they could be a team to watch in their final match, playing a presently disorientated France at home in Rome).

It has been an interesting journey for me simulating sports tournaments over the past few months.  Monte Carlo approaches can help you see the wood for the trees in complex situations, which has applications not just in sport but in industry as well.

Maybe this has inspired you to have a go yourself?  If so, the code for this blog post is available via Git here.  Although if you wish to have a play or to adapt the code, the original version is much cleaner, available here.  Good luck!

Simulating the Six Nations 2019 Rugby Tournament in R

I really like running simulation models before sporting events because they can give you so much more depth of understanding compared to the ‘raw’ odds you get from the media or bookmakers.  Yes, we might hear that a team has a “30% chance of winning the tournament”.  But there might be another strong team in the tournament that is 25% likely to win.  Then there’s the plucky underdog, only 2% likely to win the tournament but with a fair chance of causing at least one team an upset.  Not to mention home/away advantage, competition points, structure, etc.

That’s where a simulation can provide a lot of extra oomph for a little extra effort.  If we take the ‘raw’ odds and add in some volatility, we can understand the distribution of outcomes.

So, with the Six Nations 2019 officially starting this weekend (France play Wales on Friday), I thought I’d best run a simulation model.  I apologise in advance that it’s a little hastily put together: I have not had a huge amount of time and wanted to complete a simulation before the competition starts in a few hours’ time.  Nevertheless, I hope it can provide some insight into both the tournament and, more generally, how to simulate a sporting competition!

Rating Data – How good is a team?  Given that, how likely is team A to beat team B?

I used rating data from the Rugby Pass Index to calculate the probability of team A beating team B for every fixture in the Six Nations.  This was done using a formula that considers the difference in ratings between the two teams and also gives a home advantage boost to the home team.
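The exact formula isn’t reproduced in this post, but a common shape for such rating-based models is a logistic curve on the rating difference plus a home-advantage boost; the function below, including the `home_adv` and `scale` values, is purely an illustrative assumption of mine:

```r
# Illustrative sketch: probability that the home team beats the away team,
# from a logistic curve on the rating difference plus a home-advantage term.
# home_adv and scale are made-up values, not those used in the blog's model.
p_win_home <- function(home_rating, away_rating, home_adv = 4, scale = 10) {
  1 / (1 + exp(-((home_rating - away_rating + home_adv) / scale)))
}
```

With equal ratings, `p_win_home(85, 85)` comes out above 0.5 thanks to the home boost; the draw case is ignored in this sketch.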


The favourite team doesn’t always win – how much leeway for luck is there?

We Need a Distribution for Points Scored in a Rugby Match

For those less familiar with rugby, a random selection of the most recent Pro 14 results gives a feel for the volatility in scores.

In a game like soccer you get a lot of draws because goals don’t happen very often; in rugby, by contrast, it’s not unusual to see 60pts or more scored per team.  So what does a distribution of points look like?  How much leeway is there for a ‘better’ team to ‘choke’ and not score the tries, conversions and penalties needed for the winning score, and how much leeway is there for the ‘underdog’ team to ‘get lucky’ and nail theirs?

If we can define this then combined with the ‘raw’ probability of winning we can take random samples of it during multiple simulations to understand the distribution and volatility of outcomes.

For the general probability distribution of tries scored, conversions given a try has been scored, and penalties scored, I used a distribution fitted to the 2017/18 season of the Pro 14 league, because I already had it to hand (and I hope to document it in another post).  Given more time, I would use Six Nations data, as the distribution might be slightly different in high-pressure international matches compared to the Pro 14 league.  Nevertheless, I think the Pro 14 data is adequate for a quick model.

Now that we have the probability of winning, and can randomly sample from the distribution of points scored, we are in business.

For each match simulation, a random sample of the probability distribution of tries, conversions, and penalties was taken and essentially weighted by the probability of winning for each team.  The competition points (win 4pt, draw 2pt, lose 0pt; bonus point for 4 tries or more; bonus point for losing by 7pt or less) were then calculated for each team in that match.  For each tournament simulation, the results were recorded and the tournament was simulated again, until we had 10,000 simulated tournament results.
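The per-match step can be sketched as follows.  This is not the blog’s actual script: the Poisson rates and conversion probability are placeholder assumptions of mine (the model used pmfs fitted to 2017/18 Pro 14 data), and the pwin weighting between the two teams is omitted for brevity:

```r
# Sketch of one team's score in one simulated match: try and penalty counts
# drawn from illustrative Poisson pmfs, conversions as Bernoulli per try.
simulate_match_score <- function(mean_tries = 3, mean_pens = 3, p_conv = 0.7) {
  tries <- rpois(1, mean_tries)      # number of tries scored
  convs <- rbinom(1, tries, p_conv)  # successful conversions, given the tries
  pens  <- rpois(1, mean_pens)       # penalties/drop goals
  c(score = 5 * tries + 2 * convs + 3 * pens, tries = tries)
}
```

Calling `simulate_match_score()` once per team per match, 10,000 times over the fixture list, gives the simulated tournaments described above.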

Finally, we can plot and analyse the results.

No surprise really that Ireland had the most tournament wins.  The order follows the Rugby Pass Index rating.  For a competition like the Six Nations, where the biggest factor is whom you play at home (and hence that all-important home advantage), this is to be expected.  For something like a World Cup, where group stages are involved, it would be a lot less straightforward.

In the Six Nations, winning all of your matches gets you the Grand Slam in addition to the Six Nations tournament victory, bringing further glory to your nation!  It looks like a fairly large ask this year.  For Ireland, the clear tournament favourites, the odds of a Grand Slam are somewhat reduced by the probability of being beaten by any one of the other five nations along the way – in particular England or Wales.

Before we look at the box plots from the simulation, I’ve included this summary of what a box plot is from Wikipedia.  The box shows the 25th–75th percentile outcomes, and the thick line the 50th percentile (aka the median).  The whiskers either side of the box usually end up somewhere in the ballpark of the 10th and 90th percentiles.  Circles are then used to show occurrences outside the whiskers.

The order remains when we look at the distribution of points – again no surprise.  We can now see that Italy are clear contenders for the wooden spoon.

Italy, by far the worst team, never really seem to get fewer than 5 points.  This could point to a deficiency in the model, perhaps in the distributions – which I will look to address at a later date.

Finally, what does the distribution of standings look like?  Based on the simulation results, Ireland will reliably be in the top 3.  You might be tempted to think that they also have a fair chance of finishing bottom, but this is an artefact of the inter-quartile range spanning 2pt (the same behaviour is observed for Wales and Scotland); I may revise the visualisation in a later update.  England ought reliably to finish 2nd or 3rd, with a fair possibility of winning or of finishing 4th.  Wales, a similar story but likely one place lower.  Scotland and France are likely to battle it out for 4th/5th.  Poor Italy once again are very much expected to win the ‘wooden spoon’.

Well, I hope you enjoyed this as much as I did.  The simulation code is available via Git here.

Insey Winsey Spider Game Monte-Carlo Simulation

Time is such a precious commodity especially with a family.  So when your daughter asks to play a board game… you think ‘how long will this take’.  With most board games, one is able to roughly estimate how long the game will take… the Shopping Game, well that can be expected to take 10 minutes with 2 players. The Memory Game, well that can be expected to take 15 minutes with 2 players.  Whilst there is of course some variation, I have found most (age 4-6) games to have a fairly symmetrical and tight distribution of time taken to complete the game.

One exception, though, has been the Insey Winsey Spider Game… where saying yes to a game has derailed many a schedule or upset a child, because it can very frequently take much longer than expected… one time it’s all over in 3 moves, the next it takes 20.  So, let’s try out some Monte-Carlo simulation modelling in R to understand the distribution of moves better!

As a player in the game, you are provided with a waterspout and a spider.  The spider starts at the bottom of the waterspout.  Each turn:

  1. you roll the six-sided die and progress your spider upwards by the number of squares awarded by the die.
  2. you spin a spinner to determine the weather: sunshine or rain.  Rain takes up approx 30% of the spinner’s space and when it does happen, your spider gets washed out back to the bottom of the waterspout.

The winner is the first to exit the top of the spout.   We’re interested in how many turns this can take!

In terms of doing some Monte-Carlo simulation modelling in R, our strategy will be to model a turn (the roll of the six-sided die and the spin of the spinner); then model an entire game – keeping on going until our spider has reached more than 10 squares from the start; and record the number of moves taken – as that’s what we’re curious about!

six_sided_number_die_roll <- function() {
  return(sample(x = 1:6, size = 1, replace = TRUE))
}

Using the sample function in R, we are able to take a random sample of the sequence 1:6 with replacement.

weather_spin <- function() {
  return(sample(x = c('raining', 'sunshine'), size = 1, replace = TRUE, prob = c(0.3, 0.7)))
}

For the spinner, we wish to take a random sample of ‘raining’ and ‘sunshine’, again with replacement, but this time specifying uneven odds of 30% and 70% respectively through the prob argument.

Now that we have created two functions that enable us to model a turn, let’s bring these together in a full game simulation.

spider_game_number_run <- function() {
  spider_position <- 0
  i <- 0
  while (spider_position <= 10) {
    spider_position <- spider_position + six_sided_number_die_roll()
    if (weather_spin() == 'raining') spider_position <- 0
    i <- i + 1
  }
  # return the number of turns taken to finish this game
  return(i)
}

We set up two counters: spider_position which is going to count how many squares up the waterspout the spider is; and i which is the number of turns taken.  With both of these counters set to zero, we are then ready to start our while loop which will continue while the spider’s position is less than or equal to 10 squares.  During that time, the spider’s position is incremented by one die roll, and one weather spin is made.  If the weather spin value is raining then the spider’s position is reset to 0.  Once this while loop completes, the value returned is i, the number of turns taken to reach completion for that simulated game.

Now we can simulate an entire game by calling the function spider_game_number_run() and it will return the number of turns taken to complete.  What we need to do now is run this simulation many times to understand the distribution of turns needed to complete a game.

spider_game_number_sim <- function(k) {
  j <- 1
  agg <- NULL
  while (j < k + 1) {
    agg[j] <- spider_game_number_run()
    j <- j + 1
  }
  # return the number of turns it took to finish each of the k games
  return(agg)
}
Our function takes the argument k and runs the game simulation k times, each time adding the number of turns taken to the agg vector, which it returns.  With this in hand, we finally want to simulate the game a good number of times and display the results.

n <- 100000  # number of games to simulate (pick to taste)
# base R has no mode function, so define a simple one
getmode <- function(v) {
  uv <- unique(v)
  uv[which.max(tabulate(match(v, uv)))]
}
start_time <- Sys.time()
a <- spider_game_number_sim(n)
elapsed_time <- Sys.time() - start_time
a_mode <- getmode(a)
a_mean <- mean(a)
hist(a, freq = FALSE, breaks = 1:max(a),
     main = paste("Insey Winsey Spider Game (number version),\n", format(n, scientific = FALSE, big.mark = ","), "simulation results"),
     xlab = paste("x, number of turns to complete game\nE(x)=", round(a_mean, 2), " Mode=", a_mode))

All of this code (plus a little bit more) can be found on the GitHub repository for this blog.  The code above completed in around 50s on a modest low-powered Intel laptop.  The output was the following histogram.

So, time for a sense check: the waterspout is 10 squares tall, and if one is particularly lucky the spider can clear it in two goes (6+6, 5+6 or 6+5, with two sunshine spins).  The minimum is therefore two goes to complete the game – this is accurately represented on our graph.
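As a quick aside, the probability of that minimal two-turn game follows directly from the rules (my own back-of-envelope computation, not from the blog’s script):

```r
# Finishing in two turns needs a combined roll of 11 or more (5+6, 6+5 or
# 6+6: 3 of the 36 equally likely pairs) AND sunshine on both spins, since a
# rain spin resets the spider even on the move that would clear the spout.
p_two_turns <- (3 / 36) * 0.7 * 0.7
p_two_turns  # about 0.041, i.e. roughly 4% of games
```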

The mode number of turns is 3.  OK, but perhaps this is not so helpful given the large spread of possible turns.  More meaningful perhaps is the expected number of turns, which came out at 8.15.  But what fate are we likely to face (in terms of parental time management) if a game runs past that?  Well, the distribution has a long tail, so it could well end up taking many more turns.  That long tail was what I felt was happening, and through a quick’n’dirty Monte-Carlo simulation in R, I was able to thoroughly explore and visualise this behaviour!

The beauty of the simulation approach is that you can arrive at answers quickly.  However it would also probably be possible to do this mathematically and to propose a distribution deterministically – maybe that can be a part II to this post one day in the future.  For now, I think I have to go as I’m being called to play a game…



a new horizon!

Be not afeard: the isle is full of noises,
Sounds, and sweet airs, that give delight, and hurt not.
Sometimes a thousand twangling instruments
Will hum about mine ears; and sometimes voices,
That, if I then had wak’d after long sleep,
Will make me sleep again: and then, in dreaming,
The clouds methought would open and show riches
Ready to drop upon me; that, when I wak’d,
I cried to dream again.

William Shakespeare, The Tempest, Act 3, Scene II