For every league that I can think of, these are currently credited to the goaltenders who were in net at the time of the game-winning goal (the N+1th goal, when the losing team scores N goals).
In some leagues, the published statistics do not make it easy to see whether a goaltender won or lost in overtime - recent KHL statistics have this limitation. Here, I've included both in the "T" column.
Some leagues separate OTL and SOL - I combine these here. If I have separate shootout statistics, I keep them separate (in another column).
(2*W + T + OTL) / [2*(W + L + T + OTL)]
For instance, Henrik Lundqvist had a record of 33-24-5 in 2013-14. This is 71 points earned in 62 decisions, or a winning percentage of (2*33+5)/[2*(33+24+5)] = 57.3%.
Note that for seasons with "loser points", this will result in the "average" goaltender having a points percentage of greater than 50%.
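For the code-inclined, here is a minimal Python sketch of the points-percentage calculation (the function and variable names are mine, and the 5 in Lundqvist's record is treated as the combined T/OTL column):

```python
def points_percentage(wins, losses, ties, otl):
    """Points earned divided by points available: (2W + T + OTL) / [2(W + L + T + OTL)]."""
    points = 2 * wins + ties + otl
    decisions = wins + losses + ties + otl
    return points / (2 * decisions)

# Henrik Lundqvist, 2013-14: 33-24-5 (the 5 in the combined T/OTL column)
print(round(points_percentage(33, 24, 0, 5), 3))  # 0.573
```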
GAA = (GA)*(60)/(MIN)
The intent of the goals-against average is to put seasons of different length on an even footing, reporting the number of goals the goaltender gives up (on average) in a sixty-minute game.
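A quick Python sketch of the same formula, with numbers that are purely illustrative:

```python
def goals_against_average(goals_against, minutes):
    """GAA = GA * 60 / MIN: goals allowed per sixty minutes of play."""
    return goals_against * 60 / minutes

# Illustrative only: 130 goals against over 3720 minutes (62 full games)
print(f"{goals_against_average(130, 3720):.2f}")  # 2.10
```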
Pre-1982 NHL shot (and save percentage) information comes from a second-hand Excel workbook done by Roger Brewer, using data from the Hockey Summary Project (a tremendous endeavor by both). It is not considered official National Hockey League dogma.
SVPCT = (SA - GA)/(SA)
S/60 = (SA)*(60)/(MIN)
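In code, the two formulas above look like this (illustrative numbers only):

```python
def save_percentage(shots_against, goals_against):
    """SVPCT = (SA - GA) / SA"""
    return (shots_against - goals_against) / shots_against

def shots_per_sixty(shots_against, minutes):
    """S/60 = SA * 60 / MIN"""
    return shots_against * 60 / minutes

# Illustrative only: 1800 shots against, 150 goals against, 3720 minutes
print(round(save_percentage(1800, 150), 3))   # 0.917
print(round(shots_per_sixty(1800, 3720), 1))  # 29.0
```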
Note that I vary the definition of a "save" here to include any attempt that does not result in a goal (including shots that go wide). If you prefer, think of this as shootout goals prevented.
During an average 30-shot game, how many additional saves will this goaltender make above and beyond what a league average goaltender would make? This is just the goaltender's save percentage minus the league average save percentage, multiplied by 30. I chose 30 as a representation of a "typical" game across eras, instead of using each goaltender's actual shots faced per game. Otherwise, goaltenders on teams that allow a large number of shots would have their totals magnified (for good or for bad).
(In the calculation of league-average save percentage, I remove the goaltender in question from the totals.)
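As a sketch, assuming the goaltender's and the league-average save percentages have already been computed (with the goaltender removed from the league totals):

```python
def saves_above_average_per_30(sv_pct, league_sv_pct, shots_per_game=30):
    """Extra saves per 'typical' 30-shot game, relative to a league-average goaltender."""
    return (sv_pct - league_sv_pct) * shots_per_game

# Illustrative: a .920 goaltender in a .905 league
print(round(saves_above_average_per_30(0.920, 0.905), 2))  # 0.45
```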
For instance, suppose that a league-average goaltender had a save percentage of 90%, and faced 100 shots on goal. The truly league average goaltender would allow 10 goals on these 100 shots. Suppose that our goaltender instead allowed 8 goals. Assuming a binomial distribution (I note that this may not be fair), we can calculate how many standard deviations above (or below) average this goaltender's performance was:
ZSCORE = ((Saves) - (Shots * League Average SV%)) / SQRT (Shots * League Average SV% * (1 - League Average SV%))
Or, in this case:
ZSCORE = (92 - 90) / SQRT (100 * 0.9 * 0.1) = 0.67, indicating that the goaltender was above average but not in a statistically significant fashion.
Truly remarkable performances (good and bad) start at about 2 standard deviations away from average, and the larger the number, the more significant.
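Here is the z-score calculation as a Python sketch, reproducing the worked example above:

```python
from math import sqrt

def save_pct_zscore(saves, shots, league_avg_sv_pct):
    """Standard deviations above (or below) a league-average goaltender,
    assuming the shots follow a binomial distribution."""
    expected_saves = shots * league_avg_sv_pct
    std_dev = sqrt(shots * league_avg_sv_pct * (1 - league_avg_sv_pct))
    return (saves - expected_saves) / std_dev

print(round(save_pct_zscore(92, 100, 0.90), 2))  # 0.67
```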
(In the calculation of league-average save percentage, I remove the goaltender in question from the totals.)
A league-average goaltender would allow (1 - League Average SV%) * (Shots Faced) goals, and so:
GD = ((1 - League Average SV%) * (Shots Faced)) - (Goals Against)
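Or, as a sketch in code (again with the goaltender removed from the league-average calculation, and with numbers that are purely illustrative):

```python
def goals_saved_above_average(shots_faced, goals_against, league_avg_sv_pct):
    """GD = goals a league-average goaltender would allow, minus actual goals against."""
    expected_goals = (1 - league_avg_sv_pct) * shots_faced
    return expected_goals - goals_against

# Illustrative: 1800 shots, 150 goals against, in a .905 league
print(round(goals_saved_above_average(1800, 150, 0.905), 1))  # 21.0
```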
This statistic, Goals Above Replacement, attempts to quantify that value by comparing how many goals a goaltender prevented above a replacement-level goaltender. On this site, "replacement level" represents the best goaltender that a team could find on short notice with a small resource expenditure (either the top goaltender on their minor league team, or the top free agent available). I need to analyze this more rigorously at some point, but here, replacement level is defined as 1.5% below league average (so if the league average goaltender is at 90%, then replacement level is defined as 88.5%). And thus:
GAR = ((1 - (League Average SV% - 0.015)) * (Shots Faced)) - (Goals Against)
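The same calculation in code, with the 1.5% gap exposed as a parameter (numbers illustrative):

```python
def goals_above_replacement(shots_faced, goals_against, league_avg_sv_pct,
                            replacement_gap=0.015):
    """GAR, with replacement level defined as 1.5% below the league-average save percentage."""
    replacement_sv_pct = league_avg_sv_pct - replacement_gap
    return (1 - replacement_sv_pct) * shots_faced - goals_against

# Illustrative: 1800 shots, 150 goals against, in a .905 league
print(round(goals_above_replacement(1800, 150, 0.905), 1))  # 48.0
```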
SNW% = (Goals Scored^2) / ((Goals Scored^2) + (Goals Allowed^2))
Or in this case:
SNW% = ((1 - League Average SV%) * Shots Allowed)^2 / (Goals Against^2 + ((1 - League Average SV%) * Shots Allowed)^2)
Note that, unlike the goaltender's actual winning percentage, this metric guarantees that a league-average goaltender scores out with a 50% winning percentage.
For instance, suppose that a goaltender had an (actual) record of 7-3-0, with a support-neutral winning percentage of 60%. They had ten decisions, and so their support-neutral wins would be 6 (and support-neutral losses would be 4).
On some level, this suggests that things not measurable by save percentage (either team offense, or team defense, or biases in save percentage) gave the goaltender an "extra" win.
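A sketch of both calculations in code (the function names are mine):

```python
def support_neutral_win_pct(shots_faced, goals_against, league_avg_sv_pct):
    """Pythagorean winning percentage, with league-average goals on the same shots
    standing in for the goaltender's offensive support."""
    expected_goals_for = (1 - league_avg_sv_pct) * shots_faced
    return expected_goals_for ** 2 / (goals_against ** 2 + expected_goals_for ** 2)

def support_neutral_record(decisions, sn_win_pct):
    """Split a goaltender's decisions into support-neutral wins and losses."""
    sn_wins = decisions * sn_win_pct
    return sn_wins, decisions - sn_wins

# The example above: ten decisions at a 60% support-neutral winning percentage
print(support_neutral_record(10, 0.60))  # (6.0, 4.0)
```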
For each opponent in the league, a "benchmark save percentage" is developed, based upon their non-empty net shooting percentage. If we make the (admittedly simplifying) assumption that the shots faced in a single game represent a binomial distribution, then we can estimate (for each game) how many standard deviations above (or below) average the goaltender's actual performance was.
The variation metric is then the standard deviation of the above metric. For instance, if a goaltender plays five games in a season, and in each game he is 0.5 standard deviations above average, then his variation score would be zero (since his performance does not vary).
A low variation represents a more-consistent goaltender, while a high variation represents a less-consistent goaltender. Note that a goaltender can be terrible but still be consistent (so long as his performances are consistently terrible).
Over the course of a full season, a goaltender of average consistency will have a variation of about one (1.0). For seasons with very few games played, variation will be artificially low (to take an extreme example, a goaltender with one game played in a season will have a variation score of 0, since all of their games are identical).
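As a minimal sketch, the variation score is just the standard deviation of the per-game z-scores (I use the population standard deviation here, and the z-scores are assumed to be computed as described above):

```python
from statistics import pstdev

def variation(game_zscores):
    """Standard deviation of the single-game z-scores: low = consistent, high = inconsistent."""
    return pstdev(game_zscores)

# A goaltender who is 0.5 standard deviations above average in every game never varies
print(variation([0.5, 0.5, 0.5, 0.5, 0.5]))              # 0.0
print(round(variation([2.1, -1.3, 0.4, -0.2, 1.0]), 2))  # 1.14
```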
For each opponent in the league, a "benchmark save percentage" is developed, based upon their non-empty net shooting percentage. If we make the (admittedly simplifying) assumption that the shots faced in a single game represent a binomial distribution, then we can estimate (for each game) how many standard deviations above (or below) average the goaltender's actual performance was.
Performances within 0.5 standard deviations of average are grouped as "average", with performances below that (and above that) grouped as "below average" (and "above average"). Season totals are weighted by shots faced in each game.
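A sketch of the grouping logic (the treatment of performances at exactly 0.5 standard deviations is my own choice here, and the weighting simply sums shots faced per bucket):

```python
def bucket_performances(game_results):
    """Group (zscore, shots_faced) pairs into below-average / average / above-average,
    weighting each game by the shots faced in it."""
    totals = {"below average": 0, "average": 0, "above average": 0}
    for zscore, shots in game_results:
        if zscore < -0.5:
            label = "below average"
        elif zscore > 0.5:
            label = "above average"
        else:
            label = "average"
        totals[label] += shots
    return totals

# Illustrative: three games
print(bucket_performances([(1.2, 34), (-0.1, 28), (-0.8, 30)]))
# {'below average': 30, 'average': 28, 'above average': 34}
```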
I developed an estimate of each team's strength - using their entire (regular season plus postseason) data, starting with each team's goal differential (GF minus GA), then adjusting for schedule (each team's average opponent's goal differential). This is an iterative process, but it does converge to a metric that estimates how many goals better (or worse) a team is than average during the season. The top teams in the league are typically about +1, and the bottom teams in the league are typically about -1 (although some of the early-1990s expansion teams hovered around -2). I also develop separate strength scores at home and on the road, just in case (for instance) a team only plays their goaltender on the road. For those of you who remember the "Norris Power Index" that I published in the mid-to-late 1990s, this is that algorithm.
Once these strength ratings have been developed, I calculate SoS as the minutes-weighted average strength of opponent.
SoS balances to zero in the regular season, but should be positive in the postseason - since a team is facing above-average opponents (by definition).
Using time played to weight the strength metrics appears to inflate the metrics of backup goaltenders. Why? Because they're more likely to enter the game in relief against a strong opponent (and conversely, the starter is more likely to get a short night).
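A rough sketch of the idea in code - the exact adjustment and convergence details are simplified here, and the data structures (per-game goal differentials and opponent lists) are mine:

```python
def team_strengths(goal_diff_per_game, schedules, iterations=25):
    """Iteratively adjust each team's per-game goal differential by the average
    strength of the opponents it faced (a simplified schedule adjustment)."""
    strength = dict(goal_diff_per_game)
    for _ in range(iterations):
        strength = {
            team: goal_diff_per_game[team]
                  + sum(strength[opp] for opp in opponents) / len(opponents)
            for team, opponents in schedules.items()
        }
    return strength

def strength_of_schedule(opponent_minutes, strength):
    """Minutes-weighted average strength of the opponents a goaltender faced."""
    total = sum(opponent_minutes.values())
    return sum(strength[opp] * mins for opp, mins in opponent_minutes.items()) / total
```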
To answer the question, I took each team's (regular season plus postseason) non-empty net shooting percentage. For an individual goaltender, I take the (shots-weighted) average opponent shooting percentage, and then subtract the result from one. This puts the result on the same scale as save percentage, so that one can compare it to the goaltender's actual performance.
(Note that this is not the average save percentage of a goaltender's opponents.)
OpS% balances to the league-wide save percentage over the course of the season.
Using shots faced to weight the strength metrics appears to inflate the metrics of backup goaltenders. Why? Because they're more likely to enter the game in relief against a strong opponent (and conversely, the starter is more likely to get a short night). I still prefer it this way, since it's a measure of what actually happened (if Glenn Healy faces 500 shots in a year, but 100 of those are against juggernauts in relief, then I want to know that when I'm evaluating his performance).
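Here is a minimal sketch of the OpS% calculation (the team shooting percentages and shot counts are purely illustrative):

```python
def opponent_adjusted_benchmark(opponent_shots, team_shooting_pct):
    """OpS%: one minus the shots-weighted average shooting percentage of the
    opposing teams, putting the benchmark on the same scale as save percentage."""
    total_shots = sum(opponent_shots.values())
    weighted_shooting = sum(team_shooting_pct[opp] * shots
                            for opp, shots in opponent_shots.items()) / total_shots
    return 1 - weighted_shooting

# Illustrative: 600 shots from a team shooting 9.5%, 400 from a team shooting 8.0%
print(round(opponent_adjusted_benchmark({"A": 600, "B": 400},
                                        {"A": 0.095, "B": 0.080}), 3))  # 0.911
```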
Lastly, I should note here that, while save percentage is considered a decent statistic for evaluating a goaltender's individual performance (certainly better than goals-against average or wins and losses), it is by no means a perfect statistic. I will write more about this at a later date, but for now, please keep that in mind.