Advanced Playoff Boxscores for 2014: The three-day weekend

"Into a daybreak that's wondrously clear

I rise” – Maya Angelou

Much like a good meal or a good wine, each playoff run – hell, each playoff series – has a unique flow and flavor that must be experienced to be understood.

For me as a fan and as a writer, there is a process to that discovery. First, I must spend an inordinate amount of time poring over all the regular season data to make the playoff picks. Then I take the pulse of the room through the playoff contest. The games themselves follow – along with the cacophony of sound and excitement that we all love – and that is what typically drives most of the narratives. But we are not common media pundits; we are scientists.

We want to get at the nuts and bolts of what actually happened. The truth – not as a subjective feeling, but as an objective fact – is what we pursue, and for that we want the numbers.

To that end, we keep track of any and all relevant stats and work out all the Wins Produced numbers shortly after the games have been played. This allows me to do proper analysis, build my model, and come up with my picks for subsequent rounds. 

The math, you see, provides the depth to the story being told by our eyes.

Case in point: the odds have shifted.

Let's get to those boxscores and recaps, right?

A few quick notes before we go:

  1. We are using Wins Produced numbers. I am manually compiling them for the playoffs and we've compiled them for the season. There will be some drift in the numbers as we go along (don’t worry, it’s a function of moving averages), but it’s good enough for horseshoes, hand grenades, and tactical nuclear weapons.
  2. The Boxscore contains:
    • Basic information: player, team, Game ID (who, what, and when)
    • Classic Stats: points, shots, offensive rebounds, defensive rebounds, steals, blocks, and assists (because the classics are classics for a reason).
    • Simple spins on classics: % of team minutes (player minutes as % of total minutes available), position (average player position)
    • Possession and play stats: offensive plays = FGA + 0.434 * FTA + TO; usage of offensive plays = % of the team's offensive plays used by the player while he's on the floor
    • All the classic offensive efficiency stats (and some slightly modded ones): effective field goal % = (FG + 0.5 * 3P) / FGA, true shooting % = PTS / (2 * (FGA + 0.44 * FTA)), points per shot = PTS / FGA, points per offensive play = PTS / offensive plays (a quick code sketch of these formulas follows this list)
    • Do-it-yourself offensive point margin stats:
      • Offensive point margin: this is the marginal value created by the player over the offensive plays he used. The calculation is: OPM = (points per play for the player - league-average points per play for his position) * offensive plays used by the player.
      • Defensive point margin: this is the marginal value surrendered by the player over the offensive plays his counterpart used. The calculation is: DPM = (points per play for the opponent - league-average points per play for the position) * offensive plays used by the opponent. I'm doing this one by positional averages per game.
      • Combined margin: this is just OPM - DPM
    • Rebounding rates: % of rebounds on offense, % of rebounds on defense.
    • Points over Par (PoP). This is our points version of Wins Produced: it tells you the direct effect of a player's production on the game's point margin. This is the key number, boys and girls. In the interest of keeping things scoped to each series and game, I am using Points over Par relative to the players on the court in each game. What this means is that each player is not judged against the average playoff production for his position, but rather against his opponents. This guarantees that, on a game level, PoP maps to the actual point margin.
    • I’ve classified performances on a sliding scale:
      • Hall of Famer: 12.5 PoP48 and Above. Submit tape of performance to Hall of Fame voters.
      • Superstar: 5 to 12.5 PoP48. LeBron on a regular night.
      • Star: 2.5 to 5 PoP48. A good night.
      • Starter: 0 to 2.5 PoP48. Positive contributions to the outcome.
      • Bench: -2.5 to 0 PoP48. Not worthy of a starting role.
      • Scrub: -5 to -2.5 PoP48. Play only in case of emergency.
      • Traitor: less than -5 PoP48. You're wearing the wrong uniform.
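For anyone who wants to recompute these columns, here is a minimal Python sketch of the formulas as defined above. The function names and argument conventions are just illustrative shorthand (not anything pulled from my actual spreadsheet), and the league/positional averages are inputs you would supply yourself:

```python
# Sketch of the derived boxscore columns described above.
# League/positional averages are passed in as plain arguments; nothing here
# is taken from the actual playoff spreadsheet.

def offensive_plays(fga, fta, to):
    # Offensive plays = FGA + 0.434 * FTA + TO
    return fga + 0.434 * fta + to

def efg_pct(fgm, fg3m, fga):
    # Effective field goal % = (FG + 0.5 * 3P) / FGA
    return (fgm + 0.5 * fg3m) / fga if fga else 0.0

def ts_pct(pts, fga, fta):
    # True shooting % = PTS / (2 * (FGA + 0.44 * FTA))
    denom = 2 * (fga + 0.44 * fta)
    return pts / denom if denom else 0.0

def points_per_play(pts, plays):
    # Points per offensive play = PTS / offensive plays
    return pts / plays if plays else 0.0

def offensive_point_margin(player_ppp, league_ppp_at_position, player_plays):
    # OPM = (player points per play - league average at his position) * plays used
    return (player_ppp - league_ppp_at_position) * player_plays

def defensive_point_margin(opp_ppp, league_ppp_at_position, opp_plays):
    # DPM = (opponent points per play - league average at the position) * opponent plays
    return (opp_ppp - league_ppp_at_position) * opp_plays

def combined_margin(opm, dpm):
    # Combined margin = OPM - DPM
    return opm - dpm

def performance_tier(pop48):
    # The sliding scale from the list above (PoP48 = Points over Par per 48 minutes)
    if pop48 >= 12.5:
        return "Hall of Famer"
    if pop48 >= 5:
        return "Superstar"
    if pop48 >= 2.5:
        return "Star"
    if pop48 >= 0:
        return "Starter"
    if pop48 >= -2.5:
        return "Bench"
    if pop48 >= -5:
        return "Scrub"
    return "Traitor"
```

For example, a 20-point night on 15 FGA, 6 FTA, and 2 turnovers works out to 15 + 0.434 * 6 + 2 ≈ 19.6 offensive plays, or roughly 1.02 points per play.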

Boxscores first:

 

Now for the recaps.

Spurs-Mavs

On Sunday, the Mavs gave it their best shot but it wasn't quite enough. Jae Crowder having that kind of game is not going to happen again. Also, Kawhi Leonard was not his usual fantastic self. The Spurs, however, meandered around for most of the game and then completely destroyed the Mavs in the last part of the 4th quarter. Tony Parker looked as good as he has since Game 1 of the Finals last year. This is my way of saying it's about to go poorly for the Mavs.

Initial Prediction: Spurs in 4 (99.9% Win)

MVP so Far: Vive la France!

Updated Prediction: Spurs in 4 (99.9% Win). It's inevitable

Thunder-Grizzlies

This series has had all the awesome so far. In Game 1, the Grizzlies erased a 22-point first-half deficit but faltered in the end; in Game 2, KD forced OT by making the craziest shot of the playoffs so far, but Memphis ground out the win. Here's a formula for you: Thunder + Grizzlies = Teh Awesome. More please!

Initial Prediction: OKC in 7 (59.6% Win)

MVP so Far: Conley has a case but looked shook at the end of Game 2. Ibaka has a stronger case, but it's Scotty Brooks for Memphis. 

Updated Prediction: OKC in 7 (53.4% Win). Best looking 1st round series so far. Thank you, basketball gods.

Clippers-Warriors

The only people who can cover Blake Griffin in this series are the refs, and in Game 1 they did. Get Blake on the floor with no Bogut and it looks downright terrible for the Warriors. Anyone remember Blake laying down a fatality versus the Warriors this season?

Yeah. I still think this is gonna stay ugly for the Warriors.

Initial Prediction: Clips in 5 (79.3% Win)

MVP so Far: Blake. He might dunk on me if I don't give it to him.

Updated Prediction: Clips in 7 (71.5% Win). The model doesn't account for bad refereeing, but I do. I think the Warriors are in danger of being swept like gentlemen (in five) after last night's drubbing.

Rockets-Blazers

I told you that the Rockets would give at least one game away. LMA, Lillard, and Batum were beasting. Harden missed the winners to close regulation. It also feels like Stotts outcoached McHale in this one. Houston is still the better team, but they're sloppy like that, and Portland is good enough to capitalize.

Initial Prediction: Rockets in 5 (88.2% Win) (I personally liked HOU in 6)

MVP so Far: Batum

Updated Prediction: Rockets in 6 (73.4% Win). See, the model came around.

Heat-Bobcats

The Bobcats are really outmatched here, particularly since it looks like Playoff Dwyane Wade has arrived. As mentioned, Wade was excellent for the Heat, along with James Jones and Birdman. Meanwhile, LeBron was just OK and Bosh was bad (hey! Playoff Bosh is here too!). Other than McBob, it was a rough night for Charlotte. I still think they win one, but that's it.

Initial Prediction: Heat in 5 (97% Win)

MVP so Far: @DwyaneWade

Updated Prediction: Heat in 4 (98% Win). I still like five; the model now prefers four.

Pacers-Hawks

We called this one, didn't we? The Pacers look terrible. In particular, Roy Hibbert has been excruciatingly bad recently. The Hawks are better than their record, particularly against the Pacers. 

Some fun facts:

The Pacers were outscoring teams by 9.9 points per game in the 1st half of the season. That fell by 10.7 points, to -0.8, in the 2nd half – the 2nd biggest drop ever (ever as in the history of the NBA). The Pacers also had the 11th biggest drop in win % from the 1st to the 2nd half of a season in NBA history.

Combine that with the Game 1 results and those holding Hawks' betting slips are totally smiling right now.

Initial Prediction: Pacers in 7 (53.5% Win)

MVP so Far: Trillsap was outstanding in Game 1.

Updated Prediction: Hawks in 6 (66.4% Win). Yeah, I said the Hawks. Is everyone prepared for the possibility of the Hawks or Wizards in the ECF?

Raptors-Nets

F##k Father Time. For one night it was 2007 again, and the Nets roster looked like world-beaters. The Raptors' guards couldn't handle Joe Johnson at all, and DeMar, Terrence, and Kyle all managed to look like playoff rookies. All in all, the Raptors basically looked like playoff noobs giving home court away.

We did kind of expect that, though. I also expect some bounce back from the Raptors and a long, fun series (even if I don't know what to do with myself trying to root against Paul Pierce).

Initial Prediction: Nets in 6 (51.4% Win).

MVP so Far: Joe Johnson

Updated Prediction: Nets in 6 (69% Win). This'll jump the other way if the Raptors win Game 2 (as they should).

Bulls-Wizards

Nobody expects the Professor. I was really surprised at the amount of Andre Miller and Nene we saw from the Wizards in Game 1; the track record of minutes did not foreshadow that. What Miller did with those minutes is not unexpected, as he's always been excellent. I also kinda forgot about Ariza (the Wiz MVP) playing in two Finals and Gortat in one. Translation? I may have misread the playoff experience edge; the Wizards were poised and ready.

It didn't help that the Bulls will periodically crap the bed on offense, and this was that game.

Initial Prediction: Bulls in 5 (90% Win)

MVP so Far: Trevor Ariza (the only guy with a ring in the series)

Updated Prediction: Bulls in 7 (75.4% Win). I may actually have to go back and redo the lineups for this series (Game 2 will tell).

-Arturo

(Note: I was supposed to put this up on 4/22 but ran out of time. It'll go up later with the updated series odds after the 4/23 games. See below. The games will come a bit later.)

 

Thanks for the analyses!

However, I was a little bit surprised to see that the value of Tim Duncan in Game 1 against the Mavs was much lower than the value of guys like Danny Green. I am normally a huge fan of WP48, but in this case I really wonder whether the value of a player who depends so much on other players (e.g., teammates creating open looks for a three-point expert) is indeed higher than the value of a player who had to create his own offense (with a 60% eFG%) in the low post due to a great (and largely unexpected) defense by the opponent.

I know this is a general complaint against WP48, but in this case – since I had seen the game – it was so obvious to me.
> ...the value of Tim Duncan in game 1...

I haven't seen a formal analysis about how much variance there is in WP, but sampling is going to be an issue for any single game.
NBA please more of the most interesting series on NATIONAL FUCKING TV, Raps and NETS. Come on Silver!

Arturo,
You think Duncan retires if Spurs win the finals?
TD,

WP evaluates player production, not player ability. If you want to try to understand why one player is more/less productive than another, you need to take a more holistic look (and no, PER, +/- variants, and Win Shares are not going to give you what I'm talking about).

You also have to account for the positional average. For example, in this game (yes, small sample size alert!), Green benefited from poor play by his Mavs counterparts. And it should also be noted that Duncan would've been #2 in wins if you moved him to the Mavs.

Basically...tune back in after a couple of games, then you can start doing more analysis.
Playoffs are cool but can't wait until the draft stuff.
TD,
Tim was good (.160 straight WP48) but so were his counterparts. Rule for playoff games is that I'm comparing players against their direct opponents. This is nice for reference but may be bad for James Harden :-)
Arturo, great ODDS, except for 1 thing, since Portland won GM1, L4 should be 0
For the Bobcats, the Al injury might be the perfect time to give Biyombo or Zeller the starting spot and the other more minutes. If Charlotte values Jefferson, which I think they do, let him heal.
I dunno GnoiXiaK, maybe the refs will go back and review the replays for that game.
GnoiXiaK,
3.4% is the odds on a Blazers sweep of the Rockets.
Scott Brooks is secretly working for Hollinger. I'm convinced of it.
Houston is DONE.

San Antonio has, quite frankly, been very fortunate to get one game at home. They have no defensive answer for the Mavs, and sloppy offensive play has cost them dearly.
wmcguire,
I wouldn't count these close games with Aldridge making so many mid range jumpers. Houston could easily be up 2-0.
Unfortunately, they count just as much as if they had been won with perfect efficiency, and Houston isn't taking four out of five. Aldridge aka Yay Points just stomped a mud hole in this site's model and walked it dry.
Spurs aren't done but they've been the lesser team thus far.
Let's wait until the series is over before deciding whether the model is a bust or not.
The model shows that Houston had an 88% chance of winning the series. If Portland were to win (still five games left) that hardly disproves the model.
Thank god nobody in the scientific community overreacts the way wmcguire18 does, otherwise nothing would ever be accomplished. Evolution would have been thrown out with the rediscovery of Mendelian genetics in the early 1900s, because it "stomped a mud hole" in the model and "walked it dry." Because, you know, small sample sizes are always meaningful, and god forbid we use new information to inform our model.
I was being kind of facetious, but I would say that in my experience the social sciences, particularly economics, are wonderful at explaining phenomena, while their power at prediction, happily for human civilization, is scattershot.

Again, I find your articles and models generally useful at helping me look at basketball in a different way, but I thought a little pseudo-scientific detachment when someone points out the obvious about the Houston series might be the order of the day. Instead I'm told that the games don't count because they weren't won the right way, and I'm treated to a history of genetic science that, even for someone as self-important as I am, seems a bit inflated when talking about bball.

The problem with total insistence on empirical data is that you become a slave to it, and you occasionally miss the obvious. In this case, it's Houston being hampered by the lack of a real playmaking point guard, resulting in their offense stalling twice in two games. In Game 2 they spammed a mismatch in the low post, Portland was kept alive by superb play from Aldridge that didn't stop even after Howard ran out of gas late in the game, and NO ONE could pick up the slack.

You guys were joking about Houston dropping one to stay in Portland longer. Houston has dropped the only two that mattered, to a team that was always a bad matchup, and they dropped the second in spite of a huge performance from their best player, who got rope-a-doped. They're just about done.



With all due respect, the "Small Sample Size" excuse/rationalization for Points over Par is a stinking pile of elephant dung. Points over Par is SUPPOSED TO BE A TOOL TO DEAL WITH SMALL SAMPLE SIZES! Points over Par is supposed to be a reasonable facsimile of Wins Produced on a single-game basis; in other words, it is supposed to be used for small sample sizes.
> Points over Par is SUPPOSED TO BE A TOOL TO DEAL WITH
> SMALL SAMPLE SIZES!

Can you cite where anyone claimed (or credibly suggested) that Points over Par is particularly insensitive to sampling error? It would be, at best, careless to suggest that any cumulative measure that doesn't compensate for playing time will handle small samples of player performance well.
Nate, I am taking no position on the relative accuracy, good or bad, of Points over Par. I am simply stating that Points over Par is designed to deal with small samples and WP with large samples, so don't use "Small Sample Size" as an excuse if the results don't turn out like you think they should. If small sample size is a problem with Points over Par, and Points over Par is designed to deal with small sample sizes, then what value does it have over what already exists?
Ziggy, PoP is just a different way of looking at WP that makes it more convenient to compare values in the scope of a game; nothing about it alleviates any of the inherent problems with small sample sizes (nor has it ever been claimed to). If WP measures in miles, PoP measures in feet, but they both tell you the same thing.

side note: does anyone else suspect that the first commenter on this post is Tim Duncan?
> ...I am simply stating that Points over Par is designed to deal
> with small samples, and WP with large samples ...

The normal Points over Par used here (i.e. in the player charts) is almost the same as Wins Produced. The relationship is - effectively:
PoP48 / 30.3 = WP48 - 0.1
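(To put a hypothetical number on that: a PoP48 of +3.03 corresponds to a WP48 of roughly 0.1 + 3.03/30.3 = 0.200, i.e. twice the league-average 0.100.)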

I missed that Arturo 'contextualized' "Points over Par" for these tables: "...each player will not be judged against the average playoff production for their position, but rather their opponents..." I'm not exactly sure whether that will help w.r.t. sampling issues.

(Honestly, I don't understand the design motivation. Arturo writes "...This guarantees that on a game level PoP maps to actual point margin." but - except for the ends of periods and strange changes of possession - WP already has that property by design.)
Player performance on a game to game basis is really volatile, so PoP is going to be really volatile as well. It's a totally fine explainer of a small sample, and trying to predict the future off a small sample is as difficult as it would be using any other explanatory metric.

RE: Houston, the model gave them a non-trivial chance of being upset; if upsets never happen when you give them non-trivial odds, something is as wrong with your model as if you consistently over-predicted upsets. Maybe after these playoffs we can check for that over a multi-year sample?
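One simple way to run that kind of check is sketched below – a Brier score plus a coarse calibration table. The helper names and the 0/1 outcomes in the sample are invented for illustration; only the probabilities echo the initial picks above:

```python
from collections import defaultdict

def brier_score(predictions):
    # predictions: list of (predicted win probability, outcome) pairs,
    # where outcome is 1 if the favored team won the series, else 0.
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

def calibration_table(predictions, bucket_width=0.1):
    # Group predictions into probability buckets and compare the average
    # predicted probability with the observed win rate in each bucket.
    buckets = defaultdict(list)
    for p, outcome in predictions:
        buckets[int(p / bucket_width)].append((p, outcome))
    table = {}
    for key, items in sorted(buckets.items()):
        lo, hi = round(key * bucket_width, 2), round((key + 1) * bucket_width, 2)
        avg_pred = sum(p for p, _ in items) / len(items)
        win_rate = sum(o for _, o in items) / len(items)
        table[(lo, hi)] = (round(avg_pred, 3), round(win_rate, 3), len(items))
    return table

# Placeholder data only: the probabilities echo the initial picks above,
# but the 0/1 outcomes are invented for illustration.
sample = [(0.882, 1), (0.596, 0), (0.793, 1), (0.970, 1), (0.535, 0)]
print(brier_score(sample))
print(calibration_table(sample))
```

A well-calibrated model should show observed win rates close to the average predicted probability within each bucket; a model that never loses its 60-40 calls would be just as suspect as one that loses them constantly.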
OK... so if we are talking about predictions, how on earth could a small sample size prove anything? This is not meant as a defense of Wins Produced at all, since I think they are upfront with their data and time produces enough results to judge, but a single game or even a 7-game series doesn't prove much one way or another. I mean, if two people go all-in with pocket pairs pre-flop and the lower pair wins (about 20% chance, more or less, if I remember correctly), that doesn't prove that the person with the better hand made a mistake or the person with a lower hand is a genius... it just proves that a certain amount of randomness is guaranteed. Do you really think Seattle was THAT much better than Denver? I don't. Judging any model on a series or one team's season is foolish... if in the aggregate Wins Produced doesn't do very well then bring it up, but stop acting like those Fox News fools who kept claiming that a cold March in NA proved global warming wasn't happening at the same time we were having a March that was one of the warmest on record globally. Do not forsake the forest due to some of the trees... judge the model on its totality.
Dodgson,
I am not sure if you are referring to me, but I am not making any reference to any predictions. I was going to make a comment about something you referenced, and since you have made the point I will make a comment.
If the high temperature today is 97 degrees, it is a fact. Sample size is of no meaning to what the high temperature is today. It is foolish to use today's temperature of 97° to claim there is global warming, any more than using a low temp today of 32° to disprove global warming. If you wish to make a case about global climate, then sample size is relevant.
If a player goes off for 46 and 18, you can make definitive factual statements about that player's performance in that game, and sample size is not of any relevance. If you want to take a player's 46-18 game and make the statement "LaMarcus Aldridge is the best player on the planet," then sample size matters, and you cannot use the 46-18 in one game to prove anything like that.
Points over Par is designed to provide a "holistic value" of a player's performance in that one single game. It is inappropriate to take a single game's PoP and make a larger "holistic evaluation" of a player within the context of all players, and say that "Player X is better or worse or the best" based solely upon a single game's PoP. Based upon my interpretation of nearly everything stated (here, Wages of Wins, and Nerd Numbers) about PoP, it is an exceptional measurement of a player's single-game performance. If so, sample size does not matter, because PoP was designed so that sample size didn't matter.
As far as PoP and WP, I find value in each. I also find a great deal of holistic value in Win Shares.

I will also state for the record that the measurement of LaMarcus Aldridge as only the second-best Blazer in Game 1, behind Nic Batum, is PATENTLY ABSURD, when Aldridge had

46 points
TS% of 62.6%
eFG% of 58.1%
18 rebounds
2 assists
2 blocks
only 3 turnovers
and had a higher ORB%, DRB%, TRB%, Blk%, Ast%, and a lower TO% than Batum; the only ways in which Batum was better were a 70% TS% and 2 steals to Aldridge's 0.

I cannot see how anyone can make the claim that Batum's contributions were greater than Aldridge's, measured in points above the average player's contribution. So while I believe PoP has value, in this case it failed miserably. Sample size has nothing to do with it!!!!!!!
The PoP numbers shown here are really the difference between a player's PoP and his opponent playing the same position. They're really close to perfectly coupled, so a player can be a star under this model if he plays really well or his opponent plays really poorly. I wouldn't put too much stock in them as a measure of individual performance.
The midrange is a lower-volatility shot than the 3 since it is a higher-percentage shot, especially if you are the best in the league at it, like LMA. The expected point value may be lower, but in this situation a game or two of slumping 3s can lead to you going fishing. The playoffs are a small sample by definition compared to the regular season, and in a small sample, low-volatility strategies definitely have their merit.
Per Vorped.com, LMA shot 42% from midrange on the season. Chris Bosh shot 48% from the same areas.

It's worth noting that while LMA has gone bonkers, the Rockets haven't exactly been blown out. They needed only one of the following two things to happen: hit 5 or 6 more threes over two games (they are 11-of-51, or 22%, compared to 36% on the season), or have LMA regress to his average from the mid-range. Then they're up 2-0.

Billy Beane said that the playoffs are a 'fucking crapshoot'. I doubt Morey will have a crisis of faith, even if they get swept.
Ziggy,

I was referring to you, and I appreciate your longer response, which came across differently to me than your first response. I actually agree with pretty much everything you just said. I do think that if you view shot attempts as a turnover (requiring greater than 1 PPP just to break even) and view defensive rebounds as more of a team function, then you would look at some of those numbers differently. Thanks for a response which actually included more details!
Re: Ziggy

"If the high temperature today is 97 degrees, it is a fact. Sample size is of no meaning to what the high temperature is today. It is foolish to use today's temperature of 97o and use that to claim there is global warming, any more than using a low temp today of 32o to disprove global warming. If you wish to make a case about global climate, then sample size is relevant."

I don't think this analogy is quite comparable.

You are assuming that temperature is a single measurement from a continuous scale. However, our reading of temperature is a single sample from an unknown but measurable/estimable distribution. In fact, temperature is defined as the average energy imparted by the movement of particles within a given medium relative to some physical baseline. When the Weather Channel reports the "high temperature today is 97 degrees," this is shorthand for "we examined the weighted averages reported by some number of local weather stations at a number of pre-determined intervals throughout the day. the highest such measured average was 97."

Look at the temperature reports right now for all the local weather reports in your area. Quite likely, some of them will report different temperatures. Heck, if you were to place sensitive enough thermometers in different rooms of your house, they would read different temperatures. Temperature is volatile: it is affected by humidity, air quality, elevation, wind, pressure, proximity to bodies of water, shade, cloud cover, time of year, and wider weather patterns. When weather stations measure and report their temperature (usually every half-hour or hour, though now with better computer systems some may do it as frequently as every five minutes), they aren't just reading a thermometer.

They may be taking the average temperature as reported by multiple thermometers, and adjusting the number based on pre-determined (though also unknown, and therefore variable) factors (including, by federal regulation: proximity to air conditioning or heating units, whether the building is surrounded by other buildings or large patches of asphalt, whether they are measuring from rooftop or ground level, proximity to industrial and municipal facilities, or even whether they are close to power stations or electrical transformers; often, they also adjust these measurements by comparing them to measurements from nearby rural areas). The weather reported on the Weather Channel is the average OF many of these averages, which they also adjust mathematically.

The point being that the high temperature reported for a day is not a definitive representation of whatever the "true" temperature is. It is the estimated output from a complex mathematical model, where a range of highly variable measurements are adjusted based on estimations of the effect of various parameters (which are themselves only estimates, not "true" values). Due to sophisticated statistical rigor and precise instruments, we can have a fairly high degree of confidence in a daily high temperature report, but it is still a variable with some degree of error. In truth, "97 degrees is the high" really means something like "we can be 95% confident that the highest temperature reached today is between 96.9 and 97.1 degrees" (since I don't have the figures available, I don't know how wide this confidence interval is, I just picked 0.1 arbitrarily, but the magnitude of this deviance is irrelevant).

Since there is variability, sample size becomes important. The more measurements taken, the more information we have to estimate the "true" temperature, and the lower our variability (and thus the more likely our estimate is to be a good approximation of the truth). It is factually incorrect to assert that sample size is irrelevant to our understanding of what the temperature is.

Further, temperature is inherently a comparative measure, just like PoP. Temperature is the movement of particles relative to a baseline. PoP is a measure of efficiency relative to a baseline (positional average or opponent production). That is, it makes no sense to try and interpret them as absolutes without context. That context is provided by statistical inference, which is heavily reliant on sample size. This is true of any statistical model. Both of these measures are our attempt to map the properties of a complex phenomenon into empirical terms that we can use to understand how they operate. In other words, you can't just point at a single value and say anything meaningful about what it means without some sort of empirical context.
RyNye ,
Thanks for that, I was completely unaware of how much of a moron I actually am, thanks for clearing that up! I have always appreciated being set straight in a pedantic and patronizing manner, by people who just know they are more brilliant than I.

I now understand that the stat line for any individual player in any game is actually not what their actual stat line is. Their stat line is actually defined as the average energy imparted by the players, through the movement of particles within a given arena relative to some physical competition between two professional basketball teams. As such the stats that are reported by the NBA, ESPN, SI.com, and then transferred to a larger database like basketball reference are actually the weighted averages reported by some number of local sporting event statistical reporting stations, at a number of pre-determined intervals throughout each game, and the weighted average of all that multitude of samples is the reported stat line. There is no actual statistical measurement, because performance is volatile: it is affected by humidity, air quality, elevation, wind, pressure, proximity to bodies of water, shade, cloud cover, time of year, and wider in-game offensive and defensive patterns.

The point of this is that the statistical performance of any individual player reported for a day is not a definitive representation of whatever the "true" performance is. It is in fact the estimated output from a complex mathematical model, where a range of highly variable measurements are adjusted based on estimations of the effect of various parameters (which are themselves only estimates, not "true" values). Due to sophisticated statistical rigor and precise instruments, we can have a fairly high degree of confidence in the actual statistical performance line of an NBA player in a game, but it is still a variable with some degree of error. In truth, "46 points, 18 rebounds, 2 assists, 2 blocks, and 16 FTM on 19 FTA" really means something like "we can be 95% confident that LaMarcus Aldridge produced a 46 points, 18 rebounds, 2 assists, 2 blocks, and 16 FTM on 19 FTA stat line, but the actual range of points scored is 45.4 to 46.6 and his rebounds were within a range of 17.7 to 18.3" (of course I don't have all the specific data so I am just estimating the confidence interval, I don't know how wide this confidence interval is, I just picked these arbitrarily, but the magnitude of this deviance is irrelevant).

Further, actual in-game performance is not an abject absolute fact, but rather an inherently comparative measure, just like PoP. Basketball performance is the movement of player and ball through a mass of particles relative to a baseline. That is, it makes no sense to try and interpret basketball performance in an individual game as absolute, WITHOUT CONTEXT, and that context is provided by statistical inference, which is heavily reliant on sample size. What a player does in an individual game is meaningless; you can't truly assess his SINGLE GAME PERFORMANCE unless you use a much larger sample size than that of his actual single game performance. That is necessary for any statistical model. In other words, you can't just point at a single value and say anything meaningful about what it means without some sort of empirical context.

And to think I thought that the actual stat line produced by a player was actually real, but it actually isn't, it is actually just an estimation. I also thought one could evaluate a player's single game performance using a tool like Points over Par, or Win Shares, but I was wrong: you can't actually properly assess a "single game performance" unless you get a much larger sample of many many many games. I was also mistaken in thinking that it was possible, and in fact probable, that the highly rigorous statistical model that is Points over Par had some flaws, that rigorous statistical models are just that and are not reality, and that it was possible that it didn't perfectly measure every single player game perfectly accurately, and that it was possible that it actually could evaluate a player's performance inaccurately in the larger context of his team. I see now that I was wrong, and that I was completely "miscontextualizing" all players' performances.

I am now going to contact my alma mater and demand a refund for the thousands of dollars I spent getting my MBA in Finance, since they graduated a complete fricking moron. If my word isn't enough for them, I will make sure and provide them with your name, so that they can contact you so you can assure them that I am a completely clueless fool, who doesn't even know how to read a fricking thermometer.
