When I was a Little Leaguer every now and then I would calculate my own batting average. A typical scenario would go like this: I had just gone 2 for 3 in the fifth game of the year, so then the excitement of a multi-hit performance would spur me to add up my stats so far on the season to see how "well" I was doing. I would then find my totals to be something like 3 for 14, which makes for a batting average of .214.
Of course, then I would be depressed. Even when I was ten years old I knew that .214 sucked. As a baseball card collector in those days, I had a limited number of those plastic sleeves used to protect cards. So what did I do? I only used sleeves on hitters who had batted at least .250 the year before. Anything below that I didn't deem worthy of a sleeve; and yet, even after a multi-hit game, I was still below .250.
Ah, but wait! What about those walks? If I added in the two walks I had gotten so far, my totals looked like this: 5 for 16, which made for a .313 average. I could feel good about myself again. In my mind, I was now a ".300 hitter." It might not be a manly .300, but it was good enough for me.
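For anyone who wants to check my ten-year-old arithmetic, here's a minimal Python sketch of both calculations (the function names are just ones I made up for illustration):

    def batting_average(hits, at_bats):
        # Batting average: hits divided by at-bats (walks don't count as at-bats).
        return hits / at_bats

    def times_on_base_rate(hits, walks, at_bats):
        # The "walks count too" math I was doing: times on base
        # divided by trips to the plate (at-bats plus walks).
        return (hits + walks) / (at_bats + walks)

    print(batting_average(3, 14))        # 0.2142857... -> the depressing .214
    print(times_on_base_rate(3, 2, 14))  # 0.3125       -> the .313 I felt good about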
Little did I know back then that I was dabbling with baseball mathematics that had yet to be fully understood by most MLB scouts. What I'm talking about, of course, is baseball's century-old love affair with "batting average."
Let's compare two hypothetical players who each had 600 plate appearances in a season.
Player A
165 hits
570 at-bats
.289 batting average
Player B
130 hits
490 at-bats
.265 batting average
For the longest time, the .289 average would have been praised, whereas the .265 average would have been viewed, at best, as just okay. The .289 average might make you an All-Star, whereas the .265 average might make you a journeyman.
However, when it comes to a hitter's value to an offense, batting average is not the best indicator. Let's take a closer look at the numbers above:
Keeping the math simple, let's say Player A had 30 walks. That puts his on-base percentage (OBP) at (165 + 30) / (570 + 30) = 195 / 600 = .325.
For Player B, we have a whopping 110 walks, which puts his OBP at (130 + 110) / (490 + 110) = 240 / 600 = .400.
In this example, the pendulum swings in Player B's favor. Quite simply, in any given plate appearance, Player B has a 40% chance of not making an out, while Player A has only about a 33% chance.
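Here's the same comparison as a short Python sketch, keeping the simplification from above (walks only; hit-by-pitches and sacrifice flies are ignored):

    def batting_average(hits, at_bats):
        return hits / at_bats

    def on_base_percentage(hits, walks, at_bats):
        # Simplified OBP: times on base divided by plate appearances,
        # where plate appearances = at-bats + walks (no HBP or sac flies).
        return (hits + walks) / (at_bats + walks)

    players = [
        ("Player A", 165, 30, 570),   # name, hits, walks, at-bats
        ("Player B", 130, 110, 490),
    ]

    for name, hits, walks, at_bats in players:
        ba = batting_average(hits, at_bats)
        obp = on_base_percentage(hits, walks, at_bats)
        print(f"{name}: BA {ba:.3f}, OBP {obp:.3f}")

    # Player A: BA 0.289, OBP 0.325
    # Player B: BA 0.265, OBP 0.400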
If memory serves me correctly, one of the points made in Michael Lewis's Moneyball was that the number 3 is at the heart of baseball. In an inning, anything can happen for the offense until they make 3 outs. Therefore, each out is precious, and it is the statistic of "on-base percentage" that gives us the best measure of the frequency at which a player makes (or avoids) outs.
Back in the late 90's (or early 2000's), Billy Beane and the geeks working for him (such as Paul DePodesta and others whom I can't remember) discovered that OBP is the stat that best correlates with a player's ability to produce runs for an offense. The Oakland A's used this information to great advantage for a few years. However, the league started to wise up to the strategy. In fact, the Red Sox's GM, Theo Epstein, has employed many of Beane's statistical strategies; that, combined with the Red Sox's superior resources, has contributed to Boston's high level of success over the past 5 years.
So here's my complaint. If it's so well documented that OBP is more important than batting average, why do TV announcers (and just about everyone else) keep referring to batting average instead of OBP? It seems to me that when a player comes to bat, the announcer should list his on-base percentage, not batting average.
1 comment:
In my post above, I compared an imaginary "Player A" with "Player B." In real life, Alfonso Soriano's stats are similar to Player A, whereas Adam Dunn's stats are similar to Player B. Alfonso Soriano is an All-Star, but Adam Dunn will probably contribute more to your team's lineup.