Has Sabermetrics Been Good For Baseball Journalism?
Bill James on "good sabermetrics" in 1981
We’re up to The 1981 Bill James Baseball Abstract, which is the last of the series that I don’t have a physical or scanned copy of.
Because I don't have copies, I've had to rely on abridged and summarized versions of these old Abstracts. Fortunately, Rich Lederer published a bunch of these. Unfortunately, his articles are over 20 years old, and most of the discussion on the now-defunct Baseball Primer (later Baseball Think Factory) is lost to internet history.
Anyway, Bill James apparently opened the 1981 Abstract with a two-page letter on what sabermetrics is and how it differs from traditional sports journalism.
Remember, by the way, that this is around the time Bill James started to attract national attention. Ballantine Books picked up his homemade publication in early 1982, and he became a household name not long afterward, publishing well-known articles in Sports Illustrated and elsewhere. There's a reason James wanted to compare himself to other sports journalists: like many of the sports bloggers of a few years ago, he wanted to demonstrate that he belonged in the conversation.
James offered three reasons why sabermetrics differs from traditional sports journalism. Let's look at each of them:
I don’t think James explains this point adequately. What he seems to mean is not just that sabermetrics introduces new statistical evidence into the picture, but that it introduces that evidence regardless of the conclusions the writer wants to reach.
Take this article from 1949 comparing the awful St. Louis Browns to the 1916 Philadelphia Athletics, for example:
There aren’t many statistics here, and the statistics the article does present are there simply to prove its point: the Browns were bad, but Connie Mack’s post-$100,000 infield Athletics were even worse. And, hey, Mack didn’t fire himself after that season, did he?
Aside from won-loss percentage, there aren’t many statistics employed here at all.
How would a sabermetric writer approach this problem? Almost certainly from a different perspective. The sabermetric writer would look at all awful teams in baseball history first, use a variety of tools to analyze and compare them, and would only then come up with a conclusion and shape it into an article.
That’s what James is talking about — at least, that’s what I understand.
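To make that data-first workflow concrete, here’s a minimal sketch in Python with pandas. This is my own illustration, not anything James wrote, and the file name and columns (a hypothetical team_seasons.csv with year, team, wins, and losses) are assumptions for the example:

```python
import pandas as pd

# Hypothetical input: a CSV of team-season records with columns
# year, team, wins, losses. (The file name and columns are my own
# assumption for illustration; any team-season dataset would do.)
seasons = pd.read_csv("team_seasons.csv")

# Winning percentage, the one statistic the 1949 article leaned on.
seasons["win_pct"] = seasons["wins"] / (seasons["wins"] + seasons["losses"])

# Look at *all* the awful teams first, not just the two the writer
# already had in mind, and only then start shaping a conclusion.
worst = seasons.nsmallest(10, "win_pct")

print(worst[["year", "team", "wins", "losses", "win_pct"]]
      .to_string(index=False))
```

The point isn’t the code itself; it’s the order of operations. The full ranking comes first, and the story about any particular team comes afterward.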
This seems to be a natural extension of the first point.
You can make pretty much any argument you want, provided that you use the right statistics. You can make arguments based on arbitrary RBI thresholds in favor of your favorite player one day, and then turn around the next day and bash RBI as a worthless statistic if it favors a player you dislike.
In theory, sabermetrics addresses this problem with a genuinely scientific approach. You’re supposed to look at the data first, go where it takes you, and only then start to formulate your conclusions.
Do you think most sabermetricians these days do that? Or are they starting with their conclusions and looking for the evidence?
And this third point is… well, it’s the same as the first two.
The idea here is to stay away from the diatribe and focus instead on the data.
Now, I don’t think Bill James has necessarily followed this approach consistently for his entire career. But it is absolutely the point from which he started.
And then there’s the concept of “bad sabermetrics:”
Earnshaw Cook is an excellent example of “bad sabermetrics,” by the way. I don’t have a copy of his famous (and somewhat rare) book on how baseball lineups should be created. I do know, however, that he paid absolutely no attention to defense and didn’t really know the first thing about how the sport is actually played.
In other words, I’d echo James’ sentiments here:
But the interesting question isn’t about Earnshaw Cook. It’s about modern sabermetrics.
What do you think of the keyboard warriors who make up the modern sabermetric community? Do they qualify as “good sabermetricians” according to James’ definition?
And the biggest question of all: has this statistical revolution been good for baseball journalism in general?
I doubt I'm knowledgeable enough to answer the question. I got into sabermetrics a bit back in the 2010s and quickly realized it would take years of study to fully understand and appreciate them. I think they have value and wish I had the time to learn more about them.
I suspect that many, even most, baseball journalists are likewise only on the edge of sabermetrics, taking what they can use (as you say, to prove a point they want to make). It may be that the stats' main value is for websites like Baseball Reference, for us fans, and for the teams that use them to improve their play (a la "Moneyball").
I don't dislike or dismiss sabermetrics, and I understand the scientific nature of it. I just sometimes find it boring. The only reason I care about stats at all (or the main reason, anyway) is that they are so ingrained in baseball lore. Stats mentioned or referenced are, to me, more like props used in the telling of the story. That being said, I understand why people enjoy exploring the more advanced metrics of the game, and I appreciate the outcomes they come up with.