# OFFENSIVE REPLACEMENT LEVELS

## INTRODUCTION

Bill James, in his *1985 Baseball Abstract*, suggested that if one studied the rate of return for regulars at different levels of offensive production, one could determine the amount of hitting needed for players to keep their jobs. Mr. James calls this required performance the sustenance level; I call it here the offensive replacement level. Determining the offensive replacement level at each position is important for several reasons. First, it is valuable in estimating a player's value to his team, which is simply how much better he is than whomever his team could replace him with without giving up another player. Since regulars who produce at less than the offensive replacement level lose their jobs, teams evidently believe that players are readily available who perform at least at that level. Thus, the replacement level can be used for evaluating players in general, rather than considering which specific person would take over for a given player if he were replaced. Second, it is useful for comparing the relative importance of fielding at each position, at least as perceived by managers. Theoretically, one could determine a player's defensive value by examining at what point he loses playing time due to weak hitting (see Eddie Brinkman for an example).

The complete results of my study can be found in Chart #1. Both the replacement level and average production for each position show the expected relationship: first basemen and left fielders have the highest replacement levels and averages, while shortstops and second basemen have the lowest. For those not familiar with runs created per game, Chart #2 lists some representative seasons showing conventional statistics along with the corresponding runs created per game.
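
For readers who want to compute the statistic directly, here is a minimal sketch using Bill James's basic runs created formula, RC = (H + BB) × TB / (AB + BB). The article does not specify which version of the formula was used for the charts, so the basic form, and the sample batting line, are assumptions for illustration.

```python
def runs_created_per_game(ab, h, bb, tb):
    """Basic runs created per 27 outs (RC/G).

    Uses Bill James's basic formula RC = (H + BB) * TB / (AB + BB);
    the exact variant behind the charts is not specified, so this
    basic form is an assumption. Outs are approximated as AB - H.
    """
    rc = (h + bb) * tb / (ab + bb)
    outs = ab - h
    return 27 * rc / outs

# Illustrative batting line (not taken from Chart #2): a .300 hitter
# with moderate power and walks comes out to roughly 6.5 RC/G.
print(round(runs_created_per_game(ab=550, h=165, bb=60, tb=250), 2))
```

More refined versions of the formula add terms for stolen bases, hit batsmen, and other events, but the basic form is enough to read the charts.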


## METHODOLOGY

The first step was to determine the regular at each position (except pitcher) for each major league team from 1969 to 1989. I did this using the *Baseball Encyclopedia*, generally choosing the player listed there as the regular as long as he had made at least 243 outs (the equivalent of nine games at 27 outs each; a full season's share is about 18 games' worth, since 162 team games divided among nine players comes to 18 per player). Sometimes, especially with catchers, I went as low as 189 outs (seven games). If there was no regular, that position was not included in the study for that team and year. Next, using the statistics in *Total Baseball*, I determined the runs created per 27 outs (RC/G) for each regular for the years 1969 to 1988. I then noted whether the player was a regular the next season at any position (or DH), even if he played several positions and could not be considered a regular at any one of them, such as Dick Allen in 1970. If he was traded and played regularly for his new team, he was considered to have kept his job. If he did not play regularly the following season due to injury, but returned to regular status the year after that, I counted him as keeping his job. However, if he never played regularly again, his last season was not included in the study. After I had collected these data, I set up ranges of one-half or one run created per 27 outs for each position. Then it was a simple matter of counting the number of players in each category and the number who returned the following season.
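
The counting step above can be sketched as follows. The data layout (pairs of RC/G and a kept-job flag) and the half-run bin width are illustrative assumptions, not the original study's working format.

```python
from collections import defaultdict

def return_rates(players, bin_width=0.5):
    """Group regulars into RC/G ranges and compute the rate of return.

    `players` is a list of (rc_per_g, kept_job) pairs -- an assumed
    layout for illustration. Each bin is labeled by its lower edge,
    a multiple of `bin_width`, matching the half-run ranges used in
    the study.
    """
    counts = defaultdict(lambda: [0, 0])  # bin lower edge -> [total, returned]
    for rc_g, kept in players:
        lo = (rc_g // bin_width) * bin_width
        counts[lo][0] += 1
        counts[lo][1] += kept
    return {lo: returned / total
            for lo, (total, returned) in sorted(counts.items())}

# Four hypothetical second basemen in the 3.5-3.9 RC/G range,
# three of whom kept their jobs:
print(return_rates([(3.6, 1), (3.7, 1), (3.8, 0), (3.9, 1)]))  # {3.5: 0.75}
```

With 504 team-seasons spread over many positions and bins, each bin holds only a handful of players, which is why the resulting rates are noisy.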

## RESULTS

As you can see in Chart #1, the rate of return varies greatly from position to position. The rates did not generally vary much between leagues. I had supposed that the replacement level would be slightly higher in the National League, since the designated hitter allows American League teams to put a weak fielder at DH and use a good-field/no-hit player in his stead, but that wasn't the case. Bill James had theorized that there would be a range of one-half run created per 27 outs where the replacement level would change sharply. However, that did not prove to be true. The rates tend to change slowly; in some cases the rate of return was lower at a slightly higher level of production. For example, at second base, players hitting between 3.5 and 3.9 RC/G returned 78% of the time, while at 4.0-4.4 RC/G the rate of return was 71%. Note that although I studied a total of 504 team-seasons, the sample size in each range is fairly small.
The replacement level for each position is about 1.5 to 2 runs below average production. I expected it to be one run below: using the Pythagorean method of predicting winning average, at the average level of 4.2 RC/G, a player 1 run below average corresponds to a winning average of .367, 1.5 runs below yields .292, and 2 runs below gives .215. In the past I have used one run above average for pitchers, which seems superficially valid; thus, the level for hitters shouldn't be so low for an average fielder. It may be that the measured level is artificially lowered because only good fielders are allowed to play regularly at low levels of offense (e.g., the young Ozzie Smith), so the offensive replacement level for average fielders is actually higher than the chart shows.
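
The Pythagorean figures quoted above can be checked directly. The sketch below uses the standard exponent-2 version of the formula, treating a player's RC/G as the runs-scored side against the 4.2 league average.

```python
def pythagorean_wpct(runs_scored, runs_allowed):
    """Expected winning percentage via the Pythagorean method
    (standard exponent of 2): RS^2 / (RS^2 + RA^2)."""
    return runs_scored ** 2 / (runs_scored ** 2 + runs_allowed ** 2)

avg = 4.2  # league-average RC/G cited in the text
for deficit in (1.0, 1.5, 2.0):
    wpct = pythagorean_wpct(avg - deficit, avg)
    print(f"{deficit} runs below average: {wpct:.3f}")
# prints 0.367, 0.292, and 0.215 for deficits of 1.0, 1.5, and 2.0
```

The three outputs match the .367, .292, and .215 winning averages given in the text.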

Several factors bear on the validity of this study. One is that a single season's statistics may not accurately represent a player's ability; indeed, several players who kept their jobs after a poor season were established players having an off year, and their teams obviously believed they could do better the next year. Another is that factors other than offense may be more important than I have hypothesized. Further, the overall level of offense varied during the period. Also, a manager may overestimate a player's offense based on an incomplete understanding of statistics. Finally, a team may not have an adequate replacement available, or may not want to take a chance on an unproven player. Any of these factors could make the "true" offensive replacement level higher or lower than I have calculated.

## OTHER STUDY

Another study of this question was done by Phil Birnbaum and published in *By The Numbers*, the newsletter of SABR's Statistical Analysis Committee, in September 1994. Mr. Birnbaum used the entire 20th century as the basis for his study. Differences in methodology included counting a player as returning if he ever played regularly again and grouping all outfield positions together. He also tested the results with each player's *Total Baseball* Fielding Runs taken into account. Both times the results were generally consistent with my findings: no clear replacement level, but rather a gradual drop in return rate as hitting decreased.

## CONCLUSION

Based on these two studies, there is no evidence that a "sustenance level" as postulated by Bill James exists. Apparently, too many other factors affect management's decision process, so that even light-hitting players have a decent chance to retain their jobs.

Any comments or questions? E-mail me at CliffordBlau@yahoo.com
