Year-to-Year Unit Improvements

Defense is why I began writing this post, so I'll tackle that one first. I think we all realize that this year's defense was better than last year's defense even though it's hard to tell from the raw numbers. The offenses put up ridiculous amounts of points this year in the Big 12 for various reasons. Without getting too into those details, I will only say that from my perspective two of the reasons were pace and officiating. A large number of offenses in the Big 12 this season played at a very quick pace, and the no-huddle hurry-up was very common in the conference. As for officiating, well, it's easier to score when holding isn't called. I'll leave it at that before I go crazy.

So how do we measure defensive improvement in such an environment? My ratings assign an offense and a defense rating to each team, and a normalized ranking is also produced that allows for some comparison between seasons. One thing that's important to note, however, is that the effects of pace are incorporated into each number: a team that plays at a higher pace than average (producing more possessions per game) will have a higher offense rating and a lower defense rating than an equally strong team that plays at a slower pace. That's important to remember, as it introduces the most obvious bias into the study I'm about to report. Also, Thursday's game will affect Texas' 2008 ratings since an opponent is involved, but the changes probably won't be significant enough to change these results.
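To make the pace bias concrete, here's a toy sketch. The per-possession framing and all the numbers are my own illustration, not the actual rating formula: two equally efficient teams, one fast-paced and one slow-paced, end up with different per-game numbers.

```python
# Hypothetical illustration of the pace bias -- not the actual rating system.
points_per_possession = 2.0          # identical offensive efficiency
points_allowed_per_possession = 1.5  # identical defensive efficiency

fast_possessions = 14  # hurry-up, no-huddle team
slow_possessions = 10  # ball-control team

# Per-game numbers that ignore pace:
fast_off = points_per_possession * fast_possessions          # points scored
slow_off = points_per_possession * slow_possessions
fast_def = points_allowed_per_possession * fast_possessions  # points allowed
slow_def = points_allowed_per_possession * slow_possessions

# The fast team looks like it has a better offense and a worse defense,
# even though both teams are identical on a per-possession basis.
print(fast_off, slow_off, fast_def, slow_def)
```

Identical efficiency, different per-game ratings: that's exactly the bias to keep in mind below.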

To determine year-to-year defensive improvement, I subtracted the previous year's normalized defense rating from the current year's rating for each Texas team in the ratings, then divided by the standard deviation used for the normalized ratings. The final number below is therefore the number of standard deviations by which each defense improved on the previous year's. The pace effect mentioned in the previous paragraph could bias these results, since a team could improve its defense rating simply by playing at a slower pace or by playing many games against slower-paced teams. Having watched every game of the 2007 and 2008 Texas football seasons, though, I don't think anyone could argue that the 2008 games were played at a slower overall pace than 2007's. Quite the opposite, in fact. As was reported around the Fiesta Bowl, for example, Texas defended fewer running plays than any other team in the nation this year. That isn't directly a measure of pace, but passing plays take less time off the clock than running plays on average because incompletions stop the clock, so facing pass-heavy offenses points toward a faster pace, not a slower one.
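The calculation above can be sketched in a few lines. The ratings and standard deviation here are hypothetical stand-ins (chosen so that 2008 and 2006 happen to land on the table values of 1.16 and -0.81), since the real normalized ratings come from the full rating system:

```python
# Sketch of the Y2YDI calculation; ratings and sd are hypothetical stand-ins.
norm_def = {2005: 1.10, 2006: 0.60, 2007: 0.45, 2008: 1.17}
sd = 0.62  # standard deviation used for the normalized ratings (assumed)

def y2ydi(year, ratings, sd):
    """Year-to-year defensive improvement, in standard deviations."""
    return (ratings[year] - ratings[year - 1]) / sd

improvements = {yr: round(y2ydi(yr, norm_def, sd), 2)
                for yr in sorted(norm_def) if yr - 1 in norm_def}

# Rank best-to-worst, mirroring the tables below
ranked = sorted(improvements.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # -> [(2008, 1.16), (2007, -0.24), (2006, -0.81)]
```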

The table:

Best Y2YDI (year-to-year defensive improvement, in standard deviations)
Rank Year Y2YDI
1 1957 1.31
2 2008 1.16
3 1998 1.11
4 1940 0.91
5 1990 0.90
6 1966 0.87
7 1972 0.84
8 1944 0.81
9 1989 0.80
10 1981 0.78

I'm not an expert on staff changes throughout our history, but even I can recognize that the top 3 defensive improvements in Texas history according to this analysis are due to major coaching changes. Darrell Royal taking over for Ed Price, Mack Brown taking over for John Mackovic, and Will Muschamp being hired as the new defensive coordinator all resulted in immediate and extreme defensive improvement.

For the masochists among you, here is the table with the opposite results (worst year-to-year defensive change):

Worst Y2YDI
Rank Year Y2YDI
1 1992 -1.90
2 1997 -1.58
3 1984 -1.36
4 1980 -1.36
5 1938 -1.04
6 1943 -1.00
7 1965 -0.89
8 1931 -0.88
9 1973 -0.86
10 2006 -0.81

Here we see Mackovic taking over for McWilliams, Mackovic's 1997 meltdown, and Akers' 1984 meltdown represented in the "top" 3 seasons. Also notable is Tommy Nobis' departure (although srr50 can speak to the other recruiting problems that began to plague the program around that time). The dropoff from 2005 to 2006 shows that our problems without Vince were more than just on the offensive side of the ball, although having a strong offense does help the defense somewhat, of course. It also raises an ugly question about the 2006-07 defensive coordinator's performance: his takeover of the unit coincided with the 10th-worst dropoff in our history, and his departure from the role was followed by the 2nd-best improvement.

While I was at it, I figured I'd throw up the offense tables as well.

Best Y2YOI (year-to-year offensive improvement, in standard deviations)
Rank Year Y2YOI
1 1977 1.99
2 1998 1.85
3 1941 1.80
4 2005 1.67
5 1932 1.64
6 1968 1.33
7 1939 1.29
8 1921 1.23
9 1914 1.12
10 1987 1.12

So, we have Akers taking over for Royal (the move to Earl Campbell as the featured tailback), Brown taking over for Mackovic (the focus on Ricky Williams after Mackovic's meltdown), the 1941 team, and the year it all came together in 2005.

Worst Y2YOI
Rank Year Y2YOI
1 1976 -1.90
2 1942 -1.58
3 1933 -1.36
4 1971 -1.36
5 2006 -1.04
6 1997 -1.00
7 1991 -0.89
8 1962 -0.88
9 1916 -0.86
10 1944 -0.81

At first glance, this table appears to correlate more strongly with the years following an outstanding offensive outburst than with coaching changes. I'd be interested to hear what happened in 1976, though.

Note: All tables use only results since 1912, when the current scoring system of 6 points for a touchdown and 3 for a field goal was introduced.