Remember the ten-percent rule? To minimize your risk of injury, it said, don't increase your mileage by more than 10 percent from week to week. It's one of those nuggets of common-sense wisdom that runners and other endurance athletes have relied on for generations, even though, if you try to take it literally, it becomes nonsensical. What if you run 10 miles one week after an injury or a break or some other disruption? Do you really need to ramp up by 10 percent at a time, so that seven weeks later you still won't have reached 20 miles per week?
These days, the 10-percent rule has been supplanted by a more sophisticated yardstick called the acute-to-chronic workload ratio (ACWR). The ACWR involves dividing your most recent weekly mileage (or other measure of training load) by your average weekly mileage over the most recent four weeks, including the current one. If you run weeks of 40, 30, 40, 50, your four-week average is 40, so your ACWR is 50 / 40 = 1.25. If you simply do the same training every week, your ACWR is 1.
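The arithmetic is simple enough to sketch in a few lines of Python (a reader-side illustration of the calculation described above, not code from any of the papers discussed):

```python
def acwr(weekly_loads):
    """Acute-to-chronic workload ratio: the most recent week's load
    divided by the average of the most recent four weeks (including
    the current one), matching the worked example in the text."""
    if len(weekly_loads) < 4:
        raise ValueError("need at least four weeks of data")
    chronic = sum(weekly_loads[-4:]) / 4   # four-week rolling average
    acute = weekly_loads[-1]               # most recent week
    return acute / chronic

print(acwr([40, 30, 40, 50]))                       # 1.25, as in the example
print(acwr([40, 40, 40, 40]))                       # 1.0 for steady training
print(round(acwr([10, 11, 12.1, 13.31]), 2))        # 1.15 for 10-percent weekly growth
```

The last line checks the claim below that a steady 10-percent weekly increase works out to an ACWR of about 1.15.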
Since the ACWR was first introduced in the sports science literature back in 2014, it's been widely studied and discussed. An International Olympic Committee consensus statement on sports injuries a few years ago endorsed the idea of a sweet spot minimizing injury risk between 0.8 and 1.3, with substantially greater risk when ACWR exceeds 1.5. For comparison, if you increase by 10 percent every week, your ACWR is 1.15. I've written about the concept before, because it made intuitive sense and was easy to apply.
But there has been backlash, with some scientists pointing out flaws in both the theory and evidence supporting the use of the ACWR. In a recent paper in Sports Medicine, researchers from McGill University led by Ian Shrier sum up the case against it. In a way, the discussion reminds me of debates around the original ten-percent rule, where you have to weigh demonstrable flaws against the sense that this ratio really does tell you something useful in the real world.
Here are a few of the criticisms that Shrier and his colleagues note, drawing in many cases on previously published critiques by other scientists:
- Since it's a ratio, it doesn't reflect the absolute size of the load. Judging solely from their ACWR, someone who increases their mileage from 10 miles a week to 15 miles a week would have the same injury risk as someone who suddenly jumps from 100 miles a week to 150 miles a week. Even worse, someone who started out running 10 miles a week and slavishly kept their ACWR just below the suggested max of 1.3 every week for a year would end up running 117,000 miles in the final week of the year. In both cases, relying solely on a ratio gives you gibberish.
- The four-week average used to compute the chronic training load hides the details of how that training stress was accumulated. Running a steady 50 miles a week is different from mixing 20-mile and 80-mile weeks. Even within a given week, averages don't capture how the load is distributed and what the spikes look like.
- The four-week average implies that the training you did 28 days ago is just as relevant to your injury risk as the training you did a week ago. One alternative is to use a weighted average to calculate the chronic load, in which the most recent training sessions count more than the older ones. There's some evidence that this approach improves the ACWR's predictive power, but it's more complicated to use, and according to Shrier and his colleagues it requires as much as 50 days of injury-free baseline data to get the weighting right.
- If you taper before a big competition, your ACWR would suggest that you'll have a high risk of injury every time you compete. In reality, most athletes would say that resting up before a big competition reduces your injury risk.
- The original data used to calculate the ACWR sweet spot of 0.8 to 1.3 came from studies in cricket, rugby, and Australian rules football. How well does that data generalize to, say, swimming or mountain biking? No one really knows, and it raises the question of whether separate thresholds need to be calculated for every different activity.
- One of the big surprises emerging from the ACWR research was that ratios below 0.8 also seemed to raise the risk of injury. This is puzzling: why would training too little make you vulnerable? One explanation is that in contact sports like rugby, you need to be training consistently in order to survive the rigors of the next game. But another option, Shrier and colleagues point out, is bias in the way the ACWR is calculated. If you get injured on a Tuesday, your training load that week will be low, and consequently so will your ACWR. The apparent risk associated with a low ACWR, in other words, may be a case of reverse causation.
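The runaway-ratio point in the first criticism is easy to check with a quick simulation. This sketch pins each week's mileage at exactly the level that produces an ACWR of 1.3 (the paper's figure of 117,000 miles will shift a bit depending on the exact starting assumptions, but the six-figure absurdity is robust):

```python
def weeks_at_max_ratio(start=10.0, ratio=1.3, n_weeks=52):
    """Simulate a runner who starts at `start` miles/week and each week
    runs exactly enough that the ACWR (current week divided by the
    four-week average that includes it) equals `ratio`."""
    loads = [start, start, start]
    while len(loads) < n_weeks:
        prev3 = sum(loads[-3:])
        # Solve w = ratio * (w + prev3) / 4 for this week's mileage w.
        w = ratio * prev3 / (4 - ratio)
        loads.append(w)
    return loads

final_week = weeks_at_max_ratio()[-1]
print(f"{final_week:,.0f} miles in week 52")  # six figures -- clearly absurd
```

The ratio stays "safe" at 1.3 every single week, yet the absolute load compounds by roughly 20 percent weekly, which is exactly why a ratio alone can't be the whole story.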
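The weighted-average alternative mentioned in the third criticism is usually implemented as an exponentially weighted moving average over daily loads. Here's a minimal sketch; the `lam = 2 / (days + 1)` decay constant follows a common convention in the sports-science literature, and the exact weighting scheme Shrier's group discusses may differ:

```python
def ewma_loads(daily_loads, days=28):
    """Exponentially weighted moving average of daily training loads,
    so that recent sessions count more than older ones."""
    lam = 2 / (days + 1)          # decay constant for a 28-day "chronic" window
    ewma = daily_loads[0]
    out = [ewma]
    for load in daily_loads[1:]:
        ewma = lam * load + (1 - lam) * ewma
        out.append(ewma)
    return out
```

A weighted ACWR would then divide a short-window EWMA (say, 7 days) by this chronic one instead of dividing raw weekly totals, at the cost of the longer injury-free baseline the critique mentions.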
That's not even the full list of criticisms in the paper. Tim Gabbett, the University of Southern Queensland researcher who is the ACWR's main proponent, addressed some of the pushback in January. He cautioned against expecting too much from such a simple metric: training load is just one among many factors such as age, skill, and experience that determine injury risk. And the thresholds are just guidelines, not ironclad rules that should never be violated.
Personally, the ACWR sparked a sense of instant recognition when I first saw it in a journal article. Back in the 1990s and early 2000s, when I was competing seriously, I designed and printed my own training log. At the end of each week, I always updated two key numbers: the week's mileage, and the four-week running average. Those two numbers, the ingredients of the ACWR, gave me a sense of how my training was progressing relative to previous weeks, and offered me some signposts of what I might reasonably ask of my body in the week to come.
Many of the problems noted above are easy to avoid with a little common sense. I can't imagine anyone skipping their pre-race taper because they're worried it will give them a dangerous ACWR. The more fundamental question is whether a blunt measure of training stress, ignoring the myriad other factors that play into any injury, can really offer any useful predictive power.
One solution is to produce ever more sophisticated hypothetical causal models that incorporate all the complex relationships between training, biomechanics, injury history, and so on. The other solution is to lower your expectations. There is no magic threshold, no perfect sweet spot, and no guarantees about whether you will or won't get injured next week. But the ACWR is intuitive, plausible, and easy to calculate. As long as you remember the caveats listed above, it seems like a handy piece of information to keep in the back of your mind for that moment when the social distancing rules are lifted and you have the irresistible urge to go a little nuts.