Season-Long CMJ Performance Monitoring: Should We Scale to Readiness?
First and foremost, the data I will use for this post consist of real CMJ performance data (i.e., flight height, RSImod, Time to Takeoff) combined with hypothetical player readiness data. This is because I've only just thought of implementing something like this, and the Fun Friday Blog seems like the perfect outlet to share my thoughts and obtain opinions from the smart(er?) folks in the community.
Here's the situation I think many of you currently find yourselves in: You're working with an athlete population, and you have limited time and logistics support for the performance monitoring solutions you're trying to implement. So, you focus on force platform testing of the countermovement vertical jump (CMJ), because it's relatively cost-effective and easy to implement. In addition, it provides a great deal of information about desirable athletic qualities like speed, power, agility, and inter-day fatigue (Barnes et al. 2007; Cronin & Hansen 2005; Loturco et al. 2015; Watkins et al. 2017).
However, when we monitor changes over the course of a season or two (i.e., months or years), we'll probably anticipate performance increases by way of training (i.e., strength & power improvements) and on-court practice (refined movement strategies). But a general observation of my own testing approach has revealed that I've not been acknowledging or including our athletes' readiness on a given test day. This has been bouncing around in the ol' noggin lately, as I think it's reasonable to presume that there will be times when "worse" performance is not a sign of negative adaptations (i.e., training must change) but actually a sign of how tiredness has affected performance (I don't want to trigger anyone by saying fatigue...). This is especially true for collegiate student-athletes, as their schedules are jam-packed from wake-up to bedtime with class, studying, training, practice, film, travel, etc. I am curious whether anyone is accounting for this in their testing assessments? I know I'm not. But I'm thinking I should start. Here are my initial thoughts on how to begin doing so (please comment if you are doing this with some positive results).
When I analyze CMJ data from a force platform, I have two outputs. The first output is solely for me. This includes all the variables I find to have any value or potential to explain performance (the output) and strategy (I combine "strategy", "driver", and "fluffy" metrics [see Jason Lake's post on the Hawkin Dynamics Blog] into one category). So, "my" output has many, many, and far too many variables in it. In fact, I think most of the variables are like seasons 3 & 4 of Homeland. They're not that useful for the big picture, per se, and you could skip over them completely and still understand the whole show's storyline after the series finale. But, those seasons can help you understand every little detail of what's happened so you can wail a Johnny Drama "Victory" yell after winning a Homeland trivia competition.
The second output is for the coaches and athletes. It is a simple and direct reflection of performance and general strategy, as shown in Figure 1 below. While this output can answer basic questions (did the athlete show positive changes today?), with "my" output supporting more detailed assessments, the results could still be telling us the "wrong" story.
Figure 1. Bi-Weekly CMJ results for 1 athlete during a competitive basketball season.
For instance, taking the results at face value, we could conclude that this athlete demonstrated negative adaptations over the second half of the season because their jump height and explosiveness trended downward. But, that time of the season is when conference play occurred (i.e., the competition gets better) and the practices likely became more intense (presumably - I'm not there for practice or training). Given the validity and reliability of perceived exertion scores, I started thinking about athletes' perceived readiness. I've not seen perceived readiness used in sports science settings, but a quick search suggests it has merit in other areas. So, could we scale CMJ performance results to an athlete's perceived readiness on the test days to gain a better understanding of the athlete's changes over time? Let's consider the same data shown in Figure 1, but we'll add in fictitious session-to-session perceived readiness and scale the variables to it, as shown in Figure 2 below.
Figure 2. Scaled Bi-Weekly CMJ results for 1 athlete during a competitive basketball season.
The perceived readiness data are meant to reflect what I would expect throughout a season. This means I think an athlete would maintain a high level of readiness throughout the first half of the season, after which readiness starts to decrease as a result of increased training and game intensity. This might be wrong, and I recognize that, but that's not the point. The point is, we can better understand how an athlete performed towards the end of the season compared to the start of the season after considering the athlete's readiness. I mean, who really thinks readiness will be 100% all season long? So, by test session 7 (3.5 months into the season), the athlete is actually performing quite well considering their cumulative tiredness or lack of perceived readiness. Obviously, we would like to see maintenance or even improvement of jump height and explosiveness over time, but that's unlikely to happen, and we should still have a way to gauge performance.
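For anyone who wants to tinker with the idea, here's a minimal sketch of one way the readiness scaling could work. To be clear, the post doesn't prescribe a formula, so the rule below (dividing a "higher is better" metric by fractional readiness, and multiplying a "lower is better" metric like Time to Takeoff) is purely my assumption, and all of the numbers are fictitious, not the athlete's actual data.

```python
# Hypothetical readiness scaling for CMJ metrics. The scaling rule and all
# data below are illustrative assumptions, not the method from this post.

def scale_to_readiness(value, readiness_pct, higher_is_better=True):
    """Scale a CMJ metric by perceived readiness (reported 0-100).

    For 'higher is better' metrics (jump height, RSImod), low readiness
    inflates the raw score toward what full readiness might have produced.
    For 'lower is better' metrics (Time to Takeoff), the adjustment is
    reversed so reduced readiness shrinks the raw value instead.
    """
    readiness = readiness_pct / 100.0
    if higher_is_better:
        return value / readiness
    return value * readiness

# Fictitious bi-weekly jump heights (cm) with made-up readiness scores
jump_height = [38.0, 37.5, 38.2, 36.0, 35.1, 34.0, 33.5]
readiness = [100, 98, 95, 90, 85, 80, 78]

scaled = [scale_to_readiness(h, r) for h, r in zip(jump_height, readiness)]
# Session 7's raw 33.5 cm scales to ~42.9 cm: the downward raw trend looks
# much flatter once readiness is taken into account.
```

Whether division is the right transform (versus, say, z-scoring both series and comparing trends) is exactly the kind of thing I'd love feedback on; the point of the sketch is just that the scaling is a one-line operation once readiness is collected on test days.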
To summarize, I think we need to account for an athlete's perceived readiness when we conduct a CMJ test (or any other physical test) over a long period of time. I'd love to hear your thoughts on this, especially if you've got a tried and true process that has really benefited your athlete monitoring practices. After all, let's help each other help each other, right?
Thanks for tuning in this week. See you again in 7 days. May the Force be with you (still waiting for someone to argue against Mace Windu as the Jedi GOAT).