- John R. Harry, PhD, CSCS
Athlete Performance Monitoring: Resisting the Urge to Drown Yourself in Metric Selection
We've all been there. I've permanently resided there since the dawn of my career. You know what I'm talking about... It's that thought process of "more data can help me pinpoint the answer I'm seeking." It's sort of like holding onto everything you can in case you decide to embark on a fishing expedition to catch any fish possible, even though you're really just trying to catch what you need to whip up some fried catfish po'boys.
Here's how I try to avoid catching crappie when I'm seeking catfish. Okay, I'll stop with the fishing analogy (I don't even fish!). As you probably predicted, my example comes from work conducting countermovement jump (CMJ) tests using a force platform. Here's the situation (No, not that Situation - see image below): The athletes on your squad perform CMJ tests every other week throughout the season so you can track how they're progressing or (hopefully not) regressing. You have them perform 3 CMJs on a force platform, with arms akimbo, on the first training session of the bi-weekly cycle. You've controlled for time of day, minimized potential confounding effects to the best of your ability, and it's now time to select the metrics you want to monitor to understand changes in 1) performance, 2) strategy, and 3) neuromuscular function. All is good, right?

It's that metric selection part that can get us into hot water. If we select too many metrics, or have multiple metrics that are correlated with each other, what good are they? Why have many when a few do the job well enough? It's easy to blame the research consultant your team hires to conduct the tests and run your analyses, or the companies you've purchased force platform solutions from (e.g., Hawkin Dynamics, ForceDecks, etc.), because they include so many metrics in their analysis outputs. But that's a ridiculous thing to do, because they are just providing YOU with the ability to select the appropriate metrics to answer the questions you're asking, instead of forcing you to answer a limited set of questions because of restricted metric availability. It's kind of like the old saying I learned from Dr. Andy Galpin over 10 years ago when I really learned how to design resistance training programs: "there's no such thing as a bad exercise, only bad applications of an exercise." Applying that to CMJ testing, there's no such thing as a bad metric, only bad applications of that metric.
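If you want a quick gut check on redundancy before a metric ever makes your dashboard, a few lines of code will do it. Here's a minimal sketch in Python (the metric names and numbers are made up for illustration, and the 0.9 cutoff is my assumption, not a published standard) that flags metric pairs so strongly correlated that one of them is probably dead weight:

```python
import pandas as pd

# Hypothetical session-by-metric table; names and values are invented.
df = pd.DataFrame({
    "jump_height_m": [0.41, 0.43, 0.45, 0.44, 0.44, 0.42, 0.40],
    "rsi_mod":       [0.52, 0.55, 0.60, 0.59, 0.58, 0.54, 0.50],
    "braking_rfd":   [6100, 6400, 6900, 6800, 6700, 6300, 6000],
    "amort_force":   [1850, 1900, 2000, 1980, 1960, 1890, 1840],
})

# Pairwise correlations; any pair above the (assumed) cutoff is a
# candidate for trimming down to a single metric.
CUTOFF = 0.9
corr = df.corr().abs()
redundant = [
    (a, b, round(corr.loc[a, b], 2))
    for i, a in enumerate(corr.columns)
    for b in corr.columns[i + 1:]
    if corr.loc[a, b] > CUTOFF
]
print(redundant)
```

A flagged pair doesn't automatically mean one metric gets cut, but it does mean you should be able to say why you're keeping both.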
Here are my thoughts on how to avoid drowning in the pool of CMJ metrics. First, we must know what answers we are seeking prior to testing. Duh, right? Actually, this is something I've noticed in sports science settings, as practitioners often just want data but don't know, or don't think about, why they want it. Knowing the answer we are seeking will help determine the key metrics for performance, strategy, and neuromuscular function. In addition, we need to establish what we really care about. Do we care that performance is adapting in the way we want, or is it acceptable to have performance remain stable while neuromuscular function shows signs of improvement? Second, we must practice self-control. This is the hardest part for me, because part of my goal as a scientist is to define metrics and movement phases that are both mechanically sound and functionally relevant. This means it's really easy for me to get caught up in a pool of metrics because the metrics are sexy at the moment (some of you might have noticed this about me and my current fascination with the eccentric yielding phase - I own that. It's the next BIG thing in CMJ testing. You'll see). This isn't just a challenge for me, though. I've witnessed people use two metrics that are literally the same thing but have different "names". I've also seen people place all their focus on two metrics when one of them is calculated using the other. A prime example of the latter is considering the modified reactive strength index (RSImod) and jump height to be equally important "performance metrics" even though jump height is a metric within RSImod (RSImod = jump height divided by jump time)...
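To make that dependency concrete, here's the calculation in a few lines of Python (a minimal sketch; the example numbers are invented). If jump height moves and jump time holds, RSImod has no choice but to move with it:

```python
def rsi_mod(jump_height_m: float, time_to_takeoff_s: float) -> float:
    """Modified reactive strength index: jump height divided by
    jump time (i.e., time to takeoff)."""
    return jump_height_m / time_to_takeoff_s

# Hypothetical athlete: jump height improves, jump time is unchanged,
# so RSImod is dragged upward by construction.
print(rsi_mod(0.40, 0.80))  # 0.50
print(rsi_mod(0.44, 0.80))  # 0.55
```

Treating the two as independent performance metrics means double-counting the same information.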

To support my reasoning for practicing self-control, here's an exemplar set of graphs I use to monitor CMJ performance (Figure 1). The goal with this example data was to determine whether this collegiate men's basketball athlete maintained CMJ performance throughout the season. For me, CMJ performance is defined by RSImod, which is also considered to reflect the explosiveness of the jump. Other performance, strategy, and neuromuscular function metrics are also monitored throughout the season to try to explain changes in explosiveness.

Figure 1. CMJ performance, strategy, and neuro-muscular function changes over 7 test sessions.
To summarize what's shown in the figure, we've tested this one athlete 7 times, once every two weeks (14 weeks total) during a competitive season. The top row represents the metrics we can use to represent CMJ "performance", and the second, third, and fourth rows represent the "strategy" and "neuromuscular function" metrics. The red diamonds sporadically located between test sessions represent a large or meaningful change between the adjacent test sessions. After a quick look at the figure, I think it can be relatively easy to get lost when trying to understand the changes that took place and the reasons for those changes. For example, the majority of CMJ research would encourage us to care A LOT about amortization force (i.e., force at zero velocity) and braking RFD (i.e., yank during the eccentric braking phase). I'm not going to link studies here, but you can trust me. Or don't trust me. It won't hurt my feelings. Nevertheless, if we put a ton of emphasis on amortization force and/or braking RFD, we can see that those metrics changed in the same way between the same sessions. Does that mean we need to monitor both to help answer our main question? Maybe. Maybe not. Perhaps more importantly, none of the performance metrics (top row) change in accordance with those two neuromuscular function metrics. Regardless of what the literature says about these and other metrics and their influence on performance, they might not be all that useful for the question we are trying to answer.
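The figure doesn't spell out the rule behind the red diamonds, so here's one common convention you could use to reproduce that kind of flagging - the smallest worthwhile change, taken as 0.2 x the between-session standard deviation (my assumption here, not necessarily the method behind Figure 1). A minimal Python sketch with invented RSImod values:

```python
import numpy as np

# Invented RSImod values for the 7 bi-weekly test sessions.
rsi_mod = np.array([0.48, 0.52, 0.57, 0.57, 0.56, 0.52, 0.47])

# Assumed flagging rule: a between-session change bigger than the
# smallest worthwhile change (0.2 x the athlete's between-session SD).
swc = 0.2 * rsi_mod.std(ddof=1)
changes = np.diff(rsi_mod)

for s, change in enumerate(changes, start=2):
    flag = "<-- red diamond" if abs(change) > swc else ""
    print(f"session {s-1} -> {s}: {change:+.3f} {flag}")
```

Whatever rule you pick, pick it before the season starts and apply it to every metric the same way.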
A more effective approach, in my humble-to-not-so-humble opinion, would be to simplify the output. I know for some people this can be very, VERY difficult. I am the leader of that pack, since, you know, the above figure is the set of CMJ metrics I monitor... So how do we simplify? Well, for me, I would first ignore all metrics aside from the primary performance metric. As mentioned previously, the key performance metric here is explosiveness, defined by RSImod. So that's the first metric we explore. This athlete improved their explosiveness from week 1 to week 3 and from week 3 to week 5 (remember, test sessions occurred every other week). Explosiveness then remained relatively consistent until week 10 (session 6), from which point we observed consecutive decreases in explosiveness for the rest of the testing period. Any interventions we would make between those sessions, if we were even going to make them, would center on improving explosiveness and not any specific strategy or neuromuscular function metric (remember, we shouldn't abruptly change our training process because of a couple of undesired results - we usually need to stay the course for the planned duration). Once we've identified the changes in the key performance metric, we can explore how those changes took place. Although jump height is often a stand-alone performance metric, RSImod is the primary CMJ performance metric in this example (and, in my opinion, in this population), with jump height and jump time (i.e., time to takeoff) classified as secondary performance metrics because those two metrics are used to calculate RSImod (improvements in each are good, but if explosiveness does not also improve, we missed out on the training objective). Thus, in this situation, you could also think of jump height and jump time as strategy metrics (I just blew your mind, right?). Refer back to the Xzibit (X to the Z baybayyyy) meme and paragraphs 1 & 2 for why I hold this opinion.
We can see from the data that the first increase in explosiveness occurred by increasing jump height, while the second increase in explosiveness occurred by decreasing jump time. The penultimate change (i.e., decrease) in explosiveness occurred through a decrease in jump height, while the final decrease in explosiveness occurred because of the dreaded double whammy (decreased jump height and increased jump time). The strategy and neuromuscular function metrics do nothing for us in this example other than reiterate the fact that both CMJ performance changes and CMJ performance consistency can occur by way of various metric manipulations. The main things I took from this data output were that we 1) needed to keep on keepin' on with our plans for the first 10 weeks of the season and 2) needed to try to counter (expected?) decreases in performance with training modifications (added spice - not complete overhauls) that could contribute to decreases in jump time (to keep explosiveness from decreasing). Unfortunately, we weren't too successful with that at the end of the season.
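If you'd rather have that height-versus-time attribution fall out of the math than eyeball it across panels, the ratio structure of RSImod makes it easy: on a log scale, each between-session change in RSImod splits exactly into a jump-height contribution and a jump-time contribution. A sketch with invented numbers (not this athlete's actual data):

```python
import numpy as np

# Invented session values: jump height (m) and jump time (s).
jh = np.array([0.40, 0.44, 0.44, 0.44, 0.44, 0.41, 0.39])
jt = np.array([0.80, 0.80, 0.74, 0.75, 0.75, 0.76, 0.79])
rsi = jh / jt

# log(RSImod) = log(jh) - log(jt), so each change decomposes exactly.
d_rsi = np.diff(np.log(rsi))
d_height = np.diff(np.log(jh))
d_time = -np.diff(np.log(jt))  # a shorter jump time helps RSImod

for s, (r, h, t) in enumerate(zip(d_rsi, d_height, d_time), start=2):
    print(f"session {s-1} -> {s}: dlog(RSImod) = {r:+.3f} "
          f"(height part {h:+.3f}, time part {t:+.3f})")
```

Whichever part dominates tells you whether the athlete jumped higher, got quicker, or (double whammy) went backwards on both.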
From an intervention standpoint, it can be easy to see a negative change (like we saw towards the end of the season), try to make a huge change, and identify specific metrics to target through late-season training (e.g., amortization force, braking RFD, average concentric force). But we might be better suited to simplify our targets to "increase quickness" or "maintain or increase leg strength" to avoid missing the forest for the trees. That's a tough pill to swallow for some. But there's a reason that old saying, "keep it simple, stupid", has been around since dinosaurs roamed the Earth, right?
Obviously, CMJ performance changes over time are multifactorial, and we can't explain them solely with other CMJ metrics. So other things, such as time spent with the physios, individualized on-court work, training workload, etc., all need to be factored in to really understand why changes occurred. But I also think it's reasonable to claim that CMJ testing revealed some important information for this athlete. As much as I love a big ol' metric party, more is not always better when seeking to answer a specific question. The key thing for practitioners to remember is that researchers (like myself) try to pinpoint metrics that can be targeted through training to improve physical performance, in this case the CMJ. What researchers are not doing, regardless of how some papers are written, is pinpointing metrics that must be targeted through training or else you and your athlete will fail miserably and with much embarrassment.
Thanks for tuning in this week. I hope you got something useful from the post. If not, don't hold it against me. I'll try to do better next time.