This post is really just a reminder of something that all (I hope...) performance scientists and practitioners out there already know (or, perhaps, recognize but tend to forget). Let's start with the basic concepts of training theory that any purposefully-planned training program includes.
Adaptation is the Main Law of Training
It's all about adaptation. We typically want athletes to get stronger, larger (hypertrophy), more muscular (improved body composition), faster, or some combination of those goals. Adaptations only occur if the stimulus provided to an athlete is greater than the habitual (everyday) level. The figure below from Science and Practice of Strength Training is a great graphic to explain this point. Essentially, stimulating loads (those that are greater than the habitual levels) are required for physical fitness (a blanket term for whatever goal is targeted) to improve.
Delivering stimulating loads must be mapped out after considering the short-term and long-term objectives as well as logistics (training availability, on-court/pitch sessions, current skill level, etc.). This could include the use of specific exercises or collections of exercises, specific training intensities and volumes, and time under tension, among other variables.
Assessment of Adaptation
If you're a frequent (or infrequent) reader of this blog, you probably know that my team and I love to use force platforms for research and performance monitoring objectives. So, our collaborations with various Division 1 athletics programs include force platform testing, predominantly during jump-landing tests. We use jump-landing tests because the performance and strategy metrics that we can extract from the force platform data are intimately connected to critical athletic qualities, such as strength, speed, agility, recovery/readiness, and overuse injury potential.
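To make the "performance metrics from force platform data" idea concrete, here's a minimal sketch of one classic calculation: estimating countermovement jump height from a vertical ground reaction force trace via the impulse-momentum theorem. This is an illustrative toy, not our actual processing pipeline; the function name, the synthetic half-sine force pulse, and the parameter values are all invented for the example.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def jump_height_impulse(force, body_mass, fs):
    """Estimate jump height (m) from a vertical ground reaction force
    trace ending at takeoff, using the impulse-momentum theorem.
    Illustrative only - real pipelines handle weighing phases,
    filtering, and takeoff detection."""
    net_force = force - body_mass * G          # remove body weight (N)
    accel = net_force / body_mass              # F = ma  ->  a (m/s^2)
    velocity = np.cumsum(accel) / fs           # discrete integration (m/s)
    v_takeoff = velocity[-1]                   # vertical velocity at takeoff
    return v_takeoff ** 2 / (2 * G)            # projectile motion

# Synthetic example: 80 kg athlete, 0.3 s half-sine propulsive pulse
fs = 1000                                      # sampling rate (Hz)
mass = 80.0
t = np.linspace(0, 0.3, int(0.3 * fs), endpoint=False)
force = mass * G + 1200 * np.sin(np.pi * t / 0.3)  # body weight + pulse
print(round(jump_height_impulse(force, mass, fs), 3))  # roughly 0.42 m
```

The same force-time trace also yields strategy metrics (contraction time, rate of force development, reactive strength index), which is why a single jump test can speak to so many of the qualities listed above.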
Importantly, we (scientists and practitioners) need to stay grounded when we assess the effectiveness of a training program using tools like force platforms. Here are two figures providing an example of force platform results we obtained to understand a male basketball player's response to 8 weeks of purposefully planned training.
Bi-Weekly Countermovement Jump Performances
Bi-Weekly Rebound Jump Performances
At face value, it would appear that this athlete is responding to training in a negative way, given the massive decrease in explosiveness in both movements. Specifically, it could be stated that the athlete "lost" his quickness but retained his jump height capacity. If we lose sight of our long-term objective upon review of these short-term results, we could prematurely modify training and suffer some very costly consequences. Why is this?
Remembering the Global Objective
The answer to the question above centers on long-term training principles, such as Supercompensation or Fitness-Fatigue Theory. The figure below, taken from Science and Practice of Strength Training, demonstrates the strategy employed for this athlete's training. In essence, the team behind the training program and performance testing used a philosophy centered around an overloading mesocycle, which is a bi-weekly modification of the session-to-session example in the figure. So, at the start of the training cycle, we expect that this athlete will not have fully recovered from the preceding block of training at the time of each test session. The goal is to realize positive adaptations down the road, following an elongated restitution period leading up to the targeted test session during which positive adaptations are desired (the restitution period is not included in the figures above - need to keep the results of the secret sauce away from the competition).
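The Fitness-Fatigue idea can be sketched numerically with a Banister-style impulse-response model: each training load contributes a slow-decaying fitness effect and a fast-decaying (but larger) fatigue effect, so performance dips during an overloading block and supercompensates once loading stops. The coefficients and time constants below are illustrative placeholders, not values fit to this athlete or any real program.

```python
import math

def fitness_fatigue(loads, k1=1.0, k2=2.0, tau1=45.0, tau2=15.0, p0=100.0):
    """Banister-style fitness-fatigue model (illustrative parameters).
    Fitness decays slowly (tau1), fatigue decays quickly (tau2) but
    weighs more heavily (k2 > k1)."""
    perf = []
    for t in range(len(loads)):
        fitness = sum(loads[s] * math.exp(-(t - s) / tau1) for s in range(t))
        fatigue = sum(loads[s] * math.exp(-(t - s) / tau2) for s in range(t))
        perf.append(p0 + k1 * fitness - k2 * fatigue)
    return perf

# 8 weeks of daily overloading followed by a 2-week restitution period
loads = [1.0] * 56 + [0.0] * 14
p = fitness_fatigue(loads)

# Mid-block test sessions catch the athlete below baseline (the "bad"
# results), while the post-taper test exceeds anything seen in-block.
print(round(min(p[1:56]), 1), round(p[55], 1), round(p[69], 1))
```

Run it and you'll see the in-block dip followed by a post-restitution peak: exactly the pattern in the bi-weekly jump results above, and exactly why a "negative" mid-cycle test can be the plan working.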
This might all seem painfully obvious to those of you still reading. But, we've all seen a colleague or practitioner-in-training get frustrated by a few "bad" results and have a knee-jerk reaction to throw out the kitchen sink and design a new program even though the current one is actually doing its job based on the global objectives.
To summarize this bad boy, I want to be sure we all (myself included) remind ourselves of our purpose for the training program we prescribe to an athlete or the rationale for a force platform test (or any other test for that matter). Sometimes we need to make sure the program is "doing the job we planned" as opposed to always checking to ensure an athlete "keeps getting better" during times when getting better is not the session-to-session objective. An important thing I tell my students and performance-scientists-in-training is that nothing happens instantaneously (because physics), and most things require more time than we think. That truth applies to most everything we do, especially training. If we expect training to consistently return improvements from week to week, the program is most likely delivering retaining loads (see first figure above) to an athlete instead of stimulating loads, and the positive results are simply due to the athlete "getting better at the test" versus getting better both physically and at performing the test. That age-old saying, "Don't miss the forest for the trees," sure does apply to a great many things, doesn't it?
Okay, party-people, that's all for this week. Sorry it's been such a long delay since the most recent post. Summer teaching is a monstrous time-suck (albeit a rewarding one when the students learn the things we teach - including the concepts motivating me to write this post!). Hit me up if you have any specific topics you'd like me to include in this blog. I know most of you like me for my Matlabbing and probably think I'm a doofus with these other post topics, so I can post about that if the people speak it, hey Stuart?
See you next time.