In this day and age the professional sporting environment is overwhelmed by the scope and magnitude of "monitoring variables". What do we measure? How frequently do we measure? Confusing the mix even more is a plethora of new technology available now, with more coming out every day, that allows us to assess everything from HRV to CK to functional ROM and beyond.
In a recent blog post, marketing guru Seth Godin made an insightful observation about the similar proliferation of productivity apps:
"...you'd think that with all the iPad productivity apps, smartphone productivity apps, productivity blogs and techniques and discussions... that we'd be more productive as a result.
Are you more productive?
I wonder how much productivity comes from new techniques, and how much comes from merely getting sick of non-productivity and deciding to do something that matters, right now".
Here's my very simple take on setting up your monitoring and ensuring you do something that matters:
1. Use a balance of objective and subjective information
Objective data is critical to any monitoring process and must be a cornerstone of any model. Objective data, well collected, gives you reliable information. Subjective data must be treated carefully, but is also a key element. To be very clear, I define subjective data as anything where an athlete has a choice. Obviously this includes RPE and wellbeing-type questions, but in my view it also extends to assessments such as jump or strength tests. In tests of this nature the athlete must be relied upon to give maximum effort before you can be confident that what the data shows is a true reflection of neural freshness. I have seen many times the obvious failure of individuals to give 100% because of how they "feel", clearly a subjective response, hence my categorisation of these tests as subjective in nature. Subjective data does give you an insight into the psychological status of the individual, information that may be useful in helping you understand the athlete's current capacity.
2. Ensure the data collection is repeatable
As with anything based on science, ensure your methodologies are sound and easily repeatable. In order to make decisions on data you must have complete faith in the information.
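One simple way to put a number on repeatability is a test-retest check: retest the same athletes under the same conditions and calculate the typical error and coefficient of variation. The sketch below is illustrative only; the CMJ (countermovement jump) values are made up, and your own variables and retest protocol will differ.

```python
# Hypothetical test-retest reliability check for a monitoring variable.
# All numbers are invented for illustration.
from statistics import mean, stdev

def typical_error(trial1, trial2):
    """Typical error of measurement: SD of the differences / sqrt(2)."""
    diffs = [b - a for a, b in zip(trial1, trial2)]
    return stdev(diffs) / 2 ** 0.5

def cv_percent(trial1, trial2):
    """Typical error expressed as a percentage of the grand mean (CV%)."""
    grand_mean = mean(trial1 + trial2)
    return 100 * typical_error(trial1, trial2) / grand_mean

# Two CMJ testing sessions, one value per athlete (cm) - made-up numbers
day1 = [42.1, 38.5, 45.0, 40.2, 36.8]
day2 = [41.5, 39.2, 44.1, 40.9, 37.5]

print(f"Typical error: {typical_error(day1, day2):.2f} cm")
print(f"CV: {cv_percent(day1, day2):.1f}%")
```

If the day-to-day noise (the CV) is larger than the changes you hope to detect, the variable can't carry the decisions you want to hang on it.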
3. Understand the limitations of the data
Nothing is ever absolute. Understand clearly the depth of detail data offers you. Be very careful and considered when establishing how much weight you place on any one piece of data.
4. Establish thresholds for your data set
Once you have confidence in a given variable, establish well documented thresholds which can serve as "quick" alerts. By documenting a given threshold you are committing to a systematic approach to your monitoring. Leaving "grey zones" makes it harder to explain your rationale for decisions regarding an athlete to a senior or head coach.
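In practice a documented threshold can be as simple as a lookup table of cut-offs against each athlete's own baseline. The sketch below is one hypothetical way to do it; the variable names and z-score cut-offs are assumptions for illustration, not recommendations - your own internal research should set the numbers.

```python
# A minimal sketch of documented thresholds acting as "quick" alerts.
# Cut-offs are z-scores vs. the athlete's own rolling baseline
# (lower = worse for these example variables). Illustrative only.
THRESHOLDS = {
    # variable: (amber cut-off, red cut-off)
    "cmj_height": (-1.0, -2.0),
    "wellness_score": (-1.0, -1.5),
}

def alert_level(variable, value, baseline_mean, baseline_sd):
    """Return 'green', 'amber' or 'red' for one data point."""
    z = (value - baseline_mean) / baseline_sd
    amber, red = THRESHOLDS[variable]
    if z <= red:
        return "red"
    if z <= amber:
        return "amber"
    return "green"

# Example: CMJ of 38.0 cm against a 42.0 +/- 2.0 cm baseline -> z = -2.0
print(alert_level("cmj_height", 38.0, 42.0, 2.0))  # -> red
print(alert_level("cmj_height", 41.0, 42.0, 2.0))  # -> green
```

Because every cut-off is written down, there is no grey zone to argue about after the fact: a value either trips the alert or it doesn't.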
5. Define your decision making process / how you will use the data
Clearly establish in advance how you are going to use data. Talk to your skills coaches in advance so that they understand (even at a superficial level) the rationale behind your decision making process, because ultimately you will need to make decisions about training that will impact them significantly.
6. Internal research is valuable
Running internal research on your data is the most consistent way to establish and validate your procedures. Published academic research can help to an extent, but it has been my experience that the strongest research you can generate is that measured in your environment, on your people.
7. Connect the dots - Look for the patterns - recognise symptom from cause
Like S&C coaching, monitoring is both a science and an art form. By committing to a systematic approach as noted above and investing in your own internal research, you can quantify the information that dictates immediate action, and the information that on its own isn't necessarily a "call to arms" but, when linked with other data, paints a picture. There are computer systems that try to put some of this information together (I've been involved in the design of two), but for me there is a critical skill set in being able to join the dots and see the picture.
For example, an 85kg AFL running machine presents the following data:
Forward sacral torsion noted 2x in training 2 days prior - both corrected without issue
Tight left quad noted post training two days ago - treated locally, no loss of ROM or strength
Sit and Reach down left this morning
Physio reports forward sacral torsion left this morning - corrected
History restricted Right 1st MTP joint (degenerative)
Looking at the picture, the quad tightness is symptomatic of the sacral torsion, which in turn is being caused by an unload due to the degenerative MTP joint. Actions should centre on the right foot, local management of the left quad, and modification of the speed content of training until correct dorsiflexion of the toe can be achieved to even out running mechanics. Viewing the data from a holistic perspective allowed a decision that applied an appropriate training stimulus while also addressing the origin of the issue.
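The "join the dots" logic can be sketched in code: no single flag forces action, but several related flags landing inside a short window do. This is a toy sketch only; the flag names echo the example above and the dates and window length are invented assumptions.

```python
# Toy sketch of pattern recognition across individually-minor flags.
# Flag names and dates are illustrative, not real athlete data.
from datetime import date, timedelta

def linked_flags(flags, window_days=3):
    """Return the flags occurring within window_days of the latest flag."""
    latest = max(d for d, _ in flags)
    return [f for d, f in flags if latest - d <= timedelta(days=window_days)]

flags = [
    (date(2013, 5, 6), "forward sacral torsion (training)"),
    (date(2013, 5, 6), "tight left quad (post training)"),
    (date(2013, 5, 8), "sit and reach down left"),
    (date(2013, 5, 8), "forward sacral torsion (physio)"),
]

recent = linked_flags(flags)
if len(recent) >= 3:
    print("Pattern alert - review holistically:", recent)
```

A system like this can surface the cluster, but deciding that the quad is the symptom and the MTP joint is the cause still takes a human who knows the athlete.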
NB: At the same time I was testing a running asymmetry model which also supported the observation of a running imbalance. This data contributes to the case for including asymmetry data in my monitoring mix. Testing continues!
8. Know your athletes
This is undoubtedly the core tenet of being a coach. The data tells a story, but the story describes an individual. Knowing that individual, his capacity to tolerate loads, injury history and other nuances, allows you to see the patterns in the data and interpret them meaningfully.
Everybody goes overboard with data at different times. Make sure at regular intervals you step back and take a good look at what you are measuring and why. If you can create a sound rationale for a system that is effective at keeping your athletes at their very best, you are on the right path.
Yours in S&C