Sunday, May 6, 2012

Interview with Stuart Cormack


The first time I heard about Stuart Cormack was when I saw the video of him presenting at Football Science VII: the International Conference in Japan, 2011, which I posted in Playing with the Statistics (Part 1).

I immediately wanted to contact Stuart, since monitoring training loads and athletes’ reactions to them is currently my hot topic of interest (along with learning basic statistics, which, as you might have seen from my blog posts, I don’t know very well at the moment). Dan Baker was kind enough to introduce me to Stuart, and what amazed me the most was that Stuart is familiar with my blog. For me, that is a great compliment and reinforcement.

So, I wanted to pick his brain in more detail regarding the monitoring of training and athletes. When it comes to this topic, Stuart is the man to go to, since he is one of the world’s leading experts in the area.


Mladen: First off, I want to thank you for taking the time and effort to do this interview, Stuart. Second, I must admit that your work has had a profound influence on the development of the monitoring system I have in mind. Let us begin with a basic introduction – who are you, what do you do, what is your area of research, and what are your short-term and long-term goals?

Stuart: Hi Mladen, you’ve been far too kind in your introduction but it’s a pleasure to talk to you. Thanks for inviting me to be part of your blog. I’m currently a Senior Lecturer in Exercise and Sports Science at the Australian Catholic University in Melbourne, Australia. I also hold an Adjunct Senior Lecturer position at Edith Cowan University. I’ve recently moved into a full-time academic role after nearly 20 years working in elite sport. Fourteen of those years were spent working in the Australian Football League and 4 at the Australian Institute of Sport. I’ve always been a practitioner who’s tried to straddle the fence between Sports Science and Strength & Conditioning. My major interest is in using scientific evidence to optimize the training process. This ultimately led me to becoming involved in applied research, and I was lucky enough to complete my PhD at ECU under Professor Rob Newton and Associate Professor Mike McGuigan. Since then I’ve continued my research work but I’m now doing far less hands-on coaching. My “real world” involvement these days revolves around PhD supervision of a number of students who are based in elite sport environments and some consulting, including Paris Saint-Germain in the French Ligue 1. I’m also the Strength & Conditioning Coach for an Australian Judo player who has qualified for the London Olympics.

My major area of research has been in the area of training load and fatigue and the implications for performance, particularly in team sport athletes. My plan is to continue this theme and find out more about the specific mechanisms involved in an effort to develop appropriate monitoring, training, and recovery practices.

Outline of monitoring process


Mladen: In my opinion, monitoring training loads, fatigue and adaptation provides crucial feedback that allows the coach to modify the training system and individualize the training process. This process begins with data acquisition, followed by data analysis, and then using the results for decision making and action planning. Let’s cover the data acquisition part – what protocol provides the most reliability and validity? I am mostly talking about collecting subjective indicators (like sRPE, levels of fatigue/stress/motivation/soreness, and wellness questionnaires) along with some objective indicators (like HRV, reaction time, jumping performance, tapping, etc.). How do you avoid boredom (if you do it every day it gets tedious) and cheating (lying to get a better score, which influences the training effect), and how do you ensure consistency? Do you use paper, personal reports, email, iPhones/iPads? What are the pros and cons of each? Technology seems to help in this regard, so what systems do you find interesting for data acquisition, especially for subjective data?

Stuart: I agree with you completely that monitoring load, fatigue and adaptation is crucial for providing feedback and ultimately individualizing the training process. The question of which markers to use is a difficult one and I’m not even close to having all the answers in this really interesting area. It’s difficult because I think we need to move away from universally applying monitoring systems and determine which variables are important for specific environments. This applies for both objective and subjective markers. For example, hormonal response might be extremely useful in one sport and completely invalid in another. Whatever the variable, it needs to be valid and reliable. We are still learning about the expected response of many potential monitoring tools to specific performance environments, particularly in team sport. To understand these responses requires some applied research including determination of what we might expect a variable to do at a given time (e.g. 24h post-match) which then gives us the opportunity to compare the actual response to the expected response. There are many markers that are yet to be thoroughly evaluated that may provide great insight.

Having said that, I’m more and more convinced of the benefits of self-reporting. Whether it’s sRPE (and the calculation of Load and Strain), a simple Wellness questionnaire or something more involved such as the RESTQ-Sport or similar, we should never underestimate the athlete’s perception of how hard they have worked or how they feel. I’m a fan of talking to the athlete, so whilst technology can automate the data collection and analysis process, it can be detrimental to personal interaction if it’s not used appropriately. Boredom and a standardized response from the athlete are potential issues. I don’t think I have the perfect solution, but being selective with how often you use the tool or even changing the staff member who collects the information from the athletes can be enough to help get a truthful response. Appropriate statistical techniques can also highlight when someone has given a response different from what they normally would, even if the raw score they provide isn’t enough to raise concerns. Some people get concerned that the athletes will deliberately report that they are in a negative state to avoid training. I agree that this is possible, but my response is that if athletes are doing this they aren’t the ones you want on your team anyway. Hopefully with some education, the data you gather can be highly accurate.
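For reference, the Load and Strain calculations from session-RPE that Stuart mentions are usually attributed to Foster’s method: session load is sRPE multiplied by session duration, monotony is the mean daily load divided by its standard deviation, and strain is weekly load multiplied by monotony. A minimal sketch (the daily numbers below are purely hypothetical):

```python
from statistics import mean, pstdev

def session_load(rpe, duration_min):
    """Session load = sRPE (CR-10 scale) x session duration in minutes."""
    return rpe * duration_min

def weekly_summary(daily_loads):
    """Weekly load, monotony and strain from seven daily load totals.

    Monotony = mean daily load / SD of daily load;
    Strain   = weekly load x monotony (Foster's definitions).
    """
    weekly_load = sum(daily_loads)
    monotony = mean(daily_loads) / pstdev(daily_loads)
    strain = weekly_load * monotony
    return weekly_load, monotony, strain

# Hypothetical training week: RPE x minutes per day (0 = rest day)
daily = [session_load(7, 60), session_load(5, 45), 0,
         session_load(8, 75), session_load(4, 30), 0,
         session_load(6, 90)]
load, mono, strain = weekly_summary(daily)
```

Note that including rest days as zeros inflates the standard deviation and lowers monotony, which is intentional in Foster’s approach: a week with built-in easy days is less monotonous and therefore less straining at the same total load.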


It’s tempting to want to validate subjective reporting against objective biological markers, but I don’t think we should be surprised if we’re not always able to do this. A biological marker may provide an indication of the status of a very discrete system or potentially multiple systems, whereas self-reporting will reflect a much more global perception or status. A poor correlation between the two doesn’t necessarily invalidate the self-reporting measure; it could just be that they are measuring different things. It could be more important that the measure is able to reflect things like previous training load and ideally give some indication of the ability to perform in an upcoming event.

It seems that a mixed-methods approach, where a combination of objective and subjective markers is utilized, may give the fullest picture of the status of the athlete. Quite understandably, there’s interest in predictive equations that model an outcome based on the relative contributions of numerous indicators. There’s no doubt the search for a single marker will continue.



Mladen: Now that we have the data, we need to analyze it and visualize it for the coaches. How do you develop a baseline for the team and the individual, and how do you deal with the variability of individual responses and the different demands of different positions? For example, do you compare sRPE at the team level (and/or by position), or against 4-week rolling averages for each person individually, with the goal of identifying outliers and red-flagging them? Also, which indicators do you analyze by comparing to the team or position played (inter-individual variability), which do you compare to the individual themselves (intra-individual variability), and why?

Stuart: Establishing a baseline value is probably one of the most difficult aspects of the process and unfortunately it’s far from an exact science. If you’re going to make comparisons to this time point in an effort to determine a meaningful change, it needs to be a valid comparison. The baseline you use is likely to depend on the comparison you want to make. For example, if you want to compare weekly competition-phase responses to a baseline, it might be important that the baseline represents a relatively fatigue-free time but is also from a competitive period. In this case, using a baseline calculated as an average of scores collected during a pre-season cup phase or similar can work well. If you compare to a completely fatigue-free and unrepresentative baseline, you can end up with every score being a "red flag". Ultimately, the aim is to establish a stable value for each individual athlete that can then serve as a comparison.

It’s arguable that the most important comparison, regardless of variable, is intra-individual, although if an individual is moving in a different direction to the majority of the group, that can be very important. A thorough system will involve comparisons on many levels, including acute and chronic responses. Identifying appropriate levels at which to "red-flag" takes some work, and this again is not clear cut. However, if you have calculated reliability values (e.g. CV%), you could consider determining the importance of a change in scores relative to the error in the test. In simple terms, a change > CV% suggests a biological change. For subjective measures we’ve had some success using a modified Standard Difference Score, which is effectively a Z score of the change between two time points.
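One way to operationalize the intra-individual comparison Stuart describes is to score each new observation against the athlete’s own rolling baseline and flag large standardized deviations. A minimal sketch, assuming a 28-day window and a z-score threshold of 1.5 (both illustrative choices, not recommendations from the interview):

```python
from statistics import mean, stdev

def red_flags(scores, window=28, z_threshold=1.5):
    """Flag days where a score deviates from the athlete's own rolling
    baseline by more than z_threshold standard deviations.

    scores: chronological daily values for one athlete (e.g. a wellness
    rating or daily load). Returns a list of (day_index, z_score) tuples.
    """
    flags = []
    for i in range(window, len(scores)):
        baseline = scores[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        if sd == 0:
            # A perfectly flat baseline gives no scale for a z-score
            continue
        z = (scores[i] - mu) / sd
        if abs(z) > z_threshold:
            flags.append((i, round(z, 2)))
    return flags

# Hypothetical example: 28 stable days, then a spike on day 29
history = [4, 5, 6] * 9 + [5]
flags = red_flags(history + [9])
```

In practice the threshold would be tied to the measure’s known error (e.g. its CV%), per Stuart’s point that a change larger than the test error suggests a real biological change rather than noise.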

Think twice before you red-flag an athlete

Mladen: Ok, now that we have identified outliers, what are the action plans? How do you incorporate them into team settings and practices? What are our options, actually? Is the athlete going to skip the game? I was talking to Dan Baker and he basically told me that they don’t utilize subjective ratings any more, and that the guys who are under-recovered and/or injured actually do grueling and strenuous workouts in a non-specific way (cross-training). This sends two messages in my opinion – first, most injuries happen on the field, not on the bike/cross-trainer/rower, and second, recovery is the player’s responsibility (as long as we provide the means for it). Most of the guys go out partying and drinking, and then we do what? Reduce their training loads? How do we deal with a lack of professionalism in this case? Fining players?

Stuart: Implementing an appropriate action requires planning. It needs to involve coaches, sports science/strength & conditioning staff and medical/physiotherapy personnel. The starting point needs to be a periodised training approach that people are going to try and follow regardless of winning and losing. It shouldn’t be inflexible, but reactive modifications to training load are an almost certain way of creating an inappropriate imbalance between training dose and recovery (this could include potential "under-training" as well). It should allow us to train very hard at the right times. Unfortunately, an effective plan is likely to take a lot of time to put together and this may need to happen on a daily basis. The most important part of implementing an action plan is to consider the individual circumstances. With enough commitment this can be done within the context of team training sessions. Whilst there may be occasions where under-recovery is due to a player’s off-field behaviors, we shouldn’t automatically assume that someone who is reporting as fatigued has been out drinking. It could just be that they are entering a stage of non-functional overreaching. In this case, even though a bike session may not cause injury, it may negatively contribute to a genuine training-recovery imbalance. This is a critical example of where dealing with the individual can tease out the important issues. It probably highlights the importance of education and a consistent approach to the way training is planned and conducted including the performance standards expected of everyone involved.

 

Mladen: Now that we have covered methodology issues, let’s deal with some of the techniques as well. GSR (galvanic skin response) is now wireless and unobtrusive, do you anticipate research on how it correlates with cortisol in athletic training and competition?

Stuart: In general terms I think the use of various micro-technologies to determine both the internal response and external activity profile will lead us down a very exciting path. We are now in a position to measure very specific things in real-time outside the lab. Whenever a technological development allows us to measure things more effectively in the sporting environment, we can anticipate research aimed at validation and exploring links with performance. In this example, the ability to measure skin conductance and how it is influenced by the sympathetic nervous system in response to training and competition has the potential to be a useful monitoring tool. However, much work will need to be done, and we may find, as we have with other measures of autonomic nervous system activity, that the applications are not universal. Validation and reliability assessment are critical, and we need to remember that just because something can be measured doesn’t mean it’s useful.


Mladen: Saliva is a popular way to get cortisol data. Given that cortisol has a specific circadian rhythm, how would you use it with a compliant athlete in team sport looking to monitor overreaching, without testosterone? What would the frequency be over a one-week period? If one was not using the T:C ratio, is this possible, or would you need to use both?

Stuart: Using salivary cortisol is a potentially useful tool, but once again its value is probably environment specific. The assays take some skill to perform and they are relatively time consuming and expensive. However, newly developed analysis methods can make this much faster and easier. Taking circadian variation into account is very important, so the collection protocols, including standardizing the time of collection, are critical. The suggestion is that the T:C ratio is a representation of anabolic:catabolic balance, but of course collecting both T and C requires double the time and expense to analyze the samples. In one of our published studies we showed a small but practically important correlation between C and performance in elite Australian Football. This was combined with a very predictable pattern of response where players returned to baseline values by 96 h post match. Given this, we were able to use C in isolation as a marker of the influence of a match on hormonal status, but I wouldn’t suggest for a moment that this would be appropriate in all cases. In the past we’ve utilized C on a weekly basis and collected the sample at a standardized point each week. The most important component was getting the results the same day. There’s probably little point if the samples are a long way apart or the results take so long to determine that the opportunity to use them to help decision making has passed. Measuring hormonal status is an interesting area, but it requires good standardized procedures and an in-depth analysis of the usefulness (including cost-benefit) in each specific case.


Mladen: For the last question what equipment would you get and how would you set up the monitoring system with low and high budget for a team sport?

Stuart: This is a really important question. No matter how well funded and well resourced we are in a sporting environment, there’s always the question of what gives us the greatest return for effort and expense. Monitoring training load and fatigue is definitely viewed by a lot of people at the moment as something worth spending time and effort on. However, there are probably a lot of things that get measured that aren’t particularly valid or reliable. This can leave us in a situation where we have an enormous amount of data that we’re not sure how to interpret and, most importantly, that doesn’t impact the training process. The evidence seems to be pointing towards a mixed-methods approach, and there’s certainly no suggestion that a good system requires big expenditure. It’s arguably more about a consistent and systematic approach. The only cost of utilizing valid self-reporting measures is staff time, and I’d always invest in human resources above anything else. I’m always loath to suggest equipment purchases because it can be taken as suggesting that a particular variable is the best thing to measure in all environments. If we’re talking about investment, I’d suggest that in addition to hiring good staff, investing time in understanding the underlying mechanisms involved and the appropriate statistical techniques, as well as conducting some in-house applied research to help determine what to use, would be time and money well spent.

 
Mladen: Thank you very much for these great insights, Stuart. I wish you all the best in your future projects.

Stuart: It’s my pleasure Mladen and thanks again for the invitation. I’m looking forward to continuing to follow your blog. Keep up the good work with all your interesting posts and the good evidence you provide combined with the art of coaching. All the best.
