## Strength Card Builder 1.1

Because of the interest, I decided to modify and update the Excel workbook I have been using to create strength programs for groups and individuals.

In the video below you can see all the features of this workbook. One of the key features is the ability to create GROUP workout cards based on players' 1RMs in core/key exercises, along with writing your own set & rep schemes.

This product is no longer available. Please look for the new version.

## Optimizing groups for Small Sided Games (SSGs)

In the following video I will show you how to:

• Calculate normal scores for any test/statistic (percentiles & z-scores) and the difference between them
• Calculate composite scores using weighting factors and normal scores
• Use one great Excel tool – SOLVER – to provide an optimized solution for groups

The rationale behind this approach is that we want groups of players that are balanced in some way (we decide on the parameters). In the video below I have used skill rating and MAS score, but you can easily expand that to include daily wellness, a fatigue score, or anything else you want.

The idea is that balanced groups might produce a higher level of competition at a more even quality level. This might be an interesting research project: do balanced vs. unbalanced groups show different performance readings (GPS, iTRIMP, etc.), and which option might be better for certain goals?
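As a rough illustration of the idea outside of Excel, here is a minimal Python sketch with made-up player data: it standardizes skill rating and MAS into z-scores, weights them into a composite score, and then approximates SOLVER's balanced-groups solution with a simple "snake draft". The names, values and weights are all assumptions for demonstration only.

```python
from statistics import mean, stdev

# Hypothetical player data: name -> (skill rating, MAS in km/h)
players = {
    "A": (7.5, 16.8), "B": (6.0, 15.2), "C": (8.5, 17.5), "D": (5.5, 14.9),
    "E": (7.0, 16.1), "F": (6.5, 15.8), "G": (9.0, 17.9), "H": (5.0, 14.5),
}

def z_scores(values):
    """Standardize raw scores to z-scores (normal scores)."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

names = list(players)
skill_z = z_scores([players[n][0] for n in names])
mas_z = z_scores([players[n][1] for n in names])

# Composite score: weighted sum of z-scores (weights are assumptions)
composite = {n: 0.6 * sk + 0.4 * ms
             for n, sk, ms in zip(names, skill_z, mas_z)}

# A "snake draft" over the composite ranking approximates what SOLVER
# finds by optimization: rank players, then deal them out 1-2-2-1, ...
ranked = sorted(names, key=composite.get, reverse=True)
groups = [[], []]
order = [0, 1, 1, 0]  # snake pattern for two groups
for i, n in enumerate(ranked):
    groups[order[i % 4]].append(n)

for g in groups:
    print(g, round(mean(composite[n] for n in g), 2))
```

SOLVER can handle more groups and extra constraints; the snake draft is just the quickest way to see why balancing on a composite score works.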

You can download the workbook HERE (I have updated it a bit compared to the one in the video).

## Individual Qualities vs Positional Demands

One frequent question I get from coaches, and try to resolve myself, is whether conditioning should be based on individual characteristics (MAS, YOYO, VMAX, etc.) OR on positional demands.

Regarding the positional demands: how do we quantify them, and what is the worthwhile difference (SWC) between positions that warrants a different training prescription? Most of the studies focus on p values instead of SWC and TE. Speaking of TE (Typical Error), there is a huge %CV (coefficient of variation) in game-related data, which means that distances covered can vary by as much as 30% from game to game. This further complicates applying positional differences from a practical and physical preparation standpoint.
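To make the SWC/TE/%CV idea concrete, here is a hedged Python sketch with made-up distance data: the game-to-game %CV for one player, and a squad-based SWC using Hopkins' common convention of 0.2 × the between-athlete SD. All numbers are hypothetical.

```python
from statistics import mean, stdev

# Hypothetical total distances (m) covered by ONE player across games
games = [10200, 9400, 11100, 8700, 10900, 9800]

# Game-to-game variability as a coefficient of variation
cv_pct = stdev(games) / mean(games) * 100

# Hypothetical squad values for the same metric (one mean per player)
squad = [10100, 9300, 10800, 9900, 10500, 9600, 10200, 9950]

# Smallest worthwhile change: 0.2 x between-athlete SD (Hopkins' default)
swc = 0.2 * stdev(squad)

print(f"CV = {cv_pct:.1f}%, SWC = {swc:.0f} m")
```

If the %CV dwarfs the SWC (as it does here), a "positional difference" seen in one game is mostly noise, which is exactly the practical problem raised above.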

Ok, suppose we know the positional demands for our level of play – should we focus on where we are in these demands or where we want to be (play)?

Suppose that we don't take individual qualities into consideration and we impose positional demands on the players based on where we want to be. How can we be certain they are not being loaded too much or too little?

In an ideal world a player's physical qualities should coincide with his positional demands. Look at this as the law of supply and demand (supply~demand) or potential~expression. Sometimes this might not be the case, because players don't play a certain position based solely on their physical qualities, but also on technical, decision-making, mental and other qualities.

I believe in a complementary approach. I also love the approach by Carlo Buzzichelli –

INTENSITY: Individual Quality
VOLUME: Position Demands
ORGANIZATION: Positional Demands
WORK:REST: Positional Demands and Individual Quality
SITUATION/POSITION: Positional Demands

Again, certain drills might be more 'suited' to certain positions. In short, if FBs run longer distances than FWs (see GPS data on the duration and length of efforts in certain velocity zones), then FWs might perform conditioning in shuttles and FBs in straight lines [for example]. Another difference might be the volume – all positions run at a certain %MAS, but MFs might do an extra set.

Again, this is not a static picture – the emphasis might shift over time and over the pre-season/season.

When it comes to blending technique with conditioning [e.g. conditioning with finishing for FWs, conditioning with heading the ball clear for CDs, etc.] I believe positional demands will dominate [and not the player's technical/tactical qualities – yet again, it depends].

Another way to look at this [dichotomous] problem is to use SSGs and games overall to put a certain individual into the most position-specific context. Thus, if this is solved with practices, is there a need to do it with conditioning too? And why are we splitting these two anyway? [see my presentation on Periodization Confusion]

How much specific work is too much? When does specific work fail to provide overload and adaptation? When does the adaptation/overload fail to transfer to specific work?

Unfortunately I don't have THE answer, except to state that coaches should learn to reconcile and juggle these two dichotomies [I solved it by using Squiggle Sense] and not lean too much on pre-made solutions and philosophies. What I mean by the latter is that one needs to take the complexity of biological adaptation and skill acquisition of each individual into consideration, instead of pursuing a rigid approach. The solution is smart monitoring and predictive analytics for each individual. This is a work in progress – take the empiricist/experimental stance and test your hypothesis for each individual, instead of rationalizing things based on who said what.

Unfortunately again, taking the stance of an empiricist is not easy – it demands knowing what to measure, how to analyze it, how to compare it to other measures and how to derive reliable action steps.

## Naming the HIT drills

I love the idea of naming certain drills/workouts (like Crossfit does using female names) because it is easier for the athlete to ‘personalize’ them and remember them. So, instead of saying we are going to do 30:30 intervals at 100:70% MAS, it might be easier to say “Boys, get ready for Bloody Mary”.

In the table below I have presented a couple of HIT variations, including SIT (Sprint Interval Training, a.k.a. Anaerobic Power/Capacity) and RST (Repeat Sprint Training). Think of those as tools in your toolbox – Long HIT, Short HIT, SIT, RST.

One interesting 'finding' is that by 'playing' with work:rest and active:passive ratios we can come up with different variations. This might be important boredom-wise, if nothing else.

What I was thinking of doing is naming these drills/exercises using (a) girl names, (b) cocktail/spirit names, (c) work tools, (d) guns or (e) anything else manly. If you have some ideas, feel free to put them down in the comments.

MAS stands for Maximum Aerobic Speed

Mean %MAS is the average/mean intensity of the drill, taking into account the duration and intensity of both the work and rest periods.

For passive rest I took walking and approximated it to 30% MAS (which is later used in the Mean %MAS calculation).
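The Mean %MAS calculation described above reduces to a time-weighted average of the work and rest intensities; here is a minimal sketch (assuming passive rest at 30% MAS, as stated):

```python
def mean_pct_mas(work_s, work_pct, rest_s, rest_pct=30.0):
    """Time-weighted average intensity of one work+rest cycle, in %MAS.
    Passive rest (walking) is approximated at 30% MAS, as in the text."""
    return (work_s * work_pct + rest_s * rest_pct) / (work_s + rest_s)

# 30:30 at 100% MAS with passive rest -> (30*100 + 30*30)/60 = 65% MAS
print(mean_pct_mas(30, 100, 30))      # 65.0
# The same drill with active rest at 70% MAS -> 85% MAS
print(mean_pct_mas(30, 100, 30, 70))  # 85.0
```

This is also why playing with active vs. passive rest produces genuinely different drills: the mean intensity can shift by 20% MAS with the same work bouts.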

Also, make sure to read the best review paper on HIT, by Martin Buchheit and Paul Laursen.

## Saturday, August 10, 2013

### Analyzing Time-Series of Individual Data #2: Using Harmful/Trivial/Beneficial chances


Just a quick update – after a comment by John Fitzpatrick (@JFitz138) on the use of Will Hopkins’s approach of calculating chances I decided to give it a shot.

For the Typical Error (used together with the SWC to calculate chances) I used the SD of the Rolling Average.

To calculate chances I coded a simple VBA function that uses the NORM.DIST function of Excel.

To calculate chances, one must assume that the data points in the rolling average are normally distributed around their mean with a given SD (and that might not be the case!). Here is my sketch (see the papers by Hopkins for more):

 My wonderful drawing skills
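For readers who prefer code over VBA, the same chance calculation can be sketched in Python with the standard library's NormalDist (my workbook uses Excel's NORM.DIST; the baseline, SWC, rolling mean and TE values below are made up for illustration):

```python
from statistics import NormalDist

def chances(rolling_mean, typical_error, baseline, swc):
    """Chance (%) that the true change is beneficial/trivial/harmful.
    Assumes the rolling mean is normally distributed around the true
    value with SD = typical_error (Hopkins-style magnitude-based chances)."""
    dist = NormalDist(mu=rolling_mean, sigma=typical_error)
    harmful = dist.cdf(baseline - swc) * 100          # below baseline - SWC
    beneficial = (1 - dist.cdf(baseline + swc)) * 100  # above baseline + SWC
    trivial = 100 - harmful - beneficial               # in between
    return beneficial, trivial, harmful

# Hypothetical HRV example: baseline 80 ms, SWC 3 ms,
# 7-day rolling mean 84 ms, TE (rolling SD) 4 ms
b, t, h = chances(84, 4, 80, 3)
print(f"{b:.0f}% beneficial / {t:.0f}% trivial / {h:.0f}% harmful")
```

This reproduces the logic in the sketch: the three chances are just the areas of the normal curve below, between and above the SWC boundaries.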

An even simpler approach might involve purely counting the days (data points) above, within and below the baseline and SWC, especially for non-normally distributed data sets (someone correct me if I said something stupid). Using a box-and-whisker plot and interquartile ranges might also be a possible solution.

Anyway, here is a short video of the workbook, and below you can find the updated download link.

Click HERE to download Excel workbook.

## Friday, August 9, 2013

### Analyzing Time-Series of Individual Data

Analyzing Time-Series of Individual Data

If you haven’t been living in a cave for the last couple of years, you have definitely noticed an increase in data collection, data mining and visualization: HRV tracking, jump output tracking, estimating 1RMs from load-velocity data, game statistics, performance analysis, various testing statistics, body weight, RunKeeper, RunTracker, and all that quantified-self movement.

Collecting data is getting easier and easier – even without one being aware of it. What is still lagging behind is making sense of all that data. For example, you might have been collecting HRV or resting HR every morning for the last couple of months, or even better, training load using session RPE and duration. How do you analyze this? How do you visualize this data? How do you make sense of it? How much does a certain statistic need to drop to indicate a worthwhile change and real-world effect?

Unfortunately, the statistics we learned in school don’t help us much. There is too much reliance on the Fisherian approach (using the p value) and too much use of statistical significance, which doesn’t mean much to a coach. Even worse, lay people with no formal education in inferential statistics misinterpret the term statistical significance as real-world significance, instead of a low chance [p<0.05, p<0.01, p<0.001, etc.] of acquiring such an extreme score if the null hypothesis is true. If this sounds confusing – it is, and unfortunately, according to Geoff Cumming (author of the excellent book Understanding the New Statistics), even researchers don’t get these concepts right.

If you are interested in these subjects you should definitely read everything ever written by Will Hopkins – and here is a quick-start presentation one needs to read to understand the important concepts of magnitude-based statistics, SWC (Smallest Worthwhile Change) and TE (Typical Error):

A couple of great researchers, like Martin Buchheit (@mart1buch), are pushing the envelope in using magnitude-based statistics (SWC, TE and chances) – but as far as I know a lot of journal editors are still reluctant to forget about the p value.

The Dance of p values

Anyway, as coaches we are not interested in group averages and making inferences about populations (at least we shouldn’t be, if we are not pursuing a research career). We are interested in individual responses, and unfortunately we have had a lot of flawed thinking over the years – falling for the flaw of averages and assuming that all individuals will respond in a similar and predictable way. Welcome to biological complexity.

Presentation slides from WindSprint 2013

Luckily, a lot more studies are leaning toward showing inter-individual variability, quantifying it and visualizing it, rather than worrying only about group averages and whether the treatment effect is statistically significant.

What we need to do is start thinking in terms of individuals and their unique reactions. All training is a single-subject experiment, even if you work in team sports (a bit harder to implement, but still very important).

Taisuke Kinugasa (@umekinu) is one of the few researchers focusing on single-case research design and the analysis of single-subject time series. If you are wondering what single-subject time series are – it is all that data you collect on yourself (quantified self), like HRV.

Speaking of HRV, recent papers co-authored by Martin Buchheit and other great researchers brought to light some very applicable tips for coaches to use on a daily basis. Part of that applicability is using SWC and TE (progressive statistics, the magnitude-based approach) and single-case design (in some papers).

What they showed is that using either weekly averages or rolling 7-day averages “appears to be superior method for evaluating positive adaption to training compared with assessing its value on a single isolated day”.

I have written about rolling averages and Z-scores in evaluating wellness data HERE, so I won’t go into too much detail.

Another interesting approach was to estimate a BASELINE for each athlete and the SWC of that baseline. The researchers did this by taking the first two weeks of the intervention as the baseline. This baseline and its SWC (usually 0.3 to 0.5 of the intra-individual SD) are then used to give ‘context’ to the 7-day rolling averages.

Sometimes this approach is used in sports by taking a certain period of the year as the baseline. Another option is to have a ‘rolling’ baseline as well, over a longer time frame than the 7-day rolling average. Again, there are pros and cons to each approach, and analyzing time series is more an art than a science. I am not sure there is a single right way to go about it.

The idea is to get the baseline and SWC, and then use the rolling average and TE (it is beyond me how this should be calculated, except by using the rolling 7-day SD) to get the chances of beneficial/trivial/harmful changes (see the links above from Will Hopkins).

The simplest approach might be to use the percent change between the last score and the rolling average (or a longer baseline). Unfortunately this approach doesn’t take individual variability into consideration (see more HERE).

Another approach that does take this into account is the daily Z-score – the number of rolling 7-day SDs by which the last score differs from the rolling average [Z-Score = (Last_Score – Rolling_AVG) / Rolling_SD]. I believe this is the approach behind the iThlete HRV coding system. If you are outside your normal variability, you get a flag.
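Here is a small Python sketch of that daily Z-score, using hypothetical HRV values; the 1.5 SD flagging threshold is my assumption for illustration, not iThlete's actual rule:

```python
from statistics import mean, stdev

def daily_z(scores, window=7):
    """Z-score of the latest score vs the rolling window preceding it:
    (Last_Score - Rolling_AVG) / Rolling_SD."""
    window_scores = scores[-(window + 1):-1]  # the 7 days before today
    avg, sd = mean(window_scores), stdev(window_scores)
    return (scores[-1] - avg) / sd

# Hypothetical morning HRV (ms); the last value is today's score
hrv = [78, 81, 80, 83, 79, 82, 80, 71]
z = daily_z(hrv)
flag = "flag" if abs(z) > 1.5 else "normal"  # threshold is an assumption
print(f"z = {z:.2f} -> {flag}")
```

Today's 71 ms sits many rolling SDs below the 7-day average, so it gets flagged even though the raw drop looks modest – that is exactly what percent change alone would miss.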

What we want to achieve with all these approaches is ‘flags’ – what is a normal score and what is abnormal. Again, this is more art than science, but I believe the right analysis is a must – one just needs to put it in the right context.

Long story short, I have created an Excel workbook that analyzes time series using some of the approaches above. I want to thank Andrew Flatt (@andrew_flatt) for providing me with his HRV data, and Andrew Murray (not the tennis player – @cudgie) for giving me the idea of using effect sizes to compare the baseline and the rolling average (same as the daily Z-score).

Here is the video of me demonstrating the software and below you can find a link for downloading the Excel workbook.

Click HERE to download Excel workbook

## Some great findings and ideas from velocity-based strength training

I have been thinking more and more about velocity-based lifting recently as a method of prescribing load and volume for an individual. I have written a couple of blog posts and an article on this topic that you might want to read first:

The mentioned links should cover the bases. What I want to do now is to discuss possible applications and some interesting findings.

What the researchers did was have subjects perform the Bench Press and Parallel Squat to failure with 60%, 65%, 70% and 75% of 1RM, while trying to perform each rep as fast as possible.

These graphs are very interesting and I will get back to them, but first I want to convey some of my ‘insights’ from using velocity measurement in the gym over the last year or so.

Here is the load-velocity table for my ATG pause back squat and pause bench press, done somewhere in February this year. This is based on the first rep of each load, which is usually the fastest.

If we visualize the %1RM used in the bench press and squat against the mean velocity reached, we get the following graph:

What is immediately apparent is that the slope of the curve for the bench press is steeper. We can calculate these slopes and we get -1.46 for the bench press and -0.89 for the squat. In plain language, one tends to lose more speed with increasing loads in the bench press than in the squat.

Another apparent feature is that the velocity at 1RM (we are going to call it MVT – minimal velocity threshold) is lower for the bench press (~0.1 m/s) than for the squat (~0.3 m/s). We could speculate about why this might be the case – but at the end of the day it is not really important. What is important is to remember that there is a different MVT for every exercise, and that they differ from individual to individual (not much, though).
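The slopes and MVT above come from fitting a straight line to the load-velocity points; here is a hedged sketch with made-up squat velocities (not my actual table) showing how the slope and the velocity at 1RM are estimated:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Hypothetical first-rep mean velocities (m/s) across loads (fraction of 1RM)
load = [0.50, 0.60, 0.70, 0.80, 0.90, 1.00]
squat_v = [0.95, 0.85, 0.73, 0.60, 0.45, 0.30]

slope, intercept = linear_fit(load, squat_v)
mvt_estimate = slope * 1.0 + intercept  # predicted velocity at 100% 1RM
print(f"slope = {slope:.2f} m/s per unit 1RM, MVT ~ {mvt_estimate:.2f} m/s")
```

With these invented numbers the fit lands near the ~0.3 m/s squat MVT mentioned above; with real data, one line per exercise and per athlete gives you their individual slope and MVT.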

What might be interesting to find out is whether MVT changes when someone improves his 1RM. According to the study by González-Badillo and Sánchez-Medina (Movement Velocity as a Measure of Loading Intensity in Resistance Training), it does not change over time, at least for the bench press.

This is important for a couple of reasons – we can use velocity to prescribe intensity. Even more important is that 1RM might vary from day to day due to readiness or normal variability; thus using %1RM might be misleading and fail to take into account improvements or decrements in strength. Using velocity might solve these issues, along with being auto-regulatory in nature. Long story short, instead of prescribing 5 reps with 80% 1RM, one might prescribe 5 reps with a 0.5 m/s starting speed. What is also interesting is that providing immediate feedback might increase motivation, competition, stability of performance and yield greater improvements – based on the studies by Randell et al. (study1, study2), at least for jump squats – but I believe this might be true for non-ballistic exercises as well.

Ok – this is what happens to the velocity of the first (best) rep across loads. But what happens to velocity when we repeat sub-max sets to failure? That is what the study by Izquierdo et al. sought to find out.

What we can see from the graphs (at the beginning of this article) is that the velocity across reps falls off more quickly in the bench press than in the squat. I will get back to this soon and explain why it is important.

What is VERY INTERESTING is the finding that the mean velocity at the 1RM load is pretty much the same as the mean velocity of the last rep of an nRM test. What this means is that my last rep of a 5RM is probably going to be very close to 0.3 m/s for the squat and 0.1 m/s for the bench press. How can we apply this in practical settings? Well, the closer we come to our MVT in multiple-rep sets, the closer we are to failure. According to a study I blogged about here, the closer we are to failure (indicated by loss of velocity), the higher the neuromuscular fatigue. Hence, by monitoring last-rep velocity we might produce different levels of fatigue in a given set.

I am interested to see if this prediction holds true across loads (or reps-per-set) – if I have 2 reps in the tank with a 12RM load, would the velocity be the same as when I have 2 reps in the tank with a 5RM load? Since I don’t have data for this, I had to digitize the data points from the graphs in this study. Here is what I got for the bench press

And for squat

What is interesting to note here is that the number of reps done with the same %1RM is higher in the squat than in the bench press.

What I had to do to test my hypothesis (that the velocity for the same number of reps left in the tank is similar across %1RM) was to re-organize this table and visualize it. Here is what I got for the bench press and for the squat

As you can see from the graphs, the relationship between reps left in the tank and velocity is sound (it is beyond me how to quantify magnitudes for this – my statistics knowledge is medium); the average SD for velocity across reps left in the tank is 0.02, and the average %CV is around 5%, for both the squat and the bench press.

What this means, and it is VERY INTERESTING, is that we can estimate proximity to failure based on rep speed (assuming the effort to lift fast is 100%). Not with 100% accuracy, but pretty close.

I have mentioned that velocity drops a lot faster across reps in the bench press than in the squat. This is important, since certain authors in velocity-based strength training circles recommend using a % drop as a threshold to stop a set. For example, they prescribe a starting velocity (e.g. 0.5 m/s) and a velocity drop of 10% (stop the set when velocity drops by more than 10% – in this case below 0.45 m/s) for both the bench press and the squat. Based on the data I have presented, I don’t think this is that smart (I have also tried it, on myself and on some players). It works for the bench press, but for the squat you might end up doing a lot more reps (especially if there is a bit of bouncing in the hole). Above approx. 75% 1RM the velocities for the squat are faster (see the bench vs. squat load-velocity graph), its velocity drop across reps is slower, and hence using a % is not the way to go.

The solution might be prescribing an absolute velocity stop – for example, starting at 0.45 m/s and finishing the set when velocity reaches 0.40 m/s.
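A minimal sketch of such an absolute velocity stop (the thresholds and rep velocities below are example values, not a recommendation):

```python
def run_set(rep_velocities, stop=0.40):
    """Terminate a set once mean rep velocity falls below an absolute
    stop threshold. Returns the reps actually performed."""
    done = []
    for v in rep_velocities:
        if v < stop:
            break  # velocity stop reached -> end the set here
        done.append(v)
    return done

# Hypothetical mean velocities of successive squat reps in one set
set_velocities = [0.46, 0.45, 0.43, 0.41, 0.38, 0.35]
performed = run_set(set_velocities)
print(f"{len(performed)} reps performed before the velocity stop")  # 4 reps
```

Because the threshold is absolute rather than a % of the first rep, the same rule behaves consistently whether the set starts at 0.46 or 0.50 m/s – which is the point being made against %-drop rules for the squat.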

There is still a lot of practical trial-and-error left to be done, but here are some recommendations:

• Estimate the load-velocity curve for each individual and each core lift when you do 1RM testing
• Estimate the MVT (velocity at 1RM, or at the last rep) for each individual and each core lift
• If you perform a 3-5RM test, estimate the slope of the velocity curve – this might later be used to predict proximity to failure and velocity stops (when to stop the set)
• Use the velocity associated with a certain nRM and/or %1RM (e.g. 0.5 m/s for 5RM or 85% 1RM; 0.3 m/s for 1RM or 100% 1RM) and prescribe using velocity instead of nRM or %1RM, because it is more reliable, plus you get a lot of other benefits
• To manage fatigue in the set, prescribe velocity stops (should I put a trademark on this one?) for a certain number of reps left in the tank
• Manage volume (number of sets) by time allotment (e.g. 20 min for squats); or by comparing the average velocity of the sets across reps (I need to ‘research’ this approach); or by prescribing the number of sets based on the cycle; or by allowing a certain drop in reps from set to set
• When doing multiple sets, one could stick to the weight associated with a certain velocity intensity and velocity stop (which might result in a drop in reps across sets), or one might decrease the weight to maintain the starting velocity. I am clueless as to which method to use – it might depend on the cycle, or be rotated from time to time.

So the prescription might be something along these lines:

• A 20 min time allotment for squats (or a prescribed number of sets)
• Starting velocity 0.4 m/s
• Keep that weight across sets
• Velocity stop 0.35 m/s
• If your first rep is equal to or less than 0.35 m/s, stop completely, even if time is still available

Damn, I am beginning to sound like DB Hammer.
Anyway, I urge coaches to try the velocity-based approach. Make sure to check out the best LPT system on the market today – GymAware. Data collection and analysis are a walk in the park, along with the setup.

## Friday, August 2, 2013

### Measuring external workload in Boxing using accelerometers

For the last month we have had the pleasure of using MiniMax GPS devices (two of them), which we got for three months of testing courtesy of Catapult. I am sort of amazed by the simplicity of their use and the great software that comes with them – Sprint.

I have been playing with the normal features and uses of the Catapult GPS, but my wild (or weird?) spirit won’t leave me alone, so I wanted to experiment a bit with the units and software while we have them and try something unorthodox for GPS devices.

Actually, the GPS devices by Catapult are equipped with accelerometers (among other great sensors), which are used to derive the Player Load statistic. Player Load is used to estimate non-running-based loads using the change in acceleration (across three axes). It is very useful for quantifying changes of direction, hits, feints, jumps, etc. So I decided to put the devices in indoor mode and use the Player Load feature to try to quantify boxing load.

Disclaimer: Accumulated Player Load also depends on something called ‘dwell time’ (someone correct me if I am wrong, or if this only applies to effort analysis) and various filters that are used to smooth out curves and get rid of errors (like knocking the unit, or dropping it). I decreased the ‘dwell time’ for Player Load to 0.2 s compared to the usual 1.0 s. The defaults are used because the units are meant to be worn in a ‘bra’ between the shoulder blades to represent whole-body movement, so playing with these parameters might affect the analysis. This is important, since the units are not meant to be used for hitting a heavy bag. It would also be more valid to have a couple more units that could be worn on the ankles, head gear and between the shoulder blades to capture full-body movement (or head pounding in sparring). Maybe next time.

Quantifying external work in boxing has always been difficult, so coaches have relied on notational analysis, HR data (internal work), bLa (internal work) and RPE (a subjective indicator). Using small accelerometers, one could quantify external load. This is what I tried to do (as a beta self-experiment).

So I have put two devices (named LEFT and RIGHT) on my wrists and put a bandage over:

I decided to do 9 ‘rounds’ of 2 minutes’ duration with 1 minute of rest. I did the following activities just to see the possible differences:

1. Run on treadmill at 8km/h at incline 1%
2. Run on treadmill at 12km/h at incline 1%
3. Jump Rope (easy)
4. Jump Rope (hard; every now and then 10 faster high knees)
5. Shadow Boxing (two rounds)
6. Heavy Bag (three rounds)

I put on 14 oz gloves for the heavy bag work. The first heavy bag round was light contact, 4-6 hits, nothing hard – just warming up. The second round was 1-2 very hard strikes with a nice reset and pause in between. In the third round I tried to throw hard punches in combinations and just move around like in real sparring (at least as much as my shape allowed).

Here is the picture of Player Load and HR for different rounds for LEFT arm.

On this one you can see one heavy bag period in higher zoom:

As you have guessed correctly each spike is a hit to the heavy bag.

Here is the table with some numbers pulled out (for LEFT and RIGHT arm)

This is also an example of CRT (Customized Team Report) provided by Sprint software. I have used this Excel to graph some data further.

I have summarized load for LEFT and RIGHT to get TOTAL and I did some simple descriptive statistics.

On the following picture you can see the relationship between Mean HR and Player Load for the different rounds. Please note that the correlation might be misleading, because I included running and jump rope as rounds, and that might affect the relationship.

As in the picture above, there is a relationship (but a weaker one) between Player Load and the HR Exertion Index (something like a TRIMP score). Again, each dot represents one 2-min round.

On the following pictures we can see different rounds and the HRmax, Mean HR and Player Load

 Max HR

 Mean HR

 Player Load
There are a couple of interesting insights. For Player Load, the Jump Rope rounds rank somewhat lower than they do by HR, mainly because the arms are almost stationary at the side of the body. The same goes for Running at 12 km/h. Again, Player Load in this example represents the movement of the hands.

Compared to the HR data, Heavy Bag 1-2 (hard punches in 1-2s with a longer reset and rest in between) gets a lower Player Load score for a couple of reasons: the short reset time decreases the score, or, because of dwell time and filters, the hard hits are not taken into account. As all researchers admit – we need more data.

On the following graph I have created the ratio between Player Load and Mean HR

The idea for this type of analysis comes from the cycling world (see the blog entry by Joe Friel on this). By dividing Player Load by Mean HR we get some idea of efficiency and cost (short-term, between-activity comparison) or of how athletes are adapting/improving (long-term time series, within-activity comparison).
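A quick sketch of that ratio with made-up round data (Player Load in AU, mean HR in bpm – these are not my actual session values):

```python
# Hypothetical per-round data: (round name, Player Load in AU, mean HR in bpm)
rounds = [
    ("Run 8 km/h",     28, 128),
    ("Jump Rope easy", 28, 142),
    ("Shadow Boxing",  55, 150),
    ("Heavy Bag hard", 70, 168),
]

# External work per unit of internal "cost" (Player Load / Mean HR)
ratios = {}
for name, load, hr in rounds:
    ratios[name] = load / hr
    print(f"{name:16s} PL/HR = {ratios[name]:.2f}")
```

Arm-dominant activities score high on this ratio even at a lower HR, while leg-dominant ones score low – exactly the between-activity pattern described above when the accelerometers sit on the wrists.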

Since the accelerometers are on the hands, the activities that create a lot more arm movement per unit of internal load (the penalty; mean HR) get a better score. We can use this between-activity comparison to gain some insight into the differences in external work and the internal penalty for that work. We might be able to compare individuals and see who might be more efficient as a boxer (if mean HR is expressed as %HRmax, and if we compare sparring or heavy bag data).

If we keep tracking external/internal data for one activity, we could see how the athlete is adapting (as Joe Friel did) and when he is starting to plateau (or showing a possible drop due to overtraining or detraining). This might aid in the design of the training block. Again, more research is needed.

I hope this article gave boxing, MMA, karate, kickboxing and taekwondo coaches and researchers some ideas that they might try expanding upon, start using or start experimenting with.