response to training, measured as self-reported wellness, can provide further insight into how load may be associated with injury. This study investigated the relationship between wellness variables (according to the Hooper Index) and injury occurrence.

Methods: Elite junior Australian Football players (n = 562) were followed over the 2014 season (24 weeks). The Hooper Index is an easy-to-implement wellness questionnaire covering sleep, fatigue, soreness, stress and motivation (each rated from 1 to 5, with higher values indicating better wellness). Players entered their wellness data three times a week into an online monitoring system. Injury was defined as any bodily tissue damage arising from an incident that resulted in a modified training session, missed training session or match. Injury severity was measured as the number of days of modified or missed training. A generalised estimating equation (GEE) analysis was used to model injury as a binary outcome.

Results: The injury incidence rate was 42.6 injuries per 1000 h of exposure. Median injury severity was 7.0 days (interquartile range 3.0–8.0 days). Players slept on average 8.30 ± 1.06 hours per night across the season. Motivation (3.47 ± 0.90) and sleep quality (3.76 ± 0.75) were rated the lowest, whereas stress (4.23 ± 0.84), fatigue (3.92 ± 0.81) and soreness (3.82 ± 0.87) were rated the highest. Mean soreness was 3.82 ± 0.85 for non-injured players and 3.66 ± 0.87 for injured players; mean stress was 4.39 ± 0.74 and 4.12 ± 0.87 for non-injured and injured players, respectively. Each unit increase in the soreness and stress indices (higher values indicating better wellness) was associated with reduced odds of injury (OR = 0.82, 95% CI 0.74–0.92, p = 0.001 and OR = 0.84, 95% CI 0.75–0.94, p = 0.002, respectively).

Discussion: These results indicate that the monitoring of elite junior AF players, particularly in relation to specific wellness indices, is warranted. Assessing how players are responding to their training and match loads can provide additional information on player symptoms that may help reduce injury risk.

http://dx.doi.org/10.1016/j.jsams.2017.01.064

217

Four weeks of progressive velocity throwing training may be protective against injury in collegiate-level baseball players

J. Freeston

Discipline of Exercise, Health and Performance, The University of Sydney, Australia

Introduction: Progressive velocity throwing training is a commonly utilised and effective method for improving throwing velocity in overhead throwing athletes. However, given the positive association repeatedly shown between throwing volume and injury, it may also pose a significant injury risk to athletes. This study aimed to determine the effect of throwing training on performance and markers of injury risk in a group of baseball players.

Methods: Ten collegiate-level baseball players (22.4 ± 2.1 years, 1.77 ± 0.08 m, 78.4 ± 6.2 kg) performed two throwing training sessions per week for four weeks. Each training session consisted of a general warm-up followed by 60 throws in total: 10 throws at each of six velocities corresponding to 50, 60, 70, 80, 90 and 100% of the individual's maximal throwing velocity. Participants were assessed for maximal throwing velocity (km/h), throwing accuracy (cm), shoulder internal and external rotation strength (kg) and arm soreness (Likert scale 0–10) before and after the training program.
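As an illustration of the session structure just described, the minimal sketch below (not part of the original study; the function and variable names are assumptions, and only the 6 × 10 throw structure and the 109.2 km/h pre-training group mean come from the abstract) derives the 60 per-throw velocity targets for one session from an athlete's maximal throwing velocity.

```python
# Illustrative sketch (not from the study): per-throw velocity targets for
# one progressive velocity training session, given a known maximal velocity.

INTENSITIES = [0.50, 0.60, 0.70, 0.80, 0.90, 1.00]  # fractions of max velocity
THROWS_PER_INTENSITY = 10                            # 6 x 10 = 60 throws per session

def session_targets(max_velocity_kmh: float) -> list[float]:
    """Return the 60 per-throw velocity targets (km/h) for one session."""
    return [round(max_velocity_kmh * i, 1)
            for i in INTENSITIES
            for _ in range(THROWS_PER_INTENSITY)]

# Example: an athlete at the pre-training group mean of 109.2 km/h
targets = session_targets(109.2)
print(targets[:3], "...", targets[-3:])  # [54.6, 54.6, 54.6] ... [109.2, 109.2, 109.2]
```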
Results: Maximal throwing velocity (109.2 ± 2.3 vs 112.7 ± 1.8 km/h, p < 0.05), shoulder internal rotation strength (23.1 ± 1.6 vs 27.7 ± 1.3 kg, p < 0.05) and external rotation strength (15.4 ± 0.6 vs 17.5 ± 0.7 kg, p < 0.05) all increased significantly following training. Arm soreness was significantly reduced following training (5.1 ± 0.8 vs 3.3 ± 0.8, p < 0.05). Throwing accuracy remained unchanged.

Discussion: The acute:chronic workload ratio potentially explains these findings. At program onset, the acute:chronic workload ratio was high and was accompanied by lower shoulder strength and higher arm pain values. Following four weeks of throwing training, the acute:chronic workload ratio was reduced and was accompanied by increased shoulder strength and reduced arm pain. While much of the scientific literature advocates conservative restrictions on the number of throws made by baseball players, these data suggest that regular throwing may actually be protective against subsequent injury, potentially mediated through increases in shoulder strength. This is preliminary evidence that the acute:chronic workload ratio model of injury may provide greater sensitivity to injury risk than the more conservative 'pitching restriction' model currently used throughout the baseball world, and is a stimulus for future investigations into the potential benefits associated with chronic throwing load.

http://dx.doi.org/10.1016/j.jsams.2017.01.065

218

The association between running exposure and the risk of hamstring strain injury in elite Australian footballers

J. Ruddy ∗, R. Timmins, C. Pollard, D. Opar

Australian Catholic University, Australia

Background: Hamstring strain injuries (HSIs) are the most common injury in elite Australian football. A number of modifiable risk factors have been examined previously; however, despite thorough scientific investigation, the incidence and recurrence of HSIs have not declined. Running, specifically high-speed running, is the most common mechanism of HSI in Australian football and has been closely linked to its aetiology. Therefore, the purpose of the current study was to investigate the association between running exposure, derived from global positioning systems (GPS), and future HSI risk in elite Australian footballers.

Methods: Two hundred and twenty elite male Australian footballers provided GPS data (OptimEye S5, Catapult Sports, Melbourne, Australia) from every training session and match for an eight-month period. Injury history, demographic data and prospective HSI reports, including magnetic resonance imaging confirmation, were also collected. Univariate relative risk (RR) of subsequent HSI was determined for 36 running exposure variables. Variables associated with significant increases in the RR of subsequent HSI were entered into logistic regression models, and the probability of subsequent HSI was estimated.

Results: Twenty-eight HSIs occurred. Distance covered in the previous week above 20 km/h (≥2461 m, RR = 2.6, p = 0.018) and above 24 km/h (≥594 m, RR = 3.1, p = 0.005), and week-to-week change in distance above 10 km/h (≥2865 m, RR = 2.4, p = 0.034), above 20 km/h (≥514 m, RR = 2.4, p = 0.034) and above 24 km/h (≥140 m, RR = 4.3, p = 0.001), significantly increased the risk of subsequent HSI.
The number of efforts performed in the previous week above 10 km/h (≥587, RR = 3.1, p = 0.005), above 20 km/h (≥114, RR = 3.8, p = 0.018) and above 24 km/h (≥31, RR = 3.8, p = 0.018), and week-to-week change in the number of efforts performed above 10 km/h (≥125, RR = 3.5, p = 0.001), above 20 km/h (≥26, RR = 3.5, p = 0.001) and above 24 km/h (≥8, RR = 3.5, p = 0.001), also significantly increased the risk of subsequent HSI. Logistic regression identified that the
largest increase in the probability of sustaining an HSI in the following week, based on running exposure variables, was never greater than 5%.

Discussion: Elite Australian footballers were at an elevated risk of HSI in the seven days following greater weekly running exposure. In addition, greater week-to-week changes in running exposure also increased the risk of future HSI. Whilst the aetiology of, and risk factors for, HSI still require ongoing investigation, markers of running exposure appear to contribute to the likelihood of subsequent HSI in elite Australian footballers and should be closely monitored using GPS technology.
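The weekly cut-points reported above lend themselves to simple monitoring rules. The sketch below is a hypothetical illustration, not the study's analysis pipeline: it flags player-weeks in which the previous week's distance above 24 km/h, or the week-to-week increase in that distance, exceeded the abstract's reported cut-points (≥594 m and ≥140 m). The data structure and player names are assumptions.

```python
# Illustrative sketch (data layout assumed; cut-points are the abstract's
# reported values): flag weeks in which a player's high-speed running
# exposure exceeded the distances associated with elevated HSI risk.

# Weekly distance (m) covered above 24 km/h, per player, e.g. from GPS:
weekly_dist_above_24kmh = {
    "player_a": [420.0, 610.0, 505.0],
    "player_b": [120.0, 150.0, 380.0],
}

PREV_WEEK_CUTOFF = 594.0    # m above 24 km/h in the previous week (RR = 3.1)
WEEK_CHANGE_CUTOFF = 140.0  # m week-to-week increase above 24 km/h (RR = 4.3)

def flag_weeks(distances: list[float]) -> list[str]:
    """Return a risk flag for each week after the first."""
    flags = []
    for prev, curr in zip(distances, distances[1:]):
        reasons = []
        if prev >= PREV_WEEK_CUTOFF:
            reasons.append("previous-week exposure")
        if curr - prev >= WEEK_CHANGE_CUTOFF:
            reasons.append("week-to-week increase")
        flags.append(", ".join(reasons) or "below cut-points")
    return flags

for player, dists in weekly_dist_above_24kmh.items():
    print(player, flag_weeks(dists))
```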
http://dx.doi.org/10.1016/j.jsams.2017.01.066

219

Acute:chronic workload ratio and injury risk in Australian football: A comparison of time windows and workload variables

D. Carey 1,∗, P. Blanch 1,2,3,4, K. Leong-Ong 4, K. Crossley 1, J. Crow 1,2,3,4, M. Morris 1

1 La Trobe Sport and Exercise Medicine Research Centre, La Trobe University, Australia
2 Essendon Football Club, Australia
3 School of Allied Health Sciences, Griffith University, Australia
4 SAS Analytics Innovation Lab, La Trobe University, Australia

Background: Training load monitoring for injury prevention is a common practice in professional team sport. Studies have highlighted the importance of tracking the relative amounts of short-term (acute) and long-term (chronic) training load through the acute:chronic workload ratio. However, the influence of varying the acute and chronic time periods on the ability of the workload ratio to explain injury risk has not been investigated. This study aimed to identify which combination of workload variable and acute and chronic time windows best explained injury likelihood in Australian football.

Methods: Athletes (N = 53) were recruited from a professional Australian football club and monitored over 2 seasons. Workload ratios were calculated for each player on all training and match days. Global positioning system devices and accelerometers were used to collect external workloads, and ratings of perceived exertion (RPE) quantified internal loads. Acute time windows were varied between 2 and 9 days and chronic windows between 2 and 5 weeks (336 different combinations). Non-contact injury likelihood was modelled against the workload ratio using a quadratic relationship. Percentage variance explained (R2) was used to evaluate how well each combination of parameters fit injury likelihood.

Results: Injury likelihood was considerably higher in matches than in training sessions for all workload ratios. The ratio of moderate-speed running load (18–24 km/h) using a 3-day acute and 21-day chronic time window gave the best overall fit to the injury likelihood observed in: matches (R2 = 0.79), matches and the following 2 days (R2 = 0.76), and matches and the following 5 days (R2 = 0.82). A 6-day acute time window resulted in similar model performance; other choices of acute window led to poorer fits. The shape of the risk profiles suggested injury likelihood was minimised for workload ratios in the range 0.8–1.0 and increased when athletes deviated from this region.

Discussion: Acute:chronic workload ratios, calculated on a daily basis, were able to explain the variation in match injury likelihood. Model fit results (R2 = 0.76–0.82) compared favourably to previous studies using 1-week acute and 4-week chronic loads (R2 = 0.53). External load variables, particularly moderate-speed running, provided better model fits than internal loads measured using session RPE, suggesting they may be more suited to the study population. The strength of the relationship between workload ratio and match injury likelihood was strongly moderated by the choice of acute time window, with 3- and 6-day windows giving the best results. This may reflect the particular schedule of training and competition in Australian football.

http://dx.doi.org/10.1016/j.jsams.2017.01.067
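For readers unfamiliar with the calculation underlying this abstract and the next, the sketch below shows a daily acute:chronic workload ratio with configurable time windows, as varied in the study above (acute 2–9 days, chronic 2–5 weeks). It is a minimal illustration, not the authors' code; the use of simple rolling means is an assumption, since the abstract does not specify how the averages were computed.

```python
# Minimal sketch: daily acute:chronic workload ratio with configurable
# time windows. Simple rolling means are an assumption.

def acwr(daily_loads: list[float], acute_days: int, chronic_days: int) -> list[float | None]:
    """Daily ratio of the acute-window mean load to the chronic-window mean load.

    Returns None for days without a full chronic window, or when chronic load is zero.
    """
    assert chronic_days >= acute_days, "chronic window should cover the acute window"
    ratios: list[float | None] = []
    for day in range(len(daily_loads)):
        if day + 1 < chronic_days:
            ratios.append(None)  # not enough history yet
            continue
        acute = sum(daily_loads[day + 1 - acute_days: day + 1]) / acute_days
        chronic = sum(daily_loads[day + 1 - chronic_days: day + 1]) / chronic_days
        ratios.append(acute / chronic if chronic > 0 else None)
    return ratios

# e.g. moderate-speed running distance (m) per day; 3-day acute, 21-day chronic,
# with a spike in the last three days pushing the ratio above 1.0
loads = [300.0] * 21 + [600.0, 650.0, 700.0]
print(acwr(loads, acute_days=3, chronic_days=21)[-3:])
```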
220

Predicting illness and injury risk in elite female water polo: Is there an optimal way to calculate acute–chronic workload ratio?

O. Cant 1,∗, J. Tran 1,2, M. Menaspa 1,2,3,4, M. Drew 2, P. Gastin 1

1 Centre for Sport Research, Deakin University, Australia
2 Physical Therapies, Australian Institute of Sport, Australia
3 Water Polo Australia, Australia
4 Geelong Cats Football Club, Australia

Introduction: Conceptually, acute load represents transient fatigue induced by training and is typically calculated as the sum of loads for the current week. Chronic load represents fitness adaptations and is often calculated as the average weekly load from the preceding 4 weeks. No studies have examined whether these time windows are optimal for predicting injury and illness risk. The aim of this study was to compare the accuracy of 3 different workload ratio calculations for predicting illness and injury risk in elite female water polo players.

Methods: Twenty-three athletes self-reported daily information on session-RPE loads, illness symptoms and injury throughout the 2013/2014 (model training set) and 2014/2015 (model testing set) seasons. Three different workload ratios were calculated: WR1 = 7:28 days; WR2 = 15:50 days; WR3 = 5:60 days. Mixed logistic regression models were created to estimate the probability of illness and injury occurring within three different time frames: (i) on the same day as a given workload ratio, (ii) on the same day or 1 day after, and (iii) on the same day or within 2 days after.

Results: For the 2013/2014 and 2014/2015 seasons, injury incidence was 16.3 and 13.2 injuries per 1000 exposure hours, respectively, and illness symptoms were reported on 118 and 124 occasions. Workload ratios that performed well for sensitivity (true positive rate) performed poorly for specificity (true negative rate), and vice versa. Specificity was generally highest when injury predictions were made for the same day as a given workload ratio (68.8–75.8%). Sensitivity was generally highest when injury predictions were made over the longest time frame (on the same day, or within 2 days after, a given workload ratio; 61.0–73.0%). When predicting illness events, specificity ranged from 57.3 to 71.3%. Most models were poor at detecting true positives (sensitivity: 38.5–61.7%).

Discussion: None of the workload ratios tested were able to predict injury or illness risk with both high specificity and high sensitivity. Injury prediction accuracy improved over longer time frames, agreeing with previous observations of delayed effects of load on injury risk. Practitioners should consider which aspect of model performance is most important in their setting: correct detection of true positives or true negatives. Illness risk estimates were unreliable using workload ratio data alone. While changes in