Public Health training session, Waltham Forest

Critical appraisal session – Waltham Forest Public Health team

Yesterday (17/02/2016) I ran a critical appraisal session for a team of public health strategists at Waltham Forest Town Hall. The paper that we discussed was:

Wolfenden, L. et al. (2015). Improving availability, promotion and purchase of fruit and vegetable and non sugar-sweetened drink products at community sporting clubs: a randomised trial. International Journal of Behavioral Nutrition and Physical Activity, 12:35. This is an open access paper, available via this link.

We found that the trial addressed a clearly focussed issue: it investigated whether a range of intervention strategies was effective in improving the availability, promotion and purchase of fruit, vegetables and non sugar-sweetened drinks at community sporting clubs in Australia, and if so to what degree, and whether clubs’ revenue from canteen sales was affected.

The assignment of clubs to intervention or control groups was random and was carried out by a statistician who had no further involvement in the study. It was stratified by football code (i.e. the rules the club plays under) and by geographical area, which influences sociodemographic variables, so that the groups would be more comparable. We were puzzled by the design decision to collect data from individual club members’ self-reports rather than from canteen sales records, as we were not clear how strong the incentive was for members to give inaccurate reports, given that the study seemed to be tied to clubs working towards an alcohol management accreditation that depended on healthy canteen as well as alcohol strategies.
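For anyone less familiar with stratified randomisation, here is a minimal sketch of the general idea. The club names, football codes and areas are invented purely for illustration; none of these details come from the paper.

```python
import random
from collections import defaultdict

# Illustrative sketch of stratified randomisation (invented clubs, codes and areas).
clubs = [
    {"name": "Club A", "code": "rugby league", "area": "North"},
    {"name": "Club B", "code": "rugby league", "area": "North"},
    {"name": "Club C", "code": "soccer", "area": "South"},
    {"name": "Club D", "code": "soccer", "area": "South"},
]

# Group clubs into strata defined by football code and geographical area.
strata = defaultdict(list)
for club in clubs:
    strata[(club["code"], club["area"])].append(club)

# Within each stratum, shuffle and split, so the intervention and control arms
# stay balanced on the stratification factors.
allocation = {}
for members in strata.values():
    random.shuffle(members)
    half = len(members) // 2
    for club in members[:half]:
        allocation[club["name"]] = "intervention"
    for club in members[half:]:
        allocation[club["name"]] = "control"

print(allocation)
```

Because the split happens inside each stratum, chance imbalances in football code or area between the two arms are avoided, which is exactly why the trial stratified its randomisation.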

We discussed the issue of blinding in this study at some length. Research personnel involved in post-intervention data collection were blinded to the group allocation of the football clubs, which is good. However, due to the nature of the intervention, it is not clear to what extent individual participants from the different sports clubs would have been aware of the kind of study that was going on, especially if they were visiting other clubs to compete against one another. There was some debate as to whether it is possible, or even necessary, to blind participants in an intervention such as this one.

The groups looked to be broadly similar at the start of the trial, but we felt that more data on individual members’ characteristics, such as ethnicity, would have been helpful. We also thought that there were quite big discrepancies between the control and intervention groups in the percentage of participants who were players, in participants’ gender, and in mean club revenue.

The question of whether the groups were treated equally aside from the experimental intervention again provoked some debate. It was put forward that all the attention and support that was given to the intervention clubs could be classed as part of the intervention, but the paper was not clear on which other interventions were in place as a result of this being part of a larger study.

The flow chart showing participants’ and clubs’ progress through the trial provoked some comment, as the term “intention-to-treat analysis” was used in a way that seemed questionable. The flow chart states that 576 and 567 members respectively were included in an “intention-to-treat” analysis, out of baseline samples of 705 and 689. We were confused as to what this meant and did not find a satisfactory explanation in the rest of the paper.

The results table puzzled us, as there were clearly errors in it: the numbers in the table do not match the numbers in the write-up, so we cannot be certain which figures are correct. The confidence intervals given are very wide, which is to be expected in a small study. A big limitation is the reliance on self-reporting for all outcomes, when at least club revenue and sales of fruit, vegetables and water could easily have been obtained by analysing direct sales data.
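To illustrate why small studies produce wide confidence intervals, here is a rough, purely illustrative calculation; the proportion and sample sizes are assumptions, not figures from the paper. The approximate 95% confidence interval for a proportion narrows only with the square root of the sample size.

```python
import math

# Illustrative only: half-width of an approximate 95% CI for a proportion p
# observed in a sample of size n is 1.96 * sqrt(p * (1 - p) / n).
p = 0.30  # assumed observed proportion, not taken from the paper
for n in (40, 400, 4000):
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"n = {n:4d}: 95% CI roughly {p - half_width:.2f} to {p + half_width:.2f}")
```

With a sample a hundred times larger, the interval is only ten times narrower, which is why trials of this size report wide and hard-to-interpret intervals.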

The application questions were interesting to discuss with a group of public health workers. As for the applicability of results in the local context, we pointed out that there was no data available on participants’ ethnicity or characteristics such as how seriously they take their amateur football as a sporting activity. We also weren’t sure if Australian and British amateur football are comparable as settings.

As for clinically important outcomes, we wondered whether an intervention like this could realistically be expected to have an effect on people’s general eating behaviours. The study wasn’t designed to detect this, but it would be an interesting question to investigate. It was pointed out that introducing healthy options in settings where people eat on a more regular basis, for example workplace canteens, could have a far greater effect – but it was also pointed out that this wasn’t the point of the study.

Finally, in terms of whether the benefits were worth the harms and costs, the perceived harm was a reduction in clubs’ revenue, which, despite the researchers’ claims to the contrary, was one effect of the intervention. There was no cost-benefit analysis of, for example, the staff time and training involved, or of how much the intervention cost to implement per club. So again we couldn’t say for sure whether the benefits were worth the harms and costs.

Overall, the group felt that the paper added to and fitted in with the existing evidence in the area, but that by itself it was not enough to justify starting a similar intervention in the local area.
