Stuff that occurs to me


Monday 3 August 2015

Blunt tool idea: colour-coded system for medical research papers' study design, and conclusions

Yesterday I tweeted this
I was pleasantly surprised that no-one laughed; it even got a few retweets, and a later 'bump' tweet resulted in a bit of discussion with Jon Mendel and Adam Jacobs on the practicalities.

A great deal of research is about getting your bearings - something looks interesting to study and so people chip away at it. This is fine. What's problematic is when someone reads what's essentially a compass reading pointing North-ish and announces that a paper is evidence that we're at the North Pole. I had the idea that some sort of colour scheme might use one colour to indicate that Paper X is a well-designed meta-analysis whose conclusions are reasonably firm, and a different colour for Paper Y, which reports on a small observational study that, while perfectly well designed on its own, cannot support such strong conclusions.
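To make that a bit more concrete, here's a very rough sketch of the kind of lookup I have in mind (a few lines of Python, purely illustrative - the design categories, colours and cut-offs below are placeholder assumptions of mine, not any agreed grading scheme):

# Illustrative only: map broad study designs to a 'conclusion strength' colour.
# The categories and colours are placeholders, not an agreed standard.
DESIGN_COLOURS = {
    "systematic review / meta-analysis": "green",   # firmer conclusions possible
    "randomised controlled trial": "green",
    "cohort study": "amber",
    "case-control study": "amber",
    "small observational / pilot study": "red",     # interesting, but weak evidence
    "case report / in vitro study": "red",
}

def colour_for(design):
    """Return the suggested colour badge for a paper's study design."""
    return DESIGN_COLOURS.get(design.lower(), "grey")  # grey = can't tell from the abstract

print(colour_for("Small observational / pilot study"))  # -> red

Obviously the hard part isn't the lookup table, it's getting anyone to agree on the categories and where the boundaries fall.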

My blog is littered with ideas that sound great (to me) in theory but are a bit unimplementable, and I suspect that this may be one of them - but after I managed (top-of-the-head idea mine, actual effort in making it happen was @McDawg's) to get share-buttons on PubMed I've become drunk on my own success ;)

Discussions with Jon and Adam considered whereabouts in the publication cycle this colour scheme might be employed. Authors might not be keen to have a 'lower grade' of colour, and editors might not agree on the colour scheme (think of food manufacturers and the traffic light system for fats, salt etc.). My thinking is that it would happen post-publication, if at all.

Background
Making sense of medical research abstracts involves a variety of types of knowledge: even when the abstract is telling you things, there's stuff you already need to know to interpret them. That might include physiological facts (knowing that gastric refers to the stomach and not the leg), but what's perhaps more likely to trip someone up is the specialist background knowledge needed to judge whether the study's conclusions - or a newspaper's conclusions - can be drawn from the method.

A simple example is a study that gives 500 people a pill and measures what happens. This doesn't really confirm that any effects were due to the pill. You probably need a control group, and it might be helpful to randomly assign trial participants into the 'get pill' and 'not get pill' groups. The plan is to try to compare two similar groups and make the presence or absence of the pill the only difference.
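As a tiny, purely illustrative sketch of what 'randomly assign' means in practice (Python again; the numbers and names are made up by me):

import random

# Purely illustrative: split 500 hypothetical participants into two similar
# groups at random, so that getting the pill or not is (we hope) the only
# systematic difference between them.
participants = [f"person_{i}" for i in range(500)]
random.shuffle(participants)

get_pill = participants[:250]       # treatment group
not_get_pill = participants[250:]   # control group

print(len(get_pill), len(not_get_pill))  # 250 250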

My experience of people not knowing enough about study design (and the conclusions that can reasonably be drawn) comes from several domains.

1. People who have a health condition and want to find out about the latest research. 
These are very motivated people who often do learn a lot about research methods, but I've spoken to a lot of people (in my former capacity as science info officer at a health charity) who floundered a bit. Similarly, not everyone working in a health charity is confident about trial design (I nearly always had to look stuff up myself!) so a few pointers might be helpful here. I've expanded on this in a much earlier post: Might #AcademicSpring change the way in which journal articles (esp medical) are written? (11 April 2012)

2. Newspaper reports
#NotAllNewspaperReports of course, and sometimes the text is fine but the headline lets it down. However, there are plenty of newspaper reports that imply something is more certain than it is, leaving people confused and misinformed.

3. Homeopathy advocates
Anyone who keeps an eye on the #homeopathy hashtag will see supporters of homeopathy adding links to various PubMed abstracts to their tweets and stating or implying that the paper proves homeopathy is not a load of twaddle. These are often very small studies, with no control group or insufficient information about what other treatments people were having alongside. The abstracts are not strong enough to support some of the conclusions advocates make for them.




