The saying goes something like "87% of statistics are made up on the spot," but I wonder if we would be better off if that were true. Take this statistic, for example:
[In "toxic body burden" research performed in 2003, medical scientists at the Mount Sinai School of Medicine found an average of fifty or more toxic chemicals in the bloodstream and urine samples of nine volunteers, most of whom led normal or even environmentally conscious lives.]*
Ironically, this book warns us of the dangers of all the corporations relentlessly pushing their deceitful messages on us to make us buy their bullshit.
Still, let me avoid digression and get back to my main point. This statistic is 100% useless. Why?
1 Sample size: 9 people.
This leaves me speechless. I, I mean, How, or gahhh...
2 Control group: None.
The paragraph goes on to blame these toxins on manufacturers' pollution, but without a control group that isn't exposed to the pollution, these readings are useless. At the very least some sort of context is needed. What are you defining as a poison? At what levels were these poisons present? For all I know you are registering 0.0001 ppm of caffeine as a toxin.
3 Spread: lol
To get anything meaningful you need a good distribution of samples. Nine volunteers at a college is a joke, and a very poor one at that.
4 Clearly incompetent: Yes
But the absurdity doesn't stop there. Looking at how poorly this "study" was run, I wouldn't be surprised if they contaminated their samples, used faulty equipment, or just made stuff up.
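To put a number on the sample-size complaint above, here's a minimal sketch of the uncertainty you get from nine data points. The "chemical counts" below are entirely made up for illustration (the book reports no raw data); the point is how wide a 95% confidence interval stays when n = 9.

```python
import math
import statistics

# Hypothetical per-volunteer toxic chemical counts -- invented numbers,
# NOT from the study, just to show the arithmetic.
counts = [48, 55, 61, 44, 52, 70, 39, 58, 50]

n = len(counts)
mean = statistics.mean(counts)
sd = statistics.stdev(counts)   # sample standard deviation
t_crit = 2.306                  # t critical value for 95% CI, df = 8
margin = t_crit * sd / math.sqrt(n)

print(f"mean = {mean:.1f}, 95% CI = {mean - margin:.1f} .. {mean + margin:.1f}")
# prints: mean = 53.0, 95% CI = 45.8 .. 60.2
```

Even with these fairly tame made-up numbers, the interval spans about ±14% of the mean, and that's before asking whether nine volunteers were remotely representative of anyone.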
Not all statistics are this obviously flawed, though, and many studies deal with complex subjects where most of us wouldn't know the correct testing methodology. There are ways to check whether a statistic is faulty, but sources often aren't cited, and even when they are it can be a lot of work to look them up and evaluate them.
*(Affluenza: The All-Consuming Epidemic, second edition, p. 103)