Trawling The Brain – Science News

Earlier this year, I had a belly laugh reading about the fMRI of the dead fish. Here is a long article from Science News about fMRI as a tool for looking into the brain: Trawling The Brain – Science News.


More on Type M errors in statistical analyses

A bit earlier, I was intrigued by a blog post by Columbia Statistics and Political Science professor Andrew Gelman about “Type M” errors in statistical analyses (link). A Type M error is an overestimation of the strength of the relationship between two variables, and such an error is caused by having too small a sample to draw upon.

I can try to explain this to you now, because I have now read “Of Beauty, Sex and Power” by Andrew Gelman and David Weakliem (American Scientist, Volume 97, 310-316, 2009). I found the text of the article by following a link in the Gelman post I quoted earlier. I think I now understand a little of what’s going on here, and I really enjoyed reading the article.

Suppose there are two variables I care to study with an eye to whether they are related. Perhaps I have a theory, based on a hypothesis from evolutionary psychology, that “Beautiful parents have more daughters”. (In fact, Gelman and Weakliem wrote their article after being prompted by a paper with this very title, and some other papers by the same author, published in the prestigious Journal of Theoretical Biology.) Let’s call these variables X and Y (behold the poverty of my imagination).

Let’s also suppose that there is in fact a relationship between these variables, but one very small in magnitude. As a researcher, I do not know this relationship, but I want to discover it and make my name based on the discovery. What do I do then? I go after data sets that contain variables X and Y and try some statistical estimation techniques, looking for a number that indicates how strongly the variables are related. Classical statistical methodology tells me to estimate not only that number, but also an interval around my estimate that gives an idea of the error of my estimation. This is called a “confidence interval”. (Gelman and Weakliem also explain how this argument goes if I were to use Bayesian estimation techniques, for those in my vast* readership who know what these are.) Roughly speaking, if I have done my stats well and do the same estimation work with 100 different data sets, then the true value of the number I am after will lie in about 95 of the 100 confidence intervals that I find.
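To make that “95 of the 100 confidence intervals” claim concrete, here is a minimal simulation sketch (my own illustration, not from the article; the true value and the standard error are made-up numbers):

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 0.3   # the (unknown) true coefficient -- an arbitrary choice for illustration
std_error = 4.3    # assumed standard error of each study's estimate
n_studies = 100

covered = 0
for _ in range(n_studies):
    estimate = rng.normal(true_value, std_error)   # one study's noisy estimate
    lower = estimate - 1.96 * std_error            # 95 percent confidence interval
    upper = estimate + 1.96 * std_error
    if lower <= true_value <= upper:
        covered += 1

print(f"{covered} of {n_studies} intervals contain the true value")
# In the long run, about 95 of every 100 such intervals cover the true value.
```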

But here’s the rub. What I really am testing, if I am doing classical statistics, is whether the number I want to estimate can be shown (with 95 percent confidence) to be different from some a priori estimate (the “null hypothesis”). For a relationship that is very small, presumably any previous evidence will have shown it is small, and perhaps would have shown conflicting results about the sign of the relationship: some studies would have found it negative, some positive. So I should have as my null hypothesis that X and Y are unrelated.

Now let’s say that this relationship coefficient I am trying to estimate is in fact equal to 0. I do not know this, of course. If I do 100 independent studies to estimate this coefficient, then I can expect about 5 of them to indicate that the coefficient is significantly different from zero; all 5 would be misleading. But concluding that the correlation I want to find is in fact not there is not exciting, and will get me no fame. If I find one of the erroneous “significant” results, on the other hand, I will send my study to a prestigious journal, talk to some reporters, and maybe even write a book about it. All the noise thus generated would be good for my name recognition. But I would still be wrong, having infinitely overestimated the coefficient of interest.
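Here is a quick sketch of that “5 in 100” arithmetic (again my own illustration; the standard error is an arbitrary number, and I simulate many studies to show the long-run rate):

```python
import numpy as np

rng = np.random.default_rng(1)
true_coef = 0.0      # the relationship really is zero
std_error = 4.3      # assumed standard error of each study's estimate
n_studies = 100_000  # many hypothetical studies, to show the long-run rate

estimates = rng.normal(true_coef, std_error, size=n_studies)
# "Significantly different from zero" at the 95 percent level:
significant = np.abs(estimates) > 1.96 * std_error

print(f"share of misleading 'significant' results: {significant.mean():.3f}")
# Prints roughly 0.05 -- about 5 in every 100 studies.
```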

The same kind of error could arise if the true relationship were in fact positive. Say the coefficient was not 0 but instead 0.3 percent, and my data allowed me an estimate with a standard error of 4.3 percent. Then I would have about a 3 percent probability of estimating a positive coefficient that would appear statistically significant and, perhaps worse, about a 2 percent probability of estimating a _negative_ coefficient that would appear statistically significant. I could even be strongly convinced, then, about the wrong sign of my coefficient! Whichever of these two errors I fall into, the estimated coefficient will be more than an order of magnitude larger in absolute value than the true coefficient. This is why we are talking about Type M errors; M stands for magnitude. (We also saw a Type S error in this example, when the sign of the estimated coefficient was wrong; S stands for sign.)
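For the curious, here is a short back-of-the-envelope check of those 3 percent and 2 percent figures, and of how inflated any “significant” estimate must be (a sketch using the same assumed numbers, 0.3 and 4.3):

```python
from scipy.stats import norm

true_coef = 0.3                # assumed true coefficient
std_error = 4.3                # assumed standard error of the estimate
threshold = 1.96 * std_error   # an estimate must exceed this in absolute value to look "significant"

# Probability of a significant estimate with the correct (positive) sign:
p_sig_positive = 1 - norm.cdf(threshold, loc=true_coef, scale=std_error)
# Probability of a significant estimate with the wrong (negative) sign -- a Type S error:
p_sig_negative = norm.cdf(-threshold, loc=true_coef, scale=std_error)

print(f"significant and positive: {p_sig_positive:.3f}")   # about 0.03
print(f"significant and negative: {p_sig_negative:.3f}")   # about 0.02
print(f"any significant estimate is at least {threshold / true_coef:.0f} times the true value")
```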

Is there an escape from this trap? More data would help expose my error. The more data I base my estimation on, the greater the so-called “statistical power” of my testing procedure, and the less likely I am to fall into error. For variables with small but real correlations, as often happens in the medical literature, the data sets frequently contain millions of observations. Sophisticated scientists understand that you need a lot of power (a lot of data) to tease out small effects.
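As a rough illustration of how power grows with the size of the data set, here is a small sketch (my own numbers: a small true effect and a noisy outcome, both made up):

```python
from math import sqrt
from scipy.stats import norm

effect = 0.3    # small true effect
sigma = 10.0    # assumed standard deviation of a single observation
z_crit = 1.96   # two-sided 5 percent significance threshold

for n in (100, 10_000, 1_000_000):
    se = sigma / sqrt(n)   # the standard error shrinks as the sample grows
    # Power: probability that the estimate comes out statistically significant
    power = (1 - norm.cdf(z_crit - effect / se)) + norm.cdf(-z_crit - effect / se)
    print(f"n = {n:>9,}: power = {power:.2f}")

# Roughly 0.06 at n = 100, 0.85 at n = 10,000, and essentially 1.00 at n = 1,000,000:
# teasing out a small effect really does take a lot of data.
```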

What can we conclude from this? Besides the obvious value of skepticism when assessing any statistical finding, we should also realize that not all studies that use statistics are created equal. Some have more power than others, and we should trust their results more. And that is why “more research is needed” is such a refrain in discussions of studies on medical or social questions. I know “more research is needed” is also a plea for funds, and should always be met with the aforementioned skepticism, but bigger data sets do give us the power to draw more secure conclusions.

—-
*This poor attempt at irony is also an example of a particular Type M error, this one about the correlation between the variable “the size of the set of readers of my blog” and “vast, for not ridiculously small values of ‘vast'”. I hope you’ve heard some variation of the joke that goes something like “It is true that I have made only two mistakes in my life, for very large values of ‘two'”.

Deciding the conclusion ahead of time : Applied Statistics

The more serious issue is that this predetermined-conclusions thing happens all the time. (Or, as they say on the Internet, All. The. Time.) I’ve worked on many projects, often for pay, with organizations where I have a pretty clear sense ahead of time of what they’re looking for. I always give the straight story of what I’ve found, but these organizations are free to select and use just the findings of mine that they like.


Once I started reading the Applied Statistics blog for my previous post, I just had to read one more item, and guess what: I found this one, which is motivated by an article by the economist blogger Mark Thoma. Thoma points out an ad by the Chamber of Commerce that blatantly says they are looking for an economist to write a “study” to support what the Chamber wants to appear to be true. Reading the full post is highly recommended (click on “scienceblogs.com” above, after “via”).

Why most discovered true associations are inflated: Type M errors are all over the place


Posted on: November 21, 2009 3:22 PM, by Andrew Gelman

Jimmy points me to this article, “Why most discovered true associations are inflated,” by J. P. Ioannidis. As Jimmy pointed out, this is exactly what we call type M (for magnitude) errors. I completely agree with Ioannidis’s point, which he seems to be making more systematically than David Weakliem and I did in our recent article on the topic.

My only suggestion beyond what Ioannidis wrote has to do with potential solutions to the problem. His ideas include: “being cautious about newly discovered effect sizes, considering some rational down-adjustment, using analytical methods that correct for the anticipated inflation, ignoring the magnitude of the effect (if not necessary), conducting large studies in the discovery phase, using strict protocols for analyses, pursuing complete and transparent reporting of all results, placing emphasis on replication, and being fair with interpretation of results.”

These are all good ideas. Here are two more suggestions:

1. Retrospective power calculations. See page 312 of our article for the classical version or page 313 for the Bayesian version. I think these can be considered as implementations of Ioannidis’s ideas of caution, adjustment, and correction.

2. Hierarchical modeling, which partially pools estimated effects and reduces Type M errors as well as handling many multiple comparisons issues. Fuller discussion here (or see here for the soon-to-go-viral video version).


If you have studied statistics, you may remember Type I and Type II errors. This blog post by Andrew Gelman, from Scienceblogs > Applied Statistics, brings to my attention (probably shamefully late) the prevalence of Type M errors. I am very intrigued and have printed out the article referred to as “our recent article” in the quotation above. When I manage to wrap my head around this idea a little more, I will post a follow-up. (I am posting this on my “general interest” blog because it should interest everyone, not just scientists. Sorry I don’t have a plain-language explanation ready yet…)

Cooperating bacteria are vulnerable to slackers : Not Exactly Rocket Science

Game theory applies to all living organisms. I was recently saying this to a surprised undergraduate. Yet it is true, as this blog post from Not Exactly Rocket Science illustrates: Cooperating bacteria are vulnerable to slackers : Not Exactly Rocket Science. It tells the story of a kind of bacterial colony in which some members freeload on the efforts of the others to make the environment more nourishing for the whole colony. Over a range of colony population sizes, the freeloading bacteria do so well that they multiply faster than the rest. This advantage dissipates, however, once they become so preponderant in the colony that the whole colony is weakened. It seems these bacteria have figured out how to deal with the “tragedy of the commons”, in which people (or living creatures of any kind) overexploit a common resource because it is to the benefit of each individual to do so, even if it harms the group.

Dubner’s response to the “superfreakonomics” accusations

Read it here. I note that it does not discuss Krugman’s criticism, which I deem serious, and which appeared in a NYT blog, just as the Freakonomics blog is a NYT blog. I am curious to see what their response to Krugman will be, if any. Dubner and Levitt can hardly say Krugman is spreading smears about them; they either misread the Weitzman article or they did not. It looks like they misread it; it’s up to them to convince me otherwise.

I still have no intention of buying Superfreakonomics. I’ll be damned if I reward the authors and the publisher of stuff like this that passes for science writing. Again, I will keep an eye open for any adequate answers from Levitt and Dubner; I have not seen any yet.