What’s wrong with scientific research?

Posted by TheLastPsychiatrist.

The standard complaint against science is that it is influenced by money, which in turn leads to deliberate or unconscious manipulation of data.  Contradictory findings may not be published, either out of a belief that they are wrong or a realization that they aren't.

But the real source of the problem is only now becoming more widely recognized: the structure of modern scientific research.

An interesting case study from The Netherlands: a consortium of researchers was tasked with investigating the role of the HPA axis in mental disorders. How did it go?

However, it appeared that it was not possible for TRAILS to make a comprehensible synthesis regarding the potential role of cortisol in the etiology of psychopathology. Concerned by this observation, we analyzed the strategies used by the consortium to answer the questions on cortisol and psychopathology, and found that, although the strategies employed within the papers were usually correct, there were inconsistencies across papers. These inconsistencies concerned the operationalization of psychopathology (different questionnaires, informants, cutoff levels), the cortisol variables (different composite measures), and the use of statistical methods and included confounders. The end result was a rather confusing pattern of findings.

But:

In general, the results could not be combined in an overarching model, and were thus disappointing with regard to scientific progress. In contrast, the end result in terms of publication output was quite positive: the majority of papers were presented at international conferences and published in highly cited journals and several students earned PhD degrees based on their work on the subject.

The report elaborates on the specific ways the research went wrong, but there is one unifying cause that explains all the others: the ultimate goal of the research was a publication, and the ultimate purpose of the publication was individual advancement.

Though the article doesn’t go further than this, it is logical that this problem should get worse, not better, as more worldwide collaborative research is conducted. For any given investigation there may be two equally reasonable approaches, but if two labs each use a different one, their results may be individually rigorous yet still not comparable or easily integrated. In theory, deciding on a strategy at the outset avoids this problem, but “local” pressures may cause individuals to make choices that are scientifically acceptable but catastrophic to the overall collaboration. The social sciences are, of course, the most susceptible to this.

And this is as good a time as any to point out that scientists almost never begin an investigation directly from a work of previous research; they begin from an intuition, a perspective, that is almost always informed by the popular press. People will doubt this, but it is true.

Related posts:

  1. National Academy of Sciences study finds that FBI’s anthrax evidence is inconclusive. Now to the voir dire
  2. “Cigarettes are good for you”– science
  3. If you want kids to learn science, you need a better sales pitch.

4 Responses to What’s wrong with scientific research?

  1. JohnJ says:

    Scientists are people too? Whoddathunk?

  2. Tim says:

    This transcript from the Huff Post (sorry) is a nice summary of what it’s like to be a reporter. I found it really interesting reading, although maybe that’s because it told me what I already thought.

    I think it applies to researchers too – you need to publish, and you do whatever will most likely lead to publication, be that avoiding overly hard problems or reporting inconclusive results with a spin on them so they look significant.

    http://www.huffingtonpost.co.uk/richard-peppiatt/journalistic-practice_b_998292.html

  3. Guy Fox says:

    It might not be as bad as all that because 1) getting similar results from different methods can reinforce the robustness of the results. As far as I understand it, no experiment to find the Higgs boson at CERN has reached the conventional p-value in physics alone, but the fact that different experiments are achieving respectable p-values makes the residual doubt that much more doubtful. 2) Yes, the social sciences are susceptible to researchers not designing their studies for comparability, but much of that is just part of the recursiveness in the business that you have to accept. Much European social science isn’t very comparable to N. American stuff, and a lot of that has to do with what the respective cultures find to be compelling in terms of philosophical doctrine (e.g. even a linguist inspired by Rawls & Nozick is going to do very different work to one inspired by Derrida & Luhmann), and a lot has to do with European aversion to statistics, which is sometimes just math-hating and sometimes more philosophical. However, these differences tell you something about the cultures, and the academic output feeds back into the cultures, keeping them different. Different knowledges produce different cultures and keep them different, which is more fundamental than just being a superficial difference of opinion about research design among like-minded scholars. In a medical analogy, the incomparability in the social sciences isn’t just a side-effect of a particular treatment (i.e. modern academic study), it’s right in the physiologies of the subjects.
