The standard complaint against science is that it is influenced by money, which in turn leads to deliberate or unconscious manipulation of data. Contradictory findings may not be published, either out of a belief that they are wrong or a realization that they aren’t.
But the real source of the problem is only now becoming recognized: the structure of modern scientific research.
An interesting case study from the Netherlands: a consortium of researchers was tasked with investigating the role of the HPA axis in mental disorders. How did it go?
However, it appeared that it was not possible for TRAILS to make a comprehensible synthesis regarding the potential role of cortisol in the etiology of psychopathology. Concerned by this observation, we analyzed the strategies used by the consortium to answer the questions on cortisol and psychopathology, and found that, although the strategies employed within the papers were usually correct, there were inconsistencies across papers. These inconsistencies concerned the operationalization of psychopathology (different questionnaires, informants, cutoff levels), the cortisol variables (different composite measures), and the use of statistical methods and included confounders. The end result was a rather confusing pattern of findings.
In general, the results could not be combined in an overarching model, and were thus disappointing with regard to scientific progress. In contrast, the end result in terms of publication output was quite positive: the majority of papers were presented at international conferences and published in highly cited journals and several students earned PhD degrees based on their work on the subject.
The report elaborates on the specific ways the research went wrong, but there is one unifying cause that explains all the others: the ultimate goal of the research was a publication, and the ultimate purpose of the publication was individual advancement.
Though the article doesn’t go further than this, it is logical that this problem should get worse, not better, as more worldwide collaborative research is conducted. For any given investigation there may be two equally reasonable approaches, but if two labs each use one, their results may be rigorous yet still not be comparable or integrable. In theory, deciding on a strategy at the outset avoids this problem, but “local” pressures may cause individuals to make choices that are scientifically acceptable but catastrophic to the overall collaboration. The social sciences are, of course, most susceptible to this.
And this is as good a time as any to point out that scientists almost never begin an investigation directly from a work of previous research; they begin from an intuition, a perspective, that is almost always informed by the popular press. People will doubt this, but it is true.