
A recent article in The Economist titled ‘How science goes wrong’ got me thinking about what its assertions mean for climate science as we know it.

In summary, the author argues that the great success of the scientific method has bred complacency, such that scientific orthodoxies are rarely challenged. I guess we believe that the way we do science works, so why change it? Although scientific research has expanded enormously over the years, the way we DO science has changed very little since the 17th century. By contrast, the scientific community itself has changed substantially, with academic life becoming a very different pursuit from what it was in the early days of science. The biggest change has been growth: there is competition in science like never before, and from what I have observed in my short time in the academic world, the obligation to ‘publish or perish’ (The Economist’s phrase) is overwhelming.

The growth in the number of researchers and in research funding has fostered an immense expansion of scientific output, and many ground-breaking papers have been published over the years – but with this growth come competition and careerism, which encourage the embellishment and cherry-picking of results. To defend their exclusivity, the leading scientific journals impose very high rejection rates; more than 90% of submitted manuscripts are turned away. Submitted papers with the most striking findings naturally have the greatest chance of being published. It is no wonder that so many researchers know of a colleague, or a colleague of a colleague, who has polished up a paper by excluding inconvenient data on the basis of ‘a gut feeling’. Conversely, research in which scientists have failed to prove their hypotheses is rarely offered for publication – let alone published. ‘Negative results’ now account for only 14% of published papers, down from 30% in 1990. Yet aren’t we taught during our undergraduate studies that knowing what is false is as important to science as knowing what is true? The failure to report failures means that researchers waste money and effort exploring dead-end research paths that have, in fact, already been investigated by other scientists. The pressure to publish can also keep us from examining already published papers critically; when we neglect this responsibility, we clutter the knowledge base and hinder further analysis.
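
The Economist’s companion briefing (the second link below) works through the statistics behind this. Here is a minimal sketch of its argument in Python, using the briefing’s illustrative numbers rather than real survey data: if only a small fraction of tested hypotheses are true, tests run at the conventional 5% false-positive rate with 80% power, and journals print mostly the positive results, then a startling share of published ‘findings’ are false.

    import random

    random.seed(42)

    N = 100_000      # hypotheses tested
    P_TRUE = 0.10    # assumed fraction of hypotheses that are actually true
    ALPHA = 0.05     # false-positive rate of the statistical test
    POWER = 0.80     # chance that a true effect yields a positive result

    true_pos = false_pos = 0
    for _ in range(N):
        is_true = random.random() < P_TRUE
        if is_true and random.random() < POWER:
            true_pos += 1
        elif not is_true and random.random() < ALPHA:
            false_pos += 1

    # If journals publish only the positives, this is the literature:
    published = true_pos + false_pos
    print(f"published positive results: {published}")
    print(f"share that are false: {false_pos / published:.1%}")  # roughly 36%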

In climate science there is the added pressure of intense social and political interest in the results of climate research, and a media focus that is largely absent from fields such as physics and biochemistry. On one side are intense vested interests in the status quo, including a shrill minority who openly refuse to re-examine their long-held beliefs. This backdrop makes any review or adjustment of previous assertions by climate science even more challenging. Climate scientists also face particular pressure to present a coherent front. The intensely political nature of the field weighs heavily on scientists, leaving some overly cautious and others overly eager in their statements – all of which the media laps up.

I’m not sure what all of this means for us as climate scientists (oops, I can’t call myself one of those yet, but you know what I mean!). But I know how easy it is for the human mind to get stuck in comfort zones – especially when those comfort zones seem to be working so well and producing so much ‘stuff’. The question that comes to mind is: is the mass of research we produce as a scientific community merely of ‘acceptable’ quality, telling other people what they want to hear? Perhaps we could expand our minds even further, so that the science we DO strikes a finer balance between creativity, innovation, and replication of methodology.

 

Read the article I refer to here: http://www.economist.com/news/leaders/21588069-scientific-research-has-changed-world-now-it-needs-change-itself-how-science-goes-wrong

Also an interesting read: http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble

5 Responses to “Are we doing science right? Does anyone care when we get it wrong?”

  1. Piotr Wolski

    Just two more pennies:

    One:
    Indeed, what we (people in a position perceived by society as that of a scientist) do on a daily basis is not always science. We do not always apply our minds to devising an experiment meant to provide falsifiable evidence for a theory, adding to human knowledge of the universe. Sometimes we just perform the mundane task of generating uninspiring data. But perhaps our refusal to accept such a predicament results from a skewed perception of who we really are. We only seem to be scientists. We work at the source of all knowledge – the university. To get there we had to go through the process of conceiving and conducting research following the Scientific Method. Our performance is judged through the lens of papers published in scientific journals following Peer Review. We therefore appropriately assume the societal role of the scientist and attempt to do what scientists have been doing for 300 years: think, test, reject, think, test, reject, think, test, not reject, achieve a sense of fulfillment, have our names printed in gold next to Crick and Watson’s in the annals of humankind… That. And a paper in Nature, of course. But perhaps at this point in space-time the ever-evolving human system has a somewhat different role for us – the noble function of Information Provisioners and Model Runners, whatever you call it, interfacing between science and real life (consulting, engineering, policy-making) and supporting science. And with a small dose of economic facilitation, we fulfil this role. Somebody has to give data to the engineers and test the models – someone with enough knowledge to know the data and the models.

    Two:
    The reality is, we have democratized science. Almost anyone who does well at school can become a scientist. As my PhD advisor once told me, “One does not need brains to get a PhD these days. What one needs is persistence” (well, perhaps I should have taken that personally at the time, but I didn’t… so here we are!). There are millions of scientists out there. Would you believe that there are 304 scientific journals which could potentially publish a paper on cancer? But perhaps this scale of things is necessary. We have been doing science consistently for over 300 years, and in practice since we appeared on the surface of the Earth. The bubble of knowledge is large and getting larger. Every scientist starts in the centre and has to wiggle their way to the bubble’s boundary before they are able to contribute meaningfully – something like this: http://matt.might.net/articles/phd-school-in-pictures/. Some make it through brilliance of intellect, some through hard work, some by chance. But all in all they are few. Most, I bet, are not even aware where the exact boundary is, and somewhat blindly probe left, right and centre. A wide “human resource base”, a sort of crowd-sourcing, is perhaps necessary to find the individuals who can get to the edge and push it significantly. And what about those who cannot? Well, again, with a bit of societal and economic pressure, we keep busy producing (and peer-reviewing) non-science and sharing it at conferences and in journals.

  2. Stefaan Conradie

    On a slightly more mundane note than the previous two comments, I’d just like to say thanks, Claire, for an interesting and thought-provoking read. I was particularly interested in your comments about the additional pressures experienced in climate science.

    However, some concerns have been expressed about the Economist articles you link to (see e.g. http://ksj.mit.edu/tracker/2013/10/how-science-writing-goes-wrong-overreach or http://www.skepticink.com/smilodonsretreat/2013/10/18/how-science-goes-wrong-a-response/). In particular, many of the ideas raised there seem to apply to academic research in general rather than to science in particular; the focus is mostly on biomedicine and neuroscience, and some of the supporting examples come from psychology and economics, which are not generally considered natural sciences.

    That said, I do think the “publish or perish” matter is a serious problem (although it does sit oddly beside the idea that scientists avoid publishing “uninteresting” results, doesn’t it?). Also, many of the problems seem to arise from a poor understanding of statistics and, as pointed out in the post Chris linked to, a lack of background in computer science/computing. This suggests to me that many researchers lack knowledge and skills from the basic “tool set” for research: subjects such as Maths (which underlies much of the statistics and numerical analysis that is done), Statistics, Computer Science and Physics (where there is a well-developed framework for applying such ideas). Maybe the way these subjects are incorporated into other fields should be adapted? Disclaimer: yes, I am biased towards Maths.

  3. Alex

    Having spent time around engineers, my observation was that the difference lies in the goals sought: the difference between asking “does it work?” and “why does it work?”. On that basis, I draw the distinction between the two via the application of the Scientific Method. According to the OED, this is “a method of procedure […] consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses.” The key difference between the disciplines, in my mind, lies in the testing of hypotheses.

    One of the things that drew me to Theoretical Physics as an undergraduate was how clear-cut and elegant the fundamental theory was. However, the further up you go in spatial scale, the more complex the systems become (as illustrated succinctly by http://xkcd.com/435/). Physicists deal with this by using a bottom-up physical modelling approach (simplest problem first) and neglecting higher-order terms. In some cases these idealised toy models may be sufficient approximations to the real thing. In general, though, scientists don’t have the luxury of neglecting terms when modelling real-world processes that occur on intermediate length scales (>>atomic and <<galactic).
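
    (To make that term-neglecting concrete, here is a toy illustration of my own, not something from the article: the small-angle pendulum, where sin(θ) = θ − θ³/6 + … is truncated to its leading term, turning the equation of motion into a harmonic oscillator. A few lines of Python show where the idealisation stops being a sufficient approximation.)

        import math

        # Small-angle approximation: sin(theta) = theta - theta**3/6 + ...
        # Keeping only the leading term gives the harmonic-oscillator pendulum.
        for deg in (1, 5, 10, 30, 60):
            theta = math.radians(deg)
            rel_err = (theta - math.sin(theta)) / math.sin(theta)
            print(f"{deg:3d} deg: sin(theta) ~ theta is off by {rel_err:.2%}")
        # Roughly 0.5% at 10 degrees, but about 21% at 60 degrees:
        # the neglected terms eventually bite back.
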
    In the 60s and 70s the field of mathematical biology grew rapidly, with many physicists seeking to apply modelling techniques to other fields. While this undoubtedly led to a greater understanding of the field (predator-prey models, SIR models, etc.), a cynical observer might call these early successes “low-hanging fruit”. Analytic breakthroughs in the field are now much less frequent. Real-world problems such as target patterns for magnetic resonance imaging and cardiac models rely as much on computational methods as they do on analytic techniques. Does that imply that these approaches are somehow less scientific?

    To me, the key lies in reproducibility and testing. Does a doctor in a hospital care what image processing techniques are used in their MRI machine? All they care about is whether they can be confident that the resultant image is a sufficiently accurate approximation of the underlying physical system. But to get to the point where such equipment can safely be placed in the hands of those who do not know (or care) what is “under the hood”, the systems were meticulously tested across a range of typical and edge cases, and each anomaly was dissected and interpreted. Why did the method work or not work? Can we predict another scenario that would cause such output? Test hypothesis. Rinse. Repeat.

    How much physical meaning is given to the outputs of the latest fancy Bayesian learning method, PCA, etc.? How believable is the climate anomaly you computed in the latest run of your favourite RCM, and how big would you have to make the ensemble before you would bet your house on it? Are climate models at the stage where they can be handed over to “users” with a clear conscience?
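
    (A back-of-envelope way to frame that ensemble question, under the generous toy assumption – mine, not anyone’s published method – that members are independent draws around a true anomaly, which real RCM runs are not: the standard error of the ensemble mean shrinks only as 1/√N, so halving your uncertainty costs four times the runs, and no ensemble size removes a bias shared by every member.)

        import math, random

        random.seed(0)
        TRUE_ANOMALY = 2.0  # hypothetical "true" signal (deg C)
        SPREAD = 1.5        # assumed member-to-member spread (deg C)

        for n in (5, 20, 80, 320):
            members = [random.gauss(TRUE_ANOMALY, SPREAD) for _ in range(n)]
            est = sum(members) / n
            stderr = SPREAD / math.sqrt(n)  # uncertainty of the ensemble mean
            print(f"N={n:4d}: mean = {est:+.2f} deg C, std. error = {stderr:.2f}")
        # Quadrupling the ensemble merely halves the standard error; structural
        # biases common to all members never average out, however large N gets.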

  4. Chris

    This post and the associated article, along with another article a colleague pointed me to on the growing need for computer science skills in science (http://jakevdp.github.io/blog/2013/10/26/big-data-brain-drain/), have solidified some emerging thoughts about climate science. This is also partly prompted by attending the recent CORDEX conference (see Joseph’s blog post: http://bit.ly/HPFDB1).

    What is the difference between science and engineering? In particular, what is the difference between the environmental hard sciences (climate, hydrology, etc.) and engineering? Tomorrow I will have another of many meetings with a large engineering consultancy. We have worked with them a lot in the past. They often outsource components of their projects to other engineering companies with more expertise; they outsource components to us. And the truth is that we don’t deliver “science”, we deliver data to feed into engineering designs, be they “hard” infrastructure or “softer” management plans and similar outputs. That is engineering: taking a problem, applying a range of tools to measure the problem, designing a solution, and implementing it. As climate “scientists” we feed into that process.

    So what is science? The Oxford dictionary defines it as “the intellectual and practical activity encompassing the systematic study of the structure and behaviour of the physical and natural world through observation and experiment” (http://bit.ly/1bAwFAs). So climate science would be the intellectual and practical activity encompassing the systematic study of the structure and behaviour of the earth’s climate through observation and experiment.

    A few key words jump out there. First is the combination of “intellectual” and “practical”. There is a thinking component to science. This is remarkably evident if you read anything written by or about the classic scientists of the past. They often made some pretty magnificent leaps of thought. I stand in awe of people like Robert Hooke, who came up with ideas like the wave nature of light. As a teenager I fantasized about coming up with a ground-breaking new scientific idea. One thing is obvious from reading scientific history: it requires real imagination and a willingness to fail and be proved wrong (and to look rather stupid). I’m not sure how much these two characteristics are encouraged in our current science education.

    But the thing about many of these discoveries is that they were not just dreamed up. They emerged from the reality that practical experiments were producing results that couldn’t be explained by existing theories. So there is a very practical side to science. Designing experiments to test hypotheses generally also involved building equipment, either to observe the world or to change something and observe the result. For early physics the tools of the trade were lenses and weights and pendulums and levers. Nowadays the tools have changed for many scientists, and there is a massive dependence on computers and coding, to the extent that some (http://jakevdp.github.io/blog/2013/10/26/big-data-brain-drain/) suggest that computer coding skills trump science domain knowledge. But I seriously challenge that, even though my “science career” so far has been strongly carried on the back of my computer science training. Looking back in history, what set scientists apart was not their ability to grind a glass lens or blow glass tubes, but the ability to imagine, to think critically, and to construct truly clever experiments that prove or disprove an idea. Many people could grind a lens; very few could devise a new theory of light.

    And so at the recent CORDEX conference I spent a lot of time listening to talks by people who are probably cleverer than me and have certainly done a lot more “stuff” than me. But I’m not sure how much science was being presented. Engineering maybe, but not science. That’s not to discredit the work done; some of it is very impressive. But if it doesn’t advance our understanding of the “structure and behaviour” of the climate system, then is it science? Of course, maybe this is also my misconception. CORDEX is largely about climate modelling, and climate modelling is big and complex stuff. Around 15 years ago I spent many, many months of my life compiling and running regional climate models. That was not science. It was in support of science, but it was not science. I was the lens grinder in the workshop, swearing every time the new lens cracked (again).

    And that is a role that is certainly needed, perhaps critically so. But, and I think this is my main point, we dangerously deceive ourselves if we think that running a climate model is “doing science”. Interestingly, one of the few presentations that stood out for me last week was relatively ignored and elicited only a single, rather mechanistic (engineering) response from the audience. It stood out because it proposed an interesting hypothesis about a component of the climate system: that droplet size in rainfall has an impact on erosivity. The scientist presented an experiment (using a climate model) to test the hypothesis and ended up convincingly demonstrating a result. She openly admitted the relative simplicity of the experiment and the almost obvious nature of the result. But in my mind, knowledge had been gained about the climate system.

    In contrast, many presentations examined the performance of different climate models, either in representing the observed climate system or in representing responses to future climate scenarios. These presentations added knowledge, yes, but not about the climate system; only about the models. This is critically important information, and this hard and difficult work needs to be done and coordinated (hence CORDEX). But I propose that it is not science.

    I want to end by saying that I’ve taken quite an extreme line here on purpose. I want to prompt discussion! I really don’t intend to offend or discourage anyone within CSAG or outside. There are many, many nuances to this topic, and there is a lot of very interesting and good science (if I have any authority to judge) happening and being documented. But I’m nervous that, as climate scientists and in particular as climate modellers, we are at risk of forgetting what science is and becoming mechanics or engineers. And I’m the first to admit that I have worryingly little real science to my name, so three fingers point back towards me…