Friday, January 27, 2017

ALTERNATIVE FACTS: Scientists got 'em too!

“WHAT”, I hear you saying, “That is precisely the point. Science has the facts and the facts are the facts – no alternatives. Anyone who makes statements not supported by the data is presenting fiction, not alternative facts.”

Seen in an Ottawa book store.
Yet maybe we scientists shouldn't be so smug about our FACTS. As the editor and reviewer of countless papers, I am struck by how often two (or more) fully qualified, insightful, and fair scientists can make opposite assertions of fact based on the same data and analysis. Most obviously, data are often interpreted in the light of preconceived expectations (we are all Bayesians) and, so, faced with a series of mixed results, scientists consciously or unconsciously emphasize the subset of results that support their expectations, largely ignoring or discounting or explaining away the rest. “Ah”, I hear you saying, “Those are interpretations, not facts. Facts belong in the results; interpretations are in the discussion.”

I am not so sure. Everyone knows that the same data can be analyzed in multiple ways, each yielding different outcomes. Hence, the statements and statistics in the results are interpretations too, not facts. “OK, then, the DATA are the facts.” Yet data are collected with error and – sometimes – are biased. Data collected in different ways at different times in different years can yield different outcomes. Different measures of central tendency (mean, median, etc.) yield different numbers. Different data transformations yield different outcomes. Given that data have no meaning without interpretation, even in the narrowest possible sense of a central tendency, data are not facts either.
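To make that concrete, here is a minimal R sketch with made-up, right-skewed numbers (the data are hypothetical, but the pattern is general): the same sample yields three different "central tendencies" depending on the summary chosen.

```r
# Made-up right-skewed data: one sample, three "central tendencies".
set.seed(1)
x <- rlnorm(100, meanlog = 0, sdlog = 1)  # log-normal, long right tail

mean(x)            # arithmetic mean, pulled upward by the tail
median(x)          # median, noticeably smaller
exp(mean(log(x)))  # geometric mean: log-transform, average, back-transform
```

Three defensible numbers, three different "facts" about the typical value.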

Alternative facts?
“Ah, but this is what measures of uncertainty are for.” Indeed, standard errors, confidence intervals, credibility intervals, and so on can – and should – be calculated but, again, they have no meaning without interpretation. You might look at a result and its credibility interval and say, “I therefore conclude the evidence more strongly supports option 1 than option 2.” Sure, but some other scientist can come along and say, “I require a higher degree of confidence, so my conclusion is different.” Or: “I would use a different measure of uncertainty.” Or: “You did not include all sources of uncertainty.” Or: “You didn’t consider option 3.” And, of course, we ultimately need to accept an outcome one way or the other or we never act on the information we have collected.
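As a toy illustration (simulated data, not from any real study), here is how the same sample can "support" a nonzero effect at one confidence level but not at another:

```r
# Simulated sample with a modest true effect and plenty of noise.
set.seed(2)
y <- rnorm(20, mean = 0.4, sd = 1)

t.test(y, conf.level = 0.90)$conf.int  # narrower interval; zero may fall outside
t.test(y, conf.level = 0.99)$conf.int  # wider interval; zero may fall inside
```

Same data, same analysis – and whether we "conclude an effect" depends entirely on the confidence level a given scientist demands.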


So, are there any real facts out there, any “objective truths?” Yes, of course, there are many of them, but many, perhaps most, of those are – ultimately – unknowable. Instead, they are “latent variables” about which we try to make inferences based on our imperfect surrogate measures.

Andy Gonzalez and me with our alternative facts
Alternative facts, which I have just argued are pervasive in science, depend on the level of inference. “Evolution isn’t a theory, it is a fact.” Sure, evolution as a process is a fact. However, evolution in any given case (a particular population facing a particular environmental change) might not be. Don’t believe me? Just look at the knots people studying adaptation to climate change or fisheries have tied themselves into trying to say that evolution has or has not occurred in any given instance. “Look at allele frequency changes,” you might say. But no one cares whether those are changing (and they ALWAYS are); we care about the evolution of the traits. “OK, then, measure the genes influencing those traits.” Yes, that works, but what matters is the effect size: how much of the change in phenotype is the result of evolution? That is extremely hard to measure. Climate change is a fact, certainly, but stating that it is a fact is not informative or helpful. We need to know the effect size (rate of change) and its consequences – and then it depends on the place, the time, and the measure taken. Plenty of room for “alternative facts” in this context.

Alternative facts?
So, maybe we scientists shouldn’t be so smug. If we are ever going to state conclusions as facts (as opposed to facts being unknowable), then room exists for alternative “facts” in the same sense. Otherwise, we wouldn’t have debates and arguments. Otherwise, we wouldn’t need to replicate our studies, and those of other scientists. Otherwise, we wouldn’t have retractions.

The difference between science and propaganda is that scientists are willing to give up their “facts” when enough evidence suggests they aren’t “facts” after all.



--------------------------

This post was motivated by reading scientific papers where authors test a hypothesis, obtain results that are mixed with respect to that hypothesis, emphasize only those results supporting the hypothesis, and then conclude in favor of the hypothesis. Presumably this sequence of events usually transpires without INTENT to deceive - and, regardless, the consequences are rarely dire. The same cannot be said for the current US government.

Wednesday, January 18, 2017

Eco-Evolutionary Dynamics Spotlight Session at Evolution 2017

Want to give a talk in our spotlight session on Eco-Evolutionary Dynamics at the joint SSE-ASN-SSB Evolution 2017 meeting in Portland, Oregon, June 23-27?

To quote the organizers: "A Spotlight Session is a focused, 75 min. session on a specific topic. Spotlight Session talks are solicited in advance, unlike regular sessions that are assembled, often imperfectly, from the total pool of contributed talks. Each Spotlight Session is anchored by three leading experts (each giving a 14 min talk) and rounded out with six selected speakers (each giving a 5 min. ‘lightning' talk) pursuing exciting projects in the same field. By having a focused session with high-profile researchers on a specific topic, there will be high value in presenting even a 5 min. talk as the room is likely to contain the desired target audience as well as other relevant and well-known speakers in the field. The 14 min. talks are invited by the organizer, while the 5 min. talks are selected via an open application process also run by the organizer." Giving a talk in a spotlight session does NOT preclude also giving a regular talk in the meeting. More information is here.

For our Eco-Evolutionary Dynamics spotlight session, the "leading experts" giving 14-minute talks will be myself, Fanie Pelletier, and Joe Bailey. Now we are seeking contributions from six "selected speakers" to round out our session.

Please send me an email at andrew.hendry@mcgill.ca with your proposed title and a short abstract by Feb. 10. We will then quickly review the talks and tender an invitation to six of them. Our hope is to highlight exciting new research on interactions between ecology and evolution. While we will consider all contributions, we particularly encourage young investigators (students, postdocs, new profs) and especially those developing new systems for studying eco-evolutionary dynamics.

Thanks to Matt Walsh for encouraging me to organize this spotlight session. 

If you want to see what Eco-Evolutionary Dynamics can do for you, check out #PeopleWhoFellAsleepReadingMyBook




Saturday, January 14, 2017

Blinded by the skills.

OK, I’m just gonna come right out and say it: I ain’t got no skills. I can’t code in R. I can’t run a PCR. I can’t do a Bayesian analysis. I can’t develop novel symbolic math. I can’t implement computer simulations. I don't have a clue how to do bioinformatics. I simply can’t teach you these things.

So why would anyone want to work with me as a graduate supervisor? After all, that’s what you go to graduate school for, right – SKILLS in all capitals. You want to be an R-hero. You want to be a genomics whiz. You want to build the best individual-based simulations. You want these skills so you can get a job, right? So clearly your supervisor should be teaching you these skills, yeah?

I most definitely cannot teach you how to code this Christmas tree in R. Fortunately, you can find it here

I will admit that sometimes I feel a bit of angst about my lack of hard skills. Students want to analyze their data in a particular way and I can’t tell them how. “I am sure you can do that in R,” I say. Or they want to do genomic analysis and I say “Well, I have some great collaborators.” I can’t check their code. I can’t check their lab work. I can’t check their math.

Given this angst, I could take either of two approaches. I could get off my ass and take the time and effort to learn some skills, damn it. Alternatively, I might find a way to justify my lack of skills. I have decided to take the latter route.

I think your graduate supervisor should be helping you in ways that you can’t get help with otherwise. Hence, my new catch-phrase is: “If there is an online tutorial for it, then I won't be teaching it to you.” Or, I might as well say: “If a technician can teach it to you, I won't be.” Now it might seem that I am either trying to get out of doing the hard stuff or that I consider myself above such things. Neither is the case - as evidenced by the above-noted angst. Instead, I think that the skills I can – and should – be offering to my graduate students are those one can’t find in an online tutorial and that can’t be taught by a technician.

Check out these crazy-ass impressive equations from my 2001 Am Nat paper. (My coauthor Troy Day figured them out.) 
I should be helping students to come up with interesting questions. I should be putting them in contact with people who have real skills. I should be helping them make connections and forge collaborations. I should be helping them write their proposals and their papers. I should be giving them – or helping them get for themselves – the resources they need to do their work. I should be challenging them, encouraging them, pushing them in new directions with new ideas. These are the things that can’t be found in an online tutorial; the things that a technician can’t teach them. In short, I should be providing added value beyond what they can find elsewhere.

Hey, in 1992, my genetic skills weren't bad - although, to be honest, my allozyme gels usually weren't this pretty
You might say I could, and should, do both – teach hard skills and do all the extra “soft” stuff just noted. Indeed, some of my friends and colleagues are outstanding at teaching hard skills and also at the “soft” skills I am touting. However, certainly for me personally, and – I expect – even for my polymath colleagues, there is a trade-off between teaching hard skills and doing the soft stuff. If a supervisor is an R whiz, then the student will sit back and watch (and learn) the R skills. The supervisor will have less time for the other aspects of supervision, the student will rely on the supervisor for the skills, the student might not take the initiative to learn the skills on their own, and the student might not experience the confidence-building boost of “figuring it out for themselves.”

Beyond my personal shortcomings when it comes to hard skills, it is important to recognize that graduate school is not about learning skills. Yes, hard skills come in handy and are often necessary. Certainly, skills look good on the CV – as long as they are reflected in publications. But, really, graduate school is not about technical aspects; it is about ideas (and, yes, data). PhDs aren’t (or shouldn’t be anyway) about learning bioinformatics or statistics – those are things that happen along the way; they aren’t the things that make you a Doctor of Philosophy. Most research universities don’t hire people with skills; they hire people with ideas. (I realize there are exceptions here – but that is another post.)

So, don’t come to me for skills. Don't come to any supervisor for skills. Come to us for ideas and enthusiasm. Come to us for arguments and challenges. Come to us for big ideas, stupid ideas, crazy ideas, and even a few good ideas. Come to us expecting us to expect you to learn your own skills – and to help point you to the place you can find them. We will tell you who has the skills. You will learn them on your own. 

We supervisors will help you with the things you can’t find on your own.




----------------------------

Notes:

1. I have mad field-work skills - come to me for those!
2. Max respect to my colleagues who do actually have real skills.
3. Sometimes skills ARE the research/ideas, such as development of new methodologies.
4. Thanks to Fred Guichard (with Steph Weber and Simon Reader) for the "blinded by the skills" title - suggested during our weekly "periodic table" at McGill.

OK, so I do have a few skills I can actually teach my students. I can catch guppies better than most.

Friday, January 6, 2017

'Urban cold islands' and adaptation in cities


The cover of the recent issue of Proceedings B. (Photo: Marc Johnson.)

I was not very optimistic when my M.Sc. supervisor, Dr. Marc Johnson, proposed that we study whether plants were adapting to urban environments. Looking back, now that the study has been published, it is clear that my pessimism was unwarranted. This study ended up being a very fun ‘whodunit’ with unanticipated discoveries around every corner, and one that will, it seems, keep on surprising us into the future.

During the summer of 2014, I was living at the Koffler Scientific Reserve at Joker’s Hill, the University of Toronto’s picturesque* field research station. There, I was conducting an experiment on the evolution of plant defences using white clover (Trifolium repens L.). This plant has a genetic polymorphism for the production of hydrogen cyanide (cyanogenesis), where within-population genetic variation causes some individuals to produce cyanide, and others to lack it.

A long history of research on the topic has armed us with a solid understanding of the ecological factors that drive the evolution of cyanogenesis in clover populations. In the field, populations at high latitudes and elevations tend to lack cyanogenesis, whereas populations at low latitudes and elevations are highly cyanogenic. The general hypothesis is that cyanogenesis is favoured in warm climates because of high herbivory. In cold habitats, cyanide—which is normally stored in tissues in a benign state and is activated locally only where herbivores disrupt the plant’s cells**—is selected against because freezing ruptures plant cells and causes self-toxicity when cyanide is released involuntarily.

Me and my clover mane.
Because I was already familiar with the clover cyanogenesis system, Marc came to me with an idea that cyanogenesis may evolve along urbanization gradients. Our prediction was straightforward: given that freezing temperatures select against cyanogenesis, we expected urban heat islands would reduce the incidence of freezing and therefore relax selection against cyanogenesis in cities. (Herbivores don’t seem to have a consistent relationship with urbanization.) Because the urban heat island causes gradual increases in temperature toward urban centres, we expected to see more cyanogenesis in natural clover populations with increasing proximity to the city.

On a humid July morning in 2014, Marc picked me up at the Koffler Reserve and we set off to collect plants along an urbanization gradient. We stopped every kilometre to sample, and our sites ranged from idyllic countryside to chaotic downtown Toronto. We sampled two additional transects in Toronto—in August and again in September. I screened each plant for cyanide, and quantified how the proportion of cyanogenic plants in populations changed along the urbanization gradient.
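For readers curious about the mechanics, a binomial GLM is one standard way to quantify such a cline (not necessarily the exact analysis in our paper). The sketch below uses simulated numbers, and the variable names are hypothetical:

```r
# Simulated stand-in data: populations along a 50-km urbanization gradient.
set.seed(3)
clover <- data.frame(dist_km = seq(0, 50, by = 1))  # distance from the city centre
p <- plogis(-1 + 0.04 * clover$dist_km)             # cyanogenesis frequency rising with distance
clover$n_total <- 20                                # plants screened per population
clover$n_cyano <- rbinom(nrow(clover), clover$n_total, p)

# Does the proportion of cyanogenic plants change along the gradient?
fit <- glm(cbind(n_cyano, n_total - n_cyano) ~ dist_km,
           family = binomial, data = clover)
summary(fit)  # a positive dist_km coefficient = more cyanogenesis toward rural sites
```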

From what I hear it’s pretty uncommon to get a clean result in science, and even less common to get a clean result that is the exact opposite of one’s prediction. Our results followed the latter scenario: cyanogenesis was lowest in the urban centre, and increased toward rural areas—the opposite of what we had predicted. The reason for this, we naïvely thought, was so obvious: lower herbivory in the urban centre is relaxing selection for cyanogenesis there.

Figure 1 from our paper. In three of four cities, the frequency of cyanogenesis increased toward rural areas.

We reasoned that we needed to do an experiment to test whether herbivory changed along the urbanization gradient***. This came with the unsettling realization that I would have to procure space on lawns from folks who lived in urban, suburban, and rural areas. I secured my urban and suburban sites mostly by emailing people I knew, but I lacked rural sites. Marc advised that I’d need to go door-to-door and solicit people for lawn-space donations in order to cover the full urban-rural gradient. After many discouraging answers of ‘no’ and some slammed doors, I finally hit my stride. In the end, more than half of my 40 study populations were on the private property of generous citizens.

While the field experiment was ongoing, I wanted to see if the patterns we observed were unique to Toronto. I, along with Marc and our co-author, Marie Renaudin, loaded up the lab car and sampled clover populations along transects in Montreal, Boston, and New York City. The trip had some ups and downs. The downs included being kept awake until the wee hours of the morning during a torrential downpour (in a leaky tent) because our campsite-neighbours were blasting Evanescence. Our car also broke down in downtown Boston and needed a new alternator, putting us a day behind. Despite these hiccups, we managed to get plants from all three cities back to the lab. We found that patterns in both Boston and New York City were consistent with what we observed in Toronto, but there was no pattern in Montreal.

The three authors about to depart on a ferry crossing the Ottawa River to Oka, QC.

When the field experiment ended, we were surprised to find that there was no change in herbivory along the urbanization gradient in Toronto. This was initially disappointing because I was left with no ideas about the causal factor, but this feeling didn’t last. At my committee meeting, the ever-insightful ecologist, Peter Kotanen, posed an alternative explanation for our findings. Peter suggested that reduced urban snow cover caused by the heat island effect could ultimately leave plants exposed to cold air temperatures, while rural plants would be kept warm by a relatively thick layer of insulating snow cover.

After Peter’s ecological revelation, I was especially glad that Marc had asked me to put out some ground-level temperature probes during the previous winter. Sure enough, when I looked closely at the data from these probes, they were perfectly in line with Peter’s hypothesis. The data show that urban ground temperatures were much colder than rural ground temperatures during the winter, and that this pattern reverses following snowmelt. We’ve taken to calling this pattern the ‘urban cold island’ effect****. In the paper, we use remote sensing and weather station data to suggest that this urban cold island effect doesn’t happen in Montreal because of exceptionally high snow cover along the entire rural-urban gradient.
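To illustrate the logic of such an index (a loose sketch with simulated temperatures; the paper's actual calculation may differ), one can simply track the rural-minus-urban difference in ground temperature through the winter:

```r
# Simulated daily ground temperatures from paired urban and rural probes.
set.seed(4)
day   <- 1:120                               # days from the start of winter
rural <- -2 + 8 * (day > 90) + rnorm(120)    # insulated by snow, then spring warmth
urban <- -8 + 16 * (day > 90) + rnorm(120)   # cold under thin snow, warm after melt

coldness <- rural - urban  # > 0: urban cold island; < 0: the familiar heat island
plot(day, coldness, type = "l",
     xlab = "Day of winter", ylab = "Relative urban coldness")
abline(h = 0, lty = 2)     # the sign flip marks the cold island giving way to the heat island
```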

Figure 3A from our paper. The 'relative urban coldness' index shows the cold island (values above 0) appearing during the winter, and then changing back into a heat island (values below 0) following snowmelt at the end of winter.  Curve is 95% CI. More details in paper.

The next steps of this work, on which other lab members are taking the lead, are very exciting. We’re testing whether snow cover actually changes selection on cyanogenesis. We’re also quantifying gene flow along urbanization gradients, and sampling transects in cities of different sizes and with different climates. From what I've seen of the preliminary results, it seems that many more surprises await. 

Sampling clover at the Washington Monument.

Growing up in a big city is a fantastic way to be exposed to a wide range of cultures, perspectives, and ideas. Just as exposure to diversity of human ideology/sexuality/culture (etc.) is important for generating an appreciation of the human world, exposure to biological diversity is important for us to attain a grounded perspective of our place in the world. Unfortunately, when human diversity and abundance increase, biodiversity tends to decline. Today, urban areas are expanding rapidly and an increasing proportion of humans are living in cities. With this, more young people than ever are growing up disconnected from nature. (A poetic example of this is how city lights erase the stars, making it even easier to forget our origins.) While some people are able to regularly leave the city, many—especially those from disadvantaged groups—are stuck in the city and thus can only experience nature there.

While urban evolution studies may be well suited for testing fundamental questions in evolution, they have a unique ability to motivate ecologically minded urban design & policy. There have been many ecological studies conducted in urban environments, but it’s not always clear that the variables measured are important for the biology of organisms. The unique promise of urban evolutionary studies is to identify the ecological variables that affect biological fitness (i.e., 'reverse ecology') in cities and, in doing so, to motivate urban design that mitigates such stressors. My ultimate hope for the field of urban evolutionary biology is that its discoveries are used to generate in city-dwellers a curiosity for the natural world. And who knows, maybe some theoretical advances will be made along the way.

--------------------------------------------------------------------------------

Ken A. Thompson is a Ph.D. student studying adaptation and speciation at the University of British Columbia. To learn more, visit his website.

--------------------------------------------------------------------------------

*This isn’t just my opinion—a recent film adaptation of Anne of Green Gables, starring Martin Sheen, chose to film there because of its rural scenery.

**Over 3000 plant species from 110 different families (from ferns to flowering plants) are cyanogenic. The release mechanism is invariably akin to a ‘bomb’, where the two components—a cyanogenic glycoside and an enzyme that cleaves HCN from the glycoside—are stored in different parts of the cell and only brought together following tissue disruption.

***Studying patterns of herbivory on wild plants wouldn’t work because we knew that defense was strongly associated with the gradient.

****To our knowledge we are the first to document this phenomenon. 

Sunday, January 1, 2017

F**k replication. F**k controls.


Just kidding – high replication and proper controls are the sine qua non of experimental science, right? Or are they, given that high replication and perfect controls are sometimes impossible or trade off with other aspects of inference? The point of this post is that low replication and an absence of perfect controls can sometimes indicate GOOD science – because the experiments are conducted in a context where realism is prioritized.


Replication and controls are concepts optimized for laboratory science, where both aspects of experimental design are quite achievable with relatively low effort – or, at least, low risk. The basic idea is to apply some specific treatment (or treatments) in a number of replicates but not in others (the controls), with all else held constant. The difference between the shared response of the treatment replicates and the shared response (or lack thereof) of the control replicates is taken as the causal effect of the specific focal manipulation.

However, depending on the question being asked, laboratory experiments are not very useful because they are extracted from the natural world, which is – after all – the context we are attempting to make inferences about. Indeed, I would argue that pretty much any question about ecology and evolution cannot be adequately (or at least sufficiently) addressed in laboratory experiments because laboratory settings are too simple and too controlled to be relevant to the real world.

1. Most laboratory experiments are designed to test for the effect of a particular treatment while controlling for (eliminating) variation in potential confounding and correlated factors. But why would we care about the effect of some treatment abstracted from all other factors that might influence its effects in the real world? Surely what we actually care about is the effect of a particular causal factor specifically within the context of all other uncontrolled – and potentially correlated and confounding – variation in the real world.

2. Most laboratory experiments use highly artificial populations that are not at all representative of real populations in nature – and which should therefore evolve in unrepresentative ways and have unrepresentative ecological effects (even beyond the unrealistic laboratory “environment”). For example, many experimental evolution studies start with a single clone, such that all subsequent evolution must occur through new mutations – but when is standing genetic variation ever absent in nature? As another example, many laboratory studies use – quite understandably – laboratory-adapted populations; yet such populations are clearly not representative of natural populations.

In short, laboratory experiments can tell us quite a bit about laboratory environments and laboratory populations. So, if that is how an investigator wants to focus inferences, then everything is fine – and replicates and controls are just what one wants. I would argue, however, that what we actually care about in nearly all instances is real populations in real environments. For these more important inferences, laboratory experiments are manifestly unsuitable (or at least insufficient) – for all of the reasons described above. Charitably, one might say that laboratory experiments are “proof of concept.” Uncharitably, one might say they tend to be “elegantly irrelevant.”


After tweeting a teaser about this upcoming post, I received a number of paper suggestions. I like this set.
To make the inferences we actually care about – real populations in real environments – we need experiments WITH real populations in real environments. Such experiments are the only way to draw robust and reliable and relevant inferences. Here then is the rub: in field experiments, high replication and/or precise controls can be infeasible or impossible. Here are some examples from my own work:

1. In the mid-2000s, I trotted a paper around the big weeklies about how a bimodal (in beak size) population of Darwin’s finches had lost its bimodality in conjunction with increasing human activities at the main town on Santa Cruz Island, Galapagos. Here we had, in essence, an experiment in which a bimodal population of finches was subject to increasing human influences. Reviewers at the weeklies complained that we didn’t have any replicates of the “experiment.” (We did have a control – a bimodal population in the absence of human influences.) It was true! We did not have any replicates simply because no other situation is known where a bimodal population of Darwin’s finches came into contact with an expanding human population. Based on this criticism of no replication – despite the fact that replication was both impossible and irrelevant – our paper was punted from the weeklies. Fortunately, it did end up in a nice venue (PRSB) – and has since proved quite influential.

Bimodality prior to the 1970s has been lost to the present at a site with increasing human influence (AB: "the experiment") but not at a site with low human influence (EG: "the control"). This figure is from my book.

2. More recently, we have been conducting experimental evolution studies in nature with guppies. In a number of these studies, we have performed REPLICATE experimental introductions in nature: in one case working with David Reznick and collaborators to introduce guppies from one high-predation (HP) source population into several low-predation (LP) environments that previously lacked guppies. Although several of these studies have been published, we have received – and continue to receive – what seem to me to be misguided criticisms. First, we don’t have a true control, which is suggested to be introducing HP guppies into some guppy-free HP environment. However, few such environments exist and, when such introductions are attempted (Reznick, pers. comm.), the guppies invariably go extinct. So, in essence, this HP-to-HP control is impossible. Second, our studies have focused on only two to four of the replicate introductions, which has been criticized because N=2 (or N=4) is too low to make general conclusions about the drivers of evolutionary change. Although it is certainly true that N=10 would be wonderful, it is simply not possible in nature owing to the limited availability of suitable introduction sites. Moreover, N=2 (N=1 even) is quite sufficient to infer how those specific populations are evolving and, for N>1, whether they are evolving similarly or differently.

Real, yes, but not unlimited.



3. Low numbers of replicate experiments have also been criticized because too many other factors vary idiosyncratically among our experimental sites (they are real, after all) to allow general conclusions. The implication is that we should not be doing such experiments in nature because we can’t control for other covarying and potentially confounding factors – and because the large numbers of replicates necessary to statistically account for those other factors are not possible. First, I would argue that the other covarying and confounding factors are REAL, and we should not be controlling them but rather embracing their ability to produce realism. Hence, if two replicates show different responses to the same experimental manipulation, those different responses are REAL and show that the specific manipulation is NOT generating a common response when layered onto the real complexities of nature. Certainly, removing those other factors might yield a common response to the manipulation, but that response would be fake – in essence, artificially increasing an effect size by reducing the REAL error variance.


For the experiments that matter, replication and controls trade off with realism – and realism is much more important. A single N=2 uncontrolled field experiment is worth many N=100 lab experiments. A single N=1 controlled field experiment is worth many different controlled lab experiments. Authors (and reviewers and editors) should prioritize accordingly.

1. It is certainly true that limited replication and imperfect controls mean that some inferences are limited. Hence, it is important to summarize what can and cannot be inferred under such conditions. I will outline some of these issues in the context of experimental evolution.

2. Even without replication and controls, inferences are not compromised about evolution in the specific population under study. That is, if evolution is documented in a particular population, then evolution did occur in that population in that way in that experiment. Period.

3. With replication (let’s say N=2 experiments), inferences are not compromised about similarities and differences in evolution in the two experiments. That is, if evolution is similar in two experiments, it is similar. Period. If evolution is different in two experiments, it is different. Period.

4. What is more difficult is making inferences about specific causality: that is, was the planned manipulation the specific cause of the evolution observed, or was a particular confounding factor the specific cause of the difference between two replicates? Despite these limitations, an investigator can still make several inferences. Most importantly, if evolution occurs differently in two replicates subject to the same manipulation (predation or parasitism or whatever), then that manipulation does NOT have a universal over-riding effect on evolutionary trajectories in nature. Indeed, experiment-specific outcomes are a common finding in our studies: despite a massive shared shift in a particular set of environmental conditions, replicate populations can sometimes respond in quite different ways. This outcome shows that context is very important and, thereby, highlights the insufficiency of laboratory studies that reduce or eliminate context-dependence and, critically, its idiosyncratic variation among populations. Ways to improve causal inferences in such cases are to use “virtual controls,” which amount to clear a priori expectations about ecological and evolutionary effects of a given manipulation, and/or “historical replicates,” which can come from other experimental manipulations done by other authors in other studies. Of course, such alternative methods are still attended by caveats that need to be made clear. (One way to formalize the replicate-comparison logic is sketched below.)
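As a hedged sketch of that replicate-comparison logic (simulated data, not from any real introduction experiment), one can fit a replicate-by-generation interaction: a significant interaction means the two replicates evolved differently.

```r
# Two simulated replicate introductions; replicate B barely responds.
set.seed(5)
d <- expand.grid(replicate = c("A", "B"), generation = 0:10, ind = 1:15)
slope <- ifelse(d$replicate == "A", 0.30, 0.05)     # different trait trajectories
d$trait <- 10 + slope * d$generation + rnorm(nrow(d))

fit <- lm(trait ~ generation * replicate, data = d)
summary(fit)  # the generation:replicate term tests "similar vs. different" evolution
```

Of course, as argued above, this tells us only that the replicates differed – not which confounding factor made them differ.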

I argue that ecological and evolutionary inferences require experiments with actual populations in nature, which should be prioritized at all levels of the scientific process even if replication is low and controls are imperfect. Of course, I am not arguing for sloppy science – such experiments should still be designed and implemented in the best possible manner. Yet only experiments of this sort can tell us how the real world works. F**k replication and f**k controls if they get in the way of the search for truth.


Additional points:

1. I am not the first frustrated author to make these types of arguments. Perhaps the most famous defense of unreplicated field experiments was that by Stephen Carpenter in the context of whole-lake manipulations. Carpenter also argued that mesocosms were not very helpful for understanding large-scale phenomena.


2. Laboratory experiments are obviously useful for some things, especially physiological studies that ask, for example, how temperature and food influence metabolism in animals and how light and nutrients influence plant growth. Even here, however, those influences are likely context dependent and could very well differ in the complex natural world. Similarly, laboratory studies are useful for asking questions such as “If I start with a particular genetic background and impose a particular selective condition under a particular set of otherwise controlled conditions, how will evolution proceed?” Yet those studies must recognize that the results are going to be irrelevant outside of that particular genetic background and that particular selective condition under that particular set of controlled conditions.

3. Skelly and Kiesecker (2001 – Oikos) have an interesting paper where they compare and contrast effect sizes and sample sizes in different “venues” (lab, mesocosms, enclosures in nature) testing for effects of competition on tadpole growth. They report that the different venues yielded quite different experimental outcomes, supporting my points above that lab experiments don’t tell us much about nature. They also report that replication did not decrease from the lab to the more realistic venues – but the sorts of experiments reviewed are not the same sort of real-population real-environment experiments described above, where trade-offs are inevitable.

From Skelly and Kiesecker (2001 - Oikos).
4. Speaking of mesocosms (e.g., cattle tanks or bags in lakes), perhaps they are the optimal compromise between the lab and nature, allowing for lots of replication and for controls in realistic settings. Perhaps. Perhaps not. It will all depend on the specific organisms, treatments, environments, and inferences. The video below is an introduction to the cool new mesocosm array at McGill.


5. Some field experimental evolution studies can have nice replication, such as the islands used for Anolis lizard experiments. However, unless we want all inferences to come from these few systems, we need to also work in other contexts, where replication and controls are harder (or impossible).

6. Some investigators might read this blog and think “What the hell, Hendry just rejected me because I lacked appropriate controls in my field experiment?” Indeed, I do sometimes criticize field studies for the lack of a control (or replication) but that is because the inferences attempted by the authors do not match the inferences possible from the study design. For instance, inferring a particular causal effect often requires replication and controls – as noted above. 
