Who remembers the old LP record albums? They were made of vinyl, and music was recorded by etching tiny variations along a spiral groove. You put an LP onto a turntable, and you set the stylus, with its fine needle, into the groove. As the turntable rotated, the needle vibrated according to those tiny variations along the groove. And by amplifying that analog signal, music emanated from your speakers.
The LP replaced an earlier format that used shellac instead of vinyl. The older format rotated on the turntable at 78 rpm, and a 12-inch diameter record allowed for only about 5 minutes of music per side. The vinyl LP allowed finer etching along a narrower groove, and these albums turned at 33 and 1/3 rpm. This technology allowed over 20 minutes of music to be recorded on each side of the disc. Hence the abbreviation LP, which stands for “long play.”
Why am I telling you this? I started the LTEE on February 24, 1988. A year on our planet is about 365.25 days, and so a century is 36,525 days. There have been 12,175 days from February 24, 1988, until today. That’s exactly one third of a century.
The LTEE has now revolved around our sun 33 and 1/3 times! I think that qualifies as an LP.
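For anyone who wants to check the arithmetic, here’s a quick sketch using Python’s standard library. (The “today” date is inferred from the day count above, since 12,175 days after February 24, 1988, lands on June 25, 2021.)

```python
from datetime import date

start = date(1988, 2, 24)   # the day the LTEE began
today = date(2021, 6, 25)   # inferred from the day count in this post

days = (today - start).days
print(days)                 # 12175
print(36525 / 3)            # 12175.0 -- one third of a century, in days
print(days / 365.25)        # ~33.33 trips around the sun
```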
An old LP album cover … even older than the LTEE.
Writing in the lab notebook on the occasion of the LTEE circling the sun 33 and 1/3 times.
Even as the experiment is on ice, the lab team continues to analyze recently collected data, prepare papers that report their findings, and make plans for future work. Their analyses use data collected from the LTEE itself, as well as from various experiments spun off from the LTEE. Nkrumah Grant is writing up analyses of genomic and phenotypic aspects of metabolic evolution in the LTEE populations. Kyle Card is examining genome sequences for evidence of historical contingencies that influence the evolution of antibiotic resistance. Zachary Blount is comparing the evolution of new populations propagated in citrate-only versus citrate + glucose media. Minako Izutsu is examining the effects of population size on the genetic targets of selection, while Devin Lake is performing numerical simulations to understand the effects of population size on the dynamics of adaptive evolution. So everyone remains busy and engaged in science, even with the lab temporarily closed.
Today, I’m excited to announce two new developments. First, the National Science Foundation (NSF) has renewed the grant that supports the LTEE for the next 5 years. This grant enables the continued propagation of the LTEE lines, the storage of frozen samples, and some core analyses of the evolving populations. The grant is funded through the NSF’s Long Term Research in Environmental Biology (LTREB) Program, which “supports the generation of extended time series of data to address important questions in evolutionary biology, ecology, and ecosystem science.” Thank you to the reviewers and program officers for their endorsement of our research, and to the American public and policy-makers for supporting the NSF’s mission “to promote the progress of science.”
Second, Jeff Barrick joins me as co-PI on this grant for the next 5 years, and I expect he will be the lead PI after that period. In fact, Jeff and his team will take over the daily propagation of the LTEE populations and storage of the sample collection even before then. I’m not planning to retire during the coming grant period. Instead, this transfer of responsibility is intended to ensure that the LTEE remains in good hands for decades to come. In the meantime, Jeff’s group will conduct some analyses of the LTEE lines even before they take over the daily responsibilities, while my team will continue working on the lines after the handoff occurs.
Several years ago I wrote about the qualifications of scientists who would lead the LTEE into the future: “My thinking is that each successive scientist responsible for the LTEE would, ideally, be young enough that he or she could direct the project for 25 years or so, but senior enough to have been promoted and tenured based on his or her independent achievements in a relevant field (evolutionary biology, genomics, microbiology, etc.). Thus, the LTEE would continue in parallel with that person’s other research, rather than requiring his or her full effort, just like my team has conducted other research in addition to the LTEE.”
Jeff is an outstanding young scientist with all of these attributes. Two years ago he was promoted to Associate Professor with tenure in the Department of Molecular Biosciences at the University of Texas at Austin. He has expertise in multiple areas relevant to the LTEE including evolution, microbiology, genomics, bioinformatics, biochemistry, molecular biology, and synthetic biology. He directs a substantial team of technicians, postdocs, and graduate students, which will provide ample coverage for the daily LTEE transfers (including weekends and holidays). Last but not least, Jeff has participated in the LTEE and made many contributions to it including:
Participated in propagating the LTEE lines and related activities while he was a postdoc in my lab from 2006 to 2010.
Authored many papers using samples from the LTEE, including almost all of the papers that have analyzed genome sequences, as well as several recent papers examining the genetic underpinnings of the ability to use citrate that evolved in one lineage.
Developed the open-source breseq computational pipeline for comprehensively identifying mutations that distinguish ancestral and evolved genomes.
Someone might reasonably ask if the LTEE will work in the same way when it is moved to another site. The answer is yes: the environment is simple and defined, so it is readily reproduced. Indeed, I moved the LTEE from UC-Irvine to MSU many years ago, the lab has moved between buildings here at MSU, and we’ve shared strains with scientists at many other institutions, where measurements and inferences have been satisfactorily reproducible. As an additional check, Jeff’s team at UT-Austin ran a set of the competition assays that we use to measure the relative fitness of evolved and ancestral bacteria, and we compared the new data to data that we had previously obtained here at MSU. The two datasets agreed well, in line with the inherent measurement noise in assessing relative fitness. Fitness is the most integrative measure of performance of the LTEE populations, and it is potentially sensitive to subtle differences in conditions. These results provide further evidence that, when the time comes, the LTEE can continue its journey of adaptation and innovation in its new home.
Luckily, we don’t have to go back to the beginning; the LTEE wouldn’t have survived if we did. We freeze whole-population samples every 75 days, and those provide the backups that keep us going when needed.
So the LTEE is 32 years old today. The evolving bacterial lineages, though, are a bit younger, at a little over 30 years (~11,000 days of growth / 365 days per year), because the populations have spent some of the intervening time frozen. I prefer to think of them as timeless, though … having survived in and adapted to their tiny flask worlds for more than 73,000 generations.
Here’s grad student and lab manager Devin Lake doing today’s transfer.
And here’s Devin & me with the lab notebook. Devin is pointing to today’s entries.
And here’s what we wrote:
For those with pathogens on their mind (and that’s a lot of us, with the new coronavirus spreading), you might wonder: Aren’t E. coli dangerous? The short answer is only rarely. All of us have harmless or even beneficial strains of E. coli and many other bacterial species in our GI tract. The LTEE uses one of these harmless strains, one that has been studied in many labs for close to a century without problems. There are some strains of E. coli, though, that are nasty, and which are usually acquired by eating contaminated foods. So wash your raw fruits and vegetables, cook your meats, and don’t worry about the LTEE bacteria … Just wish them a happy birthday today, and many more years of scientific discovery.
Michael Behe has a new book called Darwin Devolves, published by HarperOne. Nathan Lents, Joshua Swamidass, and I wrote a review of that book for the journal Science. (You can find an open-access version of our review here.) As our review says (in agreement with Behe), there are many examples of evolution in which genes and their functions have been degraded, sometimes yielding an advantage to the organism. Unfortunately, though, Behe largely ignores the ways that evolution generates new functions and thereby produces complexity. That’s a severe problem because Behe uses the evidence for the ease of gene degradation to support his overarching implication that our current understanding of the mechanisms of evolution is inadequate and, consequently, the field of evolutionary biology has a “big problem” and is therefore in scientific trouble.
I hope to accomplish several things in a series of posts. (I initially planned to write three posts, but it will now be more than that, as I delve deeper into several issues.) In my first post, I explained why Behe’s so-called “first rule of adaptive evolution” does not imply what he says it does about evolution writ large. In summarizing, I wrote that Behe is right that mutations that break or blunt a gene can be adaptive. And he’s right that, when such mutations are adaptive, they are easy to come by. But Behe is wrong when he implies these facts present a problem, because his thesis confuses frequencies over the short run with lasting impacts over the long haul of evolution.
In this post, I take a closer look at Behe’s “rule” and how one might decide whether or not a particular mutation is damaging to a particular gene in a particular context. I’ll then describe and discuss the example that Behe chose to illustrate his argument at the outset of his book, calling attention to the fact that his inferences were indirect, and as a result a key conclusion was quite possibly wrong. [These issues came to my attention based on work by Nathan Lents, Art Hunt and Joshua Swamidass. They voiced concerns about this example on their own blogs, here and here. I’ve now done my own reading, and in this post I attempt to provide just a tiny bit of important technical background before addressing the main concern, as I see it.]
II-A. How does one know if a mutation has damaged a gene?
Behe’s first rule of adaptive evolution says this: “Break or blunt any functional gene whose loss would increase the number of a species’ offspring.” Every biologist knows that many mutations break or reduce the functionality of genes and the products they encode. Every biologist also realizes that this can sometimes increase an organism’s fitness (i.e., its survival and reproductive success), in particular when two conditions are met. First, the function has to be one that is not—or rather, no longer—useful to the organism. For example, eyes are no longer useful to an organism whose ancestors lived above ground, but which itself now lives in perpetual darkness in a cave. Second, there must be a meaningful cost to the organism (again, in the currency of fitness) of having the functional form of the gene, and that cost must be reduced or eliminated for the mutated version of the gene. This second point means that mutations that break or blunt a particular gene—even one that is useless—are not necessarily advantageous; they might instead be selectively neutral, such as when an encoded protein is still expressed but, for example, has diminished activity on a substrate that isn’t even present. Therefore, compelling evidence for a broken or blunted gene in a particular lineage suggests that the gene’s function is under what evolutionary biologists call “relaxed” selection—relaxed because some capability that was useful during the history of a lineage is no longer important under the organisms’ present circumstances. However, that does not mean that the loss or diminution of the capability necessarily provided any advantage; instead, the gene could have decayed by the random fixation of mutations that were entirely inconsequential for fitness.
Two very important issues center on (i) how an observer can tell whether a particular mutation breaks or blunts a gene; and (ii) how that observer can determine whether the mutation is advantageous. In short, neither inference is ironclad without an in-depth case-by-case investigation, although there are shortcuts that biologists often take because they make sense and are usually sound, provided one takes care to understand the potential limitations of the inference. To characterize the biochemical consequences of a mutation, for example, the gold standard would be to perform detailed analyses of the activities of proteins encoded by different forms (alleles) of the same gene. That’s difficult, technical work.
But as I said, there are shortcuts that allow scientists to draw reasonable inferences in some cases. For example, a mutation that generates a premature stop codon (a so-called “nonsense” mutation) usually eliminates the encoded protein’s function. However, there are exceptions, such as when the premature stop is very near the end of the gene. It’s also possible that a truncated protein might even have some new activity and function, or that it might accumulate additional mutations that produce a new activity. That’s unlikely in any one case, but a lot of unlikely things can happen over the vast scales of space and time over which evolution has operated. As the Nobel laureate François Jacob famously wrote years ago, “natural selection does not work as an engineer works. It works like a tinkerer—a tinkerer who does not know exactly what he is going to produce but uses whatever he finds around him whether it be pieces of string, fragments of wood, or old cardboards; in short, it works like a tinkerer who uses everything at his disposal to produce some kind of workable object.”
At the other end of the spectrum with respect to inferred functionality, some mutations change the DNA sequence of a gene, but they have no effect on the resulting amino-acid sequence of a protein. That happens because the genetic code is redundant, with multiple codons for the same amino acid. Such mutations are called “synonymous” and they are generally presumed to be neutral precisely because they don’t change a protein. Once again, however, there are some exceptions to this usually reliable inference; a synonymous mutation could affect, for example, the rate at which the protein is produced and even its propensity to fold into a specific conformation.
In the middle ground between these (usually) clear-cut extremes are the cases where a mutation produces an amino-acid substitution in the encoded protein. Does that mutation change the protein’s activity? If it does, is it necessarily damaging to the protein and/or to the organism with that altered protein? Biochemical and structural studies of proteins have shed light on this issue by identifying so-called “active sites” of many proteins—positions in the structure of a protein molecule where it interacts with a substrate and facilitates a chemical reaction. Mutations in and around active sites are more likely to affect a protein’s activity than ones that are far away. Also, even at the same site in a protein, different mutations can have more or less pronounced effects on the protein’s activity, depending on whether the substitution alters the charge and/or size of the amino acid at that site.
Computational biologists have developed tools that take into account these types of information, which can be used to draw tentative inferences or make predictions about the likely effect of a specific mutation. Not surprisingly, one application is for understanding possible health effects of genetic variation in humans. For example, are certain variants in some gene likely to affect an individual’s susceptibility to cardiovascular disease?
One such tool is called PolyPhen-2. The website says: “PolyPhen-2 (Polymorphism Phenotyping v2) is a software tool which predicts possible impact of amino acid substitutions on the structure and function of a human protein using straightforward physical and comparative considerations.” In addition to using structural information described above, it also uses information on whether a given site is highly conserved (little or no variation) or quite variable across humans and related species for which we have information. Why does it use that information? In essence, the program assumes that evolution has optimized a given protein’s activity for whatever it does in humans, related species, and our common ancestors. If a particular site in a protein varies a lot, according to that implicit assumption, the variants probably aren’t harmful because, well, if they were, then those lineages would have died out. If a site is hardly variable at all, by contrast, it’s presumably because mutants at those sites damaged the protein’s important function and led to the demise of those unfortunate lineages.
All that makes a lot of good sense … provided the protein of interest is performing the same function, and with the same optimal activities, in everybody and every species used in the analysis. Let’s look now at a specific case that Behe chose to highlight in his book.
II-B. The APOB gene in polar bears
Behe sets the stage for his rule—“break or blunt any functional gene whose loss would increase the number of a species’ offspring”—by summarizing the results of a study by Shiping Liu and coauthors that compared the genomes of polar bears and brown bears. Their paper examined mutations that distinguish these two species. The authors identified a set of mutations that had accumulated along the branch leading to modern polar bears, and in a manner that was consistent with those changes having been beneficial to the polar bears. One of the mutated genes, which was discussed in some detail both by the paper’s authors and by Behe, is called APOB. As Liu et al. wrote (p. 789), the APOB gene encodes ApoB, “the primary lipid-binding protein of chylomicrons and low-density lipoproteins (LDL) … LDL cholesterol is a major risk factor for heart disease and is also known as ‘bad cholesterol.’ ApoB enables the transport of fat molecules in blood plasma and lymph and acts as a ligand for LDL receptors, facilitating the movement of molecules such as cholesterol into cells … The extreme signal of APOB selection implies an important role for this protein in the physiological adaptations of the polar bear.”
As part of their study, Liu et al. analyzed the polar-bear version of the APOB gene using the PolyPhen-2 computational tool described above. Roughly half the mutations in APOB were categorized by that program as “possibly damaging” or “probably damaging,” and the rest were called “benign.” Behe then concluded that some of the mutations had damaged the protein’s function, and that these mutations were beneficial in the environment where the polar bear now lives. In other words, Behe took this output as strong support for his rule.
So what’s the problem? The PolyPhen-2 program, as I explained, is designed to identify mutations that are likely to affect a protein’s structure and therefore its function. It assumes such mutations damage (rather than improve) a protein’s function because structurally similar mutations are rare in humans and other species used for comparison. It does so because it presumes that natural selection has optimized the protein to perform a specific function that is the same in all cases, so that changes must be either benign or damaging to the protein’s function. In fact, the only possible categorical outputs of the program are benign, possibly damaging, and probably damaging. The program simply cannot detect or suggest that a protein might have some improved activity or altered function.
The authors of the paper recognized these limiting assumptions and their implications for the evolution of polar bears. In fact, they specifically interpreted the APOB mutations as follows (p. 789): “… we find nine fixed missense mutations in the polar bear … Five of the nine cluster within the N-terminal βα1 domain of the APOB gene, although the region comprises only 22% of the protein … This domain encodes the surface region and contains the majority of functional domains for lipid transport. We suggest that the shift to a diet consisting predominantly of fatty acids in polar bears induced adaptive changes in APOB, which enabled the species to cope with high fatty acid intake by contributing to the effective clearance of cholesterol from the blood.” In a news piece about this research, one of the paper’s authors, Rasmus Nielsen, said: “The APOB variant in polar bears must be to do with the transport and storage of cholesterol … Perhaps it makes the process more efficient.” In other words, these mutations may not have damaged the protein at all, but quite possibly improved one of its activities, namely the clearance of cholesterol from the blood of a species that subsists on an extremely high-fat diet.
It appears Behe either overlooked or ignored the authors’ interpretation. Determining whether those authors or Behe are right would require in-depth studies of the biochemical properties of the protein variants, their activities in the polar bear’s circulatory system, and their consequences for survival and reproductive success on the bear’s natural diet. That’s a tall order, and we’re unlikely to see such studies because of the technical and logistical challenges. The point is that many proteins, including ApoB, are complex entities that have multiple biochemical activities (ApoB binds multiple lipids), the level and importance of which may depend on both intrinsic (different tissues) and environmental (dietary) contexts. In this example, Behe seems to have been too eager and even determined to describe mutations as damaging a gene, even when the evidence suggests an alternative explanation.
[The picture below shows a polar bear feeding on a seal. It was posted on Wikipedia by AWeith, and it is shown here under the indicated Creative Commons license.]
This post follows up on my post from yesterday, which was about choosing a dilution factor in a microbial evolution experiment that avoids the loss of too many beneficial mutations during the transfer bottleneck.
If we only want to maximize the cumulative supply of beneficial mutations that survive dilution, then following the reasoning in yesterday’s post, we would choose the dilution factor (D) to maximize g × Ne = g^2 × Nmin = g^2 × Nmax / 2^g, where Nmax is a constant (the final population size) and D = 1 / 2^g. Thus, we want to maximize g^2 / 2^g for g > 0, which gives g ≈ 2.885 and D ≈ 0.1354, in agreement with the result of Wahl et al. (2002, Genetics), as noted in a tweet by Danna Gifford.
The populations would therefore be diluted and regrow by ~7.4-fold each transfer cycle. But as discussed in my previous post, this approach does not account for the effects of clonal interference, diminishing-returns epistasis, and perhaps other important factors. And if I had maximized this quantity, the LTEE would only now be approaching a measly 29,000 generations!
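Here’s a minimal Python sketch of that optimization, for anyone who wants to verify the numbers:

```python
import math

# Maximize the per-cycle supply of surviving beneficial mutations,
# g * Ne = g^2 * Nmax / 2^g. With Nmax constant, that means maximizing
# g^2 / 2^g; setting the derivative to zero gives g = 2 / ln(2).
g_opt = 2 / math.log(2)

print(g_opt)            # ~2.885 generations per cycle
print(2 ** -g_opt)      # D ~0.1354, matching Wahl et al. (2002)
print(2 ** g_opt)       # ~7.4-fold dilution and regrowth per cycle
print(10_000 * g_opt)   # ~28,850 generations after 10,000 daily transfers
```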
So let’s not be purists about maximizing the supply of beneficial mutations that survive bottlenecks. There’s clearly also a “wow” factor associated with having lots and lots of generations. This wow factor should naturally and powerfully reflect the increasing pleasure associated with more and more generations. So let’s define wow = g^e, which is both natural and powerful. Therefore, we should maximize wow × g^2 / 2^g = g^(2+e) / 2^g, which provides the perfect balance between the pleasure of having lots of generations and the pain of losing beneficial mutations during the transfer bottlenecks.
It turns out that the 100-fold dilution regime for the LTEE is almost perfect! It gives a value for wow × g^2 / 2^g of 75.93. You can do a tiny bit better, though, with the optimal ~112-fold dilution regime, which gives a value of 76.03.
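And here’s the corresponding sketch for the wow-adjusted objective, just to check those two values:

```python
import math

def objective(g):
    # wow * supply = g^e * g^2 / 2^g = g^(2+e) / 2^g
    return g ** (2 + math.e) / 2 ** g

# Setting the derivative to zero gives g = (2 + e) / ln(2).
g_opt = (2 + math.e) / math.log(2)

print(2 ** g_opt)                  # ~112-fold dilution at the optimum
print(objective(g_opt))            # ~76.03
print(objective(math.log2(100)))   # ~75.93 for the LTEE's 100-fold regime
```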
Every day, we propagate the E. coli populations in the long-term evolution experiment (LTEE) by transferring 0.1 ml of the previous day’s culture into 9.9 ml of fresh medium. This 100-fold dilution and regrowth back to stationary phase—when the bacteria have exhausted the resources—allow log2 100 = 6.64 generations (doublings) per day. We round that to six and two-thirds generations, so every 15 days equals 100 generations and every 75 days is 500 generations.
A few weeks ago, I did the 10,000th daily transfer, which corresponds to 66,667 generations. Not bad! But as I was walking home today, I thought about one of the decisions I had to make when I was designing the LTEE. What dilution factor should I use?
If … if I had chosen to use a 1,000-fold dilution instead of a 100-fold dilution, the LTEE would be past 100,000 generations. That’s because log2 1,000 = ~10 generations per day. In that case, we’d have reached a new power of 10, which would be pretty neat. As it is, it will take us (or rather the next team to take over the LTEE) another 14 years or so to get there.
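Here’s the generation bookkeeping in a few lines of Python:

```python
import math

# 100-fold dilution: log2(100) generations (doublings) per day.
print(math.log2(100))      # ~6.64, which we round to 20/3
print(15 * 20 / 3)         # 100 generations every 15 days
print(10_000 * 20 / 3)     # ~66,667 generations after 10,000 transfers

# 1,000-fold dilution: log2(1000) generations per day.
print(math.log2(1000))     # ~9.97, call it 10
print((100_000 - 66_667) / (20 / 3) / 365.25)   # ~13.7 years to 100,000
```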
I’ll discuss my thinking as to why I chose a 100-fold dilution factor in a bit. But first, here’s a question for you, which you can vote on in the poll below.
Let’s say that we had done a 1,000-fold daily dilution all along. And let’s say we measured fitness (relative to the ancestral strain, as we usually do) after 10,000 days. Do you think that the mean fitness of the evolved populations subjected to 1,000-fold dilutions after 100,000 generations (on day 10,000) would be higher or lower than that of the evolved populations subjected to 100-fold dilutions after 66,667 generations (also day 10,000)?
I’ll begin by mentioning a couple of practical issues, but then set them aside, as they aren’t so interesting. First, a 100-fold dilution is extremely simple to perform given the volumes involved (i.e., 0.1 and 9.9 ml). And the LTEE was designed to be simple, in order to increase its reliability. A 1,000-fold dilution isn’t quite as easy, as it involves either an intermediate dilution or the transfer of a smaller volume (0.01 ml), which in my experience tends to be a bit less accurate. Second, the relative importance of the various phases of growth—lag, exponential, transition, and stationary—for fitness would change a bit (Vasi et al., 1994).
Setting those issues aside, here was my thinking about the dilution factor when I planned the LTEE. In asexual populations that start without any standing genetic variation, the extent of adaptive evolution depends on both the number of generations and the supply rate of beneficial mutations. The supply rate of beneficial mutations, in turn, depends on the mutation rate (m) times the fraction of mutations that are beneficial (f) times the effective population size (Ne).
There are many different uses and meanings of effective population size in population genetics, depending on the problem at hand: the question is “effective” with respect to what process? Without going into the details, we would like to express Ne such that it takes into account the expected loss of beneficial mutations during the daily dilutions. To a first approximation, theory shows that the relevant Ne is equal to the product of the “bottleneck” population size right after the dilution (Nmin) and the number of generations (g) between Nmin and the final population size during each transfer cycle (Lenski et al., 1991).
The final population size in the LTEE is ~5 x 10^8 cells (10 ml x 5 x 10^7 cells per ml), and it is the same regardless of the dilution factor, provided that the bacteria have enough time to reach that density between transfers. The 1,000-fold dilution regime would reduce Nmin by 10-fold relative to the 100-fold regime, although the 50% increase in the number of generations per cycle would partially offset that reduction with respect to the effective population size. Nonetheless, Ne would be ~6.7-fold higher in the 100-fold regime than in the 1,000-fold regime.
The greater number of generations in 10,000 days under the 1,000-fold regime would also increase the cumulative supply of beneficial mutations by 50%. Nonetheless, the extent of adaptive evolution, which is (under this simple model) proportional to the product of the elapsed generations and Ne, would be ~4.4-fold greater under the 100-fold dilution regime than the 1,000-fold dilution regime. So that’s why I chose the 100-fold dilution regime … I was more interested in making sure we would see substantial adaptation than in getting to a large number of generations.
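A short Python sketch of this comparison, using the numbers above:

```python
import math

N_MAX = 5e8   # final population size: 10 ml x 5e7 cells per ml

def regime(dilution_factor):
    g = math.log2(dilution_factor)    # generations per daily cycle
    n_min = N_MAX / dilution_factor   # bottleneck size after dilution
    ne = g * n_min                    # effective size (Lenski et al., 1991)
    return g, ne

g100, ne100 = regime(100)
g1000, ne1000 = regime(1000)

print(ne100 / ne1000)   # ~6.7-fold higher Ne under the 100-fold regime

# Extent of adaptation ~ elapsed generations x Ne over 10,000 days:
print((10_000 * g100 * ne100) / (10_000 * g1000 * ne1000))   # ~4.4
```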
Now you know why the LTEE has only reached 67,000 or so generations.
Of course, I could also have chosen a 10-fold regime, and by this logic the populations might have achieved even higher fitness levels. I could also have chosen a much higher dilution factor; even with a 1,000,000-fold dilution (log2 1,000,000 ≈ 20), the ancestral strain could double 20 times in 24 h, allowing the populations to persist. Or at least they could persist for a while. With severe bottlenecks, natural selection becomes unable to prevent the accumulation of deleterious mutations by random drift, so that fitness declines. And if fitness declines to the degree that the populations can no longer double 20 times in 24 h, then the bacteria would go extinct as the result of a mutational meltdown.
Returning to the cases where the bottlenecks are not so severe, the theory that led me to choose the 100-fold dilution regime ignores a number of complicating factors, such as clonal interference (Gerrish and Lenski, 1998; Lang et al., 2013; Maddamsetti et al., 2015) and diminishing-returns epistasis (Khan et al., 2011; Wiser et al., 2013; Kryazhimskiy et al., 2014). It’s predicated, I think, on the assumption that the supply rate of beneficial mutations limits the speed of adaptation.
When the LTEE started, I had no idea what fraction of mutations would be beneficial. I think it was generally understood that beneficial mutations were very rare. But the LTEE and other microbial evolution experiments have shown that beneficial mutations, while rare, are not so rare as we once thought, especially once an experiment has run long enough (Wiser et al., 2013) or otherwise been designed (Perfeito et al., 2007; Levy et al., 2015) to allow beneficial mutations with small effects to be observed and counted.
So I think it remains an open question whether my choice of the 100-fold dilution regime was the right one, in terms of maximizing fitness gains.
And that makes me think about redoing the LTEE. OK, maybe not starting all over, as we do have a fair bit invested in the last 29 years of work. But maybe expanding the LTEE on the fly, as it were. We could, for example, expand from 12 populations to 24 populations without too much trouble. We’d keep the 12 original populations going, of course, but we’d spin off 12 new ones in a paired design (i.e., one from each of the 12 originals) where we changed the dilution regime. What do you think? Is this a good idea for a grant proposal? And if so, what dilution factor would you suggest we add?
Feel free to expand on your thoughts in the comments section below!
Note: See my next post for a bit more of the mathematics, along with a tongue-in-cheek suggestion for combining the effects of the beneficial mutation supply rate and a “wow” factor associated with having lots of generations.