Evolution goes viral! (And how real science works)

This is the fourth in a series of posts about a new book by Michael Behe, Darwin Devolves. Behe is a leading proponent of intelligent-design creationism (IDC), which asserts that known processes cannot adequately account for evolution and, therefore, some intelligent agent must be involved in the process. Behe is a professor of biochemistry, which gives him knowledge and credentials that most IDC advocates do not have. However, my posts explain why I think his logic is unsound and his evidence weak and biased.

In brief, Behe argues that random mutation and natural selection are almost entirely degradative forces that break or blunt the various functions encoded by genes, producing short-term advantages that are so pervasive that they prevent constructive adaptations, which he claims are very unlikely to emerge in the way that evolutionary biologists have proposed. Unlike young-Earth creationists, Behe accepts the descent of living species from common ancestors over billions of years. To reconcile these seemingly conflicting views, Behe invokes an intelligent agent (presumably God, though IDC proponents avoid that word so that their ideas might appear to be scientific) who has purposefully guided evolution over its long history by somehow inserting new genetic information into chosen lineages along the way. To make his strange argument, Behe works very, very hard to convince readers that standard evolutionary processes are (i) really, really good at degrading functions, and (ii) really, really bad at producing anything new.

In my first post, I explained that Behe’s arguments confuse and conflate what is easy and commonplace over the short run (i.e., mutations that break or blunt functional genes) with the lasting impacts of less frequent but constructive adaptations (i.e., new functions and subsequent diversification) over the long haul of evolution. My second post examined a case involving polar bears, which Behe highlighted as a compelling example of degradative evolution, but where a careful review of the science suggests that gene function improved. Behe also highlighted results from my lab’s long-term evolution experiment with bacteria, but in my third post I explained that he overstates his case by downplaying or dismissing evidence that runs counter to his argument.

In this post, I’ll discuss an experiment that Behe ignores in Darwin Devolves. (Behe clearly knows the work, because he wrote about it on the Discovery Institute’s anti-evolution blog. But as usual, he spun the story to obscure the problems for his arguments, all the while accusing the scientists who collect data to test hypotheses of spinning the story.) In fact, as I’ll explain, the results also undermine the claims in Behe’s two previous books, Darwin’s Black Box and The Edge of Evolution, about the supposed shortcomings of evolution.

(Before presenting this experiment, I want to mention briefly two other papers that readers interested in what else Behe missed or downplayed might want to read. First, Rees Kassen posted a preprint of a paper on “Experimental evolution of innovation and novelty.” He reviews empirical evidence and discusses conceptual issues bearing on the origin of new functional abilities observed in many experiments with bacteria and other microbes. Second, Chris Adami, Charles Ofria, Rob Pennock, and I published a paper over 15 years ago that demonstrated the logical fallacy of Behe’s assertions about irreducible complexity. In Darwin Devolves, Behe mentions that paper derisively, without addressing its substance, as follows: “A computer simulation of computer program development that ignores biology entirely.” A more accurate statement would have been: “Computer programs can evolve, by random mutation and natural selection, the ability to perform complex functions, which shows that the concept of irreducible complexity is total nonsense.” The rest of this post is longer than I planned, because I want to provide background for readers who aren’t microbiologists, and because—like so much of science—it’s an interesting story with unexpected twists and turns along the way.)

IV. Phage lambda evolves a new capability without breaking anything

There are a lot of viruses in the world. Fortunately, most of them don’t infect humans. Many of them infect bacteria, as it so happens. In fact, before antibiotics were used as therapeutic agents, there was hope that bacteriophages (“bacteria eaters”), or phages for short, would be useful in treating diseases. And now, with the evolution of pathogenic bacteria that are resistant to many or all available drugs, researchers are reconsidering the possibility of using phages to treat some infections.

My lab is best known for the long-term evolution experiment (LTEE) with E. coli bacteria. But over the years, my students have also performed other experiments with a variety of microbes, including some viruses that infect E. coli. One of those viruses is called lambda. For decades, lambda was probably the most intensively studied virus on the planet—just as E. coli was a model for understanding bacterial genetics and physiology, lambda became a model for understanding viral genetics and infection.

One reason lambda became a hit was because it has an interesting life cycle. After lambda enters a bacterial cell (and assuming the cell lacks some internal defenses), the virus can do one of two things. It can commandeer the host, hijacking the cellular machinery to produce a hundred or so progeny before bursting the host cell and releasing its “babies” to find new cells to infect. Alternatively, the virus’s DNA may be integrated into the host’s chromosome, hiding out and being replicated alongside the host’s genes—though the virus may later exit the chromosome and reactivate its lethal program. (Pretty neat, and a bit scary, right?)

Well, as cool as that is, it’s not what my student Justin Meyer (now on the faculty at UCSD) was studying. He was using a strain of lambda that can’t integrate into the bacterial cell’s chromosome—a successful infection takes only the first route, killing the cell in the course of making more viruses. Justin was studying this simpler virus because we were interested in whether the evolution of the bacterial hosts in response to the presence of lambda virus might depend on what food we gave the bacteria.

Let’s back up and explain why that might matter. Viruses like lambda don’t just glom onto any part of a bacterial cell; instead, they adsorb to specific receptors on the cell’s surface, with a successful attachment triggering the injection of their DNA into the cell. Lambda recognizes a particular cell-surface protein called LamB. (Despite decades of study of the interaction between lambda and E. coli, including experiments that specifically sought to see whether mutants could exploit other receptors, no one had ever seen lambda use any other receptor.) Of course, E. coli doesn’t make LamB for the sake of the virus. The LamB protein is one of several “porin” proteins that E. coli produces, and which serve as channels to allow molecules, like sugars, to cross the outer cell envelope. (Other proteins transport sugars across the inner cell membrane.) LamB, in particular, is a fairly large channel that allows the sugars maltose and maltotriose to enter the cell. Maltose and maltotriose are made of two and three linked glucose molecules, respectively. Glucose, being smaller, can readily enter a cell via smaller channels. When growing on glucose, E. coli cells don’t bother to produce much LamB protein. However, when cells sense that maltose or maltotriose, but not glucose, is present they activate the gene that encodes LamB. In doing so, however, the cells become more vulnerable to lambda, because that protein serves not only to transport these larger sugars but also as the receptor for the virus.

Coming back to Justin Meyer’s research, we wanted to see how different sugars affected the bacteria’s evolutionary response to lambda. (Justin and I have a paper in press comparing outcomes across the glucose, maltose, and maltotriose environments.) We reasoned that, if the bacteria were fed glucose, they could damage or delete the lamB gene that encodes the LamB protein. If the bacteria mutated the LamB protein, then the virus might counter with a mutation that restored its affinity for the mutated protein; but if the bacteria deleted or otherwise destroyed the LamB protein, we reasoned the virus would go extinct.

However, the first experiment using only the glucose treatment played out differently than what we expected—that’s science, and that’s why you do experiments—and it set Justin’s research off in a new direction. Instead of mutating the lamB gene, the bacteria evolved resistance to the virus by mutating another gene, called malT, that encodes a protein that activates the production of LamB. The viruses didn’t go extinct, however, because there was some residual, low-level expression of the LamB protein. That was enough to keep the viruses going, which also meant they could keep evolving.

To make a long story short, after just 8 days, one of six lambda populations evolved the ability to infect malT-mutated cells by attaching to a different surface protein, one called OmpF (short for outer membrane protein F). This evolved lambda virus could now infect E. coli cells through the original receptor, LamB, or this new one, OmpF. It had gained a new functional capability.

To understand this change, Justin sequenced the genome of this virus. He found a total of 5 mutations compared to the lambda virus with which he had begun. All 5 mutations were in the same gene, one that encodes the J protein in the “tail” of the virus that interacts with the cell surface. He also sequenced the J gene for some other viruses isolated from the same population. He found one virus that had 4 of these 5 mutations, but which could not infect cells via the OmpF receptor. Did that mean that only one of the 5 mutations was necessary to evolve this new function?

As it turns out, the answer is no. To better understand what had happened, Justin scaled up his experiments and ran an additional 96 replicates with lambda, E. coli, and glucose. In 24 cases, the viruses evolved the new mode of infection within three weeks. Justin sequenced the J gene from the viruses able to target OmpF in those 24 cases, and in 24 other cases where the virus could still use only the LamB receptor. He found that all 24 with the new capability had at least 4 mutations; these included 2 changes that were identical in all 24 lines, a third that further mutated one of the same codons (sets of 3 DNA bases that specify a particular amino acid to be incorporated into a protein), and another mutation that was always within a span of 11 codons. All of these mutations cause amino-acid substitutions near the end of the J protein, which is known to interact with the LamB receptor. The J protein is over 1100 amino acids in length, and so this concentration and parallelism (repeatability across lineages) is striking and strongly implies that natural selection favored these mutations.
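To get a feel for just how striking that parallelism is, here is a back-of-envelope calculation of my own (not from the paper). It assumes, purely for illustration, that mutations land uniformly at random along the J gene; real mutational biases would change the number somewhat, but not its astronomical smallness:

```python
# Back-of-envelope: how surprising is it that 24 independent lineages all
# acquired a mutation in the same codon of the J gene, if mutations landed
# uniformly at random along the gene? (Illustrative assumption, mine, not
# the paper's analysis; the ~1100-codon length is from the post.)

codons = 1100          # approximate length of the J protein in codons
lineages = 24          # independent populations that evolved the new ability

# Condition on the first lineage's mutation defining the target codon;
# each remaining lineage must then hit that same codon by chance.
p_same_codon = (1 / codons) ** (lineages - 1)

print(f"P(all {lineages} lineages hit one codon by chance) ~ {p_same_codon:.3e}")
```

Even allowing generously for mutational hotspots, a probability of roughly 10 to the minus 70 is why this kind of repeatability is read as the signature of selection, not chance.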

Remember, too, that nothing is broken. These viruses can now use both the original LamB receptor and the alternative OmpF receptor. (This fact was demonstrated by showing that the viruses can grow on two different constructed host genotypes, one completely lacking LamB and the other completely lacking OmpF.)

None of the 24 viruses that had not evolved the ability to use the OmpF receptor had all 4 of these mutations. However, three of them shared 3 of the 4 mutations with viruses that had acquired that new ability. And yet, none of those had any capacity to grow on cells that lacked the LamB receptor. In other words, the set of all 4 of these mutations was needed to produce this new ability—no subset could do the job. (We initially lacked one of the four possible viral genotypes carrying just 3 of the mutations; later work confirmed that all four mutations are required.)

At first glance, it seems like none of the viral lineages should have been able to acquire all 4 mutations, at least if you accept the flawed reasoning from Behe’s previous book, The Edge of Evolution. If you need all 4 mutations for the new function, so the thinking goes, and if none of them provide any degree of that function, then you would need all 4 mutations to occur in one lineage by chance, which is extremely unlikely. (How unlikely is difficult to calculate precisely. To get some inkling, consider that none of the 48 sequenced J genes—including both those that did and did not evolve the new capability—had even one synonymous mutation. Synonymous mutations don’t change the amino-acid sequence of an encoded protein, and so they provide a benchmark for the accumulation of selectively neutral mutations.)
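To make the “all four at once by chance” scenario concrete, here is a rough sketch. The per-site mutation rate and the population figures are hypothetical placeholders of mine (the post doesn’t give them), chosen only to show the order of magnitude involved:

```python
# Rough sense of why four *simultaneous* chance mutations are effectively
# impossible. The per-site mutation rate (~1e-8 per replication) and the
# population numbers below are assumed placeholders, not measured values
# for lambda.

mu = 1e-8              # assumed per-site mutation rate per replication
required_sites = 4     # specific mutations the "all at once" scenario needs

p_all_at_once = mu ** required_sites      # one lineage, one replication
print(f"P(4 specific mutations in one replication) ~ {p_all_at_once:.0e}")

# Even granting a huge population of 1e10 viruses replicating daily for
# three weeks, the expected number of such quadruple mutants is negligible:
expected = p_all_at_once * 1e10 * 21
print(f"Expected quadruple mutants over the experiment ~ {expected:.0e}")
```

That is the arithmetic behind Behe’s intuition; the resolution, described below, is that the mutations did not have to occur simultaneously, because the intermediates were themselves beneficial.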

And yet, 24 of the 96 lineages did just that—they evolved the new ability, and in just a few weeks’ time. If you’re into intelligent design, then I guess you’d have to conclude that some purposeful agent was pretty darn interested in helping the viruses vanquish the bacteria. If you’re a scientist, though, you’re trained to think more carefully and look for natural explanations—ones that you can actually test.

So how could 4 mutations arise so quickly in the same lineage? Natural selection. But wait, didn’t Justin find that all 4 of those mutations were required for the virus to exploit the new OmpF receptor? Yes, he did.

Our hypothesis was that the mutations that set the stage for the virus to evolve the ability to target OmpF were beneficial because they improved lambda’s ability to use its original LamB receptor. But wait, that’s the receptor lambda has always used. Shouldn’t the virus already be perfectly adapted to using that receptor? How can there be room for improvement?

If you’ve read my posts on polar bears and bacteria, you’ve probably got the idea. When the environment changes, all bets are off as to whether a function is optimally tuned to the new conditions. Lambda did not evolve in the same medium where Justin ran his experiments; and while lambda certainly encountered E. coli and the LamB receptor in its history, the cell surfaces the virus had to navigate in nature were more heterogeneous than what it encountered in the lab. In other words, there might well be scope for the viral J protein to become better at targeting the LamB receptor under the new conditions.

To an evolutionary biologist, this hypothesis is so obvious, and the data on the evolution of the J protein sequence so compelling, that it scarcely needs testing. Nonetheless, it’s always good to check one’s reasoning by collecting new data, and another talented student, Alita Burmeister (now a postdoc at Yale), joined the project and did just that. She competed lambda strains with some (but not all) of the mutations needed to use OmpF against a lambda strain that had none of those mutations. She studied six “intermediate” viruses, each of them isolated from an independent population that later evolved the ability to use OmpF.

Alita ran two sets of competitions between the evolved and ancestral viruses. In one set, the viruses fought over the ancestral bacterial strain; in the other set, they competed for a bacterial strain that had previously coevolved with lambda and become more resistant to infection. Four of the six evolved intermediate viruses outcompeted their ancestor for the naïve bacteria, and all six prevailed when competing for the tough-to-infect coevolved host cells. Alita ran additional experiments showing that the intermediates were better than the ancestral virus at adsorbing to bacterial cells—the precise molecular function that the J protein serves. These results clearly support the hypothesis that the first few mutations in the evolving virus populations improved their ability to infect cells via the LamB receptor.

Natural selection did its thing, in other words, discovering mutations that provided an advantage to the viruses. Some of the resulting viruses—those with certain combinations of three mutations—just happened to be poised in the space of possible genotypes such that a fourth mutation gave them the new capacity to use OmpF.

Now let’s step back and think about what this case says about the validity of the arguments that Behe has made in his three books.

Anybody remember Behe’s first book, Darwin’s Black Box, published in 1996? There, Behe claimed evolution doesn’t work because biological systems exhibit so-called “irreducible complexity,” which he defined as “… a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning.” Evolution can’t explain these functions, according to Behe, because you need everything in place for the system to work. Strike one! Lambda’s J protein required several well-matched, interacting amino acids to enable infection via the host’s OmpF receptor. Removing any one of them leaves the virus unable to perform that function. (Alas, Behe’s argument wasn’t merely mistaken; it also wasn’t new. Since Darwin, and as explained in increasing detail by later biologists, we’ve known that new functions evolve by coopting and modifying genes, proteins, and other structures that previously served one function to perform a new function.)

The Edge of Evolution, Behe’s second book, claimed that evolution has a hard time making multiple constructive changes, implying the odds are heavily stacked against this occurring. Strike two!! Lambda required four constructive changes to gain the ability to use OmpF, yet dozens of populations in tiny flasks managed to do this in just a few weeks. That’s because the intermediate steps were strongly beneficial to the virus, so that each step along the way proceeded far faster than by random mutation alone.
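The claim that strongly beneficial intermediate steps proceed far faster than chance alone can be illustrated with a minimal Wright-Fisher-style simulation. To be clear, this is my own toy model, not the paper’s analysis; the population size and selection coefficient are arbitrary round numbers:

```python
import random

# Toy Wright-Fisher sketch (my illustration, not the paper's model): a
# mutant with a large fitness advantage sweeps through a modest population
# in a few dozen generations, so several such sweeps in series fit easily
# within a few weeks of rapid viral generations.

def generations_to_sweep(pop_size=10_000, advantage=0.5, seed=1):
    """Generations for one beneficial mutant (relative fitness 1+advantage)
    to exceed 95% frequency, starting from a single copy."""
    random.seed(seed)
    freq = 1 / pop_size
    gens = 0
    while freq < 0.95:
        # deterministic selection step: new expected frequency
        w = freq * (1 + advantage)
        freq = w / (w + (1 - freq))
        # binomial sampling (genetic drift) around that expectation
        freq = sum(random.random() < freq for _ in range(pop_size)) / pop_size
        gens += 1
        if freq == 0:              # mutant lost by drift early on;
            freq = 1 / pop_size    # re-seed and keep counting generations
    return gens

print(f"Generations for one strongly beneficial sweep: {generations_to_sweep()}")
```

Run with these arbitrary parameters, a sweep takes on the order of tens of generations. By contrast, waiting for even one specific neutral mutation to drift to fixation would take on the order of the population size in generations.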

Darwin Devolves says that adaptive evolution can occur, but that it does so overwhelmingly by breaking things. Strike three!!! The viruses that can enter the bacterial cells via the OmpF receptor are not broken. They are still able to infect via the LamB receptor and, in fact, they’re better at doing so than their ancestors were in the new environment. (In his blog post after our paper was published in Science, Behe used the same sleight of hand he used to downplay the evolution of the new ability to use citrate in one LTEE population. That is, Behe called lambda’s new ability to infect via the OmpF receptor a modification of function, instead of a gain of function, based on his peculiar definition, whereby a gain of function is claimed to occur only if an entirely new gene “poofs” into existence. However, that’s not the definition of gain-of-function that biologists use, which (as the term implies) means that a new function has arisen. That standard definition aligns with how evolution coopts existing genes, proteins, and other structures to perform new functions. Behe’s peculiar definition is a blatant example of “moving the goalposts” to claim victory.)

As Nathan Lents, Joshua Swamidass, and I wrote in our book review, “Ultimately, Darwin Devolves fails to challenge modern evolutionary science because, once again, Behe does not fully engage with it. He misrepresents theory and avoids evidence that challenges him.”

If you’ve followed the logic and evidence in the three systems I’ve written about—polar bears adapting to a new diet, bacteria fine-tuning and even evolving new functions as they adapt to laboratory conditions, and viruses evolving a new port of entry into their hosts—you’ll understand why Behe’s arguments against evolution aren’t taken seriously by the vast majority of biologists. As for Behe’s arguments for intelligent design, they rest on his incredulity about what evolution is able to achieve, and they make no testable predictions about how the designer intervenes in the evolutionary process.

[The images below show infection assays for 4 lambda genotypes on 2 E. coli strains. The dark circles are “plaques”—areas in a dense lawn of bacteria where the cells have been killed by the virus. The viruses (labeled at bottom) include the ancestral lambda virus and 3 evolved genotypes. One bacterial strain expresses the LamB receptor (top row), while the other lacks the gene that encodes LamB (bottom row). All 4 viruses can infect the cells that produce LamB, but only the “EvoC” virus is able to infect the cells without that receptor. Images from Meyer et al., 2012, Science paper.]

Lambda plaque assays


On damaged genes and polar bears

Michael Behe has a new book called Darwin Devolves, published by HarperOne. Nathan Lents, Joshua Swamidass, and I wrote a review of that book for the journal Science. (You can find an open-access version of our review here.) As our review says (in agreement with Behe), there are many examples of evolution in which genes and their functions have been degraded, sometimes yielding an advantage to the organism. Unfortunately, though, Behe largely ignores the ways that evolution generates new functions and thereby produces complexity. That’s a severe problem, because Behe uses the evidence for the ease of gene degradation to support his overarching implication that our current understanding of the mechanisms of evolution is inadequate and, consequently, that the field of evolutionary biology has, as he puts it, a “big problem.”

I hope to accomplish several things in a series of posts. (I initially planned to write three posts, but it will now be more than that, as I delve deeper into several issues.) In my first post, I explained why Behe’s so-called “first rule of adaptive evolution” does not imply what he says it does about evolution writ large. In summarizing, I wrote that Behe is right that mutations that break or blunt a gene can be adaptive. And he’s right that, when such mutations are adaptive, they are easy to come by. But Behe is wrong when he implies these facts present a problem, because his thesis confuses frequencies over the short run with lasting impacts over the long haul of evolution.

In this post, I take a closer look at Behe’s “rule” and how one might decide whether or not a particular mutation is damaging to a particular gene in a particular context. I’ll then describe and discuss the example that Behe chose to illustrate his argument at the outset of his book, calling attention to the fact that his inferences were indirect, and as a result a key conclusion was quite possibly wrong. [These issues came to my attention based on work by Nathan Lents, Art Hunt and Joshua Swamidass. They voiced concerns about this example on their own blogs, here and here. I’ve now done my own reading, and in this post I attempt to provide just a tiny bit of important technical background before addressing the main concern, as I see it.]

II-A. How does one know if a mutation has damaged a gene?

Behe’s first rule of adaptive evolution says this: “Break or blunt any functional gene whose loss would increase the number of a species’ offspring.” Every biologist knows that many mutations break or reduce the functionality of genes and the products they encode. Every biologist also realizes that this can sometimes increase an organism’s fitness (i.e., its survival and reproductive success), in particular when two conditions are met. First, the function has to be one that is not—or rather, no longer—useful to the organism. For example, eyes are no longer useful to an organism whose ancestors lived above ground, but which itself now lives in perpetual darkness in a cave. Second, there must be a meaningful cost to the organism (again, in the currency of fitness) of having the functional form of the gene, and that cost must be reduced or eliminated for the mutated version of the gene. This second point means that mutations that break or blunt a particular gene—even one that is useless—are not necessarily advantageous; they might instead be selectively neutral, such as when an encoded protein is still expressed but, for example, has diminished activity on a substrate that isn’t even present. Therefore, compelling evidence for a broken or blunted gene in a particular lineage suggests that the gene’s function is under what evolutionary biologists call “relaxed” selection—relaxed because some capability that was useful during the history of a lineage is no longer important under the organisms’ present circumstances. However, that does not mean that the loss or diminution of the capability necessarily provided any advantage; instead, the gene could have decayed by the random fixation of mutations that were entirely inconsequential for fitness.

Two very important issues center on (i) how an observer can tell whether a particular mutation breaks or blunts a gene; and (ii) how that observer can determine whether that mutation is advantageous. In short, neither inference is ironclad without an in-depth case-by-case investigation, although there are shortcuts that biologists often take because they make sense and are often sound, provided one takes care to understand the potential limitations of the inference. To characterize the biochemical consequences of a mutation, for example, the gold standard would be to perform detailed analyses of the activities of proteins encoded by different forms (alleles) of the same gene. That’s difficult, technical work.

But as I said, there are shortcuts that allow scientists to draw reasonable inferences in some cases. For example, a mutation that generates a premature stop codon (a so-called “nonsense” mutation) usually eliminates the encoded protein’s function. However, there are exceptions, such as when the premature stop is very near the end of the gene. It’s also possible that a truncated protein might even have some new activity and function, or that it might accumulate additional mutations that produce a new activity. That’s unlikely in any one case, but a lot of unlikely things can happen over the vast scales of space and time over which evolution has operated. As the Nobel laureate François Jacob famously wrote years ago, “natural selection does not work as an engineer works. It works like a tinkerer—a tinkerer who does not know exactly what he is going to produce but uses whatever he finds around him whether it be pieces of string, fragments of wood, or old cardboards; in short, it works like a tinkerer who uses everything at his disposal to produce some kind of workable object.”

At the other end of the spectrum with respect to inferred functionality, some mutations change the DNA sequence of a gene, but they have no effect on the resulting amino-acid sequence of a protein. That happens because the genetic code is redundant, with multiple codons for the same amino acid. Such mutations are called “synonymous” and they are generally presumed to be neutral precisely because they don’t change a protein. Once again, however, there are some exceptions to this usually reliable inference; a synonymous mutation could affect, for example, the rate at which the protein is produced and even its propensity to fold into a specific conformation.

In the middle ground between these (usually) clear-cut extremes are the cases where a mutation produces an amino-acid substitution in the encoded protein. Does that mutation change the protein’s activity? If it does, is it necessarily damaging to the protein and/or to the organism with that altered protein? Biochemical and structural studies of proteins have shed light on this issue by identifying so-called “active sites” of many proteins—positions in the structure of a protein molecule where it interacts with a substrate and facilitates a chemical reaction. Mutations in and around active sites are more likely to affect a protein’s activity than ones that are far away. Also, even at the same site in a protein, different mutations can have more or less pronounced effects on the protein’s activity, depending on whether the substitution alters the charge and/or size of the amino acid at that site.

Computational biologists have developed tools that take into account these types of information, which can be used to draw tentative inferences or make predictions about the likely effect of a specific mutation. Not surprisingly, one application is for understanding possible health effects of genetic variation in humans. For example, are certain variants in some gene likely to affect an individual’s susceptibility to cardiovascular disease?

One such tool is called PolyPhen-2. The website says: “PolyPhen-2 (Polymorphism Phenotyping v2) is a software tool which predicts possible impact of amino acid substitutions on the structure and function of a human protein using straightforward physical and comparative considerations.” In addition to using the structural information described above, it also uses information on whether a given site is highly conserved (little or no variation) or quite variable across humans and related species for which we have information. Why does it use that information? In essence, the program assumes that evolution has optimized a given protein’s activity for whatever it does in humans, related species, and our common ancestors. If a particular site in a protein varies a lot, according to that implicit assumption, the variants probably aren’t harmful because, well, if they were, then those lineages would have died out. If a site is hardly variable at all, by contrast, it’s presumably because mutants at those sites damaged the protein’s important function and led to the demise of those unfortunate lineages.

All that makes a lot of good sense … provided the protein of interest is performing the same function, and with the same optimal activities, in everybody and every species used in the analysis. Let’s look now at a specific case that Behe chose to highlight in his book.

II-B. The APOB gene in polar bears

Behe sets the stage for his rule—“break or blunt any functional gene whose loss would increase the number of a species’ offspring”—by summarizing the results of a study by Shiping Liu and coauthors that compared the genomes of polar bears and brown bears. Their paper examined mutations that distinguish these two species. The authors identified a set of mutations that had accumulated along the branch leading to modern polar bears, and in a manner that was consistent with those changes having been beneficial to the polar bears. One of the mutated genes, which was discussed in some detail both by the paper’s authors and by Behe, is called APOB. As Liu et al. wrote (p. 789), the APOB gene encodes ApoB, “the primary lipid-binding protein of chylomicrons and low-density lipoproteins (LDL) … LDL cholesterol is a major risk factor for heart disease and is also known as ‘bad cholesterol.’ ApoB enables the transport of fat molecules in blood plasma and lymph and acts as a ligand for LDL receptors, facilitating the movement of molecules such as cholesterol into cells … The extreme signal of APOB selection implies an important role for this protein in the physiological adaptations of the polar bear.”

As part of their study, Liu et al. analyzed the polar-bear version of the APOB gene using the PolyPhen-2 computational tool described above. Roughly half the mutations in APOB were categorized by that program as “possibly damaging” or “probably damaging,” and the rest were called “benign.” Behe then concluded that some of the mutations had damaged the protein’s function, and that these mutations were beneficial in the environment where the polar bear now lives. In other words, Behe took this output as strong support for his rule.

So what’s the problem? The PolyPhen-2 program, as I explained, is designed to identify mutations that are likely to affect a protein’s structure and therefore its function. It assumes such mutations damage (rather than improve) a protein’s function because structurally similar mutations are rare in humans and other species used for comparison. It does so because it presumes that natural selection has optimized the protein to perform a specific function that is the same in all cases, so that changes must be either benign or damaging to the protein’s function. In fact, the only possible categorical outputs of the program are benign, possibly damaging, and probably damaging. The program simply cannot detect or suggest that a protein might have some improved activity or altered function.
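The point about the program’s limited vocabulary can be made concrete with a toy classifier. To be clear, this is not PolyPhen-2’s actual algorithm or score scale; the thresholds below are made up. It only mimics the shape of a conservation-based prediction to show that “improved” is never a possible verdict:

```python
# Toy illustration (NOT PolyPhen-2's real algorithm; thresholds invented)
# of a conservation-based classifier. Its output space contains only three
# labels, so it cannot, by construction, report an improved or repurposed
# protein, whatever the mutation actually does.

def classify(conservation_score):
    """Map a hypothetical 0-1 site-conservation score to a
    PolyPhen-2-style category: mutations at highly conserved sites
    are presumed damaging."""
    if conservation_score >= 0.85:
        return "probably damaging"
    elif conservation_score >= 0.5:
        return "possibly damaging"
    return "benign"

# Sweep the whole score range: the set of reachable labels is closed.
categories = {classify(s / 100) for s in range(101)}
print(sorted(categories))
```

Whatever scores the polar-bear APOB mutations received, a gain or improvement of function simply is not an answer this kind of tool can return.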

The authors of the paper recognized these limiting assumptions and their implications for the evolution of polar bears. In fact, they specifically interpreted the APOB mutations as follows (p. 789): “… we find nine fixed missense mutations in the polar bear … Five of the nine cluster within the N-terminal βα1 domain of the APOB gene, although the region comprises only 22% of the protein … This domain encodes the surface region and contains the majority of functional domains for lipid transport. We suggest that the shift to a diet consisting predominantly of fatty acids in polar bears induced adaptive changes in APOB, which enabled the species to cope with high fatty acid intake by contributing to the effective clearance of cholesterol from the blood.” In a news piece about this research, one of the paper’s authors, Rasmus Nielsen, said: “The APOB variant in polar bears must be to do with the transport and storage of cholesterol … Perhaps it makes the process more efficient.” In other words, these mutations may not have damaged the protein at all, but quite possibly improved one of its activities, namely the clearance of cholesterol from the blood of a species that subsists on an extremely high-fat diet.

It appears Behe either overlooked or ignored the authors’ interpretation. Determining whether those authors or Behe are right would require in-depth studies of the biochemical properties of the protein variants, their activities in the polar bear circulatory stream, and their consequences for survival and reproductive success on the bear’s natural diet. That’s a tall order, and we’re unlikely to see such studies because of the technical and logistical challenges. The point is that many proteins, including ApoB, are complex entities that have multiple biochemical activities (ApoB binds multiple lipids), the level and importance of which may depend on both intrinsic (different tissues) and environmental (dietary) contexts. In this example, Behe seems to have been too eager and even determined to describe mutations as damaging a gene, even when the evidence suggests an alternative explanation.

[The picture below shows a polar bear feeding on a seal.  It was posted on Wikipedia by AWeith, and it is shown here under the indicated Creative Commons license.]



Does Behe’s “First Rule” Really Show that Evolutionary Biology Has a Big Problem?

Michael Behe has a new book coming out this month called Darwin Devolves. Nathan Lents, Joshua Swamidass, and I wrote a review of that book for the journal Science. (You can also find an open-access copy of our review here.) It provides an overview of the problems we see with his thesis and interpretations. As our review states, Behe points to many examples of evolution in which genes and their functions have been degraded, but he largely ignores the ways that evolution generates new functions and thereby produces complexity. That’s a severe problem because Behe uses the evidence for the ease of gene degradation to support his overarching implication that the current scientific understanding of the mechanisms of evolution is inadequate and, consequently, the field of evolutionary biology has a “big problem.”

I won’t attempt to summarize Behe’s entire book or our short review, as people can read those for themselves if they want. Instead, I hope to accomplish three things in this post and two more that will follow. In this first post, I explain why Behe’s so-called “first rule of adaptive evolution” does not imply what he says it does about evolution writ large. In the second post, I’ll discuss whether my long-term evolution experiment (the LTEE for short) does or doesn’t provide strong support for Behe’s position in that regard. In my third post, I’ll explain why I think that Behe’s positions, taken as a whole, are scientifically untenable.

I. Behe’s “First Rule of Adaptive Evolution” Confounds Frequency and Importance

Behe’s latest book is centered around what he calls “The First Rule of Adaptive Evolution: Break or blunt any gene whose loss would increase the number of offspring.” As he wrote in an immediate, dismissive response to our review: “The rule summarizes the fact that the overwhelming tendency of random mutation is to degrade genes, and that very often is helpful. Thus natural selection itself acts as a powerful de-volutionary force, increasing helpful broken and degraded genes in the population.”

Let’s work through these two sentences, because they concisely express the thrust of Behe’s book. The first sentence regarding “the tendency of random mutation” is not too bad, though it is overly strong. I would tone it down as follows: “The tendency of random mutation is to degrade genes, and that is sometimes helpful.” My reasons for these subtle changes are that: (i) many mutations are selectively neutral or so weakly deleterious as to be effectively invisible to natural selection; (ii) while loss-of-function mutations are sometimes helpful to the organism, I wouldn’t say that’s “very often” the case (though it may be in some systems, as I’ll discuss in part II); and (iii) even those degradative mutations that are not helpful on their own sometimes persist and occasionally serve as “stepping stones” on the path toward new functionality. This last scenario is unlikely in any particular instance, but given the prevalence of degrading mutations it may nonetheless be important in evolution. (This scenario does not fit neatly within the old-fashioned caricature of Darwinian evolution as only proceeding by strictly adaptive mutations, but it is certainly part of modern evolutionary theory.)

Behe’s next sentence then asserts the power of the “de-evolutionary” process of gene degradation. This is an unjustifiable extrapolation, yet it is central to Behe’s latest book. (It’s not the sort of error I would expect from anyone who is deeply engaged in an earnest effort to understand evolutionary science and present it to the public.) Yes, natural selection sometimes increases the frequency of broken and degraded genes in populations. But when it comes to the power of natural selection, what is most frequent versus most important can be very different things. What is most important in evolution, and in many other contexts, depends on timescales and the cumulative magnitude of effects. As a familiar example, some rhinoviruses are the most frequent source of viral infections in our lives (hence the expression “common cold”), but infections by HIV or Ebola, while less common, are far more consequential.

Or consider an investor who bought stocks in 100 different companies 25 years ago, of which 80 have been losers. Ouch? Maybe not! A stock can’t lose more than the price that was paid for it, and so 20 winners can overcome 80 losers. Imagine if that investor had picked Apple, for example. That single stock has increased in value by well over 100-fold in that time, more than offsetting even 80 total wipeouts all by itself. (In fact, research on the stock market has shown the vast majority of long-term gains result from a small minority of companies that, like Apple, eventually become big winners.)
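The asymmetry in that portfolio can be made concrete with a back-of-the-envelope calculation. The dollar amounts below are purely hypothetical, chosen only to illustrate how a single 100-fold winner can more than offset 80 total wipeouts:

```python
# Hypothetical 100-stock portfolio, $1,000 invested in each stock
# 25 years ago. All figures are invented for illustration.
per_stock = 1000
invested = 100 * per_stock        # $100,000 total

losers = 80 * 0                   # 80 stocks go to zero (worst case)
flat = 19 * per_stock             # 19 stocks merely hold their value
big_winner = 100 * per_stock      # 1 stock gains 100-fold, Apple-style

final = losers + flat + big_winner
print(final, invested, final / invested)   # 119000 100000 1.19
```

Even with 80% of the picks wiped out entirely and most of the rest going nowhere, the portfolio still ends up ahead, which is the point of the analogy: what matters is the cumulative magnitude of effects, not the frequency of wins versus losses.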

In the same vein, even if many more mutations destroy functions than produce new functions, the latter category has been far more consequential in the history of life. That is because a new function may enable a lineage to colonize a new habitat or realm, setting off what evolutionary biologists call an “adaptive radiation” that massively increases not only the numbers of organisms but, over time, the diversity of species and even higher taxa. As one example, consider Tiktaalik or some relative thereof, in any case a transitional kind of fish whose descendants colonized land and eventually gave rise to all of the terrestrial vertebrates—amphibians, reptiles, birds, and mammals. That lineage left far more eventual descendants (including ourselves), and was far more consequential for the history of life on Earth, than 100 other lineages that might have gained a transient advantage by degrading some gene and its function before eventually petering out.

Asteroid impacts aren’t common either, but the dinosaurs (among other groups) sure felt the impact of one at the end of the Cretaceous. (There remains some debate about the cause of that mass extinction event, but whatever the cause its consequences were huge.) Luckily for us, though, some early mammals survived. Evolution often leads to dead ends, sometimes as a consequence of exogenous events like asteroids, and other times because adaptations that are useful under a narrow set of conditions (such as those caused by mutations that break or degrade genes) prove vulnerable over time to even subtle changes in the environment. It has been estimated that more than 99% of all species that have ever existed are now extinct. Yet here we are, on a planet that is home to millions of diverse species whose genomes record the history of life.

Summing up, Behe is right that mutations that break or blunt a gene can be adaptive. And he’s right that, when such mutations are adaptive, they are easy to come by. But Behe is wrong when he implies these facts present a problem for evolutionary biology, because his thesis confuses frequencies over the short run with lasting impacts over the long haul of evolution.

[The picture below shows the Tiktaalik fossil discovered by Neil Shubin and colleagues.  It was posted on Wikipedia by Eduard Solà, and it is shown here under the indicated Creative Commons license.]

