Your mother *is* always with you

Mother and child, microchimeras

When you’re in utero, you’re protected from the outside world, connected to it only via the placenta, which is supposed to keep you and your mother separated. Separation is generally a good thing because you are foreign to your mother, and she is foreign to you. In spite of the generally good defenses, however, a little bit of you and a little bit of her cross the barrier. Scientists have recently found that when that happens, you often end up toting a bit of mom around for decades, maybe for life.

The presence of cells from someone else in another individual is called microchimerism. A chimera in mythology was a beast consisting of the parts of many animals, including lion, goat, and snake. In genetics, a chimera carries the genes of some other individual along with its own, perhaps even the genes of another species. In microchimerism, we carry a few cells from someone else around with us. Most women who have been pregnant have not only their own cells but some cells from their offspring, as well. I’m probably carrying around cells from each of my children.

Risks and benefits of sharing

Microchimerism can be useful but also carries risks. Researchers have identified maternal cells in the hearts of infants who died from infantile lupus and determined that the babies had died from heart block caused in part by these maternal cells, which had differentiated into excess heart muscle. On the other hand, in children with type 1 diabetes, maternal cells found in the pancreatic islets appear to be responding to damage and working to fix it.

The same good/bad outcomes exist for mothers who carry cells from their children. There has long been an association between past pregnancy and a reduced risk of breast cancer, but the reason has been unclear. Researchers studying microchimerism in women who had been pregnant found that those without breast cancer had fetal microchimerism at a rate three times that of women with the cancer.

Microchimerism and autoimmunity

Autoimmune diseases develop when the body attacks itself, and several researchers have turned to microchimerism as one mechanism for this process. One fact that led them to investigate fetal microchimerism is the heavily female bias in autoimmune illness, suggesting a female-based event, like pregnancy. On the one hand, pregnancy appears to reduce the effects of rheumatoid arthritis, an autoimmune disorder affecting the joints and connective tissues. On the other hand, women who have been pregnant are more likely to develop an autoimmune disorder of the skin and organs called scleroderma (“hard skin”) that involves excess collagen deposition. There is also a suspected association between microchimerism and pre-eclampsia, a condition in pregnancy that can lead to dangerously high blood pressure and other complications that threaten the lives of mother and baby.

Human leukocyte antigen (HLA)

The autoimmune response may be based on a similarity between mother and child in HLA, immune-related proteins encoded on chromosome 6. This similarity may play a role in the immune imbalances that lead to autoimmune diseases: because maternal and fetal HLAs are so alike, the immune system may tip out of balance, responding as though to an HLA excess. If mother and child were more different, the mother’s immune system might simply attack and destroy fetal HLAs, but with the strong similarity, fetal HLAs may be like an unexpected guest that behaves like one of the family.

Understanding the links between microchimerism and disease is the initial step in exploiting that knowledge for therapies or preventative approaches. Researchers have already used this information to predict the development of a complication of stem cell transplants called “graft-versus-host disease” (GVH). In stem cell transplants, female donors with previous pregnancies are associated with a higher rate of GVH because they are microchimeric. Researchers have exploited this fact to try to predict whether a kidney or pancreas transplant will undergo early rejection.

(Photo courtesy of Wikimedia Commons and photographer Ferdinand Reus).

An author reading today

Doing my author reading at the library today with my friend Charles Darwin on the screen

Today, I did my first reading/teaching presentation from The Complete Idiot’s Guide to College Biology. Below are a couple of excerpts from what I read today.

From Chapter 16: Darwin, Natural Selection, and Evolution

Evolution, a change in a population over time, can be a controversial concept, and things were no different when Darwin first proposed his theory of how evolution happens. Since that time, we’ve identified several other ways by which evolution can occur. Scientists have synthesized natural selection and genetics and worked out a way to identify whether evolution is happening in a population.

The Historical Context of Darwin’s Ideas

Charles Darwin was born on February 12, 1809, into a society with fixed ideas about the role of divinity–specifically the Christian God–in nature. Darwin’s destiny, as it turned out, was to address nature’s role in nature, rather than God’s. He was not completely comfortable in some respects with that destiny, but this man was born with his ear to the ground, listening to Nature’s heartbeat. He was born to bring to us a greater understanding of how nature fashions living things.

Yet, he did not emerge into a howling wilderness of antiscientific resistance. Scientists and philosophers who had come before him had posited bits and pieces of what would become Darwin’s own theory of how evolution happens. But it required Charles Darwin to synthesize those bits and pieces–some of them his own, gathered on the significant voyage of his lifetime–to bring us a complete idea of how nature shapes new species from existing life.

Alfred Russel Wallace: The Unknown Darwin

Alfred Russel Wallace developed the theory of evolution by natural selection at the same time as Darwin. His road to enlightenment came via his observations on another island chain, the Malay Archipelago. Like Darwin, Wallace was a naturalist savant, and on this archipelago alone, he managed to collect and describe tens of thousands of beetle specimens. He, too, had read Malthus and under that influence had begun to formulate ideas very similar to Darwin’s. The British scientific community of the nineteenth century was a relatively small world, and Wallace and Darwin knew one another. In fact, they knew each other well enough to co-present their ideas about natural selection and evolution in 1858.

Nevertheless, Wallace did not achieve Darwin’s profile in the field of evolution and thus today does not have his name inscribed inside a fish-shaped car decal. The primary reason is likely that Darwin literally wrote the book on the theory of evolution by means of natural selection. Wallace, on the other hand, published a best seller on the Malay Archipelago.

From Chapter 13: DNA

DNA, as the central molecule of heredity, is key to many aspects of our lives (besides, obviously, encoding our genes). Medicines and therapies are based on it. TV shows and movies practically feature it as a main character. We profile it from before we’re born until after we die, using it to figure out what’s wrong, what’s right, what’s what when it comes to who we are, and what makes us different and the same.

But it wasn’t so long ago that we weren’t even sure that DNA was the molecule of heredity, and it was even more recently that we finally started unlocking the secrets of how its genetic material is copied for passing along to offspring.

The History and Romance of DNA

The modern-day DNA story is dynamic and fascinating. But it can’t compare to the tale of the trials, tribulations, and downright open hostilities that accompanied our recognition of its significance.

Griffith and His Mice

Our understanding started with mice. In the 1920s, a British medical officer named Frederick Griffith performed a series of important experiments. His goal was to figure out the active factor in a strain of bacteria that could give mice pneumonia and kill them.

His bacteria of choice were Streptococcus pneumoniae, available in two strains. One strain infects and kills mice and thus is pathogenic, or disease causing. These bacteria also have a polysaccharide capsule enclosing each cell, leading to their designation as the Smooth, or S, strain. The other strain is the R, or rough, strain because it lacks a capsule. The R strain also is not deadly.

Wondering whether or not the S strain’s killer abilities would survive the death of the bacteria themselves, he first heat-killed the S strain bacteria. (Temperature changes can cause molecules to unravel and become nonfunctional, “killing” them.) He then injected his mice with the dead germs. The mice stayed perky and alive. Griffith mixed the dead, heat-killed S strain bacteria with the living, R-strain bacteria and injected the mice again. Those ill-fated animals died. Griffith found living S strain cells in these rodents that had never been injected with live S strain bacteria.

With dead mice all around him, Griffith had discovered that something in the pathogenic S strain had survived the heat death. The living R strain bacteria had picked up that something, leading to their transformation into the deadly, pathogenic S strain in the mice. It was 1928, and the question that emerged from his findings was, What is the transforming molecule? What, in other words, is the molecule of heredity?

Hershey and Chase: Hot Viruses

In the early twentieth century, a fiery debate tore through the ranks of molecular biologists and geneticists over whether proteins or DNA served as the molecule of heredity. The protein folk had a point. With 20 possible amino acids, proteins offer far more different possible combinations and resulting molecules than do the four letters (nucleotide building blocks) of the DNA alphabet. Protein advocates argued that the molecule with the most building blocks was likely responsible for life’s diversity.

In a way, they were right. Proteins underlie our variation. But they were also fundamentally wrong. Proteins differ because of differences in the molecule that holds the code for building them. And that molecule is DNA.

Like what you’ve read? Read the rest in The Complete Idiot’s Guide to College Biology.

Genetic analysis: my results and my reality

A few months ago, it was National DNA Day or something like that, and one of the genetics analysis companies had a sale on their analysis kits, offering a full panel of testing for only $100. Giddy with the excitement of saving almost $1000 on something I’d long been interested in doing, I signed on, ordering one kit each for my husband (a.k.a. “The Viking”) and me. Soon, we found ourselves spending a romantic evening spitting into vials and arguing about whether or not we’d shaken them long enough before packaging them.

The company promised results in six weeks, but they came much faster than that, in about three weeks. Much to my relief, I learned that neither of us carries markers for cystic fibrosis and that I lack either of the two main mutations related to breast cancer. Those basic findings out of the way, things then got more complex and more interesting.

How it works

First, a bit of background. These tests involve sequencing of specific regions to look for very small changes, as small as a single nucleotide, in the DNA. If a study has linked a specific mutation to a change in the average risk for a specific disorder or trait, the company notes that. The more data there are supporting that link, the stronger the company indicates the finding is. Thus, four gold stars in their nomenclature mean, “This is pretty well supported,” while three or fewer slate stars mean, “There are some data for this but not a lot,” or “The findings so far are inconsistent.”

Vikings and Ireland

The Viking is a private person, so I can’t elaborate on his findings here except to say that (a) he is extraordinarily healthy in general and (b) what we thought was a German Y chromosome seems instead to be strictly Irish and associated with some Irish king known as Niall of the Nine Hostages. Why hostages and why nine, I do not know. But it did sort of rearrange our entire perception of his Y chromosome and those of our three sons to find this out. For the record, it matches exactly what we learned from participating in the National Geographic Genographic project. I’d ask the Viking if he were feeling a wee bit o’ the leprechaun, but given his somewhat daunting height and still Viking-ish overall demeanor (that would be thanks to his Scandinavian mother), I’m thinking he doesn’t. Lord of the Dance, he is not.

Markers that indicate an increased risk for me

I have an increased risk of…duh

Looking at the chart to the left (it’s clickable), you can see where I earned myself quite a few sets of four gold stars, but the ones that seem most relevant are those with a 2x or greater increased risk: lupus, celiac disease, and glaucoma. The first two do not surprise me, given my family’s history of autoimmune disorders.

If you focus on a list like this too long, you can start to get a serious case of hypochondria, worrying that you’re gonna get all of these things thanks to those glaring golden stars. But to put it into context, for the lupus–for which my risk is 2.68 times higher than a regular gal’s–that still leaves me in the population in which 0.66 persons out of every 100 will develop this disorder. Compare that to the 0.25 out of every 100 in the regular-gal population, and it doesn’t strike me as that daunting.
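For anyone who wants to check that arithmetic, here’s the relative-to-absolute conversion as a quick Python sketch, using the numbers from my report (any tiny mismatch with the 0.66 figure above comes down to rounding in the published numbers):

```python
# Converting a relative risk into an absolute risk, using the lupus
# numbers from my report. The baseline is the "regular gal" population.
baseline_per_100 = 0.25   # lupus cases per 100 in the general population
relative_risk = 2.68      # my reported risk multiplier

my_risk_per_100 = baseline_per_100 * relative_risk
print(f"My risk: about {my_risk_per_100:.2f} per 100")  # about 0.67 per 100
```

The point of doing the multiplication yourself: a scary-sounding multiplier still leaves the absolute risk under one person per hundred.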

Some of those other things on there? Well, let’s just say they’re close. My risk of thyroid cancer might be raised…but I no longer have a thyroid. Hypertension risk is increased–and I have stage 2 hypertension. Gallstones, gout, alcoholism, asthma…based on family history, it’s no surprise to me to see some mixed or clear risk involved with these, although I have none of them. Does that mean that someone else with these increased risks will have related real-life findings? No. It only means that you’re at a bit more risk. It’s like riding a motorcycle vs. driving a car. The former carries more risk of a fatal wreck, but that doesn’t mean you’re absolutely gonna die on it if you ride it.

Disorders for which my risk is allegedly decreased

I have a decreased risk of...

None of my decreased risk findings are very eye catching in terms of actual drop in risk except for Type II diabetes (now where is my bag of sugar?). As I have been under evaluation for multiple sclerosis and have a family member with it, it’s interesting to see that my risk for it, based on existing studies and known polymorphisms, is decreased. And even though I know that much of this is largely speculative and based on little firm data, it’s still sort of comforting to see “decreased risk” and things like “melanoma” in the same group.

Don’t make my brown eyes blue!

And they didn’t. They nailed the eye color and other trait-related analysis, such as level of curl to the hair, earwax type, alcohol flush reaction, lactose intolerance (unlikely), and muscle performance (I am not nor have I ever been a sprinter). And even though I do not have red hair, they reported that I had a good chance of it, also true given family history. I am not resistant to malaria but allegedly resistant to norovirus. I wish someone had informed my genes of that in 2003 when I was stricken with a horrible case of it.

Ancestral homeland

Yep. They nailed this one. One hundred percent European mutt. Mitochondria similar to…Jesse James…part of a haplogroup that originated in the Near East about 45,000 years ago then traveled to Ethiopia and Egypt and from there, presumably, into Europe. It’s a pretty well traveled haplotype and happens to match exactly with the one identified by the National Geographic Genographic project. When it comes to haplotypes, we’re batting 1000.

In summary

Some of these findings are reliable, such as the absence of the standard breast cancer mutations or the presence of certain mutations related to autoimmune disorders, while other findings are iffy. The company duly notes their iffiness in the reports, along with the associated citations, polymorphisms, and level of risk identified in each study. They don’t promise to tell you that your ancestors lived in a castle 400 years ago or hailed from Ghana. From this company, at any rate, the results are precise and precisely documented, and as I noted, pretty damned accurate. And they’re careful to be as clear as possible about what “increased risk” or “decreased risk” really means.

It’s fascinating to me that a little bit of my spit can be so informative, even down to my eye color, hair curl, and tendency to hypertension, and I’ve noted that just in the days since we received our results, they’ve continually updated as new data have come in. Would I be so excited had I paid $1100 for this instead of $200? As with any consideration of the changes in risk these analyses identified, that answer would require context. Am I a millionaire? Or just a poor science writer? Perhaps my genes will tell.

Is the tree of life really a ring?

A proposed ring of life

The tree of life is really a ring

When Darwin proposed his ideas about how new species arise, he produced a metaphor that we still adhere to today to explain the branching patterns of speciation: The Tree of Life. This metaphor for the way one species may branch from another through changes in allele frequencies over time is so powerful and of such long standing that many large studies of the speciation process and of life’s origins carry its name.

It may be time for a name change. In 2004, an astrobiologist and a molecular biologist from UCLA found that a ring metaphor may better describe the advent of the earliest eukaryotes. Astrobiologists study the origins of life on our planet because of the potential links between these earthly findings and life on other planets. Molecular biologists can be involved in studying the evolutionary patterns and relationships that our molecules—such as DNA or proteins—reveal. Molecular biologist James Lake and astrobiologist Mary Rivera of UCLA teamed up to examine how genomic studies might reveal some clues about the origins of eukaryotes on Earth.

Vertical transfer is so 20th century

We’ve heard of the tree of life, in which one organism begets another, passing on its genes in a vertical fashion; this vertical transfer of genes produces a tree, with each new lineage becoming a new branch. The method of gene transfer that would produce a genuine circle, or ring, is horizontal transfer, in which two organisms fuse genomes to produce a new organism. The ends of the branches in this scenario fuse together via their genomes to close the circle. It is this fusion of two genomes that may have produced the eukaryotes.

Here, have some genes

Eukaryotes are cells with true nuclei, like the cells of our bodies. The simplest eukaryotes are the single-celled variety, like yeasts. Before eukaryotes arose, single-celled organisms without nuclei—called prokaryotes—ruled the Earth. We lumped them together in a single kingdom until comparatively recently, when taxonomists broke them into two separate domains, the Archaebacteria and the Eubacteria, with the eukaryotes making up a third. Archaebacteria are prokaryotes with a penchant for difficult living conditions, such as boiling-hot water. Eubacteria include such familiar representatives as Escherichia coli.

Genomic fusion

According to the findings of Lake and Rivera, the two prokaryotic domains may have fused genomes to produce the first representatives of the Eukarya domain. Using complex algorithms to analyze genomic relationships among 30 organisms—hailing from each of the three domains—Lake and Rivera produced various family “trees” of life on Earth, and found that the “trees” with the highest cumulative probabilities of having actually occurred really joined in a ring: a fusion of two prokaryotic branches to form the eukaryotes. If we did that, the equivalent would be something like walking up to a grizzly bear and handing over some of your genes for it to incorporate. Being eukaryotes, that’s not something we do.

Our bacterial parentage: the union of Archaea and Eubacteria

Although not everyone buys into the “ring of life” concept, Lake and Rivera’s findings help resolve some confusion over the origins of eukaryotes. When we first began analyzing the relationship of nucleated cells to prokaryotes, we identified a number of genes—what we call “informational” genes—that seemed to be directly inherited from the Archaea branch of the Tree of Life. Informational genes are involved in processes like transcription and translation, and indeed, recent “ring of life” research suggests a greater role for Archaea. But we also found that many eukaryotic genes traced back to the Eubacteria domain, and that these genes were more operational in nature, being involved in cell metabolism or lipid synthesis.

Applying the tree metaphor did not help resolve this confusion. If eukaryotes vertically inherited these genes from their prokaryotic ancestors, we would expect to see only genes representative of one domain or the other in eukaryotes. But we see both domains represented in the genes, and the best explanation is that organisms from each domain fused entire genomes—horizontally transferring genes—to produce a brand new organism, the progenitor of all eukaryotes: yeasts, trees, giraffes, killer whales, mice, … and us.

How the genetic code became degenerate

Our genetic code consists of 64 different combinations of four RNA nucleotides—adenine, guanine, cytosine, and uracil. These four molecules can be arranged in groups of three in 64 different ways; mathematically, 4 × 4 × 4 = 64 possible combinations.
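For the combinatorially inclined, a few lines of Python confirm the count by building every triplet:

```python
from itertools import product

# The four RNA nucleotides, arranged in every possible ordered triplet.
NUCLEOTIDES = "AGCU"
codons = ["".join(triplet) for triplet in product(NUCLEOTIDES, repeat=3)]

print(len(codons))  # 64, i.e., 4 x 4 x 4
```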

Shorthand for the language of proteins

This code is cellular shorthand for the language of proteins. A group of three nucleotides—called a codon—is a code word for an amino acid. A protein is, at its simplest level, a string of amino acids, which are its building blocks. So a string of codons provides the language that the cell can “read” to build a protein. When the code is copied from the DNA, the process is called transcription, and the resulting string of nucleotides is messenger RNA. This messenger takes the code from the nucleus to the cytoplasm in eukaryotes, where it is decoded in a process called translation. During translation, the code is “read,” and amino acids assembled in the sequence the code indicates.

The puzzling degeneracy of genetics

So given that there are 64 possible triplet combinations for these codons, you might think that there are 64 amino acids, one per codon. But that’s not the case. Instead, our code is “degenerate”: in some cases, more than one triplet of nucleotides provides a code word for an amino acid. These redundant codons are all synonyms for the same protein building block. For example, six different codons indicate the amino acid leucine: UUA, UUG, CUA, CUG, CUC, and CUU. When any one of these codons turns up in the message, the cellular protein-building machinery inserts a leucine into the growing amino acid chain.
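A toy snippet makes the redundancy concrete. Only leucine is tabulated here; a real codon table, of course, covers all 64 triplets:

```python
# The six synonymous codons for leucine, as named above.
LEUCINE_CODONS = {"UUA", "UUG", "CUA", "CUG", "CUC", "CUU"}

def translate(codon):
    """Return the amino acid for a codon; this sketch only knows leucine."""
    return "Leu" if codon in LEUCINE_CODONS else "?"

# Degeneracy in action: six different code words, one building block.
print({translate(c) for c in LEUCINE_CODONS})  # {'Leu'}
```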

This degeneracy of the genetic code has puzzled biologists since the code was cracked. Why would Nature produce redundancies like this? One suggestion is that Nature did not use a triplet code originally, but a doublet code. Francis Crick, of double-helix fame, posited that a two-letter code probably preceded the three-letter code. But he did not devise a theory to explain how Nature made the universal shift from two to three letters.

A two-letter code?

There are some intriguing bits of evidence for a two-letter code. One of the players in translation is transfer RNA (tRNA), a special sequence of nucleotides that carries triplet codes complementary to those in the messenger RNA. In addition to this complementary triplet, called an anticodon, each tRNA also carries a single amino acid that matches the codon it complements. Thus, when a codon for leucine—UUA for example—is “read” during translation, a tRNA with the anticodon AAU will donate the leucine it carries to the growing amino acid chain.

Aminoacyl tRNA synthetases are enzymes that link an amino acid with the appropriate tRNA anticodon. Each type of tRNA has its specific synthetase, and some of these synthetases use only the first two nucleotide bases of the anticodon to decide which amino acid to attach. If you look at the code words for leucine, for example, you’ll see that four of the six begin with “CU.” The only difference among these four is the third position in the codon—A, U, G, or C. Thus, these synthetases need to rely only on the doublets to be correct.
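Here’s that doublet logic as a sketch. The decoder below is a made-up illustration of the principle, not a model of any actual synthetase; it handles only the four-member CU family of leucine codons:

```python
# A synthetase-style decoder that ignores the third, "wobbly" position
# and decides on the amino acid from the first two bases alone.
def doublet_decode(codon):
    """Decide the amino acid from the first two bases; only CU is known here."""
    return "Leu" if codon[:2] == "CU" else "?"

cu_family = ["CUA", "CUU", "CUG", "CUC"]
print(all(doublet_decode(c) == "Leu" for c in cu_family))  # True
```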

Math and doublets

Scientists at Harvard believe that they have solved the evolutionary mystery of how the triplet form arose from the doublet. They suggest that the doublet code was actually read in groups of three doublets, but with only the first two “prefix” or last two “suffix” pairs actually being read. Using mathematical modeling, these researchers have shown that all but two amino acids can be coded for using two, four, or six doublet codons.

Too hot in the early Earth kitchen for some

The two exceptions are glutamine and asparagine, which at high temperatures break down into the amino acids glutamic acid and aspartic acid. Their inability to retain structure in hot environments suggests that in the early days of life on Earth, when doublet codes were in use, the primordial soup was simply too hot for stable synthesis of these heat-intolerant amino acids; they presumably joined the repertoire later, once the triplet code arose.

With clones like these, who needs anemones?

Finding Nemo makes marine biologists of us all

I once lived a block away from a beach in Northern California, and when my sons and I wandered the sands at low tide, we often saw sea anemones attached to the rocks, closed up and looking much like rocks themselves, waiting for the water to return. My sons, fans of Finding Nemo, still find these animals intriguing because of their association with a cartoon clownfish, but as it turns out, these brainless organisms have a few lessons to teach the grownups about the art of war.

Attack of the clones

Anemones, which look like plants that open and close with the rise and fall of the tides, are really animals from the phylum Cnidaria, which makes them close relatives of corals and jellyfish. Although they do provide a home for clownfish in a mutualistic relationship, where both the clownfish and the anemone benefit from the association, anemones are predators. They consist primarily of their stinging tentacles and a central mouth that allows them to eat fish, mussels, plankton, and marine worms.

Although anemones seem to be adhered permanently to rocks, they can, in fact, move around. Anemones have a “foot” that they use to attach to objects, but they also can be free-swimming, which comes in handy in the art of sea anemone warfare. (To see them in action, click on video, above.)

Sea anemone warfare could well be characterized as an attack of the clones. These animals reproduce by a process called lateral fission, in which new anemones grow by mitosis from an existing anemone, although they can engage in sexual reproduction when necessary. But when a colony of anemones is engaged in a battle, it consists entirely of genetically identical clones.

Yet even though they are identical, these clones, like the genetically identical cells in your liver and your heart, have different jobs to do in anemone warfare. Scientists have known that anemones can be aggressive with one another, tossing around stinging cells as their weapons of choice in battle. But observing groups of anemones in their natural environment is almost impossible because the creatures only fight at high tide, masked by the waves.

To solve this problem, a group of California researchers took a rock with two clone tribes of anemones on it into the lab and created their own, controlled high and low tides. What they saw astonished them. The clones, although identical, appeared to have different jobs and assorted themselves in different positions depending on their role in the colony.

Battle arms, or “acrorhagi”

The warring groups had a clearly marked demilitarized zone on the rock, a border region that researchers say can be maintained for long periods in the wild. When the tide is high, though, one group of clones will send out scouts, anemones that venture into the border area in an apparent bid to expand the territory for the colony. When the opposition colony senses the presence of the scouts, its warriors go into action, puffing up large specialized battle arms called acrorhagi, tripling their body length, and firing off salvos of stinging cells at the adventuresome scouts. Even warriors as far as four rows back get into the action, rearing up to toss cells and defend their territory.

In the midst of this battle, the reproductive clones hunker down in the center of the colony, protected and able to produce more clones. Clones differentiate into warriors or scouts or reproducers based on environmental signals interacting with their genes; every clonal group has a different response to these signals and arranges its armies in different permutations.

Poor Stumpy

Warriors very rarely win a battle, and typically, the anemones maintained their territories rather than achieving any major expansions. The scouts appear to run the greatest risk; one hapless scout from the lab studies, whom the researchers nicknamed Stumpy, was so aggressive in its explorations that when it returned to its home colony, it was attacked by its own clones. Researchers speculated that it bore so many foreign stinging cells from the attacks it had sustained that its own colony no longer recognized it, resulting in a case of mistaken identity for poor Stumpy.

The piggish origins of civilization

Follow the pig

Researchers interested in tracing the path of human civilization from its birthplace in the Fertile Crescent to the rest of the world need only follow the path of the pig.

Pig toting

Until this research was reported, the consensus was that pigs had fallen under our magical domestication powers only twice, about 9,000 years ago: once in what is called the Near East (Turkey), and a second time in what is called the Far East (China). Morphological and some genetic evidence seemed to point to these two events only. That led human geographers to conclude that humans must have toted domesticated pigs around from the Far or Near East to other parts of the world like Europe or Africa, rather than domesticating the wild boars they encountered in every new locale.

Occam’s Razor violation

As it turns out, those ideas—which enjoyed the support even of Charles Darwin—were wrong. And they provide a nice example of a violation of Occam’s Razor, the rule that scientists should select the explanation that requires the fewest assumptions. In the case of the pig, two domestication events definitely required fewer assumptions than the many that we now believe to have occurred.

Research published in the journal Science in 2005 has identified at least seven occurrences of the domestication of wild boars. Two events occurred in Turkey and China, as previously thought, but the other five events took place in Italy, Central Europe, India, southeast Asia, and on islands off of southeast Asia, like Indonesia. Apparently, people arrived in these areas, corralled some wild boars, and ultimately domesticated them, establishing genetic lines that we have now traced to today.

As usual, molecular biology overrules everything else

The scientists uncovered the pig domestication pattern using modern molecular biology tools, relying on a genetic tool known as the mitochondrial clock. Mitochondria have their own DNA, which codes for their own specialized mitochondrial proteins. Because mitochondria are essential to cell function and survival, selection pressure against changes in their DNA coding sequences is strong, and such changes are rare. The changes that do persist are usually random changes in noncoding regions, and these accumulate slowly and at a fairly predictable rate over time. This rate of accumulation is the mitochondrial clock, which we use to tick off the amount of time that has passed between mutations.
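The arithmetic behind the clock can be sketched in a few lines. The rate and the counts below are invented for illustration and are not the figures from the pig study:

```python
def divergence_time(differences, seq_length, subs_per_site_per_myr):
    """Estimate time since two lineages last shared an ancestor.

    Substitutions accumulate independently along both lineages, so the
    observed per-site difference is divided by twice the clock rate.
    """
    per_site_difference = differences / seq_length
    return per_site_difference / (2 * subs_per_site_per_myr)

# 20 differences across a 1,000-base noncoding region, at a
# hypothetical rate of 0.02 substitutions per site per million years:
print(divergence_time(20, 1000, 0.02))  # 0.5 million years
```

The factor of two reflects that both lineages have been accumulating changes since they split; real analyses also correct for multiple substitutions at the same site, which this sketch ignores.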

Tick-tock, mitochondrial clock

Very closely related individuals have almost identical mitochondrial sequences; the mitochondria you carry, for example, are probably identical in sequence to your mother’s. You inherited those mitochondria only from your mother, whose egg provided these essential organelles to the zygote that ultimately became you. Were someone to sample the mitochondria from one of your relatives thousands of years from now, they would probably find only a few changes; a sample from someone unrelated to you, by contrast, would show different changes, and a different number of them, indicating a more distant relationship.

That’s how the researchers figured out the mystery of the pigs. They sampled wild boars from each of the areas and sampled domestic pigs from the same locales. After comparing the mitochondrial DNA sequences among these groups, they found that pigs in Italy had sequences very like those of wild boars in Italy, while pigs in India had sequences very like those of wild boars there.
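The comparison the researchers performed amounts to measuring how many positions two sequences differ at, then matching each domestic population to its nearest wild one. A minimal sketch of that logic, using invented ten-base sequences rather than real mitochondrial data:

```python
def hamming(a, b):
    """Count positions at which two equal-length sequences differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# Toy sequences, invented for illustration only
wild = {
    "Italian boar": "ACGTACGTAC",
    "Indian boar":  "ACGTTCGAAC",
}
domestic = {
    "Italian pig": "ACGTACGTAT",
    "Indian pig":  "ACGTTCGAAT",
}

# Match each domestic pig to the wild population it differs from least
for pig, seq in domestic.items():
    closest = min(wild, key=lambda w: hamming(seq, wild[w]))
    print(pig, "is closest to", closest)
```

With these toy sequences, each pig matches the boar population from its own region, mirroring the pattern the study found. Real analyses use longer sequences and proper phylogenetic methods, but the underlying idea is the same.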

Approachable teenage-like pigs

How did we domesticate the pigs? Researchers speculate that adult boars (males) who still behaved like teenagers were most likely to approach human settlements to forage. They were also less aggressive than males who behaved like full adults, and thus, easier to domesticate. They fed better on human food scraps than did their more-mature—and more-skittish—brethren, and enjoyed better survival and more opportunities to pass on their juvenile characteristics, which also included shorter snouts, smaller tusks, and squealing, to their offspring. Domestication was just the next step.

Identical twins grow less identical

DNA sequence is just a starting point

Identical twins are identical only in that their DNA is the same. In what might be an argument against cloning yourself or a pet in hopes of getting an identical reproduction, scientists have found that having an identical genetic code does not translate into being exactly alike. We have long known that identical twins do not always share the same health fate, for example. One twin can have schizophrenia while the other never develops it. Or one twin might develop cancer or diabetes while the other remains disease-free, even though there can be a strong genetic component to all of these disorders.

So a burning question in the field of genetics and disease has been what distinguishes the twin who gets a disease from the twin who does not. A strong candidate mechanism has been genomic modification, in which molecules attached to the DNA can silence a gene or turn it on. Typically, methyl groups attached to DNA make the code unavailable, while acetyl groups attached to the histone proteins that support DNA ensure that the code is used.

Chemical tags modify DNA sequences

This process of genomic regulation is involved in some interesting aspects of biology. For example, methylation is the hallmark of genomic imprinting, in which each set of genes we inherit from our parents comes with its own special pattern of methylation. The way some genetic disorders manifest can be traced to genomic imprinting. In Prader-Willi syndrome, a person inherits a paternal mutant allele and manifests characteristic symptoms of the disorder, which include obesity and intellectual disability. But people who inherit the same mutant allele from their mothers will instead have Angelman syndrome, in which they are small and gracile, have a characteristic elfin face, and also have intellectual disability. Modification by methyl or acetyl groups, also called epigenetic modification, plays a role in dosage compensation for the X chromosome: women, who have two X chromosomes, shut most of one down through methylation to produce an X chromosome gene dosage like that of men, who have a single X.

Twinning: Nature’s clones

Identical twins have identical DNA because they arise from a single fertilized egg. The egg divides mitotically into two identical cells, and then each cell, for reasons we don’t understand well, resets the developmental process to the beginning and develops as a new individual. The process of twinning carries interesting implications for bioethics, cloning discussions, and questions about when life begins, but it also has helped us tease apart the influences of genetics and environment. A recent study examining life history differences and differences in epigenetic modification in 80 pairs of twins ranging in age from 3 to 74 has revealed some fascinating results that have implications for our understanding of nature vs. nurture and our investigations into the role of epigenesis in development of disease.

You are what you do to yourself

The older the twins were, the more differences researchers found in methylation or acetylation of their DNA and histones. For twins raised apart, these differences were even more extreme. Researchers also concluded that environmental influences, such as smoking, diet, and lifestyle, may have contributed to the differences in the twins’ epigenetic modifications. The three-year-old twins were almost identical in their methylation patterns, but for twins older than 28 years, the patterns were significantly different for 60 percent of the pairs.

These results have major implications for our understanding of disease. For example, we can use this knowledge to identify genes that are differently methylated in people with and without a disorder and use that as a lead in identifying the genes involved in that disease state. We also may be able to pinpoint which environmental triggers result in differential methylation and find ways to avoid this mechanism of disease.

Polio vaccine-related polio

Polio virus bits in vaccine rarely join forces with other viruses, become infectious

[Note: some of the links in this piece are to New England Journal of Medicine papers. NEJM does not make its content freely available, so unfortunately, unless you have academic or other access, you’d have to pay per view to read the information. I fervently support a world in which scientific data and information are freely available, but…money is money.]

Worldwide, billions of polio vaccine doses have been administered, stopping a disease scourge that before the vaccine killed people–mostly children–by the thousands in a horrible, suffocating death (see “A brief history of polio and its effects,” below). The polio vaccination campaign has been enormously successful, bringing wild-type polio to the edge of eradication.

But, as with any huge success, there have been clear negatives. In a few countries–15, to be exact–there have been 14 outbreaks of polio that researchers have traced to the vaccines themselves. The total number of such cases as of 2009 was 383. The viral pieces in the vaccine–designed to attract an immune response without causing disease–occasionally recombine with other viruses to form an active version of the pathogen. Some kinds of viruses–flu viruses come to mind–can be notoriously tricky and agile that way.

Existing vaccine can prevent vaccine-related polio

Odd as it sounds, the existing vaccines can help prevent the spread of this vaccine-related form of polio. The recombined vaccine-related version tends to break out in populations that are underimmunized against the wild virus, as happened in Nigeria. Nigeria suspended its polio vaccination program in 2003 because rumors began to circulate that the vaccine was an anti-Muslim tactic intended to cause infertility. In 2009, the country experienced an outbreak of vaccine-derived virus, with at least 278 children affected. Experts have found that the existing vaccine can act against either the wild virus or the vaccine-derived form, both of which have equally severe effects. In other words, vaccinated children won’t get either.

Goal is eradication of virus and need for vaccine

Wild-type cases plateaued at between 1,000 and 2,000 annually in the early 21st century, tantalizingly short of total eradication, and global health officials hold out hope for two primary goals. They hope to eradicate wild-type polio transmission through a complete vaccination program, which, in turn, will keep vaccine-derived forms from spreading. Once that goal is achieved, they will have reached the final target: no more need for a polio vaccine.

As Dr. Bruce Aylward, Director of the Global Polio Eradication Initiative at WHO, noted: “These new findings suggest that if (vaccine-derived polio viruses) are allowed to circulate for a long enough time, eventually they can regain a similar capacity to spread and paralyse as wild polioviruses. This means that they should be subject to the same outbreak response measures as wild polioviruses. These results also underscore the need to eventually stop all (oral polio vaccine) use in routine immunization programmes after wild polioviruses have been eradicated, to ensure that all children are protected from all possible risks of polio in future.”

If that sounds nutty, it’s been done. Until the early 1970s, the smallpox vaccination was considered a routine vaccination. But smallpox was eradicated, and most people born after the early ’70s have never had to have the vaccine.

A brief history of polio and its effects

I bring you the following history of polio, paraphrased from information I received from a physician friend of mine who works in critical care:

The original polio virus outbreaks occurred before the modern intensive care unit had been invented and before mechanical ventilators were widely available. In 1947-1948, polio epidemics raged through Europe and the United States, with many thousands of patients dying a horrible death from respiratory paralysis. Slow asphyxiation is one of the worst ways to die, which is precisely why torture methods such as waterboarding simulate suffocation. The sensation is unendurable.

In the early twentieth-century polio epidemics, doctors put breathing tubes down the throats of patients who were asphyxiating from the respiratory paralysis the virus caused. Because ventilators were unavailable, armies of medical students provided mechanical respiratory assistance by hand-squeezing a bag connected to the breathing tube, over and over and over, 16 times a minute, 24 hours a day, driving air in and out of the patients’ lungs. Eventually the iron lung was developed and widely implemented to manage polio outbreaks. The iron lung subsequently gave way to the modern ventilator, which is another story.

Pesticide link to ADHD

It’s correlation, not causation

A common class of pesticides and their metabolites has been linked in a large study to ADHD, an attention-deficit disorder also characterized by hyperactivity and impulsivity. ADHD has previously been associated with specific genes and even hailed as a one-time advantageous evolutionary adaptation. But many neurological differences likely will trace to an interaction of genes and environment, or, in fancy science talk, a multifactorial causality.

But it’s also not a surprise

This study looked at metabolites in the urine of more than 1,000 children, 119 of whom had ADHD. It’s not mechanistically outré to think that pesticides designed to send a pest’s nervous system astray might have a similar effect on vertebrate systems. This study showed links, not mechanisms, but establishing a link is often the necessary first step in justifying further pursuit of a hypothesis. The researchers found that levels of specific metabolites of organophosphate pesticides are associated with an increased risk–by as much as two-fold–of developing ADHD.
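As a rough illustration of what a roughly two-fold increased risk means in a case-control study like this one, here is how an odds ratio is computed from a standard 2x2 exposure-by-disease table. The counts below are hypothetical, not the study’s actual data:

```python
def odds_ratio(cases_exposed, cases_unexposed,
               controls_exposed, controls_unexposed):
    """Odds of exposure among cases divided by odds among controls."""
    return (cases_exposed / cases_unexposed) / \
           (controls_exposed / controls_unexposed)

# Hypothetical counts: 80 of 119 ADHD cases had detectable metabolite
# levels, versus 450 of 900 controls
print(round(odds_ratio(80, 39, 450, 450), 2))  # 2.05
```

An odds ratio near 2 means the odds of exposure were about twice as high among affected children, which is the kind of association the study reported; it says nothing by itself about whether the exposure caused the disorder.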

Join the ever-expanding club

If further research does identify a mechanistic tie to this identified correlation, then these pesticides will join an ever-growing suite of chemicals we’ve introduced into the environment that influence our endocrine and neural systems. These chemicals are called endocrine disruptors.
