Your mother *is* always with you

Mother and child, microchimeras

When you’re in utero, you’re protected from the outside world, connected to it only via the placenta, which is supposed to keep you and your mother separated. Separation is generally a good thing because you are foreign to your mother, and she is foreign to you. In spite of the generally good defenses, however, a little bit of you and a little bit of her cross the barrier. Scientists have recently found that when that happens, you often end up toting a bit of mom around for decades, maybe for life.

The presence of another individual's cells in your body is called microchimerism. The chimera of mythology was a beast assembled from the parts of many animals, including lion, goat, and snake. In genetics, a chimera carries the genes of some other individual along with its own, perhaps even the genes of another species. In microchimerism, we carry a few cells from someone else around with us. Most women who have been pregnant harbor not only their own cells but also some cells from their offspring. I'm probably carrying around cells from each of my children.

Risks and benefits of sharing

Microchimerism can be useful but also carries risks. Researchers have identified maternal cells in the hearts of infants who died of infantile lupus and determined that the babies had died of heart block caused in part by these maternal cells, which had differentiated into excess heart muscle. On the other hand, in children with type 1 diabetes, maternal cells found in the pancreatic islets appear to respond to damage and work to repair it.

The same good/bad outcomes exist for mothers who carry cells from their children. There has long been an association between past pregnancy and a reduced risk of breast cancer, but the reason has been unclear. Researchers studying microchimerism in women who had been pregnant found that those without breast cancer had fetal microchimerism at three times the rate of women with the cancer.

Microchimerism and autoimmunity

Autoimmune diseases develop when the body attacks itself, and several researchers have turned to microchimerism as one possible mechanism for this process. One fact that led them to investigate fetal microchimerism is the heavy female bias in autoimmune illness, which suggests a female-specific event, like pregnancy. On the one hand, pregnancy appears to reduce the effects of rheumatoid arthritis, an autoimmune disorder affecting the joints and connective tissues. On the other hand, women who have been pregnant are more likely to develop scleroderma ("hard skin"), an autoimmune disorder of the skin and organs that involves excess collagen deposition. There is also a suspected association between microchimerism and pre-eclampsia, a condition in pregnancy that can lead to dangerously high blood pressure and other complications that threaten the lives of mother and baby.

Human leukocyte antigen (HLA)

The autoimmune response may be rooted in a similarity between mother and child in their HLAs, immune-related proteins encoded on chromosome 6. This similarity may contribute to the immune imbalances that lead to autoimmune diseases: because the HLAs of mother and child are so alike, the immune system may tip out of balance, with an effective excess of HLA. If the HLAs were more different, the mother's immune system might simply attack and destroy the fetal HLAs, but with the strong similarity, fetal HLAs may be like an unexpected guest that behaves like one of the family.

Understanding the links between microchimerism and disease is the first step in exploiting that knowledge for therapies or preventive approaches. Researchers have already used this information to predict the development of a stem cell transplant complication called graft-versus-host disease (GVH). In stem cell transplants, female donors who have been pregnant are associated with a higher rate of GVH because they are microchimeric. Researchers have also exploited microchimerism to try to predict early rejection of kidney and pancreas transplants.

(Photo courtesy of Wikimedia Commons and photographer Ferdinand Reus).

Think the eye defies evolutionary theory? Think again

The compound lens of the insect eye

Win for Darwin

When Darwin proposed his theory of evolution by natural selection, he recognized that the eye might be a problem. He even confessed that it seemed "absurd" to think that the complex human eye could have evolved through random mutations and natural selection. Although evolution remains a fact, and natural selection remains a theory, the human eye now has some solid evolutionary precedent. A group of scientists that has established a primitive marine worm, Platynereis dumerilii, as a developmental biology model has found that it provides a key to the evolution of the human—and insect—eye.

Multiple events in eye evolution, or only one?

The divide over the eye arose because insects have the familiar compound eye—think of how fly vision is usually depicted—and vertebrates have a single-lens eye. Additionally, insects use rhabdomeric photoreceptors, whereas vertebrates have a type known as ciliary photoreceptors. Rhabdomeric receptors increase surface area the way our small intestine does—with finger-like extensions of the cell. Ciliary cells have a hairy appearance because of cilia that project outward from the cell. A burning question in evolutionary biology was how these two very different kinds of eyes, with their different photoreceptors, evolved. Were there multiple events of eye evolution, or just one?

Just once?

Work with P. dumerilii indicates a single evolutionary event, although the usual scientific caveats in the absence of an eyewitness still apply. This little polychaete worm, a living fossil, hasn't changed in about 600 million years, and part of its prototypical insect brain responds to light. In this system is a complex of cells that forms three pairs of eyes and carries two types of photoreceptor cells. Yep, those two types are the ciliary and the rhabdomeric. This little marine worm has both kinds of receptors, using the rhabdomeric receptors in its little eyes and the ciliary receptors in its brain. Researchers speculate that the light receptors in the brain serve to regulate the animal's circadian rhythm.

How could the simultaneous existence of these two types of receptors lead to the evolution of two very different kinds of eyes? An ancestral form could have carried duplicate copies of one or both photoreceptor genes. If the second copy of the rhabdomeric receptor gene were recruited to an eye-like structure, evolution continued down the insect path. But if the second copy of a ciliary cell's photoreceptor gene were co-opted for another function, and those cells were ultimately recruited from the brain for use in the eye, then evolution marched in the vertebrate direction.

All of the above is speculation, although this worm's light-sensitive molecule, or opsin, is very much like the opsin our own rods and cones make, and the molecular biology strongly indicates a relationship. The finding doesn't completely rule out multiple eye-evolution events, but it certainly provides some nice evidence for a common eye ancestor for insects and vertebrates.

Note: This work appeared in 2004 and got a detailed writeup at Pharyngula.

Is the tree of life really a ring?

A proposed ring of life

The tree of life is really a ring

When Darwin proposed his ideas about how new species arise, he produced a metaphor that we still adhere to today to explain the branching patterns of speciation: The Tree of Life. This metaphor for the way one species may branch from another through changes in allele frequencies over time is so powerful and of such long standing that many large studies of the speciation process and of life’s origins carry its name.

It may be time for a name change. In 2004, an astrobiologist and a molecular biologist from UCLA found that a ring metaphor may better describe the advent of the earliest eukaryotes. Astrobiologists study the origins of life on our planet because of the potential links between these earthly findings and life on other planets. Molecular biologists study the evolutionary patterns and relationships that our molecules—such as DNA or proteins—reveal. Molecular biologist James Lake and astrobiologist Mary Rivera of UCLA teamed up to examine how genomic studies might reveal some clues about the origins of eukaryotes on Earth.

Vertical transfer is so 20th century

We've all heard of the tree of life, in which one organism begets another, passing on its genes vertically; this vertical transfer of genes produces a tree, with each new lineage becoming a new branch. The method of gene transfer that would produce a genuine circle, or ring, is horizontal transfer, in which two organisms fuse their genomes to produce a new organism. In this scenario, the ends of two branches fuse via their genomes to close the circle. It is this fusion of two genomes that may have produced the eukaryotes.

Here, have some genes

Eukaryotes are cells with true nuclei, like the cells of our bodies. The simplest eukaryotes are the single-celled variety, like yeasts. Before eukaryotes arose, single-celled organisms without nuclei—called prokaryotes—ruled the Earth. We lumped them together in a single kingdom until comparatively recently, when taxonomists split them into two separate domains, the Archaebacteria and the Eubacteria, with the eukaryotes making up a third. Archaebacteria are prokaryotes with a penchant for difficult living conditions, such as boiling-hot water. Eubacteria include familiar representatives such as Escherichia coli.

Genomic fusion

According to the findings of Lake and Rivera, the two prokaryotic domains may have fused genomes to produce the first representatives of the Eukarya domain. By applying complex algorithms to the genomic relationships among 30 organisms—hailing from each of the three domains—Lake and Rivera produced various family "trees" of life on Earth, and found that the "trees" with the highest cumulative probabilities of having actually occurred really joined in a ring: a fusion of two prokaryotic branches to form the eukaryotes. Such wholesale gene swapping isn't something we eukaryotes do; if we did, the equivalent would be something like walking up to a grizzly bear and handing over some of your genes for it to incorporate.

Our bacterial parentage: the union of Archaea and Eubacteria

Although not everyone buys into the "ring of life" concept, the findings of Lake and Rivera help resolve some confusion over the origins of eukaryotes. When we first began analyzing the relationship of nucleated cells to prokaryotes, we identified a number of genes—called "informational" genes—that seemed to be directly inherited from the Archaea branch of the Tree of Life. Informational genes are involved in processes like transcription and translation, and indeed, recent "ring of life" research suggests a greater role for Archaea. But we also found that many eukaryotic genes traced back to the Eubacteria domain, and that these genes were more operational in nature, being involved in cell metabolism or lipid synthesis.

Applying the tree metaphor did not help resolve this confusion. If eukaryotes vertically inherited these genes from their prokaryotic ancestors, we would expect to see only genes representative of one domain or the other in eukaryotes. But we see both domains represented in the genes, and the best explanation is that organisms from each domain fused entire genomes—horizontally transferring genes—to produce a brand new organism, the progenitor of all eukaryotes: yeasts, trees, giraffes, killer whales, mice, … and us.

How the genetic code became degenerate

Our genetic code consists of 64 different combinations of four RNA nucleotides—adenine, guanine, cytosine, and uracil. These four molecules can be arranged in groups of three in 64 different ways; mathematically, that is 4 x 4 x 4 = 64 possible combinations.
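
To make the arithmetic concrete, here is a minimal Python sketch (mine, not from the original article) that enumerates every possible three-letter combination of the four bases:

```python
from itertools import product

bases = ["A", "G", "C", "U"]  # the four RNA nucleotides

# Every ordered group of three bases is one codon: 4 x 4 x 4 = 64
codons = ["".join(triplet) for triplet in product(bases, repeat=3)]

print(len(codons))   # 64
print(codons[:5])    # ['AAA', 'AAG', 'AAC', 'AAU', 'AGA']
```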

Shorthand for the language of proteins

This code is cellular shorthand for the language of proteins. A group of three nucleotides—called a codon—is a code word for an amino acid. A protein is, at its simplest level, a string of amino acids, which are its building blocks. So a string of codons provides the language that the cell can “read” to build a protein. When the code is copied from the DNA, the process is called transcription, and the resulting string of nucleotides is messenger RNA. This messenger takes the code from the nucleus to the cytoplasm in eukaryotes, where it is decoded in a process called translation. During translation, the code is “read,” and amino acids assembled in the sequence the code indicates.

The puzzling degeneracy of genetics

So given that there are 64 possible triplet combinations for these codons, you might think that there are 64 amino acids, one per codon. But that's not the case. Instead, our code is "degenerate": in some cases, more than one triplet of nucleotides provides a code word for the same amino acid. Thus, these redundant codons are all synonyms for the same protein building block. For example, six different codons indicate the amino acid leucine: UUA, UUG, CUA, CUG, CUC, and CUU. When any one of these codons turns up in the message, the cellular protein-building machinery inserts a leucine into the growing amino acid chain.
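
A toy translation function shows the redundancy in action. This is an illustrative sketch using only a fragment of the standard codon table, not a full implementation:

```python
# A fragment of the standard genetic code (codon -> amino acid);
# the full table has 64 entries.
CODON_TABLE = {
    "UUA": "Leu", "UUG": "Leu", "CUA": "Leu",
    "CUG": "Leu", "CUC": "Leu", "CUU": "Leu",   # six synonyms for leucine
    "UUU": "Phe", "UUC": "Phe",                 # two synonyms for phenylalanine
}

def translate(mrna):
    """Read an mRNA string three bases at a time and return the amino acid chain."""
    return [CODON_TABLE[mrna[i:i + 3]] for i in range(0, len(mrna), 3)]

# Two different messages, same protein: the codons are synonyms.
print(translate("UUACUGUUU"))  # ['Leu', 'Leu', 'Phe']
print(translate("CUUCUCUUC"))  # ['Leu', 'Leu', 'Phe']
```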

This degeneracy of the genetic code has puzzled biologists since the code was cracked. Why would Nature produce redundancies like this? One suggestion is that Nature did not use a triplet code originally, but a doublet code. Francis Crick, of double-helix fame, posited that a two-letter code probably preceded the three-letter code. But he did not devise a theory to explain how Nature made the universal shift from two to three letters.

A two-letter code?

There are some intriguing bits of evidence for a two-letter code. One of the players in translation is transfer RNA (tRNA), a special sequence of nucleotides that carries triplet codes complementary to those in the messenger RNA. In addition to this complementary triplet, called an anticodon, each tRNA also carries a single amino acid that matches the codon it complements. Thus, when a codon for leucine—UUA for example—is “read” during translation, a tRNA with the anticodon AAU will donate the leucine it carries to the growing amino acid chain.

Aminoacyl tRNA synthetases are enzymes that link an amino acid with the tRNA carrying the appropriate anticodon. Each type of tRNA has its specific synthetase, and some of these synthetases use only the first two nucleotide bases of the anticodon to decide which amino acid to attach. If you look at the code words for leucine, for example, you'll see that four of the six begin with "CU." The only difference among those four is the third position in the codon—A, U, G, or C. Thus, these synthetases need to rely only on the doublets to be correct.
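
That two-base sufficiency is easy to check. The sketch below (again illustrative, covering only the CU family of leucine codons) groups those four codons by their first two bases and confirms that the doublet alone settles the amino acid:

```python
# The four CU-family codons for leucine
CU_FAMILY = {"CUU": "Leu", "CUC": "Leu", "CUA": "Leu", "CUG": "Leu"}

prefixes = {codon[:2] for codon in CU_FAMILY}   # first two bases of each codon
amino_acids = set(CU_FAMILY.values())           # amino acids they encode

print(prefixes)      # {'CU'}  -- a single doublet covers all four codons
print(amino_acids)   # {'Leu'} -- so reading "CU" alone is unambiguous
```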

Math and doublets

Scientists at Harvard believe that they have solved the evolutionary mystery of how the triplet form arose from the doublet. They suggest that the doublet code was actually read in groups of three doublets, but with only the first two “prefix” or last two “suffix” pairs actually being read. Using mathematical modeling, these researchers have shown that all but two amino acids can be coded for using two, four, or six doublet codons.

Too hot in the early Earth kitchen for some

The two exceptions are glutamine and asparagine, which at high temperatures break down into the amino acids glutamic acid and aspartic acid. Their inability to retain structure in hot environments suggests that in the early days of life on Earth, when the doublet code was in use, the primordial soup was too hot for stable synthesis of heat-intolerant, triplet-coded amino acids like glutamine and asparagine.

Think you’re eating snapper? Think again

Grad students learn PCR, uncover fish fraud

It's a great thing to get your name published in the journal Nature, the pinnacle of publishing achievement for a biologist, while you're still in school. Such was the good fortune of six graduate students participating in a course designed to teach them DNA extraction, amplification, and sequencing. They identified a real question to answer while applying their techniques, and their results earned them a brief communication in Nature and national recognition. Not bad; I hope everyone also earned an "A."

The group, led by professors Peter Marko and Amy Moran at the University of North Carolina-Chapel Hill, suspected that fish being sold as red snapper in U.S. markets were actually mislabeled, in violation of federal law. This kind of fraud is nothing new; marketers have in the past created "scallops" by cutting scallop-shaped chunks from the wings of skates (part of the cartilaginous fish group), and have labeled the Patagonian toothfish as Chilean sea bass.

Protections can drive fraud

Such mislabeling has far-reaching implications, well beyond concerns about defrauding consumers of the fish they want. If fisheries and fish dealers are reporting their catches as red snapper or scallops or sea bass when they are, in fact, other marine species, then data on the abundance and distribution of all of these species will be misleading. Red snapper, Lutjanus campechanus, was placed under strict management in 1996, a move that gave incentive to the fishing industry and retailers to mislabel fish. Some experts suspect that many fish under heavy restriction end up with their names on a different species for market.

Who is responsible for the mislabeling? Fishermen pull in their catches and identify them on the boat or at the dock. The catch goes to a fish dealer, who is also responsible for reporting what species and how many of each species were caught. This report becomes the official number for the species. The dealer then sends the fish on to the retail market, where it is sold in stores and restaurants. Misidentification on the boat or dock is one reasonable possibility because some of the species identified in the North Carolina study frequent the same types of habitat, primarily offshore waters around coral reefs. These species, which include vermillion snapper and silk snapper, do look very much like red snapper, although there are some identifiable morphological differences.

One filet is just like the other?

So misidentification could be an honest mistake or a purposeful change at the boat or dock, or it could be a willful relabeling at the restaurant or market. By the time a fish is processed, it consists essentially of a filet that is indistinguishable from that of other, similar fish. Hapless consumers end up paying red snapper prices—roughly twice as much—for silk snapper, thinking they're getting the pricier fish.

But the DNA sequencing the North Carolina group performed not only turned up species closely related and very similar to red snapper, but also uncovered some sequences with no match among known species in gene databanks. In other words, fish of unknown identity are being caught, sold, and eaten as red snapper before we even have a chance to document what they are, their habitats, or their numbers.

Mislabeling is rampant

The grad students and professors also found that some of the fish being marketed as Atlantic red snapper were, in a few cases, from the other side of the planet, including the crimson snapper, which occurs in the Indo-West Pacific. All told, they found that 77% of the fish samples from stores in the eastern and midwestern U.S. were mislabeled as red snapper.

One way to prevent such mislabeling is to require identification of the country of origin of fish sold at market. The USDA has instituted such a program, although confusion will likely persist about fish caught in international waters. And the mislabeling isn’t only a U.S. phenomenon.

In the meantime, how do you know you’re getting red snapper? Some fish ecologists recommend avoiding it entirely because it still suffers from overfishing; however, one way to know your fish is to ask for it with the skin on, or completely intact. If you’ve got a smart phone, you can just look up the image and compare. Alternatively, you could just order the salad.

The piggish origins of civilization

Follow the pig

Researchers interested in tracing the path of human civilization from its birthplace in the Fertile Crescent to the rest of the world need only follow the path of the pig.

Pig toting

Until this research was reported, the consensus was that pigs had fallen under our magical domestication powers only twice, about 9,000 years ago: once in what is called the Near East (Turkey), and a second time in what is called the Far East (China). Morphological and some genetic evidence seemed to point to these two events only. That led human geographers to conclude that humans must have toted domesticated pigs from the Near or Far East to other parts of the world, like Europe or Africa, rather than domesticating the wild boars they encountered in each new locale.

Occam’s Razor violation

As it turns out, those ideas—which enjoyed the support even of Charles Darwin—were wrong. And they provide a nice example of nature violating Occam's Razor, the rule that scientists should select the explanation that requires the fewest assumptions. In the case of the pig, two domestication events definitely required fewer assumptions than the many we now believe to have occurred.

Research published in the journal Science in 2005 has identified at least seven occurrences of the domestication of wild boars. Two events occurred in Turkey and China, as previously thought, but the other five events took place in Italy, Central Europe, India, southeast Asia, and on islands off of southeast Asia, like Indonesia. Apparently, people arrived in these areas, corralled some wild boars, and ultimately domesticated them, establishing genetic lines that we have now traced to today.

As usual, molecular biology overrules everything else

The scientists uncovered the pig domestication pattern using modern molecular biology tools, relying on a genetic tool known as the mitochondrial clock. Mitochondria have their own DNA, which codes for their own specialized mitochondrial proteins. Because mitochondria are essential to cell function and survival, selection pressure against changes in coding sequences is strong, and such changes are rare. For this reason, most changes are random changes in noncoding regions, and these accumulate slowly and at a fairly predictable rate over time. This rate of accumulation is the mitochondrial clock, which we use to tick off the amount of time that has passed between mutations.
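
As a rough illustration of how such a clock is read, here is a back-of-the-envelope calculation with invented numbers; the rate and counts below are placeholders for illustration, not values from the pig study:

```python
# Differences accumulate on both lineages since their common ancestor, so
# divergence time ~= (differences per site) / (2 * substitution rate per site per year).

rate = 2e-8            # assumed substitution rate per site per year (illustrative)
sites = 1_000          # length of the noncoding region compared
differences = 18       # observed differences between two aligned sequences

per_site = differences / sites
divergence_years = per_site / (2 * rate)
print(f"~{divergence_years:,.0f} years since the lineages split")  # ~450,000 years
```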

Tick-tock, mitochondrial clock

Very closely related individuals have almost identical mitochondrial sequences; for example, your mitochondria are probably identical in sequence to your mother's. You inherited those mitochondria only from your mother, whose egg provided these essential organelles to the zygote that ultimately became you. Were someone to sample the mitochondria of one of your relatives thousands of years from now, they would probably find only a few changes; a sample from someone unrelated to you would show different changes, and a different number of them, indicating a more distant relationship.

That’s how the researchers figured out the mystery of the pigs. They sampled wild boars from each of the areas and sampled domestic pigs from the same locales. After comparing the mitochondrial DNA sequences among these groups, they found that pigs in Italy had sequences very like those of wild boars in Italy, while pigs in India had sequences very like those of wild boars there.
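
The comparison itself boils down to counting positions at which aligned sequences differ. This sketch uses invented snippets, not real pig or boar data, just to show the logic:

```python
def differences(seq_a, seq_b):
    """Count positions where two aligned mitochondrial sequences differ."""
    return sum(a != b for a, b in zip(seq_a, seq_b))

# Toy aligned snippets, invented purely for illustration.
italian_boar = "ACCTAGGATCCA"
italian_pig  = "ACCTAGGATCCA"   # identical: closely related
indian_boar  = "ACGTAAGATTCA"   # more differences: more distant

print(differences(italian_pig, italian_boar))  # 0
print(differences(italian_pig, indian_boar))   # 3
```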

Approachable teenage-like pigs

How did we domesticate the pigs? Researchers speculate that adult boars (males) who still behaved like teenagers were most likely to approach human settlements to forage. They were also less aggressive than males who behaved like full adults, and thus, easier to domesticate. They fed better on human food scraps than did their more-mature—and more-skittish—brethren, and enjoyed better survival and more opportunities to pass on their juvenile characteristics, which also included shorter snouts, smaller tusks, and squealing, to their offspring. Domestication was just the next step.

Magnetic fields and the Q

Sorry, not for Trekkies. This Q is chemical.

For years, people have worried that magnetic fields have adverse health effects–or, conversely, have peddled magnets as beneficial to health. But although scientists have repeatedly demonstrated chemical responses to magnetic fields, no one had ever shown a magnetic field directly affecting an organism.

The earth generates a weak magnetic field, the result of electric currents moving in its molten core. Nature presents plenty of examples of animals that appear to use magnetic fields. Some bacteria can detect the fields and use them for movement. Birds appear to use magnetic fields to navigate, and researchers have shown that attaching magnets to birds interferes with their ability to navigate. Honey bees become confused in their dances when the earth's magnetic field fluctuates, and even amphibians appear to use magnetism for navigation. But no one has clearly demonstrated the mechanism by which animals sense and use magnetic fields.

Do pigeons use a compass?

Some research points to birds using tiny magnetic particles in their beaks to fly the right way. But these particles don’t tell the birds which way is north; they simply help the bird create a topographical map in its head of the earth over which it flies. The magnetic particles tell a pigeon there’s a mountain below, but not that the mountain is to the north. The conundrum has been to figure out how the pigeon knows which way is north in the absence of other pointers, such as constellations.

The answer to the conundrum lies with bacteria. Scientists in the UK have used the purple bacterium Rhodobacter sphaeroides to examine what magnetic fields do at the molecular level. These bacteria are photosynthetic, absorbing light and converting it to energy much as plants do. The absorbed light triggers a set of reactions that carry energy via electrons to the reaction center, where a pigment traps it. Under normal conditions, as the pigment traps the energy, it almost instantaneously converts it to a stable, safe form, but sometimes the energy can instead produce an excited molecule that is biologically dangerous. As it turns out, these reactive molecules may be sensitive to magnetic fields.

A radical pair…or dangerous triplet

A chemical mechanism called the "Radical Pair Mechanism" is the route by which the potentially dangerous molecules can form. In this mechanism, an excited electron on one molecule may pair with an excited electron on another. If the two excited molecules come together, they can form what is called a "radical pair in a singlet state," because two singlets have paired. Under normal conditions, this pairing does not happen; in the photosynthetic bacterium, for example, a compound called a quinone (Q) inhibits formation of this pair or of an equally damaging triplet state.

But when a Q is not present, the singlet or triplet state results. If the triplet forms, it can interact with oxygen to produce a highly reactive, biologically damaging singlet molecule that we know as a "radical." You have probably heard of radicals in the context of antioxidants—they are the molecules that antioxidants soak up to prevent them from causing harm. You may also have heard of carotenoids, pigments that are antioxidants. In a normal photosynthetic bacterium, the carotenoids present serve as the Q, the compound that prevents formation of the damaging radical.

A helpful effect of magnetic fields?

Where do magnetic fields come in? Previous work indicated an influence of magnetic fields on triplet formation, and thus, on radical formation. One excellent model to test the effects of fields in a biological system is to remove the Q, the molecular sponge for the triplets, and then apply magnetic fields to see whether triplets—and radicals—form.

That's exactly what the researchers did, using a mutated form of R. sphaeroides that did not make carotenoids—the Q. The result? The stronger the field, the less radical product was made. They have demonstrated a magnetic field effect in an organism for the first time, and the effect was helpful, not damaging. Their next step is to examine whether the bacteria grow better in the presence of the fields.

Platypus spur you? Grab a scorpion

The most painful egg-laying mammal: the platypus

The duckbill platypus is an impossible-looking, risible creature that we don’t typically associate with horrific pain. In fact, besides its odd looks, its greatest claim to fame is that it’s a mammal that lays eggs. But that’s just because you’re not paying close enough attention. On the hind legs of the male platypus are two spurs that inject a venom so painful, the recipient human writhes for weeks after the encounter. In spite of the fact that platypuses (platypi?) and humans don’t hang out together much, platypus venom contains a specific peptide–a short protein strand–that can directly bind to receptors on our nerve cells that then send signals of screeching pain to our brains. Ouch.

Hurting? Reach for a scorpion

If you’ve ever experienced platypus-level pain and taken pain killers for it, you know that they have…well…side effects. It’s because they affect more than the pain pathways of the body. The search for pharmaceuticals that target only the pain pathway–and, unlike platypus venom, inhibit it–forms a large part of the “rational design” approach to drug development. In other words, you rationally try to design things that target only the pathway of interest. In this case, researchers reached for the scorpion.

Their decision has precedent. In ancient Chinese medical practice, scorpion venom has been used as a pain reliever, or analgesic. But as developed as the culture was, the ancient Chinese didn’t have modern protein analysis techniques to identify the very proteins that bind only to the pain receptors and inhibit their activity. Now, a team from Israel is doing exactly that: teasing apart the various proteins in scorpion venom and testing their ability to bind pain receptors in human nerve cells.

The next step? Mimicry

With proteins in-hand, the next step will be to create a synthetic mimic that influences only the receptors of interest. It’s a brave new world out there, one where we wrestle proteins from scorpion venom and then make copycat molecules to ease our pain.

For your consideration

Why do you think the platypus makes proteins in its venom that human pain receptors can recognize, given that humans generally haven't targeted platypuses (platypi?) as prey over the course of platypus evolution?

In the human body, a receptor may be able to bind each of two closely related molecules–as a hormone receptor does with closely related hormones–but one of the molecules activates the receptor, while the other molecule inhibits it. Taking this as a starting point, why do you think some proteins in scorpion venom–which often causes intense pain–have the potential effect of alleviating pain?

Mosquito nose transplanted to frogs, flies

To combat malaria, we must understand the mosquito’s nose

Malaria affects several hundred million people worldwide every year, and more than one million of them–mostly children–die of the disease. The vectors that transfer the plasmodium causing malaria to humans are female mosquitoes of the genus Anopheles. To combat these mosquitoes and this deadly disease, we must first understand the mosquito nose.

The mosquito sense of smell is localized to the animal’s antennae. There, nerve cells sense various odors (all smells are particulate!) via molecules of protein called receptors (because they “receive” the input). Scientists have reasoned that if they can understand which odors trigger these receptors–and thus, the mosquito’s interest–they may be able to develop odorants (smells) that distract the mosquito from people, thus reducing transmission of malaria.

Fruit flies and frogs with mosquito noses

While using the actual animal might seem to be the way to go, scientists turn to more standard laboratory models for such work. Fruit flies and frog eggs are long-time, well-characterized standbys in the lab environment, and specific manipulations allow researchers to introduce genes from other organisms into these species. Because fruit flies and frogs are such prolific animals, reproducing by the hundreds, the proteins that these introduced genes encode can be produced in the context of the whole organism in large numbers. In science and industry, a process that allows big production outputs like this is called “high throughput.”

The labs of Dr. John Carlson of Yale and Dr. Lawrence Zweibel of Vanderbilt have respectively co-opted the fruit fly (Drosophila melanogaster) and a frog (species not specified) as their means of high-throughput production of these mosquito nose proteins. The fly approach is a bit slower, involving painstaking insertion of the mosquito genes into flies one at a time. The flies then express the mosquito proteins in their antennae, replacing their own receptors, which have been knocked out.

The frog egg approach is more truly high throughput, as the engineered frog eggs express an abundance of the mosquito nose proteins. The smell-sensitive egg then can be tested using a system that measures nerve signals: Whenever a specific odorant dissolved in the buffer solution surrounding the egg sets off the nose protein receptor, the system registers the electrical response.

The flies are good for testing compounds that volatilize in air, showing by their behavior whether or not the odor attracts, while the frog eggs allow for a more truly high-throughput analysis. Together, they make quite a team when it comes to testing the mosquito’s sense of smell.

Frogs and flies, working together

The two labs tested each system using 72 receptors from the Anopheles “nose” and a panel of 110 odorants. The mosquito-nosed frog eggs and mosquito-nosed flies yielded results that pretty much matched: Some receptors are generalist types, reacting to just about any smell, but a special few focus more on specific odors. As it turns out, 27 of these receptors are fine-tuned to respond to the odorants in human sweat. The results from these studies are reported simultaneously in two papers, one in Nature and one soon to appear in the Proceedings of the National Academy of Sciences.

A decoy smell for the mosquito

Why go to all the trouble to make mosquito noses in flies and frogs? The hope is to use these high-throughput methods to identify compounds that can serve as decoys for the mosquitoes by deceiving these "nose" receptors. If researchers can identify an eau de sweat that distracts the mosquito from a human target, or an odorant combination that repels the mosquito from people, the outcome could be a decrease in malaria transmission rates.

UPDATE: Malaria does not distinguish between kings and commoners: News reports indicate that the microscopic plasmodium may have felled King Tut himself.

Ideas for questions

Why do scientists focus on species like fruit flies or frogs (e.g., Xenopus laevis) when they do research like this? Why not use the species being studied instead?

Do some research on the relationship between the malarial plasmodium and the mosquito. Do all species of mosquito transmit this pathogen? What distinguishes species that transmit malaria?

The article references measuring electrical activity in the frog eggs in response to odorants. Look up “voltage clamp.” How is that used to measure electrical activity?

World/public health question: What has been done in the past to combat malaria? How effective were these efforts? What is being done today? Some efforts are high-tech, like the studies described above. Some are low-tech. Can you identify a few examples of each?
