Tuesday, December 28, 2010

The Light Side of the Road

Since I don’t own a scale, I can’t tell you exactly how much my bike weighs. But rest assured, it is heavy. It’s a secondhand steel-frame model purportedly manufactured by the Sears department store, probably in the 1970s. Friends complain of its unwieldy mass (particularly anyone hoisting it onto a car’s bike rack), and more than one person has suggested “upgrading” to something more modern. But I’ve bonded with the vehicle and strive to paint it in a positive light. Sure, it sucks carrying the behemoth up a flight of stairs to shelter when it rains. But overall, it’s a solid bike. Sturdy and hardworking. “And a comfortable ride too,” I tell the skeptics, “not like those flimsy featherweight carbon-frame bicycles that retail for ten times as much.” And so I eagerly put aside any ambitions of doing a year-end top ten science innovations/headlines/etc. list (you’re surely as sick of them by now as you are of holiday cookies) to report on this merry little piece from the British Medical Journal's Christmas edition, in which an anesthesiologist attempted to determine whether the newer, lighter bikes had any advantage over their clunkier predecessors.

Being fortunate enough to have two bicycles available for his daily commute, the good doctor, one Dr. Groves, designed a simple experiment to determine which of the bikes was the more efficient way to get to work. Over a 6-month period (winter to summer), he chose his bike for the day by flipping a coin. The riding time for each round-trip journey, as well as the top speed, was recorded by a bicycle computer. During the experimental period, Groves made 30 trips on the steel-frame (809 miles) and 26 trips on the carbon-frame (711 miles).* At the end of month 6, he totaled the data and, wouldn’t you know it, the older, heavier steel bike fared no worse than the shiny new one, which had been purchased for nearly a month’s salary for the average person without an MD.

The author discusses several physical forces that can affect the cyclist: rolling resistance, drag and gravity. Rolling resistance (the friction encountered by round objects, such as bike tires, moving on a flat surface) is minimal on paved roads, so the additional work needed to overcome it is slight. The effect of drag (aka air resistance) on the cyclist is significant. However, drag is an odd force. It is independent of mass and instead increases with velocity (roughly with its square, in fact).† It’s a drag, but not any more so on a heavier bike. This leaves gravity as the most relevant consideration. As you may vaguely recall from your first semester of physics, more work is needed to push a bike with greater mass up a hill. But since a round-trip commute can’t actually be all uphill both ways, things should even out a bit as you coast downhill. Unless, of course, you were crazy enough to purchase a fixed-gear bicycle.
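For the quantitatively inclined, the two forces that matter here can be sketched in a few lines of Python. The air density, drag area, masses and hill height below are illustrative assumptions of mine, not figures from the BMJ paper.

```python
# Back-of-the-envelope versions of the two relevant forces.
# All constants are illustrative assumptions, not values from the study.

RHO = 1.2    # air density, kg/m^3
CD_A = 0.6   # drag coefficient x frontal area, m^2 (a typical upright rider)
G = 9.8      # gravitational acceleration, m/s^2

def drag_force(v_mps):
    """Aerodynamic drag: grows with the square of speed, ignores mass entirely."""
    return 0.5 * RHO * CD_A * v_mps ** 2

def climb_work(total_mass_kg, height_m):
    """Work done against gravity: the only term where the bike's mass matters."""
    return total_mass_kg * G * height_m

# Doubling your speed quadruples the drag force...
print(drag_force(5.0), drag_force(10.0))  # 9.0 N vs 36.0 N

# ...while a 4 kg heavier bike costs the same modest extra work on every climb,
# no matter how fast you ride it.
print(climb_work(84, 50) - climb_work(80, 50))  # about 1960 J per 50 m of climbing
```

The author’s point falls straight out of the formulas: mass never appears in the drag term, and in the gravity term it appears only as part of the bike-plus-rider total.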

On average, Groves’ commute was about 7 minutes shorter in summer than in winter. He attributes this in part to the poorer weather and bulkier clothing that plague winter cycling, but he also mentions that the greater caution taken to avoid falling on ice and snow may be an additional slowing factor. This raises an interesting question: was the upper speed limit in the summer months a result of the physical limitations of how fast the rider could propel his bike, or merely the highest speed at which the rider could safely control it? If it was the latter, then one of the bikes might be less efficient, and the rider may just be working harder to achieve the maximum comfortable riding speed on that vehicle. The author made no mention of whether he felt a greater desire for a cold alcoholic beverage following commutes made on the steel bike.

It’s understandable, then, if you still feel that a lighter bike would be easier to pedal. But keep in mind that you’re also hauling your own weight up those hills. Groves’ steel-frame bike was about 9 lbs heavier than his carbon-frame (the bikes were about 30 lbs and 21 lbs, respectively). This looks like an impressive weight difference until you add to each bike the weight of its rider. Couple this with the frequently-made observation that lighter bikes are less comfortable (rumor has it one feels the bumpy road more on the newer bikes), and it’s hard to justify paying more money for less mass.
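To put numbers on that shrinking difference: the bike weights below come from the article, but the rider’s weight is an assumption of mine, since the paper’s figure isn’t quoted here.

```python
# Bike weights from the article; rider weight is an illustrative assumption.
steel_bike = 30.0   # lb
carbon_bike = 21.0  # lb
rider = 170.0       # lb (assumed)

# Compared bike-to-bike, the steel frame sounds dramatically heavier...
bare_diff = (steel_bike - carbon_bike) / carbon_bike
print(f"{bare_diff:.0%}")    # 43%

# ...but as a fraction of the total mass being hauled uphill, the gap nearly vanishes.
loaded_diff = (steel_bike - carbon_bike) / (rider + carbon_bike)
print(f"{loaded_diff:.1%}")  # 4.7%
```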

To be fair, I should note that the author of the bicycle paper does not claim that his single-subject study is a conclusive and exhaustive exploration of the subject. My willingness to generalize his findings to all bicycles on both sides of the Atlantic is a result of the human tendency to gratefully accept any data that supports one’s existing conceptions. ‡ I like my bike and have no plans to get a newer, lighter one. And as far as I’m concerned, there is now medical literature to back me up.

* That’s 56 total trips, so either he doesn’t work a 5-day week or he used other means of transportation more than half the time.

† This makes sense when you think about it. On a reasonably calm day, the “wind” blowing at you as you ride the bike is created by the forward motion of the bike. The faster you go, the windier it feels.

‡ That’s confirmation bias, for those with a fondness for psychological terminology.

Friday, December 17, 2010

Natural’s Not In It

To be clear, I should start by telling you that I’ve never much cared for meat. I deleted it from the menu at age 19 and was completely unfazed by the change. Prior to that I’d favored meat products that were so chopped, burnt and salted that their original source could scarcely be detected. I don’t find steak delicious or barbeque irresistible. I have never wistfully looked at friends eating hamburgers and thought, “Sigh, that could be me.” These days I eat fish from time to time and enjoy it, but if society outlawed the consumption of sea creatures, I doubt I would spend much time mourning the loss. In short, the question of whether or not scientists will be able to produce in vitro meat is not really my problem. I won’t be eating it either way.

Many of my fellow humans don’t share my preferences. People around the world can’t seem to get enough meat, and the growing demand for it threatens not only the health of the consumers and the quality of life of the consumed, but also the planet we all occupy. (So I guess this subject may affect me after all, beyond just the level of scientific curiosity.) The inefficiency of meat production is no secret. 70% of agricultural land is allotted to livestock farming, mostly to supply food for these animals. A meat-eating diet requires more than double the water and energy of a vegetarian one. How do we go about fixing this issue? Some think the answer lies in finding a way to grow meat without growing animals.

In vitro meat is real muscle tissue grown from stem cells of animals.* While being able to grow a steak in a Petri dish would theoretically eliminate many of the problems of traditional meat production, it is (as with many laboratory endeavors) more complicated than it might initially sound. Scientists in the Netherlands had been dutifully working toward assembling the world’s first in vitro sausage, but came up with only about a tenth of the material needed when their funding ran out last year. The logistics of lab-grown meat are complicated, especially with the ultimate goal being the ability to grow meat on a large enough scale to meet the demands of a carnivorous populace. For one thing, an animal-free growth medium must be found. Regular cell-culture medium is made with fetal calf serum and thus using it would require, well, cows. This hardly solves the problem of maintaining livestock. So far the best contender is a medium made from maitake mushrooms. Additionally, just letting stem cells proliferate yields a limp, textureless product. The lab-grown tissue needs exercise to resemble that from live animals. This can be accomplished by regularly administering an electrical current to the growing cells. But of course this process requires electricity, not to mention that it would likely fuel the “Frankenfoods” name-calling that will inevitably greet in vitro meat at its grand debut.

Despite these hurdles, some victories have already been reported. In 2002, scientists successfully grew goldfish fillets in a laboratory using whole muscle biopsy pieces rather than isolated stem cells. The samples grew between 13% and 79% (depending on the type of culture medium used), which makes the experiment sound more like meat amplification than the creation of in vitro meat. It’s sort of like the Bible story about the loaves and fishes, except in this version Jesus dons a lab coat, cuts the fish into small slices, centrifuges the slices into pellets and lets them sit in growth medium for a week. Amen. After growing the fillets, the team marinated them in lemon, pepper, garlic and olive oil and fried them (I am not making this up; it’s in the Materials and Methods section of their article) and presented them to a panel to be viewed and smelled, though not tasted.† The observers concluded that the product appeared to be edible. Not bad, considering that people aren’t exactly queuing up to eat goldfish from any source.

Taste and public willingness to ingest such a novel form of meat are potentially bigger stumbling blocks than any of the technical problems that have thus far arisen. Lab-grown meat is doomed to be perceived as “unnatural”. At this stage, scientists are mostly working on making an in vitro version of processed meat. Because the muscle can only be grown to a limited thickness without creating some sort of artificial vascular structure, steaks are still very far from being realized. The current goal is to make enough thin strips to grind up, flavor and assemble into something like a sausage or a patty. It doesn’t sound especially appetizing. However, consider what consumers already tolerate (not always knowingly) in processed meat made from real animals. Let’s examine how natural the common hamburger is.

Once upon a time, ground beef was made by taking a piece of beef and putting it through a meat grinder. Simple enough. And while the best cuts of beef may not have been selected for this honor, it was at least a single piece of meat from a single cow. With the rise of factory farming and the push for more and cheaper meat, things have gotten a bit messier. A package of ground beef purchased from a modern supermarket is a grim potpourri of meat products from multiple cows, slaughterhouses, cities and countries. Much of the meat used in ground beef is what is referred to as “fatty trimmings”. These are parts cut off from higher-grade meat. They come from sections of the animal that are the most susceptible to E. coli contamination. In order to offer the consumer a lower-fat ground beef (something more comparable to what could be made by grinding whole cuts of high-grade meat) these trimmings can be mixed with processed “texturized beef product”, a substance made by centrifuging fatty trimmings. (Look! A centrifuge, just like in the lab.) About a decade ago, an innovation made it possible to sell trimmings that would previously have been usable only in pet food, due to their high bacterial content. This ingenious method involved simply adding ammonia to the product to kill bacteria. Ammonia, in case you’ve forgotten, is the chemical you use to clean your toilet. Miraculously, the FDA approved this as safe and, since ammonia is considered a “processing agent”, it needn’t even appear on the ingredients of processed beef.‡

The lab-grown meat is starting to sound pretty tasty, isn’t it? If nothing else, it’s at least free of E. coli. Real animals have digestive systems that house these bacteria. Muscle grown in a Petri dish doesn’t generate solid waste, thus eliminating the problem of elimination. But if bad PR doesn’t thwart in vitro meat, cost likely will. So far the research is expensive and there is no solid plan for making the product cheaper than the already rock-bottom (in price and quality) meat pastiches of our modern world. The obvious question is whether creating meat that is kinder to the environment and to animals is even the right approach. Given all the possible hindrances, it might actually be easier to convince society to reduce its meat consumption. I wouldn’t expect meat enthusiasts to give up the product entirely, but the low quality of the meat being consumed says something about its erroneously-perceived necessity. Are people really so desperate to consume this substance that they’re willing to buy beef soaked in toilet cleaner? Maybe meat should be an occasional splurge rather than a daily dietary requirement. Much of the scary processed beef I described in this article is sold to cash-strapped public schools that need to cut back on the cost of their lunch programs. Why not just go the extra step and not buy meat at all?

As for which is more sick and wrong, in vitro meat or regular processed meat, it’s up for debate. One of the more creative objections to lab-grown meat I encountered while researching this article was the possibility of cannibalism. If one can grow muscle tissue from pig or cow explants without killing the animals, one could also grow human meat. In fact, there’s no reason a person couldn’t grow meat from tissue samples from their own body. Given the bizarre items that adventurous gourmands will go out of their way to eat, lab-grown human flesh doesn’t seem out of the question.§ But there is no need to address these ethical concerns yet. We’ve yet to even finish that lab sausage. It’s just food for thought. Bon appétit.

* Thus far this has been done using adult stem cells, which have already differentiated into a specific tissue type (in this case muscle). Unlike the pluripotent embryonic stem cells you hear about in the news, adult stem cells are not immortal. They have a finite number of cell divisions in them before they expire.

† Society is still somewhat unclear as to whether or not it is legal to eat what is still an experimental product. If anyone gave in to curiosity and took a bite of the fried goldfish before feeding it to the trash, they wouldn’t be encouraged to disclose their observations to us.

‡ The punch line to this story is that, following complaints about the nasty smell and taste of ammonia, the processors reduced the amount of the chemical being added to levels that may not be sufficient to kill bacteria. So there is now simultaneously too much and not enough ammonia in America’s hamburgers.

§ Cheese fermented by live maggots (Casu Frazigu), coffee made from beans ingested and excreted by exotic mammals (Kopi Luwak), deliberately rotten eggs (“century egg”). The list goes on.

Who told you this?

Jones, N. 2010. “A Taste of Things to Come?” Nature 468: 752-753.

Marloes, L.P. et al. 2010. “Meet the New Meat: Tissue Engineered Skeletal Muscle.” Trends in Food Science & Technology 21: 59-66.

Benjaminson, M.A. et al. 2002. “In Vitro Edible Muscle Protein Production System (MPPS): Stage 1, Fish.” Acta Astronautica 51: 879-889.

Hopkins, P.D. and Dacey, A. 2008. “Vegetarian Meat: Could Technology Save Animals and Satisfy Meat Eaters.” Journal of Agricultural and Environmental Ethics 21: 579-596.

Moss, M. “The Burger That Shattered Her Life.” The New York Times. October 3, 2009.

Moss, M. “Safety of Beef Processing Method is Questioned.” The New York Times. December 30, 2009.

Friday, December 10, 2010

Desperate Living

Amidst this week’s buzz surrounding Wikileaks, the arsenic-eating bacteria of California’s Mono Lake is almost forgotten. But last week, it was front-page news and people who normally cared little for microbiology were updating their Facebook pages with exuberant quotes about a newly-discovered organism that “redefined life” and somehow related to NASA and the existence of space aliens. In actuality, the discovery was not quite as earth-shattering as your friends would have had you believe. Ever the voice of reason, I’d like to offer a bit of perspective, along with some other organisms to get excited about.

In case you somehow missed the headlines, here’s what happened in California. NASA-funded scientists scooped some bacteria out of Mono Lake, a salt-water lake with a high arsenic concentration, brought them back to the lab and then tried to grow them in an arsenic-rich, phosphorus-deprived environment. The idea was to see if the bacteria could be persuaded to replace phosphorus, an element essential to all previously-discovered life, with arsenic, which is conveniently located one row directly below phosphorus on the periodic table and shares certain chemical properties with it. Phosphorus is incorporated into proteins and lipids, fuels metabolic reactions in the form of ATP and, perhaps most notably, helps form the backbone of DNA. It’s an important element.*

Well, the big news was that the bacterium, christened strain GFAJ-1, lived. It was selected specifically because it was already tolerant of arsenic, an element that is toxic to many living things.† The hopeothesis‡ was that this tolerance would enable it to make do with arsenic in its daily maintenance if phosphorus was unavailable. And make do it did. However, that is all it did. GFAJ-1 didn’t exactly thrive on its new diet. While it still managed to grow in arsenic, it fared much better when provided with phosphorus.

More disappointingly, in recent days criticism from numerous biologists has threatened to turn an unexceptional experiment into an embarrassing one. The NASA team has been accused of science that is literally sloppy: poor washing of DNA and that sort of thing. Critics suggest that the arsenic found in GFAJ-1 may be from contamination rather than actual incorporation into its DNA, and that the bacteria simply survived by grabbing every shred of phosphorus it could find (phosphorus couldn’t be completely removed from the growth medium, just significantly reduced). Luckily for the authors of the original paper, which appeared online in Science last week, these concerns have thus far appeared mostly on blogs, and everyone knows you can’t trust those things. Nonetheless, doubts have been planted that GFAJ-1 is merely an arsenic-tolerant bacterium that builds its DNA using phosphorus just like the rest of us.

And why should you be impressed by an arsenic-tolerant bacterium? GFAJ-1 is just one of many extremophiles living in equally improbable environments on our planet. Extremophiles are organisms that live at temperatures, pH and salinity well outside the norm.§ They not only live in these environments, they grow best in them, having adapted to their unique challenges. You don’t read about these life forms very often because they are incompatible with your own external, and often even internal, environment. Their names don’t turn up in food recalls. Such microorganisms would wither and die if exposed to a world as ordinary as an undercooked hamburger or a sun-soaked potato salad.

Take, for instance, psychrophiles, which grow best at temperatures of about 15°C or lower and stop growing altogether somewhere around 20°C. Many keep right on multiplying below 0°C (the freezing point of water). They live in climates where the snow never melts: permanent ice fields. If normal bacteria could do this, the freezer would be as bad a choice as a cupboard for storing your perishables. Thermophiles and hyperthermophiles live on the opposite end of the temperature spectrum, with optimum growth temperatures above 45°C (113°F) and 80°C (176°F), respectively.** These organisms can grow in places like hot springs, which are often at the boiling point for their altitude, and have been known to find their way into artificial hot spots, such as water heaters. Changes in enzymes and cell membrane structure enable these organisms to flourish in environments that would quickly kill mesophiles like ourselves.
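If it helps to keep the categories straight, here’s a toy classifier based on optimum growth temperature. The cutoffs (roughly 15°C and below for psychrophiles, 45°C and up for thermophiles, 80°C and up for hyperthermophiles) are conventional textbook ones; real taxonomies draw the lines slightly differently, so treat this as a sketch.

```python
def thermal_class(optimum_temp_c):
    """Rough microbial temperature category from optimum growth temp (in Celsius)."""
    if optimum_temp_c <= 15:
        return "psychrophile"
    elif optimum_temp_c >= 80:       # check the hottest category first
        return "hyperthermophile"
    elif optimum_temp_c >= 45:
        return "thermophile"
    else:
        return "mesophile"

print(thermal_class(4))    # psychrophile (permanent ice fields)
print(thermal_class(37))   # mesophile (that would be us)
print(thermal_class(60))   # thermophile (hot springs)
print(thermal_class(100))  # hyperthermophile (water heaters and worse)
```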

Thermophiles add brilliant colors to a hot spring at Yellowstone National Park, while psychrophiles can turn ice red. How cool is that?

Other organisms have adaptations that allow them to live in extremely salty environments (these are called halophiles) or strongly acidic or alkaline environments (acidophiles and alkaliphiles, respectively). The acidophilic (and thermophilic) archaeon Thermoplasma acidophilum was originally discovered in a self-heating coal refuse pile (pH of about 2), which sounds easily as uninviting as an arsenic-filled lake. And if there is no oxygen available, that’s not a problem. Thermoplasma acidophilum can also use sulfur for respiration, which is unequivocally awesome. You are amazed.

So where does this leave poor GFAJ-1? Well, to survive in Mono Lake it already had to be a halophile and an alkaliphile, not to mention its striking ability to handle arsenic. It was an impressive bacterium in its own right before it got swept up in all this arsenic-eating hype and inevitable backlash. And there is no reason to think any less of it. It’s still a fine extremophile; it’s just not the organism that revolutionized biology.

But let’s pretend for a moment that the lab techniques of the NASA experiment were flawless and that the entire scientific community agreed on the results. How important is the creation of an organism that can build DNA from arsenic? How does this “redefine life”? The definition of life is already a complicated and changing one, which can’t be summarized in a single sentence. While living things tend to be made up of the same batch of elements (carbon, hydrogen, nitrogen, oxygen, sulfur and phosphorus), being assembled from these ingredients is not the criterion for being considered alive. Living things grow and reproduce. They have some sort of metabolism. They hold their cellular components together and resist entropy, at least while they are alive. In theory, anything that succeeds in these activities could be categorized as “life”, regardless of which elements it uses. The reason the word “life” is often paired with qualifiers like “as we know it” or “carbon-based” is that we acknowledge that the organisms we have found thus far may not represent the only possible system of life. Had scientists made or discovered an organism that actually used arsenic in its DNA, this would not overturn previously held scientific beliefs; it would just confirm ideas that have yet to be matched to empirical evidence. We can’t say that arsenic-based or silicon-based organisms do not or cannot exist. But we might have to admit that nobody, including the folks at Mono Lake, has yet encountered such an organism.

* Phosphorus in living things exists mostly as phosphate (PO₄³⁻). The arsenic analog is arsenate (AsO₄³⁻). These ions, rather than elemental P and As, were used in the NASA experiment.

† Arsenate can be harmful specifically because it is so similar to phosphate. It binds to receptors intended for phosphate and gums up all sorts of metabolic pathways.

‡ This is my attempt to create a new word. It means a hypothesis that is based more on wishful thinking than on likelihood of outcome. You can help me get it to catch on by using it in daily conversation.

§ Most known extremophiles are microorganisms in the domains Archaea and Bacteria. However they don’t have to be. Who knows, perhaps in the future some lucky explorer will find a species of squirrel or cat that lives in active volcanoes. It could happen.

** The recommended setting for a hot tub is no higher than 104°F, and even then you might pass out and drown if you stay in it for over 20 minutes. And you shouldn’t be in the thing at all if you’re pregnant or on blood thinners or have a heart condition….The list goes on. Humans are sissies.

Who told you this?

Wolfe-Simon, F. et al. “A Bacterium That Can Grow by Using Arsenic Instead of Phosphorus.” Science. Published Online December 2, 2010.

Feller, G. and Gerday, C. 2003. “Psychrophilic Enzymes: Hot Topics in Cold Adaption.” Nature Reviews Microbiology 1: 200-208.

Stetter, K.O. 2006. “Hyperthermophiles in the History of Life.” Philosophical Transactions of the Royal Society B: Biological Sciences 361: 1837-1843.

Baker-Austin, C. and Dopson, M. 2007. “Life in acid: pH homeostasis in acidophiles.” Trends in Microbiology 15: 165-171.

Grossman, L. “Doubts Brew About NASA’s New Arsenic Life.” Wired.com December 7, 2010.

Zimmer, C. “‘This Paper Should Not Have Been Published’.” Slate.com December 7, 2010.

Wednesday, December 1, 2010

Species of the Month: DECEMBER

The quaint holiday decoration you invited into your home and hung over your doorways is a vicious parasite that leeches nutrients from innocent host trees. It is riddled with cytotoxins, and its seeds are dispersed via bird crap. Merry Christmas.

Parasitism: It’s Not Just For Bacteria and Worms
Pretty things can be parasites too. Viscum album is one species of mistletoe*, a group of parasitic flowering plants in the order Santalales. It is an obligate hemiparasite. This means that while it does not derive all of its sustenance from a host plant, it does need some interaction with the host to reach its mature state.† As a hemiparasite, Viscum album need only steal from its host tree’s xylem, the transport tissue that handles water and water-soluble nutrients. It is gracious enough to eschew the host’s phloem, which transports sugars. This makes it less of a pathogen, as the host loses water but not food to the parasite.

Host Infection
Viscum album bears a fruit that some birds find rather tasty. The seeds of its white berries are covered in a gluey substance called viscin. Birds eat the berries and then fly off to another tree where they eventually expel the digested remains of the fruit, its viscin coating still adhering to the seeds. The sticky seeds cling to the new branch and begin to grow. As it enlarges, the plant forms a peg that drills through the host branch and eventually reaches the xylem. Now the parasite develops its haustorium, a root-like appendage that allows it to siphon nutrients from the host.

Life in the Old Country
Viscum album is native to Europe and parts of Asia. It is the original Christmas mistletoe, a leafy green shrub adorned with white berries. It has a wide host range, infecting over 450 tree species, including both hardwood and coniferous varieties. So, yes, hypothetically your Christmas mistletoe could attack your Christmas tree. It’s a rather unsavory mental image.

Coming to America
Viscum album made its way from Europe to the New World in 1900, when horticulturist Luther Burbank deliberately allowed the plant to infect trees in Northern California so that the parasitic shrub could be harvested for Christmas decorations. Over the past century it has expanded its territory by about 4 miles, which isn’t exactly cause for alarm. Despite Burbank’s efforts, most U.S. holiday make-out mistletoe is more likely to be Phoradendron flavescens, which is native to North America.

Can it Hurt You?
Mistletoe contains strong cytotoxins (harmful to cells). Those festive white berries are fine for the birds, but you should definitely not add them to Christmas fruit cake. Nor should you feed them to your dogs or cats or children. Ingesting mistletoe can cause gastrointestinal problems and slow heartbeat, among other things. If anyone at your holiday party eats more than a couple of them, you might want to call poison control.

Can it Help You?
Mistletoe may offer humans something beyond just a flimsy excuse to steal a kiss. In Europe, Viscum album extract (VAE) is widely used in the treatment of cancer, often under the name Iscador. Mistletoe as cancer therapy was first introduced in 1920 by Rudolf Steiner, founder of anthroposophy.‡ Clinical trials of VAE have not always demonstrated consistent results, and many doctors, particularly in the U.S., are skeptical of its efficacy. In Europe it is generally used as a complementary, rather than primary, cancer treatment and is credited more with improving quality of life than with increasing survival rates. Still, given the unpleasantness of cancer therapies, such an improvement is an impressive contribution. Especially for a parasitic lowlife like mistletoe.

What Does This Have To Do With Jesus and/or Kissing?
As far as I can tell, very little. Like many peculiar holiday customs, mistletoe usage likely predates Christianity. It crops up in discussions of Norse Mythology and Druid rituals, but nobody seems able to form a cohesive narrative of how it came to be that a person could demand a kiss if they managed to lure somebody under the hanging holiday decoration. Most references to mistletoe as a Christmas ornament appear in the 18th century or later, by which time its role was already established. I asked a few scholars of things European and didn’t get anything more concrete. I did, however, learn about a popular 19th century song called The Mistletoe Bough which tells the charming tale of a young bride who suffocates in a chest while playing a game of hide and seek. How’s that for holiday cheer?

* I will refer to Viscum album throughout this article as mistletoe. However multiple plants go by that moniker. To properly distinguish it from its fellows, it should be addressed as European Mistletoe, or Common Mistletoe.

† As opposed to an obligate parasite, a facultative parasite can, in a pinch, grow without the aid of a host. A holoparasite, in contrast to a hemiparasite, lacks chlorophyll and thus cannot photosynthesize. It is completely dependent on its host for both water and carbon (aka food).

‡ Anthroposophy is described as a spiritual philosophy. During my New York City days, I once lived down the street from the Center for Anthroposophy. It always seemed closed when we walked by it. Mostly I just made fun of its ambiguous, hybridized name and joked that one day I would start my own spiritual philosophy, which would be called Knowlogy and would be devoted to the accumulation of random trivia…. I am perhaps on my way to doing that here.

Wednesday, November 24, 2010

Gladly Making An Exception

Like Groucho Marx, some people never forget a face.* I, on the other hand, seldom recall one, or at least not right away. Several introductions are usually required before I can properly recognize somebody. I can talk to a new person directly for a decent length of time, at a job interview for instance, and then fail to recognize them when I pass them on the street the next day. As you can imagine, it’s a bit embarrassing. The list of social faux pas I’ve made as a result of this problem is lengthy and often absurd. Most famously, I once tried to strike up a conversation with a guy I’d already been on a date with. Even after he politely told me that we’d already met, it took me a few minutes to place where I’d last seen him. In my defense, he’d changed his hair…well, or maybe just his coat. Something. Whatever detail had been disrupted instantly rendered him a stranger to me. In a more practical social universe, everyone would have dramatic features or distinguishing accessories that they always wore. Better still, they would each just wear the same outfit every day, because it is far easier to identify a shirt than a face.

My difficulty recognizing faces is relatively mild. I can eventually commit a new face to memory, it just takes a frustratingly long time. But there are those for whom the situation is far worse. In 1947 neurologist Joachim Bodamer introduced the term prosopagnosia to describe the inability to identify faces. The condition, which also goes under the more pronounceable nickname “face blindness”, can be so severe that those afflicted with it struggle to recognize life-long friends, family members or even their own faces in the mirror. Like most of the exotic maladies I opt to write about, prosopagnosia is not especially common, affecting perhaps 1 or 2% of the population. However, researchers have only recently started paying attention to milder forms of face blindness, so it’s difficult to say how many more people like me are walking around oafishly ignoring acquaintances at the grocery store and inadvertently snubbing potential employers.

AP vs. CP
Face blindness comes in two flavors. Acquired prosopagnosia (AP) was the first to be described and, as its name suggests, is brought on by some sort of calamity (often a stroke or a head injury) after any number of years of prior normal face recognition. Congenital prosopagnosia (CP) begins at birth and seems to run in families. There are some curious differences between the 2 variations. Notably, individuals with AP have been found to have abnormal FFA activity when viewing faces. CP individuals, however, generally exhibit no FFA abnormalities when subjected to similar tests. “This would be fascinating,” you’re thinking, “if I actually had any idea what the FFA was.”

What The FFA
In reality you’ve probably heard of this already, though perhaps not by its proper name. The letters stand for Fusiform Face Area, but the person who told you about it at a party may have called it by some approximation such as “that special face part of the brain”. The FFA is a region of the visual cortex thought to be specialized for the processing of face images. fMRI brain imaging has shown the FFA to be more active in subjects when viewing faces than when viewing various other objects and body parts (namely human hands). Since the FFA was first described in the 1990s, its function has been debated, with some studies showing that it is also active during the viewing of non-face objects by viewers who are “experts” in these objects (automobile enthusiasts looking at cars, bird-watchers looking at birds, etc.). But face stimuli continue to elicit strong FFA responses in normal subjects and thus brain imaging of this region is a must for any self-respecting experiment hoping to shed light on prosopagnosia.†

Faceness

Don’t bother looking it up. No reasonable dictionary or spell check would accept “faceness” as a word. It’s just the folks in the laboratory playing with neologisms again. What they’re trying to convey with this term is the nebulous quality that can make a non-face object appear face-like. Humans excel at finding face imagery in objects that are in no way related to faces of our species or any other. Clusters of shapes that evoke faces are almost suspiciously ubiquitous. People see faces in clouds and wood grain and rock formations. They see a man on the moon and the Virgin Mary on a grilled cheese sandwich.‡ The reported prevalence of car-shapes or chair-shapes in abstract patterns is much lower. Clearly there is something special about the structure of a face. Not only are we prone toward perceiving face-shapes, but the FFA is more active when viewing objects with higher levels of “faceness”, even though it is well understood that they are not actual human forms. Unless you suffer from severe face blindness, your own FFA probably lit up when it saw the 3 electrical outlets used to dress up this article (and would have done so even had I not added facial expressions to them). What is it about such shapes that so readily captures your imagination? Without going into a lot of evolutionary speculation as to the benefit of being able to spot a face, I can relay to you experimental findings of what kinds of shapes get the most FFA response. They are symmetrical shapes with more elements in the upper portion than the lower portion. Basically something approximating 2 eyes on top and 1 mouth on the bottom, like so…   ^_^

Trees vs. Forests
There is some speculation that congenital prosopagnosia (CP) may be associated with deficiencies in global processing. Certain patterns can be viewed on a global and local level. A common experimental model of global vs. local perception is nested letters, in which larger letters are built out of smaller ones. These can be built using matching or non-matching nested letters.
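These stimuli are easier to picture than to describe. As a rough illustration (the 5×5 letter grids below are my own invention, not the actual stimuli used in any of these studies), a nested-letter figure can be printed in a few lines of Python:

```python
# Print a simple nested-letter (Navon-style) figure: a big "global"
# letter built out of copies of a small "local" letter.
# The 5x5 grids are illustrative only, not taken from any study.

PATTERNS = {
    "F": [
        "XXXXX",
        "X....",
        "XXXX.",
        "X....",
        "X....",
    ],
    "T": [
        "XXXXX",
        "..X..",
        "..X..",
        "..X..",
        "..X..",
    ],
}

def navon(global_letter, local_letter):
    """Return a multi-line string: GLOBAL letter drawn from LOCAL letters."""
    rows = []
    for row in PATTERNS[global_letter]:
        rows.append("".join(local_letter if c == "X" else " " for c in row))
    return "\n".join(rows)

# Matched stimulus: a big F made of small Fs.
print(navon("F", "F"))
print()
# Non-matched stimulus: a big F made of small Ts,
# so the global and local letters compete for attention.
print(navon("F", "T"))
```

In the non-matched version, the global F and the local Ts give conflicting answers to "what letter is this?", which is exactly the interference the experiments exploit.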

Normal subjects tend to identify both global and local elements more quickly when they are matched (a large letter F made from small letter Fs, rather than from small letter Ts). With unmatched nested letters, global information can interfere with processing of local information and vice versa. In such cases, normal subjects are faster at spotting the global shapes (big letters) than the local shapes (smaller nested letters). The reverse trend has been observed in some individuals with CP. Not only did these subjects have more trouble than the control group (individuals with normal ability to recognize faces) spotting the large letters in the non-matched nested-letter stimuli, but they were actually faster than the control group at identifying the smaller local letters, even in the non-matched scenarios. It is as though people with CP barely notice the global pattern and instead make a beeline for the interior details. In a face, the local elements are individual features - eyes, nose, mouth - while the global element is the entire face. Recognizing a face relies more on noting the configuration of the parts within the whole (are the eyes set wide or narrow, is the chin long or short) than on scrutinizing the parts. Thus people with CP may be missing the global face by getting mired in the details of its features.

Identification vs. Expression
Another curious difference observed between people with acquired (AP) and congenital (CP) prosopagnosia is the ability or lack thereof to accurately perceive facial expressions. Individuals with AP are often as unable to recognize the expression on a face as they are its identity.  However, some CP individuals have been documented to perform as well as the control group on facial expression tasks while still being completely hopeless in facial identification tasks. What this means in day-to-day life is that while they might be able to spot that a face is angry, they still won’t be able to tell who the face belongs to. However, given that the mystery face may well be that of a friend or acquaintance, they should at least have a good guess as to why it is angry.

What About Me?
If the symptoms I’ve been describing sound distressingly familiar, you may be afflicted with some degree of prosopagnosia. Unfortunately it’s a bit harder to test for face blindness than for colorblindness. Probably you should be talking to a neurologist, but if you’re like me (short on time and money) you might prefer to take your chances with the internet instead. I found an online test that was quick and painless enough. Though loudly proclaiming that it can’t actually diagnose you, it does offer some vague quantification in the form of a score and numbers indicative of normal vs. impaired facial recognition performance. Stunningly I got a 66, which puts me below average (71) but still well above impaired (47). Perhaps there’s hope for me after all.

What’s a person to do if they score closer to a 47? Not a whole lot, I’m afraid. Intentionally or not, individuals with face blindness often rely on non-face cues to help sort out who’s who. Hairdos, glasses, and style of clothing often do the trick (at least until their wearers suddenly decide they need a new look). Mannerisms, voices and context help too (your professor is the one in the front of the class, your neighbor is the one in the adjacent lawn, etc). From my own experience I would offer that, especially if you’re female, smiling politely at anyone who smiles at you first is not always the best strategy. Though if someone addresses you by name, there’s a good chance you’ve met them, so just roll with it.

As for the rest of you (the 71 and over scorers), my apologies if I didn’t say hello when you saw me at the movie theater last weekend. It’s nothing personal. You might consider getting yourself some sort of accessory that makes you more readily recognizable, like an eye patch or a cane or an electric blue feather boa. It would really help me out.

* The oft-repeated quote is, “I never forget a face, but in your case I’ll be glad to make an exception.” I have no idea which film (if any) it came from. I’m not a Marx brothers fanatic, I’ve just heard the quote here and there.

† Another brain region, the Occipital Face Area (OFA), also figures prominently into such studies. However, I already have far too many details to cram into one little article.

‡ The Virgin Mary sammich came into existence in 1994 but is predated by nearly 2 decades by Maria Rubio’s pioneering Jesus tortilla. For a while, the latter could be viewed at a shrine in New Mexico. However, since both the tortilla and the grilled cheese look like fairly nondescript faces rather than specific religious figures, you can probably just make your own lunchtime miracle using whatever is currently in your fridge.

Who told you this?

Kanwisher, N. et al. 1997. “The Fusiform Face Area: A Module in Human Extrastriate Cortex Specialized for Face Perception.” The Journal of Neuroscience 17: 4302-4311.

Tarr, M.J. and Gauthier, I. 2000. “FFA: a flexible fusiform area for subordinate-level visual processing automatized by expertise.” Nature Neuroscience 3: 764-769.

Caldara, R. and Seghier, M. 2009. “The Fusiform Face Area Responds Automatically to Statistical Regularities Optimal for Face Categorization.” Human Brain Mapping 30: 1615-1625.

Bentin, S. et al. 2007. “Too Many Trees to See the Forest: Performance, Event-related Potential, and Functional Magnetic Resonance Imaging Manifestations of Integrative Congenital Prosopagnosia.” Journal of Cognitive Neuroscience 19: 132-146.

Schiltz, C. et al. 2006. “Impaired Face Discrimination in Acquired Prosopagnosia Is Associated with Abnormal Response to Individual Faces in the Right Middle Fusiform Gyrus.” Cerebral Cortex 16: 574-586.

Humphreys, K. et al. 2007. “A detailed investigation of facial expression processing in congenital prosopagnosia as compared to acquired prosopagnosia.” Experimental Brain Research 176: 356-373.

Love, B.C. et al. 1999. “A Structural Account of Global and Local Processing.” Cognitive Psychology 38: 291-316.

Friday, November 12, 2010

Pretty and Witty and Bright

What can beauty buy for you these days? More dates and party invitations? Absolutely. Fewer speeding tickets? Sure. Better chances of being offered the job even after a so-so interview? You bet. How about a world that understands your many unique qualities, a world that pays attention to the nuances of your individual personality, a world that “gets” you? Quite possibly yes. Or at least your odds are better than that of your homelier friends. So says a recent study in which first impressions formed about attractive people were found to be more accurate than those formed about less attractive people.

In the study, groups of college students of varying degrees of physical beauty were allowed to interact for a scant 3 minutes, after which they attempted to assess each other’s personalities.* Additionally they rated the attractiveness of other members in the group and lastly answered questions about their own personalities. In order to minimize confusion and maximize judgmental shallowness, I have opted to divide the participants into two categories: pretty and ugly.† Researchers compared how subjects scored on positive traits (relative to the average) with their attractiveness score. They also examined the consistency of perceivers’ impressions of specific personality traits with the self-reported personality questionnaires of both pretty and ugly subjects. This latter phenomenon is called “distinctive accuracy”. Greater distinctive accuracy means that first impressions about a subject more closely match that subject’s own view of their personality. For instance, a person who thought himself to be very sociable but not as strongly intellectual would be seen this way by others, even if he was viewed as being more sociable and more intellectual than the “average” person.
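For the numerically inclined, the gist of "distinctive accuracy" can be sketched in a few lines of Python. To be clear: every rating below is invented, and the real study used far more sophisticated statistical modeling. The toy idea is simply to subtract the average-person profile from both the impression and the self-report, then check how well the leftovers agree.

```python
# Toy sketch of "distinctive accuracy": how well a perceiver's first
# impression of a target matches the target's own self-report once the
# generic "average person" profile is removed. All numbers are invented.

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Ratings on five traits (think Big Five), on a 1-7 scale.
average_profile = [4.0, 4.0, 4.5, 4.2, 3.8]   # how a "typical person" is rated
self_report     = [6.0, 3.0, 5.5, 4.0, 3.0]   # the target's own view
impression      = [6.5, 3.5, 5.0, 4.5, 3.5]   # a perceiver's first impression

# Distinctive profiles: what makes this person different from average.
distinct_self = [s - a for s, a in zip(self_report, average_profile)]
distinct_impr = [i - a for i, a in zip(impression, average_profile)]

# High correlation = the perceiver picked up the target's actual quirks.
print(round(pearson(distinct_impr, distinct_self), 2))
```

In this made-up example the perceiver tracks the target's distinctive ups and downs quite closely, so the correlation comes out high; a perceiver who just handed out a uniform halo of praise would score much lower.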

Fig 1. Fake data that did not come from the study discussed here, or from any other study. Really. But it's easier on the eyes than the real data, which means you'll pay closer attention to it and rate it more favorably than if I'd shown you the actual data.

It has been observed numerous times that pretty people tend to be perceived as possessing more overall positive traits than ugly folk. Pretty individuals are seen as smarter, friendlier and generally just better than their ugly counterparts. This is termed the “attractiveness halo effect”. It was no surprise that the results of this new study followed the trend. Members of the group who were perceived to be prettier were also seen as being graced with larger servings of intelligence and other desirable traits. To remind us of the platitude that beauty is in the eye of the beholder, attractiveness ratings were not entirely unanimous. Some subjects were ranked as being ugly by the majority of participants, but still had one or two fans who found them attractive. Interestingly, in these cases, the ugly subjects also benefited from the halo effect. Those who rated an ugly subject’s appearance favorably lavished equal praise on their inner qualities.

Results regarding distinctive accuracy featured a few more twists.‡ With all this talk about positive biases and preferential treatment, it is easy to assume that more attractive people would be viewed with less accuracy. Wouldn’t they just be seen as generically flawless? So lacking in imperfections that all their good qualities were uniformly present? In actuality, the results showed the opposite. Personality assessments for pretty subjects were more accurate (i.e., better matched to the self-reported questionnaires) than those for ugly subjects. Why would this happen? The authors note two possible causative factors. One is that the pretty subjects may simply attract more attention from their perceivers. People want to connect with good-looking individuals, so they make more of an effort and in doing so manage to observe more detail. The other possible factor revolves around the person being perceived, the “target”, rather than the perceiver. In order for the perceiver to glean information about the target, the target must first put out some sort of cues. The authors suggest that pretty targets, using the superior social skills of someone who has spent a lifetime being beautiful, are more likely to make information about themselves available to the perceiver. For support of this idea, we go back to the subjects who were rated as ugly by most participants but as pretty by a few. As mentioned before, these subjects were viewed by their admirers as having more positive traits. However, they were not necessarily viewed with greater distinctive accuracy by these same admirers.§ While they benefited from the halo effect, they lacked the confidence to give their perceivers enough material to form accurate impressions about their personalities. They were too introverted to read.

This may not be the happiest news for mousey wallflower types. However, it’s mildly encouraging to hear that our ability as humans to accurately judge personality doesn’t completely shut down when we encounter physical beauty. Shallow and biased though we are, we can at least tell which of an attractive individual’s copious and remarkable positive traits are their most pronounced. We can differentiate that they are more generous than they are eloquent, for instance. Though, of course, they still possess greater generosity and eloquence (and intelligence and sociability and impressive math skills…) than the ugly person sitting next to them. We’re also not too bad at counting up who has more favorable personality traits when comparing two similarly attractive individuals. The trouble really starts when we are faced with a choice between the ugly but brilliant job candidate and the beautiful but incompetent one. But, as Oscar Wilde wrote, “it is better to be beautiful than to be good,” so even that is a simple enough problem to solve.

* This was accomplished via a 21-item questionnaire based on “Big Five” personality traits (openness, conscientiousness, extraversion, agreeableness, and neuroticism…which together make the acronym OCEAN, how cute) along with 3 additional questions about general perceived positive qualities.

† In reality they were rated on a 1-7 scale of attractiveness, and consensual attractiveness (the average of the ratings from the entire group for each subject) was considered along with subjects’ attractiveness ratings by individual perceivers. But we’ll get to that in a minute.

‡ Please recall one more time that distinctive accuracy is the weighing of specific personality traits relative to one another, rather than just an overall thumbs up given to all possible desirable traits.

§ We’re talking about the really ugly people now, the statistically ugly, not just the average Joes and plain Janes. These individuals were 1 standard deviation or more below the average level of attractiveness.

Monday, November 1, 2010

Species of the Month: NOVEMBER

After living in Austin for over a year and making several trips into the more wildlife-infested surrounding Hill Country, I had my first scorpion sighting right in town last spring. The animal wandered into a classroom where I was learning German, causing much surprise and standing up from chairs amongst the students. The scorpion was escorted out of the building before any stinging ensued, but since then I’ve heard 2 tales of painful scorpion stings from friends and decided that it was worth looking into the matter. Texas is home to about 20 species of scorpions, but Centruroides vittatus is the most commonly seen and the only one found throughout the state. It is one of a small handful of scorpion species recorded in the Austin area and its appearance is satisfactorily similar to that of the creature that briefly attended my German class back in April.

Physical Attributes

Like spiders and ticks, scorpions are arachnids. As such they have eight legs. Additionally they are equipped with a set of lobster-esque pinchers in the front and a long tail, complete with venomous stinger, in their rear half. The striped bark scorpion is yellowish to tan in color and wears 2 characteristic stripes down its back.* Adults average about 2.5 inches in length, with males somewhat predictably having longer tails.

Dining Habits

Like other scorpions, Centruroides vittatus eats primarily insects. They eat a lot of things you probably don’t much care for, including centipedes, flies, and spiders. They subdue their dinner by grasping it with their pinchers and then killing it with their venom-packed stingers. The actual eating part is a bit complicated. Scorpions have tiny mouths, so they do most of their digesting externally by coughing up digestive fluids onto their prey and then sucking up the liquefied remains. If it helps, you can think of it as akin to drinking a nutritious smoothie.

Hanging out

As I mentioned, Centruroides vittatus is the most commonly observed species of scorpion in Texas. It is also found mostly in Texas. While striped bark scorpions live in various other US and Mexican states, Texas is headquarters for these critters. Not being mammals, they can’t regulate their body temperature internally and must resort to behavioral thermoregulation. They tend to be more active at night and spend their days seeking shelter in cool, damp places (Texas summers are too hot even for scorpions). This can be any number of locations, from the undersides of logs and rocks to your air-conditioned apartment.

Courtship and Mating

Scorpions have a fancy mating ritual where they pair off, grab each other by the pinchers and do a little dance. At the end of their date the male drops a sperm sac on the ground, which the female scoops up into her abdomen.

Scorpions are pretty special arachnids in that they are ovoviviparous. That means they don’t lay eggs. The eggs remain in the body of the female until birth. Gestation for Centruroides vittatus is a lengthy 8 months, after which about 30 baby scorpions emerge. The mother carries the new brood on her back for a few days until they are ready to care for themselves, which you must admit is pretty cute by arachnid standards.† Striped bark scorpions live for about 4 years and will generally reproduce several times in their life.

At the Disco

Centruroides vittatus, as well as other scorpions, glows under ultraviolet light (the “black light” seen in certain nightclubs). Needless to say, this is pretty cool. However it is not always advantageous to the animals. I noticed that several pest control websites sell a product called a “scorpion UV flashlight”, presumably used to find and stomp the little guys after nightfall.

Can They Hurt You?

They sure can. While no reasonable scorpion would mistake a human for their desired meal, they will sting you if you inadvertently surprise them during their normal activities.

Can They Kill You?

Of the well over 1000 known species of scorpion, only about 25 have venom toxic enough to kill a human.‡ Centruroides vittatus is not one of these species. As with bee stings, some people may have an allergic reaction to the venom. In these cases, death due to anaphylactic shock can occur when treatment is not sought. If difficulty breathing is one of the post-sting symptoms, paramedics should definitely be called to the scene. Such incidents are rare though. In most cases the sting of the striped bark scorpion just yields about 20 minutes of sharp pain followed by another day or so of mild discomfort. An ice pack helps.

Don’t Mow the Lawn Backward, and Other Sage Advice

I spoke to Venecia, one of the recent scorpion sting victims, about her experience. She had been mowing the lawn when the attack occurred and was informed only after the fact that Hill Country wisdom recommends always pushing the mower forward in tall grass, so that any stinging creatures encountered will be preventatively puréed. Venecia unfortunately made a backward sweep and picked up a scorpion in her shoe. Shortly after she felt what she describes as “an intense stinging pain” accompanied by “a burning sensation that doesn't subside”. Venecia described her assailant (which she shook from her shoe upon being stung) as dark brown and about 3/4 the size of a pinky finger, so it was probably not our friend Centruroides vittatus that got her, but another similarly non-lethal Texas scorpion.

In addition to modifying your lawn mowing style, the best way to avoid run-ins with scorpions around your house is to not create a lot of comfortable sheltering spots for them. Leaving logs, stones, building materials and trash around your yard can attract scorpions (not to mention cockroaches and raccoons). It’s also not a good idea to bring firewood into your home unless it’s going directly onto the fire. And you might consider a bit of weather stripping while you’re at it. Besides keeping the scorpions out, it will save you some electricity.

* Technically, this would be “the upper surface of the abdomen”, but if I start using proper anatomical terminology to describe this thing, we’ll likely be here all day.

† Drinking blood (ticks), cannibalizing mates (spiders), etc.

‡ These belong to the family Buthidae, which is coincidentally the same family to which Centruroides vittatus belongs. Striped bark scorpions, luckily, do not share their relatives’ venom potency.

Thursday, October 28, 2010

Things That Go Bump in the Night, Part 3

At last we have come to the final and arguably the most frightening stop on our terrifying tour of sleep disorders. Part 1 (fatal familial insomnia) threatened to keep you awake. Part 2 (sleep paralysis and night terrors) was almost guaranteed to cause nightmares. I shall leave it up to you to deduce the effect that Part 3 will have on your circadian rhythm and overall metabolism. Brace yourself for the horror of…

It was a dark and stormy night. The boy awoke and instantly sensed that something was wrong. Very wrong. He tried to calm himself, searching his mind for a rational explanation. Perhaps it was rain seeping in from an open window. Perhaps he’d stirred in his slumber and knocked over a glass of water on the nightstand. The window, however, was firmly shut and there was no evidence of a glass anywhere. Slowly he realized he knew what wicked thing had caused this. It was the same evil that had stalked him for as long as his memory could record. It had tracked him down even after his family had moved to a new house in a new town. It was nocturnal enuresis. He had wet the bed.

The details of the protagonist in my ghost story were chosen not just to demonstrate that the tale was not an autobiographical one, but also to set up the demographics of the affliction. As with the more glamorous parasomnias discussed in Part 2 of this series, nocturnal enuresis, the polite term for bedwetting, is more common in childhood than in adulthood. It is also more common in males than in females. DSM-IV * defines nocturnal enuresis as urinating in bed past the age of 5 years, at least twice a week and without any known provocation.† The condition is generally further divided into 2 subcategories – primary, in which the patient has never achieved nighttime bladder control for any length of time, and secondary, in which they managed 6 or more “dry” months before the problem recurred.

Bedwetting runs in families and several physiological factors have been suggested as causes. One is insufficient nighttime production of arginine vasopressin (AVP). AVP is a hormone that helps regulate water within the body. Its release causes less water to go into urine, which of course results in a lower volume of urine. Ideally, thanks to circadian rhythms, the body produces more AVP and thus less pee at night.‡ Lower nighttime AVP production and subsequent higher urine volume has been documented in some children suffering from enuresis.

Another physiological difference between bedwetters and non-bedwetters may be how easily they wake during sleep. For a lighter sleeper, a full bladder would provide ample physical stimulus to rouse them from sleep. Heavier sleepers are not always so fortunate. We should take a moment to note the difference between “heavy sleep” and “deep sleep”. Deep sleep refers to slow-wave sleep, stages 3/4, which you may recall from last week’s discussion of night terrors. Heavy sleep simply indicates that it would take a great amount of noise, light or discomfort to wake a sleeper. The sleep patterns of enuretic patients are no different from those of non-enuretic ones. Those who suffer from nocturnal enuresis do not spend more time in deep sleep; they are just potentially harder to wake from any sleep stage.

Environmental and social factors also play a role in bedwetting. Those exhibiting nocturnal enuresis often hail from larger, less stable, and more financially disadvantaged families. Divorce, especially during the critical toilet training years, can often precede the onset of bedwetting. However there is little evidence that any of these factors actually cause nocturnal enuresis rather than merely aggravating it in those who are already physiologically predisposed toward the condition. Plenty of people experience difficult childhoods and never wet the bed.

At this point I could tell you about behavioral and pharmaceutical treatments for bedwetting, but it being so close to Halloween I prefer to discuss serial killers and something called the “McDonald triad”. The triad was proposed by its namesake J.M. McDonald in 1963 in an attempt to predict future violent behavior based on traits exhibited in childhood. He noted that 3 such traits were often found in psychiatric patients with violent tendencies. The triad consists of firesetting, cruelty to animals and bedwetting.§ Now while burning down houses and kicking puppies is the sort of thing one expects to find in the childhood of a violent criminal, the third item is a bit harder to fathom. Why bedwetting? When I first heard of the triad I (insensitively) joked that wetting the bed must be the ultimate sign of disregard for society and its rules. More realistically it may just be another symptom of the stress of an abusive upbringing (also quite common to violent crime). And humiliating experiences themselves are another frequent occurrence in the life histories of serial killers. Bedwetting certainly comes with its share of social stigma. Not to say that all children who wet the bed will grow up to be multiple murderers. However if your child has taken to peeing in their bed after already exhibiting the other 2 behaviors on the checklist, now may be the time to start sleeping with your bedroom door locked.

That’s all I have for you on this sleep disorder. I need to finish my Halloween costume now, and I owe you a new species of the month for November. Happy Halloween!

* DSM is short for the Diagnostic and Statistical Manual of Mental Disorders. It is a 5-axis system for attempting to describe the spectrum of human psychiatric ills and is the accepted diagnostic tool of most U.S. mental health workers. The thick and expensive book is published by the American Psychiatric Association. The most recent edition, DSM-IV-TR, came out in 2000 and DSM-V is set to hit the stands in 2013. If anyone wishes to pre-order the book for me, I will gladly spend a post or two comparing it to the previous edition, which by that time I should be able to get on eBay for a few bucks plus shipping.

† Illness, medication side effects, having a sibling put your hand into warm water, etc.

‡ The benefit of channeling less of the body’s water into urine production at night is not just the prevention of enuresis, it also protects the sleeper from dehydration by conserving water at a time when none is likely to be ingested.

§ Later research did not always support the McDonald triad, but it was catchy enough to remain in the folklore of psychology, where I originally heard of it. In the articles I read, bedwetting did, however, emerge as the trait most consistently correlated with crime of the 3.

Who told you this?

Laberge, L. et al. 2000. “Development of Parasomnias From Childhood to Early Adolescence.” Pediatrics 106: 67-74.

Butler, R.J. 2004. “Childhood nocturnal enuresis: Developing a conceptual framework.” Clinical Psychology Review 24: 909-931.

Nappo, S. et al. 2002. “Nocturnal enuresis in the adolescent: a neglected problem.” British Journal of Urology International 90: 912-917.

McKenzie, C. 1995. “A Study of Serial Murder.” International Journal of Offender Therapy and Comparative Criminology 39: 1-10.

Friday, October 22, 2010

Things That Go Bump in the Night, Part 2

This week’s installment of scary sleep disorders will probably give you nightmares. Consider yourself warned…

It has never happened to me. At least not that I know of. But I imagine it would be something like that feeling you get when the alarm clock rings during a too-deep afternoon nap and suddenly you can’t recall where you are, how you got there, and whether it’s night or day. Except it would be much worse, multiplied by a number with at least 6 digits, and instead of quickly orienting to your surroundings, the panic would persist, even escalate until somehow you relapsed back into sleep, neither understanding nor remembering the cause of your distress. And there would be a name for the experience. It would be called a night terror.

In part one of this series I mentioned that there are three states of consciousness – waking, REM sleep* and non-REM sleep – and that mixing them yielded strange phenomena. Night terrors are among these. They are a mixture of waking and slow-wave sleep (SWS), one of the subcategories of non-REM sleep.† They typically occur during an abrupt transition from SWS to wakefulness. SWS generally occurs during the first few hours of sleep, and thus so do night terrors; after that, more time is given over to dreaming.‡

As with many parasomnias §, night terrors are more common in children than in adults; however, the problem can persist into adulthood. Figures on the prevalence of night terrors are too varied for me to bother disseminating any of them here, but I will mention that adults prone to night terrors have been reported to score higher than average on the personality traits of anxiety and hysteria. It is a dramatic event. The night terror often begins with a loud “blood-curdling” scream. The sleeper is panicked and confused, their heart racing. They may thrash about, or even jump out of bed and run around**. Often, they return to sleep without fully awakening. In the morning they will have no memory of the sleep disturbance that freaked out their roommates or bedmates.

A characteristic symptom of a night terror is complete inconsolability. No amount of, “There, there, darling, you just had a nightmare…” will do any good. And, in fact, the sleeper did not have a nightmare.†† As they occur in non-REM sleep, night terrors are strikingly content-free. There is no cause of the panic, just panic.

Sleep paralysis is a rather different amalgamation of conscious states from that of a night terror. I had initially conflated the two disorders, having heard a description of sleep paralysis somewhere and the term “night terror” somewhere else. The name seemed a good match for the phenomenon, which sounded unequivocally terror-inducing. While a night terror is a mixture of wakefulness and slow-wave sleep, sleep paralysis is the troublesome commingling of wakefulness and REM sleep. As the dreaming sleeper begins to emerge from REM sleep, they open their eyes and take in their surroundings. Unfortunately the rest of their muscles are still paralyzed from being in the REM state. They feel pinned down and, naturally, frightened. There may be a sense of a menacing presence in the room. Sometimes the dreams of REM sleep also linger in the form of auditory and visual hallucinations.

Unlike the sufferer of night terrors, a person experiencing sleep paralysis will still remember their chilling ordeal after the sun rises. Prior to the scientific study of sleep (and the advent of polysomnography), all manner of ghost stories emerged to explain the incident. Something sinister and supernatural was holding the victim down, suffocating them. Incubi, succubi, vampires, hags and various ghoulies and ghosties and things that go bump in the night were clearly responsible. Witches were casting spells; they called the sleeper’s name (auditory hallucination) as they used their magic to bind their helpless prey to his or her bed. In more modern cultures, it has been suggested that the belief in alien abductions is the latest form of the waking brain trying in vain to make sense of what the hell is happening to the sleeping body.
There’s something of a positive feedback loop happening in the interpretation of the sleep paralysis experience. The sleeper makes their first grasp at an explanation while the event is still occurring. These ideas can give more vivid form to the hallucination, increasing the sense of fear.

Not everyone perceives sleep paralysis as a sensation of being held down or choked. Some interpret the immobility of their muscles as an out-of-body experience. They are floating. They are flying. This is not necessarily reported as an unpleasant experience, but one can easily see how it might also fit the stereotypical description of an alleged alien abduction. There are even those who don’t impose any mythology onto their encounters with sleep paralysis. They just chalk it up to random weirdness and hope it doesn’t happen again.

Prevalence of sleep paralysis follows a similar pattern to that of night terrors – more common in childhood, difficult to assess the exact proportion of the adult population afflicted with the parasomnia. I encourage you to do your own demographic research during your Halloween escapades. A good-sized party should provide at least a few colorful anecdotes amongst the guests. Over the past week, I read so much about night terrors and sleep paralysis that I began to wonder if one could inadvertently summon the conditions by thinking about them excessively. You should consider this too, while you drift off to sleep tonight. Sweet dreams.

* Not that I doubt your faithful weekly readership, but just in case you missed part one, I’ll restate that REM sleep is where dreaming generally occurs. During this stage the voluntary muscles of the body are also paralyzed.

† The stages of sleep are 1-4 and REM, and are categorized by brainwave activity, eye movement, etc., as captured by polysomnography. Slow-wave sleep comprises stages 3 and 4. The order of the stages is not simply sequential, as can be seen in the typical sleep hypnogram shown here.
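The non-sequential ordering mentioned in the footnote can be illustrated with a toy hypnogram. This is a deliberately simplified sketch, not real polysomnography data – the point is only that sleepers cycle back up through lighter stages before REM, and that slow-wave sleep concentrates in the early cycles:

```python
# Toy hypnogram: stages 1-4 plus REM, showing why a night of sleep is
# not a simple 1-2-3-4-REM staircase. Sequences are illustrative only.
early_cycle = [1, 2, 3, 4, 3, 2, "REM"]   # SWS-heavy, first hours of sleep
late_cycle = [2, 3, 2, "REM", "REM"]      # longer REM, little or no stage 4

night = early_cycle * 2 + late_cycle * 2

# Slow-wave sleep (stage 4 here) appears mostly in the first half of
# the night, which is the same window in which night terrors cluster.
first_half = night[:len(night) // 2]
second_half = night[len(night) // 2:]
print(first_half.count(4) > second_half.count(4))  # True
```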

‡ Intriguingly, people prone to night terrors sometimes exhibit periods of SWS later into the night as well. It has been suggested that the disruptions to earlier periods of SWS cause it to reappear throughout the sleep cycle.

§ Parasomnias are unwanted behavioral or experiential phenomena that occur during sleep, as opposed to problems with the physical processes behind sleep (such as insomnia or narcolepsy).

** Night terrors are a close cousin of sleepwalking, another parasomnia brought on by the combination of waking and non-REM sleep.

†† Just to further complicate the issue, I’d like to point out that the word “mare” and variations of it exist in several languages with meanings usually corresponding to some form of supernatural creature. It is likely that the term “nightmare” was originally used to describe sleep paralysis rather than its current usage to denote bad dreams.

Who told you this?

Mahowald, M.W. and Schenck, C.H. 2005. “Insight From Studying Human Sleep Disorders.” Nature 437: 1279-1285.

Laberge, L. et al. 2000. “Development of Parasomnias From Childhood to Early Adolescence.” Pediatrics 106: 67-74.

Szelenberger, W. et al. 2005. “Sleepwalking and night terrors: Psychopathological and psychophysiological correlates.” International Review of Psychiatry 17: 263-270.

Crisp, A.H. 1996. “The sleepwalking/night terrors syndrome in adults.” Postgrad Medical Journal 72: 599-604.

McNally, R.J. and Clancy, S.A. 2005. “Sleep Paralysis, Sexual Abuse, and Space Alien Abduction.” Transcultural Psychiatry 42: 113-122.

Cheyne, J.A. et al. 1999. “Hypnagogic and Hypnopompic Hallucinations during Sleep Paralysis: Neurological and Cultural Construction of the Night-Mare.” Consciousness and Cognition 8: 319-337.

Saturday, October 16, 2010

Things That Go Bump in the Night, Part 1

For the latter half of October I have opted to undertake (pun intended) a 3-part series on scary sleep disorders. If all goes as scheduled, I shall deliver the final installment of this terrifying trilogy just in time for Halloweekend. And now, we begin our journey into darkness with a topic that is sure to keep you up all night…

Humans divide their time between 3 main conscious states: wakefulness, non-rapid eye movement sleep (NREM) and rapid eye movement sleep (REM).* Generally these states are experienced sequentially, and all is well. However, the absence or mixing of any of the states can be disorienting, debilitating and even deadly.

Insomnia is the most frequently reported sleep disorder in the general population. It is defined as the inability to obtain enough sleep to feel rested, which can mean insufficient quantity or quality of sleep, or both. Its causes are diverse, ranging from physical problems such as obstructive sleep apnea † and restless leg syndrome‡ to social and psychological ones like night shift work and anxiety. Sometimes there is no traceable cause. As a lifelong poor sleeper, I find insomnia a familiar experience, one I would describe as frustrating to maddening, depending on severity. I was, however, surprised to learn that the condition could also be completely incapacitating and ultimately lethal. Such is the fate of those afflicted with fatal familial insomnia (FFI), a genetic disease as disturbing as it is rare.

FFI is, in fact, extremely rare. So much so that the mere diagnosis of a new case is often deemed worthy of its own journal article. The condition was first described in a 1986 New England Journal of Medicine article and is believed to affect only about 40 families in the world today. It is a prion disease with autosomal dominant inheritance.§ This means that only one copy of the gene with the disease-causing mutation, inherited from either parent, is required. As with other lethal dominantly-inherited diseases, such as Huntington’s disease, FFI has been able to persist because its symptoms generally do not manifest until after childbearing age. A parent who carries the problematic gene has a 50% chance of passing it on to any one of his or her children, but may appear to be in perfect health until things go dreadfully awry somewhere in middle to late adulthood.
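That 50% figure falls straight out of the genetics: the affected parent carries one mutated and one normal copy of the gene, and each child receives one of the two at random. A quick simulation – my own sketch, not anything from the cited papers – bears this out:

```python
import random

def child_inherits_mutation(trials=100_000, seed=42):
    """Simulate autosomal dominant inheritance from one carrier parent.

    The carrier parent has alleles (mutant, normal); the other parent has
    (normal, normal). Each child receives one allele from each parent,
    chosen at random. One mutant copy is enough to cause the disease.
    """
    rng = random.Random(seed)
    carrier = ["mutant", "normal"]
    other = ["normal", "normal"]
    affected = sum(
        1 for _ in range(trials)
        if "mutant" in (rng.choice(carrier), rng.choice(other))
    )
    return affected / trials

print(round(child_inherits_mutation(), 2))  # close to 0.5
```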

Prion diseases cause degeneration within the central nervous system. FFI does its damage to the thalamus, a region of the brain that is involved, among other things, with the regulation of sleep and wakefulness. The clinical presentation of FFI begins with a progressive inability to sleep. Patients may also exhibit weight loss, difficulties with focus and memory, loss of muscle coordination and muscle twitching. Gradually they become completely incapable of achieving slow-wave or REM sleep. During this process another curious set of symptoms appears – hallucinations and the enactment of dreams during wakefulness. It is as though the dreams normally experienced during REM sleep begin to intrude into the waking state. Patients live in a gloomy, clouded limbo, neither fully asleep nor fully awake. Eventually coma and death arrive to turn off the lights. The course of this entire nightmare varies from about 8 months to 2 years. It is a long time to go without a good night’s sleep.

The name fatal familial insomnia may be a bit misleading, as it implies that its victims actually die from insomnia. Since the inability to sleep present in FFI is caused by the destruction of the thalamus, insomnia could just as easily be viewed as another byproduct of the disease. A symptom rather than a cause. It’s one of those tricky chicken-or-the-egg questions. Scientists are pretty good at designing experiments in which animals drop dead after being forcibly kept awake for weeks, but they have a harder time determining what exactly killed them. Examinations of the animals after they die tend to turn up healthy organs and no clear signs as to what physically went wrong. Sleep is a nebulous field. There is little debate that we need it, but no solid proof as to why we need it. So cherish the sleep that you can fit into your busy schedule, lest it be brutally taken from you by a rare genetic illness.**

* REM sleep is the stage in which dreaming occurs. During REM sleep the brain is active but the body’s voluntary muscles are paralyzed, theoretically to protect the sleeper from acting out dreams.

† Obstructive sleep apnea is the blocking of airways during sleep, which leads to multiple brief awakenings during the night to obtain more air (up to 100 per hour of sleep). Sufferers may not even realize that they are losing sleep at night as they will not recall these episodes and experience their symptoms mostly in the form of daytime sleepiness. The condition also causes snoring.

‡ Restless leg syndrome was recently mocked (by me) in another article. It is a real malady, it is just not as prevalent as the pharmaceutical industry would have you believe. It is characterized by uncomfortable creepy-crawling feelings in the legs and a subsequent urge to move them. These symptoms worsen during inactive periods, for example – lying down and attempting to go to sleep.

§ A prion is a mis-folded protein that replicates by inducing normally folded proteins to adopt its mis-folded shape, not unlike a virus hijacking healthy cells to copy itself. You have likely read about prions before, as they are also the culprit behind bovine spongiform encephalopathy, aka mad cow disease.

** There is no actual causal relationship between these two things. I’m just being dramatic. It’s October.

Who told you this?

Mahowald, M.W. and Schenck, C.H. 2005. “Insight From Studying Human Sleep Disorders.” Nature 437: 1279-1285.

Raggi, A. et al. 2008. “The behavioral features of fatal familial insomnia: A new Italian case with pathological verification.” Sleep Medicine 10: 581-585.

Krasnianski, A. et al. 2008. “Fatal Familial Insomnia: Clinical Features and Early Identification.” Annals of Neurology 63: 658-661.

Gallassi, R. et al. 1996. “Fatal familial insomnia: Behavioral and cognitive features.” Neurology 46: 935-939.

Medori, R. et al. 1992. “Fatal familial insomnia: A prion disease with a mutation at codon 178 of the prion protein gene.” The New England Journal of Medicine 326: 444-449.

Special thanks to Elizabeth, who first alerted me to the existence of this ailment during an evening of dancing and karaoke.

Friday, October 8, 2010

Change of Heart

In my daytime alter ego of “faceless bureaucratic cog”, I attend many professional training classes. Most of these are devoted to navigating some new online database that has replaced the previous online database etc. But recently my employers really came through and sent me, along with the rest of the staff, to become CPR certified. Now, before you declare, “I’m already CPR certified… this is of no value to me!” let me tease you with the news that CPR is different now than it used to be, perhaps even different than it was when you learned it (depending on how expired your CPR certification card is). In fact, more research and more change in guidelines for how CPR should be performed by laypeople* have occurred in the past decade than during the rest of the 50 years since the method was introduced. You are living in an exciting time! But first, the stuff you already know, or will probably claim you knew even though you never actually bothered to think about it.

CPR stands for Cardiopulmonary Resuscitation. That means the heart and lungs are the organs of focus. The target audience for CPR is anyone experiencing cardiac arrest outside of a hospital setting. In cardiac arrest the heart stops circulating blood (and thus oxygen), causing the victim’s breathing to be impaired. This is the problem that performing CPR is aiming to fix. CPR does not address realigning dislocated shoulders, sucking venom out of snake bites, escaping a burning building and countless other first-aid and wilderness-survival emergencies. You’ll need to go elsewhere to acquire those skills. However, CPR is potentially lifesaving to those for whom regular heartbeat and breathing have suddenly ceased.

CPR training got an overhaul in 2005. In the late 1990s the American Heart Association (AHA) commissioned a reevaluation of existing guidelines for providing CPR, and the new guidelines were based on these findings. The biggest changes were made to how laypeople are taught to do CPR. In the past, we would have been given instructions similar to those designed for healthcare providers. However, things have been significantly dumbed down for our frail civilian brains. The reasons for this can be distilled to the observation that laypeople often forgot the intricacies of their training soon after obtaining it and then, when faced with an emergency, worried about screwing up. They lost valuable time fretting over making things worse when almost anything would have been better than nothing. With this in mind, the 2005 AHA guidelines dropped distinctions in the chest compression-to-ventilation ratio for different ages (sizes) of people. Every man, woman, child and infant now receives cycles of 30 chest compressions and 2 breaths. You’re just told not to press as hard on the smaller humans (2 hands for an adult, 1 hand for a child, 2 fingers for a baby).
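The beauty of the simplified layperson protocol is that it fits in a tiny lookup. Here is a sketch of the logic described above – note that the exact age cutoffs below are my own illustrative placeholders, not official AHA definitions:

```python
# Sketch of the simplified 2005 layperson CPR guidance: the 30:2
# compression-to-breath ratio applies to everyone, and only the
# compression technique changes with the victim's size. The age
# cutoffs (1 year, 8 years) are illustrative assumptions.

COMPRESSIONS_PER_CYCLE = 30
BREATHS_PER_CYCLE = 2

def compression_technique(age_years):
    """Return the hand position used for chest compressions."""
    if age_years < 1:
        return "two fingers"   # infant
    elif age_years < 8:
        return "one hand"      # child
    else:
        return "two hands"     # adult

def cpr_cycle(age_years):
    """One cycle of layperson CPR: same ratio for all, gentler technique
    for smaller humans."""
    return {
        "compressions": COMPRESSIONS_PER_CYCLE,
        "breaths": BREATHS_PER_CYCLE,
        "technique": compression_technique(age_years),
    }

print(cpr_cycle(35))
# {'compressions': 30, 'breaths': 2, 'technique': 'two hands'}
```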

Another innovation by omission is that laypeople are no longer instructed to take the pulse of the suspected victim of cardiac arrest. We are only to check for breathing. No breathing = CPR. Why? Apparently we were being really slow about it. Locating a pulse is harder than it looks, and this was found to delay the initiation of CPR. With CPR, sooner is better than later.

And something is better than nothing. The rescue breathing is optional. If for whatever reason you do not feel comfortable blowing into a stranger’s mouth, the AHA says to just go ahead and do the chest compressions. There’s been much talk lately that switching to a “hands-only” protocol in general might be beneficial. The logic behind this echoes the above-mentioned concerns about bystanders being more likely to rapidly initiate CPR if it is made as uncomplicated as possible. Just press on the person’s chest in a rhythm similar to a normal heartbeat. It was also noted that the ick factor of the breathing might deter more germ-phobic would-be rescuers.

A number of studies have examined this, 2 of which were recently published in the New England Journal of Medicine. Both studies were conducted by having emergency dispatchers deliver randomly assigned sets of instructions to callers reporting cardiac arrest emergencies: some were instructed to provide chest compressions and breaths, the others chest compressions only. What the authors found was that while recipients of the hands-only CPR did not fare significantly better than those who received the traditional sets of compressions and breaths, they certainly didn’t fare worse. Additionally, in one of the studies, those whose cardiac arrests had cardiac causes (as opposed to non-cardiac causes such as drug overdose) tended toward better outcomes when given just chest compressions and no breaths. It can be argued that, since at the time of a sudden cardiac arrest the body still has a decent volume of oxygen available (roughly 10 minutes’ worth), rescue breathing is not as important in the first few minutes and it is best to minimize interrupting chest compressions, other than to reassess breathing.

There has yet to be a consensus as to which version of CPR is best. Our class taught the chest compressions and rescue breathing version, although at least 2 members of our small class (myself and the lady who posed the question) were already aware of the debate.

I had a number of other questions for our instructor. What if the unconscious person might have choked on something (as small children are prone to do)? Answer: still perform CPR; it won’t make anything worse and might help. Can I get sued for doing this? Answer: in America, anyone can get sued for anything, but such cases are generally dismissed.

Had I consulted with others prior to attending the class, I would also have inquired if, once I got my CPR certification card, I could get sued for not performing CPR. One friend claimed to have been told something to that effect during his CPR certification, but I have yet to find any confirmation of this. Either way, I will gladly administer chest compressions to any of you who have heart attacks in my presence. Although I can’t make any promises about doing the rescue breathing.

* For our purposes here, a layperson is anyone who is not a healthcare provider, regardless of how brilliantly you did on your college biology exams.

The EMS guy who taught our CPR class informed us that the song “Stayin’ Alive” has a suitably paced beat to it, so you can always hum the Bee Gees to yourself if you’re unsure of what a normal heart rate feels like.

If you’re thinking that it sounds like people were participating in these studies without consenting, you’re absolutely correct. However, the authors assure us that ethics committees and “appropriate review boards” signed off on their methods. If it makes you feel any better, surviving participants of one of the studies were eventually informed of their contribution to science. And now you’re probably thinking, “What about the friends and families of the non-survivors?” and I just don’t have an answer for you. I’m guessing no?

Who told you this?

2005. “Overview of CPR.” Circulation 112: IV-12-IV-18.

Sayre, M.R. et al. 2008. “Hands-Only (Compression-Only) Cardiopulmonary Resuscitation: A Call to Action for Bystander Response to Adults Who Experience Out-of-Hospital Sudden Cardiac Arrest.” Circulation 117: 2162-2167.

Weisfeldt, M.L. 2010. “In CPR, Less May Be Better.” New England Journal of Medicine 363: 481-483.

Rea, T.D. et al. 2010. “CPR with Chest Compression Alone or with Rescue Breathing.” New England Journal of Medicine 363: 423-433.

Svensson, L. et al. 2010. “Compression-Only or Standard CPR in Out-of-Hospital Cardiac Arrest.” New England Journal of Medicine 363: 434-442.

Helpful and patient CPR instructor whose name I forgot.