Posts Tagged ‘the brain’
June 29th, 2010 | Meera
In Part I of this essay, I told you how a short story by Swedish writer Lars Gustafsson presented me with what seemed like a useful analog for talking about how I experience scientific nomenclature. This second part of the essay probably won’t make much sense if you haven’t read the first.
As a reminder, here is the sentence I stole from Gustafsson’s marvelous short story “Greatness Strikes Where it Pleases,” and edited to suit my purposes. Apologies to him.
Scientists have such funny names for their things: that is their peculiarity, and they have a right to all those names which I don’t have.
In case you’re one of the few people reading this who doesn’t know me personally, I’ll clarify that I’m a working, early-career science writer with a graduate degree—in the humanities. In other words, I’m an educated nonscientist with a deep interest in science and some hard-earned, on-the-job training in understanding scientific concepts (especially within the field of health and medicine, about which I have begun to write regularly in the past year). But my formal academic background doesn’t help me much when it comes to grappling with the nomenclature of science.
In Gustafsson-terms, I don’t have a right to the “funny names” scientists have for “their things.” And that can make science a difficult world to travel in.
At the simplest level, unfamiliarity with the naming of things in science can act as a barrier to understanding. As a writer, even one who has a defined “beat,” my livelihood depends on flexibility. I need to be able to sensibly cover a broad range of topics, each of which has its own names for its own things. The more specific the scientific field, the less likely I am to know all of those names and the higher the barrier I have to scale.
I’ll give you an example. At the moment, I’m researching a story about multiple sclerosis. Even before I began working on the piece, I grasped the basic facts of the disease. I knew it was a neurological disorder marked by lesions in the tissues of the brain, spinal cord, and optic nerves. Specifically, multiple sclerosis causes patchy plaques in the insulating myelin sheath—composed of proteins and phospholipids—around the nerve fibers of the central nervous system. In doing so, it disrupts the smooth transmission of action potentials traveling along the axons between nerve cells. This leads to numbness, weakness, poorly controlled muscle movements, and changes in vision.
I would argue that the text above reflects some of the reasons names in science are problematic for a nonscientist. For one thing, like many clinical texts, it uses two different names—lesion and plaque—for the same thing. For another, both words have everyday connotations that contradict their scientific meanings. In ordinary English, a plaque is a flat object, while the plaques of multiple sclerosis are typically raised, or even wedge-shaped. In ordinary English, a lesion is often thought of as an open wound or fresh cut, but in the disease context it’s an area of scar tissue: sclerosis comes from a Greek root that means “hardening.” (I think of Gustafsson’s boy, bewildered by saws called tails, even though they have nothing to do with tails.)
In addition, though it is careful to avoid more specialized terms like CD4 T-cells or MS-susceptibility SNPs, the description also includes a number of words that are limited to the scientific domain. Of course, my job demands that I know, comprehend, and accurately use names like myelin sheath and phospholipids (and CD4 T-cells and MS-susceptibility SNPs). In learning them, I have added the concepts they represent (and the concepts required for understanding what they represent, which are themselves numerous) to the objects of my world. By extension, I have reached for the right to know that they exist. I consider them, and many other names like them, as tools in my shed.
Yet even when it comes to a single disease, that’s not saying very much.
This Dictionary of Multiple Sclerosis, for instance, spans 254 pages and contains over 600 entries, some of which define words familiar to me but most of which do not (I hadn’t encountered Experimental Autoimmune Encephalomyelitis before last week, and while it may or may not appear in my article, I’ve found it necessary for understanding several of the research papers I’m reading).
Before I finish work on this story, there will be several dozen more scientific terms that will have entered my vocabulary. Some of them will become permanent fixtures in my toolshed: old friends that I may use to pound in future fence posts. Others, though, will inevitably retreat once again into the world of things whose names I do not know. And the same will be true of the next piece I write, and the next. Though my comfort with and command of the naming of things in science grows daily, I will probably always operate, in a deep sense, within a world where what exists and what does not is at least a little “vague and uncertain.”
I say these things not to bemoan my fate, which is self-chosen and quite beloved (and not in order to defend writers from criticism when we get things wrong), but because I think it’s worth talking about. I think it’s worth examining the ways in which, when it comes to scientific terminology, many of us—even those of us who work with scientists—are akin to Gustafsson’s boy. We may feel unsure of what things the world contains, and we may lack a sense of true ownership over those things and their names.
I attended the wedding of an old friend two weekends ago. My roommate from college, a third-year medical resident and one of the smartest, most driven people I know, had brought some work with her for the weekend. Looking at the first sentence of a scientific paper on her iPhone—a paper she needed to understand in order to properly diagnose a difficult case—she chuckled to herself. “Can I read something to you?” she asked. When I nodded, she read:
Hemophagocytic lymphohistiocytosis (HLH) is also known as the autosomal recessive familial hemophagocytic lymphohistiocytosis (FHL), familial erythrophagocytic lymphohistiocytosis (FEL), and viral-associated hemophagocytic syndrome (VAHS).
As soon as she finished, we both broke out into laughter. It was impossible not to laugh. The sentence, as written, was impenetrable.
This was the case despite the fact that we both recognized its capacity to hold and convey meaning. If you had complete access to the terms it used—if you knew all the funny names for all the things in it—you would have a fairly precise understanding of what the paper happened to be about (as it happens, a rare genetic autoimmune disorder that affects the cells of the blood and apparently goes by at least four names).
You might argue that those words weren’t written with me in mind. This is partly true. My friend was much better equipped than I for the task of overcoming the barrier of all the terms in that first sentence. She continued reading the paper as I sat by her in the sun, bringing the full weight of eight years of medical training to bear on the density of terminology it contained, and (presumably) managing to hop quite neatly over the problem.
There are excellent reasons for science to keep its nomenclature separate from the vocabulary of ordinary speech. Scientific discourse values specific denotation, not ill-defined connotation. It values the compression of ideas. It abhors ambiguity. This is why so many scientific terms, including the ones that dominate the sentence we laughed over, have been derived from Greek and Latin: languages that, unlike our own modern tongues, have ceased to evolve and can provide (apparently) stable containers for precise concepts.
I appreciate these qualities of scientific speech, even though they serve to build a world in which I sometimes founder. Assuming the names for things really are precise and unambiguous, I can believe that in spite of any confusion I may personally feel, the language of science actually does serve to draw clear demarcations around objects and ideas. I can trust that no one will be sending me to fetch tools by the wrong name; or, worse, to look for tools that do not exist. And I—unlike Gustafsson’s boy—can quite happily accept the limits of my knowledge and work to expand it.
But there was still something true in the laughter I shared with my friend. Neither the sheer bulk of scientific nomenclature nor (more problematic) the fact that it sometimes fails to live up to its ideal of clarity is lost on scientists themselves.
Philip Ball, who holds a PhD in physics, has called for his peers to be clearer and more transparent in their application of existing terms and the invention of new ones, not just for their own sakes but for the rest of us poor saps as well. Fertility, he points out, is now routinely used by demographers to mean both “birth rate” and “the ability to reproduce,” thus “allowing the existence of fertile people who have zero fertility.” And for an example that’s closer to home, take this. My husband, Ross, is a graduate student in computer science. An early page in one of his textbooks lists several translations between computer science and statistics, which often use different words for the same thing. Estimation in statistics equals learning in computer science (and neither, as Ross can tell you based on many extraordinarily frustrating conversations with me, quite equals what these two common English words mean outside those fields).
We are sent for a tool, but by the wrong name.
Simon Young, co-editor-in-chief of the Journal of Psychiatry & Neuroscience, ranted about the bloating of research vocabulary with jargon and neologisms in 2006, reserving his sharpest vitriol for words ending in what he considers to be the preternaturally ugly suffix -omics. Young’s aesthetic judgments aside, what he really objects to is a troubling disconnect between word and meaning that has arisen as a result of fashion. “I find it interesting,” he comments, “that all journals with it [the word neuropsychopharmacology] in the title publish papers not involving drugs and, therefore, outside the scope of the journal title. Why use such a cumbersome word if you ignore its precise meaning?”
We are sent for a tool, but it does not exist.
True; research is not a woodshed. It is fluid, ongoing, additive. Uncertain names that mean uncertain things multiply daily in the world of science, thanks to the constant formation of neologisms and the lack of a standardized, universally accepted process for coining names for new discoveries or inventions.
To their credit, scientists recognize the problem of vague or inconsistent terminology, and frequently make recommendations to improve the situation. Should I go on? Because I can. What troubles me most is that even when clear and logical rules for how to name things are proposed by well-meaning scientists, as often as not they fail to be adopted by the community at large.
Why? Inertia, probably. Genuine disagreement with the standards, possibly. A simple attachment to what one knows and is habituated to, certainly. And, of course, there is the issue of control. Simply knowing the name of a thing means you have the right to know it exists in the world. But owning a name means you own the thing itself. It means you decide how it exists in the world.
This is not mystical talk. This is, very simply, about power. You only have to look at the heated historical disputes over the naming rights of atomic elements to know the truth of it. The late 1990s christening-pangs of element 104—a highly radioactive substance, most of whose isotopes decay in a matter of minutes or seconds—reflected a struggle for dominance, not just between individual scientists, scientific labs, or associations, but between nations. (The U.S. overpowered Russia. Surprised?)
Here is a sentence from “Greatness Strikes Where it Pleases” that I did not have to edit:
In actual fact, the strong decide what words should be used for.
In the story, the boy who lacks the names of things is not one of the strong. He has no way of knowing what does and does not exist. And he feels the world itself, governed by names he cannot grasp, to be a strange and unfriendly place: full of fearful things that rise up like birds out of the bushes. As a result, he rejects words entirely, retreating into an inner landscape of branching trees and mysterious mushrooms—a world he builds himself from the patterns of shadow and wallpaper.
Greatness strikes where it pleases, writes Gustafsson, and what we are meant to understand from this is that there is a kind of greatness in the boy and his shadowy world. In the context of the story this is a deeply satisfying conclusion. Exquisite, even.
In the context of reality, it’s frustrating. I have no wish to retreat into a world of my own making, and neither, I would wager, do most nonscientists. What I want is for science to meet me halfway.
I am happy to accept that I will never know all the names there are to know, and that I must learn the ones I will learn slowly, one by one. I can take on that work with pleasure. I am far less happy to accept that, having learned a name, it will not always point to the same thing. Or that, having learned about the existence of a new thing, it will not always be called by the same name. And I mourn the idea that the naming of things—in science especially—should fall to the strong, or be used as a national power-play or marketing tool for a discipline. In every scientific field, from genomics to geology to astrophysics, rational minds are calling for the simplification and standardization of language.
Don’t let the strong decide what words should be used for; decide sensibly, as a community, on how to name things. And then share those names with nonscientists as clearly as you can. It will still be difficult for us to understand you sometimes. But we all, I think, would very much like to have the right to know what does and does not exist in this extraordinary world of ours.
June 28th, 2010 | Meera
Last Saturday night, I heard a reading of an extraordinary story by Swedish writer Lars Gustafsson, published in his 1981 collection Stories of Happy People. The piece takes as its central character a severely mentally retarded individual, following him from boyhood to middle-age in a dense fourteen pages and constructing a delicate contrapuntal narrative in which outward circumstances—harsh and melancholy—and an inner world—complex and immensely beautiful—act as intertwining melodies. In its entirety, the story is infused with sweetness and melancholy in equal measure, and it is well worth your investigation.
The reason I’m telling you about it here, though, is because I was struck by how Gustafsson uses nomenclature as an alienating force. In a deep and surprising way, the story reminded me of my own interactions with the scientific world and its language. More about that later.
First, here is how Gustafsson describes the uneasy relationship between the boy and the array of tools he encounters in his family’s woodshop. (Throughout the story, his inability to grasp the names of things sets the boy, who clearly suffers from a profound language impairment, apart from others—who approach objects and command them comfortably through their names.)
Grownups had such funny names for their things: that was their peculiarity, and they had a right to all those names which he didn’t have. He always laughed awkwardly and crept into a corner when his brother and sister tried to teach him those names.
Those things belonged to them: dovetail saws, punches. The old wooden mallet used for pounding in fence posts…they hit him when he came in from the woodshed with wounds and gashes from the tools in the woodshed. They were afraid that he’d really hurt himself. They wanted to keep him away from the tools.
His brother and sister, who knew how, were allowed to handle them. It gave him the feeling that the words, too, belonged to them. Sometimes they might send him to fetch tools that did not exist, “bench marks,” things like that. It gave him a feeling that it would always be vague and uncertain which things existed in the world and which did not. Evidently using words was harder than you might imagine.
They always laughed loudly, doubled up with laughter when he returned empty-handed, or when they had fooled him into going to the far end of the barn searching for impossible objects. In actual fact, the strong decided what words should be used for.
—Greatness Strikes Where it Pleases
When I heard this passage read aloud in the firm voice of actor Colm O’Reilly, I felt a funny tremor of recognition. At first it seemed odd to me that I should so empathize with the boy’s mistrust of language. I spend my life, after all, with words. They are my instruments and my toys. And generally, I love learning new words, especially nouns.* One of my favorite things about skinning a bird is the act of writing its names in my log. I take a special pleasure in tracing those letters, doing my best to control my wayward script and form the words precisely, as if it really matters that I get their shape just right; as if by laying down ink over Dendroica fusca, Blackburnian Warbler, I am not simply recording something that already exists, but re-creating it as well. When I name a bird it becomes known instead of unknown.
Of course, there are many ways to know a thing. I can scrutinize the patterns of a bird’s plumage, the shape of its bill, its size in my hands. I can construct knowledge of a thing, quite deep and true knowledge, in fact, by adding up a hundred different pieces of information. But to hold them together is difficult. Give me a name, and I have a sturdy container for those hundred pieces: a shape for my knowledge.
This is exactly what science tells us, isn’t it, about the human brain? That it craves order? That the unique gift of language is to provide a set of labels with which the brain can produce order out of the too-great tidal stream of data it accepts from the world through the sensory organs? In 2001, for instance, an elegant series of experiments with 36 no doubt adorable participants showed that as early as nine months after birth, saying words aloud while introducing two similar and unfamiliar toys helped babies to reliably differentiate between them.
Playing other sounds, like a spaceship takeoff or a car alarm, while introducing the objects did not—and neither did a human voice producing a non-verbal expression of emotion, such as a sound of satisfaction or disgust. Words, and words alone, enabled the babies to place each toy into a separate category. (This was true whether the names were real or nonsense labels, ruling out the notion that the babies were simply responding to word-object pairings they already knew.)
There is also the possibility—not proven, but tantalizing—that language doesn’t just organize sensory information, but influences how it is perceived. Most famously, a number of experiments have shown that speakers of languages with a greater number of words for different but similar hues are better able to distinguish between those hues in the color spectrum.
Last year one study of Greek speakers—who unlike English speakers make a linguistic distinction between light and dark blue with the breathy nouns ghalazio and ble—went a step further. By measuring the electrical activity in subjects’ brains as they looked at visual stimuli, researchers showed that the greater acuity for color enjoyed by Greek speakers could actually be recorded, in the form of electrophysiological differences, as early as 100 milliseconds after a colorful shape was presented. This interval is consistent with what we know about the time it takes information to reach the visual processing areas of the brain, and is considered too brief for the participants to have been consciously aware of what they were seeing. In addition, the differences arose even though subjects were instructed to attend to the shapes of the stimuli, not their colors. (The paper, along with a few caveats, is detailed here by Language Log. The most interesting caveat has to do with the suggestion, drawn from previous studies, that this kind of language-based interference in color perception is likely limited to the right visual field, which sends information to the left—language-dominant—hemisphere of the brain.)
So there is some evidence, preliminary though it may be, that the names we know really do affect, on at least some level, “which things exist in the world and which do not.”
This makes it easy to understand why Gustafsson’s boy, so ill-equipped to learn names, finds the external world vague and uncertain. When you cannot grasp how words connect to objects, navigating amongst objects is confusing and unpredictable. You might find yourself searching for impossible things or overlooking what is right in front of your nose. Also easy to appreciate, in the light of these color studies: the boy’s sense that the right to use each tool is inextricably linked to the ownership of its name. The things in the shed belonged to his brother and sister and so did the words for them. Whereas the boy, lacking words, had neither the right to use the tools nor to know if they existed.
What does all this have to do with me and science and scientific nomenclature?
Well, this: If I make a few edits to a sentence from Gustafsson’s story, it captures something of the experience I sometimes have when I try to navigate within the scientific world.
Grownups had such funny names for their things: that was their peculiarity, and they had a right to all those names which he didn’t have.
I would say:
Scientists have such funny names for their things: that is their peculiarity, and they have a right to all those names which I don’t have.
If anyone is still with me, I’ll talk more about this in Part II of this essay tomorrow.
*(Incidentally, in Hebrew the prosaic “vocabulary” is rendered as the lovely phrase “treasury of words.” I still have the notebook, thin and yellowing, in which I collected some of my first words in that language: book, picture, boa constrictor, prey, primeval forest. If you don’t know or haven’t already guessed why I began with those words in particular, ask me sometime and I’ll tell you.)
March 7th, 2010 | Meera
I’m four, going on five, and walking with my class along a corridor that runs from the room where we take our naps to the room where we paint our pictures. I’m wearing the tiny red-checked uniform of my kindergarten. It has a pocket on the right-hand side, and inside it is a piece of tissue paper that I used a few minutes ago to blow my nose. I’m fingering it nervously because I don’t know what to do with it now. There is a rubbish bin, I think, by the bathroom, but I am too shy to ask if I can leave the little choo choo train we’ve made—chugging along so smoothly—to walk over there and throw it away. I keep worrying at the tissue, wadding it up and tearing bits off it as I walk.
Then I have an idea. I am the last one in line, the caboose to this convoy. I roll the tissue into my palm, tight and invisible, and casually remove my hand from my pocket and lower it to my side, still balled up. Like a practiced sneak, I slowly unfurl my fingers one by one. The tissue falls, my step quickens. In a moment I am a few feet beyond it—and no one has seen. I let out my breath.
This isn’t the earliest memory I have, but it’s one of the few that has a distinct narrative. It makes me laugh to consider how terrified I was of doing anything even remotely against the rules, or anything that called attention to myself—and how devious I was willing to be in the service of that anonymity. It tells me I have not, perhaps, changed all that much.
There are other things I remember: eating porridge with slices of boiled chicken at my upstairs neighbor’s house, singing “You Are My Sunshine” in rounds in the car, burning the skin of my knees on the scratchy red carpet that only existed in one room of my family’s old apartment, getting Barbie dolls out from under the bed. But in general, the impressions I have of my early childhood are few, vague, and fugitive. When I can see them at all they are like the patterns on the insides of your eyelids—try to focus on them, and they change.
It’s not uncommon for a few startlingly clear visions to persist from a very young age. When I ask, my friend Regina says she can feel herself lying on her brother’s warm, comforting back, the two of them in a cot surrounded by the noise of strange children at a daycare center; she was 18 months old. Yvette, not much older than that when she was in the hospital for heart surgery, has on her tongue the taste of the popsicle a nurse thought to give her: Grape. But for the most part, when it comes to early memories we are all, relatively speaking, paupers caressing a small handful of coins.
You might imagine that young minds haven’t yet developed the neurological capacity—the physical equipment, so to speak—to store memories about experiences over time. Brain structures known to be vital for processing episodic memory, after all, such as the hippocampus and the prefrontal cortex, do not develop fully for years.
Sensible as this theory seems, it is hard to square with the facts. Six-month-old babies can remember previously formed associations, like the fact that if they kick their leg just so, a pretty mobile that some strange scientific hand has tied to their ankle will twist in the air above them over and over, like a bird, all color and light. And pain, of course, makes its way into the brain just as well as pleasure does. When my nephew was barely a year and a half old he crashed his head against a glass table. For days, my sister says, he’d return to the same spot and show her how it had happened, pantomiming his bump, face crumpling into a facsimile of the wail he’d wailed when it first happened. It is almost as if—not really, I know, but as if—he had some intuition that the moment would not last long, and thought to place it with someone who could hold it after he himself had forgotten.
Amazingly, scientists have been able to show that the ability to form complex episodic memories starts literally in the womb; we know this thanks to Dr. Seuss and two curious researchers. In 1986, A. J. DeCasper and M. J. Spence asked pregnant women to read aloud one of three similar excerpts from The Cat in the Hat every day, several times a day, for six weeks before they gave birth. Three days after each baby was born, an ingenious setup allowed the infants to “choose” which of the three short passages they wanted to hear by varying the rate at which they suckled on a teat. By significant margins, the tiny infants showed they remembered and preferred the familiar reading to the ones they had never heard before. (A control group of unread-to babies had no particular feelings on the subject.)
In other words, children are not, by any means, sieves through which experiences flow like water without ever being caught. Yet the empirical evidence that most of us hold fewer memories from the earliest years of our lives than from later ones is impossible to ignore. If people are asked to describe as many childhood memories as they can, almost none of the items they recall will have occurred before their third birthday; after that, the number of memories they cite soars markedly. A statistical analysis of memories plotted against age finds that the scarcity of early recollections is even greater than you would expect after taking into consideration the fact that the older a memory is, the more likely it is to have decayed.
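That last comparison—observed scarcity versus decay-predicted scarcity—can be made concrete with a toy model. This is an illustrative sketch only; none of these numbers come from the studies discussed, and the encoding and decay rates are hypothetical, chosen just to show the shape of the argument: under uniform encoding and simple exponential forgetting, memories from ages 0–2 should be only modestly scarcer than memories from ages 3–5, nothing like the near-total absence the studies actually report.

```python
import math

# Hypothetical parameters for a 30-year-old rememberer.
current_age = 30.0     # age of the rememberer, in years
encoding_rate = 100.0  # memories encoded per year (hypothetical)
decay_rate = 0.15      # per-year exponential decay constant (hypothetical)

def expected_recall(year):
    """Expected memories surviving from a given year of life,
    assuming uniform encoding and exponential decay with memory age."""
    memory_age = current_age - (year + 0.5)  # midpoint of that year of life
    return encoding_rate * math.exp(-decay_rate * memory_age)

early = sum(expected_recall(y) for y in range(0, 3))  # ages 0-2
later = sum(expected_recall(y) for y in range(3, 6))  # ages 3-5

# Decay alone predicts that ages 0-2 yield roughly 60-65% as many
# surviving memories as ages 3-5; the observed cliff before age 3
# (almost nothing recalled at all) is far steeper than that.
print(f"predicted from ages 0-2: {early:.1f}")
print(f"predicted from ages 3-5: {later:.1f}")
print(f"ratio (0-2 vs 3-5): {early / later:.2f}")
```

Whatever decay rate you plug in, the ratio between the two age bands stays far from zero—which is precisely why the statistical analyses conclude that ordinary forgetting cannot, on its own, account for childhood amnesia.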
Caroline Miles, questioning a hundred college-aged women in 1893, found that the average age from which a first recollection came was 3.04 years; no subject of hers cited an event, impression, or sensation dating from before the age of 2.6 years. Since then, over a century of studies of early childhood memories have arrived at conspicuously similar figures, with some small but interesting variations across culture and gender: women typically remember slightly more childhood details than men do, and Americans typically reach slightly further back than Chinese participants do.
Psychologists have a name for this lacuna in our lives, this band of time at the end of which, it seems, we each line up to drink deeply from Lethe’s stream and give up most of what we once knew. This first forgetting. Depending on whom you ask, it is called in the literature either “infantile amnesia” or “childhood amnesia,” names that have something of the absurdly overblown about them—they make us all sound like so many desperate soap opera characters bumbling about in a world full of strangers, our whole past lives erased at a single stroke.
And yet there is, truly, a note of tragedy about this very ordinary amnesia. We have reason to believe that the sensations we have as infants and very young children are exquisitely intense, full of vivid sounds, shapes, smells, images, and ideas that fly across our consciousness from every corner. Because we are less cognizant of established patterns, less able quickly to file away each impression into a neat category as soon as it arrives, we are (in the way so many of us strive to be in our adult lives) flooded with excitement and adventure—hyper-aware of the bright, sweet world in which we live.
But look at us now. Look at me. In the face of all that wondrous experience I imagine to have once coursed through my brain like rivers of fire, here I am today: working eagerly at the meager store of memories I have from my childhood as if they were a few small pieces of tissue in my pocket, wearing thinner and thinner with each rub.
Why does this forgetting happen? As with so many questions about memory and experience, no one really knows for sure. No one, any longer, believes Freud was right about the mind’s need to quell the “trauma” of psychosexual development by repressing memories associated with growing up, as if the entire adult human race were a limping legion of soldiers who had survived a war, each tender from the wounds of childhood itself.
Instead, most current theories seem in one sense or another to treat the fierce, beautiful memories from this period of our lives like lost treasure, buried under the ground somewhere and we without a map.
Maybe, some have argued, it takes a while for the brain to develop the ability to properly label individual memories with information about the way in which they arose, so that while we may on some deep level remember an experience itself, we are unable to access it because we no longer remember its source. If, for instance, you had not yet developed a sense of self, to what anchor could you safely attach your memories of things that happened to you? I like this notion. I think of balloons that ought to be tethered to a pole, to a tree branch, to a chubby wrist, coming free of their loose knots. Once they had flown high, ranged far away, could you bring them home again?
Or maybe, others say, the tens of billions of synaptic connections we lose as we age into adulthood prevent us from retrieving the recollections we formed early on, because many of the complex strings of firings that once led our minds from here to there have now been broken somewhere along the line. I like this notion, too. I think of a spider’s web that someone has walked through, intricate and gauzy. All unknowing, they shake their heads free of the fine threads as they step away, and leave this corner fragmented from that. I think of a house with ten thousand rooms and a thousand locked doors.
And maybe, still others guess—the ones, I imagine, who love words as much as I do— before we can use language to describe an event, even if only in our minds, memories live in silence. Wanting names, they persist—but cannot be called. I love this notion best of all. It feels less lonely than the others.
I think of a mind full of old friends, waiting for me to remember who they are.
November 17th, 2009 | Meera
In the months after I quit my teaching job, addled from the accumulated unease of days spent in battle and carrying my failure like an extra limb, I found there was nothing more soothing than stillness. Breaths grew small, hands rested quietly against thighs, feet found their place and kept it. I remember one train ride in particular, so gripped with disquiet that having once looked down to my shoes, I felt physically incapable of the simple act of raising my head. The carriage bumped along Boston’s pockmarked streets, but each twitch of its creaky frame saw me tighter and more transfixed.
I strove to be still because movements, in those moments, were traitors. Fear could speak its name in the shudder of a shoulder and there was no step but a misstep. So I paused, glassy as a frozen pond.
My silence here lately has had that same root, I think. It’s been a long, strange year. I’ve unmoored myself, once again, from a career that didn’t satisfy me. And once again I am afraid of defeat. I’ve been trying, I’ll admit, to stop time with hushed inaction. Later, I tell myself, I will speak. Later begin to move. Now, for now, let me be a statue who never leaves her spot. Better that than a human being, capable of tripping. Capable of falling.
But here is the truth: I’m not, you know. Not glassy, or frozen, or still. Not for a second, no matter how paralyzed I think I am. I haven’t a choice about it. Nothing can stop me from swaying to keep my balance.
It’s clear, I suppose, that movement requires reams of complex coordination. I lift my hand to turn the page of the book I’m reading (One Hundred Years of Solitude, for the third and best time in the past dozen years), and to do so I must regulate, consciously or not, the movement of the joints at my shoulder, elbow, wrist, thumb, and forefinger—each of which is capable of turning and bending independently in up to three dimensions. That, in turn, requires controlling the contraction of nearly thirty different muscles, including the six sinewy carpal muscles that bind the wrist and let it roll over in a small half-moon once my fingers have grasped their insubstantial target. Peer in, and these muscles themselves have constituents whose movements must harmonize: fibers threaded together in bundles, each individual bundle squeezing or easing at the bidding of a single nerve.
My many parts synchronize in a beautiful clockwork, all so the sentence that begins somberly on page 70 can end on page 71 with a faint smile: “He soon acquired the forlorn look that one…sees in vegetarians.”
But in stillness, surely, there is rest. As I stand without moving, not even shifting from foot to foot, surely the threads out of which my muscles are woven are unruffled. I want this to be true so badly. Yes, breath continues, and heartbeat. Fluid moves across membranes and always there is the minute trundling of molecular motors carrying their endless loads across my cells. But look, I tell myself, these are only tremors within, rumblings beneath the earth. At least the earth itself remains stoic. My body, terra firma. So do I keep myself safe. In my immobility I can be a soft black hat on a table, waiting for the flourish of handkerchiefs that will prove the show was worth coming for.
Not so. Standing itself is parlous, and never as steady as it looks. Consider how heavy is the human head upon its kinky spine, how large the torso on its spindly legs, and yet how thick those legs compared to the stiff ankles, the tiny feet, upon which we place at last the entire burden of ourselves. We are not built, like lions, on four muscled legs, the pillars of an ancient church. We are inverted pendulums, secured to the ground; travel up through our torsos to our crowns, and what you find is oscillation.
This is what the physiologists say—and they should know, because they watch: No one is ever truly standing still.
We do not ripple as do pliant blades of grass, breathed on by the wind. Instead, we fight to maintain verticality through a near-constant series of tiny displacements and corrections activated within the musculoskeletal system. Postural sway is what they call it. A gentle phrase, and one that captures both the strictness of our ideal (Watch your posture, young lady!) and the impossibility of adhering to it. Motionless we are not.
Here, then, is what moves us. It starts with diminutive shifts in the intensity and positioning of the points of pressure where our soles meet the ground. All unconscious, we map and remap the subtle forces with which we push back against the earth. As we do so, the imaginary reference point we use to gauge our balance (somewhere in between our feet) becomes a constantly moving target. It wanders.
“Rambling,” this is called affectionately in some scientific literature, as if the center of each human being’s personal universe is defined by the fact that it likes to take long walks in the outback.
So. We have a point of reference that keeps us upright, and it moves. But it isn’t unwatched. The current position of this center of pressure is instantly communicated by nerve signals traveling up the brain stem and into the neural system that controls balance. In response, nerves fire in an imperceptible ballet. They gently squeeze and relax those braided threads that make up the muscles in our calves, abdomen, back, thorax. The whole delicate orchestration causes equally diminutive shifts—”trembling,” they call this—in the position of our center of gravity.
Trembling follows rambling, and so we stand. How frail those words make us sound. Like needles skipping across a sheet of paper, following a skittering heart.
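For the curious, the rambling-and-trembling loop can be caricatured in a few lines of code. Understand that this is only a toy of mine, not any physiologist's model: the gain and noise constants are invented for illustration, and the real system sways in two dimensions with dozens of coupled muscles. Still, it captures the shape of the idea: a point that drifts at random, and a corrective pull that follows each drift.

```python
import random

def simulate_sway(steps=10_000, gain=0.1, noise=0.01, seed=1):
    """A toy of postural sway: random drift ("rambling") followed,
    at every step, by a small corrective pull back toward the
    upright point ("trembling"). All constants are invented."""
    random.seed(seed)
    position = 0.0  # displacement from the ideal upright point
    trace = []
    for _ in range(steps):
        position += random.gauss(0, noise)  # rambling: the point wanders
        position -= gain * position         # trembling: the pull homeward
        trace.append(position)
    return trace

trace = simulate_sway()
# The sway never stops, but it stays bounded: it strays, and is drawn back.
print(max(abs(p) for p in trace))
```

Run it and you will see what the scientists' graphs show: the movements are variable, but they vary within strict limits. No ant strays too far from the sugar.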
Not so long ago, apparently, researchers regarded these stray wobbles as nothing but noise, meaningless bits of information generated by a flawed neural system that was not built well enough to give absolutely correct instructions. If the brain could direct the body to be perfectly still around a fixed central point, the thinking went, it would. What can we say? It can’t. C’est la vie.
If you had told me this, that day on the train, I’d have nodded. It would have been of a piece with my mood then. I’d launched myself into the air, expecting to fly, and fallen terribly. To learn that even my penitent stillness was deficient would have been no surprise.
But scientists, unlike saturnine ex-teachers, do not like the idea that things are just so because they are imperfect. Imperfection is not very interesting. So they continue to wonder about this sway. They draw graphs of it, delightful manic scribbles like ants circling about a drop of syrup, and see that though the movements we make as we shift and sway are variable, they vary within strict limits. No ant strays too far from the sugar.
Scientists also try to poke at the problem, making us close our eyes and seeing if our spontaneous quiverings change. And look, look here. They do. The intensity of postural sway increases significantly with eyes shut. But the tiny muscle movements we make don’t get more haphazard, as they might if the brain were just making more mistakes. The ants are wandering a little further away, but they’re still finding that sweet center.
(Are the ants working for you? They are for me, but I’ve been thinking about this all evening, worrying away at the idea of it. I might be an ant myself. If not, here’s what might be clearer.)
The reason you sway more when you close your eyes, scientists think, is not that you become unsteady, in danger of losing your balance. It’s that you’re working harder to keep the balance you have. With each tiny shift in those pressure points in the soles of your feet, each minute muscular movement in your legs or back, this theory holds, your brain is tracking information about your position in the world. In a way, the sway is a way to test the limits of stability.
If I do this, am I still standing? What about this? Or this? The same incessant experiment takes place whether your eyes are open or not—the brain and body are just more enthusiastic about their probings when one perceptual channel is closed off.
What I am saying is this: Maybe I have to stand up and sway to stay in balance. I’m a little less funereal now than I was those few years ago, a little more willing to welcome uncertainty. Maybe stillness itself is the root of the fall. And maybe, every instant in which I stray away from the perfect center I seek shall be followed by a move back towards it. I’m by no means sure of that. But I do like the idea of experimenting with the limits of stability. And I think I’ll be doing a little more of it right here from now on.
Carrying a Ladder
We are always
really carrying
a ladder, but it’s
invisible. We
only know
the matter:
something precious
crashes; easy doors
prove impassable.
Or, in the body,
there’s too much
swing or off-
center gravity.
And, in the mind,
a drunken capacity,
access to out-of-range
apples. As though
one had a way to climb
out of the damage
and apology.

— Kay Ryan
June 16th, 2009 | Meera
I don’t know what it’s like for you, but there are days when it feels like I’m meeting someone for the first time. Her features seem foreign to me, and that, in its way, is not so far from the truth.
I don’t know what it’s like for you: there are days when I am most comfortable if the sight is brief. Best if I have a specific task, like brushing my teeth or plucking at the ragged curve of my eyebrows until one bends to match the other; best if I can file the required report and move on, before too much is seen: Go ahead and wear that shirt. It looks well on you. No, there is no scratch on your cheek. It must have been a momentary twitch of a nerve… Yes, you look as tired as you feel. More tired. There it is. It’s not that I am ashamed, understand; my self-esteem is not a dress that has fallen and must be tugged back up. It’s not that I never stare; oh, stare I do. But there’s something unnerving about it.
I don’t know what it’s like for you. For me it’s a question of manners. Too direct a gaze creates an impossible challenge: which pair of eyes will drop first? I know that both are mine. Yet how strange is what I perceive—that I am at home inside one set of arms and legs, and at the same time these very limbs are hanging quite happily on a separate frame. That I am twinned.
I live quite comfortably with this contradiction, of course; but I suppose I haven’t always.
Babies aren’t born with what psychologists (somewhat ploddingly) call “mirror self-recognition.” It takes many months before they’re able to draw an unfaltering line between their reflections and themselves, to comprehend that the stare that meets their own so fearlessly does not belong to another human being. It’s not just a question of waiting until certain inevitable developments take place in the brain, either—though that is important. A light bulb doesn’t just blaze on one day and transform stranger into self. No; in fact, developing the ability to recognize one’s own body in the mirror seems to be a surprisingly rational undertaking, and one that builds over time.
In 1979—the year I was born, naked of a sense of self—two scientists named Lewis and Brooks-Gunn tumbled a series of burbling 12-month-old babies in front of a mirror, to see what they could see. The vast majority of them, the experimenters observed, engaged in something they called “contingent play”—so named because the movements of a reflection are contingent upon one’s own movements.
Having noticed that there was a being opposite them in the glass, and having perceived that the behavior of this being seemed oddly familiar, the babies would proceed to carry out clever studies of their own. Staring at their reflections, they would perform the same series of movements over and over again, each time watching intently to see if the strange creature in front of them would follow their lead correctly. They bobbed their heads up and down, bounced their chubby bodies enthusiastically, carefully waved their arms back and forth, all the while with eyes growing wide as they began to clarify and confirm the fact that they possessed a perfectly synchronous imitative partner who would do all that they did at just the same moment. These early play sessions seem to be a necessary first step towards claiming one’s reflection as one’s own.
(They are not sufficient, for it is possible to recognize that your movements dictate those of another without recognizing that the two are one and the same. The full understanding that the face opposite you in the mirror is your own does not generally arrive until late in the second year of life, according to subsequent studies. When that understanding comes, it can truly be described as self-consciousness. One common test of mirror self-recognition is to dab a spot of rouge on a child’s nose, then place them in front of a mirror. A sheepish, or frustrated, rubbing at the spot is the positive indicator researchers are looking for.)
But listen; am I the only one who is astounded by the canny, systematic tests those children conducted? Am I the only one who went straight to the mirror to reenact them, nursing a tiny thrill and half-hoping to catch my other self shifting her neck just a heartbeat too late? Because I’ll tell you what the Lewis and Brooks-Gunn study says to me. It says that seeing yourself does not come easily.
Let’s put it this way: to know an apple, say, is straightforward. Hold it in your palm; take in its dangerous crimson; scrutinize its glossy skin. It is entirely self-contained. Its apple-y nature is self-evident. To know your own face in the mirror is different. You have to slide into the apprehension sideways, gather together a body of physical evidence and reason your way towards the truth:
When I nod, she nods. When I stare, she stares back. Her arms follow my arms; her legs stretch as far as mine. This plant does not move when I move; it is not part of me. This other person moves without my say-so; he is not part of me. Only she, with her skin so brown and her feet curling under her like frightened mice—only she moves with me. So. This is who I am. These are the things I am made of. These are my boundaries in space.
I don’t remember collecting those proofs. I don’t remember building my sense of self like this, brick by brick with my baby-brain. But I believe that I did, and you as well. And I’ll tell you something else: I believe that we’re in good company. Elephants, apes, and dolphins can learn to see themselves through contingent play, too.
Also, robots. Robots can learn to see themselves. Are you smiling yet? Listen, at the very least, one robot that I know of can—its name is Nico. I read about Nico in this charming paper, published last year. In it, two Yale computer scientists show how, with the help of three algorithms that deftly compare data to experience, a robot “can learn over time whether an item in its visual field is controllable by its motors, and thus a part of itself.”
First, Nico spends some time—four minutes, to be precise—waving its arm back and forth and carefully noting the shape of its own movements. Then, it’s ready to look itself in the eye, so to speak. Nico is placed in front of a mirror, whose contents are captured in a streaming image by a wide-angle lens embedded in what would be Nico’s right eye. Carefully monitoring that video stream, the robot continues to motor its arm around in random directions, checking for precisely contingent movements in the reflected scene. It consults the algorithms in its memory, calculating the probability that what it sees is really Nico. Very quickly, then, the robot is able to accurately determine whether it happens to be looking at itself, an inanimate object, or an animate other.
Once it has understood the form of its own arm, learned the way in which its joints shift position—once it has traced the essential outline of its own metallic body—Nico can be said, in a very real sense, to recognize itself. And after that understanding has set in, no one (not even a sly researcher insinuating himself into the scene and painstakingly mimicking Nico’s movements) can fool it. Nico knows exactly what it is.
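If you would like to see the bones of the idea, here is a toy sketch of contingency detection in code. It is emphatically not the Yale paper's algorithm (theirs works on real video streams, over time, with proper probabilities); it is an invented caricature in which a "robot" issues random move-or-hold commands and then judges how faithfully each scene before it tracks those commands. The thresholds are made up.

```python
import random

def contingency_score(motor_commands, observed_motion):
    """Fraction of time steps at which the observed motion
    matches what the motors were told to do."""
    matches = sum(m == o for m, o in zip(motor_commands, observed_motion))
    return matches / len(motor_commands)

def classify(score, self_threshold=0.95, animate_threshold=0.6):
    """Invented thresholds: near-perfect contingency means self;
    partial contingency, an animate other; none, an object."""
    if score >= self_threshold:
        return "self"
    if score >= animate_threshold:
        return "animate other"
    return "inanimate object"

random.seed(2)
# The robot waves (True) or holds still (False) at random moments.
commands = [random.random() < 0.5 for _ in range(400)]
mirror = list(commands)                                   # perfectly contingent
mimic = [c if random.random() < 0.8 else not c for c in commands]  # a sly researcher
statue = [False] * len(commands)                          # never moves at all

for name, scene in [("mirror", mirror), ("mimic", mimic), ("statue", statue)]:
    print(name, classify(contingency_score(commands, scene)))
```

Even the painstaking mimic, copying four movements out of five, falls short of the mirror's perfect synchrony—which is, in caricature, why no researcher insinuating himself into the scene can fool Nico.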
But achieving that knowledge demands two things, both of which are clearly spelled out in the title of the Yale paper: time and reasoning.
Seeing yourself doesn’t come naturally; it’s not fundamental to your understanding of the world in general. And it can’t be accomplished simply by having someone else tell you who you are in the glass; it’s not a fact you swallow, but a judgment you come to. At first—ask a baby; ask a robot—it’s not at all silly to narrow your eyes at that odd-looking stranger and wonder why they’re copying what you do. At first, surely it’s right and proper to be suspicious of the shade in the mirror.
When it comes right down to it, I mean, you might be wrong about the whole thing.
Here’s the thing: When I think of myself, what comes to mind is less a single clear and shining image of my own face than a shifting sensation of me-ness: a complex amalgamation of memories, ideas, and sensory impressions. I am the one around whom my husband’s arms wrap, the pressure of his musculature against my own clearly defining the shape of my body. I am the one who lay at the foot of my parents’ bed as a child, listening to the hum and click and drip of their ancient air-conditioner and imagining the sounds growing larger and larger until they merged with my own heartbeat. I am the one who frets for hours before phone calls, sweaty and pale, who dances while she cleans, who hates hair in her face and still remembers the sharp, dusty taste of the whiskey sours she used to drink because she liked the way they made her tongue twist up inside her mouth.
I am the one who feels the way I feel, thinks the way I think, not—or not just—the one who looks the way I look. And how do I look, anyway? No matter how many tests I run, no matter how much I grow to trust the image before me in the glass, the sight of my own face is always mediated through layer after layer of tin, silver, glass, copper, paint. I’ve never seen it without a mirror as middleman, without a bender and broker of light. What if the person I’m seeing isn’t who I think it is at all? Why should their mere resemblance to me be sufficient identification?
I live quite comfortably with this suspicion, of course; but not everyone does.
People with an extremely rare disorder known as Capgras delusion come to believe that those whom they love have been replaced by impostors. These strangers are identical to my mother, my sister, my brother, Capgras sufferers say, but they are not them. They are different people entirely. Their features—remarkably similar! The close resemblance is uncanny! But no; they are certainly not the ones I know. They are frauds. I do not recognize them.
The extent of the delusion is such that, confronted with their own reflections, Capgras patients are apt to startle violently. Why, I’ve never seen this person before! they may exclaim, in horror and disgust. Some engage in the same kind of contingent play that babies do, pinching themselves and waving their arms, keeping a chary eye on the ghoul in the mirror—but unlike babies, they will not be satisfied by the paltry evidence of their own eyes. And when repeated gazes into a mirror call up the same disagreeable stranger again and again, people with Capgras may accuse their likenesses of deliberately appearing in their lives solely to stalk and torment them. Capgras delusion steals a person’s ability to see their own true selves, and replaces it with an uninvited guest who cannot—will not—leave them alone. I confess, I sympathize.
But how exactly does this happen? Capgras patients are otherwise, for all intents and purposes, normal—whatever that means. Their vision is not impaired, and neither is their cognitive functioning; nor are any aspects of their memory. Their negative emotional response to their loved ones and their own reflections is bizarre, to say the least, but in some sense it’s also perfectly lucid and reasonable. It matches, after all, precisely the way you would expect someone to react if everyone in their inner circle of intimates had been replaced by an impostor. (And wouldn’t you yell if your beloved reflection suddenly turned into someone you knew, deeply and profoundly, wasn’t you at all?)
So what causes this extraordinary disconnection between vision and belief, between seeing a person, recognizing their features, and correctly identifying them as someone whom you know and love? The inimitable UCSD behavioral neurologist V.S. Ramachandran has a lovely theory about this. Look, he says: Sensory information about objects the eye sees is transmitted from the retina into visual centers in the temporal lobes. Here, the object is identified: This looks like a teapot, this looks like a poodle, this looks like my sister. Capgras patients can accomplish this part of seeing perfectly well.
But after an object has been identified, the brain continues to work. It sends its decoded information to the limbic system, which is a complex network of brain structures that enables the perception and expression of emotions. One of the first places this visual information passes through is the amygdala (the name means almond-shaped, which it is). Ramachandran explains that the amygdala is responsible for labeling the emotional content of what the eye has seen. This object is beloved, the amygdala concludes, and should trigger affection; this one is despised, and should trigger hate.
This visual data, then, becomes colored with a layer of emotional interpretation; it travels on towards other structures in the brain. At its final stop, what began with a glance at a face sets in motion at last the physiological responses that enable a person to actually experience the appropriate emotion: things like a speedier heart rate, higher blood pressure, and a light film of sweat covering the skin. (For what is emotion but the brain, talking to the body, talking to the brain?)
You might imagine that a rather odd sensation might occur if this process were disrupted somewhere after the point where an object is decoded and before the point where the emotion that ought to be associated with it is actually experienced. Ramachandran did. He asks:
Is it possible that in this patient there has been a disconnection between the face area of the temporal lobes and the part concerned with the experience of emotion? Perhaps the face area and the amygdala are both intact, but the two areas have been disconnected from each other. When (the patient) looks at his mother, even though he realizes that she resembles his mother, he does not experience the appropriate warmth, and therefore says ‘Well, if this is my mother, why is it I’m not experiencing any emotion? This must be some strange person.’
I think of a person like this, and how they must feel when they stare at the mirror, eyes fixed full upon their own faces and hearts as hard as stone. Do they never, now, experience the comfort of being alone with themselves?
The hypothesis that the Capgras delusion is caused by the neurological disconnect Ramachandran describes hasn’t been proven, so far—but its elements catch at my heart. It’s not enough to simply identify a person in order to truly know them. The brain needs more. It’s not enough to match the movements of a reflection to your own in order to recognize it as yourself. The brain needs more. Seeing yourself does not come easily. It requires time. It requires reason. And, beyond all that, it requires some measure of affection.
Knowing this, I return to my counterpart in the mirror—the one who still seems so strange to me sometimes—and am moved to tenderness. I look on her for a long moment, studying the shape of her lips, the brown of her eyes. I forgive our separation, forget the times when her eyes have challenged mine or mine hers, and gaze.
Because if I do not see her, how will I love her?
And if I do not love her, how can I see her?
If you’re fascinated by reflections, I can do no better than to recommend the deep and intricate treatment of the subject in Mirror, Mirror: A History Of The Human Love Affair With Reflection. I found my copy in the stacks of Powell’s Books in Portland, OR, on my honeymoon.