2003 Thesis Excerpts

Erico Marui Guizzo: Shannon's Message

Timothy A. Haynes: Basis for Dreaming

Matthew T. Hutson: Computing Beethoven's Tenth

Sorcha McDonagh: Atlantic Crossings

Maywa Montenegro deWit: Genetically Versatile Rice

Anna E. Strachan: Chasing Chupacabras



The Essential Message: Claude Shannon and the Making of Information Theory
Erico Marui Guizzo

If information theory doesn’t have an origin myth, it has a very clear beginning. The field was founded in 1948 when Shannon published the paper considered his masterwork, "A Mathematical Theory of Communication." The fundamental problem of communication, he wrote in the second paragraph, is that of reproducing at one point a message selected at another point. A message could be a letter, a word, a number, speech, music, images, video—anything we want to transmit to another place. To do that, we need a transmission system; we need to send the message over a communication channel. But how fast can we send these messages? Can we transmit, say, a high-resolution picture over a telephone line? How long will that take? Is there a best way to do it?

Before Shannon, engineers had no clear answers to these questions. At that time, a wild zoo of technologies was in operation, each with a life of its own—telephone, telegraph, radio, television, radar, and a number of other systems developed during the war. Shannon came up with a unifying, general theory of communication. It didn’t matter whether you transmitted signals using a copper wire, an optical fiber, or a parabolic dish. It didn’t matter if you were transmitting text, voice, or images. Shannon envisioned communication in abstract, mathematical terms; he defined what the once fuzzy concept of "information" meant for communication engineers and proposed a precise way to quantify it. According to him, the information content of any kind of message could be measured in binary digits, or just bits—a name suggested by a colleague at Bell Labs. Shannon took the bit as the fundamental unit in information theory. It was the first time that the term appeared in print.
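
To make that measure concrete, here is a minimal sketch in Python, offered purely as an illustration rather than as anything drawn from Shannon's paper; the coin, the 26-letter alphabet, and the lopsided probabilities below are invented examples.

```python
# A minimal sketch of Shannon's measure of information: a choice among N equally
# likely messages carries log2(N) bits; when the choices are not equally likely,
# the entropy weights each one by its probability.
import math

def entropy_bits(probabilities):
    """Shannon entropy, in bits, of a complete set of message probabilities."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy_bits([0.5, 0.5]))            # a fair coin flip: 1.0 bit
print(entropy_bits([1 / 26] * 26))         # one of 26 equally likely letters: about 4.7 bits
print(entropy_bits([0.7, 0.1, 0.1, 0.1]))  # a lopsided source carries fewer bits: about 1.36
```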

In his paper, Shannon showed that every channel has a maximum rate for transmitting electronic data reliably, which he called the channel capacity. Try to send information at a rate greater than this threshold and you will always lose part of your message. This ultimate limit, measured in bits per second, became an essential benchmark for communication engineers. Before, they had developed systems without knowing the physical limitations. Now they were not working in the dark anymore; with the channel capacity they knew where they could go—and where they couldn't.
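
For the idealized case of a band-limited channel disturbed by white Gaussian noise, Shannon's paper gives this limit in closed form: capacity equals bandwidth times log2(1 + S/N), where S/N is the signal-to-noise ratio. The short sketch below plugs assumed, ballpark figures for an ordinary voice telephone line into that formula; the numbers are illustrative, not Shannon's.

```python
# Shannon's capacity for a band-limited channel with Gaussian noise: C = W * log2(1 + S/N).
# The bandwidth and signal-to-noise ratio below are assumed, ballpark figures
# for a voice telephone line, not numbers taken from Shannon's paper.
import math

bandwidth_hz = 3000      # assumed usable bandwidth of a voice line
signal_to_noise = 1000   # assumed signal-to-noise power ratio (about 30 dB)

capacity_bps = bandwidth_hz * math.log2(1 + signal_to_noise)
print(round(capacity_bps))  # roughly 30,000 bits per second
```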

But the paper contained yet one more astounding revelation. Shannon demonstrated, contrary to what was commonly believed, that engineers could beat their worst enemy: transmission errors—or in their technical jargon, "noise." Noise is anything that disturbs communication. It can be an electric signal in a telephone wire that causes crosstalk in an adjacent wire, static from a thunderstorm that perturbs TV signals and distorts the image on the screen, or a failure in network equipment that corrupts Internet data. At that time, the usual way to overcome noise was to increase the energy of the transmission signals or send the same message repeatedly—much as when, in a crowded pub, you have to shout for a beer several times. Shannon showed a better way to avoid errors without wasting so much energy and time: coding.

Coding is at the heart of information theory. All communication processes need some sort of coding. The telephone system transforms the spoken voice into electrical signals. In Morse code, letters are transmitted with combinations of dots and dashes. The DNA molecule specifies a protein's structure with four types of genetic bases. Digital communication systems use bits to represent—or encode—information. Each letter of the alphabet, for example, can be represented with a group of bits, a sequence of zeroes and ones. You can assign any number of bits to each letter and arrange the bits in any way you want. In other words, you can create as many codes as desired. But is there a best code we should use? Shannon showed that with specially designed codes engineers could do two things: first, they could compress the messages, saving transmission time; second, they could protect data from noise and achieve virtually error-free communication using the whole capacity of a channel—perfect communication at full speed, something no communication specialist had ever dreamed possible.
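
As a toy illustration of the first of those two ideas (squeezing a message by giving common symbols shorter codewords), consider the sketch below; the four-letter alphabet, the codewords, and the sample message are invented for the example and are not Shannon's own constructions.

```python
# Toy illustration of compression by coding: common symbols get short codewords.
# The alphabet, codewords, and message here are invented for the example.

# A fixed-length code spends 2 bits on every symbol of this 4-letter alphabet.
fixed_code = {"a": "00", "b": "01", "c": "10", "d": "11"}

# A variable-length prefix code gives the most frequent symbol the shortest codeword.
prefix_code = {"a": "0", "b": "10", "c": "110", "d": "111"}

def encode(message, code):
    return "".join(code[symbol] for symbol in message)

message = "aaaaabbcd"  # 'a' is common here; 'c' and 'd' are rare

print(len(encode(message, fixed_code)))   # 18 bits with the fixed-length code
print(len(encode(message, prefix_code)))  # 15 bits with the variable-length code
```

Because no codeword in the second table is the beginning of another, the receiver can split the incoming stream of zeroes and ones back into letters without any separators.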

Read Erico’s entire thesis on MIT’s DSpace



The Basis for Dreaming:
Physiology, Theory and Practice

Timothy A. Haynes

It was 1953 at the University of Chicago. Eugene Aserinsky—college dropout, dental school dropout, social work dropout—had washed up on the doorstep of Nathaniel Kleitman—physiology professor and sleep expert—and been taken in as an assistant dream researcher. At that time, dream research was predominantly Freudian, but Aserinsky wasn't interested in why we dream or in what dreams mean; he wanted to know why our eyes twitch when we sleep. No one expected eye-twitching to revolutionize neuroscience; other people in history had noticed and recorded eye-twitching, but no one had bothered to study it.

Rummaging through the university basement, Aserinsky dug up a junked electroencephalograph, wired it to the sleeping eye of his eight-year-old son, and waited for twitching. Once the eyelids were moving, the EEG would erratically spike and dive into jagged peaks and troughs. Aserinsky assumed the machine was broken—it wouldn't be the first time—so he hooked up a second EEG to his son's other eye. Same results. The two machines recorded the same wild fluctuations. Funny: the EEG readouts looked a lot like the brainwave readings taken from fully awake patients.

Aserinsky realized that something was awake that theory said shouldn’t be. Theory said the brain rested when the body rested; that sleep was a chance for mind and body to recoup from the rigors of waking life. Even today, this is a popular misconception and a logical belief, based on all outward observation. What shocked Aserinsky: the EEGs said that the brain is often fully awake when the eyelids are twitching, even though the outward body appears practically paralyzed.

Aserinsky shared his results with Kleitman, and together they began rigorous experimentation to determine what those twitching eyes and spiking brainwaves meant. They woke people up when their eyes were twitching and their brains were active according to the EEG and discovered that around 90 percent of people awakened at this stage were in the middle of vivid dreams. Less than 10 percent of people woken up in other stages of sleep were found to be dreaming. They ran five years of experiments on hundreds of test subjects to conclusively establish that—contrary to popular belief—the mind is incredibly active while sleeping, particularly during REM (Rapid Eye Movement) sleep. These experiments also demonstrated that dreaming was somehow linked to the active brain patterns observed during REM sleep.

Aserinsky and Kleitman's work was the modern revolution that brought the dreaming brain into focus for researchers and freed scientists and philosophers from the belief that sleep existed to rest the mind. It has since been shown that dreaming—or at least the patterns of brain activation that coincide with dreaming—occurs in all mammals (and only in mammals). The obvious questions arise: if sleep does not simply rest the mind, what does sleep do, and why is dreaming an inescapable part of evolved mammalian physiology? If nature is economical and avoids unnecessary energy waste, what necessary evolutionary function is realized in dreams? It is our task as rational, curious thinkers to explore what this function is.

Einstein wrote: "Philosophy is like a mother who gave birth to and endowed all the other sciences. Therefore, one should not scorn her in her nakedness and poverty, but should hope, rather, that part of her Don Quixote ideal will live on in her children so that they do not sink into philistinism."

Science has done well in divesting us of many archaic misconceptions and in turn providing us with a new understanding of the world, but when it comes to uncovering the nature of the mind, even the best minds of neuroscience are still philosophers in practice, even if not by their own admission. And in no part of neuroscience is the role of philosophy more important than in probing this terra incognita of why we dream.

Michel Jouvet, one of Europe's leading neurophysiologists and dream researchers, wrote in 1999: "People know very well that sleep is a complicated problem and that dreaming is perhaps the last frontier of neurobiology: we shall certainly understand perception before we understand dreaming."

His point is well taken: dreaming is the most misunderstood and mystical function of mammalian physiology. We all dream, and none of us can definitively say exactly why. The more progress that is made in understanding the brain, the stronger our foundation from which to speculate on dreaming—and enormous progress has been made in the last fifty years—but dreaming is still one of the ultimate Quixotic windmills, right next to "What preceded the Big Bang?" and "How many dimensions are there, really?"

"Why are dreams?" is the type of question that can really keep you up at night.


Artificial Intelligence and Musical Creativity: Computing Beethoven’s Tenth
Matthew T. Hutson

The first time I heard of David Cope's Experiments in Musical Intelligence (EMI, or, more affectionately, Emmy) was in 1999. The cognitive scientist Douglas Hofstadter, author of Gödel, Escher, Bach, came to my university to give a lecture on the EMI phenomenon. When Hofstadter first learned of EMI, he visited Cope and soon went on a lecture tour presenting EMI and his interpretations of it. EMI does not just compose music; it can compose music in the style of any given artist. (Feed it the scores of Beethoven's first nine symphonies, and it spits out his tenth. One might argue that composing in an existing style is not creative, but that would mean that Beethoven's symphonies 2-9 weren't creative.) Hofstadter is an amateur composer and a passionate fan of Chopin, and the fact that EMI could produce Chopin-like pieces disturbed him. During the lecture he offered three sources of pessimism, which he later reprinted in Virtual Music, a 2001 book by Cope:

"1. Chopin (for example) is a lot shallower than I had ever thought.
 2. Music is a lot shallower than I had ever thought.
 3. The human soul/mind is a lot shallower than I had ever thought."

Faced with these options, one can perhaps empathize with the German musicologist’s anger at David Cope.

But generating music with an algorithm is really nothing new. An algorithm is just a series of rules, usually followed in sequential steps; the algorithm for getting dressed might be: put on pants, put on shirt, put on socks, etc. Algorithmic music is much older than computers. In the 17th century, Giovanni Andrea Bontempi designed a wheel with numbers on it assigned to notes, and an accompanying set of rules for using the wheel to compose. The methods appeared in his 1660 work, New Method of Composing Four Voices, by means of which one thoroughly ignorant of the art of music can begin to compose. In 1787, Mozart devised a game called Musikalisches Würfelspiel that involved arranging precomposed sections of music into a minuet by rolling dice. Mozart and other great composers adhered to a set of rules restricting tonal music given in 1725 by Johann Joseph Fux in his The Study of Counterpoint. Example rule: "From one perfect consonance to another perfect consonance one must proceed in contrary or oblique motion." (A consonance is a combination of notes that sound nice together; its opposite is dissonance. Motion refers to the relative motion between overlapping melodies or voices; oblique means that one voice goes up or down while the other remains steady, and contrary means they move in opposite directions.)
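
To see how mechanical such a recipe really is, here is a small sketch of the dice-game idea in modern terms; the measure labels are placeholders rather than Mozart's actual tables, but the procedure is the same: for each of the sixteen bars of the minuet, roll two dice and let their sum pick one precomposed measure.

```python
# Sketch of the Musikalisches Würfelspiel idea: each bar of a sixteen-bar minuet
# is chosen by the sum of two dice, which indexes a table of precomposed measures.
# The labels below are placeholders, not Mozart's actual measure numbers.
import random

NUM_BARS = 16

# For each bar, eleven candidate measures, one for each possible dice sum (2 through 12).
table = [[f"bar{bar + 1}-choice{option + 1}" for option in range(11)] for bar in range(NUM_BARS)]

def roll_minuet():
    minuet = []
    for bar in range(NUM_BARS):
        dice_sum = random.randint(1, 6) + random.randint(1, 6)  # roll two dice
        minuet.append(table[bar][dice_sum - 2])                 # the sum picks the measure
    return minuet

print(roll_minuet())
```

Eleven choices for each of sixteen bars gives roughly 11 to the 16th power possible minuets, tens of quadrillions of them, all drawn from one small table of precomposed music.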

Any kind of constraint or formalism placed upon composition defines an algorithmic process. The haiku is an algorithm; you're required to write the poem in three lines of five, seven, and five syllables, respectively, and the theme must refer at least indirectly to a season. Many constraints remain invisible, such as the very idea of 12-note tonal music. Constraints also result from the methods of notation, and from how certain instruments can be played. (A chord for a solo piano piece can't contain three widely spaced notes, as humans usually have only two hands.) Cope argues that all composers are algorithmic and that it would be insulting to argue otherwise, because they've spent years learning the techniques and processes of their school and of the musicians who came before them.


Atlantic Crossings
Sorcha McDonagh

It is a still night, and to the north of the Greenland ice cap, the emerald veils of the aurora borealis are fluttering like ghosts. The Brent family is flying southeast, breaking the silence with the steady beating of their wings, with their panting, and with an occasional honk at one another. The sound races out into the night and echoes off the ice-canyon walls. This is the second leg of their journey to Ireland, a 500-mile flight across the ice sheet that will take the geese about ten hours to complete. They are bound for the fjords and bays near Ammassalik, an island on Greenland’s east coast. Natives refer to the east coast as "Tunu," the back side of the country, because it is so sparsely populated compared with the western seaboard.

The adult geese recognize the terrain below and know their way through the icy valley. They know the look of Baffin Bay, too, with its bluish ice floes; the jagged outline of Qeqertarsuaq; and the patchwork fields of Northern Ireland with the Mountains of Mourne to the south. The geese may even pick out the city lights of Reykjavik and Derry. This method of navigating by landmarks, known as piloting, is one of the first things that pilots of light aircraft learn. On cross-country flights, they look for landmarks that are easy to pick out from the air: intersecting highways, reservoirs, mountains.

Ornithologists think that birds form mental maps, much as people remember the prominent features of an oft-made journey. Even the stars feature in these maps—they are cosmic landmarks, familiar, sparkling arrangements on the velvet hemisphere above. Lindbergh and Columbus referred to the constellations on their Atlantic crossings, and there are yachtsmen who still navigate by the stars; some say it is more authentic, the mark of a true sailor. In his book, Sea Change, Peter Nichols writes with pride about using a sextant on his solo Atlantic crossing in a small yacht.

As well as using the stars as landmarks, birds use them to establish direction. The distinction between navigation and orientation is an important one: navigation means using landmarks and bearings to find something—like staging grounds or a food source—whereas orientation means knowing north from south. In the 1960s, behavioral ecologist Stephen Emlen released indigo buntings in a planetarium, the northern sky displayed on its dome, and found that the birds turned to face the direction of their migration. When he rotated the dome image by 180 degrees, the buntings changed direction too. But when Emlen excluded the North Star and the region around it from the planetarium's display, the birds became confused. To test the birds' reliance on the North Star itself, Emlen blotted out that one star along with some constellations elsewhere in the sky. The birds all turned to face the right way, suggesting that the constellations around the North Star, which include Draco and Ursa Minor (of which the North Star is a part), mattered most to the buntings. This region of the night sky is their reference point, the anchor around which the sky seems to revolve.

The "star compass," as it is known, is one of three compasses that birds employ on their travels. The first one to be discovered was the Sun compass, over fifty years ago, when German ornithologist Gustav Kramer observed the behavior of caged starlings around the time they would normally migrate. The birds were restless, agitated by the migratory drive, the Zugunruhe. When they could see the Sun, they all tended to face in the direction of their migratory route. When the Sun was hidden by clouds, they did not all face the same way. But when Kramer used mirrors to change the direction of the Sun’s rays, the birds all turned to face what they thought was their direction of migration, relative to the reflected light.

Kramer concluded that the starlings used sunlight as a directional reference. The theory made more sense when some of Kramer’s colleagues found that birds’ circadian rhythms enabled them to compensate for the Sun’s changing position over the course of a day: they know where the Sun should be, and when. Starlings and carrier pigeons can use the Sun compass at the poles—even when the Sun does not set—and penguins use it as they slowly make their way across Antarctica’s ice fields. Even the orange monarch butterfly orients itself by the Sun’s position on its annual migration from the United States to Mexico and back.

The third compass birds have is magnetic, one that gives them a sense of north and south that is as natural as our sense of up and down. Biologists had long thought that birds could perceive Earth’s magnetic field, but it was not proven until 1968, by ornithologist Wolfgang Wiltschko and some European robins. By placing electrical coils in a bird cage, Wiltschko shifted the direction of the magnetic north inside the cage, fooling the birds.

Over thirty years later, we still don’t know how birds perceive Earth’s magnetic field. One theory is that a bird’s photoreceptors, the light-sensitive cells in its eyes, may double as magnetoreceptors—cells that are sensitive to magnetic fields. It is likely that crystals of an iron ore called magnetite, found in the back of a bird’s mouth, have some connection to birds’ magnetic sense. Because Earth’s magnetic field varies with location, getting stronger at the poles and weaker at the Equator, birds may be able to construct mental maps based on the planet’s magnetic topography.

Together, the innate orientation skills and the acquired knowledge of the landscape—geographic, stellar, and magnetic—enable thousands of migrating species to cross Earth’s diverse terrains every year.


Rice: How the Most Genetically Versatile Grain Conquered the World
Maywa Montenegro deWit

On an overcast day in late January, the rice fields in Crowley, Louisiana, are chocolate-brown and barren. Except for a few remnants of rice stubble and tractor trails, there is little to suggest that anything more than earthworms grows here. In a matter of weeks, however, this landscape will begin to transform. The first brilliant green leaves of rice will poke their heads through the soil, specking the dark earth with the promise of the harvest. Throughout the early months of summer, the plants will grow taller and fuller, and will sprout multiple branch-like tillers. The tips of each of these will blossom into a head of small white flowers, giving the fields, from a distance, the look of a sugar coating. As the flowers mature, seeds will form and begin to fill with starch. The rice plants—now top-heavy with the weight of the grain—will bend forward in a graceful bow. Soon the rice grains will harden and the stalks will dry out, turning the fields a rich golden brown. Some ninety days after the first hardy spikelets emerge, the rice will be ripe and ready for harvest.

Not that the rice in these fields will ever make it to the supermarket shelves. Here at the Louisiana State University Rice Research Station just outside Crowley, the rice fields are giant petri dishes and Oryza sativa L.—better known as rice—is the experimental organism. On small one- to two-acre plots, scientists plant seeds from a stockroom containing thousands of different rice varieties—from strains that tolerate being sprayed with herbicides to types that smell like popcorn. And the researchers probe many aspects of rice production, from fertilization, soil, and water management to breeding of new varieties and, nowadays, genetic engineering. Founded in 1909 to improve rice production through "varietal improvement and the development of agronomic practices that increase production and maintain profitability," the LSU Rice Research Station claims the oldest, and still one of the most prolific, rice breeding programs in the country. Over forty percent of rice acreage in all southern states, and seventy percent in Louisiana, is planted with Cypress, a long-grain variety born and bred at the LSU Rice Station. The leading medium-grain rice in the United States, a variety known as Bengal, also comes out of the Crowley cradle.

The Rice Research Station brings together a colorful array of rice experts—from lab scientists and plant breeders to farmers and policy-makers. And each group brings to the station its own unique brand of rice knowledge. Plant pathologists examine the diseases and viruses that afflict rice, while entomologists tackle the pestilential bugs. Rice breeders make careful crosses between different rice varieties, while geneticists decipher what it is at the molecular level that determines a "variety" in the first place. Farmers contribute by "field-testing" experimental rice and innovative farming methods; more importantly, they contribute something less tangible—a familiarity with rice that comes from decades of working the land.

Looking much like farmers themselves, the rice researchers are no goggle-masked scientists in latex gloves and lab coats. Bluejeans and cowboy boots are standard lab attire in Crowley, along with button-down plaid shirts and Kelly-green John Deere caps. And while the research conducted at the LSU station is some of the most sophisticated in agricultural science, the scientists are only secondarily concerned with publication for academia. Or as Dr. Qi Ren Chu, a plant geneticist and rice breeder at the Research Station, puts it, "here we try to reach the rice community rather than go to international symposia to present fancy papers." Instead, the focus in Crowley is on production: from determining optimal planting time to testing methods for weed management—the goal is always how to coax more grain from the soil per unit of time, energy, and money. This, indeed, is the holy grail of any rice farmer, and it is a preoccupation whose success determines the welfare of more than half of humanity.


Chasing Chupacabras: Why People Would Rather Believe in a Bloodsucking Red-Eyed Monster from Outer Space than in a Pack of Hungry Dogs
Anna E. Strachan

In his last book, The Demon-Haunted World, the late Carl Sagan wrote that people embrace pseudoscience in proportion to their misunderstanding of real science. In January of 2003, I approached fifty Puerto Ricans on the street—half of them men, half of them women—and asked, "Do you believe the Chupacabras exists?" Twenty-four answered "Yes." Nine of those twenty-four people said they believed because of the "evidence" or the "dead animals they found." Five of the twenty-four said they believed because "people have seen it," and another five said they believed because the "news" or "science" said the Chupacabras exists.

But the "evidence" they cite—the dead animals, the eyewitnesses, the science and the news—never led any authorities to suspect a bloodsucking monster. So why do these people believe? Sagan would have blamed a superficial understanding of science; people are fed the findings of science in soundbites and textbooks, but the process of science rarely gets on the menu. When Puerto Rican scientists were quoted in the island's media, reports tended to emphasize the scientists' speculations about what the attacker could be (It's a dog! It's a monkey! It's dogs and monkeys!), while few emphasized what the autopsy reports proved the attacker could not be—a bloodsucker. This allowed some people, like Roberto Nogal, to assume that the studies were never carried to conclusion, that "the cases were shut tight," when in reality, science—in Sagan's words—"gropes and staggers" toward truth. Still, the different speculations offered over several weeks of media coverage may have allowed people to roll the "Chupacabras" into the same mental drawer with every trashed model of the universe and "cure" for cancer, concluding that scientists haven't got a clue. But even that doesn't explain why believers would still choose to believe in the highly improbable—a super-intelligent bloodthirsty biped—over the probable hungry stray dogs.

One possible explanation for people’s belief in the Chupacabras is a phenomenon psychologists call the "confirmation bias," which holds that people tend to selectively notice or ignore things according to their already-held beliefs. For example, hypochondriacs are at the mercy of the confirmation bias, interpreting every ache as indicative of a serious illness. But psychologists have found that the confirmation bias is a fundamental tendency in human thinking. An example can be seen in Mayor Chemo Soto’s analysis of the Goatsucker attack "patterns":

"Now, listen there’s something about this: there’s something about bodies of water. It appears near water, little creeks, waterfalls. Every time it appears there is a big noise, a deafening sound, a sound of a turbine. It never appears close to the houses, but in the hills. We have run after it when we heard that noise, and when we get there, there is a terribly strong smell of sulfur. That’s everywhere that the Chupacabras has been—sulfur. A violent stink in the place where the noise has been. It’s very strange. The other day, a meteorite fell by the mountain and when it fell, there was a terrible smell of sulfur too. So I started thinking, analyzing… so it has to be from outer space. Because of the smell and the noise."

How about the times when people reported a "wet dog" smell instead of sulfur, or didn't notice any smell at all? Or the reports of silent attacks beside people's homes? What did he think of the Goatsucker attacks reported in arid, waterless environments such as those in central Puerto Rico, Mexico, or Texas? No, he replies, he hadn't considered those. But he insists they weren't important. He does, however, have a keen memory for the details and events that confirm his hypotheses: "When it was attacking around here, you could always smell that terrible stink. You would go to the hills, and you would know it had been around there, because of that terrible stink of sulfur."

In one psychology experiment conducted by researchers at Iowa State University, ESP believers and skeptics read a scientific paper either supporting or debunking ESP. Then they were tested on their memory of the paper's content. The ESP believers who read the paper undermining ESP not only remembered very little of it, but in some cases they "remembered" that the paper upheld ESP rather than challenged it! And "normal" beliefs are subject to the same bias. In a similar experiment at Stanford University, people were asked to read studies arguing for or against capital punishment, and they consistently judged the one confirming their own view as "better conducted" and "more convincing." Everyone was more critical of the study attacking their previously held belief, regardless of which belief they held.