The Lowline

By Aileen Marshall

Photo Courtesy of Dan Barasch via Kickstarter.com

Have you heard of the Lowline? No? Well, maybe that’s because it doesn’t fully exist yet. And no, it’s not under the High Line, although its name was inspired by that park. The Lowline will be an underground park in an abandoned trolley terminal beneath Delancey Street, using new solar technology to redirect sunlight underground to grow plants and light the space.

The Williamsburg Bridge Trolley Terminal opened in 1908 on Delancey Street, and trolleys ran back and forth to Brooklyn across the bridge. The station extends three blocks underground from Essex Street to Clinton Street and has interesting architectural features, such as cobblestones and a 15-foot ceiling. It closed in 1948 and has been sitting empty ever since.

Then in 2009, architect James Ramsey, who used to work at NASA developing optics for satellites, heard about the terminal. He discussed it with his friend Daniel Barasch, a strategist for Google. Ramsey thought he could use fiber optics to collect and redirect sunlight underground and turn the space into a park. They made a proposal to the city.

Aileen Marshall/NATURAL SELECTIONS

Two feasibility studies were started in 2011: one by HR&A Advisors, a real estate, economic, and energy consulting firm, and the other by the engineering firm Arup. Both came up with positive findings, indicating that the park would benefit the community. Since 2012, the Lowline organization has also run a program called Young Designers, offering educational programs to local schools and other groups that use its laboratory exhibit for lessons in science, technology, engineering, and design.

By 2012, the pair had raised $150,000 on Kickstarter to build a laboratory exhibit of the solar technology that would be used in the Lowline. As of 2015, the Lowline organization has raised $155,000 to build the park. The exhibit lab uses what Ramsey calls “remote skylights,” the technology that would be used in the park. An above-ground parabolic dish collects sunlight, then a concentrator intensifies the light 30-fold and filters out the hotter rays. Protective tubes carry the light through fiber optic cables to a central distribution point, and then to an aluminum canopy in the lab, which in turn reflects the light into the space below. This illuminates the lab and allows the plants to grow. Since it is redirected sunlight, it contains the full solar spectrum, including the wavelengths needed for photosynthesis. Tracking optics allow the outdoor dish to follow the sun during the day and maximize the amount of sunlight it collects. Mirror boxes would toggle between sunlight and electric light to compensate for variations, such as cloudy days.

Continue reading

Twenty-four visits to Stockholm: a concise history of the Rockefeller Nobel Prizes.

Part XVIII: Robert Bruce Merrifield, 1984 Prize in Chemistry

By Joseph Luna

By the time Bruce Merrifield sat down to write in his lab notebook in May 1959, a scientific puzzle had been twirling in his head for quite some time. What he wrote next summarized a Nobel-worthy problem and offered a bold but totally unproven solution, all in three sentences. It turned out to be an impeccably succinct opening salvo, not just for a research career, but for an entire field.

“There is a need for a rapid, quantitative, automatic method for the synthesis of long chain peptides. A possible approach may be the use of chromatographic columns where the peptide is attached to the polymeric packing and added to by an activated amino acid, followed by removal of the protecting group and with repetition of the process until the desired peptide is built up. Finally the peptide must be removed from the supporting medium.”

To unpack this a bit: Merrifield spotted the need to take amino acid building blocks and string them together to form a peptide of his choosing (or, if the peptide were long enough, a whole protein). His idea, in essence, was to use a solid support to get an amino acid to hold still, so that he could methodically link further amino acids to it one at a time. Finally, the immobilized chain of amino acids, the peptide, could be released and studied.

At a time when molecular biology was just getting off the ground, Merrifield’s understated first sentence belies a history of protein chemistry already more than half a century old, as well as his own frustration at making the small peptides he was interested in studying. After joining Wayne Woolley’s research group as a post-doc at Rockefeller in 1949, Merrifield applied his biochemistry training by isolating and characterizing “strepogenins,” a catch-all term for small peptides that stimulated bacterial growth. The standard practice was to isolate these peptides from a biological source, but this approach almost always generated scholarly (aka vicious) pushback: it was very difficult to rule out contamination. If a compound could be crystallized as a means of isolating it to “purity,” most biochemist naysayers would generally be assuaged.

Chemists, however, were an entirely different breed of naysayer. They would only be convinced by chemical synthesis of a pure compound, characterized at each intermediate step as a measure of quality, and where, by definition, no biological contaminant could be introduced since no life form (other than the chemist’s hands) was required. For this reason, most biochemists weren’t really considered chemists: they merely isolated and characterized what they thought were active compounds, but they could very well be fooling themselves. Justus von Liebig’s famous chemical dictum “Tierchemie ist Schmierchemie” (Biochemistry is sloppy chemistry) stung hard for the better part of a century.

Continue reading

Searching the Nobel Prize

By Susan Russo

There is a wealth of enjoyment in exploring Nobel Prize information online. There are videos, such as a documentary of the four 2012 Laureates’ discoveries in medical research; Mother Teresa’s and Elie Wiesel’s speeches after they received their Peace Prizes; and an interview with John Nash (1994 prize in Economic Sciences), including his views of the movie A Beautiful Mind, based on his life and work. Another category, “Nobel Laureate Facts,” delivers statistics on the total number of prizes over the years, the number of women’s prizes “so far,” the ages of the awardees, and the reasons that two awardees, Jean-Paul Sartre and Le Duc Tho, declined their prizes. Other current special features cover Albert Einstein, Marie Curie, Malala Yousafzai, and Rabindranath Tagore. There is even a section called “Educational Games,” which includes “Save the Dog” about diabetes, “Bloodtyping,” “A Drooling Game” about conditioned learning, and “All about Laser.” In another link, the Director of the Norwegian Nobel Institute describes the process of nominations for the Peace Prize.

My favorite section, however, is listening to the Nobel podcasts, short interviews giving us the viewpoints of the awardees in their own words. A recent interviewee was Rockefeller’s own Roderick MacKinnon. There are two separate interviews with May-Britt Moser and Edvard Moser, 2014’s dual awardees in Medicine. May-Britt Moser talks about “pure joy” for herself, and “inequality in science,” while her husband Edvard speaks of the value of “partnership” and recalls “childhood memories.” Mario Molina, awardee in Chemistry in 1995, discusses “climate change” and the role of “human activity” and says, “The risks are unacceptable.” Roger Kornberg (Chemistry, 2006) admits that most of his “ideas are wrong.” John Mather, a NASA scientist (Physics, 2006), thinks that if there is water on Mars, there is likely to be life in some form. Elizabeth Blackburn (2009 Physiology or Medicine), whose discoveries show how telomeres change with aging, says, “We just know so much and yet we know so little.” We hear from Randy Schekman, whose award in 2013 was in Physiology or Medicine, arguing for open access in scientific publications. And George Smoot (Physics, 2006) lauds the fact that “science today is a truly global enterprise.” Some Nobel Prize winners admit that they were surprised by their awards. One, John O’Keefe (2014, Physiology or Medicine), prefers being in the lab, saying, “I’m a bench scientist.” And Alice Munro, who won the prize in Literature in 2013, describes her reaction as “Bewildering but very pleasant.” In all the podcasts I’ve heard, the awardees reflect an excitement in their work, and most display a spirited optimism for the future. All in all, “meeting” these people online is thought-provoking and inspirational, at least to this listener.

Growing vegetables in small spaces

By Guadalupe Astorga

Top left: Hydroponic research in Epcot Center, Orlando/Antony Pranata, CC. Top right: Hydroponics/Frank Fox, CC. Bottom panels: Our Windowfarms Project

One of today’s global issues concerns the supply of fresh food to people in cities. While the carbon footprint of transporting fruits and vegetables from the areas where they are produced to consumers’ tables can become substantial over long distances, local production and consumption have several advantages. A number of new initiatives make it possible to take advantage of urban spaces to grow fresh vegetables in your own city or apartment.

In cities where the space is dominated by concrete construction, urban agriculture has brought new life to public and private spaces, promoting community interaction and the development of organic alternatives to intensive crop farming.

Different projects have taken over rooftops and unused spaces in New York City, not only to grow fresh vegetables for distribution in the local community, but also to offer a sustainable model for urban agriculture in open spaces.

Other interesting alternatives involve hydroponic culture, which offers a very efficient way to grow many types of organic plants without the need for much space. In recent years, hydroponic techniques have multiplied and evolved into a plethora of variations developed by enthusiastic growers who have openly shared their knowledge on the internet, making videos with detailed tutorials and instructions for beginners and experienced growers alike. Hydroponics is not expensive or complicated, can be started at any time of the year, and lets you control what you eat.

Figure 1

Left panel: Hydrosock Version/Jim Flavin; Right panel: Hydroponics principle/iamozone, CC

One example of these collaborative initiatives, also born in New York City, is hydroponic vertical gardens designed for apartment windows; people around the world have shared their experiences to create innovative and esthetic new designs. You will need a bit of creativity and enthusiasm to build this project in your apartment, but it is certainly worth it.

A more convenient and simpler alternative for getting started with hydroponics in your own apartment at minimal cost is the Hydrosock Version, proposed by Jim Flavin (Fig. 1, left panel). This handy design is the easiest version of hydroponics; it does not need an air pump to oxygenate the water, nor expensive or specialized materials. The roots get oxygen as the water level decreases in the reservoir. The principle is shown in Fig. 1, right panel.

I encourage you to build this simple hydroponic system at home for high vegetable yields at little cost. This is the right time of year to start if you want to harvest delicious vegetables this summer.

You will just need:

Continue reading

Louise Pearce – An Extraordinary Woman of Medicine

By Susan Russo

Photograph of Louise Pearce (1885-1959)

Acc. 90-105 – Science Service, Records, 1920s-1970s, Smithsonian Institution Archives

In 1913, the Rockefeller Institute appointed its first woman researcher, Louise Pearce, M.D., who worked as an assistant to Simon Flexner. Pearce was promoted to Associate Member in 1923, and continued in this position until 1951, when she became President of the Woman’s Medical College of Pennsylvania. During her career, Pearce attained many firsts, including her 1915 election as the first woman member of the American Society for Pharmacology and Experimental Therapeutics (ASPET); a second woman wasn’t elected until 1929. Pearce also had affiliations with the New York Infirmary for Women and Children (1921), the General Advisory Council of the American Social Hygiene Association (1925), and the National Research Council (1931), and she was elected Director of the Association of University Women in 1945. In 1921, Pearce was elected to membership in the Belgian Society of Tropical Medicine and received the Order of the Crown of Belgium, and in 1931 she was appointed Visiting Professor of Syphilology at Peiping Union Medical College in China.

Born in Winchester, Massachusetts, in 1885, Pearce moved with her family to Los Angeles, where she attended the Girls Collegiate School. She went on to receive her Bachelor’s degree in physiology and histology from Stanford University in 1907. Pearce continued her studies at Boston University, and was awarded her M.D. from the Johns Hopkins University School of Medicine, specializing in pathology, in 1912.

While at Rockefeller, Pearce worked closely with Wade Hampton Brown, a pathologist; Walter Jacobs, a chemist; and Michael Heidelberger, an immunologist. Their first endeavors, organized by Simon Flexner, were experiments on the treatment of syphilis in animal models, using arsenic derivatives made by Pearce and Brown. Their work was published in the Journal of Experimental Medicine in 1919. Soon after, the Rockefeller Institute sent Pearce to Léopoldville in the Belgian Congo, where she worked in a local hospital and laboratory to test the drug tryparsamide in human trials, saving the lives of many patients with syphilis or sleeping sickness, conditions which had previously been almost certainly fatal. After returning to the Institute, Pearce and Brown added cancer experiments in animal models, discovering in rabbits the malignant epithelial tumor of the scrotum now named the Brown-Pearce carcinoma.

Continue reading

Twenty-four visits to Stockholm: a concise history of the Rockefeller Nobel Prizes.

Part XVII: Torsten Wiesel, 1981 Prize in Physiology or Medicine.

By Joseph Luna

In the late 1950s, two scientists sat with a cat in a darkened room and flicked on a projector screen. For this particular movie night with kitty, the scientists showed a series of simple images to the cat, and between each one they waited for the cat to respond. Nearly all cat owners, myself included, have probably performed a variant of this basic experiment, whether with a treat or a feathery toy, to get hold of a cat’s finicky attention, or to divert it from a precarious vase or an exposed ankle. But the two scientists, David Hubel and Torsten Wiesel, first at Johns Hopkins and then at Harvard, were after something much deeper. They wanted the cat to tell them what it saw. And magically enough, they had surgically created a talking cat: an electrode was inserted into the visual cortex of the anesthetized cat’s head and set up to record from a tiny patch of the brain (rest assured the cat was fine after the experiment). By showing different images to this conked-out kitty, Hubel and Wiesel aimed to find the specific stimulus that excited the area they were recording from, be it a picture of a stationary dot or a simple line moving across the screen. If they succeeded at finding the right stimulus, they would hear the characteristic rat-tat-tat of a neuron firing. In other words, a tiny and specific part of the cat’s brain would seem to be saying “yup, that’s a line right there.”

How we perceive the outside world has been a central human question for millennia, underwriting large swathes of philosophy, and later, psychology and neuroscience. In the first half of the 20th century, technological developments aimed at measuring the electrical activity of a stimulated neuron in the brain yielded a concrete path to explore how organisms perceive their surroundings. Of the five most obvious senses, vision seemed particularly attractive to study since the input was physically always the same: photons. And yet photons could be arranged in wildly complex patterns to signal, in the case of a cat, the difference between a mouse and a shampoo bottle. How did light, once it hit the eye, get transformed into something “recognizable”?

This was a motivating question for a generation of scientists in the Department of Physiology at Johns Hopkins Medical School in the middle of the 20th century. One such scientist was a young faculty member named Stephen Kuffler, who, in 1948, recorded from single cells in the cat retina and found that these cells did not signal absolute levels of light to the brain, but rather transmitted the contrast between light and dark. Small spots of light could activate retinal neurons, whereas flooding the eye with light did not. This finding largely confirmed in a mammal what a fellow soon-to-be Hopkins faculty member (and subject of this series), H. Keffer Hartline, had seen while measuring the eye of the horseshoe crab over a decade earlier. Like Hartline, Kuffler could conclude that the “raw data” from light was passed to the brain as a code that essentially said, “this part is dark and this part is light,” but what happened after the retina was a mystery.

Continue reading

Neuroscience Night

By Aileen Marshall

March 14 through 20 was National Brain Awareness Week. In honor of that, The Rockefeller University’s Science and Media Group sponsored an event called Neuroscience Night, run by the organization KnowScience. The event consisted of several talks by local scientists about their fascinating research on the brain. The topics ranged from the infant brain to the addicted brain.

Brain Awareness Week has been presented every March by the Dana Foundation for twenty years. The foundation is a non-profit that promotes neuroscience research through grants, publications, and education, and is made up of more than 350 neuroscientists, including some Nobel laureates. It publishes the online journal Cerebrum and provides materials for organizations and groups to put on events for Brain Awareness Week. Besides The Rockefeller University, many New York City institutions hosted seminars and exhibits, including Columbia University, Mount Sinai, New York University, and the Greater New York City Chapter of the Society for Neuroscience.

Rockefeller’s Neuroscience Night was organized by KnowScience, a non-profit science advocacy and educational organization founded and headed by Rockefeller’s own Dr. Simona Giunta. It runs events to improve awareness and understanding of science among the public, particularly adults.

The first speaker at Neuroscience Night was Rosemarie Perry, a postdoctoral research scientist from New York University, who spoke about the infant brain. It turns out that babies are a lot smarter than we give them credit for. They learn a lot in their first year, and the infant brain is capable of learning several different languages. Like many animals, humans go through a stage when they need a caregiver to survive. Perry told us how the human infant’s brain is geared toward bonding with its caregiver in order to get what it needs. In rats there is a sensitive period, the first nine days after birth, when bonding is established. In humans, attachment starts in the womb, where the fetus learns the mother’s scent and voice. And this attachment is bidirectional: oxytocin is released during skin-to-skin contact, reinforcing the bond for both caregiver and infant. The caregiver can even regulate the infant’s brain. In rats, the amygdala, which is responsible for fear, kicks in after ten days. Perry’s experiments have shown that the mother’s presence can block the fear response in rat pups.

The next speaker was Bianca Jones-Marlin, a postdoctoral researcher from Columbia University. Her topic was love and the brain. She told us that there is a chemical reaction behind love, whether it is romantic, familial, or platonic. Oxytocin is also released during eye contact with a loved one, and it affects the reward center of the brain. Experiments have shown that oxytocin is released even when one makes eye contact with one’s dog. The hormone also acts in the left hearing center of the brain: Jones-Marlin’s experiments have shown that mother mice will retrieve their pups to the nest when they hear them cry, but a virgin female in the same cage will not retrieve the pup.

Continue reading

Reflections on the Updated Periodic Table

By Paul Jeng

Where does science live? For me these days, it’s in the fifteen open tabs lagging my browser as I switch from email to PubMed. It’s in hot coffee in the morning and red velvet seminar cookies in the afternoon. It’s spelled out in Calibri on slides or floating around inside the heads of people arm-curling a five-pound Chipotle burrito while crossing York Avenue. But back in grade school, for many of us, science lived as outlines on posters on the wall. Nine concentric rings represented the solar system, squiggly lines denoted the borders of countries, and a grid of colored squares equaled a comprehensive catalog of all known elements. These posters were big glossy boxes of truth, inked into permanence by mysterious sources of unbridled knowledge (are school posters peer-reviewed?). As ubiquitous classroom décor, they served as road signs for navigating an educational frame of mind: science this way, English Lit that way.

The king of school posters was, unquestionably, the periodic table. What chemistry classroom or laboratory is complete without one? Few other images can claim a more complete symbolic representation of scholarship: fastidious organization, cryptic nomenclature, and stacks upon stacks of numbers. Its silhouette is unmistakable, a double-tower fortress fringed by a lanthanide-actinide moat, imposing to outsiders yet comforting for those who’ve earned citizenship within its walls. To chemistry-allergic premeds it’s a cold instrument of torture, but to science historians the tabular arrangement is a lovingly-crafted mural of the building blocks of existence. Quietly, it’s one of the most popular posters in the world. You could have a 36×24 printout delivered tomorrow by Amazon for under two dollars, buy a vintage 1960s linen edition shipped from Berlin through Etsy for over a grand, or find anything in between. If chemistry were a subway system, the periodic table would be the ubiquitous MTA map. If laboratory halls were the bedroom walls of teenage girls from 1999, the periodic table would most certainly be N’Sync.

It may be tempting to view the periodic table, essentially the heart of chemistry, as a hallowed monument of science, carved in stone. In reality, the table is no more a finished product today than it was for Mendeleev in 1869. When The Rockefeller University was founded in 1901, there were 84 known elements. When I was born, that number had grown to 109. The chronically outdated periodic tables hanging around us should be regarded with pride, a remarkable testament to the speed of scientific progress and the breadth of human achievement or, alternatively, a massive conspiracy from Big Poster to boost sales revenues.

Continue reading

Zika Virus

By Aileen Marshall

Rash on an arm due to Zika virus. FRED / Wikimedia Commons

What should you know about the Zika virus? It’s been around for over 50 years, but only recently has its spread increased around the world, especially in South America. The Zika virus is spread by mosquitoes, and for most people it causes only a mild infection. However, infection in pregnant women can cause a birth defect called microcephaly, in which the skull and brain don’t fully develop. At this point, there are limited diagnostic tests and no cure, so labs are scrambling to develop them.

The Zika virus was discovered in 1947 in the Zika Forest of Uganda. It was isolated from the blood of a rhesus monkey there as part of a yellow fever monitoring program, and was found in an Aedes africanus mosquito from the same area a year later. The first human infections were identified in 1952 in Uganda and Tanzania. A study in India that year found a significant number of Indians who had antibodies to Zika, an indication that it had been prevalent in that population. There were sporadic outbreaks of Zika over the following years in equatorial areas of Africa and Asia. Then in 2007, an outbreak of what initially appeared to be dengue or chikungunya occurred on the Micronesian island of Yap. It was later confirmed to be Zika, the first outbreak outside of Africa or Asia. By 2013 it had spread to other South Pacific islands, with some patients also experiencing neurological effects, and there were some cases of microcephaly. In March of 2015, health officials in Brazil noted an increase in Zika-like symptoms and rash in the northeast part of the country. By that summer, there was a great increase in the number of children born with microcephaly, especially in that same area. By later that year, there were confirmed cases of Zika infection in other South and Central American countries and the Caribbean. On February 1 of this year, the World Health Organization declared it a public health emergency of international concern.

The Zika virus belongs to the same family, Flaviviridae, as the dengue, yellow fever, and West Nile viruses, which is why antibodies often cross-react in diagnostic tests. It has a single-stranded, positive-sense RNA genome, meaning the genome itself can be read directly by the cell’s protein-making machinery once the virus enters a cell. The strain in this recent outbreak has been sequenced and has been found to be the same strain as in the South Pacific outbreak.

It is transmitted by a couple of mosquito species of the genus Aedes. These tend to be relatively aggressive biters that feed during the day and like to stay indoors. If a mosquito bites someone with an active Zika infection, the insect can then pass the virus on to the next person it bites. Evidence of the virus has been found in blood, semen, saliva, and urine, and there have been some cases of person-to-person transmission through blood and semen. It is not known whether it can be transmitted by saliva or kissing. The mechanism of maternal-to-fetal transmission is also not known. According to Claudia Dos Santos of the Instituto Carlos Chagas/Fiocruz in Brazil, the virus is found in Hofbauer cells, a type of white blood cell found in the placenta. “It’s possible that Zika virus can cross the placenta and infect the brains of fetuses,” says Melody Li, of our own Rice lab.

Continue reading

Twenty-four visits to Stockholm: a concise history of the Rockefeller Nobel Prizes.

Part XVI: David Baltimore, 1975 Prize in Physiology or Medicine.

By Joseph Luna

On June 19, 1946, a captive rhesus monkey in the Mengo district near the town of Entebbe, Uganda, developed unexplained hind-limb paralysis. British and American scientists, part of the local Yellow Fever Research Institute financed in part by The Rockefeller Foundation, soon isolated what they believed to be a virus as the cause. They named it Mengo encephalitis virus, later shortened to just Mengovirus. The virus was quickly isolated in mosquitoes, and found in at least one person, but generally it posed no major risk to human health. Mengovirus was but an additional member of a constellation of RNA viruses known as picornaviruses, of which poliovirus was far and away the star. After a few reports demonstrating that Mengovirus could induce characteristic paralysis in mice as an animal model, interest died down.

A decade later, as mammalian cell culture techniques matured, many viruses were tested for their ability to replicate in a plate of cells instead of a whole animal. And one early and surprising finding was that just the RNA genetic information of Mengovirus was capable of launching an infection if artificially introduced into a cell. Furthermore, whereas normal cellular RNA production occurred almost exclusively in the nucleus, Mengovirus set up shop and made RNA only in the cytoplasm. And the biggest surprise: if cells were treated with the drug Actinomycin D, which prevented normal cellular RNA production from a DNA template, Mengovirus didn’t care, and went on producing copies of its own RNA as if nothing had happened.

For a young MIT graduate student named David Baltimore taking a course at Cold Spring Harbor Laboratory, this became an enthralling problem. So enthralling in fact that Baltimore left MIT to join the lab of the lecturer that day, Richard Franklin, at The Rockefeller University. There, Baltimore’s graduate school project was to develop an in vitro system to characterize the nature of Mengovirus RNA synthesis from an RNA template. He did so by taking Mengovirus-infected cells, grinding them up, and discarding the nuclei (where cellular RNA synthesis occurs from DNA). To the remaining cytoplasmic fraction, where there was no DNA and where Mengovirus could replicate, he added radioactive RNA nucleotides (A, C, G, and U) one-by-one, in combination, or leaving one out. The idea was that if there was an RNA-dependent RNA polymerase (a “replicase”), it should be able to link radioactive nucleotides together to make an RNA copy that would fall out of solution when placed in acid. By taking a Geiger counter and measuring if the radioactivity went into this “acid insoluble” fraction, Baltimore could conclude that a polymerase had acted on existing Mengovirus RNA to make an RNA copy composed of whatever radioactive nucleotides he added.

Continue reading

Martin Shkreli: Disease or Symptom?

By Sarala Kal

Hillary Clinton said “he was like the worst bad date you can imagine,” and many others call him the villain of the pharmaceutical industry. Thirty-two-year-old Martin Shkreli is a Brooklyn native whose placement in a high school program for gifted youth serendipitously landed him an internship on Wall Street at the age of 17. Few would expect the child of two immigrant parents who worked as janitors to have a career that escalated at such a rapid pace. Shkreli’s intellect and intuition led him to co-found the hedge fund MSMB Capital Management, to co-found and serve as CEO of the biotechnology company Retrophin, and to co-found and serve as CEO of Turing Pharmaceuticals. However, what has gained immense attention from the public is not Shkreli’s professional pedigree, but rather his manipulation of the system. Unfazed by negative attention, he has repeatedly been seen trolling the world on Twitter, buying overpriced albums, and raising the price of a drug on the W.H.O. list of Essential Medicines by more than 5,000%. It is simple to pinpoint his actions and name him the villain in the ongoing battle over rising drug prices and the affordability of healthcare. But is he really the root of the problem? Or is he a mere symptom of the disease?

In August of 2015, Daraprim was acquired by Turing Pharmaceuticals. The 62-year-old drug, known generically as pyrimethamine, is the standard of care for treating the life-threatening parasitic infection toxoplasmosis. Toxoplasmosis can be fatal for babies born to women who become infected during pregnancy. Additionally, it ravages the compromised immune systems of patients with HIV, and has been identified by the Centers for Disease Control and Prevention as one of five neglected parasitic diseases for which public health action is necessary. A drug once priced at $13.50 per tablet was raised to $750 overnight after the acquisition by Turing. CEO Martin Shkreli justified this price hike by saying that the drug was so rarely used that the impact on the health system would be minuscule, and that Turing would use the money to develop better treatments with fewer side effects. The company promised to offer reductions of up to 50% to hospitals, introduce smaller bottles of 30 tablets, lower overall costs, and offer free sample packages. Those promises, however, were broken almost immediately. Premiums for patients increased five-fold, some Medicare and Medicaid patients were not even given the option of receiving the drug, and doctors were forced to seek out alternative treatments. The high price of the drug has also given many companies an incentive to produce a generic equivalent as quickly as possible. After a tremendous amount of backlash, Shkreli continued to respond to media attention with a smug look and snarky comments, reiterating his point that the only thing that mattered to him was his company’s profit.

The Daraprim case has as much to do with the Food and Drug Administration as with Shkreli. The F.D.A. certification process for generic drugs is grueling enough that whoever owns Daraprim has a virtual monopoly in America, and according to an F.D.A. official, Congress has not really vested any authority in the F.D.A. over pricing. One of the strangest things about the anti-Shkreli argument is that it asks us to be shocked that a medical executive is motivated by profit. Through his actions, Shkreli proves a crucial point about money and medicine: by showing what is legal, he has helped us think about what we might want to change, and what we might need to learn to live with. He has opened our eyes to what we need to focus on to help change this country and try to make medicine affordable for everyone. Why is Shkreli able to do what he did? That is the real disease, while Shkreli himself is only the symptom.

Twenty-four visits to Stockholm: a concise history of the Rockefeller Nobel Prizes

Part XV: Christian de Duve, 1974 Prize in Physiology or Medicine.

By Joseph Luna

“Centrifuge rotor designed by Henri Beaufay, constructed at the Rockefeller University instrument shop by Nils Jernberg for Christian de Duve, circa 1965. Rotor shown in open (left) and closed positions (right). From the Rockefeller University Merrill Chase historic scientific instrument collection, accession number 232.”

In his two-volume book A Guided Tour of the Living Cell, Christian de Duve vividly describes a most hostile setting, where “everywhere we look are scenes of destruction: maimed molecules of various kinds, shapeless debris, half-recognizable pieces of bacteria and viruses, fragments of mitochondria, membrane whorls, damaged ribosomes, all in the process of dissolving before our very eyes.” Such is the introduction to an organelle called the lysosome that only de Duve as its discoverer could give.

Where mitochondria produce energy and ribosomes produce protein, lysosomes function as a sort of digestive system for a cell: they are equal parts stomach, trash compactor, and recycling center. As bags filled with destructive enzymes, lysosomes perform the critical and often unrewarding job of waste disposal. But the story of how lysosomes were discovered was anything but unrewarding. Like any good scientific caper, it starts with a chance observation made under unlikely circumstances. And for the bench scientist, these circumstances were of the most frustrating variety: they all centered on a positive control that never worked.

In the early 1950s, de Duve was a new faculty member at the Catholic University of Louvain in his native Belgium, and had set up his lab to tackle the mechanism of insulin action on the liver. With the exception of glycolysis and the tricarboxylic acid (citric acid) cycle, metabolism was still largely uncharted territory, and one of the key questions centered on how liver cells responded to insulin to lower blood sugar. Biochemists had a hint that the first thing an insulin-treated liver cell did to incoming glucose was to add a phosphate group, but this fragile phosphate group could be removed by a newly described enzyme, later termed glucose-6-phosphatase, which generally made studying insulin action in ground-up liver tissue difficult. De Duve set out to purify and characterize this new enzyme.

After trying all the usual biochemical techniques to separate glucose-6-phosphatase from a non-specific acid phosphatase also found in the liver, de Duve hit an impasse: he couldn’t get glucose-6-phosphatase back into solution. Standard practice was to lower the pH to get an enzyme to fall out of solution, discard all the soluble material, and then try to get the enzyme back into solution by raising the pH. It was great on paper, except that it never worked. Luckily, de Duve was prepared.

Prior to taking up his post in Belgium, de Duve paid a visit to Albert Claude, a fellow Belgian and pioneering cell biologist then at the Rockefeller Institute. Claude had shown de Duve that proteins bound to larger structures tended to clump and stay clumped together at low pH. Thus, the most promising way to isolate glucose-6-phosphatase, if it was indeed bound to a larger structure, was to use the centrifuge of cell biologists instead of the acids used by biochemists.

Continue reading

Wasting Our Food

By Guadalupe Astorga

Gene Alexander/U.S. Department of Agriculture, Masatoshi/CC; Brooks Farms Rocks/CC, Hazelisles/CC

More than 40% of the food in the United States ends up in the trash can. This is huge, and it includes seafood, meat, cereals, fruits and vegetables, and dairy products. Surprisingly, the Food and Agriculture Organization (FAO) reports that for all categories, food waste is not primarily the result of a deficient food supply chain, but rather occurs at home (see graph). In industrialized countries, the food wasted by consumers amounts to roughly as much as the total net food production of sub-Saharan Africa. This reflects irresponsible behavior, a product of Western consumption culture. The situation is especially concerning in the case of marine resources, where half of the fish and seafood harvested is never eaten. If we consider the whole supply chain, North America wastes half of its fishery production. In a world with limited and over-exploited marine resources, this is unacceptable.

FAO. 2011. Global food losses and food waste – Extent, causes and prevention. Rome

But we consumers do not throw away only marine resources; we also waste cereals, fruits and vegetables, meat, and dairy products (see graph). A similar situation is observed in Europe, where food wastage can reach up to 30%. It is interesting to compare this scenario with developing countries, where food wastage by consumers is negligible. Does this mean that in Western countries with higher income levels, people can afford to throw food away? Meanwhile, almost 800 million people suffer from severe hunger and malnutrition.

What can we do?

First of all, we can educate ourselves to adopt more responsible food consumption habits.

A few weeks ago, members of the French parliament (MPs) unanimously voted to propose a law that will force supermarkets to give unsold food to charities, with a fine of up to $102,000 if they do not comply. The initiative was driven by Arash Derambarsh, a municipal councilor who persuaded the French MPs to adopt the measure after his petition on Change.org obtained more than 200,000 signatures and celebrity support. He is planning to expand this initiative to the rest of Europe in the next few months, and the law has already ignited debate about implementing similar measures in several other countries.

Several worldwide non-profit associations collect unsold food from supermarkets for free distribution to people with low incomes. Examples of these associations in New York are City Harvest, Hunger Solutions New York, Food Bank for New York City, and the New York City Coalition Against Hunger.

An alternative movement of people known as freegans also contributes to this anti-waste culture by rummaging through the garbage of retailers, residences, offices, and other facilities for useful goods. The goods recovered by freegans are safe, usable, and clean, reflecting how retailers dispose of a high volume of products in perfect condition.

Let’s now consider the environmental impact of food loss and waste. The worldwide carbon footprint of food that is produced but never eaten ranks third, behind only those of the USA and China. Thirty percent of available agricultural land is used to grow or farm food that will never be eaten.

For a growing population like ours, FAO estimates suggest that food production must increase by at least 50% over the next 30 years to satisfy the world’s dietary requirements. Yet if we reduced food waste by just a quarter, the whole world population could meet its dietary needs.

Twenty-four visits to Stockholm: a concise history of the Rockefeller Nobel Prizes.

Part XIV: George E. Palade, 1974 Prize in Physiology or Medicine.

By Joseph Luna

Nestled in the third sub-basement of Smith Hall, around 1953, an electron microscope (EM) sits briefly idle. The machine, an RCA model EMU-2A, resembles a spare part from some future space station: a long vertical steel tube adorned with studs and knobs, with a viewfinder at the base. To the casual viewer, there’s little to indicate the purpose of this strange contraption. But to its operator, who has just imaged the latest in a series of specimens ranging from the pancreas to blood cells to the intestine, the purpose of this machine is strikingly clear, and is measured in Angstroms. The man sitting at the controls is George Palade, and he has just discovered “a small particulate component of the cytoplasm,” as he tentatively named it. In a few years, this particle would be renamed the “ribosome” and would soon be recognized as the essential protein-making machine in all of life.

Of course, such a romantic view of discovery relies squarely on hindsight, for it is almost impossible to pinpoint where one is during a scientific revolution in real time. This was certainly true at the beginning of modern cell biology, as the specimen preparation methods used for EM carried with them the specter of artifact. In essence, how did George Palade know that these particles weren’t a farce? The preceding seven years had done much to prepare Palade to address this question. Alongside Albert Claude, Keith Porter and others, Palade placed the nascent field of cell biology on sound methodological footing that enabled the discovery of the ribosome, and so much more.

In 1946, barely a year after the first EM picture of a cell, Palade joined the Rockefeller Institute as a postdoc, at Claude’s invitation. When Palade got his start, Claude’s group was concerned with trying to connect the enzymatic activities that biochemists could measure with a physical location in the cell that could be accounted for by fractionation, or by using new EM methods to see what the ultrastructure looked like. Claude and his co-workers were able to break cells apart into roughly four fractions that could be subjected to biochemical tests: nuclei; a large fraction that appeared to contain mitochondria; microsomes; and free cytoplasm. The large fraction caught their attention precisely because there was a problem. In intact cells, mitochondria could be stained with a dye called Janus Green, but the dye never worked in the large fraction, despite EM results that showed intact, though clumped, mitochondria. Moreover, biochemists had found that the large fraction contained many of the enzymes known to be involved in energy production, but this fraction wasn’t pure enough to make firm conclusions. Palade helped to clarify this issue by devising a better way to isolate pure mitochondria, using dissolved sucrose (table sugar) as an isotonic buffer instead of the saline solutions used by Claude. As a result, the large fraction retained Janus Green staining, and energy-making enzymes were much more enriched. It was an instructive experience because it showed that cells could be taken apart rationally, a bit like taking apart a radio with a screwdriver instead of a sledgehammer. Intact, functional units like mitochondria could be separated and studied apart from other cell components. For these early cell biologists, it was a compelling justification to keep going.

This much was evident to Institute president Herbert Gasser. With Claude’s move back to Belgium in 1949, the retirement of lab head James Murphy in 1950, and other departures, the first Rockefeller cell biology group shrank to just Porter and Palade. Gasser made the rare move of making them joint heads of their own cytology laboratory, and outfitted Smith Hall with an RCA microscope.

Porter and Palade next made a concerted effort to describe, in intact cells and tissues, the ultrastructure of the mitochondria and of a subcellular structure found in the microsomal fraction that Porter named the endoplasmic reticulum (ER). While Porter, working with Joseph Blum, devised a new microtome to make thin slices of tissue for EM, Palade refined fixation and staining conditions (colloquially called “Palade’s pickle procedure”) to take EM to new heights. Using these tools, Palade went on to describe the inner structure of the mitochondrion, observing inner folds and chambers he called cristae. The Palade model of the mitochondrion was illuminating for biochemists because it provided structural constraints on possible mechanisms explaining how mitochondria made energy. In other words, what a mitochondrion looked like was essential to its function.

This line of thinking was critical to deciphering what role, if any, those particles Palade observed in 1953 might play. He noticed that they were typically observed stuck to the ER, were enriched in the microsomal fraction, and had high levels of RNA. He also noticed that secretory cells, such as the digestive-enzyme-producing exocrine cells of the pancreas, were packed with ER and ribosomes. In short order a hypothesis emerged, from Palade and others, that the ER and ribosomes were involved in the synthesis and ordered transport of proteins in the cell. Working with Philip Siekevitz, Palade used radioactive amino acids to biochemically trace protein synthesis and transport in these cells, following the radioactivity in cell fractions and using EM to visualize the structure in each fraction, all in a seven-part series of papers published between 1958 and 1962. This triple threat of cell fractionation, biochemistry, and EM became the model for the entire field. EMs the world over have since rarely been idle for long.

Digging Into That Juicy and Tasty Steak…

Some Valuable Facts about Meat    

By Guadalupe Astorga

In October 2015, the World Health Organization (WHO) declared red meat and its processed derivatives a threat to human health, citing their carcinogenic risk. Twenty-two experts from ten countries in the International Agency for Research on Cancer (IARC) concluded that processed meat is “carcinogenic to humans” (Group 1, along with tobacco smoking and asbestos), while red meat is “probably carcinogenic to humans” (Group 2A). This classification is based on the strength of the scientific evidence rather than on the level of risk. Daily consumption of 50 g (1.8 oz) of processed meat increases the risk of colorectal cancer by 18% (as a reference, the meat in a hamburger can easily surpass 200 g, or 7 oz). Find more details in the WHO Q&A about this topic here.

JeffreyW / CC BY

Now, let’s get into more digestible terms:

Processed meat is meat that has been transformed by the food industry through salting, curing, fermenting, smoking, or other processes used to enhance flavor or improve preservation. This includes hot dogs, ham, sausages, corned beef, beef jerky, canned meat and meat-based preparations and sauces, and even the meat in your beloved hamburger.

Now, what is the reason for the risk in unprocessed red meat? In this case, it is the way you cook it that can be problematic. High-temperature cooking, as in a barbecue or in a pan, produces carcinogenic chemicals including polycyclic aromatic hydrocarbons and heterocyclic aromatic amines.

Is raw meat safer? If you really want to eat raw meat, you must consider that eating it carries a separate risk related to microbial infections. Cooking kills most bacteria in steak, although some of them are resistant.

In the end, is there a real health risk in eating red meat? As with alcohol, the risk depends on the dose. A good alternative is to steam your meat or cook it in the oven. The Food and Agriculture Organization (FAO) offers a recipe for a low-cost sausage variation made from vegetables and fresh, unprocessed meat that you can easily prepare to enjoy a delicious homemade natural product. Learn more about processed meat products and find a homemade alternative at the end of this article.

Knowing these facts about the potential effects on human health is all well and good, but what about the real risks arising from the production process itself?

Unlike in the European Union, in the United States there is still significant use of antibiotics in livestock farming. Because these drugs are also used in humans, their use in animals promotes antibiotic-resistant bacteria that can be passed on to us through meat, and this resistance can drive up health care costs. In 2009, the total cost of antibiotic-resistant infections in the United States was estimated to be between $17 and $26 billion per year. Read more in this governmental health bill.

The environmental consequences of meat production can be even greater than its health risks.

We normally think about global warming as being produced directly by human activity through carbon emissions. Surprisingly, industrial livestock production, including poultry, is one of the biggest sources of methane (CH4, released as a digestion byproduct) and human-related nitrous oxide (N2O), which has 296 times the global warming potential of carbon dioxide (CO2). Find more information about the role of livestock in climate change in this article from FAO. If you want to read a detailed study of livestock and climate change from FAO go to this link.

Continue reading

Twenty-four visits to Stockholm: a concise history of the Rockefeller Nobel Prizes.

Part XIII: Albert Claude, 1974 Prize in Physiology or Medicine.

By Joseph Luna

An International Equipment Corporation centrifuge, Model B size 1, circa the mid-1930s, of the type used by Claude for cell fractionation. RU historic instrument collection, accession number 342. Photograph by the author.

On December 7, 1972, the moon-bound crew of the final Apollo mission swiveled their camera toward earth, some 28,000 miles distant, and took a picture. Three weeks later the resulting photograph revealed a delicate blue orb suspended in space, painted with swirling clouds above the African continent. When released to the public in time for the holiday newspapers, this picture became instantly famous, serving as a visual capstone, at once majestic and intimate, for humanity’s sojourn beyond our planet. It is perhaps for that reason that this picture was dubbed “The Blue Marble,” and it is among the most iconic scientific photographs known.

I wonder what our next three prize-winners thought of the Blue Marble photo that winter. Whereas astronauts helped make the world small with spectacular portraits of earth, by the 1970s our next three Scandinavian visitors, Albert Claude, George Palade, and Christian de Duve, had been using images for over 25 years to show that microscopic cells were organized worlds unto themselves. Starting with the first electron microscope image of an intact cell in 1945, these three (and many others) helped launch the modern discipline of cell biology. For a comprehensive history of cell biology, particularly at The Rockefeller University, I refer the reader to “Entering an Unseen World” by our very own Carol Moberg. For the next three installments of this series, we’ll profile how each of these three men contributed to founding a field that is a distinctly Rockefeller creation. And we’ll begin with Albert Claude.

Claude’s early life was difficult, and a bit momentous. After losing his mother to breast cancer at the age of seven, Claude moved around with his family before dropping out of school to care for an ailing uncle. He never finished high school. He worked in a steel mill during World War I, and volunteered as a teenager to aid the British Intelligence Service. By the war’s end, Claude was a decorated military veteran, and his first lucky break came when Belgian education authorities made it possible for veterans to pursue higher education without a diploma. This allowed Claude to enter medical school in 1922; he graduated six years later.

It was then that Claude turned his attention to the cancer problem. At the time, The Rockefeller Institute for Medical Research (RIMR) was an epicenter for the debate on the origin of cancer. On one side was Peyton Rous, discoverer of the first transmissible sarcoma in chickens that bears his name, as the chief proponent for a viral origin of cancer. On the other side was James Murphy, who in short believed that a chemical or environmental insult was responsible for inducing cancer in otherwise normal cells. What exactly the Rous sarcoma agent was could only be speculated, since few had tried to purify it. Claude, freshly read up on the subject, wrote to then RIMR president Simon Flexner and proposed isolating the sarcoma agent. A year later Claude found himself in Murphy’s laboratory in New York, charged to do just that.

Continue reading

Twenty-four visits to Stockholm: a concise history of the Rockefeller Nobel Prizes

Part XII: Stanford Moore and William Stein, 1972 Prize in Chemistry

By Joseph Luna

Original rotating fraction collector used by Moore and Stein for analysis of RNase. RU historic instrument collection, accession number 105.

“RNase-free.” To most any molecular biologist working with RNA, these two seemingly unrelated words are as sweet sounding together as “passion-fruit.” This is because ribonucleases, those small hardy enzymes that chew up RNA, can be found everywhere, are more invasive than the tiniest bacteria, and can utterly ruin an experiment. Seeing an “RNase-free” label on one’s reagents is often a mark of trust that experimental results are on firm footing. But the story of RNase is a fascinating one, particularly at Rockefeller, for it is a story intricately wrapped in two names as tightly bound and harmonious together as “RNase-free”: those of Stanford Moore and William Stein, or “Moore-n’-Stein.”

What can be considered one of the greatest life-long collaborations in biochemistry began simply, when Moore and Stein met as post-docs in the laboratory of Max Bergmann in 1939. Bergmann had fled Nazi Germany five years earlier and taken up a position at the Rockefeller Institute to continue his research on protein chemistry. A former long-time collaborator of Emil Fischer (who coined the term “peptide”), Bergmann and his lab were focused on finding ways to isolate and analyze proteins. By the mid-1930s, all twenty of the primary amino acid building blocks had been discovered, but it was unclear how they were put together to make a functional protein. What’s more, each protein that could be isolated appeared to have a different and unique composition of amino acids. Before one could get a grasp on protein structure, what was needed was a reliable way to determine how much of each amino acid a particular protein contained. This was the problem Moore and Stein first tackled.

They started by mixing together eighteen amino acids at known concentrations and asking if they could invent a method that could both separate the mixture and individually measure the concentration of each amino acid in it. It was a daunting task, a bit like trying to uncook an egg. An early form of chromatography using starch columns eventually solved the first problem. Moore and Stein discovered that each of the eighteen amino acids passed through these columns at a unique speed, and so by adding the mixture at one end of the column and collecting fractions at the other, the mixture could be separated in a defined way: phenylalanine came out first, then leucine, then isoleucine, and so on. And because standing around collecting fractions drop by drop was simultaneously laborious and boring, they invented a mechanical lab technician to do the work precisely: the automated fraction collector. The second problem, measuring the concentration of amino acids in the fractions, was solved by turning to a well-known chemical process, the ninhydrin reaction. Chemists had discovered that in the presence of ninhydrin, amino acid solutions turned a bluish-purple, with each amino acid giving off a unique, if unstable, hue. Moore and Stein figured out ways to stabilize the reaction such that the amount of blue could help determine both the identity of the amino acid and its concentration.

Continue reading

Twenty-four visits to Stockholm: a concise history of the Rockefeller Nobel Prizes

Part XI: Gerald M. Edelman, 1972 Prize in Physiology or Medicine

By Joseph Luna

To be immune is to be exempt. In the late 19th century, a physician named Paul Ehrlich provided a death-defying example of such an exemption by giving mice sub-lethal quantities of the deadly toxin ricin. Over time, these mice developed a resistance to ricin such that they survived when exposed to amounts that would kill a normal mouse. And yet this immunity was specific: the super mice remained susceptible to other toxins. What made immunity so specific, and how did it come about? With this experiment, Ehrlich joined a chorus of scientists before him, including Edward Jenner and Louis Pasteur, in addressing immunity. It was upon these questions that the science of immunology was founded.

To explain how this might work in his ricin-proof mice, Ehrlich and others reasoned that the exposed mice began to produce something that could counter the effects of the toxin: an anti-toxin. When serum from an animal exposed to toxins or infectious diseases was shown to confer immunity when transferred to a recipient, the finding blossomed into the concept of a curative anti-serum. It was here that Ehrlich went further. Attempting to summarize the common thread that ran across exquisitely specific immunities against toxins, bacteria, parasites, or anything threatening, Ehrlich coined the term “antibody.” It was a specific antibody directed against a specific, usually foreign, substance, he proposed, that was the root cause of immunity.

Over the next five decades, the study of antibodies lay at the heart of immunology as researchers asked how specific antibody reactions could be, how antibodies came about, how they could be inherited and passed along, and what exactly they were made of. Answering this last point briefly became a focus at Rockefeller in the 1930s, where chemical methods were first used to determine that antibodies were made of protein. But beyond this, key questions remained unsettled: what accounted for antibody diversity? Were specific antibodies structurally distinct because they adopted different conformations or because they had different sequences? In short: what does an antibody look like?

Sometime in 1955, a young captain in the U.S. Army named Gerald Edelman asked himself this question. Edelman was a medical doctor stationed in Paris, and when not attending to fellow soldiers at the hospital, he would read medical and science textbooks for fun. Picking up an immunology textbook one day, he read page upon page about antigens, the foreign targets of antibodies, but almost nothing about antibodies themselves. After an extensive literature search on antibodies, Edelman reached an unsatisfying end. He decided to do something unusual: he applied to graduate school with the goal of studying antibody structure. Even more unusually, he chose not to go to a Harvard- or Johns Hopkins-level institution, but instead entered a newly created graduate program at The Rockefeller Institute for Medical Research in 1957.

Continue reading

Twenty-four visits to Stockholm: a concise history of the Rockefeller Nobel Prizes

Part X: H. Keffer Hartline, 1967 Prize in Physiology or Medicine

By Joseph Luna

While strolling along a beach one day in the summer of 1926, a young physiologist named Haldan Keffer Hartline came across a living fossil. Before him was a horseshoe crab, Limulus polyphemus, with its domed carapace, spiked rudder of a tail, and pedipalp legs. Barely changed after more than 450 million years of evolution, this mysterious ancient mariner must have been a startling and alien sight. We don’t know what Hartline thought of the creature’s primitive book gills, its belly filled with shellfish, or its eerie blue blood. But something did enthrall him: the crab’s large compound eyes.

Though he was a medical student, Hartline had no interest in practicing medicine; he was fascinated by research, particularly the physiology of vision. How does seeing work? This question first riveted Hartline as an undergraduate, when he worked on the light-sensing abilities of pill bugs. Moving on to medical school at Johns Hopkins, Hartline attempted to study vision in frogs by using neurophysiological instruments to record activity from their optic nerves, but it proved more difficult and complex than he imagined. What he needed was a simpler model organism, if there was one. He made his way to the Marine Biological Laboratory on the southern coast of Massachusetts, frustrated by past failures but on a mission to find the right organism to study.


It was a conceptual leap to propose that studying vision in a weird creature like Limulus would yield insight into how animals, including humans, see generally, but the idea wasn’t out of place among biologists in the 1920s. By decade’s end, the Nobel Prize-winning Danish physiologist August Krogh had laid out the case for studying diverse organisms to gain general biological insight, predicting for the field in 1929: “for such a large number of problems there will be some animal of choice or a few such animals on which it can be most conveniently studied.”

The year before, Hartline had published a descriptive study of arthropod compound eyes, in which he succeeded in recording nerve impulses after light stimulation in Limulus, along with grasshoppers and two species of butterfly. This comparative work revealed that light stimulation induced characteristic minute electrical spikes that could be measured across arthropods. And whereas the grasshopper and butterfly were difficult to handle and gave complex recordings, those of Limulus were simple waves that could be studied for extended periods when the animal was bathed in seawater. But what really set Limulus apart was the size of its compound eye, which opened the possibility of studying its single facets.

As the name suggests, a compound eye can be thought of as a closely spaced array of simpler eyes. Each “eye,” called an ommatidium, individually acts as a receptor for light directly above it and is composed of a cornea that directs light onto a bundle of photoreceptor cells, which in turn connect to the optic nerve. In small insect eyes, individual ommatidia number in the thousands and can really only be seen under a microscope; the same is true for the analogous rods and cones in vertebrate retinas. The ommatidia of Limulus, by comparison, are fewer in number but gargantuan: each is about 1 mm across, making them among the largest light receptors in the animal kingdom. Based on their large size, Hartline reasoned that it might be possible to take neurophysiological measurements from single optic nerve fibers in the horseshoe crab. Working with Clarence Graham in the summer of 1931, Hartline succeeded in doing just that. Graham and Hartline dissected single ommatidia and devised methods to illuminate their photoreceptive cells while recording from the optic nerve. In went light they could control; out went neural signals to the brain that they could measure. These were some of the first measurements of the most fundamental unit of vision.
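As a rough illustration of this “light in, signals out” setup, the following Python sketch models a single ommatidium as a function from light intensity to firing rate. The logarithmic intensity-to-rate relationship, the threshold, and every constant are assumptions made purely for illustration; they are not Hartline and Graham’s measurements.

```python
import math

# Toy single-ommatidium model: controlled light intensity in, spike rate out.
# The functional form and all constants below are illustrative assumptions.

def firing_rate(intensity, threshold=1.0, gain=20.0, max_rate=120.0):
    """Return a hypothetical spike rate (spikes/s) for a given light intensity
    (arbitrary units). Below threshold the receptor stays silent; above it,
    the rate grows with the logarithm of intensity and saturates at max_rate."""
    if intensity <= threshold:
        return 0.0
    return min(max_rate, gain * math.log10(intensity / threshold))

# Sweep the light intensity over several orders of magnitude, as one might
# in a controlled-illumination experiment.
for intensity in (0.5, 1, 10, 100, 1000, 10000):
    print(f"intensity {intensity:>7}: {firing_rate(intensity):6.1f} spikes/s")
```

The sweep mimics, in spirit, the experiments described above: a single, well-defined input channel whose output can be read one optic nerve fiber at a time.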

Continue reading

Alfred Nobel and the Prizes

By Susan Russo

Alfred Nobel was born in Stockholm, Sweden, in 1833. He is best remembered for the invention of dynamite and for leaving the major part of his fortune for the establishment of prizes for those whose discoveries conferred the “greatest benefit on mankind.” Nobel’s father was an engineer, manufacturer, and inventor; one of his inventions was modern plywood. The family factories were in St. Petersburg, Russia, where Alfred was educated by tutors, showing marked interest in chemistry and languages. From 1841 to 1842, before the family left Sweden, Alfred attended the Jacobs Apologistic [sic] School in Stockholm. Alfred’s studies in chemistry continued in Russia, then Paris, then during four years in the United States. His interests also included explosives, taught to him by his father. His 355 inventions included a gas meter in 1857, a detonator in 1863, and a blasting cap in 1865. Nobel’s interest in physiological research also led him to start laboratories in France and Italy for experiments in blood transfusion, and to donate to the Pavlov laboratory in Russia.

Nobel died in 1896, but when his brother Ludvig died in 1888, one newspaper mistakenly published Alfred’s obituary, characterizing him as the “merchant of death.” Before his own death, Alfred Nobel wrote a will that set aside most of his fortune to create the Nobel Prizes. The will was contested by members of his family, so the prizes were not legally authorized until 1897. In 1900, the Nobel Foundation was established by order of Sweden’s King Oscar II.

Because of these delays, the first Nobel Prizes were not awarded until 1901: the prize in physics went to Wilhelm Roentgen, and prizes were also awarded in the will’s other stated fields of chemistry, physiology or medicine, literature, and peace.

The Nobel Foundation selects professionals in these fields from around the world (including at least one professor at Rockefeller) to nominate individuals for the prizes. The Royal Swedish Academy of Sciences awards the prizes for physics and chemistry; the Karolinska Institute awards the prize for physiology or medicine; and the Swedish Academy in Stockholm awards the prize for literature. The Peace Prize is awarded by a committee appointed by the Norwegian Storting, the legislature of Norway. In 1968, a Prize in Economic Sciences in Memory of Alfred Nobel was established by Sweden’s central bank, Sveriges Riksbank.

The gold Nobel Prize medals are minted in Sweden, with a profile of Alfred Nobel on one side. The prizes presented in Sweden carry a Latin verse from Virgil, translated as “inventions enhance life which is beautified through art.” The original 1901 prize money was 150,782 Swedish kronor, which, as of this writing, is equivalent to $19,948. Nobel Prizes are not awarded every year: a prize may be withheld when no discovery is deemed significant enough, and frequently none are given during times of war.

Continue reading