Zika Virus

By Aileen Marshall


Rash on an arm due to Zika virus. FRED / Wikimedia Commons

What should you know about the Zika virus? It’s been around for over 50 years, but only recently has its spread increased around the world, especially in South America. The Zika virus is spread by mosquitoes, and for most people it causes only a mild infection. However, an infection in pregnant women can cause a birth defect called microcephaly, in which the skull and brain don’t fully develop. At this point, diagnostic tests are limited and there is no cure, so labs are scrambling to develop both.

The Zika virus was discovered in 1947 in the Zika Forest of Uganda. It was isolated from the blood of a rhesus monkey there, as part of a yellow fever monitoring program. A year later, it was found in an Aedes africanus mosquito from the same area. The first human infections were identified in 1952, in Uganda and Tanzania. A study in India that year found a significant number of Indians with antibodies to Zika, an indication that it had been prevalent in that population. There were sporadic outbreaks of Zika over the following years in equatorial areas of Africa and Asia. Then in 2007, an outbreak of what initially appeared to be dengue or chikungunya occurred on the island of Yap in the Federated States of Micronesia. It was later confirmed to be Zika, the first outbreak outside of Africa or Asia. By 2013 it had spread to other South Pacific islands, including French Polynesia, with some patients showing neurological effects, and there were some cases of microcephaly. In March of 2015, health officials in Brazil noted an increase in Zika-like symptoms and rash in the northeast part of the country. By that summer, there was a great increase in the number of children born with microcephaly, especially in that same area. By later that year, there were confirmed cases of Zika infections in other South and Central American countries and the Caribbean. On February 1 of this year, the World Health Organization declared it a public health emergency of international concern.

The Zika virus belongs to the same family, Flaviviridae, as the dengue, yellow fever, and West Nile viruses, which is why antibodies often cross-react in diagnostic tests. It has a single-stranded, positive-sense RNA genome, meaning the genome itself can be translated directly by the host cell’s machinery. The strain in this recent outbreak has been sequenced and has been found to be the same strain as the one from the South Pacific outbreak.

It is transmitted by a few species of mosquitoes in the genus Aedes. These tend to be relatively aggressive biters that feed during the day and like to stay indoors. If a mosquito bites someone with an active Zika infection, the insect can then pass the virus on to the next person it bites. Evidence of the virus has been found in blood, semen, saliva, and urine, and there have been some cases of person-to-person transmission through blood and semen. It is not known whether it can be transmitted by saliva, or kissing. The mechanism of maternal-to-fetal transmission is also not known. According to Claudia Dos Santos of the Instituto Carlos Chagas/Fiocruz in Brazil, the virus is found in Hofbauer cells, a type of white blood cell found in the placenta. “It’s possible that Zika virus can cross the placenta and infect the brains of fetuses,” says Melody Li, of our own Rice lab.


Twenty-four visits to Stockholm: a concise history of the Rockefeller Nobel Prizes.

Part XVI: David Baltimore, 1975 Prize in Physiology or Medicine.

By Joseph Luna

On June 19, 1946, a captive rhesus monkey in the Mengo district near the town of Entebbe, Uganda developed unexplained hind-limb paralysis. British and American scientists at the local Yellow Fever Research Institute, financed in part by The Rockefeller Foundation, soon isolated what they believed to be a virus as the cause. They named it Mengo Encephalitis Virus, later shortened to just Mengovirus. The virus was quickly isolated in mosquitoes, and found in at least one person, but generally it posed no major risk to human health. Mengovirus was but an additional member of a constellation of RNA viruses known as picornaviruses, of which poliovirus was far and away the star. After a few reports demonstrating that Mengovirus could induce characteristic paralysis in mice as an animal model, interest died down.

A decade later, as mammalian cell culture techniques matured, many viruses were tested for their ability to replicate in a plate of cells instead of a whole animal. And one early and surprising finding was that just the RNA genetic information of Mengovirus was capable of launching an infection if artificially introduced into a cell. Furthermore, whereas normal cellular RNA production occurred almost exclusively in the nucleus, Mengovirus set up shop and made RNA only in the cytoplasm. And the biggest surprise: if cells were treated with the drug Actinomycin D, which prevented normal cellular RNA production from a DNA template, Mengovirus didn’t care, and went on producing copies of its own RNA as if nothing had happened.

For a young MIT graduate student named David Baltimore taking a course at Cold Spring Harbor Laboratory, this became an enthralling problem. So enthralling, in fact, that Baltimore left MIT to join the lab of the lecturer that day, Richard Franklin, at The Rockefeller University. There, Baltimore’s graduate school project was to develop an in vitro system to characterize the nature of Mengovirus RNA synthesis from an RNA template. He did so by taking Mengovirus-infected cells, grinding them up, and discarding the nuclei (where cellular RNA synthesis occurs from DNA). To the remaining cytoplasmic fraction, where there was no DNA and where Mengovirus could replicate, he added radioactive RNA nucleotides (A, C, G, and U) one by one, in combination, or with one left out. The idea was that if there was an RNA-dependent RNA polymerase (a “replicase”), it should be able to link radioactive nucleotides together to make an RNA copy that would fall out of solution when placed in acid. By taking a Geiger counter and measuring whether the radioactivity went into this “acid-insoluble” fraction, Baltimore could conclude that a polymerase had acted on existing Mengovirus RNA to make an RNA copy composed of whatever radioactive nucleotides he had added.


Martin Shkreli: Disease or Symptom?

By Sarala Kal

Hillary Clinton said “he was like the worst bad date you can imagine,” and many others call him the villain of the pharmaceutical industry. Thirty-two-year-old Martin Shkreli is a Brooklyn native whose placement in a high school program for gifted youth serendipitously landed him an internship on Wall Street at the age of 17. Few would expect the child of two immigrant parents, who worked as janitors, to have a career that escalated at such a rapid pace. Shkreli’s intellect and intuition led him to co-found the hedge fund MSMB Capital Management, co-found and serve as CEO of the biotechnology company Retrophin, and co-found and serve as CEO of Turing Pharmaceuticals. However, what has gained immense attention from the public is not Shkreli’s professional pedigree, but rather his manipulation of the system. Unfazed by negative attention, he has repeatedly been seen trolling the world on Twitter, buying overpriced albums, and raising the price of a drug on the W.H.O. List of Essential Medicines by more than 5,000%. It is simple to pinpoint his actions and name him the villain in the ongoing battle over rising drug prices and the affordability of healthcare. But is he really the root of the problem? Or is he a mere symptom of the disease?

In August of 2015, Daraprim was acquired by Turing Pharmaceuticals. The 62-year-old drug, known generically as pyrimethamine, is the standard of care for treating the life-threatening parasitic infection toxoplasmosis. Toxoplasmosis can be fatal for babies born to women who become infected during pregnancy. Additionally, it ravages the compromised immune systems of patients with HIV, and it has been identified by the Centers for Disease Control and Prevention as one of the five neglected parasitic diseases for which public health action is necessary. A drug once priced at $13.50 per tablet was raised, after the acquisition by Turing Pharmaceuticals, to $750 overnight. CEO Martin Shkreli justified this price hike by saying that the drug was so rarely used that the impact on the health system would be minuscule, and that Turing would use the money to develop better treatments with fewer side effects. The company promised to offer reductions of up to 50% to hospitals, introduce smaller bottles of 30 tablets, lower overall costs, and offer free sample packages. Its promises, however, were broken almost immediately. Premiums for patients increased five-fold, some Medicare and Medicaid patients were not even given the option of receiving the drug, and doctors were forced to seek out alternative treatments. The high price of the drug has also given many companies an incentive to work as quickly as possible to produce a generic equivalent. After a tremendous amount of backlash, Shkreli continued to respond to media attention with a smug look and snarky comments, reiterating his point that the only thing that mattered to him was his company’s profit.

The Daraprim case has as much to do with the Food and Drug Administration as with Shkreli. The F.D.A. certification process for generic drugs is grueling enough that whoever owns Daraprim has a virtual monopoly in America. According to an F.D.A. official, Congress has not really vested the agency with any authority over pricing. One of the strangest things about the anti-Shkreli argument is that it asks us to be shocked that a medical executive is motivated by profit. Through his actions, Shkreli proves a crucial point about money and medicine. By showing what is legal, he has helped us think about what we might want to change, and what we might need to learn to live with. Shkreli has opened our eyes to what we need to focus on to change this country and try to make medicine affordable for everyone. Why is Shkreli able to do what he did? That is the real disease; Shkreli himself is only the symptom.

Twenty-four visits to Stockholm: a concise history of the Rockefeller Nobel Prizes

Part XV: Christian de Duve, 1974 Prize in Physiology or Medicine.

By Joseph Luna


Centrifuge rotor designed by Henri Beaufay, constructed at the Rockefeller University instrument shop by Nils Jernberg for Christian de Duve, circa 1965. Rotor shown in open (left) and closed (right) positions. From the Rockefeller University Merrill Chase historic scientific instrument collection, accession number 232.

In his two-volume book A Guided Tour of the Living Cell, Christian de Duve vividly describes a most hostile setting, where “everywhere we look are scenes of destruction: maimed molecules of various kinds, shapeless debris, half-recognizable pieces of bacteria and viruses, fragments of mitochondria, membrane whorls, damaged ribosomes, all in the process of dissolving before our very eyes.” Such is the introduction to an organelle called the lysosome that only de Duve as its discoverer could give.

Where mitochondria produce energy and ribosomes produce protein, lysosomes function as a sort of digestive system for a cell: they are equal parts stomach, trash compactor, and recycling center. As bags filled with destructive enzymes, lysosomes perform the critical and often unrewarding job of waste disposal. But the story of how lysosomes were discovered was anything but unrewarding. Like any good scientific caper, it starts with a chance observation made under unlikely circumstances. And for the bench scientist, these circumstances were of the most frustrating variety: they all center on a positive control that never worked.

In the early 1950s, de Duve was a new faculty member at the Catholic University of Louvain in his native Belgium, and had set up his lab to tackle the mechanism of insulin action on the liver. With the exception of glycolysis and the tricarboxylic acid (citric acid) cycle, metabolism was still largely uncharted territory, and one of the key questions centered on how liver cells responded to insulin to lower blood sugar. Biochemists had a hint that the first thing an insulin-treated liver cell did to incoming glucose was to add a phosphate group, but this fragile phosphate group could be removed by a newly described enzyme, later termed glucose-6-phosphatase, which generally made studying insulin action in ground-up liver tissue difficult. De Duve set out to purify and characterize this new enzyme.

After trying all the usual biochemical techniques to separate glucose-6-phosphatase from the other non-specific acid phosphatase found in the liver, de Duve hit an impasse: he couldn’t get glucose-6-phosphatase back into solution. Standard practice was to lower the pH to get an enzyme to fall out of solution, discard all the soluble stuff, and then try to get the enzyme back into solution by raising the pH. It was great on paper, except that it never worked. Luckily, de Duve was prepared.

Prior to taking up his post in Belgium, de Duve paid a visit to Albert Claude, a fellow Belgian and pioneering cell biologist then at the Rockefeller Institute. Claude had shown de Duve that proteins bound to larger structures tended to clump and stay clumped together at low pH. Thus, the most promising way to isolate glucose-6-phosphatase, if it was indeed bound to a larger structure, was to use the centrifuge of cell biologists instead of the acids used by biochemists.


Wasting Our Food

By Guadalupe Astorga


Gene Alexander/U.S. Department of Agriculture; Masatoshi/CC; Brooks Farms Rocks/CC; Hazelisles/CC

More than 40% of the food in the United States ends up in the trash can. This is a huge amount, and it includes seafood, meat, cereals, fruits and vegetables, and dairy products. Surprisingly, the Food and Agriculture Organization (FAO) reports that for all categories, food waste is not primarily the result of a deficient food supply chain, but rather occurs at home (see graph). In industrialized countries, the food wasted by consumers is as high as the total net food production of sub-Saharan Africa. This reflects irresponsible behavior, a product of Western consumption culture. The situation is especially concerning in the case of marine resources, where half of the fish and seafood harvested is never eaten. If we consider the whole supply chain, North America wastes half of its fishery production. In a world with limited and over-exploited marine resources, this is unacceptable.

FAO. 2011. Global food losses and food waste – Extent, causes and prevention. Rome

But we not only throw away marine resources; we also waste cereals, fruit and vegetables, meat, and dairy products (see graph). A similar situation is observed in Europe, where food wastage can reach up to 30%. It is interesting to compare this scenario with developing countries, where food wastage by consumers is negligible. Does this mean that in Western countries with higher income levels, people can afford to throw away food? Meanwhile, almost 800 million people suffer from severe hunger and malnutrition.

What can we do?

First of all, we can educate ourselves to develop more responsible food consumption habits.

A few weeks ago, members of the French parliament (MPs) unanimously voted to propose a law that will force supermarkets to give unsold food to charities, with a fine of up to $102,000 if they do not comply. The initiative was driven by Arash Derambarsh, a municipal councilor who persuaded the French MPs to adopt the measure after his petition on change.org obtained more than 200,000 signatures and celebrity support. He is planning to expand the initiative to the rest of Europe in the next few months, and the law has already ignited debate about implementing similar measures in several other countries.

Several worldwide non-profit associations collect unsold food from supermarkets for free distribution to people with low incomes. Examples of these associations in New York include City Harvest, Hunger Solutions New York, Food Bank for New York City, and the New York City Coalition Against Hunger.

An alternative movement of people known as freegans also contributes to this anti-waste culture: freegans rummage through the garbage of retailers, residences, offices, and other facilities for useful goods. The goods they recover are safe, usable, and clean, reflecting how retailers dispose of a high volume of products in perfect condition.

Let’s now consider the environmental impact of food loss and waste. If food waste were a country, its carbon footprint would rank third in the world, behind only the USA and China. Thirty percent of available agricultural land is used to grow or farm food that will never be eaten.

For a growing population like ours, FAO estimates suggest that food production must increase by at least 50% in the next 30 years to satisfy global demand. Yet if we reduced food waste by just a quarter, the whole world population could meet its nutritional needs.

Twenty-four visits to Stockholm: a concise history of the Rockefeller Nobel Prizes.

Part XIV: George E. Palade, 1974 Prize in Physiology or Medicine.

By Joseph Luna

Nestled in the third sub-basement of Smith Hall, around 1953, an electron microscope (EM) is briefly idle. The machine, an RCA model EMU-2A, resembles a spare part from some future space station: a long vertical steel tube adorned with studs and knobs, with a viewfinder at the base. To the casual viewer, there’s little to indicate the purpose of this strange contraption. But to its operator, who has just imaged the latest in a series of specimens ranging from the pancreas to blood cells to the intestine, the purpose of this machine is strikingly clear, and is measured in Angstroms. The man sitting at the controls is George Palade, and he has just discovered “a small particulate component of the cytoplasm,” as he tentatively named it. In a few years, this particle would be renamed the “ribosome” and would soon be recognized as the essential protein-making machine in all of life.

Of course, such a romantic view of discovery relies squarely on hindsight, for it is almost impossible to pinpoint where one is during a scientific revolution in real time. This was certainly true at the beginning of modern cell biology, as the specimen preparation methods used for EM carried with them the specter of artifact. In essence, how did George Palade know that these particles weren’t a farce? The preceding seven years had done much to prepare Palade to address this question. Alongside Albert Claude, Keith Porter and others, Palade placed the nascent field of cell biology on sound methodological footing that enabled the discovery of the ribosome, and so much more.

In 1946, barely a year after the first EM picture of a cell, Palade joined the Rockefeller Institute as a postdoc, at Claude’s invitation. When Palade got his start, Claude’s group was concerned with trying to connect enzymatic activities that biochemists could measure with a physical location in the cell, accounted for by fractionation or by using new EM methods to see what the ultrastructure looked like. Claude and his co-workers were able to break cells apart into roughly four fractions that could be subjected to biochemical tests: nuclei, a large fraction that appeared to contain mitochondria, microsomes, and free cytoplasm. The large fraction caught their attention precisely because there was a problem. In intact cells, mitochondria could be stained with a dye called Janus Green, but the dye never worked in the large fraction, despite EM results that showed intact, though clumped, mitochondria. Moreover, biochemists had found that the large fraction contained many of the enzymes known to be involved in energy production, but this fraction wasn’t pure enough to permit firm conclusions. Palade helped to clarify this issue by devising a better way to isolate pure mitochondria, using dissolved sucrose (table sugar) as an isotonic buffer instead of the saline solutions used by Claude. As a result, the large fraction retained Janus Green staining, and energy-making enzymes were much more enriched. It was an instructive experience because it showed that cells could be taken apart rationally, a bit like taking apart a radio with a screwdriver instead of a sledgehammer. Intact, functional units like mitochondria could be separated and studied apart from other cell components. For these early cell biologists, it was a compelling justification to keep going.

This much was evident to Institute president Herbert Gasser. With Claude’s move back to Belgium in 1949, the retirement of lab head James Murphy in 1950, and other departures, the first Rockefeller cell biology group shrank to just Porter and Palade. Gasser made the rare move of making them joint heads of their own cytology laboratory, and outfitted Smith Hall with an RCA microscope.

Porter and Palade next made a concerted effort to describe, in intact cells and tissues, the ultrastructure of the mitochondria and of a subcellular structure found in the microsomal fraction that Porter named the endoplasmic reticulum (ER). While Porter, working with Joseph Blum, devised a new microtome to make thin slices of tissue for EM, Palade refined fixation and staining conditions (colloquially called “Palade’s pickle procedure”) to take EM to new heights. Using these tools, Palade went on to describe the inner structure of the mitochondria, observing inner folds and chambers he called cristae. The Palade model of the mitochondrion was illuminating for biochemists, because it provided structural constraints for possible mechanisms that explained how mitochondria made energy. In other words, what a mitochondrion looked like was essential to its function.

This line of thinking was critical to deciphering what role, if any, those particles Palade observed in 1953 might play. He noticed that they were typically stuck to the ER, were enriched in the microsomal fraction, and had high levels of RNA. He also noticed that secretory cells, such as the digestive-enzyme-producing exocrine cells of the pancreas, were packed with ER and ribosomes. In short order a hypothesis emerged, from Palade and others, that the ER and ribosomes were involved in the synthesis and ordered transport of proteins in the cell. Working with Philip Siekevitz, Palade used radioactive amino acids to biochemically trace protein synthesis and transport in these cells, following the radioactivity in cell fractions and using EM to visualize structure in each fraction, all in a seven-part series of papers between 1958 and 1962. This triple threat of cell fractionation, biochemistry, and EM became the model for the entire field. EMs the world over have rarely been idle since.

Digging Into That Juicy and Tasty Steak…

Some Valuable Facts about Meat    

By Guadalupe Astorga

In October 2015, the World Health Organization (WHO) declared red meat and its processed derivatives a threat to human health, citing their carcinogenic risk. Twenty-two experts from ten countries in the International Agency for Research on Cancer (IARC) concluded that processed meat is “carcinogenic to humans” (Group 1, as with tobacco smoking and asbestos), while red meat is “probably carcinogenic to humans” (Group 2A). This classification is based on the strength of the scientific evidence rather than on the level of risk. Daily consumption of 50 g (1.8 oz) of processed meat increases the risk of colorectal cancer by 18% (as a reference, the meat in a hamburger can easily surpass 200 g, or 7 oz). Find more details in the WHO Q&A about this topic here.

JeffreyW / CC BY

Now, let’s get into more digestible terms:

Processed meat is meat that has been transformed by the food industry through salting, curing, fermenting, smoking, or other processes used to enhance flavor or improve preservation. This includes hot dogs, ham, sausages, corned beef, beef jerky, canned meat and meat-based preparations and sauces, and even the meat in your beloved hamburger.

Now, what is the reason for the risk in unprocessed red meat? In this case, it is the way you cook it that can be problematic. High-temperature cooking, as in a barbecue or in a pan, produces carcinogenic chemicals including polycyclic aromatic hydrocarbons and heterocyclic aromatic amines.

Is raw meat safer? If you really want to eat raw meat, you must consider that eating it carries a separate risk related to microbial infections. Cooking kills most bacteria in steak, although some are resistant.

In the end, is there a real health risk in eating red meat? As with alcohol, the risk depends on the dose. A good alternative is to steam your meat or cook it in the oven. The Food and Agriculture Organization (FAO) offers a recipe for a low-cost sausage variation made from vegetables and fresh, unprocessed meat that you can easily prepare to enjoy a delicious homemade natural product. Learn more about processed meat products and find a homemade alternative at the end of this article.

Knowing these facts about the potential effects on human health is important, but what about the risks arising from the production process itself?

Unlike the European Union, the United States still makes significant use of antibiotics in livestock farming. Because these drugs are also used in humans, their heavy use in livestock promotes antibiotic-resistant bacteria that can spread to us through the meat we consume, and this can drive up health care costs. In 2009, the total cost of antibiotic-resistant infections in the United States was estimated to be between $17 and $26 billion per year. Read more in this government health bill.

The environmental consequences of meat production may be even greater than its health risks.

We normally think of global warming as being produced directly by human activity through carbon emissions. Surprisingly, industrial livestock production, including poultry, is one of the biggest sources of methane (CH4, released as a digestion byproduct) and of human-related nitrous oxide (N2O), a gas with 296 times the global warming potential of carbon dioxide (CO2). Find more information about the role of livestock in climate change in this article from FAO. If you want to read a detailed study of livestock and climate change from FAO, go to this link.


Twenty-four visits to Stockholm: a concise history of the Rockefeller Nobel Prizes.

Part XIII: Albert Claude, 1974 Prize in Physiology or Medicine.

By Joseph Luna


An International Equipment Corporation, Model B size 1, circa the mid-1930s, of the type used by Claude for cell fractionation. RU historic instrument collection, accession number 342. Photograph by the author.

On December 7, 1972, the moon-bound crew of the final Apollo mission swiveled their camera toward Earth, some 28,000 miles distant, and took a picture. Three weeks later, the resulting photograph revealed a delicate blue orb suspended in space, painted with swirling clouds above the African continent. When released to the public in time for the holiday newspapers, this picture became instantly famous, serving as a visual capstone, simultaneously majestic and intimate, for humanity’s sojourn beyond our planet. It is perhaps for that reason that this picture was dubbed “The Blue Marble,” and it remains among the most iconic scientific photographs known.

I wonder what our next three prize-winners thought of the Blue Marble photo that winter. Whereas astronauts helped make the world small with spectacular portraits of Earth, by the 1970s our next three Scandinavian visitors, Albert Claude, George Palade, and Christian de Duve, had been using images for over 25 years to show that microscopic cells were organized worlds unto themselves. Starting with the first electron microscope image of an intact cell in 1945, these three (and many others) helped launch the modern discipline of cell biology. For a comprehensive history of cell biology, particularly at Rockefeller University, I refer the reader to “Entering an Unseen World” by our very own Carol Moberg. For the next three installments of this series, we’ll profile how each of these three men contributed to founding a field that is a distinct RU creation. And we’ll begin with Albert Claude.

Claude’s early life was difficult and eventful. After losing his mother to breast cancer at the age of seven, Claude moved around with his family before dropping out of school to care for an ailing uncle; he never finished high school. He worked in a steel mill during World War I and volunteered as a teenager to aid the British Intelligence Service. By the war’s end, Claude was a decorated military veteran, and his first lucky break came when Belgian education authorities made it possible for veterans to pursue higher education without a diploma. Claude thus entered medical school in 1922 and graduated six years later.

It was then that Claude turned his attention to the cancer problem. At the time, The Rockefeller Institute for Medical Research (RIMR) was an epicenter for the debate on the origin of cancer. On one side was Peyton Rous, discoverer of the first transmissible sarcoma in chickens, which bears his name, and the chief proponent of a viral origin of cancer. On the other side was James Murphy, who in short believed that a chemical or environmental insult was responsible for inducing cancer in otherwise normal cells. What exactly the Rous sarcoma agent was could only be speculated upon, since few had tried to purify it. Claude, freshly read up on the subject, wrote to then RIMR president Simon Flexner and proposed isolating the sarcoma agent. A year later, Claude found himself in Murphy’s laboratory in New York, charged to do just that.


Twenty-four visits to Stockholm: a concise history of the Rockefeller Nobel Prizes

Part XII: Stanford Moore and William Stein, 1972 Prize in Chemistry

By Joseph Luna

Original rotating fraction collector used by Moore and Stein for analysis of RNase. RU historic instrument collection, accession number 105.

“RNase-free.” To most any molecular biologist working with RNA, these two seemingly unrelated words are as sweet-sounding together as “passion-fruit.” This is because ribonucleases, those small, hardy enzymes that chew up RNA, can be found everywhere, are more invasive than the tiniest bacteria, and can utterly ruin an experiment. Seeing an “RNase-free” label on one’s reagents is often a mark of trust that experimental results are on firm footing. But the story of RNase is a fascinating one, particularly at Rockefeller, for it is a story intricately wrapped in two names as tightly bound and harmonious together as “RNase-free”: those of Stanford Moore and William Stein, or “Moore-n’-Stein.”

What can be considered one of the greatest life-long collaborations in biochemistry began simply, when Moore and Stein met as post-docs in the laboratory of Max Bergmann in 1939. Bergmann had fled Nazi Germany five years prior and taken up a position at the Rockefeller Institute to continue his research on protein chemistry. A long-time collaborator of Emil Fischer (who coined the term “peptide”), Bergmann and his lab were focused on finding ways to isolate and analyze proteins. By the mid-1930s, all twenty of the primary amino acid building blocks had been discovered, but it was unclear how they were put together to make a functional protein. What’s more, each protein that could be isolated appeared to have a different and unique composition of amino acids. Before one could get a grasp on protein structure, what was needed was a reliable way to determine how much of each amino acid a particular protein contained. This was the problem Moore and Stein first tackled.

They started by mixing together eighteen amino acids at known concentrations and asking if they could invent a method that could both separate and individually measure the concentration of each amino acid in the mixture. It was a daunting task, a bit like trying to uncook an egg. An early form of chromatography using starch columns eventually solved the first problem. Moore and Stein discovered that each of the eighteen amino acids passed through these columns at unique speeds, and so by adding the mixture at one end of the column and collecting fractions at the other, the mixture could be separated in a defined way: phenylalanine came out first, then leucine, then isoleucine and so on. And because standing around collecting fractions drop by drop was simultaneously laborious and boring, they invented a mechanical lab technician to precisely do the work: the automated fraction collector. The second problem, to measure the concentration of amino acids in the fractions, was solved by turning to a well-known chemical reaction known as the ninhydrin reaction. Chemists had discovered that in the presence of ninhydrin, amino acid solutions turned a bluish-purple with each amino acid giving off a unique, if unstable, hue. Moore and Stein figured out ways to stabilize the reaction such that the amount of blue could help determine both the identity of the amino acid, and its concentration.


Twenty-four visits to Stockholm: a concise history of the Rockefeller Nobel Prizes

Part XI: Gerald M. Edelman, 1972 Prize in Physiology or Medicine

By Joseph Luna

To be immune is to be exempt. In the late 19th century, a physician named Paul Ehrlich provided a death-defying example of such an exemption by giving mice sub-lethal quantities of the deadly toxin ricin. Over time, these mice developed a specific resistance to ricin, such that they survived when exposed to amounts that would kill a normal mouse. And yet this ricin immunity was specific, as the super mice remained susceptible to other toxins. What made immunity so specific, and how did it come about? With this experiment, Ehrlich joined a chorus of scientists, including Edward Jenner and Louis Pasteur before him, in addressing immunity. It was upon these questions that the science of immunology was founded.

To explain how this might work in his ricin-proof mice, Ehrlich and others reasoned that the exposed mice began to produce something that could counter the effects of the toxin: an anti-toxin. When it was shown that serum from an animal exposed to toxins or infectious diseases could be transferred to confer immunity in a recipient, the finding blossomed into the concept of a curative anti-serum. It was here that Ehrlich went further. Attempting to summarize the common thread that ran across exquisitely specific immunities against toxins, bacteria, parasites, or anything threatening, Ehrlich coined the term “antibody.” It was a specific antibody directed against a specific, usually foreign, substance, he formulated, that was the root cause of immunity.

Over the next five decades, the study of antibodies lay at the heart of immunology as researchers worked on how specific antibody reactions could be, how antibodies came about, how they could be inherited and passed along, and what exactly they were made of. Answering this last point briefly became a focus at Rockefeller in the 1930s, where chemical methods were first used to determine that antibodies were made of protein. But beyond this, key questions remained unsettled: what accounted for antibody diversity? Were specific antibodies structurally distinct by adopting different conformations or by having different sequences? In short: what does an antibody look like?

Sometime in 1955, a young captain in the U.S. Army named Gerald Edelman asked himself this question. Edelman was a medical doctor stationed in Paris, and when not attending to fellow soldiers at the hospital, he would read medical and science textbooks for fun. Picking up an immunology textbook one day, he read page upon page about the foreign targets of antibodies, called antigens, but almost nothing on antibodies themselves. After an extensive literature search on antibodies, Edelman reached an unsatisfying end. He decided to do something unusual: he applied to graduate school with the goal of studying antibody structure. Even more unusual, he chose not to go to a Harvard- or Johns Hopkins-level institution, but instead entered a newly created graduate program at The Rockefeller Institute for Medical Research in 1957.


Twenty-four visits to Stockholm: a concise history of the Rockefeller Nobel Prizes

Part X: H. Keffer Hartline, 1967 Prize in Physiology or Medicine

By Joseph Luna

While strolling along a beach one day in the summer of 1926, a young physiologist named Haldan Keffer Hartline came across a living fossil. Before him was a horseshoe crab, Limulus polyphemus, with its domed carapace shell, spiked rudder tail and pedipalp legs. Barely changed after over 450 million years of evolution, this mysterious ancient mariner must’ve been a startling and alien sight. We don’t know what Hartline thought of the creature’s primitive book gills, its belly filled with shellfish or its eerie blue blood. But something did enthrall him: the crab’s large compound eyes.

Though he was a medical student, Hartline had no interest in practicing medicine, but was fascinated by research, particularly the physiology of vision. How does seeing work? This question first riveted Hartline while an undergraduate, where he worked on the light-sensing abilities of pill bugs. Moving on to medical school at Johns Hopkins, Hartline attempted to study vision in frogs by using neurophysiological instruments to record activity from their optic nerves, but it proved more difficult and complex than he imagined. What he needed was a simpler model organism, if there was one. He made his way to the Marine Biological Laboratory on the southern coast of Massachusetts, frustrated by past failures, but on a mission to find the right organism to study.


It was a conceptual leap to propose that studying vision in a weird creature like Limulus would yield insight into how animals, including humans, see generally, but the idea wasn’t out of place among biologists in the 1920s. By decade’s end, the Nobel Prize-winning Danish physiologist August Krogh had made the case for studying diverse organisms for general biological insight, predicting for the field in 1929: “for such a large number of problems there will be some animal of choice or a few such animals on which it can be most conveniently studied.”

The year before, Hartline had published a descriptive study of arthropod compound eyes, in which he succeeded in recording nerve impulses after light stimulation in Limulus, along with grasshoppers and two species of butterfly. This comparative work revealed that light stimulation could induce characteristic minute electrical spikes that could be measured across arthropods. And whereas the grasshopper and butterfly were difficult to handle and gave complex recordings, the recordings from Limulus were simple waves and could be studied for extended periods when the preparation was bathed in seawater. But what really set Limulus apart was the size of its compound eye, as it opened the possibility of studying its single facets.

As the name suggests, a compound eye can be thought of as a closely spaced array of simpler eyes. Each “eye,” called an ommatidium, individually acts as a receptor for light directly above it and is composed of a cornea that directs light to a bundle of photoreceptor cells that are in turn connected to a single optic nerve. In small insect eyes, individual ommatidia number in the thousands and can really only be seen under a microscope; the same is true for the analogous rods and cones in vertebrate retinas. The ommatidia of Limulus, by comparison, are fewer in number but comparatively gargantuan: each is about 1 mm across, making them among the largest light receptors in the animal kingdom. Based on their large size, Hartline reasoned that it might be possible to take neurophysiological measurements from single optic nerve fibers in the horseshoe crab. Working with Clarence Graham in the summer of 1931, Hartline succeeded in doing just that. Graham and Hartline dissected single ommatidia, and devised methods to illuminate their photoreceptive cells while recording from the optic nerve. In went light they could control; out went neural signals to the brain that they could measure. These were some of the first measurements of the most fundamental unit of vision.


Alfred Nobel and the Prizes

By Susan Russo

Alfred Nobel was born in Stockholm, Sweden, in 1833. He is best remembered for the invention of dynamite and for leaving the major part of his fortune to establish prizes for a person or persons who accomplished discoveries resulting in the “greatest benefit on mankind.” Nobel’s father was an engineer, manufacturer, and inventor; one of his inventions was modern plywood. The family factories were in St. Petersburg, Russia, where Alfred was educated by tutors, showing a marked interest in chemistry and languages. From 1841 to 1842, Alfred was sent to Sweden to the Jacobs Apologistic [sic] School. Alfred’s studies in chemistry continued in Russia, then Paris, then four years in the United States. His interests also included explosives, about which his father taught him. His 355 inventions included a gas meter in 1857, a detonator in 1863, and a blasting cap in 1865. Nobel’s additional interest in physiological research led him to start laboratories in France and Italy for experiments in blood transfusions, as well as to make donations to the Pavlov laboratory in Russia.

Nobel died in 1896, but when his brother Ludvig died in 1888, one newspaper had mistakenly run Alfred’s obituary, characterizing him as the “merchant of death.” Before his own death, Alfred Nobel wrote a will that set aside most of his fortune to create the Nobel Prizes. The will was contested by members of his family, so the prizes were not legally authorized until 1897. In 1900, the Nobel Foundation was established by order of Sweden’s King Oscar II.

Because of these delays, the initial Nobel Prizes were not awarded until 1901; the first in physics went to Wilhelm Roentgen, with awards also given in the will’s other stated fields of chemistry, peace, physiology or medicine, and literature.

The Nobel Foundation selects professionals in these fields from around the world to nominate individuals for the prizes (including at least one professor at Rockefeller). The Royal Swedish Academy of Sciences awards the prizes for physics and chemistry; the Karolinska Institute awards the prize for physiology or medicine; and the Swedish Academy in Stockholm awards the prize for literature. The Peace Prize is awarded by a committee appointed by the Norwegian Storting, the legislature of Norway. In 1968, a Prize in Economic Sciences in Memory of Alfred Nobel was established by Sweden’s central bank, Sveriges Riksbank.

The gold Nobel Prize medals are minted in Sweden, with a profile of Alfred Nobel on one side. The medals presented in Sweden carry a Latin verse from Virgil, translated as “inventions enhance life which is beautified through art.” The original 1901 prize money was 150,782 Swedish kronor, equivalent as of this writing to $19,948. Nobel Prizes are not awarded in years when no discovery is deemed sufficiently significant, nor, frequently, during times of war.


Twenty-four visits to Stockholm: a concise history of the Rockefeller Nobel Prizes

Part IX: F. Peyton Rous, 1966 Prize in Physiology or Medicine

 By Joseph Luna


Portrait of Peyton Rous, then in Welch Hall. August 2010. Photograph by the author.

“Whatever you do, don’t commit yourself to the cancer problem.” These were ominous words for a young pathologist named Peyton Rous to hear from his famed mentor William Welch. In the early 1900s, the advice seemed sound. Cancer, then as now, is a terrifying constellation of diseases. This was all the more true in 1909, when few tools to study its deadly forms were available beyond the pathological descriptions afforded by the microscope. Added to this frustrating mix were scientific debates on the origins of cancer: some cancers were clearly inherited from one generation to the next, suggesting a genetic cause. And yet other cancers defied an inheritance rule and were instead closely associated with certain chemically-laden occupations, such as “soot wart” carcinoma among chimney sweepers. What if chemical exposures were the real culprit? In an era when chemical regulation was effectively non-existent for industrial workers, one can only imagine what Gilded Age employers would’ve thought of this theory. As a result, “cancer” was seen as a thorny and complex issue, only likely to become thornier. There seemed little a scientist could do to definitively address causes, let alone suggest treatments, for cancer. Welch’s words were not far off the mark.

Yet others were not as pessimistic. Simon Flexner, the Rockefeller Institute’s first director and also a student of Welch’s, offered Rous a position to take up the cancer problem, and Rous, despite some reluctance, went against his mentor’s advice and accepted the offer. Rous was hired ostensibly to take up studies of an epithelial tumor in rats known as the Flexner-Jobling tumor, notable in that it could be transplanted with some success between animals. The position, however, afforded the 31-year-old pathologist considerable freedom to explore other potential models of cancer.

Soon after Rous got to work, at a time when live chickens were not an uncommon sight in Manhattan, an inquisitive poultry breeder brought to the institute a Plymouth Rock hen bearing a large tumor. We know neither what her precise motivations were in approaching the new institute for medical research on Avenue A with a diseased chicken, nor what Rous initially made of such a strange curiosity. But it was a fortuitous encounter. Rous took the chicken and attempted what many a would-be cancer researcher had tried but failed to do. After determining the type of cancer under the microscope, he attempted to transmit the tumor to a healthy bird. To his surprise, it worked. The once healthy bird developed tumors that looked almost exactly like the original. This work, published in 1910, established that a “sarcoma of the common fowl” could be transmitted. Such a model for cancer was an important first step in figuring out what caused it.

Rous next dove head-first into this causation problem. In an extraordinary hypothetical leap, Rous repeated his tumor transmission experiment with a twist. Instead of directly injecting bits of tumor into a bird, Rous first passed the tumor cells through a bacteria-tight filter and then injected a bird with the now cell-free filtrate. Scientific consensus of the day held that cancer, as a distinctly cellular phenomenon of “somatic mutations,” shouldn’t arise with injections of cell-free material. Yet within a few weeks, some of the injected birds developed tumors, though nothing was conclusive for Rous until he plied his trade at the microscope. Coming into focus, the methylene-blue and eosin stained tumor cells of bird number 177 almost shouted their answer: cancer. The spindle-cell sarcoma Rous observed in the new bird was indistinguishable from the tumor in the original hen. Rous had discovered that a filterable agent, in modern parlance a virus, could transmit cancer.


Nikola Tesla

By Aileen Marshall

Nikola Tesla in his Colorado lab, 1899

Who was Nikola Tesla? Does this name ring a bell somewhere in your brain, but you can’t quite place him? Wasn’t he some sort of scientist? The showing of the movie “Tower to the People: Tesla’s Dream at Wardenclyffe” by the Rockefeller Science Communications and Media Group inspired me to find out. It turns out Tesla was quite a visionary scientist who worked on many aspects of electricity and physics.

Tesla was born on July 10, 1856 to Serbian parents in what is now Croatia. When he was 19 he enrolled at the Austrian Polytechnic and did remarkably well there at first. During his third year he developed a gambling problem and did not take his final exams; he received no grades for his final semester and never graduated. He worked as a draftsman until 1880, when his family sent him to Charles Ferdinand University in Prague. He arrived too late to enroll but audited courses there for a year.

The next year he moved to Budapest and worked to improve equipment for the Budapest Telephone Exchange. In 1884 he moved to New York City, where he was hired by Thomas Edison and worked on redesigning the Edison Company’s direct current generators. When he came up with a more efficient design, he was offered a mere $10 raise over his $18-a-week salary. Tesla felt that was an insult and quit.

In 1886 he found investors to finance a company to make lighting systems and electric motors. However, the investors didn’t agree with his plan to develop a new electric system infrastructure; they forced him out, and he lost his patents. He then found other backers who built a lab for him at 89 Liberty Street. It was here that Tesla developed his alternating current motor. Alternating current (AC), which can be stepped up to high voltages, is what is used to send electricity over long distances on power lines, and it is what our household outlets deliver; direct current (DC), which Edison championed, could not be transmitted efficiently over long distances and is what batteries supply. Tesla gave a demonstration of his AC system at the American Institute of Electrical Engineers (now the Institute of Electrical and Electronics Engineers) in 1888, and he later served as the organization’s vice president. His presentation was reported to George Westinghouse; Tesla’s AC motor was licensed to the Westinghouse Electric & Manufacturing Company, and Tesla was hired to work in its labs in Pittsburgh, developing an AC system to power the city’s streetcars. This was the beginning of the “War of Currents” between Edison’s DC system and Westinghouse’s AC system. In 1892, Edison’s company merged with Thomson-Houston to form General Electric.

In 1891 Tesla established a lab on South Fifth Avenue (now LaGuardia Place), and later one at 46 East Houston Street, where he invented his Tesla coil, a high-voltage, high-frequency transformer that produces alternating current and that Tesla used in his pursuit of wireless electricity. Tesla was always an advocate of wireless energy. He held a demonstration of wireless energy at Columbia University: he had two zinc sheets suspended at each end of the room, and when he passed between the two sheets, a light bulb in his hand lit up. He would often give demonstrations to friends, one of whom was Mark Twain.


Culture Corner

The Elegant Movie – Thoughts on the films The Theory of Everything and The Imitation Game
By Bernie Langs


Biophysics as studied at The Rockefeller University (photo courtesy of Mario Morgado – see morgadophotography.com for more of Mario’s work).

[Note: Professor John Nash, featured in this set of reviews, passed away tragically in an auto accident as this article was going to press.] The physicist Brian Greene named his widely successful book, which served as an introduction for many in the general public to the mysteries and wonder of string theory, “The Elegant Universe.” The title gave that sub-specialty of physics a mysterious and glamorous dressing-up of sorts. I enjoyed the book immensely, although I did struggle at times with explanations that were sometimes less than layman-friendly. But I was definitely enamored of the excitement he generated about the study of physics, and I came away feeling that it was physics itself that was elegant, since the universe and the Biblically-termed “heavens and earth” are more what we make of them ourselves from a “blank canvas” than anything with an inherent, purposeful order or Divine scheme and blueprint. God’s abhorrence of the roll of the dice being, of course, duly noted, Professor Einstein.

The disciplines of mathematics and physics are difficult to master, with many students peaking in high school or early college in their ability to understand them. To bastardize an amusing observation about the nether world spelled out on the television show “The Sopranos”: Math is hard; that’s never been disputed. Perhaps this is because at some point in its study, the student can no longer just throw back extrapolations of dictated, memorized facts, as one can in other academic courses with cookie-cutter solutions. At some point mathematicians and physicists have to enter a realm of intuition, in tandem with a talent for locating obscure paths to solutions through a maze of shifting, electron-like, unfixed data. I don’t even know if that is true, but it’s my own hunch about why I was an “A” math student until hitting the harsh roadblock of calculus, the wall at which I came to a dead stop in such studies.

The general consensus that math and science at the highest levels are “really, really hard” has led to several movies in recent years romanticizing the notion of the lone genius mathematician or physicist, and I for one enjoy these kinds of films. The general plot lines of such movies show the trials, tribulations, and struggles of the men and women at the top of these fields, where the mind can be subject to terrific loneliness amid troubled social situations that result from seeing and knowing what most people can’t begin to fathom.

The first movie I saw that explored the fictional tale of the genius mathematician was Good Will Hunting, starring a then very young Matt Damon as a math prodigy from a working-class background in South Boston. Damon’s character, Will Hunting, having grown up as a beaten foster child, is in and out of trouble with the law as he runs around with an amusing group of loose characters (including the actors Ben and Casey Affleck). Hunting is unearthed and discovered by a Fields Medalist professor at MIT (Stellan Skarsgård) after Hunting, working as a janitor, fairly easily solves near-impossible math problems left on a chalkboard in a hallway for the university’s brilliant students to try their hands at solving. The story evolves to include emotional scenes with Hunting’s appointed psychiatrist, played beautifully by the late Robin Williams, who tries to free the scarred youth from his stunted emotional growth so he can ease into maturation and grow into the man he is destined to be. There’s a wonderful scene where Will’s girlfriend, a Harvard premedical student played by Minnie Driver, asks with wide-eyed wonder, “How do you do it?” Damon’s character explains with confidence that just as Mozart could simply look at a piano keyboard and solve the puzzle of making music, he can use his intuition to see mathematical solutions as they open up before him.


Twenty-four visits to Stockholm: a concise history of the Rockefeller Nobel Prizes

Part VIII: Joshua Lederberg, 1958 Prize in Physiology or Medicine

 By Joseph Luna

“You say [it was] a wonderful scientific achievement?” said Paul Ehrlich. “My dear colleague, for seven years of misfortune I had one moment of good luck!”

Joshua Lederberg, then only 13 or so, read these final lines of Microbe Hunters and closed his copy, exhilarated. Paul de Kruif’s semi-non-fictional account of twelve great microbiologists had inspired the young Lederberg and cemented his desire to be one of them. It was an odd life choice to make in 1941, but Lederberg was no ordinary teenager. After graduating high school at age 15, Lederberg headed straight to Columbia University. He graduated three years later with a degree in zoology, just shy of his nineteenth birthday, and continued on at Columbia for medical school as part of a wartime Navy program.

His precociousness had not gone unnoticed, for Lederberg also sought a scientific mentor as an undergrad, and found one in a young assistant professor named Francis Ryan. Having trained with George Wells Beadle and Edward Tatum for his postdoc, Ryan established his laboratory to study the bread mold Neurospora as a new model for microbial genetics. Within a year, Lederberg all but abandoned his medical studies to work in Ryan’s lab, partly due to a single paper that both stunned and spurred the young men to action.

Across town at Rockefeller in 1944, Oswald Avery, Colin MacLeod, and Maclyn McCarty established that DNA was the molecule of heredity in Pneumococcus bacteria. Suddenly the race was on to characterize the role that DNA played in other micro-organisms; Lederberg and Ryan leaped at the chance to try this out in their favorite fungus. Whereas the Rockefeller group established DNA as the key ingredient for transforming non-virulent bacteria to more deadly forms, Lederberg and Ryan aimed to uncover whether DNA could also be responsible for correcting nutritional mutants in Neurospora. In other words, they sought to confirm that manipulating genes as Beadle and Tatum had done was the same as manipulating DNA.

They started with Neurospora mutants that could not make the amino acid leucine. These bugs could grow only when leucine was present in the media, and would die otherwise. Next, they attempted to transform these mutants using DNA from normal Neurospora to restore leucine production. As they suspected, they were able to recover bugs that could grow in the absence of leucine. Yet there was a catch: they figured out that this was not due to the DNA they were introducing into the cells, but instead because the mutant microbes had reverted to their parental, or prototrophic, condition. But where they failed to show transformation, they succeeded in showing something else: Lederberg and Ryan had invented a prototrophic recovery method to isolate rare natural revertants (termed “back mutations”) and to show that induced mutations could sometimes spontaneously switch back to their ancestral condition. Microbes, they discovered, were ceaselessly tinkering.

Their original hypothesis, to correct a mutation at will with DNA transformation in Neurospora, was a spectacular failure, but it got Lederberg thinking that maybe transformation wasn’t all there was. Maybe there was a way for microbes to transform each other naturally and exchange genetic information. And maybe this had gone unnoticed because it was such a rare event, just like the back-mutations he and Ryan had found.

Continue reading

Twenty-four visits to Stockholm: a concise history of the Rockefeller Nobel Prizes

Part VII: Edward Lawrie Tatum, 1958 Prize in Physiology or Medicine

By Joseph Luna

It started, on paper at least, with butter. The chemical microbiology of dairy products was “certainly getting hot,” as one professor dryly wrote to George Beadle, who in 1937 was starting his lab at Stanford University. Beadle, a plant geneticist who had recently switched to the fruit fly Drosophila melanogaster as a model organism, was looking for a good biochemist to join him in genetics research. He offered the job, studying an odd mix of genetics in flies and the chemistry of butter, to 28-year-old Edward Tatum, a University of Wisconsin-Madison Ph.D. who had just spent a year in Utrecht, in the Netherlands. Tatum had come from a science family (his father was a chemistry professor) and was interested in genetics, but both father and son were concerned about the hybrid role of Beadle’s offer: amongst microbiology, biochemistry, and genetics, Tatum stood a good chance of ending up an academic orphan, disowned by each discipline. But with jobs scarce in 1937, there were few options, and Tatum, his wife June, and their toddler, Margaret, headed to California.

What we would now call classical genetics was in full flower at the time. Pioneered at the turn of the century by Thomas Hunt Morgan, the fruit fly was (and still is) a powerful model organism for studying inheritance, a concept then only recently rediscovered through the long-lost works of Gregor Mendel and his famous pea plants. Fly researchers at the time were interested in uncovering mutants, either natural or induced, that were different from normal flies, just as Mendel had done with peas. By crossing mutants with normal flies, or mutants with other mutants, early geneticists were able to track how a trait was transmitted from one generation to the next. In this manner, they figured out that inherited traits corresponded to physical entities on chromosomes, which they called “genes.” But what exactly a gene did was anyone’s guess. Readily observable traits, or phenotypes, such as changes in eye color, were clearly controlled by genes in the sense that they were inherited in predictable ways: they had genotypes. But for other, absolutely necessary things, like proper metabolism, there was really no path forward, since mutations were usually lethal. As a result, geneticists were thought of as having only uncovered how a subset of trivial phenotypes, like pea shape and fly eye color, were linked to a genotype. Whether critical traits like metabolism played by the same rules was an open and very contentious question.

Into this world, Tatum and Beadle (“Beets” to his friends) set up shop. They set their sights on Drosophila eye color, aiming to extract the pigment found in normal flies and characterize it biochemically. Using mutant flies that lacked the pigment, they wanted to perform what we would now call a rescue experiment, in which the pigment could be restored in genetically deficient flies. It would have been a powerful demonstration of phenotype correction, were it not for problems encountered seemingly from the get-go. Tatum found that correcting the pigment defect worked only when cultures carried a bacterial contaminant, which presumably made a hormone or small molecule to get things going. They spent four years trying to isolate this hormone, only to be scooped by the competition. It was a major blow after such arduous work, but more importantly, it made the young researchers realize just how complex biochemical genetics would be with flies.

Continue reading

Twenty-four visits to Stockholm: a concise history of the Rockefeller Nobel Prizes

Part VI: Fritz Albert Lipmann, 1953 Prize in Physiology or Medicine

By Joseph Luna

From Ra to Apollo to Huitzilopochtli, the ancients were onto something by worshipping the sun. Alongside water, no other entity was as important for the agricultural harvest or for predicting the seasonal movements of wind and life-giving rain. But the precise means by which the sun can be said to nourish took over two millennia to figure out, most of it concentrated in the past 200 or so years, when chemists began to ply their trade to biological problems. Why do plants need light? What happens when a caterpillar, a cow or a human eats them? In other words, how does “food,” for any organism, really work? The answers to these questions lie in the study of metabolism, and biochemists in the late 19th and the first half of the 20th century were wild about these problems.

Fritz Lipmann was among them. Born in the East Prussian capital of Königsberg in 1899, Lipmann came of scientific age during some biochemically exciting times. After receiving an MD in 1924, Lipmann changed course and joined the laboratory of Otto Meyerhof, the discoverer of glycolysis and 1922 Nobelist, at the Kaiser Wilhelm Institute in Berlin. In the Meyerhof lab, Lipmann worked alongside Karl Lohmann (the discoverer of adenosine triphosphate, or ATP) and Dean Burk (the co-discoverer of biotin). Working upstairs was Otto Warburg, who in 1931 would win a Nobel Prize for his work on cellular respiration. And in Warburg’s lab was Hans Krebs, for whom the citric acid cycle is named, and who would later share the 1953 Nobel Prize in Physiology or Medicine with Lipmann.

The driving question for these biochemists at the time can be summed up succinctly: what was the chemical basis for energy production and consumption in living organisms? By the late 1920s, it was increasingly clear that ATP was a major energy currency of the cell, but the precise means by which it functioned as both a fuel and a building block, not to mention how it was made in the cell, were unknown. After a year exploring this problem in P.A. Levene’s laboratory here at Rockefeller, Lipmann moved to Copenhagen to work with Albert Fischer, where he studied the end product of Meyerhof’s glycolysis: pyruvic acid.

This “fiery grape” metabolite was interesting as a molecular fork in the road of sorts for an organism: in the absence of oxygen, pyruvic acid undergoes fermentation to make a small but finite amount of energy before winding up as lactic acid. This is essentially what happens when yogurt or sauerkraut is made. But in the presence of constant oxygen, pyruvic acid does something different: it becomes oxidized and is fed into the citric acid cycle to allow continuous production of ATP. In other words, energy production requires continuous breathing, or “respiration.” As a biochemical fulcrum between reactions associated with death (fermentation) or life (respiration), it’s easy to see how this molecule might’ve fascinated Lipmann in the 1930s. Most of the above was known by then, but questions remained. Lipmann noticed that in order for pyruvic acid oxidation to make ATP, some inorganic phosphate was always needed and biochemically used up. Where did this phosphate go? Using radioactive phosphate and adenylic acid, a precursor of ATP, Lipmann observed that pyruvic acid oxidation resulted in radioactive ATP. He had traced the movement of an inert phosphate to the main energy molecule of the cell. This process, now generally summarized as oxidative phosphorylation, is the means by which any organism on this planet that breathes makes energy.
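For readers who like shorthand, the fork in the road that so fascinated Lipmann can be sketched as two simplified, unbalanced reactions (a rough outline only; the real pathways involve many enzymes and cofactors omitted here):

\[
\mathrm{pyruvate} + \mathrm{NADH} + \mathrm{H^{+}} \;\longrightarrow\; \mathrm{lactate} + \mathrm{NAD^{+}} \qquad \text{(fermentation, without oxygen)}
\]
\[
\mathrm{pyruvate} \;\xrightarrow{\;\mathrm{O_{2}}\;}\; \mathrm{CO_{2}} + \mathrm{H_{2}O}, \quad \text{driving} \quad \mathrm{ADP} + \mathrm{P_{i}} \;\longrightarrow\; \mathrm{ATP} \qquad \text{(respiration)}
\]

The second line is where Lipmann’s tracing experiment fits: the inorganic phosphate (the \(\mathrm{P_{i}}\) above) consumed during pyruvate oxidation reappears as the terminal phosphate of ATP.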

Continue reading

Twenty-four visits to Stockholm: a concise history of the Rockefeller Nobel Prizes

Part V: Wendell M. Stanley, 1946 Prize in Chemistry

By Joseph Luna

In 1898, a Dutch botanist named Martinus Beijerinck faced a naming conundrum. He reproduced an experiment first performed six years earlier by the Russian botanist Dmitri Ivanovsky, who found that a disease of tobacco plants, one causing a mosaic discoloration of their precious nicotine-laced leaves, could be transmitted to a healthy plant in an infectious manner. Moreover, like his predecessor, Beijerinck found that even after passing through a filter with pores too small for any known bacterium, the juice of infected plants could still be used to infect healthy tobacco leaves. This was a puzzling observation, since any attempt to see the infectious agent under a microscope turned up nothing. Ivanovsky concluded that there must be a tiny living bacterium, smaller than any known, responsible for the disease. Beijerinck, on the other hand, wasn’t convinced and wanted to call this infectious agent something else to reflect its non-bacterial nature. After what must have been some hand-wringing, he settled on an old Latin word for “slimy liquid” and named the new agent a virus.

For the next three decades, exactly what a virus was remained a tantalizing mystery. Viruses behaved as if they were alive: they grew and could adapt. And yet some were so small that they approached the sizes of proteins and other macromolecules that clearly weren’t alive. So which was it? Alive or dead? Beijerinck, for his part, didn’t have a definitive answer, but set a vital tone by referring to viruses as contagious living fluids (“contagium vivum fluidum”). Until the 1930s, as the roster of plant and animal diseases caused by viruses expanded, attempts were made to categorize them on the basis of size, distinguishing the living (i.e., large) from the non-living (i.e., small). Still, others thought this essentialist idea might be missing something entirely.

Continue reading

Twenty-four visits to Stockholm: a concise history of the Rockefeller Nobel Prizes

Part IV: John H. Northrop, 1946 Prize in Chemistry

By Joseph Luna

So far in this series, it seems as if we’ve focused on foreigners. For a young institution like Rockefeller in the early 20th century, it took time for original Nobel-level work to emerge, so it’s not too surprising that the first three visits to Stockholm were for work done before each recipient arrived at Rockefeller, and in far-off places: France/Canada, Austria, and the great state of Missouri. That changed in 1946, when two Rockefeller scientists won Nobel Prizes in Chemistry, the elder of whom was a true New Yorker: a Yonkers-born, Columbia University-trained, eighth-generation Yankee named John Northrop.

His biography borders on the Rooseveltian: John’s father, a zoologist, was tragically killed in an explosion two weeks before young Jack was born in 1891. His mother, a trained botanist, raised him alone in Yonkers and taught at both Columbia and Hunter College. With a mother deeply interested in nature, Jack spent his young adulthood largely outdoors, quite a feat for a city boy. He hunted and fished, was at home on a horse or in a canoe, and loved to travel. His youthful adventures took him as far as the American Southwest, where in 1913-14 he spent time prospecting for gold along the Colorado River. World War I halted that.

Continue reading