Marx was a Neuroscientist, Part 1: The Two Cultures and the Scientistic Revolution

By Benjamin Campbell

I must interject an intermission in my discussion of economics. The subject is remarkably depressing, and I imagine few readers wish to spend their summer reading about such topics as the farcical global debt bubble or the predictable failure of yet another “sustainable development” summit. Further, I am supposed to be completing a Ph.D., so perhaps I should be discussing neuroscience instead of economics. After all, a bridge to the sciences may help us more fully appreciate the absurdity of capitalism.

The persistence of economic nonsense is, in fact, strongly related to the current intellectual fragmentation. Where C.P. Snow coined the term “the two cultures” to describe the divide between the sciences and humanities, one would need to scale this by orders of magnitude to account for the contemporary atomization of academia. As Norbert Wiener put it in Cybernetics, the scientist “will be filled with the jargon of his field, and will know all its literature… but, more frequently than not, he will regard the next subject as something belonging to his colleague three doors down the corridor, and will consider any interest in it on his own part an unwarrantable breach of privacy.” This observation should not be limited to the sciences, and the lack of communication grows markedly the more distant any two fields are from one another. At times this has degenerated into absurd quarrels between the worst of postmodern relativism and the worst of scientific reductionism.

Of course, scientists often speak of bridging the divide between the “two cultures.” Frequently, however, this reflects a desire for greater scientific influence over culture, rather than a mutually beneficial discourse. For instance, Snow’s original lecture was not really about reconciling the two cultures so much as praising science and denouncing what had become of “traditional culture” (which probably explains why scientists reference Snow so often). In more recent years, John Brockman has launched the “third culture” of Edge.org, which features “the most complex and sophisticated minds” of public intellectual discourse. Nearly all are scientists, who, Brockman claims, “are taking the place of the traditional intellectual.”

Those scientists auditioning for the role of new public intellectuals have, with few exceptions, failed the public miserably. To the extent that they engage with broader social issues at all, they focus on easy targets (e.g. religious fundamentalism), content to relitigate the past while ignoring the difficult and uncomfortable questions of the day. The result is little more than a scientific polish applied to the establishment technocratic wisdom, while the historic challenges currently facing humanity are deflected and downplayed. At this pivotal moment, the Whiggish view of inevitable progress that pervades this type of scientism is not only unwarranted but completely irresponsible. As a result, the few people with insightful things to say about the degenerate state of late capitalism tend to be found on the other side of the “two cultures” divide; their insights are usually lost in translation, and can thus be conveniently ignored by capitalism’s apologists in favor of a pseudoscientific economism.

Given this regrettable state of affairs, Jonah Lehrer’s Proust was a Neuroscientist was bound to spark my interest. Here, a bold young Rhodes Scholar endeavors to bridge Snow’s chasm by discussing the “artists who anticipated the discoveries of neuroscience.” Sadly, Lehrer, an exemplar of this new commentariat, displayed a limited knowledge of the humanities that was matched only by his superficial understanding of neuroscience. The result was a collection of astonishingly poor essays, centered on such dubious associations as that between Aplysia and Proust’s madeleine.

Here, I will attempt something similar to what Lehrer intended, although hopefully more convincingly. As I will argue, not only are the insights of the “other culture” essential for understanding the world, but they have been paralleled by our unfolding understanding of the brain. This will require that we begin with some preliminary issues emerging out of the crisis of the Enlightenment.

The Missing Shade of Blue

Following on the heels of the scientific revolution and coinciding with the bourgeois defeat of feudalism, the Enlightenment was a time of competing and contradictory interests. Much like today, these competing interests were represented in a dominant ideology that was quite inconsistent.

On one hand, the enormous achievements of scientific naturalism tended towards a commitment to empirical observation as the ultimate source of knowledge. Yet most Enlightenment thinkers were also deeply committed to reason in religious, moral, and political matters. Further, scientific progress depended on a leap between the two domains of observation and reason, and there was no clear sense of how to bridge that gap. For instance, the Copernican revolution was not something that fell naturally out of empirical observation; the idea of the earth moving around the sun actually flies directly in the face of everyday human experience. Worse yet, the two poles of rationalism and empiricism tended to undermine each other. Taken to an extreme, rational criticism led to skepticism about the existence of the external world. Conversely, a commitment to empiricism could lead to a materialism that denied any role for human rationality, and worse, God. The only way of preserving both would appear to be an awkward and tenuous dualism.

David Hume would give the empiricist tendency its fullest expression in his Enquiry Concerning Human Understanding (1748). In his masterwork, Hume argued that knowledge ultimately derives from sensory associations, and that “all our ideas… are copies of our impressions.” To Hume, the “creative power of the mind amounts to no more than the faculty of compounding, transposing, augmenting, or diminishing the materials afforded us by the senses and experience.” Hume’s ideas were undeniably quite advanced for his time. For instance, a century before Darwin, Hume argued that the difference in cognition between humans and animals was merely a matter of degree. As we will see, he even anticipated problems in the process of induction that would be formalized much later.

In one form or another, the type of empirical and associative argument advanced by Hume would have a deep influence on the Anglo-American philosophical tradition, and would eventually become dominant in the emerging fields of psychology and neuroscience. Indeed, Hume had suggested that his reasoning could be extended to account for the “fully determinate mechanisms” that explain “the actions and volitions of intelligent agents.” Early physiological work by Charles Sherrington and others would demonstrate the kind of stimulus-response associations that characterize simple reflex circuits. It was a natural extension for “behaviorists” like John Watson, Ivan Pavlov, and Edward Thorndike to elaborate on such stimulus-response associations to explain most, if not all, of brain function and behavior. Thorndike’s “law of effect” asserted that stimulus-response associations were modifiable by what we might now call reward, and B.F. Skinner would greatly elaborate on this type of research with his newly developed operant conditioning chamber.

Even today, if we were to translate the Enquiry into the modern language of information theory, many neuroscientists would find little to disagree with in a Humean view of the brain. There has been a notable revival of operant conditioning research in its current incarnation of reinforcement learning, in which the modification of stimulus-response associations is thought to be mediated by dopamine. As with so many things, the problem with Hume’s associationist thought isn’t that it is necessarily wrong, but rather that it is incomplete.
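
To make the connection concrete, here is a minimal sketch in Python of the kind of reward-prediction-error rule used in reinforcement learning models of dopamine, which can be read as a modern restatement of Thorndike’s law of effect. The response name, learning rate, and reward values are invented for illustration; this is a toy Rescorla-Wagner-style update, not any particular published model.

```python
# Toy reward-prediction-error learning: a modern gloss on Thorndike's
# "law of effect," in which a stimulus-response association is strengthened
# by reward. All names and numbers here are illustrative, not empirical.

alpha = 0.1                     # learning rate (illustrative value)
value = {"lever_press": 0.0}    # learned strength of a response

def update(response, reward):
    """Nudge the association by the reward-prediction error (delta = r - V)."""
    delta = reward - value[response]
    value[response] += alpha * delta
    return delta

# Over repeated rewarded trials the association climbs toward the reward,
# and the prediction error (the putative dopamine signal) shrinks.
for trial in range(10):
    delta = update("lever_press", reward=1.0)
    print(f"trial {trial:2d}  value={value['lever_press']:.3f}  delta={delta:.3f}")
```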

Hume seemed to recognize the problem. He asked his reader to imagine a person “perfectly acquainted with colors of all kinds, except one particular shade of blue.” Would such a person be able to imagine the missing shade “though it had never been conveyed to him by his senses?” Hume answered in the affirmative but nevertheless concluded that “this instance is so singular, that it is scarcely worth our observing, and does not merit, that for it alone we should alter our general maxim.”

The Cognitive Revolution

It was Hume’s Enquiry that the German rationalist philosopher Immanuel Kant credited with stirring him from his “dogmatic slumbers.” The problem facing Kant in his Critique of Pure Reason (1781) was to preserve a role for reason in the face of Hume’s empiricist challenge.

Kant fashioned an ingenious solution. He ceded the underlying point that we only know about the world through appearances, rather than possessing knowledge of “things-in-themselves”; such was his critique of “pure” reason. But if all of our knowledge of the world is gained through a priori forms of thought, then we may possess knowledge about these forms of thought through which we see the world. That is, we “establish something about objects before they are given to us.” This move, which he immodestly termed his “Copernican revolution,” allowed Kant to retain a role for reason among the forms of thought that the human mind constructs.

Kant’s response to Hume would be paralleled nearly two centuries later by the reaction against behaviorism that is often termed the “cognitive revolution.” Skinner, for instance, had taken the behaviorist program to its logical conclusion by analyzing human language as a form of operant conditioning in his Verbal Behavior (1957). A major salvo of the “revolution” would be Noam Chomsky’s review of that book, in which he argued forcefully that a purely associative account was too impoverished to explain language, precisely because it ignored the forms in which language is represented in the human mind.

Such a focus on Kantian “forms of thought” was to characterize the nascent interdisciplinary study of cognitive science, which overlapped with and was greatly influenced by the emerging discipline of “artificial intelligence” (AI). For instance, one of the major problems in AI was programming computers for pattern recognition. Early attempts involved matching sensory information against given templates. More advanced work would incorporate a greater level of detail, such as the detection of edges and shapes that could be reconstructed into spatially invariant forms, but the general viewpoint remained one of observing the scene as filtered through a priori forms. This work was greatly encouraged by the pioneering studies of the neurophysiologists David Hubel and Torsten Wiesel, which appeared to suggest that the visual cortex itself employs such a strategy of hierarchical feature extraction.
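
To give a flavor of the contrast, the following Python sketch sets a naive template matcher against a crude two-stage pipeline of edge detection followed by pooling, loosely in the spirit of hierarchical feature extraction. The patterns, filter, and pooling scheme are invented for illustration; this is a caricature of the idea, not a model of visual cortex.

```python
import numpy as np

# Caricature of early pattern recognition: (1) raw template matching, and
# (2) a two-stage pipeline of edge detection followed by pooling, loosely in
# the spirit of hierarchical feature extraction. Patterns and the filter are
# invented for illustration; this is not a model of visual cortex.

def template_match(image, template):
    """Similarity of an image to a fixed template (normalized dot product)."""
    a, b = image.ravel(), template.ravel()
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def vertical_edge_map(image):
    """First stage: respond to vertical luminance edges anywhere in the image."""
    return np.abs(image[:, 1:] - image[:, :-1])

def pooled_response(image):
    """Second stage: max-pool the edge map, giving tolerance to position."""
    return float(vertical_edge_map(image).max())

# A vertical bar, and the same bar shifted one pixel to the right.
bar = np.zeros((5, 5))
bar[:, 2] = 1.0
shifted = np.roll(bar, 1, axis=1)

print("template match, original bar  :", round(template_match(bar, bar), 2))
print("template match, shifted bar   :", round(template_match(shifted, bar), 2))
print("pooled edge response, original:", round(pooled_response(bar), 2))
print("pooled edge response, shifted :", round(pooled_response(shifted), 2))
```

The toy example makes only one point: the pooled edge response tolerates the one-pixel shift that defeats the raw template, which is the sort of spatial invariance the hierarchical approach was meant to deliver.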

Assuming incoming information had been suitably represented in such “forms of thought,” AI pioneers Allen Newell and Herbert Simon recast intelligence as the symbolic manipulation of such representations, and this view of symbolic processing was to dominate early cognitive science. It might seem strange that many scientists envisioned the brain as similar to a computer. One reason for this, as Wiener pointed out, is that people have always viewed themselves through the lens of their contemporary science and technology—from the clockwork mechanistic world of Newton’s time, to the age of steam engines and thermodynamics, to the age of communication networks and information, to that of the digital computer. But another reason for the dominance of symbolic processing accounts of the mind was the strong influence of symbolic logic, best exemplified by Alfred North Whitehead and Bertrand Russell’s Principia Mathematica, on early twentieth-century thought. This influence was personified by Walter Pitts, a logician strongly influenced by Russell, whose collaboration with the neurophysiologist Warren McCulloch on “A Logical Calculus of the Ideas Immanent in Nervous Activity” (1943) would prove highly consequential.
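
The flavor of that logical calculus can be conveyed in a few lines of Python: a McCulloch-Pitts-style unit sums weighted binary inputs and fires when the sum reaches a threshold, which suffices to implement elementary logical operations. The particular weights and thresholds below are conventional illustrative choices, not taken from the 1943 paper.

```python
# A McCulloch-Pitts-style threshold unit: binary inputs, fixed weights, and an
# output of 1 when the weighted sum reaches a threshold. The specific weights
# and thresholds are conventional illustrative choices, not from the 1943 paper.

def unit(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def AND(x, y): return unit((x, y), weights=(1, 1), threshold=2)
def OR(x, y):  return unit((x, y), weights=(1, 1), threshold=1)
def NOT(x):    return unit((x,),   weights=(-1,),  threshold=0)

# Truth-table check: networks of such units can realize any Boolean function.
for x in (0, 1):
    for y in (0, 1):
        print(f"x={x} y={y}  AND={AND(x, y)}  OR={OR(x, y)}  NOT x={NOT(x)}")
```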

From the vantage point of posterity, these early endeavors were characterized by a remarkable overconfidence. AI did not come close to living up to the early hype of its founders, as programming machines for “intelligent” behavior turned out to be far more difficult than many had assumed. As for cognitive science, one need only read the debate between Jerry Fodor and Steven Pinker over the latter’s How the Mind Works (1997) to get a sense of how little progress the discipline had made in elucidating its title.

Not everyone was sold on the initial promise of AI. Reflecting on this progress in his remarkable synthesis The Entropy Law and the Economic Process (1971), Nicholas Georgescu-Roegen put it this way: “The reason why no computer can imitate the human brain is that thought is a never ending process of Change which… is in essence dialectical.”

The next part of this series will explore what this might possibly mean, and how it relates to the emerging conception of the brain.

July/August 2012