
Science and Ethnoscience, part 1: Science

SCIENCE AND ETHNOSCIENCE

E. N. Anderson

Dept. of Anthropology

University of California, Riverside

Part 1.  Science and Ethnobiology

Science and Knowledge

The present paper questions the distinctions between “science,” “religion,” “traditional ecological knowledge,” and any other divisions of knowledge that may sometimes be barriers in the way of Truth.

I will make this case via my now rather long experience in ethnobiology.  Ethnobiology is the study of the biological knowledge of particular ethnic groups.  It is part of what is now called “traditional ecological knowledge,” TEK for short.  Ethnobiology has typically been a study of working knowledge:  the actual pragmatic and operational knowledge of plants and animals that people bring to their daily tasks.  It thus concerns hunting and gathering, farming, fishing, tree-cutting, herbal medicine, cooking, and other everyday practical pursuits.  Ethnobiological research has focused on how people use, name, and classify the plants, animals and fungi they know.

As such, it is close to economic botany and zoology, to archaeology, and to ethnomedicine.  It is a part of human ecology, the study of how humans interact with their environment.  It overlaps with cultural ecology, the branch of human ecology that concerns cultural knowledge specifically.  Cultural ecology was essentially invented, and the term coined, by Julian Steward (1955).  Steward attended very seriously to political organization, but his earlier students generally did not, which caused his later students to coin the further term “political ecology” (Wolf 1972), which has caught on in spite of some backlash from the earlier students and their own students (Vayda 2008).  Human/cultural/political ecology has produced a huge, fast-evolving, and rather chaotic body of theory (Sutton and Anderson 2009).

Like many of my generation, I was raised in a semi-rural world of farms, gardens, ranches, and craft work.  I learned to shoot, fish, and camp.  Many formative hours were spent on the family farm, a small worked-out cotton farm in a remote part of East Texas.  (My father was raised there, but the family had abandoned it to sharecroppers by the time I came along.)  I learned about all this through actual practice, under the watchful eyes of elders or peers.  Naturally, I learned it much better than I learned classroom knowledge acquired in a more passive way.  Thus I was preadapted to study other people’s working knowledge of biota.

Logic also makes this a good entry point into the study of theoretical human ecology.  It is the most basic, everyday, universal way that humans interact with “nature.”  It is the most direct.  It has the most direct feedback from the rest of the world—the nonhuman realm that is so often out of human control.  The philosopher may meditate on the nonexistence of existence, or on the number of angels that can dance on the point of a pin, but the working farmer or gatherer must deal with a more pragmatic reality.  She must know which plants are the best for food and which will poison her, and how to avoid being eaten by a bear.

In Comte’s words, we need to know in order to predict, and predict in order to be able to act (savoir pour prévoir, prévoir pour pouvoir).

How do we know we know?

For many people, even many scientists, it is enough to say that we see reality and thus know what’s real.  This is the position of “naïve empiricism.” There is no problem telling the real from the unreal, once we have allowed for natural mistakes and learned to ignore a few madmen.  Reality is transparent to us.  The obvious failure of everyone before now to see exactly what’s real and what isn’t is due to their being childlike primitives.  Presumably, the fact that almost half the science I learned as an undergraduate is now abandoned proves that my teachers (who included at least one Nobel laureate) were childlike primitives too.

Obviously this does not work, and the ancient Greeks already recognized that people are blinded by their unconscious heuristics and biases.  Francis Bacon systematized this observation in his Novum Organum (1901/1620).  He identified four “idols” (of the “tribe, den, market, and theatre”), basically cultural prejudices that cause us to believe what our neighbors believe rather than what is true.  Later, John Locke (1979/1697) expanded the sense of limitations by providing a very modern account of cognitive biases and cognitive processing limitations.  The common claim that Locke believed the mind was a “blank slate” and that he was a naïve empiricist is wrong.  He used the expression tabula rasa (blank slate) but meant that people could learn a wide variety of things, not that they did not have built-in information processing limits and biases.  He recognized both, and described them in surprisingly modern ways.  His empiricism, based on careful and close study, involved working to remove the “idols” and biases.  It also involved cross-checking, reasoning, and progressive approximation, among other ways of thought.

Problems with Words

Ethnobiology has normally been concerned with “traditional ecological knowledge,” now shortened to TEK and sometimes even called “tek” (one syllable).  By the time a concept is acronymized to that extent, it is in danger of becoming so cut-and-dried that it is mere mental shorthand.  The time has come to take a longer look.  This paper will not confine itself to “TEK,” whatever that is.  I am interested in all knowledge of environments.  I want to know how it develops and spreads.

Science studies and history of science have made great strides in recent decades, partly through use of anthropological concepts, and in turn have fed back on anthropological studies of traditional knowledge.  The result has been to blur the distinction between traditional local knowledge and modern international science.  Geoffrey Bowker and Susan Star (1999) have produced descriptions of modern scientific classification that sound very much like what I find among Hong Kong fishermen and Northwest Coast Native people.  Bruno Latour (2004, 2005) describes the cream of French scientists thinking and talking very much as Mexican Maya farmers do.  Martin Rudwick, in his epochal volumes on the history of geology, describes great scientists speculating on the cosmos with all the mixture of confusion, insight, genius, and wild guessing that led Native Californians to conclude that their world was created by coyotes and other animal powers.  Early geological speculation was as far from what we believe today as California’s coyote stories.

Similar problems plague the notion of “indigenous” knowledge.  Criticisms of the idea that there is an “indigenous” kind of knowledge, as opposed to some other kind, have climaxed in a slashing attack on the whole idea by Matthew Lauer and Shankar Aswani (2009).  They maintain “it relies on obsolete anthropological frameworks of evolutionary progress” (2009:317).  This is too strong—no one now uses those frameworks.  The term “indigenous” has a specific legal meaning established by the United Nations.  However, there is a little fire under Lauer and Aswani’s smoke.  The term “indigenous knowledge” does tend to imply that the knowledge held by “indigenous” people is somehow different:  presumably more local, more limited, and more easy to ignore.  Some, especially biologists, use this to justify a belief that non-indigenous people (whatever that means) somehow manage to have a wider, better vision.

Similarly, the term “traditional ecological knowledge” has been criticized for implying that said knowledge is “backward and static….  Much development based on TEK thus continues to implement homogenous Western objectives by coopting and decontextualizing selected aspects of knowledges specific to unique places, eliminate their dynamism, and focus more than anything else on negotiating the terms for their commodification” (Sluyter 2003, citing but rather oversimplifying Escobar 1998).  Most of us who study “TEK” do not commit these sins.  But many people do, especially non-anthropologists working for bureaucracies.  Their international bureaucratic “spin” has indeed made the term into a very simplistic label (Bicker et al. 2004, and see below).

The implication of stasis is particularly unfortunate.  Traditional ecological knowledge, like traditional folk music, is dynamic and ever-changing, except in dying cultures.  Many people understand “traditional” to mean “unchanged since time immemorial.”  It does not mean that in normal use.  “Traditional” Scottish folk music is pentatonic and has certain basic patterns for writing tunes (syncopation at specific points, and so on).  New Scottish tunes that follow these traditions are being written all the time, and they are thoroughly traditional though completely new.   Similarly, traditional classification systems can and do readily incorporate new crops and animals.  Traditional Yucatec Maya knowledge of plants is still with us, but over 25 years I have seen their system expand yearly to accommodate new plants.

People are notoriously prone to invent new traditions (Hobsbawm and Ranger 1983).  “Tradition,” more often than not, means “my version of what Grandpa and Grandma did,” not “my faithful reproduction of what my ancestors did in the Ice Age.”

And, of course, modern international science is hardly free from traditions!   “Science” is an ancient Greek invention, and the major divisions—zoology, botany, astronomy, and so on—are ancient Greek in name and definition.  Theophrastus’ original “botany” text of the 4th century BC reads surprisingly well today; we have added evolution and genetics, but even the scientific names of the plants are often the same as Theophrastus’, because his terms continued in use by botanists.  Coining scientific names today is done according to fixed and thoroughly traditional rules, centuries old, maintained by international committees.  Species names of trees, for instance, are normally feminine, because the ancient Romans thought all trees had female spirits dwelling in them.  Thus even trees with masculine-sounding genus names have feminine species names (e.g. Pinus ponderosa, Quercus lobata).  Traditions of publication, laboratory conduct, institutional organization, and so on are more recent, but are older than many of the “traditional” bits of lore classed as “TEK.”

It is no more surprising to find that Maya change and adapt with great speed than to find that laboratory chemists use the same paradigms and much of the same equipment that Robert Boyle used more than 300 years ago.

Finally, the differences between traditional (or “indigenous”) knowledges and modern science are not obviously greater than the differences between long-separated traditional cultures.  Maya biological knowledge is a great deal like modern biology—enough to amaze me on frequent occasions.  Both are very different from the knowledge system of the Athapaskan peoples of the Yukon.   Similarly, the conduct of science in the United States is quite different from that in China or Japan.  National laboratory cultures have been the subject of considerable analysis (see e.g. Bowker and Star 1999; Latour 2005; Rabinow 2002).  And modern sciences differ in the ways they operate.  Paleontology is not done the way theoretical physics is done (Gould 2002).  Thus Latour (2004) and many others now speak of “sciences” rather than “science,” just as Peter Worsley (1997) wrote of “knowledges” in discussing TEK and popular lore.

If one looks at high theory, traditional knowledge and modern science may be different, but if one looks at applications, they are the same enterprise:  a search for practical and theoretical knowledge of how everything works.  Similarly, if one looks at discovery methodology, traditional ecological knowledge and formal mathematical theory seem very different indeed, but traditional and contemporary ecology or biology are much more alike.

I can only conclude that instead of speaking of “ethnoscience,” “modern science,” “traditional knowledge,” and “postmodern knowledge,” we might just as well say “sciences” and “knowledges” and be done with it.

Therefore, pigeonholing TEK in order to dismiss it is unacceptable (Nadasdy 2004).  By the same token, bureaucratizing science, as “Big Science” and overmanaged government agencies are doing now, is the death of science.   As Michael Dove says:  “By problematizing a purported division between local and extralocal, the concept of indigenous knowledge obscures existing linkages or even identities between the two and may privilege political, bureaucratic authorities with a vested interest in the distinction (whether its maintenance or collapse)” (Dove 2006:196).

Problems with Projecting the “Science” Category on Other Cultures

A much more serious problem, often resulting from such bureaucratization, has been the tendency to ignore the “religious” and other beliefs that are an integral part of these knowledge systems.  This is not only bad for our understanding; it is annoying, and sometimes highly offensive, to the people who have the knowledge.  Christian readers might well be offended by an analysis of Holy Communion that confined itself to the nutritional value of the wine and cracker, and implied that was all that mattered.  Projecting our own categories on others has its uses, and for analytic and comparative purposes is often necessary, but it has to be balanced by seeing them in their own terms.  This problem has naturally been worse for comparative science that deliberately overlooks local views (Smith and Wobst 2005; also Nadasdy 2004), but has carried over into ethnoscience.

On the other hand, for analytic reasons, we shall often want to compare specific knowledge of—say—the medical effects of plants.   Thus we shall sometimes have to disembed empirical scientific knowledge from spiritual belief.  If we analyze, for instance, the cross-cultural uses of Artemisia spp. as a vermifuge, it is necessary to know that this universally recognized medicinal value is a fact and that it is due to the presence of the strong poison thujone in most species of the genus.  Traditional cultures may explain the action as God-given, or due to a resident spirit, or due to magical incantations said over the plant, or may simply not have any explanation at all.  However, they all agree with modern lab science on one thing:  it works.

We must, then, consider four different things:  the knowledge itself; the fraction of it that is empirical and cross-culturally verifiable; the explanations for it in the traditional cultures in question; and the modern laboratory explanations for it.  All these are valuable, all are science, and all are important—but for different reasons.  Obviously, if we are going to make use of the knowledge in modern medicine, we will be less interested in the traditional explanations; conversely, if we are explicating traditional cultural thought systems, it is the modern laboratory explanations that will be less interesting.

The important sociological fact to note is the relative independence or disembedding of “science,” in the sense of proven factual knowledge, from religion.  Seth Abrutyn (2009) has analyzed the ways that particular realms of human behavior become independent, with their own organization, personnel, buildings, rules, subcultures, and so on.  Religion took on such an independent institutional life with the rise of priesthoods and temples in the early states.  Politics too developed with the early states, as did the military.  Science became a truly independent realm only much later.  Only since the mid-19th century has it become organizationally and intellectually independent of religion, philosophy, politics, and so on.  It is not wholly independent yet (as science studies continually remind us).  However, it is independent enough that we can speak of the gap between science and religion (Gould 1999).  This gap was nonexistent in traditional cultures—including the western world before 1700 or even 1800.  Many cultures, including early modern European and Chinese, had developed a sense of opposing natural to supernatural or spiritual explanations, but there were no real separate institutional spheres based on the distinction.

However, we can back-project this distinction on other cultures for analytic reasons—if we remember we are doing violence to their cultural knowledge systems in the process.  There are reasons why one sometimes wants to dissect.

Inclusive Science

I use “science” to cover systematic human fact-finding about the world, wherever done and however done.  Traditional people all include what we moderns call “supernatural” factors in their explanations.  Thus, we have to take some account of such ideas in our assessment of their sciences (Gonzalez 2001).  This is obviously a very broad and possibly a bit idiosyncratic usage, but it allows comparison.  It is imperfect, but alternatives seem worse.

Science is about something—specifically, about knowing more, and perhaps improving the human condition in the process.  The appropriate tests are therefore outcome measures, which are usually quite translatable and comparable between cultures.

I might prefer “sciences,” following Latour (2004) and Eugene Hunn (2008), but I share with Joseph Needham a dedication to the idea of a panhuman search for verifiable knowledge. Since the first hominid figured out how to use fire or chip rock, science has been a human-wide, cumulative venture, responsible for many of the greatest achievements of the human spirit.  Yet the traditions and knowledge systems that feed into it are very different indeed.  Science is a braided river, or, even more graphically, a single river made up of countless separate water molecules.

Science gives us sciences, but is one endeavor.  Attempts to confine scientific methodology to a single positivist bed have not worked, and modern sciences are institutionalized in separate departments, but neither of these things destroys the basic reality and unity of the set of practices devoted to knowledge-seeking.  Even today, in spite of the divergence of the sciences, we have Science magazine and “big science” and a host of other recognitions of a basic system.

All narrow definitions are challenged by the fact that the ancient Romans invented the term “science” (scientia, “things known,” from scire “know”).

The Greek word for science was episteme (shades of Foucault 1970), and the more general words for “knowledge” were sophos “knowledge” and sophia “wisdom, cleverness.”  Sciences, however, were distinguished by the ending –logia, from logos, “word.”  Simpler fields that were more descriptive than analytic ended in –nomos “naming.”  It is interesting that astrology was a science but astronomy a mere “star-naming”!  Another ending was –urgia “handcraft work,” as in chirurgia, the word that became “surgery” in English; it literally means “handwork.”

The Greeks worked terribly hard on most of what we now think of as “the sciences,” from botany to astronomy.  In the western world, they get the major credit for separating science from other knowledges.  Aristotle, in particular, kept his accounts of zoology and physics separate from the more speculative material he called “metaphysics.”  (At least, he probably called it that, though some have speculated that his students labeled that material, giving it a working term that just meant “the stuff that came after [meta, ‘beyond’] the physics stuff in his lectures.”)

The Greeks also gave us philosophia, “love of wisdom”—the higher, rigorous attention to the most basic and hard-to-solve questions.  This word was given its classic denotation and connotation by Plato (Hadot 2002).  They used techne for (mere) craft.  Yet another kind of knowledge was metis—sharp dealing, resourcefulness, street smarts.  The quintessential metic was Odysseus, and east Mediterranean traders are still famous for this ability.

The ancient Greeks (at least after Aristotle) contrasted science, an expert and analytical knowledge of a broad area, with mere craft, techne. This has left us today with an invidious distinction between “science” and “technology” (or “craft”).  The Greeks were less invidious about it.  Arts were usually mere techne, but divine inspiration—the blessing of the Muses that gave us Homer and Praxiteles—went beyond that.  We now think of the Muses as arch Victorian figures of speech, but the ancient Greeks took them seriously.

Allowing the Greeks and Romans their claim to having science makes it impossible to rule out Egyptian and “Chaldean” (Mesopotamian) science, which the Greeks explicitly credited.  Then we have to admit, also, Arab, Persian, and Chinese science, which continued the Greek projects (more or less).  Privileging modern Euro-American science is patently racist.  Before 1200 or 1300 A.D., the Chinese were ahead of the west in most fields.  We can hardly shut them out.  (True, they had no word for “science,” but the nearest equivalent, li xue “study of basic principles,” was as close to our “science” as scientia was at the same point in time.)  Once we have done that, the floodgates are open, and we cannot reasonably rule out any culture’s science.

Words for “science” and scientists in English go back at least to the Renaissance.  The OED attests “science” from 1289.  The word “scientist” was not invented till W. Whewell coined it in 1833, but it merely replaced earlier words: “savant” from the French, or the Latinate coinage “scient,” used as a noun or adjective.  These words had been around since the 1400s.  (“Scient” had become obsolete.)

Thus, I define “science” as systematic, methodical efforts to gain pragmatic and empirical knowledge of the world and to explain this by theories (however wildly wrong the latter may now appear to be). Paleolithic flint-chipping, Peruvian llama herding, and Maya herbal medicine are sciences, in so far as they are systematized, tested, extended by experience, and shared.  The contrast is with unsystematized observation, random noting of facts, and pure speculation.  In this I agree with scholars of traditional sciences such as Roberto Gonzalez (2001) and Eugene Hunn (2008; see esp. pp. 8-9), as well as Malinowski, who considered any knowledge based on experience and reason to be science, and thus found it everywhere.

The boundaries are vague, but this is inevitable.  “Science” however defined is a fuzzy set.  Even modern laboratory science grades off into rigorous field sciences and into speculative sciences like astrophysics.

Science is based on theories, which I define as broad ideas about the world that generate predictions and explanations when applied to pragmatic, empirical engagement with particular environments.  This allows me to consider folk views such as the beliefs supporting shamanism along with modern scientific theories.

On the other hand, in small-scale traditional cultures, cutting off “science” creates an artificial distinction.  Such societies do not separate science from other knowledge, including what we in English would call “religion” or “spiritualism,” and analysis does violence to this.  It is worth doing anyway for some comparative and analytical purposes, but most of the time I find it preferable to talk about “knowledge.”  For most purposes, I am much more interested in understanding traditional knowledge systems holistically. For some purposes, however, we need to analyze, and all we can do is live with the violence, remembering that “analysis” literally means “splitting up.”

Chinese, Arab, Persian, and Indian civilizations, and probably Maya and Aztec ones, did have self-conscious, cumulative traditions of fact-seeking and explanation-seeking.  The Near Eastern cultures actually based their science on the Greeks, and even used the Greek words.  Both “science” and “philosophy,” variously modified, were taken into Arabic and other medieval Near Eastern languages.  The Chinese were farther afield, as will appear below, but Joseph Needham was clearly right in studying their efforts as part of the world scientific tradition.  However, it is also necessary to study the ways that traditional Chinese knowledge and knowledge-seeking was not like western sciences.  I will argue at length, below, that both Needham and his critics are right, and that to understand Chinese knowledge of the environment we must analyze it both on its own terms and as scientific practice.

Finally, there is an inevitable tendency to back-project our modern views of the world on earlier days.  Astrology and alchemy seemed as reasonable in the Renaissance as astronomy and chemistry.  There was simply no reason to think that changing dirt into gold was any harder than changing iron ore into iron.

There was even evidence that it could work.  Idries Shah (1956) gives an account by an observant traveler of an alchemist changing dirt to gold in modern central Asia.  The meticulous account makes it clear that he was actually separating finely disseminated gold out of alluvial deposits, but he was evidently quite convinced that he was really transforming the dirt.  More recently, reconstructed alchemical experiments turn silver yellow (superficially, however).  Apparently alchemists were fooled into thinking this was a real change, or at least could be developed into one (Reardon 2011).  Scientists are thus now studying alchemy to see just what those early chemists were doing.  They were not just wasting their time.  They had high hopes and were not unreasonable.  Ultimately they proved wrong, and duly hung up their signboards.  Such is progress—and they were not the last to have to give up on a failed project; we do it every day now.

The old “Whig history” that starts with Our Perfect Modern Situation and works back—seeing history as a long battle of Good (i.e. what led to us perfect moderns) vs. Evil—is long abandoned, but we cannot avoid some presentism (Mora-Abadía 2009).  Obviously, even my use of the term “science” for TEK is something of a presentist strategy.  Thus “science” is a rather arbitrary term.  I shall use it, with some discomfort, for that part of knowledge which claims to be preeminently dedicated to learning empirical and pragmatic things about environments and about lives.

Overly Restrictive Definitions of “Science”

I strictly avoid using “science” to mean solely lab-based activities.  I follow the Greeks in using it for Aristotle’s legacy, not just for the world of case/control, hypothesis-generation, hypothesis-testing, and formal theory.  This form of science was canonized by Ernst Mach and others in the late 19th century.  This usage is inadequate for many reasons.  Among other things, it relegates Aristotle, Galen, Tao Hongjing, Boyle, Li Shizhen, Harvey, Newton, Linnaeus, and even Lyell and Darwin to the garbage can.  Mach certainly did not want this; he was trying to improve scientific practice, not deny his heritage.

We can hardly balk at the errors of traditional societies.  Much of the science I learned as an undergraduate is now known to be wrong:  stable continents, Skinnerian learning theory, “climax communities” in ecology, and so on.  We allow them into our histories of science, along with phlogiston, ether, humoral medicine, the mind-body dichotomy, and other wrong theories once sacrosanct in Western science.

In my field work in Hong Kong, I found that many Chinese explained earthquakes as dragons shaking in the earth.  Other Chinese explained earthquakes as waves caused by turbulent flow of qi (breath, or vital energy) in the earth.  The Chipewyans of north Canada explain earthquakes as the thrashing of a giant fish (Sharp 1987, 2001).  When I was an undergraduate, most American geologists did not yet accept the fact that earthquakes are usually caused by plate tectonics, and instead invoked scientific explanations just as mystical and factually dubious as the dragons and fish.  They blamed earthquakes on the earth shrinking, or the weight of stream sediments—anything except plate tectonics (Oreskes 1999, 2001).    One should never be too proud about inferred variables inside a black box.

Unlike emotions, which have clear biological foundations, scientific systems can be seen as genuinely culturally constructed from the ground up.  Chimpanzees make termite-sticks and leaf cups, but the gap between these and space satellites is truly far greater than the gap between chimp rage and human anger.  It is true that chimps in laboratory situations can figure out how to put sticks together to get bananas, and otherwise display the basics of insight and hypothesis-testing (de Waal 1996; Kohler 1927), but they do not invent systematic and comprehensive schemes for understanding the whole world.  People, including those in the simplest hunter-gatherer societies, all do.

Many historians restrict “science” to the activity popularized in western Europe by Galileo, Bacon, Descartes, Harvey, Boyle, and others in the 16th and 17th centuries.  This usage is considerably more reasonable.  The “Scientific Revolution” involved a really distinctive moment or Foucaultian “rupture” that led to new worlds.  However, much excellent work has recently cut it down to size.  In fact, we now know that calling it a “revolution” drew a somewhat arbitrary line between these sages and their immediate forebears.  They were self-consciously “Aristotelian” against the “Platonism” of said forebears, but this looks very much less distinctive when one considers Arab and Persian science.  Aristotelianism had come to Europe from the Arabs in the 12th and 13th centuries, and the “revolution” was really a slow evolution (Gaukroger 2006).

A valuable term for the unified tradition that embraces European science since 1500 and world science since 1700 or 1800 is “rapid discovery science” (Collins 1998).  Rapid discovery science is very different from traditional science, but the difference is one of degree at least as much as of kind.

The period from Galileo to 1800 may be defined as early modern science. Unlike both its primarily Near Eastern ancestors and its post-1800 descendant, modern international science, it was largely a European enterprise.  Many criticisms have been made of its Eurocentric biases.  It did indeed display a rather distinctive and basically European worldview:  dualistic, excessively rational, dismissing or belittling the rest of the world, and more than somewhat sexist.  However, as we shall see, it depended in critical ways on nonwestern science for both data and ideas.  It was never isolated and could never really ignore the rest of the world’s knowledges.

A common terminological use is to restrict “science” to modern laboratory-based scientific practice, and the most closely similar field sciences.  This science develops formal theories (preferably stated in mathematical terms), generates hypotheses from the theories, tests these according to a formal methodology, discusses the results, and awaits cross-confirmation by other labs.  The problem with this usage is that it rules out virtually all science done before the 19th century.  In the early 20th century, Viennese logicians attempted to theorize such science as an exceedingly formal, even artificial, procedure, with very strict rules of verification or—more famously—“falsification” (Popper 1959).

But this rules out not only all earlier science but even most science done today.  Field science can’t make the grade.  As Stephen Jay Gould (e.g. 1999) often pointed out, paleontology does not qualify.  We can hardly experiment in the lab with Tyrannosaurus rex.  Indeed, historians and social scientists (such as Thomas Kuhn, 1962) have repeatedly pointed out that few lab men and women follow their own principles—they go with hunches, have accidents, and so on.  The most hard-core positivist scientists admit this happily in their memoirs (see e.g. Skinner 1959).  Thus, I shall not use “science” in the above sense.  I shall use the term modern laboratory science for the general sort of science idealized by the positivists, but without losing sight of the fact that even it does not follow positivist guidelines.

However, no one can deny that there was a general movement in the 19th century to make science and the sciences more self-conscious, more rigorous, more clearly divided, and more methodologically consistent (see e.g. Rudwick 2005 on geology).  Contrary to much blather, this was not a “European” enterprise.  It already involved people on both American continents, and it very soon included Asians.  Modern medicine, in particular, owes as much to Wu Lien-teh for his studies of plague and to Kiyoshi Shiga for his studies of dysentery as it does to any but the greatest of the European doctors.  (Shiga won what may be the least enviable immortalization in history, as the namesake of shigellosis.)  Moreover, many of the European founders did their key work in the tropics, as in the pathbreaking work of Patrick Manson and Ronald Ross on malaria and Walter Reed on yellow fever.  Therefore, I will use the term modern international science to refer to the new, self-conscious enterprise that began after 1800.

As Arturo Escobar says, “…an ensemble of Western, modern cultural forces…has unceasingly exerted its influence—often its dominance—over most world regions.  These forces continue to operate through the ever-changing interaction of forms of European thought and culture, taken to be universally valid, with the frequently subordinated knowledges and cultural practices of many non-European groups throughout the world” (Escobar 2008:3).  Escobar, among many others, speaks of “decolonializing” knowledge, and I hope to contribute to that.

Euro-American rational science arose in a context of technological innovation, imperial expansion, power jockeying (as Foucault reminded us), political radicalism, and economic dynamism.  We now know, thanks to modern histories and ethnographies of science, that European science was and is a much messier, more human enterprise than most laypersons think.  The cool, rational, detached scientist with his (sic!) laboratory, controlled experiments, and exquisitely perfect mathematical models is rare indeed outside of old-fashioned hagiographies of scientists.  Rarer still is the lackey of patriarchal power, creating phony science simply to enslave.  (Rare, but far from nonexistent; one need think only of the sorry history of racism and “scientific” sexism, up to and including Lawrence Summers’ famous dismissal of women’s math abilities.  One could always argue that Summers is an economist, not a scientist.)

More nuanced conclusions emerge from the history of science (as told by e.g. Martin Rudwick 2007, 2008) and the ethnography of science (e.g. Bruno Latour 2004, 2005).  These show modern international science as a very human enterprise.  Most of us who have worked in the vineyard can only agree.  (I was initially trained as a biologist and have done some biological research, so I am not ignorant of the game.)  These accounts bring modern science much closer to the traditional ecological knowledge of the Maya, the Haida, or the Chumash.  I have no hesitation about using the word “science” to describe any and all cultures’ pragmatic knowledge of the environment (see below, and Gonzalez 2001).

One can often infer the theory behind traditional or early empirical knowledge.  Sometimes it is quite sophisticated, and one wishes the writer had been less modest.  Therefore, a solid, factual account should not be dismissed because it “doesn’t speak to theory issues” until one has thought over the implications of the author’s method and conclusion.  This is as true if the account comes from a Maya woodsman or Chinese herbalist as it is when the account comes from a laboratory scientist.

We thus need a definition of “science” broad enough to include “ethnoscience” traditions.  The accumulated knowledge and wisdom of humanity is being lost and neglected more than ever, in spite of the tiny and perhaps dwindling band of anthropologists who care about it.  The fact that a group does not have a “thing” called “science,” and even the fact that the group believes in mile-long fish and dinosaur-sized otters (as do the Chipewyan of Canada; Sharp 2001), does not render their empirically verifiable knowledge unscientific.

Considering all folk explanations, and classifying the traditional ones as “religion,” Edward Tylor classically explained magic and religion as, basically, failed science (Tylor 1871).  He came up with a number of stories explaining how religious beliefs could have been reasonably inferred by fully rational people who had no modern laboratory devices to make sense of their perceptions.  Malinowski’s portrayal of religion as emotion-driven was part of a general reaction against Tylor in the early 20th century.

Indeed, Tylor discounted emotion too much.  On the whole, however, there is still merit in Tylor’s work.  There is also merit in Malinowski’s.  Science, like religion and magic, partakes of the rational, the emotional, and the social.

Basic Science:  Beyond Kuhn and Kitcher

“I can’t remember a single first formed hypothesis which had not after a time to be given up or greatly modified.  This has naturally led me to distrust greatly deductive reasoning in the mixed sciences.”  (Darwin, from his notebooks, quoted in Kagan 2006:76)

All my life, I have been fascinated with scientific knowledge—that is, knowledge of the world derived from deliberate, careful, double-checked reflection on experience, rather than from blind tradition, freewheeling speculation, or logic based on a priori principles.

Thomas Kuhn’s classic The Structure of Scientific Revolutions (1962, anticipated by the brilliant work of Ludwik Fleck on medical history) concentrated on biases and limits within scientific practice.  Kuhn was attending to real problems with science itself.  This contrasts with, say, critiques of racism and sexism, which are necessary and valuable but were already anticipated by Francis Bacon’s critiques of bias-driven pseudoscience (Bacon 1901, orig. ca. 1620).

From all this arose a great change in how “truth” is established.  Instead of going for pure unbiased observation, or for falsification of errors, we now go for “independent confirmation.”  David Kronenfeld (personal communication, 2005) adds:  “Science itself is also an attitude—probing, trying to ‘give nature a chance to say no,’ and so forth….science is not a thing of individuals but is a system of concepts and of people.”

A result is not counted, a finding is not taken seriously, unless it is cross-confirmed, preferably by people working in a different lab or field and from a different theoretical framework.  I certainly don’t believe my own findings unless they are cross-confirmed.  (See Kitcher 1993; for much more, Martin and MacIntyre 1994.)

Indeed, the new face of positivism demands what is called VV&A:  “Verification (your model correctly captures the processes you are studying), validation (your code correctly implements your model) and authentication (some group or individual blesses your simulation as being useful for some intended purpose)” (Jerrold Kronenfeld, email of Jan. 7, 2010).  This is jargon in the “modeling” world, but it applies across the board.  Any description of a finding must be checked to see that it is correct, that the descriptions of it in the literature are accurate, and that it advances knowledge, strengthens or qualifies theory, or is otherwise useful to science.

In short, science is necessarily done by a number of people, all dedicated to advancing knowledge, but all dedicated to subjecting every new idea or finding to a healthy skepticism.  We now see science as a social process.  Truth is established, but slowly, through debate and ongoing research.

Naïve empiricist agendas assume we can directly perceive “reality,” and that it is transparent—we can know it just by looking.  We can tell the supernatural from the natural.  This is where we begin to see real problems with these agendas, and the whole “modernist program” that they may be said to represent.  Telling the supernatural from the natural may have looked easy in Karl Popper’s day.  It seemed less clear before him, and it seems less clear today.

We have many well-established facts that were once outrageous hypotheses:  the earth is an oblate spheroid (not flat), blood circulates, continents drift, the sun is only a small star among billions of others.  We also have immediate hypotheses that directly account for or predict the facts.  We know an enormous amount more than we did ten years ago, let alone a thousand years, and we can do a great deal more good and evil, accordingly.

However, science has moved to higher and higher levels of abstraction, inferring more and more remote and obscure intervening variables.  It now presents a near-mystical cosmology of multidimensional strings, dark matter, dark energy, quantum chromodynamics, and the rest.  Even the physicist Brian Greene has to admit:  “Some scientists argue vociferously that a theory so removed from direct empirical testing lies in the realm of philosophy or theology, but not physics” (Greene 2004:352).

To people like me, unable to understand the proofs, modern physics is indeed an incomprehensible universe I take on faith—exactly like religion.  The difference between it and religion is not that physics is evidence-based.  Astrophysics theories, especially such things as superstring and brane theory, are not based on direct evidence, but on highly abstract modeling.  The only difference I can actually perceive is that science represents forward speculation by a small, highly trained group, while religion represents a wide sociocultural communitas. Religion also has beautiful music and art, as a result of the communitas-emotion connection, but I suppose someone somewhere has made great art out of superstring theory.

The universe is approximately 96% composed of dark matter and energy—matter and energy we cannot measure, cannot observe, cannot comprehend, and, indeed, cannot conceptualize at all (Greene 2004).  We infer its presence from its rather massive effects on things we can see.  For all we know, dark matter and energy are God, or the Great Coyote in the Sky (worshiped by the Chumash and Paiute).

On a smaller and more human scale, we have the “invisible hand” (Smith 1776) of the market—a market which assumes perfect information, perfect rationality, and so on, among its dealers.  The abstract “market” is no more real than the Zapotec Earth God, and has the same function:  serving as black-box filler in an explanatory model.  Of course Smith quite consciously, and ironically, used a standard theological term for God.

The tendency to use “science” to describe truth-claims and “religion” to describe untestable beliefs is thus normative, not descriptive.  It is a rather underhanded attempt to confine religion to the realm of the untestable and therefore irrelevant.  (This objection was made by almost every reviewer of Gould 1999.)

We have abstract black-box mechanisms in psychology (e.g. Freudian dynamic personality structure), anthropology (“culture”), and sociology (“class,” “discourse,” “network”).  Darwin’s theory of evolution had a profoundly mysterious black box, in which the actual workings of selection lay hidden, until modern genetics shone light into the box in the 1930s and 1940s.  Geology similarly depended on mysticism, or at least on wildly improbable mechanisms, to get from rocks to mountains, until continental drift showed the way.  Human ability to withstand disease was for long a totally black box.  The usual wild speculations filled it until Elie Metchnikoff’s brilliant work revealed the immune-response system, and gave us all yogurt into the bargain.  It was Metchnikoff who popularized it as a health food, having seen that Bulgarian peasants ate much yogurt and lived long.

At present, organized “science” in the United States is full of talk about “complex adaptive systems” that are “self-organizing” and may or may not have an unmeasurable quality called “resilience.”  They may be explained by “chaos theory.”  All this is substantially mystical, and sometimes clearly beyond the pale of reality; no, a butterfly flapping in Brazil cannot cause a tornado in Kansas, by any normal meaning of the word “cause.”  “Self-organizing” refers to ice crystals growing in a freezing pool, ecological webs evolving, and human communities and networks forming—as if one could explain all these by the same process!  In fact, they are simply equated by a singularly loose metaphor.

When traditional peoples infer things like superstrings and self-organizing systems, we label those inferences “beliefs in the supernatural.”  The traditional people themselves never seem to do this labeling; they treat spirit forces and spirit beings as part of their natural world.  This is exactly the same as our treating dark energy, the market, and self-organization as “natural.”

Surely if they stopped and thought, the apologists for science would recognize that some unpredictable but large set of today’s inferred black-box variables will be a laughingstock 20 years from now—along with phlogiston, luminiferous ether (Greene 2004), and the angle of repose.

More:  they would have to admit that a science that is all correct and all factually proved out is a dead science!  Science is theories and hypotheses, wild ideas and crazy speculation, battles of verification and falsification.  Facts (whatever they are) make up part of science, but in a sense they are but the dead residue of science that has happened and gone on.  (See Hacking 1999; Philip Kitcher 1993.  These writers have done a good job of dealing with the fact that science is about truth, but is ongoing practice rather than final truth.  See Anderson 2000 for further commentary on Hacking.)

The history of science is littered with disproved hypotheses.  Mistakes are the heart and soul of science.  Science progresses by making guesses (hopefully educated ones) about the world, and testing them.  Inevitably, if these guesses are specific and challenging enough to be interesting, many of them will be wrong.  This is one of the truths behind Karl Popper’s famous claim that falsification, not verification, is the life of science (Popper 1959).

Science is not about established facts.  Established, totally accepted truth may be a result of science, but real science has already gone beyond it into the unknown.  Science is a search.

Premodern and traditional sciences made the vast majority of their errors by assuming that active and usually conscious agents, not mindless processes, were causal.  If they did not postulate dragons in the earth and gods in the sky, they postulated physis (originally a dynamic flux that produced things, not just the physical world), “creative force” (vis creatrix), or the Tao.

Today, most errors seem to come not from this but from three other sources.

First, scientists love, and even need, to assume that the world is stable and predictable.  This leads them into thinking it is more simple and stable than it really is.  Hence motionless continents (Oreskes 1998), Newtonian physics with its neat predictable vectors, climax communities in ecology, maximum sustainable yield theory in fisheries, S-R theory in psychology, phlogiston, and many more.

Second, scientists are hopeful, sometimes too much so.  From this come naïve behaviorism, from a hope for the infinite perfectibility of humanity (see Pinker 2003); humanistic psychology (with the same fond hope); astrology; manageable “stress” as causing actually hopeless diseases (Taylor 1989); and the medieval Arab belief that good-tasting foods must be good for you (Levey 1966).

Third, some scientists like to take out their hatreds and biases on their subjects, and pretend that their fondest hates are objective truth.  This corrupted “science” gave us racism, sexism, and the old idea that homosexuality is “pathological.”  Discredited “scientific” ideas about children, animals, sexuality in general (Foucault 1978), and other vulnerable entities are only slightly less obvious.

It gave us the idea (now, I hope, laid definitively to rest) that nonhuman animals are mere machines that do not feel or think.  It gives us the pathologization of normal behavior.  Much or most diagnosed ADHD in the United States, for instance, is clearly not real ADHD; other countries have only about 10% of our rate.  Most extreme of all ridiculous current beliefs, and thus most popular of all, is the idea that people are innately selfish, evil, violent, or otherwise horrific, and only a thin veneer of culture holds them in check.  This has given us Hobbes’ state of nature, Nietzsche’s innate will to power, Freud’s id, Dawkins’ selfish gene, and the extreme form of “rational self-interest” that assumed people act only for immediate selfish advantage.  Three seconds of observation in any social milieu (except, perhaps, a prison riot) would have disproved all this, but no one seemed to look.

Given all the above, critics of science have shown, quite correctly, that all too much of modern “science” is really social bias dressed up in fancy language.

An issue of concern in anthropology is the ways that, in modern society, some mistaken beliefs are classified as “pseudoscience,” some as “religion,” and some merely as “controversial/inaccurate/disproved science.”   In psychology, parapsychology is firmly in the “pseudoscience” category, but racism (specifically, “racial” differences in IQ) remains “scientific,” though equally disproved and ridiculous.  Freudian theory, now devastated by critiques, is “pseudoscience” to many but is “science”—even if superseded science—to many others.  It is obvious that such labels are negotiable, and are negotiated.

The respectability and institutional home of the propounder of a theory is clearly a major determinant.  A mistake, if made at Harvard, is science; the same mistake made outside of academia is pseudoscience.  Pseudoscience is validly used for quite obvious shucks masquerading as science, but nonsense propounded by a Harvard or Stanford professor is all too apt to be taken seriously—especially if it fits with popular prejudices.  One recalls the acceptance as “science” of the transparently ridiculous race psychology of Stanford professor Thomas Jukes and Harvard professor Richard Herrnstein (see Herrnstein and Murray 1994, where, for instance, the authors admit that Latinos are biologically diverse, mixed, and nonhomogeneous, and then go right on to assign them a racial IQ of 89).

All this is not meant to give any support to the critics of science who claim “it” (whatever “it” is) can only be myth or mere social storytelling.  It is also not meant to claim that traditional knowledge is as well-conceived and well-verified as good modern science.  It is meant to show that traditional knowledge-seeking and modern science are the same enterprise.  Let us face it:  modern science does better at finding abstruse facts and proving out difficult causal chains.  We now know a very great deal about what causes illness, earthquakes, comets, and potatoes; we need not appeal to witchcraft or the Great Coyote.  But the traditional peoples were not ignorant, and the modern scientists do not know it all, so we are all in the same book, if not always on the same page.

Thus, science is the process of coming to general conclusions that are accurate enough to be used, on the basis of the best evidence that can be obtained.  Inevitably, explanatory models will be developed to account for the ways the facts connect to the conclusions, and these models will often be superseded in due course; that is how science progresses.

“Best evidence” is a demanding criterion, but not as demanding as “absolute proof.”  One is required to do the best possible—use appropriate methods, check the literature, get verification by others using other models or equipment.  Absolute proof is more than we can hope for in this world (Kitcher 1993).

Purely theoretical models provide a borderline case.  Even when they cannot be tested, they may qualify as science in many areas (e.g. theoretical astrophysics, where experimental testing is notoriously difficult).  Fortunately, they are usually testable with data.

The need to test hypotheses with hard evidence does not rule out the study of history.  Archaeological finds showed that the spice trade of the Roman Empire was indeed extensive.  This validated J. Miller’s hypothesis of extensive spice trade through the Red Sea area (Miller 1969), and invalidated Patricia Crone’s challenge thereto (Crone 1987).  We are, hopefully, in the business of developing Marx’ “science of history,” as well as other human sciences.

We are also now aware that “mere description” isn’t “mere.”  It always has some theory behind it, whether we admit it or not (Kitcher 1993; Kuhn 1962).  Even a young child’s thoughts about the constancy of matter or the important differences between people and furniture are based on partially innate theories of physics and biology (see e.g. Ross 2004).  Thus, traditional ecological knowledge can be quite sophisticated theoretically, though lacking in modern scientific ways of stating the theories in question.

Science and Popular “Science”

It thus appears that science is indeed a social institution.  But what kind of social institution is it?  Four different ones are called “science.”

First, we have the self-conscious search for knowledge—facts, theories, methodologies, search procedures, and knowledge systems.  This is the wide definition that allows us to see all truth-seeking, experiential, verification-conscious activities as “science,” from Maya agriculture to molecular genetics.  This can be divided into two sub-forms.  First, we can examine and compare systems as they are, mistakes and all—taking into account Chinese beliefs about sacred fish, Northwest Coast beliefs about bears that marry humans, Siberian shamans’ flights to other worlds, early 20th century beliefs in static continents, and so on.  Second, we can also look at all systems in the cold light of modern factual analysis, dismissing alike the typhoon-causing sacred fish and the tornado-causing Amazonian butterfly.  Fair is fair, and an international standard not kind to sacred fish cannot be merciful to exaggerated and misapplied “western” science either.

Second, it is “what scientists do”—not things like breathing and eating, but things they do qua scientists (e.g. Latour 2004).  This would include not only truth-seeking but a lot of grantsmanship, nasty rivalries, job politics, and even outright faking of data and plagiarism of others’ work.

Third, it is science as an institution:  granting agencies, research parks, university science faculties, hi-tech firms.  Many people use “science” this way, unaware that they are ruling off the turf the vast majority of human scientific activities, including the work of Newton, Boyle, Harvey, and Darwin, none of whom had research institutes.

Fourth, we have science as power/knowledge.  From the Greeks to Karl Marx and George Orwell, milder forms of this claim have been made, and certainly a great deal of scientific speculation is self-serving.  Science does, however, produce knowledge that stands the tests of verification and usually of utility.  It is our best hope of improving our lives and, now, of saving the world.

What is not science is perhaps best divided into four heads.

First, dogma, blind tradition, conformity, social and cultural convention, visionary and idiosyncratic knowledge, and bias—the “idols” of Bacon (1901).  These have sometimes replaced genuine inquiry within a supposedly scientific tradition.

Second, ordinary daily experience, which clearly works but is not being tested or extended—just being used and re-used.  Under this head comes explicitly non-“sciency” but still very useful material: autobiographies, collected texts, art, poetry and song.  These qualify as useful data, if only as worthwhile insights into someone’s mind.  All this material deserves attention; it is raw material that science can use.

Third, material that is written to be taken seriously as a claim about the world, but is not backed up by anything like acceptable evidence.  In addition to the obvious cases such as today’s astrology and alchemy, this would include most interpretive anthropology, especially postmodern anthropology.  Too many social science journal articles consist of mere “theory” without data, or personal stories intended to prove some broad point about identity or ethnicity or some other very complex and difficult topic. However, the best interpretive anthropology is well supported by evidence; consider, for example, Lila Abu-Lughod’s Veiled Sentiments (1985), or Steven Feld’s Sound and Sentiment (1982).

Fourth, pure advocacy:  politics and moral screeds.  This is usually backed up by evidence, but the evidence is selected by lawyers’ criteria.  Only such material as is consistent with the writer’s position is presented, and there is very minimal fact-checking.  If material consistent with an opposing position is presented, it is undercut in every way possible.  Typically, opponents are represented in straw-man form, and charged with various sins that may or may not have anything to do with reality or with the subject at hand; “any stick will do to beat a dog.”  Once again, Bacon (1901) was already well aware of this form of non-science.

The Problem of Truth

The problem of truth, and whether science can get at it in any meaningful way, has led to a spate of epistemological writings in anthropology, science studies, and history of science.  These writings cover the full range of possibilities.

The classic empiricist position—we can and do know real truths about the world—is robustly upheld by people like Richard Dawkins, whose absolute certainty not only extends to his normal realm (genetics; Dawkins 1976, a book widely criticized) but to religion, philosophy, and indeed everything he can find to write about.  He is absolutely positive that there are no supernatural beings or forces (Dawkins 2006).  He has said “Show me a relativist at 30,000 feet and I’ll show you a hypocrite” (quoted in Franklin 1995:173).  Sarah Franklin has mildly commented on this bit of wisdom:  “The very logic that equates ‘I can fly’ with ‘science must be an unassailable form of truth’ and furthermore assumes such an equation to be self-evident, all but demands cultural explication” (Franklin 1995:173).

At the other end of the continuum is the most extreme form of the “strong programme” in science studies, which holds that science is merely a set of myths, no different from the first two chapters of the Book of Genesis or any other set of myths about the cosmos.  Its purpose, like the purposes of many other myths, is to maintain the strong in power.  It is just another power-serving deception.  Since it cannot have any more truth-value than a dream or hallucination, it cannot have any other function; it must maintain social power.  This allowed Sandra Harding to maintain that “Newton’s Principia Mathematica is a rape manual” because male science “rapes female nature.”  This reaches Dawkins’ level of unconscious self-satire, and has been all too widely quoted (to the point where I can’t trace the real reference).  Dawkins might point out that Harding surely wrote it on a computer, sent it by mail or email to a publisher, and had it published by modern computerized typography.

Bad enough, but far more serious is the fact that the “strong programme” depends on assuming that people, social power, and social injustice are real.  Harding’s particularly naïve application of it also assumes that males, females, and rape are not only real but are unproblematic categories—yet mathematics is not.  How the strong programmers can be so innocently realist about an incredibly difficult concept like “power,” while denying that 2 + 2 = 4, escapes me.

Clearly, these positions are untenable, but that leaves a vast midrange.

The empiricist end of the continuum would, I think, be anchored by John Locke (1979/1697) and the Enlightenment philosophers who (broadly speaking) followed him.  Locke was not only aware of human information processing biases and failures; his account of them is amazingly modern and sophisticated.  It could go perfectly well into a modern psychology textbook.  He realized that people believe the most fantastic nonsense, using a variety of traditional beliefs as proof.  But he explains these as due to natural misinference, corrected by self-awareness and careful cross-checking.  He concluded that our senses process selectively but do not lie outright.  Thus the track from the real world to our knowledge of it is a fairly short and straight one—but only if we use reason, test everything, and check deductions against reality.

Locke’s optimism was almost immediately savaged by David Hume (1969 [1739-40]), who concluded that we cannot know anything for certain; that all theories of cause are pure unprovable inference; that we cannot even be sure we exist; and that all morals and standards are arbitrary.  This total slash-and-burn job was done in his youth, and has a disarming cheerfulness and exuberance about it, as if he were merely clearing away some minor problems with everyday life.  This tone has helped it stay afloat through the centuries, anchoring the skeptical end of the continuum.

Immanuel Kant (1978, 2007) took Hume seriously, and admitted that all we have is our experience—and maybe not even that.  At least we have our sensory impressions: the basic experiences of seeing, smelling, hearing, feeling, and tasting.  They combine to produce full experiences, informed by emotion, cognition, and memory of earlier experiences.  This more or less substitutes “I experience, therefore maybe I am” for Descartes’ “I think, therefore I am”; Kant realized not only that thought is not necessarily a given, but, more importantly, that sensory experience is prior to thought in some basic way.  He worked outward from assuming that experience was real and that our memory of it extending backward through time was also real.  Perhaps the time itself was illusory.  Certainly our experience of time and space is basic and is not the same as Time and Space.  And perhaps the remembered events never happened.  But at least we experience the memory.  From this he could tentatively conclude that there is a world-out-there that we are experiencing, and that its consistency and irreducible complexity make it different from dreams and hallucinations.

In practice, he took reality as a given, and devoted most of his work to figuring out how the mind worked and how we could deduce standards of morality, behavior, and judgment from that.  He was less interested in saving reality from Hume than in saving morality.  This need not concern us here.  What matters much more is his realization that the human brain inevitably gives structure to the universe—makes it simpler, neater, more patterned, and more systematic, the better to understand and manage it.  Obviously, if we took every new perception as totally new and unprecedented, we would never get anything done.

Kant therefore catalogued many of the information-processing biases that have concerned psychologists since. Notable were his “principle of aggregation” and “principle of differentiation,” the most basic information-processing heuristics (Kant 1978).  The former is our tendency to lump similar things into one category; the latter is our tendency to see somewhat different things as totally different.  In other words, we tend to push shades of gray into black and white.  This leads us to essentialize and reify abstract categories.  Things that refuse to fit in seem uncanny.  More generally, people see patterns in everything, and try to give a systematic, structured order to everything.  From this grew the whole structuralist pose in psychology and anthropology, most famously advocated in the latter field by Claude Lévi-Strauss (e.g. 1962).

Hume and Kant were also well aware—as were many even before them—of the human tendency to infer agency by default.  We assume that anything that happens was done by somebody, until proven otherwise.  Hence the universal belief in supernaturals and spirits.  This and the principle of aggregation give us “other-than-human persons,” the idea that trees, rocks, and indeed all beings are people like us with consciousness and intention.

Kant’s focus on experience and the ways we process it was basic to social science; in fact, social science is as Kantian as biology is Darwinian.  However, Kant still leaves us with the old question.  His work reframes it:  How much of what we “know” is actually true?  How much is simply the result of information-processing bias?

People could take this in many directions.  At the empiricist end was Ernst Mach, who developed “positivism” in the late 19th century.  Well aware of Kant, Mach advocated rigorous experimentation under maximally controlled conditions, and systematic replication by independent investigators, as the surest way to useful truths.  The whole story need not concern us here, except to note that controlled, specified procedures and subsequent replication for confirmation or falsification have become standard in science (Kitcher 1993).  Note that positivism is not the naïve empiricist realism that postmodernists and Dawkinsian realists think it is.  It is, in fact, exactly the opposite.  Also, positivism does not simply throw the door open to racist and sexist biases, as the Hardings of the world allege.  It does everything possible to prevent bias of any kind.  If it fails, the problem is that it was done badly.

Kant did not get deeply into the issue of social and political influences on belief, but he was aware of them, as was every thinker from Plato on down.  Kantians almost immediately explored the issue.  By far the most famous was Marx, whose theory of consciousness and ideology is well known; basically, it holds that people’s beliefs are conditioned by their socioeconomic class.  Economics lies behind belief, and also behind nonsense hypocritically propagated by the powerful to keep themselves in power.

By the end of the 19th century, this was generalized by Nietzsche and others to a concern with the effects of power in general—not just the power of the elite class—on beliefs.  This idea remained a minority position until the work of Michel Foucault in the 1960s and 1970s.  Foucault is far too complex to discuss here, but his basic idea is simple:  established knowledge in society is often, if not always, “power/knowledge”:  information management in the service of power.  Foucault feared and hated any power of one person over another; he was a philosophic anarchist.  He saw all such sociopolitical power as evil.  He also saw it as the sole reason why we “know” and believe many things, especially things that help in controlling others.  He was especially attracted to areas where science is minimal and the need for control is maximal:  mental illness, sexuality, education, crime control.  When he began writing, science had only begun to explore these areas, and essentially did not exist in the crime-control field.  Mischief expanded to fill the void; there is certainly no question that the beliefs about sex, women, and sexuality that passed as “science” before the days of Kinsey had everything to do with keeping women down and nothing to do with truth.  Since his time, mental illness and its treatment, as well as sexuality, have been made scientific (though far from perfectly known), but crime control remains exactly where it was when Foucault wrote, and, for that matter, where it was when Hammurabi wrote his code.

A generation of critics like Sandra Harding concluded that science had no more grasp on truth than religion did.  Common such “science” certainly was; typical it was not.  By the time the postmodernists wrote, serious science had reached even to sex and gender issues, with devastating effects on old beliefs.  The postmodernists were flogging a dead horse.  Often, they kept up so poorly on actual science that they did not realize this.  Those who did realize it moved their research back into history.  Finding that the sex manuals of the 19th century were appalling examples of power/knowledge was easy.  The tendency to overgeneralize, and see all science as more of the same, was irresistible to many.  Hence the assumption that Newton and presumably all other scientists were mere purveyors of yet more sexist and racist nonsense.

A “strong programme” within science studies holds that science is exactly like religion: a matter of wishes and dreams, rather than reality.  This is going too far.  Though this idea is widely circulated in self-styled “progressive” circles, it is an intensely right-wing idea.  It stems from Nazi and proto-Nazi thought and ideology (including the thought of the hysterically anti-Semitic Nietzsche, and later of Martin Heidegger and Paul de Man, both committed and militant Nazis, and influenced also by the right-wing philosopher and early Nazi-sympathizer Paul Ricoeur).  It is deployed today not only by academic elites but also by the fundamentalist extremists who denounce Darwinian evolution as just another origin myth.  The basically fascist nature of the claim is made clear by such modern right-wing advocates as Gregg Easterbrook (2004), who attacked the Union of Concerned Scientists for protesting against the politicization of science under the Bush administration.  Easterbrook makes the quite correct point that the Union of Concerned Scientists is itself a politically activist group, but then goes on to maintain that, since scientific claims are used for political reasons, the claims are themselves purely political.  He thus confuses, for example, the fact that global warming due to greenhouse gases is now a major world problem with the political claims based on this fact; he defends the Bush administration’s attempt to deny or hide the fact.

Postmodernists dismiss science—and sometimes all truth-claims—as just another social or cultural construction, as solipsistic as religion and magic.  Some anthropologists still believe, or at least maintain, that cultural constructions are all we have or can know.  This is a self-deconstructing position; if it’s true, it isn’t true, because it is only a cultural construction, and the statement that it’s only a cultural construction is only a cultural construction, and we are back with infinite regress and the Liar’s Paradox.  The extreme cultural-constructionist position is all too close to, and all too usable by, the religious fundamentalists who dismiss science as a “secular humanist religion.”

If we trim off these excesses, we are left with Foucault’s real question:  How much of what we believe, and of what “science” teaches, is mere power/knowledge?  Obviously, and sadly, a great deal of it still is, especially in the fields where real science has been thin and rare.  These include not only education and crime control, but also economics, especially before the rise of behavioral economics in the 1990s.  “Development” is another case (see Dichter 2003; Escobar 2008; Li 2007).  Not only is rather little known in this area, but what factual knowledge has accumulated over the years is routinely disregarded by development agents, and the pattern of disregard fits perfectly with Foucaultian theory.  To put it bluntly, “development” is usually about control, not about development.  Indeed, coping strategies for most social problems today are underdetermined by actual scientific research, leaving power/knowledge a clear field.

However, this does not invalidate science.  Where we actually know what we are doing, we do a good job.  Medicine is the most obvious case.  Foucault subjected medicine to the usual withering fire, and so have his followers, but the infant mortality rate under state-of-the-art care has dropped from 500 per thousand to 3 in the last 200 years, the maternal mortality rate from 50-100 per thousand to essentially zero, and life expectancy (again with state-of-the-art health care) has risen from 30 to well over 80.  Somebody must be doing something right.

Medical science largely works.  Medical care, however, lags behind, because the wider context of how we deal with illness and its sociocultural context remains poorly studied, and thus a field where power/knowledge can prevail (Foucault 1973).

Here we may briefly turn to Chinese science to find a real counterpart.  The Chinese, at least, had deliberately designed, government-sponsored case/control experiments as early as the Han Dynasty around 150 BC (Anderson 1988).  The ones we know about were in agriculture (nong; agricultural science is nongxue), and Chinese agriculture developed spectacularly over the millennia.  It is beyond doubt that this idea was extended to medicine; we have some hints, though no real histories.  Unfortunately most of the work was done outside the world of literate scholars.  We know little about how it was done.  A few lights shine on this process now and then over the centuries (e.g. the Qi Min Yao Shu of ca. 550, and the wonderful 17th-century Tiangong Kaiwu, an enthusiastic work on folk technology).  They show a development that was extremely rigorous technically, extremely rapid at times, and obviously characterized by experiment, analysis, and replication.

In medicine (yi or yixue), the developments were slow, uncertain, and tentative, because of far too much reliance on the book and far too little on experience and observation (see e.g. Unschuld 1986).  However, there was enough development, observation, and test—corpse dissection, testing of drugs, etc.—to render the field scientific.  Even so, we must take note that it was far more tradition-bound and far less innovative than western medicine after 1650 (cf. Needham 2000 and Nathan Sivin’s highly skeptical introduction thereto).   Medical botany and nutrition are probably the most scientific fields, but medical botany ceased to progress around 1600, nutrition around the same time or somewhat later.  It is ironic that Chinese botany froze at almost exactly the same time that European botany took off on its spectacular upward flight.  Li Shizhen’s great Bencao Gangmu of 1593 was more impressive than the European herbals of its time.  Unfortunately, it was China’s last great herbal until international bioscience came to Chinese medicine in the last few decades.  Herbal knowledge more or less froze in place; Chinese traditional doctors still use the Bencao Gangmu. By contrast, European herbals surpassed it within a few years, and kept on improving.

Best of all, thanks to the life work of the incredible scholar H. T. Huang (2000), we know that food processing was fully scientific by any reasonable standard.  Chinese production of everything from soy sauce to noodles was so sophisticated and so quick to evolve in new directions that, in many realms, it remains far ahead of modern international efforts.  Thanks to H. T. and a few others, we can understand in general what is going on, but modern factories cannot equal the folk technologists in actual production.

One thing emerges very clearly from comparison of epistemology and the historical record:  using some form of the empirical or positivist “scientific method” does enormously increase the speed and accuracy of discovery.  Intuition and introspection, by contrast, have a poor record.  Medieval scholars, both Platonists and Aristotelians, relied on intuition, and did not add much to world science; much of the triumph of the Renaissance and the scientific “revolution” was due to developments in instrumentation and in verification procedures.  Psychologists long ago abandoned introspective methods, since the error rate was extremely high.  Doctors have known this even longer.  The proverb “the doctor who treats himself has a fool for a patient” is now many centuries old.

The flaws of the empirical and positivist programs are basically in the direction of oversimplification.  Procedures are routinized.  Mythical “averages” are used instead of individuals or even populations (Kassam 2009).  Diversity is ignored.  Kant’s principles of differentiation and aggregation are applied with a vengeance (cf., again, Kassam 2009, on taxonomy).  The result does indeed allow researcher bias to creep in unless zealously guarded against—as Bacon pointed out.  But, for all these faults, science marches on.  The reason is that the world is fantastically complicated, and we have to simplify it to be able to function in it.  Quick-and-dirty algorithms give way, over time, to more nuanced and detailed ones, but Borges’ one-to-one map of the world remains useless.  A map of the world has to simplify, and then the user’s needs dictate the appropriate scale.

The full interpretive understanding sought by many anthropologists, by contrast, remains a fata morgana.  It is fun to try to understand every detail of everyone’s experience, but even if we could do it (and we can’t even begin) it would be as useless as Borges’ map.

On the other hand, we need that attempt, to bring in the nuances to science and to correct the oversimplifications.  A purely positivist agenda can never be enough.

Case Study:  Yucatec Maya Science

Anthropology is in a particularly good place to test and critique discussions of science, because we are used to dealing with radically different traditions of understanding the world.  Also, we are used to thinking of them as deserving of serious consideration, rather than simply dismissing them as superstitious nonsense, as modern laboratory scientists are apt to do.  I thus join Roberto Gonzalez (2001) and Eugene Hunn (2008) in using the word “science,” without qualifiers, for systematic folk knowledge of the natural world.

The problem is not made any easier by the fact that no society outside Europe and the Middle East seems to have developed a concept quite like the ancient Latin scientia or its descendants in various languages, and that the European and Middle Eastern followers of the Greeks have defined scientia/science in countless different ways.  Arabic ‘ilm, for instance, in addition to being used as a translation for scientia, has its own meanings, and this led to many different definitions and usages of the word in Arabic.

In Chinese, to know is zhi, and this can mean either to know a science or to know by mystical insight.  An organized body of teaching, religious or philosophical, is a jiao.  The Chinese word for “science,” kexue, is a modern coinage meaning “study of things”; originally a Japanese coinage using Chinese words, it was borrowed back into Chinese.  Lixue, “study of the basic principles of things,” is a much older word in Chinese, and once meant something like “science,” but it has now been reassigned to mean “physics.”  Other sciences have mostly-new names coined by putting the name of the thing studied in front of the Chinese word xue “knowledge.”  But non-science knowledges are also xue; literature and culture are wenxue “literary knowledge.”  We can define Chinese “science” in classical terms, without using the modern word kexue, by simply listing the forms of xue devoted to the natural (xing, ziran) world as opposed to the purely cultural.

Yucatec Maya has no word separating science from other kinds of knowledge.  So far as I know, the same is true of other Native American languages.  The Yucatec Maya language divides knowledge into several types.  The basic vocabulary is as follows:

Oojel to know  (Spanish saber).

Oojel ool to know by heart; ool means “heart.” Cha’an ool is a rare or obsolete synonym.

K’aaj, k’aajal to recognize, be familiar with (Spanish conocer in the broader sense)

K’aajool, k’aajal ool to “recognize by heart” (Spanish reconocer):  to recognize easily and automatically.  (The separation between oojel and k’aaj is so similar to the Spanish distinction of saber and conocer that there may be some influence from Spanish here.)

K’aajoolal (or just k’aajool), knowledge; that which is known.

U’ub– to hear; standardly used to mean “understand what one said,” implying just to catch it or get it, as opposed to na’at, which refers to a deeper level of understanding.

Na’at to understand.

The cognate word to na’at in Tzotzil Maya is na’, and has been the subject of an important study by Zambrano and Greenfield (2004).  They find that it is used as the equivalent of “know” as well as “understand,” but focally it means that one knows how to do something—to do something on the basis of knowledge of it.   This keys us into the difference between Tzotzil knowing and Spanish or English knowing:  Tzotzil know by watching and then doing (as do many other Native Americans; see Goulet 1998, Sharp 2001), while Spanish and English children and adults know by hearing lectures or by book-learning.  It seems fairly likely that a culture that sees knowledge as practice would not make a fundamental or basic distinction between magic, science, and religion.  The distinction would far more likely be between what is known from experience and what is known only from others’ stories.  Such distinctions are made in some Native American languages.

Ook ool religion, belief; to believe; secret.  Ool, once again, is “heart.”

So Chinese and Maya have words for knowledge in general but no word for science as opposed to humanistic, religious, or philosophical knowledge.  Unlike the Greeks, they do not split the semantic domain finely.

Let us then turn to “science” in English.  The word has been in the language since the 1200s.  One reference, from 1340, in the OED is appropriately divine:  “for God of sciens is lord,” i.e. “for God is lord of all knowledge.”  The word has been progressively restricted over time, from a general term for knowledge to a term for a specific sort of activity designed to increase specialized knowledge of a particular subset of the natural world.

Science can also be seen as an institution:  a social-political-legal setup with its own rules, organizations, people, and subculture.  By “science” we generally understand one of two things:

1) a general procedure of careful assemblage of all possible data relevant to a particular question, generally one of theoretical interest, about the natural world.

2) a specific procedure characterized by cross-checking or verification (Kitcher 1993) or by falsifiability (Popper 1959).  Karl Popper’s falsifiability touchstone was extremely popular in my student days, but is now substantially abandoned, even by people who claim to be using it and to regard it as the true touchstone of science.  One problem is that falsifiability is just as hard to get and ensure as verifiability.  We all know anthropological theories that have been falsified beyond all reasonable doubt, but are still championed by their formulators as if nothing had happened.  Thus Mark Nathan Cohen (2009) is still championing his idea that Pleistocene faunal extinctions forced people to turn to agriculture to keep from starving—a theory so long and so frequently disproved that defending it makes flat-earthers look downright reasonable.

Another and worse problem is that Popper’s poster children for unfalsifiable and therefore nonscientific theories have been disproved, or at least subjected to some rather serious doubt.  Adolf Grünbaum (1984; see also Crews 1998, Dawes 1994) took on Freud and Popper both at once, showing quite conclusively that—contra Popper—Freudian theory was falsifiable, and had in fact been largely, though not entirely, falsified.  As with Cohen, the Freudians go right on psychoanalyzing—and, unlike Cohen, charging huge amounts of money.  Marx’s central theory was Popper’s other poster child, and while it does indeed seem to be too fluffy to disprove conclusively, its predictions have gone the way of the Berlin Wall, the USSR, and Mao’s Little Red Book.

We are well advised to stick with Ernst Mach’s original ideas of science as requiring specialized observational techniques (usually involving specialized equipment and methodology) and, above all, cross-verification (Kitcher 1993).  On the other hand, David Kronenfeld points out (email of Jan. 7, 2010) that Popper’s general point—one should always be as skeptical as possible of ideas and subject them to every possible test—is as valid as ever.  Clear counter-evidence should count (Cohen to the contrary notwithstanding).

The question for us then becomes whether the Maya had special procedures for meticulously gathering data relating to more or less theoretic questions about the natural world, and how they went about verifying their data and conclusions.

The Maya are a different case, since they do not have any terminological markers at all (unlike the Chinese with xue), they do not have a history of systematic cumulative investigation and replication, and, for that matter, they do not have a concept of “nature.”  All of the everyday Maya world is shaped by human activities.  Areas not directly controlled by the Maya themselves are controlled by the gods or spirits.  Rain, the sky, and the winds, for instance, have varying degrees of control by supernaturals.   Ordinary workings of the stars and of weather and wind are purely natural in our English sense, but anything important—like storms, or dangerous malevolent winds—has agency.  This makes the idea of “natural science” distinctly strange in Maya context.

However, the Maya are well aware that human and supernatural agency is only one thing affecting the forests, fields, and atmosphere.  Plants, animals and people grow and act according to consistent principles—essentially, natural laws.  The heavens are regular and consistent; star lore is extensive and widely known.  Inheritance is obvious and well recognized.  So are ecological and edaphological relationships—indeed there is a very complex science of these.

To my knowledge, there is no specific word for any particular science in Maya, with the singular exception of medicine:  ts’ak.  Ts’ak normally refers to the medicines themselves, but can refer to the field of medicine in general.  In spite of a small residual category of diseases explained by witchcraft and the like, Maya medicine is overwhelmingly naturalistic.  People usually get sick from natural causes—most often, getting chilled when overheated.  This simple theory of causation underdetermines an incredibly extensive system of treatment.  I have recorded 350 medicinal plants in my small research area alone, as well as massage techniques, ritual spells, cleansing and bathing techniques, personal hygiene and exercise, and a whole host of other medicinal technologies (Anderson 2003, 2005).  Some are “magic” by the standards of modern science, but are considered to work by lawful, natural means.  More to the point, medicine in Maya villages is an actively evolving science in which the vast majority of innovations are based either on personal experience and observation or on authority that is supposed to be medically reliable.  (Usually the authority is the local clinic or an actual medical practitioner, but a whole host of patent medicines and magical botánica remedies are in use.)  New plants are quickly put to use, often experimentally.  If they resemble locally used plants, they may be tried for the same conditions.  In the last five years, noni, a Hawaiian fruit used in its native home as a cure-all, has come to Mayaland, being used for diabetes and sometimes other conditions.  It is now grown in many gardens, and is available for sale at local markets.  It has been tried by countless individuals to treat their diabetes, and compared in effectiveness with local remedies like k’ooch (Cecropia spp.) and catclaw vine (uncertain identification).  I have also watched the Maya learn about, label, and study the behavior of newly arrived birds, plants, and peoples.  Their ethnoscience is rapidly evolving.  The traditional Maya framework is quite adequate to interpret and analyze new phenomena and add knowledge of them to the database.

This makes it very difficult to label Maya knowledge “traditional ecological knowledge” as opposed to science.  It is not “traditional” in the sense of old, stagnant, dying, or unchanging.  Nonanthropologists generally assume that traditional ecological knowledge is simply backward, failed science (see e.g. Nadasdy 2004).  They are now supposed to take account of it in biological planning and such-like matters, but they do so to a minimal extent, because of this assumption.

Until modern times, the Maya did not have the concept of a “science” separate from other knowledge.  They also lacked the institution of “science” as a field of endeavor, employment, grant-getting, publishing, etc.  Now, of course, there are many Maya scientists.  Quintana Roo Maya are an education-conscious, upwardly-mobile population.  The head of the University of Quintana Roo’s Maya regional campus is a local Maya with a Ph.D. from UCSC in agroecology.

This might not matter, but Maya knowledge also includes the supernatural beings mentioned above.  This was a problem for Roberto Gonzalez and Gene Hunn as well, in their conceptualization of Zapotec indigenous knowledge as science.  The Quintana Roo Maya are a hardheaded, pragmatic lot, and never explain anything by supernaturals if they can think of a visible, this-world explanation, but there is much they cannot explain in traditional terms or on the basis of traditional knowledge.  Storms, violent winds, and other exceptional meteorological phenomena are the main case.  Strange chronic health problems are the next most salient; ordinary diseases have ordinary causes, but recurrent, unusual illnesses, especially with mental components, must be due to sorcery or demons.

One could simply disregard these rather exceptional cases, and say that Maya science has inferred these black-box variables the way the Greeks inferred atoms and aether and the way the early-modern scientists inferred phlogiston and auriferous forces.  However, discussion with my students, especially Rodolfo Otero in recent weeks, has made it clearer to me that much of our modern science depends on black-box variables that are in fact rather supernatural.  Science has moved to higher and higher levels of abstraction, inferring more and more remote and obscure intervening variables.  Physics now presents a near-mystical cosmology of multidimensional strings, dark matter, dark energy, quantum chromodynamics, and the rest.  The physicist Brian Greene admits of string theory:  “Some scientists argue vociferously that a theory so removed from direct empirical testing lies in the realm of philosophy or theology, but not physics” (Greene 2004:352).

Closer to anthropology are the analytic abstractions that have somehow taken on a horrid life of their own—the golems and zombies of our trade.  These are things like “globalization,” “neoliberalism,” “postcoloniality,” and so forth.  Andrew Vayda says it well:  “Extreme current examples…are the many claims involving ‘globalization,’ which…has transmogrified from being a label for certain modern-world changes that call for explanation to being freely invoked as the process to which the changes are attributed” (Vayda 2009:24).  “Neoliberalism” has been variously defined, but there is no agreed-on definition, and there are no people who call themselves “neoliberals”; the term is purely pejorative and lacks any real meaning, yet it somehow has become an agent that does things and causes real-world events.  There is even something called “the modernist program,” though no one has ever successfully defined “modern” and none of the people described as “modernist” had a program under that name.  Farther afield, we have the Invisible Hand of the market, rational self-interest, money, and a number of other economic concepts that suffer outrageous reification.  The tendency of social sciences to reify concepts and turn them into agents has long been noted (e.g. Mills 1959), and it is not dead.

The real problem with supernatural beings, Maya or postmodern, is that they tend to become more and more successful at pseudo-explanation, over time.  People get more and more vested interest in using reified black-box postulates to explain everything.  The great advantage of modern science—what Randall Collins (1999) calls “rapid discovery science”—is that it tries to minimize such explanatory precommitment.  The whole virtue of rapid discovery science is that it keeps questioning its own basic postulates.

If we can claim “science” for superstrings, neoliberalism, and rational choice, the Maya winds and storm gods can hardly be ruled out.  At least one can see storms and feel winds.  Black-box variables are made to be superseded.  We cannot use them as an excuse to reject the accumulated wisdom of the human species.


 
