
Science and Ethnoscience, part 2: European Biology as Ethnobiology

Monday, August 22nd, 2011

SCIENCE AND ETHNOSCIENCE
E. N. Anderson

Dept. of Anthropology, University of California, Riverside

Part 2.  European Science as Ethnoscience:  Science in Europe before International Science Came

Recently, historians of science have reacted against the old model of evaluating former beliefs in light of current knowledge.  This is surely the right thing to do.  However, it often leads to evaluating former beliefs as if they were a homogeneous body of lore, decoupled from real-world experience.  One could, for instance, recount the medical knowledge of 1600 as if it were a single, coherent system, based on logical reasoning, with no input from experience or practice.  This is not really how people think, and certainly not how science and medicine developed.  People interact with their patients and surroundings, learn from that as well as from books, and come up with individual knowledge systems that may or may not have much in common with those of their contemporaries.  The current histories of science thus take account of agency, and the role of interaction with reality.

Near East and China to Europe

Science gets around.  Three especially important cases of early knowledge transfer are particularly well documented:  the spread of medical lore from Greece to the Near East in the early Islamic period; the spread of medicine and other technical lore between China and the Near East in the Mongol period; and the spread of science from both of the above to Europe in the Middle Ages and Renaissance.

The first two cases joined early, for Near Eastern medical knowledge was flowing to both Europe and China in the 1200s and 1300s.  However, the two-way nature of the latter flow, and the radical differences in structure and cultural background, make it more reasonable to treat them initially as separate histories.

Europe before 1500 participated in a general rise of science in the Eurasian and African world.  Greek learning was long forgotten in the west, but Arab and Byzantine scholars reintroduced it, first to Moorish Spain, then to Sicily and upward through Italy.  There had been a huge flow from the Greek world into Arabic and Persian cultures from 700 to 1000, but essentially none in the other direction.  After this time the flow almost entirely reversed.  Translation into Arabic shrank considerably (Lewis 1982:76), but translation from Arabic into western languages picked up.  At first, almost all of it was within the Arab-influenced worlds of Spain and Italy, but it spread rapidly beyond those spheres.  Some Greek learning spread to west Europe directly (Freely 2009:165-177, and see below), but most of it came via the Arabs.

The great Salerno medical school, just south of Naples, was apparently started by Arabs in the early 8th century.  Legend said the school was founded by an Arab, a Jew, a Latin and a Greek.  It flourished by 850 and blossomed from about 1000 AD as the center of Islamic-derived learning in Europe.  Constantine the African (ca. 1020-1087), from Tunis or thereabouts, was instrumental in transferring Arabic knowledge into Italy at this time, through his translations (and those of his student John the Saracen, 1040-1103) of works by authors including al-Abbās, as well as Hunayn ibn Ishāq’s versions of Aristotle and Galen, though his translations were far from the best imaginable (Kamal 1975:189, 662-3; Ullmann 1978).  (Hunayn, a Christian, appeared in Latin under his Christian name of Iohannitius.)  Constantine worked in Salerno or nearby Montecassino.

Indian numerals were Arabized in the 9th century, and then developed into Arabic numerals, which slowly entered Europe in the late middle ages and early Renaissance.  The most important transfer of Indian into Arabic numeration came via al-Khwārazmī in Baghdad.  He became so famous as a mathematician that his name entered the world’s languages:  “algorithm” is a corruption of “al-Khwārazmī.”  The word first appeared in a thirteenth-century translation, Algoritmi de numero indorum, “Al-Khwārazmī on Indian numbering” (Hill 1990b:255; “logarithm” is a deliberately-coined metathesis of “algorithm”).  He contributed greatly to algebra (Arabic al-jabr, “restoration”), and his work on it was translated into Latin in the 12th century, by Robert of Chester and then again by Gerard of Cremona.  Trigonometry followed the same course, possibly from India, certainly from Islam, at a somewhat later date.  (On this and other mathematical transfers, see Freely 2009:133, with forms of numbers well shown, from ancient Brahmi to modern; Hill 1990; Mushtaq and Berggren 2000, esp. pp. 182, 187.)  The most important name in transferring Arabic numerals into Europe (in the 990s) was Gerbert of Aurillac, who became Pope Sylvester II (Lewis 2008:328-329)—one of the few popes to have any distinction in learning outside of theology.
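To give a concrete sense of what this early algebra did, consider the canonical worked problem from al-Khwārazmī’s treatise, “a square and ten roots equal thirty-nine units,” solved by completing the square (the symbolic notation below is ours, not his; he worked entirely in words):

\[ x^2 + 10x = 39 \;\rightarrow\; x^2 + 10x + 25 = 64 \;\rightarrow\; (x+5)^2 = 64 \;\rightarrow\; x = 3. \]

He recognized only the positive root.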

The Arabs and other Near Easterners also made enormous contributions to technology and agriculture, but these are poorly known, because the contributors were rarely literate and literate people were rarely interested (Hill 1990b).  A few agricultural handbooks exist, and show great sophistication.  We know this lore was transferred to Europe, but we have few details.

The Salerno medical school remained the greatest in Europe throughout the early middle ages.  This school translated the Arabic Taqwim as-sihha by the Christian Arab Ibn Butlān (d. ca. 1066) as the Tacuinum sanitatis, which remained the basic medical manual in Europe for centuries (Tacuinum Sanitatis 1976).  It is still in print in several languages, though now more for its beautiful early-Renaissance plates than for its advice.  The advice, though, is still sound; it survives today in the standard clichés about moderation in diet, moderate exercise, rest, and so forth, familiar to everyone from doctors’ talk and pop medical books.  These saws trace directly back to the Tacuinum.

It, in turn, was the basis for the Salernitan Rule, the versified guide to health that was the Salernitan school’s most famous product (Arikha 2007:77, 100ff.).  Sir John Harington translated it into English around 1600.  His famous translation of one couplet is still frequently and justly quoted:

“Use three physicions still:  First Doctor Quiet,

Next Doctor Merryman, and Doctor Diet” (Harington 1966:22).

The Latin original, ibid., is:

Si tibi deficiant medici, medici tibi fiant

Haec tria, mens laeta, requies, moderata diaeta; literally, “if you lack doctors, let these three be your doctors:  a cheerful mind, rest, and a moderate diet.”

The Salerno school also produced the Articella (“little art”), a handbook that, “by the mid-thirteenth century…was the foundational textbook for most medical teaching in the West.  It included the Hippocratic Aphorisms and Prognostics; Galen’s short Ars parva; the medically essential and thus ubiquitous treatises On Pulses and On Urines; and the extensive compendium of Galenic writings by Hunayn ibn Is’haq (Johannitius), the Isagoge Ioannitii in tegni Galeni, in the translation by Constantinus Africanus” (Arikha 2007:77).  Many other Italian translating projects were active (Freely 2009:126ff.).

Through these and other channels, the work of Ibn Sina (Avicenna, 980-1037; see Avicenna 1999) became standard.  Ibn Sina hailed from the far east of the Iranian world, near Bukhara.  He was a thorough-going Aristotelian, committed to investigation of the world, though convinced that intuition was vital to such investigation.  His enormous Canon of Medicine was translated into Latin by Gerard of Cremona (1114-1187), along with perhaps a hundred other Arab works.  Gerard had moved to Toledo to learn Arabic, and remained there (Freely 2009:128; Pormann and Savage-Smith 2007:164), in that world which still remembered “convivencia.”  This was surely one of the most stunning examples of knowledge transfer in all history (Covington 2007; Kamal 1975:663; Ullmann 1978:54).  One suspects that Gerard did not single-handedly translate all of them, but the achievement was fantastic nonetheless.  Avicenna’s Canon remained standard in Europe into the 17th century.  Gerard also translated Ptolemy’s Almagest, and basic works of Al-Kindi, Al-Farabi, Alhazen, Thabit, Rhazes, al-Zahrawi, and Al-Khwarizmi, the last being the first algebra to reach Europe.  He also translated much alchemy (Hill 1990a:341), which, be it remembered, was a perfectly reasonable science in those days; much of modern chemistry descends from it.  Certainly, few people in history have been so important, and very few so important yet so little known.

Also active in Toledo were the Jewish translator and writer Abraham ibn Ezra (1086-1164; Freely 2009:129) and several others.

Fibonacci, famous for the sequence of numbers that describes the pattern of developing plant structures, learned much from the Arabs, using al-Khwarizmi’s algebra works in Latin (Covington 2007:10)—presumably Gerard’s translation.  Faraj ben Salim, a Sicilian Jew, translated more of Rhazes as well as Ibn Jazlah, al-Abdan, and others.  As late as the 16th century, Andrea Alpago of Belluno was translating or retranslating more of Avicenna (Kamal 1975:664, following Hitti).  Another Italian, Stephen of Pisa, was active at Salerno and in the Middle East (Ullmann 1978:54).
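For readers who have not met it, the Fibonacci sequence is defined by a simple recurrence, each term being the sum of the two before it:

\[ F_1 = F_2 = 1, \qquad F_n = F_{n-1} + F_{n-2}, \]

giving 1, 1, 2, 3, 5, 8, 13, 21, 34…  The numbers of spirals visible in sunflower heads, pine cones, and other developing plant structures are typically consecutive terms of this sequence.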

Botany transferred actively, largely in the form of herbal medicine in the tradition of Dioscorides.  The Arabs had vastly increased the number of items in the Dioscoridean materia medica, and Europe slowly adopted many of these, though unable to access some that were strictly Near Eastern (Idrisi 2005).

Spain was key to transmission.  The Arabs conquered it in 711, ruled most of it into the 11th century, and retained a foothold at Granada until 1492.  At peak, under the late Umayyads in the 10th century, Cordova (the capital) reportedly had 200,000 houses, 10,000,000 people, 600 inns, 900 baths, 600 mosques (with schools), 17 universities, and 70 public libraries, the royal one containing 225,000 books (Kamal 1975:8), or, by other estimates, 400,000 (Lewis 2008:326).  The Umayyad golden age ended, but subsequent dynasties did surprisingly well keeping civilization alive, and slowly Europe realized that there was something worthwhile here.

The climax of Spanish appropriation of Islamic knowledge came in the 11th-13th centuries, under Alfonso the Wise (late 13th century) and other relatively enlightened monarchs.  Moorish Spain was a center of Arab and Islamic civilization.  Works spread all over the world from there; Yusuf al-Mu’taman’s geometry book of the 11th century was taken by Moses Maimonides (1135-1204) to Cairo, whence it went on all over the Islamic world, being republished, for example, in Central Asia in the 13th century (Covington 2007).  At that time or earlier, Spanish travelers even went to Egypt and Syria, and possibly Central Asia, in search of knowledge (Kamal 1975:662, citing the medieval writer al-Maqrizi).  Ibn al-Baytar (d. 1248), a famous Andalusian physician and herbalist, traveled in the Near East and listed hundreds of remedies; many herbal drugs are still called by his name.

Around 948, the Byzantine emperor Constantine VII sent ‘Abd al-Rahman III of Andalus an elegant Greek manuscript of Dioscorides.  Seeing this as obviously far more useful than most pretty gifts, the Jewish minister Hasdai ibn Shaprut had it translated, with the gift-bearing ambassador and a monk providing the Greek, and several Arabs helping with the Arabic and with the plant identifications (Lewis 2008:331).  Arabic versions of Dioscorides were eventually brought into Latin, but, as we have seen, most Arabic medical knowledge came later and via Italy.

Even love poetry moved north; Andalusian song, sometimes learned via captured singing-girls, inspired the troubadours (see e.g. Lewis 2008:355).  Christian captives went the other way, and influenced Andalusian Arab songs; these often have chorus lines in (rather butchered) medieval Spanish, frequently with decidedly racy words.

A vast range of Spanish and Italian words come from Arabic, including a huge percentage of traditional medical terms, and many have gone on into English, ranging from “syrup” and “sherbet” to “soda,” “cotton,” “alkali,” “antimony,” “realgar,” and “lozenge,” to say nothing of such well-known scientific terms as “algebra,” “algorithm,” “alchemy,” and most of the names of the larger stars.  The Arabic definite article “al-” is often a dead giveaway of Arabic origin.  The “l” is assimilated to many initial consonants, giving Spanish words like azulejo “tile” (Arabic az-zulej) and azafrán “saffron” (az-zafaran).  The standard Spanish word for thin noodles, fideos, is Arabic; the proper classical Arabic is fidāwish (see Zaouali 2007:116 for the word and a medieval recipe), fideos being the Andalusian Arabic pronunciation.  Today the word is often mistakenly taken as a plural.

Spain was, of course, a center of Arabic learning, which could easily be translated directly.  Al-Maqqari wrote of its 10th-century capital:  “In four things Cordoba surpasses the capitals of the world…the greatest of all things is knowledge—and that is the fourth” (Freely 2009:107; the other three were local buildings, including the mosque which still survives).  Ibn Zuhr (Avenzoar to Europeans, transcribing the Andalusian pronunciation of his name) flourished ca. 1091-1162.  His more famous student Ibn Rushd (1126-1198, known in Latin as Averroes, again approximating the Andalusian dialect pronunciation) became a standard source of medical and scientific knowledge for medieval Europe.  He was enormously influential on St. Thomas Aquinas, and through him on all subsequent European thought.  Europe might never have developed modern science without Averroes.  Averroes was an Aristotelian, and his version of Aristotle remained standard in Europe, being definitively superseded only after the original Greek texts became widely known.

Averroes also wrote “The Incoherence of the Incoherence,” an answer to al-Ghazālī’s “The Incoherence of the Philosophers,” a mystic’s attack on rational thinking.  Though one standard story claims that al-Ghazālī got the best of it and ended philosophy in Islam, Averroes’ answer was actually fairly successful, and science continued to flourish in the Islamic world, succumbing more to later economic decline than to al-Ghazālī’s mysticism.  Other scientists included Abulcasem (Abu al-Qasim).  The translation effort culminated with Arnold of Villanova (d. ca. 1313), who translated Avicenna, Al-Kindi, Avenzoar and others.

Some knowledge flowed the other way.  Little, if any, of it was scientific; it was more in the line of fun.  As noted above, some medieval Arab songs in Spain had Spanish-language choruses—significantly, written to be sung by slave-girls kept for sexual purposes.  Spanish food got into Muslim cooking; “a primitive sort of puff pastry” was fulyātil, from the medieval Spanish word for “leafy” (Perry 2007:xii).  We will return to the story of Spain.

Italy, however, was also a major transfer zone, with Muslim control of Sicily (and briefly part of south Italy) critically important.  Sicily fell to Roger the Norman, who with his successors developed one of the most tolerant realms of the Middle Ages.  Seeing the value of Islamic knowledge, they—especially Frederick II—tolerated Muslim communities and oversaw a great deal of translation and learning.  One result was Frederick’s great treatise on falconry, De Arte Venandi cum Avibus, which is probably the only medieval work that is still the standard textbook in its subject (Frederick 1943).

South France produced the famous Tibbon family of Jewish translators, who rendered many works into Hebrew; then they or others translated these on into Latin.  They were especially active in the 13th century (Pormann and Savage-Smith 2007:164-165).  They may have made the greatest single contribution to the translation effort, vying with Gerard of Cremona.

Universities, Crusaders and their doctors, knightly orders centered in Cyprus and elsewhere in the Mediterranean, and ordinary travelers became more and more a part of the effort, until the path was well-beaten and no longer a matter for a few heroic travelers.

Even the British Isles contributed translators, including Adelard of Bath and Michael Scot.  Roger Bacon learned much from translations of Arabic lore.  Later, in the 17th century, Jacobus Golius introduced Descartes to Alhazen’s work and other relevant texts; some of Alhazen’s work on optics now survives only in Latin translation.

By 1200, Paris had 40,000 inhabitants, 4000 of whom were students (Gaukroger 2006:47).

Students were then as they are now; “as the contemporary saying went, [they learned] liberal arts at Paris, law at Orleans, medicine at Salerno, magic at Toledo, and manners and morals nowhere” (Whicher 1949:3; cf. Waddell 1955, esp. pp. 176 ff.).  Nothing has changed since, except for the addresses of the most prestigious universities.  The “contemporary saying” was presumably coined by older professors, who never fail to claim that the younger generation is going to hell, and never remember that their elders said the same thing about them.  It is particularly amusing to hear aging ‘60s people complain about today’s amazingly tranquil and industrious young.

Religion was both enabler and opponent of all this.  Plato was the basis of early theology.  The rise of Platonism explains such things as the Seven Deadly Sins:  Greek philosophical annoyances rather than Biblical taboos.  Aristotle was outlawed for much of this earlier period; the idea that God was present in all his creation—the physical world—was anathematized as heresy (see Gaukroger 2006:70-71).

Oddly, Greek learning did not penetrate Europe directly until long after classical Greek works were well known via the Arab routes.  In fact, the Greeks themselves recovered much of it from the Arabs (Herrin 2008); the Dark Ages were not nearly so dark in Byzantium as in the west, but still much was lost.  Greeks such as Gregory Chioniades (late 13th-early 14th century) eventually came to translate Arab advances in astronomy, medicine, and related fields (Herrin 2008:274).  Somewhat before this time, medical study had revived in Byzantium; dissection began again (after longstanding Christian bans) around the 11th century (Herrin 2008:228).

Western Europeans came to Byzantium for commerce and crusades in the high middle ages.  The infamous Fourth Crusade of 1204 led to European occupation of Constantinople for almost 60 years.  During this period, such Westerners as William of Moerbeke read and translated Aristotle, Galen, Archimedes, and other scientific greats (Herrin 2008:278-279).

Meanwhile, Greeks from the Byzantine world appeared in the West, in time to teach Petrarch and convert him to trying to rediscover Greek classics in their original form.  Burgundio of Pisa first translated Galen from Greek to Latin, around 1180 (Kamal 1975:663).  Others, including the Jewish translator Bonacosa, followed over the next century.  Byzantine delegations continued, and the 15th century emerged as a major turning point, establishing Greek learning as more or less de rigueur for serious scholars, at least in Italy (see Gaukroger 2006:89-90).  The story of the rediscovery of classical learning is too well known to need retelling here; what interests us at this point is that direct work with the Greek sources came long after much classical learning was known through Arabic refraction.

With the rise of early modern science, it was the Europeans’ turn to seek out Near Eastern knowledge in its actual homeland.  Leonhard Rauwolf traveled extensively in the Near East in the 16th century, to be followed in later centuries by Joseph Pitton de Tournefort (a father of taxonomy) and many others.  The classical sources were by then well known in Europe; Rauwolf and Tournefort were more interested in gathering new knowledge through actual field work.  They are among the great ancestors of modern-day field biologists and anthropologists.

India, China and Japan became well known only later.  Portuguese and then Dutch enterprise (the latter especially in Japan) led to a flood of knowledge coming back to Europe.  The Jesuit missionaries, who focused on East Asia as their initial mission field, were particularly important; they idealized Chinese culture, arguing enthusiastically for its philosophy, governance, food, medicine, and anything and everything else (on medicine, see Barnes 2005).  “New Christians” may have been important too, if the example of Garcia da Orta (the Jewish-background writer on Indian medicines) is representative.  A veritable translating industry introduced East Asian medicine to Europe in the mid-17th century, with moxibustion in particular intriguing the Dutch in Japan (Cook 2007:350-377).  Even Thomas Sydenham, the very image of the “new science” in medical form, was fascinated by moxibustion and recommended it (Cook 2007:372).  Concepts did not get across, but practices and especially drugs did.  As Cook (2007:377) says:  “Culture certainly made translating the whys and wherefores as understood by one group extraordinarily difficult.  But it was no barrier to useful goods or the business of how to do something.”

The flood of medieval Arab material was almost all Aristotelian, and it led to an enormous revolution in European thought in the 12th and 13th centuries (Ball 2008; Gaukroger 2006).  The highly idealistic, other-worldly, broadly Platonic worldview of the Dark Ages gave way to a view that valued investigation of real-world things.  God’s plan as revealed in the actual experienced world became a major goal of investigation.  This was to be the key reason for scientific investigation for the next several centuries, as we shall see in the next section.

Traditional churchmen, however, caviled at the new rationalistic, worldly, logical approach.  They felt that “taking too strong an interest in nature as a physical entity was tantamount to second-guessing God’s plans” (Ball 2008:817).

This view rose in parallel to, and may have been derived from, the Muslim reaction against Aristotelianism.  In the Near East, but not in Europe, that reaction triumphed in the end.  Extreme reactionary religiosity, associated with the Hanbalite legal school, begat the Ash’arite view that speculation on the world was impious.  This received a huge boost through al-Ghazālī’s savage attacks on the “philosophers” in the 12th century.  Hanbalite thinking has more recently given rise to the Wahhabism that swept the Islamic world in the late 20th and early 21st centuries.  Wahhabism was espoused by the Saud family in Saudi Arabia, and their oil wealth gave them the ability to propagate it worldwide, leading to Al-Qaeda terrorism, widespread attacks on girls’ schools, and many other manifestations.  Islam is as diverse as Christianity; the Hanbalites are to the other legal schools as the hard-shell southern Baptists are to mainstream Christians.

Ash’arism might not have triumphed, however, had not the Mongols swept through the Middle East, followed closely by the even more devastating epidemics of bubonic plague from 1346 onward.  These multiple blows ruined economy and culture, and left the region prostrate.

Science withered or ossified.  Folk wisdom continued to increase, and so did science in some marginal areas of Islam such as India and Central Asia.  But in general the torch was passed to Europe.  The roles of the Middle East and Europe were reversed.  Thus, writing on Ottoman Turkish medicine and natural history after the Turkish empire had passed its noon, Bernard Lewis reports that “they did not think in terms of the progress of research, the transformation of ideas, the gradual growth of knowledge.  The basic ideas of forming, testing and, if necessary, abandoning hypotheses remained alien to a society in which knowledge was conceived as a corpus of eternal verities which could be acquired, accumulated, transmitted, interpreted, and applied but not modified or transformed” (Lewis 1982:229).  Lewis also notes lack of interest in the rest of the world.  He correctly says such incuriosity is more typical of human societies than is the ethnographic curiosity of Europe in the modern period.  But the ancient Greeks and the early medieval Muslims had been more attentive to “the others.”

Lewis contrasts this strongly with the great days of early Islam, when the Near East was the scientific center of the world.  The Ottoman twilight may be an extreme case, but I encountered exactly those attitudes among older Chinese scholars in Hong Kong in 1965 and 1966.  Many of them told me soberly that the traditional fishermen I studied had six toes and never learned to swim.  A minute’s observation on the waterfront on any warm summer day would have sufficed to disprove both claims, but the claims were old and were in the Chinese literature, and that was enough!  Such attitudes trace back to the declining days of the Ming Dynasty in the 1500s, and are not unknown earlier, but (as in Islam) they did not take general hold until economic and political decline set in.  Nothing could be farther from genuine traditional ecological knowledge; those same fishermen (and the Yucatec Maya I later studied) constantly tested and added to their pragmatic knowledge of their worlds.

The Origins of Early Modern Science

Things were very different in Europe.  Early modern science arose after Near Eastern and other sciences were incorporated there.  Perhaps from China or the Near East came the idea of the garden as a microcosm of the world; this idea led many to start gardens in which they tried to grow everything they could find (Cook 2007:30).

One odd pioneer was Paracelsus (1493-1541; see Thick 2010:200).  Wildly nonconformist and eccentric, he dabbled in mining, alchemy, medicine, and philosophy during a wandering life working as miner, chemist and doctor.  He believed all nature and life were chemical, and could be reproduced in the chemist’s or alchemist’s laboratory.  Chemistry and alchemy were not differentiated at this time—they were one science.  He made, or at least established in the literature, perhaps the two most important breakthroughs in liberating modern science from Greek mistakes:  he saw that diseases were separate entities in their own right, and not just forms of humoral imbalance; and he saw that at least some chemical elements—mercury and sulphur, to be exact (and he added salt)—were not compounds of earth, air, fire and water, but were actual elements themselves.  The first of these profound insights was taken up later by Sydenham and others.  The second was not to be fully developed until Lavoisier.  Still, the idea was out there; the seed was sown.

Medieval herbals gave way successively to Brunfels’ major one of 1530-36, Fuchs’ great book of 1542, and then in the late 16th century the truly great work of Dodoens (Cook 2007; Ogilvie 2006).

Of course, a dramatic moment was the coming of New World plants to Europe, first in the rather small work of Nicolas Monardes of Sevilla (1925), but then in the enormous and stunning achievement of Francisco Hernandez in the late 16th century.  Thought by some recent writers to be lost, or buried in imperial Spanish libraries, Hernandez’s work was actually made available by the Lynx Academy (made famous by Galileo’s membership; Freedberg 2002; Saliba 2007).  It was republished in Mexico in an obscure wartime edition (Hernandez 1942), which languishes almost unknown; a new edition is needed.

Meanwhile, Bernardino de Sahagun was getting Aztec students and colleagues to record their knowledge, in the monumental Codex Florentinus (Sahagun 1950-1982).  These ethnoscience studies of Mexico are among the greatest achievements of plant exploration and of ethnography.

Only shortly before, Las Casas had led the successful movement to have Native Americans declared by the Catholic Church to be fully human and entitled to all human rights then recognized.  This was the beginning of the end for the appalling practices of early Spanish settlement, when Native Americans were enslaved and worked to death, or fed alive to dogs because they were cheaper than dogfood (Las Casas 1992; Pagden 1987; Varner and Varner 1983).  Las Casas risked his life for decades; the settler interests were openly after him.  Few political battles in history have been more heroic or more important.  Interestingly, Las Casas was the conservative in these fights; the modernizing “humanists” took the position that the conquerors had full rights to do anything they wanted to the “savages.”

Spain in the late 16th century was thus a dynamic place of forward thinking and spectacular achievement.  Monardes may have heard the masses of the great Sevillan composer Francisco Guerrero.  The year of Guerrero’s death, 1599, saw the birth in Sevilla of the master painter Velázquez.  Contemporary with Guerrero, the incomparable Tomás Luis de Victoria was shuttling between Spain and Rome (where Palestrina composed his vast repertoire at the same time).

“New Spain” in the New World was rapidly catching up.  Spanish composers moved to Mexico and South America, where they taught the locals, initiating a period of Baroque music that is little known but magnificent; among other things, Esteban Salas in Cuba became the first African-American to compose classical European music.  In the 17th century, Juan Ruiz de Alarcón migrated from his obscure Mexican birthplace to Spain, where he became one of the great dramatists and an absolutely unexcelled master of the Spanish language.  (He was one of those writers who can make strong men weep simply from the beauty of the sounds, even if they do not understand the Spanish.)  In short, Spain—including “New Spain”—in the 16th and early 17th centuries was a full participant in the brilliant and innovative civilization of Western Europe, along with Italy, France, the Netherlands and England.  Spain’s melancholy decline set in before the full scientific revolution (or non-revolution), but not before scholars like Monardes and Hernández had contributed in a major way to it.

Ogilvie (2006) cautions that the new discoveries in Europe and the Near East were far more important in the development of botanical science than these rather sketchily-known New World discoveries.  The latter did indeed have a major effect, however (Gaukroger 2006:359), even though Bernardino de Sahagun’s great work on Aztec knowledge, now known as the “Florentine Codex,” was not known in Europe at the time.

Arabic learning, by this time, was entering Europe via Arabic-literate European scholars as well as immigrant Arabic-speakers like Leo Africanus (d. ca. 1550).  Leo taught Arabic to the European Orientalist Jean-Albert Widmanstadt (1506-ca. 1559).  A contemporary was Guillaume Postel (1510-1581), whose astonishing career has recently been reconstructed (Saliba 2007:218-220).  Postel served on a mission to Constantinople, where he apparently learned Arabic or at least developed an interest that led to his doing so.  He read and annotated technical works of astronomy and probably other sciences, and briefly taught Arabic in Paris.  People like him evidently alerted Copernicus to Arabic astronomy, which clearly influenced his work.

Just as Greek had been the exciting new language to Petrarch and his generation, Arabic was to the 16th century.  Arabic manuscripts are widely found in old European libraries (notably the Vatican and, of course, Byzantine libraries), and were not read by Arab travelers alone.  With the Lynceans and their colleagues seeking out knowledge from the Aztecs to the Arabs, Europe was suddenly a very exciting place.

An example of knowledge flow from the Near East to Europe may be of interest.  The idea of circulation of the blood seems to have started in Islamic lands.  Bernard Lewis (2001:79-80) records that “a thirteenth-century Syrian physician called Ibn al-Nafīs” (d. 1288) worked out the concept (see also Kamal 1975:154).  His knowledge spread to Europe via “a Renaissance scholar called Andrea Alpago (died ca. 1520) who spent many years in Syria collecting and translating Arabic medical manuscripts” (Lewis 2001:80).  Michael Servetus picked up the idea, including Ibn al-Nafīs’ demonstration of the circulation from the heart to the lungs and back.  William Harvey (1578-1657) learned of this, and worked out—with stunning innovative brilliance—the whole circulation pattern, publishing the discovery in 1628 (Pormann and Savage-Smith 2007:47).  Galen and the Arabs had thought the blood was entirely consumed by the body, and renewed constantly in the liver.  They did not realize that the veins held a return flow; they thought the arteries carried pneuma and the veins carried nutrients.  Harvey’s genius was to see that blood actually circulates continually, ferrying nutrients around the whole body in a closed circuit.

The Dawn of Rapid Discovery Science

Europe has progressed fairly continuously since the final eclipse of the Roman Empire, though there were some checks in the 14th, 17th, and 18th centuries as well as in the Great Depression of the 1930s.  Knowledge in particular has risen steadily, even through those difficult periods.

Europe after 1500 presents a strikingly different case from both medieval Europe and the other civilizations of the world.  The flow of Near Eastern, Chinese, and Indian learning to Europe was one major input into the rise of what Randall Collins (1998) called “rapid discovery science.”

Yet the new wave really began with Thomas Aquinas, Roger Bacon, William of Ockham, and other medieval thinkers, and they of course were drawing on those Arab sources.  This makes the famous “scientific revolution” beloved of earlier generations of historians look like a rather slow process.  The current feeling is that dragging out a “revolution” over many centuries is ridiculous.  We live in an age, after all, when the computer revolution took only a generation.

The most comprehensive study of the intellectual background to the “revolution” is that of Gaukroger (2006, 2010).  Gaukroger sees a development from the scholasticism of the high medieval period, with its Aristotelian natural philosophy, to modern science.  Before the high middle ages, Plato and Christian dogma had been riding high, inhibiting learning.  Gaukroger provides very important observations on Plato, Augustine and Manichaeism (Gaukroger 2006:51-54).  Aristotle was rehabilitated thanks to the Arabs and to Thomas Aquinas.

One might argue, in defense of the old term, that what happened in the 17th century was the most momentous single change in all human history, rivaled only by the origin of agriculture.  (The latter was also a very slow process, leading to fights about whether it was a “revolution” or not.)  I will, here, follow Collins, and refer to the event as the invention (basically between 1540 and 1700) of rapid discovery science, rather than as a “scientific revolution.”

The new, empirical, discovery-oriented, innovation-seeking science arose in the 17th century, pursuant to the work of Francis Bacon (1561-1626), Galileo Galilei (1564-1642), William Harvey (1578-1657), René Descartes (1596-1650), and their correspondents.  Francis Bacon first emphasized the need for experiments to prove claims and advance knowledge; he was opposing magic and dogma based on anecdotal evidence, as well as sheer ignorance.  He also emphasized the need for cooperation; the lone-wolf savant was already a dated concept!   Like other scientists, he wished to strip away the veil of Nature and disclose her; she had been a goddess who “loved to hide herself,” and was still poetically so represented (Hadot 2006).  After Bacon, tension arose between scientists who wished to strip her and romantics who preferred the veil (Hadot 2006).

One remembers that religion and science were not opposed then; in fact science was seen as the discovery of God’s laws in nature.  Descartes and Boyle were great religious thinkers as well as scientists.  The great astronomer Johannes Kepler studied a supernova and realized that the star that guided the Magi to Jesus might well have been such; he sought records and regularities, calculated a date for Jesus’ birth (by then it was known that it was not 1 AD), and coupled it with astrology—still a science then, though a dubious one (Kemp 2009).  Kepler also believed in the Pythagorean music of the spheres, seeing earth and nature moved by heavenly harmonies “just as a farmer is moved by music to dance” (quoted in Kemp 2009).

The revolution was real, if slow. (See Bowler and Morus 2005 for the canonical story; Gaukroger 2006, 2010 for much more detail and a much more radical view.)  It involved finding more and more real-world problems with ancient atomism, mechanism, humoral medicine, and almost everything else, and thus more and more reason to go with new knowledge rather than old teachings.

A fascinating insight into the mind of the time is Malcolm Thick’s detailed biography of Sir Hugh Plat (1552-1608; Thick 2010).  Plat was an Elizabethan tradesman, a brewer by background, who succumbed to the insatiable curiosity of the time.  He never made a significant contribution to anything, but he worked with beaver-like intensity on chemistry, alchemy, food, medicine, cooking, gardening, and every other useful art he could find.  He amassed an incredible collection of ideas, methods, and tricks, most of which he tried himself.  Plat is important not because of what he accomplished but because his story was typical.  There were thousands of ordinary people in Europe of the time who became downright obsessive over useful knowledge or simply science for science’s sake.  They wanted to help the world and to advance learning.

Plat’s work is fascinatingly comparable to an almost exact contemporary, Song Yingxing (1587-1666?), who, oddly enough, has found a biographer at almost exactly the same time as Hugh Plat (Schäfer 2011).  Song was a much more organized, and one gathers a much more intelligent, man than Plat, and produced a famous work instead of a flock of rather ephemeral items, but the mentality was the same:  an obsessive urge to find out absolutely everything about useful arts.  Yet Song’s interests died with him, and no one like him existed in China for centuries.  Plat, on the other hand, was soon forgotten in the rush of new learning.

The same contrast—so bitter for China—is visible in herbals.  At the same time, Li Shizhen was compiling the greatest herbal in Chinese history and the greatest in the world up to his time (Li 2003, Chinese original 1593).  Li’s work was the culmination of a great herbal tradition going back for millennia.  But he was almost surpassed in his own lifetime, and was surpassed soon after it, as the new European herbal movement went from strength to strength:  Rembert Dodoens’ breakthrough herbal came in 1554, followed by Parkinson’s in 1629 and John Gerard’s (based on Dodoens’) in 1633.  Li remained the standard of Chinese herbals until the late 20th century.  Thus, in herbal wisdom as in useful knowledge, China was still up with the west in the 1590s, but had fallen hopelessly behind by 1650.  (One reason was the fall of the Ming Dynasty and its replacement by the often-repressive and scientifically sluggish Qing.)

Through all human history, people had followed received wisdom unless there was overwhelming reason to change.  The revolution consisted of the simple idea that we should seek new knowledge instead, using the best current observations.  These were ideally from experiments, but perfectly acceptable if they came from exploration and natural history, like Galileo’s work on astronomy (published in 1632), or even from pure theory, like Newton’s Principia mathematica (1687).

Robert Boyle (1627-1691) stated the case for experiment over received tradition in The Skeptical Chymist (2006/1661; cf. Freely 2009:214-215), taking the extremely significant position that even when he had no better theory to propose, he would not accept hallowed authority—he would wait for more experiments.  This is, of course, precisely the position that Thomas Kuhn said was hopeless in The Structure of Scientific Revolutions (Kuhn 1962).  But it worked for Boyle.

It is no mere coincidence that, just as earlier scholars had their “republic of letters” and Galileo and his friends their “Lynx Academy,” Boyle depended on an “Invisible College” for stimulus and conversation.  Scientists may study vacuums, but they cannot work in one.  The sociology of science is vital.

Much of the revolution consisted of new opportunities to observe and test.  Consider the persistence of Hippocratic-Galenic medicine.  Few indeed were the people in premodern times who had Galen’s opportunities to observe, experiment, learn, teach, and synthesize.  He had the enormous medical university in Pergamon, the whole resources of Rome, and his practice with gladiators and other hard-living people to draw on.  He was a brilliant synthesist and a dynamic writer.  The reason he was not superseded until the 17th century was that no one could really do it.  No one had the technology, the theories, the infrastructure of labs and hospitals, or the observational opportunities.  The Arabs and Chinese could, and did, supplement his ideas with enormous masses of data, information, and further qualification, but they were wise not to throw Galen over.  Radical rejection of his ideas was not fully accomplished until the 19th century, by which time modern microscopes, laboratories, and experimental apparatus had been perfected.  Galen’s anatomy was extended by Harvey, Willis and others; his failure to recognize diseases as specific entities was challenged by Paracelsus, then devastated by Sydenham.  This was a long, slow process, and followers of the eccentric Paracelsus were considered quacks and outsiders in the 16th century (Thick 2010).  The newness and uniqueness of syphilis had much to do with the change in attitude.

The same was true in chemistry.  Boyle’s courage in throwing out received wisdom on alchemy, particles, the nonexistence of vacuums, and elemental natures did not help him go beyond the ancients in regard to basic theory.  He discussed the atomic theory, but it too lacked real evidence at the time.  Above all, he realized that the world had proved to be far more complicated than the Greeks or the Renaissance scholars thought; he reviews dozens of sophisticated chemical experiments that proved this amply.  Old views simply would not fit.  But the future was unclear.

He could see that earth, air, fire and water were not much of a story, but he had no way of conceiving of the idea that earth, air and water were actually made up of simpler elements that were, or were comparable to, metals.  This involved reversing all conventional wisdom, which held that the basic elements combined to produce the metals.  This reversal was ultimately reached by Lavoisier in the 18th century.  It had to wait until improvements in experimental technique had isolated oxygen, nitrogen, and so forth.  Such a change in thinking was incredibly difficult to achieve, and truly revolutionary.  Finding out something new merely adds to knowledge, but this was a matter of turning upside down the whole basis of European thinking!  The earth-air-fire-water cosmology was basic to all aspects of (older) knowledge.  The recognition that these four substances broke down into simpler elements, rather than vice versa, was terribly hard-won.

Such new classification systems were extremely important.  Biological classification also underwent a basic paradigm shift.

The classification of living things, traditionally ascribed to Linnaeus, derives as much or more from the brilliant work of John Ray (1627-1705), an exact contemporary—in birth date at least—of Boyle.  Ray was a natural historian, fascinated with plants and birds, and a key person in uniting field work with laboratory work (specifically dissection; but note that the botanists had been there before him).

Ray developed the modern species concept—the idea that those organisms which can interbreed with each other form a species (Birkhead 2008:31).  In fact, Ray coined the term “species” in its modern use (Wikipedia, “John Ray”).  He also rejected both the idea that each species has to be viewed as a unique item (as Locke implied) and the idea that it is merely one variant on a more general Platonic type; he pioneered the modern science of classification on the basis of picking out important traits of all sorts to distinguish species and group them taxonomically (Gaukroger 2010:191-194).  He thus foregrounded reproduction and reproductive structures, later shown by Linnaeus to be the really criterial things to look at in classifying plants.

With this system, sex mattered.  Anatomy mattered, and reproductive anatomy mattered more than superficial structures; Ray was a great pioneer in elucidating the reproductive anatomy and physiology of birds.  (In this he built on a great tradition, going back to the surprisingly sensible if often wrong ideas of Aristotle.)  Leaving descendants mattered; Darwinian evolution depends on Ray and Linnaeus more than on the infamous Malthus.  Without this concept and its implications, there was no reason not to classify plants by their leaves, as many botanists did.  (The leaf-dependent botanists were later to attack Linnaeus for the “immorality” of his “sexual” system.)  Trees could be classified by their timber value.  We shall consider below a much more recent question about what to do with whales.

Ray’s work led to further development by Joseph Pitton de Tournefort, explorer of the Near East.  (I first encountered Tournefort as the man dubiously honored by Brassica tournefortii, a loathed and hated weed from North Africa that has invaded my southern California homeland.  But it tastes good—it is a wild broccoli—and thus I have a soft spot in my heart, or rather in my stomach, for it.)  The taxonomic work of Tournefort and his contemporaries led directly to Linnaeus.

Less beneficial, perhaps, was Ray’s crucial role in developing the “argument by design” for the existence of God (Birkhead 2008).  Later made famous by William Paley, this survives as the universal argument for “intelligent design” today.  It had the advantage of setting Darwin wondering what really caused the design in the world.  Natural selection was his answer—firm enough that a modern intelligent design advocate (like Francis Collins) must assume God, like modern artificial-intelligence designers, uses it to fine-tune his creation.

New and rigorous classification systems for stars, minerals, mental illnesses, and everything else imaginable were to follow, and they had and have their own costs and biases (Foucault 1970; Kassam 2009).  Today we have whole classification systems for everything from universes to subatomic particles.  Atoms, when discovered, were thought to be the true atoms of Greek thought—the final particles that could not be subdivided further.  (“Atom” comes from Greek atomos, “uncuttable.”)  Another bad guess.

This new wave’s creators saw themselves as a “Republic of Letters” (Gaukroger 2010; Ogilvie 2006:82ff; Rudwick 2005).  Educated people all over Europe were in constant correspondence with each other.  This correspondence was relatively unmarred by the hatreds and political games that made daily life in Renaissance Europe so insecure.  People respected each other across lines of nation and faith.  The common language, Latin, was not the property of any existing polity.  Members in this borderless but well-recognized Republic treated each other according to unwritten, or rarely-written, rules of respect and courtesy.

Science and humanities were one.  Describing a typical case, Martin Kemp (2008) points out connections between Peter Breughel’s extremely accurate and innovative representations of landscape and the maps of Abraham Ortelius, a cartographer who was a friend of Breughel.

Of course, all academics will realize that those rules of respect did not extend to debates about theory!  A Protestant could respect and tolerate a Catholic or Jew, but if anyone dared to cross his pet idea on plant reproduction or the treatment of ulcers, the words flew like enraged cats.  That was part of the game—part of business in the Republic of Letters.  This information flow presaged the value of scientific journals (invented in the 17th century but not really important till the 19th), and then the Internet; the vast network held together by letters in the 17th century was exactly like the scientific network on the Internet today.  All the Internet has added is speed—important, to be sure.

Religious solidarity and debate stood behind much of the vigor of debates in science, with Protestants and Jews always being on the defensive at first, and having to argue trenchantly for their beliefs.  This led them to be both original and persistent in thinking (Merton 1973; Morton 1981).  But, also, the wars of religion in the 16th and 17th centuries led to major cynicism about organized religion, and contributed mightily to retreat into science as an alternative way of knowing the Divine Will and into the Republic of Letters as an alternative and more decent way of being social.  The skepticism that surfaces in Montaigne, grows in Bayle, and climaxes in Voltaire fed a search for truths that were not simply matters of unprovable church dogma.

This development was exceedingly slow and uneven, because, contrary to conventional wisdom, the middle ages had plenty of sophisticated observation and argument, and the 17th and even 18th centuries had plenty of obscurantist, mystical, and blindly-Aristotelian holdovers.  Brilliant adversarial argument, technological progress, and economic benefits of forward research were all sporadic and contingent.  They did not suddenly cut in at the glad dawn in 1620 or 1650 or any other year.

What did cut in was neatly summarized by van Helmont, the Flemish physician who concluded from experiment that plants grew by combining air and water:  “Neither doth the reading of Books make us to be of the properties [of simples], but by observation” (quoted in Wear 2007:98).  Helmont had much to do with inventing the modern concept of “disease”—a specifiable entity, distinct from its symptoms.  The coming of plague and syphilis, clearly entities though very changeable in symptomatology and clearly different from anything in Herodotus or Galen, had even more to do with the origin of this concept; people simply could not ignore them.

Significantly, Helmont’s own work was badly flawed, not least because of his many mystical and even visionary “observations” (see Wear 2000).  17th-century science did not suddenly discover Truth in the face of learned Error.  In fact, Galen’s and Avicenna’s old books remained much better guides to medical practice than Helmont’s rather wild ideas.  What mattered was that Helmont, and many others, were breaking away from reliance on the books, and rapidly developing a science based on original observation and test.  Their willingness to endure false starts as the price of radical breakthrough is far more important, to science and to history, than their initial successes at replacing the classics with better ideas.

Deborah Harkness (2008) has shown that this type of activity—a feverish quest for anything new, exciting, and informative—was exceedingly widespread in Elizabethan England, and by inference in much of urban Europe.  Everyone from farm workers and craftsmen to lords and high court officials was frantically seeking anything new.  Things that improved manufacturing and promoted profit were especially desired, but people were almost as obsessed with new stars, rare plants, and odd rocks as with more solid matters like improving metallurgy and arms manufacture.  This ferment contrasts with China’s relatively staid attitude to innovation.  Even the works of Elman and of William Rowe, which do disclose much intellectual and craft activity in early modern China, have not produced anything similar.  The Tiangong Kaiwu was roughly contemporary with, and similar to, Hugh Plat’s Elizabethan work that gives its name to Harkness’ volume, but unlike Plat’s book it was an isolated incident, not a presage of more and better to come.  Similarly, Li Shizhen’s great herbal came out at almost exactly the same time as the comparable works of Dodoens and Gerard.  (The relations of those two—with Gerard as plagiarist extraordinaire—are described in detail by Harkness.)  But Li’s was the last great Chinese herbal, Dodoens’ the first great European one.  By the early 1600s, Europe had surpassed China.

Harkness wisely includes alchemy and astrology among the useful sciences (see above on the Near East); no one at that time had a clue that one could not turn lead into gold or dirt into silver.  Recall that earth was still an “element” then; gold and silver were not.  Equally amazing things were being done daily in smelting and refining.  Similarly, everyone could see the sun’s influence on all life, and the moon’s control of tides; inexorable logic “proved” that the other heavenly bodies must have some influence.  The problem was that reality did not follow logic or common sense.

Moreover, alchemy, at least, sometimes worked.  We have a careful eyewitness account of a modern Central Asian alchemist turning dirt into gold (cited in Idries Shah’s Oriental Magic, 1956).  Fortunately, the account is extremely perceptive, allowing us to perceive that the good sage was simply panning a very small amount of finely disseminated gold out of a very large amount of alluvial soil. He added a good deal of magical rigmarole, but the actual process is clear.  He seems to have been genuinely convinced he was making the gold; finely disseminated gold in alluvial dirt is far from easy to see.  Countless such alluvial separations must have lain behind alchemy.  Similarly, mercury can extract gold from crushed auriferous rock, and is routinely used for that purpose today; if the gold particles are too small to see—as they often are—an alchemist would surely have thought he was turning rock to gold, via the “mercuric” power that led to naming the liquid metal after the trickster and messenger god.  And of course much of alchemy was spiritual, not physical.

The basic hopelessness of alchemy, however, was proved by Robert Boyle, in The Skeptical Chymist.  Boyle critiqued Galen, Paracelsus, and Helmont for reductionism without evidence, and upheld a view that was, indeed, skeptical; he saw no way to simplify chemistry.  He did not really substitute a new paradigm for an old one.

What mattered was that loyalty to and reliance on the old texts had given way to loyalty to independent verification and reliance on one’s own experiments and observations.  Boyle was not afraid to admit frank ignorance and to throw out theories without having much better to substitute.  Earlier generations, even though they were perfectly aware of the imperfection of old texts and the benefits of observation, did not trust their own innovative findings unless those clearly improved on all that had gone before.  Science thus progressed slowly and cautiously.  Boyle did not throw caution to the winds, but he had come to be a leader in a generation that preferred their own experiments to old stories, no matter how little their new experiments appeared to accomplish.  They were on the way to the modern period, when hypotheses and theories are expected to fail and to be superseded in a few years, and when “hard science” departments tell university libraries not to bother keeping journals more than a year or two (as I observed during my years chairing a university library committee).

Europe the Different

Floods of ink have been expended on why China, India and the Near East did not pick up on their own innovations, and why it was a tiny, marginal backwater of the Eurasian continent that exploded into rapid discovery science.

Clearly, it is Europe that is the exception.  The normal course of human events is to see knowledge advance slowly and fairly steadily, as it has done in all societies over thousands of years.  Chinese and Near Eastern science did not stop advancing when Europe took over the lead; they kept on.  Nor did the Maya, Inuit, Northwest Coast Native peoples, or Australian Aborigines stagnate or cease advancing at any point in their history.  They kept learning more.  Archaeology shows, in fact, that most such societies kept increasing their knowledge at exponential rather than linear rates.  Certainly the Northwest Coast peoples learned dramatically more in the last couple of millennia.  But the exponent was very small.  Europe’s since 1500 has been much larger.  In the 20th century, the number of scientific publications doubled every few years.  The doubling time continues to decrease.
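A rough illustrative model, not a measured law, makes the contrast concrete.  If knowledge doubles every T years, the stock after t years is

\[ N(t) = N_0 \cdot 2^{t/T}. \]

With a doubling time of about 15 years, a figure sometimes cited for modern scientific publication, output multiplies roughly a hundredfold per century (since 2^{100/15} ≈ 100); with a doubling time of 500 years, the same century brings barely 15 percent growth.  Both curves are exponential, but only one looks explosive on a human time scale.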

This is quite unnatural for humans.  People are normally interested in their immediate social group, and in becoming better liked and admired within it.  All their effort, except for minimal livelihood-maintenance, goes into social games and gossip.  (People do not work for “money”; they work for what money can buy—necessities and status.  Once they have the bare necessities, and perhaps a tiny bit of solitary enjoyment, everything else goes for social acceptance and status.)  Devoting oneself to science—to the dispassionate search for impersonal truth—is truly weird by human standards.  We still think of people with this interest as “nerds” and “geeks.”  Many of them are indeed somewhat autistic.  When I started teaching, I thought young people were interested in the world.  All I had to do was present information.  I learned that that was the last and least of my tasks.  The great teachers are those who can get the students interested in anything beyond their immediate social life.

In fact, interest in learning more about the natural world is—in my considerable experience—actually much greater in traditional small-scale societies than in modern, science-conscious Europe and America.  I have spent many years living with Maya farmers, Northwest Coast Natives, and Chinese fisherfolk, and certainly the level of interest in nature and natural things was much greater among them than among modern Americans.  They were correspondingly less single-mindedly obsessed with social life.  They lacked, for example, the fascination with “celebs” that reveals itself in countless magazines and TV programs, and that much earlier revealed itself in ancient Greek and Roman adulation of actors and gladiators.  They were also much quicker to pick up skills and knowledge from other people and peoples than American farmers and craftspeople are.

Why did Europe in the 16th and 17th centuries suddenly become obsessed with Japanese medicines, Indonesian shells, and Near Eastern flowers?  Why did so many Europeans take breaks from the Machiavellian social games of their age to study such things?  Pliny had studied, and indeed invented, “natural history,” but his work became a “classic”—quoted, cited, unread, and unimitated—in its own time; natural history grew under Arab care, but truly flourished only in post-1400 Europe.

No such changes took place in the other lands.  If anything, they went the other way.  Near Eastern science declined sadly during this period.  (The Ottoman Empire was a partial contrast, but its history seems almost more European than Near Eastern at this time.)  India was preoccupied with horrific invasions and conquests by Tamerlane, Babur, and lesser lights.

China spent this period trapped in the Ming Dynasty, whose frequently unstable rulers and frozen, overcentralized bureaucracy stifled change.  Technological and scientific progress did occur, but it was slow.  Ming and Qing autocracy is surely the major reason—revisionists to the contrary notwithstanding (see e.g. Anderson 1988; Mote 1999 gives the best, most balanced discussion of the issue, suspending judgment but making a solid case).  In spite of Li Shizhen and his great innovative herbal of 1593, Chinese science was always deeply deferential to the past, discouraging innovative theories and ideas.  This point has been greatly overmade in western sources (often to the point of racism), and is now a cliché, but it is not without truth.  I have heard many educated Chinese strongly maintain points inscribed in old books but clearly and visibly wrong for present conditions.  In Hong Kong I was repeatedly told, for instance, that the fishermen I studied could not swim.  Anyone could see otherwise on a walk along any waterfront on any warm day.  But the old books said fishermen don’t swim.  In fairness to the Chinese, I have run into the same faith in books, as opposed to observation, in the United States and Europe.

China in the Song Dynasty was ahead of Europe in every field, and ahead of the Near East in most areas of science and enquiry.  The Mongol Empire, and its continuation in China’s Yuan Dynasty, instituted a massive knowledge transfer (Anderson ms.; Paul Buell, ongoing research; Buell et al. 2000), leveling the playing field and introducing many Chinese accomplishments to the western world.  Gunpowder, cannon, the compass, printing, chemical technology, ceramic skills and many other innovations spread across Eurasia.  However, the Mongol yoke was repressive in China.  The end of Yuan saw violence and chaos.  The new Ming Dynasty brought in much worse autocracy and repression.  After an uneven but fairly successful start, the dynasty settled down after the 1420s to real stagnation.

A significant and highly visible symptom is the paralysis of philosophy.  The spectacular flowering of Buddhist, Taoist, and Neo-Confucian thought under Song and Yuan had a deeply conservative tinge, but at least it was a massive intellectual endeavor.  Highly innovative ideas were generated, often in the name of conservatism.  (An irony not exactly unknown in the western world; someone has remarked that all successful revolutions promise “return to the good old days.”)  By contrast, the only dramatic philosophical innovation of the Ming Dynasty was that of Wang Yangming.  Wang was a high official with a brilliant career as censor and general.  He retired to propagate his personal mix of Confucianism and Buddhism, an “inner light” philosophy strikingly similar to Quaker thought but accompanied by a profound skepticism about worldly success and worldly affairs in general.  He moved Confucian philosophy much closer to the quietism and mysticism of monastic Buddhism.  Wang was one of the key figures in turning Chinese intellectuals inward toward quietism, which in turn was one of the causes of China’s failure to equal Europe in scientific and technical progress.

Larry Israel (2008) has given us a superb dramatic account of Wang’s subduing of an apparently psychopathic rogue prince of the Ming Dynasty.  It is another side of Wang.  By a combination of absolutely brilliant generalship and political savvy (not without Machiavellian scheming), he parlayed a very weak position with about 10,000 troops into a total victory over a huge rebellion involving—according to Wang’s reports—some 100,000 troops, many of them hardened bandits and outlaws.  Wang is described as maintaining perfect cool through it all, and showing perfect timing.

It is interesting to compare him with his near-contemporary Michel de Montaigne, another soldier turned sage.  Wang was far higher up the administrative and military ladder than Montaigne, but had the same ambivalence about it and the same desire to retire to meditative and isolated pursuits as soon as he could.  The great similarity in life course and the real similarity in philosophy do not extend to any similarity in the effects of their thought over the long term.  Montaigne’s skepticism and meditative realism were enormously liberating to European intellectuals (see e.g. Pascal’s Pensées), and Montaigne thus became a major inspiration of the Enlightenment.

Montaigne remained less quietist and escapist than Wang, but the real difference was in the times.  If the world had been different, Wang might have started a Chinese enlightenment, and Montaigne might have turned Europeans inward to arid meditation.  Wang’s thought was perfect at feeding the escapism of Chinese intellectuals faced with a hopelessly stagnant and degenerate court.  Montaigne’s rather similar thought was perfect at feeding the idealism and merciless enquiry of European intellectuals in a time of rapid change, dynamic expansion of empires, and terrific contestation of religion and rising autocracy (cf. Perry Anderson 1974).

A huge part of the problem was that Chinese intellectuals served at the mercy of the court, and the Ming court was erratic and punitive, regularly condemning innovators and critics of all kinds (Wang barely survived).  By contrast, many of Europe’s first scientists were minor nobles who had little hope of major advancement but no fear of falling far.  Moreover, like scientists everywhere until the 21st century, they were males who had long-suffering wives to do the social and family work.  Today, married female scientists still usually have all the responsibility of remembering birthdays, organizing children’s parties, and being nice to the boss at dinner; some resent it, some enjoy it, but all recognize it is a special and unfair burden.  Throughout the world in premodern times, science was the preserve of males, and at first of well-born ones.  Only they had the leisure and resources to pursue science.  They were often young and adventurous.  Today, the average age of scientists who make major innovations and get Nobel prizes is around 38 (Berg 2007); in math and physics it is considerably younger than that.  In the Renaissance and early modern period, averages would have been even lower, because of the shorter lifespans of those days.

Benjamin Elman (2005) has shown that the clichés about China’s failure to learn from Europe are not adequate accounts.  The Jesuits in the 17th and early 18th centuries did not bring modern European science; they brought Aristotelian knowledge and old, pre-Copernican astronomy, already discredited in Europe.  The Chinese already had science as good as that.  The Jesuits failed to introduce calculus and other modern mathematics.  The Chinese took what they could use—clocks, some mapping techniques—and saw correctly that the rest was not worth taking.  The Jesuits lost their foothold in China, and the order was eventually suppressed entirely (to be revived much later); China had no real chance to learn until other missionaries flooded in during the 19th century.  However, the Chinese continued to benefit from, and develop, the knowledge they had learned from the Jesuits.  (Interestingly, this point had been made 60 years earlier by the anthropologist A. L. Kroeber [1944:196], without the materials available to Elman—showing what can be done by a relatively unbiased scholar in spite of the lack of any good information on just how successful Chinese science was.)

Elman systematically compares scientific fields ranging from mathematics and engineering to botany and medicine.  (Among other things, he notes that western medicine had some impact at the same time that the indigenous Chinese medical traditions were moving from a focus on cold to a more balanced focus on both cold and heat as causes of illness.  Like most premodern peoples, their naturalistic medical traditions gave heavy importance to those environmental factors.)  He misses the one that would best make his case:  nutrition.  Chinese nutritional science was ahead of the west’s till the very end of the 19th century.  This was one case in which the west should have done the learning.

After that, China learned about as fast as any country did.  Japan did not get its famous clear lead over China in borrowing from the west till late in the 19th century.  Elman sums up a general current opinion that China’s loss of the Sino-Japanese War of 1894-95 was not because China was behind technologically, but because China was corrupt and misgoverned.  The Empress Dowager’s infamous reallocation of the navy’s budget to redecorate the Summer Palace was only one problem!

This being said, the Chinese were indeed resistant to western knowledge, slow to realize its importance, slow to take it up, slow to see that their own traditions were lacking.  Elman is certainly right, both intellectually and morally, in stressing the Chinese successes, but he may go a bit far the other way.  He sometimes forgets that only a tiny elite adopted any western knowledge.  He admits the Jesuits had no effect outside the court circles—they were sequestered from the people.  In fact, China missed its chances till too late, and its borrowings were then interrupted by the appalling chaos of the 20th century.  Only in the 21st century did China finally drop its intellectual isolationism.

A Few Notes on Later Change

Science as a reliable cranker-out of money-making technologies is a 19th-century perception.  During the period of the (supposed!) “scientific revolution,” craftsmen, not scientists, made the profitable innovations.  The brilliant and pathbreaking innovations in agriculture, textiles, dyeing, mining, and other arts, from the 1400s on (after Europe had internalized Moorish introductions), are all anonymous.  While Bacon and Descartes were making themselves famous, the really important technological developments were being made by farmers and laborers, whose names no one recorded but whose deeds live on in every bite we take and every fiber we wear.  Few things are more moving, or humbling, than realizing how much we now owe to countless unnamed men and women who lived quiet good lives while the rich and famous did little besides pile up corpses, or, at best, write learned Latin tomes of speculation.

On the other hand, though some at the time said that science only satisfied “idle” curiosity, the very use of the invidious word “idle” indicates that more “serious” game was afoot.  Besides the obvious utility of medicine, there were countless works on transport, mining, agriculture, water management, architecture, and every other art of life.  As recognized in the old phrase “Renaissance man,” a well-known artist, politician, or literary person might make scientific advances in practical fields.  Most famously, Leonardo da Vinci made contributions (or at least plans for contributions) to many.

All this was learned much less rapidly than we once thought.  It took generations for the whole complex of observation, experiment, open publication, and forward-looking, inquiring, argumentative science to take wide hold.  Moreover, the founders’ mistakes conditioned science for years, or even centuries.  Worst in this regard was Descartes’ claim that nonhuman animals are mere machines, without true consciousness.  Not until the late 20th century was this idea—so pernicious in its effects—definitively excised from serious science.

However, the idea that Descartes is responsible for mind-body dualism, or for the notion that animals are mere machines, rests on the assumption that major cultural change occurs because a brilliant individual has a great insight which then trickles down.  This is not how culture change occurs.  It comes from continual interaction with the natural and social world, leading to general learning and constantly re-negotiated conclusions.  Descartes merely put fancy words to what had been church dogma for 1600 years.  He had his influence, but it was minor.

Medicine too reveals a slow, halting progress.  Notable innovators were Hooke, Boyle, and Thomas Sydenham, who developed from the Helmontian canon further ideas of nosology—systematic classification of named disease entities, rather than mere description of symptoms and inferred humoral causes—and laid the foundations for modern epidemiology (Gaukroger 2006:349-351; Wear 2000).  Boyle, ever the innovative and devoted mind, even counseled learning medical knowledge from Native Americans, long foreshadowing modern plant hunting (Gaukroger 2006:374).  However, Galenic medicine held sway through the 19th century, and in marginal areas right through the 20th.

However slow and uneven this all was, dynamic, forward-looking figures like Galileo, Descartes (who invented mathematical modeling as a systematic scientific procedure), Hooke, and Boyle did indeed transform the world.  The really critical element was their insistence on observation and experiment.  Before them (and even for a long time after), Europe could never shake off its devotion to prior authority.  Rapid-discovery science came when people realized that Aristotle, Avicenna, and other classics were simply not reliable and had to be tested and supplemented.

European expansion and the rise of entrepreneurship have long been prime suspects in all this (Marx, Weber, and almost everyone else in the game mentioned them).  The correlation of maritime expansion, discovery, nascent mercantile capitalism, and science—the four developing in about that order—is too clear to ignore.

This had a background not only in the Mediterranean trade (Braudel 1973) but also in the European fishery, which developed early, and expanded into a high-seas, far-waters fishery by the 1400s (see e.g. Cook 2007:7-8).  This led to Europe’s quickly taking full advantage of Chinese and Arab maritime advances.  Europeans developed navigation and seamanship to a unique and unprecedented level by 1450.  Holland and Portugal, the nations most dependent on fisheries anywhere, took the lead.

After that, mercantile values took over: the need for honest dealing (within reason!), for enterprise, for factual information, and above all for keeping up on every bit of new knowledge and speculation.  Everything could be useful in getting an advantage in trade.  Even clear prose (necessary to scientific writing, at least today) may owe much to this need of merchants for simple, direct information (Cook 2007:56, 408-409).  The whole organization of the new science was influenced by the organization and institutions of the new mercantile capitalism.  Also, merchants wanted tangible signs of their travels and adventures: gardens, curiosity cabinets.

This classic theory has recently received a powerful boost from Harold Cook, who traces out the rise of Dutch business and science in Matters of Exchange:  Commerce, Medicine, and Science in the Dutch Golden Age (2007).  He shows that Dutch science was very much a matter of cataloging and processing the new items the Dutch were discovering in Indonesia, Japan, Brazil, and elsewhere.

Terms like “scientist” and “biology” date from the 19th century, as does “science” in its modern sense.  (“Scientist,” coined by William Whewell, was not really a new word; it merely replaced earlier terms such as “savant” and the by-then obsolete “scient.”)

In the early modern period, the people in question were simply called “scholars,” because no one clearly separated science from theology, philosophy, and other branches of knowledge.  Enquiry was enquiry.  Only in the 19th century did disciplines become so distinctive, formal, and methodologically separate that they had to have their own names.

By the late 19th century, folk knowledge of the world had separated from formal knowledge so completely that yet another set of new terms appeared.  Consider the term “ethnobotany,” coined in 1895 by John Harshberger to refer to the botanical knowledge of local ethnic groups.  This was an old field of study; Dioscorides really started it, and the 16th-century herbalists did it with enthusiasm—Ogilvie (2006:71) called it “ethnobotany ante litteram.”  Linnaeus drew heavily on folk knowledge in his botanical work.  China had a parallel tradition; Li Shizhen drew on folk wisdom.  But no one saw folk botany as a separate and distinctive field until the 1890s, when science became so formalized and laboratory-based that the old folk science became a different thing in people’s minds.

Conclusions on Science History

Looking back over the preceding sections, we see that the main visible difference was the explosion of trade and conquest, especially—but far from solely—in the 15th and 16th centuries.  This brought Europe into a situation where it was forced to deal with a fantastically increased mass of materials to classify, study, and absorb.  It simply could not ignore the new peoples, plants, animals, and so on that it had acquired.

Exactly the same problem faced the Greeks when they grew from tiny city-states to world empire between 600 and 300 BCE, and they did exactly the same thing, leading to the scientific progress of the period.  The golden age of Chinese philosophy came in a similar expansionist period at the same time, but Chinese science peaked between 500 and 1200 A.D., with rapid expansion of contacts with the rest of the world.  The Arabs repeated the story when they exploded onto the world scene in the 600s and 700s.  In all cases, stiffening into empire was deadly; it slowed Greek science in the Hellenistic period, and virtually shut down Chinese and Near Eastern science after the Mongol conquests.  These conquests did much direct damage, but their real effect was to introduce directly—or create through reaction—a totalitarian style of rule.  China’s Yuan and especially Ming dynasties were hostile to change and innovation; Qing was less so, but not by much.  The change in the Near East was even more dramatic.  The spectacular flood of scientific works shut off completely after the Mongols (and the plagues that soon followed).  There was hardly a new book of science from then until modern European scientific works began to be translated.  Even today, the Near East lags almost all the rest of the world—including some far less developed regions—in science.  As expected, the worst lag is in the most autocratic countries.  The least lag is found in the more politically sane nations, such as Turkey, where both liberal Hanafi Islam and a European window have led to greater openness.

Europe and America have not, so far, suffered totalitarian death, but the United States from 2010 onward shows exactly how this happens.  The far right wing of the Republican party took over the House of Representatives and most states in that year, and immediately began a full-scale assault on the funding, the independence, and the freedom of teaching of the country’s research and teaching institutions, from grade school to the National Science Foundation.  An almost total defunding of science was advocated.  In education, there were proposals to replace trained, independent teachers overseeing classes of 20-30 with low-paid, low-skilled persons, lacking job security, put in charge of classes of 60-80.  Something very much like this happened in Ming and Qing China.

It also happened many times over in Europe, but there were always countries where scientists and scholars could take refuge:  the Netherlands in the 17th century, England in the 18th, France in the 19th, America in the 20th, and various lesser states at various times.  The European world’s fractionation saved it.  No one state could take over, and no one could repress all science.  In China, by contrast, the paranoid Ming Dynasty could shut down almost all progress throughout the whole region.  In the Near East, the Turkish and Persian empires did more or less the same thing.

In Europe, a feedback process developed.  The freer states promoted trade and commerce, which in turn stimulated more democracy (for various well-understood reasons).  This encouraged more searches for knowledge, which were relatively free of dogmatic interference.  Any advance in knowledge could provide an edge in trade.  The rise of Republican anti-intellectualism in the United States tracked the replacement of trade and commerce by economic domination through giant primary-production firms, especially oil and coal interests.

Religion

Another factor was the tension between religious sects.  Robert Merton (1973) and A. G. Morton (1981) pointed out a connection between religious debate and science.  Merton saw Protestantism as ideologically hospitable to science.  I find Morton’s explanation far more persuasive.  He thought that the arguments between sects over “absolute truth” created a world in which people seriously maintained minority views against all comers, argued fiercely for them, and sought proof from sources outside everyday society.  They were used to seeing truth as defensible even if unpopular.

Cook (2007) confirms this by noting how many religious dissenters wound up finding refuge in the Netherlands—Spinoza and Descartes are only the most famous cases—and how many more resorted to publishing, teaching, or researching there.  Cook takes pains to point out that Dutch leadership in intellectual fields rapidly declined as the Netherlands lost political power, religious freedom, and mercantile edge (the three seem to have declined in a feedback relationship with each other; see also Israel 1995 for enormous detail on these matters).  Gaukroger (2006) has argued, reasonably enough, for a much more complex relationship, but I think Merton’s theory still applies, however much more there is to say.

Accordingly, the separation of science and religion is a product of the Enlightenment, and the “conflict” between science and religion is an 18th-19th-century innovation (Gaukroger 2006; Gould 1999; Rudwick 2005, 2008).  Before that, scientists, like everyone else, took God and the supernatural realm for granted (though there were exceptions by the 18th century).  Few saw a conflict, though the separation was beginning to be evident in the work of Spinoza and Descartes.  They deserve some of the blame for separating the natural from the moral (see Cook 2007:240-266).  Descartes inquired deeply into passions, mind, and soul, developing more or less mechanistic models whose more oversimplified aspects still bedevil us today.  Scientists like Newton and Boyle were not only intensely religious men, but they saw their science as a pillar of religious devotion—a devout exploration of God’s creation.  As late as the 18th century, Hume still argued that no one could seriously be an atheist, and was astonished when he visited France and met a roomful of them (Gaukroger 2006:27).  God was already seen as a clockmaker by the 14th century (Hadot 2006:85, 127), and by the 17th it appeared to many scientists that their job was to understand the divine clockwork.

The conflict of science and religion arose only after Archbishop Ussher and other rationalists overdefined the Bible’s position on reality, and had their claims shown to be ridiculous (Rudwick 2005, 2008).  Between fundamentalist “literalism” and 19th-century science there is, indeed, an unbridgeable gap.  However, no one who reads the Bible seriously can maintain a purely literalist position.  There are too many lines like Deuteronomy 10:16:  “Circumcise therefore the foreskin of your heart.”  (This line is repeatedly discussed in the Bible, from the Prophets down to Paul’s Epistle to the Romans, which discourses on it at great length.)  And the “Virgin Birth” is hard to square with Jesus’ lineage of “begats” traced through Joseph.  Be that as it may, today we are stuck with the conflict, sometimes in extreme forms, as when Richard Dawkins and the Kansas school board face off.

A conflict of science and philosophy arose too, but stayed mild.  Philosophy, however, fell from guiding the world (through the Middle Ages) to guiding nations (through the Renaissance and early modern periods) to guiding movements (through the 19th century) to being a game.  By the mid-twentieth century it had some function in guiding science, but had ceased to be a living force in guiding the world.  Economics has replaced it in many countries.  Extremist political ideology—fascism, communism, and religious extremism—has replaced it elsewhere.  Philosophical ethics have thinned out, though the Kantian ethics of Jürgen Habermas and John Rawls have recently been influential.

Mastering Nature

The early concern with “mastery” of nature has been greatly exaggerated in recent environmentalist books.  It was certainly there, but, like the conflict with religion, it was largely a creation of the post-Enlightenment world.  And it was not to last; biology has now shifted its concern to saving what is left rather than destroying everything for immediate profit.

The 19th century was, notoriously, the climactic period for science as nature-mastering, but it was also the age that gave birth to conservation as a serious field of study.  Modern environmentalists read with astonishment George Perkins Marsh’s great book Man and Nature (2003 [1864]).  This book started the modern conservation movement.  One of the greatest works of 19th century science, it profoundly transformed thinking about forests, waters, sands, and indeed the whole earth’s surface.  Yet it is unequivocally committed to mastery and Progress, not preservation.  Marsh forthrightly prefers tree plantations to natural forests, and unquestioningly advocates draining wetlands.  He wished not to stop human management of the world, but to substitute good management for bad management.  His only sop to preservation is an awareness of the truth later enshrined in the proverb “Nature always bats last.”  He knew, for instance, that constraining rivers with levees was self-defeating if the river simply aggraded its bed and eventually burst the banks.

This being said, the importance of elite male power in determining science has been much exaggerated in some of the literature (especially the post-Foucault tradition).  Scientists were a rare breed. More to the point, they were self-selected to be concerned with objective, dispassionate knowledge (even if “useful”), and they had to give up any hope of real secular power to pursue this goal. Science was a full-time job in those days.  So was getting and holding power.

A few people combined the two (usually badly), but most could not.  Scientists and scholars were a dedicated and unconventional breed.  Many, from Spinoza to Darwin, were interested in the very opposite of worldly power, and risked not only their power but sometimes their lives.  (Spinoza’s life was in danger for his religious views, not his lens-making innovations, but the two were not unrelated in that age.  See Damasio 2003.)  Moreover, not everyone in those days was the slave of an insensate ideology.  Thoreau was not alone in his counter-vision of the good.  Certainly, the great plant-lovers and plant explorers of old, from Dioscorides to Rauwolf and Bauhin and onward through Linnaeus and Asa Gray, were not unappreciative of nature.

And even the stereotype of male power is inadequate; many of these sages had female students, and indeed by the end of the 19th century botany was a common female pursuit.  Some of the pioneer botanists of the Americas were women, including incredibly intrepid ones like Kate Brandegee, who rode alone through thousands of miles of unexplored, bandit-infested parts of Mexico at the turn of the last century.

We need to re-evaluate the whole field of science-as-power.  Governments, especially techno-authoritarian ones like Bismarck’s Prussia and the 20th century dictatorships, most certainly saw “science” and technology as ways to assert control over both nature and people.  Scientists usually did not think that way, though more than a few did.  This leads to a certain disjunction.  Even in the area of medicine, where Michel Foucault’s case is strong and well-made (Foucault 1973), there is a huge contrast between medical innovation and medical care delivery.  Medical innovation was classically the work of loners (de Kruif 1926), from Joseph Lister to Maurice Hilleman (developer of the MMR vaccine).  Even the greatest innovators in 19th-century medicine, Robert Koch and Louis Pasteur, worked with a few students, and were less than totally appreciated by the medical establishment of the time.  Often, these loners were terribly persecuted for their innovative activities, as Semmelweis was in Hungary (Gortvay and Zoltán 1968) and Crawford Long, discoverer of anesthesia, in America.  (Dwelling in the obscurantist “Old South,” at a time when black slavery was considered a Biblical command, Long was attacked for thwarting God’s plan to make humans suffer!)  By contrast, medical care delivery involves asserting control over patients.  At best this is true caring, but usually it means batch-processing them for convenience and economy—regarding their humanity merely as an annoyance.  No one who has been through a modern clinic needs a citation for this (but see Foucault 1973).

Science and Ethnoscience, part 1: Science

Monday, August 22nd, 2011

SCIENCE AND ETHNOSCIENCE

E. N. Anderson

Dept. of Anthropology, University of California, Riverside

Part 1.  Science and Ethnobiology

Science and Knowledge

The present paper questions the distinctions between “science,” “religion,” “traditional ecological knowledge,” and any other divisions of knowledge that may sometimes be barriers in the way of Truth.

I will make this case via my now rather long experience in ethnobiology.  Ethnobiology is the study of the biological knowledge of particular ethnic groups.  It is part of what is now called “traditional ecological knowledge,” TEK for short.  Ethnobiology has typically been a study of working knowledge:  the actual pragmatic and operational knowledge of plants and animals that people bring to their daily tasks.  It thus concerns hunting and gathering, farming, fishing, tree-cutting, herbal medicine, cooking, and other everyday practical pursuits.  Ethnobiological research has focused on how people use, name, and classify the plants, animals and fungi they know.

As such, it is close to economic botany and zoology, to archaeology, and to ethnomedicine.  It is a part of human ecology, the study of how humans interact with their environment.  It overlaps with cultural ecology, the branch of human ecology that concerns cultural knowledge specifically.  Cultural ecology was essentially invented, and the term coined, by Julian Steward (1955).  Steward attended very seriously to political organization, but his earlier students generally did not, which caused his later students to coin the further term “political ecology” (Wolf 1972), which has caught on in spite of some backlash from the earlier students and their own students (Vayda 2008).  Human/cultural/political ecology has produced a huge, fast-evolving, and rather chaotic body of theory (Sutton and Anderson 2009).

Like many of my generation, I was raised in a semi-rural world of farms, gardens, ranches, and craft work.  I learned to shoot, fish, and camp.  Many formative hours were spent on the family farm, a small worked-out cotton farm in a remote part of East Texas.  (My father was raised there, but the family had abandoned it to sharecroppers by the time I came along.)  I learned about all this through actual practice, under the watchful eyes of elders or peers.  Naturally, I learned it much better than I learned classroom knowledge acquired in a more passive way.  Thus I was preadapted to study other people’s working knowledge of biota.

Logic also makes this a good entry point into the study of theoretical human ecology.  It is the most basic, everyday, universal way that humans interact with “nature.”  It is the most direct.  It has the most direct feedback from the rest of the world—the nonhuman realm that is so often out of human control.  The philosopher may meditate on the nonexistence of existence, or on the number of angels that can dance on the point of a pin, but the working farmer or gatherer must deal with a more pragmatic reality.  She must know which plants are the best for food and which will poison her, and how to avoid being eaten by a bear.

In Comte’s words, we need to know in order to predict, and predict in order to be able to act (savoir pour prévoir, prévoir pour pouvoir).

How do we know we know?

For many people, even many scientists, it is enough to say that we see reality and thus know what’s real.  This is the position of “naïve empiricism.” There is no problem telling the real from the unreal, once we have allowed for natural mistakes and learned to ignore a few madmen.  Reality is transparent to us.  The obvious failure of everyone before now to see exactly what’s real and what isn’t is due to their being childlike primitives.  Presumably, the fact that almost half the science I learned as an undergraduate is now abandoned proves that my teachers (who included at least one Nobel laureate) were childlike primitives too.

Obviously this does not work, and the ancient Greeks already recognized that people are blinded by their unconscious heuristics and biases.  Francis Bacon systematized this observation in his Novum Organum (1901/1620).  He identified four “idols” (of the “tribe, den, market, and theatre”), basically cultural prejudices that cause us to believe what our neighbors believe rather than what is true.  Later, John Locke (1979/1697) expanded the sense of limitations by providing a very modern account of cognitive biases and cognitive processing limitations.  The common claim that Locke believed the mind was a “blank slate” and that he was a naïve empiricist is wrong.  He used the expression tabula rasa (blank slate) but meant that people could learn a wide variety of things, not that they did not have built-in information processing limits and biases.  He recognized both, and described them in surprisingly modern ways.  His empiricism, based on careful and close study, involved working to remove the “idols” and biases.  It also involved cross-checking, reasoning, and progressive approximation, among other ways of thought.

Problems with Words

Ethnobiology has normally been concerned with “traditional ecological knowledge,” now shortened to TEK and sometimes even called “tek” (one syllable).  By the time a concept is acronymized to that extent, it is in danger of becoming so cut-and-dried that it is mere mental shorthand.  The time has come to take a longer look.  This paper will not confine itself to “TEK,” whatever that is.  I am interested in all knowledge of environments.  I want to know how it develops and spreads.

Science studies and history of science have made great strides in recent decades, partly through use of anthropological concepts, and in turn have fed back on anthropological studies of traditional knowledge.  The result has been to blur the distinction between traditional local knowledge and modern international science.  Geoffrey Bowker and Susan Star (1999) have produced descriptions of modern scientific classification that sound very much like what I find among Hong Kong fishermen and Northwest Coast Native people.  Bruno Latour (2004, 2005) describes the cream of French scientists thinking and talking very much as Mexican Maya farmers do.  Martin Rudwick, in his epochal volumes on the history of geology, describes great scientists speculating on the cosmos with all the mixture of confusion, insight, genius, and wild guessing that led Native Californians to conclude that their world was created by coyotes and other animal powers.  Early geological speculation was as far from what we believe today as California’s coyote stories.

Similar problems plague the notion of “indigenous” knowledge.  Criticisms of the idea that there is an “indigenous” kind of knowledge, as opposed to some other kind, have climaxed in a slashing attack on the whole idea by Matthew Lauer and Shankar Aswani (2009).  They maintain “it relies on obsolete anthropological frameworks of evolutionary progress” (2009:317).  This is too strong—no one now uses those frameworks.  The term “indigenous” has a specific legal meaning established by the United Nations.  However, there is a little fire under Lauer and Aswani’s smoke.  The term “indigenous knowledge” does tend to imply that the knowledge held by “indigenous” people is somehow different:  presumably more local, more limited, and easier to ignore.  Some, especially biologists, use this to justify a view that non-indigenous people (whatever that means) somehow manage to have a wider, better vision.

Similarly, the term “traditional ecological knowledge” has been criticized for implying that said knowledge is “backward and static….  Much development based on TEK thus continues to implement homogenous Western objectives by coopting and decontextualizing selected aspects of knowledges specific to unique places, eliminate their dynamism, and focus more than anything else on negotiating the terms for their commodification” (Sluyter 2003, citing but rather oversimplifying Escobar 1998).  Most of us who study “TEK” do not commit these sins.  But many people do, especially non-anthropologists working for bureaucracies.  Their international bureaucratic “spin” has indeed made the term into a very simplistic label (Bicker et al. 2004, and see below).

The implication of stasis is particularly unfortunate.  Traditional ecological knowledge, like traditional folk music, is dynamic and ever-changing, except in dying cultures.  Many people understand “traditional” to mean “unchanged since time immemorial.”  It does not mean that in normal use.  “Traditional” Scottish folk music is pentatonic and has certain basic patterns for writing tunes (syncopation at specific points, and so on).  New Scottish tunes that follow these traditions are being written all the time, and they are thoroughly traditional though completely new.   Similarly, traditional classification systems can and do readily incorporate new crops and animals.  Traditional Yucatec Maya knowledge of plants is still with us, but over 25 years I have seen their system expand yearly to accommodate new plants.

People are notoriously prone to invent new traditions (Hobsbawm and Ranger 1983).  “Tradition,” more often than not, means “my version of what Grandpa and Grandma did,” not “my faithful reproduction of what my ancestors did in the Ice Age.”

And, of course, modern international science is hardly free from traditions!  “Science” is an ancient Greek invention, and the major divisions—zoology, botany, astronomy, and so on—are ancient Greek in name and definition.  Theophrastus’ original “botany” text of the 4th century BC reads surprisingly well today; we have added evolution and genetics, but even the scientific names of the plants are often the same as Theophrastus’, because his terms continued in use by botanists.  Coining scientific names today is done according to fixed and thoroughly traditional rules, centuries old, maintained by international committees.  Species names of trees, for instance, are normally feminine, because the ancient Romans thought all trees had female spirits dwelling in them.  Thus even trees with masculine-sounding genus names have feminine species names (e.g. Pinus ponderosa, Quercus lobata).  Traditions of publication, laboratory conduct, institutional organization, and so on are more recent, but are older than many of the “traditional” bits of lore classed as “TEK.”

It is no more surprising to find that Maya change and adapt with great speed than to find that laboratory chemists use the same paradigms and much of the same equipment that Robert Boyle used more than 300 years ago.

Finally, the differences between traditional (or “indigenous”) knowledges and modern science are not obviously greater than the differences between long-separated traditional cultures.  Maya biological knowledge is a great deal like modern biology—enough to amaze me on frequent occasions.  Both are very different from the knowledge system of the Athapaskan peoples of the Yukon.   Similarly, the conduct of science in the United States is quite different from that in China or Japan.  National laboratory cultures have been the subject of considerable analysis (see e.g. Bowker and Star 1999; Latour 2005; Rabinow 2002).  And modern sciences differ in the ways they operate.  Paleontology is not done the way theoretical physics is done (Gould 2002).  Thus Latour (2004) and many others now speak of “sciences” rather than “science,” just as Peter Worsley (1997) wrote of “knowledges” in discussing TEK and popular lore.

If one looks at high theory, traditional knowledge and modern science may be different, but if one looks at applications, they are the same enterprise:  a search for practical and theoretical knowledge of how everything works.  Similarly, if one looks at discovery methodology, traditional ecological knowledge and formal mathematical theory seem very different indeed, but traditional and contemporary ecology or biology are much more alike.

I can only conclude that instead of speaking of “ethnoscience,” “modern science,” “traditional knowledge,” and “postmodern knowledge,” we might just as well say “sciences” and “knowledges” and be done with it.

Therefore, pigeonholing TEK in order to dismiss it is unacceptable (Nadasdy 2004).  By the same token, bureaucratizing science, as “Big Science” and overmanaged government agencies are doing now, is the death of science.  As Michael Dove says:  “By problematizing a purported division between local and extralocal, the concept of indigenous knowledge obscures existing linkages or even identities between the two and may privilege political, bureaucratic authorities with a vested interest in the distinction (whether its maintenance or collapse)” (Dove 2006:196).

Problems with Projecting the “Science” Category on Other Cultures

A much more serious problem, often resulting from such bureaucratization, has been the tendency to ignore the “religious” and other beliefs that are an integral part of these knowledge systems.  This is not only bad for our understanding; it is annoying, and sometimes highly offensive, to the people who have the knowledge.  Christian readers might well be offended by an analysis of Holy Communion that confined itself to the nutritional value of the wine and cracker, and implied that was all that mattered.  Projecting our own categories on others has its uses, and for analytic and comparative purposes is often necessary, but it has to be balanced by seeing them in their own terms.  This problem has naturally been worse for comparative science that deliberately overlooks local views (Smith and Wobst 2005; also Nadasdy 2004), but has carried over into ethnoscience.

On the other hand, for analytic reasons, we shall often want to compare specific knowledge of—say—the medical effects of plants.   Thus we shall sometimes have to disembed empirical scientific knowledge from spiritual belief.  If we analyze, for instance, the cross-cultural uses of Artemisia spp. as a vermifuge, it is necessary to know that this universally recognized medicinal value is a fact and that it is due to the presence of the strong poison thujone in most species of the genus.  Traditional cultures may explain the action as God-given, or due to a resident spirit, or due to magical incantations said over the plant, or may simply not have any explanation at all.  However, they all agree with modern lab science on one thing:  it works.

We must, then, consider four different things:  the knowledge itself; the fraction of it that is empirical and cross-culturally verifiable; the explanations for it in the traditional cultures in question; and the modern laboratory explanations for it.  All these are valuable, all are science, and all are important—but for different reasons.  Obviously, if we are going to make use of the knowledge in modern medicine, we will be less interested in the traditional explanations; conversely, if we are explicating traditional cultural thought systems, it is the modern laboratory explanations that will be less interesting.

The important sociological fact to note is the relative independence or disembedding of “science,” in the sense of proven factual knowledge, from religion.  Seth Abrutyn (2009) has analyzed the ways that particular realms of human behavior become independent, with their own organization, personnel, buildings, rules, subcultures, and so on.  Religion took on such an independent institutional life with the rise of priesthoods and temples in the early states.  Politics too developed with the early states, as did the military.  Science became a truly independent realm only much later.  Only since the mid-19th century has it become organizationally and intellectually independent of religion, philosophy, politics, and so on.  It is not wholly independent yet (as science studies continually remind us).  However, it is independent enough that we can speak of the gap between science and religion (Gould 1999).  This gap was nonexistent in traditional cultures—including the western world before 1700 or even 1800.  Many cultures, including early modern European and Chinese, had developed a sense of opposing natural to supernatural or spiritual explanations, but there were no real separate institutional spheres based on the distinction.

However, we can back-project this distinction on other cultures for analytic reasons—if we remember we are doing violence to their cultural knowledge systems in the process.  There are reasons why one sometimes wants to dissect.

Inclusive Science

I use “science” to cover systematic human fact-finding about the world, wherever done and however done.  Traditional people all include what we moderns call “supernatural” factors in their explanations.  Thus, we have to take some account of such ideas in our assessment of their sciences (Gonzalez 2001).  This is obviously a very broad and possibly a bit idiosyncratic usage, but it allows comparison.  It is imperfect, but alternatives seem worse.

Science is about something—specifically, about knowing more, and perhaps improving the human condition in the process.  The appropriate tests are therefore outcome measures, which are usually quite translatable and comparable between cultures.

I might prefer “sciences,” following Latour (2004) and Eugene Hunn (2008), but I share with Joseph Needham a dedication to the idea of a panhuman search for verifiable knowledge. Since the first hominid figured out how to use fire or chip rock, science has been a human-wide, cumulative venture, responsible for many of the greatest achievements of the human spirit.  Yet the traditions and knowledge systems that feed into it are very different indeed.  Science is a braided river, or, even more graphically, a single river made up of countless separate water molecules.

Science gives us sciences, but is one endeavor.  Attempts to confine scientific methodology to a single positivist bed have not worked, and modern sciences are institutionalized in separate departments, but neither of these things destroys the basic reality and unity of the set of practices devoted to knowledge-seeking.  Even today, in spite of the divergence of the sciences, we have Science magazine and “big science” and a host of other recognitions of a basic system.

All narrow definitions are challenged by the fact that the ancient Romans invented the term “science” (scientia, “things known,” from scire “know”).

The Greek word for science was episteme (shades of Foucault 1970), and the more general words for “knowledge” were sophos “knowledge” and sophia “wisdom, cleverness.”  Sciences, however, were distinguished by the ending –logia, from logos, “word.”  Simpler fields that were more descriptive than analytic ended in –nomos “naming.”  It is interesting that astrology was a science but astronomy a mere “star-naming”!  Another ending was –urgia “handcraft work,” as in chirurgia, the word that became “surgery” in English; it literally means “handwork.”

The Greeks worked terribly hard on most of what we now think of as “the sciences,” from botany to astronomy.  In the western world, they get the major credit for separating science from other knowledges.  Aristotle, in particular, kept his accounts of zoology and physics separate from the more speculative material he called “metaphysics.”  (At least, he probably called it that, though some have speculated that his students labeled that material, giving it a working term that just meant “the stuff that came after [meta, ‘beyond’] the physics stuff in his lectures.”)

The Greeks also gave us philosophia, “love of wisdom”—the higher, rigorous attention to the most basic and hard-to-solve questions.  This word was given its classic denotation and connotation by Plato (Hadot 2002).  They used techne for (mere) craft.  Yet another kind of knowledge was metis—sharp dealing, resourcefulness, street smarts.  The quintessential master of metis was Odysseus, and east Mediterranean traders are still famous for this ability.

The ancient Greeks (at least after Aristotle) contrasted science, an expert and analytical knowledge of a broad area, with mere craft, techne. This has left us today with an invidious distinction between “science” and “technology” (or “craft”).  The Greeks were less invidious about it.  Arts were usually mere techne, but divine inspiration—the blessing of the Muses that gave us Homer and Praxiteles—went beyond that.  We now think of the Muses as arch Victorian figures of speech, but the ancient Greeks took them seriously.

Allowing the Greeks and Romans their claim to having science makes it impossible to rule out Egyptian and “Chaldean” (Mesopotamian) science, which the Greeks explicitly credited.  Then we have to admit, also, Arab, Persian, and Chinese science, which continued the Greek projects (more or less).  Privileging modern Euro-American science is patently racist.  Before 1200 or 1300 A.D., the Chinese were ahead of the west in most fields.  We can hardly shut them out.  (True, they had no word for “science,” but the nearest equivalent, li xue “study of basic principles,” was as close to our “science” as scientia was at the same point in time.)  Once we have done that, the floodgates are open, and we cannot reasonably rule out any culture’s science.

Words for “science” and scientists in English go back at least to the Renaissance.  The OED attests “science” from 1289.  The word “scientist” was not invented till W. Whewell coined it in 1833, but it merely replaced earlier words: “savant” from the French, or the Latinate coinage “scient,” used as a noun or adjective.  These words had been around since the 1400s.  (“Scient” had become obsolete.)

Thus, I define “science” as systematic, methodical efforts to gain pragmatic and empirical knowledge of the world and to explain this by theories (however wildly wrong the latter may now appear to be).  Paleolithic flint-chipping, Peruvian llama herding, and Maya herbal medicine are sciences, in so far as they are systematized, tested, extended by experience, and shared.  The contrast is with unsystematized observation, random noting of facts, and pure speculation.  In this I agree with scholars of traditional sciences such as Roberto Gonzalez (2001) and Eugene Hunn (2008; see esp. pp. 8-9), as well as Malinowski, who considered any knowledge based on experience and reason to be science, and thus found it everywhere.

The boundaries are vague, but this is inevitable.  “Science” however defined is a fuzzy set.  Even modern laboratory science grades off into rigorous field sciences and into speculative sciences like astrophysics.

Science is based on theories, which I define as broad ideas about the world that generate predictions and explanations when applied to pragmatic, empirical engagement with particular environments.  This allows me to consider folk views such as the beliefs supporting shamanism along with modern scientific theories.

On the other hand, in small-scale traditional cultures, cutting off “science” creates an artificial distinction.  Such societies do not separate science from other knowledge, including what we in English would call “religion” or “spiritualism,” and analysis does violence to this.  It is worth doing anyway for some comparative and analytical purposes, but most of the time I find it preferable to talk about “knowledge.”  For most purposes, I am much more interested in understanding traditional knowledge systems holistically. For some purposes, however, we need to analyze, and all we can do is live with the violence, remembering that “analysis” literally means “splitting up.”

Chinese, Arab, Persian, and Indian civilizations, and probably Maya and Aztec ones, did have self-conscious, cumulative traditions of fact-seeking and explanation-seeking.  The Near Eastern cultures actually based their science on the Greeks, and even used the Greek words.  Both “science” and “philosophy,” variously modified, were taken into Arabic and other medieval Near Eastern languages.  The Chinese were farther afield, as will appear below, but Joseph Needham was clearly right in studying their efforts as part of the world scientific tradition.  However, it is also necessary to study the ways that traditional Chinese knowledge and knowledge-seeking was not like western sciences.  I will argue at length, below, that both Needham and his critics are right, and that to understand Chinese knowledge of the environment we must analyze it both on its own terms and as scientific practice.

Finally, there is an inevitable tendency to back-project our modern views of the world on earlier days.  Astrology and alchemy seemed as reasonable in the Renaissance as astronomy and chemistry.  There was simply no reason to think that changing dirt into gold was any harder than changing iron ore into iron.

There was even evidence that it could work.  Idries Shah (1956) gives an account by an observant traveler of an alchemist changing dirt to gold in modern central Asia.  The meticulous account makes it clear that the alchemist was actually separating finely disseminated gold out of alluvial deposits, but he was evidently quite convinced that he was really transforming the dirt.  More recently, reconstructed alchemical experiments have turned silver yellow, though only superficially.  Apparently alchemists were fooled into thinking this was a real change, or at least could be developed into one (Reardon 2011).  Scientists are thus now studying alchemy to see just what those early chemists were doing.  They were not just wasting their time.  They had high hopes and were not unreasonable.  Ultimately they proved wrong, and duly hung up their signboards.  Such is progress—and they were not the last to have to give up on a failed project; we do it every day now.

The old “Whig history” that starts with Our Perfect Modern Situation and works back—seeing history as a long battle of Good (i.e. what led to us perfect moderns) vs. Evil—is long abandoned, but we cannot avoid some presentism (Mora-Abadía 2009).  Obviously, even my use of the term “science” for TEK is something of a presentist strategy.  Thus “science” is a rather arbitrary term.  I shall use it, with some discomfort, for that part of knowledge which claims to be preeminently dedicated to learning empirical and pragmatic things about environments and about lives.

Overly Restrictive Definitions of “Science”

I strictly avoid using “science” to mean solely lab-based activities.  I follow the Greeks in using it for Aristotle’s legacy, not just for the world of case/control studies, hypothesis-generation, hypothesis-testing, and formal theory.  That narrower form of science was canonized by Ernst Mach and others in the late 19th century.  Restricting “science” to it is inadequate for many reasons.  Among other things, it relegates Aristotle, Galen, Tao Hongjing, Boyle, Li Shizhen, Harvey, Newton, Linnaeus, and even Lyell and Darwin to the garbage can.  Mach certainly did not want this; he was trying to improve scientific practice, not deny his heritage.

We can hardly balk at the errors of traditional societies.  Much of the science I learned as an undergraduate is now known to be wrong:  stable continents, Skinnerian learning theory, “climax communities” in ecology, and so on.  We allow these errors into our histories of science, along with phlogiston, ether, humoral medicine, the mind-body dichotomy, and other wrong theories once sacrosanct in Western science.

In my field work in Hong Kong, I found that many Chinese explained earthquakes as dragons shaking in the earth.  Other Chinese explained earthquakes as waves caused by turbulent flow of qi (breath, or vital energy) in the earth.  The Chipewyans of northern Canada explain earthquakes as the thrashing of a giant fish (Sharp 1987, 2001).  When I was an undergraduate, most American geologists did not yet accept the fact that earthquakes are usually caused by plate tectonics, and instead invoked “scientific” explanations just as mystical and factually dubious as the dragons and fish.  They blamed earthquakes on the earth shrinking, or the weight of stream sediments—anything except plate tectonics (Oreskes 1999, 2001).  One should never be too proud about inferred variables inside a black box.

Unlike emotions, which have clear biological foundations, scientific systems can be seen as genuinely culturally constructed from the ground up.  Chimpanzees make termite-sticks and leaf cups, but the gap between these and space satellites is truly far greater than the gap between chimp rage and human anger.  It is true that chimps in laboratory situations can figure out how to put sticks together to get bananas, and otherwise display the basics of insight and hypothesis-testing (de Waal 1996; Köhler 1927), but they do not invent systematic and comprehensive schemes for understanding the whole world.  People, including those in the simplest hunter-gatherer societies, all do.

Many historians restrict “science” to the activity popularized in western Europe by Galileo, Bacon, Descartes, Harvey, Boyle, and others in the 16th and 17th centuries.  This usage is considerably more reasonable.  The “Scientific Revolution” involved a really distinctive moment or Foucaultian “rupture” that led to new worlds.  However, much excellent work has recently cut it down to size.  In fact, we now know that calling it a “revolution” drew a somewhat arbitrary line between these sages and their immediate forebears.  They were self-consciously “Aristotelian” against the “Platonism” of said forebears, but this looks very much less distinctive when one considers Arab and Persian science.  Aristotelianism had come to Europe from the Arabs in the 12th and 13th centuries, and the “revolution” was really a slow evolution (Gaukroger 2006).

A valuable term for the unified tradition that embraces European science since 1500 and world science since 1700 or 1800 is “rapid discovery science” (Collins 1998).  Rapid discovery science is very different from traditional science, but the difference is one of degree at least as much as of kind.

The period from Galileo to 1800 may be defined as early modern science.  Unlike both its primarily Near Eastern ancestors and its post-1800 descendant, modern international science, it was largely a European enterprise.  Many criticisms have been made of its Eurocentric biases.  It did indeed display a rather distinctive and basically European worldview:  dualistic, excessively rational, dismissing or belittling the rest of the world, and more than somewhat sexist.  However, as we shall see, it depended in critical ways on nonwestern science for both data and ideas.  It was never isolated and could never really ignore the rest of the world’s knowledges.

A common terminological usage restricts “science” to modern laboratory-based scientific practice and the most closely similar field sciences.  This science develops formal theories (preferably stated in mathematical terms), generates hypotheses from the theories, tests these according to a formal methodology, discusses the results, and awaits cross-confirmation by other labs.  The problem with this usage is that it rules out virtually all science done before the 19th century.  In the early 20th century, Viennese logicians attempted to theorize such science as an exceedingly formal, even artificial, procedure, with very strict rules of verification or—more famously—“falsification” (Popper 1959).

But this rules out not only all earlier science but even most science done today.  Field science can’t make the grade.  As Stephen Jay Gould (e.g. 1999) often pointed out, paleontology does not qualify.  We can hardly experiment in the lab with Tyrannosaurus rex.  Indeed, historians and social scientists (such as Thomas Kuhn, 1962) have repeatedly pointed out that few lab men and women follow their own principles—they go with hunches, have accidents, and so on.  The most hard-core positivist scientists admit this happily in their memoirs (see e.g. Skinner 1959).  Thus, I shall not use “science” in the above sense.  I shall use the term modern laboratory science for the general sort of science idealized by the positivists, but without losing sight of the fact that even it does not follow positivist guidelines.

However, no one can deny that there was a general movement in the 19th century to make science and the sciences more self-conscious, more rigorous, more clearly divided, and more methodologically consistent (see e.g. Rudwick 2005 on geology).  Contrary to much blather, this was not a “European” enterprise.  It already involved people on both American continents, and it very soon included Asians.  Modern medicine, in particular, owes as much to Wu Lien-teh for his studies of plague and to Kiyoshi Shiga for his studies of dysentery as it does to any but the greatest of the European doctors.  (Shiga won what may be the least enviable immortalization in history, as the namesake of shigellosis.)  Moreover, many of the European founders did their key work in the tropics, as in the pathbreaking work of Patrick Manson and Ronald Ross on malaria and Walter Reed on yellow fever.  Therefore, I will use the term modern international science to refer to the new, self-conscious enterprise that began after 1800.

As Arturo Escobar says, “…an ensemble of Western, modern cultural forces…has unceasingly exerted its influence—often its dominance—over most world regions.  These forces continue to operate through the ever-changing interaction of forms of European thought and culture, taken to be universally valid, with the frequently subordinated knowledges and cultural practices of many non-European groups throughout the world” (Escobar 2008:3).  Escobar, among many others, speaks of “decolonializing” knowledge, and I hope to contribute to that.

Euro-American rational science arose in a context of technological innovation, imperial expansion, power jockeying (as Foucault reminded us), political radicalism, and economic dynamism.  We now know, thanks to modern histories and ethnographies of science, that European science was and is a much messier, more human enterprise than most laypersons think.  The cool, rational, detached scientist with his (sic!) laboratory, controlled experiments, and exquisitely perfect mathematical models is rare indeed outside of old-fashioned hagiographies of scientists.  Rarer still is the lackey of patriarchal power, creating phony science simply to enslave.  (Rare, but far from nonexistent; one need think only of the sorry history of racism and “scientific” sexism, up to and including Lawrence Summers’ famous dismissal of women’s math abilities.  One could always argue that Summers is an economist, not a scientist.)

More nuanced conclusions emerge from the history of science (as told by e.g. Martin Rudwick 2007, 2008) and the ethnography of science (e.g. Bruno Latour 2004, 2005).  These show modern international science as a very human enterprise.  Most of us who have worked in the vineyard can only agree.  (I was initially trained as a biologist and have done some biological research, so I am not ignorant of the game.)  These accounts bring modern science much closer to the traditional ecological knowledge of the Maya, the Haida, or the Chumash.  I have no hesitation about using the word “science” to describe any and all cultures’ pragmatic knowledge of the environment (see below, and Gonzalez 2001).

One can often infer the theory behind traditional or early empirical knowledge.  Sometimes it is quite sophisticated, and one wishes the writer had been less modest.  Therefore, a solid, factual account should not be dismissed because it “doesn’t speak to theory issues” until one has thought over the implications of the author’s method and conclusion.  This is as true when the account comes from a Maya woodsman or a Chinese herbalist as when it comes from a laboratory scientist.

We thus need a definition of “science” broad enough to include “ethnoscience” traditions.  The accumulated knowledge and wisdom of humanity is being lost and neglected more than ever, in spite of the tiny and perhaps dwindling band of anthropologists who care about it.  The fact that a group does not have a “thing” called “science,” and even the fact that the group believes in mile-long fish and dinosaur-sized otters (as do the Chipewyan of Canada; Sharp 2001), does not render their empirically verifiable knowledge unscientific.

Considering all folk explanations, and classifying the traditional ones as “religion,” Edward Tylor classically explained magic and religion as, basically, failed science (Tylor 1871).  He came up with a number of stories explaining how religious beliefs could have been reasonably inferred by fully rational people who had no modern laboratory devices to make sense of their perceptions.  Malinowski’s portrayal of religion as emotion-driven was part of a general reaction against Tylor in the early 20th century.

Indeed, Tylor discounted emotion too much.  On the whole, however, there is still merit in Tylor’s work.  There is also merit in Malinowski’s.  Science, like religion and magic, partakes of the rational, the emotional, and the social.

Basic Science:  Beyond Kuhn and Kitcher

“I can’t remember a single first formed hypothesis which had not after a time to be given up or greatly modified.  This has naturally led me to distrust greatly deductive reasoning in the mixed sciences.”  (Darwin, from his notebooks, quoted in Kagan 2006:76)

All my life, I have been fascinated with scientific knowledge—that is, knowledge of the world derived from deliberate, careful, double-checked reflection on experience, rather than from blind tradition, freewheeling speculation, or logic based on a priori principles.

Thomas Kuhn’s classic The Structure of Scientific Revolutions (1962, anticipated by the brilliant work of Ludwik Fleck on medical history) concentrated on biases and limits within scientific practice.  Kuhn was attending to real problems with science itself.  This contrasts with, say, critiques of racism and sexism, which are necessary and valuable but were already anticipated by Francis Bacon’s critiques of bias-driven pseudoscience (Bacon 1901, orig. ca. 1620).

From all this arose a great change in how “truth” is established.  Instead of going for pure unbiased observation, or for falsification of errors, we now go for “independent confirmation.”  David Kronenfeld (personal communication, 2005) adds:  “Science itself is also an attitude—probing, trying to ‘give nature a chance to say no,’ and so forth….science is not a thing of individuals but is a system of concepts and of people.”

A result is not counted, a finding is not taken seriously, unless it is cross-confirmed, preferably by people working in a different lab or field and from a different theoretical framework.  I certainly don’t believe my own findings unless they are cross-confirmed.  (See Kitcher 1993; for much more, Martin and MacIntyre 1994.)

Indeed, the new face of positivism demands what is called VV&A:  “Verification (your code correctly implements your model), validation (your model correctly captures the processes you are studying) and accreditation (some group or individual blesses your simulation as being useful for some intended purpose)” (Jerrold Kronenfeld, email of Jan. 7, 2010).  This is jargon in the “modeling” world, but it applies across the board.  Any description of a finding must be checked to see that it is correct, that the descriptions of it in the literature are accurate, and that it advances knowledge, strengthens or qualifies theory, or is otherwise useful to science.

In short, science is necessarily done by a number of people, all dedicated to advancing knowledge, but all dedicated to subjecting every new idea or finding to a healthy skepticism.  We now see science as a social process.  Truth is established, but slowly, through debate and ongoing research.

Naïve empiricist agendas assume we can directly perceive “reality,” and that it is transparent—we can know it just by looking.  We can tell the supernatural from the natural.  This is where we begin to see real problems with these agendas, and the whole “modernist program” that they may be said to represent.  Telling the supernatural from the natural may have looked easy in Karl Popper’s day.  It seemed less clear before him, and it seems less clear today.

We have many well-established facts that were once outrageous hypotheses:  the earth is an oblate spheroid (not flat), blood circulates, continents drift, the sun is only a small star among billions of others.  We also have immediate hypotheses that directly account for or predict the facts.  We know an enormous amount more than we did ten years ago, let alone a thousand years, and we can do a great deal more good and evil, accordingly.

However, science has moved to higher and higher levels of abstraction, inferring more and more remote and obscure intervening variables.  It now presents a near-mystical cosmology of multidimensional strings, dark matter, dark energy, quantum chromodynamics, and the rest.  Even the physicist Brian Greene has to admit:  “Some scientists argue vociferously that a theory so removed from direct empirical testing lies in the realm of philosophy or theology, but not physics” (Greene 2004:352).

To people like me, unable to understand the proofs, modern physics is indeed an incomprehensible universe I take on faith—exactly like religion.  The difference between it and religion is not that physics is evidence-based.  Astrophysics theories, especially such things as superstring and brane theory, are not based on direct evidence, but on highly abstract modeling.  The only difference I can actually perceive is that science represents forward speculation by a small, highly trained group, while religion represents a wide sociocultural communitas. Religion also has beautiful music and art, as a result of the communitas-emotion connection, but I suppose someone somewhere has made great art out of superstring theory.

Approximately 96% of the universe is composed of dark matter and dark energy—matter and energy we cannot measure, cannot observe, cannot comprehend, and, indeed, cannot conceptualize at all (Greene 2004).  We infer their presence from their rather massive effects on things we can see.  For all we know, dark matter and energy are God, or the Great Coyote in the Sky (worshiped by the Chumash and Paiute).

On a smaller and more human scale, we have the “invisible hand” (Smith 1776) of the market—a market which assumes perfect information, perfect rationality, and so on, among its dealers.  The abstract “market” is no more real than the Zapotec Earth God, and has the same function:  serving as black-box filler in an explanatory model.  Of course Smith, quite consciously and ironically, was using a standard theological term for God.

The tendency to use “science” to describe truth-claims and “religion” to describe untestable beliefs is thus normative, not descriptive.  It is a rather underhanded attempt to confine religion to the realm of the untestable and therefore irrelevant.  (This objection was made by almost every reviewer of Gould 1999.)

We have abstract black-box mechanisms in psychology (e.g. Freudian dynamic personality structure), anthropology (“culture”), and sociology (“class,” “discourse,” “network”).  Darwin’s theory of evolution had a profoundly mysterious black box, in which the actual workings of selection lay hidden, until modern genetics shone light into the box in the 1930s and 1940s.  Geology similarly depended on mysticism, or at least on wildly improbable mechanisms, to get from rocks to mountains, until continental drift showed the way.  Human ability to withstand disease was for long a totally black box.  The usual wild speculations filled it until Elie Metchnikoff’s brilliant work revealed the immune-response system, and gave us all yogurt into the bargain.  It was Metchnikoff who popularized yogurt as a health food, having seen that Bulgarian peasants ate much of it and lived long.

At present, organized “science” in the United States is full of talk about “complex adaptive systems” that are “self-organizing” and may or may not have an unmeasurable quality called “resilience.”  They may be explained by “chaos theory.”  All this is substantially mystical, and sometimes clearly beyond the pale of reality; no, a butterfly flapping in Brazil cannot cause a tornado in Kansas, by any normal meaning of the word “cause.”  “Self-organizing” refers to ice crystals growing in a freezing pool, ecological webs evolving, and human communities and networks forming—as if one could explain all these by the same process!  In fact, they are simply equated by a singularly loose metaphor.

When traditional peoples infer things like superstrings and self-organizing systems, we label those inferences “beliefs in the supernatural.”  The traditional people themselves never seem to do this labeling; they treat spirit forces and spirit beings as part of their natural world.  This is exactly the same as our treating dark energy, the market, and self-organization as “natural.”

Surely if they stopped and thought, the apologists for science would recognize that some unpredictable but large set of today’s inferred black-box variables will be a laughingstock 20 years from now—along with phlogiston, luminiferous ether (Greene 2004), and the angle of repose.

More:  they would have to admit that a science that is all correct and all factually proved out is a dead science!  Science is theories and hypotheses, wild ideas and crazy speculation, battles of verification and falsification.  Facts (whatever they are) make up part of science, but in a sense they are but the dead residue of science that has happened and gone on.  (See Hacking 1999; Philip Kitcher 1993.  These writers have done a good job of dealing with the fact that science is about truth, but is ongoing practice rather than final truth.  See Anderson 2000 for further commentary on Hacking.)

The history of science is littered with disproved hypotheses.  Mistakes are the heart and soul of science.  Science progresses by making guesses (hopefully educated ones) about the world, and testing them.  Inevitably, if these guesses are specific and challenging enough to be interesting, many of them will be wrong.  This is one of the truths behind Karl Popper’s famous claim that falsification, not verification, is the life of science (Popper 1959).

Science is not about established facts.  Established, totally accepted truth may be a result of science, but real science has already gone beyond it into the unknown.  Science is a search.

Premodern and traditional sciences derived the vast majority of their errors from assuming that active and usually conscious agents, not mindless processes, were causal.  If they did not postulate dragons in the earth and gods in the sky, they postulated physis (originally a dynamic flux that produced things, not just the physical world), “creative force” (vis creatrix), or the Tao.

Today, most errors seem to come not from this but from three other sources.

First, scientists love, and even need, to assume that the world is stable and predictable.  This leads them into thinking it is simpler and more stable than it really is.  Hence motionless continents (Oreskes 1998), Newtonian physics with its neat predictable vectors, climax communities in ecology, maximum sustainable yield theory in fisheries, S-R theory in psychology, phlogiston, and many more.

Second, scientists are hopeful, sometimes too much so.  From this come naïve behaviorism, born of a hope for the infinite perfectibility of humanity (see Pinker 2003); humanistic psychology (with the same fond hope); astrology; manageable “stress” as the cause of actually hopeless diseases (Taylor 1989); and the medieval Arab belief that good-tasting foods must be good for you (Levey 1966).

Third, some scientists like to take out their hatreds and biases on their subjects, and pretend that their fondest hates are objective truth.  This corrupted “science” gave us racism, sexism, and the old idea that homosexuality is “pathological.”  Discredited “scientific” ideas about children, animals, sexuality in general (Foucault 1978), and other vulnerable entities are only slightly less obvious.

It gave us the idea (now, I hope, laid definitively to rest) that nonhuman animals are mere machines that do not feel or think.  It gives us the pathologization of normal behavior.  Much or most diagnosed ADHD in the United States, for instance, is clearly not real ADHD; other countries have only about 10% of our rate.  Most extreme of all ridiculous current beliefs, and thus most popular of all, is the idea that people are innately selfish, evil, violent, or otherwise horrific, and that only a thin veneer of culture holds them in check.  This has given us Hobbes’ state of nature, Nietzsche’s innate will to power, Freud’s id, Dawkins’ selfish gene, and the extreme form of “rational self-interest” that assumes people act only for immediate selfish advantage.  Three seconds of observation in any social milieu (except, perhaps, a prison riot) would have disproved all this, but no one seemed to look.

Given all the above, critics of science have shown, quite correctly, that all too much of modern “science” is really social bias dressed up in fancy language.

An issue of concern in anthropology is the ways that, in modern society, some mistaken beliefs are classified as “pseudoscience,” some as “religion,” and some merely as “controversial/inaccurate/disproved science.”  In psychology, parapsychology is firmly in the “pseudoscience” category, but racism (specifically, “racial” differences in IQ) remains “scientific,” though equally disproved and ridiculous.  Freudian theory, now devastated by critiques, is “pseudoscience” to many but is “science”—even if superseded science—to many others.  It is obvious that such labels are negotiable, and are negotiated.

The respectability and institutional home of the propounder of a theory is clearly a major determinant.  A mistake, if made at Harvard, is science; the same mistake made outside of academia is pseudoscience.  “Pseudoscience” is validly used for quite obvious shucks masquerading as science, but nonsense propounded by a Harvard or Stanford professor is all too apt to be taken seriously—especially if it fits with popular prejudices.  One recalls the acceptance as “science” of the transparently ridiculous race psychology of Stanford professor William Shockley and Harvard professor Richard Herrnstein (see Herrnstein and Murray 1994, where, for instance, the authors admit that Latinos are biologically diverse, mixed, and nonhomogeneous, and then go right on to assign them a racial IQ of 89).

All this is not meant to give any support to the critics of science who claim “it” (whatever “it” is) can only be myth or mere social storytelling.  It is also not meant to claim that traditional knowledge is as well-conceived and well-verified as good modern science.  It is meant to show that traditional knowledge-seeking and modern science are the same enterprise.  Let us face it:  modern science does better at finding abstruse facts and proving out difficult causal chains.  We now know a very great deal about what causes illness, earthquakes, comets, and potatoes; we need not appeal to witchcraft or the Great Coyote.  But the traditional peoples were not ignorant, and the modern scientists do not know it all, so we are all in the same book, if not always on the same page.

Thus, science is the process of coming to general conclusions that are accurate enough to be used, on the basis of the best evidence that can be obtained.  Inevitably, explanatory models will be developed to account for the ways the facts connect to the conclusions, and these models will often be superseded in due course; that is how science progresses.

“Best evidence” is a demanding criterion, but not as demanding as “absolute proof.”  One is required to do the best possible—use appropriate methods, check the literature, get verification by others using other models or equipment.  Absolute proof is more than we can hope for in this world (Kitcher 1993).

Purely theoretical models provide a borderline case.  Even when they cannot be tested, they may qualify as science in many areas (e.g. theoretical astrophysics, where experimental testing is notoriously difficult).  Fortunately, they are usually testable with data.

The need to test hypotheses with hard evidence does not rule out the study of history.  Archaeological finds showed that the spice trade of the Roman Empire was indeed extensive.  This validated J. Miller’s hypothesis of extensive spice trade through the Red Sea area (Miller 1969), and invalidated Patricia Crone’s challenge thereto (Crone 1987).  We are, hopefully, in the business of developing Marx’ “science of history,” as well as other human sciences.

We are also now aware that “mere description” isn’t “mere.”  It always has some theory behind it, whether we admit it or not (Kitcher 1993; Kuhn 1962).  Even a young child’s thoughts about the constancy of matter or the important differences between people and furniture are based on partially innate theories of physics and biology (see e.g. Ross 2004).  Thus, traditional ecological knowledge can be quite sophisticated theoretically, though lacking in modern scientific ways of stating the theories in question.

Science and Popular “Science”

It thus appears that science is indeed a social institution.  But what kind of social institution is it?  Four different ones are called “science.”

First, we have the self-conscious search for knowledge—facts, theories, methodologies, search procedures, and knowledge systems.  This is the wide definition that allows us to see all truth-seeking, experiential, verification-conscious activities as “science,” from Maya agriculture to molecular genetics.  This can be divided into two sub-forms.  On one hand, we can examine and compare systems as they are, mistakes and all—taking into account Chinese beliefs about sacred fish, Northwest Coast beliefs about bears that marry humans, Siberian shamans’ flights to other worlds, early 20th century beliefs in static continents, and so on.  On the other, we can look at all systems in the cold light of modern factual analysis, dismissing alike the typhoon-causing sacred fish and the tornado-causing Amazonian butterfly.  Fair is fair, and an international standard not kind to sacred fish cannot be merciful to exaggerated and misapplied “western” science either.

Second, it is “what scientists do”—not things like breathing and eating, but things they do qua scientists (e.g. Latour 2004).  This would include not only truth-seeking but a lot of grantsmanship, nasty rivalries, job politics, and even outright faking of data and plagiarism of others’ work.

Third, it is science as an institution:  granting agencies, research parks, university science faculties, hi-tech firms.  Many people use “science” this way, unaware that they are ruling off the turf the vast majority of human scientific activities, including the work of Newton, Boyle, Harvey, and Darwin, none of whom had research institutes.

Fourth, we have science as power/knowledge.  From the Greeks to Karl Marx and George Orwell, milder forms of this claim have been made, and certainly a great deal of scientific speculation is self-serving.  Science does, however, produce knowledge that stands the tests of verification and usually of utility.  It is our best hope of improving our lives and, now, of saving the world.

What is not science is perhaps best divided into four heads.

First, dogma, blind tradition, conformity, social and cultural convention, visionary and idiosyncratic knowledge, and bias—the “idols” of Bacon (1901).  These have sometimes replaced genuine inquiry within a supposedly scientific tradition.

Second, ordinary daily experience, which clearly works but is not being tested or extended—just being used and re-used.  Under this head comes explicitly non-“sciency” but still very useful material: autobiographies, collected texts, art, poetry and song.  These qualify as useful data, if only as worthwhile insights into someone’s mind.  All this material deserves attention; it is raw material that science can use.

Third, material that is written to be taken seriously as a claim about the world, but is not backed up by anything like acceptable evidence.  In addition to the obvious cases such as today’s astrology and alchemy, this would include most interpretive anthropology, especially postmodern anthropology.  Too many social science journal articles consist of mere “theory” without data, or personal stories intended to prove some broad point about identity or ethnicity or some other very complex and difficult topic. However, the best interpretive anthropology is well supported by evidence; consider, for example, Lila Abu-Lughod’s Veiled Sentiments (1985), or Steven Feld’s Sound and Sentiment (1982).

Fourth, pure advocacy:  politics and moral screeds.  This is usually backed up by evidence, but the evidence is selected by lawyers’ criteria.  Only such material as is consistent with the writer’s position is presented, and there is very minimal fact-checking.  If material consistent with an opposing position is presented, it is undercut in every way possible.  Typically, opponents are represented in straw-man form, and charged with various sins that may or may not have anything to do with reality or with the subject at hand; “any stick will do to beat a dog.”  Once again, Bacon (1901) was already well aware of this form of non-science.

The Problem of Truth

The problem of truth, and whether science can get at it in any meaningful way, has led to a spate of epistemological writings in anthropology, science studies, and history of science.  These writings cover the full range of possibilities.

The classic empiricist position—we can and do know real truths about the world—is robustly upheld by people like Richard Dawkins, whose absolute certainty extends not only to his normal realm (genetics; Dawkins 1976, a book widely criticized) but to religion, philosophy, and indeed everything he can find to write about.  He is absolutely positive that there are no supernatural beings or forces (Dawkins 2006).  He has said, “Show me a relativist at 30,000 feet and I’ll show you a hypocrite” (quoted in Franklin 1995:173).  Sarah Franklin has mildly commented on this bit of wisdom:  “The very logic that equates ‘I can fly’ with ‘science must be an unassailable form of truth’ and furthermore assumes such an equation to be self-evident, all but demands cultural explication” (Franklin 1995:173).

At the other end of the continuum is the most extreme form of the “strong programme” in science studies, which holds that science is merely a set of myths, no different from the first two chapters of the Book of Genesis or any other set of myths about the cosmos.  Its purpose, like the purposes of many other myths, is to maintain the strong in power.  It is just another power-serving deception.  Since it cannot have any more truth-value than a dream or hallucination, it cannot have any other function; it must maintain social power.  This allowed Sandra Harding to maintain that “Newton’s Principia Mathematica is a rape manual” because male science “rapes female nature.”  This reaches Dawkins’ level of unconscious self-satire, and has been all too widely quoted (to the point where I can’t trace the real reference).  Dawkins might point out that Harding surely wrote it on a computer, sent it by mail or email to a publisher, and had it published by modern computerized typography.

Bad enough, but far more serious is the fact that the “strong programme” depends on assuming that people, social power, and social injustice are real.  Harding’s particularly naïve application of it also assumes that males, females, and rape are not only real but are unproblematic categories—yet mathematics is not.  How the strong programmers can be so innocently realist about an incredibly difficult concept like “power,” while denying that 2 + 2 = 4, escapes me.

Clearly, these positions are untenable, but that leaves a vast midrange.

The empiricist end of the continuum would, I think, be anchored by John Locke (1979 [1697]) and the Enlightenment philosophers who (broadly speaking) followed him.  Locke was not only aware of human information-processing biases and failures; his account of them is amazingly modern and sophisticated.  It could go perfectly well into a modern psychology textbook.  He realized that people believe the most fantastic nonsense, using a variety of traditional beliefs as proof.  But he explains these as due to natural misinference, corrected by self-awareness and careful cross-checking.  He concluded that our senses process selectively but do not lie outright.  Thus the track from the real world to our knowledge of it is a fairly short and straight one—but only if we use reason, test everything, and check deductions against reality.

Locke’s optimism was almost immediately savaged by David Hume (1969 [1739-40]), who concluded that we cannot know anything for certain; that all theories of cause are pure unprovable inference; that we cannot even be sure we exist; and that all morals and standards are arbitrary.  This total slash-and-burn job was done in his youth, and has a disarming cheerfulness and exuberance about it, as if he were merely clearing away some minor problems with everyday life.  This tone has helped it stay afloat through the centuries, anchoring the skeptical end of the continuum.

Immanuel Kant (1978, 2007) took Hume seriously, and admitted that all we have is our experience—and maybe not even that.  At least we have our sensory impressions: the basic experiences of seeing, smelling, hearing, feeling, and tasting.  They combine to produce full experiences, informed by emotion, cognition, and memory of earlier experiences.  This more or less substitutes “I experience, therefore maybe I am” for Descartes’ “I think, therefore I am”; Kant realized not only that thought is not necessarily a given, but, more importantly, that sensory experience is prior to thought in some basic way.  He worked outward from assuming that experience was real and that our memory of it extending backward through time was also real.  Perhaps time itself was illusory.  Certainly our experience of time and space is basic and is not the same as Time and Space.  And perhaps the remembered events never happened.  But at least we experience the memory.  From this he could tentatively conclude that there is a world-out-there that we are experiencing, and that its consistency and irreducible complexity make it different from dreams and hallucinations.

In practice, he took reality as a given, and devoted most of his work to figuring out how the mind worked and how we could deduce standards of morality, behavior, and judgment from that.  He was less interested in saving reality from Hume than in saving morality.  This need not concern us here.  What matters much more is his realization that the human brain inevitably gives structure to the universe—makes it simpler, neater, more patterned, and more systematic, the better to understand and manage it.  Obviously, if we took every new perception as totally new and unprecedented, we would never get anything done.

Kant therefore catalogued many of the information-processing biases that have concerned psychologists since. Notable were his “principle of aggregation” and “principle of differentiation,” the most basic information-processing heuristics (Kant 1978).  The former is our tendency to lump similar things into one category; the latter is our tendency to see somewhat different things as totally different.  In other words, we tend to push shades of gray into black and white.  This leads us to essentialize and reify abstract categories.  Things that refuse to fit in seem uncanny.  More generally, people see patterns in everything, and try to give a systematic, structured order to everything.  From this grew the whole structuralist pose in psychology and anthropology, most famously advocated in the latter field by Claude Lévi-Strauss (e.g. 1962).

Hume and Kant were also well aware—as were many even before them—of the human tendency to infer agency by default.  We assume that anything that happens was done by somebody, until proven otherwise.  Hence the universal belief in supernaturals and spirits.  This and the principle of aggregation give us “other-than-human persons,” the idea that trees, rocks, and indeed all beings are people like us with consciousness and intention.

Kant’s focus on experience and the ways we process it was basic to social science; in fact, social science is as Kantian as biology is Darwinian.  However, Kant still leaves us with the old question.  His work reframes it:  How much of what we “know” is actually true?  How much is simply the result of information-processing bias?

People could take this in many directions.  At the empiricist end was Ernst Mach, who developed “positivism” in the late 19th century.  Well aware of Kant, Mach advocated rigorous experimentation under maximally controlled conditions, and systematic replication by independent investigators, as the surest way to useful truths.  The whole story need not concern us here, except to note that controlled, specified procedures and subsequent replication for confirmation or falsification have become standard in science (Kitcher 1993).  Note that positivism is not the naïve empiricist realism that postmodernists and Dawkinsian realists think it is.  It is, in fact, exactly the opposite.  Also, positivism does not simply throw the door open to racist and sexist biases, as the Hardings of the world allege.  It does everything possible to prevent bias of any kind.  If it fails, the problem is that it was done badly.

Kant did not get deeply into the issue of social and political influences on belief, but he was aware of them, as was every thinker from Plato on down.  Kantians almost immediately explored the issue.  By far the most famous was Marx, whose theory of consciousness and ideology is well known; basically, it holds that people’s beliefs are conditioned by their socioeconomic class.  Economics lies behind belief, and also behind nonsense hypocritically propagated by the powerful to keep themselves in power.

By the end of the 19th century, this was generalized by Nietzsche and others to a concern with the effects of power in general—not just the power of the elite class—on beliefs.  This idea remained a minority position until the work of Michel Foucault in the 1960s and 1970s.  Foucault is far too complex to discuss here, but his basic idea is simple:  established knowledge in society is often, if not always, “power/knowledge”:  information management in the service of power.  Foucault feared and hated any power of one person over another; he was a philosophic anarchist.  He saw all such sociopolitical power as evil.  He also saw it as the sole reason why we “know” and believe many things, especially things that help in controlling others.  He was especially attracted to areas where science is minimal and need for control is maximal:  mental illness, sexuality, education, crime control.  When he began writing, science had only begun to explore these areas, and essentially did not exist in the crime-control field.  Mischief expanded to fill the void; there is certainly no question that the beliefs about sex, women, and sexuality that passed as “science” before the days of Kinsey had everything to do with keeping women down and nothing to do with truth.  Since his time, mental illness and its treatment, as well as sexuality, have been made scientific (though far from perfectly known), but crime control remains exactly where it was when Foucault wrote, and, for that matter, where it was when Hammurabi wrote his code.

A generation of critics like Sandra Harding concluded that science had no more grasp on truth than religion did.  Common such “science” certainly was; typical it was not.  By the time the postmodernists wrote, serious science had reached even to sex and gender issues, with devastating effects on old beliefs.  The postmodernists were flogging a dead horse.  Often, they kept up so poorly on actual science that they did not realize this.  Those who did realize it moved their research back into history.  Finding that the sex manuals of the 19th century were appalling examples of power/knowledge was easy.  The tendency to overgeneralize, and see all science as more of the same, was irresistible to many.  Hence the assumption that Newton and presumably all other scientists were mere purveyors of yet more sexist and racist nonsense.

A “strong programme” within science studies holds that science is exactly like religion: a matter of wishes and dreams, rather than reality.  This is going too far.  Though this idea is widely circulated in self-styled “progressive” circles, it is an intensely right-wing idea.  It stems from Nazi and proto-Nazi thought and ideology (including the thought of the hysterically anti-Semitic Nietzsche, and later of Martin Heidegger and Paul de Man, both committed and militant Nazis, and influenced also by the right-wing philosopher and early Nazi-sympathizer Paul Ricoeur).  It is deployed today not only by academic elites but also by the fundamentalist extremists who denounce Darwinian evolution as just another origin myth.  The basically fascist nature of the claim is made clear by such modern right-wing advocates as Gregg Easterbrook (2004), who attacked the Union of Concerned Scientists for protesting against the politicization of science under the Bush administration.  Easterbrook makes the quite correct point that the Union of Concerned Scientists is itself a politically activist group, but then goes on to maintain that, since scientific claims are used for political reasons, the claims are themselves purely political.  He thus confuses, for example, the fact that global warming due to greenhouse gases is now a major world problem with the political claims based on this fact; he defends the Bush administration’s attempt to deny or hide the fact.

Postmodernists dismiss science—and sometimes all truth-claims—as just another social or cultural construction, as solipsistic as religion and magic.  Some anthropologists still believe, or at least maintain, that cultural constructions are all we have or can know.  This is a self-deconstructing position; if it’s true, it isn’t true, because it is only a cultural construction, and the statement that it’s only a cultural construction is only a cultural construction, and we are back with infinite regress and the Liar’s Paradox.  The extreme cultural-constructionist position is all too close to, and all too usable by, the religious fundamentalists who dismiss science as a “secular humanist religion.”

If we trim off these excesses, we are left with Foucault’s real question:  How much of what we believe, and of what “science” teaches, is mere power/knowledge?  Obviously, and sadly, a great deal of it still is, especially in the fields where real science has been thin and rare.  These include not only education and crime control, but also economics, especially before the rise of behavioral economics in the 1990s.  “Development” is another case (see Dichter 2003; Escobar 2008; Li 2007).  Rather little is known in this area, and the factual knowledge accumulated over the years is routinely disregarded by development agents; the pattern of disregard fits perfectly with Foucaultian theory.  To put it bluntly, “development” is usually about control, not about development.  Indeed, coping strategies for most social problems today are underdetermined by actual scientific research, leaving power/knowledge a clear field.

However, this does not invalidate science.  Where we actually know what we are doing, we do a good job.  Medicine is the most obvious case.  Foucault subjected medicine to the usual withering fire, and so have his followers, but the infant mortality rate under state-of-the-art care has dropped from 500 per thousand to 3 in the last 200 years, the maternal mortality rate from 50-100 per thousand to essentially zero, and life expectancy (again with state-of-the-art health care) has risen from 30 to well over 80.  Somebody must be doing something right.

Medical science largely works.  Medical care, however, lags behind, because the wider context of how we deal with illness and its sociocultural context remains poorly studied, and thus a field where power/knowledge can prevail (Foucault 1973).

Here we may briefly turn to Chinese science to find a real counterpart.  The Chinese, at least, had deliberately designed, government-sponsored case/control experiments as early as the Han Dynasty around 150 BC (Anderson 1988).  The ones we know about were in agriculture (nong; agricultural science is nongxue), and Chinese agriculture developed spectacularly over the millennia.  It is beyond doubt that this idea was extended to medicine; we have some hints, though no real histories.  Unfortunately most of the work was done outside the world of literate scholars.  We know little about how it was done.  A few lights shine on this process now and then over the centuries (e.g. the Qi Min Yao Shu of ca. 550, and the wonderful 17th-century Tiangong Kaiwu, an enthusiastic work on folk technology).  They show a development that was extremely rigorous technically, extremely rapid at times, and obviously characterized by experiment, analysis, and replication.

In medicine (yi or yixue), the developments were slow, uncertain, and tentative, because of far too much reliance on the book and far too little on experience and observation (see e.g. Unschuld 1986).  However, there was enough development, observation, and test—corpse dissection, testing of drugs, etc.—to render the field scientific.  Even so, we must take note that it was far more tradition-bound and far less innovative than western medicine after 1650 (cf. Needham 2000 and Nathan Sivin’s highly skeptical introduction thereto).  Medical botany and nutrition are probably the most scientific fields, but medical botany ceased to progress around 1600, nutrition around the same time or somewhat later.  It is ironic that Chinese botany froze at almost exactly the same time that European botany took off on its spectacular upward flight.  Li Shizhen’s great Bencao Gangmu of 1593 was more impressive than the European herbals of its time.  Unfortunately, it was China’s last great herbal until international bioscience came to Chinese medicine in the last few decades.  Herbal knowledge more or less froze in place; Chinese traditional doctors still use the Bencao Gangmu.  By contrast, European herbals surpassed it within a few years, and kept on improving.

Best of all, thanks to the life work of the incredible scholar H. T. Huang (2000), we know that food processing was fully scientific by any reasonable standard.  Chinese production of everything from soy sauce to noodles was so sophisticated and so quick to evolve in new directions that, in many realms, it remains far ahead of modern international efforts.  Thanks to H. T. and a few others, we can understand in general what is going on, but modern factories cannot equal the folk technologists in actual production.

One thing emerges very clearly from comparison of epistemology and the historical record:  using some form of the empirical or positivist “scientific method” does enormously increase the speed and accuracy of discovery.  Intuition and introspection, by contrast, have a poor record.  Medieval scholars, both Platonists and Aristotelians, relied on intuition, and did not add much to world science; much of the triumph of the Renaissance and the scientific “revolution” was due to developments in instrumentation and in verification procedures.  Psychologists long ago abandoned introspective methods, since the error rate was extremely high.  Doctors have known this even longer.  The proverb “the doctor who treats himself has a fool for a patient” is now many centuries old.

The flaws of the empirical and positivist programs are basically in the direction of oversimplification.  Procedures are routinized.  Mythical “averages” are used instead of individuals or even populations (Kassam 2009).  Diversity is ignored.  Kant’s principles of differentiation and aggregation are applied with a vengeance (cf., again, Kassam 2009, on taxonomy).  The result does indeed allow researcher bias to creep in unless zealously guarded against—as Bacon pointed out.  But, for all these faults, science marches on.  The reason is that the world is fantastically complicated, and we have to simplify it to be able to function in it.  Quick-and-dirty algorithms give way, over time, to more nuanced and detailed ones, but Borges’ one-to-one map of the world remains useless.  A map of the world has to simplify, and then the user’s needs dictate the appropriate scale.

The full interpretive understanding sought by many anthropologists, by contrast, remains a fata morgana.  It is fun to try to understand every detail of everyone’s experience, but even if we could do it (and we can’t even begin) it would be as useless as Borges’ map.

On the other hand, we need that attempt, to bring in the nuances to science and to correct the oversimplifications.  A purely positivist agenda can never be enough.

Case Study:  Yucatec Maya Science

Anthropology is in a particularly good place to test and critique discussions of science, because we are used to dealing with radically different traditions of understanding the world.  Also, we are used to thinking of them as deserving of serious consideration, rather than simply dismissing them as superstitious nonsense, as modern laboratory scientists are apt to do.  I thus join Roberto Gonzalez (2001) and Eugene Hunn (2008) in using the word “science,” without qualifiers, for systematic folk knowledge of the natural world.

The problem is not made any easier by the fact that no society outside Europe and the Middle East seems to have developed a concept quite like the ancient Latin scientia or its descendants in various languages, and that the European and Middle Eastern followers of the Greeks have defined scientia/science in countless different ways.  Arabic ‘ilm, for instance, in addition to being used as a translation for scientia, has its own meanings, and this led to many different definitions and usages of the word in Arabic.

In Chinese, to know is zhi, and this can mean either to know a science or to know by mystical insight.  An organized body of teaching, religious or philosophical, is a jiao.  The Chinese word for “science,” kexue, is a modern coinage.  It means “study of things.”  It was originally a Japanese coinage using Chinese words.  The Chinese borrowed it back.  Lixue, “study of the basic principles of things,” is a much older word in Chinese, and once meant something like “science,” but it has now been reassigned to mean “physics.”  Other sciences have mostly-new names coined by putting the name of the thing studied in front of the Chinese word xue “knowledge.”   But non-science knowledges are also xue; literature and culture is wen xue “literary knowledge.”  We can define Chinese “science” in classical terms, without using the modern word kexue, by simply listing the forms of xue devoted to the natural (xing, ziran) world as opposed to the purely cultural.

Yucatec Maya has no word separating science from other kinds of knowledge.  So far as I know, the same is true of other Native American languages.  The Yucatec Maya language divides knowledge into several types.  The basic vocabulary is as follows:

Oojel to know (Spanish saber).

Oojel ool to know by heart; ool means “heart.” Cha’an ool is a rare or obsolete synonym.

K’aaj, k’aajal to recognize, be familiar with (Spanish conocer in the broader sense).

K’aajool, k’aajal ool to “recognize by heart” (Spanish reconocer):  to recognize easily and automatically.  (The separation between oojel and k’aaj is so similar to the Spanish distinction of saber and conocer that there may be some influence from Spanish here.)

K’aajoolal (or just k’aajool), knowledge; that which is known.

U’ub- to hear; standardly used to mean “understand what one said,” implying just catching it or getting it, as opposed to na’at, which refers to a deeper level of understanding.

Na’at to understand.

The cognate word to na’at in Tzotzil Maya is na’, and it has been the subject of an important study by Zambrano and Greenfield (2004).  They find that it is used as the equivalent of “know” as well as “understand,” but focally it means that one knows how to do something—that one can do it on the basis of knowledge of it.  This keys us into the difference between Tzotzil knowing and Spanish or English knowing:  the Tzotzil know by watching and then doing (as do many other Native Americans; see Goulet 1998, Sharp 2001), while Spanish and English children and adults know by hearing lectures or by book-learning.  It seems fairly likely that a culture that sees knowledge as practice would not make a fundamental or basic distinction between magic, science, and religion.  The distinction would far more likely be between what is known from experience and what is known only from others’ stories.  Such distinctions are made in some Native American languages.

Ook ool religion, belief; to believe; secret.  Ool, once again, is “heart.”

So Chinese and Maya have words for knowledge in general but no word for science as opposed to humanistic, religious, or philosophical knowledge.  Unlike the Greeks, they do not split the semantic domain finely.

Let us then turn to “science” in English.  The word has been in the language since the 1200s.  One reference, from 1340, in the OED is appropriately divine:  “for God of sciens is lord,” i.e. “for God is lord of all knowledge.”  The word has been progressively restricted over time, from a general term for knowledge to a term for a specific sort of activity designed to increase specialized knowledge of a particular subset of the natural world.

Science can also be seen as an institution:  a social-political-legal setup with its own rules, organizations, people, and subculture.  As a procedure, however, we generally understand by “science” one of two things:

1) a general procedure of careful assemblage of all possible data relevant to a particular question, generally one of theoretical interest, about the natural world.

2) a specific procedure characterized by cross-checking or verification (Kitcher 1993) or by falsifiability (Popper 1959).  Karl Popper’s falsifiability touchstone was extremely popular in my student days, but is now substantially abandoned, even by people who claim to be using it and to regard it as the true touchstone of science.  One problem is that falsifiability is just as hard to get and ensure as verifiability.  We all know anthropological theories that have been falsified beyond all reasonable doubt, but are still championed by their formulators as if nothing had happened.  Thus, as of 2009, Mark Nathan Cohen (2009) was still championing his idea that Pleistocene faunal extinctions forced people to turn to agriculture to keep from starving—a theory so long and so frequently disproved that the article makes flat-earthers look downright reasonable.

Another and worse problem is that Popper’s poster children for unfalsifiable and therefore nonscientific theories have been disproved, or at least subjected to some rather serious doubt.  Adolf Grünbaum (1984; see also Crews 1998, Dawes 1994) took on Freud and Popper both at once, showing quite conclusively that—contra Popper—Freudian theory was falsifiable, and had in fact been largely, though not entirely, falsified.  As with Cohen, the Freudians go right on psychoanalyzing—and, unlike Cohen, charging huge amounts of money.  Marx’s central theory was Popper’s other poster child, and while it does indeed seem to be too fluffy to disprove conclusively, its predictions have gone the way of the Berlin Wall, the USSR, and Mao’s Little Red Book.

We are well advised to stick with Ernst Mach’s original ideas of science as requiring specialized observational techniques (usually involving specialized equipment and methodology) and, above all, cross-verification (Kitcher 1993).  On the other hand, David Kronenfeld points out (email of Jan. 7, 2010) that Popper’s general point—one should always be as skeptical as possible of ideas and subject them to every possible test—is as valid as ever.  Clear counter-evidence should count (Cohen to the contrary notwithstanding).

The question for us then becomes whether the Maya had special procedures for meticulously gathering data relating to more or less theoretic questions about the natural world, and how they went about verifying their data and conclusions.

The Maya are a different case, since they do not have any terminological markers at all (unlike the Chinese with xue), they do not have a history of systematic cumulative investigation and replication, and, for that matter, they do not have a concept of “nature.”  All of the everyday Maya world is shaped by human activities.  Areas not directly controlled by the Maya themselves are controlled by the gods or spirits.  Rain, the sky, and the winds, for instance, are controlled to varying degrees by supernaturals.   Ordinary workings of the stars and of weather and wind are purely natural in our English sense, but anything important—like storms, or dangerous malevolent winds—has agency.  This makes the idea of “natural science” distinctly strange in a Maya context.

However, the Maya are well aware that human and supernatural agency is only one thing affecting the forests, fields, and atmosphere.  Plants, animals and people grow and act according to consistent principles—essentially, natural laws.  The heavens are regular and consistent; star lore is extensive and widely known.  Inheritance is obvious and well recognized.  So are ecological and edaphological relationships—indeed there is a very complex science of these.

To my knowledge, there is no specific word for any particular science in Maya, with the singular exception of medicine:  ts’ak.  Ts’ak normally refers to the medicines themselves, but can refer to the field of medicine in general.  In spite of a small residual category of diseases explained by witchcraft and the like, Maya medicine is overwhelmingly naturalistic.  People usually get sick from natural causes—most often, getting chilled when overheated.  This simple theory of causation underdetermines an incredibly extensive system of treatment.  I have recorded 350 medicinal plants in my small research area alone, as well as massage techniques, ritual spells, cleansing and bathing techniques, personal hygiene and exercise, and a whole host of other medicinal technologies (Anderson 2003, 2005).  Some are “magic” by the standards of modern science, but are considered to work by lawful, natural means.  More to the point, medicine in Maya villages is an actively evolving science in which the vast majority of innovations are based either on personal experience and observation or on authority that is supposed to be medically reliable.  (Usually the authority is the local clinic or an actual medical practitioner, but a whole host of patent medicines and magical botánica remedies are in use.)  New plants are quickly put to use, often experimentally.  If they resemble locally used plants, they may be tried for the same conditions.  In the last five years, noni, a Hawaiian fruit used in its native home as a cure-all, has come to Mayaland, where it is used for diabetes and sometimes other conditions.  It is now grown in many gardens, and is available for sale at local markets.  It has been tried by countless individuals as a treatment for diabetes, and compared in effectiveness with local remedies like k’ooch (Cecropia spp.) and catclaw vine (uncertain identification).  I have also watched the Maya learn about, label, and study the behavior of newly arrived birds, plants, and peoples.  Their ethnoscience is rapidly evolving.  The traditional Maya framework is quite adequate to interpret and analyze new phenomena and add knowledge of them to the database.

This makes it very difficult to label Maya knowledge “traditional ecological knowledge” as opposed to science.  It is not “traditional” in the sense of old, stagnant, dying, or unchanging.  Nonanthropologists generally assume that traditional ecological knowledge is simply backward, failed science (see e.g. Nadasdy 2004).  They are now supposed to take account of it in biological planning and such-like matters, but, because of this assumption, they do so only to a minimal extent.

Until modern times, the Maya did not have the concept of a “science” separate from other knowledge.  They also lacked the institution of “science” as a field of endeavor, employment, grant-getting, publishing, etc.  Now, of course, there are many Maya scientists.  Quintana Roo Maya are an education-conscious, upwardly-mobile population.  The head of the University of Quintana Roo’s Maya regional campus is a local Maya with a Ph.D. from UCSC in agroecology.

This might not matter, but Maya knowledge also includes those supernatural beings, mentioned above.  This was a problem for Roberto Gonzalez and Gene Hunn as well, in their conceptualization of Zapotec indigenous knowledge as science.  The Quintana Roo Maya are a hardheaded, pragmatic lot, and never explain anything by supernaturals if they can think of a visible, this-world explanation, but there is much they cannot explain in traditional terms or on the basis of traditional knowledge.  Storms, violent winds, and other exceptional meteorological phenomena are the main case.  Strange chronic health problems are the next most salient; ordinary diseases have ordinary causes, but recurrent, unusual illnesses, especially with mental components, must be due to sorcery or demons.

One could simply disregard these rather exceptional cases, and say that Maya science has inferred these black-box variables the way the Greeks inferred atoms and aether and the way the early-modern scientists inferred phlogiston and auriferous forces.  However, discussion with my students, especially Rodolfo Otero in recent weeks, has made it clearer to me that much of our modern science depends on black-box variables that are in fact rather supernatural.  Science has moved to higher and higher levels of abstraction, inferring more and more remote and obscure intervening variables.  Physics now presents a near-mystical cosmology of multidimensional strings, dark matter, dark energy, quantum chromodynamics, and the rest.  The physicist Brian Greene admits of string theory:  “Some scientists argue vociferously that a theory so removed from direct empirical testing lies in the realm of philosophy or theology, but not physics” (Greene 2004:352).

Closer to anthropology are the analytic abstractions that have somehow taken on a horrid life of their own—the golems and zombies of our trade.  These are things like “globalization,” “neoliberalism,” “postcoloniality,” and so forth.  Andrew Vayda says it well:  “Extreme current examples…are the many claims involving ‘globalization,’ which…has transmogrified from being a label for certain modern-world changes that call for explanation to being freely invoked as the process to which the changes are attributed” (Vayda 2009:24).  “Neoliberalism” has been variously defined, but there is no agreed-on definition, and there are no people who call themselves “neoliberals”; the term is purely pejorative and lacks any real meaning, yet it somehow has become an agent that does things and causes real-world events.  There is even something called “the modernist program,” though no one has ever successfully defined “modern” and none of the people described as “modernist” had a program under that name.  Farther afield, we have the Invisible Hand of the market, rational self-interest, money, and a number of other economic concepts that suffer outrageous reification.  The tendency of social sciences to reify concepts and turn them into agents has long been noted (e.g. Mills 1959), and it is not dead.

The real problem with supernatural beings, Maya or postmodern, is that they tend to become more and more successful at pseudo-explanation, over time.  People get more and more vested interest in using reified black-box postulates to explain everything.  The great advantage of modern science—what Randall Collins (1999) calls “rapid discovery science”—is that it tries to minimize such explanatory precommitment.  The whole virtue of rapid discovery science is that it keeps questioning its own basic postulates.

If we can claim “science” for superstrings, neoliberalism, and rational choice, the Maya winds and storm gods can hardly be ruled out.  At least one can see storms and feel winds.  Black-box variables are made to be superseded.  We cannot use them as an excuse to reject the accumulated wisdom of the human species.


 

RURAL DEVELOPMENT AND MAYA ETHNOBIOLOGY: A VIEW FROM CENTRAL QUINTANA ROO, MEXICO

Saturday, July 30th, 2011

E. N. Anderson

Dept. of Anthropology, University of California, Riverside

Gene@ucr.edu

www.krazykioti.com

 

Preface

 

These papers began life as talks delivered at various learned venues since 2003.  They report some new field work, and a great deal of new thinking about older field work.  Since 1989, I have done research in and around Chunhuhub and Presidente Juarez, Quintana Roo, on Yucatec Maya development and ethnobiology.  Some of this work has theoretical or ethnographic significance, and a great deal of it has some relevance to current development issues.  It seems reasonable to bring my recent writings together in the present format, and make them available as a package.  I am also deliberately leaving these writings in a rather informal style and format, and posting them to my website, rather than trying to write them up as a formal publication.  I would rather have them free and accessible in all senses of the word, for students and professionals in the field of agricultural and rural development.


Food and Development

Sunday, June 26th, 2011

Food and development

E. N. Anderson

“The first law of economics is that for every economist there is an equal and opposite economist,… and the second law is that they are both invariably wrong.”  (Paul Sillitoe, 2010:xvii.)


Recipes Worth a Thousand Gold

Sunday, June 12th, 2011

Recipes Worth a Thousand Gold:  The Food Sections

 

By

 

Sun Simiao

 

Translated by Sumei Yi, Dept. of History, University of Washington, Seattle, with notes summarized, from edition published in Peking, 1985; ed. E. N. Anderson

 

 

Introductory Notes by E. N. Anderson

Sumei Yi, a graduate student in Chinese history at the University of Washington, has done the world a signal service in translating the material on food and nutrition from the medieval Chinese work Recipes Worth a Thousand Gold (654 A.D.).

The following is a preliminary version.  I have lightly edited it but neither of us has, so far, checked it over carefully or compared it with other editions.  Ms. Yi is releasing it to the Chinese medical world for help and advice.  We need help especially with the medical terms.


Maya Ethnobotany: Four Studies

Sunday, June 12th, 2011

Maya Ethnobotany:  Four Studies

 

E. N. Anderson

 

1.  Yucatan Maya Herbal Medicine:  Practice and Future

2.  Wild Plum Shoots and Jicama Roots:  Food Security in Quintana Roo Maya Life

3.  African Influences on Maya Foods

4.  From Sacred Ceiba to Profitable Orange

 

 

 

1.  Yucatec Maya Herbal Medicine in Quintana Roo:  Practice and Future

 

 

Abstract

The Yucatec Maya of west-central Quintana Roo maintain an herbal medical tradition involving over 450 named taxa.  Some 347 species have been identified botanically (at least to genus level) to date; others remain unidentified.  Significant differences exist between this tradition and those found in neighboring Yucatan state.  Conditions treated are usually minor:  skin problems, respiratory diseases, stomach upsets.  However, more serious conditions, including diabetes and cancer, are also treated routinely.  Commercialization of major herbal drugs is beginning.  This presages problems with biopiracy and overexploitation in the future.  To avoid conflicts such as those seen recently in Chiapas, there must be cooperation between governments, biologists, and Maya communities.  Some efforts in this direction have been made.


The Morality of Ethnobiology

Sunday, June 12th, 2011

Doctor Faustus, Ethnobiologist:

The Morality of Ethnobiology

 

E. N. Anderson

Dept. of Anthropology and

Center for Conservation Biology

University of California, Riverside, CA 92521-0418

Gene@ucrac1.ucr.edu

 

 

Abstract

 

Recent debates over bioprospecting, biopiracy, and indigenous intellectual property rights have raised some basic ethical issues that lie well outside the ordinary province of anthropology and biology.  This paper focuses on the wider issues, some of which are rather intractable.   Possibilities for amelioration are suggested, but the paper is concerned primarily with basic questions about the morality of extracting information that is extremely useful but could be expropriated or abused.


Saving American Education in the 21st Century:

Monday, February 7th, 2011

Part I:  K-12

1

Modern classroom education is very different from traditional teaching.  A teacher lectures, often in highly abstract terms, and often with no demonstration (though perhaps with “visuals”—not necessarily very relevant or revealing ones).  Students copy facts and memorize them.  Testing does not involve making the students do what they’ve learned; it involves making them guess which one of four statements is most like what a testmaker would think was the correct answer.  As Marc Bousquet (a professor at Santa Clara University) puts it:  “We herd them into a system that manufactures desperation and then hand them hamster wheels with sickly, hypocritical grins on our faces” (Bousquet 2010:B2).

The lecture-and-examination system arose in the ancient world and was perfected in medieval times.  It evolved to teach philosophy and other highly abstract fields to high-level students.  It has persisted today largely because it is cheap.  One can hire someone, not always the most qualified person, to teach a very large number of people.  This works if all one wants to do is teach a bare minimum of information.

However, when actual usable knowledge is the goal, we revert to the age-old demonstration-and-imitation model.  We do this for lab science, computer skills, typing, cooking, driving, sports coaching, and above all apprenticeship on the job.  The technique requires much input of teaching effort by skilled personnel, but it is the only way that works, as everyone has known since Ug the cave man taught his kids to flake stone tools.

Thus, though we Americans are so cynical that we pretend not to know how to teach, the skills that matter to us are taught perfectly well.  Young people are guided, in actual practice, by coaches and mentors.  Tell a sports coach, construction foreman, driving teacher, or chef that he should teach his students by making them sit motionless and memorize random bits for a standardized test.  Preposterous!

It is only in “book learning” that we pretend such methods work; this shows our opinion of the learning in question.  Lecture-and-examination education is, in short, not a good way of teaching.  It is too abstract, remote, hands-off, and impersonal.  It leads to rote memorization.  It discourages creative application of knowledge.  Recent letters to Science and the Chronicle of Higher Education have responded to this truism by stoutly maintaining that a professor who is a great speaker and actor can teach effectively through lecturing.  Sure, but this line gives away the store; if you have to have a movie star to do the job right, what hope is there for even good lecturers, let alone mediocre ones?

Rote learning is far worse.  It is the method of choice for those who want to regiment citizens rather than enlighten them.  As such, it has become the darling of politicians, who want followers, not thinkers.  It has given us a generation many of whom can’t write, can’t understand what they read (having been trained to read only to memorize random facts), can’t do scientific experiments, and don’t know the local environment.

Even worse, many students come to believe that actual thinking and creativity are strictly for the outside-of-class world!  Students who are perfectly thoughtful and creative in their daily lives diligently turn off their brains and stop questioning when they get into class.  There is now an active culture among teenagers of writing short stories and poems on the Internet, posting them on their MySpace and Facebook sites for their friends—yet they appear genuinely unaware that writing such things was once part of education.  They never get to write stories in class, and have ceased to see education as anything but standardized testing.  They are constantly online, learning and writing and sharing, but they separate these activities entirely from formal education.

Unfortunately, many modern alternative methods do not work well either.  Creativity for its own sake, or “discussion” for its own sake, can become undirected and trivial.  Relying on children’s natural desire to learn is a fine and necessary start, but it is inadequate to get them through the slog of memorizing times tables and chemical elements.

Education for the future has to empower children and strengthen them, and make them lifelong learners.  Recently, the trend has been all the other way:  toward dragooning, forced memorization, standardized testing, and every other thing that breaks a child’s will and ruins his or her mind for life. 

            Americans will have to figure out what they actually want from education.  Rote memorization of trivia?  Citizenship?  Understanding the world?  Job skills? 

2

            We have long known how to teach and learn.  Yet, a great deal of what we know has been forgotten.  John Medina has conveniently reviewed much of this in Brain Rules (2008).  His rules—as conveniently summarized on the back cover—are:

“Exercise:  Exercise boosts brain power.

Survival:  The human brain evolved, too.

Wiring:  Every brain is wired differently.  [Individual differences are far too great to ignore—yet we generally ignore them, wrecking the teaching process.]

Attention:  We don’t pay attention to boring things.

Short-term Memory:  Repeat to remember.

Long-term Memory:  Remember to repeat.

Sleep:  Sleep well, think well.  [Of all rules, this is the most forgotten.  We now know that learning is consolidated during sleep.  Rats learning mazes replay these in their dreams; Medina 2008:164.]

Stress:  Stressed brains don’t learn the same way.  [In fact, they barely learn—except for the blazing, brilliant memory for the major stressor in a given case.  One focuses.  Depression, being generally a form of long-term stress, is thus devastating to learning.]

Sensory integration:  Stimulate more of the senses.  [He highlights smell, often forgotten.  He also points out that humans cannot multitask; the brain simply cannot pay real attention to two things at once.  So one must be careful to keep the multimodality targeted.]

Vision:  Vision trumps all other senses. 

Gender:  Male and female brains are different.  [Trivially, however.]

Exploration:  We are powerful and natural explorers.”

We may add that hard tests are crucial; people have to know what they don’t know.  Educators even advocate giving students tests on material they are about to learn, so that they will at least know what the hard questions are (Roediger and Finn 2010).  Of course the students fail such tests, but they do not blame themselves, and they then work harder and with more focus.

            Of these, we may note that some were known to the ancient Greeks.  My high-school psychology course more than 50 years ago taught me that “frequency, recency, and vividness” were the keys to remembering, and the line was classic long before that.

            Play has also greatly declined.  Recess and physical education have been dropped from schools, to provide yet more time for mindless drills.  At home, fear of street violence, availability of TV and video games, and other factors have virtually eliminated actual play in the old sense.  This is clearly disastrous from a psychological and educational view (Winerman 2009).   Yet, everybody knows, at some level, that successful education has to involve physical activity, including a good deal of “fun.”  Without field trips, experiments, and personal experiences, it doesn’t work. 

If one uses all these rules, or whatever variants of them one prefers, one finds a classroom with a great deal of multimodal teaching, a fair amount of moving around, a great deal of repetition in different ways and forms, and not too much material presented at any one time.

            Yet, during my lifetime, most American education has been moving away from these goals.  The No Child Left Behind initiative in particular—coupled with the huge tax cuts that accompanied it—led to enormous classes, drilled endlessly in mindless and overpacked curricula, with no accommodation to individual differences, need for rest, need for exercise, need for multimodal presentation, or anything else human. 

3

The schools are one area in which government must do the job.  It is a necessary political and social service, not a matter of material production.

Inevitably, then, politics has invaded education.  One reason for the failure of American education in science is that it has become politicized in an unsavory manner.  

Taxpayers and governments are so indifferent to their responsibility to educate the young that America’s schools are typically in serious need of repair, paint, landscaping, and new equipment; many are falling apart and downright unsafe.  Computer facilities and libraries are in dreadful shape. 

Every American child can compare his or her school with the local shopping mall, and see very clearly which one gets the attention and the money.  That lesson in values outweighs everything learned in class.  Meanwhile, right-wing politicians and talk show hosts continually attack teachers, claiming they are overpaid, coddled, and so on.  Clearly, if the community makes its scorn for schools and teachers obvious, the students will not take education seriously.  Very different are traditional societies, from hunter-gatherer societies to ancient Greece and China.  Then, even if both children and teachers were penniless, elders and their teachings were respected.  Still more different was America 50 years ago, when I was learning.  In those days, learning, school, and teachers were respected, and we kids listened up.

The problem of school maintenance and budget is obviously worst in poor neighborhoods, but paradoxically such schools may have less of a problem with students making the negative comparisons, because the difference between the school and other local buildings is smaller.  This does not change the brute fact of extreme economic injustice.  Spending on schools in a poor community that cares a lot may be only a fraction of that spent in a rich one that cares relatively less for education.

Government and private schools currently suffer from the belief that education is valuable only in so far as it is training for specific jobs.  No.  Education is essential to human development.  Humans are an end, not a means. 

Probably an even worse attitude, harder to spot today but much more open when I was young, is the idea that teaching is about making children learn discipline—“learning to mind,” it was called in my day.  My father (a Texas farm boy, educated in a tiny rural schoolhouse) quoted a (mythical) Texas farmer: “I don’t care what you learn ‘em so long as they don’t like it.”  This Puritanical attitude has made Americans focus on what children “should” learn and “should” do, and on making sure they don’t like it, rather than on what the children actually need.  We tend to teach whatever is the most grimly unpleasant and mind-deadening side of education, and abolish the pleasant or directly useful subjects as “frills.”  Really valuable subjects like natural history, nutrition, health, and exercise have thus gone to the wall.

All the above implies that saving American education at the grade-school level will take work.  It must involve, first, spending a great deal more money on actual classrooms and classroom teaching.  Rebuilding deteriorated schools is not only a matter of safety and common care for children, but a matter of community pride in education.  Teacher/student ratios much above 20-25 students per teacher in grade school and around 100 in big college classes make education simply impossible, unless rote memorization for standardized tests is dishonestly called “education.”  Teachers have to mentor, guide, and correct.  This cannot be done in mass batches.

George W. Bush’s “No Child Left Behind” policy has been a total disaster.  Even the idolized standardized test scores have fallen since it was introduced—let alone the measures of real education, such as the ability of college freshmen to write and do research, or the ability of fledgling employees to do useful work.  “NCLB” penalized teachers and principals for things over which they had no control, notably the number of non-English-speaking students in their districts.  It did nothing to reduce class size or provide better equipment.  It laid unfunded mandates on cash-strapped states and communities.  It favored private schools over public ones.  

By far its worst damage, however, has been its single-minded focus on standardized tests as the sole measure of quality.  The Educational Testing Service, which has a virtual monopoly on the tests, was a huge donor to Bush’s campaigns. 

The result has been an enormous relative increase in testing and in teaching to the test.  Schools compete to see who can achieve the highest test scores; those that fall behind are savagely penalized.  Teachers and principals are evaluated solely by how well their students do on the all-important tests. 

Standardized tests are bad enough in themselves, but it is possible to construct multiple-choice tests that require creativity, originality, and real thought.  I have seen it done.  Cleverly designed standardized tests are a blessing in many situations.  Moreover, it is possible to use even the more mindless sort of standardized test to advantage when all one needs to test for is straight declarative knowledge—memorizing scientific names or chemical elements.

However, the mass tests used in schools do not even approximate this goal.  One must seriously wonder whether anyone ever intended that they should. Given the administration that designed the plan, one suspects that it was deliberately designed to reduce original and critical thinking as much as possible.

We in America thus return to the level of traditional schools in Asia and Africa, where children learn to chant sacred books without understanding the words.

One of the predictable results of No Child Left Behind has been skyrocketing rates of cheating.  Some 64% of high school students now cheat on tests, and 36% have plagiarized papers (David Crary, Associated Press, online article, Dec. 1, 2008).  In my childhood, cheating in high school was virtually unknown.  Teachers and staff are too overworked to police this, and many schools look the other way in any case, since their funding and many jobs are on the line.

4

It is unfair to single out Bush.  All segments of the body politic are guilty.  Liberals have rushed to embrace anti-teacher reforms (under Obama) and, more generally, the anti-science rhetoric of the “postmodernists.”  Moderates have supported “professional” schools in universities at the expense of the liberal arts and sciences. 

Politicians of all stripes routinely campaign against teachers and school taxes, and label them “special interests.”  Education was in desperate financial straits in most of the country even before the 2008 recession, but the reverberations of that crash are seriously threatening to end public education in much of the country.  California, Texas, and many smaller states are cutting education to the bone.  As of 2011, Texas, already one of the least-educated American states and one spending the least on education, is faced with a huge budget shortfall, and is proposing to deal with it by cutting public education by one-third or more (Los Angeles Times, Feb. 7, 2011, p. A1).  This will, in the medium term, reduce Texas to levels of literacy and numeracy well below those of the less corrupt Third World nations.

Schools are blamed for all the faults of the young.  To hear politicians talk, parents have nothing to do with the kids’ problems, and have no responsibilities for their children.  This is because parents vote, and are numerous.  No politician wants to blame a substantial voting bloc for anything.  Moreover, many politicians go on to say that public money spent on education is “wasted.”  They cut school funding, in spite of occasional lip service to education in general.  Open or slightly-covert support for private schools over public schools is official in the Republican party and not infrequent among other parties.    

“No Child Left Behind” certainly was designed to hurt public schools, thus giving private schools a leg up; one may even suspect a deliberate attack on education and learning in general, since the Republican Party has, statistically, become the party of the less educated (a striking reversal since the 1950s).  However, the real target was more limited:  NCLB was explicitly and openly intended most specifically as an attack on teachers’ unions and associations.  This attack was largely due to the unions’ being strongly Democratic and active in politics.  Of course they were Democratic because of decades of Republican opposition to funding for public education, so the whole matter became a positive feedback loop.  The more the Republicans attacked teachers’ unions, the more solidly Democratic the latter became. 

However, there is more to this.  Besides a desire not to alienate parents by assigning them some blame, the real underlying hatred of unions is due to money issues.  The Republican imperative to cut taxes means both cutting teachers’ salaries and cutting expenses on schools overall.  The resulting inevitable decline in educational levels is then blamed on the unions for protecting “bad teachers” from public wrath—and from firing.  Meanwhile, class sizes rapidly increase, school nurses and counselors have joined the dodo and the great auk, physical education and other relatively expensive programs have gone too, and teachers spend an average of $400 a year on basic classroom materials—chalk, paper and such—that schools no longer provide. 

Teachers were formerly fairly conservative, and many teachers were Republicans.  The tensions of the last 20 years have changed this, and the resulting polarization is not healthy.  Even moderates now often blame the unions for failed schooling, especially because they protect “mediocre” teachers.  Conservatives such as David Brooks argue that what the schools need is the abolition of tenure, cutting teachers’ pay, and firing “inadequate” teachers—inadequacy to be determined on the basis of their students’ standardized test scores.  Obviously, many conservatives would dearly love to fire teachers for political reasons, and often try to do so.  But even if they fired teachers “fairly,” the result would be a massacre. 

Already, burnout and job dissatisfaction are costing the United States thousands of teachers every year.  Special-needs children are mainstreamed, class sizes are steadily expanding, and teachers’ aides are being eliminated by budget cuts.  Attracting the finest to teach school under these conditions is already an enormous challenge.

What would happen to teaching if the conservatives had their way?   No one seriously thinks we can attract better teachers by promising less pay, eliminating job security, and threatening them with summary firing if they disagree with the principal or have a run of poor students.

The right wing has proposed a “voucher system,” in which children would be given money for private schooling to escape the public school system.  This would provide a sterling excuse to defund the public schools and ultimately to end public education.  Experience teaches that the private schools would continue to raise their fees.  The voucher sums could and would easily be cut whenever fiscal problems struck a state, since cutting them would be the obvious expedient.

One can only conclude that the real agenda of the conservatives is to end public education.  This became open in the 2010 elections, with some Republican candidates openly advocating abolition of public education.  It is, after all, a huge consumer of taxes, and it is by far the most important leveling mechanism in the United States.  It is, in fact, the only surviving bastion against total takeover by the elites, and the only real source of opportunity for nonwhites and less than affluent families.  Would abolishing it accomplish anything except cutting off these opportunities?

The No Child Left Behind plan is openly racist.  Bush and his advisors knew perfectly well that impoverished minority schools could never compete, if only because of the terrible health problems in the ghettos and barrios they serve.  One cannot possibly avoid the conclusion that penalizing these schools was meant to hurt minority children, not to help them.  The penalties, such as replacing principals and thus destroying any continuity (rather than—say—actually evaluating the principals on the basis of their administrative skills), make sense only if they were designed to hurt the slower schools, not to improve them.

Yet, in America, quality private schools are hopelessly inadequate in number and highly concentrated in a few cities.  Outside of the richer parts of the northeastern US and the Washington, DC, area, there are relatively few private schools that actually focus on academics.  The vast majority of private schools in the United States are religious, and many of those teach little beyond religious bigotry and six-day creationism.  The religious right, with the support of cynical politicians who know better but need the votes, has set itself unalterably against the teaching of evolution and environment in the schools.  They often claim that they want only “equal time for creation.”  This might not be bad (I think it would be good) if actual evolutionary theory and also Native American and other creation stories were allowed as well as “literal” Judeo-Christian ones.  However, where creationists have taken power, or been able to frighten school boards, they have simply ruled evolution off the turf and out of the textbooks.  The basis of biology—Darwinian theory—is now not taught widely in the United States.  In fact, only 28% of science teachers teach it; 60% equivocate; 13% deny it outright and teach accordingly (Berkman and Plutzer 2011).  In some states, it is gone from grade school education.  In others it is still in the books, but so watered down that it is not even a shadow of its true self. 

The same people have attacked sex education in schools.  Evidently, many Americans are more interested in certain kinds of indoctrination than in actual study and assessment of evidence, or, for that matter, in morality.  American education has moved away from a focus on life skills and health; time spent on hygiene and health education, physical education, and relevant aspects of biology has declined across the board.

Of course, multiculturalism is also under attack, though common experience confirmed by serious research shows that (at least for Latinos, and doubtless for all bicultural individuals) both involvement in one’s culture of origin and involvement in US mainstream culture are valuable (Smokowski et al. 2009).  Strong confidence in one’s own traditions is important for learning others’ traditions well.

Because of political controversies, time spent on civics and history has also declined.  Far-left and far-right parents feud with the schools over how these controversial subjects will be taught.   Ultimately, many schools shy away from teaching more than a bare minimum.  Fortunate are those states like California that have public university systems that make no-nonsense demands on the public schools:  no decent history courses, no entry to the universities.  But California’s funding crisis has now gutted even this.

Foreign languages, too, are generally required for college entrance, but anyone who travels in Europe or Asia is aware of the incredible deficiency of North Americans in this regard, and any American who is not ashamed is not paying attention.  Swiss children are expected to know five languages fluently, and most Europeans know at least three.  I have known totally unschooled individuals in Asia who knew five languages—they had simply picked them up—and have met school-trained Asians who knew over 20!  The human animal is biologically programmed to learn languages fast and easily (Hauser and Bever 2008; Pinker 1995).  Humans benefit by knowing more than one.  It makes learning further languages and other linear communication forms that much easier.  Learning only one language is probably unnatural for humans, and certainly limits learning ability.  It probably leads to failure to develop key neural channels; inadequate learning of a single language most certainly does, as we know from a few tragic cases of isolated children (Pinker 1995).  Fluency in two or, better, three languages should be required.  As in every other case, the obsession with mindless standardized tests has ruined language teaching in America.

Many Americans defend their ignorance by claiming that learning a second language interferes with knowing the first one!  Immigrants and Native Americans have been constantly attacked for speaking their heritage languages, and attempts go on and on to force them to speak English only.  Science proves the opposite:  since the human mind is designed to learn languages, the more one learns, the better one knows one’s own.  Opposition to second languages is second only to standardized test mania as a proof that American education is far too influenced by irresponsible and ignorant people.

5

Right-wingers and the more extreme end of the business world consider teaching about ecology and the environment to be a threat to their interests.  Even the most innocuous references to air and water pollution have been forced out of textbooks.  Many dubious ideas surface in literature made available by coal, oil, and nuclear interests (Selcraig 1998; Stauber and Rampton 1995).  

Some of the right-wing writings on the subject are so extreme as to be chilling.  Facts Not Fear: A Parents’ Guide to Teaching Children about the Environment (Sanera and Shaw 1996, with foreword by Marilyn Quayle) manages not only to misrepresent both science and environmental politics, but goes on to imply strongly that all ecologists and environmentalists are actually Communists trying to destroy the capitalist system.  This is part of an even wider disinformation campaign by polluters and deforesters (Stauber and Rampton 1995; for more examples, see Rush Limbaugh’s See I Told You So [1993] and other books by Limbaugh and by Glenn Beck).

Some environmental education has indeed been politicized in an overly anti-capitalist way.   Conservation biologists were stung into releasing a report in 1997 that found many problems with books for the public and school market: “some texts seem more interested in advocacy than science,” promoting errors and misrepresentations of their own (“Overhauling Environmental Education,” Science 276:361, 1997).   One observes that many such texts also blame “capitalism” or “the capitalist system” for the environment’s ills.  It is hard to understand what they mean by this, since they leave their terms undefined.  Certainly it does not square with what we know of environmental management in ancient Rome—1500 years before capitalism—or in the USSR or modern China.    

6

Teachers need much more freedom to teach as they will, and much more training in the actual subjects they teach, than they get in most public and private schools.  They need to study biology, as well as whatever may be useful in “education” curricula.

The current problems of the schools are greatly exacerbated by the layers of administration to which they are subject.   Many school systems, from grade school to universities, spend over 40% of their very limited budgets on administration.  “Local control” of education should mean not control by local politicians, but control by the teachers, subject to consultation with the parents.  The teachers need to be insulated from both politics and parental interference. 

Parents—but not politicians—need some recourse. We need to go back to a world in which teachers, students, and parents can interact, without having to deal with arbitrary, Byzantine, and frequently corrupt layers of administrative management.  This requires drastically cutting back on the power and funding of administrations. 

It also requires reforming the complex codes that make administrators unfortunately necessary in many polities.  The administrators and politicians have created a vast network of laws, rules, policies, conventions, and paperwork requirements that serve to keep administration necessary.  Whether they do it consciously or not, administrators (from NSF to the neighborhood school board) create policies whose ultimate result is to force teachers to do more and more paperwork and trivial nit-picking.  This runs up the expense of education yet again, since it means every institution must hire a phalanx of lawyers and specialists.  It also keeps the teachers too busy to organize, and keeps them convinced that administrations are necessary.  Teachers have time either to teach and do research or to play politics; they can’t do both.  The honorable ones thus are more or less forced to leave politics to the rest.  Simplifying the rules and paperwork, again from NSF down to the town school district, is clearly a high priority for improving education.

            The worst problem with modern education is the one revealed by the universal, and increasing, reliance on standardized multiple-choice tests (SMCT’s) to evaluate anything and everything.

It is possible, with creativity and ingenuity, to devise SMCT’s that successfully evaluate critical thinking and analytic ability.  Several professional bodies, especially in the health professions, have been doing this successfully for years.  The problem is not SMCT’s per se, but their misuse as a crutch to allow schools to save money by teaching canned, mindless knowledge to huge classes.  This, plus the savage competition between schools that No Child Left Behind has forced on us, has led to making education more and more a process of cramming students with random facts, as a Strasbourg goose is crammed with corn.  The facts are those tested on recent SMCT’s, rather than those students might actually need.  A whole industry of creating cheap, inane, badly-done SMCT’s has arisen to cater to this.  Some reforms in the early 2000s ameliorated the worst abuses, considerably improving many SMCT’s.  However, this trend is offset by the steady expansion of SMCT’s throughout grade-school and university teaching.

            On this altar, music, arts, serious science, physical education, and other “frills” have all been sacrificed.  More to the point, we have sacrificed critical thinking, originality, creative writing, and everything else a serious education is supposed to produce.

Most of the skills we teach at the university, from laboratory science to engineering to archaeology, are like driving, or duck hunting, or farming.  They require both a huge amount of factual knowledge and a tremendous amount of hands-on physical experience, and they require, above all, critical thinking and good judgment.  None of this can be taught by rote memorization.  The factual knowledge can be appropriately tested with SMCT’s, but not the quick thinking for reasoned judgment under real-world conditions.  Physical skills have to be “embodied”—our muscles and sinews actually have to grow, shape themselves, and accustom themselves to particular patterns of movement.

Sports require more physical training, less knowledge, but even they require analytic thinking and quick judgment.  Of course no one would evaluate a swimmer or tennis player by giving her an SMCT.

            Research, leadership, cooperation, organizing, original and critical thinking, writing, and other basic academic skills depend on experience and practice.  They have little to do with memorizing facts, and cannot be tested by SMCT’s.  They do not depend on specific physical skills, but they do depend on the body being in reasonably good physical shape, a fact well known to the ancient Greeks but forgotten in modern classrooms.  We have sacrificed physical training and created a generation of children almost 40% of whom are obese. 

            Evaluating real skills by serious evaluative methods is a problem that will take some thinking.  We are not thinking about it.  In the meantime, SMCT’s should be restricted to a very small role—testing the minimal knowledge needed by people for specific activities.

            This is routinely done in driving:  we take brief SMCT’s on traffic law, but the serious tests are the driving test and the eye test.  Those are taken more seriously than the law test.  The same is true in sports; there is a little teaching of factual knowledge, but of course almost all evaluation is practical.

Part II:  College

7         

Another hotly debated field is university education (see Marc Bousquet’s excellent book, How the University Works, 2008; also Arum and Roksa 2011; Clawson 2009).  Here too, mindless rote memorization is rapidly becoming more common.  Almost as pernicious—and related—is the steady growth in the size of lecture classes.  Classes of several hundred or even more than a thousand students are common.  In these, the real teaching is done by graduate students or lecturers, who are usually very dedicated and hard-working, and establish fine rapport, but cannot always handle the job of transmitting expert knowledge to hundreds of students.  Worst of all, such classes are especially common at the freshman level, where they disserve students already overburdened trying to adjust to a system they do not yet understand.  Community colleges, at least, are at last waking up to the need for first-rate science (Boggs 2010).

It is no surprise to find that college students learn little—and often nothing—in their first two years (for this and what follows, see Arum and Roksa 2011; they make many of the points developed below, and add that college has become more “social” than educational).  Parents and students want quick certification more than real education; professors are on a running wheel of research-and-publish; administrators are farther and farther removed from teaching, more and more bureaucratic.  The public is losing faith in the system, but can think of little to do; the right wing takes advantage of this to attempt to eliminate tenure, cut pay and retirement plans, and bring thought control to the university.  College education is rapidly declining in quality and value.

The public, including college administrators, undervalues biology.  College biology departments sometimes are treated by administrators as nothing but premed training camps.  The courses are made as dull and difficult as possible, to weed out less gifted premeds (Greenwood and North 1999).  I have heard biology professors boast outright of doing this.  Prospective environmental scientists often become disillusioned and discouraged.  Moreover, among those who do go on, promotion goes to narrow specialists who publish highly technical papers, not to those who reach out to the public.  The public—including lawmakers and budget planners—concludes that field and organismal biology is unimportant and irrelevant. 

The university tenure system of a generation ago worked well to protect professors from administrative abuses, but has been undercut by administrative takeover and by rather astounding legal opinions to the effect that academic freedom does not protect whistle-blowing on administrative crime! 

Academia today bears the same relationship to scholarship that organized religion bears to religion.  Religions generally teach love, tolerance, fairness, and justice.  Organized religion, to the degree it is hierarchic, almost always ends up promoting hate, bigotry, oppression, and mindless obedience.  The similarities between a modern “multiversity” and the medieval church are not accidental or trivial.  Quite apart from the historic roots of the former in the latter, the current social dynamics are the same:  a top-down hierarchy, promoted by nontransparent internal means, and subject to every sort of vicious backroom politicking.

Organismal biology, if taught at all, is taught via lectures and textbooks.  My university is typical in having cut back steadily on field biology courses and training, in order to divert resources to molecular and cellular biology.  These latter are necessary and desirable, but the world simply cannot afford to lose the field courses.  The situation in the lower grades is similar or worse.  Biology is poorly taught, and is increasingly focused on non-organismal biology—partly because it is safer from challenges by anti-evolutionists.

            All the above led to a recent letter to Science, signed by 20 leading scientists (Bazzaz et al. 1998; the signers included leading ecologist Paul Ehrlich, and Jane Lubchenco, later a leader in the Obama administration’s team) from the United States and Mexico.  It called for training students “who will be ready and willing to devote part of their professional lives to stemming the tide of environmental degradation and the associated losses of biodiversity and its ecological services, and to teaching the public about the importance of those losses.”  It continued:  “We believe that such efforts should be rewarded as part of the process by which ecologists are considered for academic posts, granted tenure in universities, elected to membership in learned societies, and so on” (Bazzaz et al. 1998).  David Orr (1992, 1994) has written eloquently and movingly on the lack of real concern with life that is shown in much biology teaching.  He has advocated that we of the scientific community be more open about love for the world and for our fields. 

Modern electronics provide an escape.  With clickers, email, visual and multimedia displays, instant messaging, Blackboard and other classroom-related software, and other wonders of the 21st century, highly interactive teaching is possible at a distance, and some of the excitement of hands-on education can come into a lecture hall.  This would bring back real teaching and learning to classes with a hundred, or a very few hundreds, of students. 

The bad news is that many indications suggest that these methods will be used as yet more ways to cut costs by reducing staff levels.  The online-education advocates seem to think that, with enough gadgetry, we could have a single professor teaching 10,000 students.

The good news is that sanity is not entirely lost.  Sarah Miller and others, writing in Science (2008), report finding that what works for elementary school students works for college students:  an hour spent in varied activities with full feedback beats lecturing out of the field.  They managed to get bits of lecture, brainstorming, data interpretation, a case study, a “think-pair-share” period, and some feedback via clicker or instant “paper” of a line or so into an hour.  (This seems incredible to an old college professor, but my daughter Laura, a high school science teacher, does it all the time.)  Miller et al. found stunning increases in effectiveness when college science was taught this way.  Obviously, it takes an incredible amount of hands-on work by the professor, and is possible at all only because of clickers, text messaging, and so on.  No 10,000 here.  But it works.

Surveys show that most college students are concerned, first, with getting skills they need to find decently-paying jobs; second, with learning enough about society to make them informed citizens.  The older generation of professors decries the focus on careers and money, but fails to realize that this is not the 1960s, when education was free, jobs abundant, and a house cost $25,000.  Today’s students face high and fast-rising tuition costs.  They graduate with five- or six-figure debts.  They face a world where good jobs are few and houses start at $400,000.  Blithely ignoring career issues and filthy lucre is not an option.

Universities are badly strapped themselves.  Harvard’s endowment is in the billions, but most universities are not so lucky.  As of 2005, average endowment per student in the top quartile of schools was $376,000, in the bottom quartile a mere $32,668; as a result, the former spent $13,069 per student on actual instruction, the latter $3,290.  The former figure had risen dramatically since 1995, the latter hardly at all (Selingo and Brainard, in The Chronicle of Higher Education, 2006, p. 13; figures now average somewhat higher, but, thanks to the 2008 recession, not much higher, and in some cases lower).  Social justice is not a part of American education.

Educating an undergraduate at a typical public university costs about $8,000 a year at minimum (cf. Schwartz 2007; I have updated the figures to 2011).  At a good public university the figure is closer to $17,000; private universities can run three or four times that.  Tuition has now risen to that level at most American colleges.  Some charge even more, in effect making students subsidize research that benefits them only indirectly and is more relevant to state business interests.

This increase in costs is not just boodling (though there is plenty of that).  The biggest actual reason is the new information technology.  A university now has to spend thousands of dollars per student per year on new hardware and software.  Education, especially in the sciences, can no longer take place without the latest computers, programs, security software, licenses, and so forth.  The costs of books and journals have also skyrocketed in the last 20 years, largely because giant firms have acquired a virtual monopoly on key publications, especially in the biomedical field, and charge accordingly.  A major medical journal now costs over $10,000 a year, virtually all of which represents profit for the publisher.  (On top of that, many of these private journals reach truly outrageous heights by forcing the contributors to pay the publication costs, thus making a clean 100% profit!  Grants often cover the costs; a researcher not on a grant is virtually ruled off the turf.)

Professors’ salaries have not moved much, in constant dollars.  My father’s salary at the University of California in the 1950s was higher in buying power than mine in the same position at the same university in the 2000s.  Professors’ salaries have increased 5% in real dollars since 1970, but that is due mostly to the aging and thus increasing seniority of the faculty.  Salaries actually decreased for assistant and associate professors (Clawson 2009).  This is bad enough, but worse is the spectacular inflation of book, journal, and lab equipment prices since 1970.  The tools of our trade have been priced out of our reach.  A bit of amusing but thought-provoking symbolism: the old symbols of the professoriate, sherry and tweed jackets, are now out of a typical younger faculty member’s reach.  

The universities have further saved money by replacing professors with “temps”—graduate students or temporary postdoctoral staff—to teach beginning courses.  These are notoriously exploited, the temporary staff being paid less than a living wage because they are doing it in hopes of getting experience toward a “real” job later (Bousquet 2008 has a thorough discussion of this problem; I am proud to say that the temps unionized at UC, with help from the professors and students).  Most professors are now nontenured and temporary, a new development (Hacker and Dreifus 2010 give the dismal statistics).  This is a disaster.  Tenure is necessary not only for academic freedom, but also for continuity, commitment, accountability, and loyalty (see a superb short essay by Cary Nelson, 2010).  One might think tenure removes accountability, but who is more accountable:  a professor who is always around, committed to the system, and not at all protected from firing for genuine fault, or a temp who will be gone without trace in a month?

One problem for the universities has been the natural tendency of professors “climbing the ivy” to fall into highly specialized and professionally popular topics.  It is always depressing to see a scholar who began as a genuinely curious, broadly interested person slowly narrow into a hyperspecialist, desperate to stay au courant in an insignificant subfield, caught up in academic politics.

8

Far more serious is the convergence of universities on the giant corporations (Washburn 2005).  Overadministration is now common (see Birnbaum 2000 and Bousquet 2008 for merciless looks).  Most of the administrators are well-meaning, though often shortsighted, but many are cynical, corrupt careerists.  I could name names and pin down millions of dollars stolen.  The Chronicle of Higher Education’s annual “almanac issue” for 2006 (Aug. 25, pp. 3-4) included an appalling list of presidents and high administrators caught red-handed in financial scandals, usually involving “liberating” university resources for their own indulgence.  They were acting like their role models, the executives of corporations such as Enron.  The list goes on for two pages of fine print, and names names all over the country.  Whole university administrations, including my own (the University of California’s), were caught.

Many of these individuals are career administrators trained in business management, rather than academics.  Others are academics corrupted by the Enron model in academic circles. Both groups go by the book—regulations when possible, administrative manuals and books otherwise.  My wife once served under a dean who made everyone read a management text based on case studies of several “successful” firms; unfortunately for the model, half the firms were in court within a year or two!  Alas, my wife’s school copied them all too well.    

However, these “rotten apples” are not the real problem. They can be handled.  As Max Weber taught us in his classic studies of bureaucracy (see Weber 1946), administrators do not have to be evil to do harm.  Weber classified leaders as traditional, charismatic, or legal-bureaucratic.  Tradition and charisma survive in the modern university, but no one would question the point that modern universities are overwhelmingly led by the last of Weber’s types.  This is inevitable in a world where assigning classrooms, allocating budgets, and setting up anti-cheating policies are the common tasks and where charismatic speechmaking is confined to commencement exercises.

Academia during my lifetime has made an insidious shift from a broadly democratic organization to a bureaucratic one.  In the age of faculty governance, individuals did research and teaching, and competed with one another to do the best job (or at least an adequate job).  They ran the universities, and managed them to maximize the amount of knowledge generated and transmitted.  This created a “wisdom of crowds” situation (Surowiecki 2005):  the more independent minds worked on a problem, the more it was effectively addressed.

Over time, the job of governing the modern college became too much.  Today’s mass-education facilities and huge research universities simply cannot be run by professors in their spare time.  Alas, this meant a shift to the worst type of organization:  one managed by an oligarchy of faceless bureaucrats who are paid only to manage.  They are not accountable.  In particular, they have no stake in the actual output of the university.  They do not teach, and they do not do research. 

They love simple outcome measures that are wildly inappropriate:  number of students enrolled and graduated (as opposed to amount taught to said students), or number of pages published (as opposed to quality of work).  Silliest of all is evaluation by the number of citations an article receives.  Quite apart from the perverse incentive to create mutual-citing clubs (now routine), this measure ignores the number of papers and books that are so bad that everyone attacks them.  In anthropology, several books over the years have accumulated fantastically high citation indices because they were everyone’s examples of how not to do it.  Some straw men are real, and they get cited accordingly.  As well measure a person’s driving by the number of traffic citations!

Bureaucrats are driven by the nature of administrative systems to pass the buck, dodge accountability, fear change, drag their feet, stick with mindlessly administered policies, and resort to meaningless managerial doublespeak when challenged.  Everyone in large hierarchical organizations knows this from countless experiences.  The more overworked and underpaid the bureaucrats are, the more they act this way, and thus the progressive budget cuts suffered by universities in recent years have extremely counterproductive effects. 

The nature of bureaucracy selects for a certain type of person.  One has to be personally ambitious to tolerate such conditions.  This can be good.  Teachers are generally dedicated people who live to help others, and thus their ambition may be of the noblest sort.  Unfortunately, teachers who want to teach are not usually fond of administration, since it takes them from teaching and dooms them to a round of managerial tasks which they often find maddening and trivial.  They see this (often all too correctly) as a move from telling devoted students how to save the world to dealing with cheaters, backbiters, and squabblers over tiny pockets of money.  Many still get into administration, and do well, but administration becomes “over-enriched” with people who are either failures as scholars or personally driven to individual success rather than teaching per se. 

In the business-imitating climate of today, the slick, suave, manipulative individual with no scholarly pretensions but much personal charm tends to succeed.  Such individuals can actually be good administrators, but often are simply there to rip off the system for selfish benefits.  Others mean well, but are simply inept.  Professors denied tenure for incompetence, but too nice to fire, are often taken into administration—at my university, anyway.  Others—the worst—are passive-aggressive souls who “climb the ivy” because they are driven by a sense of personal inadequacy.  These are the ones most likely to turn into bullies, oppressors, and harassers. Again, these are fortunately rather rare, and the usual conflict is between the idealists and the more ordinary careerists.

The modern administrator dodges responsibility at all times.  The result of a failed policy is not admission of a need for change, but—usually—a move to another school and another attempt at the same policy.

Once again, I am not saying that administrators are an evil lot, or that administration is bad.  The administrator who redirected the library money to redecorating his office and the one who followed a shady model did much damage, but they were really rather exceptional.  Far commoner are the well-meaning souls who are mindless regulation-followers, or slick self-promoters, or simply overwhelmed bureaucrats trying to do what they can.  I am saying, following Weber, that a bureaucratic system selects for certain types of people and certain types of behavior, and that we have made it far worse in America by consciously adopting the business-management model for academic administration.  Nothing could be further from the true entrepreneur, who, whether ruthless businessman or dedicated world-saving scientist, is at least fearless and decisive!  (One can see this on a larger scale in the conflict of Republicans and Democrats in Obama’s time:  the former ferociously and mercilessly hard-working and committed, though to antisocial ends; the latter well-meaning but utterly bureaucratized and thus futile.)

We have to get rid of the bad apples, but far more important is changing the system.

Tenure, and thus academic freedom, is seriously threatened, and indeed the whole idea of professors as independent scholars is being replaced with the business concept of professors as low-level workers who produce a product defined by higher-level administrators.  Inevitably, such a product must be whatever produces immediate benefit for the administrators—whether high enrollments, big donations, or large research grants.  Actual education and research are sidelined.

Obviously, the immediate and necessary cure is the same as it is in all bureaucratic situations:  accountability and recourse.

However, it would not be enough.  We also need to teach leadership.  Teaching “management” only makes things worse; business management and its “educational administration” imitator are notorious, for reasons too well known to need elaboration here. 

Leadership was once taught in many contexts in American society.  Some of these contexts, notably sports and the military, were not necessarily those that liberals love, but they did their job.  More ordinary civic and educational venues (possibly more acceptable to the liberal mind) worked well also.  The result was an age of administrators like David Starr Jordan of Stanford, Robert Hutchins of Chicago, and somewhat later Franklin Murphy of UCLA.  Where are their like today? 

If anyone wants to revive leadership training, its basis is listening to everybody and getting all possible input, then acting decisively according to one’s own best sense of what to do, and finally taking full responsibility for the result.  Then one duly thanks everyone for their input (whether it was used or not).  The courage to take advice, come to a rational decision, carry it through to conclusion, and bear the brickbats or roses is what academic administrators lack today.  In my experience, and in accounts in the Chronicle of Higher Education and elsewhere, high administrators listen to faculty only when forced, and rarely take the advice forthcoming.

Leaders also make decisions for all their followers, not only their own core group.  Academic administrators naturally develop a sense of unity, often against the professors and other employees of their universities.  They then make decisions to benefit administrators at the expense of the rest.  Leadership training in the old days paid special attention to this natural tendency and did everything to teach leaders to avoid it.

Leadership does not just happen.  It comes from training and practice.  All graduate students should get both.  Being a teaching assistant does not do the job.  In my field of anthropology, archaeology students who supervise field “digs,” lab-science students who get and manage their own grants, and fieldworkers who do not just do ethnography but must develop and manage field teams involving local people do get the necessary experience.  Their only problems are that they are not always well taught, and their professors are not always good role models.

In short, fairly simple lessons, learned in real apprenticeships with real practice, are what we need.  Turning students loose to sink or swim, or giving them brief “educational administration” courses, does more damage than good.

A solo player can be a genius, limited only by individual ability.  A string quartet, even a quintet, takes coordination, but can manage itself.  Beyond that, the human conscious mind cannot handle more than seven things at once, and usually tops out at five.  Any group bigger than a quintet needs a conductor.  Then we can hope for someone like Arturo Toscanini, who could weld a huge orchestra into a single organism and get that organism to play beyond anything one would think possible even from a soloist.  Not everyone can become Toscanini, but the closer we can approach that sort of leadership, the better we do.

9

Possibly the biggest single area where leadership, not bureaucratic management, is needed is core curriculum:  required courses, and overall course and department offerings.

Sclerotic bureaucracy and lack of leadership guarantee an outcome in which the biggest departments have the most political power, and use it to stay big.  Staying big usually means making sure their beginning courses are the ones the university requires.  This makes change almost impossible.

The business-school alternative is to fire the faculty, hire “temps” instead, and go with “consumer demand,” i.e. student choice (as is advocated by Hacker and Dreifus 2010).  This guarantees that fads will prevail, and that above all the parents’ delusions about what is the “most saleable degree” will be all-important.  Anyone who has spent a year in a college or university knows all too well that the younger students are all going to be doctors, computer programmers, or whatever else the TV set tells the parents is the safest and surest way for their helpless young to make money in the near future.

Insofar as this ideal might be achieved, it would be even worse than the frozen state.  The big departments at least reflect some kind of accumulated wisdom.  They generally include English, history, and similar classic fields.  The pre-professional philosophy, by contrast, guarantees a wild swing from one fad to another.  Students concentrate in the “hot” area, oversupply it with qualified people, and thereby crash it as a sure source of employment.  Engineering is particularly notorious for this.  Engineers were in seriously short supply in the 1960s, leading to overproduction in the 1970s, which led students to avoid the major, causing another shortage in the 1990s, which in turn led to another glut and round of firings in the 2000s.  Doctors have prevented such cycles by making an MD extremely difficult to get; the hoops to jump through range from the shortage of good medical schools to the savage and unnecessary hazing of interns.  The AMA has very consciously worked to keep doctors scarce.

Long-term planning for the future of both students and the American economy would require leadership, because it would require major change. 

As for the students:  it should be obvious to anyone, but is not, that—whatever they do in their lives—all students need a few skills.  The most obvious are good writing skills, critical thinking, some knowledge of economics (including the math), and, yes, leadership ability.  I would add some serious knowledge of American and other cultures, past and present.  I would certainly hope for some serious knowledge of ethical philosophies—not debate over the idiotic ethical dilemmas that infest “Phil 1” textbooks, but serious readings of Kant, Mill, Rawls, and their peers.

As for the future, environmental education is clearly the most desperate need now.  A country where global warming and Darwinian evolution are still seriously doubted by many educated people is obviously headed for self-destruction, and richly deserves it.  The basic concepts of ecology, including the importance of biodiversity and wild lands, are totally absent from the standard curriculum, and totally lacking in the minds of most Americans. 

Some other obvious problems include the failure of health education.  This gives us the current rapid increase in obesity, diabetes, heart diseases, and similar lifestyle problems.  It also gives us the incredible shortage of nurses—indeed, of all non-MD medical personnel—that is crippling American health care and driving up its costs.  The United States is a million nurses short, if our goal is to provide medical care with proven adequate staffing rates for all citizens. This gap is growing exponentially, as population increases and baby-boomers age.  Rather ironically, one of the main reasons is the success of women’s liberation, which targeted nursing as an old-fashioned “women’s profession.”  The media duly portrayed it—till recently—as a lowly, servile occupation.  A very feisty book, Saving Lives (Summers and Summers 2009), pointed this out in no-nonsense terms, and turned the media at least partially around, but the problem remains.

One could go on:  the failure of political education, the decline of knowledge of history….  Suffice it to say that neither the frozen-tradition model nor the business-management model works.  In fact, their continuance will be devastating to America and the world in the near future.

10

Most professors cling to an ideal of “liberal education,” the content of which is under constant and hot debate.  Not much meeting of the minds comes out of all this.  The problem in this case is not lack of discussion, but lack of any good way to resolve it.

We are having enough trouble maintaining any vision of liberal education in the old sense.  “Liberal” education referred, originally, not to a “liberal” political position but to the liberating power of curricula based on the sciences and arts.  Nobody seems even to remember that now, let alone advocate it.

            In the Good Old Days, there was a “canon” of texts that had “made” the culture in question.  The students would read these texts and would thus know their culture, or at least the elite literary representation of it.  Unfortunately, if those Good Old Days ever really existed, they vanished long ago.  Something like them appears to have existed in ancient Greece, Rome, and China.  However, we of the Euro-American educational world really got our idea of the “canon” from religious education.  The “canonical” readings were the Bible (the Hebrew Bible for Jews, that and the New Testament for Christians) and the orthodox commentaries on it.  The Islamic world had the Quran, Hadith, and commentaries.  China had a similar, but less overtly religious, canon: the Confucian classics.

            This had the advantage of giving everyone the same background.  All “educated persons” knew certain things.  The Chinese, especially, saw this as a basic necessity of civilization; they were sometimes less concerned with the actual content of the canon than with the fact that every educated person should share a common heritage.  The downside of this was the fearful snobbism often involved.  Canonical texts, especially literary works, tended to be by elite older males, in China and in the West.  And the “educated” who knew those texts looked down on the poor fools who did not.  Such prestigious knowledge has recently gained the name of “cultural capital.”

            Since the Renaissance in the west and the later coming of Western culture to China, this sort of canonical education has been a nostalgic memory in both west and east.  Higher education has seen almost continual fights over content.  The Renaissance scholars fought to re-introduce the Greek and Latin classics, to the horror of the older generation, who saw them as filled with paganism and sin.  By the time the old churchmen had finally caved in, a new horror had arisen:  vernacular education in the various European languages.  As recently as the mid-19th century, many English educators held that Shakespeare and Milton were far too uncouth and gross to be part of proper education, which could only be the Greek and Latin classics, and, of course, the Bible.  Shakespeare and Milton were “canonical” by 1900, but then came the whole fight over modern literature and, worse, modern art.  This fight was still hot and vicious when I was a student, with a strong rearguard of educators seriously maintaining that James Joyce was too obscene for the young, and modern art was communist and sinful and should be banned.  However, in the end, Joyce and Picasso became canonical.

In the late 20th century, another fight arose as women and minority authors and artists found places in literature and art curricula.  Conservatives objected, usually—alas—on purely sexist and racist grounds, but sometimes out of sheer love for the earlier canons.  Of course, women and minorities won a place in the canon.  The fights at the time I am now writing are over the inclusion of films, TV plays, and other media forms. 

            The previous brief history shows that the old guard always crumbles, and has since 1200.  The real problem now is that the “canon,” by any definition, has exploded beyond anything any student could possibly read or see.  Even by 1900, few indeed were the students who got through all the English literature they were supposed to know (Shakespeare, Milton, Austen, Thackeray, Dickens, and on and on), let alone the Greek, Latin, French, German, Russian—in the original languages, of course….  Today, it is a well-educated student who even knows the names of all the kinds of media that have their own canons! 

            Obviously, the goal of giving students the True Basics of their culture has become an impossible dream.  This is especially true in the United States.  In spite of the nonsense about America being a product solely of English or of West European civilization, the United States has been profoundly influenced by all Europe, and Europe in turn received much from the Middle East.  The United States also learned much from its Native American heritage, its Chinese contacts, its (tragically involuntary) African immigrant streams, and much else.  Imagine trying to understand American music without admitting the African presence.  American culture has now diverged far from west European.  Students in England do not know Twain or Scott Fitzgerald, let alone Amy Tan or Toni Morrison.  Yet a well-educated American is expected to know all these authors’ works and also the English canon.

            Moreover, American freedom, which in the case of higher education verges on a hilarious and fermenting anarchy, guarantees that nobody can impose an arbitrary, or even a reasonable, canon on anyone else.  A very small college can sometimes manage to agree on a set of books every student should read.  Getting even one state’s public education system to agree on this would be, in the endlessly repeated phrase academics use, “like herding cats.”  Typically, each department of literature or arts has its specialists.  Knowledge becomes more specialized over time.  One English department may specialize in Shakespeare (and a professor may specialize in only one play), while the English department at the next university down the road specializes in nineteenth-century fiction, and the next one farther on specializes in Black American authors.  Students read accordingly, and learn very different things in different colleges. 

Liberal education now does not usually seem to give students much idea of what “good” literature or art means—why Sophocles and Shakespeare really are better, in important ways, than the general run of Hollywood offerings.  This is, however, not because the canon has been opened up.  I recently read an essay claiming that reading trash literature is now common because we 1960s radicals threw out the canon.  Alas, I fear I must inform the author that people were reading trash when I was a kid, and that grave authors complained about the same problem in ancient Greece and Rome—and in every culture since.  The problem is that most professors since the 1950s seem to have missed, in their own education, any discussion of what makes the difference between great literature and garbage.  We need more thinking, not more dragooning.

When students from different schools meet, their cultural common ground is popular film and TV—not the material they learn in classes.  Because of this and many other changes in western culture, movies and TV have taken over from literature the role of giving people a common cultural ground.  Movies and TV provide the reference points for discussion of morals, social codes, and worldviews.  The Chinese were right:  people need a shared set of cultural knowledge, and it helps if what is shared is the very best.  We of today fail notably in the latter regard. 

No obvious solutions come to mind.  One possibility would be a core curriculum of books that really shaped American political thinking and through that the American political system.  This might be manageable.  Certainly, it would include Plato’s Republic, Aristotle’s ethical and political works, Hobbes’ Leviathan, John Locke’s writings on government, and the major writings of the Founding Fathers of the United States.  I would guess that most authorities would further agree on John Stuart Mill’s On Liberty and perhaps his other writings, and on a few other books.  After that, though, we would see a terrific political fight that would probably never resolve.  Moreover, some of the above works require a great deal more training in history and politics than most students today receive.  Hobbes and Locke, in particular, assumed when they wrote that the reader knew the Greek and Latin classics.  They also assumed (reasonably enough) that the reader knew everything important about English and Continental politics of the time.  Moreover, writing in the 17th century, they used the English of their time.  The language has changed since—more than some readers realize.  This is one reason they are both horribly misinterpreted today. 

            All this led to the end of the “canon wars” of the 1980s and 1990s.  Even the most conservative gave up hope that anyone could come up with a selection that would be clean, concise, and universally accepted.  We are left with sets of “breadth requirements.” These are often chosen with less attention to student needs than to guaranteeing big-enrollment classes to key departments.  At my university, in fact, the latter seemed to me to be the only factor considered.  Seeing no rhyme or reason to the requirement structures, some students cynically conclude that the “breadth requirements” are required to keep the students in college, and thus paying tuition, for an extra year or two.

So, what should we do with higher education?  Let it become strictly specialized job training?  Make it cover these political writings, to explain where the United States is coming from?  Provide necessary information for survival in the 21st century, including health and environmental knowledge?  Provide enough “great art” to give students some idea of what the standards are?

Accumulated anthropological wisdom suggests that not only should we change the methods to more hands-on ones, and the locations to more prestigious and well-maintained settings, but that we should change the content to reflect what we as a society really want to share.  This would certainly include minimal civics—for example, in the United States, some understanding of the Constitution and Bill of Rights and their immediate origin.  It would certainly include basic reading and writing skills, including analytic and creative skills.  I would add that we really do need, desperately, to show students and others that there is indeed a difference between Shakespeare and TV soaps.  We also need to expand even the minimalist canon to include the great writers of the world, not just those of the English language.  If we raise a generation without self-conscious understanding of the deeper currents of human emotion and thought, we are doomed as a civilization.

11

One nonproblem is the alleged domination of education (at least in America) by “liberals,” whatever they are.  American campuses display an incredible range of opinions, and a very large percentage of professors are anything but liberal.  The complaints seem concentrated strictly within a segment of society that wants to impose their own brand of “conservatism” on the ivy, outlawing not only liberals but traditional conservatives.  This segment represents an extreme right-wing fringe, and what they want to impose includes six-day creationism, denial of global warming, Holocaust denial, and other views that simply are not true.  For them, even traditional conservatives are dangerous leftists.  This is why the far right feels that the universities are taken over by “liberals”; in their twisted world, Milton Friedman and even George W. Bush are liberals.

Actually, academia serves as the last home of lost causes, and in fact all these long-disproved notions are taught somewhere.  No need to demand more.  What is much more amazing is that neither the self-styled conservatives nor their self-styled liberal antagonists spend any effort looking at the real problems of academia:  bureaucratization, topheavy administration, standardized testing, huge class sizes, mind-numbing boredom in many classes, and lack of intellectual challenge.  Far better if the critics were to unite against those. 

12

This leads to something more radical, and dearer to an anthropologist’s heart:  serious concern with indigenous, local, and small-scale societies and their traditions.  The small, local societies of the world almost all manage resources better than we moderns do.  They all have music, art, and literature, often world-class and certainly worth recording for posterity.  They all have their own unique and wonderful variations on the basic theme of humanity and the human experience.  Their works are creations of the human spirit, and deserve consideration as such.

Early anthropologists realized this, and recorded traditional cultures and their creations with meticulous care.  We have now dropped this emphasis.  To some extent, it falls between the chairs.  Anthropologists have increasingly abandoned the field to scholars from the relevant societies—indigenous scholars and scholars from minority groups. 

Yet, such scholars are almost inevitably concerned with their groups’ more immediate and pressing problems.  They are worried about health care, legal rights, economic justice.  They have little free space to document cultural riches.  Those that do often have sadly limited opportunities to make them available to a wider audience.  Countless wonderful dissertations, reports, and collections gather dust in university archives, unpublished and often not even catalogued. 

Also, there are still far too few scholars from the groups in question.  Racism is legally dead in the United States, but obviously nowhere close to dead in actual practice.  One need only contemplate the college completion rates of Native American or Black students compared with whites.  In many other countries, bias is not even legally defunct.

The result is that of the 6,800-7,000 languages in the world, the vast majority faces imminent extinction.  About 20% of North America’s Native American languages are extinct.  Over 20% of the rest are spoken by one or a few elderly people.  All are declining, and only a tiny handful (including Navaho, Hopi, and Cherokee) seems secure for the foreseeable future.  Even the isolated communities of Alaska are losing their languages.  The situation is similar in Australia, Latin America, and elsewhere.  European minority dialects, and even languages like Breton and Savoyard, are fading away.  Even though Africa is no longer dominated by European powers, it is losing local languages.  When a language dies, a whole culture is reduced.

Obviously, we cannot expand the canon to include all 7000 languages and their works, but we need to be more sensitive to the problem.  We desperately need to preserve the languages of the world and the arts and useful knowledge systems that go with these.

13

“Education is all right; I’ll tell you before you start:

Before you educate the head, try to educate the heart.”

Washington Phillips, bluesman, recorded in Dallas, 1930

Learning is itself a good—one of the highest goods.  Having an open mind and wanting to learn more about anything and everything is about the most valuable trait one can have, and is a basic personality disposition (the “openness” of personality theory). 

Individual experience in dealing with the world also provides strength to those lucky enough to have some strength at the start.  They can deal with progressively tougher problems and thus become progressively stronger.  Rural people in the United States in my youth had these characteristics; they were tough, independent, and resourceful.  They were emotionally strong, creating the great folk music of those days. 

This classic “building of character” is rare today for three reasons.  First, there are many hurts that are simply impossible to overcome and that are now common.  Most obvious of these, perhaps, is massive brain damage due to fetal alcohol syndrome, maternal drug abuse, or early physical abuse.  Over 10% of children in America today suffer from one or more of these.  Second, our society, in which “the media” provide information and entertainment to passive individuals, encourages and implicitly idealizes passivity and discourages self-help.  Most important is the third reason: few are there to provide the backup support and encouragement that is necessary for a child trying his or her wings. Unsupported children become weak, and the weak, ill-prepared, and vacillating have major problems with learning.   

The dynamic of oppression can play out in a family, a small community, a nation, or the world.  A rich man from a powerful family can be reduced to utter wretchedness if that family is harsh enough.  An impoverished woman from a despised minority can rise to the top, if a strong family with a strong and supportive religious tradition is behind her (Werner 1989; Werner and Smith 1982).  I have known such cases; probably most people have.  They are, however, uncommon; they should not be used (as they often are) to excuse the wider community from all responsibility for the poor.  Poverty, and especially decline relative to others, dispirits and disempowers most people.  And schools notoriously train people for the lives they are expected to face.  Even with good intentions, teachers often convey messages that tell students exactly how low the expectations are for them.  The effects are widely studied and known to be devastating (Rosenthal and Jacobson 1992; Willis 1981; this is well portrayed in the film “Stand and Deliver,” about Jaime Escalante’s success in breaking the pattern).

            There was a time when education was about teaching people deeper and wider emotional experiences—or at least exposing them to art and literature that would give them the chance to learn.  Such depth and breadth of sensibility should (should, but often do not) inform coping responses, and teach people to cope rationally rather than with reactive defensiveness. 

Unfortunately, that sort of education seems lost today.  Besides the problems of overspecialization and technical narrowness, we have too often succumbed to negative views of humanity.  People are seen as entirely the playthings of circumstance: as automatons or as mere victims (or mere oppressors).  This latter view, basically the “postmodern” one, is intensely dehumanizing and insulting. 

There was a time when social science strove to improve the world, and to bring good things to a wider audience.  Anthropologists shared the good ideas of small-scale, traditional societies with the world.  Transmission, translation, and explanation were basic to this enterprise.  Valuing people and valuing diversity were goals; understanding the full range of human phenomenological experience was perhaps the highest goal.  All this was based on respect for people in general and for individuals in particular.  I hope we can recapture that.

References

Arum, Richard, and Josipa Roksa.  2011.  Academically Adrift:  Limited Learning on College Campuses.  Chicago: University of Chicago Press.

Berkman, Michael B., and Eric Plutzer.  2011.  “Defeating Creationism in the Courtroom, but Not in the Classroom.”  Science 331:404-405.

Birnbaum, Robert.  2000.  Management Fads in Higher Education:  Where They Come From, What They Do, Why They Fail.  San Francisco:  Jossey-Bass.

Boggs, George R.  2010.  “Growing Roles for Science Education in Community Colleges.”  Science 329:1151-1152.

Bousquet, Marc.  2008.  How the University Works:  Higher Education and the Low-Wage Nation.  New York:  New York University Press.

—  2010.  “The ‘Race to Nowhere’ Is Everywhere.”  Chronicle of Higher Education, Chronicle Review, Nov. 26, p. B2.

Clawson, Dan.  2009.  “Tenure and the Future of the University.”  Science 324:1147-1148.

Hacker, Andrew, and Claudia Dreifus.  2010.  Higher Education?  How Colleges Are Wasting Our Money and Failing Our Kids—and What We Can Do about It.  New York:  Times Books.

Limbaugh, Rush.  1993.  See I Told You So.  New York:  Pocket Books.

Medina, John.  2008.  Brain Rules:  12 Principles for Surviving and Thriving at Work, Home, and School.  Seattle:  Pear Press.

Miller, Sarah; Christine Pfund; Christine Maidl Prebbenow; Jo Handelsman.  2008.  “Scientific Teaching in Practice.”  Science 322:1329-1330.

Nelson, Cary.  2010.  “Parents:  Your Children Need Professors with Tenure.”  Chronicle of Higher Education, Oct. 8, p. A104.

Pinker, Steven.  1995.  The Language Instinct.  New York:  HarperPerennial.

Roediger, Henry L., III, and Bridgid Finn.  2010.  “The Pluses of Getting It Wrong.”  Scientific American Mind, March-April, 39-41.

Rosenthal, Robert, and Lenore Jacobson.  1992.  Pygmalion in the Classroom:  Teacher Expectation and Pupils’ Intellectual Development.  New York:  Irvington Publishers.

Sanera, Michael, and J. Shaw.  1996.  Facts Not Fear:  A Parents’ Guide to Teaching Children about the Environment.  Washington:  Regnery.

Schwartz, Charles.  2007.  “Old and New Thinking about Financing the Research University.”  Posted Dec. 18 to website: webfiles.berkeley.edu/~schwrtz.

Selcraig, Bruce.  1998.  “Reading, ‘Riting, and Ravaging.”  Sierra, May-June, 60-92.

Selingo, Jeffrey, and Jeffrey Brainard.  2006.  “The Rich-Poor Gap Widens for Colleges and Students.”  The Chronicle of Higher Education, April 7, pp. 1, 13.

Smokowski, Paul; Rachel Buchanan; Martica Bacallao.  2009.  “Acculturation and Adjustment in Latino Adolescents:  How Cultural Risk Factors and Assets Influence Multiple Domains of Adolescent Mental Health.”  Journal of Primary Prevention 30:3-4:371-393.

Stauber, John, and Sheldon Rampton.  1995.  Toxic Sludge Is Good for You!  Monroe, ME:  Common Courage Press.

Summers, Sandy, and Harry Summers.  2009.  Saving Lives:  Why the Media’s Portrayal of Nurses Puts Us All at Risk.  New York:  Kaplan.

Surowiecki, James.  2005.  The Wisdom of Crowds.  New York:  Doubleday. 

Washburn, Jennifer.  2005.  University, Inc.:  The Corporate Corruption of American Higher Education.  New York:  Basic Books.

Weber, Max.  1946.  From Max Weber:  Essays in Sociology.  Ed. and tr. Hans Gerth and C. Wright Mills.  New York:  Oxford University Press.

Werner, Emmy.  1989.  “Children of the Garden Island.”  Scientific American 260:4:106-111.

Werner, Emmy, and Ruth S. Smith.  1982.  Vulnerable but Invincible:  A Longitudinal Study of Resilient Children and Youth.  New York:  McGraw-Hill. 

Willis, Paul.  1981.  Learning to Labor.  New York:  Columbia University Press.

Winerman, Lea.  2009.  “Play in Peril.”  Monitor on Psychology, Sept., 50-52.

Appendix:  Reviewing a Pernicious Book (Review posted on Amazon.com, Sept. 18, 2010)

       Hacker and Dreifus appear to have high ideals:  trying to restore old-fashioned, caring, hands-on liberal education for undergraduates.  They correctly identify many of the problems:  overspecialized faculty, faddish and jargon-heavy teaching, top-heavy administration, excessive use of temporary teaching staff, too much vocational training, and the ever-present, ever-infuriating problem of athletics that takes far too much money and attention.  They describe some successful ideas and schools.  The best thing in the book, to me (a retired professor who taught for forty years at the University of California, Riverside), is the chapter on colleges they recommend.  They name ten schools that have been doing exciting, innovative, successful things with undergraduate education.  Where I know the colleges in question, I agree with their pick, and am delighted to see those schools get recognition.

     However, Hacker and Dreifus seem not to understand “the story behind the story.”  They allege, for instance, that professors typically work only a couple of hours a week.  This echoes the popular idea that professors do nothing except lecture.  Hacker and Dreifus claim that professors do not update their courses.  Yet, how could faculty get away without updating courses in computer science, or biology, or medicine, or law, or any other field except perhaps “bonehead English”?  In fact the typical professor spends hours a week on prep.  They cite a case of a professor who had a paper-reader to do the grading for a class of 20.  This seems beyond the pale; we at UC used to get a reader if we had 80, but now I believe the cutoff is 100.  Also, there are sharp limits on readers’ and assistants’ hours, so I wound up reading 600 papers per quarter in my big classes.  Finally, they treat a two-course-per-term load as if it were standard. In fact most professors are at teaching-oriented schools where the load is around four courses per term, and most of these are big classes, up to a thousand students.

     Hacker and Dreifus also object to academic research, and sabbatical leaves that permit it.  They feel there is too much research; professors should stick to teaching.  This would gut American science, since so much basic research is done by professors on sabbaticals.  However, research and teaching do sometimes interfere with each other.  The reason is one that Hacker and Dreifus appear not to understand:  most American universities now depend largely on grant money, from governments and private firms.  This is what leads to excessive focus on research.  Professors are constantly harassed by administrators to apply for more and more grants.  I was associated for three years with the University of Washington, which gets more grant money per professor than any other full-offering university.  The cost is that the undergrads are taught, more and more, by graduate students and lecturers, and given very minimal attention.  But the taxpayers of the state had turned against the place, and the choice was to do this or close down.  My university is less grant-dependent and more teaching-oriented, but still it’s the huge science grants that really keep the place going.  This is by far the main reason why many professors don’t teach as well as they might.  Most professors are dedicated and competent teachers (in my experience), but the rewards and visibility go to the grant-getters, who are not apt to be spending much energy on teaching.

     Linked to this is the other real problem:  out-of-control administration.  Hacker and Dreifus briefly mention the fact that there are twice as many administrators per 1,000 students as there were a generation or two ago.  More important is the far higher pay; the University of Washington’s president gets almost a million a year.  Also, the huge bureaucracies have essentially no accountability or transparency.  In all the time I taught, we faculty never saw the budget.  We could never call any administrator to account for anything.  Universities spend much on splashy projects and athletics; these look impressive, and advance administrators’ careers.  Professors have essentially no say in the matter.

     University administrations often operate outside state laws, such as conflict-of-interest legislation.  The UC Board of Regents (=Directors) included, at one point, the head of the firm that did all our campus construction work; at another time, the Riverside regent was a lawyer who handled a lot of our law business.  Both were perfectly good regents and didn’t abuse their power (so far as I know), but this would not be allowed in any state government office.

    Hacker and Dreifus feel professors are overpaid, and that tenure is an evil.  They dismiss the problem of academic freedom, which shows they are out of touch; every year I read of state legislatures trying to crack down on academia, and I have run into many cases personally.  Eliminate tenure, and public colleges and universities would be in instant chaos—every time the Democrats replaced the Republicans, or vice versa, faculty would be fired and replaced with loyalists, as in state government offices.  Conversely, Hacker and Dreifus considerably exaggerate the problem of “retiring on tenure.”  I knew only one professor who “retired on tenure”; he was held at a lowly salary and eased into early retirement as soon as possible.  Otherwise, my school made sure nobody got tenure unless they were such compulsive workers that they were more likely to work themselves to death than to retire on tenure.  I knew several professors who collapsed and died of sheer exhaustion from overwork.

          The new wisdom in education, from Obama to Hacker and Dreifus, is that the way to attract or create better teachers is to cut their pay and eliminate their job security.  Economic wisdom suggests otherwise.  The truth is that until we solve the linked problems of out-of-control administration and dependence on grants for funding, undergraduate education will suffer. 

Hacker, Andrew, and Claudia Dreifus.  2010.  Higher Education?  How Colleges Are Wasting Our Money and Failing Our Kids—and What We Can Do about It.  New York:  Times Books.

Saving American Education in the 21st Century: The Lessons of Traditional Environmental Education

Monday, February 7th, 2011

A paper based on this posting is under consideration for publication.

Abstract

Education in science, natural history, and the environment was carried out in traditional societies largely through learning-by-doing, supplemented by watching and by listening to tales and stories.  These stories were usually either myths or highly circumstantial personal memoirs told by elders and mentors.  Contemporary science education is more typically done through passively sitting still, memorizing “facts” for assessment by machine-scored standardized tests.  Experience teaches that the former methods succeed; children in traditional societies quickly learn incredible amounts about their environments and about making a living from those, while modern American children are almost totally ignorant about their environments, typically failing to retain even the small amounts they are taught.  It appears desirable to move back to hands-on learning, personal involvement, and serious mentoring by elders and older peers.

1

Culture is about learning; children absorb it from parents and peers.  However, children bring their own skills to the process.  The human brain develops in a predictable way, and learns accordingly.  Thus (for example), children learning language go through a striking and very distinctive process.  They first use a word to correspond to a single object or person.  Mommy and Daddy are just the infant’s own mother and father.  “Dog” is the family dog.  Then, around 7 or 8 months, they suddenly get the idea, and generalize the words out of all normal usage:  all female humans are Mommy, all males Daddy, and all four-footed creatures are “dog.”  Then, more slowly but still fairly fast, they learn to restrict these words to their proper meanings.  But restriction normally follows from learning new words for things previously covered by overextended words.  My first daughter learned “leaf” at 8 months, with reference to a single leaf.  She soon generalized it to cover all soft colorful things, including flowers, clothes, and sheets of colored paper.  Then she learned “flower,” which took a huge bite out of “leaf”; then “clothes” and “paper” took more bites.  Soon “leaf” meant what it means in normal adult English.  Children are programmed to learn this way, and it is exciting to watch.  They do not learn by stimulus-and-response or by simple copying.  They learn by extrapolating a definition or a rule and then vastly overgeneralizing it.

Culture consists of useful knowledge—data and rules—that we learn and then use in adapting to daily challenges and opportunities.  It includes countless alternatives that we can invoke and reinterpret at will.  If I want to pluralize “sheep” as “sheeps,” or even “sheepen,” I can do it, in spite of cultural rules to the contrary.  Moreover, I will be understood by standard-English-speaking hearers.  They will correctly assume I am playing language games.  If they are young enough, they will be amused.  Children love to see adults deliberately playing with the rules—it feeds into the learning process.  Creative use of knowledge and rules is what life is all about, and any culture that imposed a rigid crust of “constructions” on its bearers would immediately die out.

2

A culture, like a biological organism, has to reproduce itself—its working knowledge, its social organization, its hierarchy, its belief system.  Just as reproduction of the species occurs through mating and birth, reproduction of culture occurs most typically through formal and informal education of the young.  This process is fraught with social meanings and consequences (Bourdieu and Passeron 1990). 

Surprising uniformity emerges from studies (admittedly few in number) done on informal working education in traditional societies.  Everywhere, learning is by doing—but doing things while being guided by elders (Anderson 1992, 1999, 2007; Cole 2010; Lancy et al. 2010; Lave 1988; Stafford 1995). 

Everywhere, such learning is supplemented by stories the elders tell.  Some of these stories are hallowed myths that provide a sacred charter for conservation or other ethical behavior (the vital importance of serious myths in education is discussed in Cajete 1994).  Almost always, such mythic texts are told in special contexts:  during ceremonies and rituals, during long winter nights around the fire, or during long periods of work at the particular activity the myth concerns.

In no case is teaching done through formal lectures in a neutral, alien environment.  The stories are graphic, dramatic, exciting, and personally compelling—partly because they are either sacred traditions or part of the life experiences of known and (hopefully) respected individuals. 

Usually, of course, it is the practice that matters.  The myths and tales supplement knowledge gained through experience.  The knowledge is then not merely verbal; it is learned by the whole body and the whole mind.  One learns with one’s entire being—hands and feet, emotions and cognitions, ears and eyes.  The more total the body and mind involvement, the more learning.  It is truly embodied, but it is more than that:  it is part of the whole dynamic process of using one’s body and mind in practice (cf. Gibbs 2006).

The results of such training are truly striking.   Both lowland and highland Maya of college age, and even of early teens, know hundreds of plants and animals by name and use (Stross 1973; Zarger 2002, 2010; Zarger and Stepp 2004).  They have an encyclopedic knowledge of farming (Kramer 2005) and forest management.  Chinese fishermen know hundreds of fish, how to catch them, and how much value they have in the market; they can handle boats, predict weather changes, and deal with coordinating crews (Anderson 1999, 2007; Stafford 1995).  Northwest Coast Native peoples have, by adulthood, gone through initiations that provide guardian spirit visions; in the course of these, they learn ceremonies and myths.  They also learn the expected encyclopedic amounts about fish, game, and plants, but from actual hunting and gathering practice rather than from rituals. 

The working knowledge bases of these traditional peoples are not greater than those of an extremely well-educated American young adult, but they are far greater than those of the typical product of American schools:  barely literate and almost completely ignorant of science.  The American young adult may know much, but most of it will be about consumer products and popular celebrities.

Wider reading in the anthropology of education confirms this as a general case.  Serious research in educational anthropology began with Maria Montessori, who put her findings to good use by starting the Montessori school movement.  Alexander Chamberlain’s The Child and Childhood in Folk-Thought (1895) opened the topic for research in the United States, but Chamberlain died shortly after this book appeared, ending a promising career.  More important and visible, but still without discernible influence on the field, was a striking article by J. W. Powell (1901) on “sophiology,” his term for the art of instruction; he anticipated much of what follows below, and one wishes his article had had its intended effect of starting a whole field.  If it had, American education would be far, far better than it is today.

Studies of traditional nonschool education were few and far between for a long time.  The Sioux writer Charles Eastman (1902) reminisced about his boyhood in an extremely interesting and detailed account.  Many Native Americans since have contributed importantly to knowledge of traditional education (Cajete 1994 gives an excellent general discussion; among many autobiographies, Eastman 1902 and Reyes 2002 are outstanding; for inculcating general values, see also Atleo 2004; George 2003).  The Berkeley education professor George Pettitt became seriously interested in the whole issue and produced outstanding (though now dated) studies, first of the Quileute people, then of Native American education in general (Pettitt 1946, 1950).

More recently, important research was started by John and Beatrice Whiting (Whiting 1951, 1994), of Harvard’s Social Relations Department, on how culture, via education and training, influences personality, and vice versa (see valuable review by Munroe and Munroe 1975).  Much of this dealt with emotional development, especially aggression and gender issues.  The Whitings’ most famous finding was a strong correlation between the degree to which boys are raised only by women and the level of pain and drama in male initiation rites.  Cultures where women raise the boys (because the men are off working, fighting, or whatever) have much more dramatic and painful rites—circumcision, scarification, and worse. 

As psychologists turned to studies of cognition in the 1960s, most of the Whitings’ students flocked to that area.  Their work on emotion is outside my view here, but the peak of their activity and influence occurred just as the “cognitive revolution” (H. Gardner 1985) was sweeping Harvard’s social sciences with major transformative effect.  The Whitings’ more cognitive-oriented students were swept up in the moment.  Kimball Romney, arguably the leader of cognitive anthropology for the next 40 years, got his start studying children under the Whitings’ direction (Romney 1966). 

Eventually, ethnographic and psychological research under the Whitings’ direction produced a fairly concrete set of findings on how non-classroom education normally proceeds (summary surveys include LeVine 2007; Munroe and Munroe 1975; Whiting 1994).

The Harvard Social Relations Department also included Evon Vogt, whose enormous Chiapas Project trained two generations of anthropologists (Vogt 1994).  Inevitably, interest in education and child life was part of this, leading ultimately to the recent work of Patricia Greenfield (Greenfield 2004; Greenfield et al. 2003; Zambrano and Greenfield 2004), Eugene Hunn (2002, 2008), Brian Stross (1973), J. R. Stepp, Rebecca Zarger (2002, 2010; Zarger and Stepp 2004), Felice Wyndham (2009), and others (myself included).  Of these, Hunn, Stepp, Zarger and I were students of Brent Berlin, who had gotten his start on the Vogt project. 

Independently, Hilaria Maas Colli (1983) studied Yucatec Maya child life with special reference to the role of ceremonies and rituals in reinforcing gender-role training; one of the very best studies of traditional child life ever done, this work remains forlorn and unpublished in the University of Yucatan anthropology library.  Karen Kramer (2005) observed Yucatec Maya child life on the farm, and though her work is more concerned with the role of child labor in farming, she provided excellent observations on what tasks are learned first and which ones later.  All this has made the Mexican Maya by far the best-known traditional small-scale societies in the world with respect to traditional nonschool education.

Closely related in approach was Jean Lave (1988), who, though not part of the Whiting or Vogt projects, was trained in the cognitive revolution days.  She later worked with psychologist Barbara Rogoff (Rogoff and Lave 1984; Rogoff 2003), training Greenfield, as well as Mary Gauvain (2001), who has provided broader psychological overviews.  She was influenced by the cognitive psychologist Michael Cole, whose intercultural interests could not have begun farther from Harvard; Cole acquired them at Moscow State University with Alexander Luria.

There were, meanwhile, a few—a very few—independent efforts to understand traditional training.  By far the most impressive was the work of geographer Kenneth Ruddle with the education specialist Ray Chesterfield (Ruddle 1993; Ruddle and Chesterfield 1977).  Much of what follows is based on their work.  Several other, largely isolated, studies appeared, but none has been followed up so far (Franquemont 1988; P. Gardner 2003; Quisumbing et al. 2004; Stafford 1995).  Pelissier (1991) provided a very valuable review of child life studies in anthropology as of 1991, but, alas, the main thing her review shows is that most research has been done in and on school environments.  There has also been some attention to what and how children really learn in modern schooled society, notably the superb and underappreciated prospective research of Emmy Werner (1989; Werner and Smith 1982) and the much more famous work by Paul Willis, Learning to Labor (1981; on youth and learning, see also Bjorkland 2007).  Peter Kahn and his group have studied nature learning in modern America (Howe et al. 1996; Kahn 1999; Kahn and Kellert 2002).  Charles Stafford (1995) wrote an excellent, but unique, book on childhood on Taiwan; interestingly, his findings on Taiwanese fisher children were virtually identical to mine on fisher children in Hong Kong (Anderson 2007).

Recently, a major new trend has opened up in natural-historical studies of childhood.  Biological anthropologists interested in evolutionary and ecological questions started much of it, but third-generation Whitingians have been involved, as well as others interested in practice, or cognitive development, or simply in children.  A recent work edited by David Lancy, John Bock and Suzanne Gaskins, The Anthropology of Learning in Childhood (2010), brings all this together, with really superb overviews of the field (including a history by Munroe and Gauvain, 2010).  No longer is the anthropology of real-world education a minor side-channel.

From all of the above, a conclusion emerges:  Whether one is a Hadza learning to hunt antelope, a Trobriander learning to play cricket, or an American learning to swim or fish or ride a bike, the process is broadly similar. 

It was most succinctly stated by Native American basketmaker Nettie Jackson (Klikitat of Washington state), describing her own training:

“’When you want to learn something, don’t always talk and ask questions, just watch and do it,’ my mother and grandmother told us when we were children.  ‘If it is in you, you will do it.  Even if it seems as if you can’t learn, it will come to you when you are ready’”  (Jackson 1994:200).

In some cultures, learners receive minimal guidance, especially from adults; people are supposed to be able to copy anything they have seen, or at least to try it and then work out any bugs by trial and error (Eastman 1902; Gardner 2003; Lancy et al. 2010, esp. Lancy and Grove 2010).  In other cultures, adults or older children model the behavior many times over (Greenfield 2004).  Still other cultures instruct the trainers to provide some verbal explanation along with the modeling (this is what I have seen among Chinese and Maya).  Always, however, the emphasis is on doing, not telling (Lancy et al. 2010).  

Modeling-with-words is appropriate for tasks like computing; most of us learn our basic computer skills this way—some peer shows us, with verbal and physical guidance, and we try to emulate.  For motor and mechanical skills, where words are often inadequate, modeling-without-words is often the rule.  You can go only so far in explaining how to swim or ride a bike. 

Children tend to begin by intently watching the process.  Few words are used, and little direct teaching is involved.  Then they try it, with more or less guidance from older children (for simpler, more “kid”-level learning) or from adults.  The best account I have seen is Patricia Greenfield’s (2004; and Lancy et al. 2010 review dozens of similar studies). 

When words are necessary, as in language learning, people in ordinary daily life (as opposed to formal schooling) embed the words in ordinary conversation.  Often, they use requests:  “Get me the ixi’im,” “go out and find a k’uum and bring it in,” and so on.  If the child does not know the word, the parent shows him or her the item in question (corn, and squash, in the Yucatec Maya examples above).  Or the word is simply embedded in conversation and the child is expected to pick it up:  “See, I’m going out to bring in the ik, come help me, OK?”  The child follows, sees the parent harvesting chile peppers, and thus learns that ik means those painful green or red items.  

Gathering firewood, medicinal herbs, and flowers all provide “teachable moments.”  What matters is not only the learning opportunity, but the child’s increasing realization that these are important skills—in fact, the very core of necessary knowledge.  Being an adult Maya means being able to raise corn (first and foremost!), find good firewood, treat one’s illnesses.  Children thus learn through work.  Often this is productive, necessary work; often it is play at adult roles, though the amount that children actually learn from such play is somewhat controversial (Chick 2010) and clearly depends on how much the play is really like the activity modeled.  Playing at making pottery is a good learning experience—we must all start that somewhere.  Playing at hunting is less valuable; tracking and killing large game animals is so difficult that it is learned late and often throughout a lifetime.  Thus hunter-gatherer cultures have longer “childhoods” with more play and less real practice than most agricultural ones do (Lancy et al. 2010, esp. Bock 2010).

Much of this learning takes place without punishment or major reward.  Children are not beaten when they fail and not given candy when they do well.  Motivation is a combination of intrinsic interest and validation by elders and peers.  Children everywhere want to learn what is culturally important.  This approach to motivation often shocks westerners, who cannot imagine raising a child without physical punishment.  In Hong Kong, British parents were always telling me that Chinese parents “spoiled” their children, in spite of the very obvious fact that the Chinese children were better behaved than the British ones.  The Jesuits in Canada in 1648 recorded it as a great triumph of their teaching when a mother beat her four-year-old child for some minor slip; the Jesuits could not imagine Christian childrearing without beating, but the Huron people they were converting never used physical punishment, feeling it was disrespectful to the child (Blackburn 2000:94).  On the other hand, corporal punishment is very widespread, especially among agricultural societies, and can be rather savage, as can shaming and guilt-tripping (Lancy and Grove 2010). 

Finally, older children teach younger ones.  This not only helps the younger ones; it helps the older—possibly more, in fact.  The truest proverb I know is “the best way to learn a subject is to teach it,” and these older children are doing their most important learning.  Current research suggests that the faster a learner (of any age) actually applies his or her learning, the better the understanding and retention.  Today we get children to take tests (as soon and as often as possible; Glenn 2007; Karpicke and Roediger 2008) or write down (hopefully with some thought) what they have learned.  How much better to get them to go right out to teach the younger ones!

This works.  To return to the Maya highlands:  Brian Stross’ classic study of Tzeltal Maya children showed that they knew an enormous number of plants, learning the names often from peers and especially in older childhood (Stross 1973; Janet Dougherty 1979 found that United States children knew far less).  A recent replication of this study by Rebecca Zarger and collaborators found that knowledge has been passed on, the same way, for yet another generation (Zarger 2002; Zarger and Stepp 2004).  Salient, culturally important plants are also learned first and best, as Felice Wyndham (2009) found working with highland Maya.  Children learn almost from birth to attend to things their parents and older peers stress and emphasize, and this is clearly one of the most important variables—probably the most important—in determining what is learned.  Wyndham also stresses the total experience—bodily, emotional, and cognitive—and thus takes a phenomenological approach to learning.  This is an important development; the artificial and arbitrary splitting of experience is one of the major reasons for the catastrophic failure of education in the modern United States, and phenomenology offers a needed corrective.

Learning is thus highly social, and is characterized in these traditional societies by being a full, rich experience with actual real-world choices to make.

Similarly, Eugene Hunn found that Zapotec children know an enormous amount about the plants in their environment—and, by inference, everything else in it too—at an early age; almost all children in the village knew dozens of plants well before the age of 10 (Hunn 2002, 2008; documentation and photographs in the latter work are outstanding and important).  Hunn (personal communication) has found a surprising amount of knowledge of nature among American college students, but it is learned from television and zoos, and is more apt to concern large African animals than small American ones!  Colleen O’Brien (2010) found that children in the isolated desert community of Ajo, Arizona, know a good deal about the desert, and could know a great deal more if anyone worked with them; but elders often know little themselves, and in any case have given up on the children, maintaining that “they know nothing” and are hopeless.  This attitude is not confined to Ajo (Louv 2005).  Obviously, giving up on the young is no way to teach them.  (College professors take note.  Many of my colleagues claim that “students these days” are hopeless—uninterested, illiterate, etc.  Of this more anon.)

One other set of studies informs our search:  participant observation on traditional specialized education.  A large literature on traditional training of religious and visionary practitioners (such as shamans) is too hard to evaluate for our purposes here.  Many traditional religions seem to teach largely through rote memorization of texts and rituals, but good descriptions of the actual process are few and far between (though see e.g. Boyce 1979 on Zoroastrian lay and priestly training).

Studies of traditional survival arts abound (e.g. Campbell 1999).  They rarely go into detail on learning, but they say enough to make it clear that the writers learned by watching and imitating.  Partly because it is the best way to learn, and partly because their consultants always taught that way, these survival-skills scholars learned by quite traditional methods.

A more important and more deeply researched body of work is found in studies of traditional medicine.  Among those particularly good, and useful to us here, are two books by western Sinologists who studied Chinese medicine:  Knowing Practice by Judith Farquhar (1994) and The Transmission of Chinese Medicine by Elisabeth Hsu (1999).  Both apprenticed themselves to Chinese doctors.  Teaching was largely by apprenticeship.  In this case, there was a solid body of textual knowledge which had to be learned, but it greatly underspecified and underdetermined actual practice.  Farquhar spent much time learning to be a Chinese medical worker.  Hsu spent a year in Kunming, Yunnan, studying traditional medicine and qigong exercise.  Her deeply insightful book covers the relationship of text, teaching rhetoric, and practice.  Both came to similar conclusions:  Chinese medicine is an art, learned by actual interaction with patients, not a craft learned from books.  The books are at best unclear and at worst incomprehensible; they never specify enough to determine practice clearly.  One has to work under a doctor’s direction for a long time. 

A few other such medical memoirs from other cultures exist, though many do not tell us much about learning the trade (see e.g. Leighton and Leighton 1949, which pays more attention to a Navaho healer’s inferred personality problems than to his practice).  However, it seems clear from all studies that most traditional and folk medicine is learned by doing, as in the case of Chinese medicine. 

My own experience is relevant.  I learned Maya healing largely from Don José Cauich Canul, a jmeen (healer) of Polyuc, Quintana Roo.  He consciously took me on as a trainee.  He took me out looking for herbs, demonstrated massage and other techniques on me, got me to do the simpler standard routines he used, and wrote up a manuscript with his favorite cures (see Anderson 2003).  There was, thus, a combination of apprentice practice, modeling, verbal instruction, and use of textual material.   

What works best is apprenticeship—or, more broadly, what Jean Lave (1988; Lave and Wenger 1991) calls “legitimate peripheral participation.”  It has also been called “cognitive apprenticeship” (Cole 2010), though in fact it is basically just old-fashioned apprenticeship, and the “cognitive” is thus unnecessary. We learn by helping.  Think how you learned to cook, or work on a car engine, or do any environment-related thing from backpacking to restoring habitat.  Almost certainly, you learned by actually working with a senior and more experienced person, and you gradually came to do more and more of the work by yourself.  If you did learn some of it from books, you are aware how much better participation is than book-learning.

In short, across a very wide range of skills and societies, surprisingly little discussion and virtually no lecturing takes place.  Much learning takes place through interaction, negotiation, and discussion, but often this is the kind of unconscious learning that goes on all the time, especially in language learning by young children.  Learning through discussion seems to be significantly commoner among modern large-scale societies, in both Asia and the western world, but we lack a wide enough sample to be truly sure of this.  Moreover, in these developed worlds, physical skills like sports playing and woodworking seem to involve less discussion than more purely language-based matters, and thus approximate to the typical learning situation in small-scale societies.  However, even in teaching physical skills, verbal coaching is still the rule in North America and parts of West and South Asia, though not so much in East and Southeast Asia (at least in my field work days). 

As mentioned earlier, the one really important traditional way of verbal teaching in most of the world’s cultures, including out-of-classroom America, is through stories (Cajete 1994; Cruikshank 1998; Eastman 1902; Gardner 2003; Gould 1968; Goulet 1998; C. Laird 1976; Rose 2000; many others).  An exciting story, whether an ancient myth or a personal story told by the teacher, packages knowledge in a memorable, exciting way.  Aesop’s ancient Greek fables remain popular today.  Native Americans still tell their folktales, even among groups that have lost their language and most of their traditional culture.  Not only social skills, but everything from hunting to water hole location and from the highest religious ideals to the lowest sexual practices, is passed on in stories.  In non-literate cultures, stories are often the only teaching texts.  Cultures that have writing will add books and manuscripts, but often only for highly technical lore (be it math or theology).

Notably important are two very different kinds of teaching stories:  myths and personal stories.  Myths are a great way to make knowledge seem sacred, super-important, and God-given (see e.g. Cajete 1994).  Cultures as far apart as the Southern Paiute (Laird 1976) and the Australian aborigines (Gould 1968; Rose 2000) encode knowledge of water hole locations, hunting grounds, and food plants in racy stories about the animal beings in the mythic time.  Lots of adventure, sex, and danger, plus the advantage of being sacred, make these stories memorable.  Children learn the water holes thoroughly and in order.  Memorizing a bare list of water holes would not be as effective, and in the desert such relative lack of knowledge would certainly be fatal.

Personal stories often are used to pass on information, but are also well adapted to telling children what not to do.  In many cultures, one cannot criticize another person openly.  So, if a young person is goofing off, an elder will say:  “When I was young, I used to….  Here is what happened….”  The storyteller does not need to say that his foolish actions were the same things the young person is now doing, and does not need to point up the moral after humorously recounting the painfully instructive consequences.  This sort of indirect warning is usually highly effective!  I remember it from my own youth, and it seems to be cross-culturally general, along with other ways of using personal stories to teach (Sterponi 2010).

Other stories are reminiscences and circumstantial tales by the elders about their own experiences (see e.g. Hunn 1991).  These are told around the fire or during actual work.  Hunting tales are traditionally told while going to or from the hunting grounds.  Tales of farming are told while going to or from the fields. 

Most of us in my generation learned our life skills in these ways:  participation and stories.  We remember them better than most of our classroom learning.  Psychologists and anthropologists have demonstrated that knowledge packaged in concrete and specific stories is more memorable than knowledge presented abstractly.  The better-told and more exciting the story, the more it sticks. 

In traditional cultures, teaching by myth and story is usually done by respected elders.  They are well known to the learner, and are people who are highly regarded in the community.  Teaching simple skills by modeling, however, is the parents’ and peers’ job. 

In at least one culture, teaching can even come from the dead:  Among the Cambodians, for whom reincarnation is all-important, “a child’s previous-life mother is understood to play an important role in protecting the child from his or her current parents’ abuses or their inattention to the character the child has inherited from a previous life” (Fung and Smith 2010:266, citing research by Nancy Smith-Hefner).  I have gotten close to this myself; my son Rob was duly diagnosed by my wife’s Cambodian friends and research contacts as having important previous-life influences, not least because he was born on Buddha’s birthday. 

Teaching by rote memorization and formal instruction occurs widely, but usually it is confined to sacred songs or texts.  Normally, the traditional communities of the world place such teaching in a dramatic context—typically as part of a religious ceremony.  This involves everyone in the process, emotionally, and makes the knowledge more memorable because of that.  Often, elders teach the most important rote-learning during initiation ceremonies, often painful and difficult ones.  Knowledge comes with adulthood, and adulthood is hard-won.

Teaching is individualized (Cajete 1994), since normally it is done by elders working with their own family or community members.  It is also total-person training, involving body and mind together, and it is normally applicable immediately in daily practice.

Guided teaching of the traditional kind—copying of behavior modeled by the teacher, supplemented with stories—seems to remain the most effective method.  That is why it is traditional.  It worked well enough to be propagated. 

Modern derivatives, including lab science, hands-on activities, guided practice, coaching, interactive learning, and just plain learning by doing, work very well (McGinnis and Roberts-Harris 2009) but require a good deal of effort, including one-on-one teaching.  The cost of this could be substantially diminished by doing what all traditional societies do:  getting older children to teach younger ones.  The rigid age-segregation of American education appears, from cross-cultural evidence, to be an extremely bad idea.  Programs of mentoring by older children have succeeded extremely well in some places.

Such training is extremely effective in teaching practical skills.  It is not necessarily so good at teaching the kinds of analytic and interpretive skills that are expected in higher education today.  But neither is the lecture-examination system; modern higher education at the graduate level relies on one-on-one teaching, apprenticeship in writing, and, in the sciences, hands-on lab work—in short, something very much like traditional informal education.  There is a deep human truth here.   

            The same applies to moral training:  students have to learn to care and be responsible.   People learn to be moral by dealing with actual life experiences (Kohlberg 1981, 1983).  A few philosophers may get their ethics from grave tomes, but the rest of us get ours from doing something—often something helpful, but often something “bad”—and getting set straight by our parents or other respected figures.  This is supplemented by stories, especially the rueful reminiscence stories noted above, which seem to be universal. 

Whatever the philosophers may say, morals are not abstract principles.  They are pragmatic coping rules for dealing with others.  They are learned not from abstractions but from interactions.

3

More generally, moving out from traditional education to education in general, several other points emerge.

The sooner and more often one retrieves and uses a piece of information, the better one learns and remembers it (Karpicke and Roediger 2008).  Traditional societies teach in context and get the learners to repeat endlessly.

Studies of education also show that the higher the motivation—emotional, social, economic, or otherwise—the more the learning.  Salient facts are stored easily. 

Typically, one learns in a family context, or at least in the community and from well-known community members.

A surprising amount of non-classroom learning is from only slightly older children, cross-culturally confirming Judith Rich Harris’ (1998) findings about the importance of peer groups.  Earlier, thinkers and educators had overemphasized the importance of adults.  Most had hardly noticed the great importance of slightly-older peers.  Yet it is doubly important, because as the younger ones learn by doing, the older ones learn by teaching.  Explaining what one has learned is well known as a particularly valuable way of organizing and cementing knowledge (Siegler 2005).  As the proverb says:  “the best way to learn something is to teach it.” 

Several important general points behind all this have been made by Karim-Aly Kassam (2009:75-81).  He cites Gilbert Ryle’s distinction between knowing how and knowing that.  In more formal terms, this is a contrast between procedural knowledge and declarative knowledge.  Children in traditional societies basically learn how.  Learning that is a part of this wider agenda.  Children must learn a great deal of declarative knowledge, including all those plant names, but they learn this as part of the wider process of learning how to make a living, run a household, and act as responsible citizens of their communities.  Declarative knowledge is reduced to its proper place:  a subsidiary branch of procedural knowledge. 

Traditional ecological knowledge, in Aristotle’s terms, is phronesis.  In Kassam’s very useful treatment of traditional knowledge, phronesis is practical, applied learning in general, made up of “knowing how” with enough “knowing that” added in to provide the basic useful information.  Aristotle distinguished techne—the word that survives in our “techniques” and “technology”—and episteme, basically declarative knowledge; actually, traditional wisdom includes all three, as Aristotle knew, but (again) techne and episteme are subordinate to phronesis in traditional work and environment.  (However, in other realms, such as religion, cosmology, and myth, episteme often dominates, and of course things like stone tool making are strictly techne.) 

Following Argyris et al. (1985), Kassam sees phronesis—and action research—as nesting in “communities of social practice,” while “knowing that” nests in “communities of inquirers” (Kassam 2009:166).  This has clear implications for teaching, and indeed for all aspects of organizing, acquiring, and transmitting knowledge.  We need to get working knowledge out into the field, and work with local people; keeping it in the academy won’t do.  A lifetime of experience in applied anthropology and (via my wife) global public health makes me very sensitive to this point.  Public health projects are constantly wrecked on the same rock:  academics plan and organize them, without awareness of what the people on the ground will make of them.   

Kassam applies to traditional learning a stage model that leads from novice through advanced beginner, competent performer, and proficient performer, finally reaching expert level (Kassam 2009:77-79).  Greenfield and others cited above found, but did not so clearly distinguish and name, similar stages.  Kassam also brings out the point that this all involves learning morals along with practical knowledge.  Morals are part of the work. 

The idea of separating ethics from practice is rather new even in the modern west, and is certainly not typical of modern international science, where both the goals and the practice are morally defined.  A medical researcher is working toward a moral goal (healing the sick), hopefully in a moral way (not plagiarizing, not hyping his funder’s product).

4

Probably the most striking difference between traditional education and ours in the United States today, however, is in the developmental process.  Children in traditional societies generally grow slowly and steadily into adult roles.  They begin by helping in small ways around the house, and are given increasing responsibilities as they get older.  Teenagers are given adult privileges and prerogatives in direct proportion to the adult responsibilities they have taken on.  No privilege is given without prior proof of a proportionate advance in reliability at increasingly demanding adult roles.  At least this is the case in the societies I know—the Chinese fishermen and the Maya—and seems to be the consensus in other descriptions.

Charles Stafford (1995) and I (1999) have described in some detail the order this takes among Chinese fishermen.  I have seen it among the Maya as well, and indeed most of the above-cited sources mention it.

Exceptions are largely in matters of ceremonial knowledge and practice, where a grand initiation into adulthood may suddenly change a boy to a man, a girl to a woman, in a matter of days.  Such “liminal” initiation rites (see van Gennep 1960) usually overdraw a process that is really rather less dramatic, but indeed there is a real difference here from the learning of practical everyday knowledge.

Emotional and personal development similarly is socialized gradually over time, and here our modern society is closer to the traditional.  However, we treat children as children—little kids—until they are in college, or even until they have graduated from it.  Hence endless problems with teenagers, who desperately need to be treated like young adults and made to shape up and act like young adults.  Infantilizing them is seriously harmful to emotional development. 

We have also created a consumer culture that sells to children and uses peer pressure relentlessly, with serious and dangerous results for education and for childhood in general (Pugh 2009).  Families need to stick together and act as a unit to combat this (Hofferth 2009; Pugh 2009), but usually do not, because of work demands and because parents too are caught up in consumerism.  The desire to “do what’s best for the child” now too often involves both buying brand-name items and hovering over the child in school and even in university, never allowing the child to develop any independence or self-reliance.  This is not a good context for environmental education.

5

The contrast between traditional and contemporary education is obvious.  One of the reasons for the widespread ruin of the environment by irresponsible individual actions is the abysmal state of environmental education.  Indeed, there is, worldwide, an incredible ignorance of science, especially biology (Greenwood and North 1999).  This is true especially in the United States (among developed countries).  Half of Americans believe the world was created by God in six 24-hour days.  American children score among the lowest in the world in science and math.  They do worse and worse, by comparison with Europe and East Asia, as they go through the grades /1/.  

Yet, interacting with nature has major beneficial effects on cognitive functioning, both improving performance and reducing stress (Berman et al. 2008).

Paulo Freire’s classic Pedagogy of the Oppressed (1984) directed us to teach for liberation.  Modern American education teaches for passivity.  The schools are not explicitly “teaching the kids to mind,” as they were in my childhood, but they are effectively doing exactly that.  The independent citizenship necessary for environmental concern is increasingly harmed rather than favored.  Above all, teaching has become a process of drilling huge classes in mindless rote memorization for the purpose of answering machine-scored standardized tests.  One could not design a better way to make inquiring children and young adults passive and ignorant.

An editorial in Science, by Lorrie Shepard (2010), pulls no punches:  “An extensive research literature has documented the negative effects of such test-driven instruction, the most obvious being the reduction or elimination of less-tested subjects, including science and social studies.  Less obvious have been the negative effects on learning in tested subjects.  When students are drilled on materials that closely resemble accountability tests, test scores can rise dramatically without a commensurate gain in learning” (Shepard 2010:890).  The author goes on to document in detail the subversion of education by mindless but easily scored tests, and the devastation of science education that results.

Students of education speak of a “hidden curriculum,” a term which “refers to the social relations in the school system and the taken-for-granted values that uphold the social relations valued by…society”—which, in most of the world, means “a hierarchical, gendered society…[with] systemic racism and sexism” (Fiske and Patrick 2000:240).  This is not just a problem for indigenous people; it is exactly what Willis (1981) was describing in Learning to Labor.  The only comment to make is that this curriculum is not at all hidden.  It is not always stated up front in the school curriculum plans, but even when it is not (and it often is!), everyone knows about it.

Unsurprisingly, United States students rank far behind those of other developed countries in science education—13th out of 34 countries in a recent survey, though the survey is based on standardized testing and thus makes the US look better than it otherwise might (“American Students Do Poorly in Science,” Reuters News online, Jan. 25, 2011).  Only 21% of high school students were proficient and only 2% really adept.  Only about 28% of high school biology teachers unequivocally teach evolution as fact; 13% teach creationism and 60% temporize and refuse to go into depth on the issue (Berkman and Plutzer 2011).  The equivocators and creationists were themselves less adequately trained in biology, and are, obviously, passing on that dubious legacy all too successfully.

The problem of environmental education requires an entire book of its own, and some books do indeed exist (e.g. Louv 2005; Nabhan and Trimble 1994; Orr 1992, 1994).  Richard Louv, in a superb book, Last Child in the Woods (2005), points out that contemporary American childhood is very different from the childhood my generation knew.  Television and electronic gadgets get all the attention.  Children learn very well what they see as salient:  Hollywood shows, mechanical devices, sports, brand name clothing, and so on.  They learn these by the time-honored route:  interaction, peer activity, stories.  These things also have prestige.  No American child misses the contrast between our huge, flashy, brilliantly lit shopping malls and our wretched, collapsing schools.  Thus many children now have a fantastic knowledge of popular culture while being almost completely ignorant of school learning.  The combination of peer judgements of what is “cool” and actual living engagement beats out lectures in shabby, overcrowded classrooms every time.

Environmental education requires that children be exposed to a significant extent to reasonably wild nature.  Yet urbanization and environmental degradation make it impossible for most children to get anywhere near a natural area, as urban sprawl and industrial-style farming take over the landscape.  Children in much of the United States, to say nothing of the rest of the world, have no opportunities at all.  Even in areas near wild mountains and waters, children rarely get out into the wild for more than a few hours of sunny daytime.  The difference from my childhood is startling.  Visits to national parks and forests, as well as hunting and fishing, have sharply declined; outdoor recreation has been declining at 1% a year since the 1980s, for a total decline of about 25% (Biello 2008).
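
A quick check of that arithmetic (the calculation is mine, not Biello’s):  a steady 1% yearly decline compounds, so over the roughly 25 years from the early 1980s to his 2008 report,

(1 − 0.01)^25 ≈ 0.78,

a cumulative drop of about 22%.  The round 25% figure treats the decline as simple rather than compound (25 × 1% = 25%); either way, the scale of the loss is the same.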

Nature study—what we would now call environmental education—was a major part of American education in earlier times.  In Teaching Children Science:  Hands-on Nature Study in North America, 1890-1930 (2010), Sally Gregory Kohlstedt recounts this agenda.  Nature study had been popularized in America by 19th-century naturalists such as John Burroughs.  Biologists and naturalists realized that children needed hands-on experiences, and school gardens, nature walks, and the like flourished. A leader in the movement was the great biologist and economic botanist Liberty Hyde Bailey, so we ethnobiologists can feel we were at the heart of it.  One may add that this period—and on through the 1950s—was the golden age of summer camps, when children were introduced to nature in a much more serious way; many camps provided genuine wilderness experiences.  Summer camps today are usually much more tame, urban, and electronically connected.  Even remote mountain camps rarely have the roughing-it quality of a few generations ago. 

Experience with virtual nature, tamed environments (like zoos and gardens), and books does not give children the same degree of feel for or concern for the environment (Peter Kahn, personal communication, 2006, during a visit to his lab to observe ongoing research). 

Excessive caution makes parents and schools restrict and scare children.  Young people often develop a real terror of anything beyond a manicured lawn.  In inner cities they have genuine worries, notably drugs and random gunfire.  But even suburban children are terrorized.  They are afraid of imagined snakes and spiders, unlikely tree-falls, and such.  They are not afraid of the real killers:  automobiles, home poisons, falls, and common illnesses.  The result is that children frequently know nature only from TV wildlife programs. 

Louv labels the syndrome “nature deficit disorder.”  He addresses its real risks in terms of health (starting with obesity), mental state, community life, knowledge of vitally important public issues, “feel” for the need for a decent environment, and much more.  He presents a comprehensive review of strategies to fix the problem, but there is, at present, neither the funding nor the public will to do much about it.

            Many programs have arisen, partly in response to Louv’s book (Novotney 2008), but the problem continues to worsen as more and more electronic devices seduce a more and more urbanized youth.

            The general de-funding of education—private as well as public—in the United States has led to the elimination of  field trips and hands-on experiences.   Also, specialists in education have been resistant to input from scientists.  In California, a group of scientists volunteered their time and effort to design a science curriculum for the grade schools.  It was challenging, exciting, and full of hands-on experiences.  The state rejected it in favor of a curriculum designed by people with “Education” degrees, and based on rote memorization of terms, with minimal hands-on work (Laura Anderson, high-school science teacher, personal communication). 

Incredibly, there is a large segment of the education community that believes that interacting with flashy teaching-machines and then taking standardized tests is the only way (Meltzoff et al. 2009; Pianta et al. 2007).  Their plans would banish nature, labs, and creative writing, and would do nothing for the vast majority of schools that are too poor to afford the flashy machine-teaching gadgets.  One is regrettably reinforced in one’s suspicion that the worst enemy of education is “Education.”

We are now betrayed even by children’s dictionaries.  The Oxford Junior Dictionary as of 2009 has replaced “wren,” “dandelion,” “otter,” “acorn,” and “beaver” with “MP3 player,” “blog,” “cut and paste,” and other hi-tech words (Keisman 2009).  (“Cut and paste” doesn’t mean what it did when I was in grade school!)

Fortunately, there are much better plans afoot that either draw on traditional learning methods or have independently invented them.  I do not know which, but I am happy either way!  National Academy of Sciences papers advise schools to use hands-on methods, discovery procedures, teaching for understanding, and other traditional methods (National Academy of Sciences-Kindergarten… 2007; National Academy of Sciences 2007).  This advice, and similar counsel from other sources, has led to changes in Advanced Placement courses in the high schools (Mervis 2009).  Instead of drill on rote memorization for mindless tests, “new courses will emphasize conceptual knowledge, updated regularly and learned by doing, along with teaching how scientists ask and answer important questions” (Mervis 2009:1488).  Students will, hopefully, have to understand and explain, rather than guessing at one of four machine-scored answers.  Change comes glacially slowly in classrooms.  One hopes this will proceed more rapidly than most grade-school processes.  

Pursuant to this, Newcombe et al. (2009) have written a major programmatic article, with a long review of the literature, on how to teach science in the schools.  Their suggestions are appropriate, indeed excellent, for environmental matters.  Their recommendations are in line with the above.  Among other things, they include being more attentive to young children’s knowledge.  Children enter school with both natural predispositions to think in certain ways and a great deal of cultural baggage; by 5 years old they are fluent in their languages, and inevitably in many teachings (religious and other) that those languages carry.  The panel also advises practical approaches—examples, problems and solutions, concrete representations, and deep explanations.  They advocate graphic as well as verbal approaches, and more generally adapting to particular students’ learning styles.  (This is quixotic in a world of 30 students to a class, but maybe in future….) 

On tests, they are fortunately sensible: 

“In the worst scenario, tests have the unintended consequence of motivating unproductive curricular changes such as increased test practice or elimination of curricular activities that are not directly measured by the test.

“Analysis of state mathematics and science tests, for example, shows that they rarely measure important abilities such as using evidence to form arguments, interpreting contemporary dilemmas, or comprehending the nature of science.  As a result, tests deter teachers from teaching the skills that are valuable for science-literate individuals.  Some teachers infer that practice on test items would be the best way to improve performance, and textbooks regularly include standardized items as part of class tests.  When they are evaluated on standardized test performance [of their students], many math and science teachers abandon inquiry goals and teaching for understanding and substitute memorization and drill on multiple-choice questions requiring the recall of facts….”  (Newcombe et al. 2009).

Of course, as they know full well, this is not the choice of “some teachers” but a behavior essentially forced on the schools and thus on virtually all teachers by the No Child Left Behind policy and its state-level counterparts.  If schools, principals, and teachers are all evaluated solely on the basis of student performance on the most mindless and rote-drill of tests, with teachers and principals being relocated or fired outright if their students perform low, the results can only be one thing.

AP biology in high schools has also received considerable recent attention, with the same goals and recommendations.  William Wood, a biologist who chaired the National Research Council’s Biology Subpanel and edited reports that broke the logjam, reports that current thinking is for the high school curricula to look at evolution, biological systems, information, and interaction of systems components (Wood 2009:1627).  He lists the recommendations for science practices AP students need to learn:

“Use models and representations

Use quantitative reasoning

Pose hypotheses…

Plan experiments and data collection strategies

Perform data analysis and evaluate evidence

Work with scientific explanations and theories

Integrate and transfer knowledge across scales, concepts, domains, and disciplines” (Wood 2009:1628).  Of course this is all done through hands-on, interactive learning—the apprenticeship model again /2/.

Considerable further material has appeared in the science journals.  Science, 23 April 2010 (section “Science, Language and Literacy”), has a review of some recent ideas, including a valuable article by Pearson et al. that savages the standardized test mania and other perversions.  The editors of Scientific American, in an editorial of 2010, note that kindergarten students in the United States have already developed fear of science, though they know nothing of it and normally get no education in it until much later.  Math and science phobia is common, particularly among girls—even at that tender age.  This is, of course, disturbing, and the editors make the obvious recommendations, noting the existence of a few (very few) programs to remedy the lack of science in early years.

Another development that would enormously help environmental education is teaching children about probability, risk, and uncertainty (Bond 2009).  We have hitherto put science in the form of settled “facts.”  Real science, and above all environmental problems, often turn on probabilities, yet we have neglected education in this area.  Such leading experts in the psychology of uncertainty as Gerd Gigerenzer are now working on this issue (Bond 2009).
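
To give the flavor of the approach Gigerenzer advocates, here is a worked example of the “natural frequencies” kind he favors (the example is my own illustration, not Bond’s):  suppose a pollutant sickens 1 person in 100, and a screening test catches 90% of the sick while falsely flagging 9% of the healthy.  Out of 1,000 people, 10 are sick, and 9 of them test positive; of the 990 healthy, about 89 also test positive.  A positive test thus means real sickness only about 9 times in 98, i.e. under 10%.  Laid out as counts, the answer is easy to see; stated as conditional probabilities, it baffles most adults.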

Making science relevant to ordinary children’s lives greatly increases interest and performance (Hulleman and Harackiewicz 2009).  The amazing thing is that the education establishment sees this as a revolutionary new finding!

Community colleges are also a major area to work on (Boggs 2010).

The move to make traditional teachings and teaching methods relevant has recently received a boost in books by Gregory Cajete (1994) and Gary Holthaus (2008) and articles such as Michael Cole’s (2010) and the work he reviews therein.

Cajete’s book deals largely with content, especially worldview and philosophy, but also stresses the methods discussed above:  hands-on training, use of myths and personal stories, development of individual character and ability, embodied learning, and grounding in the environment.  Cajete gives some specific ideas and methods in the last parts of the book.  Cole advocates attention to context, teaching for real life, and mixing play and education.  This leads to work to design “serious” games, and cooperation between teachers, education schools, and communities to create “gardens for development” (Cole 2010:805) that integrate as much of the community as possible in many kinds of learning, including physical training and interactive practice.

Teaching conservation and environmental responsibility must be a very broad-based and broadly accepted activity if it is to have even the slightest chance of success.  We have few “green campuses” and “green curricula” at the present time.  Administrators and many professors are too specialized, too committed to the bottom line, and too concerned with linking universities to big business.  Even professional meetings seriously need to be “greened.”  Brian McKenna, Paige West, and several other environmental anthropologists are conducting research on these matters as of this writing.

The right wing must give up its opposition to the whole concept, but the left wing will also have to think seriously about some of its positions.  Broad-brush attacks on “capitalism,” “greed,” “Western civilization,” and even the entire male gender (Merchant 1996) do not get us far. 

            We should be exceedingly cautious about frontal attacks on all of western or eastern civilization.  It seems better to stress the ecological and environmentalist streams in the great religious traditions, as Baird Callicott (1994) has done.  It seems better, also, to place environmental thinking within the classic traditions of scientific and cosmological thought, rather than trying to attack and discredit 3000 years of science because (for example) Descartes can be misinterpreted as saying we should not care about animals (Merchant 1996).  I am not suggesting this solely for cynical tactical reasons.  As an approach, it seems more intellectually honest and humane, quite apart from its tactical value.

/1/  The journal Science is concerned with the matter, publishing inputs from some of the most distinguished science writers (Greenwood and North 1999; Gould 1998; Miller et al. 2008; Wheeler 1998).  Noting that this was an issue of national concern, for scientists and others, these authors lament the general decline of science in the public eye.

            Some of the reason is captured in another Science report, this one on the lack of employment opportunities for biology Ph.D.s (Holden 1998).  Clearly, there is a feedback loop.

            More recently, there are excellent recommendations by Trombulak et al. (2004) in Conservation Biology.

/2/ “Student-centered teaching” is now becoming deservedly popular; it involves a return to small groups, real-life problems, group projects, multiple drafts of written work, student evaluations of each other’s work, reflective writing or journaling, electronic quizzes with immediate feedback in class, and real papers (Chronicle of Higher Education, Oct. 23, p. A4). 

References

Anderson, E. N.  1992.  “Chinese Fisher Families:  Variations on Chinese Themes.”  Journal of Comparative Family Studies 23(2):231-247.

—  1999.  “Child-raising among Hong Kong Fisherfolk:  Variations on Chinese Themes.”  Bulletin of the Institute of Ethnology, Academia Sinica, 86:121-155.

—  2003.  Those Who Bring the Flowers.  Chetumal, Quintana Roo, Mexico:  ECOSUR.

—  2007.  Floating World Lost.  New Orleans:  University Press of the South.

Argyris, Chris; Robert Putnam; Diana McLain Smith.  1985.  Action Science:  Concepts, Methods, and Skills for Research Intervention.  San Francisco:  Jossey-Bass.

Atleo, E. Richard.  2004.  Tsawalk:  A Nuu-Chah-Nulth Worldview.  Vancouver:  University of British Columbia Press. 

Bazzaz, Fakhri, and 19 other signers.  1998.  “Ecological Science and the Human Predicament.”  Science 282:879.

Berkman, Michael B., and Eric Plutzer.  2011.  “Defeating Creationism in the Courtroom, but Not in the Classroom.”  Science 331:404-405.

Berman, Marc; John Jonides; Stephen Kaplan.  2008.  “The Cognitive Benefits of Interacting with Nature.”  Psychological Science 19:1207-1212.

Biello, David.  2008.  “Not Going Out to Play.”  Scientific American, April, p. 36.

Bjorkland, David.  2007.  Why Youth Is Not Wasted on the Young.  Oxford:  Blackwell.

Blackburn, Carole.  2000.  Harvest of Souls:  The Jesuit Missions and Colonialism in North America, 1632-1650.  Quebec:  McGill-Queen’s University Press.

Bock, John.  2010.  “An Evolutionary Perspective on Learning in Social, Cultural, and Ecological Context.”  In The Anthropology of Learning in Childhood, David Lancy, John Bock, and Suzanne Gaskins, eds.  Walnut Creek:  AltaMira.  Pp. 11-34.

Boggs, George R.  2010.  “Growing Roles for Science Education in Community Colleges.”  Science 329:1151-1152.

Bond, Michael.  2009.  “Risk School.”  Nature 461:1189-1192.

Bourdieu, Pierre, and J. Passeron.  1990.  Reproduction in Education, Society, and Culture.  2nd edn.  Tr. Richard Nice.  London:  Sage Publications.

Boyce, Mary.  1979.  Zoroastrians.  London:  Routledge Kegan Paul.

Cajete, Gregory.  1994.  Look to the Mountain:  An Ecology of Indigenous Education.  Skyland, NC:  Kivaki Press.

Callicott, J. Baird.  1994.  Earth’s Insights.  Berkeley:  University of California Press.

Campbell, Paul D.  1999.  Survival Skills of Native California.  Salt Lake City:  Gibbs-Smith.

Chamberlain, Alexander F.  1895.  The Child and Childhood in Folk-Thought.  New York:  MacMillan.

Chick, Garry.  2010.  “Work, Play and Learning.”  In The Anthropology of Learning in Childhood, David Lancy, John Bock, and Suzanne Gaskins, eds.  Walnut Creek:  AltaMira.  Pp. 119-143.

Cole, Michael.  2010.  “Education as an Intergenerational Process of Human Learning, Teaching, and Development.”  American Psychologist 65:796-807.

Cruikshank, Julie.  1998.  The Social Life of Stories.  Lincoln:  University of Nebraska Press.

Dougherty, Janet.  1979.  “Learning Names for Plants and Plants for Names.” Anthropological Linguistics 21:298-315.

Eastman, Charles.  1902.  Indian Boyhood.   New York:  McClure, Phillips and Co.

Farquhar, Judith.  1994.  Knowing Practice.  Boulder:  Westview. 

Fiske, Jo-Anne, and Betty Patrick.  2000.  Cis Dideen Kat:  When the Plumes Rise.  Vancouver:  University of British Columbia Press.

Franquemont, Christine.  1988.  “The Mnemonics of Chinchero Botany:  How Children Learn and Adults Remember the Natural World.”  Paper, Society of Ethnobiology, annual conference, Mexico City.

Freire, Paulo.  1984.  Pedagogy of the Oppressed.  Tr. Myra Bergman Ramos.  Portuguese orig. 1968.  New York:  Continuum.

Fung, Heidi, and Benjamin Smith.  2010.  “Learning Morality.”  In The Anthropology of Learning in Childhood, David Lancy, John Bock, and Suzanne Gaskins, eds.  Walnut Creek:  AltaMira.  Pp. 261-285.

Gardner, Howard.  1985.  The Mind’s New Science:  A History of the Cognitive Revolution.  New York:  Basic Books.

Gardner, Peter.  2003.  “Rethinking Foragers’ Handling of Environmental and Subsistence Knowledge.”  Ms. circulated at American Anthropological Association meeting, Chicago.

Gauvain, Mary.  2001.  The Social Context of Cognitive Development.  New York:  Guilford Press.

George, Earl Maquinna.  2003.  Living On the Edge:  Nuu-Chah-Nulth History from an Ahousaht Chief’s Perspective.  Winlaw, BC:  Sono Nis Press.

Gibbs, Raymond W., Jr.  2006.  Embodiment and Cognitive Science.  Cambridge:  Cambridge University Press.

Glenn, David.  2007.  “You Will Be Tested on This.”  Chronicle of Higher Education, June 8, pp. A15-A17.

Gould, Richard.  1968.  Yiwara:  Foragers of the Australian Desert.  New York:  Charles Scribner’s Sons.

Goulet, Jean-Guy.  1998.  Ways of Knowing.  Lincoln:  University of Nebraska Press.

Greenfield, Patricia Marks.  2004.  Weaving Generations Together:  Evolving Creativity in the Maya of Chiapas. Santa Fe, NM:  School of American Research Press.

Greenfield, Patricia M.; Heidi Keller; Andrew Fuligni; Ashley Maynard.  2003.  “Cultural Pathways through Universal Development.”  Annual Review of Psychology 54:461-490.

Greenwood, M. R. C., and Karen Kovacs North.  1999.  “Science Through the Looking Glass:  Winning the Battles but Losing the War?”  Science 286:2072-2078.

Harris, Judith Rich.  1998.  The Nurture Assumption.  New York:  The Free Press (division of Simon and Schuster).

Hauser, Marc D., and Thomas Bever.  2008.  “A Biolinguistic Agenda.”  Science 322:1057-1058.

Herdt, Gilbert.  1981.  Guardians of the Flutes.  New York:  McGraw-Hill.

Hofferth, Sandra L.  2009.  “Buying So Children Belong.”  (Review of Pugh 2009.)  Science 324:1647.

Holden, Constance.  1998.  “Report Paints Grim Outlook for Young Ph.D.s.”  Science 281:1584.

Holthaus, Gary.  2008.  Learning Native Wisdom:  What Other Cultures Have to Teach Us about Subsistence, Sustainability, and Spirituality.  Lexington:  University Press of Kentucky.

Howe, Daniel C.; Peter Kahn; Batya Friedman.  1996.  “Along the Rio Negro:  Brazilian Children’s Environmental Views and Values.”  Developmental Psychology 32:979-987.

Hsu, Elisabeth.  1999.  The Transmission of Chinese Medicine.  Cambridge:  Cambridge University Press.  Cambridge Studies in Medical Anthropology, 7. 

Hulleman, Chris S., and Judith M. Harackiewicz.  2009.  “Promoting Interest and Performance in High School Science Classes.”  Science 326:1410-1412.

Hunn, Eugene.  1991.  Nch’i-Wana, The Big River.  Seattle:  University of Washington Press.

Hunn, Eugene.  2002.  “Evidence for the Precocious Acquisition of Plant Knowledge by Zapotec Children.”  In Ethnobiology and Biocultural Diversity, J. R. Stepp, F. S. Wyndham, R. K. Zarger, eds.  Athens: University of Georgia Press. Pp. 604-613.

Hunn, Eugene.  2008.  A Zapotec Natural History:  Trees, Herbs, and Flowers, Birds, Beasts and Bugs in the Life of San Juan Gbëë.  Tucson:  University of Arizona Press.

Jackson, Nettie.  1994.  “A Klikitat Basketmaker’s View of Her Art.”  In Columbia River Basketry:  Gift of the Ancestors, Gift of the Earth by Mary Dodds Schlick.  Seattle:  University of Washington Press.  Pp. 199-201.

Kahn, Peter. 1999.  The Human Relationship with Nature:  Development and Culture.  Cambridge, MA: MIT Press.

Kahn, Peter, and Stephen R. Kellert (eds.).  2002.  Children and Nature:  Psychological, Sociocultural, and Evolutionary Investigations.  Cambridge, MA:  MIT Press.

Karpicke, Jeffrey D., and Henry L. Roediger III.  2008.  “The Critical Importance of Retrieval for Learning.”  Science 319:966-968.

Kassam, Karim-Aly S.  2009.  Biocultural Diversity and Indigenous Ways of Knowing.  Calgary:  University of Calgary Press.

Keisman, Anne.  2009.  “When Words Become Endangered.”  National Wildlife, Oct., p. 12.

Kohlberg, Lawrence.  1981.  The Meaning and Measurement of Moral Development.  Worcester, MA:  Clark University, Heinz Werner Institute.

Kohlberg, Lawrence.  1983.  Moral Stages.  Basel:  Karger.

Kohlstedt, Sally Gregory.  2010.  Teaching Children Science:  Hands-On Nature Study in North America, 1890-1930.  Chicago:  University of Chicago Press.

Kramer, Karen.  2005.  Maya Children:  Helpers at the Farm.  Cambridge, MA:  Harvard University Press.

Laird, Carobeth.  1976.  The Chemehuevis.  Banning, CA:  Malki Museum Press.

—  1984.  Mirror and Pattern.  Banning, CA:  Malki Museum Press.

Lancy, David; John Bock; Suzanne Gaskins (eds.).  2010.  The Anthropology of Learning in Childhood.  Walnut Creek:  AltaMira.

Lancy, David, and M. Annette Grove.  2010.  “The Role of Adults in Children’s Learning.”  In The Anthropology of Learning in Childhood, David Lancy, John Bock, and Suzanne Gaskins, eds.  Walnut Creek:  AltaMira.  Pp. 145-179.

Lave, Jean.  1988.  Cognition in Practice:  Mind, Mathematics, and Culture in Everyday Life.  Cambridge:  Cambridge University Press.

Lave, Jean, and Etienne Wenger.  1991.  Situated Learning:  Legitimate Peripheral Participation.  New York:  Cambridge University Press.

Leighton, Alexander, and Dorothea Leighton.  1949.  Gregorio the Hand-Trembler:  A Psychobiological Personality Study of a Navaho Indian.  Cambridge, MA:  Harvard University Press.

LeVine, Robert A.  2007.  “Ethnographic Studies of Childhood:  A Historical Overview.”  American Anthropologist 109:247-260.

Louv, Richard.  2005.  Last Child in the Woods:  Saving Children from Nature-Deficit Disorder.  Chapel Hill:  Algonquin Books of Chapel Hill.

Maas Colli, Hilaria.  1983.  Transmisión cultural:  Chemax, Yucatán:  un enfoque etnográfico.  Thesis, Licenciada en Antropología Social, Universidad Autónoma de Yucatán.

McGinnis, J. Randy, and Deborah Roberts-Harris.  2009.  “A New Vision for Teaching Science.”  Scientific American Mind, Sept.-Oct., 62-67.

Meltzoff, Andrew N.; Patricia K. Kuhl; Javier Movellan; Terrence J. Sejnowski.  2009.  “Foundations for a New Science of Learning.”  Science 325:284-288.

Mervis, Jeffrey.  2009.  “Revisions to AP Courses Expected to Have Domino Effect.”  Science 325:1488-1489.

Miller, Sarah; Christine Pfund; Christine Maidl Pribbenow; Jo Handelsman.  2008.  “Scientific Teaching in Practice.”  Science 322:1329-1330.

Munroe, Robert L., and Mary Gauvain.  2010.  “The Cross-Cultural Study of Children’s Learning and Socialization:  A Short History.”  In The Anthropology of Learning in Childhood, David Lancy, John Bock, and Suzanne Gaskins, eds.  Walnut Creek:  AltaMira.  Pp. 35-64.

Munroe, Robert L., and Ruth H. Munroe.  1975.  Cross-cultural Human Development.  Monterey, CA:  Brooks/Cole.

Nabhan, Gary Paul, and Stephen Trimble.  1994.  Geographies of Childhood:  Why Children Need Wild Places.  Boston:  Beacon Press.

National Academy of Sciences, Kindergarten through Eighth Grade Committee on Science Learning; Richard Duschl et al., eds.  2007.  Taking Science to School.  Washington:  National Academies Press.

National Academy of Sciences; Sarah Michaels, Andrew Shouse, Heidi Schweingruber, (eds.).  2007.  Ready, Set, Science!  Putting Research to Work in K-8 Science Classrooms.  Washington:  National Academies Press.

Newcombe, Nora S.; Nalini Ambady; Jacquelynne Eccles; Louis Gomez; David Klahr; Marcia Linn; Kevin Miller; Kelly Mix.  2009.  “Psychology’s Role in Mathematics and Science Education.”  American Psychologist 64:538-550.

Novotney, Amy.  2008.  “Getting Back to the Great Outdoors.”  Monitor on Psychology, March, pp. 52-54.

O’Brien, Colleen.  2010.  “Do They Really ‘Know Nothing’?  An Inquiry into Ethnobotanical Knowledge of Students in Arizona, USA.”  Ethnobotany Research and Applications 8, article 35.  14 pp.

Orr, David.  1992.  Ecological Literacy:  Education and the Transition to a Postmodern World.  Albany:  SUNY Press.

Orr, David.  1994.  Earth in Mind:  On Education, Environment and the Human Project.  Washington and Covelo: Island Press.

Pearson, P. David; Elizabeth Moje; Cynthia Greenleaf.  2010.  “Literacy and Science:  Each in the Service of the Other.”  Science 328:459-463.

Pelissier, Catherine.  1991.  “The Anthropology of Teaching and Learning.”  Annual Review of Anthropology 20:75-95.

Pettitt, George A.  1946.  Primitive Education in North America.  University of California Publications in American Archaeology and Ethnology 43:1.

—  1950.  The Quileute of La Push, 1775-1945.  Berkeley:  University of California Press, Anthropological Records, 14:1.

Pianta, Robert C.; Jay Belsky; Renate Houts; Fred Morrison; The National Institute of Child Health and Human Development (NICHD) Early Child Care Research Network.  2007.  “Opportunities to Learn in America’s Elementary Classrooms.”  Science 315:1795-1796.

Powell, J. W.  1901.  “Sophiology, or the Science of Activities Designed to Give Instruction.”  American Anthropologist 3:51-79. 

Pugh, Allison.  2009.  Longing and Belonging:  Parents, Children, and Consumer Culture.  Berkeley:  University of California Press.

Quisumbing, Agnes R.; J. Estudillo; K. Otsuka.  2004.  Land and Schooling.  Baltimore:  Johns Hopkins University Press.

Reyes, Lawney.  2002.  White Grizzly Bear’s Legacy.  Seattle:  University of Washington Press.

Rogoff, Barbara.  2003.  The Cultural Nature of Human Development. New York:  Oxford University Press.

Rogoff, Barbara, and Jean Lave (eds.).  1984.  Everyday Cognition.  Cambridge, MA:  Harvard University Press.

Romney, A. Kimball.  1966.  The Mixtecs of Juxtlahuaca, Mexico. New York:  Wiley.

Rose, Deborah.  2000.  Dingo Makes Us Human:  Life and Land in an Australian Aboriginal Culture.  New York:  Cambridge University Press.

Ruddle, Kenneth.  1993.  “The Transmission of Traditional Ecological Knowledge.”  In Traditional Ecological Knowledge:  Concepts and Cases, J. T. Inglis (ed.).  Ottawa:  International Development Research Center and International Program on Traditional Ecological Knowledge.  Pp. 17-31.

Ruddle, Kenneth, and Ray Chesterfield.  1977.  Education for Food Production in the Orinoco Delta.  Berkeley:  University of California Press.

Scientific American Editors.  2010.  “Start Science Sooner.”  Scientific American, March, 28.

Shepard, Lorrie A.  2010.  “Next-Generation Assessments.”  Science 2010:890.

Siegler, Robert S.  2005.  “Children’s Learning.”  American Psychologist 60:767-778. 

Stafford, Charles.  1995.  The Roads of Chinese Childhood:  Learning and Identification in Angang.  Cambridge:  Cambridge University Press.

Sterponi, Laura.  2010.  “Learning Communicative Competence.”  In The Anthropology of Learning in Childhood, David Lancy, John Bock, and Suzanne Gaskins, eds.  Walnut Creek:  AltaMira.  Pp. 235-259.

Stross, Brian.  1973.  “Acquisition of Botanical Terminology by Tzeltal Children.”  In Meaning in Mayan Languages, M. S. Edmonson, ed.  Hague:  Mouton. Pp. 107-141.

Trombulak, Stephen, et al.  2004.  “Principles of Conservation Biology:  Recommended Guidelines for Conservation Literacy from the Education Committee of the Society for Conservation Biology.”  Conservation Biology 18:1180-1190.  

Van Gennep, Arnold.  1960.  The Rites of Passage.  Chicago:  University of Chicago Press.

Vogt, Evon Z.  1994.  Fieldwork among the Maya:  Reflections on the Harvard Chiapas Project.  Albuquerque:  University of New Mexico Press.

Werner, Emmy.  1989.  “Children of the Garden Island.”  Scientific American 260:4:106-111.

Werner, Emmy, and Ruth S. Smith.  1982.  Vulnerable but Invincible:  A Longitudinal Study of Resilient Children and Youth.  New York:  McGraw-Hill. 

Whiting, John.  1951.  Becoming a Kwoma.  New Haven: Yale University Press.

Whiting, John, ed. by Eleanor Hollenberg Chasdi.  1994.  Culture and Human Development:  The Selected Papers of John Whiting.  New York:  Cambridge University Press.

Willis, Paul.  1981.  Learning to Labor.  Columbia University Press.

Wood, William B.  2009.  “Revising the AP Biology Curriculum.”  Science 325:1627-1628.

Wyndham, Felice.  2009.  “Children Learning the Plant World:  Landscape, Ontogeny and Eco-Cultural Salience.”  Presentation, Society of Ethnobiology, annual conference, New Orleans.

Zambrano, Isabel, and Patricia Greenfield.  2004.  “Ethnoepistemologies at Home and at School.”  In Culture and Competence: Contexts of Life Success, Robert J. Sternberg and Elena L. Grigorenko (eds.).  Washington:  American Psychological Association.  Pp. 251-272. 

Zarger, Rebecca K.  2002.  “Acquisition and Transmission of Subsistence Knowledge by Q’eqchi’ Maya in Belize.”  In: Ethnobiology and Biocultural Diversity, J. R. Stepp, Felice S. Wyndham and R. K. Zarger (eds.).  Athens:  University of Georgia Press. Pp. 593-603.  

—  2010.  “Learning the Environment.”  In The Anthropology of Learning in Childhood, David Lancy, John Bock, and Suzanne Gaskins, eds.  Walnut Creek:  AltaMira.  Pp. 341-370.

Zarger, Rebecca K., and John R. Stepp.  2004.  “Persistence of Botanical Knowledge among Tzeltal Maya Children.”  Current Anthropology 45:413-418.

Anthropology Theory and History Bibliography

Wednesday, November 24th, 2010

Anthropological Theory

Some Useful Readings on Theory and History:  Basic Sources and Modern Reviews

Not intended to be comprehensive or even representative–just some things I find useful.

“Heard some anthropology talk, yes siree!

We’re all descended from a family tree….”

From “Anthropology” by Dizzy Gillespie and Charlie “Bird” Parker

Abrutyn, Seth.  2009.   “Towards a General Theory of Institutional Autonomy.”  Sociological Theory 27:449-465.

Abrutyn, Seth, and Kirk Lawrence.  2010.  “From Chiefdoms to States:  Toward an Integrative Theory of the Evolution of Polities.”  Sociological Perspectives 53, no. 3.

Abu-Lughod, Lila.  1985.  Veiled Sentiments: Honor and Poetry in a Bedouin Society. Berkeley:  University of California Press. 

Agar, Michael.  1985.  Speaking of Ethnography.  Newbury Park, CA:  Sage.

Anderson, Benedict.  1991.  Imagined Communities.  2nd edn.  London:  Verso.

Barnett, Homer.  1953.  Innovation:  The Basis of Cultural Change.  New York:  McGraw-Hill.

Beals, Alan.  1967 (2nd edn. 1979).  Culture in Process.  New York:  Holt, Rinehart, Winston.

Bellah, Robert; Richard Madsen; William Sullivan; Ann Swidler; Steven Tipton.  1996.  Habits of the Heart:  Individualism and Commitment in American Life.  2nd edn.  Berkeley:  University of California Press.

Bellah, Robert; R. Madsen; Wm. Sullivan; Ann Swidler; Steven Tipton.  1991.  The Good Society.  Random House.

Bentley, R. Alexander; Herbert Maschner; Christopher Chippindale (eds.).  2008.  Handbook of Archaeological Theories.  Lanham, MD:  AltaMira. 

Berger, Peter L., and Thomas Luckmann.  1966.  The Social Construction of Reality.  Garden City, NY:  Doubleday.

Boas, Franz.  1904.  “The History of Anthropology.”  Science 20:511, 513-24.  Notes Steinthal.

Boas, Franz.  1917.  “Introductory.”  International Journal of American Linguistics 1:1-8.  This brief start-up editorial for a new journal (still a major one today) stated Boas’ general view of the history of that field to that date.  That first issue contained an article by him, in Spanish, on a Native American language of Mexico—one of the first cases of using Spanish as the language of an article in a US professional journal.  That was a time when Spanish was considered practically a barbarous tongue by most American academics.

Boas, Franz.  1924.  The Mind of Primitive Man.  New York:  MacMillan.

—  1928.  Anthropology and Modern Life.  New York:  W. W. Norton.

Boas, Franz.  1940.  Race, Language and Culture.  New York:  MacMillan.   Collected papers; important; shows development of his thought.

Boas, Franz, ed. Ronald P. Rohner.  1969. The Ethnography of Franz Boas:  Letters and Diaries.  Chicago:  University of Chicago Press.

Bourdieu, Pierre.  1977.  Outline of a Theory of Practice.  Tr. Richard Nice.  New York:  Cambridge University Press.

—  1990.  The Logic of Practice.  Tr. Richard Nice.  Stanford:  Stanford University Press.

Bowker, Geoffrey, and Susan Leigh Star.  1999.  Sorting Things Out:  Classification and Its Consequences.  Cambridge, MA:  MIT Press.

Brown, Donald.  1991.  Human Universals.  Philadelphia:  Temple University Press.

Carneiro, Robert L. 1970. “A Theory of the Origin of the State.” Science 169:733-38.

Carrier, James.  1992.  “Occidentalism:  The World Turned Upside-Down.”  American Ethnologist 19:195-212.

Casagrande, Joseph.  1960.  In the Company of Man.  New York:  Harper.

Chamberlin, T. C.  1965 (orig. in Science, 7 Feb. 1890).  “The Method of Multiple Working Hypotheses.”  Science 148:748-759.

Chase-Dunn, Christopher, and Thomas D. Hall.  1997.  Rise and Demise:  Comparing World-systems.  Boulder:  Westview.

Collins, Randall.  1986.  Weberian Sociological Theory.  Cambridge:  Cambridge University Press.

—  1988.  Theoretical Sociology.  New York:  Harcourt Brace Jovanovich.

—   1992.  Sociological Insight:  An Introduction to Non-Obvious Sociology.  2nd edn.  Oxford University Press.

—  1994.  Four Sociological Traditions.  New York:  Oxford University Press.  (2nd edn of Three S. T.’s.)  

—  1998.  The Sociology of Philosophies.  Cambridge, MA:  Harvard University Press.

—  2001.  Interaction Ritual Chains.  Princeton:  Princeton University Press.

Comaroff, Jean.  1985.  Body of Power, Spirit of Resistance:  The Culture and History of a South African People.  Chicago:  University of Chicago Press.

D’Andrade, Roy.  1995.  The Development of Cognitive Anthropology.  New York:  Cambridge University Press.

De Munck, Victor C., and Elisa J. Sobo (eds.).  1998.  Using Methods in the Field:  A Practical Introduction and Casebook.  AltaMira.

Denzin, Norman, and Yvonna Lincoln (eds.).  2005.  The SAGE Handbook of Qualitative Research.  Sage.

Dilthey, Wilhelm.  1989.  Introduction to the Human Sciences.  Ed./tr. Rudolf A. Makkreel and Frithjof Rodi. (German original ca 1880.)  Princeton:  Princeton University Press. 

Douglas, Mary.  1966.  Purity and Danger:  An Analysis of Concepts of Purity and Taboo.  London:  Routledge, Kegan Paul.

—  1970.  Natural Symbols:  Explorations in Cosmology.  New York:  Pantheon.

Durkheim, Emile.  1933.  The Division of Labor in Society.  New York:  Free Press.

—  1973. Moral Education.  New York:  Free Press.

—  1982.  The Rules of Sociological Method.  S. Lukes, ed.  New York:  Macmillan.

Durkheim, Emile.  1995 [1912].  The Elementary Forms of Religious Life.  Tr. Karen E. Fields.  New York:  Free Press.

—  1951.  Suicide.  Tr. John A. Spaulding and George Simpson.  (French original, 1897.)  Glencoe, IL:  Free Press.

—  1993.  Ethics and the Sociology of Morals.  Tr. Robert T. Hall. 

— and Marcel Mauss.  1963 (Fr. orig. 1903).  Primitive Classification.  London: Cohen and West.

Eliade, Mircea.  1964.  Shamanism:  Archaic Techniques of Ecstasy.  New York:  Pantheon.

Ellingson, Ter.   2001.  The Myth of the Noble Savage.  Berkeley:  University of California Press.

Engels, Frederick.  1942 [1892].  The Origin of the Family, Private Property and the State, in the Light of the Researches of Lewis H. Morgan.  New York:  International Publishers.

— 1966.  Anti-Dühring:  Herr Eugen Dühring’s Revolution in Science.  New York: International Publishers.  (New printing. Orig. US edn. 1939.  Orig. English edn. 1894.)

Foster, George.  1961.  “Interpersonal Relations in Peasant Society.”  Human Organization 19:174-178. 

—  1965.  “Peasant Society and the Image of Limited Good.”  American Anthropologist 67:293-315.

Foucault, Michel.  1970.  The Order of Things:  An Archaeology of the Human Sciences.  (Fr. orig., Les mots et les choses, 1966.)  New York:  Pantheon Books (Random House). 

Foucault, Michel.  1991.  “Governmentality.”  In The Foucault Effect:  Studies in Governmentality, ed. Graham Burchell, Colin Gordon, Peter Miller.  London:  Harvester Wheatsheaf.  Pp. 87-104.

—  2007.  Security, Territory, Population.  New York:  Palgrave MacMillan.

Foucault, Michel.  2008.  The Birth of Biopolitics.  Ed. A. Davidson; tr. G. Burchell.  New York:  Palgrave Macmillan.

Geertz, Clifford.  1973.  The Interpretation of Cultures.  New York: Basic Books.

Gezelius, Stig S.  2007.  “Three Paths from Law Enforcement to Compliance:  Cases from the Fisheries.”  Human Organization 66:414-425. 

Giddens, Anthony.  1984.  The Constitution of Society.  Berkeley:  University of California Press.

Gladwin, Christina.  1989.  Ethnographic Decision Tree Modeling.  Newbury Park, CA:  Sage.

Goffman, Erving.  1959.  The Presentation of Self in Everyday Life.  Garden City, NY:  Doubleday.

—  1961.  Asylums:  Essays on the Social Situation of Mental Patients and Other Inmates.  Garden City, NY:  Doubleday.

—  1967.  Interaction Ritual.  Garden City, NY:  Doubleday.

—  1963.  Stigma:  Notes on the Management of Spoiled Identity.  Englewood Cliffs, NJ:  Prentice-Hall.

Henshaw, John M.  2006.  Does Measurement Measure Up?  How Numbers Reveal and Conceal the Truth.  Johns Hopkins.

Herder, Johann Gottfried.  2002.  Philosophical Writings.  Transl. and ed. by Michael N. Forster.  Cambridge:  Cambridge University Press.

Homans, George.  1974.  Social Behavior: Its Elementary Forms.  New York:  Harcourt, Brace, Jovanovich.

Howell, Signe, and Roy Willis.  1989.  Societies at Peace:  Anthropological Perspectives.  London:  Routledge.  See Robarchek below.  Other papers cover Chewong, Buid, Bali, Zapotec, Ufipa, etc.

Huizinga, Johan.  1950.  Homo Ludens:  A Study of the Play Element in Culture.  London:  Roy.

Hume, David.  1969 (1739-1740).  A Treatise of Human Nature.  New York:  Penguin.

Hutchins, Edwin.   1996.  Cognition in the Wild.  Cambridge, MA:  MIT Press.

Ingold, Tim.  2000.  The Perception of the Environment:  Essays in Livelihood, Dwelling and Skill.  London:  Routledge.

Jacobs, Brian, and Patrick Kain (eds.).  2003.  Essays on Kant’s Anthropology.  Cambridge:  Cambridge University Press. 

Kant, Immanuel.  1978.  Anthropology from a Pragmatic Point of View.  Tr. Victor Lyle Dowdell (Ger. Orig. 1798).  Carbondale:  Southern Illinois University Press. 

Keita, S. O. Y., and Rick A. Kittles.  1997. “The Persistence of Racial Thinking and the Myth of Racial Divergence.”  American Anthropologist 99:534-544.  This is the one we’ve been waiting for!  Cite for students!

Kearney, Michael.  1984.  Worldview.  Novato, CA:  Chandler and Sharp.

Kearney, Michael.  1996.  Reconceptualizing the Peasantry:  Anthropology in Global Perspective.  Boulder, CO:  Westview.

Kipnis, Andrew.  2007.  “Neoliberalism Reified:  Suzhi Discourse and Tropes of Neoliberalism in the People’s Republic of China.”  Journal of the Royal Anthropological Institute 13:383-400.

Kockelman, Paul.  2007.  “Agency:  The Relation between Meaning, Power, and Knowledge.”  Current Anthropology 48:375-401.

Krader, Lawrence.  1980.  “Anthropological Traditions:  Their Relationship as a Dialectic.”  In Anthropology:  Ancestors and Heirs, Stanley Diamond, ed.  Hague:  Mouton.  Pp. 19-34.

Kroeber, A. L.  1944.  Configurations of Culture Growth.  Berkeley:  University of California Press.

— 1948.  Anthropology.  New York:  Harcourt, Brace.

— 1953.  Cultural and Natural Areas of Native North America.  Berkeley: Univ. of California Press.

Kroeber, A. L., and Clyde Kluckhohn.  1952.  Culture: A Critical Review of Concepts and Definitions.  Cambridge, MA: Peabody Museum of American Archaeology and Ethnology, Harvard University. Papers, XLVII:1.

Kronenfeld, David.  1996.  Plastic Glasses and Church Fathers.  New York:  Oxford University Press.

—  2008.  Culture, Society, and Cognition:  Collective Goals, Values, Action, and Knowledge.  Berlin:  Mouton de Gruyter.

Kropotkin, Petr.  1904.  Mutual Aid, a Factor in Evolution.  London:  W. Heinemann.

Kuklick, Henrika.  1991.  The Savage Within:  The Social History of British Anthropology, 1885-1945.  Cambridge:  Cambridge University Press.

Kuper, Adam.  1999.  Culture:  The Anthropologists’ Account.  Cambridge, MA: Harvard University Press. 

—  1995.  Anthropology and Anthropologists:  The Modern British School.  Routledge.

—  1988  The Invention of Primitive Society:  Transformations of an Illusion.  Routledge.

—  2005.  The Reinvention of Primitive Society:  Transformation of a Myth.  Routledge.

Lanternari, Vittorio.  1963.  The Religions of the Oppressed.  New York:  Alfred A. Knopf.

Latour, Bruno.  2005.  Reassembling the Social:  An Introduction to Actor-Network-Theory.  Oxford:  Oxford University Press.

Lerro, Bruce.  2000.  From Earth Spirits to Sky Gods:  The Socioecological Origins of Monotheism, Individualism, and Hyperabstract Reasoning from the Stone Age to the Axial Iron Age.  Lanham, MD:  Lexington Books.

—  2005.  Power in Eden:  The Emergence of Gender Hierarchies in the Ancient World.  Trafford Publishing.

Lévi-Strauss, Claude.  1964.  Totemism.  Tr. Rodney Needham (Fr. orig. 1962, Presses Universitaires de France).  London:  Merlin Press.

Lévi-Strauss, Claude. 1962.  La pensée sauvage.  Paris:  Plon.

—  1963 (Fr. orig. 1958).  Structural Anthropology.  Tr. Claire Jacobson and Brooke Grundfest Schoepf. New York: Basic Books.

Lévi-Strauss, Claude.  1963.  “The Sorcerer and His Magic.”  Chap. 9 in Structural Anthropology.  Tr. Claire Jacobson and Brooke Grundfest Schoepf.  New York:  Basic Books.  Pp. 167-185.

— 1964-1971.  Mythologiques.  Paris: Plon.

—  1963.  Totemism.  Trans. Rodney Needham.  Boston:  Beacon.

Lévi-Strauss died Oct. 31, 2009, at the age of 100.

Locke, John.  1979 [1690]. An Essay Concerning Human Understanding.  Oxford:  Oxford University Press.

Locke, John.  1924 (orig. 1690).  Two Treatises of Government.  New York: Dutton.

Lowie, Robert H.   1937.  A History of Ethnological Theory.  New York:  Farrar and Rinehart.

—  1920.  Primitive Society.  New York:  Boni and Liveright.

—  1948.  Primitive Religion.  New York:  Liveright.

—  1959.  Ethnologist:  A Personal Record.  Berkeley:  University of California Press.

Lucretius.  1928.  De Rerum Natura.  Tr. W. H. D. Rouse.  Latin orig. ca 55 BC.  London:  William Heinemann; New York:  G. P. Putnam’s Sons. 

Malinowski, Bronislaw.  1944.  A Scientific Theory of Culture.  Oxford:  Oxford University Press.

—  1948.  Magic, Science and Religion.  Glencoe, IL:  Free Press.

Marx, Karl.  1973.  Grundrisse.  Baltimore: Penguin.

Maryanski, Alexandra, and Jonathan Turner.  1992.  The Social Cage.  Stanford, CA: Stanford University Press.

Maslow, A.  1970.  Motivation and Personality.  2nd edn.  New York: Harper and Row.

Mauss, Marcel.  1990.  The Gift.  Tr. W. D. Halls. (Fr. orig. 1925.)  London:  Routledge.

Mauss, Marcel.  1979.  “Body Techniques.”  In Sociology and Psychology:  Essays.  Tr. Ben Brewster.  London:  Routledge and Kegan Paul.

McCracken, Grant.  1988.  The Long Interview.  Newbury Park, CA: Sage.

McLean, Athena, and Annette Leibing (eds.).  2008.  The Shadow Side of Fieldwork.

When fieldwork gets really up close and personal.  Csordas, Crapanzano, etc.  Lots of medical anthropology.

Mead, George Herbert.  1964.  George Herbert Mead on Social Psychology.  Ed. Anselm Strauss.  Chicago:  University of Chicago Press.

Merleau-Ponty, Maurice.  1962.  The Phenomenology of Perception.  London: Routledge, Kegan Paul.

— 1963.  The Structure of Behavior.  Boston: Beacon Press.

— 1964.  “From Mauss to Claude Levi-Strauss.”  In: Signs.  Evanston, IL: Northwestern University Press.  Pp. 114-125.

— 1968.  The Visible and the Invisible.  Evanston, IL: Northwestern University Press.

Mills, C. Wright.  1959.  The Sociological Imagination.  New York:  Grove Press.

Montesquieu, Charles, Baron.  1949 (Fr. orig. 1748).  The Spirit of the Laws.  New York: Hafner.

Morgan, David.  1996.  Focus Groups as Qualitative Research.  Sage.

Morgan, Lewis Henry.  1871.  Systems of Consanguinity and Affinity of the Human Family.  Washington, DC: Smithsonian Institution.  Contributions to Knowledge 17:2.

— 1954 (orig. 1851).  League of the Ho-De-No-Sau-Nee or Iroquois.  New Haven: Human Relations Area Files.

— 1877.  Ancient Society.  New York: Henry Holt.

— 1882.  Houses and House-Life of the American Aborigines.  Washington, DC: Government Printing Office.

Netting, Robert McC.  1993.  Smallholders, Householders:  Farm Families and the Ecology of Intensive, Sustainable Agriculture.  Stanford:  Stanford University Press.

Netting, Robert McC.; Richard R. Wilk; Eric J. Arnould (eds.).  1984.  Households:  Comparative and Historical Studies of the Domestic Group.  Berkeley:  University of California Press.

Orans, Martin.  1975.  “Domesticating the Functional Dragon: An Analysis of Piddocke’s Potlatch.”  American Anthropologist 77:312-328.

Patterson, Thomas.  2001.  A Social History of Anthropology in the United States.  Oxford and New York:  Berg. 

Pearsall, Deborah (ed.).  2007.  Encyclopedia of Archaeology.  ScienceDirect.

Powell, J. W.  1901.  “Sophiology, or the Science of Activities Designed to Give Instruction.”  American Anthropologist 3:51-79.  Kanosh on a volcanic butte in Utah:

“He attributed its origin to Shinauav—the Wolf god of the Shoshoneans.  When I remonstrated with him that a wolf could not perform such a feat, ‘Ah,’ he said, ‘in ancient times the Wolf was a great chief.’  And to prove it he told me of other feats which Shinauav had performed, and of the feats of Tavoats, the Rabbit god, and of Kwiats, the Bear god, and of Togoav, the Rattlesnake god.  How like Aristotle he reasoned!”  p. 62. 

Radcliffe-Brown, A. R.  1957.  A Natural Science of Society.  New York:  Free Press.

Radin, Paul.  1927.  Primitive Man as Philosopher.  New York:  Appleton.

—  1957.  Primitive Religion.  New York:  Dover.  (Orig 1937; this has a new preface.)

—  1987.  The Method and Theory of Ethnology:  An Essay in Criticism.  Ed. Arthur J. Vidich.  South Hadley, MA:  Bergin and Garvey.

Riesman, David, with Nathan Glazer.  1953.  The Lonely Crowd:  A Study of the Changing American Character.  New Haven:  Yale University Press.

Robarchek, Clayton A.  1989a.  “Hobbesian and Rousseauan Images of Man:  Autonomy and Individualism in a Peaceful Society.”  In Societies at Peace, Signe Howell and Roy Willis, eds.  New York:  Routledge.  Pp. 31-44.

Robarchek, Clayton.  1989b.  “Primitive Warfare and the Ratomorphic Image of Mankind.”  American Anthropologist 91:903-920.

— and Carole Robarchek.  1998.  Waorani:  The Contexts of Violence and War.  New York:  Harcourt Brace.

Romney, A. K.; Susan Weller; William Batchelder. 1986. “Culture as Consensus: A Theory of Culture and Informant Accuracy.”  American Anthropologist 88:313-338.

Rosaldo, Renato.  n.d. “Grief and a Headhunter’s Rage: On the Cultural Force of the Emotions.”  Southwestern Anthropological Assn., Newsletter, 22:4/23:1, pp. 3-8.

Rosaldo, Renato.  1989.  Culture and Truth:  The Remaking of Social Analysis.  Boston:  Beacon Press.

Ross, Norbert.  2004.  Culture and Cognition:  Implications for Theory and Method.  Thousand Oaks, CA:  Sage.

Rubin, Herbert, and Irene Rubin.  2005.  Qualitative Interviewing:  The Art of Hearing Data.  2nd edn.  Sage.

Sahlins, Marshall.  1972.  Stone Age Economics.  Chicago: Aldine.

— 1976.  Culture and Practical Reason.  Chicago: University of Chicago Press.

Sahlins, Marshall, and Elman Service.  1960.  Evolution and Culture.  Ann Arbor, MI: University of Michigan Press.

Scott, James.  1976.  The Moral Economy of the Peasant.  New Haven: Yale University Press.

—  1985.  Weapons of the Weak.  New Haven: Yale University Press.

Scott, James C.  1990.  Domination and the Arts of Resistance:  Hidden Transcripts.  New Haven:  Yale University Press.

—  1998.  Seeing Like a State.  New Haven:  Yale University Press.

—  2009.  The Art of Not Being Governed:  An Anarchist History of Upland Southeast Asia.  New Haven:  Yale University Press.

Shore, Bradd.  1996.  Culture in Mind:  Cognition, Culture, and the Problem of Meaning.  New York:  Oxford University Press.

Spradley, James.  1979.  The Ethnographic Interview.  New York:  Holt, Rinehart and Winston.

Strauss, Claudia, and Naomi Quinn.  1997.  A Cognitive Theory of Cultural Meaning.  Cambridge:  Cambridge University Press.

Turner, Jonathan H.  2000.  On the Origins of Human Emotions:  A Sociological Inquiry into the Evolution of Human Affect.  Stanford:  Stanford University Press.

—  2010-2011.  Theoretical Principles of Sociology.  3 v. 

Turner, Jonathan, and Alexandra Maryanski.  1979.  Functionalism.  Menlo Park, CA: Benjamin/Cummings.

Turner, Victor. 1967.  The Forest of Symbols. Ithaca: Cornell University Press.

Tyler, Stephen (ed.).  1968.  Cognitive Anthropology.  New York:  Holt, Rinehart and Winston.

Tylor, Edward.  1871.  Primitive Culture, Researches into the Development of Mythology, Philosophy, Religion, Language, Art and Custom.  London: John Murray.

Van Gennep, Arnold.  1960.  The Rites of Passage.  Chicago:  University of Chicago Press.

Vayda, Andrew P.  2009.  “Causal Explanation as a Research Goal:  Do’s and Don’t’s.”  In Explaining Human Actions and Environmental Changes.  Lanham, MD:  AltaMira (division of Rowman & Littlefield).  Pp. 1-48.

P. 24 (fn):  “Extreme current examples of claims of the latter kind [reifying abstractions] are the many claims involving ‘globalization,’ which…has transmogrified from being a label for certain modern-world changes that call for explanation to being freely invoked as the process to which the changes are attributed.”

Vayda, Andrew P.  2009.  Explaining Human Actions and Environmental Changes.  Lanham, MD:  AltaMira (division of Rowman & Littlefield).

Veblen, Thorstein.  1912.  The Theory of the Leisure Class:  An Economic Study of Institutions.  New York:  MacMillan.

Vico, Giambattista.  2000.  New Science.  Tr. David Marsh.  New York:  Penguin.

Voget, Fred.  1975.  A History of Ethnology.  New York: Holt, Rinehart and Winston.

Wallace, A. F. C.  1970.  Culture and Personality.  New York: Random House.

Wallerstein, Immanuel. 1976.  The Modern World-System:  Capitalist Agriculture and the Origins of the European World-Economy in the Sixteenth Century.  New York:  Academic Press.

Warner, Lloyd.  1953.  American Life:  Dream and Reality.  Chicago:  University of Chicago Press.

Weber, Max.  1967.  Max Weber on Law in Economy and Society, Edited by M. Rheinstein. Translated by E. Shils and M. Rheinstein. New York: Simon and Schuster.

—. 1968. Max Weber on Charisma and Institution Building, Edited by S. N. Eisenstadt. Chicago: University of Chicago Press.

—. 1978. Economy and Society: An Outline of Interpretive Sociology, vol. 1-2, Edited by G. Roth and C. Wittich. Berkeley: University of California Press.

—. [1915] 1951. The Religion of China: The Sociology of Confucianism and Taoism. Translated by H. Gerth. New York: Free Press.

—. [1916-17] 1958. The Religion of India: The Sociology of Hinduism and Buddhism. Translated by H. Gerth and D. Martindale. New Delhi: Munshiram Manoharlal.

—. [1917-19] 1952. Ancient Judaism. Translated by H. Gerth and D. Martindale. New York: Free Press.

—  2002.  The Protestant Ethic and the “Spirit” of Capitalism.  Tr. Peter Baehr/Gordon Wells.  New York:  Penguin.  Tr of the 1907 edition, not the 1920 one tr by Parsons.  Some notable diffs, mostly in notes.  This edn also includes a mess of debate swirling around it all.

—  1998.  The Agrarian Sociology of Ancient Civilizations.  Tr. R. I. Frank.  London:  Verso.  (Orig. 1924 from 1909 and 1896 origs.)

—  1958.  The City.  Tr. Don Martindale and Gertrud Neuwirth.  New York:  Free Press.

—  1963.  The Sociology of Religion.  Tr. Talcott Parsons.  German original 1922.  Boston:  Beacon.

—  1946.  From Max Weber:  Essays in Sociology.  Ed. and tr. Hans Gerth and C. Wright Mills.  New York:  Oxford University Press.

White, Leslie A.  1949.  The Science of Culture.  New York: Grove Press.

—  1959.  The Evolution of Culture.  New York:  McGraw-Hill.

Whiteford, Linda M., and Robert T. Trotter II.  2008.  Ethics for Anthropological Research and Practice.  Long Grove, IL:  Waveland Press.

Wolf, Eric.  1982 (new preface in 1997 ed).  Europe and the “People Without History.”  Berkeley:  University of California Press.

Wylie, Alison.  2002.  Thinking from Things:  Essays in the Philosophy of Archaeology.  Berkeley:  University of California Press.  Essays; 514 pp. 

Wylie, Alison.  2004.  “Why Standpoint Matters.”  In The Feminist Standpoint Theory Reader:  Intellectual and Political Controversies, ed. Sandra Harding.  London:  Routledge.  Pp. 339-352.

Yoffee, Norman.  2005.  Myths of the Archaic State: Evolution of the Earliest Cities, States, and Civilizations.  Cambridge University Press.

Histories of Anthropology:

Some useful references, including, for comparison, a selection of histories of other relevant fields.

Most useful ones starred.  Thanks to Julie Brugger, Tom Patterson, Lynn Thomas, among others, for some of these references.

Adams, William Y.  1998.  The Philosophical Roots of Anthropology.  Stanford University, Center for the Study of Language and Information.  More “workmanlike” than brilliant or comprehensive.

American Anthropologist.  2002.  Vol. 104, no. 2: Special Centennial Issue.  Many important historical articles.

Baker, Lee D.  1998.  From Savage to Negro:  Anthropology and the Construction of Race, 1896-1954.  Berkeley:  University of California Press.  Excellent, important book.

Barnard, Alan.  2000.  History and Theory in Anthropology.  Cambridge: Cambridge University Press.

Barth, Fredrik; Andre Gingrich; Robert Parkin; Sydel Silverman.  2005.  One Discipline, Four Ways:  British, German, French, and American Anthropology.  Chicago and London:  University of Chicago Press.

Bartra, Roger.  1994.  Wild Men in the Looking Glass.  Ann Arbor:  University of Michigan Press.

—  1997.  The Artificial Savage.  Ann Arbor:  University of Michigan Press.

This and the previous are a two-volume study of ideas of “savages” in pre-anthropological days.  Excellent; absolutely not to be missed if you are serious about anthro history.

Bennett, John.  1998.  “Classic Anthropology.”  American Anthropologist 100:951-956.  Observations by a rather neglected but very innovative and important thinker.

Bieder, Robert E.  1986.  Science Encounters the Indian, 1820-1880.  Norman: Univ. of Oklahoma Press.  So-so; mostly superseded by Trautmann etc.

Boon, James A.  1982.  Other Tribes, Other Scribes.  Cambridge:  Cambridge University Press.  Skeptical history; very learned and often quite funny. 

Bottomore, Tom (ed.).  1991.  A Dictionary of Marxist Thought.  2nd edn.  Oxford: Blackwell.  Standard, excellent, basic reference.  He’s done other good reference stuff too.

Bowen, John R.  1995.  “The Forms Culture Takes:  A State-of-the-Field Essay on the Anthropology of Southeast Asia.”  Journal of Asian Studies 54:1047-1078.  A regional survey, but, more, this article contains several notably incisive comments on anthropological theory.

Brown, Andrew.  2003.  In the Beginning Was the Worm:  Finding the Secrets of Life in a Tiny Hermaphrodite.  New York:  Columbia University Press.  In the mid-1960s, one Sydney Brenner decided he wanted to truly understand a single multi-celled organism, and picked the worm Caenorhabditis elegans as about the simplest one he could find.  Half a century later we’re still working on it….  This book gives the history of a field that exploded from incredible obscurity to scientific dominance.  No great relevance to anthro, but I couldn’t resist putting it in.

Carneiro, Robert L.  2003.  Evolutionism in Cultural Anthropology:  A Critical History.  Boulder: Westview.  Very good short history, by a proponent.

Cole, Fay-Cooper.  1959.  Reminiscence of his serving as the expert on anthropology for Clarence Darrow in the Scopes trial.  Scientific American, Jan. 1959.  (Reference seen in Sci Am, Jan. 2009, p. 12; haven’t looked up the original.)

Cole, Sally.  2003.  Ruth Landes:  A Life in Anthropology.  Lincoln:  University of Nebraska Press.  Landes was another of Boas’ female students, doing brilliant ethnography but ignored because of gender and other depressing biases in the world.  Landes’ early writings on Ojibwa women were among the first ethnographies specifically dealing with women.

Daniel, Glyn.  1950.  A Hundred Years of Archaeology.  London: Duckworth.  There are later, updated editions that have expanded to “A Hundred and Fifty Years of Archaeology.”

—  1967.  The Origins and Growth of Archaeology.

— and A. C. Renfrew.  1988.  The Idea of Prehistory.  New York: Columbia UP.

Darnell, Regna.  1974.  Readings in the History of Anthropology.  New York:  Harper & Row.  Various pre-anthropological selections and some historical notes from early in the field.

—  1990.  Edward Sapir: Linguist, Anthropologist, Humanist.  Berkeley: University of California Press.  Good on life details; not much on his theories or linguistic practice.                          

—  1998a.  And Along Came Boas:  Continuity and Revolution in Americanist Anthropology.  Amsterdam:  John Benjamins. 

—  1998b.  “Camelot at Yale:  The Construction and Dismantling of the Sapirian Synthesis, 1931-39.”  American Anthropologist 100:361-372.

Deacon, Desley.  1997.  Elsie Clews Parsons:  Inventing Modern Life.  Chicago:  University of Chicago Press.  Parsons was a true original, and this book is not to be missed.

De Laguna, Frederika (ed.).  1960.  Selected Papers from the American Anthropologist, 1888-1920.  Evanston, IL: Row, Peterson.  Nice selection and useful to have, but you might just as well root around in old volumes of AA.

Dudley, Edward, and Maximillian E. Novak (eds.).  1972.  The Wild Man Within.  Pittsburgh: University of Pittsburgh Press.  Perhaps most useful to anthropologists is the essay by Hayden White, “The Forms of Wildness: Archaeology of an Idea,” pp. 3-38.  There is much else of value in this book.

Erickson, Paul, and Liam Murphy. 2003.  A History of Anthropological Theory; with companion volume, 2006, Readings for a History of Anthropological Theory. Ontario:  Broadview Press. 

A Canadian view.  Erickson studies fishing in eastern Canada.  He has also done books on teaching anthropology and on biographies of anthropologists.

Evans, Andrew D.  2007.  “Rudolf Virchow and the Unity of Humankind:  The Liberal Paradigm in German-Speaking Physical Anthropology.”  Paper, American Anthropological Association, annual meeting, Washington, DC.

Virchow was the first great German anthropologist to argue strongly against racist views.  He was also a fighting liberal politically, serving in the German parliament for 13 years.  He led a long tradition ancestral to modern physical anthro, which tradition was, of course, eclipsed under Nazism.

Evans-Pritchard, E. E.  1981.  A History of Anthropological Thought.  Though left tragically unfinished when Evans-Pritchard died, this is a great book–don’t miss.  

Evans-Pritchard, E. E.  1965.  Theories of Primitive Religion.  Oxford:  Oxford University Press.

His no-nonsense demolition job on the field; read with care—he isn’t always fair to his victims!

Fournier, Marcel.  2005.  Marcel Mauss.  Tr. Jane Marie Todd.  Fr. orig. 1994.  Princeton:  Princeton University Press.

Freedberg, David.  2003.  The Eye of the Lynx:  Galileo, His Friends, and the Beginnings of Modern Natural History.  Chicago:  University of Chicago Press.

Frierson, Patrick R.  2003.  Freedom and Anthropology in Kant’s Moral Philosophy.  Cambridge:  Cambridge University Press.

Giddens, Anthony.  1971.  Capitalism and Modern Social Theory: An Analysis of Marx, Durkheim and Max Weber. Cambridge University Press.  Possibly the very best intro to these three.  Not as good on them as some more specialized guys are (Elster is better on Marx, Collins on Weber), but nobody puts it all together like Giddens.

Gleick, James.  2003.  Isaac Newton.  New York:  Pantheon.  Among other things, reminds us that Newton seriously researched alchemy and astrology, and was intensely religious.  “Science” in the no-“pseudoscience,” no-“religion” sense was far in the future!

Goldschmidt, Walter.  2000.  “A Perspective on Anthropology.”  American Anthropologist 102:789-807.  Personal views of the field by a veteran scholar.  Wally Goldschmidt was famous for his sometimes rather acid tongue, so expect some fun here if you enjoy fireworks.

Gould, Stephen Jay.  1996.  The Mismeasure of Man.  2nd edn.  New York:  W. W. Norton.  Basic history and disproof of racism (with all the arguments you need when you teach Anthro 1 or whatever).

Harris, Marvin.  1968.  The Rise of Anthropological Theory.  New York: Crowell.  Good source for anthro up to about 1890.  For 20th century anthro, this book is completely unreliable as well as extremely biased, and is to be carefully avoided; even basic facts are wrong.                    

Harrison, Ira, and Faye Harrison.  1999.  African-American Pioneers in Anthropology.  Urbana:  University of Illinois Press.

Hiatt, L. R.  1997.  Arguments about Aborigines:  Australia and the Evolution of Social Anthropology.  Cambridge:  Cambridge University Press.

Hinsley, Curtis M., Jr.  1981.  Savages and Scientists:  The Smithsonian Institution and the Development of American Anthropology, 1846-1910.  Washington, DC:  Smithsonian Institution Press.  Good basic reference on the facts; not so adequate on the theories and thoughts.

Hollis, Martin.  2002.  “Philosophy of Social Science.”  In The Blackwell Companion to Philosophy, ed. Nicholas Bunnin and Eric Tsui-James.  Malden, MA:  Blackwell.

Howard, M. C., and J. E. King.  1985.  The Political Economy of Marx. London: Longman.  2nd edn.  Nice basic intro.

Hyatt, Marshall.  1990.  Franz Boas, Social Activist:  The Dynamics of Ethnicity.  New York:  Greenwood. 

Hymes, Dell (ed.).  1974.  Studies in the History of Linguistics.  Bloomington: Indiana University Press.

Jacobs, Brian, and Patrick Kain.  2003.  Essays on Kant’s Anthropology.  Cambridge:  Cambridge University Press.  Useful if you are truly into Kant; otherwise too specialized for much value, though interesting and well done.

James, Wendy, and N. J. Allen (eds.).  1998.  Marcel Mauss:  A Centenary Tribute.  Berghahn Books.  Disappointing, but at least it’s something.  The man who gave us The Gift, the concept of “habitus” (yet another thing from Kant’s Anthropology book—but Mauss developed it) and embodiment of culture, and many other basic ideas has received amazingly little attention in the English-language literature.  He deserves better.

Kant, Immanuel.  1978.  Anthropology from a Pragmatic Point of View.  Tr. Victor Lyle Dowdell (Ger. Orig. 1798).  Carbondale:  Southern Illinois University Press. 

This is not a history but the start of a history—the book that launched anthropology as a serious name and a serious field.  The word was coined in the late 16th century and used off and on, but this was the first significant book devoted to it, and created it as a scholarly field.  Kant had discussed anthropology already in Critique of Pure Reason (see the Penguin edition, 2007, translated by Max Müller and Marcus Weigelt, esp. pp. 473-4).  One Alexandre-César Chavannes came out in 1788 with a book Anthropologie ou science générale de l’homme, but it is forgotten by all but trivia buffs.

Kearney, Michael.  1996.  Reconceptualizing the Peasantry.  Boulder: Westview.  Excellent and important history of the concept of the “peasant” in anthropology and of “peasant” studies and related matters in the discipline.

Kelso, Alec (ed.).  2008.  The Tao of Anthropology.  Gainesville: University of Florida Press.

Essays on their careers by senior anthropologists.

Koerner, E. F. K., and R. E. Asher (eds.).  1995.  Concise History of the Language Sciences from the Sumerians to the Cognitivists.  Kidlington, Oxford, England: Elsevier Science (Pergamon imprint).  Taken, and updated, from the Encyclopedia of Language and Linguistics (same editors and publishers, 1994).  This book consists of short articles on everything from the Korean alphabet to Panini’s Sanskrit grammar, as well as everything since.  Superb reference, but not a book to sit down and read.

Kroeber, A. L., and Clyde Kluckhohn.  1952.  Culture: A Critical Review of Concepts and Definitions.  Cambridge, MA: Peabody Museum of American Archaeology and Ethnology, Harvard University.  Variously reprinted in more available places.  Basic; indispensable reference.  Now way out of date, but vitally important for history of anthro.

Kuhn, Thomas.  1962.  The Structure of Scientific Revolutions.  Chicago:  University of Chicago Press.  This book argued that science changes through paradigm shifts; a minority view or a new view slowly gains evidence till there is a huge sudden change and it is adopted.  Few, if any, changes seem to fit Kuhn’s story, especially in the social sciences, though Darwinian evolution comes close.  Standard examples of revolutions (Copernicus and Galileo on cosmology, the fall of phlogiston and alchemy, etc.) turn out to be more complex than Kuhn suggests.  Still, this is a very important book, read by almost everyone even slightly interested in the history of science.

Kuklick, Henrika.  1991.  The Savage Within: The Social History of British Anthropology.  Cambridge: Cambridge UP.  Basic.  As history of science, rather than chronicle of anthro thought, this is probably the best of this list.

Kuper, Adam.  1983.  Anthropology and Anthropologists:  The Modern British School.  2nd edn.  London: Routledge.  Covers 20th century British social anthro in a wonderfully witty, thorough and insightful way.  Best book on the subject.

— 1988.  The Invention of Primitive Society.  London:  Routledge.  19th-century British anthropology.  This and Kuklick are the best books on the period.                                    

—  2006.  The Reinvention of Primitive Society.  New edn of above, with a brief added chapter on romantic savages and indigenous rights today.

—  1994.  The Chosen Primate.  Cambridge, MA: Harvard University Press.  On misuses of Darwinism, and other foibles.

Kuznar, Lawrence.  1997.  Reclaiming a Scientific Anthropology.  Walnut Creek:  AltaMira.  Not a history, but plenty on the rise of science in anthropology.

Laird, Carobeth.  1975.  Encounter with an Angry God: Recollections of My Life with John Peabody Harrington.  Banning, CA: Malki Museum Press.  This book is the runaway success from this list; it was promptly reprinted in mass editions and is still available.  It has reached something close to classic status as a literary work.

Langness, L. L.  1987.  The Study of Culture.  Rev. edn.  Novato, CA: Chandler and Sharp.  Fair, but now superseded by others on this list.

Leeds-Hurwitz, Wendy.  2004.  Rolling in Ditches with Shamans:  Jaime de Angulo and the Professionalization of American Anthropology.  Lincoln:  Univ. of Nebraska Press.

De Angulo tested the limits—he was a brilliant anthropologist and also has a strong case for being the original hippie (I’m serious!); he pioneered Big Sur and developed the counterculture scene there.  This is a sympathetic, thorough look at the man.

Lewis, Herbert S.  1998.  “The Misrepresentation of Anthropology and Its Consequences.”  American Anthropologist 100:716-731.  A VERY important article.  Read it.

—  2001a.  “The Passion of Franz Boas.”  American Anthropologist 103:447-467.  Sets the record straight on a number of important issues.  Lewis is one of the very best writers on the history of anthropology, and THE best on Boas. 

—  2001b.  “Boas, Darwin, Science, and Anthropology.”  Current Anthropology 42:381-406.

Lieberman, Leonard.  2001.  “How ‘Caucasoids’ Got Such Big Crania and Why They Shrank.”  Current Anthropology 42:69-95.  Excellent history of racist misuse of anthropology.

Lowie, Robert.  1937.  The History of Ethnological Theory.  New York: Rinehart.  In spite of the militant Boasian bias, this is still a “must read,” because of Lowie’s matchless incisiveness and wisdom in dealing with early writers.          

Mark, Joan.  1988.  A Stranger in Her Native Land: Alice Fletcher and the American Indians.  Lincoln:  University of Nebraska Press.  More than a biography–this is basic for the history of late 19th century anthro.

Martin, Michael, and Lee McIntyre (eds.).  1994.  Readings in the Philosophy of Social Science.  Cambridge, MA:  MIT Press.  Monumental collection–the full texts of just about every important article.

This is absolutely basic; every anthro student should know it and read at least a few articles!

Mason, Otis Tufton.  1895.  The Origins of Invention.  Smithsonian Institution.  Not a history of anthro, but history by anthropology.  Long superseded as far as conclusions go, but an important milestone in the development of anthropological theory.

—  1894.  Women’s Share in Primitive Culture.  New York:  D. Appleton.  Ditto.

McDonald, Lynn.  1993.  The Early Origins of the Social Sciences.  Montreal and Kingston (Canada): McGill-Queen’s Univ. Press.  Major.  Sociology and political science rather than anthropology, but for the 18th century this is hard to do without.  One of the best histories of social science, much better than any comparable work in the anthro literature.

McGee, R. Jon, and Richard L. Warms (eds.).  1996.  Anthropological Theory: An Introductory History.  Mountain View, CA: Mayfield.  In spite of the name, this is a collection of readings from the basic sources, not a history.  Only fair; now superseded.  Don’t waste your time.

Mead, Margaret, and Ruth Bunzel (eds.).  1960.  The Golden Age of American Anthropology.  Selected readings from American anthro standards.                           

Moore, Henrietta and Todd Sanders, eds.  2006.  Anthropology in Theory: Issues in Epistemology.  Malden, MA: Blackwell Publishing.

Morgan, Lewis Henry.  1985 [1877].  Ancient Society.  Tucson:  University of Arizona Press.  The classic work that started theoretical anthro in the US.  Don’t be put off by the fact that it’s now considered “wrong” and its evolutionary scheme now seems outrageously biased against the “primitives” and “savages.”  It was, for its time, an amazingly forward-looking, challenging work.

—1871.  Systems of Consanguinity and Affinity of the Human Family.  Washington:  Smithsonian Institution.  Smithsonian Contributions to Knowledge 17:2.

Munzel, G. Felicitas.  1999.  Kant’s Conception of Moral Character:  The “Critical” Link of Morality, Anthropology, and Reflective Judgment.  Chicago:  University of Chicago Press. 

Pagden, Anthony.  1982.  The Fall of Natural Man.  Cambridge UP.  16th century proto-anthropology!  Important work.  Shows that the traditional Catholics were the protectors of the Native Americans, with Las Casas emerging as one of the great genuine heroes of history; the modern, humanistic Catholics were the bad guys—convinced that Progress meant sweeping the Indians aside.

Palacio-Pérez, Eduardo.  2010.  “Salomon Reinach and the Religious Interpretation of Palaeolithic Art.”  Antiquity 84 (325):853-863.

Excellent, important historical article.  Reinach basically started it; he was totally involved with anthro and sociology, reviewing Durkheim, writing obit for de Mortillet, etc.  Really brilliant, right-on stuff quoted.

Parezo, Nancy (ed.).  1993.  Hidden Scholars.  Tucson:  University of Arizona Press.  A chronicle of women anthropologists who studied Ariz/NM Indians.  The essay on Benedict is really superb, and the whole book is very valuable.  (Considering that not only Benedict but several of the others were world-famous, highly influential anthropologists, the “hidden” is possibly a bit gratuitous, but some of these scholars indeed suffered from neglect because of gender.)

Patterson, Thomas.  2001.  A Social History of Anthropology in the United States.  Oxford and New York:  Berg.  Superb history with an insightful, incisive political take.  Patterson, an archaeologist turned historian of anthro, is one of the best scholars in the area.

Penniman, T. K.  1965.  A Hundred Years of Anthropology.  3rd ed.  London: Duckworth.  Companion to Daniel, above.  Detailed and still valuable though obviously dated.

Popkin, Richard H.  1987.  Isaac La Peyrère (1596-1676): His Life, Work and Influence.  Under this specialized-sounding title is a book about the idea that there were people around long before Adam—an idea obviously necessary to the development of anthro, especially archaeology and human paleo.  La Peyrère was the first to popularize this idea in the Judeo-Christian world.  His work was later coopted by polygenists and racists; Popkin provides much detail on the rise of racism in the 19th century and on the whole history of early anthropology.

Radin, Paul.  1987.  The Method and Theory of Ethnology.  Introduction by Arthur Vidich.  Boston: Bergin and Garvey.  Classic theoretical statement; Vidich’s long and detailed statement has value for situating Radin in his historical context.

Rankin-Hill, Lesley M., and Michael L. Blakey.  1994.  “W. Montague Cobb (1904-1990):  Physical Anthropologist, Anatomist, and Activist.”  American Anthropologist 96:74-96.  Early African-American leader.

Robins, R. H.  1990.  A Short History of Linguistics.  London: Longmans.  3rd edn.  The only book in its field.  Hopefully a fuller one will come along.

Rowe, William T.  2007.  “Owen Lattimore, Asia, and Comparative History.”  Journal of Asian Studies 66:759-786.  Major essay on a geographer who influenced anthropology; lots in here on anthro and history (as well as geography) of the early 20th century.

Rudwick, Martin.  2006.  Bursting the Limits of Time.  Chicago:  University of Chicago Press.  This enormous book is about the fullest history of an anthro-related discipline.  It’s a fascinating read, even if you aren’t into geology.  Archaeologists and fossil-folk specialists really need to look at it, because it describes the birth of the idea that Europe was around, with weird critters in it, long before people got there.  If you ever have a long summer week to do nothing but bury yourself in what is to me the most fascinating story in the history of science, go for it.  He promises a second vol that should be even better.

Sabloff, Jeremy A., and Wendy Ashmore.  2001.  “An Aspect of Archaeology’s Recent Past and Its Relevance in the New Millennium.”  In Archaeology at the Millennium:  A Sourcebook, ed. by Feinman and Price.  New York:  Kluwer/Plenum.

Shapiro, Warren.  1991.  “Claude Lévi-Strauss Meets Alexander Goldenweiser:  Boasian Anthropology and the Study of Totemism.”  American Anthropologist 93:599-620.

Silverman, Sydel (ed.).  2004.  Totems and Teachers:  Key Figures in the History of Anthropology.  2nd edn.  New York:  AltaMira. 

Major chapters on major figures.  Orig 1981, so core is old-timers’ writings.

Smedley, Audrey.  2007.  Race in North America:  Origin and Evolution of a Worldview.  3rd edn.  Boulder:  Westview.

Spencer, Frank.  1982.  A History of American Physical Anthropology, 1930-1980.  Useful, but physical anthro has yet to find its true historian.

Stocking, George.  1968.  Race, Culture and Evolution.  Free Press.  Boas and his intellectual relatives.  Classic.

—  1985.  Victorian Anthropology.  New York: Free Press (MacMillan).  Superb account; terrific storytelling, lots of facts.  Not as good on theory as Kuklick or Kuper, but not to be dismissed.  Stocking likes to call himself “anthropology’s in-house historian,” and he pretty much is; he’s the best chronicler if not the best thinker.

— 1992.  The Ethnographer’s Magic and Other Essays in the History of Anthropology.   Madison:  University of Wisconsin Press.

— 1995.  After Tylor.  Madison:  University of Wisconsin Press.

An enormous history of British anthropology from Tylor to Radcliffe-Brown.

—  2001.  Delimiting Anthropology:  Occasional Essays and Reflections.  Madison:  University of Wisconsin Press.

— (ed.)  History of Anthropology series.  Madison:  University of Wisconsin Press.  Something like a journal; every couple of years they issue a volume of essays on one broad topic.  So far, we have:

Vol. 1.  Observers Observed:  Essays on Ethnographic Fieldwork (1983)

2.  Functionalism Historicized (1984; special attention to the Kuklick essay)

3.  Objects and Others (museums; 1985)

4.  Malinowski, Rivers, Benedict and Others: Essays on Culture and Personality (1986)

5.  Bones, Bodies, Behavior (1988; physical anthro; good essay on Piltdown and DYNAMITE essays on Nazism)

6.  Romantic Motives:  Essays on Anthropological Sensibility (1989)

7.  Colonial Situations: Essays on the Contextualization of Ethnographic Knowledge (1991)

8.  Volksgeist as Method and Ethic (1996).  On Boas and German anthro.  The essay by Matti Bunzl, “Franz Boas and the Humboldtian Tradition: From Volksgeist and Nationalcharakter to an Anthropological Concept of Culture,” is really superb and important (pp. 17-78).  He traces anthropology, the word and the concept, to Wilhelm von Humboldt, who started it right at the turn of the century (1798-1810 period).  Kant got the word and idea from von Humboldt.

Also interesting is “From Virchow to Fischer: Physical Anthropology and ‘Modern Race Theories’ in Wilhelmine Germany” by Benoit Massin (79-154).  It traces the decay of German phys anth from liberal Virchow to increasingly right-wing and finally Nazi Fischer.

See also “‘The Little History of Pitiful Events’: The Epistemological and Moral Contexts of Kroeber’s Californian Ethnology” by Thomas Buckley (257-297).  Rather an unfair hatchet job; he misses, or deliberately ignores, most of Kroeber’s good side.  But he has some real points.

Stocking, of course, is THE historian of anthropology.  He’s solid, reliable, fair, and a good read.  He is more chatty and into fun facts than a great critic and dissector of theory, though.  Great for the background and context, but go for Collins, Giddens, Kuklick, Herb Lewis, and Tom Patterson if you want to know what the guys actually said.

Trautman, Thomas.  1987.  Lewis Henry Morgan and the Invention of Kinship.  Berkeley: University of California Press. 

Superb study of Morgan’s life and the intellectual climate of the age.  (But LHM didn’t invent kinship, only kinship studies!) 

Trigger, Bruce.  1989.  A History of Archaeological Thought.  Cambridge: Cambridge University Press.  Major work.  Not unbiased.  But indispensable, especially for American archaeology.

Turner, Jonathan.  1989.  The Emergence of Social Theory.  Belmont, CA:  Wadsworth.

Turner is one of the leading writers on classic sociological theory, including sociologists like Weber and Durkheim who influenced anthro.

Turner, Jonathan, and Alexandra Maryanski.  1979.  Functionalism.  Menlo Park: Benjamin/Cummings.

Van Riper, Bowdoin.  1993.  Men Among the Mammoths: Victorian Science and the Discovery of Human Prehistory.  Chicago: University of Chicago Press.  Brief, clear study of the topic.

Verdon, Michel.  2007.  “Franz Boas:  Culture History for the Present or Obsolete Natural History?”  Journal of the Royal Anthropological Institute 13:433-451.  Argues for the latter, but is so wrong that his own quotes from Boas disprove him.

Vermeulen, Hans F. (ed.).  1995.  Fieldwork and Footnotes:  Studies in the History of European Anthropology.  London:  Routledge.

Vico, Giambattista.  1944.  The Autobiography of Giambattista Vico.  Tr. Max Harold Fisch and Thomas Goddard Bergin.  Ithaca: Cornell University Press.  Easily available still, in paperback.  The interest here is not in the autobiography (dull) but in the translators’ introduction, a superb short study of Vico’s place in the history of social science.

Vincent, Joan.  1990.  Anthropology and Politics.  Tucson:  University of Arizona Press.

Classic.  There is a newer edition now.

Voget, Fred.  1975.  A History of Ethnology.  New York: Holt, Rinehart and Winston.  Encyclopedic and extremely comprehensive; for quick reference, not for reading.

Willey, Gordon, and Jeremy Sabloff.  1992.  A History of American Archaeology.  San Francisco: W. H. Freeman.  Standard for its field; basic.  Willey has published books of memoirs, reminiscences and essays that are very valuable.

Wolf, Eric R.  1994.  “Perilous Ideas:  Race, Culture, People.”  Current Anthropology 35:1-12.

Wolff, Larry, and Marco Cipolloni (eds.).  2007.  The Anthropology of the Enlightenment.  Stanford:  Stanford University Press.  Lots on origins.  The first actual “anthropology” book was pre-Kant (see Kant above).  “Civilisation” appeared, first in French, around 1750, and was popularized by Mirabeau.  “Culture” in anything like the modern meaning came somewhat later; in the 1750s it just meant agriculture.  See Wolff’s “Anthropological Thought in the Enlightenment,” pp. 3-32, esp. p. 4 (first anthro book), 10 (first “civilisation”).  “Ethnographie,” “ethnographisch” and “Völkerkunde” were all coined by one man, August Schlözer, prof at Göttingen from 1769 on.  He was using them by the early 1770s.  This from John Gascoigne:  “The German Enlightenment and the Pacific,” pp. 141-171; see p. 144.  Other ideas of this time included stagnant China, childlike India, etc.  The savage New World stereotype continued from the 17th century, and got new spin from Adam Smith.  Lots in here about Siberia, slaves in Haiti, etc.  Very little bullshit (though C’s last chapter is pretty lame).

Belleau, Jean-Philippe E., “Love in the Time of Hierarchy:  Ethnographic Voices in Eighteenth-Century Haiti,” 209-237, has all the horrors Stedman found in Surinam.  Confirmation.

Young, Virginia Heyer.  2005.  Ruth Benedict:  Beyond Relativity, Beyond Pattern.  Lincoln: University of Nebraska Press.

     There are many other excellent works.  Their exclusion from this list is merely due to time and space constraints.  In particular, I have avoided most biographies and autobiographies, but note that W. H. R. Rivers, A. C. Haddon, Robert Lowie, A. L. Kroeber, Margaret Mead, Ruth Benedict, Alice Fletcher, Edgar Lee Hewett, Jaime de Angulo, Frank Cushing, Claude Levi-Strauss, B. Malinowski, Franz Boas, Hortense Powdermaker, Emile Durkheim, L. H. Morgan, and many other “greats” have been the subjects of biographies or wrote autobiographies.  (The biographies of Sapir, Harrington, Fletcher and Morgan above are “different” because they go far beyond mere biography and cover the whole intellectual life of the periods in question.  For Morgan, there is another more “ordinary” biography as well as Trautman’s more ambitious book.)

     When seeking to know about a particular person, always remember to look up his or her obituaries in the major journals.  Obits are often very valuable, especially in the early and mid 20th century, when scholars like A. L. Kroeber made the obit into a major scholarly form.          

     Dozens of anthropologists have written popular or semipopular accounts of their field work.  Most of these can most charitably be described as chatty journals and travel accounts.  Uncharitable descriptions could get much worse without being unfair or wrong, I am sorry to say.  Among the few that deserve attention as literature are Jaime de Angulo’s writings and Carobeth Laird’s book Encounter with an Angry God.  In the Company of Man, edited by Joseph Casagrande, provides a good selection of short accounts.

     Another category missing above is regional histories.  There are superior histories of archaeology in Mexico, the US Southwest, Mayaland, Mesopotamia, China, and many other areas.  The Maya, in particular, are well served; the archaeologists seem almost as fascinating to historians as the ancient Maya themselves.  See esp. a number of works by Robert Brunhouse and by Michael Coe (his Breaking the Maya Code is deservedly a classic).  Other fields have not been so well served, but there are good sources around.

     A few books are so bad as to require a special avoidance warning.  Marvin Harris has been noted above.  Donna Haraway’s stuff is amusing polemic but not serious history; she gets her facts wrong occasionally, and puts a lot of spin on them even when they’re right.  Several earlier authors (Leslie White for one) didn’t even get their facts straight.

Some Related Items

The Three Great Founders of social science are universally agreed to be Marx, Durkheim, and Weber (whatever one may think of their theories!).  (For American anthropology, add Morgan and Boas.)  All three of the Greats are well translated and analyzed in current literature, and there is no substitute for reading them in detail in the original.

Marx

There is no substitute, in the end, for reading CAPITAL (at least Vol. I) and the GRUNDRISSE, if you are at all interested in Marxian matters.  For a quick introduction, though, everybody’s favorite—deservedly so—is Rius’ cartoon book Marx for Beginners.  It’s accurate (more so than most learned tomes on Marx), fair, and human.  Anyway, it’s always fun to recommend a comic book to grad students!

For a more serious take, see Elster, below.

Three readers give quick looks at the Marxian canon:
Elster, Jon (ed.).  1986.  Karl Marx:  A Reader.  Cambridge:  Cambridge University Press.  My favorite.

Elster, Jon.  1984.  Making Sense of Marx.  Cambridge:  Cambridge University Press.  A stunning job of explaining and critiquing the Master; this is the “more serious take” promised above, not strictly a reader.

McLellan, David (ed.).  1988.  Marxism:  Essential Writings.  Oxford: Oxford Univ. Press.  Standard reader, including not only Marx and Engels but also Kautsky, Plekhanov, Lenin, Mao, Marcuse, and even Che Guevara, among others.

Tucker, Robert (ed.).  1978.  The Marx-Engels Reader.  2nd edn.  New York:  W. W. Norton.  Short bits of a lot of disparate things, but useful.

Durkheim:
Durkheim, Emile.  1933.  The Division of Labor in Society.  New York:  Free Press.

—  1973. Moral Education.  New York:  Free Press.

—  1982.  The Rules of Sociological Method.  S. Lukes, ed.  New York:  Macmillan.

—  1995 [1912].  The Elementary Forms of Religious Life.  Tr. Karen E. Fields.  New York:  Free Press.  Note no “the” in the title!!

—  1951.  Suicide.  Tr. John A. Spaulding and George Simpson.  Orig. 1897.  New York:  Free Press.

—  1993.  Ethics and the Sociology of Morals.  Tr. Robert T. Hall.  Buffalo, NY:  Prometheus Books.

— and Marcel Mauss.  1963 (Fr. orig. 1903).  Primitive Classification.  London: Cohen and West.

Don’t waste your time with existing English-language biographies of Durkheim.  (The major one is a disaster.  I won’t even mention names.)  Read Collins, and Turner, above.

Philosophy of Science:

There is not, so far, a serious work on the philosophy of anthropology.  (The theoretical, postmodern, and critical works are really a different sort of thing.  They argue for, or describe, particular views.  The books below examine the underpinnings of the whole scientific enterprise.)  Until we have our own, try these more general works:

Dupre, John.  1993.  The Disorder of Things.  Cambridge, MA:  Harvard University Press.  Good summary of recent philosophy of science.  Refutes Popper and other naive realists, but also avoids the trap of Kuhn and Feyerabend (“it’s all arbitrary”).

Elster, Jon.  Vast series of books, all superb, some definitive.  One particularly useful for us is The Cement of Society (Cambridge UP 1989).  Local Justice (Russell Sage Foundation, 1992) reports studies of ways of trying to ensure fairness in situations like the draft and immigration.

Hacking, Ian.  1999.  The Social Construction of What?  Cambridge, MA:  Harvard University Press.

Kitcher, Philip.  1993.  The Advancement of Science.  New York: Oxford Univ. Press.  Definitive review of the recent literature.   See also his Vaulting Ambition, 1985, a critique of sociobiology.

Kuhn, Thomas.  1962.  The Structure of Scientific Revolutions.  Chicago:  University of Chicago Press.

Latour, Bruno.  2004.  Politics of Nature: How to Bring the Sciences into Democracy.  Tr. Catherine Porter.  Cambridge, MA:  Harvard University Press.

Latour, Bruno.  2005.  Reassembling the Social:  An Introduction to Actor-Network-Theory.  Oxford:  Oxford University Press.

Merleau-Ponty, Maurice.  Many books, of which The Structure of Behavior is possibly the most useful for anthropologists.  His Nature is, alas, just course notes; he died before writing it up.  Possibly the most important philosopher of social science in the mid-20th c., and a major source of ideas for people ranging from Levi-Strauss to Byron Good.

Rosenberg, Alexander.  1988.  Philosophy of Social Science.  Boulder: Westview.  Excellent introduction.

Some more useful theory stuff, just for completeness:

Anderson, Benedict.  1991.  Imagined Communities.  2nd edn.  London:  Verso.

Classic.  Possibly the most cited book in anthro in the last 20 years.

Berger, Peter L., and Thomas Luckmann.  1966.  The Social Construction of Reality.  Garden City, NY:  Doubleday.

Fairly well-known intro to phenomenology in social science, but you can get it better from Merleau-Ponty and Kay Milton.

Engels, Frederick.  1942 [1892].  The Origin of the Family, Private Property and the State, in the Light of the Researches of Lewis H. Morgan.  New York:  International Publishers.

— 1966.  Anti-Dühring: Herr Eugen Dühring’s Revolution in Science.  New York: International Publishers.  (New printing.  Orig. US edn. 1939.  Orig. English edn. 1894.)

Engels is an easier read than Marx and these two books have all of Marx’ directly anthro-useful ideas.  On the other hand, they don’t tell you much about Marx’ most interesting ideas, like the mode of production concept.

Fustel de Coulanges, Numa Denis.  1955 [1864].  The Ancient City:  A Study on the Religion, Laws, and Institutions of Greece and Rome.  Garden City, NY:  Doubleday.

Enormously important book in the history of sociology and anthro, mostly via its influence on Durkheim, who got from it most of his sense of institutions and their functionality and contextual embedding.

Gaukroger, Stephen.  2006.  The Emergence of a Scientific Culture:  Science and the Shaping of Modernity 1210-1685.  Oxford:  Oxford University Press.

This improbable work covers (in a mere 700 pages) the entire history of western science up to its full emergence.  Incredible undertaking; I can’t believe one guy did it.  Obviously basic background if you are into this, but nothing on anthro per se.

Geertz, Clifford.  1973.  The Interpretation of Cultures.  Basic Books.

The classic Kant-to-Parsons-to-anthro book.

Hodgen, Margaret T.  1964.  Early Anthropology in the Sixteenth and Seventeenth Centuries.  Philadelphia:  University of Pennsylvania Press.

Humboldt, Wilhelm von.  1988.  On Language:  The Diversity of Human Language-Structure and Its Influence on the Mental Development of Mankind.  Tr. Peter Heath.  Cambridge:  Cambridge University Press.  Ger. orig. 1836.

The original locus of the “Sapir-Whorf” hypothesis.

Hume, David.  1969 (1739-1740).  A Treatise of Human Nature.  New York:  Penguin.

Everybody’s favorite bit of light-hearted cynicism and total devastation-for-fun of all generalizations.

Locke, John.  1979 [1690].  An Essay Concerning Human Understanding.  Oxford:  Oxford University Press.

Locke, John.  1924 (orig. 1690).  Two Treatises of Government.  New York: Dutton.

These two were extremely influential on the development of social science—as influential in the English and empirical worlds as Kant in the German and Germanic-American ones.

Definite “must reads” if you care about social thought, and easy to read, even though the first is inordinately long.

Mead, George Herbert.  1964.  George Herbert Mead on Social Psychology.  Ed. Anselm Strauss.  Chicago:  University of Chicago Press.

Major thinker; basically started social psych, and brought interactionism to the US (from Dilthey).

Merleau-Ponty, Maurice.  1962.  The Phenomenology of Perception.  London: Routledge & Kegan Paul.

— 1963.  The Structure of Behavior.  Boston: Beacon Press.

— 1964.  “From Mauss to Claude Levi-Strauss.”  In Signs.  Evanston, IL:  Northwestern University Press.  Pp. 114-125.

— 1968.  The Visible and the Invisible.  Evanston, IL:  Northwestern University Press.

Of MMP’s many books, these are the most useful to anthropologists.  The 1964 item is the only thing that makes L-S and MMP actually comprehensible, even easy, to the suffering beginner, and thus is a must read (even if you’re not a beginner).

Martin, Michael, and Lee C. McIntyre (eds.).  1994.  Readings in the Philosophy of Social Science.  Cambridge, MA:  MIT Press.

Mills, C. Wright.  1959.  The Sociological Imagination.  New York:  Grove Press.

This book is Sacred Text to a lot of us from the 1950s and 1960s.  Beyond comment.

Montesquieu (Charles Secondat, Baron de Montesquieu).  1949 (Fr. orig. 1748).  The Spirit of the Laws.  New York: Hafner.

Another truly foundational work.  This book started serious cross-cultural comparison; started rational critique of legal systems on the basis thereof; started the idea of “environmental determinism” as a serious theory; and started enough more things to inspire a huge literature. 

(My friend the expert would comment here that Montesquieu didn’t really start all that stuff, but for all practical purposes M did; nobody read the obscure other guys that anticipated tiny bits of it.)

Orans, Martin.  1996.  Not Even Wrong:  Margaret Mead, Derek Freeman, and the Samoans.  Novato, CA:  Chandler and Sharp.  Definitive final word on a classic controversy.

Oreskes, Naomi.  1999.  The Rejection of Continental Drift:  Theory and Method in American Earth Science.  New York:  Oxford University Press. 

Excellent book about why continental drift wasn’t such a revolution after all; it was rejected by most for lack of evidence (so no real failure to engage) and yet still widely taken seriously (so no real “revolution” when it turned out to be true).  But then….

— (ed.).  2001.  Plate Tectonics:  An Insider’s History of the Modern Theory of the Earth.  Boulder:  Westview.  …she found that the people who actually created the modern theory of plate tectonics saw it as very revolutionary indeed!  Educated in an age when “drifting continents” were literally a laughingstock, they formed a tight band of advocates when Tuzo Wilson (especially) converted from enemy to enthusiast because of overwhelming evidence.  This book consists of their reminiscences about it all, and is the most fascinating book in the history of science that I have read.

Pagden, Anthony.  1987.  The Fall of Natural Man:  The American Indian and the Origins of Comparative Ethnology.  Cambridge:  Cambridge University Press.

Patterson, Thomas C.  2005.  “The Turn to Agency:  Neoliberalism, Individuality, and Subjectivity in Late-Twentieth-Century Anglophone Archaeology.”  Rethinking Marxism 17:373-384. 

Sahlins, Marshall.  1972.  Stone Age Economics.  Chicago: Aldine.

—  1976.  Culture and Practical Reason.  Chicago: University of Chicago Press.

These two are historically important.

Weart, Spencer R.  2004.  The Discovery of Global Warming.  Cambridge, MA:  Harvard University Press.  Again, not directly relevant to anthro, but interesting as history of science.

Wylie, Alison.  2002.  Thinking from Things:  Essays in the Philosophy of Archaeology.  Berkeley:  University of California Press.  Essays; 514 pp. 

Controversies in anthropology

     I find the more famous controversies in anthro a bit unedifying.  Robert Redfield and his student Oscar Lewis famously disagreed about Tepoztlan:  Redfield found it a delightful, happy place, Lewis a melancholy and conflicted one.  This is better understood when you read some history and learn that Tepoztlan changed a great deal between Redfield’s and Lewis’ visits.  At the time of R and L’s work, “peasant villages” were supposed to be “changeless,” even when they were virtually suburbs of Mexico City (which Tepoztlan is).  Thus, the differences between R and L were ascribed to differences between the two observers.  In reality, most of the differences were actual differences between Tepoztlan in the early 1920s, in the heady post-revolution days, and in the 1930s, in the dark depths of the Depression.  There were, however, some real differences between R and L.  (And when I was there a few years ago Tepoztlan was much bigger and neither particularly happy nor particularly sad.)

    The Margaret Mead-Derek Freeman thing pits two abysmally incompetent anthropologists against each other (see Orans reference above).  (Mead got much better later; she was in her mid-twenties when she did her Samoa work.)  It’s not worth much attention, but Orans gets off some general points about how things should have been done, and thus raises the issues to levels worth your time.

     Similarly, the “controversy” over Patrick Tierney’s book Darkness in El Dorado isn’t.  Tierney was a sensationalist reporter relying on local gossip.  His work is ridiculous.  He got off some easy points on Napoleon Chagnon (an easy man to hit) but otherwise the book is a waste of time, and the controversy around it too unedifying to take seriously.

     One controversy that IS worth your attention is the Richard Lee-Edwin Wilmsen controversy about the San.  Here we have serious, thoughtful experts carrying on something like a real exchange of views.  See also the recent medical anthro literature for serious exchanges of reasonable views.  Any issue of Current Anthropology will provide examples of other scientifically respectable controversies.

     It’s symptomatic of something (what?) that anthropologists love to focus on the R-L, Mead-Freeman, and Tierney controversies instead of the thousands of reasonable, civilized exchanges of views, leading to real resolution, that have taken place in the field.