Archive for August, 2011

Science and Ethnoscience, bibliography, part 3

Monday, August 22nd, 2011


E. N. Anderson

Dept. of Anthropology, University of California, Riverside

Augmented Bibliography, Part 3

Pagden, Anthony.  1987.  The Fall of Natural Man:  The American Indian and the Origins of Comparative Ethnology.  Cambridge:  Cambridge University Press.

Parkinson, John.  1976 (1629).  A Garden of Pleasant Flowers:  Paradisi in Sole, Paradisus Terrestris. New York:  Dover.

Pavord, Anna.  2005.  The Naming of Names:  The Search for Order in the World of Plants.  New York:  Bloomsbury.

Perezgrovas Garza, Raúl (ed.).  1990.  Los carneros de San Juan:  Ovinocultura indígena en los Altos de Chiapas.  San Cristóbal de Las Casas:  Universidad Autónoma de Chiapas.

Perry, Charles.  2007.  “Foreword.”  In:  Medieval Cuisine of the Islamic World, by Lilia Zaouali.  Tr. M. B. DeBevoise.  Berkeley:  University of California Press.

Pinker, Steven.  2003.  The Blank Slate.  New York:  Penguin.

Ponting, Clive.  1991.  A Green History of the World.  New York:  Penguin.

Popper, Karl.   1959.  The Logic of Scientific Discovery.  London:  Hutchinson.

Pormann, Peter E., and Emilie Savage-Smith.  2007.  Medieval Islamic Medicine.  Edinburgh:  Edinburgh University Press; Washington, DC: Georgetown University Press.

Posey, Darrell Addison.  2004.  Indigenous Knowledge and Ethics:  A Darrell Posey Reader.  New York:  Routledge.  Posthumous stuff; looks super.

Potter, Jack.  1976.  Thai Peasant Social Structure.  Chicago:  University of Chicago Press.

Powell, J. W.  1901.  “Sophiology, or the Science of Activities Designed to Give Instruction.”  American Anthropologist 3:51-79.

Preece, R.  1999.  Cultural Myths, Cultural Realities. Vancouver:  University of British Columbia Press.

Pyne, Stephen J.  1991.  Burning Bush: A Fire History of Australia.  NY: Henry Holt & Co.

Rabinow, Paul.  2002.  French DNA:  Trouble in Purgatory.  Chicago:  University of Chicago Press.

Radin, Paul.  1927.  Primitive Man as Philosopher.  New York:  Appleton.

—  1957.  Primitive Religion.  New York:  Dover.  (Orig 1937; this has a new preface.)

Re Cruz, Alicia.  1996.  The Two Milpas of Chan Kom.  Albany:  SUNY Press.

Reardon, Sara.  2011.  “The Alchemical Revolution.”  Science 332:914-915.

Reichel-Dolmatoff, G.  1971.  Amazonian Cosmos:  The Sexual and Religious Symbolism of the Tukano Indians.  Chicago:  University of Chicago Press.

—  1976.  “Cosmology as Ecological Analysis:  A View from the Rain Forest.”  Man 11:307-316.

Robb, John Donald.  1980.  Hispanic Folk Music of New Mexico and the Southwest:  A Self-Portrait of a People.  Norman, OK:  University of Oklahoma Press.

Rosaldo, Renato.  1989.  Culture and Truth:  The Remaking of Social Analysis.  Boston:  Beacon Press.


Ross, Norbert.  2004.  Culture and Cognition:  Implications for Theory and Method.  Thousand Oaks, CA:  Sage.

Rudwick, Martin.  2005.  Bursting the Limits of Time.  Chicago:  University of Chicago Press.


—  2008.  Worlds Before Adam.  Chicago:  University of Chicago Press.

Sahagún, Bernardino de.  1950-1982.  Florentine Codex.  Tr. Charles E. Dibble and Arthur J. O. Anderson.  (Spanish original late 16th century.)  Salt Lake City:  University of Utah Press.


Sahlins, Marshall.  1972.  Stone Age Economics.  Chicago:  Aldine.


—  1976.  Culture and Practical Reason.  Chicago:  University of Chicago Press.

Said, Edward.  1978.  Orientalism.  New York:  Pantheon.

Schäfer, Dagmar.  2011.  The Crafting of the 10,000 Things:  Knowledge and Technology in Seventeenth-Century China.  Chicago:  University of Chicago Press.

Schneider, Norbert.  1992.  Naturaleza muerta.  Köln:  Benedikt Taschen.

Schipper, Kristofer.  1993.  The Taoist Body.  Tr. Karen C. Duval (Fr. orig. 1982).  Berkeley:  University of California Press.

Schopenhauer, Arthur.  1950.  The World as Will and Idea.  Tr. R. Haldane and J. Kemp. (German original ca. 1850; this translation orig. publ. 1883).  London:  Routledge, Kegan Paul.

Shah, Idries.  1956.  Oriental Magic.  London:  Rider.

Sharp, Henry.  1987.  “Giant Fish, Giant Otters, and Dinosaurs:  ‘Apparently Irrational Beliefs’ in a Chipewyan Community.”  American Ethnologist 14:226-235.

—  2001  Loon:  Memory, Meaning and Reality in a Northern Dene Community.  Lincoln:  University of Nebraska Press.

Sivin, Nathan.  2000.  “Introduction.”  In Science and Civilisation in China.  Vol. 6: Biology and Biological Technology.  Part VI:  Medicine, by Joseph Needham with Lu Gwei-djen.  Ed. by Nathan Sivin.  Cambridge:  Cambridge University Press.

Skinner, B. F.  1959.  Cumulative Record:  A Selection of Papers.  New York:  Appleton-Century-Crofts.

Sluyter, Andrew.  2003.  “Material-Conceptual Landscape Transformation and the Emergence of the Pristine Myth in Early Colonial Mexico.”  In Political Ecology:  An Integrative Approach to Geography and Environment-Development Studies, ed. by Karl Zimmerer and Thomas Bassett.  New York:  Guilford.  Pp. 221-239.

Smith, Adam.  1910 (orig. 1776).  The Wealth of Nations.  New York:  Dutton.

Smith, Claire, and Martin Wobst.  2005.  Indigenous Archaeologies:  Decolonizing Theory and Practice.  New York:  Routledge.

Smith, David M.  1999.  “An Athapaskan Way of Knowing:  Chipewyan Ontology.”  American Ethnologist 25:412-432.

Stauber, John, and Sheldon Rampton.  1996.  Toxic Sludge is Good for You: Lies, Damn Lies and the Public Relations Industry.  Monroe, Maine: Common Courage.

Steward, Julian H.  1955.  Theory of Culture Change.  Urbana:  University of Illinois Press.

—  1977.  Evolution and Ecology:  Essays on Social Transformation.  Ed. Jane Steward and Robert Murphy.  Urbana:  University of Illinois Press.

Strang, Veronica.  2006.  “A Happy Coincidence?  Symbiosis and Synthesis in Anthropological and Indigenous Knowledges.”  Current Anthropology 47:981-1008.

Sun Simiao.  2007.  Recipes Worth a Thousand Gold:  Foods.  Tr. Sumei Yi.  Chinese original, 654 A.D.  Electronically distributed on Chimed listserv, July 2007.


Tacuinum Sanitatis: The Medical Health Handbook.  1976.  Ed./tr. by Luisa Cogliati Arano.  New York: George Braziller.

Taylor, Shelley E.  1989.  Positive Illusions:  Creative Self-Deception and the Healthy Mind.  New York:  Basic Books.

Terán, Silvia, and Christian Rasmussen.  1993.  La milpa entre los Maya.  Mérida:  Authors.

Tedlock, Dennis.  1985.  Popol Vuh.  New York:  Simon & Schuster.

Theophrastus.  1926.  Enquiry into Plants.  Tr. A. F. Hort.  2 v.  Cambridge, MA:  Harvard University Press, Loeb Classics Series.

Thick, Malcolm.  2010.  Sir Hugh Plat:  The Search for Useful Knowledge in Sixteenth Century London.  Totnes, Devon:  Prospect Books.

Torrance, Robert (ed.).  1998.  Encompassing Nature.  Washington, DC:  Counterpoint.

Totman, Conrad.  1989.  The Green Archipelago:  Forestry in Preindustrial Japan.  Berkeley:  University of California Press.


Trautman, Thomas.  1987.  Lewis Henry Morgan and the Invention of Kinship.  Berkeley: University of California Press.

Tsien, Tsuen-Hsuin.  1985.  Science and Civilisation in China.  Vol. 5: Chemistry and Chemical Technology.  Part I:  Paper and Printing.  Cambridge:  Cambridge University Press.

Tuchman, Barbara.  1978.  A Distant Mirror:  The Troubled Fourteenth Century.  New York:  Knopf.

Tucker, Mary Evelyn, and John A. Grim (eds.).  1994.  Worldviews and Ecology:  Religion, Philosophy, and the Environment.  Maryknoll, NY:  Orbis Books.


Turner, Nancy J.  2005.  The Earth’s Blanket.  Vancouver:  Douglas and MacIntyre; Seattle:  University of Washington Press.

Turner, Nancy J.; Yilmaz Ari; Fikret Berkes; Iain Davidson-Hunt; Z. Fusun Ertug; Andrew Miller.  2009.  “Cultural Management of Living Trees:  An International Perspective.”  Journal of Ethnobiology 29:237-270.

Tylor, Edward.  1871.  Primitive Culture.  London: John Murray.

Unschuld, Paul.  1986.  Medicine in China:  A History of Pharmaceutics.  Berkeley:  University of California Press.

—  2009.  What is Medicine?  Western and Eastern Approaches to Healing.  Berkeley:  University of California Press.


Varner, John Grier, and Jeannette Johnson Varner.  1983.  Dogs of the Conquest.  Norman:  University of Oklahoma Press.

Vayda, Andrew P.  2008.  “Causal Explanations as a Research Goal:  A Pragmatic View.”  In Against the Grain:  The Vayda Tradition in Ecological Anthropology, Bradley Walters, Bonnie McCay, Paige West, and Susan Lees, eds.  Lanham, MD:  AltaMira (division of Rowman and Littlefield).  Pp. 317-367.

—  2009.  “Causal Explanation as a Research Goal:  Do’s and Don’t’s.”  In Explaining Human Actions and Environmental Changes. Lanham, MD:  AltaMira (division of Rowman & Littlefield).  Pp. 1-48.

Veith, Ilza.  2002.  The Yellow Emperor’s Classic of Internal Medicine.  New edn. (orig. 1949).  Berkeley:  University of California Press.

Vico, Giambattista.  2000.  New Science.  Tr. David Marsh.  New York:  Penguin.

Vogt, Evon Z.  1969.  Zinacantan:  A Maya Community in the Highlands of Chiapas.  Cambridge, MA:  Harvard University Press.

Waddell, Helen.  1955.  The Wandering Scholars.  Garden City, NY:  Doubleday.

Wear, Andrew.  2000.  Knowledge and Practice in English Medicine, 1550-1680.  Cambridge:  Cambridge University Press.

Weatherford, Jack.  2004.  Genghis Khan and the Making of the Modern World.  New York:  Three Rivers Press.

Weber, Max.  2001.  The Protestant Ethic and the “Spirit” of Capitalism.  Tr.  [German orig. 1907.]   London: Penguin.

Weber, Steven.  2001.  “Ancient Seeds:  Their Role in Understanding South Asia and Its Past.”  In:  Ethnobiology at the Millennium:  Past Promise and Future Prospects, Richard I. Ford, ed.  Museum of Anthropology, University of Michigan, Anthropological Papers 91.  Ann Arbor:  University of Michigan.  Pp. 21-34.

Weber, Steven A., and William R. Belcher (eds.).  2003.  Indus Ethnobiology:  New Perspectives from the Field.  Lanham, Md.:  Lexington Books (member of Rowman & Littlefield).

Whicher, George F.  1949.  The Goliard Poets.  Cambridge, MA:  University Press.


Wilke, Philip J.  1988.  “Bow Staves Harvested from Juniper Trees by Indians of Nevada.”  Journal of California and Great Basin Anthropology 10:3-31.

Wilms, Sabine.  2002.  The Female Body in Medieval China.  Ph.D. dissertation, Dept. of East Asian Studies, University of Arizona, Tucson, AZ.

Witherspoon, Gary.  1977.  Language and Art in the Navajo Universe.  Ann Arbor:  University of Michigan Press.

Wolpert, Lewis.  1993.  The Unnatural Nature of Science.  Cambridge, MA: Harvard University Press.

Worsley, Peter.  1997.  Knowledges:  Culture, Counterculture, Subculture.  New York:  New Press.

Zambrano, Isabel, and Patricia Greenfield.  2004.  “Ethnoepistemologies at Home and at School.”  In Culture and Competence:  Contexts of Life Success, ed. by Robert J. Sternberg and Elena L. Grigorenko.  Washington:  American Psychological Association.  Pp. 251-272.

Zaouali, Lilia.  2007.  Medieval Cuisine of the Islamic World:  A Concise History with 174 Recipes.  Berkeley:  University of California Press.

Zarger, Rebecca K.  2002.  “Acquisition and Transmission of Subsistence Knowledge by Q’eqchi’ Maya in Belize.”  In Ethnobiology and Biocultural Diversity, ed. J. R. Stepp, Felice S. Wyndham and R. K. Zarger.  Athens:  University of Georgia Press. Pp. 593-603.

Zarger, Rebecca K., and John R. Stepp.  2004.  “Persistence of Botanical Knowledge among Tzeltal Maya Children.”  Current Anthropology 45:413-418.

Zhong Yao Da Ci Dian.  1979.  Shanghai:  Science Publishers.

Science and Ethnoscience: Bibliography, part 1

Monday, August 22nd, 2011


E. N. Anderson

Dept. of Anthropology, University of California, Riverside


Augmented Bibliography, part 1


To the references in the text are added a large number of references on folk science, including Chinese traditional sciences.


Abd al-Latif al-Baghdadi.  1965.  The Eastern Key.  Tr. K. H. Zand, John A. Videan, Ivy E. Videan.  London:  George Allen and Unwin.


Ahmad, S. Maqbul, and K. Baipakov.  2000.  “Geodesy, Geology and Mineralogy; Geography and Cartography; the Silk Route across Central Asia.”  In History of Civilizations of Central Asia, Vol. IV:  The Age of Achievement:  A.D. 750 to the End of the Fifteenth Century, Part 2:  The Achievements, ed. by C. E. Bosworth and M. S. Asimov.  Paris:  UNESCO.  Pp. 205-226.


Anderson, Barbara A.;
E. N. Anderson; Tracy Franklin; Aurora Dzib-Xihum de Cen.  2004.
“Pathways of Decision Making among Yucatan Mayan Traditional Birth
Attendants.”  Journal of Midwifery and
Women’s Health 49:4:312-319.


Anderson, E. N.
1972.  Studies on South China’s
Boat People.  Taipei:  Orient Cultural Service.


—  1987.  “Why is Humoral Medicine So
Popular?”  Social Science and
Medicine 25:4:331-337.


—  1988.  The Food of China.  New Haven:  Yale
University Press.


—  1992.  “Chinese Fisher Families:  Variations on Chinese Themes.”  Journal of Comparative Family Studies 23:2:231-247.


—  1996a.  Ecologies of the Heart.  New York:  Oxford
University Press.


—  1996b.  “An Introduction to Wilson Duff.”  In Bird
of Paradox
by Wilson Duff.  Surrey, BC:  Hancock House.  Pp. 16-119.


—  1999.  “Child-raising among Hong Kong
Fisherfolk:  Variations on Chinese
Themes.”  Bulletin of the Institute of Ethnology, Academia Sinica, 86:121-155.


—  2000.  “Maya Ornithology and ‘Science Wars.’”  Journal of Ethnobiology 20:129-158.


—  2001.  “Flowering Apricot:  Environmental Practice, Folk Religion, and
Daoism.”  In Daoism and Ecology, ed. N. J. Girardot, James Miller, and Liu
Xaiogan.  Cambridge:  Harvard
University Press.  Pp. 157-184.


—  (with José Cauich
Canul, Aurora Dzib, Salvador Flores Guido, Gerald Islebe, Felix Medina Tzuc,
Odilón Sánchez Sánchez, and Pastor Valdez Chale).  2003.
Those Who Bring the Flowers.
Chetumal, QR, Mexico:  ECOSUR.


—  2004.  “’Loving Nature’ among the Maya.”  Paper, Society for Ethnobiology, annual
conference, Davis, CA.


—   (with Aurora
Dzib Xihum de Cen, Felix Medina Tzuc, and Pastor Valdez).  2005.
Political Ecology in a Yucatecan Community.  Tucson:  University
of Arizona Press.


—  ms 1.  The Morality of Ethnobiology.  Ms. in prep.


—  ms 2.  Learning from Experience.


—  ms 3.  “The Antilist.”  Ms.


—  2007.  Floating World Lost.  New Orleans:
University Press of the South.


Anderson, E. N., and Felix Medina Tzuc.  2005.
Animals and the Maya in Southeast Mexico.  Tucson:  University
of Arizona Press.


Anderson, M.
Kat.  2005.  Tending the Wild:  Native American Knowledge and the Management
of California’s Natural Resources.  Berkeley:  University of California Press.


Anderson, Perry.
1974.  Lineages of the Absolutist State.
London:  NLB.


Ankli, Anita; Otto Sticher; Michael Heinrich.  1999a.  “Medical Ethnobotany of the Yucatec Maya:  Healers’ Consensus as a Quantitative Criterion.”  Economic Botany 53:144-160.


—  1999b.  “Yucatec Maya Medicinal Plants Versus
Nonmedicinal Plants:  Indigenous
Characterization and Selection.”  Human
Ecology 27:557-580.


Arikha, Noga.  2007.  Passions and Tempers:  A History of the Humours.  New York:  Ecco.


Atran, Scott.  1990.  Cognitive Foundations of Natural History.  Cambridge:  Cambridge University Press.


Atran, Scott.
2002.  In Gods We Trust.  New York:  Oxford
University Press.


Atran, Scott, and Douglas Medin.  2008.  The Native Mind and the Cultural Construction of Nature.  Cambridge, MA:  MIT Press.


Avicenna.  1999.  The Canon of Medicine (al-Qānūn).  O. Cameron Gruner and Mazar H. Shah, tr.; ed. Laleh Bakhtiar.  Chicago:  KAZI Publications.


Bacon, Francis.
1901.  Novum Organum.  (Orig. 1620.)
New York:  P. F. Collier.


Balam Pereira, Gilberto.
1992.  Cosmogonia y uso actual de
las plantas medicinales de Yucatán.  Merida:  Universidad Autónoma de Yucatán.


Ball, Philip.
2008.  “Triumph of the Medieval
Mind.”  Nature 452:816-818.


Linda.  2005.  Needles, Herbs, Gods and Ghosts:  China Healing and the West to
1848.  Cambridge,
MA:  Harvard University Press.


Bennett, Bradley C.
2007.  “Doctrine of
Signatures:  An Explanation of Medicinal
Plant Disocvery or Dissemination of Knowledge?”
Economic Botany 61:246-255.


Bennett, John W.  1976.  The Ecological Transition:  Cultural Anthropology and Human Adaptation.  New York:  Academic Press.


—  1982.  Of Time and the Enterprise:  North American Family Farm Management in a Context of Resource Marginality.  Minneapolis:  University of Minnesota Press.


—  1992.  Human Ecology as Human Behavior.  New Brunswick, NJ:  Transaction Publishers.


Berg, Jeremy M.  2007.  “The Age-Old Question of Researcher Innovation:  Response.”  Science 318:1549-1550.


Berger, Peter L., and Thomas Luckmann.  1966.
The Social Construction of Reality.
Garden City, NY:  Doubleday.


Berkes, Fikret.
1999.  Sacred Ecology:  Traditional Ecological Knowledge and Resource
Management.  Philadelphia:
Taylor and Francis.


Berkes, Fikret; Johan Colding; Carl Folke.  2000.  “Rediscovery of Traditional Ecological Knowledge as Adaptive Management.”  Ecological Applications 10:1251-1262.

Berlin, Brent.  1992.  Ethnobiological Classification:  Principles of Categorization of Plants and Animals in Traditional Societies.  Princeton:  Princeton University Press.


Birkhead, Tim.  2008.  The Wisdom of Birds:  An Illustrated History of Ornithology.  New York:  Bloomsbury.


Blackburn, Thomas, and M. Kat Anderson (eds.).  1993.  Before the Wilderness:  Environmental Management by Native Californians.  Menlo Park, CA:  Ballena Press.


Blaikie, Piers, and Harold Brookfield.  1987.
Land Degradation and Society.
London:  Methuen.


Blaser, Mario.
2009.  “The Threat of the
Yrmo:  The Political Ontology of a
Sustainable Hunting Program.”  American
Anthropologist 111:10-20.


Bourdieu, Pierre.
1977.  Outline of a Theory of
Practice.  Cambridge:  Cambridge
University Press.


—  1990.  The Logic of Practice.  Stanford:
Stanford University Press.


Bowker, Geoffrey, and Susan Leigh Star.  1999.  Sorting Things Out:  Classification and Its Consequences.  Cambridge, MA:  MIT Press.


Bowler, Peter J., and Iwan Rhys Morus.  2005.
Making Modern Science:  A
Historical Survey.  Chicago:  University
of Chicago Press.


Boyle, Robert.
2006.  The Skeptical Chymist.  N.p.:
Adamant Media.  Orig. 1661.


Brandt, Richard.
1954.  Hopi Ethics.  Chicago:  University
of Chicago Press.


Braudel, Fernand.
1973.  The Mediterranean
and the Mediterranean World in the Age of Philip II.  Tr. Sian
Reynolds.  Fr orig 1966.  New
York: Harper & Row.


Breedlove, Dennis E., and Robert M. Laughlin.  1993.  The Flowering of Man:  A Tzotzil Botany of Zinacantán.  Smithsonian Contributions to Anthropology 35.


Brown, Cecil.  1984.  Language and Living Things:  Uniformities in Folk Classification and Naming.  New Brunswick, NJ:  Rutgers University Press.


Buell, Paul D.;  E. N.
Anderson; Charles Perry.  2000.  A Soup for the Qan.  London:
Kegan Paul International.


Callicott, J. Baird.
1994.  Earth’s Insights:  A Multicultural Survey of Ecological Ethics
from the Mediterranean
Basin to the Australian
Outback.  Berkeley:  University of California Press.


Callicott, J. Baird, and Michael P. Nelson.  2004.
American Indian Environmental Ethics:
An Ojibwa Case Study.  Upper
Saddle River, NJ:  Pearson Prentice Hall.


Carter, M. G.
1990.  “Arabic Lexicography.”  In Religion,
Learning and Science in the ‘Abbasid Period,
M. J. L. Young, J. D. Latham,
R. B. Serjeant, eds.  Cambridge:  Cambridge University Press.  Pp. 106-117.


Carter, Thomas.
1955.  The Invention of Printing
in China
and Its Spread Westward.  2nd
edn., rev. by L. Carrington Goodrich.  New York:  Ronald Press.


Cohen, Mark Nathan.
2009.  “Introduction:  Rethinking the Origins of Agriculture.”  Current Anthropology 50:591-596.


Colding, Johan, and Carl Folke.  2001.
“Social Taboos:  ‘Invisible’
Systems of Local Resource Management and Biological Conservation.”   Ecological Applications 11:584-600.


Collins, Randall.
1998.  The Sociology of
Philosophies.  Cambridge,
MA:  Harvard University Press.


Conklin, Harold C.  1957.  Hanunoo Agriculture.  Rome:  FAO.


—  2007.
Fine Description.  Ed. Joel
Kuipers and Ray McDermott.  New Haven:  Yale
Southeast Asia Studies, Monograph 56.


Cook, Harold J.  2007.  Matters of Exchange:  Commerce, Medicine, and Science in the Dutch Golden Age.  New Haven:  Yale University Press.


Crews, Frederick (ed.).  1998.  Unauthorized Freud:  Doubters Confront a Legend.  New York:  Viking.


Cronon, William.  1983.  Changes in the Land.  New York:  Hill and Wang.


Cruikshank, Julie.  2005.  Do Glaciers Listen?  Local Knowledge, Colonial Encounters, and Social Imagination.  Vancouver:  University of British Columbia Press.


Damasio, Antonio.
1994.  Descartes’ Error.  New
York:  G. P.
Putnam’s Sons.


—  2003.  Looking for Spinoza:  Joy, Sorrow, and the Feeling Brain.  Orlando,
FL:  Harcourt.


D’Andrade, Roy.  1995.
The Development of Cognitive Anthropology.  New York:  Cambridge
University Press.


Dawes, Robyn.
1994.  House of Cards:  Psychology and Psychotherapy Built on
Myth.  New York:
Free Press.


Dawkins, Richard.  1976.  The Selfish Gene.  Oxford:  Oxford University Press.


—  2006.
The God Delusion.  Boston:  Houghton Mifflin.


De Kruif, Paul.  1926.  Microbe Hunters.  New York:  Harcourt, Brace.


De Landa, Manuel.
2002.  Intensive Science and
Virtual Philosophy.  New York:  Continuum Press.


Desowitz, Robert S.  1991.  The Malaria Capers.  New York:  W. W. Norton.


Deur, Douglas, and Nancy J. Turner (eds.).  2005.  Keeping It Living:  Traditions of Plant Use and Cultivation on the Northwest Coast of North America.  Seattle:  University of Washington Press; Vancouver:  University of British Columbia Press.


De Waal, Frans.
1996.  Good Natured.  Cambridge, MA:  Harvard University Press.


—  2005.  Our Inner Ape.  New York:  Riverhead Books (Penguin Group).


Dilthey, Wilhelm.  1988.  Introduction to the Human Sciences:  An Attempt to Lay a Foundation for the Study of Society and History.  Tr. Ramon A. Betanzos.  Detroit:  Wayne State University Press.


Douglas, Mary.  1970.  Natural Symbols.  London:  Barrie and Rockliff.


Dunbar, Robin I. M.
1993.  “Coevolution of Neocortical
Size, Group Size and Language in Humans.”
Behavioral and Brain Sciences 16:681-735.


— 2004.  Grooming,
Gossip, and the Evolution of Language.  New York:  Gardners Books.


Durkheim, Emile.  1995
(Fr. orig. 1912).  The Elementary Forms
of Religious Life.  Tr. Karen
Fields.  New York:
Free Press.


Durkheim, Émile, and Marcel Mauss.  2003.  “De quelques formes primitives de classification.”  Année Sociologique.

Easterbrook, Gregg.
2004.  “Politics and Science Do Mix.”
Los Angeles
Times, April 6, 2004, p. B13.


Elman, Benjamin.  2005.  On Their Own Terms:  Science in China, 1550-1900.  Cambridge, MA:  Harvard University Press.

Engels, Frederick.  1966.  Anti-Duhring:  Herr Eugen Duhring’s Revolution in Science.  New York:  International Publishers.  (New printing.  Orig. US edn. 1939.  Orig. English edn. 1894.)


Escobar, Arturo.  1998.  “Whose Knowledge, Whose Nature?  Biodiversity, Conservation, and the Political Ecology of Social Movements.”  Journal of Political Ecology 5:53-82.

— 1999.  “After
Nature:  Steps to an Antiessentialist Political
Ecology.”  Current Anthropology 40:1-30.


—  2008.  Territories of Difference:  Place, Movements, Life, Redes.  Durham:  Duke University Press.


Evans, L.  1998.  Feeding the Ten Billion.  Cambridge:  Cambridge
University Press.


Evans-Pritchard, E. E.
1950.  Witchcraft, Oracles and
Magic among the Azande.  Oxford:  Oxford
University Press.


Felger, Richard, and Mary Beck Moser.  1985.  People of the Desert and Sea:  Ethnobotany of the Seri Indians.  Tucson:  University of Arizona Press.


Fischer, David H.
1989.  Albion’s Seed:  Four British Folkways in America.  New York:
Oxford University Press.


Ford, Anabel, and Ronald Nigh.  2009.
“Origins of the Maya Forest Garden:
Maya Resource Management.”
Journal of Ethnobiology 29:213-236.


Ford, Richard I.
2001.  “Introduction.”  In:
Ethnobiology at the Milennium:
Past Promise and Future Prospects, Richard I. Ford, ed.  Museum of Anthropology, University of
Michigan, Anthropological Papers 91.  Ann
Arbor:  University of Michigan.  Pp. 1-10.


Foucault, Michel.  1970.  The Order of Things:  An Archaeology of the Human Sciences.  (Fr. orig. Les mots et les choses, 1966.)  New York:  Random House.

Foucault, Michel.
1973.  The Birth of the
Clinic:  An Archaeology of Medical
Perception.  New York:  Pantheon.


Foucault, Michel.  1978.  The History of Sexuality.  Vol. I:  An Introduction.  Tr. Robert Hurley; orig. 1976.  New York:  Random House.


Foucault, Michel.  1980.  Power/Knowledge:  Selected Interviews and Other Writings, 1972-1977.  Ed. Colin Gordon.  New York:  Pantheon.


Frake, Charles.
1980.  Language and Cultural
Description.  Ed. Anwar S. Dil.  Stanford:
Stanford University Press.


Franklin, Sarah.
1995.  “Science as Culture,
Cultures of Science.”  Annual Review of
Anthropology 24:163-184.


Frederick II of Hohenstaufen.  1943.
The Art of Falconry, being the De Arte Venandi cum Avibus.  Tr./ed. Casey A. Wood and F. Marjorie Fyfe.  Stanford, CA:  Stanford University Press.


Freedberg, David.
2002.  The Eye of the Lynx:  Galileo, His Friends, and the Beginnings of
Modern Natural History.  Chicago:  University
of Chicago Press.


Freely, John.  2009.  Aladdin’s Lamp:  How Greek Science Came to Europe Through the Islamic World.  New York:  Knopf.


Gage, Thomas.
1958.  Thomas Gage’s Travels in
the New World.
Ed. J. E. S. Thompson.  (Orig.
full edn. 1648.)  Norman,
OK:  University of Oklahoma Press.


Galen.  1997.  Galen:
Selected Works.  Tr. and ed. by
Philip N. Singer.  Oxford:  Oxford
University Press.


—  2000.  Galen on Food and Diet.  Tr. and ed. by Mark Grant.  London:  Routledge.


—  2003.  Galen on the Properties of Foodstuffs.  Tr. and ed. by O. Powell.  Cambridge:  Cambridge University Press.


—  2006.  Galen on Diseases and Symptoms.  Tr. and ed. by Ian Johnston.  Cambridge:  Cambridge
University Press.


Gardner, Howard.  1985.  The Mind’s New Science:  A History of the Cognitive Revolution.  New York:  Basic Books.


Garrett, Frances.  2007.  “Critical Methods in Tibetan Medical Histories.”  Journal of Asian Studies 66:363-387.


Gaukroger, Stephen.  2006.  The Emergence of a Scientific Culture:  Science and the Shaping of Modernity 1210-1685.  Oxford:  Oxford University Press.


—  2010.  The Collapse of Mechanism and the Rise of Sensibility:  Science and the Shaping of Modernity 1680-1760.  Oxford:  Oxford University Press.


Gerard, John.  1975 (1633).
The Herball. New York: Dover.

Gigerenzer, Gerd.
2007.  Gut Feelings:  The Intelligence of the Unconscious.  New
York:  Viking.


Glover, Denise.
2005.  Up from the Roots:  Contextualizing Medicinal Plant
Classifications of Tibetan Doctors in Rgyalthang, PRC.  Ph.D. dissertation, Dept. of Anthropology, University of Washington.


Gómez-Pompa, Arturo.
1987.  “On Maya Silviculture.”  Mexican Studies/Estudios Mexicanos 3:1:1-17.


Gómez-Pompa, Arturo; Michael Allen; Scott Fedick; J. J. Jiménez-Osornio (eds.).  2003.  The Lowland Maya Area:  Three Millennia at the Human-Wildland Interface.  New York:  Haworth Press.

Gonzalez, Roberto.  2001.  Zapotec Science.  Austin:  University of Texas Press.


Gortvay, Gy, and I. Zoltán.
1968.  Semmelweis:  His Life and Work.  Budapest:
Akadémiai Kiadó.


Gossen, Gary H.
1974.  Chamulas in the World of
the Sun:  Time and Space in a Maya Oral
Tradition.  Prospect Heights, IL:  Waveland.


—  2002.  Four Creations:  An Epic Story of the Chiapas Mayas.  Norman, OK:
University of Oklahoma Press.


Gould, Stephen Jay.
1999.   Rocks of Ages: Science and
Religion in the Fullness of Life.  New York:  Ballantine.


—  2002.  The Structure of Evolutionary Theory.  Cambridge, MA:  Harvard University Press.


Goulet, Jean-Guy.
1998.  Ways of Knowing.  Lincoln:  University
of Nebraska Press.


Greene, Brian.  2004.  The Fabric of the Cosmos.  New York:  Knopf.


Greenfield, Patricia.  2004.  Weaving Generations Together:  Evolving Creativity in the Maya of Chiapas.  Santa Fe:  School of American Research Press.


Grove, A. T., and Oliver Rackham.  2001.  The Nature of Mediterranean Europe:  An Ecological History.  New Haven:  Yale University Press.


Grünbaum, Adolf.
1984.  The Foundations of
Psychoanalysis:  A Philosophical
Critique.  Berkeley:  University
of California Press.


Gunther, Robert T.
1934.  The Greek Herbal of
Dioscorides.  Oxford:  Oxford
University Press.


Hacking, Ian.
1999.  The Social Construction of
What?  Cambridge,
MA:  Harvard University Press.


Hadot, Pierre.
2006.  The Veil of Isis.  Tr. Michael Chase.  (Fr. orig. 2004.)  Cambridge, MA:  Harvard University Press.


Harbsmeier, Christoph.  1998.  Science and Civilisation in China.  Vol. 7, Part 1:  Language and Logic.  Cambridge:  Cambridge University Press.


Harington, John.  1966 [orig. ca. 1600].  The School of Salernum.  Salerno:  Ente Provinciale per il Turismo.


Harkness, Deborah E.  2008.  The Jewel House:  Elizabethan London and the Scientific Revolution.  New Haven:  Yale University Press.


Harris, Marvin.
1966  “The Cultural Ecology of
India’s Sacred Cattle.” Current Anthropology 7:51-66.


—  1968 The
Rise of Anthropological Theory
.  New
York:  Thomas Y. Crowell.


Harshberger, John.  1896.  “Purposes of Ethnobotany.”  Botanical Gazette 21:146-154.


Hernández, Francisco.  1942-1946.  Historia de las plantas de Nueva España.  (Orig. ms. ca. 1570.)  Mexico:  Universidad Autónoma de México.


Hernández Xolocotzi, Efraím.
1987.  Xolocotzia:  Obras de Efraim Hernández Xolocotzi.  Universidad Autónoma de Chapingo, Revista de
Geografía Agrícola.


Herrnstein, Richard, and Charles Murray.  1994.  The Bell Curve.  New York:  Free Press.


Hill, Donald R.
1990a.  “The Literature of Arabic
Alchemy.”  In Religion, Learning and Science in the ‘Abbasid Period, M. J. L.
Young, J. D. Latham, R. B. Serjeant, eds.
Cambridge:  Cambridge University
Press.  Pp. 328-341.


Hill, Donald R.
1990b.  “Mathematics and Applied
Science.”  In Religion, Learning and Science in the ‘Abbasid Period, M. J. L.
Young, J. D. Latham, R. B. Serjeant, eds.
Cambridge:  Cambridge University
Press.  Pp. 248-273.


Hill, Jane.
2003.  “What Is Lost When Names
Are Forgotten?”  In Nature Knowledge:
Ethnoscience, Cognition, and Utility, Glauco Sanga and Gherardo Ortalli,
eds.  New York
and London:  Berghahn.
Pp. 161-184.


Hinrichs, T J.
1999.  “New Geographies of Chinese
Medicine.”  Osiris 13:287-325.


Hobsbawm, Eric, and Terence Ranger (eds.).  1983.
The Invention of Tradition.  Cambridge:  Cambridge University Press.


Hordes, Stanley.  2005.
To the End of the Earth: A
History of the Crypto-Jews of New
Mexico.  New
York:  Columbia University Press.


Hsu, Elisabeth.
1999.  The Transmission of Chinese
Medicine.  Cambridge:  Cambridge
University Press.  Cambridge
Studies in Medical Anthropology, 7.


Hsu, Elisabeth (ed.).
2001.  Innovation in Chinese
Medicine.  Cambridge:  Cambridge
University Press.


Huber, Toni.  1999.  The Cult of Pure Crystal Mountain.  New York:  Oxford University Press.


Hughes, J. Donald.  1983.  American Indian Ecology.  El Paso:  Texas Western Press.


Hume, David.  1969
(1739-1740).  A Treatise of Human Nature.  New
York:  Penguin.


—  1975 (1777).  Enquiries Concerning Human Understanding and Concerning the Principles of Morals.  Ed. L. A. Selby-Bigge.  Oxford:  Oxford University Press.


Hunn, Eugene.  1982.  “The Utilitarian Factor in Folk Biological Classification.”  American Anthropologist 84:830-847.


—  2008.  A Zapotec Natural History:  Trees, Herbs, and Flowers, Birds, Beasts and Bugs in the Life of San Juan Gbëë.  Tucson:  University of Arizona Press.


Huntington, Samuel.
1997.  The Clash of Civilizations
and the Remaking of World Order.  New York:  Touchstone.


Hvalkof, Sven, and Arturo Escobar.  1998.  “Nature, Political Ecology, and Social Practice:  Toward an Academic and Political Agenda.”  In:  Building a New Biocultural Synthesis, ed. by Alan H. Goodman and Thomas L. Leatherman.  Ann Arbor:  University of Michigan Press.  Pp. 425-450.


Idrisi, Zohor.  2005.  The Muslim Agricultural Revolution and Its Influence on Europe.  Manchester, England:  Foundation for Science, Technology and Civilisation.


Isaacs, Haskell D.
1990.  “Arabic Medical
Literature.”  In Religion, Learning and Science in the ‘Abbasid Period, M. J. L.
Young, J. D. Latham, R. B. Serjeant, eds.
Cambridge:  Cambridge University
Press.  Pp. 342-363.


Israel, Larry.  2008.  “The Prince and the Sage:  Concerning Wang Yangming’s ‘Effortless’ Suppression of the Ning Princely Establishment Rebellion.”  Late Imperial China 29:68-128.


Istomin, Kirill V., and Mark J. Dwyer.  2009.  “Finding the Way:  A Critical Discussion of Anthropological Theories of Human Spatial Orientation with Reference to Reindeer Herders of Northeastern Europe and Western Siberia.”  Current Anthropology 50:29-50.


Jaramillo, Cleofas.  1980.  The Genuine New Mexico Tasty Recipes, 1942.  Santa Fe:  Ancient City Press.


Jenness, Diamond.  1955.  The Faith of a Coast Salish Indian.  Victoria, BC:  British Columbia Provincial Museum [now Royal British Columbia Museum], Memoir 3.


Johannes, R. E.  1981.  Words of the Lagoon:  Fishing and Marine Lore in the Palau District of Micronesia.  Berkeley:  University of California Press.


Johannes, R. E., and J. W. MacFarlane.  1991.  Traditional Fishing in the Torres Strait Islands.  CSIRO.


Kalichman, Seth.  2009.  Denying AIDS:  Conspiracy Theories, Pseudoscience, and Human Tragedy.  New York:  Springer.


Kant, Immanuel.  1978 (Ger. orig. 1798).  Anthropology from a Pragmatic Point of View.  Tr. Victor Lyle Dowdell.  Carbondale:  Southern Illinois University Press.

—  2007.  Critique of Pure Reason.  Tr. Max Müller and Marcus Weigelt.  German original 1781 (1st edn.) and 1787 (2nd edn.; the translation uses both).  New York:  Penguin.


Kassam, Karim-Aly S.  2009.  Biocultural Diversity and Indigenous Ways of Knowing:  Human Ecology in the Arctic.  Calgary:  University of Calgary Press.


Science and Ethnoscience, part 3: Classification

Monday, August 22nd, 2011


E. N. Anderson

Dept. of Anthropology, University of California, Riverside

Part 3.  Case Study:  Classification

One fact that is devastating to the view that science is purely a cultural or social construction is the broad consonance between folk and scientific systems of classification.  People everywhere classify plants and animals about the same way, recognizing categories like “bird,” “snake,” and so on (Atran 1990; Berlin 1992; Brown 1984).  Moreover, they focus on inferred biological relationships.  They classify dogs with dogs, cats with cats, and oak trees with beech trees, rather than—say—shepherd dogs with sheep and human shepherds, hounds with ducks, cats with grass, and oak trees with potatoes.

People classify things.  The fundamental, original purpose of this is to make the world manageable.  If we had to react to every stimulus as a new and unprecedented thing, we would never get out of bed in the morning.  Thus, as Kant (1978) pointed out, we assimilate and differentiate as we need to.   Humans seem to be natural classifiers.  Modern psychology confirms Kant’s points:  we essentialize categories, treating things we class together as if they were “the same” and exaggerating the differences of things we put in different categories (Atran 1990; Atran and Medin 2008).

Classifications are fundamentally about being useful.  We classify so that we can identify edible and useful plants, dangerous or poisonous animals, types of tools we need for projects, breeds of dogs used for different tasks, types of paintings (Impressionist, abstract expressionist, op-art…).  Most classifications are developed from actual interaction with the things we are classifying.  We classify them in ways that make for maximal efficiency in using them.

However, our love of classifying runs far beyond utility.  We classify all manner of things, and learn about them.  Folk biology everywhere includes an incredible number of names and facts, many of them essentially useless to the people who know them.  The Maya, for instance, have names for all manner of tiny insignificant birds, and know their life histories.

We classify everything:  dogs, personality types, ideas, gods, kinfolk, potatoes (some Peruvian farmers know hundreds of varieties by sight), and kinds of love.  People even develop classifications for fun, like the classifications of imaginary creatures (unicorns, dragons, and so on) in fantasy books.

Conversely, people may develop classifications to keep people in line—from those endless classifications of sins and impurities in the Bible to those endless classifications of traffic violations in the modern civil codes.  So one main, and universal, use of classification systems is to maintain control not only over natural complexity but also over people’s lives and social actions (Bowker and Star 1999; Foucault 1970).

Classifications range from legally defined, like types of property, to biologically based, like types of fir trees.  Classifications may be very clear, simple, and sharp, like classifications of living elephants:  there are only two, not much like each other, and not at all like any other animal.  Conversely, classifications of philosophic theories are so endlessly argued by philosophers that one may wonder whether the reason for having the classifications at all is to stir up debate.  Obviously, we will never have an accepted classification of philosophies.

Classifications may be universal (modern scientific nomenclature—among scientists, at least), cultural (English bird names), or at lower levels.  My classification of foods I like is unique to me.  My children’s classification of foods they would not eat was an all too significant family reality in their early years, but is of no significance today.

Classifications may be broadly true in some sense.  The modern scientific classifications of chemicals, stars, and living things are grounded in real and demonstrable facts.  We classify animals and plants on the basis of biological relationships.  Linnaeus had to infer these from appearance, and brilliantly saw that flowers are basic rather than leaves, stems, and roots.  We can now use cladistic analysis backed up by comprehensive genetics, and prove directly the genetic relationships we once had to infer.

On the other hand, classifications can be ad hoc, or plain wrong, or utterly ridiculous, like Jorge Luis Borges’ “Chinese encyclopedia” parody cited in Foucault (1970:xv).

The purpose of classification is not to be right but to be useful.  Even Borges’ is useful:  it is intended to shock the reader into thinking about the whole philosophical issue of classification.  Anyone reading it realizes it is a joke, because the units are totally non-comparable; a real system has to have units that are comparable, in ways that matter within that particular system.  In technical terms, there has to be an “emics” to the system.

A great deal of research has been devoted to the history and cross-cultural variation of classification.

Kinship terminology has received by far the most effort.  Kinship is unique in that it is equally important and elaborate in all cultures.  It is the only realm in which every culture has an elaborate, precise, formalized, and almost universally known system.  Australian aboriginals may not have elaborate physics or chemistry, but their kinship systems are so formal and elaborate that many brilliant English-speaking scholars have spent years unsuccessfully trying to analyze them.  They thus provide ideal material for comparative analyses of human thought, and theorizing about family and kin has always been basic to anthropology.  Lewis Henry Morgan’s vast classic work Systems of Consanguinity and Affinity (1871) put the seal on kin classification as a major field for anthropological endeavor (Trautmann 1987).

Since the 1870s, much work has been devoted to taxonomies of animals and plants, and sometimes other living things (like fungi).  Comparative work has shown that people everywhere see real biological relationships, and use them as one basis for classifying.  This has even led authorities on the subject to postulate that people have a natural tendency to classify on the basis of perceived basic similarities (Atran 1990; Berlin 1992; Brown 1984).

In classifying living things, every culture has a general classification system as well as special purpose classifications.  Brent Berlin found that this distinction is basic and apparently worldwide (Berlin 1992).  The general system is the one based on apparent biological relationships or real-world appearance.  It is the one that provides everyday names.  Everywhere, it is based on inferred similarities out there in nature.  Everywhere, if you simply ask “what is that?” you get the name in the general system.  The name of an animal or plant is always understood to be its name in the general system unless you specify otherwise.  Local utilitarian factors influence all general purpose systems (Hunn 1982), but do not determine them, so they end by looking very much like the modern international scientific system.

On the other hand, Roy Ellen has long emphasized real differences between cultural classification systems (e.g. Ellen 1993).  Similarly, Geoffrey Lloyd (2007) has found many areas that are not accurately perceived in folk biology.  He makes much of the microorganisms, which are irrelevant to his case, but he makes the more serious point that traditional classifications generally fail at the higher levels.  Nobody seems to have words for “mammal,” and many cultures lack a word for “animal.”  Plants are variously assembled.  Carol Kaesuk Yoon (2009) has recently held that there is a “clash” between “instinct”—the natural categorization that humans do—and “science.”  She maintains this because genetics has now shown that fish fall into several classes, with the bony fish closer to humans than to cartilaginous fish.  One might add that birds are closer to some “reptiles” (dinosaurs) than those are to other reptiles.

So science is indeed a cultural construction.  Even modern classifications are not 1:1 maps of biology, and folk systems certainly are not.

Yet, in fact, folk and traditional people see natural categories astonishingly well—not as well as the best modern geneticists, but well enough to show that nature is hard to ignore in these matters.  Sometimes the traditional small-scale societies had views closer to modern genetics than the Linnaean biologists did.  Fungi were still “plants” when I was an undergraduate, but the indigenous peoples of Mexico correctly place them closer to animals (Hunn 2008; Lampman 2008).  I found that the Yucatec Maya categorize orioles according to the best modern analysis.  And certainly my friends on the Hong Kong waterfront were aware that “fish” (yu) was a functional class (swimming aquatic life), not a natural biological one.  They knew from inspection, for instance, that cuttlefish were closer to octopi than to bony fish, though cuttlefish were “fish” and octopi were not.  It is significant that this example translates perfectly; folk English does the same thing.

The point is that it is constructed on the basis of ongoing interaction with reality.  (Even those unicorns are based on reality, at a couple of removes.  The original “unicorn” was the rhinoceros, and tales of it—a huge horselike creature with one horn in the middle of its forehead—were duly interpreted as reasonably as possible:  a horse with a narwhal tusk for a horn, the Europeans having no other one-“horned” animal to compare.)  Obviously, no society could exist if it did not base its knowledge on truth learned by experience.

One proof is the development of dictionaries in Arab (Carter 1990) and Chinese civilizations.  Technical vocabularies specialized on particular subjects, such as horses (Carter 1990) or drugs and medicines, show the classification systems appropriate to those matters.  Early Arabic dictionaries sometimes arranged words by linguistic domains, and these were much like ours or anyone else’s.  The early Greek and Latin writers also classified plants and animals in ways not irrational or incomprehensible.  They are not the same as our ways, but they are close enough that we still use many of Theophrastus’ and Pliny’s names as scientific names, either for the same plants or for similar or related ones.  (Still, one sometimes wonders about the more modern sages!  Kaktos, Greek for a kind of thistle, wound up applied to some plants that have nothing in common with thistles except prickliness.  Dozens of other names were similarly applied any old way, just to recycle a Greek name, no matter how inappropriately.  This started early; kardamon, another thistle, had already—and mysteriously—become the name of a spice in late antiquity.)

Special purpose classifications classify plants and animals in relation to human wants and needs. In Hong Kong, when I asked “what is that fish?” I got the name in the general system, relating fish to fish—classifying them as soles, sharks, groupers, and so on.  I slowly learned there were many other ways to classify fish:  by price, by technique used to catch them, by habitat, by sacred and ceremonial significance, and by eating qualities.  These were five separate, salient, well-known systems.  They were not merely ad hoc.  The fishermen never confused them with the basic system (Anderson 1972).  When I asked “what is that fish?” I always got a name from the basic system, never “a netted fish” or the like.

Proof that even the arcana of fish classification can suddenly become important is found in the striking book Trying Leviathan by D. Graham Burnett (2007).  This book is the history of a trial that took place in New York City in 1818 to decide whether a whale was a fish or a mammal.  The state had passed a law requiring inspection of fish oil, with a fee to be paid by the seller.  This being New York, a whale-oil dealer immediately challenged the law on the basis of science:  whales had recently been classified as mammals by Linnaeus and Cuvier.  This early example of New York chutzpah got him haled into court.  The trial involved the formidably brilliant ichthyologist Samuel Mitchill as witness for the defense, but the verdict went against the dealer, since the plaintiff could establish that the state legislature had passed the law based (at some remove) on the supposition that whales were fish and whale oil would be inspected.

This was long before Darwin.  There was no obvious reason to prefer lactation, air-breathing and live birth over fins, aquatic habitat, and streamlined shape as classification markers.  The lawyers were astute enough to realize that classification could be ambiguous; they made reference to the “duck-billed beaver” (platypus) and other anomalies.  Not only New York lawyers find whales confusing.  My fishermen friends in Hong Kong told me that whales and porpoises were anomalous because they looked like fish but acted intelligent (unlike fish) and were “like pigs” internally.  My friends thought these creatures were uncanny, and avoided catching them.

In anthropology, studies of classifying everything from religious ceremonies to art objects have continued to proliferate.  An unwise attack on such studies was launched in the 1960s by Marvin Harris (1966, 1968) and others.  Harris chose to criticize a study of Maya firewood knowledge by Duane Metzger and Gerald Williams (1966), branding it—and by extension all such research—as “trivial.”  He could not have picked a worse target.  The Maya depend on firewood for cooking and warmth.  They live in a wet climate where good dry wood is hard to find and must be carefully chosen.  Like hundreds of millions of other people around the world, they spend up to several hours a day searching for firewood.  Knowing how to get the best wood in the shortest time is a life-and-death matter for them.  Firewood use is a matter of enormous concern worldwide, since about 1/3 of all the wood used in the world goes for this purpose, greatly contributing to global warming and deforestation.  Nothing could be less trivial, either to the Maya or to the planet.

An area in which folk classification is infamously important, inaccurate, and pernicious is “race.”  Americans are addicted to the notion that everything important is genetic and that genetics is a simple science.  (Many have wondered how a nation of overachieving immigrants from all manner of other cultures can believe this.)  Thus, as noted above, Richard Herrnstein and Charles Murray in The Bell Curve (1994) give us a “Latino race” with an IQ of 89!  Quite apart from the absurdity of such aggregated measures of intelligence, Herrnstein and Murray simply ignored the fact that Latinos can be white, black, Native American, East Asian, or any and all mixtures of these.  Similar confusion surrounds “Black” Americans, Native Americans, and other categories.  We have lately been inflicted with something called “race medicine,” which prescribes different drugs for Black and White Americans.  Yet there is a total continuum.  Millions of Whites are part Black, and almost all Blacks are part White—frequently 15/16, since anyone with any African appearance is called “Black.”  These 15/16 Caucasian patients are given “Black” drugs!  Even such appalling bureaucratic monstrosities as “Asian-Pacific Islander”—the creation of arbitrary Census Bureau labeling—have become “real” to Americans.  This shows how “race” classifications can not only change arbitrarily but can be invented out of whole cloth.  The strange, if not downright surrealistic, history of “race” labels has been well covered in anthropology by Lee Baker (1998), Audrey Smedley (2007), and Jonathan Marks (Marks 2001), among others.

Even Linnaean classification is related to economic and aesthetic theories of the Enlightenment elite (Foucault 1970).  Foucault also saw many other interesting aspects of classification that go far beyond its immediate utility.  He wrote:  “Take…animal and plant classifications.  How often have they not been rewritten since the Middle Ages according to completely different rules:  by symbolism [the medieval use of animal and plant symbols], by natural history, by comparative anatomy, by the theory of evolution.  Each time this rewriting makes the knowledge completely different in its functions, in its economy, in its internal relations” (Foucault, in Chomsky and Foucault 2006:26; cf. Foucault 1970).  There is some truth in this, but Foucault misses the key point that actual everyday classification of creatures did not change significantly during this period.  Dogs were dogs, cats were cats, whales were whales.  Nor, of course, was it “completely different in its functions”; it still functioned largely to let people name what they saw, and give similar names to similar creatures.

Over centuries, many new plants and animals were added to European knowledge, necessitating major changes in everyday words and usages, but the basic system did not change.  However, Foucault is correct in that elite scholars’ interests and perspectives really did change.  The medieval churchmen were more interested in animals as symbols than in animals as animals (see e.g. Herbert Friedmann’s superb account of birds in art, 1980; also Rowland 1978).  The function vs. anatomy tension lies behind the whale trial described above, and did indeed affect how we folk speakers classify whales, but we still talk about the “whale fishery,” as well as “shellfish” and “cuttlefish” and other non-anatomical “fish.”  Darwin profoundly changed human thought, but not folk taxonomical usage.

Consider the European goldfinch (Carduelis carduelis).  In the Middle Ages it was a symbol of Christ, and thus the child Christ is shown holding one in many Renaissance paintings.  In Linnaeus’ taxonomy it got its present name—just its old Latin name, doubled—and was classed with finches.  Anatomists then separated the finches into several groups—they turned out to be more different inside than outside—and the goldfinch got its own family, Carduelidae (which includes a lot of its relatives).  Darwinians have gone on to debate the actual relationships and membership of the Carduelidae.  So Foucault is right.

But not right at the deepest level.  Throughout all of this, the goldfinch remains a goldfinch, and every English speaker who notices birds knows it.  Germans similarly call it a distelfink, “thistle finch,” as they have for centuries, in honor of its regular food (thistle seeds).  Folk classification still makes it a “finch” along with zebra finches and Mexican ground finches, although we now know these birds are not closely related.

This emphasizes a difference between folk and elite understandings.  This is not so much a matter of better or worse education, or of snobbism, but of needs.  We ordinary people, and this includes scholars and scientists on their off days, need to have a quick, convenient, pragmatic label to refer to things we regularly interact with.  Scientists need to have labels based on understanding of deeper, less obvious, but more biologically important processes.

Hence scientists refer to C. carduelis in the lab and in the technical literature, but call it a “goldfinch” when they see it in their thistle patch.  Anywhere in the world, if you ask “what’s that?” as a small yellow finch with a red face flies by, you’ll be told “a goldfinch” (or local equivalent).  You will never be told “that’s the Christ child,” and you would not have been told that in the Italian Renaissance, either.  The medieval symbolic system is very much a special purpose classification, and the medieval artists knew that.

Medieval and Renaissance artists and writers often spoke of four levels of symbolism—traditionally defined as “image, symbol, metaphor, and allegory” or something similar (see e.g. Schneider 1992:17).  This survived in religious music until the present.  A good, and thoroughly modern, example is Mississippi John Hurt’s “Slidin’ Delta Blues,” in which the image—a train nicknamed “Slidin’ Delta”—is a symbol of parting from one’s love, which is a metaphor for death, which is an allegory for transcendence.  Hurt sings:  “Lord, I’m goin’ somewhere / I never been before,” where the “somewhere” is a faraway real place, death, mystical experience, and Heaven, depending on the level at which one is listening.

“Totemism,” in the broad sense, is similar.  Classifying people into Eaglehawk and Crow moieties or into Wildcat and Coyote moieties does not mean that people are animals.  It is not the basic classification of the traditional peoples that use this system, either, contra Emile Durkheim and Marcel Mauss (1963).  It is simply the use of well-known animals as symbols for social groups (Lévi-Strauss 1962, 1963).  This is a special purpose system, and it depends on the prior existence and widespread knowledge of the basic or general system.  It often depends also on knowing the animals’ habits.  Wildcats were associated in California Native cultures with valleys, coyotes with mountains, and thus valley and lowland animals are in the Wildcat moiety, hill and mountain ones in the Coyote moiety.  People are distributed according to birth rather than residence, however.  One is automatically in one’s father’s moiety, no matter where one lives.  What has happened is that human social divisions are projected onto nature.  “Nature” and human society are not radically separated in Native American cultures, so this is a “natural” thing to do (Durkheim and Mauss 1963).

Such social classifications are universal; consider our school mascots.  Symbol, metaphor, and allegory are amazingly important to humans (Lakoff and Johnson 1980).

An odd kind of “classification” is found in linguistic gender and other grammatical systems.  German, Spanish, and other gender systems are notoriously decoupled from sexual reality; “maiden” is neuter in German.  Several Australian languages have four genders: masculine, feminine, neuter, and useful plants (Lakoff 1990 discusses one such language).  Many languages, including Chinese and Maya, have classifying particles, added to numbers and demonstratives, that identify broadly the type of noun to follow.  Thus Chinese says yi ben shu “one volume book” and yi tiao yu “one length fish.”  This allows one to see that Yucatec Maya has a category for “plants” in general:  there is no actual word for “plants,” but there is a classifier (k’ul) that includes all and only plants.

Maps are, in a sense, another form of classification.  Not all cultures make maps, but all have extremely detailed knowledge of places and paths in their environments (Hunn 1991, 2008).  The idea that small-scale societies are  somehow intuitively aware of the environment without making mental maps or representations of it is wrong (Istomin and Dwyer 2009).

Beyond classifying, humans seem compelled to think causally.  This is another inborn habit of thought.  We have to find a motive.  Typically, we first look for an active, thinking agent.  If that fails, we look for a covering law—not usually a formal one, just a rule of thumb that will serve.  Only if that too totally fails do we accept blind chance, or probabilistic factors, as a reason (see e.g. Nisbett and Ross 1980).  As Geoffrey Lloyd says, “humans everywhere will use their imaginations to try to get to grips with what happens and why, exploiting some real or supposed analogy with the schemata that work in otherwise more mundane situations” (Lloyd 2007:130).

Aristotle described four types of cause, or rather of aition (pl. aitia), which has also been translated “factor” (Aristotle 1952:9, 88f).  The first is material cause—what the object we are contemplating is made of.  This would not occur to modern people as a “cause”—the hickory wood does not cause the baseball bat—but Aristotle was thinking partly of the elements of Greek thought.  Earth, air, fire, and water were generally thought to have dynamic qualities that made them evolve into things.  Chlorine purifies water by virtue of its violently oxidizing nature, which destroys bacteria and toxins; this is an example of material cause in action.

Second is formal cause: the definition of the object, its pattern, its essential character.  A baseball bat is a rounded stick made of hickory wood, and is patterned so as to hit balls in a game. Third is efficient cause—the direct, proximal cause, specifically the causing agent, of an action.  The bat is made by a factory to be sold to a player, who then uses it to hit a ball; the chlorine is bubbled through water, where it reacts chemically with toxins and bacterial membranes.  Fourth is the final or ultimate cause, the reason for the action or object:  the water is purified so people can drink it safely; the bat is used in a game for the purpose of entertaining people.  This last can go into infinite regress:  the bat is to hit a ball, so that the game will go on, so that people will be entertained, so that they will enjoy life and buy the sponsors’ products, so that….  And this only scratches the surface of Aristotle’s theory of cause, and he was only one Greek philosopher (see Lloyd 2007:108-130).

The endless debate on cause in philosophy since Aristotle need not concern us, since we are here considering folk and traditional knowledge.  In that realm, our heuristics and biases play out at their most florid.  Aristotle’s efficient cause is stated in agent terms.  This default attribution to intentional action by an agent gives us the universal belief in gods, spirits, and other supernatural beings.

Science and Ethnoscience, part 2: European Biology as Ethnobiology

Monday, August 22nd, 2011

E. N. Anderson

Dept. of Anthropology, University of California, Riverside

Part 2.  European Science as Ethnoscience:  Science in Europe before International Science Came

Recently, historians of science have reacted against the old model of evaluating former beliefs in light of current knowledge.  This is surely the right thing to do.  However, it often leads to evaluating former beliefs as if they were a homogeneous body of lore, decoupled from real-world experience.  One could, for instance, recount the medical knowledge of 1600 as if it were a single, coherent system, based on logical reasoning, with no input from experience or practice.  This is not really how people think, and certainly not how science and medicine developed.  People interact with their patients and surroundings, learn from that as well as from books, and come up with individual knowledge systems that may or may not have much in common with those of their contemporaries.  The current histories of science thus take account of agency, and the role of interaction with reality.

Near East and China to Europe

Science gets around.  Three important cases of early-day knowledge transfer are particularly well documented:  the spread of medical lore from Greece to the Near East in the early Islamic period; the spread of medicine and other technical lore between China and the Near East in the Mongol period; and the spread of science from both of the above to Europe in the Middle Ages and Renaissance.

The first two cases joined early, for Near Eastern medical knowledge was flowing to both Europe and China in the 1200s and 1300s.  However, the two-way nature of the latter flow, and the radical differences in structure and cultural background, make it more reasonable to treat them initially as separate histories.

Europe before 1500 participated in a general rise of science in the Eurasian and African world.  Greek learning was long forgotten in the west, but Arab and Byzantine scholars reintroduced it, first to Moorish Spain, then to Sicily and upward through Italy.  There had been a huge flow from the Greek world into Arabic and Persian cultures from 700 to 1000, but essentially none in the other direction.  After this time the flow almost entirely reversed.  Translation into Arabic shrank considerably (Lewis 1982:76), but translation from Arabic into western languages picked up.  At first, almost all of it was within the Arab-influenced worlds of Spain and Italy, but it spread rapidly beyond those spheres.  Some Greek learning spread to west Europe directly (Freely 2009:165-177, and see below), but most of it spread via the Arabs.

The great Salerno medical school, just south of Naples, was apparently started by Arabs in the early 8th century.  Legend said the school was founded by an Arab, a Jew, a Latin, and a Greek.  It flourished by 850, and blossomed from about 1000 AD as the center of Islamic-derived learning in Europe.  Constantine the African (ca. 1020-1087), from Tunis or near it, was instrumental in transferring Arabic knowledge into Italy at this time, through his translations (and those of his student John the Saracen, 1040-1103) of works including al-‘Abbās and Hunayn ibn Ishāq’s versions of Aristotle and Galen, though his translations were far from the best imaginable (Kamal 1975:189, 662-3; Ullman 1978).  (Hunayn, a Christian, appeared in the Latin West under his Christian name of Iohannitius.)  Constantine worked in Salerno or nearby Montecassino.

Indian numerals were Arabized in the 9th century, and then developed into Arabic numerals, which slowly entered Europe in the late Middle Ages and early Renaissance.  The most important transfer of Indian into Arabic numeration came via al-Khwārazmī in Baghdad.  He became so famous as a mathematician that his name entered the world’s languages:  “algorithm” is a corruption of “al-Khwārazmī.”  The word first appeared in a thirteenth-century translation, Algoritmi de numero indorum, “Al-Khwārazmī on Indian numbering” (Hill 1990b:255; “logarithm” is a deliberately coined metathesis of “algorithm”).  He contributed greatly to algebra (Arabic al-jabr, “restoration”), and his work on it was translated into Latin in the 12th century, by Robert of Chester and then again by Gerard of Cremona.  Trigonometry followed the same course, possibly from India, certainly from Islam, at a somewhat later date.  (On this and other mathematical transfers, see Freely 2009:133, with forms of numbers well shown, from ancient Brahmi to modern; Hill 1990b; Mushtaq and Berggren 2000, esp. pp. 182, 187.)  The most important name in transferring Arabic numerals into Europe (in the 990s) was Gerbert of Aurillac, who became Pope Sylvester II (Lewis 2008:328-329)—one of the few popes to have any distinction in learning outside of theology.

The Arabs and other Near Easterners also made enormous contributions to technology and agriculture, but these are poorly known, because the contributors were rarely literate and literate people were rarely interested (Hill 1990b).  A few agricultural handbooks exist, and show great sophistication.  We know this lore was transferred to Europe, but we have few details.

The Salerno medical school remained the greatest in Europe throughout the early middle ages.  This school translated the Arab Taqwim as-sihha by the Christian Arab Ibn-Butlān (d. ca. 1066) as the Tacuinum sanitatis, which remained the basic medical manual in Europe for centuries (Tacuinum Sanitatis 1976).  It is still in print in several languages, though now more for its beautiful early-Renaissance plates than for its advice.  The latter, though, is still good; it survives today in the standard clichés about moderation in diet, moderate exercise, rest, and so forth, familiar to everyone from doctors’ talk and pop medical books.  These saws trace directly back to the Tacuinum.

It, in turn, was the basis for the Salernitan Rule, the versified guide to health that was the Salerno school’s most famous product (Arikha 2007:77, 100ff.).  Sir John Harington translated it into English around 1600.  His famous translation of one couplet is still frequently and justly quoted:

“Use three physicions still:  First Doctor Quiet,

Next Doctor Merryman, and Doctor Diet” (Harington 1966:22).

The Latin original, ibid., is:

Si tibi deficiant medici, medici tibi fiant

Haec tria, mens laeta, requies, moderata diaeta;

literally, “if doctors fail you, let these three be your doctors:  a cheerful mind, rest, and a moderate diet.”

The Salerno school also produced the Articella (“little art”), a handbook that, “by the mid-thirteenth century…was the foundational textbook for most medical teaching in the West.  It included the Hippocratic Aphorisms and Prognostics; Galen’s short Ars parva; the medically essential and thus ubiquitous treatises On Pulses and On Urines; and the extensive compendium of Galenic writings by Hunayn ibn Is’haq (Johannitius), the Isagoge Ioannitii in tegni Galeni, in the translation by Constantinus Africanus” (Arikha 2007:77).  Many other Italian translating projects were active (Freely 2009:126ff.).

Through it and other channels, the work of Ibn Sina (Avicenna, 980-1037; see Avicenna 1999) became standard.  Ibn Sina hailed from the far east of the Iranian world, near Bukhara.  He was a thoroughgoing Aristotelian, committed to investigation of the world, though convinced that intuition was vital to that investigation.  His enormous Canon of Medicine was translated into Latin by Gerard of Cremona (1114-1187), along with perhaps a hundred other Arab works.  Gerard had moved to Toledo to learn Arabic, and remained there (Freely 2009:128; Pormann and Savage-Smith 2007:164), in that world which still remembered “convivencia.”  This was surely one of the most stunning examples of knowledge transfer in all history (Covington 2007; Kamal 1975:663; Ullmann 1978:54).  One suspects that Gerard did not single-handedly translate all of them, but the achievement was fantastic nonetheless.  Avicenna’s Canon remained standard in Europe into the 17th century.  Gerard also translated Ptolemy’s Almagest, and basic works of Al-Kindi, Al-Farabi, Al-Hazen, Thabit, Rhazes, al-Zahrawi, and Al-Khwarizmi, the last being the first algebra to reach Europe.  He also translated much alchemy (Hill 1990a:341), which, be it remembered, was a perfectly reasonable science in those days; much of modern chemistry descends from it.  Certainly, few people in history have been so important, and very few so important yet so little known.

Also active in Toledo were the Jewish translator and writer Abraham ibn Ezra (1086-1164; Freely 2009:129) and several others.

Fibonacci, famous for developing the sequence of numbers that specifies the pattern of developing plant structures, learned much from the Arabs, using al-Khwarizmi’s algebra works in Latin (Covington 2007:10), presumably Gerard’s translation.  Faraj ben Salim, a Sicilian Jew, translated more of Rhazes as well as Ibn Jazlah, al-Abdan, and others.  As late as the 16th century, Andrea Alpago of Belluno was translating or retranslating more of Avicenna (Kamal 1975:664, following Hitti).  Another Italian, Stephen of Pisa, was active at Salerno and in the Middle East (Ullmann 1978:54).
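The sequence just mentioned is simple to state precisely: each number is the sum of the two before it.  A minimal sketch in Python (the function name is my own, purely for illustration):

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers: 1, 1, 2, 3, 5, 8, ...

    Each term is the sum of the two preceding terms.  Spiral counts
    in many plant structures (sunflower heads, pine cones) tend to
    be adjacent numbers in this sequence, e.g. 34 and 55.
    """
    seq = []
    a, b = 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

print(fibonacci(10))  # → [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```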

Botany transferred actively, largely in the form of herbal medicine in the tradition of Dioscorides.  The Arabs had vastly increased the number of items in the Dioscoridean materia medica, and Europe slowly adopted many of these, though unable to access some that were strictly Near Eastern (Idrisi 2005).

Spain was key to transmission.  The Arabs conquered it in 711, ruled most of it into the 11th century, and retained a foothold at Granada until 1492.  At peak, under the late Umayyads in the 10th century, Cordova (the capital) reportedly had 200,000 houses, 10,000,000 people, 600 inns, 900 baths, 600 mosques (with schools), 17 universities, and 70 public libraries, the royal one containing 225,000 books (Kamal 1975:8), or, by other estimates, 400,000 (Lewis 2008:326).  The Umayyad golden age ended, but subsequent dynasties did surprisingly well at keeping civilization alive, and slowly Europe realized that there was something worthwhile here.

The climax of Spanish appropriation of Islamic knowledge came in the 11th-13th centuries, under Alfonso the Wise (late 13th century) and other relatively enlightened monarchs.  Moorish Spain was a center of Arab and Islamic civilization.  Works spread all over the world from there; Yusuf al-Mu’taman’s geometry book of the 11th century was taken by Moses Maimonides (1135-1204) to Cairo, whence it went on all over the Islamic world, being republished, for example, in Central Asia in the 13th century (Covington 2007).  At that time or earlier, Spanish travelers even went to Egypt and Syria, and possibly Central Asia, in search of knowledge (Kamal 1975:662, citing the medieval writer al-Maqrizi).  Ibn al-Baytar (d. 1248), a famous Andalusian physician and herbalist, traveled in the Near East and listed hundreds of remedies; many herbal drugs are still called by his name.

Around 948, the Byzantine emperor Constantine VII sent ‘Abd al-Rahman III of Andalus an elegant Greek manuscript of Dioscorides.  Seeing this as obviously far more useful than most pretty gifts, the Jewish minister Hasdai ibn Shaprut had it translated, with the gift-bearing ambassador and a monk providing the Greek, and several Arabs helping with the Arabic and with the plant identifications (Lewis 2008:331).  Arabic versions of Dioscorides were eventually brought into Latin, but, as we have seen, most Arabic medical knowledge came later and via Italy.

Even love poetry moved north; Andalusian song, sometimes learned via captured singing-girls, inspired the troubadours (see e.g. Lewis 2008:355).  Christian captives went the other way, and influenced Andalusian Arab songs; they often have chorus lines in (rather butchered) medieval Spanish, often with definitely racy words.

A vast range of Spanish and Italian words come from Arabic, including a huge percentage of traditional medical terms, and many have gone on into English, ranging from “syrup” and “sherbet” to “soda,” “cotton,” “alkali,” “antimony,” “realgar,” and “lozenge,” to say nothing of such well-known scientific terms as “algebra,” “algorithm,” “alchemy,” and most of the names of the larger stars.  The Arab definite article “al-“ is often a dead giveaway for Arabic origin.  The “l” gets assimilated to many initial consonants, giving Spanish words like azulejo “tile” (Arabic az-zulej) and azafrán “saffron” (az-zafaran).  The standard Spanish word for thin noodles,  fideos, is Arabic; the proper classical Arabic is fidāwish (see Zaouali 2007:116 for the word and a medieval recipe), fideos being the Andalusian Arabic pronunciation.  Today the word is often mistakenly taken as a plural.

Spain was, of course, a center of Arabic learning, which could easily be translated directly.  Al-Maqqari wrote of its capital in the 10th century:  “In four things Cordoba surpasses the capitals of the world…the greatest of all things is knowledge—and that is the fourth” (Freely 2009:107; the other three were local buildings, including the mosque which still survives).  Ibn Zuhr (Avenzoar to Europeans, transcribing the Andalusian pronunciation of his name) flourished ca. 1091-1162.  His more famous student Ibn Rushd (1126-1198, known in Latin as Averroes, approximating the Andalusian dialect pronunciation of Ibn Rushd) became a standard source of medical and scientific knowledge for medieval Europe.  He was enormously influential on St. Thomas Aquinas, and through him on all subsequent European thought.  Europe might never have developed modern science without Averroes.  Averroes was an Aristotelian, and his version of Aristotle remained standard in Europe, being definitively superseded only after the original Greek texts became widely known.

Averroes also wrote “The Incoherence of the Incoherence,” an answer to al-Ghazzali’s “The Incoherence of the Philosophers,” a mystic’s attack on rational thinking.  Though one standard story claims that al-Ghazzali got the best of it and ended philosophy in Islam, actually Averroes’ answer was fairly successful, and science continued to flourish in the Islamic world, succumbing more to later economic decline than to al-Ghazzali’s mysticism.  Other scientists included Abulcasem (Abu al-Qasim).  Translation effort culminated with Arnold of Villanova (d. ca. 1313), who translated Avicenna, Al-Kindi, Avenzoar and others.

Some knowledge flowed the other way.  Little, if any, of it was scientific; it was more in the line of fun.  Some medieval Arab songs in Spain had Spanish-language choruses—significantly, written to be sung by slave-girls used for sexual purposes.  Spanish food got into Muslim cooking; “a primitive sort of puff pastry” was fulyātil, from the medieval Spanish word for “leafy” (Perry 2007:xii).  We will return to the story of Spain.

Italy, however, was also a major transfer zone, with Muslim control of Sicily (and briefly part of south Italy) critically important.  Sicily fell to Roger the Norman, who with his successors developed one of the most tolerant realms of the Middle Ages; seeing the value of Islamic knowledge, he and his successors, especially Frederick II, tolerated Muslim communities and oversaw a great deal of translation and learning.  One result was Frederick’s great treatise on falconry, De Arte Venandi cum Avibus, which is probably the only medieval work that is still the standard textbook in its subject (Frederick 1943).

South France produced the famous Tibbon family of Jewish translators, who rendered many works into Hebrew; then they or others translated on into Latin.  They were especially active in the 13th century (Pormann and Savage-Smith 2007:164-165).  They may have made the greatest single contribution to the translation effort, vying with Gerard of Cremona.  The enterprise ranks among the most astonishing examples of knowledge transfer in all history.

Universities, Crusaders and their doctors, knightly orders centered in Cyprus and elsewhere in the Mediterranean, and ordinary travelers became more and more a part of the effort, until the path was well-beaten and no longer a matter for a few heroic travelers.

Even the British Isles contributed translators, including Adelard of Bath and Michael Scot.  Roger Bacon learned much from translations of Arabic lore.  Later, in the 17th century, Jacobus Golius introduced Descartes to Alhazen’s work and other relevant texts; Alhazen’s work on optics now survives only in Latin translation.

By 1200, Paris had 40,000 inhabitants, 4000 of whom were students (Gaukroger 2006:47).

Students were then as they are now; “as the contemporary saying went, [they learned] liberal arts at Paris, law at Orleans, medicine at Salerno, magic at Toledo, and manners and morals nowhere” (Whicher 1949:3; cf. Waddell 1955, esp. pp. 176 ff.).  Nothing has changed since, except for the addresses of the most prestigious universities.  The “contemporary saying” was presumably said by older professors, who never fail to claim that the younger generation is going to hell, and never remember that their elders said the same thing about them.  It is particularly amusing to hear aging ‘60s people complain about today’s amazingly tranquil and industrious young.

Religion was both enabler and opponent of all this.  Plato was the basis of early theology.  The rise of Platonism explains such things as the Seven Deadly Sins:  Greek philosophical annoyances rather than Biblical taboos.  Aristotle was outlawed during much of this earlier period; the idea that God was present in all his creation, the physical world, was anathematized as heresy (see Gaukroger 2006:70-71).

Oddly, Greek learning did not penetrate Europe directly until long after classical Greek works were well known via the Arab routes.  In fact, the Greeks themselves recovered much of it from the Arabs (Herrin 2008); the Dark Ages were not nearly so dark in Byzantium as in the West, but still much was lost.  Greeks such as Gregory Chioniades (late 13th-early 14th century) eventually came to translate Arab advances in astronomy, medicine, and related fields (Herrin 2008:274).  Somewhat before this time, medical study had revived in Byzantium; dissection began again (after longstanding Christian bans) around the 11th century (Herrin 2008:228).

Western Europeans came to Byzantium for commerce and crusades in the high middle ages.  The infamous Fourth Crusade of 1204 led to European occupation of the city for almost 60 years.  During this period, such Westerners as William of Moerbeke read and translated Aristotle, Galen, Archimedes, and other scientific greats (Herrin 2008:278-279).

Meanwhile, Greeks from the Byzantine world appeared in the West, in time to teach Petrarch and convert him to trying to rediscover Greek classics in their original form.  Burgundio of Pisa first translated Galen from Greek to Latin, around 1180 (Kamal 1975:663).  Others, including the Jewish translator Bonacosa, followed over the next century.  Byzantine delegations continued, and the 15th century emerged as a major turning point, establishing Greek learning as more or less de rigueur for serious scholars, at least in Italy (see Gaukroger 2006:89-90).  The story of the rediscovery of classical learning is too well known to need retelling here; what interests us at this point is that direct work with the Greek sources came long after much classical learning was known through Arabic refraction.

With the rise of early modern science, it was the Europeans’ turn to seek out Near Eastern knowledge in its actual homeland.  Leonhard Rauwolf traveled extensively in the Near East in the 16th century, to be followed in later centuries by Joseph Pitton de Tournefort (a father of taxonomy) and many others.  The classical sources were by then well known in Europe; Rauwolf and Tournefort were more interested in gathering new knowledge through actual fieldwork.  They are among the great ancestors of modern-day field biologists and anthropologists.

India, China and Japan became well known only later.  Portuguese and then Dutch enterprise (the latter especially in Japan) led to a flood of knowledge coming back to Europe.  The Jesuit missionaries, who focused on East Asia as their initial mission field, were particularly important; they idealized Chinese culture, arguing enthusiastically for its philosophy, governance, food, medicine, and anything and everything else (on medicine, see Barnes 2005).  “New Christians” may have been important too, if the example of Garcia da Orta (the Jewish-background writer on Indian medicines) is representative.  A veritable translating industry introduced East Asian medicine to Europe in the mid-17th century, with moxibustion in particular intriguing the Dutch in Japan (Cook 2007:350-377).  Even Thomas Sydenham, the very image of the “new science” in medical form, was fascinated by moxibustion and recommended it (Cook 2007:372).  Concepts did not get across, but practices and especially drugs did.  As Cook (2007:377) says:  “Culture certainly made translating the whys and wherefores as understood by one group extraordinarily difficult.  But it was no barrier to useful goods or the business of how to do something.”

The flood of medieval Arab material was almost all Aristotelian, and it led to an enormous revolution in European thought in the 12th and 13th centuries (Ball 2008; Gaukroger 2006).  The highly idealistic, other-worldly, broadly Platonic worldview of the Dark Ages gave way to a view that valued investigation of real-world things.  God’s plan as revealed in the actual experienced world became a major goal of investigation.  This was to be the key reason for scientific investigation for the next several centuries, as we shall see in the next section.

Traditional churchmen, however, caviled at the new rationalistic, worldly, logical approach.  They felt that “taking too strong an interest in nature as a physical entity was tantamount to second-guessing God’s plans” (Ball 2008:817).

This view rose in parallel to, and may have been derived from, the Muslim reaction against Aristotelianism.  In the Near East, but not in Europe, the Muslim reaction triumphed in the end.  Extreme reactionary religiosity, associated with the Hanbalite legal school, begat the Ash‘arite view that speculation on the world was impious.  This received a huge boost through al-Ghazzali’s savage attacks on the “philosophers” in the 12th century.  Hanbalite thinking has more recently given rise to the Wahhabism that swept the Islamic world in the late 20th and early 21st centuries.  Wahhabism was espoused by the Saud family in Saudi Arabia, and their oil wealth gave them the ability to propagate it worldwide, leading to Al-Qaeda terrorism, widespread attacks on girls’ schools, and many other manifestations.  Islam is as diverse as Christianity; the Hanbalites are to the other legal schools as the hard-shell southern Baptists are to mainstream Christians.

Ash’arism might not have triumphed, however, had not the Mongols swept through the Middle East, followed closely by the even more devastating epidemics of bubonic plague from 1346 onward.  These multiple blows ruined economy and culture, and left the region prostrate.

Science withered or ossified.  Folk wisdom continued to increase, and so did science in some marginal areas of Islam such as India and Central Asia.  But in general the torch was passed to Europe.  The roles of the Middle East and Europe were reversed.  Thus, writing on Ottoman Turkish medicine and natural history after the Turkish empire had passed its noon, Bernard Lewis reports that “they did not think in terms of the progress of research, the transformation of ideas, the gradual growth of knowledge.  The basic ideas of forming, testing and, if necessary, abandoning hypotheses remained alien to a society in which knowledge was conceived as a corpus of eternal verities which could be acquired, accumulated, transmitted, interpreted, and applied but not modified or transformed” (Lewis 1982:229).  Lewis also notes a lack of interest in the rest of the world.  He correctly says this is more typical of human societies than is the ethnographic curiosity of Europe in the modern period.  But the ancient Greeks and the early medieval Muslims had been more attentive to “the others.”

Lewis contrasts this strongly with the great days of early Islam, when the Near East was the scientific center of the world.  The Ottoman twilight may be an extreme case, but I encountered exactly those attitudes among older Chinese scholars in Hong Kong in 1965 and 1966.  Many of them told me soberly that the traditional fishermen I studied had six toes and never learned to swim.  A minute’s observation on the waterfront on any warm summer day would have sufficed to disprove both claims, but the claims were old and were in the Chinese literature, and that was enough!  Such attitudes trace back to the declining days of the Ming Dynasty in the 1500s, and are not unknown earlier, but (as in Islam) they do not hold universally until economic and political decline set in.  Nothing could be farther from genuine traditional ecological knowledge; those same fishermen (and the Yucatec Maya I later studied) constantly tested and added to their pragmatic knowledge of their worlds.

The Origins of Early Modern Science

Things were very different in Europe.  Early modern science arose after Near Eastern and other sciences were incorporated there.  Perhaps from China or the Near East came the idea of garden as microcosm of the world; this idea led many to start gardens in which they tried to grow everything they could find (Cook 2007:30).

One odd pioneer was Paracelsus (1493-1541; see Thick 2010:200).  Wildly nonconformist and eccentric, he dabbled in mining, alchemy, medicine, and philosophy during a wandering life working as miner, chemist and doctor.  He believed all nature and life were chemical, and could be reproduced in the chemist’s or alchemist’s laboratory.  Chemistry and alchemy were not differentiated at this time; they were one science.  He made, or at least established in the literature, perhaps the two most important breakthroughs in liberating modern science from Greek mistakes:  he saw that diseases were separate entities in their own right, and not just forms of humoral imbalance; and he saw that at least some chemical substances (mercury and sulphur, to be exact, and he added salt) were not compounds of earth, air, fire and water, but were actual elements themselves.  The first of these profound insights was taken up later by Sydenham and others.  The second was not to be fully developed until Lavoisier.  Still, the idea was out there; the seed was sown.

Medieval herbals gave way successively to Brunfels’ major one of 1530-36, Fuchs’ great book of 1542, and then in the late 16th century the truly great work of Dodoens (Cook 2007; Ogilvie 2006).

Of course, a dramatic moment was the coming of New World plants to Europe, first in the rather small work of Nicolás Monardes of Sevilla (1925), but then in the enormous and stunning achievement of Francisco Hernández in the late 16th century.  Hernández’s work, thought by some recent writers to be lost or buried in imperial Spanish libraries, was actually made available by the Lynx Academy (made famous by Galileo’s membership; Freedberg 2002; Saliba 2007).  It was republished in Mexico in an obscure wartime edition (Hernández 1942), which languishes almost unknown; a new edition is needed.

Meanwhile, Bernardino de Sahagun was getting Aztec students and colleagues to record their knowledge, in the monumental Codex Florentinus (Sahagun 1950-1982).  These ethnoscience studies of Mexico are among the greatest achievements of plant exploration and of ethnography.

Only shortly before, Las Casas had led the successful movement to have Native Americans declared by the Catholic Church to be fully human and entitled to all human rights then recognized.  This was the beginning of the end for the appalling practices of early Spanish settlement, when Native Americans were enslaved and worked to death, or fed alive to dogs because they were cheaper than dogfood (Las Casas 1992; Pagden 1987; Varner and Varner 1983).  Las Casas risked his life for decades; the settler interests were openly after him.  Few political battles in history have been more heroic or more important.  Interestingly, Las Casas was the conservative in these fights; the modernizing “humanists” took the position that the conquerors had full rights to do anything they wanted to the “savages.”

Spain in the late 16th century was thus a dynamic place of forward thinking and spectacular achievement.  Monardes may have heard the masses of the great Sevillan composer Francisco Guerrero.  The year of Guerrero’s death, 1599, saw the birth in Sevilla of the master painter Velázquez.  Contemporary with Guerrero, the incomparable Tomás Luis de Victoria was shuttling between Spain and Rome (where Palestrina composed his vast repertoire at the same time).

“New Spain” in the New World was rapidly catching up.  Spanish composers moved to Mexico and South America, where they taught the locals, initiating a period of Baroque music that is little known but unexcelled; among other things, Esteban Salas in Cuba became the first African-American to compose classical European music.  In the 17th century, Juan Ruiz de Alarcón migrated from his obscure Mexican birthplace to Spain, where he became one of the great dramatists and an absolutely unexcelled master of the Spanish language.  (He was one of those writers who can make strong men weep simply from the beauty of the sounds, even if they do not understand the Spanish.)  In short, Spain, including “New Spain,” in the 16th and early 17th centuries was fully participant in the brilliant and innovative civilization of Western Europe, along with Italy, France, the Netherlands and England.  Spain’s melancholy decline set in before the full scientific revolution (or non-revolution), but not before scholars like Monardes and Hernández had contributed in a major way to it.

Ogilvie (2006) cautions that the new discoveries in Europe and the Near East were far more important in the development of botanical science than these rather sketchily known New World discoveries.  However, the latter did indeed have a major effect (Gaukroger 2006:359), even though Sahagún’s great work on Aztec knowledge, the Florentine Codex, was not known in Europe at that time.

Arabic learning, by this time, was entering Europe via Arabic-literate European scholars as well as immigrant Arabic-speakers like Leo Africanus (d. ca. 1550).  Leo taught Arabic to the European Orientalist Jean-Albert Widmanstadt (1506-ca. 1559).  A contemporary was Guillaume Postel (1510-1581), whose astonishing career has recently been reconstructed (Saliba 2007:218-220).  Postel served on a mission to Constantinople, where he apparently learned Arabic or at least developed an interest that led to his doing so.  He read and annotated technical works of astronomy and probably other sciences, and briefly taught Arabic in Paris.  People like him evidently alerted Copernicus to Arabic astronomy, which clearly influenced him.

Just as Greek had been the exciting new language to Petrarch and his generation, Arabic was to the 16th century.  Arabic manuscripts are widely found in old European libraries (notably the Vatican and, of course, Byzantine libraries), and were not read by Arab travelers alone.  With the Lynceans and their colleagues seeking out knowledge from the Aztecs to the Arabs, Europe was suddenly a very exciting place.

An example of knowledge flow from the Near East to Europe may be of interest.  The idea of circulation of the blood seems to have started in Islamic lands.  Bernard Lewis (2001:79-80) records that “a thirteenth-century Syrian physician called Ibn al-Nafīs” (d. 1288) worked out the concept (see also Kamal 1975:154).  His knowledge spread to Europe, via “a Renaissance scholar called Andrea Alpago (died ca. 1520) who spent many years in Syria collecting and translating Arabic medical manuscripts” (Lewis 2001:80).  Michael Servetus picked up the idea, including Ibn al-Nafīs’ demonstration of the circulation from the heart to the lungs and back. William Harvey (1578-1657) learned of this, and worked out—with stunning innovative brilliance—the whole circulation pattern, publishing the discovery of circulation in 1628 (Pormann and Savage-Smith 2007:47).  Galen and the Arabs thought the blood was entirely consumed by the body, and renewed constantly in the liver.  They did not realize that the veins held a return flow; they thought the arteries carried pneuma, the veins carried nutrients. Harvey’s genius was to see that blood actually circulates continually, ferrying nutrients to and from the whole body in a closed circuit.

The Dawn of Rapid Discovery Science

Europe has progressed fairly continuously since the final eclipse of the Roman Empire, though there were some checks in the 14th, 17th, and 18th centuries as well as in the Great Depression of the 1930s.  Knowledge in particular has risen steadily, even through those difficult periods.

Europe after 1500 presents a strikingly different case from both medieval Europe and the other civilizations of the world.  The flow of Near Eastern, Chinese, and Indian learning to Europe was one major input into the rise of what Randall Collins (1998) called “rapid discovery science.”

Yet the new wave really began with Thomas Aquinas, Roger Bacon, William of Ockham, and other medieval thinkers, and they of course were drawing on those Arab sources.  This makes the famous “scientific revolution” beloved of earlier generations of historians a rather slow process.  The current feeling is that dragging out a “revolution” over many centuries is ridiculous.  We live in an age, after all, when the computer revolution took only a generation.

The most comprehensive study of the intellectual background to the “revolution” is that of Gaukroger (2006, 2010).  Gaukroger sees a development from the scholasticism of the high medieval period, with its Aristotelian natural philosophy, to modern science.  Before the high Middle Ages, Plato and Christian dogma had been riding high, inhibiting learning.  Gaukroger provides very important observations on Plato, Augustine and Manichaeism (Gaukroger 2006:51-54).  Aristotle was rehabilitated thanks to the Arabs and to Thomas Aquinas.

One might argue, in defense of the old term, that what happened in the 17th century was the most momentous single change in all human history, rivaled only by the origin of agriculture.  (The latter was also a very slow process, leading to fights about whether it was a “revolution” or not.)  I will, here, follow Collins, and refer to the event as the invention (basically between 1540 and 1700) of rapid discovery science, rather than as a “scientific revolution.”

The new, empirical, discovery-oriented, innovation-seeking science arose in the 17th century, pursuant to the work of Francis Bacon (1561-1626), Galileo Galilei (1564-1642), William Harvey (1578-1657), René Descartes (1596-1650), and their correspondents.  Francis Bacon first emphasized the need for experiments to prove claims and advance knowledge; he was opposing magic and dogma based on anecdotal evidence, as well as sheer ignorance.  He also emphasized the need for cooperation; the lone-wolf savant was already a dated concept!   Like other scientists, he wished to strip away the veil of Nature and disclose her; she had been a goddess who “loved to hide herself,” and was still poetically so represented (Hadot 2006).  After Bacon, tension arose between scientists who wished to strip her and romantics who preferred the veil (Hadot 2006).

One remembers that religion and science were not opposed then; in fact science was seen as the discovery of God’s laws in nature.  Descartes and Boyle were great religious thinkers as well as scientists.  The great astronomer Johannes Kepler studied a supernova and realized that the star that guided the Magi to Jesus might well have been such; he sought records and regularities, calculated a date for Jesus’ birth (by then it was known that it was not 1 AD), and coupled it with astrology—still a science then, though a dubious one (Kemp 2009).  Kepler also believed in the Pythagorean music of the spheres, seeing earth and nature moved by heavenly harmonies “just as a farmer is moved by music to dance” (quoted in Kemp 2009).

The revolution was real, if slow. (See Bowler and Morus 2005 for the canonical story; Gaukroger 2006, 2010 for much more detail and a much more radical view.)  It involved finding more and more real-world problems with ancient atomism, mechanism, humoral medicine, and almost everything else, and thus more and more reason to go with new knowledge rather than old teachings.

A fascinating insight into the mind of the time is Malcolm Thick’s detailed biography of Sir Hugh Plat (1552-1608; Thick 2010).  Plat was an Elizabethan tradesman, a brewer by background, who succumbed to the insatiable curiosity of the time.  He never made a significant contribution to anything, but he worked with beaver-like intensity on chemistry, alchemy, food, medicine, cooking, gardening, and every other useful art he could find.  He amassed an incredible collection of ideas, methods, and tricks, most of which he tried himself.  Plat is important not because of what he accomplished but because his story was typical.  There were thousands of ordinary people in Europe of the time who became downright obsessive over useful knowledge or simply science for science’s sake.  They wanted to help the world and to advance learning.

Plat’s work is fascinatingly comparable to an almost exact contemporary, Song Yingxing (1587-1666?), who, oddly enough, has found a biographer at almost exactly the same time as Hugh Plat (Schäfer 2011).  Song was a much more organized, and one gathers a much more intelligent, man than Plat, and produced a famous work instead of a flock of rather ephemeral items, but the mentality was the same:  an obsessive urge to find out absolutely everything about useful arts.  Yet Song’s interests died with him, and no one like him existed in China for centuries.  Plat, on the other hand, was soon forgotten in the rush of new learning.

The same contrast—so bitter for China—is visible in herbals.  At the same time, Li Shizhen was compiling the greatest herbal in Chinese history and the greatest in the world up to his time (Li 2003, Chinese original 1593).  Li’s work was the culmination of a great herbal tradition going back for millennia.  But he was almost surpassed in his own lifetime, and was surpassed soon after it, as the new European herbal movement grew from strength to strength; Rembert Dodoens’ breakthrough herbal came in 1554, to be followed by Parkinson’s in 1629 and John Gerard’s (based on Dodoens’) in 1633.  Li remained the standard of Chinese herbals until the late 20th century.  Thus, in herbal wisdom as in useful knowledge, China was still up with the west in the 1590s, but had fallen hopelessly behind by 1650.  (One reason was the fall of the Ming Dynasty and its replacement by the often-repressive and scientifically sluggish Qing.)

Through all human history, people had followed received wisdom unless there was overwhelming reason to change.  The revolution consisted of the simple idea that we should seek new knowledge instead, using the best current observations.  These were ideally from experiments, but perfectly acceptable if they came from exploration and natural history, like Galileo’s work on astronomy (published in 1632), or even from pure theory, like Newton’s Principia mathematica (1687).

Robert Boyle (1627-1691) stated the case for experiment over received tradition in The Skeptical Chymist (2006/1661; cf. Freely 2009:214-215), taking the significant and extreme position that even when he had no better theory to propose, he would not accept hallowed authority—he would wait for more experiments.  This is, of course, precisely the position that Thomas Kuhn said was hopeless, in The Structure of Scientific Revolutions (Kuhn 1962).  But it worked for Boyle.

It is no mere coincidence that, just as earlier scholars had their “republic of letters” and Galileo and his friends their “Lynx Academy,” Boyle depended on an “Invisible College” for stimulus and conversation.  Scientists may study vacuums, but they cannot work in one.  The sociology of science is vital.

Much of the revolution consisted of new opportunities to observe and test.  Consider the persistence of Hippocratic-Galenic medicine.  Few indeed were the people in premodern times who had Galen’s opportunities to observe, experiment, learn, teach, and synthesize.  He had the enormous medical university in Pergamon, the whole resources of Rome, and his practice with gladiators and other hard-living people to draw on.  He was a brilliant synthesist and a dynamic writer.  The reason he was not superseded until the 17th century was that no one could really do it.  No one had the technology, the theories, the infrastructure of labs and hospitals, or the observational opportunities.  The Arabs and Chinese could, and did, supplement his ideas with enormous masses of data, information, and further qualification, but they were wise not to throw Galen over. Radical rejection of his ideas was not fully accomplished until the 19th century.  By then, modern microscopes, laboratories, and experimental apparatus were perfected.  Soon Galen’s anatomy was extended by Harvey, Willis and others; his lack of recognition of diseases as specific entities was challenged by Paracelsus, then devastated by Sydenham.  This was a long, slow process, and followers of the eccentric Paracelsus were considered quacks and outsiders in the 16th century (Thick 2010).  The newness and uniqueness of syphilis had much to do with the change in attitude.

The same was true in chemistry.  Boyle’s courage in throwing out received wisdom on alchemy, particles, the nonexistence of vacuums, and elemental natures did not help him go beyond the ancients in regard to basic theory.  He discussed the atomic theory, but it too lacked real evidence at the time.  Above all, he realized that the world had proved to be far more complicated than the Greeks or the Renaissance scholars thought; he reviews dozens of sophisticated chemical experiments that proved this amply.  The old views simply would not fit.  But the future was unclear.

He could see that earth, air, fire and water were not much of a story, but he had no way of conceiving of the idea that earth, air and water were actually made up of simpler elements that were, or were comparable to, metals.  This involved reversing all conventional wisdom, which held that the basic elements combined to produce the metals.  This reversal was ultimately reached by Lavoisier in the 18th century.  It had to wait until improvements in experimental technique had isolated oxygen, nitrogen, and so forth.  Such a change in thinking was incredibly difficult to achieve, and truly revolutionary.  Finding out something new merely adds to knowledge, but this was a matter of turning upside down the whole basis of European thinking!  The earth-air-fire-water cosmology was basic to all aspects of (older) knowledge.  The recognition that these four substances broke down into simpler elements, rather than vice versa, was terribly hard-won.

Such new classification systems were extremely important.  Biological classification also underwent a basic paradigm shift.

The classification of living things, traditionally ascribed to Linnaeus, derives as much or more from the brilliant work of John Ray (1627-1705), an exact contemporary—in birth date at least—of Boyle.  Ray was a natural historian, fascinated with plants and birds, and a key person in uniting field work with laboratory work (specifically dissection; but note that the botanists had been there before him).

Ray developed the modern species concept—the idea that those organisms which can interbreed with each other form a species (Birkhead 2008:31). In fact, Ray coined the term “species” in its modern use (Wikipedia, “John Ray”).  He also rejected both the idea that each species has to be viewed as a unique item (as Locke implied) and that it is merely one variant on a more general Platonic type; he pioneered the modern science of classification on the basis of picking out important traits of all sorts to distinguish species and group them taxonomically (Gaukroger 2010:191-194).  He thus foregrounded reproduction and reproductive structures, later shown by Linnaeus to be the really criterial things to look at in classifying plants.

With this system, sex mattered.  Anatomy mattered, and reproductive anatomy mattered more than superficial structures; Ray was a great pioneer in elucidating the reproductive anatomy and physiology of birds.  (In this he built on a great tradition, going back to surprisingly sensible if often wrong ideas of Aristotle’s.)  Leaving descendants mattered; Darwinian evolution depends on Ray and Linnaeus more than on the infamous Malthus.  Without this concept and its implications, there was no reason not to classify plants by their leaves, as many botanists did.  (The leaf-dependent botanists were later to attack Linnaeus for the “immorality” of his “sexual” system.)  Trees could be classified by their timber value.  We shall consider below a much more recent question over what to do with whales.

Ray’s work led to further development by Joseph Pitton de Tournefort, explorer of the Near East.  (I first encountered Tournefort as the man dubiously honored by Brassica tournefortii, a loathed and hated weed from North Africa that has invaded my southern California homeland.  But it tastes good—it is a wild broccoli—and thus I have a soft spot in my heart, or rather in my stomach, for it.)  The taxonomic work of Tournefort and his contemporaries led directly to Linnaeus.

Less beneficial, perhaps, was Ray’s crucial role in developing the “argument by design” for the existence of God (Birkhead 2008).  Later made famous by William Paley, this survives as the universal argument for “intelligent design” today.  It had the advantage of setting Darwin wondering what really caused the design in the world.  Natural selection was his answer—firm enough that a modern intelligent design advocate (like Francis Collins) must assume God, like modern artificial-intelligence designers, uses it to fine-tune his creation.

New and rigorous classification systems for stars, minerals, mental illnesses, and everything else imaginable were to follow, and they had and have their own costs and biases (Foucault 1970; Kassam 2009).  Today we have whole classification systems for everything from universes to subatomic particles.  Atoms, when discovered, were thought to be the true atoms of Greek thought—the final particles that could not be subdivided further.  (“Atom” comes from Greek atomos, “uncuttable.”)  Another bad guess.

This new wave’s creators saw themselves as a “Republic of Letters” (Gaukroger 2010; Ogilvie 2006:82ff; Rudwick 2005).  Educated people all over Europe were in constant correspondence with each other.  This correspondence was relatively unmarred by the hatreds and political games that made daily life in Renaissance Europe so insecure.  People respected each other across lines of nation and faith.  The common language, Latin, was not the property of any existing polity.  Members in this borderless but well-recognized Republic treated each other according to unwritten, or rarely-written, rules of respect and courtesy.

Science and humanities were one.  Describing a typical case, Martin Kemp (2008) points out connections between Pieter Bruegel’s extremely accurate and innovative representations of landscape and the maps of Abraham Ortelius, a cartographer who was a friend of Bruegel.

Of course, all academics will realize that those rules of respect did not extend to debates about theory!  A Protestant could respect and tolerate a Catholic or Jew, but if anyone dared to cross his pet idea on plant reproduction or the treatment of ulcers, the words flew like enraged cats.  That was part of the game—part of business in the Republic of Letters.  This information flow presaged the value of scientific journals (invented in the 18th century but not really important till the 19th), and then the Internet; the vast network held together by letters in the 17th century was exactly like the scientific network on the Internet today.  All the Internet has added is speed—important, to be sure.

Religious solidarity and debate stood behind much of the vigor of debates in science, with Protestants and Jews always being on the defensive at first, and having to argue trenchantly for their beliefs.  This led them to be both original and persistent in thinking (Merton 1973; Morton 1981).  But, also, the wars of religion in the 16th and 17th centuries led to major cynicism about organized religion, and contributed mightily to retreat into science as an alternative way of knowing the Divine Will and into the Republic of Letters as an alternative and more decent way of being social.  The skepticism that surfaces in Montaigne, grows in Bayle, and climaxes in Voltaire fed a search for truths that were not simply matters of unprovable church dogma.

This development was exceedingly slow and uneven, because, contrary to conventional wisdom, the middle ages had plenty of sophisticated observation and argument, and the 17th and even 18th centuries had plenty of obscurantist, mystical, and blindly-Aristotelian holdovers.  Brilliant adversarial argument, technological progress, and economic benefits of forward research were all sporadic and contingent.  They did not suddenly cut in at the glad dawn in 1620 or 1650 or any other year.

What did cut in was neatly summarized by Jan Baptist van Helmont, the Flemish physician who concluded from his famous willow experiment that plants grew from water:  “Neither doth the reading of Books make us to be of the properties [of simples], but by observation” (quoted in Wear 2007:98).  Helmont had much to do with inventing the modern concept of “disease”—a specifiable entity, distinct from its symptoms.  The coming of plague and syphilis, clearly entities though very changeable in symptomatology and clearly different from anything in Herodotus or Galen, had more to do with the origin of this concept; people simply could not ignore them.

Significantly, Helmont’s own work was badly flawed, not least because of his many mystical and even visionary “observations” (see Andrew Wear 2000).  17th-century science did not suddenly discover Truth in the face of learned Error.  In fact, Galen’s and Avicenna’s old books remained much better guides to medical practice than Helmont’s rather wild ideas.  What mattered was that Helmont, and many others, were breaking away from reliance on the books, and rapidly developing a science based on original observation and test.  Their willingness to endure false starts as the price of radical breakthrough is far more important, to science and to history, than their initial successes at replacing the classics with better ideas.

Deborah Harkness (2008) has shown that this type of activity—feverish quest for anything new, exciting, and informative—was exceedingly widespread in Elizabethan England, and by inference in much of urban Europe.  Everyone from farm workers and craftsmen to lords and high court officials was frantically seeking anything new.  Things that improved manufacturing and promoted profit were especially desired, but people were almost as obsessed with new stars, rare plants, and odd rocks as with more solid matters like improving metallurgy and arms manufacture.  This ferment contrasts with China’s relatively staid attitude to innovation.  Even the works of Elman and of William Rowe, which do disclose much intellectual and craft activity in early modern China, have not produced anything similar.  The Tiangong Kaiwu was roughly contemporary with, and similar to, Hugh Plat’s Elizabethan work that gives its name to Harkness’ volume, but unlike Plat’s book it was an isolated incident, not a presage of more and better to come.  Similarly, Li Shizhen’s great herbal came out at almost exactly the same time as the comparable works of Dodoens and Gerard.  (The relations of those two—with Gerard as plagiarist extraordinaire—are described in detail by Harkness.)  But Li’s was the last great Chinese herbal, Dodoens’ the first great European one.  By the early 1600s, Europe had surpassed China.

Harkness wisely includes alchemy and astrology among the useful sciences (see above on the Near East); no one at that time had a clue that one could not turn lead into gold or dirt into silver.  Recall that earth was still an “element” then; gold and silver were not.  Equally amazing things were being done daily in smelting and refining.  Similarly, everyone could see the sun’s influence on all life, and the moon’s control of tides; inexorable logic “proved” that the other heavenly bodies must have some influence.  The problem was that reality did not follow logic or common sense.

Moreover, alchemy, at least, sometimes worked.  We have a careful eyewitness account of a modern Central Asian alchemist turning dirt into gold (cited in Idries Shah’s Oriental Magic, 1956).  Fortunately, the account is extremely perceptive, allowing us to perceive that the good sage was simply panning a very small amount of finely disseminated gold out of a very large amount of alluvial soil. He added a good deal of magical rigmarole, but the actual process is clear.  He seems to have been genuinely convinced he was making the gold; finely disseminated gold in alluvial dirt is far from easy to see.  Countless such alluvial separations must have lain behind alchemy.  Similarly, mercury can extract gold from crushed auriferous rock, and is routinely used for that purpose today; if the gold particles are too small to see—as they often are—an alchemist would surely have thought he was turning rock to gold, via the “mercuric” power that led to naming the liquid metal after the trickster and messenger god.  And of course much of alchemy was spiritual, not physical.

The basic hopelessness of alchemy, however, was proved by Robert Boyle, in The Skeptical Chymist.  Boyle critiqued Galen, Paracelsus, and Helmont for reductionism without evidence, and upheld a view that was, indeed, skeptical; he saw no way to simplify chemistry.  He did not really substitute a new paradigm for an old one.

What mattered was that loyalty to and reliance on the old texts had given way to loyalty to independent verification and reliance on one’s own experiments and observations.  Boyle was not afraid to admit frank ignorance and to throw out theories without having much better to substitute.  Earlier generations, even though they were perfectly aware of the imperfection of old texts and the benefits of observation, did not trust their own innovative findings unless those clearly improved on all that had gone before.  Science thus progressed slowly and cautiously.  Boyle did not throw caution to the winds, but he had come to be a leader in a generation that preferred their own experiments to old stories, no matter how little their new experiments appeared to accomplish.  They were on the way to the modern period, when hypotheses and theories are expected to fail and to be superseded in a few years, and when “hard science” departments tell university libraries not to bother keeping journals more than a year or two (as I observed during my years chairing a university library committee).

Europe the Different

Floods of ink have been expended on why China, India and the Near East did not pick up on their own innovations, and why it was a tiny, marginal backwater of the Eurasian continent that exploded into rapid discovery science.

Clearly, it is Europe that is the exception.  The normal course of human events is to see knowledge advance slowly and fairly steadily, as it has done in all societies over thousands of years.  Chinese and Near Eastern science did not stop advancing when Europe took over the lead; they kept on.  Nor did the Maya, Inuit, Northwest Coast Native peoples, or Australian Aborigines stagnate or cease advancing at any point in their history.  They kept learning more.  Archaeology shows, in fact, that most such societies kept increasing their knowledge at exponential rather than linear rates.  Certainly the Northwest Coast peoples learned dramatically more in the last couple of millennia.  But the exponent was very small.  Europe’s since 1500 has been much larger.  In the 20th century, the number of scientific publications doubled every few years.  The doubling time continues to decrease.
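The contrast between a small exponent and a large one can be made concrete with a back-of-the-envelope sketch.  The doubling times below are purely hypothetical illustrations of the argument, not figures drawn from any source:

```python
# Illustrative only: how a knowledge stock multiplies under an assumed
# doubling time. A long doubling time (traditional societies) still gives
# exponential growth, but the cumulative factor stays small; a short
# doubling time (post-1500 Europe, modern publication counts) explodes.
# All numbers here are hypothetical assumptions.

def growth_factor(years: float, doubling_time: float) -> float:
    """Total multiplication of a stock after `years`, given its doubling time."""
    return 2 ** (years / doubling_time)

# Slow exponent: doubling once per millennium, over two millennia.
slow = growth_factor(2000, 1000)   # 2 doublings -> 4x

# Fast exponent: publications doubling every 15 years, over one century.
fast = growth_factor(100, 15)

print(round(slow, 1))   # 4.0
print(round(fast, 1))   # 101.6
```

The point of the sketch is simply that both curves are exponential; what differs, as the text argues, is the size of the exponent.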

This is quite unnatural for humans.  People are normally interested in their immediate social group, and in getting more liked and admired therein.  All their effort, except for minimal livelihood-maintenance, goes into social games and gossip.  (People do not work for “money”; they work for what money can buy—necessities and status.  Once they have the bare necessities, and perhaps a tiny bit of solitary enjoyment, everything else goes for social acceptance and status.)  Devoting oneself to science—to the dispassionate search for impersonal truth—is truly weird by human standards.  We still think of people with this interest as “nerds” and “geeks.”  Many of them are indeed somewhat autistic.  When I started teaching, I thought young people were interested in the world.  All I had to do was present information.  I learned that that was the last and least of my tasks.  The great teachers are those that can get the students interested in anything beyond their immediate social life.

In fact, interest in learning more about the natural world is—in my rather considerable experience—actually considerably greater in traditional small-scale societies than in modern, science-conscious Europe and America.  I have spent many years living with Maya farmers, Northwest Coast Natives, and Chinese fisherfolk, and certainly the level of interest in nature and natural things was much greater among them than among modern Americans.  They were correspondingly less single-mindedly obsessed with social life.  They lacked, for example, the fascination with “celebs” that reveals itself in countless magazines and TV programs, and that much earlier revealed itself in ancient Greek and Roman adulation of actors and gladiators.  They were also much quicker to pick up skills and knowledge from other people and peoples than American farmers and craftspeople are.

Why did Europe in the 16th and 17th centuries suddenly become obsessed with Japanese medicines, Indonesian shells, and Near Eastern flowers?  Why did so many Europeans take breaks from the Machiavellian social games of their age to study such things?  Pliny had studied, and indeed invented, “natural history,” but his work became a “classic”—quoted, cited, unread, and unimitated—in its own time; natural history grew under Arab care, but truly flourished only in post-1400 Europe.

No such changes took place in the other lands.  If anything, they went the other way.  Near Eastern science declined sadly during this period.  (The Ottoman Empire was a partial contrast, but its history seems almost more European than Near Eastern at this time.)  India was preoccupied with horrific invasions and conquests by Tamerlane, Babur, and lesser lights.

China spent this period trapped in the Ming Dynasty, whose frequently-unstable rulers and frozen, overcentralized bureaucracy stifled change.  Technological and scientific progress did occur, but it was slow.  Ming and Qing autocracy is surely the major reason—revisionists to the contrary notwithstanding (see e.g. Anderson 1988; Mote 1999 gives the best, most balanced discussion of the issue, suspending judgment but making a solid case).  In spite of Li Shizhen and his great innovative herbal of 1593, Chinese science was always deeply attendant to the past, discouraging innovative theories and ideas.  This point has been greatly overmade in western sources (often to the point of racism), and is now a cliché, but it is not without truth.  I have heard many educated Chinese strongly maintain points inscribed in old books but clearly and visibly wrong for present conditions.  In Hong Kong I was repeatedly told, for instance, that the fishermen I studied could not swim.  Anyone could see otherwise on a walk along any waterfront on any warm day.  But the old books said fishermen don’t swim.  In fairness to the Chinese, I have run into the same faith in books, as opposed to observation, in the United States and Europe.

China in the Song Dynasty was ahead of Europe in every field, and ahead of the Near East in most areas of science and enquiry.  The Mongol Empire, and its continuation in China’s Yuan Dynasty, instituted a massive knowledge transfer (Anderson ms.; Paul Buell, ongoing research; Buell et al. 2000), leveling the playing field and introducing many Chinese accomplishments to the western world.  Gunpowder, cannon, the compass, printing, chemical technology, ceramic skills and many other innovations spread across Eurasia.  However, the Mongol yoke was repressive in China.  The end of Yuan saw violence and chaos.  The new Ming Dynasty brought in much worse autocracy and repression.  After an uneven but fairly successful start, the dynasty settled down after the 1420s to real stagnation.

A significant and highly visible symptom is the paralysis of philosophy.  The spectacular flowering of Buddhist, Taoist, and Neo-Confucian thought under Song and Yuan had a deeply conservative tinge, but at least it was a massive intellectual endeavor.  Highly innovative ideas were generated, often in the name of conservatism.  (An irony not exactly unknown in the western world; someone has remarked that all successful revolutions promise “return to the good old days.”)  By contrast, the only dramatic philosophical innovation of the Ming Dynasty was that of Wang Yangming.  Wang was a high official with a brilliant career as censor and general.  He retired to propagate his personal mix of Confucianism and Buddhism, an “inner light” philosophy strikingly similar to Quaker thought but tempered also by a profound skepticism about worldly success and worldly affairs in general.  He moved Confucian philosophy much closer to the quietism and mysticism of monastic Buddhism.  Wang was one of the key figures in turning Chinese intellectuals inward toward quietism, which in turn was one of the causes of China’s failure to equal Europe in scientific and technical progress.

Larry Israel (2008) has given us a superb dramatic account of Wang’s subduing of an apparently psychopathic rogue prince of the Ming Dynasty.  It is another side of Wang.  By a combination of absolutely brilliant generalship and political savvy (not without Machiavellian scheming), he parlayed a very weak position with about 10,000 troops into a total victory over a huge rebellion involving—according to Wang’s reports—100,000 total troops, many of them hardened bandits and outlaws.  Wang is described as maintaining perfect cool through it all, and showing perfect timing.

It is interesting to compare him with his near-contemporary Michel de Montaigne, another soldier turned sage.  Wang was far higher up the administrative and military ladder than Montaigne, but had the same ambivalence about it and the same desire to retire to meditative and isolated pursuits as soon as he could.  The great similarity in life track and the real similarity in philosophy do not extend to any similarity in the effects of their thoughts over the long term.  Montaigne’s skepticism and meditative realism were enormously liberating to European intellectuals (see e.g. Pascal’s “thoughts”), and Montaigne thus became a major inspiration of the Enlightenment.

Montaigne remained less quietist and escapist than Wang, but the real difference was in the times.  If the world had been different, Wang might have started a Chinese enlightenment, and Montaigne might have turned Europeans inward to arid meditation.  Wang’s thought was perfect at feeding the escapism of Chinese intellectuals faced with a hopelessly stagnant and degenerate court.  Montaigne’s rather similar thought was perfect at feeding the idealism and merciless enquiry of European intellectuals in a time of rapid change, dynamic expansion of empires, and terrific contestation of religion and rising autocracy (cf. Perry Anderson 1974).

A huge part of the problem was that Chinese intellectuals served at the mercy of the court, and the Ming court was erratic and punitive, regularly condemning innovators and critics of all kinds (Wang barely survived).  By contrast, many of Europe’s first scientists were minor nobles who had little hope of major advancement but no fear of falling far.  Moreover, like scientists everywhere until the 21st century, they were males who had long-suffering wives to do the social and family work.  Today, married female scientists still usually have all the responsibility of remembering birthdays, organizing children’s parties, and being nice to the boss at dinner; some resent it, some enjoy it, but all recognize it is a special and unfair burden.  Throughout the world in premodern times, science was the preserve of males, and at first of well-born ones.  Only they had the leisure and resources to pursue science.  They were often young and adventurous.  Today, the average age of scientists who make major innovations and get Nobel prizes is around 38 (Berg 2007); in math and physics it is considerably younger than that.  In the Renaissance and early modern period, averages would have been even lower, because of the shorter lifespans of those days.

Benjamin Elman (2005) has shown that the clichés about China’s failure to learn from Europe are not adequate accounts.  The Jesuits in the 17th and early 18th centuries did not bring modern European science; they brought Aristotelian knowledge and old, pre-Copernican astronomy, already discredited in Europe.  The Chinese already had science as good as that.  The Jesuits failed to introduce calculus and other modern mathematics.  The Chinese took what they could use—clocks, some mapping techniques—and saw correctly that the rest was not worth taking.  The Jesuits lost their China foothold and eventually were closed down totally (to be revived much later), and China had no real chance to learn until other missionaries flooded into China in the 19th century.  However, the Chinese continued to benefit from, and develop, the knowledge they learned from the Jesuits.  (Interestingly, this point had been made 60 years earlier by the anthropologist A. L. Kroeber [1944:196], without the materials available to Elman—showing what can be done by a relatively unbiased scholar in spite of the lack of any good information on just how successful Chinese science was.)

Elman systematically compares scientific fields ranging from mathematics and engineering to botany and medicine.  (Among other things, he notes that western medicine had some impact at the same time that the indigenous Chinese medical traditions were moving from a focus on cold to a more balanced focus on both cold and heat as causes of illness.  Like most premodern peoples, their naturalistic medical traditions gave heavy importance to those environmental factors.)  He misses the one that would best make his case:  nutrition.  Chinese nutritional science was ahead of the west’s till the very end of the 19th century.  This was one case in which the west should have done the learning.

After that, China learned about as fast as any country did.  Japan did not get its famous clear lead over China in borrowing from the west till late in the 19th century.  Elman sums up a general current opinion that China’s loss of the Sino-Japanese War of 1894-95 was not because China was behind technologically, but because China was corrupt and misgoverned.  The Empress Dowager’s infamous reallocation of the navy’s budget to redecorate the Summer Palace was only one problem!

This being said, the Chinese were indeed resistant to western knowledge, slow to realize its importance, slow to take it up, slow to see that their own traditions were lacking.  Elman is certainly right, both intellectually and morally, in stressing the Chinese successes, but he may go a bit far the other way.  He sometimes forgets that only a tiny elite adopted any western knowledge.  He admits the Jesuits had no effect outside the court circles—they were sequestered from the people.  In fact, China missed its chances till too late, and its borrowings were then interrupted by the appalling chaos of the 20th century.  Only in the 21st century did China finally drop its intellectual isolationism.

A Few Notes on Later Change

Science as a reliable cranker-out of money-making technologies is a 19th-century perception.  During the period of the (supposed!) “scientific revolution,” craftsmen, not scientists, made the profitable innovations.  The brilliant and pathbreaking innovations in agriculture, textiles, dyeing, mining, and other arts, from the 1400s on (after Europe had internalized Moorish introductions), are all anonymous.  While Bacon and Descartes were making themselves famous, the really important technological developments were being made by farmers and laborers, whose names no one recorded but whose deeds live on in every bite we take and every fibre we wear.  Few things are more moving, or humbling, than realizing how much we now owe to countless unnamed men and women who lived quiet good lives while the rich and famous did little besides pile up corpses, or, at best, write learned Latin tomes of speculation.

On the other hand, though some at the time said that science only satisfied “idle” curiosity, the very use of the invidious word “idle” indicates that more “serious” game was afoot.  Besides the obvious utility of medicine, there were countless works on transport, mining, agriculture, water management, architecture, and every other art of life.  As recognized in the old phrase “Renaissance man,” a well-known artist, politician, or literary person might make scientific advances in practical fields.  Most famously, Leonardo da Vinci made contributions (or at least plans for contributions) to many.

All this was learned much less rapidly than we once thought.  It took generations for the whole complex of observation, experiment, open publication, and forward-looking, inquiring, argumentative science to take wide hold.  Moreover, the founders’ mistakes conditioned science for years, or even centuries.  Worst in this regard was Descartes’ claim that nonhuman animals are mere machines, without true consciousness.  Not until the late 20th century was this idea—so pernicious in its effects—definitively excised from serious science.

However, the idea that Descartes is responsible for the mind-body dualism or the idea that animals are mere machines is based on the assumption that major cultural change occurs because a brilliant individual has a great insight which then trickles down.  This is not how culture change occurs.  It comes from continual interaction with the natural and social world, leading to general learning and constantly re-negotiated conclusions.  Descartes merely put fancy words to what had been church dogma for 1600 years.  He had his influence, but it was minor.

Medicine too reveals a slow, halting progress.  Notable innovators were Hooke, Boyle, and Thomas Sydenham, who developed from the Helmontian canon further ideas of nosology—systematic classification of named disease entities, rather than mere description of symptoms and inferred humoral causes—and laid the foundations for modern epidemiology (Gaukroger 2006:349-351; Wear 2000).  Boyle, ever the innovative and devoted mind, even counseled learning medical knowledge from Native Americans, long foreshadowing modern plant hunting (Gaukroger 2006:374).  However, Galenic medicine held sway through the 19th century, and in marginal areas right through the 20th.

However slow and uneven this all was, dynamic, forward-looking figures like Galileo, Descartes (who invented mathematical modeling as a systematic scientific procedure), Hooke, and Boyle did indeed transform the world.  The really critical element was their insistence on observation and experiment.  Europe previously (and even for a long time after) never could shake off the devotion to prior authority.  Rapid discovery science came when people realized that Aristotle, Avicenna, and other classics were simply not reliable and had to be tested and supplemented.

European expansion and the rise of entrepreneurship has long been a prime suspect in all this (Marx, Weber, and almost everyone else in the game mentioned it).  The correlation of maritime expansion, discovery, nascent mercantile capitalism, and science—the four developing in about that order—is too clear to ignore.

This had a background not only in the Mediterranean trade (Braudel 1973) but also in the European fishery, which developed early and expanded into a high-seas, far-waters fishery by the 1400s (see e.g. Cook 2007:7-8).  This let Europe take full and rapid advantage of Chinese and Arab maritime advances; Europeans developed navigation and seamanship to a unique and unprecedented level by 1450.  Holland and Portugal, the nations most dependent on fisheries, took the lead.

After that, mercantile values took over: need for honest dealing (within reason!), enterprise, factual information, and above all keeping up on every bit of new knowledge and speculation.  Everything could be useful in getting an advantage in trade.  Even clear prose (necessary to scientific writing, at least today) may owe much to this need of merchants for simple, direct information (Cook 2007:56; 408-409).  The whole organization of the new science was influenced by the organization and institutions of the new mercantile capitalism.  Also, merchants wanted tangible signs of their travels and adventures: gardens, curiosity cabinets.

This classic theory has recently received a powerful boost from Harold Cook, who traces out the rise of Dutch business and science in Matters of Exchange:  Commerce, Medicine, and Science in the Dutch Golden Age (2007).  He shows that Dutch science was very much a matter of cataloging and processing the new items the Dutch were discovering in Indonesia, Japan, Brazil, and elsewhere.

Terms like “scientist” and “biology” date from the 19th century, as does “science” in its modern sense.  (“Scientist,” coined by William Whewell, was not really a new word; it merely replaced earlier terms like “savant” and “scient,” which had become obsolete.)

In the early modern period, the people in question were simply called “scholars,” because no one clearly separated science from theology, philosophy, and other branches of knowledge.  Enquiry was enquiry.  Only in the 19th century did disciplines become so distinctive, formal, and methodologically separate that they had to have their own names.

By the late 19th century, folk knowledge of the world had separated from formal knowledge so completely that yet another set of new terms appeared.  Consider the term “ethnobotany,” coined in 1895 by John Harshberger to refer to the botanical knowledge of local ethnic groups.  This was an old field of study; Dioscorides really started it, and the 16th-century herbalists did it with enthusiasm—Ogilvie (2006:71) called it “ethnobotany ante litteram.”  Linnaeus drew heavily on folk knowledge in his botanical work.  China had a parallel tradition; Li Shizhen drew on folk wisdom.  But no one saw folk botany as a separate and distinctive field until the 1890s, when science became so formalized and laboratory-based that the old folk science became a different thing in people’s minds.

Conclusions on Science History

Looking back over the preceding sections, we see that the main visible difference was the explosion of trade and conquest, especially—but far from solely—in the 15th and 16th centuries.  This brought Europe into a situation where it was forced to deal with a fantastically increased mass of materials to classify, study, and deal with.  It simply could not ignore the new peoples, plants, animals, and so on that it had acquired.

Exactly the same problem faced the Greeks when they grew from tiny city-states to world empire between 600 BCE and 300 BCE, and they did exactly the same thing, leading to the scientific progress of the period.  The golden age of Chinese philosophy came in a similar expansionist period at the same time, but Chinese science peaked between 500 and 1200 CE, with rapid expansion of contacts with the rest of the world.  The Arabs repeated the story when they exploded onto the world scene in the 600s and 700s.  In all cases, stiffening into empire was deadly; it slowed Greek science in the Hellenistic period, and virtually shut down Chinese and Near Eastern science after the Mongol conquests.  These conquests did much direct damage, but their real effect was to introduce directly—or create through reaction—a totalitarian style of rule.  China’s Yuan and especially Ming dynasties were hostile to change and innovation; Qing was less so, but not by much.  The change in the Near East was even more dramatic.  The spectacular flood of scientific works shut off abruptly after the Mongols (and the plagues that soon followed).  There was hardly a new book of science from then until modern European scientific works began to be translated.  Even today, the Near East lags almost all the rest of the world—including some far less developed regions—in science.  As expected, the worst lag is in the most autocratic countries.  The least lag is found in the more politically sane nations, such as Turkey, where both liberal Hanafi Islam and a European window have led to greater openness.

Europe and America have not, so far, suffered totalitarian death, but the United States from 2010 onward shows exactly how this happens.  The far right wing of the Republican party took over the House of Representatives and most state governments in that year, and immediately began a full-scale assault on the funding, the independence, and the freedom of teaching of the country’s research and teaching institutions, from grade school to the National Science Foundation.  An almost total defunding of science was advocated.  In education, teaching was under attack, with proposals to replace trained, independent teachers overseeing classes of 20-30 with low-paid, low-skilled persons, without job security, put in charge of classes of 60-80.  Something very much like this happened in Ming and Qing China.

It also happened many times over in Europe, but there were always countries where scientists and scholars could take refuge:  the Netherlands in the 17th century, England in the 18th, France in the 19th, America in the 20th, and various lesser states at various times.  The European world’s fractionation saved it.  No one state could take over, and no one could repress all science.  In China, by contrast, the paranoid Ming Dynasty could shut down almost all progress throughout the whole region.  In the Near East, the Turkish and Persian empires did more or less the same thing.

In Europe, a feedback process developed.  The freer states promoted trade and commerce, which in turn stimulated more democracy (for various well-understood reasons).  This stimulated more searches for knowledge, which were relatively free of dogmatic interference.  Any forward knowledge could provide an advantage in trade.  The rise of Republican anti-intellectualism in the United States tracked the replacement of trade and commerce by economic domination through giant primary-production firms, especially oil and coal interests.


Another factor was the tension between religious sects.  Robert Merton (1973) and A. G. Morton (1981) pointed out a connection between religious debate and science.  Merton saw Protestantism as hospitable ideologically.  I find Morton’s explanation far more persuasive.  He thought that the arguments between sects over “absolute truth” created a world in which people seriously maintained minority views against all comers, argued fiercely for them, and sought proof from sources outside everyday society.  They were used to seeing truth as defensible even if unpopular.

Cook (2007) confirms this by noting how many religious dissenters wound up finding refuge in the Netherlands—Spinoza and Descartes are only the most famous cases—and how many more resorted to publishing, teaching, or researching there.  Cook takes pains to point out that Dutch leadership in intellectual fields rapidly declined as the Netherlands lost political power, religious freedom, and mercantile edge (the three seem to have declined in a feedback relationship with each other; see also Israel 1995 for enormous detail on these matters).  Gaukroger (2006) has argued, reasonably enough, for a much more complex relationship, but I think Merton’s theory still applies, however much more there is to say.

Accordingly, the separation of science and religion is a product of the Enlightenment, and the “conflict” between science and religion is an 18th-19th-century innovation (Gaukroger 2006; Gould 1999; Rudwick 2005, 2008).  Before that, scientists, like everyone else, took God and the supernatural realm for granted (though there were exceptions by the 18th century).  Few saw a conflict, though the separation was beginning to be evident in the work of Spinoza and Descartes.  They deserve some of the blame for separating the natural from the moral (see Cook 2007:240-266).  Descartes inquired deeply into passions, mind, and soul, developing more or less mechanistic models whose more oversimplified aspects still bedevil us today.  Scientists like Newton and Boyle were not only intensely religious men, but they saw their science as a pillar of religious devotion—a devout exploration of God’s creation.  As late as the 18th century, Hume still argued that no one could seriously be an atheist, and was astonished when he visited France and met a roomful of them (Gaukroger 2006:27).  God was already seen as a clockmaker by the 14th century (Hadot 2006:85, 127), and by the 17th it appeared to many scientists that their job was to understand the divine clockwork.

The conflict of science and religion arose only after Archbishop Ussher and other rationalists overdefined the Bible’s position on reality, and had their claims shown to be ridiculous (Rudwick 2005, 2008).  Between fundamentalist “literalism” and 19th-century science there is, indeed, an unbridgeable gap.  However, no one who reads the Bible seriously can maintain a purely literalist position.  There are too many lines like Deuteronomy 10:16:  “Circumcise therefore the foreskin of your heart.”  (This line is repeatedly discussed in the Bible, from the Prophets down to Paul’s Epistle to the Romans, which discourses on it at great length.)  And the “Virgin Birth” is hard to square with Jesus’ lineage of “begats” traced through Joseph.  Be that as it may, today we are stuck with the conflict, sometimes in extreme forms, as when Richard Dawkins and the Kansas school board face off.

A conflict of science and philosophy arose too, but stayed mild.  Philosophy, however, fell from guiding the world (through the middle ages) to guiding nations (through the Renaissance and early modern periods) to guiding movements (through the 19th century) to being a game.  By the mid-twentieth century it had some function in guiding science, but had ceased to be a living force in guiding the world.  Economics has replaced it in many countries.  Extremist political ideology—fascism, communism, and religious extremism—has replaced it elsewhere.  Philosophical ethics have thinned out, though the Kantian ethics of Jürgen Habermas and John Rawls have recently been influential.

Mastering Nature

The early concern with “mastery” of nature has been greatly exaggerated in recent environmentalist books.  It was certainly there, but, like the conflict with religion, it was largely a creation of the post-Enlightenment world.  And it was not to last; biology has now shifted its concern to saving what is left rather than destroying everything for immediate profit.

The 19th century was, notoriously, the climactic period for science as nature-mastering, but it was also the age that gave birth to conservation as a serious field of study.  Modern environmentalists read with astonishment George Perkins Marsh’s great book Man and Nature (2003 [1864]).  This book started the modern conservation movement.  One of the greatest works of 19th century science, it profoundly transformed thinking about forests, waters, sands, and indeed the whole earth’s surface.  Yet it is unequivocally committed to mastery and Progress, not preservation.  Marsh forthrightly prefers tree plantations to natural forests, and unquestioningly advocates draining wetlands.  He wished not to stop human management of the world, but to substitute good management for bad management.  His only sop to preservation is an awareness of the truth later enshrined in the proverb “Nature always bats last.”  He knew, for instance, that constraining rivers with levees was self-defeating if the river simply aggraded its bed and eventually burst the banks.

This being said, the importance of elite male power in determining science has been much exaggerated in some of the literature (especially the post-Foucault tradition).  Scientists were a rare breed. More to the point, they were self-selected to be concerned with objective, dispassionate knowledge (even if “useful”), and they had to give up any hope of real secular power to pursue this goal. Science was a full-time job in those days.  So was getting and holding power.

A few people combined the two (usually badly), but most could not.  Scientists and scholars were a dedicated and unconventional breed.  Many, from Spinoza to Darwin, were interested in the very opposite of worldly power, and risked not only their power but sometimes their lives.  (Spinoza’s life was in danger for his religious views, not his lens-making innovations, but the two were not unrelated in that age.  See Damasio 2003.)  Moreover, not everyone in those days was the slave of an insensate ideology.  Thoreau was not alone in his counter-vision of the good.  Certainly, the great plant-lovers and plant explorers of old, from Dioscorides to Rauwolf and Bauhin and onward through Linnaeus and Asa Gray, were not unappreciative of nature.

And even the stereotype of male power is inadequate; many of these sages had female students, and indeed by the end of the 19th century botany was a common female pursuit.  Some of the pioneer botanists of the Americas were women, including incredibly intrepid ones like Kate Brandegee, who rode alone through thousands of miles of unexplored, bandit-infested parts of Mexico at the turn of the last century.

We need to re-evaluate the whole field of science-as-power.  Governments, especially techno-authoritarian ones like Bismarck’s Prussia and the 20th century dictatorships, most certainly saw “science” and technology as ways to assert control over both nature and people.  Scientists usually did not think that way, though more than a few did.  This leads to a certain disjunction.  Even in the area of medicine, where Michel Foucault’s case is strong and well-made (Foucault 1973), there is a huge contrast between medical innovation and medical care delivery.  Medical innovation was classically the work of loners (de Kruif 1926), from Joseph Lister to Maurice Hilleman (developer of the MMR vaccine).  Even the greatest innovators in 19th-century medicine, Robert Koch and Louis Pasteur, worked with a few students, and were less than totally appreciated by the medical establishment of the time.  Often, these loners were terribly persecuted for their innovative activities, as Semmelweis was in Hungary (Gortvay and Zoltán 1968) and Crawford Long, discoverer of anesthesia, in America.  (Dwelling in the obscurantist “Old South,” at a time when black slavery was considered a Biblical command, Long was attacked for thwarting God’s plan to make humans suffer!)  By contrast, medical care delivery involves asserting control over patients.  At best this is true caring, but usually it means batch-processing them for convenience and economy—regarding their humanity merely as an annoyance.  No one who has been through a modern clinic needs a citation for this (but see Foucault 1973).

Science and Ethnoscience, part 1: Science

Monday, August 22nd, 2011


E. N. Anderson

Dept. of Anthropology

University of California, Riverside

Part 1.  Science and Ethnobiology

Science and Knowledge

The present paper questions the distinctions between “science,” “religion,” “traditional ecological knowledge,” and any other divisions of knowledge that may sometimes be barriers in the way of Truth.

I will make this case via my now rather long experience in ethnobiology.  Ethnobiology is the study of the biological knowledge of particular ethnic groups.  It is part of what is now called “traditional ecological knowledge,” TEK for short.  Ethnobiology has typically been a study of working knowledge:  the actual pragmatic and operational knowledge of plants and animals that people bring to their daily tasks.  It thus concerns hunting and gathering, farming, fishing, tree-cutting, herbal medicine, cooking, and other everyday practical pursuits.  Ethnobiological research has focused on how people use, name, and classify the plants, animals and fungi they know.

As such, it is close to economic botany and zoology, to archaeology, and to ethnomedicine.  It is a part of human ecology, the study of how humans interact with their environment.  It overlaps with cultural ecology, the branch of human ecology that concerns cultural knowledge specifically.  Cultural ecology was essentially invented, and the term coined, by Julian Steward (1955).  Steward attended very seriously to political organization, but his earlier students generally did not, which caused his later students to coin the further term “political ecology” (Wolf 1972), which has caught on in spite of some backlash from the earlier students and their own students (Vayda 2008).  Human/cultural/political ecology has produced a huge, fast-evolving, and rather chaotic body of theory (Sutton and Anderson 2009).

Like many of my generation, I was raised in a semi-rural world of farms, gardens, ranches, and craft work.  I learned to shoot, fish, and camp.  Many formative hours were spent on the family farm, a small worked-out cotton farm in a remote part of East Texas.  (My father was raised there, but the family had abandoned it to sharecroppers by the time I came along.)  I learned about all this through actual practice, under the watchful eyes of elders or peers.  Naturally, I learned it much better than I learned classroom knowledge acquired in a more passive way.  Thus I was preadapted to study other people’s working knowledge of biota.

Logic also makes this a good entry point into the study of theoretical human ecology.  It is the most basic, everyday, universal way that humans interact with “nature.”  It is the most direct.  It has the most direct feedback from the rest of the world—the nonhuman realm that is so often out of human control.  The philosopher may meditate on the nonexistence of existence, or on the number of angels that can dance on the point of a pin, but the working farmer or gatherer must deal with a more pragmatic reality.  She must know which plants are the best for food and which will poison her, and how to avoid being eaten by a bar.

In Comte’s words, we need to know in order to predict, and predict in order to be able to act (savoir pour prévoir, prévoir pour pouvoir).

How do we know we know?

For many people, even many scientists, it is enough to say that we see reality and thus know what’s real.  This is the position of “naïve empiricism.” There is no problem telling the real from the unreal, once we have allowed for natural mistakes and learned to ignore a few madmen.  Reality is transparent to us.  The obvious failure of everyone before now to see exactly what’s real and what isn’t is due to their being childlike primitives.  Presumably, the fact that almost half the science I learned as an undergraduate is now abandoned proves that my teachers (who included at least one Nobel laureate) were childlike primitives too.

Obviously this does not work, and the ancient Greeks already recognized that people are blinded by their unconscious heuristics and biases.  Francis Bacon systematized this observation in his Novum Organum (1901/1620).  He identified four “idols” (of the “tribe, den, market, and theatre”), basically cultural prejudices that cause us to believe what our neighbors believe rather than what is true.  Later, John Locke (1979/1697) expanded the sense of limitations by providing a very modern account of cognitive biases and cognitive processing limitations.  The common claim that Locke believed the mind was a “blank slate” and that he was a naïve empiricist is wrong.  He used the expression tabula rasa (blank slate) but meant that people could learn a wide variety of things, not that they did not have built-in information processing limits and biases.  He recognized both, and described them in surprisingly modern ways.  His empiricism, based on careful and close study, involved working to remove the “idols” and biases.  It also involved cross-checking, reasoning, and progressive approximation, among other ways of thought.

Problems with Words

Ethnobiology has normally been concerned with “traditional ecological knowledge,” now shortened to TEK and sometimes even called “tek” (one syllable).  By the time a concept is acronymized to that extent, it is in danger of becoming so cut-and-dried that it is mere mental shorthand.  The time has come to take a longer look.  This paper will not confine itself to “TEK,” whatever that is.  I am interested in all knowledge of environments.  I want to know how it develops and spreads.

Science studies and history of science have made great strides in recent decades, partly through use of anthropological concepts, and in turn have fed back on anthropological studies of traditional knowledge.  The result has been to blur the distinction between traditional local knowledge and modern international science.  Geoffrey Bowker and Susan Star (1999) have produced descriptions of modern scientific classification that sound very much like what I find among Hong Kong fishermen and Northwest Coast Native people.  Bruno Latour (2004, 2005) describes the cream of French scientists thinking and talking very much as Mexican Maya farmers do.  Martin Rudwick, in his epochal volumes on the history of geology, describes great scientists speculating on the cosmos with all the mixture of confusion, insight, genius, and wild guessing that led Native Californians to conclude that their world was created by coyotes and other animal powers.  Early geological speculation was as far from what we believe today as California’s coyote stories.

Similar problems plague the notion of “indigenous” knowledge.  Criticisms of the idea that there is an “indigenous” kind of knowledge, as opposed to some other kind, have climaxed in a slashing attack on the whole idea by Matthew Lauer and Shankar Aswani (2009).  They maintain “it relies on obsolete anthropological frameworks of evolutionary progress” (2009:317).  This is too strong—no one now uses those frameworks.  The term “indigenous” has a specific legal meaning established by the United Nations.  However, there is a little fire under Lauer and Aswani’s smoke.  The term “indigenous knowledge” does tend to imply that the knowledge held by “indigenous” people is somehow different:  presumably more local, more limited, and more easy to ignore.  Some, especially biologists, use this to justify a view that non-indigenous people (whatever that means) somehow manage to have a wider, better vision.

Similarly, the term “traditional ecological knowledge” has been criticized for implying that said knowledge is “backward and static….  Much development based on TEK thus continues to implement homogenous Western objectives by coopting and decontextualizing selected aspects of knowledges specific to unique places, eliminate their dynamism, and focus more than anything else on negotiating the terms for their commodification” (Sluyter 2003, citing but rather oversimplifying Escobar 1998).  Most of us who study “TEK” do not commit these sins.  But many people do, especially non-anthropologists working for bureaucracies.  Their international bureaucratic “spin” has indeed made the term into a very simplistic label (Bicker et al. 2004, and see below).

The implication of stasis is particularly unfortunate.  Traditional ecological knowledge, like traditional folk music, is dynamic and ever-changing, except in dying cultures.  Many people understand “traditional” to mean “unchanged since time immemorial.”  It does not mean that in normal use.  “Traditional” Scottish folk music is pentatonic and has certain basic patterns for writing tunes (syncopation at specific points, and so on).  New Scottish tunes that follow these traditions are being written all the time, and they are thoroughly traditional though completely new.   Similarly, traditional classification systems can and do readily incorporate new crops and animals.  Traditional Yucatec Maya knowledge of plants is still with us, but over 25 years I have seen their system expand yearly to accommodate new plants.

People are notoriously prone to invent new traditions (Hobsbawm and Ranger 1983).  “Tradition,” more often than not, means “my version of what Grandpa and Grandma did,” not “my faithful reproduction of what my ancestors did in the Ice Age.”

And, of course, modern international science is hardly free from traditions!  “Science” is an ancient Greek invention, and the major divisions—zoology, botany, astronomy, and so on—are ancient Greek in name and definition.  Theophrastus’ original “botany” text of the 4th century BC reads surprisingly well today; we have added evolution and genetics, but even the scientific names of the plants are often the same as Theophrastus’, because his terms continued in use by botanists.  Coining scientific names today is done according to fixed and thoroughly traditional rules, centuries old, maintained by international committees.  Species names of trees, for instance, are normally feminine, because the ancient Romans thought all trees had female spirits dwelling in them.  Thus even trees with masculine-sounding genus names have feminine species names (e.g. Pinus ponderosa, Quercus lobata).  Traditions of publication, laboratory conduct, institutional organization, and so on are more recent, but are older than many of the “traditional” bits of lore classed as “TEK.”

It is no more surprising to find that Maya change and adapt with great speed than to find that laboratory chemists use the same paradigms and much of the same equipment that Robert Boyle used more than 300 years ago.

Finally, the differences between traditional (or “indigenous”) knowledges and modern science are not obviously greater than the differences between long-separated traditional cultures.  Maya biological knowledge is a great deal like modern biology—enough to amaze me on frequent occasions.  Both are very different from the knowledge system of the Athapaskan peoples of the Yukon.   Similarly, the conduct of science in the United States is quite different from that in China or Japan.  National laboratory cultures have been the subject of considerable analysis (see e.g. Bowker and Star 1999; Latour 2005; Rabinow 2002).  And modern sciences differ in the ways they operate.  Paleontology is not done the way theoretical physics is done (Gould 2002).  Thus Latour (2004) and many others now speak of “sciences” rather than “science,” just as Peter Worsley (1997) wrote of “knowledges” in discussing TEK and popular lore.

If one looks at high theory, traditional knowledge and modern science may be different, but if one looks at applications, they are the same enterprise:  a search for practical and theoretical knowledge of how everything works.  Similarly, if one looks at discovery methodology, traditional ecological knowledge and formal mathematical theory seem very different indeed, but traditional and contemporary ecology or biology are much more alike.

I can only conclude that instead of speaking of “ethnoscience,” “modern science,” “traditional knowledge,” and “postmodern knowledge,” we might just as well say “sciences” and “knowledges” and be done with it.

Therefore, pigeonholing TEK in order to dismiss it is unacceptable (Nadasdy 2004).  By the same token, bureaucratizing science, as “Big Science” and overmanaged government agencies are doing now, is the death of science.   As Michael Dove says:  “By problematizing a purported division between local and extralocal, the concept of indigenous knowledge obscures existing linkages or even identities between the two and may privilege political, bureaucratic authorities with a vested interest in the distinction (whether its maintenance or collapse).”  (Dove 2006:196.)

Problems with Projecting the “Science” Category on Other Cultures

A much more serious problem, often resulting from such bureaucratization, has been the tendency to ignore the “religious” and other beliefs that are an integral part of these knowledge systems.  This is not only bad for our understanding; it is annoying, and sometimes highly offensive, to the people who have the knowledge.  Christian readers might well be offended by an analysis of Holy Communion that confined itself to the nutritional value of the wine and cracker, and implied that was all that mattered.  Projecting our own categories on others has its uses, and for analytic and comparative purposes is often necessary, but it has to be balanced by seeing them in their own terms.  This problem has naturally been worse for comparative science that deliberately overlooks local views (Smith and Wobst 2005; also Nadasdy 2004), but has carried over into ethnoscience.

On the other hand, for analytic reasons, we shall often want to compare specific knowledge of—say—the medical effects of plants.   Thus we shall sometimes have to disembed empirical scientific knowledge from spiritual belief.  If we analyze, for instance, the cross-cultural uses of Artemisia spp. as a vermifuge, it is necessary to know that this universally recognized medicinal value is a fact and that it is due to the presence of the strong poison thujone in most species of the genus.  Traditional cultures may explain the action as God-given, or due to a resident spirit, or due to magical incantations said over the plant, or may simply not have any explanation at all.  However, they all agree with modern lab science on one thing:  it works.

We must, then, consider four different things:  the knowledge itself; the fraction of it that is empirical and cross-culturally verifiable; the explanations for it in the traditional cultures in question; and the modern laboratory explanations for it.  All these are valuable, all are science, and all are important—but for different reasons.  Obviously, if we are going to make use of the knowledge in modern medicine, we will be less interested in the traditional explanations; conversely, if we are explicating traditional cultural thought systems, it is the modern laboratory explanations that will be less interesting.

The important sociological fact to note is the relative independence or disembedding of “science,” in the sense of proven factual knowledge, from religion.  Seth Abrutyn (2009) has analyzed the ways that particular realms of human behavior become independent, with their own organization, personnel, buildings, rules, subcultures, and so on.  Religion took on such an independent institutional life with the rise of priesthoods and temples in the early states.  Politics too developed with the early states, as did the military.  Science became a truly independent realm only much later.  Only since the mid-19th century has it become organizationally and intellectually independent of religion, philosophy, politics, and so on.  It is not wholly independent yet (as science studies continually remind us).  However, it is independent enough that we can speak of the gap between science and religion (Gould 1999).  This gap was nonexistent in traditional cultures—including the western world before 1700 or even 1800.  Many cultures, including early modern European and Chinese, had developed a sense of opposing natural to supernatural or spiritual explanations, but there were no real separate institutional spheres based on the distinction.

However, we can back-project this distinction on other cultures for analytic reasons—if we remember we are doing violence to their cultural knowledge systems in the process.  There are reasons why one sometimes wants to dissect.

Inclusive Science

I use “science” to cover systematic human fact-finding about the world, wherever done and however done.  Traditional people all include what we moderns call “supernatural” factors in their explanations.  Thus, we have to take some account of such ideas in our assessment of their sciences (Gonzalez 2001).  This is obviously a very broad and possibly a bit idiosyncratic usage, but it allows comparison.  It is imperfect, but alternatives seem worse.

Science is about something—specifically, about knowing more, and perhaps improving the human condition in the process.  The appropriate tests are therefore outcome measures, which are usually quite translatable and comparable between cultures.

I might prefer “sciences,” following Latour (2004) and Eugene Hunn (2008), but I share with Joseph Needham a dedication to the idea of a panhuman search for verifiable knowledge. Since the first hominid figured out how to use fire or chip rock, science has been a human-wide, cumulative venture, responsible for many of the greatest achievements of the human spirit.  Yet the traditions and knowledge systems that feed into it are very different indeed.  Science is a braided river, or, even more graphically, a single river made up of countless separate water molecules.

Science gives us sciences, but is one endeavor.  Attempts to confine scientific methodology to a single positivist bed have not worked, and modern sciences are institutionalized in separate departments, but neither of these things destroys the basic reality and unity of the set of practices devoted to knowledge-seeking.  Even today, in spite of the divergence of the sciences, we have Science magazine and “big science” and a host of other recognitions of a basic system.

All narrow definitions are challenged by the fact that the ancient Romans invented the term “science” (scientia, “things known,” from scire “know”).

The Greek word for science was episteme (shades of Foucault 1970), and the more general words for “knowledge” were sophos “knowledge” and sophia “wisdom, cleverness.”  Sciences, however, were distinguished by the ending –logia, from logos, “word.”  Simpler fields that were more descriptive than analytic ended in –nomia, from nomos, “law, arrangement.”  It is interesting that astrology was a science but astronomy a mere “star-arranging”!  Another ending was –urgia “handcraft work,” as in chirurgia, the word that became “surgery” in English; it literally means “handwork.”

The Greeks worked terribly hard on most of what we now think of as “the sciences,” from botany to astronomy.  In the western world, they get the major credit for separating science from other knowledges.  Aristotle, in particular, kept his accounts of zoology and physics separate from the more speculative material he called “metaphysics.”  (At least, he probably called it that, though some have speculated that his students labeled that material, giving it a working term that just meant “the stuff that came after [meta, ‘beyond’] the physics stuff in his lectures.”)

The Greeks also gave us philosophia, “love of wisdom”—the higher, rigorous attention to the most basic and hard-to-solve questions.  This word was given its classic denotation and connotation by Plato (Hadot 2002).  They used techne for (mere) craft.  Yet another kind of knowledge was metis—sharp dealing, resourcefulness, street smarts.  The quintessential metic was Odysseus, and east Mediterranean traders are still famous for this ability.

The ancient Greeks (at least after Aristotle) contrasted science, an expert and analytical knowledge of a broad area, with mere craft, techne. This has left us today with an invidious distinction between “science” and “technology” (or “craft”).  The Greeks were less invidious about it.  Arts were usually mere techne, but divine inspiration—the blessing of the Muses that gave us Homer and Praxiteles—went beyond that.  We now think of the Muses as arch Victorian figures of speech, but the ancient Greeks took them seriously.

Allowing the Greeks and Romans their claim to having science makes it impossible to rule out Egyptian and “Chaldean” (Mesopotamian) science, which the Greeks explicitly credited.  Then we have to admit, also, Arab, Persian, and Chinese science, which continued the Greek projects (more or less).  Privileging modern Euro-American science is patently racist.  Before 1200 or 1300 A.D., the Chinese were ahead of the west in most fields.  We can hardly shut them out.  (True, they had no word for “science,” but the nearest equivalent, li xue “study of basic principles,” was as close to our “science” as scientia was at the same point in time.)  Once we have done that, the floodgates are open, and we cannot reasonably rule out any culture’s science.

Words for “science” and scientists in English go back at least to the Middle Ages; the OED attests “science” from 1289.  The word “scientist” was not invented till W. Whewell coined it in 1833, but it merely replaced earlier words: “savant,” from the French, and the Latinate coinage “scient,” used as a noun or adjective.  These words had been around since the 1400s, though “scient” had become obsolete.

Thus, I define “science” as systematic, methodical efforts to gain pragmatic and empirical knowledge of the world and to explain this by theories (however wildly wrong the latter may now appear to be).  Paleolithic flint-chipping, Peruvian llama herding, and Maya herbal medicine are sciences, in so far as they are systematized, tested, extended by experience, and shared.  The contrast is with unsystematized observation, random noting of facts, and pure speculation.  In this I agree with scholars of traditional sciences such as Roberto Gonzalez (2001) and Eugene Hunn (2008; see esp. pp. 8-9), as well as Malinowski, who considered any knowledge based on experience and reason to be science, and thus found it everywhere.

The boundaries are vague, but this is inevitable.  “Science” however defined is a fuzzy set.  Even modern laboratory science grades off into rigorous field sciences and into speculative sciences like astrophysics.

Science is based on theories, which I define as broad ideas about the world that generate predictions and explanations when applied to pragmatic, empirical engagement with particular environments.  This allows me to consider folk views such as the beliefs supporting shamanism along with modern scientific theories.

On the other hand, in small-scale traditional cultures, cutting off “science” creates an artificial distinction.  Such societies do not separate science from other knowledge, including what we in English would call “religion” or “spiritualism,” and analysis does violence to this.  It is worth doing anyway for some comparative and analytical purposes, but most of the time I find it preferable to talk about “knowledge.”  For most purposes, I am much more interested in understanding traditional knowledge systems holistically. For some purposes, however, we need to analyze, and all we can do is live with the violence, remembering that “analysis” literally means “splitting up.”

Chinese, Arab, Persian, and Indian civilizations, and probably Maya and Aztec ones, did have self-conscious, cumulative traditions of fact-seeking and explanation-seeking.  The Near Eastern cultures actually based their science on the Greeks, and even used the Greek words.  Both “science” and “philosophy,” variously modified, were taken into Arabic and other medieval Near Eastern languages.  The Chinese were farther afield, as will appear below, but Joseph Needham was clearly right in studying their efforts as part of the world scientific tradition.  However, it is also necessary to study the ways that traditional Chinese knowledge and knowledge-seeking was not like western sciences.  I will argue at length, below, that both Needham and his critics are right, and that to understand Chinese knowledge of the environment we must analyze it both on its own terms and as scientific practice.

Finally, there is an inevitable tendency to back-project our modern views of the world on earlier days.  Astrology and alchemy seemed as reasonable in the Renaissance as astronomy and chemistry.  There was simply no reason to think that changing dirt into gold was any harder than changing iron ore into iron.

There was even evidence that it could work.  Idries Shah (1956) gives an account by an observant traveler of an alchemist changing dirt to gold in modern central Asia.  The meticulous account makes it clear that he was actually separating finely disseminated gold out of alluvial deposits, but he was evidently quite convinced that he was really transforming the dirt.  More recently, reconstructed alchemical experiments have turned silver yellow, though only superficially.  Apparently alchemists were fooled into thinking this was a real change, or at least could be developed into one (Reardon 2011).  Scientists are thus now studying alchemy to see just what those early chemists were doing.  They were not just wasting their time.  They had high hopes and were not unreasonable.  Ultimately they proved wrong, and duly hung up their signboards.  Such is progress—and they were not the last to have to give up on a failed project; we do it every day now.

The old “Whig history” that starts with Our Perfect Modern Situation and works back—seeing history as a long battle of Good (i.e. what led to us perfect moderns) vs. Evil—is long abandoned, but we cannot avoid some presentism (Mora-Abadía 2009).  Obviously, even my use of the term “science” for TEK is something of a presentist strategy.  Thus “science” is a rather arbitrary term.  I shall use it, with some discomfort, for that part of knowledge which claims to be preeminently dedicated to learning empirical and pragmatic things about environments and about lives.

Overly Restrictive Definitions of “Science”

I strictly avoid using “science” to mean solely lab-based activities.  I follow the Greeks in using it for Aristotle’s legacy, not just for the world of case/control, hypothesis-generation, hypothesis-testing, and formal theory.  This form of science was canonized by Ernst Mach and others in the late 19th century.  This usage is inadequate for many reasons.  Among other things, it relegates Aristotle, Galen, Tao Hongjing, Boyle, Li Shizhen, Harvey, Newton, Linnaeus, and even Lyell and Darwin to the garbage can.  Mach certainly did not want this; he was trying to improve scientific practice, not deny his heritage.

We can hardly balk at the errors of traditional societies.  Much of the science I learned as an undergraduate is now known to be wrong:  stable continents, Skinnerian learning theory, “climax communities” in ecology, and so on.  We allow them into our histories of science, along with phlogiston, ether, humoral medicine, the mind-body dichotomy, and other wrong theories once sacrosanct in Western science.

In my field work in Hong Kong, I found that many Chinese explained earthquakes as dragons shaking in the earth.  Other Chinese explained earthquakes as waves caused by turbulent flow of qi (breath, or vital energy) in the earth.  The Chipewyans of north Canada explain earthquakes as the thrashing of a giant fish (Sharp 1987, 2001).  When I was an undergraduate, most American geologists did not yet accept the fact that earthquakes are usually caused by plate tectonics, and instead invoked scientific explanations just as mystical and factually dubious as the dragons and fish.  They blamed earthquakes on the earth shrinking, or the weight of stream sediments—anything except plate tectonics (Oreskes 1999, 2001).    One should never be too proud about inferred variables inside a black box.

Unlike emotions, which have clear biological foundations, scientific systems can be seen as genuinely culturally constructed from the ground up.  Chimpanzees make termite-sticks and leaf cups, but the gap between these and space satellites is truly far greater than the gap between chimp rage and human anger.  It is true that chimps in laboratory situations can figure out how to put sticks together to get bananas, and otherwise display the basics of insight and hypothesis-testing (de Waal 1996; Kohler 1927), but they do not invent systematic and comprehensive schemes for understanding the whole world.  People, including those in the simplest hunter-gatherer societies, all do.

Many historians restrict “science” to the activity popularized in western Europe by Galileo, Bacon, Descartes, Harvey, Boyle, and others in the 16th and 17th centuries.  This usage is considerably more reasonable.  The “Scientific Revolution” involved a really distinctive moment or Foucaultian “rupture” that led to new worlds.  However, much excellent work has recently cut it down to size.  In fact, we now know that calling it a “revolution” drew a somewhat arbitrary line between these sages and their immediate forebears.  They were self-consciously “Aristotelian” against the “Platonism” of said forebears, but this looks very much less distinctive when one considers Arab and Persian science.  Aristotelianism had come to Europe from the Arabs in the 12th and 13th centuries, and the “revolution” was really a slow evolution (Gaukroger 2006).

A valuable term for the unified tradition that embraces European science since 1500 and world science since 1700 or 1800 is “rapid discovery science” (Collins 1998).  Rapid discovery science is very different from traditional science, but the difference is one of degree at least as much as of  kind.

The period from Galileo to 1800 may be defined as early modern science.  Unlike both its primarily Near Eastern ancestors and its post-1800 descendant, modern international science, it was largely a European enterprise.  Many criticisms have been made of its Eurocentric biases.  It did indeed display a rather distinctive and basically European worldview:  dualistic, excessively rational, dismissing or belittling the rest of the world, and more than somewhat sexist.  However, as we shall see, it depended in critical ways on nonwestern science for both data and ideas.  It was never isolated and could never really ignore the rest of the world’s knowledges.

A common terminological use is to restrict “science” to modern laboratory-based scientific practice, and the most closely similar field sciences.  This science develops formal theories (preferably stated in mathematical terms), generates hypotheses from the theories, tests these according to a formal methodology, discusses the results, and awaits cross-confirmation by other labs.  The problem with this usage is that it rules out virtually all science done before the 19th century.  In the early 20th century, Viennese logicians attempted to theorize such science as an exceedingly formal, even artificial, procedure, with very strict rules of verification or—more famously—“falsification” (Popper 1959).

But this rules out not only all earlier science but even most science done today.  Field science can’t make the grade.  As Stephen Jay Gould (e.g. 1999) often pointed out, paleontology does not qualify.  We can hardly experiment in the lab with Tyrannosaurus rex.  Indeed, historians and social scientists (such as Thomas Kuhn, 1962) have repeatedly pointed out that few lab men and women follow their own principles—they go with hunches, have accidents, and so on.  The most hard-core positivist scientists admit this happily in their memoirs (see e.g. Skinner 1959).  Thus, I shall not use “science” in the above sense.  I shall use the term modern laboratory science for the general sort of science idealized by the positivists, but without losing sight of the fact that even it does not follow positivist guidelines.

However, no one can deny that there was a general movement in the 19th century to make science and the sciences more self-conscious, more rigorous, more clearly divided, and more methodologically consistent (see e.g. Rudwick 2005 on geology.)  Contrary to much blather, this was not a “European” enterprise.  It already involved people on both American continents, and it very soon included Asians.  Modern medicine, in particular, owes as much to Wu Lien-teh for his studies of plague and to Kiyoshi Shiga for his studies of dysentery as it does to any but the greatest of the European doctors.  (Shiga won what may be the least enviable immortalization in history, as the namesake of shigellosis.)  Moreover, many of the European founders did their key work in the tropics, as in the pathbreaking work of Patrick Manson and Ronald Ross on malaria and Walter Reed on yellow fever.  Therefore, I will use the term modern international science to refer to the new, self-conscious enterprise that began after 1800.

As Arturo Escobar says, “…an ensemble of Western, modern cultural forces…has unceasingly exerted its influence—often its dominance—over most world regions.  These forces continue to operate through the ever-changing interaction of forms of European thought and culture, taken to be universally valid, with the frequently subordinated knowledges and cultural practices of many non-European groups throughout the world” (Escobar 2008:3).  Escobar, among many others, speaks of “decolonializing” knowledge, and I hope to contribute to that.

Euro-American rational science arose in a context of technological innovation, imperial expansion, power jockeying (as Foucault reminded us), political radicalism, and economic dynamism.  We now know, thanks to modern histories and ethnographies of science, that European science was and is a much messier, more human enterprise than most laypersons think.  The cool, rational, detached scientist with his (sic!) laboratory, controlled experiments, and exquisitely perfect mathematical models is rare indeed outside of old-fashioned hagiographies of scientists.  Rarer still is the lackey of patriarchal power, creating phony science simply to enslave.  (Rare, but far from nonexistent; one need think only of the sorry history of racism and “scientific” sexism, up to and including Lawrence Summers’ famous dismissal of women’s math abilities.  One could always argue that Summers is an economist, not a scientist.)

More nuanced conclusions emerge from the history of science (as told by e.g. Martin Rudwick 2007, 2008) and the ethnography of science (e.g. Bruno Latour 2004, 2005).  These show modern international science as a very human enterprise.  Most of us who have worked in the vineyard can only agree.  (I was initially trained as a biologist and have done some biological research, so I am not ignorant of the game.)  These accounts bring modern science much closer to the traditional ecological knowledge of the Maya, the Haida, or the Chumash.  I have no hesitation about using the word “science” to describe any and all cultures’ pragmatic knowledge of the environment (see below, and Gonzalez 2001).

One can often infer the theory behind traditional or early empirical knowledge.  Sometimes it is quite sophisticated, and one wishes the writer had been less modest.  Therefore, a solid, factual account should not be dismissed because it “doesn’t speak to theory issues” until one has thought over the implications of the author’s method and conclusion.  This is as true if the account comes from a Maya woodsman or Chinese herbalist as it is when the account comes from a laboratory scientist.

We thus need a definition of “science” broad enough to include “ethnoscience” traditions.  The accumulated knowledge and wisdom of humanity is being lost and neglected more than ever, in spite of the efforts of the tiny and perhaps dwindling band of anthropologists who care about it.  The fact that a group does not have a “thing” called “science,” and even the fact that the group believes in mile-long fish and dinosaur-sized otters (as do the Chipewyan of Canada; Sharp 2001), does not render their empirically verifiable knowledge unscientific.

Considering all folk explanations, and classifying the traditional ones as “religion,” Edward Tylor classically explained magic and religion as, basically, failed science (Tylor 1871).  He came up with a number of stories explaining how religious beliefs could have been reasonably inferred by fully rational people who had no modern laboratory devices to make sense of their perceptions.  Malinowski’s portrayal of religion as emotion-driven was part of a general reaction against Tylor in the early 20th century.

Indeed, Tylor discounted emotion too much.  On the whole, however, there is still merit in Tylor’s work.  There is also merit in Malinowski’s.  Science, like religion and magic, partakes of the rational, the emotional, and the social.

Basic Science:  Beyond Kuhn and Kitcher

“I can’t remember a single first formed hypothesis which had not after a time to be given up or greatly modified.  This has naturally led me to distrust greatly deductive reasoning in the mixed sciences.”  (Darwin, from his notebooks, quoted in Kagan 2006:76)

All my life, I have been fascinated with scientific knowledge—that is, knowledge of the world derived from deliberate, careful, double-checked reflection on experience, rather than from blind tradition, freewheeling speculation, or logic based on a priori principles.

Thomas Kuhn’s classic The Structure of Scientific Revolutions (1962, anticipated by the brilliant work of Ludwik Fleck on medical history) concentrated on biases and limits within scientific practice.  Kuhn was attending to real problems with science itself.  This contrasts with, say, critiques of racism and sexism, which are necessary and valuable but were already anticipated by Francis Bacon’s critiques of bias-driven pseudoscience (Bacon 1901, orig. ca. 1620).

From all this arose a great change in how “truth” is established.  Instead of going for pure unbiased observation, or for falsification of errors, we now go for “independent confirmation.”  David Kronenfeld (personal communication, 2005) adds:  “Science itself is also an attitude—probing, trying to ‘give nature a chance to say no,’ and so forth….science is not a thing of individuals but is a system of concepts and of people.”

A result is not counted, a finding is not taken seriously, unless it is cross-confirmed, preferably by people working in a different lab or field and from a different theoretical framework.  I certainly don’t believe my own findings unless they are cross-confirmed.  (See Kitcher 1993; for much more, Martin and MacIntyre 1994.)

Indeed, the new face of positivism demands what is called VV&A:  “Verification (your model correctly captures the processes you are studying), validation (your code correctly implements your model) and authentication (some group or individual blesses your simulation as being useful for some intended purpose)” (Jerrold Kronenfeld, email of Jan. 7, 2010).  This is jargon in the “modeling” world, but it applies across the board.  Any description of a finding must be checked to see that it is correct, that the descriptions of it in the literature are accurate, and that it advances knowledge, strengthens or qualifies theory, or is otherwise useful to science.

In short, science is necessarily done by a number of people, all dedicated to advancing knowledge, but all dedicated to subjecting every new idea or finding to a healthy skepticism.  We now see science as a social process.  Truth is established, but slowly, through debate and ongoing research.

Naïve empiricist agendas assume we can directly perceive “reality,” and that it is transparent—we can know it just by looking.  We can tell the supernatural from the natural.  This is where we begin to see real problems with these agendas, and the whole “modernist program” that they may be said to represent.  Telling the supernatural from the natural may have looked easy in Karl Popper’s day.  It seemed less clear before him, and it seems less clear today.

We have many well-established facts that were once outrageous hypotheses:  the earth is an oblate spheroid (not flat), blood circulates, continents drift, the sun is only a small star among billions of others.  We also have immediate hypotheses that directly account for or predict the facts.  We know an enormous amount more than we did ten years ago, let alone a thousand years, and we can do a great deal more good and evil, accordingly.

However, science has moved to higher and higher levels of abstraction, inferring more and more remote and obscure intervening variables.  It now presents a near-mystical cosmology of multidimensional strings, dark matter, dark energy, quantum chromodynamics, and the rest.  Even the physicist Brian Greene has to admit:  “Some scientists argue vociferously that a theory so removed from direct empirical testing lies in the realm of philosophy or theology, but not physics” (Greene 2004:352).

To people like me, unable to understand the proofs, modern physics is indeed an incomprehensible universe I take on faith—exactly like religion.  The difference between it and religion is not that physics is evidence-based.  Astrophysics theories, especially such things as superstring and brane theory, are not based on direct evidence, but on highly abstract modeling.  The only difference I can actually perceive is that science represents forward speculation by a small, highly trained group, while religion represents a wide sociocultural communitas. Religion also has beautiful music and art, as a result of the communitas-emotion connection, but I suppose someone somewhere has made great art out of superstring theory.

The universe is approximately 96% composed of dark matter and energy—matter and energy we cannot measure, cannot observe, cannot comprehend, and, indeed, cannot conceptualize at all (Greene 2004).  We infer its presence from its rather massive effects on things we can see.  For all we know, dark matter and energy are God, or the Great Coyote in the Sky (worshiped by the Chumash and Paiute).

On a smaller and more human scale, we have the “invisible hand” (Smith 1776) of the market—a market which assumes perfect information, perfect rationality, and so on, among its dealers.  The abstract “market” is no more real than the Zapotec Earth God, and has the same function:  serving as black-box filler in an explanatory model.  Of course Smith was quite consciously, and ironically, using a standard theological term for God.

The tendency to use “science” to describe truth-claims and “religion” to describe untestable beliefs is thus normative, not descriptive.  It is a rather underhanded attempt to confine religion to the realm of the untestable and therefore irrelevant.  (This objection was made by almost every reviewer of Gould 1999.)

We have abstract black-box mechanisms in psychology (e.g. Freudian dynamic personality structure), anthropology (“culture”), and sociology (“class,” “discourse,” “network”).  Darwin’s theory of evolution had a profoundly mysterious black box, in which the actual workings of selection lay hidden, until modern genetics shone light into the box in the 1930s and 1940s.  Geology similarly depended on mysticism, or at least on wildly improbable mechanisms, to get from rocks to mountains, until continental drift showed the way.  Human ability to withstand disease was for long a totally black box.  The usual wild speculations filled it until Elie Metchnikoff’s brilliant work revealed the immune-response system, and gave us all yogurt into the bargain.  It was Metchnikoff who popularized yogurt as a health food, having seen that Bulgarian peasants ate much of it and lived long.

At present, organized “science” in the United States is full of talk about “complex adaptive systems” that are “self-organizing” and may or may not have an unmeasurable quality called “resilience.”  They may be explained by “chaos theory.”  All this is substantially mystical, and sometimes clearly beyond the pale of reality; no, a butterfly flapping in Brazil cannot cause a tornado in Kansas, by any normal meaning of the word “cause.”  “Self-organizing” refers to ice crystals growing in a freezing pool, ecological webs evolving, and human communities and networks forming—as if one could explain all these by the same process!  In fact, they are simply equated by a singularly loose metaphor.

When traditional peoples infer things like superstrings and self-organizing systems, we label those inferences “beliefs in the supernatural.”  The traditional people themselves never seem to do this labeling; they treat spirit forces and spirit beings as part of their natural world.  This is exactly the same as our treating dark energy, the market, and self-organization as “natural.”

Surely if they stopped and thought, the apologists for science would recognize that some unpredictable but large set of today’s inferred black-box variables will be a laughingstock 20 years from now—along with phlogiston, luminiferous ether (Greene 2004), and the angle of repose.

More:  they would have to admit that a science that is all correct and all factually proved out is a dead science!  Science is theories and hypotheses, wild ideas and crazy speculation, battles of verification and falsification.  Facts (whatever they are) make up part of science, but in a sense they are but the dead residue of science that has happened and gone on.  (See Hacking 1999; Philip Kitcher 1993.  These writers have done a good job of dealing with the fact that science is about truth, but is ongoing practice rather than final truth.  See Anderson 2000 for further commentary on Hacking.)

The history of science is littered with disproved hypotheses.  Mistakes are the heart and soul of science.  Science progresses by making guesses (hopefully educated ones) about the world, and testing them.  Inevitably, if these guesses are specific and challenging enough to be interesting, many of them will be wrong.  This is one of the truths behind Karl Popper’s famous claim that falsification, not verification, is the life of science (Popper 1959).

Science is not about established facts.  Established, totally accepted truth may be a result of science, but real science has already gone beyond it into the unknown.  Science is a search.

Premodern and traditional sciences made the vast majority of their errors from assuming that active and usually conscious agents, not mindless processes, were causal.  If they did not postulate dragons in the earth and gods in the sky, they postulated physis (originally a dynamic flux that produced things, not just the physical world), “creative force” (vis creatrix), or the Tao.

Today, most errors seem to come not from this but from three other sources.

First, scientists love, and even need, to assume that the world is stable and predictable.  This leads them into thinking it is more simple and stable than it really is.  Hence motionless continents (Oreskes 1998), Newtonian physics with its neat predictable vectors, climax communities in ecology, maximum sustainable yield theory in fisheries, S-R theory in psychology, phlogiston, and many more.

Second, scientists are hopeful, sometimes too much so.  From this come naïve behaviorism, from a hope for the infinite perfectibility of humanity (see Pinker 2003); humanistic psychology (with the same fond hope); astrology; manageable “stress” as causing actually hopeless diseases (Taylor 1989); and the medieval Arab belief that good-tasting foods must be good for you (Levey 1966).

Third, some scientists like to take out their hatreds and biases on their subjects, and pretend that their fondest hates are objective truth.  This corrupted “science” gave us racism, sexism, and the old idea that homosexuality is “pathological.”  Discredited “scientific” ideas about children, animals, sexuality in general (Foucault 1978), and other vulnerable entities are only slightly less obvious.

It gave us the idea (now, I hope, laid definitively to rest) that nonhuman animals are mere machines that do not feel or think.  It gives us the pathologization of normal behavior.  Much or most diagnosed ADHD in the United States, for instance, is clearly not real ADHD; other countries have only about 10% of our rate.  Most extreme of all ridiculous current beliefs, and thus most popular of all, is the idea that people are innately selfish, evil, violent, or otherwise horrific, and only a thin veneer of culture holds them in check.  This has given us Hobbes’ state of nature, Nietzsche’s innate will to power, Freud’s id, Dawkins’ selfish gene, and the extreme form of “rational self-interest” that assumed people act only for immediate selfish advantage.  Three seconds of observation in any social milieu (except, perhaps, a prison riot) would have disproved all this, but no one seemed to look.

Given all the above, critics of science have shown, quite correctly, that all too much of modern “science” is really social bias dressed up in fancy language.

An issue of concern in anthropology is the ways that, in modern society, some mistaken beliefs are classified as “pseudoscience,” some as “religion,” and some merely as “controversial/inaccurate/disproved science.”   In psychology, parapsychology is firmly in the “pseudoscience” category, but racism (specifically, “racial” differences in IQ) remains “scientific,” though equally disproved and ridiculous.  Freudian theory, now devastated by critiques, is “pseudoscience” to many but is “science”—even if superseded science—to many others.  It is obvious that such labels are negotiable, and are negotiated.

The respectability and institutional home of the propounder of a theory is clearly a major determinant.  A mistake, if made at Harvard, is science; the same mistake made outside of academia is pseudoscience.  Pseudoscience is validly used for quite obvious shucks masquerading as science, but nonsense propounded by a Harvard or Stanford professor is all too apt to be taken seriously—especially if it fits with popular prejudices.  One recalls the acceptance as “science” of the transparently ridiculous race psychology of Stanford professor Thomas Jukes and Harvard professor Richard Herrnstein (see Herrnstein and Murray 1994, where, for instance, the authors admit that Latinos are biologically diverse, mixed, and nonhomogeneous, and then go right on to assign them a racial IQ of 89).

All this is not meant to give any support to the critics of science who claim “it” (whatever “it” is) can only be myth or mere social storytelling.  It is also not meant to claim that traditional knowledge is as well-conceived and well-verified as good modern science.  It is meant to show that traditional knowledge-seeking and modern science are the same enterprise.  Let us face it:  modern science does better at finding abstruse facts and proving out difficult causal chains.  We now know a very great deal about what causes illness, earthquakes, comets, and potatoes; we need not appeal to witchcraft or the Great Coyote.  But the traditional peoples were not ignorant, and the modern scientists do not know it all, so we are all in the same book, if not always on the same page.

Thus, science is the process of coming to general conclusions that are accurate enough to be used, on the basis of the best evidence that can be obtained.  Inevitably, explanatory models will be developed to account for the ways the facts connect to the conclusions, and these models will often be superseded in due course; that is how science progresses.

“Best evidence” is a demanding criterion, but not as demanding as “absolute proof.”  One is required to do the best possible—use appropriate methods, check the literature, get verification by others using other models or equipment.  Absolute proof is more than we can hope for in this world (Kitcher 1993).

Purely theoretical models provide a borderline case.  Even when they cannot be tested, they may qualify as science in many areas (e.g. theoretical astrophysics, where experimental testing is notoriously difficult).  Fortunately, they are usually testable with data.

The need to test hypotheses with hard evidence does not rule out the study of history.  Archaeological finds showed that the spice trade of the Roman Empire was indeed extensive.  This validated J. Miller’s hypothesis of extensive spice trade through the Red Sea area (Miller 1969), and invalidated Patricia Crone’s challenge thereto (Crone 1987).  We are, hopefully, in the business of developing Marx’ “science of history,” as well as other human sciences.

We are also now aware that “mere description” isn’t “mere.”  It always has some theory behind it, whether we admit it or not (Kitcher 1993; Kuhn 1962).  Even a young child’s thoughts about the constancy of matter or the important differences between people and furniture are based on partially innate theories of physics and biology (see e.g. Ross 2004).  Thus, traditional ecological knowledge can be quite sophisticated theoretically, though lacking in modern scientific ways of stating the theories in question.

Science and Popular “Science”

It thus appears that science is indeed a social institution.  But what kind of social institution is it?  Four different ones are called “science.”

First, we have the self-conscious search for knowledge—facts, theories, methodologies, search procedures, and knowledge systems.  This is the wide definition that allows us to see all truth-seeking, experiential, verification-conscious activities as “science,” from Maya agriculture to molecular genetics.  This can be divided into two sub-forms.  First, we can examine and compare systems as they are, mistakes and all—taking into account Chinese beliefs about sacred fish, Northwest Coast beliefs about bears that marry humans, Siberian shamans’ flights to other worlds, early 20th century beliefs in static continents, and so on.  Second, we can also look at all systems in the cold light of modern factual analysis, dismissing alike the typhoon-causing sacred fish and the tornado-causing Amazonian butterfly.  Fair is fair, and an international standard not kind to sacred fish cannot be merciful to exaggerated and misapplied “western” science either.

Second, it is “what scientists do”—not things like breathing and eating, but things they do qua scientists (e.g. Latour 2004).  This would include not only truth-seeking but a lot of grantsmanship, nasty rivalries, job politics, and even outright faking of data and plagiarism of others’ work.

Third, it is science as an institution:  granting agencies, research parks, university science faculties, hi-tech firms.  Many people use “science” this way, unaware that they are ruling off the turf the vast majority of human scientific activities, including the work of Newton, Boyle, Harvey, and Darwin, none of whom had research institutes.

Fourth, we have science as power/knowledge.  From the Greeks to Karl Marx and George Orwell, milder forms of this claim have been made, and certainly a great deal of scientific speculation is self-serving.  Science does, however, produce knowledge that stands the tests of verification and usually of utility.  It is our best hope of improving our lives and, now, of saving the world.

What is not science is perhaps best divided into four heads.

First, dogma, blind tradition, conformity, social and cultural convention, visionary and idiosyncratic knowledge, and bias—the “idols” of Bacon (1901).  These have sometimes replaced genuine inquiry within a supposedly scientific tradition.

Second, ordinary daily experience, which clearly works but is not being tested or extended—just being used and re-used.  Under this head comes explicitly non-“sciency” but still very useful material: autobiographies, collected texts, art, poetry and song.  These qualify as useful data, if only as worthwhile insights into someone’s mind.  All this material deserves attention; it is raw material that science can use.

Third, material that is written to be taken seriously as a claim about the world, but is not backed up by anything like acceptable evidence.  In addition to the obvious cases such as today’s astrology and alchemy, this would include most interpretive anthropology, especially postmodern anthropology.  Too many social science journal articles consist of mere “theory” without data, or personal stories intended to prove some broad point about identity or ethnicity or some other very complex and difficult topic.  However, the best interpretive anthropology is well supported by evidence; consider, for example, Lila Abu-Lughod’s Veiled Sentiments (1986), or Steven Feld’s Sound and Sentiment (1982).

Fourth, pure advocacy:  politics and moral screeds.  This is usually backed up by evidence, but the evidence is selected by lawyers’ criteria.  Only such material as is consistent with the writer’s position is presented, and there is very minimal fact-checking.  If material consistent with an opposing position is presented, it is undercut in every way possible.  Typically, opponents are represented in straw-man form, and charged with various sins that may or may not have anything to do with reality or with the subject at hand; “any stick will do to beat a dog.”  Once again, Bacon (1901) was already well aware of this form of non-science.

The Problem of Truth

The problem of truth, and whether science can get at it in any meaningful way, has led to a spate of epistemological writings in anthropology, science studies, and history of science.  These writings cover the full range of possibilities.

The classic empiricist position—we can and do know real truths about the world—is robustly upheld by people like Richard Dawkins, whose absolute certainty extends not only to his normal realm (genetics; Dawkins 1976, a book widely criticized) but to religion, philosophy, and indeed everything he can find to write about.  He is absolutely positive that there are no supernatural beings or forces (Dawkins 2006).  He has said “Show me a relativist at 30,000 feet and I’ll show you a hypocrite” (quoted in Franklin 1995:173).  Sarah Franklin has mildly commented on this bit of wisdom:  “The very logic that equates ‘I can fly’ with ‘science must be an unassailable form of truth’ and furthermore assumes such an equation to be self-evident, all but demands cultural explication” (Franklin 1995:173).

At the other end of the continuum is the most extreme form of the “strong programme” in science studies, which holds that science is merely a set of myths, no different from the first two chapters of the Book of Genesis or any other set of myths about the cosmos.  Its purpose, like the purposes of many other myths, is to maintain the strong in power.  It is just another power-serving deception.  Since it cannot have any more truth-value than a dream or hallucination, it cannot have any other function; it must maintain social power.  This allowed Sandra Harding to maintain that “Newton’s Principia Mathematica is a rape manual” because male science “rapes female nature.”  This reaches Dawkins’ level of unconscious self-satire, and has been all too widely quoted (to the point where I can’t trace the real reference).  Dawkins might point out that Harding surely wrote it on a computer, sent it by mail or email to a publisher, and had it published by modern computerized typography.

Bad enough, but far more serious is the fact that the “strong programme” depends on assuming that people, social power, and social injustice are real.  Harding’s particularly naïve application of it also assumes that males, females, and rape are not only real but are unproblematic categories—yet mathematics is not.  How the strong programmers can be so innocently realist about an incredibly difficult concept like “power,” while denying that 2 + 2 = 4, escapes me.

Clearly, these positions are untenable, but that leaves a vast midrange.

The empiricist end of the continuum would, I think, be anchored by John Locke (1979/1697) and the Enlightenment philosophers who (broadly speaking) followed him.  Locke was not only aware of human information processing biases and failures; his account of them is amazingly modern and sophisticated.  It could go perfectly well into a modern psychology textbook.  He realized that people believe the most fantastic nonsense, using a variety of traditional beliefs as proof.  But he explains these as due to natural misinference, corrected by self-awareness and careful cross-checking.  He concluded that our senses process selectively but do not lie outright.  Thus the track from the real world to our knowledge of it is a fairly short and straight one—but only if we use reason, test everything, and check deductions against reality.

Locke’s optimism was almost immediately savaged by David Hume (1969 [1739-40]), who concluded that we cannot know anything for certain; that all theories of cause are pure unprovable inference; that we cannot even be sure we exist; and that all morals and standards are arbitrary.  This total slash-and-burn job was done in his youth, and has a disarming cheerfulness and exuberance about it, as if he were merely clearing away some minor problems with everyday life.  This tone has helped it stay afloat through the centuries, anchoring the skeptical end of the continuum.

Immanuel Kant (1978, 2007) took Hume seriously, and admitted that all we have is our experience—and maybe not even that.  At least we have our sensory impressions: the basic experiences of seeing, smelling, hearing, feeling, and tasting.  They combine to produce full experiences, informed by emotion, cognition, and memory of earlier experiences.  This more or less substitutes “I experience, therefore maybe I am” for Descartes’ “I think, therefore I am”; Kant realized not only that thought is not necessarily a given, but, more importantly, that sensory experience is prior to thought in some basic way.  He worked outward from assuming that experience was real and that our memory of it extending backward through time was also real.  Perhaps the time itself was illusory.  Certainly our experience of time and space is basic and is not the same as Time and Space.  And perhaps the remembered events never happened.  But at least we experience the memory.  From this he could tentatively conclude that there is a world-out-there that we are experiencing, and that its consistency and irreducible complexity make it different from dreams and hallucinations.

In practice, he took reality as a given, and devoted most of his work to figuring out how the mind worked and how we could deduce standards of morality, behavior, and judgment from that.  He was less interested in saving reality from Hume than in saving morality.  This need not concern us here.  What matters much more is his realization that the human brain inevitably gives structure to the universe—makes it simpler, neater, more patterned, and more systematic, the better to understand and manage it.  Obviously, if we took every new perception as totally new and unprecedented, we would never get anything done.

Kant therefore catalogued many of the information-processing biases that have concerned psychologists since. Notable were his “principle of aggregation” and “principle of differentiation,” the most basic information-processing heuristics (Kant 1978).  The former is our tendency to lump similar things into one category; the latter is our tendency to see somewhat different things as totally different.  In other words, we tend to push shades of gray into black and white.  This leads us to essentialize and reify abstract categories.  Things that refuse to fit in seem uncanny.  More generally, people see patterns in everything, and try to give a systematic, structured order to everything.  From this grew the whole structuralist pose in psychology and anthropology, most famously advocated in the latter field by Claude Lévi-Strauss (e.g. 1962).

Hume and Kant were also well aware—as were many even before them—of the human tendency to infer agency by default.  We assume that anything that happens was done by somebody, until proven otherwise.  Hence the universal belief in supernaturals and spirits.  This and the principle of aggregation give us “other-than-human persons,” the idea that trees, rocks, and indeed all beings are people like us with consciousness and intention.

Kant’s focus on experience and the ways we process it was basic to social science; in fact, social science is as Kantian as biology is Darwinian.  However, Kant still leaves us with the old question.  His work reframes it:  How much of what we “know” is actually true?  How much is simply the result of information-processing bias?

People could take this in many directions.  At the empiricist end was Ernst Mach, who developed “positivism” in the late 19th century.  Well aware of Kant, Mach advocated rigorous experimentation under maximally controlled conditions, and systematic replication by independent investigators, as the surest way to useful truths.  The whole story need not concern us here, except to note that controlled, specified procedures and subsequent replication for confirmation or falsification have become standard in science (Kitcher 1993).  Note that positivism is not the naïve empiricist realism that postmodernists and Dawkinsian realists think it is.  It is, in fact, exactly the opposite.  Also, positivism does not simply throw the door open to racist and sexist biases, as the Hardings of the world allege.  It does everything possible to prevent bias of any kind.  If it fails, the problem is that it was done badly.

Kant did not get deeply into the issue of social and political influences on belief, but he was aware of them, as was every thinker from Plato on down.  Kantians almost immediately explored the issue.  By far the most famous was Marx, whose theory of consciousness and ideology is well known; basically, it holds that people’s beliefs are conditioned by their socioeconomic class.  Economics lies behind belief, and also behind nonsense hypocritically propagated by the powerful to keep themselves in power.

By the end of the 19th century, this was generalized by Nietzsche and others to a concern with the effects of power in general—not just the power of the elite class—on beliefs.  This idea remained a minority position until the work of Michel Foucault in the 1960s and 1970s.  Foucault is far too complex to discuss here, but his basic idea is simple:  established knowledge in society is often, if not always, “power/knowledge”:  information management in the service of power.  Foucault feared and hated any power of one person over another; he was a philosophic anarchist.  He saw all such sociopolitical power as evil.  He also saw it as the sole reason why we “know” and believe many things, especially things that help in controlling others.  He was especially attracted to areas where science is minimal and need for control is maximal:  mental illness, sexuality, education, crime control.  When he began writing, science had only begun to explore these areas, and essentially did not exist in the crime-control field.  Mischief expanded to fill the void; there is certainly no question that the beliefs about sex, women, and sexuality that passed as “science” before the days of Kinsey had everything to do with keeping women down and nothing to do with truth.  Since his time, mental illness and its treatment, as well as sexuality, have been made scientific (though far from perfectly known), but crime control remains exactly where it was when Foucault wrote, and, for that matter, where it was when Hammurabi wrote his code.

A generation of critics like Sandra Harding concluded that science had no more grasp on truth than religion did.  Common such “science” certainly was; typical it was not.  By the time the postmodernists wrote, serious science had reached even to sex and gender issues, with devastating effects on old beliefs.  The postmodernists were flogging a dead horse.  Often, they kept up so poorly on actual science that they did not realize this.  Those who did realize it moved their research back into history.  Finding that the sex manuals of the 19th century were appalling examples of power/knowledge was easy.  The tendency to overgeneralize, and see all science as more of the same, was irresistible to many.  Hence the assumption that Newton and presumably all other scientists were mere purveyors of yet more sexist and racist nonsense.

A “strong programme” within science studies holds that science is exactly like religion: a matter of wishes and dreams, rather than reality.  This is going too far.  Though this idea is widely circulated in self-styled “progressive” circles, it is an intensely right-wing idea.  It stems from Nazi and proto-Nazi thought and ideology (including the thought of Nietzsche as appropriated by the Nazis, and later of Martin Heidegger, a committed Nazi, and Paul de Man, a wartime Nazi collaborator).  It is deployed today not only by academic elites but also by the fundamentalist extremists who denounce Darwinian evolution as just another origin myth.  The basically fascist nature of the claim is made clear by such modern right-wing advocates as Gregg Easterbrook (2004), who attacked the Union of Concerned Scientists for protesting against the politicization of science under the Bush administration.  Easterbrook makes the quite correct point that the Union of Concerned Scientists is itself a politically activist group, but then goes on to maintain that, since scientific claims are used for political reasons, the claims are themselves purely political.  He thus confuses, for example, the fact that global warming due to greenhouse gases is now a major world problem with the political claims based on this fact; he defends the Bush administration’s attempt to deny or hide the fact.

Postmodernists dismiss science—and sometimes all truth-claims—as just another social or cultural construction, as solipsistic as religion and magic.  Some anthropologists still believe, or at least maintain, that cultural constructions are all we have or can know.  This is a self-deconstructing position; if it’s true, it isn’t true, because it is only a cultural construction, and the statement that it’s only a cultural construction is only a cultural construction, and we are back with infinite regress and the Liar’s Paradox.  The extreme cultural-constructionist position is all too close to, and all too usable by, the religious fundamentalists who dismiss science as a “secular humanist religion.”

If we trim off these excesses, we are left with Foucault’s real question:  How much of what we believe, and of what “science” teaches, is mere power/knowledge?  Obviously, and sadly, a great deal of it still is, especially in the fields where real science has been thin and rare.  These include not only education and crime control, but also economics, especially before the rise of behavioral economics in the 1990s.  “Development” is another case (see Dichter 2003; Escobar 2008; Li 2007).  Not only is rather little known in this area, but the factual knowledge accumulated over the years in this area is routinely disregarded by development agents, and the pattern of disregard fits perfectly with Foucaultian theory.  To put it bluntly, “development” is usually about control, not about development.  Indeed, coping strategies for most social problems today are underdetermined by actual scientific research, leaving power/knowledge a clear field.

However, this does not invalidate science.  Where we actually know what we are doing, we do a good job.  Medicine is the most obvious case.  Foucault subjected medicine to the usual withering fire, and so have his followers, but the infant mortality rate under state-of-the-art care has dropped from 500 per thousand to 3 in the last 200 years, the maternal mortality rate from 50-100 per thousand to essentially zero, and life expectancy (again with state-of-the-art health care) has risen from 30 to well over 80.  Somebody must be doing something right.

Medical science largely works.  Medical care, however, lags behind, because the wider context of how we deal with illness and its sociocultural context remains poorly studied, and thus a field where power/knowledge can prevail (Foucault 1973).

Here we may briefly turn to Chinese science to find a real counterpart.  The Chinese, at least, had deliberately designed, government-sponsored case/control experiments as early as the Han Dynasty around 150 BC (Anderson 1988).  The ones we know about were in agriculture (nong; agricultural science is nongxue), and Chinese agriculture developed spectacularly over the millennia.  It is beyond doubt that this idea was extended to medicine; we have some hints, though no real histories.  Unfortunately most of the work was done outside the world of literate scholars.  We know little about how it was done.  A few lights shine on this process now and then over the centuries (e.g. the Qi Min Yao Shu of ca. 550, and the wonderful 17th-century Tiangong Kaiwu, an enthusiastic work on folk technology).  They show a development that was extremely rigorous technically, extremely rapid at times, and obviously characterized by experiment, analysis, and replication.

In medicine (yi or yixue), the developments were slow, uncertain, and tentative, because of far too much reliance on the book and far too little on experience and observation (see e.g. Unschuld 1986).  However, there was enough development, observation, and test—corpse dissection, testing of drugs, etc.—to render the field scientific.  Even so, we must take note that it was far more tradition-bound and far less innovative than western medicine after 1650 (cf. Needham 2000 and Nathan Sivin’s highly skeptical introduction thereto).   Medical botany and nutrition are probably the most scientific fields, but medical botany ceased to progress around 1600, nutrition around the same time or somewhat later.  It is ironic that Chinese botany froze at almost exactly the same time that European botany took off on its spectacular upward flight.  Li Shizhen’s great Bencao Gangmu of 1593 was more impressive than the European herbals of its time.  Unfortunately, it was China’s last great herbal until international bioscience came to Chinese medicine in the last few decades.  Herbal knowledge more or less froze in place; Chinese traditional doctors still use the Bencao Gangmu. By contrast, European herbals surpassed it within a few years, and kept on improving.

Best of all, thanks to the life work of the incredible scholar H. T. Huang (2000), we know that food processing was fully scientific by any reasonable standard.  Chinese production of everything from soy sauce to noodles was so sophisticated and so quick to evolve in new directions that, in many realms, it remains far ahead of modern international efforts.  Thanks to H. T. and a few others, we can understand in general what is going on, but modern factories cannot equal the folk technologists in actual production.

One thing emerges very clearly from comparison of epistemology and the historical record:  using some form of the empirical or positivist “scientific method” does enormously increase the speed and accuracy of discovery.  Intuition and introspection, by contrast, have a poor record.  Medieval scholars, both Platonists and Aristotelians, relied on intuition, and did not add much to world science; much of the triumph of the Renaissance and the scientific “revolution” was due to developments in instrumentation and in verification procedures.  Psychologists long ago abandoned introspective methods, since the error rate was extremely high.  Doctors have known this even longer.  The proverb “the doctor who treats himself has a fool for a patient” is now many centuries old.

The flaws of the empirical and positivist programs are basically in the direction of oversimplification.  Procedures are routinized.  Mythical “averages” are used instead of individuals or even populations (Kassam 2009).  Diversity is ignored.  Kant’s principles of differentiation and aggregation are applied with a vengeance (cf., again, Kassam 2009, on taxonomy).  The result does indeed allow researcher bias to creep in unless zealously guarded against—as Bacon pointed out.  But, for all these faults, science marches on.  The reason is that the world is fantastically complicated, and we have to simplify it to be able to function in it.  Quick-and-dirty algorithms give way, over time, to more nuanced and detailed ones, but Borges’ one-to-one map of the world remains useless.  A map of the world has to simplify, and then the user’s needs dictate the appropriate scale.

The full interpretive understanding sought by many anthropologists, by contrast, remains a fata morgana.  It is fun to try to understand every detail of everyone’s experience, but even if we could do it (and we can’t even begin) it would be as useless as Borges’ map.

On the other hand, we need that attempt, to bring in the nuances to science and to correct the oversimplifications.  A purely positivist agenda can never be enough.

Case Study:  Yucatec Maya Science

Anthropology is in a particularly good place to test and critique discussions of science, because we are used to dealing with radically different traditions of understanding the world.  Also, we are used to thinking of them as deserving of serious consideration, rather than simply dismissing them as superstitious nonsense, as modern laboratory scientists are apt to do.  I thus join Roberto Gonzalez (2001) and Eugene Hunn (2008) in using the word “science,” without qualifiers, for systematic folk knowledge of the natural world.

The problem is not made any easier by the fact that no society outside Europe and the Middle East seems to have developed a concept quite like the ancient Latin scientia or its descendants in various languages, and that the European and Middle Eastern followers of the Greeks have defined scientia/science in countless different ways.  Arabic ‘ilm, for instance, in addition to being used as a translation for scientia, has its own meanings, and this led to many different definitions and usages of the word in Arabic.

In Chinese, to know is zhi, and this can mean either to know a science or to know by mystical insight.  An organized body of teaching, religious or philosophical, is a jiao.  The Chinese word for “science,” kexue, “study of things,” is a modern coinage, originally a Japanese compound of Chinese characters that was later borrowed back into Chinese.  Lixue, “study of the basic principles of things,” is a much older word, and once meant something like “science,” but it has now been reassigned to mean “physics.”  Other sciences have mostly new names coined by putting the name of the thing studied in front of the Chinese word xue, “knowledge.”  But non-science knowledges are also xue:  literature and culture are wen xue, “literary knowledge.”  We can thus define Chinese “science” in classical terms, without using the modern word kexue, simply by listing the forms of xue devoted to the natural (xing, ziran) world as opposed to the purely cultural.

Yucatec Maya has no word separating science from other kinds of knowledge.  So far as I know, the same is true of other Native American languages.  The Yucatec Maya language divides knowledge into several types.  The basic vocabulary is as follows:

Oojel to know  (Spanish saber).

Oojel ool to know by heart; ool means “heart.” Cha’an ool is a rare or obsolete synonym.

K’aaj, k’aajal to recognize, be familiar with (Spanish conocer in the broader sense)

K’aajool, k’aajal ool to “recognize by heart” (Spanish reconocer):  to recognize easily and automatically.  (The separation between ool and k’aaj is so similar to the Spanish distinction of saber and conocer that there may be some influence from Spanish here.)

K’aajoolal (or just k’aajool), knowledge; that which is known.

U’ub– to hear; standardly used to mean “understand what one said,” implying just to catch it or get it, as opposed to na’at, which refers to a deeper level of understanding.

Na’at to understand.

The cognate word to na’at in Tzotzil Maya is na’, which has been the subject of an important study by Zambrano and Greenfield (2004).  They find that it is used as the equivalent of “know” as well as “understand,” but focally it means that one knows how to do something—that one can act on the basis of knowledge.  This points up the difference between Tzotzil knowing and Spanish or English knowing:  Tzotzil speakers know by watching and then doing (as do many other Native Americans; see Goulet 1998, Sharp 2001), while Spanish and English children and adults know by hearing lectures or by book-learning.  It seems fairly likely that a culture that sees knowledge as practice would not make a fundamental distinction between magic, science, and religion.  The distinction would far more likely be between what is known from experience and what is known only from others’ stories.  Such distinctions are made in some Native American languages.

Ook ool religion, belief; to believe; secret.  Ool, once again, is “heart.”

So Chinese and Maya have words for knowledge in general but no word for science as opposed to humanistic, religious, or philosophical knowledge.  Unlike the Greeks, they do not split the semantic domain finely.

Let us then turn to “science” in English.  The word has been in the language since the 1200s.  One citation in the OED, from 1340, is appropriately divine:  “for God of sciens is lord,” i.e. “for God is lord of all knowledge.”  The word has been progressively restricted over time, from a general term for knowledge to a term for a specific sort of activity designed to increase specialized knowledge of a particular subset of the natural world.

Science can also be seen as an institution:  a social-political-legal setup with its own rules, organizations, people, and subculture.  As a method, we generally understand by “science” one of two things:

1) a general procedure of careful assemblage of all possible data relevant to a particular question, generally one of theoretical interest, about the natural world.

2) a specific procedure characterized by cross-checking or verification (Kitcher 1993) or by falsifiability (Popper 1959).  Karl Popper’s falsifiability touchstone was extremely popular in my student days, but is now substantially abandoned, even by people who claim to be using it and to regard it as the true touchstone of science.  One problem is that falsifiability is just as hard to obtain and ensure as verifiability.  We all know anthropological theories that have been falsified beyond all reasonable doubt, but are still championed by their formulators as if nothing had happened.  Thus, as of 2009, Mark Raymond Cohen was still championing his idea that Pleistocene faunal extinctions forced people to turn to agriculture to keep from starving (Cohen 2009)—a theory so long and so frequently disproved that the article makes flat-earthers look downright reasonable.

Another and worse problem is that Popper’s poster children for unfalsifiable and therefore nonscientific theories have been disproved, or at least subjected to some rather serious doubt.  Adolf Grünbaum (1984; see also Crews 1998, Dawes 1994) took on Freud and Popper both at once, showing quite conclusively that—contra Popper—Freudian theory was falsifiable, and had in fact been largely, though not entirely, falsified.  As with Cohen, the Freudians go right on psychoanalyzing—and, unlike Cohen, charging huge amounts of money.  Marx’s central theory was Popper’s other poster child, and while it does indeed seem too fluffy to disprove conclusively, its predictions have gone the way of the Berlin Wall, the USSR, and Mao’s Little Red Book.

We are well advised to stick with Ernst Mach’s original idea of science as requiring specialized observational techniques (usually involving specialized equipment and methodology) and, above all, cross-verification (Kitcher 1993).  On the other hand, David Kronenfeld points out (email of Jan. 7, 2010) that Popper’s general point—that one should always be as skeptical as possible of ideas and subject them to every possible test—is as valid as ever.  Clear counter-evidence should count (Cohen to the contrary notwithstanding).

The question for us then becomes whether the Maya had special procedures for meticulously gathering data relating to more or less theoretic questions about the natural world, and how they went about verifying their data and conclusions.

The Maya are a different case, since they do not have any terminological markers at all (unlike the Chinese with xue), they do not have a history of systematic cumulative investigation and replication, and, for that matter, they do not have a concept of “nature.”  All of the everyday Maya world is shaped by human activities.  Areas not directly controlled by the Maya themselves are controlled by the gods or spirits.  Rain, the sky, and the winds, for instance, have varying degrees of control by supernaturals.   Ordinary workings of the stars and of weather and wind are purely natural in our English sense, but anything important—like storms, or dangerous malevolent winds—has agency.  This makes the idea of “natural science” distinctly strange in Maya context.

However, the Maya are well aware that human and supernatural agency is only one thing affecting the forests, fields, and atmosphere.  Plants, animals and people grow and act according to consistent principles—essentially, natural laws.  The heavens are regular and consistent; star lore is extensive and widely known.  Inheritance is obvious and well recognized.  So are ecological and edaphological relationships—indeed there is a very complex science of these.

To my knowledge, there is no specific word for any particular science in Maya, with the singular exception of medicine:  ts’ak.  Ts’ak normally refers to the medicines themselves, but can refer to the field of medicine in general.  In spite of a small residual category of diseases explained by witchcraft and the like, Maya medicine is overwhelmingly naturalistic.  People usually get sick from natural causes—most often, getting chilled when overheated.  This simple theory of causation underdetermines an incredibly extensive system of treatment.  I have recorded 350 medicinal plants in my small research area alone, as well as massage techniques, ritual spells, cleansing and bathing techniques, personal hygiene and exercise, and a whole host of other medicinal technologies (Anderson 2003, 2005).  Some are “magic” by the standards of modern science, but are considered to work by lawful, natural means.  More to the point, medicine in Maya villages is an actively evolving science in which the vast majority of innovations are based either on personal experience and observation or on authority that is supposed to be medically reliable.  (Usually the authority is the local clinic or an actual medical practitioner, but a whole host of patent medicines and magical botánica remedies are in use.)  New plants are quickly put to use, often experimentally.  If they resemble locally used plants, they may be tried for the same conditions.  In the last five years, noni, a Hawaiian fruit used in its native home as a cure-all, has come to Mayaland, where it is used for diabetes and sometimes other conditions.  It is now grown in many gardens and is available for sale at local markets.  It has been tried by countless individuals to treat their diabetes, and compared in effectiveness with local remedies like k’ooch (Cecropia spp.) and catclaw vine (uncertain identification).  I have also watched the Maya learn about, label, and study the behavior of newly arrived birds, plants, and peoples.  Their ethnoscience is rapidly evolving.  The traditional Maya framework is quite adequate to interpret and analyze new phenomena and add knowledge of them to the database.

This makes it very difficult to label Maya knowledge “traditional ecological knowledge” as opposed to science.  It is not “traditional” in the sense of old, stagnant, dying, or unchanging.  Nonanthropologists generally assume that traditional ecological knowledge is simply backward, failed science (see e.g. Nadasdy 2004).  They are now supposed to take account of it in biological planning and suchlike matters, but, because of this assumption, they do so only to a minimal extent.

Until modern times, the Maya did not have the concept of a “science” separate from other knowledge.  They also lacked the institution of “science” as a field of endeavor, employment, grant-getting, publishing, etc.  Now, of course, there are many Maya scientists.  Quintana Roo Maya are an education-conscious, upwardly-mobile population.  The head of the University of Quintana Roo’s Maya regional campus is a local Maya with a Ph.D. from UCSC in agroecology.

This might not matter, but Maya knowledge also includes those supernatural beings, mentioned above.  This was a problem for Roberto Gonzalez and Gene Hunn as well, in their conceptualization of Zapotec indigenous knowledge as science.  The Quintana Roo Maya are a hardheaded, pragmatic lot, and never explain anything by supernaturals if they can think of a visible, this-world explanation, but there is much they cannot explain in traditional terms or on the basis of traditional knowledge.  Storms, violent winds, and other exceptional meteorological phenomena are the main case.  Strange chronic health problems are the next most salient; ordinary diseases have ordinary causes, but recurrent, unusual illnesses, especially with mental components, must be due to sorcery or demons.

One could simply disregard these rather exceptional cases, and say that Maya science has inferred these black-box variables the way the Greeks inferred atoms and aether and the way the early-modern scientists inferred phlogiston and auriferous forces.  However, discussion with my students, especially Rodolfo Otero in recent weeks, has made it clearer to me that much of our modern science depends on black-box variables that are in fact rather supernatural.  Science has moved to higher and higher levels of abstraction, inferring more and more remote and obscure intervening variables.  Physics now presents a near-mystical cosmology of multidimensional strings, dark matter, dark energy, quantum chromodynamics, and the rest.  The physicist Brian Greene admits of string theory:  “Some scientists argue vociferously that a theory so removed from direct empirical testing lies in the realm of philosophy or theology, but not physics” (Greene 2004:352).

Closer to anthropology are the analytic abstractions that have somehow taken on a horrid life of their own—the golems and zombies of our trade.  These are things like “globalization,” “neoliberalism,” “postcoloniality,” and so forth.  Andrew Vayda says it well:  “Extreme current examples…are the many claims involving ‘globalization,’ which…has transmogrified from being a label for certain modern-world changes that call for explanation to being freely invoked as the process to which the changes are attributed” (Vayda 2009:24).  “Neoliberalism” has been variously defined, but there is no agreed-on definition, and there are no people who call themselves “neoliberals”; the term is purely pejorative and lacks any real meaning, yet it somehow has become an agent that does things and causes real-world events.  There is even something called “the modernist program,” though no one has ever successfully defined “modern” and none of the people described as “modernist” had a program under that name.  Farther afield, we have the Invisible Hand of the market, rational self-interest, money, and a number of other economic concepts that suffer outrageous reification.  The tendency of social sciences to reify concepts and turn them into agents has long been noted (e.g. Mills 1959), and it is not dead.

The real problem with supernatural beings, Maya or postmodern, is that they tend to become more and more successful at pseudo-explanation, over time.  People get more and more vested interest in using reified black-box postulates to explain everything.  The great advantage of modern science—what Randall Collins (1999) calls “rapid discovery science”—is that it tries to minimize such explanatory precommitment.  The whole virtue of rapid discovery science is that it keeps questioning its own basic postulates.

If we can claim “science” for superstrings, neoliberalism, and rational choice, the Maya winds and storm gods can hardly be ruled out.  At least one can see storms and feel winds.  Black-box variables are made to be superseded.  We cannot use them as an excuse to reject the accumulated wisdom of the human species.