Science and Ethnoscience, part 2: European Biology as Ethnobiology

SCIENCE AND ETHNOSCIENCE
E. N. Anderson

Dept. of Anthropology, University of California, Riverside

Part 2.  European Science as Ethnoscience:  Science in Europe before International Science Came

Recently, historians of science have reacted against the old model of evaluating former beliefs in light of current knowledge.  This is surely the right thing to do.  However, it often leads to evaluating former beliefs as if they were a homogeneous body of lore, decoupled from real-world experience.  One could, for instance, recount the medical knowledge of 1600 as if it were a single, coherent system, based on logical reasoning, with no input from experience or practice.  This is not really how people think, and certainly not how science and medicine developed.  People interact with their patients and surroundings, learn from that as well as from books, and come up with individual knowledge systems that may or may not have much in common with those of their contemporaries.  The current histories of science thus take account of agency, and the role of interaction with reality.

Near East and China to Europe

Science gets around.  Three important cases of early knowledge transfer are particularly well documented:  the spread of medical lore from Greece to the Near East in the early Islamic period; the spread of medicine and other technical lore between China and the Near East in the Mongol period; and the spread of science from both of the above to Europe in the Middle Ages and Renaissance.

The first two cases joined early, for Near Eastern medical knowledge was flowing to both Europe and China in the 1200s and 1300s.  However, the two-way nature of the latter flow, and the radical differences in structure and cultural background, make it more reasonable to treat them initially as separate histories.

Europe before 1500 participated in a general rise of science in the Eurasian and African world.  Greek learning was long forgotten in the west, but Arab and Byzantine scholars reintroduced it, first to Moorish Spain, then to Sicily and upward through Italy.  There had been a huge flow from the Greek world into Arabic and Persian cultures from 700 to 1000, but essentially none in the other direction.  After this time the flow almost entirely reversed.  Translation into Arabic shrank considerably (Lewis 1982:76), but translation from Arabic into western languages picked up.  At first, almost all of it was within the Arab-influenced worlds of Spain and Italy, but it spread rapidly beyond those spheres.  Some Greek learning reached western Europe directly (Freely 2009:165-177, and see below), but most of it spread via the Arabs.

The great Salerno medical school, just south of Naples, was apparently started by Arabs in the early 8th century.  Legend said the school was founded by an Arab, a Jew, a Latin and a Greek.  It flourished by 850, and blossomed from about 1000 AD as the center of Islamic-derived learning in Europe.  Constantine the African (ca. 1020-1087), from Tunis or near it, was instrumental in transferring Arabic knowledge into Italy at this time, through his translations (and those of his student John the Saracen, 1040-1103) of works by al-Abbās and of Hunayn ibn Ishāq's versions of Aristotle and Galen, though his translations were far from the best imaginable (Kamal 1975:189, 662-3; Ullmann 1978).  (Hunayn, a Christian, appeared in Latin under his Christian name, Iohannitius.)  Constantine worked in Salerno or nearby Montecassino.

Indian numerals were Arabized in the 9th century, and then developed into Arabic numerals, which slowly entered Europe in the late middle ages and early Renaissance.  The most important transfer of Indian into Arabic numeration came via al-Khwārazmī in Baghdad.  He became so famous as a mathematician that his name entered the world's languages.  "Algorithm" is a corruption of "al-Khwārazmī."  This word first appeared in a thirteenth-century translation, Algoritmi de numero indorum, "Al-Khwārazmī on Indian numbering" (Hill 1990b:255; "logarithm" is a deliberately-coined metathesis of "algorithm").  He contributed greatly to algebra (Arabic al-jabr, "restoration"), and his work on it was translated into Latin in the 12th century, by Robert of Chester and then again by Gerard of Cremona.  Trigonometry followed the same course, possibly from India, certainly from Islam, at a somewhat later date.  (On this and other mathematical transfers, see Freely 2009:133, with forms of numbers well shown, from ancient Brahmi to modern; Hill 1990; Mushtaq and Berggren 2000, esp. pp. 182, 187.)  The most important name in transferring Arabic numerals into Europe (in the 990s) was Gerbert of Aurillac, who became Pope Sylvester II (Lewis 2008:328-329)—one of the few popes to have any distinction in learning outside of theology.

The Arabs and other Near Easterners also made enormous contributions to technology and agriculture, but these are poorly known, because the contributors were rarely literate and literate people were rarely interested (Hill 1990b).  A few agricultural handbooks exist, and show great sophistication.  We know this lore was transferred to Europe, but we have few details.

The Salerno medical school remained the greatest in Europe throughout the early middle ages.  This school translated the Arabic Taqwim as-sihha of the Christian Arab Ibn Butlān (d. ca. 1066) as the Tacuinum sanitatis, which remained the basic medical manual in Europe for centuries (Tacuinum Sanitatis 1976).  It is still in print in several languages, though now more for its beautiful early-Renaissance plates than for its advice.  The latter, though, is still good; it survives today in the standard clichés about moderation in diet, moderate exercise, rest, and so forth, familiar to everyone from doctors' talk and pop medical books.  These saws trace directly back to the Tacuinum.

It, in turn, was the basis for the Salernitan Rule, the versified guide to health that was the Salernitan school's most famous product (Arikha 2007:77, 100ff.).  Sir John Harington translated it into English around 1600.  His famous translation of one line is still frequently and justly quoted:

“Use three physicions still:  First Doctor Quiet,

Next Doctor Merryman, and Doctor Diet” (Harington 1966:22).

The Latin original, ibid., is:

Si tibi deficiant medici, medici tibi fiant

Haec tria, mens laeta, requies, moderata diaeta; literally, "if doctors fail you, let these three be your doctors:  a cheerful mind, rest, and a moderate diet."

The Salerno school also produced the Articella (“little art”), a handbook that, “by the mid-thirteenth century…was the foundational textbook for most medical teaching in the West.  It included the Hippocratic Aphorisms and Prognostics; Galen’s short Ars parva; the medically essential and thus ubiquitous treatises On Pulses and On Urines; and the extensive compendium of Galenic writings by Hunayn ibn Is’haq (Johannitius), the Isagoge Ioannitii in tegni Galeni, in the translation by Constantinus Africanus” (Arikha 2007:77).  Many other Italian translating projects were active (Freely 2009:126ff.).

Through it and other channels, the work of Ibn Sina (Avicenna, 980-1037; see Avicenna 1999) became standard.  Ibn Sina hailed from the far east of the Iranian world, near Bukhara.  He was a thorough-going Aristotelian, committed to investigation of the world, though convinced that intuition was vital in guiding such investigation.  His enormous Canon of Medicine was translated into Latin by Gerard of Cremona (1114-1187), along with perhaps a hundred other Arabic works.  Gerard had moved to Toledo to learn Arabic, and remained there (Freely 2009:128; Pormann and Savage-Smith 2007:164), in that world which still remembered "convivencia."  This was surely one of the most stunning examples of knowledge transfer in all history (Covington 2007; Kamal 1975:663; Ullmann 1978:54).  One suspects that Gerard did not single-handedly translate all of them, but the achievement was fantastic nonetheless.  Avicenna's Canon remained standard in Europe into the 17th century.  Gerard also translated Ptolemy's Almagest, and basic works of Al-Kindi, Al-Farabi, Al-Hazen, Thabit, Rhazes, al-Zahrawi, and Al-Khwarizmi, the last providing the first algebra to reach Europe.  He also translated much alchemy (Hill 1990a:341), which, be it remembered, was a perfectly reasonable science in those days; much of modern chemistry descends from it.  Certainly, few people in history have been so important, and very few so important yet so little known.

Also active in Toledo were the Jewish translator and writer Abraham ibn Ezra (1086-1164; Freely 2009:129) and several others.

Fibonacci, famous for the number sequence that describes the pattern of developing plant structures, learned much from the Arabs, using al-Khwarizmi's algebra works in Latin (Covington 2007:10)—presumably Gerard's translation.  Faraj ben Salim, a Sicilian Jew, translated more of Rhazes as well as Ibn Jazlah, al-Abdan, and others.  As late as the 16th century, Andrea Alpago of Belluno was translating or retranslating more of Avicenna (Kamal 1975:664, following Hitti).  Another Italian, Stephen of Pisa, was active at Salerno and in the Middle East (Ullmann 1978:54).
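
(A quick illustration of the sequence itself, for readers who have not met it:  each term is the sum of the two before it,

F(1) = F(2) = 1;  F(n) = F(n-1) + F(n-2),

giving 1, 1, 2, 3, 5, 8, 13, 21, and so on.  Counts of the spirals in sunflower heads and pine cones typically land on these numbers, which is why the sequence appears in descriptions of plant development.)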

Botany transferred actively, largely in the form of herbal medicine in the tradition of Dioscorides.  The Arabs had vastly increased the number of items in the Dioscoridean materia medica, and Europe slowly adopted many of these, though unable to access some that were strictly Near Eastern (Idrisi 2005).

Spain was key to transmission.  The Arabs conquered it in 711, ruled most of it into the 11th century, and retained a foothold at Granada until 1492.  At peak, under the late Umayyads in the 10th century, Cordova (the capital) reportedly had 200,000 houses, 10,000,000 people, 600 inns, 900 baths, 600 mosques (with schools), 17 universities, and 70 public libraries, the royal one containing 225,000 books (Kamal 1975:8), or, by other estimates, 400,000 (Lewis 2008:326).  The Umayyad golden age ended, but subsequent dynasties did surprisingly well keeping civilization alive, and slowly Europe realized that there was something worthwhile here.

The climax of Spanish appropriation of Islamic knowledge came in the 11th-13th centuries, under Alfonso the Wise (late 13th century) and other relatively enlightened monarchs.  Moorish Spain was a center of Arab and Islamic civilization.  Works spread all over the world from there; Yusuf al-Mu’taman’s geometry book of the 11th century was taken by Moses Maimonides (1135-1204) to Cairo, whence it went on all over the Islamic world, being republished, for example, in Central Asia in the 13th century (Covington 2007).  At that time or earlier, Spanish travelers even went to Egypt and Syria, and possibly Central Asia, in search of knowledge (Kamal 1975:662, citing the medieval writer al-Maqrizi).  Ibn al-Baytar (d. 1248), a famous Andalusian physician and herbalist, traveled in the Near East and listed hundreds of remedies; many herbal drugs are still called by his name.

Around 950, the Byzantine emperor Constantine VII sent 'Abd al-Rahman III of Andalus an elegant Greek manuscript of Dioscorides.  Seeing this as obviously far more useful than most pretty gifts, the Jewish minister Hasdai ibn Shaprut had it translated, with the gift-bearing ambassador and a monk providing the Greek, and several Arabs helping with the Arabic and with the plant identifications (Lewis 2008:331).  Arabic versions of Dioscorides were eventually brought into Latin, but, as we have seen, most Arabic medical knowledge came later and via Italy.

Even love poetry moved north; Andalusian song, sometimes learned via captured singing-girls, inspired the troubadours (see e.g. Lewis 2008:355).  Christian captives went the other way, and influenced Andalusian Arab songs; these often have chorus lines in (rather butchered) medieval Spanish, frequently with definitely racy words.

A vast range of Spanish and Italian words come from Arabic, including a huge percentage of traditional medical terms, and many have gone on into English, ranging from "syrup" and "sherbet" to "soda," "cotton," "alkali," "antimony," "realgar," and "lozenge," to say nothing of such well-known scientific terms as "algebra," "algorithm," "alchemy," and most of the names of the larger stars.  The Arab definite article "al-" is often a dead giveaway for Arabic origin.  The "l" gets assimilated to many initial consonants, giving Spanish words like azulejo "tile" (Arabic az-zulej) and azafrán "saffron" (az-zafaran).  The standard Spanish word for thin noodles, fideos, is Arabic; the proper classical Arabic is fidāwish (see Zaouali 2007:116 for the word and a medieval recipe), fideos being the Andalusian Arabic pronunciation.  Today the word is often mistakenly taken as a plural.

Spain was, of course, a center of Arabic learning, which could easily be translated directly.  Al-Maqqari wrote of its 10th-century capital:  "In four things Cordoba surpasses the capitals of the world…the greatest of all things is knowledge—and that is the fourth" (Freely 2009:107; the other three were local buildings, including the mosque which still survives).  Ibn Zuhr (Avenzoar to Europeans, transcribing the Andalusian pronunciation of his name) flourished ca. 1091-1162.  His more famous student Ibn Rushd (1126-1198, known in Latin as Averroes, likewise approximating the Andalusian dialect pronunciation) became a standard source of medical and scientific knowledge for medieval Europe.  He was enormously influential on St. Thomas Aquinas, and through him on all subsequent European thought.  It is not impossible that Europe would never have developed modern science without Averroes.  Averroes was an Aristotelian, and his version of Aristotle remained standard in Europe, being definitively superseded only after the original Greek texts became widely known.

Averroes also wrote "The Incoherence of the Incoherence," an answer to al-Ghazālī's "The Incoherence of the Philosophers," a mystic's attack on rational thinking.  Though one standard story claims that al-Ghazālī got the best of it and ended philosophy in Islam, actually Averroes' answer was fairly successful, and science continued to flourish in the Islamic world, succumbing more to later economic decline than to al-Ghazālī's mysticism.  Other scientists included Abulcasis (Abu al-Qasim al-Zahrawi).  Translation effort culminated with Arnold of Villanova (d. ca. 1313), who translated Avicenna, Al-Kindi, Avenzoar and others.

Some knowledge flowed the other way.  Little, if any, of it was scientific; it was more in the line of fun.  Some medieval Arab songs in Spain had Spanish-language choruses—significantly, written to be sung by slave-girls used for sexual purposes.  Spanish food got into Muslim cooking; “a primitive sort of puff pastry” was fulyātil, from the medieval Spanish word for “leafy” (Perry 2007:xii).  We will return to the story of Spain.

Italy, however, was also a major transfer zone, with Muslim control of Sicily (and briefly part of south Italy) critically important.  Sicily fell to Roger the Norman, who with his successors developed one of the most tolerant realms of the Middle Ages; seeing the value of Islamic knowledge, they—especially Frederick II—tolerated Muslim communities and oversaw a great deal of translation and learning.  One result was Frederick's great treatise on falconry, De Arte Venandi cum Avibus, which is probably the only medieval work that is still the standard textbook in its subject (Frederick 1943).

South France produced the famous Tibbon family of Jewish translators, who rendered many works into Hebrew; then they or others translated on into Latin.  They were especially active in the 13th century (Pormann and Savage-Smith 2007:164-165).  They may have made the greatest single contribution to the translation effort, vying with Gerard of Cremona.  The enterprise ranks among the most astonishing examples of knowledge transfer in all history.

Universities, Crusaders and their doctors, knightly orders centered in Cyprus and elsewhere in the Mediterranean, and ordinary travelers became more and more a part of the effort, until the path was well-beaten and no longer a matter for a few heroic travelers.

Even the British Isles contributed translators, including Adelard of Bath and Michael Scot.  Roger Bacon learned much from translations of Arabic lore.  Later, in the 17th century, Jacobus Golius introduced Descartes to Alhazen's work and other relevant texts; Alhazen's work on optics now survives largely in Latin translation.

By 1200, Paris had 40,000 inhabitants, 4000 of whom were students (Gaukroger 2006:47).

Students were then as they are now; "as the contemporary saying went, [they learned] liberal arts at Paris, law at Orleans, medicine at Salerno, magic at Toledo, and manners and morals nowhere" (Whicher 1949:3; cf. Waddell 1955, esp. pp. 176 ff.).  Nothing has changed since, except for the addresses of the most prestigious universities.  The "contemporary saying" was presumably said by older professors, who never fail to claim that the younger generation is going to hell, and never remember that their elders said the same thing about them.  It is particularly amusing to hear aging '60s people complain about today's amazingly tranquil and industrious young.

Religion was both enabler and opponent of all this.  Plato was the basis of early theology.  The rise of Platonism explains such things as the Seven Deadly Sins:  Greek philosophical annoyances rather than Biblical taboos.  Aristotle was outlawed for much of this earlier period; the idea that God was present in all his creation—the physical world—was anathematized as heresy (see Gaukroger 2006:70-71).

Oddly, Greek learning did not penetrate Europe directly until long after classical Greek works were well known via the Arab routes.  In fact, the Greeks themselves recovered much of it from the Arabs (Herrin 2008); the Dark Ages were not nearly so dark in Byzantium as in the west, but still much was lost.  Greeks such as Gregory Chioniades (late 13th-early 14th C) eventually came to translate Arab advances in astronomy, medicine, and related fields (Herrin 2008:274).  Somewhat before this time, medical study had revived in Byzantium; dissection began again (after longstanding Christian bans) around the 11th century (Herrin 2008:228).

Western Europeans came to Byzantium for commerce and crusades in the high middle ages.  The infamous Fourth Crusade of 1204 led to European occupation of the city for almost 60 years.  During this period, such Westerners as William of Moerbeke read and translated Aristotle, Galen, Archimedes, and other scientific greats (Herrin 2008:278-279).

Meanwhile, Greeks from the Byzantine world appeared in the West, in time to teach Petrarch and convert him to trying to rediscover Greek classics in their original form.  Burgundio of Pisa first translated Galen from Greek to Latin, around 1180 (Kamal 1975:663).  Others, including the Jewish Bonacosa, followed over the next century.  Byzantine delegations continued, and the 15th century emerged as a major turning point, establishing Greek learning as more or less de rigueur for serious scholars, at least in Italy (see Gaukroger 2006:89-90).  The story of the rediscovery of classical learning is too well known to need retelling here; what interests us at this point is that direct work with the Greek sources came long after much classical learning was known through Arabic refraction.

With the rise of early modern science, it was the Europeans' turn to seek out Near Eastern knowledge in its actual homeland.  Leonhard Rauwolf traveled extensively in the Near East in the 16th century, to be followed in later centuries by Joseph Pitton de Tournefort (a father of taxonomy) and many others.  The classical sources were by then well known in Europe; Rauwolf and Tournefort were more interested in gathering new knowledge through actual field work.  They are among the great ancestors of modern-day field biologists and anthropologists.

India, China and Japan became well known only later.  Portuguese and then Dutch enterprise (the latter especially in Japan) led to a flood of knowledge coming back to Europe.  The Jesuit missionaries, who focused on East Asia as their initial mission field, were particularly important; they idealized Chinese culture, arguing enthusiastically for its philosophy, governance, food, medicine, and anything and everything else (on medicine, see Barnes 2005).  “New Christians” may have been important too, if the example of Garcia da Orta (the Jewish-background writer on Indian medicines) is representative.  A veritable translating industry introduced East Asian medicine to Europe in the mid-17th century, with moxibustion in particular intriguing the Dutch in Japan (Cook 2007:350-377).  Even Thomas Sydenham, the very image of the “new science” in medical form, was fascinated by moxibustion and recommended it (Cook 2007:372).  Concepts did not get across, but practices and especially drugs did.  As Cook (2007:377) says:  “Culture certainly made translating the whys and wherefores as understood by one group extraordinarily difficult.  But it was no barrier to useful goods or the business of how to do something.”

The flood of medieval Arab material was almost all Aristotelian, and it led to an enormous revolution in European thought in the 12th and 13th centuries (Ball 2008; Gaukroger 2006).  The highly idealistic, other-worldly, broadly Platonic worldview of the Dark Ages gave way to a view that valued investigation of real-world things.  God’s plan as revealed in the actual experienced world became a major goal of investigation.  This was to be the key reason for scientific investigation for the next several centuries, as we shall see in the next section.

Traditional churchmen, however, caviled at the new rationalistic, worldly, logical approach.  They felt that “taking too strong an interest in nature as a physical entity was tantamount to second-guessing God’s plans” (Ball 2008:817).

This view rose in parallel to, and may have been derived from, the Muslim reaction against Aristotelianism.  In the Near East, but not in Europe, that reaction triumphed in the end.  Extreme reactionary religiosity, associated with the Hanbalite legal school, begat the Ash'arite view that speculation on the world was impious.  This received a huge boost through al-Ghazālī's savage attacks on the "philosophers" around 1100.  Hanbalite thinking has more recently given rise to the Wahhabism that swept the Islamic world in the late 20th and early 21st centuries.  Wahhabism was espoused by the Saud family in Saudi Arabia, and their oil wealth gave them the ability to propagate it worldwide, leading to Al-Qaeda terrorism, widespread attacks on girls' schools, and many other manifestations.  Islam is as diverse as Christianity; the Hanbalites are to the other legal schools as the hard-shell southern Baptists are to the mainstream Christians.

Ash’arism might not have triumphed, however, had not the Mongols swept through the Middle East, followed closely by the even more devastating epidemics of bubonic plague from 1346 onward.  These multiple blows ruined economy and culture, and left the region prostrate.

Science withered or ossified.  Folk wisdom continued to increase, and so did science in some marginal areas of Islam such as India and Central Asia.  But in general the torch was passed to Europe.  The roles of the Middle East and Europe were reversed.  Thus, writing on Ottoman Turkish medicine and natural history after the Turkish empire had passed its noon, Bernard Lewis reports that "they did not think in terms of the progress of research, the transformation of ideas, the gradual growth of knowledge.  The basic ideas of forming, testing and, if necessary, abandoning hypotheses remained alien to a society in which knowledge was conceived as a corpus of eternal verities which could be acquired, accumulated, transmitted, interpreted, and applied but not modified or transformed" (Lewis 1982:229).  Lewis also notes lack of interest in the rest of the world.  He correctly says it is more typical of human societies than is the ethnographic curiosity of Europe in the modern period.  But the ancient Greeks and the early medieval Muslims had been more attentive to "the others."

Lewis contrasts this strongly with the great days of early Islam, when the Near East was the scientific center of the world.  The Ottoman twilight may be an extreme case, but I encountered exactly those attitudes among older Chinese scholars in Hong Kong in 1965 and 1966.  Many of them told me soberly that the traditional fishermen I studied had six toes and never learned to swim.  A minute's observation on the waterfront on any warm summer day would have sufficed to disprove both claims, but the claims were old and were in the Chinese literature, and that was enough!  Such attitudes trace back to the declining days of the Ming Dynasty in the 1500s, and are not unknown earlier, but (as in Islam) they did not become general until economic and political decline set in.  Nothing could be farther from genuine traditional ecological knowledge; those same fishermen (and the Yucatec Maya I later studied) constantly tested and added to their pragmatic knowledge of their worlds.

The Origins of Early Modern Science

Things were very different in Europe.  Early modern science arose after Near Eastern and other sciences were incorporated there.  Perhaps from China or the Near East came the idea of the garden as a microcosm of the world; this idea led many to start gardens in which they tried to grow everything they could find (Cook 2007:30).

One odd pioneer was Paracelsus (1493-1541; see Thick 2010:200).  Wildly nonconformist and eccentric, he dabbled in mining, alchemy, medicine, and philosophy during a wandering life working as miner, chemist and doctor.  He believed all nature and life were chemical, and could be reproduced in the chemist's or alchemist's laboratory.  Chemistry and alchemy were not differentiated at this time—they were one science.  He made, or at least established in the literature, perhaps the two most important breakthroughs in liberating modern science from Greek mistakes:  he saw that diseases were separate entities in their own right, and not just forms of humoral imbalance; and he saw that at least some chemical elements—mercury and sulphur, to be exact (and he added salt)—were not compounds of earth, air, fire and water, but were actual elements themselves.  The first of these profound insights was taken up later by Sydenham and others.  The second was not to be fully developed until Lavoisier.  Still, the idea was out there; the seed was sown.

Medieval herbals gave way successively to Brunfels’ major one of 1530-36, Fuchs’ great book of 1542, and then in the late 16th century the truly great work of Dodoens (Cook 2007; Ogilvie 2006).

Of course, a dramatic moment was the coming of New World plants to Europe, first in the rather small work of Nicolas Monardes of Sevilla (1925), but then in the enormous and stunning achievement of Francisco Hernandez in the late 16th century.  Thought by some recent writers to be lost, or buried in imperial Spanish libraries, it was actually made available by the Lynx Academy (made famous by Galileo’s membership; Freedberg 2002; Saliba 2007).  It was republished in Mexico in an obscure wartime edition (Hernandez 1942), which languishes almost unknown; a new edition is needed.

Meanwhile, Bernardino de Sahagun was getting Aztec students and colleagues to record their knowledge, in the monumental Codex Florentinus (Sahagun 1950-1982).  These ethnoscience studies of Mexico are among the greatest achievements of plant exploration and of ethnography.

Only shortly before, Las Casas had led the successful movement to have Native Americans declared by the Catholic Church to be fully human and entitled to all human rights then recognized.  This was the beginning of the end for the appalling practices of early Spanish settlement, when Native Americans were enslaved and worked to death, or fed alive to dogs because they were cheaper than dogfood (Las Casas 1992; Pagden 1987; Varner and Varner 1983).  Las Casas risked his life for decades; the settler interests were openly after him.  Few political battles in history have been more heroic or more important.  Interestingly, Las Casas was the conservative in these fights; the modernizing “humanists” took the position that the conquerors had full rights to do anything they wanted to the “savages.”

Spain in the late 16th century was thus a dynamic place of forward thinking and spectacular achievement.  Monardes may have heard the masses of the great Sevillan composer Francisco Guerrero.  The year of Guerrero's death, 1599, saw the birth in Sevilla of the master painter Velázquez.  Contemporary with Guerrero, the incomparable Tomás Luis de Victoria was shuttling between Spain and Rome (where Palestrina composed his vast repertoire at the same time).

"New Spain" in the New World was rapidly catching up.  Spanish composers moved to Mexico and South America, where they taught the locals, initiating a period of Baroque music that is little known but unexcelled; among other things, Esteban Salas in Cuba became the first African-American to compose classical European music.  In the 17th century, Juan Ruiz de Alarcón migrated from his obscure Mexican birthplace to Spain, where he became one of the great dramatists and an absolutely unexcelled master of the Spanish language.  (He was one of those writers who can make strong men weep simply from the beauty of the sounds, even if they do not understand the Spanish.)  In short, Spain—including "New Spain"—in the 16th and early 17th centuries was fully participant in the brilliant and innovative civilization of Western Europe, along with Italy, France, the Netherlands and England.  Spain's melancholy decline set in before the full scientific revolution (or non-revolution), but not before scholars like Monardes and Hernández had contributed in a major way to it.

Ogilvie (2006) cautions that the new discoveries in Europe and the Near East were far more important in the development of botanical science than these rather sketchily-known New World discoveries.  However, these did indeed have a major effect (Gaukroger 2006:359; even so, Bernardino de Sahagun’s great work on Aztec knowledge, now known as the “Florentine Codex,” was not known in Europe at that time.)

Arabic learning, by this time, was entering Europe via Arabic-literate European scholars as well as immigrant Arabic-speakers like Leo Africanus (d. ca. 1550).  Leo taught Arabic to the European Orientalist Jean-Albert Widmanstadt (1506-ca. 1559).  A contemporary was Guillaume Postel (1510-1581), whose astonishing career has recently been reconstructed (Saliba 2007:218-220).  Postel served on a mission to Constantinople, where he apparently learned Arabic or at least developed an interest that led to his doing so.  He read and annotated technical works of astronomy and probably other sciences, and briefly taught Arabic in Paris.  People like him evidently alerted Copernicus to Arabic astronomy, which clearly influenced his work.

Just as Greek had been the exciting new language to Petrarch and his generation, Arabic was to the 16th century.  Arabic manuscripts are widely found in old European libraries (notably the Vatican and, of course, Byzantine libraries), and were not read by Arab travelers alone.  With the Lynceans and their colleagues seeking out knowledge from the Aztecs to the Arabs, Europe was suddenly a very exciting place.

An example of knowledge flow from the Near East to Europe may be of interest.  The idea of circulation of the blood seems to have started in Islamic lands.  Bernard Lewis (2001:79-80) records that “a thirteenth-century Syrian physician called Ibn al-Nafīs” (d. 1288) worked out the concept (see also Kamal 1975:154).  His knowledge spread to Europe, via “a Renaissance scholar called Andrea Alpago (died ca. 1520) who spent many years in Syria collecting and translating Arabic medical manuscripts” (Lewis 2001:80).  Michael Servetus picked up the idea, including Ibn al-Nafīs’ demonstration of the circulation from the heart to the lungs and back. William Harvey (1578-1657) learned of this, and worked out—with stunning innovative brilliance—the whole circulation pattern, publishing the discovery of circulation in 1628 (Pormann and Savage-Smith 2007:47).  Galen and the Arabs thought the blood was entirely consumed by the body, and renewed constantly in the liver.  They did not realize that the veins held a return flow; they thought the arteries carried pneuma, the veins carried nutrients. Harvey’s genius was to see that blood actually circulates continually, ferrying nutrients to and from the whole body in a closed circuit.

The Dawn of Rapid Discovery Science

Europe has progressed fairly continuously since the final eclipse of the Roman Empire, though there were some checks in the 14th, 17th, and 18th centuries as well as in the Great Depression of the 1930s.  Knowledge in particular has risen steadily, even through those difficult periods.

Europe after 1500 presents a strikingly different case from both medieval Europe and the other civilizations of the world.  The flow of Near Eastern, Chinese, and Indian learning to Europe was one major input into the rise of what Randall Collins (1998) called “rapid discovery science.”

Yet, the new wave really began with Thomas Aquinas, Roger Bacon, William of Ockham, and other medieval thinkers, and they of course were drawing on those Arab sources.  This stretches the famous "scientific revolution," beloved of earlier generations of historians, into a rather slow process.  The current feeling is that dragging out a "revolution" over many centuries is ridiculous.  We live in an age, after all, when the computer revolution took only a generation.

The most comprehensive study of the intellectual background to the "revolution" is that of Gaukroger (2006, 2010).  Gaukroger sees a development from the scholasticism of the high medieval period, with its Aristotelian natural philosophy, to modern science.  Before the high middle ages, Plato and Christian dogma had been riding high, inhibiting learning.  Gaukroger provides very important observations on Plato, Augustine and Manichaeism (Gaukroger 2006:51-54).  Aristotle was rehabilitated thanks to the Arabs and to Thomas Aquinas.

One might argue, in defense of the old term, that what happened in the 17th century was the most momentous single change in all human history, rivaled only by the origin of agriculture.  (The latter was also a very slow process, leading to fights about whether it was a “revolution” or not.)  I will, here, follow Collins, and refer to the event as the invention (basically between 1540 and 1700) of rapid discovery science, rather than as a “scientific revolution.”

The new, empirical, discovery-oriented, innovation-seeking science arose in the 17th century, pursuant to the work of Francis Bacon (1561-1626), Galileo Galilei (1564-1642), William Harvey (1578-1657), René Descartes (1596-1650), and their correspondents.  Francis Bacon first emphasized the need for experiments to prove claims and advance knowledge; he was opposing magic and dogma based on anecdotal evidence, as well as sheer ignorance.  He also emphasized the need for cooperation; the lone-wolf savant was already a dated concept!   Like other scientists, he wished to strip away the veil of Nature and disclose her; she had been a goddess who “loved to hide herself,” and was still poetically so represented (Hadot 2006).  After Bacon, tension arose between scientists who wished to strip her and romantics who preferred the veil (Hadot 2006).

One remembers that religion and science were not opposed then; in fact science was seen as the discovery of God’s laws in nature.  Descartes and Boyle were great religious thinkers as well as scientists.  The great astronomer Johannes Kepler studied a supernova and realized that the star that guided the Magi to Jesus might well have been such; he sought records and regularities, calculated a date for Jesus’ birth (by then it was known that it was not 1 AD), and coupled it with astrology—still a science then, though a dubious one (Kemp 2009).  Kepler also believed in the Pythagorean music of the spheres, seeing earth and nature moved by heavenly harmonies “just as a farmer is moved by music to dance” (quoted in Kemp 2009).

The revolution was real, if slow. (See Bowler and Morus 2005 for the canonical story; Gaukroger 2006, 2010 for much more detail and a much more radical view.)  It involved finding more and more real-world problems with ancient atomism, mechanism, humoral medicine, and almost everything else, and thus more and more reason to go with new knowledge rather than old teachings.

A fascinating insight into the mind of the time is Malcolm Thick’s detailed biography of Sir Hugh Plat (1552-1608; Thick 2010).  Plat was an Elizabethan tradesman, a brewer by background, who succumbed to the insatiable curiosity of the time.  He never made a significant contribution to anything, but he worked with beaver-like intensity on chemistry, alchemy, food, medicine, cooking, gardening, and every other useful art he could find.  He amassed an incredible collection of ideas, methods, and tricks, most of which he tried himself.  Plat is important not because of what he accomplished but because his story was typical.  There were thousands of ordinary people in Europe of the time who became downright obsessive over useful knowledge or simply science for science’s sake.  They wanted to help the world and to advance learning.

Plat's work is fascinatingly comparable to that of an almost exact contemporary, Song Yingxing (1587-1666?), who, oddly enough, found a biographer at almost the same time as Plat did (Schäfer 2011).  Song was a much more organized, and one gathers a much more intelligent, man than Plat, and produced a famous work instead of a flock of rather ephemeral items, but the mentality was the same:  an obsessive urge to find out absolutely everything about useful arts.  Yet Song's interests died with him, and no one like him existed in China for centuries.  Plat, on the other hand, was soon forgotten in the rush of new learning.

The same contrast—so bitter for China—is visible in herbals.  At the same time, Li Shizhen was compiling the greatest herbal in Chinese history and the greatest in the world up to his time (Li 2003, Chinese original 1593).  Li's work was the culmination of a great herbal tradition going back for millennia.  But he was almost surpassed in his own lifetime, and was surpassed soon after it, as the new European herbal movement went from strength to strength; Rembert Dodoens' breakthrough herbal came in 1554, to be followed by Parkinson's in 1629 and John Gerard's (based on Dodoens') in 1633.  Li remained the standard of Chinese herbals until the late 20th century.  Thus, in herbal wisdom as in useful knowledge, China was still up with the west in the 1590s, but had fallen hopelessly behind by 1650.  (One reason was the fall of the Ming Dynasty and its replacement by the often-repressive and scientifically sluggish Qing.)

Through all human history, people had followed received wisdom unless there was overwhelming reason to change.  The revolution consisted of the simple idea that we should seek new knowledge instead, using the best current observations.  These were ideally from experiments, but perfectly acceptable if they came from exploration and natural history, like Galileo’s work on astronomy (published in 1632), or even from pure theory, like Newton’s Principia mathematica (1687).

Robert Boyle (1627-1691) stated the case for experiment over received tradition in The Skeptical Chymist (2006/1661; cf. Freely 2009:214-215), taking the extremely significant position that even when he had no better theory to propose, he would not accept hallowed authority—he would wait for more experiments.  This is, of course, precisely the position that Thomas Kuhn said was hopeless, in The Structure of Scientific Revolutions (Kuhn 1962).  But it worked for Boyle.

It is no mere coincidence that, just as earlier scholars had their “republic of letters” and Galileo and his friends their “Lynx Academy,” Boyle depended on an “Invisible College” for stimulus and conversation.  Scientists may study vacuums, but they cannot work in one.  The sociology of science is vital.

Much of the revolution consisted of new opportunities to observe and test.  Consider the persistence of Hippocratic-Galenic medicine.  Few indeed were the people in premodern times who had Galen's opportunities to observe, experiment, learn, teach, and synthesize.  He had the enormous medical university in Pergamon, the whole resources of Rome, and his practice with gladiators and other hard-living people to draw on.  He was a brilliant synthesist and a dynamic writer.  The reason he was not superseded until the 17th century was that no one could really do it.  No one had the technology, the theories, the infrastructure of labs and hospitals, or the observational opportunities.  The Arabs and Chinese could, and did, supplement his ideas with enormous masses of data, information, and further qualification, but they were wise not to throw Galen over.  Radical rejection of his ideas was not fully accomplished until the 19th century, by which time modern microscopes, laboratories, and experimental apparatus had been perfected.  Long before that, Galen's anatomy was extended by Harvey, Willis and others; his lack of recognition of diseases as specific entities was challenged by Paracelsus, then devastated by Sydenham.  This was a long, slow process, and followers of the eccentric Paracelsus were considered quacks and outsiders in the 16th century (Thick 2010).  The newness and uniqueness of syphilis had much to do with the change in attitude.

The same was true in chemistry.  Boyle's courage in throwing out received wisdom on alchemy, particles, the nonexistence of vacuums, and elemental natures did not help him go beyond the ancients in regard to basic theory.  He discussed the atomic theory, but it too lacked real evidence at the time.  Above all, he realized that the world had proved to be far more complicated than the Greeks or the Renaissance scholars thought; he reviews dozens of sophisticated chemical experiments that proved this amply.  Old views simply would not fit.  But the future was unclear.

He could see that earth, air, fire and water were not much of a story, but he had no way of conceiving of the idea that earth, air and water were actually made up of simpler elements that were, or were comparable to, metals.  This involved reversing all conventional wisdom, which held that the basic elements combined to produce the metals.  This reversal was ultimately reached by Lavoisier in the 18th century.  It had to wait until improvements in experimental technique had isolated oxygen, nitrogen, and so forth.  Such a change in thinking was incredibly difficult to achieve, and truly revolutionary.  Finding out something new merely adds to knowledge, but this was a matter of turning upside down the whole basis of European thinking!  The earth-air-fire-water cosmology was basic to all aspects of (older) knowledge.  The recognition that these four substances broke down into simpler elements, rather than vice versa, was terribly hard-won.

Such new classification systems were extremely important.  Biological classification also underwent a basic paradigm shift.

The classification of living things, traditionally ascribed to Linnaeus, derives as much or more from the brilliant work of John Ray (1627-1705), an exact contemporary—in birth date at least—of Boyle.  Ray was a natural historian, fascinated with plants and birds, and a key person in uniting field work with laboratory work (specifically dissection; but note that the botanists had been there before him).

Ray developed the modern species concept—the idea that those organisms which can interbreed with each other form a species (Birkhead 2008:31).  In fact, Ray coined the term "species" in its modern use (Wikipedia, "John Ray").  He also rejected both the idea that each species has to be viewed as a unique item (as Locke implied) and the idea that it is merely one variant on a more general Platonic type; he pioneered the modern science of classification on the basis of picking out important traits of all sorts to distinguish species and group them taxonomically (Gaukroger 2010:191-194).  He thus foregrounded reproduction and reproductive structures, later shown by Linnaeus to be the really criterial things to look at in classifying plants.

With this system, sex mattered.  Anatomy mattered, and reproductive anatomy mattered more than superficial structures; Ray was a great pioneer in elucidating the reproductive anatomy and physiology of birds.  (In this he built on a great tradition, going back to surprisingly sensible if often wrong ideas of Aristotle's.)  Leaving descendants mattered; Darwinian evolution depends on Ray and Linnaeus more than on the infamous Malthus.  Without this concept and its implications, there was no reason not to classify plants by their leaves, as many botanists did.  (The leaf-dependent botanists were later to attack Linnaeus for the "immorality" of his "sexual" system.)  Trees could be classified by their timber value.  We shall consider below a much more recent question about what to do with whales.

Ray's work led to further development by Joseph Pitton de Tournefort, explorer of the Near East.  (I first encountered Tournefort as the man dubiously honored by Brassica tournefortii, a loathed and hated weed from North Africa that has invaded my southern California homeland.  But it tastes good—it is a wild broccoli—and thus I have a soft spot in my heart, or rather in my stomach, for it.)  The taxonomic work of Tournefort and his contemporaries led directly to Linnaeus.

Less beneficial, perhaps, was Ray’s crucial role in developing the “argument by design” for the existence of God (Birkhead 2008).  Later made famous by William Paley, this survives as the universal argument for “intelligent design” today.  It had the advantage of setting Darwin wondering what really caused the design in the world.  Natural selection was his answer—firm enough that a modern intelligent design advocate (like Francis Collins) must assume God, like modern artificial-intelligence designers, uses it to fine-tune his creation.

New and rigorous classification systems for stars, minerals, mental illnesses, and everything else imaginable were to follow, and they had and have their own costs and biases (Foucault 1970; Kassam 2009).  Today we have whole classification systems for everything from universes to subatomic particles.  Atoms, when discovered, were thought to be the true atoms of Greek thought—the final particles that could not be subdivided further.  (“Atom” comes from Greek atomos, “uncuttable.”)  Another bad guess.

This new wave’s creators saw themselves as a “Republic of Letters” (Gaukroger 2010; Ogilvie 2006:82ff; Rudwick 2005).  Educated people all over Europe were in constant correspondence with each other.  This correspondence was relatively unmarred by the hatreds and political games that made daily life in Renaissance Europe so insecure.  People respected each other across lines of nation and faith.  The common language, Latin, was not the property of any existing polity.  Members in this borderless but well-recognized Republic treated each other according to unwritten, or rarely-written, rules of respect and courtesy.

Science and humanities were one.  Describing a typical case, Martin Kemp (2008) points out connections between Pieter Bruegel's extremely accurate and innovative representations of landscape and the maps of Abraham Ortelius, a cartographer who was a friend of Bruegel.

Of course, all academics will realize that those rules of respect did not extend to debates about theory!  A Protestant could respect and tolerate a Catholic or Jew, but if anyone dared to cross his pet idea on plant reproduction or the treatment of ulcers, the words flew like enraged cats.  That was part of the game—part of business in the Republic of Letters.  This information flow presaged the value of scientific journals (invented in the 17th century but not really important till the 19th), and then the Internet; the vast network held together by letters in the 17th century was exactly like the scientific network on the Internet today.  All the Internet has added is speed—important, to be sure.

Religious solidarity and debate stood behind much of the vigor of debates in science, with Protestants and Jews always being on the defensive at first, and having to argue trenchantly for their beliefs.  This led them to be both original and persistent in thinking (Merton 1973; Morton 1981).  But, also, the wars of religion in the 16th and 17th centuries led to major cynicism about organized religion, and contributed mightily to retreat into science as an alternative way of knowing the Divine Will and into the Republic of Letters as an alternative and more decent way of being social.  The skepticism that surfaces in Montaigne, grows in Bayle, and climaxes in Voltaire fed a search for truths that were not simply matters of unprovable church dogma.

This development was exceedingly slow and uneven, because, contrary to conventional wisdom, the middle ages had plenty of sophisticated observation and argument, and the 17th and even 18th centuries had plenty of obscurantist, mystical, and blindly-Aristotelian holdovers.  Brilliant adversarial argument, technological progress, and economic benefits of forward research were all sporadic and contingent.  They did not suddenly cut in at the glad dawn in 1620 or 1650 or any other year.

What did cut in was neatly summarized by van Helmont, the Flemish physician whose famous willow-tree experiment convinced him that plants grew from water:  "Neither doth the reading of Books make us to be of the properties [of simples], but by observation" (quoted in Wear 2007:98).  Helmont had much to do with inventing the modern concept of "disease"—a specifiable entity, distinct from its symptoms.  The coming of plague and syphilis, clearly entities though very changeable in symptomatology and clearly different from anything in Hippocrates or Galen, had more to do with the origin of this concept; people simply could not ignore them.

Significantly, Helmont’s own work was badly flawed, not least because of his many mystical and even visionary “observations” (see Andrew Wear 2000).  17th-century science did not suddenly discover Truth in the face of learned Error.  In fact, Galen’s and Avicenna’s old books remained much better guides to medical practice than Helmont’s rather wild ideas.  What mattered was that Helmont, and many others, were breaking away from reliance on the books, and rapidly developing a science based on original observation and test.  Their willingness to endure false starts as the price of radical breakthrough is far more important, to science and to history, than their initial successes at replacing the classics with better ideas.

Deborah Harkness (2008) has shown that this type of activity—feverish quest for anything new, exciting, and informative—was exceedingly widespread in Elizabethan England, and by inference in much of urban Europe.  Everyone from farm workers and craftsmen to lords and high court officials was frantically seeking anything new.  Things that improved manufacturing and promoted profit were especially desired, but people were almost as obsessed with new stars, rare plants, and odd rocks as with more solid matters like improving metallurgy and arms manufacture.  This ferment contrasts with China's relatively staid attitude to innovation.  Even the works of Elman and of William Rowe, which do disclose much intellectual and craft activity in early modern China, have not turned up anything similar.  The Tiangong Kaiwu was roughly contemporary with, and similar to, Hugh Plat's Elizabethan work that gives its name to Harkness' volume, but unlike Plat's book it was an isolated incident, not a presage of more and better to come.  Similarly, Li Shizhen's great herbal came out at almost exactly the same time as the comparable works of Dodoens and Gerard.  (The relations of those two—with Gerard as plagiarist extraordinaire—are described in detail by Harkness.)  But Li's was the last great Chinese herbal, Dodoens' the first great European one.  By the early 1600s, Europe had surpassed China.

Harkness wisely includes alchemy and astrology among the useful sciences (see above on the Near East); no one at that time had a clue that one could not turn lead into gold or dirt into silver.  Recall that earth was still an “element” then; gold and silver were not.  Equally amazing things were being done daily in smelting and refining.  Similarly, everyone could see the sun’s influence on all life, and the moon’s control of tides; inexorable logic “proved” that the other heavenly bodies must have some influence.  The problem was that reality did not follow logic or common sense.

Moreover, alchemy, at least, sometimes worked.  We have a careful eyewitness account of a modern Central Asian alchemist turning dirt into gold (cited in Idries Shah’s Oriental Magic, 1956).  Fortunately, the account is extremely perceptive, allowing us to perceive that the good sage was simply panning a very small amount of finely disseminated gold out of a very large amount of alluvial soil. He added a good deal of magical rigmarole, but the actual process is clear.  He seems to have been genuinely convinced he was making the gold; finely disseminated gold in alluvial dirt is far from easy to see.  Countless such alluvial separations must have lain behind alchemy.  Similarly, mercury can extract gold from crushed auriferous rock, and is routinely used for that purpose today; if the gold particles are too small to see—as they often are—an alchemist would surely have thought he was turning rock to gold, via the “mercuric” power that led to naming the liquid metal after the trickster and messenger god.  And of course much of alchemy was spiritual, not physical.

The basic hopelessness of alchemy, however, was proved by Robert Boyle, in The Skeptical Chymist.  Boyle critiqued Galen, Paracelsus, and Helmont for reductionism without evidence, and upheld a view that was, indeed, skeptical; he saw no way to simplify chemistry.  He did not really substitute a new paradigm for an old one.

What mattered was that loyalty to and reliance on the old texts had given way to loyalty to independent verification and reliance on one’s own experiments and observations.  Boyle was not afraid to admit frank ignorance and to throw out theories without having much better to substitute.  Earlier generations, even though they were perfectly aware of the imperfection of old texts and the benefits of observation, did not trust their own innovative findings unless those clearly improved on all that had gone before.  Science thus progressed slowly and cautiously.  Boyle did not throw caution to the winds, but he had come to be a leader in a generation that preferred their own experiments to old stories, no matter how little their new experiments appeared to accomplish.  They were on the way to the modern period, when hypotheses and theories are expected to fail and to be superseded in a few years, and when “hard science” departments tell university libraries not to bother keeping journals more than a year or two (as I observed during my years chairing a university library committee).

Europe the Different

Floods of ink have been expended on why China, India and the Near East did not pick up on their own innovations, and why it was a tiny, marginal backwater of the Eurasian continent that exploded into rapid discovery science.

Clearly, it is Europe that is the exception.  The normal course of human events is to see knowledge advance slowly and fairly steadily, as it has done in all societies over thousands of years.  Chinese and Near Eastern science did not stop advancing when Europe took over the lead; they kept on.  Nor did the Maya, Inuit, Northwest Coast Native peoples, or Australian Aborigines stagnate or cease advancing at any point in their history.  They kept learning more.  Archaeology shows, in fact, that most such societies kept increasing their knowledge at exponential rather than linear rates.  Certainly the Northwest Coast peoples learned dramatically more in the last couple of millennia.  But the exponent was very small.  Europe’s since 1500 has been much larger.  In the 20th century, the number of scientific publications doubled every few years.  The doubling time continues to decrease.
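
(A rough illustration of the arithmetic—the figures here are illustrative, not drawn from the sources just cited:  a body of knowledge that doubles every T years grows at a compound rate of about ln 2 / T, roughly 0.69/T, per year.  A doubling time of fifteen years, of the kind often attributed to modern scientific literature, works out to nearly five percent a year; a doubling time measured in many centuries, more typical of premodern bodies of lore, works out to a small fraction of one percent a year.  The difference lies in the size of the exponent, not in the bare fact of growth.)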

This is quite unnatural for humans.  People are normally interested in their immediate social group, and in becoming better liked and more admired within it.  All their effort, except for minimal livelihood-maintenance, goes into social games and gossip.  (People do not work for “money”; they work for what money can buy—necessities and status.  Once they have the bare necessities, and perhaps a tiny bit of solitary enjoyment, everything else goes for social acceptance and status.)  Devoting oneself to science—to the dispassionate search for impersonal truth—is truly weird by human standards.  We still think of people with this interest as “nerds” and “geeks.”  Many of them are indeed somewhat autistic.  When I started teaching, I thought young people were interested in the world.  All I had to do was present information.  I learned that that was the last and least of my tasks.  The great teachers are those who can get students interested in anything beyond their immediate social life.

In fact, interest in learning more about the natural world is—in my rather considerable experience—actually markedly greater in traditional small-scale societies than in modern, science-conscious Europe and America.  I have spent many years living with Maya farmers, Northwest Coast Natives, and Chinese fisherfolk, and certainly the level of interest in nature and natural things was much greater among them than among modern Americans.  They were correspondingly less single-mindedly obsessed with social life.  They lacked, for example, the fascination with “celebs” that reveals itself in countless magazines and TV programs, and that much earlier revealed itself in ancient Greek and Roman adulation of actors and gladiators.  They were also much quicker to pick up skills and knowledge from other people and peoples than American farmers and craftspeople are.

Why did Europe in the 16th and 17th centuries suddenly become obsessed with Japanese medicines, Indonesian shells, and Near Eastern flowers?  Why did so many Europeans take breaks from the Machiavellian social games of their age to study such things?  Pliny had studied, and indeed invented, “natural history,” but his work became a “classic”—quoted, cited, unread, and unimitated—in its own time; natural history grew under Arab care, but truly flourished only in post-1400 Europe.

No such changes took place in the other lands.  If anything, they went the other way.  Near Eastern science declined sadly during this period.  (The Ottoman Empire was a partial contrast, but its history seems almost more European than Near Eastern at this time.)  India was preoccupied with horrific invasions and conquests by Tamerlane, Babur, and lesser lights.

China spent this period trapped in the Ming Dynasty, whose frequently unstable rulers and frozen, overcentralized bureaucracy stifled change.  Technological and scientific progress did occur, but it was slow.  Ming and Qing autocracy is surely the major reason—revisionists to the contrary notwithstanding (see e.g. Anderson 1988; Mote 1999 gives the best, most balanced discussion of the issue, suspending judgment but making a solid case).  In spite of Li Shizhen and his great innovative herbal of 1593, Chinese science remained deeply deferential to the past, discouraging innovative theories and ideas.  This point has been greatly overstated in western sources (often to the point of racism), and is now a cliché, but it is not without truth.  I have heard many educated Chinese strongly maintain points inscribed in old books but clearly and visibly wrong for present conditions.  In Hong Kong I was repeatedly told, for instance, that the fishermen I studied could not swim.  Anyone could see otherwise on a walk along any waterfront on any warm day.  But the old books said fishermen don’t swim.  In fairness to the Chinese, I have run into the same faith in books, as opposed to observation, in the United States and Europe.

China in the Song Dynasty was ahead of Europe in every field, and ahead of the Near East in most areas of science and enquiry.  The Mongol Empire, and its continuation in China’s Yuan Dynasty, instituted a massive transfer of knowledge (Anderson ms.; Paul Buell, ongoing research; Buell et al. 2000), leveling the playing field and introducing many Chinese accomplishments to the western world.  Gunpowder, cannon, the compass, printing, chemical technology, ceramic skills and many other innovations spread across Eurasia.  However, the Mongol yoke was repressive in China.  The end of Yuan saw violence and chaos.  The new Ming Dynasty brought in much worse autocracy and repression.  After an uneven but fairly successful start, the dynasty settled down after the 1420s to real stagnation.

A significant and highly visible symptom is the paralysis of philosophy.  The spectacular flowering of Buddhist, Taoist, and Neo-Confucian thought under Song and Yuan had a deeply conservative tinge, but at least it was a massive intellectual endeavor.  Highly innovative ideas were generated, often in the name of conservatism.  (An irony not exactly unknown in the western world; someone has remarked that all successful revolutions promise “return to the good old days.”)  By contrast, the only dramatic philosophical innovation of the Ming Dynasty was that of Wang Yangming.  Wang was a high official with a brilliant career as censor and general.  He retired to propagate his personal mix of Confucianism and Buddhism, an “inner light” philosophy strikingly similar to Quaker thought but marked also by a profound skepticism about worldly success and worldly affairs in general.  He moved Confucian philosophy much closer to the quietism and mysticism of monastic Buddhism.  Wang was one of the key figures in turning Chinese intellectuals inward toward quietism, which in turn was one of the causes of China’s failure to equal Europe in scientific and technical progress.

Larry Israel (2008) has given us a superb dramatic account of Wang’s subduing of an apparently psychopathic rogue prince of the Ming Dynasty.  It shows another side of Wang.  By a combination of absolutely brilliant generalship and political savvy (not without Machiavellian scheming), he parlayed a very weak position with about 10,000 troops into a total victory over a huge rebellion involving—according to Wang’s reports—some 100,000 troops, many of them hardened bandits and outlaws.  Wang is described as maintaining perfect cool through it all, and showing perfect timing.

It is interesting to compare him with his near-contemporary Michel de Montaigne, another soldier turned sage.  Wang was far higher up the administrative and military ladder than Montaigne, but had the same ambivalence about it and the same desire to retire to meditative and isolated pursuits as soon as he could.  The great similarity in life track and the real similarity in philosophy do not extend to any similarity in the effects of their thoughts over the long term.  Montaigne’s skepticism and meditative realism were enormously liberating to European intellectuals (see e.g. Pascal’s Pensées), and Montaigne thus became a major inspiration of the Enlightenment.

Montaigne remained less quietist and escapist than Wang, but the real difference was in the times.  If the world had been different, Wang might have started a Chinese enlightenment, and Montaigne might have turned Europeans inward to arid meditation.  Wang’s thought was perfect at feeding the escapism of Chinese intellectuals faced with a hopelessly stagnant and degenerate court.  Montaigne’s rather similar thought was perfect at feeding the idealism and merciless enquiry of European intellectuals in a time of rapid change, dynamic expansion of empires, and terrific contestation of religion and rising autocracy (cf. Perry Anderson 1974).

A huge part of the problem was that Chinese intellectuals served at the mercy of the court, and the Ming court was erratic and punitive, regularly condemning innovators and critics of all kinds (Wang barely survived).  By contrast, many of Europe’s first scientists were minor nobles who had little hope of major advancement but no fear of falling far.  Moreover, like scientists everywhere until the 21st century, they were males who had long-suffering wives to do the social and family work.  Today, married female scientists still usually have all the responsibility of remembering birthdays, organizing children’s parties, and being nice to the boss at dinner; some resent it, some enjoy it, but all recognize it is a special and unfair burden.  Throughout the world in premodern times, science was the preserve of males, and at first of well-born ones.  Only they had the leisure and resources to pursue science.  They were often young and adventurous.  Today, the average age of scientists who make major innovations and get Nobel prizes is around 38 (Berg 2007); in math and physics it is considerably younger than that.  In the Renaissance and early modern period, averages would have been even lower, because of the shorter lifespans of those days.

Benjamin Elman (2005) has shown that the clichés about China’s failure to learn from Europe are not adequate accounts.  The Jesuits in the 17th and early 18th centuries did not bring modern European science; they brought Aristotelian knowledge and old, pre-Copernican astronomy, already discredited in Europe.  The Chinese already had science as good as that.  The Jesuits failed to introduce calculus and other modern mathematics.  The Chinese took what they could use—clocks, some mapping techniques—and saw correctly that the rest was not worth taking.  The Jesuits lost their China foothold, and the order was eventually suppressed entirely (to be revived much later); China had no real chance to learn until other missionaries flooded into the country in the 19th century.  The Chinese did, however, continue to benefit from, and develop, the knowledge they had learned from the Jesuits.  (Interestingly, this point had been made 60 years earlier by the anthropologist A. L. Kroeber [1944:196], without the materials available to Elman—showing what can be done by a relatively unbiased scholar in spite of the lack of any good information on just how successful Chinese science was.)

Elman systematically compares scientific fields ranging from mathematics and engineering to botany and medicine.  (Among other things, he notes that western medicine had some impact at the same time that the indigenous Chinese medical traditions were moving from a focus on cold to a more balanced focus on both cold and heat as causes of illness.  Like most premodern peoples, their naturalistic medical traditions gave heavy importance to those environmental factors.)  He misses the one that would best make his case:  nutrition.  Chinese nutritional science was ahead of the west’s till the very end of the 19th century.  This was one case in which the west should have done the learning.

After that, China learned about as fast as any country did.  Japan did not get its famous clear lead over China in borrowing from the west till late in the 19th century.  Elman sums up a general current opinion that China’s loss of the Sino-Japanese War of 1894-95 was not because China was behind technologically, but because China was corrupt and misgoverned.  The Empress Dowager’s infamous reallocation of the navy’s budget to redecorate the Summer Palace was only one problem!

This being said, the Chinese were indeed resistant to western knowledge, slow to realize its importance, slow to take it up, slow to see that their own traditions were lacking.  Elman is certainly right, both intellectually and morally, in stressing the Chinese successes, but he may go a bit far the other way.  He sometimes forgets that only a tiny elite adopted any western knowledge.  He admits the Jesuits had no effect outside court circles—they were sequestered from the people.  In fact, China missed its chances till too late, and its borrowings were then interrupted by the appalling chaos of the 20th century.  Only in the 21st century did China finally drop its intellectual isolationism.

A Few Notes on Later Change

Science as a reliable cranker-out of money-making technologies is a 19th-century perception.  During the period of the (supposed!) “scientific revolution,” craftsmen, not scientists, made the profitable innovations.  The brilliant and pathbreaking innovations in agriculture, textiles, dyeing, mining, and other arts, from the 1400s on (after Europe had internalized Moorish introductions), are all anonymous.  While Bacon and Descartes were making themselves famous, the really important technological developments were being made by farmers and laborers, whose names no one recorded but whose deeds live on in every bite we take and every fibre we wear.  Few things are more moving, or humbling, than realizing how much we now owe to countless unnamed men and women who lived quiet good lives while the rich and famous did little besides pile up corpses, or, at best, write learned Latin tomes of speculation.

On the other hand, though some at the time said that science only satisfied “idle” curiosity, the very use of the invidious word “idle” indicates that more “serious” game was afoot.  Besides the obvious utility of medicine, there were countless works on transport, mining, agriculture, water management, architecture, and every other art of life.  As recognized in the old phrase “Renaissance man,” a well-known artist, politician, or literary person might make scientific advances in practical fields.  Most famously, Leonardo da Vinci made contributions (or at least plans for contributions) to many.

All this was learned much less rapidly than we once thought.  It took generations for the whole complex of observation, experiment, open publication, and forward-looking, inquiring, argumentative science to take wide hold.  Moreover, the founders’ mistakes conditioned science for years, or even centuries.  Worst in this regard was Descartes’ claim that nonhuman animals are mere machines, without true consciousness.  Not until the late 20th century was this idea—so pernicious in its effects—definitively excised from serious science.

However, blaming Descartes for mind-body dualism, or for the idea that animals are mere machines, rests on the assumption that major cultural change occurs because a brilliant individual has a great insight which then trickles down.  This is not how culture change occurs.  It comes from continual interaction with the natural and social world, leading to general learning and constantly re-negotiated conclusions.  Descartes merely put fancy words to what had been church dogma for 1600 years.  He had his influence, but it was minor.

Medicine too reveals a slow, halting progress.  Notable innovators were Hooke, Boyle, and Thomas Sydenham, who developed from the Helmontian canon further ideas of nosology—systematic classification of named disease entities, rather than mere description of symptoms and inferred humoral causes—and laid the foundations for modern epidemiology (Gaukroger 2006:349-351; Wear 2000).  Boyle, ever the innovative and devoted mind, even counseled learning medical knowledge from Native Americans, long foreshadowing modern plant hunting (Gaukroger 2006:374).  However, Galenic medicine held sway through the 19th century, and in marginal areas right through the 20th.

However slow and uneven this all was, dynamic, forward-looking figures like Galileo, Descartes (who invented mathematical modeling as a systematic scientific procedure), Hooke, and Boyle did indeed transform the world.  The really critical element was their insistence on observation and experiment.  Before them (and for a long time after), Europe could never shake off its devotion to prior authority.  Rapid discovery science came when people realized that Aristotle, Avicenna, and other classics were simply not reliable and had to be tested and supplemented.

European expansion and the rise of entrepreneurship have long been prime suspects in all this (Marx, Weber, and almost everyone else in the game mentioned them).  The correlation of maritime expansion, discovery, nascent mercantile capitalism, and science—the four developing in about that order—is too clear to ignore.

This had a background not only in the Mediterranean trade (Braudel 1973) but also in the European fishery, which developed early and expanded into a high-seas, far-waters fishery by the 1400s (see e.g. Cook 2007:7-8).  This allowed Europe to take full and rapid advantage of Chinese and Arab maritime advances, and Europeans developed navigation and seamanship to a unique and unprecedented level by 1450.  Holland and Portugal, the nations most dependent on fisheries anywhere, took the lead.

After that, mercantile values took over: the need for honest dealing (within reason!), enterprise, factual information, and above all keeping up with every bit of new knowledge and speculation.  Everything could be useful in getting an advantage in trade.  Even clear prose (necessary to scientific writing, at least today) may owe much to this need of merchants for simple, direct information (Cook 2007:56, 408-409).  The whole organization of the new science was influenced by the organization and institutions of the new mercantile capitalism.  Also, merchants wanted tangible signs of their travels and adventures: gardens, curiosity cabinets.

This classic theory has recently received a powerful boost from Harold Cook, who traces out the rise of Dutch business and science in Matters of Exchange:  Commerce, Medicine, and Science in the Dutch Golden Age (2007).  He shows that Dutch science was very much a matter of cataloging and processing the new items the Dutch were discovering in Indonesia, Japan, Brazil, and elsewhere.

Terms like “scientist” and “biology” date from the 19th century, as does “science” in its modern sense.  (“Scientist,” coined by William Whewell, was not really a new word; it merely replaced earlier terms like “savant” and “scient,” which had become obsolete.)

In the early modern period, the people in question were simply called “scholars,” because no one clearly separated science from theology, philosophy, and other branches of knowledge.  Enquiry was enquiry.  Only in the 19th century did disciplines become so distinctive, formal, and methodologically separate that they had to have their own names.

By the late 19th century, folk knowledge of the world had separated from formal knowledge so completely that yet another set of new terms appeared.  Consider the term “ethnobotany,” coined in 1895 by John Harshberger to refer to the botanical knowledge of local ethnic groups.  This was an old field of study; Dioscorides really started it, and the 16th-century herbalists pursued it with enthusiasm—Ogilvie (2006:71) called it “ethnobotany ante litteram.”  Linnaeus drew heavily on folk knowledge in his botanical work.  China had a parallel tradition; Li Shizhen drew on folk wisdom.  But no one saw folk botany as a separate and distinctive field until the 1890s, when science became so formalized and laboratory-based that the old folk science became a different thing in people’s minds.

Conclusions on Science History

Looking back over the preceding sections, we see that the main visible difference was the explosion of trade and conquest, especially—but far from solely—in the 15th and 16th centuries.  This brought Europe into a situation where it was forced to confront a fantastically increased mass of materials to classify, study, and use.  It simply could not ignore the new peoples, plants, animals, and so on that it had acquired.

Exactly the same problem faced the Greeks when they grew from tiny city-states to world empire between 600 and 300 BCE, and they responded in the same way, leading to the scientific progress of the period.  The golden age of Chinese philosophy came in a similarly expansionist period at the same time, but Chinese science peaked between 500 and 1200 CE, with rapid expansion of contacts with the rest of the world.  The Arabs repeated the story when they exploded onto the world scene in the 600s and 700s.  In all cases, stiffening into empire was deadly; it slowed Greek science in the Hellenistic period, and virtually shut down Chinese and Near Eastern science after the Mongol conquests.  These conquests did much direct damage, but their real effect was to introduce directly—or create through reaction—a totalitarian style of rule.  China’s Yuan and especially Ming dynasties were hostile to change and innovation; Qing was less so, but not by much.  The change in the Near East was even more dramatic.  The spectacular flood of scientific works suddenly shut off completely after the Mongols (and the plagues that soon followed).  There was hardly a new book of science from then until modern European scientific works began to be translated.  Even today, the Near East lags almost all the rest of the world—including some far less developed regions—in science.  As expected, the worst lag is in the most autocratic countries.  The least lag is found in the more politically sane nations, such as Turkey, where both liberal Hanafi Islam and a European window have led to greater openness.

Europe and America have not, so far, suffered totalitarian death, but the United States from 2010 onward shows exactly how this happens.  The far right wing of the Republican party took over the House of Representatives and most state governments in that year, and immediately began a full-scale assault on the funding, the independence, and the freedom of teaching of the country’s research and teaching institutions, from grade school to the National Science Foundation.  An almost total defunding of science was advocated.  In education, teaching itself was under attack, with proposals to replace trained, independent teachers overseeing classes of 20-30 with low-paid, low-skilled staff, lacking job security, in charge of classes of 60-80.  Something very much like this happened in Ming and Qing China.

It also happened many times over in Europe, but there were always countries where scientists and scholars could take refuge:  the Netherlands in the 17th century, England in the 18th, France in the 19th, America in the 20th, and various lesser states at various times.  The European world’s fractionation saved it.  No one state could take over, and no one could repress all science.  In China, by contrast, the paranoid Ming Dynasty could shut down almost all progress throughout the whole region.  In the Near East, the Turkish and Persian empires did more or less the same thing.

In Europe, a feedback process developed.  The freer states promoted trade and commerce, which in turn stimulated more democracy (for various well-understood reasons).  This stimulated more searches for knowledge, relatively free of dogmatic interference.  Any advance in knowledge could provide an advantage in trade.  The rise of Republican anti-intellectualism in the United States tracked the replacement of trade and commerce by economic domination through giant primary-production firms, especially oil and coal interests.

Religion

Another factor was the tension between religious sects.  Robert Merton (1973) and A. G. Morton (1981) pointed out a connection between religious debate and science.  Merton saw Protestantism as ideologically hospitable to science.  I find Morton’s explanation far more persuasive.  He thought that the arguments between sects over “absolute truth” created a world in which people seriously maintained minority views against all comers, argued fiercely for them, and sought proof from sources outside everyday society.  They were used to seeing truth as defensible even if unpopular.

Cook (2007) confirms this by noting how many religious dissenters wound up finding refuge in the Netherlands—Spinoza and Descartes are only the most famous cases—and how many more resorted to publishing, teaching, or researching there.  Cook takes pains to point out that Dutch leadership in intellectual fields rapidly declined as the Netherlands lost political power, religious freedom, and mercantile edge (the three seem to have declined in a feedback relationship with each other; see also Israel 1995 for enormous detail on these matters).  Gaukroger (2006) has argued, reasonably enough, for a much more complex relationship, but I think Merton’s theory still applies, however much more there is to say.

Accordingly, the separation of science and religion is a product of the Enlightenment, and the “conflict” between science and religion is an 18th-19th-century innovation (Gaukroger 2006; Gould 1999; Rudwick 2005, 2008).  Before that, scientists, like everyone else, took God and the supernatural realm for granted (though there were exceptions by the 18th century).  Few saw a conflict, though the separation was beginning to be evident in the work of Spinoza and Descartes.  They deserve some of the blame for separating the natural from the moral (see Cook 2007:240-266).  Descartes inquired deeply into passions, mind, and soul, developing more or less mechanistic models whose more oversimplified aspects still bedevil us today.  Scientists like Newton and Boyle were not only intensely religious men, but they saw their science as a pillar of religious devotion—a devout exploration of God’s creation.  As late as the 18th century, Hume still argued that no one could seriously be an atheist, and was astonished when he visited France and met a roomful of them (Gaukroger 2006:27).  God was already seen as a clockmaker by the 14th century (Hadot 2006:85, 127), and by the 17th it appeared to many scientists that their job was to understand the divine clockwork.

The conflict of science and religion arose only after Archbishop Ussher and other rationalists overdefined the Bible’s position on reality, and had their claims shown to be ridiculous (Rudwick 2005, 2008).  Between fundamentalist “literalism” and 19th-century science there is, indeed, an unbridgeable gap.  However, no one who reads the Bible seriously can maintain a purely literalist position.  There are too many lines like Deuteronomy 10:16:  “Circumcise therefore the foreskin of your heart.”  (This line is repeatedly discussed in the Bible, from the Prophets down to Paul’s Epistle to the Romans, which discourses on it at great length.)  And the “Virgin Birth” is hard to square with Jesus’ lineage of “begats” traced through Joseph.  Be that as it may, today we are stuck with the conflict, sometimes in extreme forms, as when Richard Dawkins and the Kansas school board face off.

A conflict of science and philosophy arose too, but stayed mild.  Philosophy, however, fell from guiding the world (through the Middle Ages) to guiding nations (through the Renaissance and early modern periods) to guiding movements (through the 19th century) to being a game.  By the mid-twentieth century it had some function in guiding science, but had ceased to be a living force in guiding the world.  Economics has replaced it in many countries.  Extremist political ideology—fascism, communism, and religious extremism—has replaced it elsewhere.  Philosophical ethics have thinned out, though the Kantian ethics of Jürgen Habermas and John Rawls have recently been influential.

Mastering Nature

The early concern with “mastery” of nature has been greatly exaggerated in recent environmentalist books.  It was certainly there, but, like the conflict with religion, it was largely a creation of the post-Enlightenment world.  And it was not to last; biology has now shifted its concern to saving what is left rather than destroying everything for immediate profit.

The 19th century was, notoriously, the climactic period for science as nature-mastering, but it was also the age that gave birth to conservation as a serious field of study.  Modern environmentalists read with astonishment George Perkins Marsh’s great book Man and Nature (2003 [1864]).  This book started the modern conservation movement.  One of the greatest works of 19th century science, it profoundly transformed thinking about forests, waters, sands, and indeed the whole earth’s surface.  Yet it is unequivocally committed to mastery and Progress, not preservation.  Marsh forthrightly prefers tree plantations to natural forests, and unquestioningly advocates draining wetlands.  He wished not to stop human management of the world, but to substitute good management for bad management.  His only sop to preservation is an awareness of the truth later enshrined in the proverb “Nature always bats last.”  He knew, for instance, that constraining rivers with levees was self-defeating if the river simply aggraded its bed and eventually burst the banks.

This being said, the importance of elite male power in determining science has been much exaggerated in some of the literature (especially the post-Foucault tradition).  Scientists were a rare breed. More to the point, they were self-selected to be concerned with objective, dispassionate knowledge (even if “useful”), and they had to give up any hope of real secular power to pursue this goal. Science was a full-time job in those days.  So was getting and holding power.

A few people combined the two (usually badly), but most could not.  Scientists and scholars were a dedicated and unconventional breed.  Many, from Spinoza to Darwin, were interested in the very opposite of worldly power, and risked not only their power but sometimes their lives.  (Spinoza’s life was in danger for his religious views, not his lens-making innovations, but the two were not unrelated in that age.  See Damasio 2003.)  Moreover, not everyone in those days was the slave of an insensate ideology.  Thoreau was not alone in his counter-vision of the good.  Certainly, the great plant-lovers and plant explorers of old, from Dioscorides to Rauwolf and Bauhin and onward through Linnaeus and Asa Gray, were not unappreciative of nature.

And even the stereotype of male power is inadequate; many of these sages had female students, and indeed by the end of the 19th century botany was a common female pursuit.  Some of the pioneer botanists of the Americas were women, including incredibly intrepid ones like Kate Brandegee, who rode alone through thousands of miles of unexplored, bandit-infested parts of Mexico at the turn of the last century.

We need to re-evaluate the whole field of science-as-power.  Governments, especially techno-authoritarian ones like Bismarck’s Prussia and the 20th century dictatorships, most certainly saw “science” and technology as ways to assert control over both nature and people.  Scientists usually did not think that way, though more than a few did.  This leads to a certain disjunction.  Even in the area of medicine, where Michel Foucault’s case is strong and well-made (Foucault 1973), there is a huge contrast between medical innovation and medical care delivery.  Medical innovation was classically the work of loners (de Kruif 1926), from Joseph Lister to Maurice Hilleman (developer of the MMR vaccine).  Even the greatest innovators in 19th-century medicine, Robert Koch and Louis Pasteur, worked with a few students, and were less than totally appreciated by the medical establishment of the time.  Often, these loners were terribly persecuted for their innovative activities, as Semmelweis was in Hungary (Gortvay and Zoltán 1968) and Crawford Long, discoverer of anesthesia, in America.  (Dwelling in the obscurantist “Old South,” at a time when black slavery was considered a Biblical command, Long was attacked for thwarting God’s plan to make humans suffer!)  By contrast, medical care delivery involves asserting control over patients.  At best this is true caring, but usually it means batch-processing them for convenience and economy—regarding their humanity merely as an annoyance.  No one who has been through a modern clinic needs a citation for this (but see Foucault 1973).
