3/15/10

Glottochronology and Language Exchange

My thoughts on language exchange in this blog are indirectly related to an already-established field of linguistics called glottochronology (Swadesh 1949; 1951). What is glottochronology, and how might it relate to what I call "language economy"? Glottochronology is a measurement of the time of separation (called the time depth) between two related languages. Readers of historical linguistics are advised to read a well-written summation by Winfred Lehmann (1992), which I will paraphrase here.

Suppose we were to take five words from Modern English (ME): animal, four, head, I, sun. In Standard German the corresponding words are Tier, vier, Kopf, ich, Sonne. Since the first and third words are not related, we find 60% agreement between the two languages. Our next step is to compare this percentage with the expected rate of loss.

To permit ready determination of the time depth of two related languages, Robert Lees devised a formula: t is equal to the logarithm of the percentage of cognates c, divided by twice the logarithm of the assumed percentage of cognates retained after a millennium of separation, r. So...

t = log c / (2 log r)

Using our five words from ME and German, we determine t to be:

t = log 0.60 / (2 log 0.85)

Which is

t = -0.511 / (2 × -0.163)

which gives t ≈ 1.57. By the formula, English separated from German around 1000 × 1.57 years ago, that is, around AD 430. The Anglo-Saxons, in fact, moved into the British Isles around AD 435.
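The arithmetic above is easy to script. Here is a minimal sketch of Lees's formula in Python (the function name is my own, and I use natural logarithms; since the formula is a ratio of logarithms, any base gives the same t):

```python
import math

def time_depth(cognate_share, retention_rate=0.85):
    """Lees's glottochronology formula: t = log(c) / (2 log(r)),
    where t is the separation time in millennia."""
    return math.log(cognate_share) / (2 * math.log(retention_rate))

# The five-word English/German sample: 3 of 5 words are cognate.
t = time_depth(3 / 5)
print(round(t, 2))  # 1.57, i.e., roughly 1,570 years of separation
```

Note that with only five words the estimate is very noisy, which is exactly why the 100- and 200-word lists discussed below are preferred.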

In such a short list the range of error may be great. To reduce the chance of error, one would prefer a long, carefully designed list. The two most frequently used lists are one of 100 words, and another of 200 words.


Now, glottochronology is not without considerable controversy and problems. Lehmann mentions, for example, that when an emperor died, the ancient Chinese would cease to use certain old religious words and replace them with synonyms. But the overall method and aim are, I think, very important.

So how do glottochronology and language economy relate to each other? Glottochronology is an historical pursuit designed to date that which has already occurred. Language economy is an effort at tracking contemporary change and predicting future language exchange.

3/2/10

Some links for you...

For those of you who like to validate your belief in God with some pithy line like, "What are the odds that such and such happened?", I give you... THIS.

The next link is a fun little philosophical image. Click here.

2/22/10

Moral Animals and Moral Relativism


Francis Beckwith argues in "Why I'm Not a Moral Relativist" that there are only three possible sources of morality: illusion (i.e., there is no objective morality), chance, and intelligence. I like this argument. However, what if our biological and social evolution plays a central role in our acquisition of moral principles? I'm not sure we can place evolution, as a moral source, in the "chance" category, since under such an explanation morality arises in response to social conditions and is the natural result of being a social species.

Just to press the point, take a social species like the wolf/dog. They have moral rules that have evolved through their social interaction and their need to survive. As Bekoff and Pierce argue, there are dog "commandments," much like Moses' decalogue, that help wolf/dog society function. If wolf/dogs (along with other social organisms) have moral principles much like our own, then I'm further inclined to think that "chance" is a poor category for evolutionary explanations of morality.

Might intelligence have bestowed morality on wolf/dogs and us humans? I'm inclined to think there is a simpler explanation: the process that led to our biology also led to our morality. So, what category might evolutionary explanations of morality fit, if not chance (and not intelligence)? Perhaps a category of "mechanistic process" would better describe such explanations, showing Beckwith's argument to be, at bottom, a false dilemma.


________________________________________

2/19/10

Further analysis of language economy

In the previous post, I dissected the levels of language change into categories 1 through 4 in order of severity. The first category is a relatively mild amount of language export from one language to the other: a language borrows words only to fill a void, such as English borrowing the word 'cilantro' from Spanish. The reason we added the word cilantro is easy to imagine: a Spanish merchant sells a new product to a chef, who uses it in a meal for his master, who, impressed, asks the chef the name of the new plant. "The merchant said 'cilantro', sir."

But before we begin analyzing language exchange, we must continue splicing. We have seen the levels of severity in a language exchange; now we must categorize the psychology on a broad level. For the examinations that will follow in later posts, I think it useful to dissect the reasons for language change into two categories. Of course, many categories - perhaps an infinite number - could be created to explain why languages change or don't, but I think we can make do with two, which I will call normative and unitive forces.

Normative. This may also be described as the conservative force of the mind, the interior resistance to language change. Normative forces maintain language laws within our mind and resist the replacement of words and grammar structures with new versions.

Unitive. The liberalizing, largely external force that encourages us to add or modify existing words or grammatical structures to the way we speak. This may also be described as an ongoing desire to speak appropriately. For instance, though the normative forces encourage me to speak colloquially with my friends in a simple manner, college simply will not accept simple talk - and in order to be accepted I have had to add thousands of new words to my vocabulary and even change existing grammar structures.

Normative forces are directly linked to language-economy supply. A US tourist who does not want to learn to converse in Portuguese will try to get by with as much English as possible. The Portuguese waiter, in order to make him- or herself more hireable, will be quite responsive to unitive forces (which are directly linked to language-economy demand) and will pick up quite a bit of restaurant English, maybe even more. Will the Portuguese waiter retain the English and use it in everyday scenarios? In category 1 or even category 2 situations, usually not. Perhaps a phrase or two slips in, but in general the demand for English is not high enough to permanently change the Portuguese waiter's interior vocabulary.

2/14/10

The Economy of Language-Exchange

A particular deficit in the linguistic literature that I feel needs research is a field I will call "Language Economy": the examination of the fundamental nature of exchange between languages. When cultures with different languages come into contact, I propose that there are a number of possible pathways along which linguistic cultures interact, and these deserve categorization. Here are the categories I propose, ranging from the smallest amount of language exchange to the greatest:

Category I. Light Language Trade
Languages admit words for nouns, verbs and proper nouns (especially toponyms) to fill a noticeable language gap. [Example: when the Spanish began selling us a new condiment for our food, we bought their name for it too, 'cilantro']

Category II. Moderate Language Trade
Beyond one language adding words to fill gaps, existing words are also replaced by foreign words. Colloquial phrases enter the language.

Category III. Extensive Language Trade
Grammar changes, large-scale vocabulary replacement; core vocabulary begins to be replaced. [Example: French domination of the English language from 1066 onward]

Category IV. Complete Language Trade
Core vocabulary replaced save for a few words. Core grammatical structures change.

In my opinion, the interactions between the languages of the world are the freest economies we could ever study.

2/7/10

Linguistic Gravity Model of Vocabulary-trade



The above formula was developed in 1954 by Walter Isard to give a very good estimate of how much trade (measured in currency, usually dollars) goes on between two countries. I won't get into the specifics, but the formula basically says this: given the distance between two countries (physical distance is a huge barrier to trade) and the relative sizes of both countries' economies, Fij will be equal to how much trade goes on between the two. The formula has turned out to be a reliable one, consistently predicting figures close to the trade we actually observe.

But I would like to take the formula out of its economics box and bring it into contemporary linguistics. Namely, with sufficient work, a formula could be devised to predict how much a language will borrow from another language due to contact.

i = the submissive language, the language that imports
j = the dominant language, the language that exports
Mi, Mj = the economic mass of each culture
Dij = the distance between the two cultures
G = a constant ≈ 1
Fij = total language exported from j to i

So...

Fij = G(MiMj / Dij)
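To make the analogy concrete, here is a minimal Python sketch of this speculative formula (the function and parameter names are mine, and the input values are illustrative placeholders, not calibrated data):

```python
def language_flow(mass_i, mass_j, distance_ij, g=1.0):
    """Gravity-model sketch: predicted vocabulary flow from dominant
    language j into submissive language i, by analogy with Isard's
    trade formula F_ij = G * (M_i * M_j) / D_ij."""
    return g * (mass_i * mass_j) / distance_ij

# Doubling the cultural distance halves the predicted flow:
print(language_flow(2.0, 8.0, 4.0))  # 4.0
print(language_flow(2.0, 8.0, 8.0))  # 2.0
```

The hard empirical work, of course, would be defining measurable units for "linguistic mass" and "cultural distance"; the sketch only shows the shape of the relationship.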

The important thing this equation predicts is the amount of language exported into a submissive language, but not the other way around. Sociologists have long noticed that language exchange is never equal. Iberian Latin absorbed almost no local words during the Roman Empire, yet Latin quickly demolished all local tongues in the peninsula, to the point where it is impossible to know what languages once existed there. For a contemporary example, look at Mexican-United States relations and their languages: Mexican Spanish uses an incredible breadth of 'Americanisms,' from retiro (in the sense of 'to retire from a job') to 'dippear' ('to dip'). Yet how many Spanish words have entered the General American English lexicon? Very few.

2/3/10

Hey, you...

For the last two centuries, linguists have been tracing the roots of words back to their earliest states by means of comparative analysis (the field is called 'historical linguistics'). Beginning with British scholars working abroad for the empire, amateur philologists noticed great similarities between English, Greek, Latin, and native languages, most remarkably the Indian language Sanskrit. The similarities were too great to ignore. The fact that the Old English word 'snaw' (snow) sounded much like Old Slovak 'sneigh' and the Sanskrit verb 'snihyati' ('becomes wet') gave rise to a new branch of history in which words were cross-examined to recover their original sound and meaning (*sneigʷʰ-). Suddenly no word in any language was safe from being thoroughly explored by eager linguists to discover its origin. Some languages once thought to be completely unrelated were brought together (such as English, Hittite and Sanskrit). Others were shown to be genetic isolates without any living relative (like Basque or, arguably, Etruscan).

More recently, the Oxford English Dictionary has been a continuous attempt to consolidate etymological studies into a single source, making available to the public an enormous compendium tracing the roots of every single English word. The results have been largely successful. But in etymological studies there have been rogue words that defy etymology - words like 'quiz' that confound the scholar, despite numerous 'folk etymologies' (read: rumors).

One of those words is one of our most popular: 'hey', as in a comment to attract someone's attention. Why is this word a challenge? The origins of the word 'hey' are, in a sense, too well attested. Words like 'hey' appear in nearly every language, with similar sounds. Some even speculate that 'hey' is a natural linguistic phenomenon of the mind in society. Some Indo-European languages have remarkably similar words; Greek has 'ei' and Mexican Spanish uses 'ay', for instance. But so do completely unrelated tongues. Chinese has 'ai' (single tone), Burmese purportedly 'aey', and from my own experience Gipuzkoan Basque uses 'ei'.

But I argue that 'hey' can be demonstrated to have etymological roots going as far back as Proto-Germanic *hɜɪ. First, the English 'hey' has extremely old roots in writing, despite being the kind of colloquialism that is unlikely to appear in formal documents (how often do you write 'hey' in your essays?). The OED places the first occurrence of 'hey' in writing at AD 1295 - long after the Norman conquest placed firm French roots into our language. But the word 'hey', with its rough /h/ phoneme, has no clear link to French. In fact, cognates of the word with /h/ exist in the Germanic languages, not the Romance ones: Modern German has 'hei', Dutch 'hei', and Swedish 'haj'. So there are clear reflexes of 'hey' in the Germanic languages.

There is also powerful evidence outside the Germanic languages. Finnish, an unrelated language that is part of the Uralic family (a family not demonstrably linked to Indo-European apart from interesting similarities in their pronouns), has a long tradition of using 'hey' - but the other Uralic languages (Saami, Hungarian, etc.) do not have a strong history of a similar word with the infamous /h/. Finnish, however, is famous for its large number of Germanic loanwords.

Take the word 'king'. In Old English the word was 'cyning,' and the word is still taught - along with Hwaet - in high-school English classes today. Cyning is part of a well-attested series of Germanic words for king that all sound alike. To skip to the end of the story, 'king' has a root in Proto-Germanic *kuningaz, which was reconstructed about a century and a half ago. Finnish scholars around the turn of the century found that their word for king was actually a borrowing from a foreign source and had no root in the Uralic family at all. The word? Kuningas, providing exciting confirmation of a Proto-Germanic reconstruction from outside sources. As it turns out, Finnish has borrowed a huge number of words from Proto-Germanic, and it is the opinion of this author that 'hey' is but another example - especially since the majority of examples of 'hey' in languages unrelated to IE (or even within IE languages, for that matter) do not have the classic /h/ sound. Now all that remains for me is a careful analysis of the word, which I hope will lead to a paper someday soon.

1/29/10

The "Gospel" and False-Belief


Here's a standard form of "sharing the gospel" used by Christians: "If you were to die and stand before God and he were to ask, 'Why should I let you in?', what would you say?" I asked a group of 34 students, primed by reading Clifford's "The Ethics of Belief" and James's "The Will to Believe": what if you answered "because I believe X," and X was the correct answer, but God still refused you entrance? What would that say about your belief? Here are their results:

14 students thought that they had poor reasons for their belief. (A Clifford-like response)
10 students thought their belief was not *really* believed, perhaps made invalid by their actions or self-deception. (a proto-Jamesian response)
4 students thought that God was either lying or testing them about their answer.
4 students thought that their belief was based on selfish desires, like getting into heaven.
2 students thought that their belief was insufficient and that they should have believed more than just X.


1/25/10

"Advanced Chess" and Cognitive Augmentation


Garry Kasparov, famous for his 1990s matches with Deep Blue, IBM's chess machine, has designed a *new* way to play chess in which players simultaneously use a computer chess program. "Each player had a PC at hand running the chess software of his choice during the game. The idea was to create the highest level of chess ever played, a synthesis of the best of man and machine." Is this taking chess to "the highest level"? Or is it destroying the game? Read more in his New York Review of Books article.

1/23/10

Human Nature and Cognitive Augmentation


I've received some wonderful comments on William Cornwell's talk at the last CIPHER workshop, in which he argued that technologically advanced augmentation of our senses, bodies, and minds makes us "more human".

"I do believe that chips implanted into someone’s neurophysiology wouldn’t make them more human, but rather less human. To have a machine do decision-making for us, affects all parts of the brain, and mechanizes us to the degree that our humanity is compromised. To some degree electronics augment us cognitively, but to a greater extent it hinders us and provides us with more trouble. (...) The stakes are high. Are you willing to put your life in the hands of a robot, even if it is controlled by “your” own brain? Augmentation cognition would alienate us from humanity. (...) We are not made to last pharmaceutically beyond 106 years old, and not only would a robot be essentially not human but it would outlive generations."

"But this is all reminding me of the Tower of Babel. Humans have this tendency to try to build themselves higher and higher, and to make themselves gods, improving upon everything they can. (...) Although it might seem mysterious and unreasonable to us, God doesn’t want us to all band together and become more and more powerful as a race. We can see this clearly from how he reacted to the building of Babel: “The Lord Said, ‘if as one people speaking the same language they have begun to do this, then nothing they plan to do will be impossible for them. Come, let us go down and confuse their language so they will not understand each other” (Genesis 11:6). There is a line in my mind between using tools to make our lives easier and trying to improve upon ourselves as a species because we think what we are is inadequate."


1/8/10

Ethics of Simulation

Recent talk of the film Avatar (2009) has brought up an interesting moral question I've been tossing around for a while now. What are the moral implications of digital simulation games involving death? While video games have, in the past, been linked to youth violence--and vice versa--I am wondering about the morality of the games themselves.

As computer-generated imagery becomes more and more lifelike, I wonder at what point it becomes immoral to kill it. How does destroying my opponent's civilization in Age of Empires (an excellent, if dated, real-time strategy game) compare to the sack of Rome in A.D. 410?

As a History major I am sensitive to the fact that on one level even asking this question is an affront to the people who lived through that event, but still, the question nags. I also realize that Artificial Intelligence may be far from a reality, but intelligence is not necessary for basic rights. Animals, and even plants, are granted rights--or at least their lives are held to have value, if only utilitarian in nature--so why not the unreal?

I am far from believing we have to worry about simulating beings revolting against their masters dwelling in the land of reality, but I think the morality of simulation, in concept, bears further inquiry.

1/4/10

Graduate School in the Humanities: A bust



The Chronicle has a good argument for working at the intersection of the humanities and the sciences, reporting that jobs for professors in the humanities have been going down the swizzle for some time. For those interested in the practical benefits of being a humanities scientist, check out HASTAC.

“Knowledge goes from my ears to my brain, not from my finger to my brain,”


Here's a great article from the NYT on the current state of braille use.

Many blind users seem to be forsaking tactile braille for audio applications. Surprisingly, only 10% of blind people make use of tactile braille. A stronger point is made by Laura Slote, blind from age six: "It's an arcane means of communication, which for the most part should be abolished." Apparently, tactile braille reading is slow, cumbersome (a Harry Potter book runs over 1,000 pages in braille), and isolating.

The only arguments I've heard in support of widespread tactile braille use are from developmental advocates, who say that its use recruits needed areas from the visual cortex for normal brain functioning, as the article reports. But similar reports have been made about auditory stimulation activating the visual cortex.

In sum, the thought that people without sensory deficits are prescribing clunky devices for people with sensory deficits, when more user-friendly devices are in the offing, seems a bit ridiculous.


1/3/10

More Virtual Reality Ethics



Once again the virtual-reality world Second Life ignites a flurry of ethical quandaries. Patrick Davison's talk at Ignite (http://ignite.oreilly.com/2009/12/patrick-davison-and-the-plight-of-the-digital-chickens.html) about virtual chicken farming raises some interesting issues relating to virtual property ownership and anonymous social interactions. The main thrust of his talk, however, is that different people perceive the function of virtual spaces differently, and this difference in perception leads to conflict.

When dealing with digital ethics, it is important to keep in mind the different ways of perceiving digital reality. To some it is a new way of being, to others it is merely a tool, to others still it is just a game. When conflicts of digital ethics arise, it is often the case, but rarely noticed, that the conflict stems from a fundamentally different view of digital reality.

12/29/09

Humans are only Humans if they are Bionic Humans

National Geographic's feature article provides a nice overview of the present state of adaptive technologies. The graphic presentation is really cool. The eye surgery video is simply shocking, and shows, to my mind, that retinal implants are the fastest area of progress in the last five years.

The article's overall claim, that humans are "better off" with this technology, is a bit tame, to say the least, particularly in contrast with William Cornwell's view that we can only be human with this technology. For more on this, see Cornwell's paper, "Human Nature Unbound," and come out for his talk at the CIPHER workshop, January 19th.


12/26/09

The Ethics of Avatar


My paper about our confused intuitions on moral status of virtual avatars is up in the latest edition of Stillpoint Magazine.

A reader just emailed me asking about the moral dimensions of avatar use explored by James Cameron in his engaging movie Avatar. In this film, an other-world tribe of native humanoids, the Na'vi, adopt into their culture the "avatar" (what the Na'vi call a "dream walker") of a paraplegic marine named Jake Sully.

Sully's avatar is a cyborg of sorts, a biologically engineered physical being. Hence, Sully's avatar is more akin to the cyborgs in Bruce Willis' film Surrogates than to digital avatars, which exist on a digital plane of existence. Sully and his avatar are two bodies in the same "analogue" world, whereas a digital avatar inhabits a world distinct from its user's. This small point has, I think, a large effect on our intuitions.

Sully's cyborg avatar is clearly just an extension of his body, and thus it is Sully himself who is adopted by the Na'vi people and who (spoiler alert!) marries the Na'vi princess. Even the princess recognizes this "extension" relationship, as evidenced by her care for his damaged human body.

What if, by contrast, the Na'vi and their world were portrayed in the movie as digital, a variant of Second Life? Would it be as obvious that Sully himself was adopted by the Na'vi people? Not to me, anyway. It seems that involvement in the digital world, separate and distinct from ours, brings about a deep confusion about the moral status of its creatures.

In sum, Cameron's film does not bring up the same confused intuitions on our moral status that Second Life does, but that isn't to say that there aren't other intriguing ethical questions to be drawn from the film.






12/2/09

The Darwinism of Tongues


About four years ago, Ostler's classic book Empires of the Word was re-published. The book is a popular edition of his years of reading and writing scholarly papers on historical linguistics in all the language families. At one point Ostler, an Oxford University professor while writing the book, briefly noted that at no point has anyone ever analyzed languages for their survival value. In other words, from an evolutionary standpoint perhaps not all languages are created equal.

Is the curious absence of scholarly inquiry a product of over-political-correctness? It could be. As Pinker recently mused, contemporary scientific research is beginning to brush shoulders with a new opponent. No longer are the opponents of free and open scientific progress clerics or bishops (not that these traditional enemies do not survive in pockets of the world); the new opponents are politicians, the general public, and unqualified scientists. For an example, one need only turn to the debate on global warming. Statistically, most scientists opposed to global-warming theory are meteorologists or geologists - specifically petroleum geologists (which makes you wonder whether it's really scientific truth they are interested in when their jobs depend on fossil-fuel consumption) - not climatologists, 97% of whom agree that humans are causing the climate, on average, to warm over the long term as a result of rising CO2 emissions. Hyper-PCness could be one explanation. Or maybe most linguists simply do not believe languages have different survival values. There is also the possibility that linguists have simply overlooked this low-lying fruit. But I don't think any of these are good explanations.

What I think is more likely is that it's nearly impossible to definitively produce evidence that a language will not change to adapt to its surroundings just as quickly as its surroundings change. For instance, languages that do not have distinct concepts of color might easily create separate categories for colors once the need presents itself. But whatever the reason for the silence, Ostler's idea that some languages are "better" than others is a compelling one. But how would one go about creating a criterion for "better" languages? Languages can quickly change or add words to match the needs of the individual. Grammar structure changes slowly but I doubt this would vindicate Ostler. If a language says "the red on the chair" instead of "the red chair", will the brevity of the latter example really contribute to a society's survival? As long as the language can manage to tell another gatherer that there is fruit on the tree outside the village, it doesn't matter which way the person goes about saying it.

So on a microscopic level, Ostler's speculation seems to be a dead end. But what about linguistic conceptions of social values? It is here that I believe we could begin to judge languages. A language that emphasizes sexual looseness could become the language of a community devastated by sexually transmitted diseases. No language study, then, could clearly separate the social values from the linguistic factors; linguistics would simply be a contributing force in a greater confluence of problems for a society. Ah, but now we are back to the same problem as before - there is no reason to suggest that the language is anything but the arbitrary tool, the effect and not the cause. Just as a community can easily invent new words to suit a given situation, so too will it modify semiotic values to match its cultural mores. Professor Ostler's book is a fantastic one, but here I think he is without scientific (linguistic or otherwise) ground to stand on.

Ten Elshof on Self-Deception

I'm wrapping up a book review of Gregg Ten Elshof's book I Told Me So. Gregg is the chair of the philosophy department at Biola and a fellow student of Dallas Willard. Like Dallas, he is using his talent and training as a philosopher to provide some insight into the Christian life.

Anyway, here's the way I like to demonstrate the likelihood that YOU are self-deceived. Try this experiment: write down the three character traits that best represent you. Now think of someone in your life who brings you displeasure, and write down three qualities that represent him or her. Would it surprise you to learn that the qualities you attributed to your rival, which were likely negative, are qualities that in reality best represent you, and that your self-attributed qualities, which were likely positive, are in actuality better attributed to your rival?

If this experiment worked on you (it devastates me), things get worse. It turns out, as Gregg argues, that even our belief attributions are likely self-deceived, like the "belief in the saving power of Christ" that Christians like to attribute to themselves. The communal and emotional atmospheres of evangelistic rallies provide the perfect conditions for self-deception, and this is where many acquire their Christian beliefs. These beliefs are then maintained in similar environments found at many church services.

What to do? Gregg gives this question short shrift. I'm inclined to say that it is high time for Christians to stop isolating themselves from "non-believers" and to take opposing criticism seriously. That's at least a step in the right direction.

11/20/09

Moral Responsibility of Involuntary Action

Christian Perring of Dowling College, who edits the helpful Metapsychology Online Reviews site, presented an interesting paper last week at Cambridge Hospital in Cambridge, MA, about the moral responsibility of addicts and others who may not have full control of their harmful actions.

The thrust of the argument was that addicts can be morally blameworthy for their harmful behaviors even if they were compelled by the addiction. The rationale: the addict behaves in a way that he knows will cause harm; the relationship is thereby damaged, and the person hurt feels blame toward the person who acted harmfully. The addict, or the mentally ill person, is still blameworthy because 1) he knew that the action would harm the other person and 2) he performed the action nonetheless.

The reaction to this conclusion was surprisingly receptive. However, as the issue was pushed, it seemed that there was more of a gradient of blameworthiness. For instance, someone who performs a malicious act purely for malice is very blameworthy. On the other end of the spectrum, an epileptic who happens to hit someone during a seizure is not blameworthy at all, because he had no intention of performing the action.