Linguistics

Linguistics represents the attempt to study language scientifically. The science is pulled in two opposite directions: to describe every language and to do so in the simplest possible terms. The term linguistics only came into use during the 19th century, but the intellectual enterprise began at least three thousand years ago in the Middle East, India and Greece. Along with astronomy, it is one of the two oldest sciences.

From a modern perspective, there are three criteria facing any theory of language: it should

  • Describe any language precisely in the same terms which can be used to give an equally precise description of any other language, and do this in a way which is as clear about what is not part of the language as what is;
  • Be learnable in the way language demonstrably is, so that, irrespective of the target language, the overwhelming majority of children develop an adult-like competence by the age of around ten without any specialised instruction; this competence includes, not just the broad outlines, but fine points exemplified in only a very small number of words, down to the limit case of one;
  • Be such that it could have evolved in the human species.

The first two of these criteria were set out by Noam Chomsky in 1965. The third has become increasingly prominent in recent discussion.

History

A little over three thousand years ago an unknown scholar somewhere on the Eastern shore of the Mediterranean realised that there were speech sounds which could be represented by abstract signs, one for each sound: the basis of the modern alphabet. The system quickly spread across the Mediterranean. There have since been at least four significant improvements on the original idea.

A major scholar named Pāṇini, working somewhere in Northern India around two and a half thousand years ago, wrote a formal description of Sanskrit in 3,959 succinctly stated rules, distinguishing consonants and vowels, nouns and verbs. Modern linguistics still embodies key aspects of Pāṇini’s many insights.

Arguably, the first modern scholar of language was William Jones, who in 1786 set out the core idea behind what are now known as the Indo-European languages. Although the commonalities between Hindi and Western European languages had been noticed before, it was Jones who popularised the idea, defining criteria, still used today, by which commonalities could be identified: in the numeral system, in morphology, and in the most familiar words. The scope of investigation was broadened to languages outside the Indo-European group by Wilhelm von Humboldt, who played a key part in stimulating American interest in the native languages of North America, not to mention his own research into Basque and into Kavi, the language of Java, and other languages now characterised as Austronesian, spoken across the Pacific. Gradually the coverage broadened. Eventually some six or seven thousand languages were identified across the world. The study of these is now known as ‘typology’. It is now apparent that the Indo-European group of languages represents (by far) the largest single group in the world, having seemingly emerged around 6,000 years ago somewhere around the Black Sea. There is argument about whether this was to the north or the south.

The field changed fundamentally when Noam Chomsky became a student. His MA thesis in 1951 was a grammar of modern Hebrew involving thirty ordered rules. Crucially, these rules were designed to describe not only what happened in the language but also what did not happen. This degree of mathematical explicitness made Chomsky’s model different from any model that had been proposed before.

In 1957, Noam Chomsky proposed a grammar partitioned into two components: one generating ‘kernel sentences’ like “The BBC interviewed the politician”, the other accounting, by a sequence of ordered transformational steps, for the very complex ways in which sentences can be ‘transformed’, as in “Who might not have been being interviewed by the BBC?” Each component was characterised by a particular sort of rule. These rules defined the question (by who), the modality (by might), the negation (by not), the aspect (by have), and the passive (by been). Chomsky’s analysis was the first complete analysis of this core aspect of English, defining how these various aspects of the grammar interact.
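To make the idea of ordered transformational steps concrete, here is a minimal sketch in Python. It is an illustrative assumption from start to finish, not Chomsky’s 1957 rule system: the kernel is reduced to a subject, verb and object, each optional step simply contributes material in a fixed slot, and the morphology is hard-coded. But it does show how one derivation yields the kernel sentence and another, with every step switched on, yields the complex question quoted above.

  # A toy sketch, not Chomsky's 1957 grammar: a "kernel" (subject, verb, object)
  # plus optional ordered steps for modality, negation, aspect, the passive and
  # the question.  The function name and the hard-coded morphology are assumptions.
  def derive(subject, verb_stem, obj,
             question=False, modal=None, negated=False,
             perfect=False, passive=False):
      """Assemble a surface string from a kernel plus optional transformations."""
      aux = []
      if modal:
          aux.append(modal)             # modality, e.g. "might"
      if negated:
          aux.append("not")             # negation follows the modal
      if perfect:
          aux.append("have")            # (perfect) aspect
      if passive:
          aux += ["been", "being"]      # passive and progressive auxiliaries
          core = verb_stem + "ed by " + subject   # demote the subject to a by-phrase
          subject = obj                           # promote the object
      else:
          core = verb_stem + "ed " + obj
      if question:
          words = ["Who"] + aux + [core]          # question the subject and front "Who"
      else:
          words = [subject] + aux + [core]
      # Morphology is hard-coded, so only the two calls below are guaranteed
      # to come out as well-formed English.
      return " ".join(words) + ("?" if question else ".")

  print(derive("the BBC", "interview", "the politician"))
  # -> the BBC interviewed the politician.
  print(derive("the BBC", "interview", "the politician",
               question=True, modal="might", negated=True,
               perfect=True, passive=True))
  # -> Who might not have been being interviewed by the BBC?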

In 1965, Chomsky proposed that a grammar of this sort was learnable by a ‘Language Acquisition Device’ which compared systems of rules, favouring the simplest. But in 1967 E. Mark Gold showed that the class of languages assumed by Chomsky in 1965 was formally unlearnable. This forced a rethink of the defining architecture, and the rethinking continues to this day.

In 1970 Chomsky made the form of word entries more significant and limited the scope of the transformational component. This reduced the problematic partitioning and took the first step towards what is now known as ‘X-bar theory’ by generalising across sets of environments which had previously seemed entirely separate from one another, mainly those of nouns and verbs. This framework has since been greatly developed.

In the 1980s, particularly from work by Chomsky (1981) and Hagit Borer (1984), many linguists came to think of language learning in terms of choices between small sets of values with respect to particular variables, for instance whether or not the equivalent of I is routinely left unpronounced, with the equivalent of “I love you” said without the I. This is the case in Greek, Italian, Spanish and numerous other unrelated languages. According to whether this happens or not, the language learner is thought to do the equivalent of throwing a mental switch one way or the other. Children exposed to English on the one side or Italian on the other would throw the relevant switch in opposite ways. The points around which these choices or settings are made are known as ‘parameters’. On the evidence of work by Nina Hyams (1986), this particular switch seems to be thrown at around the age of two and a quarter.
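A minimal sketch of what such a switch might look like, under heavily simplified assumptions: the pronoun lists, the mini-corpora and the trigger (any utterance with no overt subject pronoun) are all illustrative, and this is not Hyams’s actual learning model.

  # A toy sketch of one binary parameter, the "null subject" (pro-drop) setting.
  # The pronoun lists, the trigger and the mini-corpora below are illustrative
  # assumptions, not Hyams's (1986) model.
  SUBJECT_PRONOUNS = {
      "english": {"i", "you", "he", "she", "it", "we", "they"},
      "italian": {"io", "tu", "lui", "lei", "noi", "voi", "loro"},
  }

  def null_subject_setting(language, utterances):
      """True (switch thrown one way) if some utterance has no overt subject pronoun."""
      pronouns = SUBJECT_PRONOUNS[language]
      return any(not (set(u.lower().split()) & pronouns) for u in utterances)

  # English input always carries its subject, so the switch stays off ...
  print(null_subject_setting("english", ["I love you", "she saw him"]))  # False
  # ... while Italian "ti amo" ("I love you") drops it, throwing the switch.
  print(null_subject_setting("italian", ["ti amo", "lei lo ha visto"]))  # True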

Over a long working life Chomsky has gradually refined his model, but without changing the concern for precision and explicitness. Increasingly he emphasises the notion he sometimes calls ‘Computation for human language’, by which all of the seemingly vastly different languages of the world share one universal commonality: they all build their structures by the same GENERATIVE procedures, which DERIVE these structures in opposite directions for speech (or signing) and understanding. This model requires one universal semantics.

By current biolinguistic thinking, the key issue is evolvability.

Novelty and vulnerability

The great novelty of Chomsky’s approach has been to insist that grammars should generate only those structures corresponding to a set of canons making them grammatical, and not generate any structures which fail to correspond to these canons. By this approach, “Something good” is grammatical, “Good something” is not. The grammar should generate the former and explicitly preclude the latter. In a more subtle way, in “He was seen to do something good”, the little word to has to precede do. In “I saw him do something good”, adding to before do is just not English. But “I want her to do something good” is fine. It does not seem plausible that this contrast is simply stipulated in English grammar with respect to the word see (and hear, which patterns the same way). A more plausible account is one based on general principles which separately or together have this effect. It remains an open question why these two verbs of perception should work this way – unlike want, know, force, and other verbs.
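To make the grammaticality point concrete, here is a minimal sketch of a grammar fragment, under deliberately toy assumptions, which generates “something good” (and “good idea”) but assigns no derivation at all to “good something”. The category labels and rules are illustrative only, not a serious analysis of English noun phrases.

  # A toy grammar fragment: "something good" and "good idea" are generated,
  # "good something" is not.  The rules and labels are illustrative assumptions,
  # not a serious analysis of English noun phrases.
  RULES = {
      "NP":   [["PRON", "ADJ"], ["ADJ", "NOUN"]],  # "something good" / "good idea"
      "PRON": [["something"]],
      "ADJ":  [["good"]],
      "NOUN": [["idea"]],
  }

  def generates(symbol, words):
      """True if `symbol` derives exactly the word sequence `words` under RULES."""
      if symbol not in RULES:                      # a terminal: must match one word
          return list(words) == [symbol]
      return any(splits(expansion, list(words)) for expansion in RULES[symbol])

  def splits(symbols, words):
      """True if `words` can be divided, in order, among `symbols`."""
      if not symbols:
          return not words
      return any(generates(symbols[0], words[:i]) and splits(symbols[1:], words[i:])
                 for i in range(len(words) + 1))

  print(generates("NP", ["something", "good"]))  # True  - grammatical
  print(generates("NP", ["good", "something"]))  # False - not generated, so precluded
  print(generates("NP", ["good", "idea"]))       # True

The point is not these particular rules but the shape of the claim: whatever the grammar does not generate is thereby predicted to be ungrammatical.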

The contrast between grammaticality and ungrammaticality has been central to Chomsky’s work for the past 70 years.

Ultimately, the contrast here rests on introspection, on writers (and readers) interrogating themselves. The slipperiness of the resulting data is obvious. But despite many attempts, it has proved hard to operationalise procedures for measuring judgements, across a sample of randomly selected experimental subjects, about subtle contrasts such as the one between “I saw him do something good” and “I want her to do something good”. One of the many problems is ensuring that the task is understood consistently and uniformly. For all its obvious faults, introspection seems to be the only workable criterion. (See Frederick Newmeyer (1983) for a thoughtful survey of the issues here.)

Two models

Although Chomsky’s model is now followed by a very large number of linguists around the world, perhaps most, he has vociferous critics who argue that the generative model unreasonably seeks to force the observation of linguistic variety into a predefined framework. Many of the native languages of North America and Australia (of those that survive) have what seem, from the perspective of European-type languages, like very elaborate procedures for squeezing whole sentences into what look like long words. Some of these are difficult to describe in generative terms. If this proves impossible, the generative model collapses. The stakes are high.

This difference came into the open in the 17th century, long before the notion of linguistics had emerged. The issue took the form of a bitter, personal dispute (about academic priority, of all things) between William Holder, who was, on a reasonable estimation, both the first generative linguist and the first speech and language pathologist, and John Wallis, who took Holder’s place treating the same child, having previously given the first listing of the sounds of English. Wallis’s listing had been strictly taxonomic, in contrast to Holder’s approach, which was explicitly derivational. (Holder used the term derivation in the modern sense.)

One of the many issues touched on by the difference between generativists and taxonomists is the simple matter of definition: What is a language?

One of the issues between the generative and taxonomic traditions concerns ‘correctness’. Generativists tend to pride themselves on the fact that the starting point of linguistics is observation, not passing judgement. But taxonomists often respond that the very strength of the distinction between what the grammar generates and what it does not generate is itself covertly prescriptive.

Outstanding issues

One empirical issue concerns the fact that some aspects of meaning are pronounced twice, while others go unpronounced. Of the former, there is word order in a language like English, and there is what is known as ‘case’. This duplication occurs in the simplest of sentences. In “She saw him”, it was she who saw and him who was seen. But these different roles are expressed not just by the forms of she as opposed to her and him as opposed to he, but also by the order of the words, the fact that she precedes saw and him follows it. Of the latter, there is what is known as ‘ellipsis’, as in the “I do” of the marriage vows. In some of the Celtic languages, such forms largely take the place of English yes and no. But what can go unpronounced by ellipsis varies across languages, so it has to be learnt.

There is another issue about how feisty, well-connected working-class women, completely unknown to one another, can all start pronouncing the same speech sounds in some new way, leading everyone else to follow them, seemingly as part of what is known as a ‘circular vowel shift’ under way in North America right now (see William Labov (2001)). How can they do this without noticing it themselves or being noticed by listeners? No one knows.

There was a circular vowel shift in England roughly between the time of Chaucer and Shakespeare, giving the so-called silent E in bate, bite, note, Pete, from what were previously two-syllable words with the A, I, O, and E sounding quite different from the way they sound today.

A theoretical issue on which there have been significant changes of mind is how to characterise contrasts in grammaticality or acceptability. It seems that there is a continuum between uninterpretable nonsense and what is understandable but not quite right. The answer is not simple, certain, or obvious.

What is the relation between what is known as Universal Grammar or UG and discourse? This may have a bearing on the relation between what goes unpronounced and what is pronounced twice.

Linguistics is not a monolith. It does not pretend to answer all possible questions. It is a work in progress.

In brief

Linguistics is cautious about what counts as fact. There is a tension between the three criteria above. They pull in different directions. They represent different priorities in research. But to my mind, the issue of evolvability is especially significant in relation to developmental disorders of all sorts, as outlined in Nunes (2002), subject to the corrections here.

With astronomy, experimentation is impossible, but it is possible to look into the future. With linguistics, experimentation is very difficult. There are limits to what can be learnt from psycholinguistics. But predicting what may happen next is just speculation.

In both linguistics and astronomy there is key data in both the recent and the ancient past. This bears directly on the three criteria above. Overcoming the tension between them is a key research objective.

References

See Adger (2003), Carnie (2021), and Radford (2016) for highly regarded introductions to current thinking in the framework assumed here, though not to the emerging notion of evolvability.
