One human language

Do children learn to talk just by copying adults, gradually getting better until the copying is perfect? Surprisingly to some, many now think the only possible answer is: no.

Of course, children copy what they hear. That is often painfully obvious. But this can’t be the whole story.

What we know about speech and language is finite. Without this finiteness, understandings would vary as widely as the outputs of other human skills, such as drawing, painting, and music. What we can do with this knowledge is infinite. This contrast between the finite knowledge and what can be done with it is sometimes called ‘discrete infinity’.

But if learning speech and language were just a matter of copying what we have heard, there would always be something we had yet to hear, and potentially to copy. And it would not make sense to contrast the grammaticality of “Something good” with the ungrammaticality of “Good something”.

It seems to have been Charles Darwin who first saw that children do not just copy what they hear, when he noted that his grandson was producing sounds which he could not plausibly have heard.

I believe that the most plausible explanation of discrete infinity is learning by Phase, as one aspect of What makes us human.

A uniqueness

If the development of speech and/or language is significantly delayed or disordered – and if it is, this is mostly obvious to professionals and non-professionals alike – there is a natural question: are any other similar capacities similarly affected? Given that speech and language are human-specific, are there any other such capacities? There are various capacities unevidenced in any other species. Humans can do practical experiments, write and play music, draw or paint a likeness, throw or kick a ball at a target or hit it with a stick or a bat, carry out successive operations with numbers, and more. Despite many attempts by circus masters and other entertainers, non-humans do not seem to be generally capable of any of these things. Some individual animals have learnt what may seem like the first steps towards some of these skills. This can then be put on show, to be admired or unfairly mocked. But non-human levels of performance in these skills would never be mistaken for human ones.

Some individual dogs, chimpanzees, bonobo apes, and gorillas have learnt to process one or more aspects of human language, as signs. To my mind the most remarkable of these was a chimpanzee called Washoe, who died in 2007. She had been taught over 200 signs in American Sign Language. She once saw one of her carers with a newborn baby. The next time she saw the carer, the carer did not have the baby, and Washoe asked after it. The carer decided to tell Washoe the truth – that the baby had in fact died. Washoe made the sign for tears. This may have been the only real exchange of grief and condolence that has ever taken place between a human and a non-human. But Washoe never showed any evidence in her signing of words doing what the little word that does in “I’m so sorry that your baby is dead.”

What makes speech and language unique is that for any language, of which English is just one example, native speakers can agree about what things mean, or don’t mean, or whether there is more than one meaning, and in ways that brook no argument.

Consider these two sentences:

• I wonder who she wants to see her.

• She wants to see her.

In the first sentence, on one reading, her refers to any female, and on another reading, to the same individual as she. But in the second sentence, her can only refer to a female other than she. On this sharp difference of reference there is no disagreement. How do speakers come to these judgements?

Or consider the effect (or lack of any effect) of further embeddings:

• I know she thinks I wonder who she wants to see her.

Irrespective of the number of embeddings, the reference of her is still free.

With a finite vocabulary, there is no limit to this embedding. There is thus an infinite number of possible sentences – the ‘discrete infinity’ noted above. The child learner cannot hear all of the possible exemplars. The only way this infinity can be grasped is by virtue of the sentences being generated on the fly.
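The arithmetic of this point can be sketched in a few lines of code. This is only an illustration of the counting argument, not a model of the grammar itself; the embedding frames and the core sentence are taken from the examples above.

```python
# Illustrative sketch: a finite stock of embedding frames, applied
# recursively, yields sentences of unbounded depth, so the set of
# possible sentences is infinite.

FRAMES = ["she thinks that {}", "I know that {}", "you said that {}"]
CORE = "she wants to see her"

def embed(sentence, depth):
    """Return all sentences reachable by applying exactly `depth` frames."""
    if depth == 0:
        return [sentence]
    return [frame.format(s)
            for s in embed(sentence, depth - 1)
            for frame in FRAMES]

# Three frames give 3**n distinct sentences at depth n: no upper bound.
for n in range(3):
    print(n, len(embed(CORE, n)))  # 0 1, 1 3, 2 9
```

However deep the embedding, the reference of her in the core clause stays free, while the number of candidate sentences grows without limit.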

Or take this word play (with a nod to Groucho Marx):

• I found an elephant in my pants.

Was the finder wearing the pants? Or was the elephant hiding in them? The ambiguity hinges on whether in my pants belongs to the same phrase as an elephant or attaches to the main structure of the sentence. The difference is thus in the structure. And the structure is defined over different sorts of phrase, built from two sorts of words: words like word, goodness, absence, and amortise, which have little to do with things or actions but bear on the real world; and words like a and the, which make sense only in relation to other words.
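The point that the ambiguity lives in the structure rather than in the words can be made concrete. In this sketch the two readings are represented as nested tuples, a deliberately crude stand-in for phrase structure; the bracketings are an assumption for illustration, not a claim about the correct syntactic analysis.

```python
# Two parses of the same string, as nested tuples.

# Reading 1: "in my pants" groups with "an elephant" -- the elephant
# is in the pants.
reading_1 = ("I", ("found", ("an elephant", "in my pants")))

# Reading 2: "in my pants" attaches higher, to the verb phrase -- the
# finder is wearing the pants.
reading_2 = ("I", (("found", "an elephant"), "in my pants"))

def words(tree):
    """Flatten a tree back to its surface string."""
    if isinstance(tree, str):
        return tree
    return " ".join(words(part) for part in tree)

# Identical surface strings, distinct structures: the ambiguity is in
# the structure, not in the words.
assert words(reading_1) == words(reading_2) == "I found an elephant in my pants"
assert reading_1 != reading_2
```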

There are limits to what things can mean. But this knowledge does not come from everyday experience, and it comes entirely without instruction. The limits are especially significant when they apply in almost opposite directions, as in the cases above.

Without agreement about what things mean, there would be just endless uncertainty. There would be no laws, no possible contracts, no plays on words. Social life would grind to a halt.

No natural language has been found without such phenomena.

A biological linguistics

By a line of explanation originally due to Noam Chomsky, the simplest explanation of discrete infinity is that there are underlying principles of language that we are born with – principles that are, in a species-specific way, encoded in the human genome. These principles are necessarily very simple and abstract.

The richness and complexity of human language comes from the way these principles interact.

Over the last fifty years, many linguists have come to accept Chomsky’s conclusion.

No non-human has ever shown evidence of anything like discrete infinity. To this extent, humans are both unique and exceptional in the animal kingdom. But for some reason, the exceptionality of human language is strongly resisted by some.

Some geneticists take this line because of the difficulty of identifying the corresponding sequence in the DNA. But to me, the onus is on those who deny any sort of genomic explanation of phenomena like those above to provide a more plausible explanation.

This is not to suggest that children are born knowing how to talk – a self-evidently absurd Aunt Sally sometimes peddled by those opposed to any idea of human exceptionality. The claim is just that children come to the task of learning language expecting, as I suggest in Sweet and sour, to find meaningful significances in a hierarchy of contrasts. Exactly how this happens has been at the centre of linguistic research for the past 60 years.

The first impetus towards a biological linguistics came in 1967 from Eric Lenneberg. There are tell-tales pointing in this direction from children with speech and language disorders. Around a third of such children have a close relative – a parent, sibling, uncle, aunt, or cousin – who either has or once had a similar disorder.

Across a broad range of ages and disorders there are significant, well attested, and broadly agreed co-morbidities.

It seems to me that there are various cases where experience is plainly insufficient for children to reliably progress to full ‘linguistic competence’ – being able to talk. One aspect of this is that all languages allow sentences to get longer and longer, with no point at which this becomes impossible. So we can say, “She’s lying.” Or “You know that she’s lying.” Or “I think that you know that she’s lying.” And so on indefinitely. Now the child may hear only one step – as in “You know that she’s lying.” But without needing to be told, learners somehow know that any number of steps is allowed; otherwise they would not understand sentences with more steps than they happen to have heard.
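The chain above can be sketched as a single finite rule applied repeatedly. This is an illustration of the one-rule-many-steps point only; the two embedding steps are taken from the examples in the paragraph, and the alternation between them is a simplifying assumption.

```python
# A finite rule ("You know that S", "I think that S") licenses
# indefinitely many steps, even though a child hears only a few.

STEPS = ["you know that {}", "I think that {}"]

def extend(sentence, n):
    """Apply the embedding steps cyclically, n times over."""
    for i in range(n):
        sentence = STEPS[i % len(STEPS)].format(sentence)
    return sentence

print(extend("she's lying", 1))
# you know that she's lying
print(extend("she's lying", 3))
# you know that I think that you know that she's lying
```

Nothing in the rule itself sets a maximum for n; the limit on real utterances is one of memory and patience, not of grammar.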