The Secret Recipe for Language Acquisition
How do humans acquire this unique ability called language?
Human communication is unique. Whilst other primates gesture and vocalise rather selfishly, to get others to do what they want, we humans communicate co-operatively. We love being social butterflies, we crave a good gossip, and we just can't wait to share titbits of our life stories with others.
Without language, we would be nowhere. Whether it's English, Arabic, Spanish, Urdu, Creole or Sign, a world without language is virtually impossible to imagine. We would have no empires, no kingdoms, no democracies, no schools. We would have no poetry, no Harry Potter books, no knock-knock jokes. Scary, right?!
So how do we begin to learn such an important thing at such a tender infant age? Is it enough to listen to and repeat "Mama", "Papa", "Apple", "Ball", "Cat"? That seems far too simplistic a way to learn something as beautifully complex, yet crucial, as language.
I never really questioned how we acquire language until my Cognitive Neuroscience degree at the University of Cambridge. I always took language for granted. It was innate, it was part of us. We just got to acquire language once we reached a certain age; after all, everyone I met had the gift of language. My neuroscience degree got me curious about something as inescapably common as language. I began to think that if someone speaks Mandarin or Greek to me, I can't "just pick it up." I need to sit down, figure out how the word is spelt and understand how it sounds phonetically. I also need to learn what it means, and putting the grammar together to form a sentence is a whole new level of difficult. Achieving fluency in a new language is hard, so how do little humans begin to do it when they've literally just hatched? They know the meaning of a grand total of nothing, and they don't even have the sense yet to follow commands. So, what are the real mechanisms at play, sophisticated enough to build a language yet so simple that even a toddler can use them?
The Secret Recipe for Language Acquisition
Ingredient 1: Motivation
It turns out the first secret ingredient is what we all need to kickstart our desire to learn anything: MOTIVATION.
Think about the littlest baby in your family, a cousin or niece or your own little toddler. Are they inquisitive? Do they smile at you? Do they follow you with their eyes around the room? Don't they absolutely love being the little Prince or Princess that they are? By 12 months of age, infants align their attention and attitudes with those of others in at least three ways:
1) they look to others' faces to check that they are sharing attention with them
2) they curiously follow others' focus of attention
3) they actively direct others to follow their own focus of attention: they pull mummy's hand or turn granny's head so that she looks at the same thing that has caught their attention.
But why does that cute little human love to get your attention? A study by Liszkowski investigated two opposing views regarding the motivations behind infants' pointing gestures: 1) infants point to share attention and interest in objects with a social partner, whether it be mum, dad, aunty, grandpa, or their older sister; 2) infants do not point to share attention but to gain attention for themselves and the positive "feel-good" emotions that come with it. The results showed that children were only satisfied when adults practised "joint attention": in this scenario the adult engaged both with the child and with the object they were pointing to. In another scenario, adults were told to respond only with positive emotions to the child without really acknowledging the object they were pointing to, and in a third scenario, which tested whether the infants wanted nothing from the adult and were pointing for themselves, the adult ignored both the infant and their points. In both of the latter scenarios, children were not happy. This shows that infants' pointing was an invitation to share attention and an expression of their desire to communicate.
In fact, Koenig and Echols showed that when adults deliberately mislabel objects, infants get distressed and try to correct them. This shows that already at such a minuscule age, infants are developing the concept that language is a set of linguistic rules to facilitate co-operative communication. It only works if all speakers of a particular language share an agreement that a certain set of abstract sounds can only refer to one thing; in other words, the sounds "K-A-T" can only mean the animal cat and not a book, and the sounds "B-O-O-K" can only mean book and not a cat. Can you imagine the disaster that would ensue if different people decided the same set of sounds meant different things?
So, infants have a motivation to learn language? That just explains WHY they want to acquire language, but it does not explain HOW they do it. How do they tease apart the different phonetics and syllables and intonations to learn and produce speech?
The baby needs to be able to decipher that the adult is interested in talking to them to perk up their motivation to communicate. Csibra developed a theory that humans have a specialised set of 'ostensive signals' which enable infants to recognise 'communicative intent.' In normal, non-scientific language, humans have a special set of actions which indicate to the baby that they are the special person we are talking to. Newborns only engage when they know communication is being directed towards them. I mean, we all do the same thing. We only talk to others when we know they are addressing us; you don't randomly get drawn into the conversation of tourists walking past you in Oxford Circus. You don't hijack the conversation of the couple standing in the KFC queue behind you. It's simply odd and you know it; the babies know it too.
This brings us to the next ingredients of our language recipe: the 'ostensive signals'. This is a fancy term for a simple, unambiguous set of actions that can be easily decoded by the baby, and the funny thing is, we all do this when talking to a baby without even realising it ourselves!
Ingredient 2: Eye Contact
It's almost rude not to make eye contact with a friend, teacher or whoever else you are addressing. It is such an important non-verbal form of communication. Imagine a lecturer not looking at his students, or a singer staring at his feet for the entire duration of his concert. The audience would switch off immediately. Eye contact not only engages the listener but also tells the one talking that they are being listened to. As a medical doctor, I have sat through OSCE communication skills workshops that keep highlighting eye contact, eye contact, eye contact! It shows you are a good communicator; patients feel listened to, and it significantly boosts the doctor-patient relationship.
But who knew eye contact played such a crucial role in language acquisition in your early years! A study by Farroni and colleagues found that, when babies can choose between photographs of faces looking directly at them or looking away, 3-day-old newborns prefer to look at the face that appears to make eye contact with them. This is a very robust effect, unusually strong among studies with neonates: not a single infant oriented more towards the face with averted eyes than towards the face with direct gaze. Three- to six-month-old infants smile less at a person after she has broken eye contact with them, even when she continues to respond appropriately to their behaviour. Five-month-olds continue to smile at people who avert their head while maintaining eye contact with the baby. In contrast, their smile diminishes when the adult moves her eyes only 5 degrees away, to one of the infant's ears. This suggests that what newborns are sensitive to is not simply the presence of eyes but, more specifically, the position of the pupils and irises within the eye.
Eye contact encourages face-to-face learning. It offers rich "speech reading" cues: babies look at our lip and jaw movements to work out which sounds come from which mouth shapes. For example, your lips form a round "O" when saying "Awww." Eye contact makes it easier for infants to learn to produce speech sounds themselves through imitation. Additionally, looking into the eyes of social partners can help infants understand their emotional state, their intentions, and their likely future actions. Infants use feedback from adults, on the basis of their gaze, to appraise current situations and determine how they should respond. Hence, eye contact is not only important for phonological and lexical development (i.e. learning how words sound and what they mean), but it is also important for social development, potentially teaching infants how to make appropriate emotional responses in a given context.
Ingredient 3: Infant-Directed Speech (IDS) or "Motherese"
We talk, we laugh, we joke; whether it be stand-up comedy, a concert or a news broadcast, speech is the most frequent channel of communication.
But have you noticed how you are absolutely incapable of talking to babies in a normal vocal tone? No matter what you do, your intonation or prosody changes to a sing-song, exaggerated "goochi goochi goo" pattern. "HEL-LO, HOW - ARE - YOU?" is said in a high pitch with broader pitch and amplitude variation. The speed of speech is much, much slower, with each word clearly defined and a pause after each one. This prosody is known as Infant-Directed Speech (IDS) or "motherese." You do this without even thinking about it! It's not just you; even a 5-year-old elder sister will talk to her new baby brother this way. These characteristics of IDS are universal, though culture-specific variations have also been found. Why are we verbal humans programmed to talk in this funny, poem-like manner to all pre-verbal infants?
Several functions have been attributed to this melodic baby talk. It's not just "cute"; it's a scientific phenomenon. These features help capture the baby's attention, help them grasp the emotional context of speech, and also relax the baby and help them regulate their own emotions. IDS is pivotal in language acquisition. The more repetitive, slower speech pattern with exaggerated pronunciations helps the infant to discriminate between speech sounds and detect boundaries between words. "Mama is going" could have many different word boundaries for someone who doesn't know English. It could be "Ma ma izgoing", "Mamaiz go ing", "Mama izgoing" and so on and so forth. When babies hear IDS speech multiple times, the brain starts to note statistical patterns: "go" is followed by "ing" far more reliably than "iz" is followed by "go", so the brain learns that the word boundary should fall between "iz" and "go".
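To make that statistical idea a little more concrete, here is a minimal sketch of my own (it is not taken from any of the studies mentioned in this article, and the function name and toy syllabified sentences are invented for illustration). It counts how often one syllable follows another across repeated utterances and estimates the transitional probability of each pair; within-word pairs such as "go" followed by "ing" come out with higher probability than across-word pairs such as "iz" followed by "go", and those weak links are plausible word boundaries.

```python
# A toy sketch of statistical word segmentation via transitional probabilities.
# Assumption: speech has already been broken into syllables; real infants work
# from the acoustic stream, so this is only an illustration of the principle.
from collections import Counter, defaultdict

def transitional_probabilities(utterances):
    pair_counts = defaultdict(Counter)  # counts of (current syllable -> next syllable)
    first_counts = Counter()            # how often each syllable starts a pair
    for syllables in utterances:
        for current, nxt in zip(syllables, syllables[1:]):
            pair_counts[current][nxt] += 1
            first_counts[current] += 1
    # P(next | current) = count(current, next) / count(current)
    return {
        (current, nxt): count / first_counts[current]
        for current, followers in pair_counts.items()
        for nxt, count in followers.items()
    }

# Hypothetical repeated exposure: "Mama is going", "Mama is sleeping", "Papa is going"
heard = [
    ["ma", "ma", "iz", "go", "ing"],
    ["ma", "ma", "iz", "sleep", "ing"],
    ["pa", "pa", "iz", "go", "ing"],
]
tp = transitional_probabilities(heard)
print(tp[("go", "ing")])  # 1.0  -> strong link, likely inside one word ("going")
print(tp[("iz", "go")])   # ~0.67 -> weaker link, a plausible word boundary
```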
A variety of experiments demonstrate that two-day-old newborns pay more attention to a source talking to them in IDS than to a source speaking in an adult-directed way. And when babies pay more attention, they may be more likely to notice the statistical patterns in speech. Enhanced attention may also help them remember these patterns better. Additionally, when adults communicate face-to-face using IDS, babies show increased activity in brain regions associated with processing auditory messages. Similar attempts using everyday adult speech had no such effect.
So next time you speak to your baby nephew or a toddler who turns up at the paediatric emergency department and you use baby talk, remember you are helping them learn language!
Ingredient 4: Contingent Responsivity
Talking to children and having natural, turn-taking conversations with them is absolutely critical for language development. When you tell a story or recount a life event on the phone to your friend, you seek assurances from time to time that they are still listening; a little "hmm" or "alright" or "omg, I cannot believe that happened to you" can go a long way in improving the quality of the conversation. You wouldn't be satisfied unless you knew the person you are talking to is listening and acknowledging what you are saying, and neither are infants. This idea of 'turn taking' and 'response seeking' is called contingent responsivity, which serves as an ostensive signal because it follows the basic human conversational pattern.
This "turn-taking" behaviour is seen in infants even in the pre-linguistic stage. When investigating sucking behaviour, we see that infants frequently pause soon after the onset of feeding for no obvious physiological reason. Mothers tend to respond to such pauses by jiggling the child or the bottle, which they never do while the baby is sucking. When asked, mothers say that they do this to encourage the infant to suck more, but the amount of milk intake is not affected by this maternal behaviour. In fact, sucking returns only after the mother stops jiggling, and if she jiggles the child during sucking, it interrupts sucking. This pattern of interaction between mother and child resembles the turn-taking pattern of normal human conversation: sucking and jiggling inhibit each other, while finishing either of these behaviours induces a response from the partner. You can even call this type of synchronisation of behaviour between mother and infant a 'proto-conversation'. Furthermore, between the ages of 2 and 6 months, mothers and babies frequently engage in face-to-face play, where they call out, smile at, gesture to and touch each other, and these acts are carefully coordinated in more than one dimension. The temporal pattern of this coordination again resembles conversational turn taking and can also be dubbed a proto-conversation.
The pivotal role of social interaction in language learning was demonstrated by an ingenious study by Kuhl and colleagues. They got Mandarin-speaking adults to play with and entertain 9-month-old American infants with books and toys. They then tested the infants' perception of Mandarin phonetic contrasts. A second control group of 9-month-olds experienced identical play sessions, but the language used was English. The results showed that babies who had Mandarin play sessions were significantly better at discriminating Mandarin speech sounds. They then repeated this study using videotapes. A new group of infants received the same amount and quality of Mandarin from a videotape. The toys and books were visible on the screen and the adults appeared to be looking directly at the infant, as the films were taken from the infant's perspective. The infants were very interested in the videos, touching the screen and attending avidly to the entertaining adults. However, when their sensitivity to critical phonetic contrasts was tested, it was as if they had never listened to the Mandarin videotapes: their ability was indistinguishable from that of the control infants tested in the first experiment!
Watching and listening to films of foreign language material did not result in any measurable language learning. Kuhl argues that live speakers provided many subtle social cues that facilitated language acquisition. For example, the speakers' gaze tended to focus on the toys and pictures they were talking about, and the infants' gaze followed. The live presence of an adult engaging in contingent responsivity also provided interpersonal cues that attracted attention and possibly motivated learning. Simply being exposed to the raw sensory information from a videotape is apparently insufficient to trigger learning the sounds of speech, which shows the critical role of this "back-and-forth" play in language acquisition.
Ingredient 5: Infant’s own name
We all respond to our own names. Except for the time when I first moved to England in 2007 and stopped responding to mine, because I couldn't recognise how it sounded in the British accent. My schoolmates probably thought I was rude, but oh well; I have since adapted to responding to various combinations of pronunciations of my one name that I didn't think existed before 2007.
Anyway, I digress; back to infants. Their own name becomes an ostensive signal, and it is the earliest word infants recognise, at around 4.5 months of age. From about 6 months, infants spontaneously turn their head when their name is called, showing that they interpret this word as a vocative.
Final Words
Language learning is complex. I am certain there are many more mechanisms at play which are less well understood or less well studied. Children born to blind or deaf parents successfully acquire language despite not having the expected combination of ostensive signals, which makes one wonder whether other things are at play when it comes to language acquisition. Deaf babies do not show typical babbling: vocalisation emerges late and rapidly falls away. Although in recent years the developmental trajectories of deaf and hard-of-hearing (DHH) children with hearing parents have improved with early identification of hearing impairment and appropriate interventions, such as rich exposure to a sign language environment, the majority of these children are still delayed compared with hearing children. In contrast, DHH children born to deaf parents develop sign language in a similar way to how hearing children develop spoken language, as they are in a language-rich environment. This shows that higher-order language mechanisms in deaf children are intact, and that it is the insensitivity to acoustic communicative signals that impairs development in those born to hearing parents.
My sister works as a doctor in a paediatric emergency department, and she told me a story of a 2-year-old whose parents brought her in because she repeatedly kept saying "I'm tired." The team checked her over and couldn't find anything wrong with her. She was also super active, running and playing around the waiting room whilst saying "I'm tired." They then considered the possibility that she may simply have overheard someone saying it and now uses it as she pleases: a new phrase in her vocabulary. There is a developmental phenomenon called "fast mapping" around 2 years of age, when a child's vocabulary expands exponentially. In this phase most children are acquiring 10 or more new words a day, and often they have heard or even overheard the novel word only once before reproducing it themselves. Overhearing does not involve any expression of the so-called communicative intentions by the adult, and yet these children are learning words at a fast pace, suggesting other factors are also at play in language acquisition.
Nevertheless, our secret ingredients for language acquisition (motivation, eye contact, IDS, contingent responsivity and the infant's own name) are undeniably important. However, there may be differences in how we handle these ingredients, and in the utensils and vessels we prepare them with, which cause the individual variations in the flavour of language. Quality of language exposure and other brain functions, such as tracking statistical co-occurrences in speech, are also required for phonological, lexical and grammatical development. Try making chicken tikka with stale, out-of-date meat, without the right pan to cook it in or the right knife to cut it: you wouldn't get far. You don't just need the ingredients; you also need quality ingredients and the tools (the brain's cognitive processes) to put it all together and finally get the final product: language!