Yesterday one of the professors thanked the class for "behaving" during the previous class in the presence of "the observers." I guess she was undergoing an assessment. I wonder if people think I'm an observer. I guess I'm old enough to be an administrator, and confident enough that I look as though I belong. And I take notes, but not necessarily in proportion to the density of information in the lecture, because I'm taking notes on the room and the other students, too. From their point of view I could be evaluating them. That would explain why they don't frown at my presence. I hope I'm not freaking them out.
It turns out that one of the women in front of me in this lecture hall is attending a class just for fun, too. She is enrolled in the university in other classes, but came along with her friend to find out what this one was like. I tell her about my Mandarin experience and she laughs. Human contact. I like humans.
The professor starts the class by choosing a random country from a website. Apparently we learn features of a "language of the week" in this class. His first spin is the Northern Mariana Islands, but apparently we've already covered a language from the same family as the ones spoken there, and the same goes for the next spin, Guadeloupe. The following spin is the United Kingdom, and everyone laughs, which is too bad, because we could have learned about Manx or Welsh or Old English, but he spins again and we get Botswana. He promises that later in the week we will learn about a language spoken there.
The rest of the class is a really easy version of some of the other classes I've been in, especially the one with the trees. We learn that Aristotle's Problem is "How do children learn language?" and that we appear to form categories of words and follow templates to put them in sentences. Chomsky's Problem is phrasebuilding based on abstract categories of words. It's an elementary version of the same lesson on recursion that we had with the adjectives yesterday in structural analysis. Finding the same pieces in different classes is like doing a jigsaw puzzle and finding that two bits I have matched because they are blue connect to the two edge pieces I have put together.
I've swapped in a new class for the one that didn't exist. This one is called Historical Linguistics. The professor is new and young, and not super confident, but the subject matter is fascinating. They'd been talking about language change the class before, and the professor finishes up with some review on synecdoche (which I'd heard of before but didn't know was pronounced [sɪˈnɛkdəki], pretty much like the town in New York) and metonymy. Synecdoche is when a part is made to stand for a whole, or vice versa, such as when tea, a type of drink, becomes the name for an entire meal, or when the Old English word ceol, which meant ship, came forward into modern English as keel, just the bottom ridge of a ship. The word synecdoche comes from Greek syn (with) + ek (out) + dekhesthai (receive). That doesn't add up to "whole standing for part" to me, but that's what the prof said. Metonymy is when a related thing comes to stand for the thing itself, so "the bar" meaning the legal profession, or a word that used to mean "hip" coming to mean thigh. For some reason it is common with body parts, especially around the face, for words to drift a bit over time and through interlanguage borrowing in what they actually refer to.
Next we have language birth. This occurs in one of three ways: dialectal divergence, creolization and invention. Creolization is what happens when a pidgin, a small-vocabulary trade language, becomes the native language of a new generation of speakers. We learn the typical features of a pidgin and look at some examples, such as Tok Pisin, now an official language of Papua New Guinea, and Chinook Wawa, an extinct trade language of the Pacific Northwest. Creoles can be full independent languages, but if a creole is still in contact with the superstrate language, it may undergo decreolization, with speakers moving towards the standard form of the high-status language.
We ended with language death. This occurs when either there is a massive loss of speakers of the language, through epidemics or genocide, or the people who speak the language shift to speaking another language, usually due to an imbalance of power. We learn a number of ways to classify endangered languages and that's the end of that class. I'm going to come back to this one. It's my favourite so far.
My final class for the day is on speech phonetics, how we make noises for language. It's a third-year class with a couple of prerequisites I don't have, but then the first three quarters of the term would also be a prerequisite, wouldn't they? The professor is really interesting, and it's hard to say whether he's teaching a class or just enthusing wildly about the computer-modelled speech that is the focus of his research. There is an aspect of it that he perfected yesterday and now he's telling us about it. His course is a cross between anatomy and psychology. A student tries to sidetrack him by asking about the Britney Spears video where she sticks out her tongue while enunciating, "Why does she do that?" The professor just counters with,
"How does it make you feel?" It's nothing to do with speech production, seeing as she is lip-synching in the video.
Speech is partly observed by hearing and partly by sight. We're back to this theme from another direction. Some of the interesting points of the lecture included:
- People blind from birth actually use their lips differently to produce the same sounds, because they haven't had the opportunity to observe how others do it.
- Facial movements are important in speechreading by the deaf.
- If you see someone's mouth moving your brain acts differently, based on whether you believe they are speaking or not.
He showed the results of an experiment that found that head movement is highly correlated with speech pitch, and that eighty percent of vocal tract information is recoverable from facial motion. He recorded people speaking, collecting information about their head movements from tracking dots, like in motion capture for video games. He then removed all the pitch information from the recorded speech and analyzed the motions to reintroduce pitch. He had samples for Japanese and English. Both sounded completely natural. If I had paid for the class I would have put up my hand to get the citation for that paper, because it was near-unbelievable. (I don't speak Japanese, but sometimes a non-speaker of a language can hear differences better than a native speaker, because we're not distracted by the meaning. I remember a time I asked a group of Norwegian friends, "Does Siri come from a different part of Norway than the rest of you?" They were stunned, because apparently her accent was not very strong, but to me, hearing only the rhythms and sounds, there was a clear difference in the one voice.)
The McGurk Effect is that when you hear a sound, the visual signal influences how you perceive it. So given ambiguous audio with a video of a person pronouncing a bilabial consonant, we will "hear" a b or p. If the audio and video are not synced, we notice, but if the visual signal leads by a little bit we don't notice as easily as if it lags. The professor speculates that we are used to perceiving the slight lag in hearing speech due to the speed of sound.
You think "like we'd notice that" but we do notice things that we don't notice we notice. They did an experiment where subjects put their hands on the face of a speaking person, following the Tadoma method, a language teaching technique for the deaf-blind. The experimental subjects had normal vision and hearing and had never had any training in the technique, but it gave them a ten percent improvement in comprehension. I think they had their hearing partially but not completely obstructed.
It was a good class. I'll come to this one again.