This is only interesting if one doesn't realize that sign language is its own language, rather than simply signed English. Syntax and word order are entirely different, and much information is encoded in what one might call 'tone'—e.g., the energy or rhythm with which a sign is performed—that in English would be communicated with full words or suffixes.<p>These facts are patent to anybody with a passing acquaintance with American Sign Language (or any other sign language). The 'paradox' they're discussing here is tantamount to saying, 'Japanese words tend to be longer than English words. So how do the Japanese communicate as effectively as we do?'
One obvious point that somehow isn't mentioned in the article:<p>The speed of human language isn't limited by the ability of "the language" to encode a stream of information. It is limited by the human ability to create and understand the bits of language information.<p>And that limit is <i>much</i> lower than the human ability to take in other streams of information. Vision lets you take in megabytes of information in seconds. Speech processing involves far less because language processing is such a hard problem for the brain.<p>Luria's <i>The Working Brain</i> mentions that <i>most</i> brain damage degrades speech in some fashion, and that's because such a large portion of your brain works on the speech recognition problem when you are speaking or listening.<p>And processing language is a hard problem for people (and computers!) because a language statement involves answering (at least implicitly) global questions about your store of information - "are all men mortal", "do black swans exist", etc.
Calling ASL "signed English" is usually a good indication that the writer doesn't know much about the subject. Then they go on to say:<p>> "It turns out that the information content of handshapes is on average just 0.5 bits per handshape less than the theoretical maximum. By contrast, the information content per phoneme in spoken English is some 3 bits lower than the maximum."<p>Talking about the information content of speech symbols is likely to be entirely bunk, but I'm going to go read the full article and try to find out whether the summary is bad or if the research is really this confused.
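For what it's worth, the "bits below the theoretical maximum" framing is just Shannon entropy: for an inventory of N symbols, the theoretical maximum is log2(N) bits per symbol, reached only when every symbol is equally likely; skewed usage frequencies push the actual entropy below that. A minimal sketch of the arithmetic, using invented frequency counts (not the paper's data) for a hypothetical 45-handshape inventory:

```python
import math

def entropy_bits(counts):
    """Shannon entropy in bits for a list of symbol frequency counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

# Made-up counts: 30 common handshapes, 10 less common, 5 rare.
counts = [100] * 30 + [40] * 10 + [5] * 5

max_bits = math.log2(len(counts))   # theoretical maximum: uniform usage
actual = entropy_bits(counts)       # entropy of the observed distribution
gap = max_bits - actual             # "bits below the maximum"
print(f"max = {max_bits:.2f}, actual = {actual:.2f}, gap = {gap:.2f} bits/symbol")
```

A small gap (as claimed for handshapes) means the symbols are used nearly uniformly; a large gap (as claimed for English phonemes) means usage is heavily skewed toward a few frequent symbols.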
I too find information theory a fascinating subject. Several commenters are disappointed by the trivial nature of the study (which I assume none of us have read). That is a problem with trying to apply theory to real-world problems: you often have to make compromises in the quality of your interpretation of the problem in order to rigorously apply the tools of theory. I'm guessing the researchers picked up some understanding of ASL during their research (if they didn't have some to begin with), but chose to frame their study so the parameters were easier to quantify.
There's a very nice sign in ASL to describe this article. It uses two hands, and one of them is a fist with the index and little fingers extended, like the horns of a bull.<p>(1) I'm pretty sure it was Klima and Bellugi's book <i>The Signs of Language</i> that pointed out that arm muscles are slower than vocal muscles, and therefore ASL does things that spoken languages <i>can't</i> do in order to maintain the same communication bandwidth. (That book was written <i>ten years ago</i>.)<p>(2) The handshape is only a very small part of what conveys meaning in sign language; <i>one of the classic newbie mistakes in learning ASL</i> is to look at your conversation partner's hands, rather than his or her eyes. A great deal of <i>grammatical</i> information is communicated purely by facial expression; for example, raised eyebrows can indicate a yes/no question, lowered eyebrows can indicate a wh-question, and just looking in one direction or shifting the body slightly can substitute for a pronoun. There are also movements of the mouth that act as adverbial modifiers for a sign, to indicate things like "almost", "carelessly", "with difficulty", "distant in time or space", and a whole bunch of other stuff.<p>(3) With regard to the hand and arm movements themselves, the location and movement of the signs are as significant as the handshape. The signs for "father" and "mother" differ only in location. The signs for "paper" and "cheese" differ only in movement. Skimming the article, it appears that the authors didn't bother taking location and movement into account because linguists disagree on how to categorize those other features. But that's no excuse for <i>completely leaving them out</i> of your analysis. That's methodological laziness.<p>(4) Modulation of movement also has grammatical significance which in English would be conveyed by modal verbs or adverbs. For example, a change in how you make the sign for "to be red" turns it into "to become red". 
The Klima and Bellugi book above has more of this kind of thing.<p>(5) There's also the ASL classifier system, which provides a concise way of using the relative position and motion of hands to indicate the relative position and motion of objects in physical <i>or metaphorical</i> space. I once saw a lecture at which a woman very eloquently used this to describe herself advancing through all four years of her college education while a friend of hers kept repeating her "prep" year. (Gallaudet has a pre-freshman year for students who, thanks to the ocean of suck that is the American deaf-ed system, don't arrive with adequate college preparation.)<p>There have been <i>over thirty years</i> of serious linguistic research into ASL, and judging from the references, these jokers didn't do more than strip-mine it for a list of handshapes. AAARRRGGGHH!
Along similar lines as the article, I've always thought about how we can be more efficient when speaking. If you notice, there is a tendency to be more and more efficient on the web, with abbreviations/acronyms. Imagine a world where we say "lol" just as we type it (not hard to imagine). Now take it a few steps further: considering the vast range of sounds and tones our vocal cords can create, imagine if we keep simplifying spoken language to the point where it becomes like one of those Star Trek civilizations that speak in clicking sounds.<p>I believe the trend is inevitable.
What did they study? The accompanying photograph shows the alphabet, but I assume they aren't just considering using sign language to spell out words. Then they mention phonemes, but those letters aren't phonemes. Are there signs for phonemes? And ASL has signs for words.