Friday, September 9, 2016

Thoughts on Sign Language Design

Previously: General Thoughts on Writing Signs and A System for Coding Handshapes

One of the problems with designing a constructed sign language is that so little is actually known about sign languages compared to oral languages. For many conlangers and projects (e.g., sign engelangs or loglangs), this isn't really a big deal, but it is a serious problem for the aspiring naturalistic con-signer subscribing to the diachronic or statistical naturalism schools.

I have, however, come across one account of the historical development of modern ASL from Old French Sign Language. While it is hard to say if the same trends evidenced here would generalize to all sign languages, they do seem pretty reasonable, and provide a good place for con-signers to start. Additionally, it turns out that many of these diachronic tendencies mesh rather well with the goal of designing a language with ease of writing in mind.

Unsurprisingly, despite the relative ease of visual iconicity in a visual language, actual iconicity seems to disappear pretty darn easily. But I, at least, find it difficult to come up with totally arbitrary signs for things - much more difficult than it is to make up spoken words - and the Diachronic Method is generally considered a good thing anyway, so knowing exactly how iconicity is eroded should allow a con-signer to start with making up iconic proto-signs, and then artificially evolving them into non-iconic "modern" signs.

The general trends in this account of ASL evolution can be summed up as follows:
  1. Signs that require interaction with the environment (like touching a table top) either disappear entirely, replaced by something else, or simplify to avoid the need for props. That seems pretty obvious.
  2. Signs that require the use of body parts other than the hands for lexical (as opposed to grammatical) content tend to simplify to eliminate non-manual components. E.g., facial expressions may indicate grammatical information like mood, but won't change the basic meaning of a word.
  3. Signs tend to move into more restricted spaces; specifically, around the face, and within the space around the body that is easily reached while still keeping the elbows close in. This is essentially a matter of improving ease of articulation.
  4. Signs that occur around the head and face tend to move to one side, while signs occurring in front of the torso tend to centralize. This makes sense for keeping the face in view, especially if facial expressions are grammatically significant.
  5. Two-handed signs around the head and face tend to become one-handed signs performed on just one side. In contrast, one-handed signs performed in front of the torso tend to become symmetrical two-handed signs.
  6. Asymmetrical two-handed signs tend to undergo assimilation in hand shape and motion, so that there is only one hand shape or motion specified for the whole sign, though not necessarily place or contact. This is a matter of increasing ease of articulation (reducing how much different stuff you have to do with each hand), as well as increased signalling redundancy.
  7. Signs that involve multiple sequential motions or points of contact "smooth out".
  8. There is an analog to "sound symbolism", where, if a large group of signs in a similar semantic domain happen to share a particular articulatory feature (similar shape, similar motion, etc.), that feature will be analogically spread to other signs in the same semantic domain.
And, of course, multiple of these can apply to a single proto-sign, such that it, for example, eliminates head motion in favor of hand motion, loses a hand, and smooths the resulting hand motion.
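These trends lend themselves to mechanical application. As a toy illustration, here is a minimal Python sketch of "artificially evolving" proto-signs by applying trends 5 and 6 as rewrite rules; the sign representation and the rule details are my own simplified assumptions, not a real model of ASL history.

```python
# A toy sketch of "artificially evolving" proto-signs with the trends above.
# The dictionary representation and both rules are hypothetical simplifications.

def symmetrize(sign):
    """Trend 5: one-handed signs in front of the torso gain a mirrored second hand."""
    if sign["place"] == "torso" and sign["hands"] == 1:
        sign = {**sign, "hands": 2, "symmetric": True}
    return sign

def one_hand_near_face(sign):
    """Trend 5: two-handed signs around the head and face drop to one hand."""
    if sign["place"] == "face" and sign["hands"] == 2:
        sign = {**sign, "hands": 1}
    return sign

proto = {"place": "torso", "hands": 1, "symmetric": False}
modern = symmetrize(one_hand_near_face(proto))
print(modern)  # {'place': 'torso', 'hands': 2, 'symmetric': True}
```

A fuller pipeline would chain all eight trends in order and run each proto-sign through the whole list, exactly as the paragraph above describes.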

Most of the time, all of those trends reduce iconicity and increase arbitrariness of signs, but iconicity increases in cases where it does not contradict those other principles. Thus, a lot of antonyms end up being dropped and replaced by reversed signs; e.g., you get morphological lexical negation by signing a word backwards, and temporal signs move to end up grouped along a common time-line in the signing space.

Symmetrization makes writing easier because you don't have to encode as much simultaneous stuff: even though two hands might be used, you don't have to write down the actions of each hand separately if they are doing the same thing. Reduction of the signing space also means you need fewer symbols to express a smaller range of variation in the place and motion parameters, and smoothing simplifies writing essentially by making words shorter, describable with a single type of motion.

Many two-handed ASL signs are still not entirely symmetric. Some, like the verb "to sign", are anti-symmetric, with circling hands offset by 180 degrees. One-handed signing is, however, a thing, and communication can still proceed successfully if only the dominant hand performs its half of the sign, while the other hand is occupied. (I imagine there is some degradation, like eating while talking orally, but I don't know enough about ASL to tell exactly how significant that effect is or how it qualitatively compares to oral impediments.) Thus, it appears that it would not be terribly difficult to make the second hand either completely redundant, or limited in its variations (such as symmetric vs. antisymmetric movement, and nothing else) to make two-handed signs extremely easy to write, and minimize information loss in one-handed signing.

Given the restriction of two-handed signs to particular places (i.e., not around the face), it might even make sense to encode the action of the second hand as part of the place. One could imagine, for example, a non-symmetric case of touching the second hand as a place specification (which would typically remain possible even if that hand is occupied), as well as symmetric second hand and anti-symmetric second hand.

I have no idea if native signers of ASL or any other sign language actually think of the second hand as constituting a Place, just like "at the chin" or "at the shoulder," rather than a separate articulation unto itself, but treating the secondary hand as a place does seem like a very useful way to think for a con-sign-lang. Not only does it significantly reduce the complexity required in a writing system, it also ends up smoothing out the apparent surface differences between near-face signs and neutral space signs; in each case, there is underlyingly only one lexical hand, with the second hand filling in when the face is absent to better specify Place information.
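As a concrete sketch of that idea, the second hand's behavior can be folded into the Place parameter of a sign record. All names and place values below are hypothetical illustrations, not drawn from any existing notation system:

```python
# A sketch of treating the second hand as a Place value, as suggested above.
# Every name here is a hypothetical illustration, not an established notation.
from enum import Enum

class Place(Enum):
    CHIN = "chin"
    SHOULDER = "shoulder"
    NEUTRAL = "neutral space"
    # The second hand folded into the Place parameter:
    SECOND_HAND_TOUCH = "touching second hand"
    SECOND_HAND_SYM = "second hand mirrors symmetrically"
    SECOND_HAND_ANTI = "second hand mirrors anti-symmetrically"

# A sign then specifies only one lexical hand, whatever the place:
sign = {"handshape": "1", "place": Place.SECOND_HAND_SYM, "motion": "circle"}
```

Under this scheme the writing system never needs a second simultaneous hand channel; one extra Place symbol per second-hand behavior covers it.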

Monday, September 5, 2016

General Thoughts on Writing Signs

In my last post, I began to develop an essentially featural writing system for an as-yet undeveloped sign language. Featural writing systems are extremely rare among natural oral languages, but every system for writing sign languages that I know of is featural in some way. So, why is this?

Let's examine some of the possible alternatives. The simplest way to write sign languages, for a certain value of "simple", would be to use logograms. Just as the logograms used to write, e.g., Mandarin, do not necessarily have any connection whatsoever to the way the words of the language are pronounced, logograms for a signed language need not have any systematic relation to how words are signed. Thus, the fact that the language's primary modality is signing becomes irrelevant, and a signed language can be just as "easy" to write as Chinese is.

However, while logograms would be perfectly good as a native writing system for communication between people who already know a sign language and the logographic writing system that goes with it, they are next to useless for documenting a language that nobody speaks yet, or for teaching a language to a non-native learner. For that, you need some additional system to describe how words are actually produced, whether they are spoken orally or signed manually.

Next, we might consider something like an alphabet or a syllabary. (si5s calls itself a "digibet".) In that case, we need to decide what level of abstraction in the sign language we want to assign to a symbol in the writing system. If we want linearity in the writing system to exactly match linearity in the primary language, as it does with an ideal alphabet, then we need one symbol for every combination of handshape, place, and motion, since those all occur simultaneously. Unfortunately, that would result in thousands of symbols, with most words being one or two symbols long, which is really no different from the logography option. So, we need to go smaller. Perhaps we can divide different aspects of a sign into categories like "consonants" and "vowels", or "onsets", "nuclei", and "codas". If we assign one symbol to each handshape, place, and motion... well, we have a lot of symbols, more than a typical alphabet and probably more than a typical syllabary, but far fewer than a logography. In exchange for that, we either have to pick an arbitrary order for the symbols in one "sign-syllable", or else pack them into syllable blocks like Hangul or relegate some of them to diacritic status, and get something like an abugida. Stokoe notation is in that last category. Syllable blocks seem like a pretty good choice for a native writing system, but that won't work for an easily-typable romanization. For that, we're stuck with the artificially linearized options, which is also the approach taken by systems like ASL-phabet.
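The symbol-count tradeoff above is easy to make concrete. Using purely hypothetical inventory sizes (the real numbers would depend on the language), the arithmetic looks like this:

```python
# Back-of-the-envelope combinatorics for the two options above.
# These inventory sizes are hypothetical placeholders, not ASL's real counts.
handshapes, places, motions = 30, 12, 10

# One symbol per simultaneous combination (the "ideal alphabet" option):
print(handshapes * places * motions)   # 3600 symbols -- logography-sized

# One symbol per individual handshape, place, and motion (the abugida-like option):
print(handshapes + places + motions)   # 52 symbols -- alphabet-sized
```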

For a sign language with an intentionally minimalized cheremic inventory, that level of descriptiveness would be quite sufficient. But, there aren't a whole lot of characters you can type easily on a standard English keyboard (and even fewer if you don't want the result to look like crap and be very confusing; parentheses should not be used for -emic value!) Thus, we need to go down to an even lower level of abstraction, and that means going at least partly featural.

Native sign writing systems have a different pressure on them for featuralism: signing and writing are both visual media, which makes possible a level of iconography unavailable to writing systems for oral languages. In the worst case, this leads to awkward, almost pictographic systems like long-hand SignWriting, which is only one step away from just drawing pictures of people signing. But even a more evolved, schematic, abstract system might as well hang on to featural elements for historical and pedagogical reasons.

A System for Coding Handshapes

Sign languages are cool, and conlangs are cool, but there is a serious dearth of constructed sign languages. Or at least, there is a dearth of accessible documentation on constructed sign languages, and for all practical purposes that's the same thing. The only one I know of off-hand is KNSL. Thus, I want to create one.

Part of the problem is that it's just so hard to write sign languages. I, for one, cannot work on a language without having a way to type it first. Not all conlangers work the same way, but even if you can create an unwritten language, the complexity of documenting it (via illustration or video recording) would make it much more difficult to tell other conlangers that you have done so. The advantages of being able to type the language on a standard English keyboard are such that, if I am going to work on a constructed sign language, developing a good romanization system is absolutely critical. If necessary, it is even worth bending the language itself in order to make it easier to write.

There are quite a few existing systems for writing sign, like SLIPA, but just as you don't write English in IPA, it seems important, in developing a new language, to come up with a writing system that is well adapted to the phonology/cherology of that specific language.

It occurred to me that binary finger counting makes use of a whole lot of interesting handshapes, and conveniently maps them onto numbers.* Diacritics or multigraphs can then be added to indicate things like whether fingers are relaxed or extended, or whether an unextended thumb is inside or outside any "down" fingers, which don't make any difference to the counting system.

So, I can write down basic handshapes just by using numbers from 0-31, or 0-15, depending on whether or not the thumb is included. There are reasons for and against that decision; including the thumb means the numbers would correspond directly to traditional finger-counting values, which is nice; but, it also results in a lot of potential diacritics / multigraphs not making sense with certain numbers, which has some aesthetic drawbacks. On the other hand, lots of potential diacritics wouldn't make sense with certain numbers anyway, so maybe that doesn't matter. On the gripping hand, only using 0-15 and relegating all thumb information to diacritics / multigraphs means I can get away with using single-digit hexadecimal numerals (0-F), which is really convenient.
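The finger-binary coding itself is trivial to mechanize. Here is a minimal Python sketch, using the bit assignments implied by the table below (index = 1, middle = 2, ring = 4, pinky = 8), with the thumb ignored:

```python
# A minimal sketch of the finger-binary handshape coding described above.
# Bit assignments are inferred from the ASL examples: index = 1, middle = 2,
# ring = 4, pinky = 8, giving single hex digits 0-F with the thumb ignored.

FINGER_BITS = {"index": 1, "middle": 2, "ring": 4, "pinky": 8}

def handshape_code(up_fingers):
    """Encode a set of raised fingers as a single hex digit (0-F)."""
    value = sum(FINGER_BITS[f] for f in up_fingers)
    return format(value, "X")

print(handshape_code({"index"}))                             # 1 (ASL "1")
print(handshape_code({"index", "middle", "ring"}))           # 7 (ASL "W")
print(handshape_code({"index", "middle", "ring", "pinky"}))  # F (ASL "5")
```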

This page describing an orthography for ASL provides a convenient list of ASL handshapes with pictures and names that we can use for examples. Using hexadecimal numbering for the finger positions, and ignoring the thumb, the basic ASL handshapes end up getting coded as follows:

1: 1
3: 3
4: F
5: F
8: D
A: 0
B: F
C: F
D: 1
E: 0
F: E
G: 1
I: 8
K: 3
L: 1
M: 0
N: 0
O: 0
R: 3
S: 0
T: 0
U: 3
V: 3
W: 7
X: 1
Y: 8

You'll notice that a lot of ASL signs end up coded the same way; e.g., A, M, N, S, and T all come out as 0 in finger-counting notation. Some of that is going to be eliminated when we add a way to indicate thumb positions; if we counted 0-V (32 symbols) instead of 0-F (16), including the thumb as a binary digit, the initial ambiguity would be much smaller. Some of that is expected, and will remain; it just means that ASL makes some cheremic distinctions that don't matter in this new system. That's fine, because this isn't for ASL; we're just using pictures of ASL as examples because they are convenient. However, si5s, another writing system for ASL, got me thinking of using diacritics to indicate additional handshape distinctions beyond just what the finger-counting notation can handle. Typing diacritics on numbers is difficult, but I can easily add multigraphs to provide more information about finger arrangement in addition to thumb positioning.

First off, there are thumb position diacritics. In 0-V notation, where the thumb is a binary digit, one thumb position ("extended") is already indicated by an odd number, so these would only be applicable to even numbers, where the thumb position is something else; in the 0-F notation adopted here, which excludes the thumb entirely, any handshape number can take one. For these, we've got:

p- thumb touching the tips (or 'p'oints) of the "up" fingers
d- thumb touching the tips of the "down" fingers (as in ASL 8, D, F, and O)
s- thumb held along the side of the hand (as in ASL A)
u- thumb under any "down" fingers, or along the palm (as in ASL 4)
b- thumb between any "down" fingers (as in ASL N, M, and T)
e- thumb extended to the side (as in ASL 3, 5, C, G, L, and Y)

The default is thumb on top of any "down" fingers, as in ASL 1, I, R, S, U, V, W, and X, or across the palm.
The hand position of ASL E is ambiguous between thumb under and thumb over: diacritic 'u' or the default, unmarked state.

Note that 'u' and 'b' are indistinguishable from the default for position F, since there aren't any "down" fingers. Position 'b' can be interpreted as "next to the down finger" in cases where there is only one finger down (positions 7, B, D, and E).

Next, the "up" fingers can be curled or not, and spread or not, indicated respectively by a 'c' and a 'v'. Position 'v' of course does not make sense for positions without two adjacent fingers up (0, 1, 2, 4, 5, 8, 9, and A: half of the total!), and 'c' doesn't make sense for 0.
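These applicability conditions can be checked mechanically. Here is a minimal Python sketch of the 'c' and 'v' checks, again assuming the bit layout implied by the table above (index = 1, middle = 2, ring = 4, pinky = 8):

```python
# A sketch of the 'c' / 'v' validity checks described above, assuming the
# bit layout index = 1, middle = 2, ring = 4, pinky = 8 from the code table.

def has_adjacent_up(code):
    """True if at least two physically adjacent fingers are up."""
    # Adjacent pairs: index+middle, middle+ring, ring+pinky.
    return any(code & pair == pair for pair in (0b0011, 0b0110, 0b1100))

def valid_multigraphs(code):
    """Finger-arrangement multigraphs that make sense for a 0-F handshape code."""
    valid = set()
    if code != 0:               # 'c' (curled) needs at least one "up" finger
        valid.add("c")
    if has_adjacent_up(code):   # 'v' (spread) needs two adjacent "up" fingers
        valid.add("v")
    return valid

# Exactly codes 0, 1, 2, 4, 5, 8, 9, and A lack adjacent up fingers:
print([format(c, "X") for c in range(16) if not has_adjacent_up(c)])
```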

This still does not capture all of the variation present in ASL signs, but it does capture a lot, and, as previously noted, the bits that are missed don't really matter since this is not supposed to be a system for coding ASL!

The ASL mapping list with multigraphs added looks like this:

1: 1
3: 3ve
4: Fv
5: Fve
8: Dd
A: 0s
B: Fu
C: Fce
D: 1d
E: -
F: Evd
G: 1e
I: 8
K: -
L: 1e
M: 0b
N: 0b
O: 0d or Fp
R: 3
S: 0
T: 0b
U: 3
V: 3v
W: 7v
X: 1c
Y: 8e

And we can code some additional handshapes from the "blended" list:

3C: 3vce
4C: Fvc
5C: Fvce
78: 9
AG: 1p
AL: 0e


The crossed fingers of the ASL R are not representable in this compositional system, but I like that handshape, so we can add an extra basic symbol X to the finger-counting 0-F, to which all of the thumb position multigraphs or diacritics can be added.

To complete a notation system for a full sign language, I'd need to add a way of encoding place, orientation, and two kinds of motion: gross motion, and fine motion, where fine motion is stuff from the wrist down. I'll address those in later posts, but this feels like a pretty darn good start which already provides hundreds of basic "syllable nuclei" to start building sign words from.

* Of course, other finger-counting systems (like chisanbop, perhaps) could also be used to come up with cheremic inventories and coding systems for them as well.