Saturday, October 15, 2016

A Phonology Without Phonemic Consonants

There are languages that have been analyzed as lacking phonemic vowels, with all vowels being completely predictable from the consonant string. That doesn't mean that they aren't pronounced with vowels, merely that vowels serve no contrastive function.

So, how about a phonology that does the exact opposite: packs all of the contrast into underlying phonological vowels, with phonetic consonants being completely predictable from the vowel string?

Now, there are lots of ways to do this in an ad-hoc manner. Say, an /i/ and an /a/ always have a [t] inserted between them, while a /u/ and an /a/ get a [ʒ], just because. But I'm gonna look at something that is potentially naturalistic, where the choice of consonant phones and where they get inserted is a reasonable consequence of vocalic features. There are probably lots of ways to do that, too, but here's just one of them:

For simplicity, we'll start with just /i/, /a/, /u/ as the basic, plain vowels. You could use more, but these are sufficient to demonstrate the rules I have in mind, which I will describe in terms of generic vowel features such that one could add more basic vowels and already know exactly how they would behave. Each of these can come in plain, breathy, rhotic, and nasal varieties, or any combination thereof; i.e., one could have a breathy nasal rhotic vowel, with all three extra qualities at once. I'll assume that any vowel can have any combination of these qualities, and there are no phonotactic restrictions on the underlying vowel string (although certain combinations might require sticking in a syllable boundary to break them up). Changing either of those assumptions could introduce further structural interestingness in other similar phonologies.

All of these vowels can also be long or short, with syllables being maximally 3 morae; thus, a syllable can contain one short vowel, two short vowels, three short vowels, one long vowel, or one short vowel and one long vowel, where all of the vowels in a single syllable must share all of their voicing, rhotic, and nasal features. For consonant-induction purposes, a "long syllable" is any syllable containing a long vowel or a triphthong (long, long+short, short+long, and short+short+short, but not short+short). Ignoring length, this results in 8 possible versions of every basic vowel, which can be transcribed as V, hV, Vn, Vr, hVn, hVr, Vrn, and hVrn. That results in a total of 24 phonemes:

i /i/ a /a/ u /u/
hi /i̤/ ha /a̤/ hu /ṳ/
in /ĩ/ an /ã/ un /ũ/
ir /i˞/ ar /a˞/ ur /u˞/
hin /ĩ̤/ han /ã̤/ hun /ṳ̃/
hir /i̤˞/ har /a̤˞/ hur /ṳ˞/
irn /ĩ˞/ arn /ã˞/ urn /ũ˞/
hirn /ĩ̤˞/ harn /ã̤˞/ hurn /ṳ̃˞/

Or 48, if we count the long versions as separate phonemes.
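
The 3×8 combinatorics above are easy to verify mechanically. Here's a quick Python sketch that generates the full short-vowel inventory in the romanization used in the table (h before the vowel, r and n after it):

```python
from itertools import product

BASIC_VOWELS = ["i", "a", "u"]

def inventory():
    """Every basic vowel combined with every subset of {breathy, rhotic, nasal}."""
    phonemes = []
    for v in BASIC_VOWELS:
        for breathy, rhotic, nasal in product([False, True], repeat=3):
            phonemes.append(
                ("h" if breathy else "") + v +
                ("r" if rhotic else "") + ("n" if nasal else ""))
    return phonemes

print(len(inventory()))      # 24 short phonemes
print(len(inventory()) * 2)  # 48, counting long versions separately
```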

Tautosyllabic vowels can turn into glides: a short /i/ becomes [j], while a short /u/ turns into [w]. In long syllables, medial vowels are glided first, such that, e.g., /uia/ becomes [uja], not [wia]. Sequences of /iii/ become [ji:] and /uuu/ become [wu:]; sequences of /aaa/ must be broken into two syllables, either [a:.a] or [a.a:]. Since all vowels in a syllable must have matching features, we can romanize these by grouping the vowels together within one set of breathy/rhotic/nasal letters. E.g., huin /ṳ̃͡ĩ̤/ [w̤̃ĩ̤], or iar /i˞͡a˞/ [ja˞].
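
A toy sketch of those gliding rules, under my own simplifying assumptions (short plain vowels only, a syllable given as a list of vowel symbols; this is a fragment, not the whole system):

```python
GLIDE = {"i": "j", "u": "w"}  # /a/ never glides

def realize_glides(syllable):
    """Apply the gliding rules to one syllable's worth of short vowels."""
    vs = list(syllable)
    # Identical high-vowel triples: glide the first, merge the rest as long.
    if len(vs) == 3 and len(set(vs)) == 1 and vs[0] in GLIDE:
        return [GLIDE[vs[0]], vs[0] + "ː"]
    # Otherwise, in a triphthong the medial vowel glides first.
    if len(vs) == 3 and vs[1] in GLIDE:
        return [vs[0], GLIDE[vs[1]], vs[2]]
    # In a diphthong, a high first vowel glides.
    if len(vs) == 2 and vs[0] in GLIDE:
        return [GLIDE[vs[0]], vs[1]]
    return vs

print(realize_glides(["u", "i", "a"]))  # ['u', 'j', 'a'] -- not *[wia]
print(realize_glides(["i", "i", "i"]))  # ['j', 'iː']
```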

That provides us with two phonetic consonants so far: /j/ and /w/.

Other consonants are induced when transitioning from a vowel that has a certain quality to one that doesn't, or at syllable or morpheme boundaries.

Breathy-voiced vowels basically induce an onset [h] (hence the romanization convention) morpheme-initially or after a non-breathy vowel, but in certain situations this can be mutated into aspiration or lenition of a previous phonetic consonant instead (see below). (A reasonable phonotactic restriction on the vowel string might be that plain vowels can't follow breathy vowels in the same morpheme, just because I find it difficult to perceive the transition from breathy to plain voice. But I'll ignore that for now.)

Plain vowels induce plain coda stops. High front vowels (/i/) induce [t], high back vowels (/u/) induce [k], and low vowels (/a/) induce [ʔ]. These all become aspirated when followed by unstressed breathy voice, absorbing the [h] induced by the following vowel unless it crosses a morpheme boundary; if the breathy syllable is stressed, then [t] becomes [t͡s] and [k] becomes [x], again replacing the [h] induced by the following vowel unless it crosses a morpheme boundary, while [ʔ] is unaffected.

Now, we have 8 more phonetic consonants: [t], [tʰ], [t͡s], [k], [kʰ], [x], [ʔ] and [h].

Since plain vowels already lack other properties, there are none to lose, so these consonant sounds will not occur every time there is a transition to a different class of vowel. Instead, they will only occur at non-utterance-final morpheme boundaries if the syllable is short, and at any non-word-final syllable boundary if the syllable is long; additionally, the consonants will be geminated in long syllables, stealing duration from the vowel. Thus, something like <uuha> /u:a̤/ or <iiha> /i:a̤/ would be rendered phonetically as [uk:ʰa̤]/[ukxa̤] or [it:ʰa̤]/[itt͡sa̤] respectively, depending on stress placement, and assuming it's monomorphemic.
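
A sketch of the plain-stop induction at a boundary, treating the breathiness and stress of the following syllable as inputs (the function name and factoring are my own, not part of the phonology proper):

```python
CODA_STOP = {"i": "t", "u": "k", "a": "ʔ"}
STRESSED_MUTATION = {"t": "t͡s", "k": "x", "ʔ": "ʔ"}  # [ʔ] is unaffected

def boundary_stop(prev_vowel, next_breathy=False, next_stressed=False,
                  same_morpheme=True):
    """Coda stop induced by a plain vowel at a morpheme/syllable boundary."""
    stop = CODA_STOP[prev_vowel]
    if next_breathy and same_morpheme:
        # A stressed breathy syllable mutates the stop; an unstressed one
        # aspirates it, absorbing the [h] the following vowel would induce.
        return STRESSED_MUTATION[stop] if next_stressed else stop + "ʰ"
    return stop

print(boundary_stop("u", next_breathy=True, next_stressed=True))   # x
print(boundary_stop("i", next_breathy=True, next_stressed=False))  # tʰ
print(boundary_stop("a"))                                          # ʔ
```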

Nasal vowels induce nasal stops, with non-back vowels (/i/ and /a/) inducing a coda [m] at the end of a word and [n] in other positions, and back vowels (/u/) inducing [ŋ]. Nasalization also interacts with syllable length; like the induced plain stops, induced [n] and [ŋ] will steal length from a long nucleus and become geminated.

Successive non-breathy nasal vowels in different syllables induce an epenthetic [ʁ]. Why? Because that's what I discovered I naturally do when trying to pronounce them! I don't know exactly what the articulatory phonetic justification is, but there must be one! Thus, something like <unan> /ũã/ comes out as [ũʁãm], while monosyllabic long /ĩĩ/ (romanized <iin>) is distinguished from disyllabic /ĩ.ĩ/ (romanized <inin>) by the phonetic realizations [ĩ:n] and [ĩʁĩn], respectively. The results of a genuinely rhotic initial vowel (<irnin> /ĩ˞.ĩ/) look different still, as described below.
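
The nasal rules above can be sketched the same way; this toy renderer handles only words of short, non-breathy nasal vowels in separate syllables (the <unan> case), which is enough to show the epenthetic [ʁ] and the final coda nasal:

```python
NASALIZE = "\u0303"  # combining tilde

def nasal_coda(vowel, word_final):
    # Back vowels induce [ŋ]; non-back induce [m] word-finally, [n] elsewhere.
    return "ŋ" if vowel == "u" else ("m" if word_final else "n")

def render_nasal_word(vowels):
    """Render one nasal vowel per syllable, with epenthetic [ʁ] between
    syllables and a coda nasal on the last."""
    out = []
    for i, v in enumerate(vowels):
        out.append(v + NASALIZE)
        if i < len(vowels) - 1:
            out.append("ʁ")
    out.append(nasal_coda(vowels[-1], word_final=True))
    return "".join(out)

print(render_nasal_word(["u", "a"]))  # ũʁãm, as for <unan> above
```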

So far, that adds another 4 phonetic consonants ([m], [n], [ŋ], and [ʁ]), for a total of 14.

Rhotic vowels get a little complicated, due to interaction with other qualities. With combined rhotic+nasal vowels, the coda consonants are ordered R-N.
High front non-nasal vowels (/i/) induce [ɾ] word-medially, and [ɹ] word-finally or with nasalization. Low non-back non-nasal vowels (/a/) induce [ɹ], which is ambisyllabic word-medially unless it is produced by a non-breathy vowel followed by a breathy vowel (with an induced [h] onset pushing the ambisyllabic [ɹ] out of the way). Nasal low vowels induce [ʐ] or [ʒ] in free variation. Back vowels (/u/) induce [ʀ] or [ʁ] in free variation, which like [ɹ] is ambisyllabic unless followed by an induced onset [h] or paired with nasalization.

That's another 3-ish phonetic consonants, leaving us with a total inventory looking something like this:

Stops: [t], [tʰ], [k], [kʰ], [ʔ]
Fricatives/affricates: [h], [x], [ts], [ʐ]/[ʒ]
Nasals: [m], [n], [ŋ]
Rhotics: [ʀ]/[ʁ], [ɹ], [ɾ]
Glides: [w], [j]

which really doesn't look that bad! It's sorta-kinda Iroquoian-looking, if you squint, with extra rhotics. Several natural languages get along with fewer consonant phones than that. But, it can still be written mostly-unambiguously (save for specifying morpheme/syllable boundaries) purely as a string of vowels from a 24-character all-vowel alphabet; or perhaps a featural script with three basic vowels and diacritics for the various combinations of nasal, rhotic, and breathy features, and maybe length.

Of course, there are other possible re-analyses of words generated this way. The romanization scheme already embodies one: a three-vowel, three-consonant analysis, where the consonants and vowels have some fairly complex interactions generating a lot of allophones of each, and some particularly strange distributional restrictions (like, /h/ is the only consonant that can start a word!). A native speaker of such a language might, however, go for a four-consonant analysis, adding /t/ → [t], [tʰ], [k], [kʰ], [ʔ], [ts], [x]; or even break things down further, with no realization of the significance of the extremely limited distribution of these sounds. Speakers might also group things like /t/ → [t], [tʰ], [k], [kʰ], [ʔ], [ts]; /h/ → [h], [x]; /z/ → [ʐ], [ʒ]; /r/ → [ʀ], [ʁ], [ɹ], [ɾ]; based on perceptual similarity, thus confusing the disparate origins of [h] vs. [x] and masking the commonality of [ʐ] and [ʒ] with the rhotics.

If one were to start with something like this and then evolve it historically, one could easily get a more "normal"-looking inventory (e.g., maybe that tap [ɾ] ends up turning into an [l], and maybe [t͡s] simplifies to plain [s]) with a steadily more opaque relationship to the underlying vocalic features, despite still being regularly predictable from them.

If one were to do an intrafictional description of the language, such as might be written by native linguists, I would be somewhat inclined to go with one of these alternative analyses as the standard native conception, and then dive into the argument for why it should be re-analyzed as consisting purely of underlying vowels instead. Although, it would be a shame to miss out on the opportunity for a native writing system consisting of a 24-vowel alphabet.

Friday, September 9, 2016

Thoughts on Sign Language Design

Previously: General Thoughts on Writing Signs and A System for Coding Handshapes

One of the problems with designing a constructed sign language is that so little is actually known about sign languages compared to oral languages. For many conlangers and projects (e.g., sign engelangs or loglangs, etc.), this isn't really a big deal, but it is a serious problem for the aspiring naturalistic con-signer subscribing to the diachronic or statistical naturalism schools.

I have, however, come across one account of the historical development of modern ASL from Old French Sign Language. While it is hard to say if the same trends evidenced here would generalize to all sign languages, they do seem pretty reasonable, and provide a good place for con-signers to start. Additionally, it turns out that many of these diachronic tendencies mesh rather well with the goal of designing a language with ease of writing in mind.

Unsurprisingly, despite the relative ease of visual iconicity in a visual language, actual iconicity seems to disappear pretty darn easily. But I, at least, find it difficult to come up with totally arbitrary signs for things - much more difficult than it is to make up spoken words - and the Diachronic Method is generally considered a good thing anyway, so knowing exactly how iconicity is eroded should allow a con-signer to start with making up iconic proto-signs, and then artificially evolving them into non-iconic "modern" signs.

The general trends in this account of ASL evolution can be summed up as follows:
  1. Signs that require interaction with the environment (like touching a table top) either disappear entirely, replaced by something else, or simplify to avoid the need for props. That seems pretty obvious.
  2. Signs that require the use of body parts other than the hands for lexical (as opposed to grammatical) content tend to simplify to eliminate non-manual components. E.g., facial expressions may indicate grammatical information like mood, but won't change the basic meaning of a word.
  3. Signs tend to move into more restricted spaces; specifically, around the face, and within the space around the body that is easily reached while still keeping the elbows close in. This is essentially a matter of improving ease of articulation.
  4. Signs that occur around the head and face tend to move to one side, while signs occurring in front of the torso tend to centralize. This makes sense for keeping the face in view, especially if facial expressions are grammatically significant.
  5. Two-handed signs around the head and face tend to become one-handed signs performed on just one side. In contrast, one-handed signs performed in front of the torso tend to become symmetrical two-handed signs.
  6. Asymmetrical two-handed signs tend to undergo assimilation in hand shape and motion, so that there is only one hand shape or motion specified for the whole sign, though not necessarily place or contact. This is a matter of increasing ease of articulation (reducing how much different stuff you have to do with each hand), as well as increased signalling redundancy.
  7. Signs that involve multiple sequential motions or points of contact "smooth out".
  8. There is an analog to "sound symbolism", where, if a large group of signs in a similar semantic domain happen to share a particular articulatory feature (similar shape, similar motion, etc.), that feature will be analogically spread to other signs in the same semantic domain.
And, of course, multiple of these can apply to a single proto-sign, such that it, for example, eliminates head motion in favor of hand motion, loses a hand, and smooths the resulting hand motion.

Most of the time, all of those trends reduce iconicity and increase arbitrariness of signs, but iconicity increases in cases where it does not contradict those other principles. Thus, a lot of antonyms end up being dropped and replaced by reverse-signs- e.g., you get morphological lexical negation by signing a word backwards, and temporal signs move to end up grouped along a common time-line in the signing space.

Symmetrization makes writing easier because you don't have to encode as much simultaneous stuff. Even though two hands might be used, you don't have to write down the actions of two simultaneous hands if they are doing the same thing. Reduction of the signing space also means you need fewer symbols to express a smaller range of variation in the place and motion parameters, and smoothing simplifies writing essentially by making words shorter, describable with a single type of motion.

Many two-handed ASL signs are still not entirely symmetric. Some, like the verb "to sign", are anti-symmetric, with circling hands offset by 180 degrees. One-handed signing is, however, a thing, and communication can still proceed successfully if only the dominant hand performs its half of the sign, while the other hand is occupied. (I imagine there is some degradation, like eating while talking orally, but I don't know enough about ASL to tell exactly how significant that effect is or how it qualitatively compares to oral impediments.) Thus, it appears that it would not be terribly difficult to make the second hand either completely redundant, or limited in its variations (such as symmetric vs. antisymmetric movement, and nothing else) to make two-handed signs extremely easy to write, and minimize information loss in one-handed signing.

Given the restriction of two-handed signs to particular places (i.e., not around the face), it might even make sense to encode the action of the second hand as part of the place. One could imagine, for example, a non-symmetric case of touching the second hand as a place specification (which would typically remain possible even if that hand is occupied), as well as symmetric second hand and anti-symmetric second hand.
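
One way to model that idea in a transcription tool is to fold the second hand's behavior into the Place parameter. This is just an illustrative sketch (the type names and place inventory here are invented, not an actual ASL analysis):

```python
from dataclasses import dataclass
from enum import Enum

class Place(Enum):
    CHIN = "chin"
    SHOULDER = "shoulder"
    NEUTRAL = "neutral space"
    # The second hand encoded as a Place rather than a second articulator:
    H2_CONTACT = "touching second hand"
    H2_SYMMETRIC = "second hand mirrors"
    H2_ANTISYMMETRIC = "second hand offset 180 degrees"

@dataclass
class Sign:
    handshape: str   # e.g., a hex handshape code
    place: Place
    motion: str

# A hypothetical two-handed sign then needs no separate second-hand channel:
sign = Sign(handshape="F", place=Place.H2_SYMMETRIC, motion="circle")
print(sign.place.value)  # second hand mirrors
```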

I have no idea if native signers of ASL or any other sign language actually think of the second hand as constituting a Place, just like "at the chin," or "at the shoulder," rather than a separate articulation unto itself, but treating the secondary hand as a place does seem like a very useful way to think for a con-sign-lang. Not only does it significantly reduce the complexity required in a writing system, it also ends up smoothing out the apparent surface differences between near-face signs and neutral space signs; in each case, there is underlyingly only one lexical hand, with the second hand filling in when the face is absent to better specify Place information.

Monday, September 5, 2016

General Thoughts on Writing Signs

In my last post, I began to develop an essentially featural writing system for an as-yet undeveloped sign language. Featural writing systems are extremely rare among natural oral languages, but every system for writing sign languages that I know of is featural in some way. So, why is this?

Let's examine some of the possible alternatives. The simplest way to write sign languages, for a certain value of "simple", would be to use logograms. Just as the logograms used to write, e.g., Mandarin, do not necessarily have any connection whatsoever to the way the words of the language are pronounced, logograms for a signed language need not have any systematic relation to how words are signed. Thus, the fact that the language's primary modality is signing becomes irrelevant, and a signed language can be just as "easy" to write as Chinese is.

However, while logograms would be perfectly good as a native writing system for communication between people who already know a sign language and the logographic writing system that goes with it, they are next to useless for documenting a language that nobody speaks yet, or for teaching a language to a non-native learner. For that, you need some additional system to describe how words are actually produced, whether they are spoken orally or signed manually.

Next, we might consider something like an alphabet or a syllabary. (si5s calls itself a "digibet".) In that case, we need to decide what level of abstraction in the sign language we want to assign to a symbol in the writing system. If we want linearity in the writing system to exactly match linearity in the primary language, as it does with an ideal alphabet, then we need one symbol for every combination of handshape, place, and motion, since those all occur simultaneously. Unfortunately, that would result in thousands of symbols, with most words being one or two symbols long, which is really no different from the logography option. So, we need to go smaller. Perhaps we can divide different aspects of a sign into categories like "consonants" and "vowels", or "onsets", "nuclei", and "codas". If we assign one symbol to each handshape, place, and motion... well, we have a lot of symbols, more than a typical alphabet and probably more than a typical syllabary, but far fewer than a logography. In exchange for that, we either have to pick an arbitrary order for the symbols in one "sign-syllable", or else pack them into syllable blocks like Hangul or relegate some of them to diacritic status, and get something like an abugida. Stokoe notation is in that last category. Syllable blocks seem like a pretty good choice for a native writing system, but that won't work for an easily-typable romanization. For that, we're stuck with the artificially linearized options, which is also the approach taken by systems like ASL-phabet.

For a sign language with an intentionally minimized cheremic inventory, that level of descriptiveness would be quite sufficient. But, there aren't a whole lot of characters you can type easily on a standard English keyboard (and even fewer if you don't want the result to look like crap and be very confusing; parentheses should not be used for -emic value!) Thus, we need to go down to an even lower level of abstraction, and that means going at least partly featural.

Native sign writing systems have a different pressure on them for featuralism: signing and writing are both visual media, which makes possible a level of iconography unavailable to writing systems for oral languages. In the worst case, this leads to awkward, almost pictographic systems like long-hand SignWriting, which is only one step away from just drawing pictures of people signing. But even a more evolved, schematic, abstract system might as well hang on to featural elements for historical and pedagogical reasons.

A System for Coding Handshapes

Sign languages are cool, and conlangs are cool, but there is a serious dearth of constructed sign languages. Or at least, there is a dearth of accessible documentation on constructed sign languages, and for all practical purposes that's the same thing. The only one I know of off-hand is KNSL. Thus, I want to create one.

Part of the problem is that it's just so hard to write sign languages. I, for one, cannot work on a language without having a way to type it first. Not all conlangers work the same way, but even if you can create an unwritten language, the complexity of documenting it (via illustration or video recording) would make it much more difficult to tell other conlangers that you have done so. The advantages of being able to type the language on a standard English keyboard are such that, if I am going to work on a constructed sign language, developing a good romanization system is absolutely critical. If necessary, it is even worth bending the language itself in order to make it easier to write.

There are quite a few existing systems for writing sign, like SLIPA, but just as you don't write English in IPA, it seems important in developing a new language to come up with a writing system that is well adapted to the phonology/cherology of that specific language.

It occurred to me that binary finger counting makes use of a whole lot of interesting handshapes, and conveniently maps them onto numbers.* Diacritics or multigraphs can then be added to indicate things like whether fingers are relaxed or extended, or whether an unextended thumb is inside or outside any "down" fingers, which don't make any difference to the counting system.

So, I can write down basic handshapes just by using numbers from 0-31, or 0-15, depending on whether or not the thumb is included. There are reasons for and against that decision; including the thumb means the numbers would correspond directly to traditional finger-counting values, which is nice; but, it also results in a lot of potential diacritics / multigraphs not making sense with certain numbers, which is aesthetically unappealing. On the other hand, lots of potential diacritics wouldn't make sense with certain numbers anyway, so maybe that doesn't matter. On the gripping hand, only using 0-15 and relegating all thumb information to diacritics / multigraphs means I can get away with using single-digit hexadecimal numerals (0-F), which is really convenient.
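
Concretely, with the index finger as the least-significant bit (an ordering consistent with the codings below, e.g. ASL 1 → 1, W → 7, I → 8), the thumb-excluded encoding is just binary-to-hex:

```python
def handshape_code(index, middle, ring, pinky):
    """Encode which of the four fingers are 'up' as one hex digit (thumb excluded)."""
    value = (index << 0) | (middle << 1) | (ring << 2) | (pinky << 3)
    return format(value, "X")

print(handshape_code(1, 0, 0, 0))  # 1  (ASL '1': index up)
print(handshape_code(1, 1, 1, 0))  # 7  (ASL 'W')
print(handshape_code(0, 0, 0, 1))  # 8  (ASL 'I': pinky up)
print(handshape_code(1, 1, 1, 1))  # F  (all four fingers up)
```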

This page describing an orthography for ASL provides a convenient list of ASL handshapes with pictures and names that we can use for examples. Using hexadecimal numbering for the finger positions, and ignoring the thumb, the basic ASL handshapes end up getting coded as follows:

1: 1
3: 3
4: F
5: F
8: D
A: 0
B: F
C: F
D: 1
E: 0
F: E
G: 1
I: 8
K: 3
L: 1
M: 0
N: 0
O: 0
R: 3
S: 0
T: 0
U: 3
V: 3
W: 7
X: 1
Y: 8

You'll notice that a lot of ASL handshapes end up coded the same way; e.g., A, M, N, S, and T all come out as 0 in finger-counting notation. Some of that is going to be eliminated when we add a way to indicate thumb positions; if we counted 0-V (32 symbols) instead of 0-F (16), including the thumb as a binary digit, the initial ambiguity would be much smaller. Some of that is expected, and will remain; it just means that ASL makes some cheremic distinctions that don't matter in this new system. That's fine, because this isn't for ASL; we're just using pictures of ASL as examples because they are convenient. However, si5s, another writing system for ASL, got me thinking of using diacritics to indicate additional handshape distinctions beyond just what the finger-counting notation can handle. Typing diacritics on numbers is difficult, but I can easily add multigraphs to provide more information about finger arrangement in addition to thumb positioning.

First off, there are thumb position diacritics. Since one of the thumb positions is "extended", indicating an odd number, these are only applicable to even numbers, where the thumb position is something else (this would change if I went to 0-F notation instead, excluding the thumb). For these, we've got:

p- thumb touching the tips (or 'p'oints) of the "up" fingers
d- thumb touching the tips of the "down" fingers (as in ASL 8, D, F, and O)
s- thumb held along the side of the hand (as in ASL A)
u- thumb under any "down" fingers, or along the palm (as in ASL 4)
b- thumb between any "down" fingers (as in ASL N, M, and T)
e- thumb extended to the side (as in ASL 3, 5, C, G, L, and Y)

The default is thumb on top of any "down" fingers, as in ASL 1, I, R, S, U, V, W, and X, or across the palm.
The hand position of ASL E is ambiguous between thumb under and thumb over- diacritic 'u' or the default, unmarked state.

Note that 'u' and 'b' are indistinguishable from the default for position F, since there aren't any 'down' fingers. Diacritic 'b' can be interpreted as "next to the down finger" in cases where there is only one finger down (positions 7, B, D, and E).

Next, the "up" fingers can be curled or not, and spread or not, indicated respectively by a 'c' and a 'v'. Diacritic 'v' of course does not make sense for positions without two adjacent fingers up (0, 1, 2, 4, 5, 8, 9, and A: half of the total!), and 'c' doesn't make sense for 0.
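
That restriction on 'v' can be checked mechanically: a position admits 'v' exactly when some pair of neighboring finger bits are both set. A quick sketch confirming the "half of the total" claim:

```python
def v_applicable(code):
    # 'v' (spread) requires two adjacent 'up' fingers, i.e. two
    # neighboring bits both set in the 4-bit finger code.
    return any((code >> i) & 0b11 == 0b11 for i in range(3))

inapplicable = [format(c, "X") for c in range(16) if not v_applicable(c)]
print(inapplicable)  # ['0', '1', '2', '4', '5', '8', '9', 'A'] -- half of 16
```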

This still does not capture all of the variation present in ASL signs, but it does capture a lot, and, as previously noted, the bits that are missed don't really matter since this is not supposed to be a system for coding ASL!

The ASL mapping list with multigraphs added looks like this:

1: 1
3: 3ve
4: Fv
5: Fve
8: Dd
A: 0s
B: Fu
C: Fce
D: 1d
E: -
F: Evd
G: 1e
I: 8
K: -
L: 1e
M: 0b
N: 0b
O: 0d or Fp
R: 3
S: 0
T: 0b
U: 3
V: 3v
W: 7v
X: 1c
Y: 8e

And we can code some additional handshapes from the "blended" list:

3C: 3vce
4C: Fvc
5C: Fvce
78: 9
AG: 1p
AL: 0e


The crossed fingers of the ASL R are not representable in this compositional system, but I like that handshape, so we can add an extra basic symbol X to the finger-counting 0-F, to which all of the thumb position multigraphs or diacritics can be added.

To complete a notation system for a full sign language, I'd need to add a way of encoding place, orientation, and two kinds of motion: gross motion, and fine motion, where fine motion is stuff from the wrist down. I'll address those in later posts, but this feels like a pretty darn good start which already provides hundreds of basic "syllable nuclei" to start building sign words from.

* Of course, other finger-counting systems (like chisanbop, perhaps) could also be used to come up with cheremic inventories and coding systems for them as well.

Friday, June 3, 2016

Possession & State in Valaklwuuxa

When there are no nouns, how do you manage genitive constructions?

Somewhat surprisingly, the answer turns out to be "the same way you form resultatives".


Resultatives are derived predicates that indicate a final state resulting from an action. In English, we often indicate these with passive participles; thus, potatoes which have undergone boiling are "boiled potatoes"- "boiled" is the state that results from boiling.

In Valaklwuuxa, resultatives are derived by the prefix <ves->. Thus, we can have sentence pairs like "nbetsa tu txe Dxan-la." ~ "John sat down." vs. "vesnbetsa txe Dxan-la" ~ "John is sitting.", or "le-val" ~ "It's cooking" vs. "le-vesval" ~ "It is / has been cooked."

These kinds of derived predicates tend to be intransitive, but there are some transitive roots which produce transitive resultatives as well- things like "to touch" -> "to be in contact with something", or "to see" -> "to have been seen by someone".

But what happens if you try to apply that particular derivation to something which is not a process? Well...


Consider a root like <kusa> "child". If we conjugate that as "le-kusa", it means "He/she is a child"; we can also add an explicit subject, and say "kusa txe Dxon-la" ~ "John is a child." But if we add the prefix <ves->, we get "veskusa txe Dxon-la" ~ "John has a child"; and, in fact, this is a transitive predicate- the unstated object is John's child.

(In fact, possessive predicates actually tend to be ambi-transitive; if additional description of the object is needed or implied, one uses the transitive conjugations; but if not, the intransitive conjugations are also acceptable. This is fairly weird for Valaklwuuxa verbs, where transitivity tends to be quite explicit, but omitting explicit transitivizers or detransitivizers eliminates extra syllables in a situation where different conjugation paradigms usually eliminate ambiguity anyway.)

Now, if I want to say something a little more complicated, like "I see John's child", I can relativize that object, and get "xe-lwokx txe veskusasa txe Dxan-la" (where <lwokx> is the root for "to see something")- note that the inverse voice suffix <-sa> must be used to relativize the child, rather than the child's possessor (John).

The Semantic Connection

In theory, these two usages of <ves-> could be related in two general ways:

1. Accidental homophony- they are two separate prefixes that happen to sound the same, due to historical sound changes or something.
2. Two uses of the same morpheme- somehow, one semantic operation actually covers both cases.

Strange as it may seem, the correct answer is actually (2). This is, in fact, one and the same prefix in both cases, and is in fact modelled on a similar prefix <es-> in Lillooet Salish. This paper explains the morphosyntactic evidence for considering <es-> to be one morpheme in Salish, but for Valaklwuuxa it is sufficient to simply assert that, yes, this is one thing because that's how the conlang was defined, as long as we can provide a reasonably coherent definition for it. That's gonna take a little bit of formal semantics.

One tenuous semantic connection is to consider that possessing something is itself a state, so it makes sense to have a stative marker on possessed things. Similarly, we can conceive of things "having" states. Many languages in fact do this- in Spanish, for example, one is not hungry; rather, one has hunger, and the use of one verb, "to have", to express both possession and the perfect aspect in English is similarly suggestive that there may be a natural connection between these two concepts. Then, stative-on-a-thing = possession, and stative-on-an-action = resulting state. But, we can go deeper than this.

First, let's consider things that have a necessary relation to something else- e.g., a father is always the father of someone, a child is always someone's child, a husband always has a wife, etc. If we look at a root like <kwutanbets> "husband", it is intransitive and therefore takes one external argument- the person who is a husband. However, there is another, hidden, internal argument- the wife of whom he is the husband. What <ves-> does, then, is to pull out the internal argument and make it external. Thus, we can have sets of sentences like "kwutanbets txe Dxan-la" ~ "John is (someone's) husband" / "veskwutanbets txe nBale-la" ~ "Mary has a husband" / "Dxan txe veskwutanbetsa txe nBale-la" ~ "John is Mary's husband".

(And, of course, we can do the same thing with the inverse relation- "sendand txe nBale-la" ~ "Mary is (someone's) wife" / "nBale txe vesendandsa txe Dxan-la" ~ "Mary is John's wife")

This can be generalized so that we assume all "things" have an internal possessor argument, even if it's not an obvious, inherent one, like husband/wife or father/child.

Now, if we consider processes, the (or at least one) external argument is still an entity, a thing; as explained in a previous post, there is after all no difference in Valaklwuuxa between "I act" and "I am an actor". Processes, however, have a different internal argument. One could have a process-root which has the subject's possessor as an internal argument, and then <ves-> would obviously have the same function in every case. If, however, we assume that process-roots have an internal argument for the end-state of the process, then <ves-> still has the same semantic effect- promote an implicit internal argument to an explicit external argument- but produces resultatives for some roots and possessives for others.
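
My own toy formalization of that operation (the names and the tiny domain are invented for illustration): a root is a relation over an external argument and a hidden internal argument, and <ves-> promotes the internal argument to external position, existentially closing the old external argument when no object is supplied:

```python
DOMAIN = {"john", "mary", "rock"}
HUSBAND_OF = {("john", "mary")}  # (external, internal): John is Mary's husband

def kwutanbets(ext, internal=None):
    """Root 'husband': ext is a husband (of internal, if given; else of someone)."""
    if internal is None:
        return any((ext, y) in HUSBAND_OF for y in DOMAIN)
    return (ext, internal) in HUSBAND_OF

def ves(root):
    """<ves->: promote the hidden internal argument to external position."""
    def derived(new_ext, obj=None):
        if obj is not None:
            return root(obj, new_ext)
        # No object: existentially close the old external argument.
        return any(root(x, new_ext) for x in DOMAIN)
    return derived

veskwutanbets = ves(kwutanbets)
print(kwutanbets("john"))             # True:  John is (someone's) husband
print(veskwutanbets("mary"))          # True:  Mary has a husband
print(veskwutanbets("mary", "john"))  # True:  John is Mary's husband
print(veskwutanbets("john"))          # False: John does not have a husband
```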

Pronominal Possessives

Now, in Salish languages, this is not the only mechanism of indicating possession. In particular, there are pronominal possessive clitics which can be added to a root. In Valaklwuuxa, however, this is not strictly necessary; normal verb inflections already serve that purpose quite adequately. For example, if you wish to say "my rock" or "my house", you can simply conjugate the possessed form (in inverse voice, of course, lest you say "I have a rock" instead!) for first person: "veswonglqasaka" or "vesk'elansaka", respectively. Note that in English, possessive pronouns are tied up with determiners and definiteness; i.e., you can say "the rock", "a rock", or "my rock", but not *"a my rock"; to express that meaning, you have to resort to a circumlocution like "one of my rocks" or "a rock of mine". In Valaklwuuxa, however, you can mix and match however you like: "my rock" ~ "txe veswonglqasaka", "a rock of mine" ~ "ta veswonglqasaka".

Now, without specifying the thing, how would you say "It is mine!"? Basically, it comes out as "It's my thing!":

"xe-vestuka!" (I have the thing!) / "le-vestuksaka!" (It is my thing!)

where <tuk> is the root for "a thing".

(cf. "vestukend" ~ "I have a thing" / "I have something", using the non-transitive conjugation.)

The existing machinery is also sufficient for asking questions about possession, although there is some ambiguity for pronominal possessors. As described in my last post, one can simply replace an explicit possessor phrase with an interrogative to ask who owns something, although the lack of independent possessive pronouns means the structure of the answer is not exactly parallel to that of the question in this case:

"veswonglqa ta k'aku-la?" ~ "Whose rock is it?"
"le-veswonglqasaka" ~ "It is my rock." (cf. "veswonglqaka" ~ "I have a rock.")

If you want to ask something like "Is that John's rock?", you merely have to add the polar question particle after "John":

"Dxan k'a se veswonglqasa?"

If, however, you want to ask "Is that your rock?", we get some ambiguity:

"dwu-veswonglqask k'a se?" ~ "Is that your rock (or someone else's)?" / "Is that your rock (or another thing of yours)?"

This is, however, no worse than the ambiguity that exists in English polar questions, and it would be very rare for that to cause an actual practical problem in a real discourse context.

The Verb <benlqwo>

In addition to forming both predicative ("I have") and attributive ("my") possessives with <ves->, Valaklwuuxa also has a root <benlqwo>, meaning "to have" or "to carry". In situations where <ves-> is unsuitable (e.g., because the derived form would invalidate a serial construction), <benlqwo> can be used for predicative possession. In cases where a form in <ves-> would be feasible, however, <benlqwo> carries a specific connotation of "on one's person". Thus, one might say:

"xe-veshatqakend" ~ "I have/own a rock." (Why yes, there are two different roots that both mean "rock"- "wonglqa" and "hatqak".)


"xe-veshatqaka se" ~ "I have this rock." / "This is my rock."


"xe-benlqwond ta hatqak-la" ~ "I have a rock on me (in my hand or in a pocket)".

Thursday, June 2, 2016

Questions & Deixis in Valaklwuuxa

I have been translating the Universal Speed Curriculum into Valaklwuuxa. This is a very simple conversational script; it's not intended to teach you a lot of vocabulary, or particularly deep grammar principles- just to get you comfortable with speaking fluently in a target language and capable of asking simple questions and understanding simple answers, so that you can learn more of the target language in the target language.

As such, it starts out with sentences like "What is that?" / "That is a rock." / "Is that a rock?" Basically, you need to be able to ask content questions and polar questions, and name things by pointing (deixis), which we do in English with demonstrative pronouns. These should be easy things to handle in any language, and in fact Valaklwuuxa handles them just fine... but given how subjectively weird Valaklwuuxa is, just how it manages may be non-obvious to the typical Anglophone.

If you know a little bit about Valaklwuuxa already (because you've read my previous blog posts or something), you might reasonably think "well, there aren't any normal nouns, and you don't need pronouns except the subject clitics because the verb conjugation takes care of everything else, so maybe there are extra deictic and interrogative conjugations?" And indeed, one could imagine a language that worked that way- the conjugation table would be large and unwieldy, but that never stopped a natlang! But there's a problem: if "what" and "that" are just translated by verb inflections... what gets inflected? There is, after all, no word for "is"!


To resolve this, the interrogative pronouns "what" and "who" are actually translated in Valaklwuuxa by interrogative verbs, meaning roughly "to be what?" and "to be whom?" These are <k'asa> and <k'aku>, respectively. A third interrogative word, <k'axe>, is what we might be tempted to call a "pro-verb"; it most closely translates into English as "to do what?" In general, there is no morphosyntactic distinction in Valaklwuuxa between sentences like "I act" and "I am an actor"- these would both translate the same way. But, Valaklwuuxa distinguishes unergative verbs (with an agent-like subject) and unaccusative verbs (with a patient-like subject) in other areas of the grammar, and that is the internal distinction between <k'asa> and <k'axe>. Animate things, however, are always "things one can be" but never "things one can do", so there is only the one (unaccusative) root for "to be whom?"

Using any of these verbs as the predicate of a sentence allows asking questions like "What is it?" If you need to ask a question about an argument of some other verb (like, say "What did you eat?"), you just treat the interrogatives like any other Valaklwuuxa root, and stick them into a relativized argument phrase.

All of these interrogative roots also have corresponding answer words: <dasa> ("to be that"), <daxe> ("to do that"), and <daku> ("to be them"). These, however, are not the deictic (pointing) words that you would use in a question like "What is that?" They are more like regular pronouns (or pro-verbs)- they refer to some thing or action that has already been mentioned earlier in the discourse, which you do not wish to repeat. (And if you think that the schematicism in how answers and questions are regularly related to each other is suspiciously unnatural... well, Russian actually does exactly the same thing!)


Surprisingly, the actual demonstratives turned out to work pretty much like they do in English- the exact set of them is different, and they divide up space differently, but they pretty much just look like free pronouns. Lest you think that this is not weird enough for a language with such alien-to-Anglophones morphosyntax as Valaklwuuxa... well, that's actually how natural Salish languages handle them, too.

Internally, demonstratives are considered to be pretty much the same as articles- they are things that can head argument phrases, but they can't be predicates. They just happen to be intransitive versions of articles (determiners), which don't require a relative clause to follow.

The three generic, non-deictic articles, which always require a following phrase, are as follows:

<txe> "I know which one"
<ta> "I don't know/care which one"
<kwe> "the one who/which..."

The demonstratives, which can be used with or without an explicit argument, come in pairs distinguished by animacy:

<tqe>/<se> "this (near me)"
<tqel>/<sel> "that (near you)"
<lel>/<lel> "yon (near it)"

Note that there is no number distinction (e.g., "this" vs. "these"). Plural marking can be done by attaching the clitic <=ndek> to a determiner, but is not obligatory- it is unlikely to be used, except for emphasis, if number is indicated in some other way, such as by the verb conjugation or if a specific number is mentioned.

Demonstratives are also distinguished from articles in that they can also be prefixed with <we->, which is a "pointing" marker; it's not obligatory when you point at something, but can only be used if you are actually pointing at something, and can be approximated as "this/that one right here/there!"

There is also a single set (without any animacy distinction) of question/answer determiners: <k'adza>, for asking "which one?", and the answer <dadza>, used for (approximately) "the same one"/"the same thing".

Asking What Things Are

Now, we have enough to translate:

"What is that (near you)?" ~ "k'asa sel?"
"This (near me) is a rock." ~ "wonglqa se."
(Where <wonglqa> is the word for "to be a rock".)

Now you might think, why did we choose to have interrogative roots and deictic pronouns? Couldn't you just as easily do it the other way around? That would make content questions simpler, because you wouldn't have to construct a relative clause around every interrogative root. And the answer is "yes", some other language could indeed work just the same as Valaklwuuxa in every other respect, except for flipping that one decision the other way around. But choosing to do things in this way has one really nice consequence: the structure of content questions exactly parallels the structure of their answers. If the rock is "yonder", so that both questioner and answerer use the same demonstrative, you get:

"k'asa lel?"
"wonglqa lel."

Replace the question word with its answer, and everything else stays the same. Treating interrogatives as verbs does bring up another issue, though: when using them in argument positions, which determiner do you use? Typically, you'll use <ta>, the "I don't know which one" article (because if you did know which one, why did you ask?), but any determiner is valid, and they can be used to make much more specific kinds of questions, like:

"dwu-valsk sel k'asa?" ~ "You cooked that what?" / "What's that thing that you cooked?"

A Brief Note on Polar Questions

So, that's pretty much everything you need to know about content questions- but what about polar questions, with a yes/no answer?
The simplest way to form them is simply by intonation; the syntactic structure is identical to statements, but a rising-falling tone over a whole clause will turn it into a question. If you want to be more specific, though, there is an interrogative particle <k'a>, which is placed immediately after whatever is in doubt. Thus, we can ask:

"dwu-valsk k'a ta wonglqa-la?" ~ "Did you cook a rock?" (as opposed to doing something else with it)
"dwu valsk ta wonglqa k'a-la? ~ "Did you cook a rock?" (as opposed to some other item)

There is of course a corresponding answer word, <da>, used to confirm the thing in doubt:

"xe-valka ta wonglqa da-la." ~ "Yes, I really did cook a rock."

And an (irregular) negative answer particle:

"xe-valka ta wonglqa pe-la." ~ "No, I did not cook a rock." (but I may have cooked something else)


In addition to the interrogatives discussed above, there are two more pairs of answer/question roots:

<skwol> / <sdwol> "how many / so many"
<k'akwo> / <dakwo> "which (ordinal) one / that (ordinal) one"

That last one is a thing for which English has no single simple question word, but many languages (like Hindi) do. If you want to elicit a response like "I am the fifth child in my family.", you can imagine a corresponding question like "Which-th child are you?" In English, that's terribly awkward, and there is just no standard way of forming that kind of question, but in Valaklwuuxa, <k'akwo> is the standard translation for "which-th", or "what number".

Now, there's one more bit of interestingness. All of the basic question roots are intransitive, but there is a generic transitivizing suffix <-(e)t>. This usually has a causative meaning ("to make something happen", "to make someone do something"), which means Valaklwuuxa doesn't need to use special verbs for "to make" or "to force" nearly as often as a language like English does, but the precise meaning of a transitivized verb is lexically specified. In the case of <k'axet>/<daxet>, the transitive versions actually mean "to do what to something?"/"to do that to something". So if you want to ask "What did he make you do?", the translation actually does use a separate word for "make" after all.

Tuesday, May 17, 2016

Path & Manner in Valaklwuuxa

How languages describe motion is a particularly interesting subfield of verbal lexicology. In some languages, verbs of motion are even a distinct morphological class. Most of my blog posts on conlanging & linguistics have focused so far on WSL, which has no verbs, and thus no verbs of motion, but since I just started blogging about Valaklwuuxa, which is practically nothing but verbs, this topic suddenly seems relevant!

Conlangery #14 discusses verbs of motion in some depth, but the short version is that there are two major semantic components to motion: the manner in which one moves, and the path or direction along which one does it. Different languages differ in which of these components they encode in verbs, and which they relegate to adverbs, adpositional phrases, or other mechanisms. Germanic languages, for example, tend to encode manner, while Romance languages tend to encode path instead. Since English is a Germanic language with a ton of Romance borrowings, we've got a bit of both- manner verbs like "walk" (go slowly by foot), "run" (go fast by foot), "swim" (go by water), "drive" (go by directing a machine), etc., and path verbs like "ascend" (go up), "traverse" (go across), and "enter" (go into). Russian, in comparison, has verb roots that describe manner, which combine with prefixes for path.

So, how does Valaklwuuxa do it?

Valaklwuuxa has several roots that act a lot like manner verbs. For example:

ketqenda - go slow
petqentqe - go fast
nbatqe - move under one's own power (i.e., for humans, walk; but also applies to flying birds, swimming fish, vehicles, etc.)
lande - drive or ride

But, it also has roots like <tak> "to go along/beside", and <wole> "to move around a circuit", which are clearly path verbs.

And, on a moment's reflection, it seems like there must be both kinds of verbs; when verbs are your only open lexical category, and there's only one preposition, and not many basic adverbs... there's really no choice but to encode both path and manner information in verbs.

There are, however, still ways to determine whether the language is primarily path- or manner-oriented, aside from just looking at a list of lexical entries.

In the case of <ketqenda> and <petqentqe>, while these verbs can be, and are, used to describe motion, they can also be used with a more generic attributive sense. The attributes "fast" and "slow" generally imply motion, but literal displacement over a distance is not always entailed. Thus, one can say, e.g., <xe-petqentqenk!> to mean "I am going quickly!", "I'm hurrying", or just "I am fast!" Similarly, the imperative <(dwu-)petqentqex!> can mean "hurry up!", or "go faster!", but it can also be used in a metaphorical sense similar to "think fast!"

Most of the time when <ketqenda> or <petqentqe> are used with reference to literal motion, they appear in a serial-verb construction with some other verb, as in

dwu-petqentqe takex!
dwu=petqentqe tak=ex
"Go (along the path) quickly!"

Words like <nbatqe> and <lande>, despite being more prototypical motion-verbs, behave similarly. They are very rarely used as finite verbs by themselves. In fact, they will be used on their own, as the predicate of a clause, almost exclusively in contexts where they are implicitly serialized with another verb, which may not even be a motion verb- and which can radically change the meaning! For example, if someone asked

"Did you cook it?"

you might answer with

"Yes, I did it myself!"

which is an elliptical form of <xe-nbatqe valesk!> "I cooked it by myself!"

If you want to actually say that someone is walking somewhere, it would be very odd to just say, e.g., <nBale txe nbatqe-la> or <nbatqe txe nBale-la> for "Mary is walking", unless the fact that Mary is travelling has already been established in the discourse and you are just highlighting the fact that she is doing it by walking, as opposed to some other means.

From all this you might get the impression that perhaps "to walk" and "to drive" are just bad glosses, and <nbatqe> and <lande> aren't really motion verbs at all- but that's only half the story! For one thing, something like <xe-nbatqe takend>, assuming that "I" am an adult human, really does mean "I am walking along a path", not just "I am traversing a path by myself", perhaps by awkwardly rolling, or some other means. (If the subject were not an adult human, of course, the translation would change to reflect the "prototypical" mode of movement for whatever the subject is). The phrases <landekwe wole> and <wolekwe lande> are also used somewhat idiomatically to mean "to patrol a perimeter (in/on a vehicle)" or "to drive around a race track" (the first takes an object for the thing you are going around, while the second takes an object for the thing you are riding or driving); again, not "to circumnavigate with help".

Furthermore, while the bare roots are not used much by themselves, there are derived forms which are no longer "verbs of motion" themselves, but depend on the motive meaning of the basic manner verbs. Thus, we have words like <landegwel> (literally "to start travelling by vehicle"), which is used for things like "to set out", "to start the car", "to uncircle the wagons", etc.; and words like <vwenbatqe> (literally "to walk as much as possible"), which is used to mean "to have explored" (and the more derived form <vwenbatqweev> "to be out exploring")- not "to do everything yourself"!

Thus, even though Valaklwuuxa necessarily has motive roots for both path and manner, we can determine from usage patterns that it is in fact primarily a path-oriented language.

Saturday, May 14, 2016

Jelinek & Demers Were Wrong

But not about what everybody else thinks they're wrong about! No, I'm totally on board with the whole "Straits Salish has no nouns" thing.

In Predicates and Pronominal Arguments in Straits Salish, Eloise Jelinek and Richard A. Demers make the following assertion:
[F]or a language to lack a noun/verb contrast, it must have only [pronouns] in A-positions (i.e. argument positions). Otherwise, if each root heads its own clause, there would be an infinite regress in argument structure.
This is further footnoted as follows:
We thank an anonymous reviewer for raising the question of whether there might be a language just like Straits Salish, except for having DetPs in A-positions. In such a language, the predicates of which the argumental DetP would be based would in turn have their own DetP argument structure, and so on ad infinitum.
But this is an obvious false dichotomy! Yes, there cannot be a language otherwise like Straits Salish that only allows DetP arguments, because that would lead to infinite regress. But who says that just because you allow DetP arguments, you must eliminate your base case? You can still use pronominal arguments, or just dropped arguments, to halt the recursion!

So, of course, I had to conlang this. Not creating a language just like Straits Salish (that would be boring!), but something that has all of the same relevant bits of morphosyntax, but also allows nesting determiner phrases as clause arguments. This is a project I've been thinking about for a good long while, but as I just mentioned the language in another post, I figured this would be a good time to finally blog about it.

The name of the language is given in that other post as "Valaklusha", but that's an English exonym- it's a close approximation of how the name should be pronounced in normal English orthography. The proper romanization of the language's endonym (its name for itself) is "Valaklwuuxa". I do not promise to be at all consistent in which spelling I use at different times.

This name was inspired by Guugu Yimithirr, which essentially means "language that has yimi", where yimi is the Guugu Yimithirr word for "this". The meaning of yimi is totally irrelevant to the name of the language, but it's a distinctive word that that language has and its neighbors don't. The practice of naming based on some distinctive word occurs in many other languages, and that's what I did with Valaklwuuxa. That name essentially means "about saying lwuuxa", or "how one says lwuuxa", where lwuuxa is the word for "woman" (pronounced /ɬʷu:ʃa/; the whole name is /fa ɬak ɬʷu:ʃa/). If Valaklwuuxa had a bunch of neighbor languages, presumably their words for "woman" would be different. Extra-fictionally, this came about because I had a hard time deciding which option I liked best for the word for "woman" in the as-yet-unnamed language, so I picked one and told myself that all of its uncreated, theoretical close relative languages have all of the other versions that I didn't pick.

(Side note: While collecting my notes, I was reminded of Lila Sadkin's Tenata, which was presented at LCC2. Tenata and Valaklwuuxa have no real relation, except that Tenata is another language that erases the noun/verb distinction, but it's cool and you should check it out. Tenata is actually much more similar to WSL, which I've blogged about extensively.)

But enough of that! I can get back into interesting bits of lexical semantics later- on to the hard-hitting morphosyntax! I'm just gonna give a brief overview of the important parts here; if you want gory details, see the ever-evolving Google Doc documentation.

Like WSL, Valaklusha, despite being a proof-of-concept engelang, is not a minimal language- it has a lot of accidental complexities, because they are fun and make it more natural-looking. Many of these accidental complexities make it not much at all like a Salish language (which have their own collections of accidental complexities), except in this basic core structure:

The only open-class roots are basically verbs, divided into basic transitives and basic intransitives. There are proclitic pronouns used for subjects, when an explicit subject phrase is missing, and pronominal agreement suffixes for all other arguments. There is one preposition (maybe two, depending on how you analyze it), used to mark certain oblique arguments, and there are a few determiners. Additionally, there are a few basic adverbs, and "lexical suffixes", which are a very Salishan feature.

Verbs show polypersonal agreement with suffixes and hierarchical alignment with a 2p > 1p > 3/4p animacy hierarchy. Further gradations of animacy in non-discourse-participants are unnecessary, because the subject marking clitic (or lack thereof) tells you whether any explicit 3/4p argument is the subject or not. The 3rd person refers to proximate objects or people who are present or may be addressed, while 4th person refers to distal objects and people who are not present or who are otherwise excluded from a conversation. Verbs also take several valence-altering affixes, including passive, antipassive, inverse, and transitivizer suffixes, as well as applicative prefixes.
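
As a toy sketch of that ranking logic (the person labels, and the convention that the direct form is used when the agent is at least as high on the hierarchy as the patient, are my illustrative assumptions, not the actual Valaklwuuxa affix morphology):

```python
# Rank on the 2p > 1p > 3/4p animacy hierarchy; 3rd and 4th person tie.
RANK = {'2': 3, '1': 2, '3': 1, '4': 1}

def voice_marking(agent_person, patient_person):
    """Direct form when the agent is at least as high on the hierarchy as
    the patient; otherwise the inverse suffix is needed (sketch only)."""
    if RANK[agent_person] >= RANK[patient_person]:
        return 'direct'
    return 'inverse'
```

So a 2nd-person agent acting on a 3rd-person patient comes out direct, while a 3rd-person agent acting on a 2nd-person patient requires the inverse.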

Aside from the subject clitics, there are no explicit pronouns that can stand as arguments on their own; Valaklusha is essentially obligatorily pro-drop. All other arguments are determiner phrases.

Brief aside for phonology & romanization
I'm about to start quoting actual morphemes from Valaklusha, and some readers may want to know how to actually read those. If you don't care, you can skip this bit.

The phonology is very not Salishan, but it has some fun features I've wanted to play with for a while- notably, clicks. Stops come in a voiced and unvoiced series: the usual p/t/k, b/d/g distinctions, and then a voiced glottal stop and an unvoiced pharyngeal, <g'> and <k'>. Yes, there are apostrophes, but they actually mean something sensible! There are two basic clicks: a dental/alveolar click <tq>, and a lateral click <lq>.
All stops and clicks can, however, be prenasalized, indicated by an n-digraph, for a total of 12 stop and 4 click phonemes. Voiced prenasalized stops, however, are only realized as stops pre-vocalically; in other positions, they become homorganic nasal continuants. There are no phonemic simple nasals, so this creates no risk of homophony.

There is only a single series of fricatives: <l> (/ɬ/, a lateral fricative), <x> (/ʃ/), <s> (/s/), <h> (/θ/), and <v> (/f/). The fricatives and clicks default to unvoiced, but gain voicing between two voiced sounds (i.e., intervocalically, or between a vowel and a voiced stop), in a sort of Old-Englishy way. I chose <v> rather than <f> for that labiodental fricative, however, just because I think it looks nicer.

The phonemic fricative inventory is very front-heavy, so to balance things out /k/ and /g/ undergo allophonic variation and become fricatives in intervocalic positions, thus providing some back-of-the-mouth fricative phones (even though they're not distinctive phonemes), one phonemic voicing distinction between fricative phones, and one unvoiced intervocalic phone.

There are three basic vowels, which come in rounded and unrounded pairs: <a> (/a/ ~ /ɑ/), <e> (/ɛ/ ~ /i/), and <u> (/ɯ/); and <wo> (/ɔ/ ~ /o/), <we> (/ø/ ~ /y/), and <wu> (/u/). The digraph <wo> was chosen instead of <wa> for the low rounded vowel just because I think that the average reader is more likely to remember to pronounce that correctly. The rounded vowels induce rounding-assimilation on preceding consonants and preceding hiatus vowels. This can induce mutation of previously-final vowels during affixation. Rounded vowels can occur in hiatus with each other (in which case the <w> is only written once, as in <lwuuxa>), but unrounded vowels can only occur individually. Identical vowels in hiatus (again, as in <lwuuxa>) are half-long.

There are a bunch of other phonotactic rules, but you only need to know those to make up new words; this should suffice for reading.
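
For the curious, the intervocalic rules above can be sketched as a toy rule-applier (the IPA mappings and the vowel-only trigger environment are my simplifications- the real rules also count voiced stops as voicing triggers, and clicks voice as well):

```python
VOWELS = {'a', 'e', 'u', 'wo', 'we', 'wu'}
# Intervocalic voicing of the fricatives <l x s h v> (underlyingly unvoiced).
VOICED = {'l': 'ɮ', 'x': 'ʒ', 's': 'z', 'h': 'ð', 'v': 'v'}
# Intervocalic spirantization of /k/ and /g/; the [x] from /k/ stays unvoiced.
SPIRANT = {'k': 'x', 'g': 'ɣ'}

def allophones(segments):
    """Apply intervocalic voicing and spirantization to a pre-tokenized
    list of romanized segments (sketch; changed segments come out in IPA)."""
    out = []
    for i, seg in enumerate(segments):
        between_vowels = (0 < i < len(segments) - 1
                          and segments[i - 1] in VOWELS
                          and segments[i + 1] in VOWELS)
        if between_vowels and seg in VOICED:
            out.append(VOICED[seg])
        elif between_vowels and seg in SPIRANT:
            out.append(SPIRANT[seg])
        else:
            out.append(seg)
    return out
```

So `['a', 'k', 'a']` comes out with [x] in the middle, and `['a', 's', 'a']` with [z], while segments not flanked by vowels are untouched.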

Back to morphosyntax

Determiner phrases are formed from an article or other determiner (e.g., demonstrative) followed by a relative or complement clause. Only subjects can be relativized, and relative clauses never contain subject proclitics- any explicit arguments are always objects or obliques. Complement clauses (with one exception) are distinguished from relatives by the presence of a subject clitic. In sentences with explicit subjects, where the subject marking clitic would otherwise be absent, the clitic <he=> is used, which can result in ambiguity in sentences that are conjugated for a 4th person argument.

The quotative determiner <lak> is used to introduce direct quoted speech as the argument of a verb; this determiner can never introduce a relative clause, and no additional clitics are required to mark the complement. Clauses introduced by <lak> can be used as core arguments or obliques, without additional marking- thus the possibility that this might be analyzed as a preposition as well.

The language has only one unambiguous preposition (<va>), which is used before a determiner (other than the aforementioned <lak>) to mark obliques. Some verbs (like passivized transitives, or natural semantic ditransitives like "give") assign a specific role to at least one oblique argument, but extra oblique arguments can be added just about anywhere as long as they "make sense"- e.g., for time or locative expressions. If, for example, there were two obliques in a passive clause, one of which refers to a person and one of which refers to a building, nobody's going to be too terribly confused about which one is the demoted agent and which one is the locative.

Aside from subject clitics, any clause can have at most one explicit core argument. This is because the hierarchical alignment system cannot distinguish participants if there are two 3rd or 4th person arguments in the same clause; some languages are fine with this ambiguity, but the typical Salishan approach, copied here, is just to make it ungrammatical- if you need to talk about two 3rd or 4th person referents with explicit determiner phrases, you'll just have to find a way to re-arrange things.

The explicit marking on obliques means that oblique and core arguments can come in any order, and you are thus free to rearrange them in whatever way is most convenient- for example, to minimize center-embedding.

And it gets a whole lot more complicated with serial verb constructions, and tense and aspect and mood and so forth, but those are the bare essentials.

Now I think it's time for some examples!

First, some basics:

le-swetqe /ɬɛ.sʷy.ǀɛ/
"He is a man."

le-nk'ap /ɬɛ.n͡ʡap/
"It is a coyote."

Here we have words for "man" and "coyote", but they are not nouns- by themselves, they are intransitive verbs, meaning "to be a man", and "to be a coyote". Also note that the third-person singular subject clitic does not distinguish gender, and the third-person singular intransitive conjugation is null.

le-tupund /ɬɛ.tɯ.pɯn/
"It hit(s) it"

Here we see the word for hit, which is transitive, but otherwise behaves identically to the words for "man" and "coyote"- they're all verbs.

tupund txe swetqe-la /tɯ.pɯn tʃɛ sʷy.ǀɛ.ɬa/
tupund-0 txe swetqe-0=la
hit.TRANS-3sg.3sg DEF man-3sg=ART
"The man hit(s) it" / "The hitter is the one who is a man"

Now we have relativized <swetqe> ("to be a man") with the definite article <txe> (ignore the <-la> bit for now- it's complicated). We can also switch this around to get

swetqe ta tupund-la /sʷy.ǀɛ ta tɯ.pɯn.ɬa/
swetqe-0 ta tupund-0=la
man-3sg IND hit-3sg.3sg=ART
"The/A man is a hitter."

which just further confirms that there is no morphosyntactic distinction between <swetqe> and <tupund>- either one can act as a predicate, and either one can act as an argument.

It certainly looks in both cases like that determiner phrase is acting like a non-pronominal argument, either of <tupund> or <swetqe>, especially since the subject clitics are missing! If we put the subject clitic back in along with an explicit determiner phrase, the meaning changes substantially:

le-tupund txe nk'ap-la /ɬɛ.tɯ.pɯn tʃɛ n͡ʡap.ɬa/
le=tupund-0 txe nk'ap-0=la
3sg.SUB=hit.TRANS-3sg.3sg DEF coyote-3sg=ART
"He/it hit/is hitting the coyote."

But with some fiddling one could argue that perhaps there really is a null pronominal argument, and the relative clause isn't actually nested, but just adjacent... so let's go further!

swetqe txe tupund txe nk'ap-la /sʷy.ǀɛ tʃɛ tɯ.pɯn tʃɛ n͡ʡap.ɬa/
swetqe-0 txe tupund-0 txe nk'ap-0=la
man-3sg DEF hit.TRANS DEF coyote-3sg=ART
"The man hit(s) the coyote." / "The one who hit(s) the coyote is a man."

nk'ap txe tupundsa txe swetqe-la /n͡ʡap tʃɛ tɯ.pɯn.sa tʃɛ sʷy.ǀɛ.ɬa/
nk'ap-0 txe tupund-sa-0 txe swetqe-0=la
coyote-3sg DEF hit.TRANS-3sg.3sg DEF man-3sg=ART
"The coyote is/was hit by the man."

Note that we can't actually make <tupund> the main verb in this sentence, because then we'd have both <swetqe> and <nk'ap> left over to use as arguments, and we can't have two 3rd person core arguments in one clause. But here we can clearly see that, in either case, there must be an intransitive relative clause nested inside a transitive relative clause nested inside another intransitive matrix clause- finite, two-level deep recursive nesting, with no pronominal arguments and no noun/verb distinction!

The two determiner phrases can't both be arguments of the initial predicate, both because we know those predicates are intransitive and because we can't have more than one core argument in a clause; so either they are disconnected sentences talking about different things, which is ruled out by the known interpretation, or they are connected in some other way. Furthermore, we have evidence that the intransitive relative clauses must be nested as arguments inside the transitive relative clauses, because changing their order radically changes the interpretation and is only marginally grammatical, if you treat the first part as a parenthetical aside with the first half of the sentence missing:

... (swetqe txe nk'ap-la) txe tupund
... (swetqe-0 txe nk'ap-0=la) txe tupund-0
... (man-3sg DEF coyote-3sg=ART) DEF hit.TRANS
"... (and the man is also a coyote), who hit it."

So, bam, take that, Jelinek and Demers. No nouns, DetPs in argument position, finite recursive depth. Booyah.

Stay tuned for more Fun With Valaklusha!

Friday, May 13, 2016

Thoughts on Designing an AI for Hnefatafl

Hnefatafl is an ancient Scandinavian board game, part of a family of Tafl games played on varying board sizes and with slightly different rules for win conditions. The unifying thread of Tafl games is that they are played on odd-numbered square boards (between 7x7 and 19x19) with radially symmetric layouts, and feature asymmetric sides- one side has a king, starting in the center, whose goal it is to escape, while the other side has about twice as many undifferentiated pieces, arranged around the edges, whose goal is to capture the king.

I like playing Hnefatafl, but not a whole lot of other people do these days; given the difficulty of finding good partners, I'd like to write an AI for it. This is not something I have done yet, so I'm not sure how well it will actually work, but these are my pre-programming musings on how I might want to approach it:

The basic ground-level approach to game AI is minimax search- you calculate every board position that you can get to from where you are, and every board position your opponent can get to from those positions, and so on, as deep as you can, and then you use some heuristic to figure out which of the resulting positions are "best" (max) for you and "worst" (min) for your opponent, and take the move that has the greatest probability of taking you to good positions later.
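As a concrete (if toy) illustration, plain depth-limited minimax can be sketched in a few lines; the `moves`, `apply`, and `heuristic` callbacks here are hypothetical stand-ins for the game-specific parts, with the heuristic scored from the maximizing player's point of view:

```python
# Minimal sketch of depth-limited minimax. `moves`, `apply`, and
# `heuristic` are assumed game-specific callbacks, not real APIs.

def minimax(state, depth, maximizing, moves, apply, heuristic):
    ms = moves(state, maximizing)
    if depth == 0 or not ms:
        return heuristic(state), None
    best_move = None
    if maximizing:
        best = float('-inf')
        for m in ms:
            score, _ = minimax(apply(state, m), depth - 1, False,
                               moves, apply, heuristic)
            if score > best:
                best, best_move = score, m
    else:
        best = float('inf')
        for m in ms:
            score, _ = minimax(apply(state, m), depth - 1, True,
                               moves, apply, heuristic)
            if score < best:
                best, best_move = score, m
    return best, best_move
```

Real implementations would add alpha-beta pruning on top of this, but the skeleton is enough to see where the branching-factor problem discussed below comes from.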

All of the pieces in Hnefatafl move like rooks; combined with the large board, this gives the game an extremely large branching factor; i.e., the number of possible positions you can get to in a small number of moves gets very, very big. This makes minimax game trees very difficult to calculate to any useful depth, because it just takes too long and uses too much memory. One simple optimization is to remember states that you've seen before, and what the best move was. Then, whenever you find a state you recognize, you can stop looking deeper, saving time and memory. This works best across multiple games, with the AI learning more states and getting better each time. It only really works well, though, if you can expect to come across repeated board states on a regular basis- something which becomes very unlikely when the total number of possible states is very, very large, as it is for large Tafl boards.

Fortunately, we can exploit symmetries to reduce the number of board states 8-fold: each position can be rotated into 4 orientations, and reflected, and the strategic situation remains the same. If we can define a criterion for choosing a canonical representation for any of these sets of 8 board states, then every time we generate a new possible move, we can put it in canonical form to compare against a database of remembered states and best moves from those states, and we just have to keep track of 3 bits of additional information to remember how to transform the board between internal representation and presentation, so that the board doesn't flip and rotate on-screen in case the states preceding and following a move have different canonical orientations.
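A canonicalization criterion along these lines is easy to sketch: generate all 8 variants of a position and pick, say, the lexicographically smallest one. The tuple-of-tuples board representation here is just an assumption for illustration:

```python
# Sketch of canonicalizing a square board under its 8 symmetries
# (4 rotations x optional reflection), so strategically identical
# positions map to one representative state.

def rotate(board):
    # Rotate a tuple-of-tuples board 90 degrees clockwise.
    return tuple(zip(*board[::-1]))

def symmetries(board):
    variants = []
    b = board
    for _ in range(4):
        variants.append(b)
        variants.append(tuple(row[::-1] for row in b))  # mirror image
        b = rotate(b)
    return variants

def canonical(board):
    # The lexicographically smallest variant is the canonical form.
    return min(symmetries(board))
```

Any position and its rotations/reflections then hash to the same database key, with the choice of variant doubling as the "3 bits" needed to map between internal and on-screen orientation.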

Unfortunately, the space of board states is still huge. For a 13x13 board (my personal favorite), there are something in the ballpark of 6x10^77 distinct board states, after accounting for symmetry. Some board states are enormously more likely than others, but even so, with such a large search space, the probability of coming across repeated states between multiple games after the first few moves is extremely low, making a database of previously-accumulated "good move" knowledge of marginal utility. So, we need to do better.

As do many board games, Hnefatafl gives rise to a large number of recognizable patterns that take up only a small part of the board, but disproportionately control the strategic situation. Trying to recognize entire boards is kind of dumb, because even if you have figured out the best move from one position already, that will be useless if you later come across a strategically-identical position that happens to have one inconsequential piece misplaced by one square. If the AI can recognize parts of boards as conforming to familiar patterns, however, there is less important stuff to memorize, and the memorized knowledge is much more generalizable.

So, how do we recognize patterns? Some patterns involve specific configurations of pieces in specific relative positions, while others just involve having a certain number of pieces that can attack a particular position, regardless of how exactly they are spaced out along a row or a column. To capture both kinds of information, we'll want to keep track of the pieces arranged in a small segment of the board, as well as a count of how many pieces (and of which type) are on each row and column leading out of that section. In my experience, a 4x4 square should be large enough to capture most common and important patterns. Using a 4x4 square means some duplication of storage for smaller patterns, but we can still make use of symmetry to reduce storage requirements and branching factors. There are generally three types of pieces (attackers, defenders, and the king), and generally three types of squares which may have different movement rules associated (normal squares, the king's throne in the center, and goal squares where the king escapes).
Given restrictions on how throne and goal squares can be positioned relative to each other, and the fact that there can only ever be one king, there are somewhere between 5380841 and something like 3.6x10^11 canonical 4x4 squares. The upper end of that range is still pretty huge, but without doing the detailed calculation, I suspect the exact number is much closer to the lower end. Given that some positions are still much more likely than others, this looks like a range where we can actually expect to come across repeats fairly often, without having to store an impractically large amount of data. Additionally, with this method it becomes possible to share strategic memory across multiple different Tafl variations!

Adding in row-and-column counts of course increases the number of variations; storing the maximum amount of information on row and column dispositions would increase the number of specific varieties by about 3^9 per row and column, plus 11 positions for a square vertically or horizontally, or a total factor of about 25548534 for 4x4 squares on a 13x13 board; smaller boards, of course, would use smaller subsets of that total set.

I'm not sure what the probability distribution of those extra states is, exactly; I suspect that they will be clustered, in which case the large total space is not a huge deal, but we can also probably get fairly good performance with a simplified model, which simply stores which side, if any, controls a particular row or column, and how many pieces that side has on that row or column before you reach an enemy or the edge. Just storing which side has control might give useful performance, which only multiplies the memory space by a factor of 24 regardless of board size (one or the other side or nobody for each of 8 perimeter spaces) but my intuition about the game says that storing counts is actually important. It may take experimenting to see if it's really worth fragmenting the pattern memory in this way.
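The simplified row/column model might be sketched like this; the cell encoding ('.' empty, 'A' attacker, 'D' defender) and the function name are invented for illustration:

```python
# Toy sketch of the simplified line model: walking outward from a
# window edge, record which side (if any) has the first piece, and
# how many pieces that side has before an enemy piece or the board
# edge intervenes. Cells: '.' empty, 'A' attacker, 'D' defender.

def line_control(cells):
    """cells: squares along a row/column, ordered outward from the window."""
    side = None
    count = 0
    for c in cells:
        if c == '.':
            continue
        if side is None:
            side = c          # first piece seen claims control
            count = 1
        elif c == side:
            count += 1
        else:
            break             # an enemy piece ends the run
    return side, count
```

Dropping the count and keeping only `side` gives the cheaper 3-way "control" model discussed above; keeping the count is what I suspect actually matters strategically.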

In any case, this general approach won't capture every strategically-important pattern, but it will get most of them. A few large-scale patterns, like building walls across the board, could be hand-coded, but they might end up being automatically recognized as advantageous just due to the positive influence of many overlapping small sections that are individually recognized as advantageous.

Now, we can use the database of board sections both to evaluate the strength of a particular position without having to search deeper to find out how it turns out in the end, and to remember the best move from a particular familiar position without having to calculate all possible moves or recognize entire boards.

For evaluating positions, pattern segments can be assigned weights depending on their prevalence and proximity to wins and losses- when the AI wins, it can examine all patterns in the series of states for that game, and assign positive values to them, and vice-versa when it loses.
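A crude version of that weighting scheme is a reinforcement-style update that nudges every pattern seen during a game toward the game's outcome; the dictionary representation and learning rate here are assumptions, not a worked-out design:

```python
# Sketch of outcome-based pattern weighting: after a game, move the
# weight of every observed pattern toward +1 (win) or -1 (loss).

def update_weights(weights, game_patterns, won, rate=0.1):
    """weights: dict pattern-key -> float; game_patterns: iterable of
    pattern keys observed across the game's states."""
    target = 1.0 if won else -1.0
    for p in game_patterns:
        w = weights.get(p, 0.0)
        weights[p] = w + rate * (target - w)
    return weights
```

Position evaluation then becomes (roughly) a sum of weights over the patterns present on the board, with frequently win-associated patterns pulling the score up.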

To pick good moves, it should be possible to prove just by looking at a small section of the board that moving certain pieces will always result in a worse position, or always result in a better position, and thus minimize the number of possible moves that need to be calculated to pick the actual best.

Language Without the Clause

Having been playing for a while with WSL (which has no verbs) and Valaklusha (which has no nouns, and which I have not yet blogged about), I had a realization that there are some linguistic concepts even more fundamental than the noun/verb distinction which are nevertheless still not essential to communication.

In particular, every language I have ever heard of has something that can be reasonably called a clause (some syntactic structure which describes a particular event, state, or relation) which is usually (though not always) recursively nestable with other clauses to make more complex sentences.

Predicate logic, however, does not have to be analyzed in terms of nicely-bounded clauses in the linguistic sense. (There are things in logic called "clauses", but they're not the same thing.) Predicates describing the completely independent referents of completely independent linguistic clauses can be mixed together in any order you like with no loss of meaning, due to the availability of an effectively infinite number of possible logical variables that you can use to keep things straight. In fact, we can not only get rid of clauses- we can get rid of recursive phrase structure entirely.

There are, of course, practical problems with trying to use an infinite set of possible pronouns in a speakable language. If there weren't, creating good loglangs wouldn't be hard! But even with a relatively small finite set of pronouns representing logical variables, it's possible to create unambiguous logical structures that overlap elements from what would normally be multiple clauses. Thus, we could come up with a language which, rather than using recursive nesting, uses linear overlapping, with no clear boundaries that can be used to delimit specific clauses.

After thinking of that, I realized that Gary Shannon's languages Soaloa and Pop are examples of exactly that kind of language, although (as described on that page) they do have an analysis in terms of nested clauses. Eliminating the possibility of a clausal analysis requires something a little more flexible.

A better example is Jeffrey Henning's Fith. It works completely differently from Pop- which should not be surprising! It would be quite surprising to discover that there are only two ways of structuring information, using clauses and not-using-clauses- and indeed, that's not how it is. This is not a dichotomy, which means there is a huge unexplored vista of untapped conlanging potential in the organizational territory outside of recursive clause structure land.

Fith is inspired by the programming language FORTH, and is supposed to be spoken by aliens who have a mental memory stack. Some words (nouns) add concepts to the stack, and other words (verbs and such) manipulate the items already on the stack and replace them with new, more complex concepts. If that were all, Fith would look like a simple head-final language, and the stack would be irrelevant- but that is not all! There are also words, called "stack conjunctions" or "stack operators", which duplicate or rearrange (or both) the items already on the stack. Because it can duplicate items on the mental stack, Fith has no need for pronouns, and every separate mention of a common noun can be assumed to refer to a different instance of it- if you meant the same one, you'd just duplicate the existing reference! But more importantly, the existence of stack operators means that components of completely independent semantic structures can be nearly-arbitrarily interleaved, as long as you are willing to put in the effort to use the right sequence of stack operators to put the right arguments in place for each verb when it comes. One can write a phrase-structure grammar that describes all syntactically valid Fith utterances... but it's meaningless. Surface syntax bears no significant relation to semantics at all, beyond some simple linear ordering constraints.

In fact, Fith is a perfect loglang- it can describe arbitrarily complex predicate-argument structures with no ambiguity, and it doesn't even require an arbitrary number of logical variables to do it! Unfortunately, it's also unusable by humans; Fith doesn't eliminate memory constraints, it just trades off remembering arbitrarily-bound pronouns with keeping track of indexes in a mental stack, which is arguably harder. Incidentally, this works for exactly the same reason that combinator calculus eliminates variables in equivalent lambda calculus expressions.

(Side note: the stack concept is not actually necessary for evaluating Fith or combinator calculus- it's just the most straightforward implementation. Some of Lojban's argument-structure manipulating particles actually have semantics indistinguishable from some stack operators, but Lojban grammar never references a stack!)

Having identified two varieties of languages that eschew traditional clauses, here's a sketch of a framework for a third kind of clauseless language:

The basic parts of speech are Pronouns, Common Verbs, Proper Verbs, and Quantifiers; you can throw in things like discourse particles as well, but they're irrelevant to the basic structure. This could be elaborated on in many ways.

Pronouns act like logical variables, but with a twist: just like English has different pronouns for masculine, feminine, and non-human things ("he", "she", and "it"), these pronouns are not arbitrarily assigned, but rather restricted to refer to things in a particular semantic domain, thus making them easier to keep track of when you have a whole lot of them in play in a single discourse. Unlike English pronouns, however, you'd have a lot of them, like Bantu languages have a large number of grammatical genders, or like many languages have a large number of classifiers for counting different kinds of things. In this way, they overlap a bit with the function of English common nouns as well.
In order to allow talking about more than one of a certain kind of thing at the same time, one could introduce things like proximal/distal distinctions.

Verbs correspond to logical predicates, and take pronouns as arguments with variable arity. There might be a tiny bit of phrase structure here, but at least it would be flat, not recursively nestable- and one could treat pronouns as inflections to eliminate phrase structure entirely.
These aren't quite like normal verbs, though- they do cover all of the normal functions of verbs, but also (since they correspond to logical predicates) the functions of nouns and adjectives, and some adverbs. Essentially, they further constrain the identity of the referents of the pronouns that they take as arguments, beyond the lexical semantic restrictions on the pronouns themselves. Common verbs are sort of like common nouns- they restrict the referents of their arguments to members of a certain generic class. Proper verbs, on the other hand, restrict at least one of their arguments to refer to exactly one well-known thing.

Quantifiers re-bind pronouns to new logical variables. They cover the functional range of articles, quantifiers, and pronouns in English. The simplest way for quantifiers to work would be a one-to-one mapping of predicate logic syntax, where you just have a quantifier word combined with a pronoun (or list of pronouns) that it binds. A bit more interesting, however, might be requiring quantifiers to attach to verbs, implicitly re-binding the arguments of the modified verb to the referents which are the participants in the event described by the verb. If that is not done, it might be useful to have "restrictive" vs. "non-restrictive" inflections of verbs, where restrictive verbs limit the domain of the binding quantifiers for their arguments, and non-restrictive verbs merely provide extra information without more precisely identifying the referent (like English restrictive vs. non-restrictive relative clauses).
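The core mechanics of this sketch- quantifiers re-binding pronouns to fresh logical variables, and verbs asserting predicates over whatever the pronouns currently refer to- can be modeled in a few lines of code. Everything here (the token format, the word classes) is invented purely for illustration:

```python
# Toy model of the clauseless framework: 'Q' tokens re-bind a pronoun
# to a fresh logical variable; 'V' tokens assert a predicate over the
# variables the named pronouns currently refer to.

import itertools

def interpret(tokens):
    fresh = itertools.count()
    bindings = {}    # pronoun -> logical variable (an integer)
    facts = []       # (predicate, (var, var, ...)) assertions
    for word in tokens:
        if word[0] == 'Q':                 # ('Q', pronoun)
            bindings[word[1]] = next(fresh)
        else:                              # ('V', predicate, pronoun, ...)
            facts.append((word[1], tuple(bindings[p] for p in word[2:])))
    return facts
```

Note that nothing in this model requires the verbs describing one referent to be adjacent: re-binding 'it' mid-stream simply starts constraining a new variable, while 'he' keeps referring to the old one- which is exactly the continuous topic-blending described below.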

For very simple sentences, this wouldn't look too weird, except for the pre-amble where you quantify whatever pronouns you intend to use first. But the first thing to notice is that, just like in predicate logic, there is no need for all of the verbs describing the same referent to be adjacent to each other. A sentence in English which has two adjectives describing two nouns could be translated with the translation-equivalent of the nouns all at the front and the translation-equivalent of the adjectives all at the end, with the translation-equivalent of the verb in between. But hey, if you have a lot of agreement morphology, normal languages can sometimes get away with similar things already; although it's not common, separating adjectives from nouns can occur in, e.g., Russian.

Where it gets really weird is when you try to translate a complex sentence or discourse with multiple clauses.
When multiple English clauses share at least one referent, this relation is not indicated by nesting sentences within each other, or conjoining them in series, but by stringing on more verbs to the end, re-using the same pronouns as many times as you like. Occasionally, you stick in another quantifier when you need to introduce a new referent to the discussion, possibly discarding one that is no longer relevant. But, since this can be done re-binding one pronoun at a time while leaving others intact, the discourse can thus blend continuously from one topic into another, each one linked together by the overlap of participants that they have in common, with no clear boundaries between clauses or sentences. If you were very careful about your selection of pronouns, and made sure you had two non-conflicting sets, you could even arbitrarily interleave the components of two totally unrelated sentences without ambiguity!

Note that this does not describe a perfect loglang- the semantic structures it can unambiguously encode are different from those accessible to a language with tree-structured, recursively nested syntax, but they are still limited, due to the finite number of pronouns available at any one time. This has the same effect on limiting predicate logic as limiting the maximum stack depth does in Fith.

When discussing this idea on the CONLANG-L mailing list, some commenters thought that, like Fith, this style of language sounded incredibly difficult to process. But, I am not so certain of that. It definitely has pathological corner cases, like interleaving sentences, but then, so does English- witness garden path sentences, and the horrors of center embedding (technically "grammatical" to arbitrary depth, but severely limited in practice). Actual, cognitively-limited, users would not be obligated to make use of every structure that is theoretically grammatical! And even in the case of interleaved sentences, the ability of humans to do things like distinguishing multiple simultaneous voices, or separate note trains of different frequencies, makes me think it might just be possible to handle, with sufficient practice. Siva Kalyan compared the extremely free word order to hard-to-process Latin poetry, but Ray Brown (although disagreeing that my system sounded much at all like Latin poetry) had this to say on processability:
If it really is like Latin poetry then it certainly is not going to be beyond humans to process in real-time. In the world of the Romans, literature was essentially declaimed and heard. If a poet could not be understood in real-time as the poem was being declaimed, then that poet's work would be no good. It was a world with no printing; every copy of a work had to be made by hand. Whether silent reading ever occurred is debatable. If you had the dosh to afford expensive manuscripts, you would have an educated slave reading them to you. What was heard had to be processed in real-time. The fact that modern anglophones, speaking a language that is poor in morphology and has to rely to a large extent on fairly strict word-order, find Latin verse difficult to process in real time is beside the point. Those brought up with it would certainly do so.
And being myself a second-language speaker of Russian, I have to say I am fairly optimistic about the ability of the human mind to deal with extremely free word-order. If I, a native Anglophone, can handle Russian without serious difficulty, I see no reason why the human mind should not be able to handle something even less constrained, especially if it is learned natively. Furthermore, if I think of pronouns like verb agreement inflections in a somewhat-more-complicated-than-usual switch-reference system, where the quantifiers act like switch-reference markers, then it starts to feel totally doable.

There are several devices speakers could use to reduce the load on listeners, like repeating restrictive verbs every once in a while to remind listeners of the current referents of the relevant pronouns. This wouldn't affect the literal meaning of the discourse at all, but would reduce memory load. This is the kind of thing humans do anyway, when disambiguating English pronouns starts to get too complicated.

This system also allows "floating" in the style of Fith, where one binds a pronoun and then just never actually uses it; unlike Fith, though, if the class of pronouns is small enough, it would quickly become obvious that you were intentionally avoiding using one of them, which should make the argument-floating effect more obvious to psychologically-human-like listeners.

Now, to be clear, what I have described thus far is not really a sketch for one single language, but rather a sketch for a general structure that could be instantiated in numerous different ways, in numerous different languages. As mentioned above, pronouns could exist as separate words, or as verb inflections. The components that I've packaged into Pronouns, Quantifiers, and Verbs could be broken up and re-packaged in slightly different ways, cutting up the syntactic classes into different shapes while still maintaining the overall clauseless structure. One could introduce an additional class of "common nouns" which mostly behave like pronouns, and represent logical variables, but have more precise semantics like verbs. This is potentially very fertile ground for developing a whole family of xenolangs with as much variation in them as we find between clause-full human natlangs! And I am feeling fairly confident that a lot of them would end up as something which, sort of like WSL, is still comprehensible by humans even if it could never arise as a human language naturally.

Friday, May 6, 2016

A Programming Language for Magic

Or, The Structure and Interpretation of Demonic Incantations, Part II.

As previously mentioned, a great deal of the training for a magician in the animist, demon-magic world consists of developing the mental discipline to compose precise and unambiguous instructions. Natural human languages are terrible at both precision and disambiguation; legalese is a direct result of this fact, and it still fails. In order to summon demons safely, therefore, a specialized formal language will be required- a programming (or "incantation") language.

Many people criticize FORTH and related languages for being hard to follow when data is passed implicitly on the program stack, but that's not an indictment of the concatenative style- it's an indictment of point-free (or, less charitably "pointless") style; i.e., the avoidance of explicit variable names. Concatenative programs can use explicit variables for convenience, and applicative languages can be written in point-free style as well. This is in fact quite common in Haskell and various LISPs, though the syntax of most mainstream languages makes it less convenient.

Despite both being called "languages", programming languages aren't much like human languages. That is, after all, why we need them- because human languages don't do the job. Conversely, programming languages tend to not play nice with humans' natural language abilities. Nobody likes trying to speak a programming language, or even to write fluently in one. When working out ideas on a whiteboard, or describing an algorithm in a paper, real-world programmers and computer scientists frequently avoid real programming languages and fall back on intuitive "pseudo-code" which sacrifices rigor in exchange for more easily communicating the essential ideas to other humans (or themselves 10 minutes later!), with careful translation into correct, computer-interpretable code coming later.

Now, magicians could work in a similar way, and it makes sense that they frequently would: laboriously working out correct spells in a written incantation language ahead of time, and then executing them by summoning a demon with the simple instruction "perform the task described on this paper". One can even go farther and imagine, given the effort involved in creating such a spell, that they might be compiled into books for frequent re-use, stored in a university library, such that from then on magicians could execute common useful spells by instructing a demon to, e.g., "execute the incantation on page 312 of volume 3 of the Standard Encyclopedia of Spells as stored on shelf 4A of the Unseen University Library".

But, this is not a universal solution. For many simple spells, the description of where to find them may well be longer and more error-prone than simply reciting the spell itself. And, as any sysadmin could tell you, sometimes it's not worth writing a stored program for one-off tasks; you just need to be able to write a quick bash script on the command-line. For magical purposes, therefore, we'll want a programming language that is optimized for speaking, while retaining the precision of any other programming language.

After some discussion on the CONLANG-L mailing list, I think I have a pretty good idea of (one possibility for) what a "magical scripting language" might look like.

The major problem that I see needing to be solved is parenthesization; one can't be expected to properly keep track of indentation levels in speech, and explicit brackets around everything can get really hard to keep track of even when you can look at what you're doing on a screen (which is why we have special editors that keep track of bracket-matching for us!)

Ideally, the language would not require explicit parenthesization anywhere, and other "syntactic noise" and boilerplate code would be minimized, so that every token of the language carries as much semantic weight as possible, to make them easier to memorize. This leads me to think that a concatenative, combinator-based language (like FORTH, or the non-programming-language conlang Fith) would be the best base, on top of which "idiomatic" flourishes could be added. Concatenative languages are often called "stack based", but they don't have to be implemented with a stack. They are in fact isomorphic to applicative languages like LISPs (i.e., there is a purely mechanical method for transforming any concatenative program into a fully-parenthesized applicative program and vice-versa), and can equally well be thought of as "data flow" languages, specifying how the outputs of one operation connect up with the inputs of another, and this is in fact a topic I have written about on this blog before.
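The concatenative/applicative correspondence is easy to see in miniature: a postfix program like `2 3 + 4 *` denotes exactly the tree an applicative language would write as `(* (+ 2 3) 4)`. Here's a toy evaluator (the word set is invented for illustration), including `dup` and `swap` as examples of stack operators in the Fith/FORTH sense:

```python
# Toy concatenative evaluator: numbers push themselves, binary
# operators consume the top two items, and `dup`/`swap` are stack
# operators that copy or exchange items without consuming them.

def run(program):
    stack = []
    binops = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}
    for word in program.split():
        if word == 'dup':                # copy the top item
            stack.append(stack[-1])
        elif word == 'swap':             # exchange the top two items
            stack[-1], stack[-2] = stack[-2], stack[-1]
        elif word in binops:
            b, a = stack.pop(), stack.pop()
            stack.append(binops[word](a, b))
        else:
            stack.append(int(word))
    return stack
```

Note that the mechanical translation runs both ways: any fully-parenthesized applicative expression flattens to a bracket-free postfix word string, which is precisely the property we want for a spoken language.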

There is a trade-off between being able to keep track of complex data flow without any explicit variable names, which can get very difficult if certain items are used over and over again, and being required to keep track of lots of temporary or bookkeeping variables (like loop indices). A concatenative language that allows you to define variables when you want, but minimizes extra variables whenever they would constitute "syntactic noise" with minimal semantic value, seems to me the best of both worlds. This should make simple incantations relatively easy to compose on-the-fly, and easy to memorize, as they would be mostly flat lists of meaningful words, similar to natural language.

Once you introduce re-usable variables, however, you have to decide the question of how variables are scoped. Letting all variables have the same meaning everywhere ("global scope") is an extremely bad idea; it makes it very dangerous to try re-using existing spells as components of a new spell, because variable names might be re-used for very different purposes in different sub-spells. If we want to be able to define and re-use sub-spells (i.e., functions, subroutines, procedures, or blocks, in terms of real-world programming languages), there are two major approaches to scoping rules available: dynamic scoping, and static scoping.

Loosely speaking, dynamic scoping means that any variables that are not defined in a function (or block) are the same as variables with the same name that are in-scope at the place where the function was called. This can only be figured out when a function is actually run, hence "dynamic". Static scoping means that any variables not defined in a function (or block) are the same as variables with the same name that are in-scope at the place where the function was defined. This can be figured out by looking at the text of a program without running it, hence "static".
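The difference is easiest to see with environments made explicit; this sketch (plain dicts standing in for scopes, names invented) shows the same free variable resolving two different ways:

```python
# Sketch of static vs. dynamic resolution of a free variable: the
# function body just reads `x`, and the two rules disagree about
# which environment `x` comes from.

def make_function(body_var, def_env):
    def call_static(call_env):
        return def_env[body_var]     # static: environment of definition
    def call_dynamic(call_env):
        return call_env[body_var]    # dynamic: environment of the call
    return call_static, call_dynamic

definition_scope = {'x': 'defined-here'}
call_scope = {'x': 'called-here'}
static_f, dynamic_f = make_function('x', definition_scope)
```

Under static scoping the call-site binding of `x` is invisible to the function; under dynamic scoping it shadows the definition-site one- which is exactly the property exploited below for implicit spell arguments.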

Dynamic scope is generally easier to implement in (some kinds of) interpreters, while static scope is generally easier to implement in compilers, which early on in CS history led to a split between some languages using dynamic scope and some using static scope just because it was easier to implement the language that way. The modern consensus, however, is that dynamic scope is almost always a Bad Idea, because it makes it harder to analyze a program statically and prove that it is correct, whether with an automatic analysis tool or just in terms of being able to look at it and easily understand how the program works, without running it. Nevertheless, dynamic scope does have some legitimate use cases, and some modern languages provide work-arounds to let you enforce dynamic scoping where you really need it.

Scala, for example, simulates dynamic scoping with "implicit arguments", and this really highlights a good argument for dynamic scoping in a spoken programming language. One pain point with most programming languages, which would only be exacerbated in a spoken context, is remembering the proper order for long lists of function arguments. This can be ameliorated by named (rather than positional) arguments, but having to recite the names of every argument every time one calls on a sub-spell seems like a lot of boilerplate that it would be nice to be able to eliminate.

With dynamic scoping, however, you get the benefits of named arguments without having to actually list them either in a function definition (although that might well be a good idea for documentary purposes when spells are written down!) or when calling them. Just use reasonable names in the body of a sub-spell for whatever object that sub-spell is manipulating, and as long as variables with the appropriate names have already been defined when you wish to use a sub-spell, that's all you need. Making sure that those names refer to the correct things each time the sub-spell is called is taken care of automatically.


So, what might this incantation language actually look/sound like? Let's look first at a simple script to do your dishes after a meal:

loop [more-than COUNT-DISHES 0] [let DISH [NEXT-DISH] DISH pickup DISH SINK move WATER DISH use SOAP DISH use DISH CABINET move]

Things that are assumed to be built-in to the language are in lower case, while things that are assumed to be magician-defined are in upper case. If every "verb" has fixed arity (so you always know exactly how many arguments it will need- there are no ambitransitives), then you don't need brackets around arguments or special delimiters between statements/expressions. We only need them to delimit "complement clauses"- i.e., blocks of code that are themselves arguments to other operators (in this case, "loop" and "let").
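To show why fixed arity makes brackets unnecessary, here's a small Python sketch of a reader for clauses like "DISH SINK move", where arguments precede their verbs. The arity table is made up for illustration- a real implementation would derive it from the spell definitions:

```python
# Hypothetical arity table: how many arguments each verb consumes.
ARITY = {"pickup": 1, "move": 2, "use": 2, "more-than": 2,
         "COUNT-DISHES": 0, "NEXT-DISH": 0}

def parse_postfix(tokens):
    """With fixed arities, argument-before-verb code needs no brackets:
    keep a stack of parsed expressions, and each verb pops exactly as
    many arguments as its arity says, no more and no fewer."""
    stack = []
    for tok in tokens:
        argc = ARITY.get(tok, 0)  # unknown names are 0-arity atoms
        args = [stack.pop() for _ in range(argc)][::-1]
        stack.append((tok, *args))
    return stack

trees = parse_postfix("DISH pickup DISH SINK move".split())
print(trees)  # -> [('pickup', ('DISH',)), ('move', ('DISH',), ('SINK',))]
```

Because every verb's argument count is known in advance, the token stream parses deterministically into a sequence of statements with no delimiters at all; ambiguity only arises once a block itself must be passed as an argument, which is exactly where the brackets come back.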

If magicians want to create their own higher-order spells which take other spells as arguments, there's not much to be done about eliminating those brackets. There are transformations in combinator calculus that can be used to remove them, but the results are extremely difficult for a human to follow, and no one would compose them on-the-fly for any task of significant complexity. We could introduce a macro metaprogramming system that would let magicians define their own clause-delimiter syntax to eliminate cruft or enhance memorability, but then you might have to remember separate syntax for every sub-spell, which could get even worse than having to remember a single set of brackets. This is, in fact, a real problem that the LISP community (among others) deals with- if your language is too powerful and too expressive, you get fragmentation problems, and end up with an extra memory burden trying to keep track of what ridiculous things your co-workers decided to use that power for. It's often easier to enforce consistency.

Still, we can achieve some improvement by adding some "flourishes" to some of the basic, built-in operators in the language, replacing generic brackets with special construct-specific keywords. This is in fact a strategy taken by many real-world programming languages, which use block keywords like "while...wend" or "if...endif" rather than generic brackets for everything. With that alteration made, the spell might look something like this:

while more-than COUNT-DISHES 0 loop let DISH be NEXT-DISH in DISH pickup DISH SINK move WATER DISH use SOAP DISH use DISH CABINET move wend

Now there's a lot of distance between the "loop" and the "wend", which could, in a more complicated spell, make the pairing easy to lose track of, even with the specialized keywords. To help with that, we can do some aggressive let-abstraction- assigning blocks to variables, and then using the short variable name in place of the long literal block definition. That gets us a spell like

let WASHING be let DISH be NEXT-DISH in DISH pickup DISH SINK move WATER DISH use SOAP DISH use DISH CABINET move in while more-than COUNT-DISHES 0 loop WASHING wend

In some ways, that's better, but now we have the potentially-confusing sequence of nested "let"s ("let ... be let ... be ... in ... in"), and we can't move the second "let" outside the definition of "WASHING" because it actually needs to be re-evaluated, redefining "DISH", every time "WASHING" is called. This wouldn't be a big problem in a written programming language, but it's seriously annoying in speech. There are a few ways it could be fixed; one solution I like is allowing cataphoric variables (which refer to a later definition) in addition to anaphoric variables (which refer to an earlier definition), so we can switch up which structure to use to avoid repetition, and make a choice about which structure is more conceptually appropriate in different kinds of spells. Cataphoric variables are not terribly common in real-world programming languages, but they do exist, as in Haskell's "where"-clauses. Implementing "where" in our magic language, we can write the spell as

while more-than COUNT-DISHES 0 loop WASHING wend
where WASHING is let DISH be NEXT-DISH in DISH pickup DISH SINK move WATER DISH use SOAP DISH use DISH CABINET move done
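Python's late binding gives a rough feel for how a cataphoric reference can work without any special machinery: names inside a function body are looked up at call time, so the body can mention a helper that is only defined further down. The `spell` and `WASHING` names here are hypothetical stand-ins:

```python
# The spell body mentions WASHING "cataphorically"- at definition
# time that name doesn't exist yet, and nothing goes wrong, because
# Python resolves the name only when spell() is actually called.
def spell():
    return [WASHING(dish) for dish in ["plate", "cup"]]

# ...and only now do we supply the definition, where-clause style.
def WASHING(dish):
    return f"washed {dish}"

print(spell())  # -> ['washed plate', 'washed cup']
```

The constraint is just that the definition must exist by the time the spell is executed- which is exactly the guarantee a "where ... done" clause would provide in speech.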

We might also take advantage of dynamic scoping to write the algorithm in a more functional style, mapping over the collection of dishes, like so:

for DISH in DISHES loop WASHING rof
where WASHING is DISH pickup DISH SINK move WATER DISH use SOAP DISH use DISH CABINET move done


In either case, the nested "let" to define the DISH variable is eliminated by binding a DISH variable in a for-in construct over a collection of DISHES, and relying on dynamic scope to make that variable available in the body of WASHING. Note that this avoids the need for spoken function parameter definitions, although it would be relatively easy to add syntax for them in cases where they do turn out to be useful.
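A Python sketch of that mechanism, with the dynamic environment again simulated as an explicit dictionary (the `ENV`, `WASHING`, and `do_dishes` names are made up for illustration):

```python
ENV = {}  # dynamic environment shared by the caller and its sub-spells

def WASHING():
    # No parameter list: DISH is found in the dynamic environment,
    # freshly rebound on every iteration of the caller's loop.
    return f"washed {ENV['DISH']}"

def do_dishes(dishes):
    results = []
    for dish in dishes:     # the for-in construct binds DISH...
        ENV["DISH"] = dish  # ...into the dynamic environment
        results.append(WASHING())
    return results

print(do_dishes(["plate", "bowl"]))  # -> ['washed plate', 'washed bowl']
```

Each call to `WASHING` sees the current iteration's binding automatically, which is what lets the sub-spell be defined with no argument list at all.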

Now, this looks like a pretty straightforward spell, but note that it only works if we assume that things like "pickup", "NEXT-DISH", "use", and so forth have all been rigorously and safely defined ahead of time! These are all actually pretty complex concepts, and in practice demon magic would probably not be used for relatively low-effort, high-descriptive-complexity tasks like washing dirty dishes. But, for magical scripting to be useful, we would want a lot of these kinds of everyday things to be pre-defined. And just as *NIX users tend to have opinions about what things they individually want to have available in their command-line environment, and maintain custom .bash_profile/.bashrc files per login account, I imagine that perhaps working magicians carry around phylacteries containing written copies of personally-preferred spells, and begin each incantation by referring to them to establish the appropriate execution environment.