Sunday, October 20, 2024

On the Tjugem Alphabet & Font

This Bluesky thread with Howard Tayler reminded me that, although I posted progress updates about it on Twitter back in the day, I never did a comprehensive write-up on how the thing works.

    A good place to start is this Reddit comment on Toki Suli. Yeah, it's not Tjugem, but phonetically it works the same way. Quote:

in the WAV files, the 'm' sounds seem to be going up rather than down, such as with "mi", even though the "m" is supposed to be grave. sharp and acute sounds seem to go down rather than up, such as in "tu".

is the linguistic term for "downward" vs "upward" the opposite of what i'd expect from a western music theory perspective? or am i maybe missing something as i'm listening to the files?

    Yes, Reddit user, you were missing something! Because in the phonetics of human whistle registers, "grave" and "acute" are positions, not motions. So, if you move from a vowel to a grave consonant, the formant will go down in pitch--from a middle-pitch vowel locus to a low-pitch consonant locus. But when going from a grave consonant to a vowel, pitch will go up--from a low-pitch consonant locus to a middle-pitch vowel locus. An "m" in between two vowels will be realized by a down-then-up formant motion, while a "t" between two vowels will be realized by an up-then-down motion.
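    If it helps to see the locus model in action, here is a minimal Python sketch of it. The specific Hz values and the names LOCI and formant_motions are my own illustrative assumptions, not Tjugem's actual loci:

        # Loci are target pitches; consonants are realized as motions between them.
        LOCI = {
            'V': 1800,  # a generic mid-pitch vowel locus (assumed value)
            'm': 1200,  # a grave consonant: low-pitch locus (assumed value)
            't': 2600,  # an acute consonant: high-pitch locus (assumed value)
        }

        def formant_motions(segments):
            """List the pitch movements between successive segment loci."""
            return ['up' if LOCI[b] > LOCI[a] else 'down'
                    for a, b in zip(segments, segments[1:])]

        print(formant_motions(['V', 'm', 'V']))  # ['down', 'up'] -- intervocalic m
        print(formant_motions(['V', 't', 'V']))  # ['up', 'down'] -- intervocalic t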

    Now, because whistled speech only has a single formant, it turns out to be not-unreasonable to write whistled speech as an image of the formant path on a spectrogram. You can just write a continuous line with a pen! Or, almost. There are some details--like amplitude variation--that are lost if you try to write with a ballpoint, and still difficult to get right if you write with a wide-tip marker or fountain pen. Thus, a few extra embellishments and decorations are useful, but that is the basic concept: each letter is just the shape that that letter makes on a spectrogram when pronounced. And with just that background, you should be able to start to make sense of this chart of Tjugem letters, as they would be written on lined paper:


    The correspondence between Tjugem glyphs and the standard romanization is as follows:

   
    Keep in mind, however, that the actual phonemes are whistles--not sounds that are representable with the IPA, despite the fact that the romanization is designed to be pronounceable "normally" if you really want to. And for the sake of space, only the allographs for one vowel environment are shown for each consonant. The G glyph is not so much a "glyph" as a lack of one, which is why it does not show up in the first image; acoustically, the phoneme is just a reduction in the amplitude of a vowel, represented by a break in the line. Thus, any line termination could be interpreted as a G. That necessitated the introduction of the line termination glyphs, which have no phonetic value but just indicate that a word ends with no phonemic consonant. The above-line vs. below-line variants of the Q glyph are chosen to visually balance what comes before or after them. Additionally, the "schwa" vowel (romanized as "E") is not represented by any specific glyph. The existence of a schwa sound in the first place is an unavoidable artifact of the fact that transitioning between certain consonants requires moving through the vowel space, but which vowel loci end up being hit isn't actually important. So, in the Tjugem script, the schwa just turns into whatever stroke happens to make the simplest connection between adjacent consonants.

    You shouldn't be expected to always be writing on lined paper, which explains the extra lines--a mark above or below a vowel segment tells you whether it is a high vowel or a low vowel, for those curves which could be ambiguous. And the circular embellishments help to distinguish manner of articulation for different consonants which have the same spectral shape but different amplitude curves--differences that would otherwise have to be indicated by varying darkness or line weight. But note in particular that every consonant comes in a pair of mirror-symmetric glyphs: one moving from the vowel space to the consonant locus, and one moving from the consonant locus to the vowel space. And there are three different strokes for each half-consonant depending on which vowel is next to it, making for a total of six different strokes for every consonant, because the actual spectral shapes of consonants change depending on their environment! It's allophony directly mirrored in allography.

    This makes creating a font for Tjugem rather... complicated. Sure, we could assign every allograph to a different codepoint, but that would be very inconvenient to use. It would be nice if we could just type out a sequence of phonemes, one keystroke per phoneme, and have the font take care of the allographic variation for us! Is that sort of thing possible? Yes! Yes, it is!

    The individual letter forms get assigned to a list of display symbols, specifying every possible consonant/vowel pairing:
# i_t i_d i_n i_k i_g i_q i_p i_b i_m
# a_t a_d a_n w_a_k j_a_k a_g w_a_q j_a_q a_p a_b a_m
# u_t u_d u_n u_k u_g u_q u_p u_b u_m
# t_i d_i n_i k_i g_i q_i p_i b_i m_i
# t_a d_a n_a k_a g_a q_a p_a b_a m_a
# t_u d_u n_u k_u g_u q_u p_u b_u m_u
# i_i j_a j_u_a j_u
# u_u w_a w_i_a w_i

and the slots for the romanized letters that we actually type out (a b d e g i j k m n p q t u w) are left blank. Contextual ligatures are then used to replace the sequence of input phonemes with an expanded sequence of intermediate initial, final, and transitional symbols, which are then finally substituted by the appropriate display symbols, which are in turn used to look up the correct alloglyphs. (A rough model of this substitution pipeline is sketched at the end of this section.) Then, if we update the boring straight-ruled glyph set with a slanted, more flowy-looking version, we can get a calligraphic font slightly reminiscent of Nastaliq, where lines can overlap each other because the ornamentation disambiguates; the Tjugem Tadpole script:



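As promised above, here is a rough Python model of the substitution pipeline. This is entirely my own illustration--in the actual font, the logic lives in OpenType contextual-ligature rules--and it is simplified: it ignores the word-initial and word-final symbols, the three-part glide symbols like w_a_k, and the schwa connectors.

    def display_symbols(word):
        """Expand romanized phonemes into transitional display-symbol names,
        one per adjacent pair, mirroring the two-pass substitution."""
        return [a + '_' + b for a, b in zip(word, word[1:])]

    print(display_symbols('mati'))  # ['m_a', 'a_t', 't_i']

Every adjacent phoneme pair names one of the display symbols in the list above, so a one-keystroke-per-phoneme input expands mechanically into the right sequence of alloglyphs.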
A Brief Note on John Wick

The actual Russian dialog in the John Wick movies is, uh... not great? But, the fact that John Wick is diegetically fluent in Russian ends up kicking off the plot of the first movie, when Russian gangster Iosef tries to buy John's car. Iosef asks how much, John says it ain't for sale, then, from the script:

                                              IOSEF
                         (in Russian, subtitled)
                     Everything's got a f[*****]g price.
                         
                                              JOHN
                         (in Russian, subtitled)
                     Maybe so... but I don't.

          Taken aback by John's fluency, he watches as John enters the
          vehicle, guns the engine, and drives off.

(Censored for sensitive eyes.)

However, that's not actually how it was filmed! The Russian dialog for that scene in the movie is as follows (or at least, my interpretation of it; the pronunciations are bad):

                                              IOSEF
                     У всего, сука, своя цена.
                         
                                              JOHN
                     А у этой суки нету.
This is closed-captioned as

                                              IOSEF
                     Everything's got a price, b[***]h.

                                              JOHN
                     Not this b[***]h.

That's not word-for-word, but it is essentially accurate. Given that Iosef did not expect John to understand him, we have to assume that his switch into Russian was expressing frustration to himself, even though it contains a vocative, clearly addressing the sentiment to John. Possibly, he was going to switch back into English to attempt another pitch, after reminding himself that everything has a price. And if that's what had happened, then this insertion of Russian dialog would've been just a bit of implicit character exposition, with a bit of an Easter Egg for a Russophone audience. But John responding at all suddenly changes the dynamic. That's also an implicit character exposition moment--we learn that John, despite being American, speaks Russian for some reason, which is further explicated later on. But in the scene, Iosef realizes that John must have understood him, and thus knows that Iosef was insulting him! That turns the outcome of the interaction into a face-threatening issue. Now, in addition to still wanting the car which John has denied him, Iosef has to back up the implied threat of his insult to save face.

The change in dialog from the script also adds a layer of double meaning, because John has his (female) dog with him in the car. Thus, Iosef could be interpreted as insulting the dog (which--spoiler alert--he later kills), which John has a strong emotional attachment to. (It turns out the Russian word for "female dog" has exactly the same insulting double-meaning that it does in English!) Out of context, John's reply could even be interpreted as claiming that his dog is not for sale, as opposed to his car--and both interpretations are true! The same cannot be said about Iosef's statement, but the oblique association is a nice addition to the scene as filmed.


Wednesday, October 9, 2024

Newtonian Mechanics in 4+1 Dimensions

In the higher-dimensional universe of the world of Ord, most of Newtonian mechanics generalizes to 4 spatial dimensions (and 5 total dimensions when you include time--hence the 4+1 in the title) just fine. 

$\mathbf{F} = m\mathbf{a}$ is still true when $\mathbf{F}$ and $\mathbf{a}$ are 4-component vectors instead of 3-component vectors, and so is $\mathbf{p} = m\mathbf{v}$ for linear momentum. Squaring vectors still produces scalar quantities, so $KE = \frac{1}{2}m\mathbf{v}^2$ is still true, and $KE = \mathbf{p}^2/2m$ still works just fine. Rotation occurs in a plane with some fixed center for all numbers of dimensions, so the formula for moment of inertia in a given plane, $I = \sum_i m_i r_i^2$, is also still valid.

But when it comes to angular momentum and torque, we've got a problem. $\mathbf{L} = \mathbf{r} \times \mathbf{p}$ and $\boldsymbol{\tau} = \mathbf{r} \times \mathbf{F}$ contain cross products, which only exist in exactly 3 dimensions. Usually, these are explained as creating a unique vector that is perpendicular to both of the inputs; but in less than 3D, there is no such vector, and in 4 dimensions or more, there is a whole plane (or more) of possible vectors. In reality, angular momentum and torque are not vectors--they are bivectors, oriented areas rather than oriented lines, which exist in any space of more than 1 dimension. It just happens that planes and lines are dual in 3D--for every plane, there is a normal vector, and for every vector there is a perpendicular plane, so we can explain the cross product as producing the normal vector to the plane of the bivector.

In 4D, you can't implicitly convert a bivector into its dual vector and back, so we have to deal with the bivectors directly. Bivectors are formed from the outer product or wedge product (denoted ∧) of two vectors, or the sum of two other bivectors. Thus, we can write the angular formulas for a point particle in any number of dimensions as $\mathbf{L} = \mathbf{r} \wedge \mathbf{p}$ and $\boldsymbol{\tau} = \mathbf{r} \wedge \mathbf{F}$.
And those are good for orbital angular momentum and torque about an external point on an arbitrary body as well. To get spin, we need a sum, or an integral, over all of the components of an extended body. That means we need to be able to sum bivectors! That's easy to do in 2D and 3D; in 2D, bivectors can be represented by a single number (their magnitude and sign), and we know how to add numbers; in 3D, as we saw, bivectors can be uniquely identified with their normal vectors, and we can add normal vectors. In either case, you always get a simple bivector (also called a blade) as a result; i.e., for any bivector in 2D or 3D space, you can find a pair of vectors whose wedge product is that bivector. But in 4 dimensions and above, that is no longer true. This is because, once you identify a plane in 4+ dimensions, there are still 2 or more dimensions left over in which you can specify a second, completely perpendicular plane which intersects the first at exactly one point (or at zero or one points in 5+ dimensions), and there is no set of two vectors that can span multiple planes. This also means that there can be two simultaneous independent rotations, with unrelated angular velocities, and the formulas for angular momentum and torque must be able to account for arbitrary complex bivector values. You could, of course, just represent sums of bivectors as... sums of bivectors, with plus signs in between them. But that's really inconvenient, and if you can't simplify sums of bivectors, then those formulas aren't very useful for predicting how an object will spin after a torque is applied to it!
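To make that concrete, here is a minimal numpy sketch (my own illustration; all the names are mine). It stores a 4D bivector as its antisymmetric matrix of components, builds wedge products, and uses the Pfaffian--which vanishes exactly for simple bivectors--to show that the sum of two independent wedges is not itself a single wedge:

    import numpy as np

    def wedge(u, v):
        """Wedge product of two 4-vectors, stored as an antisymmetric matrix."""
        return np.outer(u, v) - np.outer(v, u)

    def pfaffian4(B):
        """Pfaffian of a 4x4 antisymmetric matrix: zero iff B is simple."""
        return B[0, 1] * B[2, 3] - B[0, 2] * B[1, 3] + B[0, 3] * B[1, 2]

    e = np.eye(4)
    simple = wedge(e[0], e[1])                      # one rotation plane
    double = wedge(e[0], e[1]) + wedge(e[2], e[3])  # two independent planes
    print(pfaffian4(simple))  # 0.0 -> expressible as a single wedge
    print(pfaffian4(double))  # 1.0 -> no pair of vectors spans it

(The Pfaffian condition is just the component form of B∧B = 0, which characterizes simple bivectors in 4D.)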

Fortunately, even though the contributions of multiple not-necessarily-perpendicular and not-necessarily-parallel simple bivectors will not always simplify down to a single outer product, it turns out that in 4 dimensions, any bivector can be decomposed into the sum of two orthogonal simple bivectors--and most of the time, the result is unique. Unlike vector / bivector addition in 3D, this is not a simple process of just adding together the corresponding components, but there are fixed formulas for computing the two orthogonal components of any sum of two bivectors. They are complicated and gross, but at least they exist! So, we can, in fact, do physics!
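The closed-form decomposition formulas can also be sidestepped numerically. Here is a sketch of one way to do it (my own approach, not the formulas alluded to above): represent the bivector as an antisymmetric 4x4 matrix; such a matrix is normal, so its real Schur form is block-diagonal, and each 2x2 block is one of the orthogonal simple components. The only library call is scipy.linalg.schur:

    import numpy as np
    from scipy.linalg import schur

    def wedge(u, v):
        return np.outer(u, v) - np.outer(v, u)

    def decompose_bivector(B, tol=1e-12):
        """Split a 4D bivector (antisymmetric matrix) into orthogonal
        simple components, as (magnitude, basis vector, basis vector)."""
        T, Z = schur(B, output='real')  # B = Z @ T @ Z.T, with T block-diagonal
        parts = []
        i = 0
        while i < 4:
            if i + 1 < 4 and abs(T[i + 1, i]) > tol:  # found a 2x2 block
                parts.append((T[i, i + 1], Z[:, i], Z[:, i + 1]))
                i += 2
            else:
                i += 1
        return parts

    # Sanity check: the simple components recombine to the original bivector.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((4, 4))
    B = M - M.T  # a random bivector, generically non-simple
    parts = decompose_bivector(B)
    print(len(parts))  # 2
    print(np.allclose(B, sum(m * wedge(u, v) for m, u, v in parts)))  # True

For an isoclinic input, the two block magnitudes come out equal, and the basis that Schur happens to return is just one of the infinitely many valid decompositions.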

The result of bivector addition does not have a unique decomposition exactly when the two perpendicular rotations have exactly the same magnitude. This is known as isoclinic rotation. With isoclinic rotations, you can choose any pair of orthogonal planes you like as a decomposition. Once you pick a coordinate system to use, there are exactly 4 isoclinic rotations, depending on the signs of each of the two component bivectors. In isoclinic rotation, every point on the surface of a hypersphere follows an identical path, and there is no equivalent of an equator or pole. Meanwhile, simple rotation results in a circular equator, but also a circular pole--i.e., a circle of points that remain stationary as the body spins. That circle is also the equator for the second plane of rotation, so the ideas of "equator" and "pole" become effectively interchangeable for any object in non-isoclinic complex rotation. One plane's equator is the other plane's pole, and vice-versa.

Looking ahead a little bit to quantum mechanics, particle spin in 4D is still quantized, still inherent, and still divides particles into fermions and bosons--but it has two components, just like the angular momentum of a macroscopic 4D object. Whether a particle is a boson or a fermion depends on the sum of the magnitudes of the two components. If the sum is half-integer, the particle is a fermion. If the sum is integer, then it's a boson. Thus, bosons can (but need not necessarily) have isoclinic spins, and the weird feature of quantum mechanics that the spin is always aligned with the axis you measure it in would not be so weird, because that's the case for isoclinic rotation of macroscopic objects, too! Fermions, on the other hand, can never have isoclinic spins, because if one component has a half-integer magnitude, the other must not. In both cases, however, there end up being four possible spin states for all particles with complex spins, allowing fermions to pack more tightly than they do in our universe; 2 spin states (as in our universe) for particles with simple spins; and of course only a single spin state for particles with zero spin.
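That fermion/boson rule is compact enough to state as code; in this tiny sketch, the classification rule is the one given above, while the function and its name are my own:

    from fractions import Fraction

    def particle_type(s1, s2):
        """Classify a 4D particle by the sum of its two spin magnitudes."""
        total = abs(Fraction(s1)) + abs(Fraction(s2))
        return 'fermion' if total % 1 == Fraction(1, 2) else 'boson'

    print(particle_type('1/2', 0))      # fermion: components can never be equal
    print(particle_type('1/2', '1/2'))  # boson: equal components, isoclinic allowed
    print(particle_type(1, 0))          # boson with a simple (single-plane) spin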

Monday, August 12, 2024

Mapping out Tetrachromat Color Categories

Tetrachromacy is kind of convenient because you still have only 2 dimensions of hue, so you can actually diagram out what the color regions are, and just tell people "y'all already know how brightness and saturation work, so I don't need to put those on the chart".

    But I didn't actually try to make such a diagram. However, the last two episodes of George Corley's Tongues and Runes stream on Draconic gave me a solid motivation to figure out how to do it.

    Any such diagram will have to use some kind of false-color convention. We could try subdividing the spectrum to treat, e.g., yellow or cyan like a 4th physical primary for producing color combinations, and that might be the most accurate if you're trying to represent the color space of a tetrachromat whose total visual spectrum lies within ours, just divided up more finely--but the resulting diagrams are really hard to interpret. It's even worse if you try to stretch the human visible spectrum into the infrared or ultraviolet, 'cause you end up shifting colors around so that, e.g., what you would actually perceive as magenta ends up represented as green on the chart. The best option I could come up with was to map the "extra" spectral color--the color you can't see if it happens to be ultraviolet or infrared--to black, and use luminance to represent varying contributions of that cone to composite colors. Critically, if you don't want to work out the exact spectral response curves for a theoretical tetrachromatic creature to calculate their neurological opponent channels, you can map out the color space in purely physical terms, like we do with RGB color as opposed to, e.g., YCrCb or HSV color spaces. That doesn't require any ahead-of-time knowledge of which color combinations are psychologically salient.

    My first intuition on how to map out the 2D hue space was to arrange the axes along spectral hue--exactly parallel to the human sense of hue--and non-spectral hue, which essentially measures the distance between two simultaneous spectral stimuli. As the non-spectral hue gets larger, the space that you have to wiggle an interval back and forth before one end runs off the edge of the visible spectrum shrinks, so the space ends up looking like a triangle:

    This particular diagram was intended for describing the vision of RGBU tetrachromats, with black representing UV off the blue end of the spectrum; you could put black representing IR at the other end, but ultimately the perceivable spectrum ends up being cyclic, so it doesn't really matter. If you want the extra cone to be yellow- or cyan-receptive, though... eh, that gets complicated, and any false-color representation will be bad. But that highlights a general deficiency of this representation: it does a really bad job of showing which colors are adjacent at the boundaries. The top edge's spectrum is properly cyclic, but otherwise the edges don't match up, so you can't just roll this into a cone.
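    For concreteness, the parameterization behind that triangle can be written in a couple of lines; this is my own formalization of the two axes described above, with spectrum positions normalized to a 0-to-1 range:

        def triangle_coords(s1, s2):
            """Map two spectral stimuli (positions 0..1 along the spectrum)
            to (spectral hue, non-spectral hue) coordinates."""
            lo, hi = min(s1, s2), max(s1, s2)
            return ((lo + hi) / 2, hi - lo)  # midpoint of the pair, and its width

    As the non-spectral hue (the width) grows, the midpoint gets squeezed into a narrower range, which is exactly why the whole space comes out triangular.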

    Another possible representation is based on the triangle diagram of trichromat color space:

    Each physical primary goes at the corner of a simplex, and each point within the simplex is colored based on the relative distance from each corner. This shows you both hue, with the spectrum running along the exterior edges, and saturation, with minimal saturation (equal amount of all primaries) in the center. We can easily extend this idea to tetrachromacy, where the 4-point simplex is a tetrahedron:

    The two-dimensional hue space exists on the exterior surface and edges of the tetrahedron, with either saturation or luma mapped to the interior space. Note that one triangular face of the tetrahedron is the trichromat color triangle, but the center of that face no longer represents white. If we call the extra primary Q (so as not to bias the interpretation towards UV, IR, or anything else), then the center of the RGB face represents not white, but anti-Q, which we perceive as white, but which is distinct from white to a tetrachromat. This is precisely analogous to how the center of the dichromat spectrum is "white", but what a dichromat (whose spectral range is identical to ours) sees as white could be any of white, green, or magenta to us. Similarly, what we see as white could be actual 4-white, or anti-Q.
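    As a concrete version of the false-color convention from earlier, here is a small sketch--entirely my own illustration, with an arbitrary linear luma scaling--that maps a 4-primary stimulus to a displayable RGB swatch, rendering the Q contribution as darkness:

        def false_color(r, g, b, q):
            """Map an (R, G, B, Q) stimulus to display RGB, with Q as darkness."""
            total = r + g + b + q
            if total == 0:
                return (0.0, 0.0, 0.0)
            luma = 1.0 - q / total       # the more Q, the blacker the swatch
            peak = max(r, g, b)
            if peak == 0:
                return (0.0, 0.0, 0.0)   # pure Q: false-color black
            return tuple(luma * c / peak for c in (r, g, b))

        print(false_color(1, 0, 0, 0))  # pure R -> (1.0, 0.0, 0.0)
        print(false_color(0, 0, 0, 1))  # pure Q -> false-color black
        print(false_color(1, 1, 1, 0))  # anti-Q -> full white
        print(false_color(1, 1, 1, 1))  # true 4-white -> a darker gray

    Note that anti-Q and 4-white come out different, which is the whole point of the convention.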

    Since the surface of a tetrahedron is still 2D, we can unfold the tetrahedron into another flat triangle:

    Here, it is unfolded around the RGB face, but that is arbitrary--it could equally well be unfolded around any other face, with a tertiary anti-color in the center, and that would make no difference to a tetrachromat, just as spinning a color wheel makes no difference to you. Note that, after unfolding, the Q vertex is represented three times, and every edge color is represented twice--mirrored along the unfolded edges. This becomes slightly more obvious if we discretize the diagram:

    Primary colors go at the vertices, secondary colors along the edges, and tertiary colors (which don't exist in trichromat vision) on the faces. This arrangement, despite the duplications, makes it very easy to put specific labels on distinct regions of the space--although the particular manner in which the color space is divided up is somewhat artificial. And the duplications actually help to show what's going on with the unfolded faces--yes, the Q vertex shows up three times, but note that the total area of the discretized region around the Q vertex is exactly the same size as the area around the R, G, and B vertices.

    If we return to the trichromat triangle, note that you can obtain a color wheel simply by warping it into a circle; the spectrum of fully-saturated hues runs along the outside edge either way. Similarly, we can "inflate" the tetrahedron to get a color ball.

    If we want it flattened out again, any old map projection will do, but we have to keep in mind that the choice of poles is arbitrary; here's the cylindrical projection along the Q-anti-Q axis:

    And here's a polar projection centered on anti-Q:

    This ends up looking quite a lot like a standard color wheel, just extended past full saturation to show darkening as well as lightening; note the fully saturated ring at half the radius. However, the interpretation is quite different; remember, that center color isn't actually white. True tetrachromat white exists at the center of the ball, and doesn't show up on this diagram. And the false-color black around the edge isn't just background, it's the Q pole. If you need extra help to get your brain out of the rut of looking at this as a trichromat wheel, we can look at 7 other equally-valid polar projections that show exactly the same tetrachromatic hue information:

The Q pole.
The B pole.
The anti-B pole.
The R pole.
The anti-R pole.
The G pole.
The anti-G pole.
(I probably should've done some scaling for equal area on these; the opposite poles end up looking like they take up way more of the color gamut than they actually do, and the false-color-black Q pole ends up getting washed out as a result. But I don't really expect anybody to use these alternate projections for labelling regions of hue--they're just to help you understand that the space really is a sphere, not a wheel!)

    And we could produce alternately-oriented cylindrical projections as well, if we wanted to.

    Of course, the full tetrachromat color space still contains two more whole dimensions--saturation and luminosity. But those work exactly the same way as they do for trichromats. Thus, if you want to create separate named color categories for tetrachromatic equivalents of, say, brown (dark orange) or pink (light red), you can still place them on the map by identifying the relevant range of hues and then just adding a note to say, e.g., "this region is called X when saturated, but Y when desaturated".

    Now, go forth and create language for non-human speakers with appropriate lexical structure in color terms!

Friday, August 9, 2024

Some More Thoughts on Toki Pona

What the heck is Toki Pona?

After publishing my last short article, several people expressed interest in a deeper analysis of various aspects of toki pona--among them, Sai forwarding me a request from jan Sonja for one conlanger's opinion about how to categorize toki pona. So, I shall attempt to give that opinion here.

The Gnoli Triangle, devised by Claudio Gnoli in 1997, remains the most common way to classify conlangs into broad categories.


Within each of these three categories are numerous more specific classifications, but broadly speaking we can define each one as follows based on the goals behind a conlang's construction:

Artlang: A language devised for personal pleasure or to fulfill an aesthetic effect.

Engelang: A language devised to meet specific objective design criteria, often in order to test some hypothesis about how language does or can work.

Auxlang: A language devised to facilitate communication between people who otherwise do not share a common natural language. Distinct from a "lingua franca", a language which actually does function to facilitate communication between large groups of people without a native language in common.

Any given language can have aspects of all three of these potential categorizations. But, to figure out where in the triangle toki pona should fit, we need to know the motivations behind its creation.

To that end, I quote from the preface of Toki Pona: The Language of Good:

Toki Pona was my philosophical attempt to understand the meaning of life in 120 words. 

Through a process of soul-searching, comparative linguistics, and playfulness, I designed a simple communication system to simplify my thoughts.

I first published [Toki Pona] on the web in 2001. A small community of Toki Pona fans emerged.

In relation to the third point, in private communication jan Sonja confirmed that she never actively tried to get other people to use it. The community just grew organically. Even though the phonology was intentionally designed to be "easy for everyone", that tells me that the defining motivation behind toki pona was not that of an auxlang. In practice, it does sometimes serve as a lingua franca, but it wasn't designed with the intention of filling that role. It was designed to help simplify thoughts for the individual. Therefore, we can conclude that toki pona does not belong in the auxlang corner, or somewhere in the middle. A proper classification will be somewhere along the engelang-artlang edge--what I am inclined to call an "architected language" or "archlang" (although that particular term has been slow to catch on in anyone's usage but my own!).

So, what are the design criteria behind toki pona? Referring again to The Language of Good, toki pona was intended to be minimalist, using the "simplest and fewest parts to create the maximum effect". Additionally, "training your mind to think in Toki Pona" is supposed to promote mindfulness and lead to deeper insights about life and existence.

Toki Pona is also described as a "philosophical attempt"; can it then be classed as a "philosophical language"? I referred to it as such in my last post, and I think yes; it is, after all, the go-to example of a philosophical language on the Philosophical language Wikipedia page! The term "philosophical language" is sometimes used interchangeably with "taxonomic language", where the vocabulary encodes some classification scheme for the world, as in John Wilkins's Real Character, but more broadly a philosophical language is a type of engineered language designed from a limited set of first principles, typically employing a limited set of elemental morphemes (or "semantic primes"). Toki Pona absolutely fits that mold--which means it can be legitimately classed as an engelang as well.

However, Toki Pona was clearly not constructed entirely mechanistically. It came from a process of soul-searching and playfulness, and encodes something of Sonja's own sense of aesthetics in the phonology. Ergo, it is clearly also an artlang. Exactly where along that edge it belongs--what percentage of engelang vs. artlang it is--is really something that only jan Sonja can know, given these categorial definitions which depend primarily on motivations. But I for one am quite happy to bring it in to the "archlang" family.

To cement the artlang classification, I'll return to the "minor complexities" I mentioned in the last article. To start with, what's up with "li"? It is supposed to be the predicate marker, but you don't use it if the subject is "mi" or "sina"... yet you do for "ona", so it's clearly not a simple matter of "pronoun subjects don't need 'li'". But, if we imagine a fictional history for toki pona, it makes perfect sense. There is, after all, a fairly common historical process by which third person pronouns or demonstratives transform into copulas in languages that previously had a null copula. (This process is currently underway in modern Russian, for example.) So, suppose we had "mi, sina, li" as the "original" pronouns; "li", in addition to its normal referential function, ends up getting used in cleft constructions with 3rd person subjects to clarify the boundary between subject and predicate in null-copula constructions. Eventually, it gets re-analyzed as the copula, except when "mi" and "sina" are used, because they never required cleft-clarification anyway (and couldn't have used it if they did, because of person disagreement), and a new third-person pronoun is innovated to replace it--which, being new, doesn't inherit the historical patterning of "mi" and "sina", so you get a naturalistic-looking irregularity.

Or, take the case of "en". It seems fairly transparently derived from "and", and that is one of its glosses in The Toki Pona Dictionary, based on actual community usage, but according to The Language of Good it does not mean "and"--it just means "this is an additional subject of the same clause". Toki Pona doesn't really need a word for "and"; clauses can just be juxtaposed, and the particle "e" makes it clear where an object phrase starts, so you can just chain as many of those together as you want with no explicit conjunction. So, we just need a way to indicate the boundary between multiple different subject phrases. You could interpret that as just a kind of marked nominative case--except you don't use it when there's only one subject. It's this weird extra thing that solves a niche edge case in the basic grammar. A strictly engineering-focused language might've just gone with an unambiguous marked nominative, or an explicit conjunction, but Toki Pona doesn't. It's more complicated, in terms of how the grammatical rules are specified, than it strictly needs to be.

And then, we've got the issue of numerals. All numerals follow the nouns which they apply to, whatever their function--but that means an extra particle must be introduced into the lexicon to distinguish cardinal numerals (how many?) from ordinal numerals (which one?). That is an unnecessary addition which makes the lexicon not-strictly-minimalist. The existing semantics of noun juxtaposition within a phrase make it possible to borrow the kind of construction we see in, e.g., Hawai'ian, where using a numeral as the head of a noun phrase forces a cardinal interpretation (something like "a unit of banana", "a pair of shoes", "a trio of people", etc.), while postposing a numeral in attributive position forces an ordinal interpretation ("banana first", "shoe second", "person third"). But Toki Pona doesn't do that!

Finally, as discussed previously, the lexicon is not optimized. These are all expressions of unforced character--i.e., artistic choice.

But what if Toki Pona were an auxlang? How would it be different?

Well, first off, we'd fix those previous complexities. At minimum, introduce an unambiguous marked nominative (which also helps with identifying clause boundaries), unify the behavior of pronouns and the copula / predicate marker, and get rid of the unnecessary ordinal particle. Then, we look at re-structuring the vocabulary. I collected a corpus of Toki Pona texts, removed all punctuation, filtered for only the 137 "essential words", and ended up with a set of 585,888 tokens from which to derive frequency data. Based on this data set, 7 of the "essential words" appear zero times... which really makes them seem not that essential, and argues for cutting down the word list to an even 130. (Congratulations to jan Sonja for getting so close to the mark with the earlier choice of 120!) There are 72 two-syllable words that occur "too infrequently"--in the sense that there are three-syllable words that occur more frequently, and so should've been assigned shorter forms first. And similarly, there are 23 one-syllable words which are too infrequent compared to the two-syllable words. Honestly, predicting what these frequency distributions ought to be is really freakin' hard, so jan Sonja can't be blamed for these word-length incongruities even if she had been trying to construct a phonologically-optimized auxlang, but now we have the data from Toki Pona itself, so we could do better! Design a phonology, enumerate all of the possible word forms in order of increasing complexity, and then assign them to meanings according to the empirical frequency list!
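That procedure is simple enough to sketch; in this illustration of mine, the token list, the essential-word set, and the pre-sorted form list are placeholders rather than a real tool:

    from collections import Counter

    def rank_meanings(tokens, essential_words):
        """Frequency-rank the essential vocabulary over a corpus token list."""
        freq = Counter(t for t in tokens if t in essential_words)
        return [word for word, count in freq.most_common() if count > 0]

    def assign_forms(ranked_meanings, candidate_forms):
        """Give the shortest available form to the most frequent meaning.
        candidate_forms must be pre-sorted by increasing complexity."""
        return dict(zip(ranked_meanings, candidate_forms))

Words that never occur in the corpus simply drop off the end of the ranking, which is exactly how the 7 unused "essential words" would get cut.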

For that, of course, we need to define a new phonology. It needs to produce at least 129 (remember, we're dropping the ordinal particle) words of three syllables or less, but no more than that. Based on picking the most cross-linguistically common segments according to Phoible data, we can go with the following inventory:

i,a,u
n (/m), p, k, w (/v)

With a strict syllable structure of CV, that produces 12 monosyllables and 144 disyllables.
Cutting out w/v gives us 9 monosyllables and 81 disyllables--not enough to squish everything into two syllables or less. But there are 729 trisyllables--way more than we need! So, we could cut it down even more... But, that gets at a hard-to-quantify issue: usability. Aesthetics, it turns out, can be an engineering concern when engineering for maximal cross-cultural auxlang usability! Too few phonemes, and the language gets samey and hard to parse. Toki Pona as it is seems to hit a sweet spot in having some less-common phonemes, but sounding pretty good--good enough to naturally attract a speaker community. If I were doing this for real, I'd probably not just look at individual segments, but instead comb through Phoible for the features that are most cross-linguistically common, and try to design a maximally-large universally-pronounceable inventory of allophone sets based on that to give variety to the minimal set of words. But if we accept the numbers of phonemes, and accept their actual values as provisional, what happens if we enumerate words while also eliminating minimal pairs?

Well, then we get a maximum of 3 monosyllables (re-using any vowel would produce a minimal pair), well under a hundred disyllables, but plenty of trisyllables. It would be nice to not do worse than Toki Pona in the average word length, though, which means we probably need 118 monosyllables + disyllables--we can get that pretty easily by relaxing the word-difference constraints such that we can have minimal pairs between, e.g., /n/ and /k/, which are extremely unlikely to be confused. Or, we just go up to 5 consonants instead of 4, probably adding in something like j (/l).
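Here is a quick sketch of that enumeration. It is my own illustration, and it uses a greedy shortest-first pass, which is just one choice among many--other selection orders keep different words and give somewhat different counts:

    from itertools import product

    CONSONANTS = ['n', 'p', 'k']  # the w/v-less inventory from above
    VOWELS = ['i', 'a', 'u']
    SYLLABLES = [c + v for c, v in product(CONSONANTS, VOWELS)]

    def is_minimal_pair(a, b):
        """Two words of equal length differing in exactly one segment."""
        return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

    def enumerate_lexicon(max_syllables=3):
        """Greedily keep words, shortest first, skipping any word that would
        form a minimal pair with a word already kept."""
        kept = []
        for n in range(1, max_syllables + 1):
            for word in map(''.join, product(SYLLABLES, repeat=n)):
                if not any(is_minimal_pair(word, k) for k in kept):
                    kept.append(word)
        return kept

    lexicon = enumerate_lexicon()
    print(sum(len(w) == 2 for w in lexicon))  # 3 monosyllables survive
    print(sum(len(w) == 4 for w in lexicon))  # a few dozen disyllables

Run as-is, this comes in well under the 118 short forms we wanted, which is why relaxing the contrast constraints or adding a fifth consonant looks attractive.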

I'm still not super inclined to add to the mountain of failed auxlangs and tokiponidos in the world... but, that's the process I would use to properly engineer an optimal auxlang-alternate to Toki Pona.


Friday, August 2, 2024

Some Thoughts on Toki Pona

Toki Pona is a minimalist philosophical artistic language, not an auxlang. Nevertheless, it has attracted a fairly large and international community of users--enough so that it was possible for Sonja Lang to publish a descriptive book on natural usage of Toki Pona (The Toki Pona Dictionary)! Thus, while this should in no way be seen as a criticism in the negative sense of Sonja's creation, it seems fair to critique Toki Pona on how optimized its design is as an auxlang.

Toki Pona has 92 valid syllables, composed of 9 consonants and 5 vowels. Accounting for disallowed clusters at syllable boundaries, this results in 7519 possible 2-syllable words--far, far more than any accounting of the size of Toki Pona's non-proper-noun vocabulary, which does not surpass 200 words. In developing the toki suli whistle register, I discovered that some phonemes can be merged without any loss of lexical fidelity--so even if we wanted to add additional restrictions, like spreading out words in phonological space to eliminate minimal pairs, or ensuring that the language was uniquely segmentable, the phonetic inventory and phonotactic rules are clearly larger and more permissive than they strictly need to be. And a smaller phonemic inventory and stricter phonotactics would theoretically make it trivially pronounceable by a larger number of people. For example, we could reduce it to a 3-vowel system (/a i u/), eliminate /t/ (merging it with either /k/ or /s/), and merge /l/ and /j/. More careful consideration in building a system from scratch, rather than trying to pare away at Toki Pona's existing system, could minimize things even further, but if we start there, and require that all syllables be strictly CV, then we get 7x3 = 21 valid syllables and 441 valid 2-syllable words. We could rebuild a lexicon on top of that with no minimal pairs and unique segmentation just fine, or choose to make the phonemic inventory even smaller--all while still reducing the average Toki Pona word length, since the current vocabulary does include a few trisyllabic words!
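The arithmetic for that reduced system is easy to check; in this sketch, the post-merger consonant list is my own reading of the mergers just described:

    from itertools import product

    # Toki Pona's 9 consonants, minus /t/, with /l/ and /j/ merged -> 7 left
    CONSONANTS = ['p', 'k', 's', 'm', 'n', 'l', 'w']
    VOWELS = ['a', 'i', 'u']

    syllables = [c + v for c, v in product(CONSONANTS, VOWELS)]
    print(len(syllables))        # 21 strictly-CV syllables
    print(len(syllables) ** 2)   # 441 possible 2-syllable words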

The grammar, on the other hand, I really have no complaints about. It is not quite as simple as it could be (e.g., li could be made always obligatory, rather than obligatory-unless-the-subject-is-mi-or-sina), but it's really quite good--and the minor complexities actually help add to its charm as an artlang.

I am not much inclined to actually construct a phonologically-optimized relex of Toki Pona, as what would be the use? But it is fun to imagine an alternate history in which Toki Pona was designed from the outset with usage as an auxlang in mind. Would it actually have become as successful as it is, had Sonja taken that route? Perhaps we need to consider another contributor to Toki Pona's popularity--Sonja's specific phonological aesthetic. As mathematically sub-optimal as it is, Toki Pona sounds nice. Would it still have become popular if its sounds were instead fully min-maxed for optimal intercultural pronounceability, length, and distinctiveness? Maybe I'll build a Toki Pona relex after all, just to see if it can be made to sound pretty....
