Tuesday, March 19, 2024

Human Actors Shouldn't Be Able to Speak Alien Languages

Isn't it a little weird that humans can speak Na'vi? Or that aliens can learn to speak English? Or, heck, Klingon! The Klingon language is weird, but every single one of its sounds is used in human languages.

Of course, there's an obvious non-diegetic reason for that. The aliens are played by human actors. Actors wanna act. Directors want actors to act. It's less fun if all of your dialog is synthesized by the sound department. But while it is an understandable and accepted trope, we shouldn't mistake it for representing a plausible reality.

First, aliens might not even use sound to communicate! Sound is a very good medium for communication--most macroscopic animals on Earth make use of it to some extent. But there are other options: electricity, signs, touch, light, color and patterning, chemicals. Obviously, a human actor will not, without assistance, be able to pronounce a language encoded in changing patterns of chromatophores in skin, nor would a creature that spoke that language have much hope of replicating human speech. But since sound is a good and common medium of communication, let's just consider aliens that do encode language in sound.

The argument was recently presented to me that aliens should be able to speak human languages, and vice-versa, due to convergent evolution. An intelligent tool-using species must have certain physical characteristics to gain intelligence and use tools, therefore... I, for one, don't buy the argument that this means humanoid aliens are likely to start with, but supposing we do: does being humanoid in shape imply having a human-like vocal tract, or a vocal tract capable of making human-like noises? I propose that it does not. For one thing, even our closest relatives, the various great apes, cannot reproduce our sounds, and we can only do poor approximations of theirs. Their mouths are different shapes, their throats are different shapes, they have different resonances and constriction points. We have attempted to teach apes sign languages not just because they lack the neurological control to produce the variety of speech sounds that we do, but also because the sounds they can produce aren't the right ones anyway. Other, less-closely-related animals have even more divergent vocal tracts, and there is no particular reason to think they would converge on a human-like sound-producing apparatus if any of them evolved to be more externally human-like. We can safely assume that creatures from an entirely different planet would be even less similar to us in fine anatomic detail. So, Jake Sully should not be able to speak Na'vi in his human body, and should not be able to speak English in his avatar body--yet we see Na'vi speaking English and humans speaking Na'vi all the time in those movies.

And that's just considering creatures that make sounds in essentially the same way that we do: by using the lungs to force air through vibrating and resonant structures connected with the mouth and nose. Not all creatures that produce sound do so with their breath, and not all creatures that produce sound with their breath breathe through structures in their heads! Intriguingly, cetaceans and aliens from 40 Eridani produce sound by moving air through vibrating structures between internal reservoirs, rather than while inhaling or exhaling--they're using air moving through structures in their heads, but not breath!

Hissing cockroaches make noise by expelling air from their spiracles. Arguably, this should be the basis for Na'vi speech as well: nearly all of the other animals on Pandora breathe through holes in their chests, with no obvious connection between the mouth and lungs. They also generally have six limbs and multiple sets of eyes. Wouldn't it have been cooler to see humanoid aliens with those features, and a language to match? But, no; James Cameron inserted a brief shot of a monkey-like creature with partially-fused limbs, no opercula, and a single set of eyes to provide a half-way-there justification for the evolution of Na'vi people who are just like humans, actually.

Many animals produce sound by stridulation. No airflow required. Cicadas use a different mechanism to produce their extremely loud songs: they have structures called tymbals which are crossed by stiff ribs; flexing muscles attached to the tymbals causes the ribs to pop, and the rest of the structure to vibrate. It's essentially the same mechanism that makes sound when you stretch or compress a bendy straw (or, as Wikipedia calls them, straws with "an adjustable-angle bellows segment"). This sound is amplified and adjusted by passage through resonant chambers in the insects' abdomens. Some animals use percussion on the ground to produce sounds for communication. Any of these mechanisms could be recruited by a highly intelligent species as a means of producing language, without demanding any deviation from an essentially-humanoid body plan.

There is, of course, one significant exception: birds have a much more flexible sound-production apparatus than mammals, and some of them are capable of reproducing human-like sounds, even though they do it by a completely different mechanism (but it does still involve expelling air from the lungs through the mouth and nose!). Lyrebirds in particular seem to have the physiological capacity to mimic just about anything... but the extent to which they choose to imitate unnatural or human sounds is limited. Parrots and corvids are known to specifically imitate human speech, but they do so with a distinct accent; their words are recognizable, but they do not sound like humans. And amongst themselves, they do not make use of those sounds. Conversely, intraspecific communication among birds tends to make use of much simpler sound patterns, many of which humans can imitate, about as well as birds can imitate us, by whistling. So, sure, some aliens may be able to replicate human speech--but they should have an accent, and if their sound production systems are sufficiently flexible to produce our sounds by different means, there is no reason they should choose to restrict themselves to human-usable sounds in their own languages. Similarly, humans may be able to reproduce some alien languages, but they will not sound like human languages--and when's the last time you heard a human actor in alien makeup whistling? (Despite the fact that this is a legitimate form of human communication as well!)

The most flexible vocal apparatus of all would be something that mimics the action of an electronic speaker: directly moving a membrane through muscular action to reproduce any arbitrary waveform. As just discussed, birds come pretty close to capturing this ability, but they aren't quite there. There are a few animals that produce noise whose waveform is directly controlled by muscular oscillation of a membrane, but they are very small: consider bees and mosquitoes, whose buzzing is the result of their rapid wing motions (or, in the case of bumblebees, muscular vibrations of the thorax). Hummingbirds are much bigger than those insects, and they can actually beat their wings fast enough to create audible buzzing sounds (hence, I assume, the name "humming"bird), but they are still pretty small animals. And despite these examples of muscle-driven buzzing, it seems rather unlikely that a biological entity--or at least, one which works at all similarly to us--could have the muscular response speed and neurological control capabilities to replicate the complex waveforms of human speech through that kind of mechanism. But if they did (say, like the Tines from Vernor Vinge's A Fire Upon the Deep), just like parrots and crows, why would their native communication systems happen to use any sounds that were natural for humans?

Now, some people might argue with my assertion that "any of these mechanisms could be recruited... as a means of producing language". That doesn't really impinge on my more basic point that an alien language should not reasonably be expected to be compatible with the human vocal apparatus, but let's go ahead and back up the assertion anyway. Suppose a certain creature's sound-production apparatus isn't even flexible enough to reproduce the kinds of distinctions humans use in whistled speech, based on modulating pitch and amplitude (which cicadas certainly can). Suppose, in fact, that it can produce only four distinct sounds. That should be doable by anybody that can produce sound at all--heck, there are more than 4 ways of clapping your hands. With 2 consecutive sounds, you can produce 16 distinct words. If you allow 3, it goes up to 80 words. At a word length of 4 or less, you've got 336 possible words. So far, that doesn't sound like very much. But then, there are 1360 possible words of length 5 or less, and 5456 of length 6 or less. At a length of 7, you get 21,840 possible words--comparable to the average vocabulary of an adult English speaker. The average length of English words is a little less than 5 letters, and we frequently (10 letters) use words that are longer than 7 letters, so needing to go up to 7 to fit your entire adult vocabulary isn't too bad. And that's before we even consider the ability to use homophones to compress the number of distinct words needed! So: we might argue about exactly how many words are needed for a fully-functional language with equivalent expressive power to anything humans use, but through the power of combinatorics, even small numbers of basic phonetic segments can produce huge numbers of possible words--indisputably more than any number we might come up with as a minimum requirement. A language with only four sounds might be difficult for humans to use, as it would seem repetitive and difficult to segment... but we're talking about aliens here. If 4 sounds is all their bodies have to work with, their brains would simply specialize to efficiently process those specific types of speech sounds, just as our brains specialize for our speech sounds.
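The combinatorics above are easy to verify for yourself. Here's a quick sketch in Python, following the counting convention used above (summing the words of length 2 up through a maximum length):

```python
# Count the distinct possible "words" in a language with a small
# sound inventory, summing words of length 2 up through max_len.
def word_count(sounds: int, max_len: int) -> int:
    return sum(sounds ** k for k in range(2, max_len + 1))

# With four sounds: length <= 4 gives 336 words, <= 5 gives 1360,
# <= 6 gives 5456, and <= 7 gives 21,840--matching the figures above.
for n in range(2, 8):
    print(n, word_count(4, n))
```

The totals grow geometrically, which is the whole point: each extra unit of word length multiplies the available space by the size of the sound inventory.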

Now, to be clear, this is not intended to disparage any conlanger who's making a language for aliens and using human-compatible IPA sounds to do so. It's an established trope! And even if it's not ever used in a film or audio drama, it can be fun. There are plenty of awesome, beautiful examples of conlangs of this type, and there's no inherent problem with making more if that's what you want to do. Y'all do what you want. But we should not mistake adherence to the trope for real-world plausibility! And it would be great to see more Truly Alien Languages out there.

Saturday, March 2, 2024

On Mantis Shrimp, Butterflies, & Frogs

Previously, I discussed how to conceptualize the experience of organisms with different dimensionalities of their color spaces, along with a few other effects like the varying color sensitivity across different parts of the retina as seen in rabbits. But, as hinted at by the mention of Mantis shrimp at the end, multichromatic visual systems can actually get a lot weirder than that.

Even humans, and in fact most vertebrate animals you are likely to be familiar with, actually have a more complex visual system than our 3-dimensional color space implies. After all, we have four different types of light sensing cells in our retinas, and yet we do not have tetrachromatic vision! What is that fourth type--the rods--doing? Most readers will probably already know that rod cells are what give us low-light vision. Most of the time there is very little, if any, interaction between rod cells and our three types of cones: either it is so bright that the rods are completely bleached, and we only get visual information from cones (known as photopic vision); or, it's too dark for cones to respond at all, and we only get visual information from rods (known as scotopic vision). In those sorts of low-light situations, humans become monochromats--we physiologically cannot see color in dim light! Our brains, however, are very good at lying to us, and filling in the colors that we know things should be. Unless, perhaps, you are a small child who does not yet have a whole lot of experience with what colors things should be--a situation which once led to an adorable experience with my oldest child when he was very small. Once, when he had woken up early in the morning, I found him playing in his bedroom with a pile of balls, sorting them into "black ball!" and "grey ball!"; and then, when I turned on the light in the bedroom he gasped and said "Oh! Color!"

Incidentally, it is possible for each of these parallel visual systems to fail independently, mostly due to genetic conditions that inactivate either rods or cones. Humans lacking cone cells are rod monochromats and experience day blindness; humans lacking functional rod cells are nyctalopic and experience night blindness. And while most mammals are at least dichromats, armadillos, anteaters, and tree sloths are all rod monochromats, as are 90% of deep-sea fish species.

In between these extremes, there is a range of mesopic vision, where both rods and cones have significant activity, and color perception gradually shifts as light levels get progressively darker or lighter. At no point do we incorporate rod cell data into the opponent process to get tetrachromatic vision, though; it's essentially used to augment luminosity information when cone cells start to struggle, causing a shift in the spectral sensitivity peak and altering apparent saturations.
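To make the idea of rod signals augmenting luminosity concrete, here's a toy sketch. To be clear, the blending function and all the numbers are invented purely for illustration--real mesopic photometry models are far more involved:

```python
# Toy sketch of mesopic vision: perceived luminance as a blend of
# cone and rod signals, with the rod contribution fading in as the
# light level drops. The linear blend is invented for illustration
# only; it is NOT a physiological model.
def mesopic_luminance(cone_signal: float, rod_signal: float,
                      light_level: float) -> float:
    # w -> 1 in bright (photopic) light, w -> 0 in dim (scotopic) light
    w = min(1.0, max(0.0, light_level))
    return w * cone_signal + (1.0 - w) * rod_signal
```

The point of the sketch is just the shape of the system: the rod channel never adds a color dimension, it only shifts where the luminance information comes from as the light fades.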

Not all vertebrates handle dark vision in the same way, though, or have the same rod or cone sensitivity limits. Birds transition into scotopic vision at much brighter illumination levels than we do, as their color vision is sharpened by oil droplets that cut out noise at the tails of cone cell receptivity--but that also means that they waste more light, and so need more light to see color. Meanwhile, although most nocturnal vertebrates rely heavily on rods and tend to reduce their color perception or lose it entirely (which is why so many mammals are dichromats, and as mentioned above some are even rod monochromats--we lost tetrachromacy when our ancestors were nocturnal, and occasionally re-evolved trichromacy in more recent eons), nocturnal geckos don't have rods at all--they rely entirely on cone cells, which have simply evolved to be more sensitive than ours, and so retain a constant sense of color perception across their entire perceptive range of luminosity. (This is probably because they evolved from diurnal ancestors who had already lost their rods, as most diurnal lizards and some snakes have.) In fancy terms, they have simplex retinas (containing receptors for a single integrated visual system), while we have duplex retinas (containing receptors for two parallel visual systems). Hawkmoths and nocturnal bees have parallel adaptations, with altered ommatidium geometry that improves light concentration onto individual receptors, for simplex trichromatic low-light vision. But that's less weird and complicated than humans--what about more weird?

Toads and frogs, it turns out, have multiple types of rod cells, which are sensitive to even lower levels of light than human rods are. Which means they have genuine dichromatic vision in situations that would seem to us pitch black! In theory, there could be creatures that integrate their visual experiences across different light levels, using multiple rod types so that the brain has to lie less about what colors things should be in the daylight--but amphibians don't do that! Neither do they have a single 5-dimensional color space--rather, they have two completely independent color spaces, one dichromatic and one trichromatic, overlapping the same frequency range, which could in the correct conditions be perceived at the same time, but generally show up in different environments and are used for different purposes. Frogs and toads use their cone-based vision to identify food and mates, but they use their rod-based vision exclusively for navigation, with dichromacy allowing them to better distinguish directions based on different colors of light sources (incidentally, they prefer to jump towards high-frequency sources in light conditions, and towards lower-frequency sources in dark conditions).

Now, Mantis shrimp provide the most famous example of optical complexity, but plenty of arthropods have large numbers of opsin types. Even daphnia, or water fleas, which don't even have image-forming eyes, are tetrachromatic in their ability to respond to the colors of light sources! Does this mean that butterflies with 8 receptor types are octachromats, with a 6-dimensional hue space? Well, no, for the same reason that frogs aren't pentachromats. Like dichromatic rabbits, creatures with large numbers of photoreceptor types tend to have them localized in different parts of the visual field, to serve different purposes, and the signals are not neurologically combined to form a single coherent color space. Papilio butterflies, for example, which do have 8 different photoreceptor types, behave like tetrachromats when identifying flowers as food sources, but behave like dichromats (despite using 3 receptor types to form the relevant dichromatic retinal signals!) when selecting leaves for egg-laying. This kind of behavior-specific segmentation of visual systems means that in some species, different sexes actually have completely different visual systems, because they need them for different reproductive tasks! Which suggests some interesting sci-fi possibilities. And while daphnia are individually tetrachromatic, they have genes for many more than just 4 opsin types. If different sets of opsin genes were expressed in different individuals, the philosophical question "is what I call red really the same as what you call red?" would have an objectively-verifiable answer, as every different morph of the species (whether segmented by sex or caste or random variation) would have different color perceptions.

That brings us to the Mantis shrimp. With 12 different spectral receptor types, they could be doing a multiple-parallel-colorspace thing, like frogs and butterflies do. But... they aren't. As mentioned in that previous post, Mantis shrimp don't actually have particularly high spectral resolution, and they don't have the neural architecture to construct decorrelated opponent channels to produce a single perceptual color space. Instead, their large number of receptor types seems to exist to avoid the need for that kind of complex neural architecture! The Mantis shrimp visual system is built for speed and efficiency. Because of the spatial distribution of different receptor types into bands across their compound eyes, getting a full spectral profile on any given object requires mechanical scanning, which is relatively slow, but metabolically cheap; and wherever a given object falls in the visual field, determining whether or not it matches the spectral sensitivity of that region is instantaneous.

If Mantis shrimp were conscious, we might imagine their experience of color as being more analogous to our own perceptions of sound or taste. Mantis shrimp don't recognize abstract colors--they recognize specific fuzzy spectral patterns. Similarly, we have thousands of auditory hair cells that each respond to a specific frequency, but we don't uniformly group them into a kilodimensional "sonic color" space--we can selectively identify individual frequencies overlaid, or recognize particular spectral patterns of timbres and specific known source types. Taste and smell are similar; we have more than 400 types of olfactory receptors and at least 5 taste receptors, but we don't have a 405-dimensional experience of taste and smell (in fact, we don't know what the neurological dimensionality of human chemoreception is; to date, there is no model that can predict olfactory sensation from receptor activations). Instead, we can pick out individual receptor channels that are useful for specific purposes (sour helps us identify acids; bitter helps us identify poisons; salty helps us identify, well... salt; sweet helps us identify carbohydrates; and umami helps us identify proteins), and we can recognize specific fuzzy patterns that form the chemical signature of specific source types. For a good long time, western philosophy held that smell was "ineffable", and impossible to describe in language through any means other than "smells like a specific thing"; that turns out to be a symptom of western philosophers just not being bothered to try, though, and in fact there are many languages around the world which have generic olfactory terms disconnected from a specific source just as we have generic color terms. Statistical analysis of those languages' vocabularies suggests that humans actually conceive of smells arranged in a two-or-maybe-three-dimensional space, where the major axes are "edible vs. non-edible" and "pleasant vs. unpleasant" (or "dangerous vs. safe"). Thus, durian is unpleasant but edible, ammonia is unpleasant and inedible (and dangerous), flowers are (generally) pleasant but inedible, and fruits are pleasant and edible. Languages which have generic olfactory terms generally have 12-16 of them--similar to the maximum number of basic color terms found in human languages.

So, a conscious alien species with Mantis-shrimp-like vision, or even a large number of parallel multidimensional color systems like butterflies have, might experience their spectral perceptions not in an analogous manner to our experience of color, but collapsed down into a small number of behaviorally-relevant dimensions. Is this the spectral pattern of something I can eat? Is this the spectral pattern of something dangerous? Is this the spectral pattern of something useful to me? Is this the spectral pattern of a potential mate? Etc. And depending on how important vision is to their culture (vision doesn't have to be an alien's primary sense just because it's ours!), they may consider the categorization and naming of generic colors to be completely ineffable, or totally normal--just disconnected from the raw physiological inputs which exist below the level of conscious awareness. 

But could there be creatures with extremely high dimensional color vision? Aside from the lack of evidence that they exist on Earth implying that high-dimensional vision probably wouldn't evolve elsewhere either, there are some practical arguments for why they shouldn't exist. Dichromatic vision permits distinguishing between objects and areas exhibiting predominantly higher-frequency vs. predominantly lower-frequency light, which is useful for picking objects out against a background and for general navigation, as seen in amphibians; however, because dichromatic vision conflates hue and saturation, it is not reliable for picking out specific wavelengths. While trichromatic vision can still be fooled by pairs of inputs that are indistinguishable from monochromatic light, it at least provides the possibility of identifying a unique spectral peak, giving us perception of the spectral colors. A lot of animal behaviors rely on this ability, such as the aforementioned Papilio butterflies which use a g-(r+b) opponent color signal to identify green leaves for egg laying, excluding objects which are too red or too blue; or apes and humans, whose trichromatic vision allows us to distinguish ripe, unripe, and overripe fruit (among other things!) as the peak reflectance shifts across the spectrum. Separating out the hue and saturation dimensions also gives us more information about the material properties of reflecting objects. So if trichromacy is already so much better, why are there so many tetrachromats in the world? Well... we don't know. There are probably multiple contributing factors; trichromacy is mostly-adequate for distinguishing most ecologically-significant variation in most natural spectra, but tetrachromacy does further reduce the possibility of spectral confusion. It may assist with color constancy--the ability to calculate what the color of a reflecting object "should" be under varying light conditions (although even dichromats can do that to some extent). Having more receptor types may provide better spectral resolution when covering a wider visual range--note that most tetrachromats can see further into the infrared and ultraviolet than we can. So perhaps pentachromacy or hexachromacy would be more useful to creatures that evolved in an environment with a different atmosphere that transmitted a wider band of potentially-visible light!
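The g-(r+b) opponent signal is simple enough to sketch in a few lines. The receptor values and threshold below are invented for illustration, and this is of course a cartoon of what any real butterfly's nervous system does:

```python
# Toy version of a g-(r+b) opponent color signal, like the one the
# Papilio egg-laying behavior uses: a surface counts as a candidate
# green leaf only if the green receptor response sufficiently exceeds
# the combined red and blue responses. Threshold and inputs are
# invented for illustration.
def looks_like_green_leaf(r: float, g: float, b: float,
                          threshold: float = 0.2) -> bool:
    return g - (r + b) > threshold

print(looks_like_green_leaf(0.2, 0.8, 0.1))  # True: green dominates
print(looks_like_green_leaf(0.7, 0.5, 0.1))  # False: too red
```

Note how the single scalar g-(r+b) deliberately throws away information: many different spectra map to the same opponent value, which is exactly why this channel supports only dichromatic-style behavior even though three receptor types feed it.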

References:

Thresholds and noise limitations of colour vision in dim light
From spectral information to animal colour vision: experiments and concepts

Sunday, February 25, 2024

Review: "Reading Fictional Languages"

I'm going meta! I'm reviewing people who are reviewing people who use conlangs in fiction!

Reading Fictional Languages (that's an Amazon Affiliate link, but you can also get it directly from Edinburgh University Press) is a collection of articles that follows up on the presentations given at the eponymous Reading Fictional Languages conference, which brings together both creators and scholars of constructed languages used in fictional works. I was provided with a free review copy as a PDF, but not until after I had bought my own hardcover anyway.

The first thing to note is that the title is kind of poorly chosen. It is telling that articles by conlangers refer to their subject as "constructed languages" or "conlangs", while articles by literary scholars refer to their subject as "fictional languages". Based on personal communication with some of the contributors, it seems that the organizers of the conference on which this volume was based (to which I submitted an abstract myself, though it was not accepted) were unaware of the modern conlanging community, and were taken somewhat by surprise when actual language creators showed up to talk about their work! They had thus developed their own analytical terminology ahead of time, in isolation from conlanging practitioners.

Chapter 1, the introduction, contrasts "real" languages with languages which are "imagined for an equally fictional community of users, where the environment is being imagined at the same time as the language is being constructed". However, that misses out on a very important distinction in the types of non-natural languages that are actually used in fictional works: those that do not exist as usable languages in the real world, and those that do. I.e., those which actually are fictional, and those which are real, despite being artificially constructed.

Skipping ahead to page 77: Chapter 6, "Design intentions and actual perception of fictional languages: Quenya, Sindarin, and Na'vi", by Bettina Beinhoff, specifies that "fictional languages" are a subset of "constructed languages", being languages constructed for use in fictional works. That's sensible, but when talking about Quenya, Sindarin, and Na'vi in particular--all languages which have been heavily developed and actively used by communities outside of their fictional contexts--it really highlights the inadequacy of this academic terminology.

We also get an explanation of the "Reading" part of the title--in short, it's about the reader's interaction with a text, and how the use of invented languages influences the creative process and the reading experience. Apart from defining terminology, however, Chapter 1 does provide a decent overview of the history of invented languages in fiction and of the contents of the book that follow.

Chapter 2, by David Peterson and Jessie Sams (who has since become Jessie Peterson), explores the nature of working with television and film makers as a language creator. I couldn't possibly do this justice in summary; David and Jessie probably have more experience with film and TV language construction than everyone else in the industry combined, and they certainly know what they're talking about! One complication of working in Hollywood, however, is not unique to working in Hollywood:

A script writer often won’t have heard of language creation and will have no sympathy for someone whose role they don’t understand commenting that the line of dialogue they want to be cut mid-word won’t work in translation because the verb in the conlang comes at the end of the sentence and won’t have been uttered yet if cut off after three words

That's basically the lament of every translator ever! Especially the ones that have to translate dialog for foreign-language editions of novels, movies, and TV shows.

Just from having been active in the conlanging community for a good long time, there was a lot in this chapter that I already knew, even though I could not have articulated it as well as David and Jessie do. But the biggest insight I gained came in an explanation of how the form of a constructed language is constrained by the needs of a film production--and not just in the sense that actors need to be able to use it. Additionally, the language creator needs to be able to translate rapidly, which means they need to construct a language that is easy for them to use without too much practice. I have long thought that Davidsonian languages all seem to have a common sort of character about them, which is partially attributable to David's construction process--but now I can see there's a darn good reason for it, and I can't actually blame him! That's just more reason to work towards getting a greater diversity of language creators into the film industry, so that we can start to see a greater diversity of languages reflecting differences in what is easy for individual creators to use in service of the needs of a film production.

I found Chapter 3 "On the inner workings of language creation: using conlangs to drive reader engagement in fictional worlds", by BenJamin Johnson, Anthony Gutierrez, and Nicolás Matías Campi, to be the most immediately useful to me, and probably to most of the people who read my blog (or at least, the intended audience for the Linguistically Interesting Media Index, which is authors who want to figure out how to do this better!) It's pretty comprehensive, covering why you might want to do this, how to handle collaboration between an author and a conlanger if you don't happen to fill both roles yourself, and some very basic stuff about the mechanics of actually using a conlang in fiction. This is where BenJamin introduces his 5-level categorization of the types of textual representation for conlangs, which I immediately latched onto and began expanding on after seeing the conference presentation that preceded this chapter, as a complement to my own categorization of comprehension-support strategies.

Chapter 4 is a case study in creating dialectal variation in a constructed language. Useful for a language creator, but you're left on your own as far as making use of that variation in your fiction writing. Personally, I think it might be hard to justify, given the difficulty of representing natural language dialects in a non-annoying way in most modern writing. Of course, if you get one of those coveted film jobs, it becomes more practical; see, for example, Paul Frommer's being called back to create a new dialect of Na'vi for The Way of Water.

Chapter 5, by Victor Fernandes Andrade and Sebastião Alves Teixeira Lopes, is an exploration of the visual influence of Asian scripts on alien typography in science fiction media. I'm not completely convinced, but the argument is worth reading. They've got interesting data to look over, at least.

I already briefly mentioned Chapter 6; essentially, it determines that the languages studied were perceived as intended on some subjective axes, such as "pleasantness", by a surveyed population, but failed in aesthetic design aims on other axes, and that cultural context is important to aesthetic evaluations. Chapter 7 "The phonaesthetics of constructed languages: results from an online rating experiment" by Christine Mooshammer, Dominique Bobeck, Henrik Hornecker, Kierán Meinhardt, Olga Olina, Marie Christin Walch, and Qiang Xia is essentially the same thing, just better, as it covers a broader selection of conlangs, gathers responses from both English and German speakers rather than just English speakers from the UK, and controls for gender, age, and linguistic background. They additionally tested listeners' abilities to discriminate between conlangs, as well as their subjective evaluations. This is potentially useful information for conlangers who are trying to target a particular aesthetic effect on a particular audience--however, it also suggests that doing specific research on this isn't really necessary for a creator, as the languages studied were pretty good at achieving their creators' stated goals already!

Chapter 8 "Tolkien’s use of invented languages in The Lord of the Rings" by James K. Tauber is basically exactly what I do on this blog--an analysis of how secondary languages are used in a fictional work to augment the narrative! I've avoided doing this sort of analysis on The Lord of the Rings myself because it is a Very Large Work, so I'll definitely be coming back to this chapter to see what I can integrate into my own analytical system later.

Chapter 9 "Changing tastes: reading the cannibalese of Charles Dickens’ Holiday Romance and nineteenth-century popular culture" by Katie Wales analyses the representation of a truly fictional language--one which does not exist as a developed and usable language in the real world--in terms of the sociological environment in which it was published, and how the tastes of modern audiences and thus the appropriate means of cultural representation have changed over time. It is a reminder that appreciating old literature often requires being intentional about not imposing modern points of view and modern judgments on people of the past, and trying to understand the literature as it would've been read by its original intended audience.

Chapter 10 "Dialectal extrapolation as a literary experiment in Aldiss’ ‘A spot of Konfrontation’" by Israel A. C. Noletto reads like a pretty standard sample of Dr. Noletto's work; he's the only academic author represented in this volume with whom I have a prior acquaintance, such that I can compare his other work! Noletto argues that "the presence of an unfamiliar fictional language interlaced with English as the narrative medium does not necessarily constitute a barrier to understanding as might otherwise be expected", and that the use of the extrapolated dialect in fact serves as an important means of conveying the theme of the story through narrative style. There's a little bit of my sort of detailed analysis of the text, showing how it is constructed to support comprehension.

Chapter 11 "Women, fire, and dystopian things" by Jessica Norledge examines the successes, failures, and impact of Suzette Haden Elgin's Láadan language as a language for a dystopia--and particularly as a language meant to expand the user's capacity for thought, in contrast to other dystopian languages, like 1984's Newspeak, which are intended to restrict thought in a Whorfian fashion. The title is of course a reference to George Lakoff's Women, Fire, and Dangerous Things.

Chapter 12 "Building the conomasticon: names and naming in fictional worlds" by Rebecca Gregory is a broad survey of how names are constructed and reflect language and culture--or fail to do so--in a variety of fictional works. She ends "with a bid for names to be seen as just as fundamental a part of language creation and conceptualisation as any other of language’s building blocks", which I can only read as a plea to academics doing literary analysis, not language creators or authors, given the broad recognition that already exists in the conlanging community of "naming languages" as a thing that is useful in worldbuilding for fiction across many types of media.

Chapter 13 "The language of Lapine in Watership Down" by Kimberley Pager-McClymont analyses the idioms, conceptual patterns, and attested formal structure of the Lapine language, how it is connected to the embodied experience of rabbits, and thus contributes to generating empathy in the reader for non-human protagonists. An excellent case study to reference for conlangers who want inspiration on developing the connection between language and culture, and especially for those working on non-human languages.

The final chapter, 14, "Unspeakable languages" by Peter Stockwell, presents another case where my intuitions clash with the chosen terminology. Stockwell examines languages which are difficult or impossible to represent directly in the narrative--i.e., a subset of truly fictional languages which necessarily remain fictional for practical reasons related to their asserted nature, not merely because the author didn't bother to flesh them out. Stockwell introduces the term "nonlang" for what I would simply call a fictional language. Terminological disputes aside, though, this chapter presents an intriguing overview of how science fiction works have dealt with the concept of the "linguistically ineffable"--languages which we can never hope to decipher or understand. The only quibble I have with the actual content is that Stockwell claims that "it is evident that the pragmatics of a question and an exclamation are still carried even in Speedtalk by intonation (marked here by ‘?’ and ‘!’)."--but that is an unwarranted conclusion based on the evidence presented, as intonation is definitely not evident on the page, and we should not assume that the use of '?' and '!' in the text actually correspond to intonation contours in the fictional spoken form--or, if they do, that the intonation contours so indicated actually correspond to questions and exclamations, given that the Speedtalk text is untranslated and explicitly not understood by the character transcribing it.

Overall: I have some complaints, and not all chapters are of equal quality or usefulness from my point of view--but there is plenty of good stuff in here that makes it worth a read, and I for one am strongly in favor of further, perhaps more intentional, collaborations between academics and conlangers in analyzing the use of constructed languages in fiction.

If you liked this post, please consider making a small donation!


Saturday, February 24, 2024

How Would We Know If We're Talking to Aliens?

A follow-up to my review of Xenolinguistics.

Suppose we encounter aliens and begin linguistic fieldwork in earnest. Or, suppose that we have reason to believe we may have finally successfully decoded a language of cetaceans or cephalopods (who for all practical purposes in this context may as well be aliens, despite living with us on Earth). How would we be able to tell that we actually got it right--that we understand what they mean, and that they understand what we mean? In particular, how would we overcome the Clever Hans Effect?

Language is ultimately a noisy and lossy channel; a great deal of human communication involves the receiver inferring what the sender probably meant, not directly extracting information that is unambiguously encoded in the linguistic signal itself. And even among humans, this can frequently go wrong, resulting in misinterpretations. But at least living humans can object when they are misinterpreted, and try to correct the miscommunication. That is much, much harder for nonhumans with whom we do not already share a language--and for dead or otherwise unavailable humans who have left behind undeciphered documents.

In these situations, it is all too easy to impute meaning from our own minds onto signals that arise from a totally different intent, or have no meaning at all. And if we're only reading out the information that we unintentionally inserted ourselves, we're not really communicating, are we?

So, there need to be ways to validate our decipherments--ways to obtain information from a non-human entity that we know we could not have provided ourselves. One option, which has been used with human texts, is to hold out validation data; if you can decipher the hieroglyphics on the Rosetta Stone without reference to anything but the Rosetta Stone, and then the system you derived turns out to produce sensible-looking results for other collections of hieroglyphics, then you've probably got it right. If you claim to have deciphered the entire Voynich Manuscript, big deal, it's only the 10th claim this year; but if you claim to have deciphered a few pages in isolation, and other people can use your system to make sense of the rest of it, that would be a much stronger claim.

Theoretically, this could be done with aliens as well. We have, as a species, collected quite a lot of recordings of whale song that could serve as validation data, for example. But it does require special circumstances to be able to collect that data. For example, if we find some technologically-primitive tribe in an alien rainforest (or even an Earthly rainforest for that matter), who do not have written records to reference, would we be terribly surprised if they objected to us setting up equipment to record everything they say just so we can analyze it later? It would be much better to have access to interactive methods, even though interaction itself increases the risk of Clever Hans events.

Another option is to attempt to make predictions about the real world based on alien-sourced data--but this also requires special circumstances, insofar as you must find a subject area which humans do not already know about, but can verify. For example, we haven't explored much of the ocean, but we have the ability to dive to specific places in the ocean if it's worth it. So, if someone claims that a whale or a squid gave them the location of a shipwreck, and then we go and find that shipwreck, that's good evidence that they can really communicate. Another option would be checking on solutions to mathematical problems--but, of course, that only works if the aliens have mathematics, and are more advanced than us in at least one area. "We don't know how to answer that." is sadly both a perfectly reasonable true response, and extremely easy to fake. Additionally, even when they exist, those kinds of natural situations can get expensive to investigate. 

The obvious alternative is to manufacture such situations. Place the alien in a test environment hidden from the human communicator. Allow the human communicator access to the alien, such that the alien is their only source of information about the test environment. See if they can describe it accurately afterwards. If a human can extract information that is verifiably available through no means other than communication with an alien, then we can be confident in the decipherment scheme used for such communication.
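To make the logic of that protocol concrete, here is a toy simulation. Everything in it is hypothetical--the "lexicon", the bit-string environments, the function names--but it shows why the test discriminates a real decipherment from a Clever Hans one: only a genuine signal-to-meaning mapping lets the human report hidden information above chance.

```python
import random

random.seed(0)

# Hypothetical toy model: the hidden test environment is a list of binary
# features (e.g. "red object present"), and the alien truthfully signals
# each one using its actual (unknown to us) signal-meaning mapping.
ALIEN_LEXICON = {0: "kree", 1: "thoom"}

def alien_describes(environment):
    """The alien emits one signal per feature of the hidden environment."""
    return [ALIEN_LEXICON[bit] for bit in environment]

def decode(signals, decipherment):
    """The human applies a candidate decipherment; unrecognized signals
    get guessed at, which is all a Clever Hans 'decipherment' ever does."""
    return [decipherment.get(sig, random.randint(0, 1)) for sig in signals]

def run_trials(decipherment, n_trials=1000, n_features=8):
    correct = 0
    for _ in range(n_trials):
        env = [random.randint(0, 1) for _ in range(n_features)]
        report = decode(alien_describes(env), decipherment)
        correct += sum(r == e for r, e in zip(report, env))
    return correct / (n_trials * n_features)

real = run_trials({"kree": 0, "thoom": 1})  # correct decipherment
hans = run_trials({})                       # no real mapping at all
```

The correct decipherment recovers the environment every time, while the empty one hovers at chance--and crucially, the experimenter can verify this without ever trusting the human interpreter's own report of how well things went.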

Of course, this does require a certain degree of cooperation from the alien! Ultimately, establishing verifiably accurate communication with an alien species depends largely on the motivation that they have for communicating with us, and their ability to understand our desires prior to establishing linguistic communication. Also note that verifying that we have deciphered a language is entirely different from verifying that an alien species has language. There are observational experiments that can rule out any option other than individuals communicating arbitrary information with each other in an open-ended system, such as observing dolphins executing coordinated swim routines together that they have never done before, and so could not have learned from observation. One instance of that type could be attributed to a limited-usage para-linguistic system, but many observations of individuals acting on information they could only have obtained by communication with another individual allows eventually building up a strong case for the existence of language in an alien species, even if we have no idea how it works.

One significant point brought up in the Xenolinguistics book is that we do not currently have the fieldwork techniques that would be necessary for reliably deciphering and documenting alien languages. Creating protocols for identifying the existence of languages to a high degree of certainty is one of those gaps--when we do fieldwork with humans, we can assume with a high degree of certainty that they do use language, and we merely need to figure out their particular language. But if you encountered a bunch of electroceptive alien fish, would it even cross your mind that they might have language and might be worth talking to? Another significant gap is precisely in creating those protocols to validate that our understanding is correct. When working with other humans, there is a huge amount of shared context and instinctual knowledge that we can use to guide our investigation--you don't have to speak another person's language to understand the significance of deictic pointing, or to realize when they are upset or happy. But when it comes to non-human creatures (particularly those, unlike dogs, whom we have not already bred to share understandable signals with us; and unlike cats, whom we have spent enough time with to have some understanding of their desires and body language), all of that goes out the window, and we have to start from a place of no assumptions, and rigorous scientific validation of every conclusion if we are to avoid misunderstanding and misleading ourselves. If you're looking for more ways to incorporate linguistics into science fiction, here you go: propose the missing protocols!

Not being a review of anything in particular, this is not part of The Linguistically Interesting Media Index. But, if you liked this post, please consider making a small donation!


Saturday, January 20, 2024

Describing Non-human Vision

Thanks to LangTime Studio creating languages for a lot of mammals with dichromatic vision, a few years ago I did a good bit of research into how visual perception varies between different species. The issue of non-human vision came up again yesterday in George Corley's (of Conlangery fame) latest Draconic language stream, so I dug up some old notes on how to describe colors that you can't see. And in fact, this isn't just useful for conlangers trying to come up with vocabulary for a non-human language; this is good information for fantasy and sci-fi writers, too!

Since I started out with researching rabbits... let's talk about rabbits. It turns out that rabbit vision differs from human vision in just about every way that tetrapod vision can, so it makes an excellent case study. Rabbits have 2 types of color-receptive cone cells, corresponding to peak sensitivities in the green and blue ranges, and one rod cell type. I.e., they are dichromats, like most mammals. Rods don't contribute to color differentiation, so we can ignore those. At first glance, this seems similar to human red-green color blindness, except the peak sensitivities of the rabbit green cone and the red/green cones of a deuteranopic human are not in the same place! This is the first area in which human and non-human visual perception can differ--even other trichromats (e.g., penguins, honeybees) may not have the same spectral sensitivities as humans, and so see completely different color distinctions than we do. The rabbit cone sensitivities are shifted downward to a 509nm peak, compared to the human green cones with peak at 530nm, and red cones which peak at 560nm. Thus, not only can rabbits not distinguish red from green, but everything on the red end of the spectrum appears much dimmer than it would to a human, due to weaker response of the Long-Wavelength Cones to those spectral colors. Note, however, that not having separate cones for red and green does not mean that rabbits (or dogs, for that matter) would always see things-we-perceive-as-red and things-we-perceive-as-green as indistinguishable--it depends on the actual spectral signature of each object. For example, where we perceive two objects as having equal perceptual brightness but different hue, rabbits might perceive identical hue but lower perceptual brightness for the red object compared to the green.
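You can get a feel for the dimming effect with a bit of arithmetic. Real cone sensitivity curves are not Gaussians, so treat this as a rough illustration only--the peak wavelengths are the ones quoted above, but the curve shape and width are assumptions of mine:

```python
import math

def response(wavelength_nm, peak_nm, width_nm=40.0):
    """Crude Gaussian stand-in for a cone's spectral sensitivity curve."""
    return math.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * width_nm ** 2))

# Peaks quoted above: rabbit long-wavelength cone at 509nm, human L at 560nm.
red_light = 650  # a deep red stimulus, in nm

human_L = response(red_light, 560)
rabbit_G = response(red_light, 509)

# The same deep-red light drives the rabbit's long-wavelength cone far more
# weakly than the human L cone, so the red object looks much dimmer to it.
```

Even with these made-up curve widths, the rabbit response to 650nm light comes out more than an order of magnitude below the (already weak) human L-cone response, which is the "red looks dark" effect in miniature.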

Much like humans have an anomalous blue response in our red cones, which causes us to conflate purple (red+blue) and violet (a spectral color, extreme blue), rabbit and rat green cones also have a sensitivity peak in the ultraviolet. Initially, I assumed that this UV peak, unlike the human anomalous blue response, would be irrelevant, since UV light is blocked by the structures of the eye, as it is in humans; however, while talking with a sci-fi writer friend of mine about non-human vision last night (as ya do, y'know), when I mentioned that rabbit and rat green-cone pigments have a weird bi-stable response to UV light, but UV is absorbed by mammalian eye tissue, so it's probably just a random non-conserved evolutionary quirk... he noted that UV is absorbed by primate eye tissue, but had I actually explicitly checked on rabbits? And I had not. So I did. And it turns out that lapine corneal, lens, and vitreous humor tissues are considerably more transparent to near-UV light than human eye tissues are. Now, nobody (that I have been able to find) is actually saying outright that rabbits (or rats) can see UV... but rabbits might actually be able to see UV. If they can, it would be indistinguishable to them from green (not blue!). If it was not already clear from the shifted sensitivity peaks, I think that should highlight the impossibility of just taking, e.g., a JPEG image captured with equipment built for humans and transforming it into an accurate representation of what some other animal would see--if nothing else, the UV information would be completely missing!

Incidentally, if rabbits are UV-sensitive, the bistable nature of the UV response in their green cones means that they would actually be more strongly sensitive to UV in the dark than they are during daytime illumination. I have no idea what to make of that, as there isn't really a whole lot of
environmental UV going around at night or in tunnels... but that's a quirk you can keep in mind as a possibility for fictional creatures. In general, just note that spectral response can vary in different environmental conditions; in humans, we lose the ability to distinguish color entirely in low-light conditions (and your brain lies to you to fill in the colors that you believe things should be), but things can be more complicated than that.

Another interesting feature of rabbit eyesight is that they have a much less dense foveal region than humans (so less effective resolution), and their color-sensitive cells are not evenly distributed--there is a thin band with a mixture of both green and blue cones, with blue cones concentrated at the bottom of the retina (corresponding to the top of the visual field) and green cones concentrated at the top (corresponding to the bottom of the visual field). I.e., their vision along the horizon is in color, but the top and bottom extents of their visual fields are black and white, and specialized for better spectral response to the most common wavelengths of light coming from those directions--blue from the sky, green from the ground. This isn't too different from human peripheral vision (where color information is inferred by the brain, not actually present in the raw retinal output), except that in rabbits different parts of the peripheral fields actually have a different peak spectral response! In wild rabbits, this is probably just an adaptation to getting the maximum information out of a predominantly-blue-background sky and a predominantly-green(/red)-background ground, but intelligent rabbits could theoretically learn to extract additional color information (e.g., distinguishing monochromatic white from dichromatic white) from an object by wiggling their eyes up and down or tilting their heads to put it in different parts of the visual field. Or not, if their brains just fill in missing color information automatically like ours do.... But if you want to write about a creature that can do that, by authorial fiat, they could have a whole auxiliary class of color words, analogous to pattern words like "speckled" or "sparkly", to describe objects that have different appearances in different parts of the visual field.

But, if we abstract away from physiological perceptual abilities, what would their experience of color space be like? Tetrapod retinas pre-process raw cone cell signals into antagonistic opponent channels before color information gets sent to the brain; i.e., what your visual cortex has access to is not the original cone cell activations, but sums and differences of the activations of multiple types of cone cells. In human eyes, that means our brains see color coming down the optic nerves as a combination of red vs. green and blue vs. yellow signals--even though yellow isn't actually a physiological primary color! In dichromats like rabbits, the two raw spectral signals (green and blue) are still processed by an antagonistic opponent system in the retinal ganglia; thus, just like we can't perceive the impossible colors "reddish green" or "yellowish blue", they cannot have any perception of a distinct blue-green mixture--dim dichromatic light at both spectral peaks will look exactly the same as bright monochromatic light exactly in between, which will be indistinguishable from white. In effect, the loss of one cone type compared to humans reduces the color space from 3 dimensions to 2, and the perceptual dimension that is lost after ganglial processing is that of saturation.
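That metamer claim--dim light at both peaks looking identical to brighter light exactly in between--can be checked numerically. Again using made-up Gaussian sensitivity curves (the peaks are the rabbit values quoted above, the width is an assumption), two physically different spectra collapse to the same (luminance, chroma) pair after the opponent stage:

```python
import math

def response(wl, peak, width=40.0):
    # Crude Gaussian stand-in for a cone sensitivity curve; not measured data.
    return math.exp(-((wl - peak) ** 2) / (2 * width ** 2))

G_PEAK, B_PEAK = 509, 425  # rabbit cone peaks as quoted above

def percept(spectrum):
    """spectrum: list of (wavelength, intensity) spectral lines.
    Returns the post-retinal opponent pair (luminance, green-minus-blue)."""
    g = sum(i * response(wl, G_PEAK) for wl, i in spectrum)
    b = sum(i * response(wl, B_PEAK) for wl, i in spectrum)
    return (g + b, g - b)

# Dichromatic light: equal lines at BOTH spectral peaks...
dichromatic = percept([(G_PEAK, 1.0), (B_PEAK, 1.0)])

# ...vs. brighter monochromatic light exactly in between the peaks:
mid = (G_PEAK + B_PEAK) / 2
scale = dichromatic[0] / percept([(mid, 1.0)])[0]
monochromatic = percept([(mid, scale)])

# Both stimuli reach the brain as the same (luminance, chroma) pair,
# so the dichromat cannot tell the mixture from the single wavelength.
```

Both percepts also have zero chroma, i.e. they sit on the white/grey axis--which is exactly the "indistinguishable from white" point made above.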

The lapine color space is thus defined by a 2D, triangular range with black at one vertex, white (or whatever you want to call it) at the center of the opposite edge, and pure green and pure blue at the
remaining vertices. The hue and saturation axes are the same, with green fading into white and then white fading into blue.



If the most basic colors are defined by the extrema of the opponent-process space, as they are for humans, there should be 3 basic colors, corresponding to black, blue, and green. White would be the natural next step, followed perhaps by light and dark shades of blue and green. Or you could call the green extremum "yellow" instead, as the Long Wavelength Cone still has sensitivity into the yellow and red ranges of the spectrum, even though its peak is in green, as I have done in the image above. Fundamentally, the 3D human color space and 2D dichromat color spaces are mathematically incommensurate, so all human-perceptible representations involve some arbitrary choices anyway. Treating the long-wavelength end as "yellow" rather than "red" is convenient if you want to do something like copying the Old Norse poetic convention of treating blood and gold as being the same color. :)

We can squish and stretch that gamut to get a representation of the dichromat color wheel, with saturation on the radial axis and hue and brightness varying together around the polar (angular) axis:


And the sort of Cartesian representation that an intelligent dichromat graphic designer would use to pick out colors in a computer graphics program:


Keep in mind that the actual colors used in these illustrations are completely arbitrary, aside from being "towards the long-wavelength end" vs. "towards the short-wavelength end". What matters is just the set of possible distinctions. Figuring out exactly what lapine colors any particular object would correspond to would require recording the actual emission spectrum of that object, and then mapping it into the rabbit color space--and being dichromatic does not merely mean that they see a subset of the colors that we can see; the available distinctions are different. E.g., two objects which look identically purple to a human may be monochromatic in the violet spectral range, or they may be dichromatic with light in the
blue and red ranges, but those two objects will look distinct to a rabbit--the first one being obviously pure blue, the second being light blue or white.

So, that's dichromatism... what about tetrachromatism, or higher? My best reference on this subject is this absolutely lovely article: Ways of Coloring: Comparative Color Vision as a Case Study for Cognitive Science, which contains descriptions of comparative color spaces for humans, bees (also trichromats, but with different frequency response), goldfish, turtles (both of which are tetrachromats), and pigeons (suspected pentachromats). And it has an excellent statement of what the problem actually is:
It is important to realize that such an increase in chromatic dimensionality does not mean that pigeons exhibit greater sensitivity to the monochromatic hues that we see. For example, we should not suppose that since the hue discrimination of the pigeon is best around 600nm, and since we see a 600nm stimulus as orange, pigeons are better at discriminating spectral hues of orange than we are. Indeed, we have reason to believe that such a mapping of our hue terms onto the pigeon would be an error: [...] 
Among other things, this result strongly emphasizes how misleading it may be to use human hue designations to describe color vision in non-human species. This point can be made even more forcefully, however, when it is a difference in the dimensionality of color vision that we are considering. An increase in the dimensionality of color vision indicates a fundamentally different kind of color space. We are familiar with trichromatic color spaces such as our own, which require three independent axes for their specification, given either as receptor activation or as color channels. A tetrachromatic color space obviously requires four dimensions for its specification. It is thus an example of what can be called a color hyperspace. The difference between a tetrachromatic and a trichromatic color space is therefore not like the difference between two trichromatic color spaces: The former two color spaces are incommensurable in a precise mathematical sense, for there is no way to map the kinds of distinctions available in four dimensions into the kinds of distinctions available in three dimensions without remainder. One might object that such incommensurability does not prevent one from “projecting” the higher-dimensional space onto the lower; hence the difference in dimensionality simply means that the higher space contains more perceptual content than the lower. Such an interpretation, however, begs the fundamental question of how one is to choose to “project” the higher space onto the lower. Because the spaces are not isomorphic, there is no unique projection relation.

It is also the case that lower-dimensional color spaces, such as those of dogs or rabbits (both dichromats, but in slightly different ways) are incommensurate with our 3D color space, in exactly the same way that our 3D color space is incommensurate with the higher-dimensional perceptions of a pigeon, turtle, or goldfish, and have no unique projections. Thus, visualizations of how your dog or cat sees things are always only approximations--we can try to recreate the kinds of distinctions relevant to a dichromatic animal in our own color space, but we will always experience it differently.

A common feature of all of the systems described is the production of a combined luminance channel from the raw n-dimensional cone cell inputs, and n-1 oppositional chroma channels--in humans, these are the red-green and blue-yellow oppositions, which produce a two-dimensional neurological color space orthogonal to the luminosity axis. The YCbCr color space (the digital relative of the YUV space used for analog color TV transmission) arises from representing the two chromatic dimensions directly in Cartesian coordinates. Saturation arises as the radial dimension--distance from the white-black axis--in a polar transformation of this oppositional color space to produce the trichromat color wheel, with hue arising as the angular coordinate. Trichromat color spaces for different species can vary both in their precise spectral sensitivities, and in how the oppositional chroma channels are generated in the retina; i.e., instead of an RG-B opposition, where R and G physical channels combine to produce Y, there can also be an R-GB opposition: red-cyan vs green-blue. For us, there's no such thing as reddish-green (nor blueish-yellow), because yellow comes in between, but we do have blueish-green. For that other sort of trichromat, reddish-green would make perfect sense, but blueish-green and reddish-cyan would be impossible to perceive instead.
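The two wirings are easy to write out as code. This is a schematic sketch, not a model of actual retinal ganglion weights--the function names and the cone activation values are made up--but it shows how the same three cone signals reach the brain as different opponent coordinates under each scheme:

```python
def opponent_rg_vs_b(r, g, b):
    """Human-style wiring: luminance, red-vs-green, and blue-vs-yellow,
    where 'yellow' is the R+G composite (averaged here for simplicity)."""
    return {"lum": r + g + b, "r-g": r - g, "b-y": b - (r + g) / 2}

def opponent_r_vs_gb(r, g, b):
    """The alternative wiring described above: red-vs-cyan (the G+B
    composite) and green-vs-blue."""
    return {"lum": r + g + b, "r-c": r - (g + b) / 2, "g-b": g - b}

# The same (made-up) cone activations, seen through each wiring:
cones = (0.8, 0.6, 0.2)
human_like = opponent_rg_vs_b(*cones)
alt = opponent_r_vs_gb(*cones)
```

Note that both wirings preserve all the information in the three cone signals (each is an invertible linear transform); what differs is which mixtures sit at the zero points of the chroma axes, and therefore which composite colors are perceivable and which are "impossible".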

Monochromatic vision is pretty easy to understand--it's just black-and-white / greyscale--luminosity is the only dimension, and leaves zero additional channels for chroma information. As illustrated above, in dichromat vision, the equivalent of the trichromatic color "wheel" is just a line--the radial dimension is not meaningfully distinct from the single linear chromatic dimension, and while we require an additional axis to represent brightness, the dichromat color wheel really does represent every color they can possibly see. As a result, "saturation" and "hue" (or, alternatively, brightness and hue) are indistinguishable to dichromats, and grey (or white, depending on whether you represent the space as a triangular gamut or a Cartesian diamond) is a spectral color. There are only two primary colors (or 4, if you count white and black), and no secondary colors.

In higher-dimensional color spaces, as determined by discrimination experiments on tetrachromatic and pentachromatic organisms, we still see the generation of oppositional color channels from retinal processing. How to generate these oppositional channels, however, is not obvious a priori; for example, in humans one opposition is between red and green, both of which are primary colors, but the other is between blue, a primary color, and yellow, a composite--and, as mentioned above, that could be reversed in a different species with different specific spectral sensitivities. But why that particular combination for us?

It turns out, across different species, opponent channels are constructed to maximize decorrelation--in other words, to remove redundant information caused by the overlapping response curves of different receptor types. Thus, the precise method of calculating color channels will be slightly different for each species, dependent on physical characteristics of the retinal cells, but they are all qualitatively the same kind of signal, and end up producing a higher-dimensional chroma-space orthogonal to the white-black luminosity axis. However, there's pretty good reason to believe that this would be a convergently-evolved process to maximize visual acuity (except in some specific circumstances like Mantis shrimp), so this analysis of color perception plausibly applies universally, to most kinds of weird aliens you might come up with, so long as they have eyes at all. Effectively, the retinal ganglia are performing Principal Component Analysis to turn "list of specific frequency activations" information into "total luminosity vs. list of chroma components" information.
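You can watch that happen in a toy simulation. Everything here is invented--Gaussian "cone" curves, random smooth spectra standing in for natural scenes--but running PCA on the resulting cone activations reproduces the qualitative structure: the first (largest-variance) component weights all cones with the same sign, i.e. a luminance-like sum channel, and the remaining components necessarily mix signs, i.e. opponent chroma channels:

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 700, 301)

# Made-up overlapping Gaussian "cone" sensitivity curves (trichromat-like).
def curve(peak, width=60.0):
    return np.exp(-((wavelengths - peak) ** 2) / (2 * width ** 2))

cones = np.stack([curve(440), curve(530), curve(560)])  # (3, n_wavelengths)

# Random smooth positive spectra standing in for natural scenes.
def random_spectrum():
    s = np.zeros_like(wavelengths)
    for _ in range(5):
        s += rng.uniform(0.2, 1.0) * curve(rng.uniform(400, 700),
                                           rng.uniform(30, 120))
    return s

activations = np.array([cones @ random_spectrum() for _ in range(500)])

# PCA: eigenvectors of the covariance matrix of the cone activations.
cov = np.cov(activations, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
pc1 = eigvecs[:, -1]   # largest-variance component: luminance-like
pc2 = eigvecs[:, -2]   # next component: opponent (mixed-sign) chroma
```

The mixed signs in the later components aren't an accident of this particular simulation: any vector orthogonal to an all-positive first component must have entries of both signs, so "sum channel plus difference channels" falls straight out of the decorrelation objective.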

Meanwhile, in any such neurological color space, there is only ever a single radial coordinate. Trichromatic vision is kind of special in that it is the first dimensionality at which chroma can be split into saturation and hue components. At higher dimensionalities, the hue space gets more complex, but we can say with some confidence that the extra dimensions introduced in higher-dimensional perceptual color spaces are not some extra sort of radial-coordinate saturation or any kind of weird third thing, but are in fact additional dimensions of hue--and along with extra dimensions of hue, qualitatively different kinds of composite colors!

Monochromats don't have any color. Dichromats don't have any secondary colors--just the spectral colors which, strangely to us, include white/grey. Our three dimensional human color space allows us to perceive two opponent channels, corresponding to 4 pure hues--red, yellow, green, and blue--and weighted binary combinations thereof that give rise to the secondary colors--r+y (orange), y+g (chartreuse?), g+b (cyan), and b+r (magenta)--of which exactly one (magenta) is non-spectral. Non-spectral colors derive from simultaneous activation of cones with non-adjacent response peaks, and with three cones, there's only one such possibility. Meanwhile, a tetrachromatic system would have 3 opponent axes with 6 basic hues (r-g, y-b, and the new p-q), binary combinations of those hues with their non-opponents producing 12 secondary colors (r+y, r+b, r+p, r+q, g+y, g+b, g+p, g+q, y+p, y+q, b+p and b+q), and ternary combinations producing 8 extremal instances of an entirely new kind of hue--tertiary colors--not found in the perceptual structure of trichromatic color space (r+y+p, r+y+q, r+b+p, r+b+q, g+y+p, g+y+q, g+b+p, g+b+q), just as our secondary colors are not found in the dichromatic space. Additionally, there is not merely one non-spectral secondary color (magenta) in the fully-saturated hue space, but 3--and in general, that number will correspond to however many pairs of non-spectrally-adjacent sensor types there are (which actually works out to the sequence of triangular numbers!) If we assume that r, g, b, and q are the physiological primaries (note that the spectral locations of y and p depend on the decorrelation output for a specific set of 4 receptors with species-specific sensitivities), then the non-spectral secondaries are r+b, r+q, and g+q. All of the tertiary colors are non-spectral.
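All of that combinatorics can be packed into one small function. This is just the counting argument from the paragraph above turned into code (the function name and dictionary keys are mine); it's an idealization of the opponent-process model, not experimental data:

```python
from math import comb

def color_counts(n_cones):
    """Counts of perceptual color categories for an idealized n-cone
    opponent system, following the combinatorics sketched above."""
    axes = n_cones - 1                   # number of opponent chroma axes
    return {
        "basic_hues": 2 * axes,          # two extremal poles per axis
        "secondary": comb(axes, 2) * 4,  # choose 2 axes, 2 poles each
        "tertiary": comb(axes, 3) * 8,   # choose 3 axes, 2 poles each
        # Fully-saturated non-spectral secondaries: pairs of cones whose
        # response peaks are NOT spectrally adjacent -- triangular numbers.
        "non_spectral_secondaries": (n_cones - 1) * (n_cones - 2) // 2,
    }

trichromat = color_counts(3)   # 4 basic hues, 4 secondaries, magenta
tetrachromat = color_counts(4) # 6 basic hues, 12 secondaries, 8 tertiaries
```

For a trichromat this gives 4 basic hues, 4 secondaries, no tertiaries, and exactly one non-spectral secondary (magenta); for a tetrachromat, 6, 12, 8, and 3, matching the hand-counted lists above--and you can plug in a pigeon-like 5 to see how fast the vocabulary problem grows.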

Ultimate writer takeaway: you may not be able to intuitively understand what non-human color experiences are like, but you can make some arbitrary implicit decisions about retinal physiology (i.e., just decide where you want the opponent colors to appear along the spectrum), do some basic combinatory math, and then you have a list of descriptions of basic focal colors that you can assign words to--or, if you want to be a little more realistic, assign words to ranges of those focal colors, which you can precisely mathematically describe. This gets more complicated at higher dimensionalities (like pigeons' pentachromatic color space), but tetrachromacy is kind of convenient because you still have only 2 dimensions of hue, so you can actually diagram out what the color regions are, and just tell people "y'all already know how brightness and saturation work, so I don't need to put those on the chart".

Someday, I aspire to have a program where you can input the physiological frequency response curves for an arbitrary organism, and a spectrum, and it'll give you the mathematical description of the perceptual color that that would produce. But till then, you'll just have to do your best at guessing what the aliens and monsters and anthropomorphic animals see whenever a human thinks something is a particular color--but guess informedly, knowing what the structure of their color spaces is like!
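In the meantime, here's a minimal Python sketch of what that program's core loop might look like. Everything specific in it is invented for illustration: the Gaussian response curves, the receptor peaks, and the toy opponent-channel decorrelation are placeholders, not any real organism's physiology.

```python
import math

# Toy sketch: receptor response curves x spectrum -> perceptual channels.
# All curves, peaks, and the decorrelation scheme are made up here.
wavelengths = list(range(400, 701, 10))  # nm, coarse sampling

def gaussian_response(peak, width=40.0):
    """A made-up bell-curve sensitivity profile for one receptor type."""
    return [math.exp(-((w - peak) / width) ** 2) for w in wavelengths]

# Hypothetical tetrachromat: four receptor types with invented peaks.
receptors = [gaussian_response(p) for p in (440, 500, 560, 620)]

def perceive(spectrum):
    """Integrate the spectrum against each response curve, then apply a
    toy decorrelation: one brightness channel plus opponent channels
    formed from differences of neighboring receptor activations."""
    act = [sum(r * s for r, s in zip(curve, spectrum)) for curve in receptors]
    a, b, c, d = act
    brightness = a + b + c + d
    return brightness, b - a, c - b, d - c

# Usage: a flat spectrum as a stand-in for broadband illumination.
flat = [1.0] * len(wavelengths)
print(perceive(flat))
```

A real version would need measured sensitivity curves and a principled decorrelation (something like a principal-components transform of natural-spectra responses), but the overall shape--dot products, then opponent differencing--would stay the same.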

P.S. What was that about mantis shrimp? Well, mantis shrimp have 16 different light receptor types, with 12 different color receptors, which kinda suggests that they should have a 12-dimensional color space with 10 dimensions of hue. But... empirically, that's not what happens. Experimentally, they don't actually have all of those different color categories, or a particularly fine capacity for spectral distinction. Rather, they have a large number of different receptor types so that they can identify spectral colors at high speed, without doing any retinal pre-processing--chartreuse cone fires? Cool, that's a chartreuse thing! No need to bother with opponent processing! These kinds of extreme high-dimensional visual systems might end up working more like our senses of smell or taste than like our perception of color. However, there's also another aspect of mantis shrimp vision that's outside of color perception (and not entirely unique to mantis shrimp, either): they can see polarization (hence the 4 visual receptor types that aren't for color, rather than just 1). This ability is comparatively easy to imagine and describe--it's an overlay of geometric information that tells you "not only does this light have a particular color, it is also oriented in a particular way". Mantis shrimp are, however, unique in being able to distinguish circularly polarized light; other creatures with polarization sensitivity would be unable to tell circularly polarized from unpolarized light.

Wednesday, January 10, 2024

A Language of Graphs

Recently I got thinking about syntax trees, and what a purely-written language might be like that was restricted to the syntactic structures available to linearized spoken languages and made those structures explicit in a 2D representation. Or in other words, a graphical (double-entendre fully intended) language consisting of trees--that is, graphs in which there is exactly one path between any two nodes/vertices--whose nodes are either functional lexemes roughly corresponding to internal syntactic nodes and function words in natural languages, or semantic lexemes corresponding to content words--but where, since the "internal" structure is made visible, content words are not restricted to leaf nodes!

Without loss of generality, and for the sake of simplicity, we can even restrict the visual grammar to binary trees--which X-bar theory does for natural languages anyway--although calling them "binary" doesn't make much sense if you don't display them in the traditional top-down tree format with a distinguished root node, since internal nodes can have up to three connections--one "parent" and two "daughters", which are a natural distinction in natlang syntax trees but completely arbitrary when you aren't trying to impose a reversible linearization on the leaf nodes! So, in other terms, we can say that sentences of this sort of language would consist of tree-structured graphs with a maximal vertex degree of 3.

I am hardly the first person to have thought up the idea of 2D written language, but a common issue plaguing such conlang projects (including their most notable example, UNLWS) is figuring out how to lay them out in actual two dimensions; general graphs are three-dimensional, and squishing them onto a plane often requires crossing lines or making long detours, or both. Even when you can avoid crossings, figuring out the optimal way to lay out a graph on the page is a very hard computational problem. Trees, however, have the very nice property that they are always planar, and trivial to draw on a 2D surface; if we allow cycles, or diamonds (same thing with undirected edges), it becomes much more difficult to specify grammatical rules that will naturally enforce planarity--which is why I've yet to see a 2D language project that even tries. Not only is it easy to planarize trees, there are even multiple ways of doing so automatically, so one could aspire to writing software that would nicely lay out graphical sentences given, say, parenthesized typed input. (Another benefit of trees is that they can be fully specified by matched-parentheses expressions, so we could actually hope to be able to write this on a keyboard!) And then we can imagine imposing additional grammatical rules and pragmatic implications for different standard layout choices--what does it mean if one node is arbitrarily specified as the root, and you do lay it out as a traditional tree? What if you instead highlight a root node by centering it and laying out the rest of the sentence around it? What if you center a degree-two node and split the rest of the sentence into two halves splayed out on either side?
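As a proof of concept for the keyboard-input idea, here's a small Python sketch: it parses a matched-parentheses expression into a tree, enforcing the two-daughter limit. The notation (a node written as a label followed by its daughters) and the example words are invented purely for illustration.

```python
# Made-up parenthesized input syntax for tree sentences,
# e.g. "(eat (dog (the)) (bone))".

def parse(text):
    """Parse '(label daughter daughter)' into (label, [daughters])
    tuples, enforcing the binary-tree limit of two daughters per node
    (so every node has at most 3 connections: one parent, two daughters)."""
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()
    pos = 0

    def node():
        nonlocal pos
        assert tokens[pos] == "("; pos += 1
        label = tokens[pos]; pos += 1
        children = []
        while tokens[pos] != ")":
            children.append(node())
        pos += 1
        if len(children) > 2:
            raise ValueError(f"{label!r} has more than two daughters")
        return (label, children)

    return node()

tree = parse("(eat (dog (the)) (bone))")
print(tree)  # ('eat', [('dog', [('the', [])]), ('bone', [])])
```

From a structure like this, an automatic layout pass could then place nodes on the page in any of the styles discussed above.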

The downside of trees is that semantic structure is not limited to trees; knowledge graphs are arbitrary non-planar graphs. But linear natural languages already deal with that successfully; expanding our linguistic medium from a line to a tree should still reduce the kinds of ambiguities that natural languages handle all the time. So, this sort of 2D language will require the equivalent of pronouns for cross-references; but they probably won't look much like spoken pronouns, and there's a lot more potential freedom in where you decide to make cuts in the semantic graph to turn it into a tree, and thus where pronouns get introduced to encode those missing edges, and those choices can probably be filled with pragmatic meaning on top of the implications of visual layout decisions.
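The cut-the-graph-into-a-tree step can be sketched mechanically: run a spanning-tree traversal over the semantic graph, and every edge the tree leaves out becomes a cross-reference needing a "pronoun". The example graph and its node labels below are invented for illustration; a real grammar would choose the cuts pragmatically rather than by traversal order.

```python
from collections import deque

def tree_and_pronouns(graph, root):
    """BFS spanning tree of an undirected graph (adjacency dict).
    Returns (tree_edges, pronoun_edges): the edges kept in the tree,
    and the leftover edges that must be encoded as cross-references."""
    seen = {root}
    tree, pronouns = [], []
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in seen:
                seen.add(v)
                tree.append((u, v))
                queue.append(v)
            elif (v, u) not in tree and (u, v) not in pronouns and (v, u) not in pronouns:
                pronouns.append((u, v))
    return tree, pronouns

# "The dog chased the cat that scratched the dog": the two mentions of
# the dog close a cycle, which must become a pronoun edge.
graph = {
    "chase": ["dog", "cat"],
    "dog": ["chase", "scratch"],
    "cat": ["chase", "scratch"],
    "scratch": ["cat", "dog"],
}
tree, pronouns = tree_and_pronouns(graph, "chase")
print(tree)      # spanning-tree edges, drawable as a planar tree
print(pronouns)  # leftover edges that need pronoun cross-references
```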

Now, what should words--the nodes in these trees--look like? It seems to be common in 2D languages for glyphs to be essentially arbitrary logographs, perhaps with standard boundary shapes or connection point shapes for different word classes. The philosophy behind UNLWS, that it should take maximal advantage of the native possibilities of the written visual medium, even encourages using iconic pictorial expressions when feasible. But that's not how natural languages work; even visual languages (i.e., sign languages), despite having more iconicity on average than oral languages, have a phonological system consisting of a finite number of basic combinatorial units that are used to build meaningful words, analogous to the finite number of phonemes that oral languages have to string together into arbitrary words. Since we've already got a certain limited set of graphical "phonological" items necessary for drawing syntax trees, and constraint breeds creativity, why not just re-use those?


Here we have an idealized representation of the available phonemes / graphemes / glyphemes: a vertex with one adjoining edge, a vertex with 2 adjoining edges, and a vertex with 3 adjoining edges. On the left, the three -emic forms. On the right, the basic allographic variants. In all cases, absolute orientation and chirality don't matter--if you mirror the "y" glyph, it is still the same glyph. Note that "graph" and "grapheme" are standard terms in linguistics for the written equivalents of "phones" and "phonemes", but that's gonna get really confusing when we're also talking about "graphs" in the mathematical sense. "Glyph" also has a technical meaning, but I am going to repurpose it here to talk about the basic units of this 2D language. So, we have glyphs, glyphemes, and alloglyphs, which are composed into graphs to form lexemes and phrases. Having only 3 glyphemes to work with may seem extremely limiting, but the expanded combinatorial possibilities in 2D vs. 3D make up for it.

While keeping syntax restricted to tree structures is the core idea of this language experiment, lexical items, which don't need to be invented and laid out on the fly, can be more general; we could allow them to be any planar graph. And just as syntax trees can be laid out in many different ways, we could say that lexical items are solely defined by their abstract graphs, which can also be laid out in many ways. But, it turns out that recognizing the topological equivalence of two graphs laid out in different ways is a computationally hard problem! If this language is to be usable by humans, that simply will not do. Thus, the layout for lexical items should be significant, up to rotation and reflection equivalence, so that their visual representations are easily recognizable. This doesn't require introducing any additional phonemic elements--the arrangement of phonemes and letters in one-dimensional natural language words also affects meaning, but we don't consider it "phonemic". Despite the Monty Python sketch about the guy who speaks in anagrams, spoken words are not just bags of sounds in arbitrary order, and written words are not just bags of letters--that's why, for example, "bat" and "tab" mean different things, and "bta" just isn't an English word at all. The spatial arrangement--which, in the case of natural language, works out to just linear order--matters a lot, and that sketch only works because it's precisely constructed to use close-enough anagrams with a lot of supporting context. So, what sort of glyphotactic rules should we have to determine the valid and recognizable arrangements of glyphs in 2D space?

With 3 edges per vertex, the most natural-seeming arrangement is to spread them out at 120-degree angles, and degree-2 vertices would sit nicely in a pattern with 180-degree angles (although we probably want to minimize those, since vertices are more noticeable if they are highlighted by a corresponding angle, rather than a straight line through them). That suggests a triangular grid, which can accommodate both arrangements. The idealized glyphemes and alloglyphs shown above are drawn assuming placement on such a triangular grid, with 60, 120, and 180-degree angles. (I will continue to refer to the features of glyphs in terms of 60, 120, and 180-degree angles, but these, too, are idealizations; in practice, non-equilateral grids might be used for artistic or typographic purposes--e.g., as an equivalent to italics--in which case these angle measurements should be interpreted as representing 1, 2, or 3 angular steps around a point in the grid.) So, words shouldn't be completely arbitrary planar graphs--they should be planar graphs with a particular layout on a triangular grid.
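Recognizing "the same word up to rotation and reflection" is straightforward to mechanize on such a grid. Here's a Python sketch using axial hex-lattice coordinates (one standard convention for triangular grids, not the only one); for brevity it canonicalizes only the vertex set, where a full version would carry the edge list through the same transforms.

```python
# Word layouts as sets of (q, r) axial coordinates on a triangular grid.

def rot60(p):
    """Rotate a lattice point 60 degrees about the origin."""
    q, r = p
    return (-r, q + r)

def reflect(p):
    """Mirror a lattice point across an axis through the origin."""
    q, r = p
    return (q, -q - r)

def canonical(points):
    """Minimum over the 12 lattice point-symmetries (plus translation
    normalization) of the sorted vertex set: a 'citation form' that two
    layouts share exactly when they are the same shape."""
    best = None
    for mirror in (False, True):
        pts = [reflect(p) for p in points] if mirror else list(points)
        for _ in range(6):
            pts = [rot60(p) for p in pts]
            base = min(pts)  # normalize translation via the lex-min point
            shifted = tuple(sorted((q - base[0], r - base[1]) for q, r in pts))
            if best is None or shifted < best:
                best = shifted
    return best

# A 3-vertex elbow, and the same elbow reflected and shifted elsewhere:
a = [(0, 0), (1, 0), (1, 1)]
b = [(2, 3), (3, 2), (3, 1)]
print(canonical(a) == canonical(b))  # True: same word shape
```

Because the canonical form is just a tuple, it can serve directly as a dictionary key for lexical lookup.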

It does not make sense to extend a single grid across an entire phrase or sentence; the boundaries of trees grow exponentially, so you'd need a hyperbolic grid to do it in the general case, and hyperbolic paper is hard to come by (although laying out a sentence on a single common grid within, say, a Poincaré-disc model of the hyperbolic plane might be a neat artistic exercise). Maintaining a grid within a word is sufficient to maintain graphical recognizability, and breaking the grid is one signal of the boundary between lexicon and morphology on one side and syntax on the other.

Making an analogy to chemistry, I feel, as an aesthetic preference, that word-graphs should have a minimal amount of "strain". That is, glyphotactically valid layouts should use 120-degree angles wherever possible, and squish them to 60 degrees or spread them to 180 degrees only where necessary. So, where is it necessary?

  • 60-degree angles should only occur on 3-vertex triangles, the acute points of 4-vertex diamonds, or as paired 60-degree angles on the interior of a hexagon.
  • 180-degree angles should only occur adjacent to 60-degree angles, or crossing vertices at the centers of hexagons.
Additional restrictions:
  • All edges should be exactly one grid unit long--i.e., there are no words distinguished by having a straight line across multiple edges, vs. two edges with a 180-degree angle at a vertex in the middle.
  • Syntactic connections must occur on the outer boundary. I.e., you can't have a word embedded inside another word.
  • All vertices must have a maximum of three adjacent edges; thus, any word must have at least one exterior vertex with degree 2 or 1, to allow a syntactic edge to attach to it.
  • As they are nodes in a binary syntax tree, words can have at most 3 external syntactic connection points.

With those restrictions in place, here are all of the possible word skeletons of 2, 3, or 4 vertices:


I refer to these as "word skeletons" rather than full words because they abstract away the specification of syntactic binding points--and the choice of binding points may distinguish words (although they should probably be semantically-related words if I'm not being perverse!) Including all of the possible binding point patterns for every skeleton massively increases the number of possibilities, and it quickly gets impractically tedious to enumerate them all and write them down. Here are all of the word skeletons with 5 vertices:

And here are all of the word skeletons with 6 vertices:

And the number of possible 7-vertex words is... big. Counting graphs turns out to also be a hard problem, so I can't tell you exactly how fast the number of possible words grows, but it grows fast.
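For very small sizes, though, brute force works fine. Here's a Python sketch that counts connected graphs with maximum degree 3, up to isomorphism, ignoring the grid-layout and binding-point constraints above (which would change the numbers); even this crude enumeration slows down dramatically past 5 or 6 vertices.

```python
from itertools import combinations, permutations

def count_skeletons(n):
    """Count connected max-degree-3 graphs on n vertices up to
    isomorphism, by brute force over all edge subsets."""
    nodes = range(n)
    all_edges = list(combinations(nodes, 2))
    seen = set()
    for bits in range(1 << len(all_edges)):
        edges = [e for i, e in enumerate(all_edges) if bits >> i & 1]
        deg = [sum(v in e for e in edges) for v in nodes]
        if any(d > 3 for d in deg) or any(d == 0 for d in deg):
            continue
        # Connectivity check by flood fill from vertex 0.
        reach = {0}
        frontier = [0]
        while frontier:
            u = frontier.pop()
            for a, b in edges:
                for v in ((b,) if a == u else (a,) if b == u else ()):
                    if v not in reach:
                        reach.add(v)
                        frontier.append(v)
        if len(reach) < n:
            continue
        # Canonical form: minimum relabeling over all vertex permutations.
        canon = min(
            tuple(sorted(tuple(sorted((p[a], p[b]))) for a, b in edges))
            for p in permutations(nodes)
        )
        seen.add(canon)
    return len(seen)

print([count_skeletons(n) for n in (2, 3, 4)])  # [1, 2, 6]
```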

Now, I just need to start actually assigning meanings to some of these....

Wednesday, December 27, 2023

What If Marvel Audiences Had to Read Subtitles for Mohawk Dialog?

Episode 6 of season 2 of Marvel's What If... ("What if... Kahhori Reshaped the World?") features Mohawk people and Spanish conquistadors each speaking their own languages on screen, and, excepting a few seconds at a time of English narration, Marvel & Disney+ have trusted audiences to actually read subtitles for nearly all of a 30-minute episode. Good for you, Marvel!

There's a neat trick going on with the subtitling to distinguish the two languages, providing some extra context for people who might not have the ear to easily recognize that the Native Americans and Spaniards are indeed speaking different not-English languages: Mohawk is subtitled in white text, while Spanish is subtitled in yellow text. Not much to analyze there--it's just neat.

However... now I get to rant about subtitles a little bit.

The white and yellow subtitles provided in the "default" presentation of the episode for Anglophone audiences are implemented as "open captions"--text that is "burned in" to the video image, and cannot be dynamically changed. If you switch the language to, say, Spanish, the English subtitles for Spanish dialog don't go away; if you switch to French, the short sections of English dialog are translated to French, but that's the only difference. You have to turn on French closed-caption subtitles separately, and they will display over the burned-in English.

I can only assume that this was done because Disney's streaming platform doesn't support any sort of formatting in closed captions. And sadly, I can't get too mad at Disney in particular for this, because nobody else does any better--Amazon Prime Video has terrible captions, Netflix has terrible captions, Paramount+ has terrible captions, YouTube has terrible captions. And there is no good excuse for any of this. The DVD captioning standard allowed for everything this episode does and far more back in 1996! And yet, nobody really made full use of the possibilities aside from Night Watch, with Lord of the Rings coming in second place. As Pete Bleakley has reminded me (Thanks, Pete!), digital broadcast television, via the CEA-708 closed captioning standard, has had multicolor, positionable closed-captions since the late 1990s, with wide accessibility starting in 2009. Web video, of course, lagged significantly behind, but for well over a decade now even web browsers have had the built-in capacity to do, as closed-captions, everything that this What if... episode does, and far more.

Come on, streaming companies. If you're going to do captioning at all, please, do captioning right. It's not that hard!

