Last weekend, I gave a talk at the 10th Language Creation Conference on creating languages that do not use the human voice, in which I went over four case studies of successively more-alien phonologies. (One of which I have previously blogged about here.) Israel Noletto called it a "must-watch" for any speculative fiction writers putting created languages in their stories! Turns out, I had extra time, and could've talked about a fifth... but when I put together my abstract, I thought I'd be hard-pressed to fit 4 case studies in half an hour, so I cut it out. And so, I shall now present case study #5 here, in blog form!
After noodling over the cephalopod-inspired phonology for a while (for context, go watch my talk), it occurred to me that human sign languages and cephalopod communication have in common the feature that you can't flood an area with a linguistic signal the way that you can with a disembodied voice from a speaker system--they have to be displayed on a screen with a certain defined spatial extent, and even if it's a very big screen, the components of the signal are still not evenly distributed throughout space. So, could we create a light-based language that is broadcastable in the way that audio-encoded languages are? And what sort of creature could evolve to use such a system?

Well, trivially, yes, we can--just encode the existing language of your choice in Morse code (or something equivalent), and pulse the lights in a room in the appropriate pattern. Heck, people actually do this sometimes (although more often in thriller movies than in real life). But designing a language whose native phonology is Morse code is just... not that interesting. It doesn't feel materially different from designing a language to use the Latin alphabet, for example. We need more constraints to spark creativity here! So, what else could we do to more directly exploit the medium of non-localized light? In Sai's terms, how could we design something that is natural to the medium?
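To make the "trivial" version concrete: here is a minimal sketch of turning text into a timed on/off pulse train that could drive the lights in a room. The `MORSE` table and function names are my own illustration; the timing follows standard Morse conventions (one unit per dot, three per dash, one unit between symbols, three between letters).

```python
MORSE = {"s": "...", "o": "---"}  # tiny demo table; a real one covers the full alphabet

def pulse_train(text):
    """Return (state, duration) pairs; state True = light on, durations in Morse units."""
    pulses = []
    for i, ch in enumerate(text.lower()):
        if i > 0:
            pulses.append((False, 3))      # gap between letters
        for j, symbol in enumerate(MORSE[ch]):
            if j > 0:
                pulses.append((False, 1))  # gap between dots/dashes within a letter
            pulses.append((True, 1 if symbol == "." else 3))
    return pulses

print(pulse_train("sos"))
```

Feed those pairs to any light source with a controllable switch and you have flooded the room with a linguistic signal--which is exactly why it's not very interesting as a native phonology.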
A first thought is that light and sound are both wave phenomena, and one could just transpose sound waves directly into light waves, and use all the same kinds of tricks that audio languages do... except, it turns out that continuously modulating the frequency of light is considerably harder than modulating the frequency of sound. We can do it with frequency-modulated radio, but even there the sound's own frequencies are not transposed into the electromagnetic spectrum--the audio waveform just nudges the frequency of a fixed carrier--and comparable technology doesn't exist in the visible range at all. And if we look at how bioluminescence actually works in nature, no known organism can continuously modulate the frequency of its light output; they have a small number (usually just one) of biochemical reactions that produce a specific spectrum, and that's it.
But, a bioluminescent creature could do essentially the same thing we do with AM radio: ignore the inherent wave properties of the carrier signal entirely, and vary the amplitude over time to impose a secondary information-carrying waveform, which can be considerably more complex than the binary on/off of Morse signals, and can in fact have its own frequency and amplitude components. That doesn't mean high-contrast flashes couldn't still be involved--going back to nature again, the intraspecific visual signalling of fireflies, for example, is very Morse-like. But it can have more complex components, resulting in a higher bitrate that feels more suitable for a language that's on par with human languages in utility and convenience. Biological signal modulation can be done by controlling the rate of release of certain chemicals (e.g., the rate at which oxygen is introduced into a firefly's light organ to react with luciferin), or by physical motion of shutters to occlude the light to varying degrees (a common mechanism among, e.g., bioluminescent fish whose light is produced by symbiotic bacteria).
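For a rough sense of what amplitude modulation without a meaningful carrier looks like, here is a small numerical sketch: a baseline glow with a slow, information-carrying sine wave imposed on its brightness. All names and numbers here are illustrative, not drawn from any biological model.

```python
import math

def brightness(t, base=0.5, depth=0.4, mod_hz=4.0):
    """Emitted light intensity at time t (seconds), on a 0..1 scale.

    The optical carrier is ignored entirely (as with AM radio); the
    information lives in the slow variation of overall brightness.
    """
    return base + depth * math.sin(2 * math.pi * mod_hz * t)

# One second of signal, sampled at 100 Hz.
samples = [brightness(t / 100) for t in range(100)]
print(min(samples), max(samples))
```

A real signal would of course superimpose several such components at different rates and depths--that layering is where the extra bitrate over Morse-style on/off flashing comes from.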
So, now we have a single-channel, frequency-and-amplitude-modulable signal; the next obvious analogy to explore (at least obvious to me) is whistling registers (again, for context, go watch my talk, or listen to the Conlangery episode on Whistle Registers, in which I talk about my conlang Tjugem). However, we can't directly copy whistling phonology into this new medium, precisely because we are ignoring the wave nature of the carrier signal. For a creature with a high visual flicker-fusion rate, perceivable modulation frequencies could be fairly high, but still nowhere near the rate of audio signals; rather, frequency information would have to occupy about the same timescale as amplitude information. In other words, varying "frequency" would give you a distinction between amplitude changes that are fast vs. slow, but it would be much harder to do things like a simultaneous frequency-and-amplitude sweep while keeping each component distinguishable, the way you can with whistling. You could do it with flickering "eyelids" or chemical mixing sphincters (or, as the paper "Bioluminescent backlighting illuminates the complex visual signals of a social squid in the deep sea" puts it, "by altering conditions within the photophores (41) or by manipulating the emitted light using other anatomical features")--trills in human languages introduce low-frequency components of about the right scale--but just as the majority of phonemic tokens in spoken languages are not trills, I would expect that kind of thing to be relatively rare in a light-based language. (Side note: perhaps audio trills and rapid light modulation could both be considered analogous to cephalopod chromatic shimmer patterns.)
So, the possibilities for a single-channel light-based phonology are not quite as rich as those for a whistling phonology, although the possibility of trilling/shimmering does help a bit (even though, AFAIK, no natural whistle register makes use of trilling). But, while the number of channels available to a given bioluminescent species will be fixed, the number of channels that we choose to provide when constructing a fictional intelligent bioluminescent creature is not! And if they have multiple light organs that allow transmitting on multiple different color channels simultaneously, then just two channels would allow them to exceed the combinatorial possibilities of human whistle registers.
Using this sort of medium for communication would have some interesting technological implications. Recording light over time is in some ways much more difficult than mechanically recording sound, but reproducing it is trivial. Light-based semaphore systems for long-range communication with shuttered lanterns might be a blatantly obvious technology very early in history; and even if speech cannot be mechanically recorded, mechanical reproduction of natural-looking speech could still occur at a very low tech level (especially if the language is monochromatic), provided someone is willing to sit down for a while and manually cut out the right sequence of windows in a paper tape. Analog optical sound is in fact a technology that was really used in recent human history, and the reproduction step for a species using optical communication natively would be much simpler than it was for us, since there would be no need to translate the optical signal back into sound.
Now, there's a lot of literature on animal bioluminescence, but not a ton on specific signalling patterns used by different species... except for fireflies. So, if we want to move away from abstract theorizing and look at real-world analogs to extract a set of constraints for what a light-based language might look like, borrowing from firefly patterns is probably our best bet. Additionally, and in line with modelling off of fireflies, I am going to avoid using polychromatic signals and just see how far we can get with a single-channel design. After all, I already looked at a multi-channel / multi-formant signal system in the electroceptive phonology of Fysh A. I won't be sticking strictly to firefly patterns, because fireflies pretty much only use flashes, without significant variation in amplitudes, and that would end up being very Morse-like. However, per the US National Park Service, there are some interesting variations in the flashing patterns seen in various species; for example:
- Long, low-amplitude glows (not really a flash at all).
- Single, medium-amplitude flashes with long gaps.
- Pairs of medium-amplitude flashes.
- Trains of medium-amplitude flashes.
- Single high-amplitude flashes ("flashbulbs").
So, let's go ahead and define three amplitude bands that phonemic segments might occupy, analogous to the frequency bands that organize whistling phonologies:
- A low band, which allows continuous glows and smooth waves.
- A middle band, where we have to pause between blinks, but we can blink fast enough for multiple blinks to constitute a single segment.
- A high band, where recharge pauses are too long for sequential blinks to be interpreted as a single segment.
On top of the band distinctions, segments can vary in attack, decay, and peak structure:
- Slow attack vs. hard attack.
- Slow decay vs. hard decay (slow decay is only available in the low band; the upper bands only allow hard decay, since they use up all the luciferin!).
- "Tapped" -- a single amplitude peak.
- "Trilled" -- two or more close-spaced amplitude peaks (not available in the high band).
Combining those features gives the following possible segments in each band:
- Low
- slow, short, slow, tapped
- slow, short, slow, trilled
- slow, short, hard, tapped
- slow, short, hard, trilled
- slow, long, slow, tapped
- slow, long, slow, trilled
- slow, long, hard, tapped
- slow, long, hard, trilled
- hard, short, slow, tapped
- hard, short, slow, trilled
- hard, short, hard, tapped
- hard, short, hard, trilled
- hard, long, slow, tapped
- hard, long, slow, trilled
- hard, long, hard, tapped
- hard, long, hard, trilled
- Mid
- slow, tapped
- slow, trilled
- hard, tapped
- hard, trilled
- High
- slow attack
- hard attack
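A quick way to sanity-check that grid is to generate it mechanically; here is a small sketch using the feature names from the list above (the band-specific restrictions are the ones just described: no length or slow-decay contrast outside the low band, no trills in the high band).

```python
from itertools import product

# Low band: attack x length x decay x tap/trill.
low = list(product(["slow", "hard"],    # attack
                   ["short", "long"],   # length
                   ["slow", "hard"],    # decay
                   ["tapped", "trilled"]))

# Mid band: hard decay is forced and there is no length contrast,
# leaving only attack x tap/trill.
mid = list(product(["slow", "hard"], ["tapped", "trilled"]))

# High band: recharge pauses rule out trills, leaving only attack.
high = ["slow attack", "hard attack"]

print(len(low), len(mid), len(high))  # 16 4 2
```

Sixteen low-band segments, four mid-band, and two high-band, matching the enumeration above.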
For purposes of this sketch, I'll select the following phonemes for maximal distinction:
- Low
- slow, short, slow, trilled - <w>
- slow, long, slow, tapped - <r>
- hard, long, slow, tapped - <t>
- Mid
- slow, tapped - <d>
- slow, trilled - <rr>
- hard, tapped - <k>
- High
- slow attack - <b>
- hard attack - <p>
Given that inventory (three low, three mid, and two high phonemes), some possible syllable shapes and their counts:
- L>H: 6 possible syllables
- L>M>H: 18 possible syllables
- M>H: 6 possible syllables
- L>MM: 27 possible syllables
- MM>H: 18 possible syllables
- LL>M>H: 54 possible syllables
- L>MM>H: 54 possible syllables
And suddenly, it has become impractical to write with a syllabary!
As I watched your talk I began to wonder if you’ve ever read Krueger’s paper ‘Language and techniques of communication as theme or tool in sf’. Krueger harshly criticised sf writers for their unimaginative take on xenolinguistics. He’d be happy to listen to you, I believe. The paper was published back in the 60s. More recent literature, like China Miéville’s Embassytown (2011), has raised the bar in that regard. Still, in terms of otherworldly linguistics I’m yet to encounter anything like your ‘mad’ ideas :)
I hope speculative writers get to see your paper and your blog. This way, academic writers like myself, who are interested in glossopoesis, will have much more to write about.
I hadn't read it, but now I have! And it reminded me that I should add Cycle of Fire to my list of things to review, probably in comparison with Mission of Gravity and Close to Critical, other Hal Clement works which use the "they just learned our language" approach, and maybe Needle....