Wednesday, July 12, 2023

Some Old Spaceship Models

Working on models of spacecraft modules this week reminded me that I once was actually quite into this kind of thing, so I dug out some old POV files of spacecraft models stored on an external hard drive from my childhood (the files, not the hard drive--that's only a few years old), and I had quite a lot of stuff saved that I had forgotten about. Apparently, I was at one point working on a model of a universal docking adapter, which I never finished:


My old spacecraft models have a number of other repeated components--engines, fuel tanks, etc.--but most of them do not have their own separate model files yet. A few which do include these rocket engine exhaust plumes:



And this RCS flywheel housing:


And it seems that I got most of the way through a decent recreation of the forest domes of the Valley Forge, from the old sci-fi film Silent Running:


And it seems I have one incomplete spacecraft design loosely based on the Valley Forge aesthetic:




And I've got a partial section of a spar intended for supporting a fission reactor, which I think was just an experiment in trying to get gold foil surface texturing to look right:


In terms of complete spacecraft models, the Minos is not fully detailed but it is a whole ship, which I used for TTRPG purposes in high school:



Here's a model of a rotating dumbbell station, with the crew module at one end connected to a central airlock, and a counterweight for equipment:


Next we have a nuclear rocket design, with the engine markedly far away from the crew quarters and other ship components on a long spar:


A small asteroid tugboat:



Which is carried by a Bussard ramjet:



And I played around a little bit with schematics for possibly slightly more realistic magscoops / magsails:



Here we have a large interstellar probe with instrument booms:


A gigantic tanker with three engines and a tiny little crew module up front:


And a final design vaguely modelled off the Orion concept, but not actually using an Orion pusher plate, which I called the Pilgrim:



The key feature of this last ship is the rotating passenger sections supported by cabling on winches, which swing in and out to keep the passenger modules aligned with apparent gravity as the ship switches between thrust and spin.

I've put all of these old models, my new tiling modules, and some other cruft that needs organizing, in a GitHub repository, so y'all can follow along as I clean stuff up and get those old components integrated into my new spacecraft design system.

Dense Fractal Space Construction

Suppose we want to design a series of basic reusable modules that can serve as the basis of a design aesthetic for some spacefaring civilization.

Let's start with a sphere, for maximum strength and material economy. It should be as easy as possible to expand stations or ships built out of these modules over time, and we'd like subsequently-added sphere modules to pack together as tightly as possible to minimize travel distances within the structure. That means each sphere should be able to connect to 12 others (the 3-dimensional kissing number), so each basic module should have twelve cylindrical attachment points (unused ones can be capped off with windows or airlocks or sphere-section bulkheads). There are two ways to arrange those--with stacked layers either aligned every second layer, or shifted by one lattice position--but the aligned arrangement, corresponding to placing connections at the vertices of a cuboctahedron (or on the faces of a rhombic dodecahedron), maximizes symmetry and maximizes the number of straight-line paths from one attachment point through the center of the sphere to an opposite attachment point, which makes it easy to build straight-line structures with individual modules in many different orientations.
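
To make that geometry concrete, here's a minimal Python sketch (my own illustration, not part of any model file) that generates the twelve connection directions and checks the opposite-pairs property:

```python
from itertools import permutations, product

def connection_directions():
    """The 12 nearest-neighbor directions of the FCC lattice--
    equivalently, the vertices of a cuboctahedron: every distinct
    permutation of (+-1, +-1, 0)."""
    dirs = set()
    for sx, sy in product((1, -1), repeat=2):
        dirs.update(permutations((sx, sy, 0)))
    return sorted(dirs)

dirs = connection_directions()
assert len(dirs) == 12                                # the kissing number
assert all(sum(c * c for c in d) == 2 for d in dirs)  # all equidistant
# Each attachment point has a diametrically opposite partner, giving
# six straight-line axes through the center of every module:
assert all(tuple(-c for c in d) in dirs for d in dirs)
```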

Standardized modules and standardized attachment points, resulting in standard exterior spacing between components, also suggest that these modules should come with standardized exterior anchor points for EVA, which means nicely standardized greeblies completing the design aesthetic. While doing a high ropes course once, I came across an ingenious design for a passive Continuous Belay System (a system which allows you to move around a structure relatively freely without ever becoming disconnected from a safety line)--a special C-clip is used which has an opening large enough for flat plates to pass through, which allows passing by mounting points for cables, but small enough that it cannot come off the cable itself. Specially-designed intersection plates also allow sliding the clip from one cable to another without ever becoming detached. This is slightly more restrictive than two-carabiner systems, where you move one at a time while keeping the other attached, but also much more idiot-proof, and provides a functional set of greeblies in terms of the anchor and intersection plates. The details of how the plates are designed are not easily visible when zoomed out to look at a whole module, but they do provide nice greebling.


Now, you can just densely tile space with the standard minimum-sized modules. But a collection of 13 basic modules forming a "raspberry" has the 12 exterior modules arranged such that each one has one of its connection points exactly at the vertex of a larger cuboctahedron.


Thus, these raspberry units can be used to tile space as well, in exactly the same pattern as the base units. (Also note that the doubled anchor-rings around each intermodule connection provide multiple "lanes" for workers to pass each other on EVA.) One might choose to do so, leaving a lot of cluster-internal attachment points unused, so as to set up a recursive city-like structure of local neighborhoods, larger districts, and whole cities. But additionally, these can be used to define a scale for larger modules that will fit into the same system. For example, a large sphere can enclose a raspberry except where the 12 external attachments poke through, providing double-hull protection for the interior lower-scale modules, or permitting leaving out some members of the full cluster while maintaining a standard external interface. In the following image (which took forever to render), 13 raspberries are each encased in a glass secondary hull and used to form a second-scale raspberry structure.
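
For anyone who wants to play with the recursion, here's a sketch of how module positions nest, reusing connection_directions() from the sketch above. The factor of 3 per level is my own reading of the geometry: each outer module's outward connection point sits 1.5 diameters from the cluster center, so adjacent raspberries pack with centers 3 base-spacings apart.

```python
def raspberry(level, center=(0.0, 0.0, 0.0), spacing=1.0):
    """Module centers for a level-n raspberry: level 0 is one module;
    each level places 13 copies of the previous level (one central,
    12 at the cuboctahedron directions), tripling the spacing."""
    if level == 0:
        return [center]
    step = spacing * 3 ** (level - 1)
    modules = []
    for d in [(0, 0, 0)] + connection_directions():
        sub = tuple(c + step * dc for c, dc in zip(center, d))
        modules += raspberry(level - 1, sub, spacing)
    return modules

assert len(raspberry(1)) == 13    # a single raspberry
assert len(raspberry(2)) == 169   # a raspberry of raspberries
```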


And, once larger-scale modules are established as A Thing, you can just have monolithic modules of that size which have whatever internal configuration you want, for holding stuff that needs more space (say, gravity centrifuges, or nuclear reactors, or whatever), and smaller scale modules will fit nicely in between them.

Non-spherical modules can also be added to the system as long as they have attachment points that align with the lattice points for their specified scale. So, if you have, e.g., a long mass driver or something, you don't need to waste a ton of sphere space on it--you can make a cylindrical module with attachment points spaced appropriately to tessellate more standard modules around it. More prosaically, however, there may be some utility in having polyhedral modules.


This cuboctahedral module, with connectors at each vertex, has less interior volume and lower inherent pressure-hull capability, but provides the convenience of flat surfaces for mounting equipment on the exterior, and more room between modules for maintenance access. As a raspberry cluster, it looks like this:


Meanwhile, this rhombic dodecahedral module, with connection points in the center of each face, does a better job of efficiently filling space, leaving less wasted space between modules.


That's no good for external maintenance access, but when working with larger scale modules, where there would be more wasted space between spheres, this may be a good option, at least for low-pressure environments. As a raspberry cluster, it looks like this:



Tuesday, July 4, 2023

Truly Alien Languages

A few weeks ago, I watched Avatar: The Way of Water, and I was rather disappointed at the reduction in linguistic worldbuilding effort compared to the original Avatar. I'll do a proper review on that at some point when I am not in the middle of an interstate move with three children, but in the meantime I want to talk a bit about properly alien languages--the sort that cannot be accurately pronounced, or produced at all, by unassisted human actors, because they use sounds that humans can't make, or signs that we can't perform, or completely inaccessible modalities. Specifically, I want to talk about how these show up in movies and TV shows--or rather, how they don't show up.

The Star Wars universe features numerous aliens that produce non-human-like speech--most notably, Wookiees--but as portrayed on screen, it's all gibberish. The Prawns from District 9 (<- Amazon affiliate link) have non-human-like speech... but again, it's just gibberish. Star Trek: Picard, season 3 features a few lines of subtitled non-humanoid dialog, in a receptive-multilingualism situation in conversations between Vadic and her crew... but it's just gibberish, despite Star Trek being a franchise that has already shown willingness to invest in conlangs for Vulcan and Klingon. And Star Trek: Discovery, season 4 (proper review coming later) features a modally-alien language encoded in a combination of spatially-and-temporally distributed visual signals and chemical messaging... but it's just gibberish!

Clearly, modern TV and movie productions are willing to invest in producing proper conlangs, and they are willing to pay audio and visual artists to produce asemic representations of Truly Alien Languages; so why have we not yet seen these two things combined? Even the Avatar franchise, which introduced the Na'vi language, seems to have completely punted on the language of the Tulkun, which to all appearances is whalesong-esque (you guessed it!) asemic gibberish!

Two plausible reasons have been suggested to me by other members of the Language Creation Society:

I fear studios wouldn't be interested in commissioning a conlang they wouldn't be able to use for promotional purposes, unlike, for instance, High Valyrian.
--Baptiste Faussat
I interpret "promotional purposes" in this sense as meaning "that fans can learn"--but most of the conlangs that, e.g., David Peterson has been commissioned for have never developed huge fan speaker communities, or any fan speaker communities, and there frequently isn't enough public material for that to even be possible, yet such commissions continue.
I feel like a non-human-pronounceable conlang in a movie would be a nightmare of going back and forth with sound designers and such.
--George Corley

As opposed to just telling the designers to go wild once, and not requiring any back-and-forth with multiple people. That seems... unfortunately plausible. It's also entirely plausible that it has just never crossed the mind of any producer or director that anyone would ever care, and it probably won't until there is some initial breakthrough production that demonstrates it. I had hoped that Avatar: The Way of Water would turn out to be such a production, but it failed me. So, the opportunity to be the first is still open!

At the moment, I am putting my hopes in Project Hail Mary, which is currently in pre-production to star Ryan Gosling as Ryland Grace. (See also my previous linguistic review of the book, along with other works by Andy Weir.) Decoding the Truly Alien Language in this story is in fact a significant feature of the plot, so I rather hope that they will see fit to actually produce a language to decode! But given my disappointment with Avatar: The Way of Water, I kinda worry that they'll make a hash of it after all unless someone pushes for them to do it right. So, here's my public plea to Phil Lord, Christopher Miller, Ryan Gosling, Andy Weir, and anyone else involved in the production: please, hire a conlanger for Eridian!

(Incidentally, I'm available! Contact me! But soliciting applications through the LCS is also an excellent option!)

Stay tuned for a future blog post where I start working out details of a canon-compatible Eridian language, after I dig my copy of Project Hail Mary-the-book out of whichever box it ended up in so I can re-extract all the details that I didn't quote in my last review. I can probably even build off my previous work on Fysh A and Tjugem to produce synthesized speech samples for Rocky, though I admit I have no idea whether that would be considered helpful or annoying by any potential film sound design department that one might end up going back-and-forth-with....

If you liked this post, please consider making a small donation!


Tuesday, April 25, 2023

A Loudspeaker-Compatible Photo-Phonology

Last weekend, I gave a talk at the 10th Language Creation Conference on creating languages that do not use the human voice, in which I went over four case studies of successively more-alien phonologies. (One of which I have previously blogged about here.) Israel Noletto called it a "must-watch" for any speculative fiction writers putting created languages in their stories! Turns out, I had extra time, and could've talked about a fifth... but when I put together my abstract, I thought I'd be hard-pressed to fit 4 case studies in half an hour, so I cut it out. And so, I shall now present case study #5 here, in blog form!

After noodling over the cephalopod-inspired phonology for a while (for context, go watch my talk), it occurred to me that human sign languages and cephalopod communication have in common the feature that you can't flood an area with a linguistic signal the way that you can with a disembodied voice from a speaker system--they have to be displayed on a screen with a certain defined spatial extent, and even if it's a very big screen, the components of the signal are still not evenly distributed throughout space.

So, could we create a light-based language that is broadcastable in the way that audio-encoded languages are? And what sort of creature could evolve to use such a system? Well, trivially, yes, we can--just encode the existing language of your choice in Morse code (or something equivalent), and pulse the lights in a room in the appropriate pattern. Heck, people actually do this sometimes (although more often in thriller movies than in real life). But designing a language whose native phonology is Morse code is just... not that interesting. It doesn't feel materially different from designing a language to use the Latin alphabet, for example. We need more constraints to spark creativity here! So, what else could we do to more directly exploit the medium of non-localized light? In Sai's terms, how could we design something that is natural to the medium?

A first thought is that light and sound are both wave phenomena, and one could just transpose sound waves directly into light waves, and use all the same kinds of tricks that audio languages do... except, it turns out that continuously modulating the frequency of light is considerably harder than modulating the frequency of sound. We can do it with frequency-modulated radio, but that's still not how we actually encode audio signals in radio, and similar technology just doesn't exist in the visible range. And if we look at how bioluminescence actually works in nature, no known organism has the ability to continuously modulate the frequency of their light output; they have a small number (usually just one) of biochemical reactions that produce a specific spectrum, and that's it.

But, a bioluminescent creature could do essentially the same thing we do with AM radio: ignore the inherent wave properties of the carrier signal entirely, and vary the amplitude over time to impose a secondary information-carrying waveform, which can be considerably more complex than the binary on/off of Morse signals, and can in fact have its own frequency and amplitude components. That doesn't mean high-contrast flashes couldn't still be involved--going back to nature again, the intraspecific visual signalling of fireflies, for example, is very Morse-like. But it can have more complex components, resulting in a higher bitrate that feels more suitable for a language that's on par with human languages in utility and convenience. Biological signal modulation can be done by controlling the rate of release of certain chemicals (e.g., the rate at which oxygen is introduced into a firefly's light organ to react with luciferin), or by physical motion of shutters to occlude the light to varying degrees (a common mechanism among, e.g., bioluminescent fish whose light is produced by symbiotic bacteria).

So, now we have a single-channel frequency-and-amplitude-modulable signal; the next obvious analogy to explore (at least obvious to me) is whistling registers (again, for context, go watch my talk, or listen to the Conlangery episode on Whistle Registers in which I talk about my conlang Tjugem). However, we can't directly copy whistling phonology into this new medium, precisely because we are ignoring the wave nature of the carrier signal; for a creature with a high visual flicker-fusion rate, perceivable modulation frequencies could be fairly high, but still nowhere near the rate of audio signals; rather, frequency information would have to occupy about the same timescale as amplitude information. In other words, varying "frequency" would give you a distinction between amplitude changes that are fast vs. slow, but it would be much harder to do things like a simultaneous frequency-and-amplitude sweep and keep each component distinguishable, the way you can with whistling. You could do it with flickering "eyelids" or chemical mixing sphincters (or, as Bioluminescent backlighting illuminates the complex visual signals of a social squid in the deep sea puts it, "by altering conditions within the photophores (41) or by manipulating the emitted light using other anatomical features")--trills in human languages introduce low-frequency components of about the right scale--but just as the majority of phonemic tokens in spoken languages are not trills, I would expect that kind of thing in a light-based language to be relatively rare. (Side note: perhaps audio trills and rapid light modulation could both be considered analogous to cephalopod chromatic shimmer patterns.)

So, the possibilities for a single-channel light-based phonology are not quite as rich as those for a whistling phonology, although the possibility of trilling/shimmering does help a bit (even though, AFAIK, no natural whistle register makes use of trilling). But, while the number of channels available to a given bioluminescent species will be fixed, the number of channels that we choose to provide when constructing a fictional intelligent bioluminescent creature is not! And if they have multiple light organs that allow transmitting on multiple different color channels simultaneously, then just two channels would allow them to exceed the combinatorial possibilities of human whistle registers.

Using this sort of medium for communication would have some interesting technological implications. Recording light over time is in some ways much more difficult than mechanically recording sound, but reproducing it is trivial. Light-based semaphore code systems for long-range communication with shuttered lanterns might be a blatantly obvious technology very early in history; and even if it cannot be mechanically recorded, if someone is willing to sit down for a while and manually cut out the right sequence of windows in a paper tape, mechanical reproduction of natural-looking speech could also occur at a very low tech level (especially if the language is monochromatic). Analog optical sound is in fact a technology that was really used in recent human history, and the reproduction step for a species using optical communication natively would be much simpler than it was for us, as there's no need for them to do the translation step from optical signal back into sound.

Now, there's a lot of literature on animal bioluminescence, but not a ton on specific signalling patterns used by different species... except for fireflies. So, if we want to move away from abstract theorizing and look at real-world analogs to extract a set of constraints for what a light-based language might look like, borrowing from firefly patterns is probably our best bet. Additionally, and in line with modelling off of fireflies, I am going to avoid using polychromatic signals, and just see how far we can get with a single-channel design. After all, I already looked at a multi-channel / multi-formant signal system in the electroceptive phonology of Fysh A. I won't be sticking strictly to firefly patterns, because fireflies pretty much only use flashes, without significant variation in amplitudes, and that would end up being very Morse-like. However, per the US National Park Service, there are some interesting variations in the flashing patterns seen in various species; for example:
  • Long, low-amplitude glows (not really a flash at all).
  • Single, medium-amplitude flashes with long gaps.
  • Pairs of medium-amplitude flashes.
  • Trains of medium-amplitude flashes.
  • Single high-amplitude flashes ("flashbulbs").
I see a pattern going on here that may just be a coincidence, but seems like a plausible restriction on a bioluminescent alien: for a creature like a firefly which is using its own metabolic resources to produce light, rather than relying on symbiotic bacteria, there may be a maximum average rate at which power can be delivered to photophores, thus implying that, while you can glow at a low level indefinitely, brighter flashes, using more power all at once, entail a longer recovery period between flashes to "recharge". So, IF. YOU. ARE. SHOUTING. YOU. MUST. SPEAK. SLOWER. This is analogous to the amplitude-frequency dependence seen in the Fysh A electroceptive phonology.

So, let's go ahead and define three amplitude bands that phonemic segments might occupy, analogous to the frequency bands that organize whistling phonologies:

  1. A low band, which allows continuous glows and smooth waves.
  2. A middle band, where we have to pause between blinks, but we can blink fast enough for multiple blinks to constitute a single segment.
  3. A high band, where recharge pauses are too long for sequential blinks to be interpreted as a single segment.
These are sort of analogous to "places of articulation". Then, we can also define attack/decay characteristics for each blink--something like "manners of articulation":
  1. Slow attack vs. hard attack
  2. Slow decay vs. hard decay--only available in the low band; the upper bands only allow hard decay, since they use up all the luciferin!
And, furthermore, we can have a distinction between:
  1. "Tapped" -- a single amplitude peak.
  2. "Trilled" -- two or more close-spaced amplitude peaks (not available in the high band)
And, in the low band only, a unique distinction between short peaks and long peaks.
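
To make these features concrete, here's a toy synthesis sketch in Python; every time constant and amplitude level in it is invented purely for illustration:

```python
import numpy as np

RATE = 1000  # envelope samples per second; the light carrier is ignored

def blink(amp, attack, decay, hold_s):
    """One amplitude peak: rise, hold, fall."""
    rise = np.linspace(0, amp, int(RATE * (0.08 if attack == "slow" else 0.01)))
    hold = np.full(int(RATE * hold_s), amp)
    fall = np.linspace(amp, 0, int(RATE * (0.08 if decay == "slow" else 0.01)))
    return np.concatenate([rise, hold, fall])

def segment(band, attack="slow", decay="hard", trilled=False, long_peak=False):
    """Intensity envelope for one segment in the feature system above."""
    amp = {"low": 0.3, "mid": 0.65, "high": 1.0}[band]
    if band != "low":
        decay = "hard"        # upper bands burn all their luciferin at once
        long_peak = False     # long peaks are a low-band-only distinction
    if band == "high":
        trilled = False       # recharge pauses are too long to trill
    pulse = blink(amp, attack, decay, 0.25 if long_peak else 0.1)
    gap = np.zeros(int(RATE * 0.03))  # brief dark gap inside a trill
    return np.concatenate([pulse, gap, pulse] if trilled else [pulse])
```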

So, now we can map out the complete set of distinctive segments that might exist--the alien IPA!
  1. Low
    1. slow, short, slow, tapped
    2. slow, short, slow, trilled
    3. slow, short, hard, tapped
    4. slow, short, hard, trilled
    5. slow, long, slow, tapped
    6. slow, long, slow, trilled
    7. slow, long, hard, tapped
    8. slow, long, hard, trilled
    9. hard, short, slow, tapped
    10. hard, short, slow, trilled
    11. hard, short, hard, tapped
    12. hard, short, hard, trilled
    13. hard, long, slow, tapped
    14. hard, long, slow, trilled
    15. hard, long, hard, tapped
    16. hard, long, hard, trilled
  2. Mid
    1. slow, tapped
    2. slow, trilled
    3. hard, tapped
    4. hard, trilled
  3. High
    1. slow attack
    2. hard attack
And we could also have phonemic lengthening of the darkness following a hard decay for the tapped segments in the lower bands, which would give us an additional 10 possible segments, for a total of 32. Note that there's not really anything here that corresponds to "vowels". You might try to think of the low+long or low+slow-decay+trilled segments as vowels, or at least continuants, but they don't have the amplitude peaks that we would typically associate with human vowels as syllable nuclei. In fact, the whole basis of human syllable structure is missing! Instead, we might organize segments into larger units based on what kinds of segments can start or end those units--kind of like I did in Fysh A with initial and non-initial segments. The higher amplitude bands make it harder to follow up quickly with additional segments, so it would make sense if those are finals in larger, syllable-analogous units, and we end up with alien syllables that terminate in amplitude peaks rather than having them in the middle--kinda like all of their syllables are "CV" (but recall that we don't actually have a good analogy for vowels here!)
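
The segment arithmetic is easy to verify mechanically. Here I'm reading the geminated-darkness feature as applying to all ten tapped segments in the low and mid bands (8 + 2), which is the reading that matches the stated total of 32:

```python
from itertools import product

low  = list(product(("slow", "hard"),      # attack
                    ("short", "long"),     # peak length
                    ("slow", "hard"),      # decay
                    ("tapped", "trilled")))
mid  = list(product(("slow", "hard"),      # attack
                    ("tapped", "trilled")))
high = ["slow", "hard"]                    # attack only

assert (len(low), len(mid), len(high)) == (16, 4, 2)   # 22 base segments
geminable = ([s for s in low if s[3] == "tapped"] +
             [s for s in mid if s[1] == "tapped"])      # 10 more
assert len(low) + len(mid) + len(high) + len(geminable) == 32
```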

Now, with 32 different possible segments to choose from, with varying degrees of distinctiveness, not all languages in this phonetic space will use all of them, or choose exactly the same subset--just like human languages don't all use every possible human spoken phone! In particular, the low-band segments will be the most difficult to distinguish on average, due to being the "quiet"-est, so I would expect languages to vary significantly in exactly which low-band segments they utilize.

For purposes of this sketch, I'll select the following phonemes for maximal distinction:

  1. Low
    1. slow, short, slow, trilled - <w>
    2. slow, long, slow, tapped - <r>
    3. hard, long, slow, tapped - <t>
  2. Mid
    1. slow, tapped - <d>
    2. slow, trilled - <rr>
    3. hard, tapped - <k>
  3. High
    1. slow attack - <b>
    2. hard attack - <p>
Plus long <tt> and <dd>, exploiting the geminated-darkness feature, giving us a total of 10 distinct phonemes. As in the canine phonology sketch, that's not a ton (actually fewer than occur even in Rotokas, with its famously small phonemic inventory), but if we look at organizing the language in terms of possible syllables rather than possible segments, things look better. If we specify that every syllable must rise from darker segments to brighter segments, terminating with its brightest segment (the syllable ends as soon as we see a drop), then we get the following possible syllable types:

L>M: 16 possible syllables
L>H: 8 possible syllables
L>M>H: 32 possible syllables
M>H: 8 possible syllables

For a total of 64--and that's without allowing multiple segments of a single type per syllable! If we allow clusters of low or mid segments, we get multiplicative gains. Again, different languages of this same theoretical species could vary in what kinds of clusters they allow, just as, e.g., Russian differs from Hawai'ian, so perhaps there are small-phonology languages that allow no clusters, but for convenience let's say that in this sketch we'll allow either two low segments or two mid segments per syllable; then we get:

LL>M: 64 possible syllables
L>MM: 64 possible syllables
MM>H: 32 possible syllables
LL>M>H: 128 possible syllables
L>MM>H: 128 possible syllables

And suddenly, it has become impractical to write with a syllabary!
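
If you'd rather not trust my arithmetic, a few lines of Python will check those counts:

```python
from itertools import product

low, mid, high = ("w", "r", "t", "tt"), ("d", "rr", "k", "dd"), ("b", "p")

def count(*slots):
    return len(list(product(*slots)))

assert (count(low, mid) + count(low, high) +
        count(low, mid, high) + count(mid, high)) == 64    # no clusters

assert (count(low, low, mid) + count(low, mid, mid) +
        count(mid, mid, high) + count(low, low, mid, high) +
        count(low, mid, mid, high)) == 416                 # with clusters
```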

After my LCC presentation, I had a conversation with Biblaridion in which he pointed out an aspect of all of these non-IPA-codable languages that's directly relevant to writing stories with them: who can perceive them, and who can produce them? Audio- and visually-coded languages like the canine sketch, the cephalopod sketch, and Tjugem can all be perceived by humans, so we could in principle develop receptive multilingualism in them, even if we couldn't produce them (and in the case of languages like Tjugem, we can even learn to produce them, even though they don't use typical human phonemes). This "firefly" phonology falls into that class as well--if humans can learn to decode Morse code, surely we could learn to understand a firefly phonology, but we couldn't reply in the same language, or at least not in the same modality, without technological assistance. Fysh A presents a more extreme case--if there were, say, some intelligent star-nosed moles with electroceptive noses inhabiting the Fysh's world, they could gain receptive competence while being mute, but humans can neither produce nor even perceive the language without technological assistance. This suggests a new pathway for developing alien creatures: decide what communicative barriers you need in place to drive the plot, pick a modality that makes that work, and design your creatures to make it plausible for them to communicate in that modality. In fact, on further reflection, this seems to be exactly what H. Beam Piper did for Little Fuzzy (and you thought I would get through this whole post without an affiliate link! ha!)--the Fuzzies do communicate with sound, but in a frequency range that humans can neither hear nor replicate!

Monday, April 10, 2023

Reactionless Drives & Over-Unity Devices

So, IVO Ltd is getting ready to launch the IVO Quantum Drive, based on the theory of Quantized Inertia, into space, as a fuelless, pure-electric method of satellite propulsion. The company claims that this "is not a reactionless system" and that they can "move spacecraft without fuel and without violating Newton’s laws of motion", but... it's not at all obvious how those two things can both be true at the same time.

It certainly looks like it would violate conservation of energy, conservation of momentum, and conservation of angular momentum, so I fully expect it to not work. If it does work, it'll increase job security for a lot of physicists, and either overturn all the basic conservation laws, or require explanation of why it's not really violating them after all; maybe space is actually quantized in a way that breaks the assumptions underlying Noether's theorem; maybe it's dumping momentum somewhere else in place of dumping it into local reaction mass; maybe the total energy of the universe is zero, and extra energy produced by the thruster is balanced by negative energy in the gravitational field. [1]

But regardless of how that all shakes out, the claimed effect of an IVO Quantum Thruster is that you put a particular amount of electrical power in, and you get force out, in a fixed ratio, without expelling any reaction mass. And that has implications.

It turns out there is one kind of fuelless thruster that really exists: the photon rocket. Or in other words, the flashlight! Photons have momentum, and apply forces, and you can make them with nothing but electricity--no need to carry fuel. That's why solar sails work. But, flashlights produce extremely tiny amounts of thrust per watt--obviously, because you don't have to worry about recoil when turning on a flashlight! In fact, a flashlight (or anything else that produces light pointed mostly in one direction) only produces about 3.33 micronewtons of thrust per kilowatt--or, in other words, requires 299,792,458 watts to produce 1 newton of thrust! If that's a familiar-looking number to you, that's because, in SI units, a watt divided by a newton has the same units as velocity, and the number you get out for the ratio of power to thrust for a photon is, in fact, the speed of light! 299,792,458 W / 1 N =  299,792,458 m/s.

If your fuelless thruster produces that amount of thrust or less, you might as well just use a flashlight. And if you are getting the energy to power it from solar panels, as IVO is, then you might as well use a solar sail, to get double the momentum from the bounce! If you can produce a better power-to-thrust ratio than that, then either you have disproven relativity and established the existence of an absolute reference frame, with absolute motion altering the power requirements of your device... or, if you have a fixed power consumption and a device that respects Galilean relativity, you have a free energy machine. A perpetual motion machine of the first kind. An Over-Unity Device. Maybe it's stealing energy from elsewhere in the universe, maybe it's separating negative energy in the gravitational field, but however you resolve the problems it has with Newtonian mechanics, you've got a device that can output more energy than you put in, without fuel.

Why? Simply because, given constant acceleration under a constant force, the energy consumed by the device grows linearly, but the kinetic energy of the device grows quadratically! Thus, there exists some critical speed at which the curves will cross, and the next incremental increase in speed from activating the thruster will result in the device gaining more energy as kinetic energy than it took to power the thruster to produce that acceleration.

So, why can't you turn a flashlight into an over-unity device? Well, we can calculate the critical speed with some pretty basic Newtonian mechanics. The work done by a force is force times distance. If a particular force is applied over a particular distance over a period of time, we get power: work done per time. So we just need to find the speed--distance over time--such that a force applied at that speed produces the same amount of power going into kinetic energy as it takes to power the device. With a little algebra, we get:

E = F * D
E/t = F * D/t
P = F * V
V = P / F

In other words, the critical velocity is the per-newton power consumption, and as we saw above, for a photon rocket, that is the speed of light! A flashlight can't go faster than the speed of light, because nothing can go faster than the speed of light, so you can never meet the conditions to produce excess energy. But if thrust per watt goes up, then the critical speed goes down, and then you've got an over-unity device. [2]
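
For the spreadsheet-inclined, here's the break-even condition in Python (a sketch of my own, just restating the algebra above):

```python
C = 299_792_458.0  # speed of light, m/s

def critical_speed(power_w, thrust_n):
    """V = P / F: above this speed, each second of operation adds more
    kinetic energy than the device consumes."""
    return power_w / thrust_n

# A photon rocket needs c watts per newton, so its break-even speed is
# the speed of light itself -- unreachable, which is why a flashlight
# can never be an over-unity device.
assert critical_speed(C, 1.0) == C

# Footnote [2] below uses a threshold twice as high (2P/F): the speed
# at which *accumulated* kinetic energy catches up with *total* energy
# spent, rather than the point where the instantaneous rates cross.
```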

So, if the IVO Quantum Thruster, or some other fuelless propulsion device, actually worked as advertised, how could we use it to engineer a practical power plant?

Let's start out by defining some useful variables and formulas:

Pi: the electrical power input to the thruster.
η: the ratio of force to power input, such that Pi*η gives the thrust.
Vo: the velocity of the thruster under power-producing operation.
Pk = Pi*η*Vo: the power delivered as kinetic energy to the thruster (and any supporting structures).
ε: the efficiency coefficient for converting kinetic energy back into electrical energy.
Pe = ε*Pk: the electrical power produced by the device.
Po = Pe - Pi: the usable output power of the device, after some electrical power is used to continue powering the thruster.
G = Po/Pi = ε*η*Vo - 1: the amplification gain factor, or ratio between input and output power.
Vg = (G + 1)/(ε*η): the velocity required to achieve a desired gain factor after accounting for inefficiencies in the equipment. When G = 0, this is the minimum operating velocity.
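
And here are those definitions as a throwaway Python calculator (my own sketch, using the η value quoted in the next paragraph), which reproduces the worked numbers in the rest of this section:

```python
def gain(eta, v_o, eps=0.8):
    """G = eps*eta*Vo - 1: net output watts per input watt."""
    return eps * eta * v_o - 1

def v_for_gain(eta, g=0.0, eps=0.8):
    """Vg = (G + 1)/(eps*eta); with G = 0, the minimum operating velocity."""
    return (g + 1) / (eps * eta)

def centrifuge_gs(v, radius, g0=9.81):
    """Centripetal load, in gs, at tangential speed v on an arm of radius r."""
    return v ** 2 / (radius * g0)

eta = 0.052                          # the claimed 52 mN/W
print(v_for_gain(eta))               # ~24.04 m/s minimum operating velocity
print(gain(eta, 30))                 # ~0.248 gain at 30 m/s
print(800 * gain(eta, 100))          # ~2528 W out for 800 W in at 100 m/s
print(centrifuge_gs(100, 0.5))       # ~2040 g on a 0.5 m arm
print(80e3 * gain(eta, 2214) / 1e6)  # ~7.29 MW for the 100 m municipal station
```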

The IVO thruster is supposed to have a max η of 52 millinewtons / watt. That is a ridiculously huge value. The Rocketdyne F1 engines which powered the first stage of the Saturn V rocket had a thrust-to-power ratio of about 0.77 millinewtons / watt, less than 1/65th as much. That makes the IVO thruster roughly 15.6 million times more performant than a flashlight, which should be the physical limit, according to currently-known-and-accepted physics. In other words, if these thrusters actually work, they would not just be useful for stationkeeping and orbit boosting of satellites--you could replace the rocket engines with them, cut the fuel load by a factor of 65, run the remaining liquid hydrogen and oxygen through a fuel cell to produce electrical power to run a bank of quantum thrusters, and use that to launch an entire Saturn V into orbit. And if we can build a free-energy device, we don't even need the fuel cells, so continuing with that...

Mechanical-to-electrical conversion efficiencies for alternators can be pretty high--80% to 90%--so let's go ahead and set ε to 0.8. This gives us a minimum operating velocity (with zero gain) of just over 24 m/s. Not all that fast! 

So, suppose we set our operating velocity at 30m/s. That gives us a gain factor of just under 25%. If we design for 20 watts input power, producing just over 1 newton of thrust, we will thus generate just under 5 watts of excess power.

Now, 30m/s isn't ridiculously fast, but it is fast enough that building a linear track to shoot the thruster down would not be particularly convenient. Additionally, using a linear generator to recover excess energy would require pulsed operation, with time to reset the thruster at the beginning of the track after every run, or to slow down and reverse direction. So, let's just bend the thruster path into a circle! If we swing it around an arm with a 0.5 m radius, for a generator 1 meter in diameter, we get a radial acceleration of just under 184g. That might sound like a lot, but tiny benchtop laboratory centrifuges regularly get up to several tens of thousands of gs, and SpinLaunch is trying to build a centrifuge that can hold a small rocket under 10,000g. So, building a centrifuge that can hold an IVO Quantum Thruster under 184g seems very doable, and then it can just spin the shaft of an off-the-shelf alternator to produce power. 5 watts may not seem like very much, but suppose we increase the input wattage with the same gain factor? Stack up thrusters so that they can convert 800 W (about the power consumption of a typical microwave oven), and we'll get 198 W out. Increase Vo to 100 m/s, and the gain jumps to 3.16, corresponding to an output of 2528 W, and a load of only 2041g on the centrifuge. You'd want to armor the casing for this thing, but note that, unlike a flywheel battery, this thing isn't meant to store energy in rotation--so you'd want to make the centrifuge structure as light as possible, and minimize the danger of stored energy.

Bigger devices could operate at higher speeds and higher gain factors, and thus produce even higher wattages, because centripetal acceleration for a given tangential velocity goes down with radius. Suppose we wanted to take this basic design and scale it up by a factor of 100: a 100-meter diameter municipal power station. At 30 m/s, that gives not-quite-2-gs of acceleration, but 100 meters is also the planned size of SpinLaunch's rocket centrifuge, so if we assume we can build for 10,000gs, that gives us an operating speed of just over 2,210 m/s, and a gain factor of 91. With space for a hundred times more thruster units around the rim, Pi goes up to 80kW, giving an output of ~7.28 megawatts--enough to power nearly 6,000 average American homes.

Now remember, these things wouldn't just violate conservation of energy--they violate conservation of angular momentum too, as a direct consequence of violating conservation of linear momentum. That introduces an entirely new sort of environmental hazard--it would take a long time, but a large number of these sorts of generators, if the fleet isn't properly balanced, would eventually start to have a noticeable impact on the rotation axis of the Earth! On shorter timescales, they would apply inconvenient continuous torques to any spaceships using them for power. Thus, you'd probably want to always build them in pairs, or build units with two counter-rotating coaxial centrifuges, to keep the accumulation of excess angular momentum at zero.

Now, suppose that the IVO thruster does end up failing, but you still want to use this idea for a science fictional device. Maybe you want to go with a less ridiculously aggressive efficiency for your fictional thruster--something comparable to a Rocketdyne F1, for example: η = .77 millinewtons / watt. Keeping our ε value of 0.8, that gives us a minimum operating velocity of 1626 m/s. A one-meter diameter centrifuge won't do in that case! You're looking strictly at larger-scale installations. Even a 100-meter centrifuge would already be operating at well over 5000g with a gain factor of 0! And handling higher gs is actually harder at larger sizes, since the larger centrifuge structure has to support itself under high accelerations as well as the constant-sized functional component--the thruster. If we go up to a 200-meter diameter installation,  we could get an effective operating speed of 2220 m/s, for a gain of ~0.37, and an output of 58.6kW for a 160kW power plant, with accelerations just over 5000g--doubling the size of the centrifuge arms, but halving the acceleration they are under.

For more engineering safety margin and higher gain factors, we have to go bigger, and then taking up all of the space inside a gigantic disk for centrifuge arms starts to seem inconvenient--not to mention the mass of the centrifuge structure itself! You don't want to have to lug all that around in the middle of your spaceship! As assumed η values go lower, the minimum viable size of a power plant goes up, and we transition into a regime where fewer and fewer spaceships can make use of them at all, so you've got an excuse to keep spaceships using other power sources while ground-based facilities can use giant free energy power plants. But remember, we don't actually care about storing kinetic energy, so we want to cut down on mass anyway--and all of the over-unity power generation happens in the reactionless thrusters out on the edge. So what if we throw away most of the centrifuge, and just have a ring of thrusters spinning in a circular track? Now, we can build large, spindly ring-shaped power plants with a gigantic hole in the middle that you can fit the rest of a spaceship in! Or giant ground-based rings, several kilometers in diameter, with dirt-and-rock backed tracks holding back rings with thousands of gs, and a city built inside--if anything goes wrong, the ring will explode outward leaving the ship or the city in the middle unharmed. And with all that extra space to stack more thruster units, even a fairly small gain factor could get you gigawatts of power output.


[1] It is generally believed that it should still be impossible to produce energy this way, even though it doesn't technically violate conservation, because if you can separate the vacuum into negative gravitational potential and positive useful energy, that would make the vacuum unstable, and that would be very bad for existence. But if the separation requires a macroscopic machine to achieve, and can't occur as a result of local single-particle interactions, we don't need to be as worried about triggering accidental vacuum decay.

[2] This paper (also linked previously) derives a value for the critical speed that's twice that big; that's because it is calculating a slightly different thing: the speed you have to accelerate to for the instantaneous kinetic energy of the device to exceed the total energy used to accelerate it thus far. But, the point at which kinetic energy begins accumulating at a faster rate than energy is put in occurs significantly earlier.

Saturday, March 25, 2023

The Sci-Fi Linguistics of The Embedding

The Embedding, by Ian Watson, is... not good. It tied for the Campbell Award in 1974 and won the Nebula Award for Best Novel in 1975, and has been called a "modern classic", but, much like the Hugonauts' review of Dune*, while I recognize that it has some great ideas, I just don't think they're actually executed all that well. Now, young people don't always like old SF, but I've read and liked a lot of old SF, and this one just really doesn't hold up on the strength of writing or the story. My feelings pretty much mirror those expressed in this review from 2006--its understanding of linguistic theory is slightly confused, and the multiple plot threads are poorly integrated and redundant, which is a great disappointment given the novel's stellar reputation. I can only assume it has that reputation because it blew everyone's minds by actually engaging with theoretical linguistics at all back in 1974, and nobody's seriously re-evaluated it since then.  But it is the most linguisticky linguistic fiction ever, and explores ideas that have not been done better since, despite the low bar! So, let's talk about those ideas.

*Go subscribe to the Hugonauts! They deserve more listeners.

The introduction to the Gollancz SF Masterworks edition says that "There are two ideas in linguistics that have had a particular influence on twentieth-century science fiction.": the Sapir-Whorf hypothesis, and Universal Grammar. The Sapir-Whorf hypothesis--the idea that the language you speak can influence or control how you think--is low-hanging fruit, and there's tons of SF, some of which I have previously reviewed, that plays with that.

The core idea of Universal Grammar is that humans come pre-wired with an understanding of how language should work; that our brains are built with a standard template for grammar with a multitude of switches that different languages merely set in different ways. This idea originates with Noam Chomsky, who famously claimed that Martians studying Earth would conclude that we all spoke mere dialects of one humanese--although quite a lot of advancement has been made in linguistics since that time, and Chomsky himself no longer holds that extreme view. If you do run with that extreme view, however, it lends itself to the trope of Incomprehensible Aliens--if we are hardwired to Do Language in a particular way, and they are hardwired to Do Language in a different way, then presumably we could never learn to understand each other's languages and communication would be forever impossible.

The idea of built-in, innate grammar arises from the "Poverty of the Stimulus" argument, which basically goes that human children aren't exposed to a large enough sample of language to learn how it works from first principles in the time it actually takes for children to acquire their first language. Any language whose rules didn't conform to that built-in template could then not be learned by children. This, of course, depends on the assumption that the stimulus is actually impoverished--that there really isn't enough information in the ambient linguistic data to which children are exposed during their lives to calculate the correct rules of a human language--and that assumption is not without controversy.

It is clear that humans must have some innate capacity for language--after all, something makes the difference between a human baby who learns to understand and speak English in only a few years, versus, say, a kitten who grows up in the same house and maybe learns to recognize a few individual words. Whatever that is, is called "the biological endowment", and the idea that that could vary between linguistically-capable species is unexplored here, or in any other published story that I am aware of. But exactly how extensive our innate knowledge specific to language is, is still an active area of research, and the idea of an extensive Universal human Grammar can be attacked from two directions:

  1. Showing that, for some particular linguistic feature, the stimulus is not actually impoverished--that children are exposed to enough of the right kinds of examples to just "figure it out".
  2. Showing that we have some innate cognitive biases relevant to linguistic learning, but that they are not specific to linguistic learning--thus, other linguistically-capable species may well exhibit exactly the same linguistic biases, because of developing the same general reasoning capabilities.

Somewhat confusingly, some people use "Universal Grammar" to refer to any innate knowledge relevant to language, not just that which is specific to human language, and only evidence of type 1 is relevant to disproving that kind of Universal Grammar. But given the particular feature that Ian Watson chose to focus on (the eponymous "embedding"), and how interactions with aliens are portrayed in the book (they are fascinated by exactly the same constraint), I have to assume that that was the understanding that Watson had of the term "Universal Grammar"--that it was not merely universal to humans, but cosmologically "universal", based on principles that would be reliably replicated in any intelligent mind.

In some ways, Watson's choice of feature to focus on is a clever one; center embedding is an easy concept to explain to readers who are otherwise lacking in theoretical linguistic education, and he does just that in conversations between characters. For those who do not wish to go read the novel looking for the definition, center embedding is just taking a particular grammatical structure--like a relative clause--and sticking it in the middle of another structure of the same type, rather than at one end or the other. For example, take the sentence "This is the malt that the rat ate."--it's got a relative clause in it. We can self-embed another relative clause at the edge like this: "This is the malt that was eaten by the rat that was worried by the cat." Or, we can center-embed that relative clause--stick it in the middle of the first relative clause, breaking that up--like this: "This is the malt that the rat that the cat worried ate." That's harder to understand, but it's the sort of thing that might be said from time to time. But what if we add a third clause? "This is the malt that the rat that the cat that the dog chased worried ate." That's... really hard to interpret, and people just don't speak that way! And if you add one more level... no, that'll never happen!

However, despite being a clever choice of linguistic phenomenon, it's not actually a test of Universal Grammar, as Chomsky intended the term! In fact, this gets at a completely different bit of Chomskyan linguistics, which Watson completely ignores: the distinction between competence and performance. "Competence" is what you know about the rules of language, and your ability to judge things as grammatical or ungrammatical. It is competence that allows us to say, yeah, mechanically, we could add a fourth embedded clause to that horrible incomprehensible sentence, and it wouldn't violate any grammatical rules. We are capable of learning the rules that would let us do that. Competence is what lets us look at the famous sentence "Colorless green ideas sleep furiously," and say "yeah, it's grammatical, but...." Meanwhile, performance is the fact that we sometimes make mistakes that we know are mistakes, and that we can rate things as acceptable or unacceptable, because they do or don't make sense or because they are easy or hard to interpret, independent of whether or not they are grammatical. The limitations on center embedding in English aren't grammatical, and tell us nothing about the rules of Universal Grammar--they are just a consequence of the fact that humans have limited short-term working memory, so we lose track of the first halves of multiply-embedded structures before we get to the end! And in fact, you can prove that there is no hard grammatical limit on embedding depth by observing that equally-embedded structures can be more or less acceptable depending on which precise nouns, pronouns, and adjectives you happen to use; compare, for example: "The rat which the cat which the dog chased bit fell." vs. "The elegant woman whom the man that I love met moved to Barcelona."

Each of the three major plot threads in the novel has its own mini-linguistic-ideas as well. The aliens engage with the Sapir-Whorf hypothesis in thinking that learning more languages--and specifically, learning a heavily-center-embedding language--will allow them to achieve new metaphysical abilities. The Amazonian natives use drugs to expand their linguistic competence, which could be interpreted as a precursor to the drugs used by Sheila Finch's Guild of Xenolinguists. And the opening thread of the book deals with conducting The Forbidden Experiment--isolating children from natural adult language to see what happens, or what you can make happen, in order to explore the boundaries of the biological endowment and of any Universal Grammar that we might have. As you can see from that Wikipedia link, The Embedding was not the first to explore this idea--it shows up in books, comics, and even The Twilight Zone. And if you believe in the strong Sapir-Whorf hypothesis, or linguistic determinism, it can be quite a compelling idea--raising children without exposure to any existing language would release them from the limitations of those languages, would it not? But... no. That's not actually what happens at all. It is very easy when thinking about problems in language acquisition and Universal Grammar to start thinking, "ugh, if only we could run controlled experiments on the acquisition process, we could answer so many questions so much more easily!" In fact, while working on this article, completely by accident, I came across this Tweet:

Which is a serious exaggeration, and if you click through to read the ensuing thread and quote-tweets, it's one which many people disagree with and have very strong feelings about, for good reason! But it is an exaggeration of something real. Let's be clear: when we say "intrusive thoughts", we mean "intrusive thoughts"; a lot of linguists really want to know what's going on in kids' heads when they acquire language, and Lingthusiasm even sells baby onesies with "Daddy's Little Longitudinal Language Acquisition Project" on them (which I have proudly clad all three of my children in!) but nobody is going around thinking "man, I would totally raise some kids in linguistic isolation for 10 years if it weren't for that pesky Ethics Review Board!" (Or at least, we all hope nobody is thinking that!) It is the sort of thing that you put in a depressing dystopian SF novel (which The Embedding most definitely is! No one makes good decisions, and the ending is typically Cold-War-Era depressing)--or, if you think of it "for real", you immediately feel bad about it and move on to trying to find practical methods of getting the data you want, or work on something else. Unfortunately, we actually do have some data on linguistic deprivation, from studies of rescued feral children and deaf children of hearing adults who do not speak a sign language, and the effects of these "natural experiments" are dire, and a source of ongoing trauma to the Deaf community. So, no, language deprivation does not give you special insight or psychic powers--it just gives you brain damage.

The fact that the main character of The Embedding actually performed a deprivation experiment thus clearly marks him as a villain, and that's only the first of the many unethical things that are done for the sake of "science" and "progress" in this book. And what's more, the particular experiments Watson describes aren't actually testing Universal Grammar (the embedding experiment, realistically, is just training short-term working memory), which removes even the scientific justification! Every character is just straight-up unlikeable. So please, if you are an author--go write something that engages with theoretical linguistics as deeply as The Embedding does, but is more fun to read!

An additional note: The novel claims that "Stone Age children" took "hundreds of generations" to develop language; that's seriously misleading, based on what we know from some natural experiments. We have no idea exactly how long it took our biological endowment for language to evolve, but it seems that biologically-modern human children, in an appropriate social context, will spontaneously generate languages within one generation. This is evidenced by the development of pidgins into creole languages, and the spontaneous generation of new sign languages when new deaf communities are established--see, for example, the case of Nicaraguan Sign Language.


If you liked this post, please consider making a small donation!


Wednesday, March 15, 2023

The Sci-Fi Linguistics of Babel-17

Despite being far from the only, or even the best, novel, novella, or short story about the Sapir-Whorf hypothesis, Samuel R. Delany's Babel-17 (Amazon Affiliate link as usual) is famous as "that novel about the strong Sapir-Whorf hypothesis". But, there is a bit more to it than that.

The opening of the story is highly reminiscent of the much later Story of Your Life by Ted Chiang: a language expert who has previously done work for the military is recruited by a general to decipher some alien communication and tells him that it's impossible without more data:

"Unknown languages have been deciphered without translations, Linear B and Hittite for example. But if I'm going to get further with Babel-17, I'll have to know a great deal more. [...] General, I have to know everything you know about Babel-17; where you got it, when, under what circumstances, anything that might give me a clue to the subject matter. [...] You gave me ten pages of double-spaced typewritten garble with the code name Babel-17 and asked me what it meant. With just that, I can't tell you. With more, I might. It's that simple."

In fact, Rydra Wong is in a much worse position with Babel-17 than historical linguists were with Hittite and Linear B--in each of those cases, although we lacked an equivalent to the Egyptian Rosetta Stone, at least we had the context of history and knowledge of other possibly related languages to provide some direction in decoding their texts. Or at least, she would be... if she weren't psychic. Delany neatly sidesteps the entire problem of actually deciphering and learning the language (excusable, because that's not actually the point of the story) by giving Rydra Wong supernatural powers to extract meaning that just isn't actually there. Some biotechnobabble explanation is given for how her ability to read minds works, but it fails to extend to the fact that she is said to have a history of being able to look at unbroken code and suddenly intuit what it was meant to say--an ability which she also employs to start cracking Babel-17, and which kind of undercuts the otherwise entirely reasonable claim that she needs more data to actually decipher it! She might as well be a D&D character casting Comprehend Languages.

Once Rydra learns Babel-17, we get only minimal descriptions of how it actually works as a language. It appears to be a sort of oligosynthetic speedtalk and taxonomic language, in which the form of every word encodes its definition, a feature which supposedly promotes clearer thinking and deep understanding of everything in the world that it can name. Additionally, it has no word for "I", which is supposed to imply that thinking in Babel-17 prevents someone from acting with self-awareness, with the explanation that

"Butcher, there are certain ideas which have words for them. If you don't know the words, you can't know the ideas."

Which is, well... crap. After all, we coin new words after conceiving of the ideas they name, so clearly having the words for ideas is not necessary to having the ideas; rarely do we coin new words and then go looking for novel ideas to attach to them! When Rydra talks about language throughout the rest of the book, it's a mixture of reasonable stuff and linguistic technobabble. For example, as a weaker form of the previous statement, Rydra also explains that

"If you have the right words, it saves a lot of time and makes things easier."

which is absolutely true! That's why technical jargons exist. But this gets taken to a ridiculous science-fiction extreme in the description of another alien language: supposedly, Çiribians can describe, in nine short words, the complete schematics of an industrial facility with novel features that they want to duplicate--which is... implausible, to say the least. Why would anyone have pre-existing short words for previously-unknown technological innovations developed by other aliens?
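
Just how implausible? Here's a quick back-of-envelope sketch (in Python, with entirely made-up numbers--the vocabulary size and the complexity of the schematics are my own assumptions for illustration, not anything given in the novel):

    import math

    # Assumption: each Çiribian "word" is drawn from a vocabulary of a
    # million distinct morphemes--already very generous for "short words".
    vocabulary_size = 1_000_000
    message_length = 9  # words in the message

    # Maximum information content of a nine-word message, in bits.
    message_bits = message_length * math.log2(vocabulary_size)

    # Assumption: complete schematics for an industrial facility, even
    # well compressed, plausibly run to a megabyte or more; call it 1 MB.
    schematic_bits = 1_000_000 * 8

    print(f"Message carries at most ~{message_bits:.0f} bits.")
    print(f"Schematics need ~{schematic_bits:,} bits.")
    print(f"Shortfall: a factor of ~{schematic_bits / message_bits:,.0f}.")

Nine words, even drawn from an enormous vocabulary, top out at a couple hundred bits--enough to pick one item out of a pre-agreed catalogue, but nowhere near enough to specify novel features the listener has never encountered before.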

Then, we have this:

"Mocky, when you learn another tongue, you learn the way another people see the world, the universe."

Also very true! This is one of the many arguments for why documenting and trying to save dying languages is such important work--every time a language dies, the worldview communicated through that language, and the cultural knowledge encoded in that language, dies with it. But then...

"Well, most textbooks say language is a mechanism for expressing thought, Mocky. But language is thought. Thought is information given form. The form is language."

I was a little surprised that Delany-via-Rydra would even provide the hedge of "most textbooks say..." there, because for a long time real-world textbooks would've agreed with Rydra, and this is a commonly-assumed position among linguistically-naïve people. The fact is, many people do experience their own thoughts in the form of language, and are shocked and disbelieving when they discover that not everyone else shares this experience! Yet people who do not think in words do exist, despite a good bit of 20th century academic literature claiming that they can't possibly--literature which I spent a mid-term paper in my grad school Intro to Semantics class tearing to shreds. So, there is a certain type of person who would've read that line in the book and, just like me, immediately thought "Bull! Crap! Rydra!"--but if you are not that sort of person, just take it from me that language is not identical with thought.

But let's get to the actual point: that learning Babel-17 turns a person into an agent of the enemy. There is actually a teeny-tiny kernel of truth underlying this conceit: multilingual people often do develop different personalities when using different languages. This is a multilayered effect--partially, it can probably be attributed to the fact that different languages require that you pay attention to different things. Thus, Russian speakers are, on average, better at distinguishing shades of blue than English speakers, and Guugu Yimithirr speakers are better at absolute orientation than English speakers, because the vocabulary choices and grammatical categories required by their languages force them to pay more attention to those things, and thus develop the relevant skills; and it's not too hard to imagine that shifting aspects of your attention when shifting between languages could have some impact on personality. But a much, much larger component of the effect is simply an extension of the fact that we all present ourselves differently in different social groups, and languages are strongly associated with the social groups among whom we learned them and with whom we use them, and with the purposes we have in communicating with those groups. Rydra did not learn Babel-17 from "native speakers" or in any social context involving the enemy, so in reality there is no particular reason to believe that merely learning the language from decoded intercepted communications would have had anywhere near such a drastic effect on her thought processes or personality.

So: neat idea, definitely science fiction. However, we can draw a parallel with a slightly more plausible idea from Neal Stephenson's novel Snow Crash. In both works, language is used as an attack vector to allow an enemy to take control of other people's actions. In Snow Crash, there is a language which acts like a programming language to insert instructions into people's brains; in Babel-17, the language itself is the program. (How is Snow Crash's take on this concept more realistic than Babel-17's? Well, you'll just have to wait for me to get around to reviewing Snow Crash to find that out.)

There is, however, another sci-fi linguistic idea in Babel-17 which is completely overlooked in most discussions of the novel: communication from "discorporate" people can't be remembered. Babel-17 is a wild ride through a psychedelic future, with all kinds of ridiculous world-building details thrown in that have no direct bearing on the core premise of intergalactic war and Whorfian linguistic weapons--and one of those details is the existence of ghosts, and in fact the requirement that some positions on a starship crew be filled by literal ghosts, or, as they are called in the novel, "discorporate people". The integration of discorporate people into the crew is complicated by the fact that living humans cannot remember anything said by a ghost for more than a few seconds, so special machinery is necessary to allow communication between the living and dead crew members--which is really kind of a neat concept all by itself, and I'd love to see it explored as the basis of a story on its own. (Not necessarily communication with ghosts, but just the idea that there is some class of people whose words cannot be remembered. Cf. the Silence from Doctor Who, though in that case nothing about the person can be remembered once you stop perceiving them, not merely their words.) Rydra ends up using her multilingualism to derive an advantage in this regard: since she can't retain the actual words spoken by a ghost, she translates ghosts' speech into another language in her head as they are talking. And while she forgets the original words, she can remember the process of translation, and what she translated them into, and thus recall the content of the conversation without the need for assistive machinery.


If you liked this post, please consider making a small donation!