Saturday, January 20, 2024

Describing Non-human Vision

Thanks to LangTime Studio creating languages for a lot of mammals with dichromatic vision, a few years ago I did a good bit of research into how visual perception varies between different species. The issue of non-human vision came up again yesterday in George Corley's (of Conlangery fame) latest Draconic language stream, so I dug up some old notes on how to describe colors that you can't see. And in fact, this isn't just useful for conlangers trying to come up with vocabulary for a non-human language; this is good information for fantasy and sci-fi writers, too!

Since I started out with researching rabbits... let's talk about rabbits. It turns out that rabbit vision differs from human vision in just about every way that tetrapod vision can, so it makes an excellent case study. Rabbits have 2 types of color-receptive cone cells, corresponding to peak sensitivities in the green and blue ranges, and one rod cell type. I.e., they are dichromats, like most mammals. Rods don't contribute to color differentiation, so we can ignore those. At first glance, this seems similar to human red-green color blindness, except the peak sensitivities of the rabbit green cone and the red/green cones of a deuteranopic human are not in the same place! This is the first area in which human and non-human visual perception can differ--even other trichromats (e.g., penguins, honeybees) may not have the same spectral sensitivities as humans, and so see completely different color distinctions than we do. The rabbit green-cone sensitivity is shifted downward to a 509nm peak, compared to human green cones, which peak at 530nm, and red cones, which peak at 560nm. Thus, not only can rabbits not distinguish red from green, but everything on the red end of the spectrum appears much dimmer than it would to a human, due to the weaker response of the long-wavelength cones to those spectral colors. Note, however, that not having separate cones for red and green does not mean that rabbits (or dogs, for that matter) would always see things-we-perceive-as-red and things-we-perceive-as-green as indistinguishable--it depends on the actual spectral signature of each object. For example, where we perceive two objects as having equal perceptual brightness but different hue, rabbits might perceive identical hue but lower perceptual brightness for the red object compared to the green.
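
To put some (made-up but illustrative) numbers on that, here is a quick Python sketch. The Gaussian pigment curves and the 40nm width are invented stand-ins for real photopigment nomograms, which are broader and asymmetric; only the peak placements come from the discussion above.

    import numpy as np

    def cone_response(wavelength_nm, peak_nm, width_nm=40.0):
        # Crude Gaussian stand-in for a photopigment absorbance curve.
        return np.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

    red_light = 650.0  # a deep-red stimulus, in nm

    human_L = cone_response(red_light, 560.0)   # human red cone, ~560nm peak
    rabbit_G = cone_response(red_light, 509.0)  # rabbit green cone, ~509nm peak

    print(f"human L-cone response to 650nm:  {human_L:.6f}")
    print(f"rabbit G-cone response to 650nm: {rabbit_G:.6f}")
    # The rabbit's longest-wavelength receptor responds orders of magnitude
    # more weakly, so the same red object contributes far less brightness.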

Much like humans have an anomalous blue response in our red cones, which causes us to conflate purple (red+blue) and violet (a spectral color, extreme blue), rabbit and rat green cones also have a sensitivity peak in the ultraviolet. Initially, I assumed that, unlike the human anomalous blue response, this would be irrelevant, because UV light would be blocked by the structures of the eye, as it is in humans; however, while talking with a sci-fi writer friend of mine about non-human vision last night (as ya do, y'know), I mentioned that rabbit and rat green-cone pigments have a weird bi-stable response to UV light, but that UV is absorbed by mammalian eye tissue, so it's probably just a random non-conserved evolutionary quirk... and he noted that UV is absorbed by primate eye tissue, but had I actually explicitly checked on rabbits? And I had not. So I did. And it turns out that lapine cornea, lens, and vitreous humor tissues are considerably more transparent to near-UV light than human eye tissues are. Now, nobody (that I have been able to find) is actually saying outright that rabbits (or rats) can see UV... but rabbits might actually be able to see UV. If they can, it would be indistinguishable to them from green (not blue!). If it were not already clear from the shifted sensitivity peaks, I think that should highlight the impossibility of just taking, e.g., a JPEG image captured with equipment built for humans and transforming it into an accurate representation of what some other animal would see--if nothing else, the UV information would be completely missing!

Incidentally, if rabbits are UV-sensitive, the bistable nature of the UV response in their green cones means that they would actually be more strongly sensitive to UV in the dark than they are under daytime illumination. I have no idea what to make of that, as there isn't really a whole lot of environmental UV going around at night or in tunnels... but that's a quirk you can keep in mind as a possibility for fictional creatures. In general, just note that spectral response can vary under different environmental conditions; humans, for example, lose the ability to distinguish color entirely in low-light conditions (and your brain lies to you, filling in the colors that you believe things should be), but things can be more complicated than that.

Another interesting feature of rabbit eyesight is that they have a much less dense foveal region than humans (so less effective resolution), and their color-sensitive cells are not evenly distributed--there is a thin band with a mixture of both green and blue cones, with blue cones concentrated at the bottom of the retina (corresponding to the top of the visual field) and green cones concentrated at the top (corresponding to the bottom of the visual field). I.e., their vision along the horizon is in color, but the top and bottom extents of their visual fields are black and white, and specialized for better spectral response to the most common wavelengths of light coming from those directions--blue from the sky, green from the ground. This isn't too different from human peripheral vision (where color information is inferred by the brain, not actually present in the raw retinal output), except that in rabbits different parts of the peripheral fields actually have different peak spectral responses! In wild rabbits, this is probably just an adaptation for getting the maximum information out of a predominantly-blue-background sky and a predominantly-green(/red)-background ground, but intelligent rabbits could theoretically learn to extract additional color information (e.g., distinguishing monochromatic white from dichromatic white) from an object by wiggling their eyes up and down or tilting their heads to put it in different parts of the visual field. Or not, if their brains just fill in missing color information automatically like ours do.... But if you want to write about creatures that can do that, by authorial fiat, they could have a whole auxiliary class of color words, analogous to pattern words like "speckled" or "sparkly", to describe objects that have different appearances in different parts of the visual field.

But, if we abstract away from physiological perceptual abilities, what would their experience of color space be like? Tetrapod retinas pre-process raw cone cell signals into antagonistic opponent channels before color information gets sent to the brain; i.e., what your visual cortex has access to is not the original cone cell activations, but sums and differences of the activations of multiple types of cone cells. In human eyes, that means our brains see color coming down the optic nerves as a combination of red vs. green and blue vs. yellow signals--even though yellow isn't actually a physiological primary color! In dichromats like rabbits, the two raw spectral signals (green and blue) are still processed by an antagonistic opponent system in the retinal ganglia; thus, just like we can't perceive the impossible colors "reddish green" or "yellowish blue", they cannot have any perception of a distinct blue-green mixture--dim dichromatic light at both spectral peaks will look exactly the same as bright monochromatic light exactly in between, which will be indistinguishable from white. In effect, the loss of one cone type compared to humans reduces the color space from 3 dimensions to 2, and the perceptual dimension that is lost after ganglial processing is that of saturation.
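
Here is a quick numerical demonstration of that metamer, reusing the toy Gaussian pigment model from above (the 425nm blue-cone peak is my own guess for illustration, not a measured lapine value):

    import numpy as np

    def cone_response(wl, peak, width=40.0):
        # Same toy Gaussian pigment model as before (illustrative only).
        return np.exp(-((wl - peak) / width) ** 2)

    G_PEAK, B_PEAK = 509.0, 425.0  # green peak from above; blue peak is a guess

    def activations(spectrum):
        # spectrum: list of (wavelength, intensity) pairs
        g = sum(i * cone_response(wl, G_PEAK) for wl, i in spectrum)
        b = sum(i * cone_response(wl, B_PEAK) for wl, i in spectrum)
        return g, b

    # Dim light at both cone peaks...
    mix = activations([(G_PEAK, 0.5), (B_PEAK, 0.5)])

    # ...versus brighter monochromatic light exactly in between.
    mid = (G_PEAK + B_PEAK) / 2
    scale = mix[0] / activations([(mid, 1.0)])[0]
    mono = activations([(mid, scale)])

    print(mix, mono)  # matching (G, B) activations: a metamer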

The lapine color space is thus defined by a 2D, triangular range with black at one vertex, white (or whatever you want to call it) at the center of the opposite edge, and pure green and pure blue at the remaining vertices. Hue and saturation share a single axis, with green fading into white and then white fading into blue.

[Image: the triangular lapine color gamut, with black at one vertex, yellow/green and blue at the other two, and white at the center of the opposite edge.]

If the most basic colors are defined by the extrema of the opponent-process space, as they are for humans, there should be 3 basic colors, corresponding to black, blue, and green. White would be the natural next step, followed perhaps by light and dark shades of blue and green. Or you could call the green extremum "yellow" instead, as the long-wavelength cone still has sensitivity into the yellow and red ranges of the spectrum, even though its peak is in green, as I have done in the image above. Fundamentally, the 3D human color space and 2D dichromat color spaces are mathematically incommensurate, so all human-perceptible representations involve some arbitrary choices anyway. Treating the long-wavelength end as "yellow" rather than "red" is convenient if you want to do something like copying the Old Norse poetic convention of treating blood and gold as being the same color. :)

We can squish and stretch that gamut to get a representation of the dichromat color wheel, with saturation along the radial axis and hue and brightness along the angular axis:

[Image: the dichromat color wheel.]

And the sort of Cartesian representation that an intelligent dichromat graphic designer would use to pick out colors in a computer graphics program:

[Image: a Cartesian dichromat color-picker gamut.]

Keep in mind that the actual colors used in these illustrations are completely arbitrary, aside from being "towards the long-wavelength end" vs. "towards the short-wavelength end". What matters is just the set of possible distinctions. Figuring out exactly what lapine colors any particular object would correspond to would require recording the actual emission spectrum of that object, and then mapping it into the rabbit color space--and being dichromatic does not merely mean that they see a subset of the colors that we can see; the available distinctions are different. E.g., of two objects which look identically purple to a human, one may be emitting monochromatic light in the violet spectral range while the other emits dichromatic light in the blue and red ranges; those two objects will look distinct to a rabbit--the first being obviously pure blue, the second being light blue or white.

So, that's dichromatism... what about tetrachromatism, or higher? My best reference on this subject is this absolutely lovely article: Ways of Coloring: Comparative Color Vision as a Case Study for Cognitive Science, which contains descriptions of comparative color spaces for humans, bees (also trichromats, but with different frequency response), goldfish, turtles (both of which are tetrachromats), and pigeons (suspected pentachromats). And it has an excellent statement of what the problem actually is:
It is important to realize that such an increase in chromatic dimensionality does not mean that pigeons exhibit greater sensitivity to the monochromatic hues that we see. For example, we should not suppose that since the hue discrimination of the pigeon is best around 600nm, and since we see a 600nm stimulus as orange, pigeons are better at discriminating spectral hues of orange than we are. Indeed, we have reason to believe that such a mapping of our hue terms onto the pigeon would be an error: [...] 
Among other things, this result strongly emphasizes how misleading it may be to use human hue designations to describe color vision in non-human species. This point can be made even more forcefully, however, when it is a difference in the dimensionality of color vision that we are considering. An increase in the dimensionality of color vision indicates a fundamentally different kind of color space. We are familiar with trichromatic color spaces such as our own, which require three independent axes for their specification, given either as receptor activation or as color channels. A tetrachromatic color space obviously requires four dimensions for its specification. It is thus an example of what can be called a color hyperspace. The difference between a tetrachromatic and a trichromatic color space is therefore not like the difference between two trichromatic color spaces: The former two color spaces are incommensurable in a precise mathematical sense, for there is no way to map the kinds of distinctions available in four dimensions into the kinds of distinctions available in three dimensions without remainder. One might object that such incommensurability does not prevent one from “projecting” the higher-dimensional space onto the lower; hence the difference in dimensionality simply means that the higher space contains more perceptual content than the lower. Such an interpretation, however, begs the fundamental question of how one is to choose to “project” the higher space onto the lower. Because the spaces are not isomorphic, there is no unique projection relation.

It is also the case that lower-dimensional color spaces, such as those of dogs or rabbits (both dichromats, but in slightly different ways), are incommensurate with our 3D color space, in exactly the same way that our 3D color space is incommensurate with the higher-dimensional perceptions of a pigeon, turtle, or goldfish, and have no unique projections. Thus, visualizations of how your dog or cat sees things are always only approximations--we can try to recreate the kinds of distinctions relevant to a dichromatic animal in our own color space, but we will always experience it differently.

A common feature of all of the systems described is the production of a combined luminance channel from the raw n-dimensional cone cell inputs, plus n-1 oppositional chroma channels--in humans, these are the red-green and blue-yellow oppositions, which produce a two-dimensional neurological color space orthogonal to the luminosity axis. The YCbCr color space (along with its analog-TV relatives like YUV) arises from representing the two chromatic dimensions directly in Cartesian coordinates. Saturation arises as the radial dimension--distance from the white-black axis--in a polar transformation of this oppositional color space to produce the trichromat color wheel, with hue arising as the angular coordinate. Trichromat color spaces for different species can vary both in their precise spectral sensitivities, and in how the oppositional chroma channels are generated in the retina; i.e., instead of an RG-B opposition, where the R and G physical channels combine to produce Y, there can also be an R-GB opposition: red vs. cyan, plus green vs. blue. For us, there's no such thing as reddish-green (nor blueish-yellow), because yellow comes in between, but we do have blueish-green. For that other sort of trichromat, reddish-green would make perfect sense, but blueish-green and reddish-cyan would be impossible to perceive instead.
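
As a concrete sketch of that recoding, here is a toy opponent transform; the channel weights are illustrative stand-ins, not measured retinal gains:

    import numpy as np

    def to_opponent(r, g, b):
        # Toy version of the retinal recoding described above.
        luminance = r + g + b          # achromatic channel
        red_green = r - g              # first opponent chroma channel
        blue_yellow = b - (r + g) / 2  # second: blue vs. yellow (= R+G)
        return luminance, red_green, blue_yellow

    lum, rg, by = to_opponent(0.8, 0.3, 0.1)

    # Hue and saturation fall out of a polar transform of the chroma plane:
    hue = np.arctan2(by, rg)        # angular coordinate
    saturation = np.hypot(rg, by)   # radial distance from the grey axis
    print(lum, hue, saturation)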

Monochromatic vision is pretty easy to understand--it's just black-and-white / greyscale--luminosity is the only dimension, leaving zero additional channels for chroma information. As illustrated above, in dichromat vision, the equivalent of the trichromatic color "wheel" is just a line--the radial dimension is not meaningfully distinct from the single linear chromatic dimension--and while we require an additional axis to represent brightness, the dichromat color wheel really does represent every color they can possibly see. As a result, "saturation" and "hue" (or, alternatively, brightness and hue) are indistinguishable to dichromats, and grey (or white, depending on whether you represent the space as a triangular gamut or a Cartesian diamond) is a spectral color. There are only two primary colors (or 4, if you count white and black), and no secondary colors.

In higher-dimensional color spaces, as determined by discrimination experiments on tetrachromatic and pentachromatic organisms, we still see the generation of oppositional color channels from retinal processing. How to generate these oppositional channels, however, is not obvious a priori; for example, in humans one opposition is between red and green, both of which are primary colors, but the other is between blue, a primary color, and yellow, a composite--and, as mentioned above, that could be reversed in a different species with different specific spectral sensitivities. But why that particular combination for us?

It turns out that, across different species, opponent channels are constructed to maximize decorrelation--in other words, to remove redundant information caused by the overlapping response curves of different receptor types. Thus, the precise method of calculating color channels will be slightly different for each species, dependent on the physical characteristics of the retinal cells, but they are all qualitatively the same kind of signal, and end up producing a higher-dimensional chroma space orthogonal to the white-black luminosity axis. However, there's pretty good reason to believe that this is a convergently-evolved process to maximize visual acuity (except in some specific circumstances, like Mantis shrimp), so this analysis of color perception plausibly applies universally, to most kinds of weird aliens you might come up with, so long as they have eyes at all. Effectively, the retinal ganglia are performing Principal Component Analysis, turning "list of specific frequency activations" information into "total luminosity vs. list of chroma components" information.
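
That claim is easy to demonstrate numerically: push random spectra through a few overlapping (invented) receptor curves and run PCA on the resulting activations. The dominant component comes out with same-sign weights on every cone (a luminance channel), and the remaining, smaller components come out with mixed signs (opponent channels):

    import numpy as np

    rng = np.random.default_rng(0)
    wl = np.linspace(380, 700, 161)

    def pigment(peak, width=40.0):
        return np.exp(-((wl - peak) / width) ** 2)

    # Three overlapping receptor curves; the peaks are invented.
    receptors = np.stack([pigment(p) for p in (440.0, 530.0, 560.0)])

    # Receptor activations for a batch of random spectra.
    spectra = rng.random((5000, wl.size))
    activations = spectra @ receptors.T  # shape (5000, 3)

    # PCA: eigen-decomposition of the activation covariance matrix.
    eigvals, eigvecs = np.linalg.eigh(np.cov(activations, rowvar=False))

    for val, vec in zip(eigvals[::-1], eigvecs.T[::-1]):
        # First row: all weights share a sign (luminance-like).
        # Later rows: mixed signs (opponent-like differences).
        print(f"variance {val:9.4f}  weights {np.round(vec, 2)}")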

Meanwhile, in any such neurological color space, there is only ever a single radial coordinate. Trichromatic vision is kind of special in that it is the first dimensionality at which chroma can be split into saturation and hue components. At higher dimensionalities, the hue space gets more complex, but we can say with some confidence that the extra dimensions introduced in higher-dimensional perceptual color spaces are not some extra sort of radial-coordinate saturation or any kind of weird third thing, but are in fact additional dimensions of hue--and along with extra dimensions of hue, qualitatively different kinds of composite colors!

Monochromats don't have any color. Dichromats don't have any secondary colors--just the spectral colors which, strangely to us, include white/grey. Our three dimensional human color space allows us to perceive two opponent channels, corresponding to 4 pure hues--red, yellow, green, and blue--and weighted binary combinations thereof that give rise to the secondary colors--r+y (orange), y+g (chartreuse?), g+b (cyan), and b+r (magenta), with one non-spectral hue (magenta). Non-spectral colors derive from simultaneous activation of cones with non-adjacent response peaks, and with three cones, there's only one such possibility. Meanwhile, a tetrachromatic system would have 3 opponent axes with 6 basic hues (r-g, y-b, and the new p-q), binary combinations of those hues with their non-opponents producing 12 secondary colors (r+y, r+b, r+p, r+q, g+y, g+b, g+p, g+q, y+p, y+q, b+p and b+q), and ternary combinations producing 8 extremal instances of an entirely new kind of hue--tertiary colors--not found in the perceptual structure of trichromatic color space (r+y+p, r+y+q, r+b+p, r+b+q, g+y+p, g+y+q, g+b+p, g+b+q), just as our secondary colors are not found in the dichromatic space. Additionally, there is not merely one non-spectral secondary color (magenta) in the fully-saturated hue space, but 3--and in general, that number will correspond to however many pairs of non-spectrally-adjacent sensor types there are (which actually works out to the sequence of triangular numbers!). If we assume that r, g, b, and q are the physiological primaries (note that the spectral locations of y and p depend on the decorrelation output for a specific set of 4 receptors with species-specific sensitivities), then the non-spectral secondaries are r+b, r+q, and g+q. All of the tertiary colors are non-spectral.
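
The combinatorics in that paragraph are easy to mechanize. Here is a small census function, under the same assumptions (one luminance channel, n-1 opponent axes with two poles each, and "non-spectral" meaning simultaneous activation of spectrally non-adjacent cone types):

    from math import comb

    def hue_census(n_cones):
        opponent_axes = n_cones - 1       # chroma channels left after luminance
        basic_hues = 2 * opponent_axes    # two opponent poles per axis
        # Binary mixes of basic hues, minus the impossible opponent pairs:
        secondary = comb(basic_hues, 2) - opponent_axes
        # One pole from every axis at once -- the new top-order composite
        # hues (tertiary colors, for a tetrachromat's three axes):
        top_order = 2 ** opponent_axes if opponent_axes >= 3 else 0
        # Non-spectral saturated colors: pairs of spectrally non-adjacent
        # cone types -- the triangular numbers.
        non_spectral = comb(n_cones, 2) - (n_cones - 1)
        return basic_hues, secondary, top_order, non_spectral

    for n in (2, 3, 4, 5):
        print(n, hue_census(n))
    # 3 cones -> 4 basic hues, 4 secondaries, 1 non-spectral (magenta);
    # 4 cones -> 6 basic hues, 12 secondaries, 8 tertiaries, 3 non-spectral.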

Ultimate writer takeaway: you may not be able to intuitively understand what non-human color experiences are like, but you can make some arbitrary implicit decisions about retinal physiology (i.e., just decide where you want the opponent colors to appear along the spectrum), do some basic combinatory math, and then you have a list of descriptions of basic focal colors that you can assign words to--or, if you want to be a little more realistic, assign words to ranges of those focal colors, which you can precisely mathematically describe. This gets more complicated at higher dimensionalities (like pigeons' pentachromatic color space), but tetrachromacy is kind of convenient because you still have only 2 dimensions of hue, so you can actually diagram out what the color regions are, and just tell people "y'all already know how brightness and saturation work, so I don't need to put those on the chart".

Someday, I aspire to have a program where you can input the physiological frequency response curves for an arbitrary organism, and a spectrum, and it'll give you the mathematical description of the perceptual color that it would produce. But till then, you'll just have to do your best at guessing what the aliens and monsters and anthropomorphic animals see whenever a human thinks something is a particular color--but guess informedly, knowing what the structure of their color spaces is like!
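
A skeleton of that program might look something like this; the Gaussian pigment curves and the naive pairwise-difference opponent recoding are placeholder assumptions standing in for real species data and real retinal wiring:

    import numpy as np

    def perceptual_color(spectrum, peaks, width=40.0):
        # Map an emission spectrum (a function of wavelength, in nm) into
        # an n-cone organism's opponent color space.  Toy model throughout.
        wl = np.linspace(300.0, 800.0, 501)
        responses = np.stack([np.exp(-((wl - p) / width) ** 2) for p in peaks])
        light = np.array([spectrum(w) for w in wl])
        cones = responses @ light      # one activation per receptor type
        luminance = cones.sum()
        chroma = np.diff(cones)        # crude opponent channels: n-1 differences
        return luminance, chroma

    # A hypothetical tetrachromat (invented peaks) looking at a 600nm
    # stimulus that a human would call "orange":
    orange = lambda w: 1.0 if 595.0 <= w <= 605.0 else 0.0
    print(perceptual_color(orange, peaks=(360.0, 445.0, 508.0, 565.0)))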

P.S. What was that about Mantis shrimp? Well, Mantis shrimp have 16 different light receptor types, with 12 different color receptors, which kinda suggests that they should have a 12-dimensional color space with 11 chroma channels and 10 dimensions of hue. But... empirically, that's not what happens. Experimentally, they don't actually have all of those different color categories, or a particularly fine capacity for spectral distinction. Rather, they have a large number of different receptor types so that they can identify spectral colors at high speed, without doing any retinal pre-processing--chartreuse cone fires? Cool, that's a chartreuse thing! No need to bother with opponent processing! These kinds of extreme high-dimensional visual systems might end up working more like our senses of smell or taste than like our perception of color. However, there's also another aspect of Mantis shrimp vision that's outside of color perception (and not entirely unique to Mantis shrimp, either): they can see polarization (hence the 4 visual receptor types that aren't for color, rather than just 1). This ability is comparatively easy to imagine and describe--it's an overlay of geometric information that tells you "not only does this light have a particular color, it is also oriented in a particular way". Mantis shrimp are, however, unique in being able to distinguish circularly polarized light; other creatures with polarization sensitivity would be unable to tell circularly polarized from unpolarized light.

Wednesday, January 10, 2024

A Language of Graphs

Recently I got to thinking about syntax trees, and what a purely-written language might be like that was restricted to the syntactic structures available to linearized spoken languages, but made those structures explicit in a 2D representation. Or in other words, a graphical (double-entendre fully intended) language consisting of trees--that is, graphs in which there is exactly one path between any two nodes/vertices--whose nodes are either functional lexemes roughly corresponding to internal syntactic nodes and function words in natural languages, or semantic lexemes corresponding to content words--but where, since the "internal" structure is made visible, content words are not restricted to leaf nodes!

Without loss of generality, and for the sake of simplicity, we can even restrict the visual grammar to binary trees--which X-bar theory does for natural languages anyway--although calling them "binary" doesn't make much sense if you don't display them in the traditional top-down tree format with a distinguished root node, since internal nodes can have up to three connections--one "parent" and two "daughters", which are a natural distinction in natlang syntax trees but completely arbitrary when you aren't trying to impose a reversible linearization on the leaf nodes! So, in other terms, we can say that sentences of this sort of language would consist of tree-structured graphs with a maximal vertex degree of 3.

I am hardly the first person to have thought up the idea of a 2D written language, but a common issue plaguing such conlang projects (including their most notable example, UNLWS) is figuring out how to lay them out in actual two dimensions; general graphs can require three dimensions to draw without crossings, and squishing them onto a plane often requires crossing lines or making long detours, or both. Even when you can avoid crossings, figuring out the optimal way to lay out a graph on the page is a very hard computational problem. Trees, however, have the very nice property that they are always planar, and trivial to draw on a 2D surface; if we allow cycles, or diamonds (same thing with undirected edges), it becomes much more difficult to specify grammatical rules that will naturally enforce planarity--which is why I've yet to see a 2D language project that even tries. Not only is it easy to planarize trees, there are even multiple ways of doing so automatically, so one could aspire to writing software that would nicely lay out graphical sentences given, say, parenthesized typed input. (Another benefit of trees is that they can be fully specified by matched-parentheses expressions, so we could actually hope to be able to write this on a keyboard!) And then we can imagine imposing additional grammatical rules and pragmatic implications for different standard layout choices--what does it mean if one node is arbitrarily specified as the root, and you do lay it out as a traditional tree? What if you instead highlight a root node by centering it and laying out the rest of the sentence around it? What if you center a degree-two node and split the rest of the sentence into two halves splayed out on either side?
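
As a proof of concept for that last parenthetical, here is a tiny parser for one possible matched-parentheses notation (the notation is invented for this sketch, and a real input format would also need to mark binding points); a layout tool could then consume its output:

    def parse(expr):
        # Parse "(label (child) (child))" notation into (label, children)
        # tuples; at most two children per node, per the binary-tree rule.
        tokens = expr.replace("(", " ( ").replace(")", " ) ").split()
        pos = 0

        def node():
            nonlocal pos
            assert tokens[pos] == "("
            pos += 1
            label = tokens[pos]
            pos += 1
            children = []
            while tokens[pos] == "(":
                children.append(node())
            assert tokens[pos] == ")" and len(children) <= 2
            pos += 1
            return label, children

        return node()

    print(parse("(eat (rabbit) (grass (fresh)))"))
    # -> ('eat', [('rabbit', []), ('grass', [('fresh', [])])])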

The downside of trees is that semantic structure is not limited to trees; knowledge graphs are arbitrary, generally non-planar graphs. But linear natural languages already deal with that successfully; expanding our linguistic medium from a line to a tree should still reduce the kinds of ambiguities that natural languages handle all the time. So, this sort of 2D language will require the equivalent of pronouns for cross-references; but they probably won't look much like spoken pronouns, and there's a lot more potential freedom in where you decide to make cuts in the semantic graph to turn it into a tree, and thus where pronouns get introduced to encode those missing edges, and those choices can probably be filled with pragmatic meaning on top of the implications of visual layout decisions.
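
Here is a sketch of that graph-to-tree cut, using networkx: take a spanning tree of the semantic graph, and every edge that had to be cut becomes a pronoun-like cross-reference. Which edges to cut is exactly the free choice described above (the example graph is, of course, made up):

    import networkx as nx

    # A small semantic graph with a cycle: the rabbit eats the grass,
    # the grass grows in the meadow, and the rabbit lives in the meadow.
    g = nx.Graph()
    g.add_edges_from([("rabbit", "eats"), ("eats", "grass"),
                      ("grass", "grows-in"), ("grows-in", "meadow"),
                      ("rabbit", "lives-in"), ("lives-in", "meadow")])

    tree = nx.minimum_spanning_tree(g)  # one arbitrary choice of cuts

    tree_edges = {frozenset(e) for e in tree.edges()}
    pronoun_edges = [e for e in g.edges() if frozenset(e) not in tree_edges]
    print(pronoun_edges)  # each cut edge must be re-encoded as a pronoun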

Now, what should words--the nodes in these trees--look like? It seems to be common in 2D languages for glyphs to be essentially arbitrary logographs, perhaps with standard boundary shapes or connection point shapes for different word classes. The philosophy behind UNLWS, that it should take maximal advantage of the native possibilities of the written visual medium, even encourages using iconic pictorial expressions when feasible. But that's not how natural languages work; even visual languages (i.e., sign languages), despite having more iconicity on average than oral languages, have a phonological system consisting of a finite number of basic combinatorial units that are used to build meaningful words, analogous to the finite number of phonemes that oral languages string together into arbitrary words. Since we've already got a certain limited set of graphical "phonological" items necessary for drawing syntax trees, and constraint breeds creativity, why not just re-use those?

[Image: the three idealized glyphemes (left) and their basic alloglyphs (right).]

Here we have an idealized representation of the available phonemes / graphemes / glyphemes: a vertex with one adjoining edge, a vertex with 2 adjoining edges, and a vertex with 3 adjoining edges. On the left, the three -emic forms. On the right, the basic allographic variants. In all cases, absolute orientation and chirality don't matter--if you mirror the "y" glyph, it is still the same glyph. Note that "graph" and "grapheme" are standard terms in linguistics for the written equivalents of "phones" and "phonemes", but that's gonna get really confusing when we're also talking about "graphs" in the mathematical sense. "Glyph" also has a technical meaning, but I am going to repurpose it here to talk about the basic units of this 2D language. So, we have glyphs, glyphemes, and alloglyphs, which are composed into graphs to form lexemes and phrases. Having only 3 glyphemes to work with may seem extremely limiting, but the expanded combinatorial possibilities of 2D arrangement, compared to 1D strings, make up for it.

While keeping syntax restricted to tree structures is the core idea of this language experiment, lexical items, which don't need to be invented and laid out on the fly, can be more general; we could allow them to be any planar graph. And just as syntax trees can be laid out in many different ways, we could say that lexical items are solely defined by their abstract graphs, which can also be laid out in many ways. But, it turns out that recognizing the topological equivalence of two graphs laid out in different ways is a computationally hard problem! If this language is to be usable by humans, that simply will not do. Thus, the layout for lexical items should be significant, up to rotation and reflection equivalence, so that their visual representations are easily recognizable. This doesn't require introducing any additional phonemic elements--the arrangement of phonemes and letters in one-dimensional natural language words also affects meaning, but we don't consider it "phonemic". Despite the Monty Python sketch about the guy who speaks in anagrams, spoken words are not just bags of sounds in arbitrary order, and written words are not just bags of letters--that's why, for example, "bat" and "tab" mean different things, and "bta" just isn't an English word at all. The spatial arrangement--which, in the case of natural language, works out to just linear order--matters a lot, and that sketch only works because it's precisely constructed to use close-enough anagrams with a lot of supporting context. So, what sort of glyphotactic rules should we have to determine the valid and recognizable arrangements of glyphs in 2D space?

With 3 edges per vertex, the most natural-seeming arrangement is to spread them out at 120-degree angles, and degree-2 vertices would sit nicely in a pattern with 180-degree angles (although we probably want to minimize those, since vertices are more noticeable if they are highlighted by a corresponding angle, rather than a straight line through them). That suggests a triangular grid, which can accommodate both arrangements. The idealized glyphemes and alloglyphs shown above are drawn assuming placement on such a triangular grid, with 60, 120, and 180-degree angles. (I will continue to refer to the features of glyphs in terms of 60, 120, and 180-degree angles, but these, too, are idealizations; in practice, non-equilateral grids might be used for artistic or typographic purposes--e.g., as an equivalent to italics--in which case these angle measurements should be interpreted as representing 1, 2, or 3 angular steps around a point in the grid.) So, words shouldn't be completely arbitrary planar graphs--they should be planar graphs with a particular layout on a triangular grid.

It does not make sense to extend a single grid across an entire phrase or sentence; the boundaries of trees grow exponentially, so you'd need a hyperbolic grid to do it in the general case, and hyperbolic paper is hard to come by (although laying out a sentence on a single common grid within, say, a Poincare-disc model of the hyperbolic plane might be a neat artistic exercise). Maintaining a grid within a word is sufficient to maintain graphical recognizability, and breaking the grid is one signal of the boundary between lexicon and morphology on one side and syntax on the other.

Making an analogy to chemistry, I feel, as an aesthetic preference, that word-graphs should have a minimal amount of "strain". That is, glyphotactically valid layouts should use 120-degree angles wherever possible, and squish them to 60 degrees or spread them to 180 degrees only where necessary. So, where is it necessary?

  • 60-degree angles should only occur on 3-vertex triangles, the acute points of 4-vertex diamonds, or as paired 60-degree angles on the interior of a hexagon.
  • 180-degree angles should only occur adjacent to 60-degree angles, or crossing vertices at the centers of hexagons.
Additional restrictions (the first and third of which are machine-checked in the sketch after this list):
  • All edges should be exactly one grid unit long--i.e., there are no words distinguished by having a straight line across multiple edges, vs. two edges with a 180-degree angle at a vertex in the middle.
  • Syntactic connections must occur on the outer boundary. I.e., you can't have a word embedded inside another word.
  • All vertices must have a maximum of three adjacent edges; thus, any word must have at least one exterior vertex with degree 2 or 1, to allow a syntactic edge to attach to it.
  • As they are nodes in a binary syntax tree, words can have at most 3 external syntactic connection points.
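
For instance, the unit-edge rule and the degree cap can be checked mechanically using axial coordinates on the triangular grid (the coordinate convention here is my own, and the angle and boundary rules would take more geometry than fits in a sketch):

    # Axial coordinates: vertex (q, r) sits at q*(1, 0) + r*(1/2, sqrt(3)/2)
    # in the plane, so exactly these six steps have unit length:
    STEPS = {(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)}

    def check_skeleton(vertices, edges):
        degree = {v: 0 for v in vertices}
        for a, b in edges:
            if (b[0] - a[0], b[1] - a[1]) not in STEPS:
                return False  # edge is not exactly one grid unit long
            degree[a] += 1
            degree[b] += 1
        # No vertex may exceed three adjacent edges:
        return all(d <= 3 for d in degree.values())

    # A 3-vertex triangle -- one of the legal 60-degree configurations:
    tri = [(0, 0), (1, 0), (0, 1)]
    print(check_skeleton(tri, [(tri[0], tri[1]), (tri[1], tri[2]),
                               (tri[2], tri[0])]))  # -> True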

With those restrictions in place, here are all of the possible word skeletons of 2, 3, or 4 vertices:

[Image: all possible word skeletons with 2, 3, or 4 vertices.]

I refer to these as "word skeletons" rather than full words because they abstract away the specification of syntactic binding points--and the choice of binding points may distinguish words (although they should probably be semantically-related words if I'm not being perverse!). Including all of the possible binding point patterns for every skeleton massively increases the number of possibilities, and it quickly gets impractically tedious to enumerate them all and write them down. Here are all of the word skeletons with 5 vertices:

[Image: all possible word skeletons with 5 vertices.]

And here are all of the word skeletons with 6 vertices:

[Image: all possible word skeletons with 6 vertices.]

And the number of possible 7-vertex words is... big. Counting graphs turns out to also be a hard problem, so I can't tell you exactly how fast the number of possible words grows, but it grows fast.
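
We can at least brute-force some related counts for small sizes. This sketch uses the networkx graph atlas (which contains every graph on up to 7 vertices) to count connected, planar graphs with maximum degree 3; note that it counts abstract graphs only, and a single abstract graph may admit several distinct legal grid layouts (or none), so these numbers only gesture at the true skeleton counts:

    import networkx as nx
    from networkx.generators.atlas import graph_atlas_g  # all graphs on <= 7 nodes

    def count_candidates(n):
        count = 0
        for g in graph_atlas_g():
            if (g.number_of_nodes() == n
                    and nx.is_connected(g)
                    and max(dict(g.degree()).values()) <= 3
                    and nx.check_planarity(g)[0]):
                count += 1
        return count

    for n in range(2, 8):
        print(n, count_candidates(n))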

Now, I just need to start actually assigning meanings to some of these....