
Friday, May 6, 2016

A Programming Language for Magic

Or, The Structure and Interpretation of Demonic Incantations, Part II.

As previously mentioned, a great deal of the training for a magician in the animist, demon-magic world consists of developing the mental discipline to compose precise and unambiguous instructions. Natural human languages are terrible at both precision and disambiguation; legalese is a direct result of this fact, and it still fails. In order to summon demons safely, therefore, a specialized formal language will be required- a programming (or "incantation") language.

Many people criticize FORTH and related languages for being hard to follow when data is passed implicitly on the program stack, but that's not an indictment of the concatenative style- it's an indictment of point-free (or, less charitably, "pointless") style; i.e., the avoidance of explicit variable names. Concatenative programs can use explicit variables for convenience, and applicative languages can be written in point-free style as well. This is in fact quite common in Haskell and various LISPs, though the syntax of most mainstream languages makes it less convenient.

Despite both being called "languages", programming languages aren't much like human languages. That is, after all, why we need them- because human languages don't do the job. Conversely, programming languages tend to not play nice with humans' natural language abilities. Nobody likes trying to speak a programming language, or even to write fluently in one. When working out ideas on a whiteboard, or describing an algorithm in a paper, real-world programmers and computer scientists frequently avoid real programming languages and fall back on intuitive "pseudo-code" which sacrifices rigor in exchange for more easily communicating the essential ideas to other humans (or themselves 10 minutes later!), with careful translation into correct, computer-interpretable code coming later.

Now, magicians could work in a similar way, and it makes sense that they frequently would: laboriously working out correct spells in a written incantation language ahead of time, and then executing them by summoning a demon with the simple instruction "perform the task described on this paper". One can even go further and imagine that, given the effort involved in creating such spells, they might be compiled into books for frequent re-use, stored in a university library, such that from then on magicians could execute common useful spells by instructing a demon to, e.g., "execute the incantation on page 312 of volume 3 of the Standard Encyclopedia of Spells as stored on shelf 4A of the Unseen University Library".

But, this is not a universal solution. For many simple spells, the description of where to find them may well be longer and more error-prone than simply reciting the spell itself. And, as any sysadmin could tell you, sometimes it's not worth writing a stored program for one-off tasks; you just need to be able to write a quick bash script on the command line. For magical purposes, therefore, we'll want a programming language that is optimized for speaking, while retaining the precision of any other programming language.

After some discussion on the CONLANG-L mailing list, I think I have a pretty good idea of (one possibility for) what a "magical scripting language" might look like.

The major problem that I see needing to be solved is parenthesization; one can't be expected to properly keep track of indentation levels in speech, and explicit brackets around everything can get really hard to follow even when you can look at what you're doing on a screen (which is why we have special editors that do bracket-matching for us!).

Ideally, the language would not require explicit parenthesization anywhere, and other "syntactic noise" and boilerplate code would be minimized, so that every token of the language carries as much semantic weight as possible, to make spells easier to memorize. This leads me to think that a concatenative, combinator-based language (like FORTH, or the non-programming-language conlang Fith) would be the best base, on top of which "idiomatic" flourishes could be added. Concatenative languages are often called "stack based", but they don't have to be implemented with a stack. They are in fact isomorphic to applicative languages like LISPs (i.e., there is a purely mechanical method for transforming any concatenative program into a fully-parenthesized applicative program and vice-versa), and can equally well be thought of as "data flow" languages, specifying how the outputs of one operation connect up with the inputs of another- a topic I have written about on this blog before.
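
To make that isomorphism concrete, here is a minimal Python sketch (purely illustrative- the arity table and token format are my assumptions, not any real language's) that mechanically converts a postfix concatenative program into a fully-parenthesized applicative expression:

ARITY = {'+': 2, '*': 2, '-': 2}   # assumed fixed arities; literals and variables take 0

def to_applicative(tokens):
    stack = []
    for word in tokens:
        n = ARITY.get(word, 0)
        if n == 0:
            stack.append(word)                  # values just pass through
        else:
            args = stack[-n:]                   # a word's inputs are the last n outputs
            del stack[-n:]
            stack.append('(' + ' '.join([word] + args) + ')')
    return ' '.join(stack)

# the RPN expression "y x * y -" becomes the nested "(- (* y x) y)"
print(to_applicative(['y', 'x', '*', 'y', '-']))

Running the same table in the other direction gives the reverse half of the isomorphism; words with multiple outputs would need a slightly richer representation, but the principle is the same.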

There is a trade-off between being able to keep track of complex data flow without any explicit variable names, which can get very difficult if certain items are used over and over again, and being required to keep track of lots of temporary or bookkeeping variables (like loop indices). A concatenative language that allows you to define variables when you want, but minimizes extra variables whenever they would constitute "syntactic noise" with minimal semantic value, seems to me the best of both worlds. This should make simple incantations relatively easy to compose on-the-fly, and easy to memorize, as they would be mostly flat lists of meaningful words, similar to natural language.

Once you introduce re-usable variables, however, you have to decide how variables are scoped. Letting all variables have the same meaning everywhere ("global scope") is an extremely bad idea; it makes it very dangerous to try re-using existing spells as components of a new spell, because variable names might be re-used for very different purposes in different sub-spells. If we want to be able to define and re-use sub-spells (i.e., functions, subroutines, procedures, or blocks, in terms of real-world programming languages), there are two major approaches to scoping rules available: dynamic scoping, and static scoping.

Loosely speaking, dynamic scoping means that any variables that are not defined in a function (or block) are the same as variables with the same name that are in-scope at the place where the function was called. This can only be figured out when a function is actually run, hence "dynamic". Static scoping means that any variables not defined in a function (or block) are the same as variables with the same name that are in-scope at the place where the function was defined. This can be figured out by looking at the text of a program without running it, hence "static".
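
To make the difference concrete, here is a small Python sketch- Python's own lexical scoping stands in for static scope, and an explicit stack of bindings simulates dynamic scope (the helper names are mine, not from any real library):

import contextlib

# Static (lexical) scope: a free variable means whatever was in scope
# where the function was *defined*.
def make_greeter():
    greeting = "hi"
    def greet(name):
        return greeting + " " + name
    return greet

greet = make_greeter()
greeting = "yo"                         # no effect: greet already captured "hi"
assert greet("Bob") == "hi Bob"

# Dynamic scope: a free variable means whatever binding is live where
# the function is *called*, simulated with an explicit binding stack.
_env = [{"greeting": "hi"}]

@contextlib.contextmanager
def binding(**bindings):
    _env.append({**_env[-1], **bindings})   # push the caller's bindings
    try:
        yield
    finally:
        _env.pop()                          # restore on the way out

def greet_dyn(name):
    return _env[-1]["greeting"] + " " + name    # looked up at call time

assert greet_dyn("Bob") == "hi Bob"
with binding(greeting="yo"):
    assert greet_dyn("Bob") == "yo Bob"     # the caller's binding wins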

Dynamic scope is generally easier to implement in (some kinds of) interpreters, while static scope is generally easier to implement in compilers, which early on in CS history led to a split between some languages using dynamic scope and some using static scope just because it was easier to implement the language that way. The modern consensus, however, is that dynamic scope is almost always a Bad Idea, because it makes it harder to analyze a program statically and prove that it is correct, whether with an automatic analysis tool or just in terms of being able to look at it and easily understand how the program works, without running it. Nevertheless, dynamic scope does have some legitimate use cases, and some modern languages provide work-arounds to let you enforce dynamic scoping where you really need it.

Scala, for example, simulates dynamic scoping with "implicit arguments", and this really highlights a good argument for dynamic scoping in a spoken programming language. One pain point with most programming languages, which would only be exacerbated in a spoken context, is remembering the proper order for long lists of function arguments. This can be ameliorated by named (rather than positional) arguments, but having to recite the names of every argument every time one calls on a sub-spell seems like a lot of boilerplate that it would be nice to be able to eliminate.

With dynamic scoping, however, you get the benefits of named arguments without having to actually list them either in a function definition (although that might well be a good idea for documentary purposes when spells are written down!) or when calling them. Just use reasonable names in the body of a sub-spell for whatever object that sub-spell is manipulating, and as long as variables with the appropriate names have already been defined when you wish to use a sub-spell, that's all you need. Making sure that those names refer to the correct things each time the sub-spell is called is taken care of automatically.

Examples

So, what might this incantation language actually look/sound like? Let's look first at a simple script to do your dishes after a meal:

loop [more-than COUNT-DISH 0] [let DISH [NEXT-DISH] DISH pickup DISH SINK move WATER DISH use SOAP DISH use DISH CABINET move]

Things that are assumed to be built-in to the language are in lower case, while things that are assumed to be magician-defined are in upper case. If every "verb" has fixed arity (so you always know exactly how many arguments it will need- there are no ambitransitives) then you don't need brackets around arguments or special delimiters between statements/expressions. We only need them to delimit "complement clauses"- i.e., blocks of code that are themselves arguments to other operators (in this case, "loop" and "let").
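
That claim is easy to verify mechanically. Here is a hypothetical Python sketch (the arity table is illustrative, and it only handles operator-first words like "more-than"- the argument-first action verbs would parse the same way, just scanning in the other direction) showing that a flat stream of fixed-arity words parses into a unique tree with no brackets at all:

ARITY = {'more-than': 2, 'COUNT-DISH': 0}   # assumed arities; unknown words take 0 arguments

def parse(words):
    word = words.pop(0)
    # each word consumes exactly ARITY[word] following subtrees- no
    # delimiters needed, because there is only one way to carve them up
    return (word, [parse(words) for _ in range(ARITY.get(word, 0))])

print(parse(['more-than', 'COUNT-DISH', '0']))
# ('more-than', [('COUNT-DISH', []), ('0', [])])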

If magicians want to create their own higher-order spells which take other spells as arguments, there's not much to be done about eliminating those brackets. There are transformations in combinator calculus that can be used to remove them, but the results are extremely difficult for a human to follow, and no one would compose them on-the-fly for any task of significant complexity. We could introduce a macro metaprogramming system that would let magicians define their own clause-delimiter syntax to eliminate cruft or enhance memorability, but then you might have to remember separate syntax for every sub-spell, which could get even worse than having to remember a single set of brackets. This is, in fact, a real problem that the LISP community (among others) deals with- if your language is too powerful, and too expressive, you get fragmentation problems, and end up with extra memory burden trying to keep track of what ridiculous things your co-workers decided to use that power for. It's often easier to enforce consistency.

Still, we can achieve some improvement by adding some "flourishes" to some of the basic, built-in operators in the language, replacing generic brackets with special construct-specific keywords. This is in fact a strategy taken by many real-world programming languages, which use block keywords like "while...wend" or "if...endif" rather than generic brackets for everything. With that alteration made, the spell might look something like this:

while more-than COUNT-DISH 0 loop let DISH be NEXT-DISH in DISH pickup DISH SINK move WATER DISH use SOAP DISH use DISH CABINET move wend

Now there's a lot of distance between the "loop" and the "wend", which could, in a more complicated spell, make it easy to lose track of, even with the specialized keywords. To help with that, we can do some aggressive let-abstraction- assigning blocks to variables, and then using the short variable name in place of the long literal block definition. That gets us a spell like

let WASHING be let DISH be NEXT-DISH in DISH pickup DISH SINK move WATER DISH use SOAP DISH use DISH CABINET move in
while more-than COUNT-DISH 0 loop WASHING wend


In some ways, that's better, but now we have the potentially-confusing sequence of nested "let"s ("let ... be let ... be ... in ... in"), and we can't move the second "let" outside the definition of "WASHING" because it actually needs to be re-evaluated, redefining "DISH", every time "WASHING" is called. This wouldn't be a big problem in a written programming language, but it's seriously annoying in speech. There are a few ways it could be fixed; one solution I like is allowing cataphoric variables (which refer to a later definition) in addition to anaphoric variables (which refer to an earlier definition), so we can switch up which structure to use to avoid repetition, and make a choice about which structure is more conceptually appropriate in different kinds of spells. Cataphoric variables are not terribly common in real-world programming languages, but they do exist, as in Haskell's "where"-clauses. Implementing "where" in our magic language, we can write the spell as

while more-than COUNT-DISH 0 loop WASHING wend
where WASHING is let DISH be NEXT-DISH in DISH pickup DISH SINK move WATER DISH use SOAP DISH use DISH CABINET move done


We might also take advantage of dynamic scoping to write the algorithm in a more functional style, mapping over the collection of dishes, like so:

for DISH in DISHES loop WASHING rof
where WASHING is DISH pickup DISH SINK move WATER DISH use SOAP DISH use DISH CABINET move done


or

let WASHING be DISH pickup DISH SINK move WATER DISH use SOAP DISH use DISH CABINET move in
for DISH in DISHES loop WASHING rof


In either case, the nested "let" to define the DISH variable is eliminated by binding a DISH variable in a for-in construct over a collection of DISHES, and relying on dynamic scope to make that variable available in the body of WASHING. Note that this avoids the need for spoken function parameter definitions, although it would be relatively easy to add syntax for them in cases where they do turn out to be useful.

Now, this looks like a pretty straightforward spell, but note that it only works if we assume that things like "pickup", "NEXT-DISH", "use", and so forth have all been rigorously and safely defined ahead of time! These are all actually pretty complex concepts, and in practice demon magic would probably not be used for relatively low-effort, high-descriptive-complexity tasks like washing dirty dishes. But, for magical scripting to be useful, we would want a lot of these kinds of everyday things to be pre-defined. And just as *NIX users tend to have opinions about what things they individually want to have available in their command-line environment, and maintain custom .bash_profile/.bashrc files per login account, I imagine that perhaps working magicians carry around phylacteries containing written copies of personally-preferred spells, and begin each incantation by referring to them to establish the appropriate execution environment.

Thursday, June 11, 2015

Uniform Call Syntax in Parles

After a long hiatus, I've come back to do a little more work on Parles. The GitHub repository now has a very simple but functioning compiler and VM that can run very simple programs. Now, I've got some thoughts on the next feature I'd like to add. Uniform Function Call Syntax is a concept implemented in several modern programming languages (like D, or Magpie) which helps make the core language smaller by treating object-oriented method calls, of the form foo.bar(baz), as simple syntactic sugar for function calls of the form bar(foo, baz). If you remove the parentheses, it becomes obvious that this is a fairly simple reordering rule: foo . bar baz gets rewritten as bar foo baz, with bar and foo switching places.

Parles already has two similar bits of token-reordering syntactic sugar (the ; and | operators), so adding a third one should really be no big deal. Since parentheses are already allowed in exactly the positions that you would put them in a method call in a language like C++ or JavaScript, this minor addition would allow Parles to simulate OO method call syntax exactly. We just need to figure out the rules for what exactly gets swapped with what.

The obvious simple solution is to just swap expressions (individual words or blocks). But, what if the expression preceding . produces multiple values? Calling a method on multiple values doesn't make much sense. It would be nice to define the . operator so that it requires the programmer to use it in a way that "makes sense" in terms of OO method calls, rather than as an arbitrary re-ordering operator. To handle that, we'll need to look at type information to guarantee that the output of foo is in fact a single value. It's also important that foo not require any arguments- i.e., that it actually be a value, not a partially-applied function call.

Additionally, we need to ensure that the method name (whatever comes after a .) is in fact a name - a single word - not an arbitrary expression. In other words, we want to make sure that programmers can write things like foo.bar, but not foo."hello world" or foo.(+ 1 2). That's a very simple syntax check that doesn't require any type information. It may also be a good idea to ensure that bar is a function that takes at least one argument (namely, foo).

These typing rules are similar to the restrictions that come with parens, and can be handled in a similar way. So, we'll start out by simply swapping the expressions on either side of a ., but adding implicit type annotations to the AST so that the type checker can ensure that a method call "makes sense".

Method Chaining

Swapping single expressions on either side of a . lets us do simple method calls like foo.bar(baz), which gets re-written as bar foo (baz). But what if we want to chain method calls together, writing something like foo.bar(baz).qux(fred)? If . associates to the left, this gets re-written in two steps as bar foo (baz).qux(fred) and then bar foo qux (baz) (fred), which is not what we want! It should be qux bar foo (baz) (fred). We don't want to use just the first expression to the left as the first argument to qux- we want to use the whole preceding method call. There are several possible approaches to fixing this.

First, we could try using type information, just like we would for auto-parenthesization. There is some circularity involved if we use type information to figure out how to re-order things, because we can't actually do type unification until we know the proper order for all expressions, but there is a way around that: only the left argument is potentially ambiguous (since we already said the right argument must always be a single word), so the type checker just needs to keep track of the types of all contiguous subsequences of expressions up to the point where it encounters a .. That takes O(n^2) time for arbitrarily long subsequences, but we can skip sequences that cross block boundaries or ; or | operators. In that case, splitting a long program into two halves with a single block or re-ordering operator also cuts the typechecking time in half. As long as the basic chunks are small enough, this should be reasonably fast, close to O(n) time to typecheck a complete program. With progressive typechecking, every time a . is encountered, the compiler can then select some previous sequence that takes no arguments and returns a single value, and use that sequence as the left argument for the . reordering operator. If the Parles implementation does auto-parenthesization, the necessary type information will have to be calculated anyway, and re-using it for method call left-argument identification adds no additional typechecking overhead. Given that there may be more than one type-compatible left subsequence, though, we still need a rule for how to consistently select exactly one of them. There are two sane options: take the longest type-appropriate sequence, or the shortest.
The longest possible sequence is unlikely to be what you really want in most cases, and it potentially requires programmers to hold a lot more stuff in their heads to understand how any particular program is put together. Using the shortest possible sequence seems like a much better choice; it keeps the scope of reordering smaller and easier to think about, and in most cases will identify just a single expression anyway. Unfortunately, that is also a problem. For example, if baz returns a single value, then foo.bar(baz).qux(fred) will end up being re-written as bar foo qux (baz) (fred), which is not what we wanted! This approach would work fine for chaining method calls with functions that take zero, two, or more arguments, but fails for methods of one argument, which is not an uncommon case.

So, it seems that relying on the typechecker to tell us how to rewrite method calls is not going to work. The second option is to simply ignore the problem, continue to only swap single expressions on either side of a ., and make programmers explicitly mark off whatever they want to function as the left argument in a method call with a block of some sort whenever it is larger than a single word. In that case, we'd just require foo.bar(baz).qux(fred) to be written instead as (foo.bar baz).qux(fred), or {foo.bar baz}.qux(fred). That fails to faithfully simulate how method chaining works in other languages, though, and is thus rather unsatisfying.

The third option is to still require explicit delimiters for left method arguments, but in a slightly more subtle way: we can simply always collect everything to the left of a . up to the last enclosing block boundary, variable binding, |, or ;, regardless of type information. That means that, if you want to embed foo.bar(baz) in the middle of a larger line, you'll probably have to explicitly mark it off as {foo.bar(baz)} so that the . operator doesn't suck up anything more to the left of foo. But honestly, if you're writing Parles code in an imperative, OO style where method call syntax makes sense, you're not likely to want to embed {foo.bar(baz)} in the middle of a line- you'll stick it on a line by itself terminated by a semicolon and beginning with a variable binding and/or preceded by a block boundary or another line terminated by a semicolon.

After all that consideration, then, that's what Parles will do: . becomes a straightforward re-ordering operator working purely at the surface syntactic level, like ; and |, which swaps everything to the left up to the last enclosing block, variable binding, semicolon, or pipe, with a single following word, and which introduces implicit type annotations similar to those implied by parentheses.
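
A hypothetical Python sketch of that reordering rule (token-level only, and ignoring variable bindings for brevity- this is the idea, not the actual Parles parser):

def left_boundary(out):
    # scan left, skipping over complete nested blocks; stop at ';', '|',
    # or an unmatched opening bracket (the boundary of the enclosing block)
    j, depth = len(out), 0
    while j > 0:
        t = out[j - 1]
        if t in ')]}':
            depth += 1
        elif t in '([{':
            if depth == 0:
                break
            depth -= 1
        elif t in (';', '|') and depth == 0:
            break
        j -= 1
    return j

def rewrite_dots(tokens):
    out, i = [], 0
    while i < len(tokens):
        if tokens[i] == '.':
            method, i = tokens[i + 1], i + 2    # the single following word
            j = left_boundary(out)
            out[j:] = [method] + out[j:]        # method jumps to the front
        else:
            out.append(tokens[i])
            i += 1
    return out

toks = 'foo . bar ( baz ) . qux ( fred )'.split()
assert rewrite_dots(toks) == 'qux bar foo ( baz ) ( fred )'.split()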

Addendum: the git commit which implements method call syntax can be seen here on GitHub.

Sunday, March 9, 2014

Parles: A Language With Less Irritating Superfluous Parenthesization

Well, I haven't blogged in rather a while. But, what with being in a Linguistics MA program, I've been feeling the need to take some me-time and do some Computer Science recently, and I might as well tell the world about it.

Building off my thoughts from Lisp Without Parentheses and Pointful Concatenative Programming, I've started working on a new language called Parles.

Most programming languages seem to me to have a rather inconsistent idea about evaluation order within a line- it usually goes left-to-right, the direction you read in, except when you have to jump backwards to the left to figure out what function to apply to all those argument expressions you just read left-to-right. Most concatenative languages are more straightforward- everything always goes left-to-right; but despite the wonderful consistency, this seems to cause Discomfort and Confusion to people coming from traditional Algol-style languages beyond what comes from just dealing with weird syntax (or the utter lack thereof- even less syntax than LISPs!). I have tried to resolve this issue in the design of Parles.

At its most basic level, Parles is essentially FORTH, read backwards: evaluation proceeds from right to left, with arguments evaluated before function calls (ignoring for a moment that all arguments *are* function calls), and appearing after them in the program text.

There are two ways to alter this basic behavior: semicolons and pipes.
The semicolon is the sequence operator; it is eliminated at parse time and causes everything parsed prior to the semicolon to be post-fixed to whatever comes after, up to the next semicolon or end-of-block. If you end every line with a semicolon, this will cause lines to be evaluated top-to-bottom, just as pretty much everybody ever expects them to be. Thus,

+ 1 2;
* 3 4;

is equivalent to

* 3 4 + 1 2

The trailing semicolon is irrelevant- it causes the entire program to be postfixed to the empty string, which is a no-op.
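
For the curious, the whole reordering rule fits in a few lines of Python (a sketch of the idea, not the actual Parles implementation):

def desugar_semicolons(tokens):
    # everything before a ';' gets postfixed to everything after it,
    # up to the next ';' (handled recursively) or end-of-block
    if ';' not in tokens:
        return tokens
    i = tokens.index(';')
    return desugar_semicolons(tokens[i + 1:]) + tokens[:i]

assert desugar_semicolons('+ 1 2 ; * 3 4 ;'.split()) == '* 3 4 + 1 2'.split()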

The pipe (|) operator behaves like a unix shell pipe- take the output from the stuff on the left, and feed it to the stuff on the right. It's essentially identical to the semicolon, except that it binds more tightly (re-ordering occurs up to the next pipe, semicolon, or block boundary) and encodes a type assertion that the output row type of the left side exactly matches the input row type of the right side. E.g., while

1 2 3; +

is totally valid, and results in "3 3" on the stack (with + popping 1 and 2 off the stack and replacing them with 3, and the original three coasting underneath untouched),

1 2 3 | +

is a type error, as the compound expression "1 2 3" has output type "int int int" while + expects only two ints as input. Pipe can be used to simulate infix operators, as long as there is a block boundary or semicolons to properly delimit the expressions whose types are to be matched. E.g.,

1 | + 2

is a valid infix version of

+ 2 1

There are three kinds of blocks, using three kinds of brackets: [], (), and {}.
[] blocks are called quotations and are taken pretty much straight from FORTH: they produce a function value whose type is equal to the type of the body. These are Parles's primary means of abstraction and delayed computation. In addition to all of the things that you would usually use functions for, they are necessary for using control-flow operators. For example, "if" in Parles is a function which takes a boolean and two quotations, and applies one or the other of them depending on the value of the boolean. Its usage looks like this:

if < 2 3
[print "true"]
[print "false"]

This is an excellent example of a circumstance where you do not want to end a line with a semicolon. Inserting semicolons would require re-ordering lines as follows (so that they will be properly un-reordered by the parser):

[print "false"];
[print "true"];
if < 2 3

which is the scary weird order that most FORTHs will make you write it in (modulo each line still being written backwards).

() blocks are argument blocks. They are evaluated immediately, but introduce a type assertion that the contained expression requires no arguments itself- i.e., takes nothing off the stack. Thus, they represent things that you would put parentheses around if you were writing in a traditional LISP. Indeed, you can write Parles to look like Scheme if you feel like putting parentheses everywhere you possibly can and not doing any interesting tricks with floating extra arguments on the stack. I considered extending the type assertion to require that the body terminate having placed exactly one value on the stack to serve as the return value, but plenty of LISPs already allow multi-value return, and it'd be a shame to eliminate that feature from a language to which it belongs so naturally.
Aside from organizing and documenting things for the sake of future readers of the code, the primary purpose of argument blocks is to annotate sections of code that can be executed in parallel due to a lack of input dependencies. A smart compiler would be able to identify such regions automatically, rather than just verify them when pointed out by a human, but that's Hard, so I probably won't implement it for a while, and putting in some clues to make your code easier to read is a good thing anyway.

{} blocks are just regular blocks, and they have no particularly special function except delimiting scope. As mentioned above, block boundaries limit semicolon and pipe re-ordering. In addition, however, all three kinds of blocks allow for an optional argument declaration that binds local variables by popping values off the stack. Thus, binding local variables is likely to be the most common use for regular blocks. That usage looks like this:

{a: str -> print print a a} "waka"

which binds "waka" to the string variable "a" and prints it twice ("wakawaka"). (This is slightly different from the syntax I played with in Pointful Concatenative Programming because I ended up needing the pipe character for something else.) All arguments (at least for now, until the type inference system is more advanced) must have a type declaration, and variables listed in the same order as the expressions which produced the values they bind to. E.g.,

{a: str b: num -> b a} "hi" 1

will bind "hi" to "a" and "1" to "b" (and swap their positions on the stack when it's done).

Quotations are a little bit special. Their argument binding behavior is just like what one would expect from anonymous functions in any other language: arguments are taken from the stack at the point of application, not at the point of creation (technically, this is true for all blocks, but for the others those are the same time). Quotations also form closures over their free variables that are bound in higher scopes.

So far, all functions (except built-in ones like "if") are anonymous. There is an "apply" operator for executing quotations stored on the top of the stack, but it is also possible to give quotations names by binding them to variables (as in pretty much every other language with first-class functions). In order to ensure equality between built-in and programmer-defined functions, using a variable that holds a quotation does not place that quotation value back on the stack, like accessing another value would; rather, it results in the application of the stored quotation. This means you can bind quotations to variable names and then use them just like the built-in words, without needing to use "apply". If you don't want them to be executed, just re-quote.

There is also one additional way to bind variables: using a lambda binding. This consists of the Unicode lowercase letter lambda (λ) or a backslash (\) followed immediately by a name and type assertion. This is translated at parse time into a block binding a single variable whose scope extends left until the next explicit block boundary (or EOF). The intent is that you put these things at the beginning of a line terminated by a semicolon, kind of like a regular variable declaration in an Algol-style language, but you can put them anywhere you like. So, one might decide to declare a new function like this:

\sayhi [print "!" print print "hi "];
sayhi "Bob"

or

\sayhi [name: str -> print "!" print name print "hi "];
sayhi "Bob"

which will both print out "hi Bob!"

All variables are (for now) immutable, but can be shadowed as much as you like. That includes all of the built-in functions; so far, the only keywords or reserved symbols are: the various brackets, the argument arrow "->", the colon for introducing type assertions, the backslash and lambda characters, and the semicolon and pipe characters. Everything else is free game for redefinition by shadowing. Use this power wisely.

As of now, you sadly can't actually run any Parles programs; there's an outline of a virtual machine for it, but no code-generator. The parser and type-checker (a critical component of the compiler, as you can't figure out what hooks up with what unless you can figure out the in- and out-arity of every word) are working though, so you can at least parse and type-check Parles programs with the Python scripts in the GitHub repo.

Thursday, September 12, 2013

Lisp Without Parentheses

Lisp is sometimes jokingly said to stand for "Lots of Irritating Superfluous Parentheses". Despite the joke, it turns out most of the parentheses in an average Lisp program are, in fact, superfluous. Let's examine the program for finding square roots by Newton's Method from SICP:

(define (sqrt-iter guess x)
  (if (good-enough? guess x)
      guess
      (sqrt-iter (improve guess x)
                 x)))

(define (improve guess x)
   (average guess (/ x guess)))
   
(define (average x y)
  (/ (+ x y) 2))

(define (good-enough? guess x)
  (< (abs (- (square guess) x)) 0.001))

(define (sqrt x)
  (sqrt-iter 1.0 x))

This isn't as bad as some Lisp programs, but even so, three closing-parens in a row starts to get hard to keep track of without editor support. Admittedly, most programmers have editor support (if your text editor can't match parens for you in 2013, you need to get a new text editor); but if we don't need them we might as well get rid of them. Parentheses do three things in this program: they indicate where function/macro applications start, they indicate where the argument lists for those applications end, and they delimit lists. If we know the signature of each function or macro, then we don't need parentheses for that second purpose- telling us where argument lists end. We can restrict them to simply marking applications (as opposed to functions that are being used as values), and delimiting lists (in this case, the only lists we need to worry about are the argument lists in function definitions). That gives us a transformed program that looks like this:

(define) '(sqrt-iter guess x)
  (if) (good-enough?) guess x
      guess
      (sqrt-iter) (improve) guess x
                 x

(define) '(improve guess x)
   (average) guess (/) x guess
   
(define) '(average x y)
  (/) (+) x y 2

(define) '(good-enough? guess x)
  (<) (abs) (-) square guess x 0.001

(define) '(sqrt x)
  (sqrt-iter) 1.0 x

For clarity, I've explicitly quoted the lists to distinguish an application from a list of a single element. This is starting to look a lot like Polish Notation with all of the operators parenthesized. You might think "Polish Notation arithmetic doesn't need parentheses because arithmetic operators use distinct symbols from operands"; true, but that turns out not to matter. If we think of values as functions that take no arguments and return themselves, and variables as functions that take no arguments and apply their values, then we can assume that everything is always applied by default, and then we don't need those parentheses (or anything else) to mark applications, giving us this:

define '(sqrt-iter guess x)
  if good-enough? guess x
      guess
      sqrt-iter improve guess x
                x

define '(improve guess x)
   average guess / x guess
   
define '(average x y)
  / + x y 2

define '(good-enough? guess x)
  < abs - square guess x 0.001

define '(sqrt x)
  sqrt-iter 1.0 x

This should look suspiciously like FORTH. If it doesn't, try capitalizing everything and reading it backwards (in Reverse Polish Notation, like old HP calculators use). I have a sneaking suspicion that FORTH variants have remained fairly niche languages largely because they put their arguments and function names backwards from everybody else (well, except for Assembly languages, but nobody really wants to program in assembly either). (Incidentally, this is very similar to how actual Lisps are compiled into concatenative languages like VM bytecode.)

As long as you have just enough static typing to keep track of the arity of each function, all of the original parenthesization can be reconstructed automatically (which means that we can still implement things like lazy evaluation or automatic parallelization under the hood). For example, since / and + each take exactly two arguments, "/ + x y 2" can only parse as "(/ (+ x y) 2)".

Now, we did lose something by eliminating parentheses as explicit delimiters of argument lists: you can't handle variadic functions anymore (well, unless you do crazy Haskell stuff). But, for the most part, you don't really need variadic functions. Lisps tend to map rest arguments to lists anyway, so just pass a list. If you want, say, variadic add, just write a function to fold binary add over a list. Something like...

define '(\+ l) fold + 0 l

...er, oops. Type error. That code will be automatically parenthesized as follows:

(define (\+ l) (fold (+ 0 l)))

+ needs two numbers, not a number and a list, and fold doesn't take a number as its first argument (plus, it needs a couple more). What we need is some way to turn off the application of + and get its value instead. We only need this for passing first-class function values as arguments, and that happens far less often than function applications do in most programs, so we can bring back parentheses for that (the exact opposite of their original purpose) and still have a net gain in program concision (and reduced nesting):

define '(\+ l) fold (+) 0 l

We're using parentheses here rather than some other symbol (like, say, a prefixed # or something) to indicate non-application, because we can actually put more stuff in there- a whole list of stuff, in fact. Maybe something like

define '(list-double-plus l) map (+ 1 * 2) l

What's really going on here is not that parentheses are suppressing application- rather, parentheses delimit lists (just like they do when quoted), but now, instead of representing function applications, unquoted lists represent functions that return functions composed of the contents of the list. (+) isn't a binary operator that hasn't been applied, rather it's a new function that returns a function composed solely of that binary operator. It gets applied as normal (to zero arguments), but its return value is what fold needed as its first argument. (+ 1 * 2) returns a function that's the composition of +, 1, *, and 2, with automatic currying. With these semantics, we don't actually need a separate "lambda" construct in the language anymore, since lists produce anonymous functions- as long as you're willing to program in point-free style, anyway.
If we just imagine the whole program as wrapped in one big set of top-level, outside parentheses, we can see that all programs are still lists- just less deeply nested ones. The language is still homoiconic, so it's still pretty easy to do metaprogramming, and it still makes sense to call this a dialect of Lisp.

Addendum on String Formatting:

The canonical example of a variadic function is a string formatter (which is what that Haskell variadic craziness was developed for, too). Since it's horribly stupid to get format strings out of variables, though, and you should always have them as literals in the program text, we can solve that problem by building string formatting/interpolation into the language: have the language implementation turn literal strings into functions that take a fixed number of arguments (for the interpolation slots) and return string values.
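
In Python terms, the lowering might look something like this sketch (the "{}" slot syntax is an assumption, standing in for whatever interpolation syntax the language would actually use):

def compile_format(literal):
    # a hypothetical compiler pass: the literal's slot count fixes the arity
    slots = literal.count('{}')
    def formatter(*args):
        assert len(args) == slots, "format arity is fixed by the literal"
        return literal.format(*args)
    return formatter

greet = compile_format("hi {}!")
assert greet("Bob") == "hi Bob!"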

Saturday, May 11, 2013

Pointful Concatenative Programming

Well, I managed to spend the last year or so working on parallelism and vau expressions without actually writing any blog posts about it. Next week I'm giving a conference presentation, and maybe after that I'll come back here and review the last year's work. But for now, I am on to something a little different.

Reading Why Concatenative Programming Matters has inspired me to experiment with concatenative languages. Concatenative languages are often called "stack-based" languages, because they are incredibly easy to implement with stacks, but that article enlightened me to the fact that stacks are not actually necessary- other evaluation options are possible. In fact, type analysis can be used to identify substrings of concatenative programs that take no inputs, which correspond to expressions in argument positions in functional languages and can all be executed in parallel. (In retrospect, this should have been obvious since functional languages compile to concatenative ones- this is just the reverse mapping.)
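
The analysis itself is simple arithmetic on stack effects. Here is a Python sketch of the idea (the effect table is assumed, and real words with multiple outputs or polymorphic types would need more machinery):

EFFECTS = {'*': (2, 1), '-': (2, 1), '+': (2, 1)}   # (inputs, outputs); literals are (0, 1)

def compose(e1, e2):
    # run e1 then e2: e2 consumes e1's outputs first, then reaches deeper
    (i1, o1), (i2, o2) = e1, e2
    return (i1 + max(0, i2 - o1), o2 + max(0, o1 - i2))

def needs_no_inputs(words):
    effect = (0, 0)
    for w in words:
        effect = compose(effect, EFFECTS.get(w, (0, 1)))
    return effect[0] == 0

assert needs_no_inputs(['3', '4', '*'])    # a complete argument expression
assert not needs_no_inputs(['*'])          # * still needs two values from the stack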

One of the reasons concatenative programming is cool is that it's point-free; i.e., you never have to specify variable names, which makes things nice and concise. Unfortunately, that can backfire and end up making things horribly unreadable, which is why concatenative languages tend to suck at expressing basic math. Sometimes, you just want to use a variable to put the same value in multiple places; even in RPN syntax, "y x * y -", using variables to put values in the right place on the virtual stack, is way easier to read and understand than "DUP ROT * SWAP -", using stack effect operators. Not having variables also means that concatenative languages tend to lack the ability to do variable capture, which means no closures.

I have thus been pondering a concatenative language that adds variable-binding lambda expressions. Something like this:

{ a b | b a }

would take two values off the top of the virtual stack and bind them to variables "a" and "b" within the scope of the expression. Variables are just words that take no arguments and return their bound values. The effect of this expression would then be to swap the top two elements on the virtual stack. Lots of otherwise basic, built-in words can be created with these kinds of expressions. E.g., DUP looks like this:

{ x | x x }

These sorts of lambda expressions would be interpreted like words and applied immediately, causing them to function rather like let-bindings in functional languages. Combining lambdas with quotation, however, allows us to create closures:

{ x | [{ y | x y + }] }

This expression binds one argument and returns a closure that takes another argument and adds it to the closed-over value.
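
As a proof of concept, here is a tiny Python sketch of these semantics (immediate application and value-returning variables only- quotations and real closure creation are left out for brevity, and the tuple encoding of lambdas is just my stand-in for parsed syntax):

def run(program, stack=None, env=None):
    stack = [] if stack is None else stack
    env = {} if env is None else env
    for word in program:
        if isinstance(word, tuple):            # ('lam', params, body)
            _, params, body = word
            scope = dict(env)
            for p in reversed(params):         # last-listed name binds the top
                scope[p] = stack.pop()
            run(body, stack, scope)            # applied immediately, like a word
        elif isinstance(word, str) and word in env:
            stack.append(env[word])            # variables return their bound values
        elif word == '+':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            stack.append(word)                 # literals push themselves
    return stack

# { a b | b a } swaps the top two stack values; { x | x x } is DUP
assert run([1, 2, ('lam', ['a', 'b'], ['b', 'a'])]) == [2, 1]
assert run([5, ('lam', ['x'], ['x', 'x'])]) == [5, 5]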

Well, that's all for now. Maybe, if I'm lucky, I'll have time this summer to start implementing this thing.