Thursday, June 11, 2015

Uniform Call Syntax in Parles

After a long hiatus, I've come back to do a little more work on Parles. The GitHub repository now has a very simple but functioning compiler and VM that can run very simple programs. Now, I've got some thoughts on the next feature I'd like to add. Uniform Function Call Syntax is a concept implemented in several modern programming languages (like D or Magpie) which helps make the core language smaller by treating object-oriented method calls, of the form foo.bar(baz), as simple syntactic sugar for function calls of the form bar(foo, baz). If you remove the parentheses, it becomes obvious that this is a fairly simple reordering rule: foo . bar baz gets rewritten as bar foo baz, with bar and foo switching places.

Parles already has two similar bits of token-reordering syntactic sugar (the ; and | operators), so adding a third one should really be no big deal. Since parentheses are already allowed in exactly the positions that you would put them in a method call in a language like C++ or JavaScript, this minor addition would allow Parles to simulate OO method call syntax exactly. We just need to figure out the rules for what exactly gets swapped with what.

The obvious simple solution is to just swap expressions (individual words or blocks). But what if the expression preceding . produces multiple values? Calling a method on multiple values doesn't make much sense. It would be nice to define the . operator so that it requires the programmer to use it in a way that "makes sense" in terms of OO method calls, rather than as an arbitrary re-ordering operator. To handle that, we'll need to look at type information to guarantee that the output of foo is in fact a single value. It's also important that foo not require any arguments; i.e., that it actually be a value, not a partially-applied function call.
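For illustration (hypothetical snippets, using the built-in print and + words from the original Parles post below):

(+ 1 2).print

is fine: the argument block takes nothing and produces exactly one value, so it can be rewritten as print (+ 1 2). On the other hand,

(1 2).print

should be rejected, because the left side produces two values, and

{+ 1}.print

should be rejected as well, because {+ 1} still needs one argument from the stack.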

Additionally, we need to ensure that the method name (whatever comes after a .) is in fact a name - a single word - not an arbitrary expression. In other words, we want to make sure that programmers can write things like foo.bar, but not foo."hello world" or foo.(+ 1 2). That's a very simple syntax check that doesn't require any type information. It may also be a good idea to ensure that bar is a function that takes at least one argument (namely, foo).

These typing rules are similar to the restrictions that come with parens, and can be handled in a similar way. So, we'll start out by simply swapping the expressions on either side of a ., but adding implicit type annotations to the AST so that the type checker can ensure that a method call "makes sense".
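In other words, after the swap, the receiver gets checked as though it had been parenthesized. As a hypothetical example,

foo.bar(baz)

would be typechecked as if it had been written

bar (foo) (baz)

with the implicit annotation on foo guaranteeing that it takes no arguments and produces a single value.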

Method Chaining

Swapping single expressions on either side of a . lets us do simple method calls like foo.bar(baz), which gets re-written as bar foo (baz). But what if we want to chain method calls together, writing something like foo.bar(baz).qux(fred)? If . associates to the left, this gets re-written in two steps as bar foo (baz).qux(fred) and then bar foo qux (baz) (fred), which is not what we want! It should be qux bar foo (baz) (fred). We don't want to use just the first expression to the left as the first argument to qux; we want to use the whole preceding method call. There are several possible approaches to fixing this.

First, we could try using type information, just like we would for auto-parenthesization. There is some circularity involved if we use type information to figure out how to re-order things, because we can't actually do type unification until we know the proper order for all expressions, but there is a way around that: only the left argument is potentially ambiguous (since we already said the right argument must always be a single word), so the type checker just needs to keep track of the types of all contiguous subsequences of expressions up to the point where it encounters a .. That takes O(n^2) time for arbitrarily long subsequences, but we can skip sequences that cross block boundaries or ; or | operators. In that case, splitting a long program into two halves with a single block or re-ordering operator also cuts the typechecking time in half. As long as the basic chunks are small enough, this should be reasonably fast: close to O(n) time to typecheck a complete program.

With progressive typechecking, every time a . is encountered, the compiler can then select some previous sequence that takes no arguments and returns a single value, and use that sequence as the left argument for the . reordering operator. If the Parles implementation does auto-parenthesization, the necessary type information will have to be calculated anyway, and re-using it for method call left-argument identification adds no additional typechecking overhead. Given that there may be more than one type-compatible left subsequence, though, we still need a rule for how to consistently select exactly one of them. There are two sane options: take the longest type-appropriate sequence, or the shortest.
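For a concrete (hypothetical) illustration of the ambiguity, consider:

+ 1 2.print

Both 2 on its own and the whole sequence + 1 2 take no arguments and produce a single value, so either is a type-compatible receiver for print: the longest-sequence rule effectively prints 3, while the shortest-sequence rule prints 2 and leaves + waiting for a second argument.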
The longest possible sequence is unlikely to be what you really want in most cases, and it potentially requires programmers to hold a lot more stuff in their heads to understand how any particular program is put together. Using the shortest possible sequence seems like a much better choice; it keeps the scope of reordering smaller and easier to think about, and in most cases will identify just a single expression anyway. Unfortunately, the shortest-sequence rule has a problem of its own. For example, if baz returns a single value, then foo.bar(baz).qux(fred) will end up being re-written as bar foo qux (baz) (fred), which is not what we wanted! This approach would work fine for chaining method calls with functions that take zero, two, or more arguments, but it fails for methods of one argument, which is not an uncommon case.

So, it seems that relying on the typechecker to tell us how to rewrite method calls is not going to work. The second option is to simply ignore the problem, continue to only swap single expressions on either side of a ., and make programmers explicitly mark off whatever they want to serve as the left argument of a method call with a block of some sort whenever it is larger than a single word. In that case, we'd just require foo.bar(baz).qux(fred) to be written instead as (foo.bar baz).qux(fred), or {foo.bar baz}.qux(fred). That fails to faithfully simulate how method chaining works in other languages, though, and is thus rather unsatisfying.

The third option is to still require explicit delimiters for left method arguments, but in a slightly more subtle way: we can simply always collect everything to the left of a . up to the nearest enclosing block boundary, variable binding, |, or ;, regardless of type information. That means that, if you want to embed foo.bar(baz) in the middle of a larger line, you'll probably have to explicitly mark it off as {foo.bar(baz)} so that the . operator doesn't suck up anything more to the left of foo. But honestly, if you're writing Parles code in an imperative, OO style where method call syntax makes sense, you're not likely to want to embed {foo.bar(baz)} in the middle of a line; you'll stick it on a line by itself, terminated by a semicolon, and beginning with a variable binding and/or preceded by a block boundary or another line terminated by a semicolon.
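For instance (hypothetically, reusing the names from above), if foo.bar(baz) is embedded in the middle of a line,

print foo.bar(baz)

the . would collect everything back to the start of the line and rewrite it as

bar print foo (baz)

which is almost certainly not what was meant, whereas

print {foo.bar(baz)}

keeps the rewrite contained inside the braces, yielding print {bar foo (baz)}.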

After all that consideration, then, that's what Parles will do: . becomes a straightforward re-ordering operator working purely at the surface syntactic level, like ; and |, which swaps everything to the left, up to the nearest enclosing block, variable binding, semicolon, or pipe, with a single following word, and which introduces implicit type annotations similar to those implied by parentheses.
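Under this rule, the chained call from earlier rewrites as intended, one . at a time:

foo.bar(baz).qux(fred)
bar foo (baz).qux(fred)
qux bar foo (baz) (fred)

The first . collects only foo, since nothing else precedes it; the second collects the whole of bar foo (baz), giving exactly the result we wanted.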

Addendum: the git commit which implements method call syntax can be seen on GitHub.

Sunday, March 9, 2014

Parles: A Language With Less Irritating Superfluous Parenthesization

Well, I haven't blogged in rather a while. But, what with being in a Linguistics MA program, I've been feeling the need to take some me-time and do some Computer Science recently, and I might as well tell the world about it.

Building off my thoughts from Lisp Without Parentheses and Pointful Concatenative Programming, I've started working on a new language called Parles.

Most programming languages seem to me to have a rather inconsistent idea about evaluation order within a line: it usually goes left-to-right, the direction you read in, except when you have to jump backwards to the left to figure out what function to apply to all those argument expressions you just read left-to-right. Most concatenative languages are more straightforward: everything always goes left-to-right; but despite the wonderful consistency, this seems to cause Discomfort and Confusion to people coming from traditional Algol-style languages beyond what comes from just dealing with weird syntax (or the utter lack thereof: even less syntax than LISPs!). I have tried to resolve this issue in the design of Parles.

At its most basic level, Parles is essentially FORTH, read backwards: evaluation proceeds from right to left, with arguments evaluated before function calls (ignoring for a moment that all arguments *are* function calls), and appearing after them in the program text.
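For example, in

print + 1 2

the literal 2 is evaluated (pushed on the stack) first, then 1, then + consumes both and leaves 3, and finally print consumes the 3 and prints it.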

There are two ways to alter this basic behavior: semicolons and pipes.
The semicolon is the sequence operator; it is eliminated at parse time and causes everything parsed prior to the semicolon to be post-fixed to whatever comes after, up to the next semicolon or end-of-block. If you end every line with a semicolon, this will cause lines to be evaluated top-to-bottom, just as pretty much everybody ever expects them to be. Thus,

+ 1 2;
* 3 4;

is equivalent to

* 3 4 + 1 2

The trailing semicolon is irrelevant: it causes the entire program to be postfixed to the empty string, which is a no-op.

The pipe (|) operator behaves like a unix shell pipe- take the output from the stuff on the left, and feed it to the stuff on the right. It's essentially identical to the semicolon, except that it binds more tightly (re-ordering occurs up to the next pipe, semicolon, or block boundary) and encodes a type assertion that the output row type of the left side exactly matches the input row type of the right side. E.g., while

1 2 3; +

is totally valid, and results in "3 3" on the stack (with + popping 1 and 2 off the stack and replacing them with 3, and the original three coasting underneath untouched),

1 2 3 | +

is a type error, as the compound expression "1 2 3" has output type "int int int" while + expects only two ints as input. Pipe can be used to simulate infix operators, as long as there is a block boundary or semicolons to properly delimit the expressions whose types are to be matched. E.g.,

1 | + 2

is a valid infix version of

+ 2 1

There are three kinds of blocks, using three kinds of brackets: [], (), and {}.
[] blocks are called quotations and are taken pretty much straight from FORTH: they produce a function value whose type is equal to the type of the body. These are Parles's primary means of abstraction and delayed computation. In addition to all of the things that you would usually use functions for, they are necessary for using control-flow operators. For example, "if" in Parles is a function which takes a boolean and two quotations, and applies one or the other of them depending on the value of the boolean. Its usage looks like this:

if < 2 3
[print "true"]
[print "false"]

This is an excellent example of a circumstance where you do not want to end a line with a semicolon. Inserting semicolons would require re-ordering lines as follows (so that they will be properly un-reordered by the parser):

[print "false"];
[print "true"];
if < 2 3

which is the scary weird order that most FORTHs will make you write it in (modulo each line still being written backwards).

() blocks are argument blocks. They are evaluated immediately, but introduce a type assertion that the contained expression requires no arguments itself; i.e., it takes nothing off the stack. Thus, they represent things that you would put parentheses around if you were writing in a traditional LISP. Indeed, you can write Parles to look like Scheme if you feel like putting parentheses everywhere you possibly can and not doing any interesting tricks with floating extra arguments on the stack. I considered extending the type assertion to require that the body terminate having placed exactly one value on the stack to serve as the return value, but plenty of LISPs already allow multi-value return, and it'd be a shame to eliminate that feature from a language to which it belongs so naturally.
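For instance, a fully parenthesized (if rather un-idiomatic) bit of Parles:

(print (+ 1 (* 2 3)))

which, modulo the name of the print function, is exactly what the same expression would look like in Scheme.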
Aside from organizing and documenting things for the sake of future readers of the code, the primary purpose of argument blocks is to annotate sections of code that can be executed in parallel due to a lack of input dependencies. A smart compiler would be able to identify such regions automatically, rather than just verify them when pointed out by a human, but that's Hard, so I probably won't implement it for a while, and putting in some clues to make your code easier to read is a good thing anyway.

{} blocks are just regular blocks, and they have no particularly special function except delimiting scope. As mentioned above, block boundaries limit semicolon and pipe re-ordering. In addition, however, all three kinds of blocks allow for an optional argument declaration that binds local variables by popping values off the stack. Thus, binding local variables is likely to be the most common use for regular blocks. That usage looks like this:

{a: str -> print print a a} "waka"

which binds "waka" to the string variable "a" and prints it twice ("wakawaka"). (This is slightly different from the syntax I played with in Pointful Concatenative Programming because I ended up needing the pipe character for something else.) All arguments (at least for now, until the type inference system is more advanced) must have a type declaration, and variables are listed in the same order as the expressions which produced the values they bind to. E.g.,

{a: str b: num -> b a} "hi" 1

will bind "hi" to "a" and "1" to "b" (and swap their positions on the stack when it's done).

Quotations are a little bit special. Their argument binding behavior is just like what one would expect from anonymous functions in any other language: arguments are taken from the stack at the point of application, not at the point of creation (technically, this is true for all blocks, but for the other kinds of blocks creation and application happen at the same time). Quotations also form closures over their free variables that are bound in higher scopes.
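For example (hypothetically),

{a: str -> [print a]} "waka"

leaves a quotation on the stack that has closed over a; applying it later, even after the block's scope has ended, will still print "waka".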

So far, all functions (except built-in ones like "if") are anonymous. There is an "apply" operator for executing quotations stored on the top of the stack, but it is also possible to give quotations names by binding them to variables (as in pretty much every other language with first-class functions). In order to keep built-in and programmer-defined functions interchangeable, using a variable that holds a quotation does not place that quotation value back on the stack, like accessing any other value would; rather, it results in the application of the stored quotation. This means you can bind quotations to variable names and then use them just like the built-in words, without needing to use "apply". If you don't want them to be executed, just re-quote.
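For instance (hypothetical snippets):

apply [+ 1 2]

pushes the quotation and immediately applies it, leaving 3 on the stack. And given the sayhi binding defined just below, writing sayhi applies the stored quotation, while [sayhi] pushes a fresh quotation that wraps it without running it.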

There is also one additional way to bind variables: using a lambda binding. This consists of the Unicode lowercase letter lambda (λ) or a backslash (\) followed immediately by a name and type assertion. This is translated at parse time into a block binding a single variable whose scope extends left until the next explicit block boundary (or EOF). The intent is that you put these things at the beginning of a line terminated by a semicolon, kind of like a regular variable declaration in an Algol-style language, but you can put them anywhere you like. So, one might decide to declare a new function like this:

\sayhi [print "!" print print "hi "];
sayhi "Bob"

or

\sayhi [name: str -> print "!" print name print "hi "];
sayhi "Bob"

which will both print out "hi Bob!"

All variables are (for now) immutable, but can be shadowed as much as you like. That includes all of the built-in functions; so far, the only keywords or reserved symbols are the various brackets, the argument arrow "->", the colon for introducing type assertions, the backslash and lambda characters, and the semicolon and pipe characters. Everything else is fair game for redefinition by shadowing. Use this power wisely.

As of now, you sadly can't actually run any Parles programs; there's an outline of a virtual machine for it, but no code generator. The parser and type-checker (a critical component of the compiler, as you can't figure out what hooks up with what unless you can figure out the in- and out-arity of every word) are working, though, so you can at least parse and type-check Parles programs with the Python scripts in the GitHub repo.