Thursday, June 11, 2015

Uniform Call Syntax in Parles

After a long hiatus, I've come back to do a little more work on Parles. The GitHub repository now has a very simple but functioning compiler and VM that can run very simple programs. Now, I've got some thoughts on the next feature I'd like to add. Uniform Function Call Syntax is a concept implemented in several modern programming languages (like D, or Magpie) which helps make the core language smaller by treating object-oriented method calls, of the form foo.bar(baz), as simple syntactic sugar for function calls of the form bar(foo, baz). If you remove the parentheses, it becomes obvious that this is a fairly simple reordering rule: foo . bar baz gets rewritten as bar foo baz, with bar and foo switching places.
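Since the Parles tooling is written in Python, the basic swap rule can be sketched there. This is an illustrative sketch over a flat token list, not code from the repository; it handles only the simplest case (one word on each side of the dot), with chained calls and blocks discussed below:

```python
def rewrite_ufcs(tokens):
    """Desugar a method call at the token level:
    ['foo', '.', 'bar', 'baz'] -> ['bar', 'foo', 'baz'],
    i.e. swap the receiver with the method name."""
    out = []
    i = 0
    while i < len(tokens):
        if i + 1 < len(tokens) and tokens[i + 1] == '.':
            # swap the word before the dot with the word after it
            out.extend([tokens[i + 2], tokens[i]])
            i += 3
        else:
            out.append(tokens[i])
            i += 1
    return out
```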

Parles already has two similar bits of token-reordering syntactic sugar (the ; and | operators), so adding a third one should really be no big deal. Since parentheses are already allowed in exactly the positions that you would put them in a method call in a language like C++ or JavaScript, this minor addition would allow Parles to simulate OO method call syntax exactly. We just need to figure out the rules for what exactly gets swapped with what.

The obvious simple solution is to just swap expressions (individual words or blocks). But, what if the expression preceding . produces multiple values? Calling a method on multiple values doesn't make much sense. It would be nice to define the . operator so that it requires the programmer to use it in a way that "makes sense" in terms of OO method calls, rather than as an arbitrary re-ordering operator. To handle that, we'll need to look at type information to guarantee that the output of foo is in fact a single value. It's also important that foo not require any arguments- i.e., that it actually be a value, not a partially-applied function call.

Additionally, we need to ensure that the method name (whatever comes after a .) is in fact a name - a single word - not an arbitrary expression. In other words, we want to make sure that programmers can write things like foo.bar, but not foo."hello world" or foo.(+ 1 2). That's a very simple syntax check that doesn't require any type information. It may also be a good idea to ensure that bar is a function that takes at least one argument (namely, foo).

These typing rules are similar to the restrictions that come with parens, and can be handled in a similar way. So, we'll start out by simply swapping the expressions on either side of a ., but adding implicit type annotations to the AST so that the type checker can ensure that a method call "makes sense".

Method Chaining

Swapping single expressions on either side of a . lets us do simple method calls like foo.bar(baz), which gets re-written as bar foo (baz). But what if we want to chain method calls together, writing something like foo.bar(baz).qux(fred)? If . associates to the left, this gets re-written in two steps as bar foo (baz).qux(fred) and then bar foo qux (baz) (fred), which is not what we want! It should be qux bar foo (baz) (fred). We don't want to use just the first expression to the left as the first argument to qux- we want to use the whole preceding method call. There are several possible approaches to fixing this.

First, we could try using type information, just like we would for auto-parenthesization. There is some circularity involved if we use type information to figure out how to re-order things, because we can't actually do type unification until we know the proper order for all expressions, but there is a way around that: only the left argument is potentially ambiguous (since we already said the right argument must always be a single word), so the type checker just needs to keep track of the types of all contiguous subsequences of expressions up to the point where it encounters a .. That takes O(n^2) time for arbitrarily long subsequences, but we can skip sequences that cross block boundaries or ; or | operators. In that case, splitting a long program into two halves with a single block or re-ordering operator also cuts the typechecking time in half. As long as the basic chunks are small enough, this should be reasonably fast, close to O(n) time to typecheck a complete program. With progressive typechecking, every time a . is encountered, the compiler can then select some previous sequence that takes no arguments and returns a single value, and use that sequence as the left argument for the . reordering operator. If the Parles implementation does auto-parenthesization, the necessary type information will have to be calculated anyway, and re-using it for method call left-argument identification adds no additional typechecking overhead. Given that there may be more than one type-compatible left subsequence, though, we still need a rule for how to consistently select exactly one of them. There are two sane options: take the longest type-appropriate sequence, or the shortest.
The longest possible sequence is unlikely to be what you really want in most cases, and it potentially requires programmers to hold a lot more stuff in their heads to understand how any particular program is put together. Using the shortest possible sequence seems like a much better choice; it keeps the scope of reordering smaller and easier to think about, and in most cases will identify just a single expression anyway. Unfortunately, that is also a problem. For example, if baz returns a single value, then foo.bar(baz).qux(fred) will end up being re-written as bar foo qux (baz) (fred), which is not what we wanted! This approach would work fine for chaining method calls with functions that take zero, two, or more arguments, but fails for methods of one argument, which is not an uncommon case.

So, it seems that relying on the typechecker to tell us how to rewrite method calls is not going to work. The second option is to simply ignore the problem, continue to only swap single expressions on either side of a ., and make programmers explicitly mark off whatever they want to function as the left argument in a method call with a block of some sort whenever it is larger than a single word. In that case, we'd just require foo.bar(baz).qux(fred) to be written instead as (foo.bar baz).qux(fred), or {foo.bar baz}.qux(fred). That fails to faithfully simulate how method chaining works in other languages, though, and is thus rather unsatisfying.

The third option is to still require explicit delimiters for left method arguments, but in a slightly more subtle way: we can simply always collect everything to the left of a . up to the last enclosing block boundary, variable binding, |, or ;, regardless of type information. That means that, if you want to embed foo.bar(baz) in the middle of a larger line, you'll probably have to explicitly mark it off as {foo.bar(baz)} so that the . operator doesn't suck up anything more to the left of foo. But honestly, if you're writing Parles code in an imperative, OO style where method call syntax makes sense, you're not likely to want to embed {foo.bar(baz)} in the middle of a line- you'll stick it on a line by itself terminated by a semicolon and beginning with a variable binding and/or preceded by a block boundary or another line terminated by a semicolon.

After all that consideration, then, that's what Parles will do: . becomes a straightforward re-ordering operator working purely at the surface syntactic level, like ; and |, which swaps everything to the left up to the last enclosing block, variable binding, semicolon, or pipe, with a single following word, and which introduces implicit type annotations similar to those implied by parentheses.
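That final rule can be sketched at the token level in Python. This is an illustrative sketch (the bracket and stopper sets are assumptions about the tokenizer, not code from the repository); note that it handles the chained case correctly because a nested block to the left is skipped as a single unit:

```python
OPENERS = {'(', '[', '{'}
CLOSERS = {')', ']', '}'}
STOPPERS = {';', '|', '->'}  # semicolons, pipes, and variable bindings

def rewrite_method_call(tokens, dot_index):
    """The left argument of '.' is everything back to the nearest
    enclosing block boundary, binding arrow, ';', or '|'; it is
    swapped with the single word following the dot."""
    start = dot_index
    while start > 0:
        t = tokens[start - 1]
        if t in OPENERS or t in STOPPERS:
            break
        if t in CLOSERS:
            # skip a whole nested block as one unit
            depth = 0
            while True:
                start -= 1
                if tokens[start] in CLOSERS:
                    depth += 1
                elif tokens[start] in OPENERS:
                    depth -= 1
                    if depth == 0:
                        break
        else:
            start -= 1
    left = tokens[start:dot_index]
    method = tokens[dot_index + 1]
    return tokens[:start] + [method] + left + tokens[dot_index + 2:]
```

Applying this once per dot, left to right, turns foo.bar(baz).qux(fred) into qux bar foo (baz) (fred), as desired.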

Addendum: the git commit which implements method call syntax can be seen on GitHub.

Sunday, March 9, 2014

Parles: A Language With Less Irritating Superfluous Parenthesization

Well, I haven't blogged in rather a while. But, what with being in a Linguistics MA program, I've been feeling the need to take some me-time and do some Computer Science recently, and I might as well tell the world about it.

Building off my thoughts from Lisp Without Parentheses and Pointful Concatenative Programming, I've started working on a new language called Parles.

Most programming languages seem to me to have a rather inconsistent idea about evaluation order within a line- it usually goes left-to-right, the direction you read in, except when you have to jump backwards to the left to figure out what function to apply to all those argument expressions you just read left-to-right. Most concatenative languages are more straightforward- everything always goes left-to-right; but despite the wonderful consistency, this seems to cause Discomfort and Confusion to people coming from traditional Algol-style languages beyond what comes from just dealing with weird syntax (or the utter lack thereof- even less syntax than LISPs!). I have tried to resolve this issue in the design of Parles.

At its most basic level, Parles is essentially FORTH, read backwards: evaluation proceeds from right to left, with arguments evaluated before function calls (ignoring for a moment that all arguments *are* function calls), and appearing after them in the program text.
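A toy model of this evaluation order in Python (the words table here is a made-up stand-in for Parles built-ins, and literals are just integers; the real VM is of course richer than this):

```python
def eval_parles(tokens, words):
    """Right-to-left, stack-based evaluation ("FORTH read backwards"):
    the rightmost token runs first, so a word's arguments are already
    on the stack by the time the word itself is reached. 'words' maps
    a name to an (arity, implementation) pair; anything else is
    treated as an integer literal."""
    stack = []
    for tok in reversed(tokens):
        if tok in words:
            arity, fn = words[tok]
            args = [stack.pop() for _ in range(arity)]  # first arg is topmost
            stack.append(fn(*args))
        else:
            stack.append(int(tok))
    return stack

WORDS = {'+': (2, lambda a, b: a + b),
         '*': (2, lambda a, b: a * b)}
```

For example, eval_parles(['+', '1', '2'], WORDS) leaves [3] on the stack.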

There are two ways to alter this basic behavior: semicolons and pipes.
The semicolon is the sequence operator; it is eliminated at parse time and causes everything parsed prior to the semicolon to be post-fixed to whatever comes after, up to the next semicolon or end-of-block. If you end every line with a semicolon, this will cause lines to be evaluated top-to-bottom, just as pretty much everybody ever expects them to be. Thus,

+ 1 2;
* 3 4;

is equivalent to

* 3 4 + 1 2

The trailing semicolon is irrelevant- it causes the entire program to be postfixed to the empty string, which is a no-op.
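The semicolon rewrite can be sketched as a simple fold over chunks. This sketch ignores block boundaries (which also delimit the re-ordering) and is not code from the repository:

```python
def rewrite_semicolons(tokens):
    """';' postfixes everything before it to the chunk that follows,
    up to the next ';', so lines written top-to-bottom end up in
    Parles's right-to-left order."""
    chunks, current = [], []
    for tok in tokens:
        if tok == ';':
            chunks.append(current)
            current = []
        else:
            current.append(tok)
    chunks.append(current)
    out = []
    for chunk in chunks:
        out = chunk + out  # each earlier chunk moves after the next
    return out
```

On the example above, rewrite_semicolons(['+', '1', '2', ';', '*', '3', '4', ';']) yields ['*', '3', '4', '+', '1', '2'].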

The pipe (|) operator behaves like a unix shell pipe- take the output from the stuff on the left, and feed it to the stuff on the right. It's essentially identical to the semicolon, except that it binds more tightly (re-ordering occurs up to the next pipe, semicolon, or block boundary) and encodes a type assertion that the output row type of the left side exactly matches the input row type of the right side. E.g., while

1 2 3; +

is totally valid, and results in "3 3" on the stack (with + popping 1 and 2 off the stack and replacing them with 3, and the original three coasting underneath untouched),

1 2 3 | +

is a type error, as the compound expression "1 2 3" has output type "int int int" while + expects only two ints as input. Pipe can be used to simulate infix operators, as long as there is a block boundary or semicolons to properly delimit the expressions whose types are to be matched. E.g.,

1 | + 2

is a valid infix version of

+ 2 1

There are three kinds of blocks, using three kinds of brackets: [], (), and {}.
[] blocks are called quotations and are taken pretty much straight from FORTH: they produce a function value whose type is equal to the type of the body. These are Parles's primary means of abstraction and delayed computation. In addition to all of the things that you would usually use functions for, they are necessary for using control-flow operators. For example, "if" in Parles is a function which takes a boolean and two quotations, and applies one or the other of them depending on the value of the boolean. Its usage looks like this:

if < 2 3
[print "true"]
[print "false"]

This is an excellent example of a circumstance where you do not want to end a line with a semicolon. Inserting semicolons would require re-ordering lines as follows (so that they will be properly un-reordered by the parser):

[print "false"];
[print "true"];
if < 2 3

which is the scary weird order that most FORTHs will make you write it in (modulo each line still being written backwards).

() blocks are argument blocks. They are evaluated immediately, but introduce a type assertion that the contained expression requires no arguments itself- i.e., takes nothing off the stack. Thus, they represent things that you would put parentheses around if you were writing in a traditional LISP. Indeed, you can write Parles to look like Scheme if you feel like putting parentheses everywhere you possibly can and not doing any interesting tricks with floating extra arguments on the stack. I considered extending the type assertion to require that the body terminate having placed exactly one value on the stack to serve as the return value, but plenty of LISPs already allow multi-value return, and it'd be a shame to eliminate that feature from a language to which it belongs so naturally.
Aside from organizing and documenting things for the sake of future readers of the code, the primary purpose of argument blocks is to annotate sections of code that can be executed in parallel due to a lack of input dependencies. A smart compiler would be able to identify such regions automatically, rather than just verify them when pointed out by a human, but that's Hard, so I probably won't implement it for a while, and putting in some clues to make your code easier to read is a good thing anyway.

{} blocks are just regular blocks, and they have no particularly special function except delimiting scope. As mentioned above, block boundaries limit semicolon and pipe re-ordering. In addition, however, all three kinds of blocks allow for an optional argument declaration that binds local variables by popping values off the stack. Thus, binding local variables is likely to be the most common use for regular blocks. That usage looks like this:

{a: str -> print print a a} "waka"

which binds "waka" to the string variable "a" and prints it twice ("wakawaka"). (This is slightly different from the syntax I played with in Pointful Concatenative Programming because I ended up needing the pipe character for something else.) All arguments (at least for now, until the type inference system is more advanced) must have a type declaration, and variables must be listed in the same order as the expressions which produced the values they bind to. E.g.,

{a: str b: num -> b a} "hi" 1

will bind "hi" to "a" and "1" to "b" (and swap their positions on the stack when it's done).

Quotations are a little bit special. Their argument binding behavior is just like what one would expect from anonymous functions in any other language: arguments are taken from the stack at the point of application, not at the point of creation (technically, this is true for all blocks, but for the others those are the same time). Quotations also form closures over their free variables that are bound in higher scopes.

So far, all functions (except built-in ones like "if") are anonymous. There is an "apply" operator for executing quotations stored on the top of the stack, but it is also possible to give quotations names by binding them to variables (as in pretty much every other language with first-class functions). In order to ensure equality between built-in and programmer-defined functions, using a variable that holds a quotation does not place that quotation value back on the stack, like accessing another value would; rather, it results in the application of the stored quotation. This means you can bind quotations to variable names and then use them just like the built-in words, without needing to use "apply". If you don't want them to be executed, just re-quote.

There is also one additional way to bind variables: using a lambda binding. This consists of the unicode lowercase letter lambda (λ) or a backslash (\) followed immediately by a name and type assertion. This is translated at parse time into a block binding a single variable whose scope extends left until the next explicit block boundary (or EOF). The intent is that you put these things at the beginning of a line terminated by a semicolon, kind of like a regular variable declaration in an Algol-style language, but you can put them anywhere you like. So, one might decide to declare a new function like this:

\sayhi [print "!" print print "hi "];
sayhi "Bob"

or

\sayhi [name: str -> print "!" print name print "hi "];
sayhi "Bob"

which will both print out "hi Bob!"

All variables are (for now) immutable, but can be shadowed as much as you like. That includes all of the built-in functions; so far, the only keywords or reserved symbols are: the various brackets, the argument arrow "->", the colon for introducing type assertions, the backslash and lambda characters, and the semicolon and pipe characters. Everything else is free game for redefinition by shadowing. Use this power wisely.

As of now, you sadly can't actually run any Parles programs; there's an outline of a virtual machine for it, but no code-generator. The parser and type-checker (a critical component of the compiler, as you can't figure out what hooks up with what unless you can figure out the in- and out-arity of every word) are working though, so you can at least parse and type-check Parles programs with the Python scripts in the GitHub repo.

Thursday, September 12, 2013

Lisp Without Parentheses

Lisp is sometimes jokingly said to stand for "Lots of Irritating Superfluous Parentheses". Despite the joke, it turns out most of the parentheses in an average Lisp program are, in fact, superfluous. Let's examine the program for finding square roots by Newton's Method from SICP:

(define (sqrt-iter guess x)
  (if (good-enough? guess x)
      guess
      (sqrt-iter (improve guess x)
                 x)))

(define (improve guess x)
   (average guess (/ x guess)))
   
(define (average x y)
  (/ (+ x y) 2))

(define (good-enough? guess x)
  (< (abs (- (square guess) x)) 0.001))

(define (sqrt x)
  (sqrt-iter 1.0 x))

This isn't as bad as some Lisp programs, but even so, three closing-parens in a row starts to get hard to keep track of without editor support. Admittedly, most programmers have editor support (if your text editor can't match parens for you in 2013, you need to get a new text editor); but if we don't need them we might as well get rid of them. Parentheses do three things in this program: they indicate where function/macro applications start, they indicate where the argument lists for those applications end, and they delimit lists. If we know the signature of each function or macro, then we don't need parentheses for that second purpose- telling us where argument lists end. We can restrict them to simply marking applications (as opposed to functions that are being used as values), and delimiting lists (in this case, the only lists we need to worry about are the argument lists in function definitions). That gives us a transformed program that looks like this:

(define) '(sqrt-iter guess x)
  (if) (good-enough?) guess x
      guess
      (sqrt-iter) (improve) guess x
                 x

(define) '(improve guess x)
   (average) guess (/) x guess
   
(define) '(average x y)
  (/) (+) x y 2

(define) '(good-enough? guess x)
  (<) (abs) (-) (square) guess x 0.001

(define) '(sqrt x)
  (sqrt-iter) 1.0 x

For clarity, I've explicitly quoted the lists to distinguish an application from a list of a single element. This is starting to look a lot like Polish Notation with all of the operators parenthesized. You might think "Polish Notation arithmetic doesn't need parentheses because arithmetic operators use distinct symbols from operands"; true, but that turns out not to matter. If we think of values as functions that take no arguments and return themselves, and variables as functions that take no arguments and apply their values, then we can assume that everything is always applied by default, and then we don't need those parentheses (or anything else) to mark applications, giving us this:

define '(sqrt-iter guess x)
  if good-enough? guess x
      guess
      sqrt-iter improve guess x
                x

define '(improve guess x)
   average guess / x guess
   
define '(average x y)
  / + x y 2

define '(good-enough? guess x)
  < abs - square guess x 0.001

define '(sqrt x)
  sqrt-iter 1.0 x

This should look suspiciously like FORTH. If it doesn't, try capitalizing everything and reading it backwards (in Reverse Polish Notation, like old HP calculators use). I have a sneaking suspicion that FORTH variants have remained fairly niche languages largely because they put their arguments and function names backwards from everybody else (well, except for Assembly languages, but nobody really wants to program in assembly either). (Incidentally, this is very similar to how actual Lisps are compiled into concatenative languages like VM bytecode.)

As long as you have just enough static typing to keep track of the arity of each function, all of the original parenthesization can be reconstructed automatically (which means that we can still implement things like lazy evaluation or automatic parallelization under the hood).
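The reconstruction is a one-pass recursive descent over the flat prefix stream, driven by an arity table. A minimal sketch (the arity table below is hand-written for the SICP example; values and variables count as arity 0, and a real implementation would need full row types rather than bare arities):

```python
ARITY = {'define': 2, 'if': 3, 'sqrt-iter': 2, 'improve': 2,
         'average': 2, 'good-enough?': 2, 'sqrt': 1, 'abs': 1,
         'square': 1, '+': 2, '-': 2, '*': 2, '/': 2, '<': 2}

def parenthesize(tokens):
    """Rebuild the full parenthesization of a flat prefix token
    stream: each word greedily consumes as many following
    expressions as its arity demands."""
    def parse(i):
        tok = tokens[i]
        n = ARITY.get(tok, 0)
        args, i = [], i + 1
        for _ in range(n):
            arg, i = parse(i)
            args.append(arg)
        return ([tok] + args if n else tok), i
    tree, _ = parse(0)
    return tree
```

For instance, the body of average, / + x y 2, comes back as (/ (+ x y) 2).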

Now, we did lose something by eliminating parentheses as explicit delimiters of argument lists: you can't handle variadic functions anymore (well, unless you do crazy Haskell stuff). But, for the most part, you don't really need variadic functions. Lisps tend to map rest arguments to lists anyway, so just pass a list. If you want, say, variadic add, just write a function to fold binary add over a list. Something like...

define '(\+ l) fold + 0 l

...er, oops. Type error. That code will be automatically parenthesized as follows:

(define (\+ l) (fold (+ 0 l)))

+ needs two numbers, not a number and a list, and fold doesn't take a number as its first argument (plus, it needs a couple more). What we need is some way to turn off the application of + and get its value instead. We only need this for passing first-class function values as arguments, and that happens far less often than function applications do in most programs, so we can bring back parentheses for that (the exact opposite of their original purpose) and still have a net gain in program concision (and reduced nesting):

define '(\+ l) fold (+) 0 l

We're using parentheses here rather than some other symbol (like, say, a prefixed # or something) to indicate non-application, because we can actually put more stuff in there- a whole list of stuff, in fact. Maybe something like

define '(list-double-plus l) map (+ 1 * 2) l

What's really going on here is not that parentheses are suppressing application- rather, parentheses delimit lists (just like they do when quoted), but now, instead of representing function applications, unquoted lists represent functions that return functions composed of the contents of the list. (+) isn't a binary operator that hasn't been applied, rather it's a new function that returns a function composed solely of that binary operator. It gets applied as normal (to zero arguments), but its return value is what fold needed as its first argument. (+ 1 * 2) returns a function that's the composition of +, 1, *, and 2, with automatic currying. With these semantics, we don't actually need a separate "lambda" construct in the language anymore, since lists produce anonymous functions- as long as you're willing to program in point-free style, anyway.
If we just imagine the whole program as wrapped in one big set of top-level, outside parentheses, we can see that all programs are still lists- just less deeply nested ones. The language is still homoiconic, so it's still pretty easy to do metaprogramming, and it still makes sense to call this a dialect of Lisp.
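To make the composition semantics concrete, here is a hand-translation of the example into Python (purely illustrative; the names are made up for this sketch):

```python
# (+ 1 * 2): reading right to left, the one argument the body still
# needs becomes the parameter of the resulting function; 2 and x
# feed *, then 1 and the product feed +.
def plus_one_double(x):
    return 1 + 2 * x

# 'map (+ 1 * 2) l' then corresponds to:
def list_double_plus(l):
    return [plus_one_double(x) for x in l]
```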

Addendum on String Formatting:

The canonical example of a variadic function is a string formatter (which is what that Haskell variadic craziness was developed for, too). Since it's horribly stupid to get format strings out of variables, though, and you should always have them as literals in the program text, we can solve that problem by building string formatting/interpolation into the language: have the language implementation turn literal strings into functions that take a fixed number of arguments (for the interpolation slots) and return string values.

Thursday, August 8, 2013

My Journey Through Flow Arts

Time for something a little bit different. Flowtoys has asked people to put up their "flow stories" on Facebook, and, as I really like Flowtoys, I thought I'd do so, and did. And then I thought "I should edit this a bit and put it on my blog, because I have one, and I ought to use it more." So, here it is.

Prior to high school, I had very little interest in any physical activity; I pretty much figured that as long as my body was working well enough to keep my brain alive, that was good enough. My parents, on the other hand, were determined that I ought to still get some sort of regular exercise even after I hit junior year and no longer had to take mandatory PE. No, they did not introduce me to poi- they decided I should join the internationally-acclaimed high school rowing crew. Predictably, I absolutely hated it. But, in the meantime, I had some friends who did glowstringing, and I thought that was the coolest thing ever and determined to figure it out. The basic butterfly in particular totally blew my mind. I was totally uncoordinated, and I accidentally re-invented some basic meteor spinning, having never heard of it before, just as an excuse to avoid having to use my cruddy left hand (ironically, I still suck at meteor spinning). From glowstringing, via YouTube tutorials, I got into poi, and made my own set by tying shoelaces to tennis balls (flat shoelaces are still my favorite tethers). After a couple of months, I quit rowing, and I'm pretty sure my interest in this weird poi thing that at least got me out of a chair was a significant factor in my parents letting me do it.

Then I graduated and went off to college. Prior to this point, I was certain that I hated dancing- I went to exactly two dances in high school (Junior and Senior Prom) and spun glowsticks. College gave me a lot more opportunity for dances (residence hall dances, student association dances, etc.), and, figuring that I ought to find some way to engage in socialization, I made myself go to them and got through it with glowsticks. Eventually, it clicked that poi is dance- just using your arms more than your feet. Prior to that moment, I had been almost entirely focused on technical spinning. Afterwards, I found myself opened up to exploring a whole new world of body movement, and started playing with freehand light tracing to try to force myself to develop some "danciness" independent of the motion of the poi.

Looking for good tutorials on YouTube had led me to Nick Woolsey's videos and Playpoi. During my second semester at college, Playpoi started putting out videos on double staff, and I decided I had to figure out staffs as well. So, the next summer, I got myself some dowel rods from Home Depot and started playing. From the start, I was fascinated by the connections between staff and poi patterns, analysing which patterns would transfer between the two forms, which couldn't, and why. I still frequently get a lot of my new performance ideas by learning a new trick with poi and then thinking "what's the closest I can come to replicating this with a staff?" or vice-versa.

Then I took two years off from life to serve on a mission for the LDS Church in Ukraine. In what little free time a missionary has, I practiced poi with shoelaces tied to tennis balls, and staffs with broom handles, spare PVC pipe, or whatever was at hand. During that time, my mom shipped me my first set of two flowlights- so much better than chemical glowsticks! Having no access to videos or other spinners for two years, I credit that period with helping me develop my own sense of style as a simple unavoidable necessity.

When I got back and started college again in 2010, I was totally hooked on flowlights. I got myself oggpoi, crystal cases (still using shoelace tethers!), and collapsible staffs and went to town out on the lawns in my apartment complex nearly every night. When I met new people around campus or new people moved in to my complex between semesters, it became not uncommon to be greeted with "hey! you're that guy with the lights!" Based on my glowsticking roots, I took to using double-ended crystal poi, mixing in techniques from handle spinning and meteor hammer; I've seen lots of people do tosses with poi, and even use weighted handles, but nobody outside of the glowstringing community plays with two equally weighted ends. One more thing to add to my personal style.

I heard rumors that there was another poi spinner in the campus juggling club, and I should go hang out with them, and I thought maybe that would give me the chance to finally try out some partner poi stuff- sadly, no luck. Poi is unloved in the juggling club. But, I did get a ready-made group of friends who like to do all kinds of other prop-manipulation arts, and started doing performances for libraries, school groups, the homecoming parade.... Then in the spring of 2011, I found myself sitting next to a chocolate fountain at a dance and simultaneously next to a girl who I was certain was merely waiting for her date to get back from the buffet table. Turns out, she was single, also drawn to chocolate fountains, and wondered if I could show her how to spin lights, 'cause those looked just so awesome! By this point, lots of people had asked to borrow my lights for a few minutes, or wondered if I could teach them, or told me how they wished they could do that, but no one ever followed through for more than 5 minutes. This particular girl, on the other hand, to my shock and amazement, asked me to dance, got my phone number, and set up regular weekly poi spinning lessons. Needless to say, we became friends, and I finally got my partner-poi-partner.

When summer break came, we talked on the phone just about every other day, but somehow I did not quite get the hint. During the next fall semester, though, she finally managed to convince me that I really wanted to marry her. Seven excruciatingly long months later, we got married just in time to shoot our first video for Circles of Light. In the meantime, I had tried introducing her to staffs, but it just didn't stick like poi spinning did. That is, until we both realized that, while I had always been into double staff (because two sticks are more fun than one), she loved single staff. Thus, I had to learn single staff. And of course, we had to figure out how to do it as partners. Sadly, my new mother-in-law was of the opinion that this whole poi spinning thing was extremely weird, and what is this crazy guy getting my baby girl into....
Until November, when we found out that, no, we did not win, but we did get on the 2012 COL DVD! External validation from an international competition went a long way towards improving my mother-in-law's opinions.

That fall, some new people had joined the juggling club- one wanting to learn contact juggling, and one wanting to sell some contact juggling balls. Thus we gained our third flow art. I'm still not very good at it yet (being distracted by the events of the following paragraph), but contact juggling adds a whole new dimension to the experience of flow arts- it's flow, but not as we know it, Jim.

Then it was time to start planning for the next year's video. I initially wanted to merge partner poi with passing patterns from toss juggling- but our toss-juggling-fu was not that good. Some day I'm sure it will make for some awesome performances, but after realizing we weren't going to make that work, we fell back to the idea of doing partner staff. Having no YouTube videos to look at for inspiration, we invented our own patterns for two staffs and two people- a mix of single and double staff techniques and interconnected patterns that are impossible with only two hands. In a fit of choreographic panic, we both somehow managed to go from sort of casually playing around to actually being competent with a single staff just in time to get a video in the mail.

A few weeks later (and just a few weeks ago, by now), we went to a big week-long family reunion in Island Park, Idaho. One of my brothers, who has three kids, lives just a couple miles away from me, and my wife & I have often ended up babysitting for him. Whenever we go, we've taken to bringing our big bag of toys along. As a result, one of my nephews has taken an interest in contact juggling, both think staff spinning is pretty cool (one of them is in karate, and does a totally different kind of staff work), and my two-year-old niece has oodles of fun throwing juggling balls. Knowing there would be lots of kids, we of course took the big bag of toys to the family reunion as well. There we discovered that one of my other nieces seems to have a natural talent for contact juggling. One night, we did our video performance with glow staffs for the whole family, and afterwards took apart all of our equipment and let all of our (numerous!) nieces and nephews run around the yard with flowlights and oggs. They were a pretty big hit! In addition to the personal pleasure that comes from playing with flowtoys, it's enormously gratifying to see little kids get so much joy from playing with some bright glowing lights and pretending they're lightsabers or fairy dust or whatever.

I have high hopes that we are inspiring another generation of flow artists.

Now, that's the end of my flow art journey (until I go and learn how to do more cool stuff), but I ought to add another little anecdote. Flowtoys has a really fantastic warranty policy, so when one of our spectrum lights started acting funky, I naturally wanted to get it replaced. However, it is nearly time for my wife's birthday, and I also wanted to buy her some new toys. It would be nice if I could get the warranty items and the new order all packaged together so I could save a few bucks on shipping... but, oh darn, I waited too long to send stuff in, and there's no way the warranty items are gonna ship soon enough for everything to get to me on time....

Enter Flowtoys' exceptional customer service. Not only did they decide to process and ship out my order in 1 day rather than the expected "up to three" to make sure my wife got her birthday presents on time, they went out of their way to bundle in my warranty items even though my returns hadn't arrived yet. Not only do they make the best equipment you can get, they're really nice people.

Also, Home of Poi: they're based in New Zealand (the actual home of poi), so shipping can be a bit steep, but if Flowtoys doesn't sell it, HOP probably does. I haven't had any personal interaction with HOP employees to evaluate them by, but I do really like their stuff.

Saturday, May 11, 2013

Pointful Concatenative Programming

Well, I managed to spend the last year or so working on parallelism and vau expressions without actually writing any blog posts about it. Next week I'm giving a conference presentation, and maybe after that I'll come back here and review the last year's work. But for now, I am on to something a little different.

Reading Why Concatenative Programming Matters has inspired me to experiment with concatenative languages. Concatenative languages are often called "stack-based" languages, because they are incredibly easy to implement with stacks, but that article enlightened me to the fact that stacks are not actually necessary- other evaluation options are possible. In fact, type analysis can be used to identify substrings of concatenative programs that take no inputs; these correspond to expressions in argument positions in functional languages, and can all be executed in parallel. (In retrospect, this should have been obvious, since functional languages compile to concatenative ones- this is just the reverse mapping.)
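To make that concrete, here's a little sketch of the kind of stack-effect analysis involved (the arity table is made up for illustration): simulate each word's inputs and outputs, and any substring whose simulated stack never dips below its own starting depth needs no inputs from outside, so it corresponds to an argument expression that could run in parallel with its siblings.

```python
# Hypothetical arities for illustration: each word maps to
# (values consumed, values produced).
ARITY = {'+': (2, 1), '-': (2, 1), '*': (2, 1)}

def needed_inputs(tokens):
    """How many values a token substring needs from the
    surrounding stack before it can run on its own."""
    depth, needed = 0, 0
    for tok in tokens:
        ins, outs = ARITY.get(tok, (0, 1))  # unknown words: literals
        depth -= ins
        if depth < 0:           # reached below our starting depth
            needed += -depth
            depth = 0
        depth += outs
    return needed

# "3 4 +" and "5 6 *" need no inputs, so within "3 4 + 5 6 * -"
# they are independent argument expressions for "-":
print(needed_inputs(['3', '4', '+']))  # 0
print(needed_inputs(['5', '6', '*']))  # 0
print(needed_inputs(['-']))            # 2
```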

One of the reasons concatenative programming is cool is that it's point-free; i.e., you never have to specify variable names, which makes things nice and concise. Unfortunately, that can backfire and end up making things horribly unreadable, which is why concatenative languages tend to suck at expressing basic math. Sometimes, you just want to use a variable to put the same value in multiple places; even in RPN syntax, "y x * y -", using variables to put values in the right place on the virtual stack, is way easier to read and understand than "DUP ROT * SWAP -", using stack-effect operators. Not having variables also means that concatenative languages tend to lack the ability to do variable capture, which means no closures.
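For the skeptical, here's a throwaway Python stack machine (with the usual Forth-style stack-effect words, sketched from memory) verifying that "DUP ROT * SWAP -" really does compute y*x - y when x and y start on the stack:

```python
def run(words, stack):
    """Execute a list of stack-effect words against a stack
    whose top is the last element of the list."""
    for w in words:
        if w == 'DUP':
            stack.append(stack[-1])
        elif w == 'SWAP':
            stack[-1], stack[-2] = stack[-2], stack[-1]
        elif w == 'ROT':  # bring the third item to the top
            stack.append(stack.pop(-3))
        elif w == '*':
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif w == '-':
            b, a = stack.pop(), stack.pop()
            stack.append(a - b)
    return stack

x, y = 7, 3
print(run('DUP ROT * SWAP -'.split(), [x, y]))  # [18], i.e. [y*x - y]
```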

I have thus been pondering a concatenative language that adds variable-binding lambda expressions. Something like this:

{ a b | b a }

would take two values off the top of the virtual stack and bind them to variables "a" and "b" within the scope of the expression. Variables are just words that take no arguments and return their bound values. The effect of this expression would then be to swap the top two elements on the virtual stack. Lots of otherwise basic, built-in words can be created with these kinds of expressions. E.g., DUP looks like this:

{ x | x x }

These sorts of lambda expressions would be interpreted like words and applied immediately, causing them to function rather like let-bindings in functional languages. Combining lambdas with quotation, however, allows us to create closures:

{ x | [{ y | x y + }] }

This expression binds one argument and returns a closure that takes another argument and adds it to the closed-over value.
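Here's a rough sketch of how these lambda expressions might evaluate. The representation is entirely made up for illustration: a lambda is a ('lam', names, body) tuple, integers are literals, and any other string is either a built-in word or a variable looked up in the environment.

```python
def evaluate(prog, stack, env=None):
    env = env or {}
    for tok in prog:
        if isinstance(tok, int):
            stack.append(tok)
        elif tok == '+':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif isinstance(tok, tuple) and tok[0] == 'lam':
            _, names, body = tok
            # pop one value per name; the last name binds the old top
            vals = [stack.pop() for _ in names]
            bound = dict(zip(names, reversed(vals)))
            evaluate(body, stack, {**env, **bound})
        else:
            stack.append(env[tok])  # a variable: push its bound value
    return stack

swap = ('lam', ['a', 'b'], ['b', 'a'])  # { a b | b a }
dup = ('lam', ['x'], ['x', 'x'])        # { x | x x }
print(evaluate([1, 2, swap], []))   # [2, 1]
print(evaluate([5, dup, '+'], []))  # [10]
```

Since the lambda is applied immediately, it acts like a let-binding; closures would additionally require quotations that capture the environment, which this sketch leaves out.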

Well, that's all for now. Maybe, if I'm lucky, I'll have time this summer to start implementing this thing.

Saturday, May 12, 2012

Even More Concurrency; Or, What You Can't Find By Searching Google Scholar

(Continuing the series of Schrodinger's Equation of Software, First Class Everything: Loops, Tail-Calls, and Continuations, and Automatic Concurrency)

      Our last version of cps_map_eval gave us concurrent evaluation of arguments, and without even needing any locks or other kinds of synchronization. That's pretty impressive... except that it only works as long as a) every argument returns at least once and b) no argument returns more than once before all have returned at least once.
      Those may not seem like very strict restrictions, except that the whole point of continuations is to break them. Consider this program:

((vau () % (+ (return 2) 2)))

      With the last version of cps_map_eval, it will cause an error, because when the first argument thread has terminated, it will not have filled in its slot- it jumped to a different continuation. The parent thread will try to evaluate (+ None 2) and throw an exception, when what we really wanted was for it to not run at all! A similar problem can be seen in this program:

(+  ((vau () % (par (return 1) (return 2))))
    (do-long-computation))

      In this case, no arguments fail to return. But the first argument activates its continuation twice, and from different threads. If both of those continuation calls execute before (do-long-computation) completes, the first one will fill in the argument slot and disappear as intended, but the second will go ahead and execute the function body with an incomplete argument list, because we have assumed erroneously that a continuation can only be called a second time after the function body has begun executing. This is false as soon as argument expressions can themselves contain multiple threads, and even more false if you allow mutation so that external threads might get hold of the continuation.
      I searched far and wide for existing research on combining continuations and concurrency. But it seems that on this particular topic, nothing exists. Fortress does not have continuations, and no experimental LISP dialect has automatic concurrency! There is a lot of interesting work on what continuations mean in concurrent systems, and different ways to deal with them when you have explicit language constructs for parallelism like spawn or pcall, but none of it is easily applicable to a situation where all of the parallelism is implicit. There are some very interesting ideas out there, like this work on subcontinuations and this on inter-thread communication with ports, which I may come back to in later posts, but for now it looks like I, too, can be on the cutting edge of functional programming research.
      What we want is not for all argument threads to terminate before executing the function body, but for all argument slots to be filled, whether it's by the same threads that were started for that purpose or not. In order to do that, we'll have to introduce a counter for the number of argument positions that's decremented whenever an argument slot is filled; when the counter reaches zero, then we can run the function body. A version of cps_map_eval that does that looks like this:

def cps_map_eval(k,v,*x):
    """
    Evaluates the elements of an argument list,
    creating continuations that will assign values
    to the correct indices in the evaluated list.
    """
    arglen = len(x)
    if arglen == 0: return k([])

    counter = [arglen]
    finished = Event()
    flock = Lock()
    argv = [None]*arglen

    def assign_val(i,val):
        argv[i] = val
        flock.acquire()
        counter[0] -= 1
        flock.release()
        if counter[0] == 0:
            finished.set()

    def reassign(i,val):
        finished.wait()
        new_argv = argv[:]
        new_argv[i] = val
        return k(new_argv)

    def arg_thread(i,ax):
        eval(ax,v,ArgK( lambda val: assign_val(i,val),
                        lambda val: reassign(i,val)))

    threads = [Thread(target=arg_thread,args=(i,ax))
                for i, ax in enumerate(x[:-1])]

    for t in threads: t.start()

    def arg_k(val):
        argv[-1] = val
        flock.acquire()
        counter[0] -= 1
        flock.release()
        if counter[0] == 0:
            finished.set()
        else:
            finished.wait()
        for t in threads: t.join()
        return k(argv)

    return Tail(x[-1],v,
                ArgK(arg_k,lambda val: reassign(-1,val)))

      Again, we're using a 1-element array because Python won't let us mutate variables in closure scopes, but will let us alter the contents of collections. Initial continuations fill in values and decrement the counter; non-initial continuations and the parent thread that will execute the function body wait until all of the slots are filled (every initial continuation has been called). When all of the slots are filled, any waiting threads are unblocked and the function body can be executed as many times as continuations containing it have been activated. This almost works. Except for one rather important problem. This program will never terminate:

((vau () % (+ (return 1) 1)))

      The argument thread will send 1 to the outer continuation, never giving anything to (+ [] 1), and the parent thread will hang forever waiting for that slot to be filled. So, we can't allow the parent thread to block. To fix this, we don't privilege the parent thread. It becomes just another argument evaluator, whose sole speciality is the ability to call join() to clean up all the other threads eventually. Instead, every initial continuation checks to see whether it filled in the last argument slot, and if so, it evaluates the function body. This means that the thread that began a computation may not be the same thread that completes it- the continuation can get picked up by a different thread, and the parent may end up swapped out for one of its children. This version of cps_map_eval takes care of that:

def cps_map_eval(k,v,*x):
    """
    Evaluates the elements of an argument list,
    creating continuations that will assign values
    to the correct indices in the evaluated list.
    """
    arglen = len(x)
    if arglen == 0: return k([])

    counter = [arglen]
    finished = Event()
    flock = Lock()
    argv = [None]*arglen

    def assign_val(i,val):
        argv[i] = val
        flock.acquire()
        counter[0] -= 1
        count = counter[0]  # read under the lock, so that exactly
        flock.release()     # one thread can observe the count hit 0
        if count == 0:
            finished.set()
            return k(argv)

    def reassign(i,val):
        finished.wait()
        new_argv = argv[:]
        new_argv[i] = val
        return k(new_argv)

    def arg_thread(i,ax):
        eval(ax,v,ArgK( lambda val: assign_val(i,val),
                        lambda val: reassign(i,val)))

    threads = [Thread(target=arg_thread,args=(i,ax))
                 for i, ax in enumerate(x[:-1])]

    for t in threads: t.start()

    def arg_k(val):
        r = assign_val(-1,val)
        for t in threads: t.join()
        return r

    return Tail(x[-1],v,
            ArgK(arg_k, lambda val: reassign(-1,val)))

      We're getting closer. All of the previous examples will now behave properly. But this one won't:

((vau () % (+ (return 1)
              ((vau () % (par (return 1)
                              (return 2)))))))

      The first argument slot is never filled. The first thread to return to the second argument slot fills in its value and terminates. And the second thread to return to the second argument slot then waits for the first one to be filled. And it will wait forever, that one blocked thread preventing the program from terminating.
      We could consider having some kind of thread garbage collector that periodically checks to see if all existing threads are blocked and if so terminates the program, or cleans up threads that can never be unblocked because the necessary continuations aren't held by anyone anymore. But what we really want is to simply not block in the first place. The choice of the name Event for the notification objects we've been using from Python's threading library so far is apt: we'd like something like an event loop that can store up computations and then execute them when an event (filling all the argument slots) occurs, or not if that event never occurs.
      We can achieve the same effect without actually building an event loop, though. Everything can still live nicely inside of cps_map_eval, and it looks like this:

def cps_map_eval(k,v,*x):
    """
    Evaluates the elements of an argument list,
    creating continuations that will assign values
    to the correct indices in the evaluated list.
    """
    arglen = len(x)
    if arglen == 0: return k([])

    counter = [arglen]
    flock = Lock()
    argv = [None]*arglen
    blocked = set()

    def assign_val(i,val):
        argv[i] = val
        flock.acquire()
        counter[0] -= 1
        if counter[0] == 0:
            flock.release()
            for t in blocked: t.start()
            r = k(argv)
            for t in blocked: t.join()
            blocked.clear()
            return r
        flock.release()

    def reassign(i,val):
        new_argv = argv[:]
        new_argv[i] = val
        return k(new_argv)

    def reactivate(i,val):
        flock.acquire()
        if counter[0] == 0:
            flock.release()
            return reassign(i,val)
        blocked.add(Thread(target=reassign,args=(i,val)))
        flock.release()

    def arg_thread(i,ax):
        eval(ax,v,ArgK( lambda val: assign_val(i,val),
                        lambda val: reactivate(i,val)))

    threads = [Thread(target=arg_thread,args=(i,ax))
                 for i, ax in enumerate(x[:-1])]

    for t in threads: t.start()

    def arg_k(val):
        r = assign_val(-1,val)
        for t in threads: t.join()
        return r

    return Tail(x[-1],v,
            ArgK(arg_k, lambda val: reactivate(-1,val)))

      If a continuation call cannot immediately continue, it's dumped into a blocked set and the thread terminates. If the argument slots are ever all filled, that set is checked, and everything in it is re-activated. If they aren't, the set just gets garbage collected when the necessary continuations are no longer held by anybody else. Note that thread join()s only occur after a parent thread has finished all possible useful work, and cannot do anything but return a final value to the top-level eval loop; thus, they do not inhibit concurrency. We could even get along without them, and let Python deal with the cleanup, but if we were using raw POSIX pthreads that wouldn't be an option, so we might as well clean up our garbage. We did end up having to use one lock, to ensure atomic access to the counter, but that's pretty darn small; most of the time, it will never be contended. Note that we need to keep the lock around additions to the blocked set, just in case, but we don't need the lock when iterating over it; that's because we only get to that point if the counter has just been set to zero, and if the counter is zero, we know that no other thread will ever access the blocked set again.
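      To see the core trick in isolation, here's a distilled, standalone sketch (the SlotGate name and API are my own, not part of the interpreter): fillers decrement the counter under the lock, whoever fills the last slot runs the continuation and releases anything that was parked, and late arrivals park themselves instead of blocking.

```python
from threading import Thread, Lock

class SlotGate:
    def __init__(self, n, on_full):
        self.counter = n          # unfilled slots
        self.lock = Lock()
        self.slots = [None] * n
        self.blocked = set()      # parked callbacks, as unstarted Threads
        self.on_full = on_full

    def fill(self, i, val):
        """An initial continuation: fill slot i. Whoever fills the
        last slot runs on_full and releases the parked callbacks."""
        self.slots[i] = val
        with self.lock:
            self.counter -= 1
            done = (self.counter == 0)
        if done:
            for t in self.blocked: t.start()
            self.on_full(self.slots)
            for t in self.blocked: t.join()
            self.blocked.clear()

    def late(self, fn):
        """A re-activated continuation: run now if all slots are
        full, otherwise park without blocking this thread."""
        with self.lock:
            if self.counter > 0:
                self.blocked.add(Thread(target=fn))
                return
        fn()

results = []
gate = SlotGate(2, lambda slots: results.append(list(slots)))
gate.late(lambda: results.append('late'))  # parked: slots not yet full
gate.fill(0, 'a')
gate.fill(1, 'b')  # runs on_full, then the parked callback
print(results)     # contains ['a', 'b'] and 'late', in either order
```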
      So there it is. Automatic concurrency that plays nice with continuations, without any need to modify the core vau evaluator.

As usual, working code can be found at https://github.com/gliese1337/schrodinger-lisp/.

Thursday, May 3, 2012

Automatic Concurrency

(Continuing the series of Schrodinger's Equation of Software and First Class Everything: Loops, Tail-Calls, and Continuations)

      So far, we've had a language with de jure concurrent evaluation of lambda arguments, but de facto left-to-right evaluation order. A modification to cps_map_eval can make our language genuinely concurrent.
      First, we'll drastically simplify the continuations of each argument evaluation, so they don't explicitly chain together anymore. Instead, we'll just evaluate them all in a loop. If the loop already finished, and all of the arguments have been previously evaluated, the continuation is the same as before: reassign the current argument and re-call the continuation of the whole argument list; otherwise, the continuation is implicitly the next iteration of the loop.

def cps_map_eval(k,v,*x):
    """
    Evaluates the elements of an argument list,
    creating continuations that will assign values
    to the correct indices in the evaluated list.
    """
    arglen = len(x)
    if arglen == 0: return k([])

    argv = [None]*arglen
    done = False
    
    def arg_thread(i,ax):
        def assign_val(val):
            if done:
                new_argv = argv[:]
                new_argv[i] = val
                return k(new_argv)
            else:
                argv[i] = val
        eval(ax,v,assign_val)

    for i, ax in enumerate(x):
        arg_thread(i,ax)
    
    done = True
    return k(argv)

      This is much simpler than our previous code, so why didn't we use it? Well, evaluating arguments in an explicit loop means that each call to eval is no longer a tail call. That's not a huge deal, because it will only grow the stack with recursive calls to eval for as many times as you type in nested function calls in argument positions in your source code; but we don't need to grow the stack at all, and doing so gains us nothing by itself since this still results in implicit left-to-right sequential evaluation. However, notice that the extra function used to create a closure for the value of the index for each argument evaluation is named arg_thread- with the code in this form, we can arrange to execute each argument evaluation in its own concurrent thread.

def cps_map_eval(k,v,*x):
    """
    Evaluates the elements of an argument list,
    creating continuations that will assign values
    to the correct indices in the evaluated list.
    """
    arglen = len(x)
    if arglen == 0: return k([])

    argv = [None]*arglen
    done = False
    
    def arg_thread(i,ax):
        def assign_val(val):
            if done:
                new_argv = argv[:]
                new_argv[i] = val
                return k(new_argv)
            else:
                argv[i] = val
        eval(ax,v,assign_val)

    threads = [Thread(target=arg_thread,args=(i,ax))
                for i, ax in enumerate(x)]

    for t in threads: t.start()
    for t in threads: t.join()
    
    done = True
    return k(argv)

      This isn't quite true parallelism because of Python's Global Interpreter Lock. But it is true concurrency, and with a better underlying implementation of threads would result in automatic parallelism. The existing thread starts up one new thread for each argument to be evaluated, waits for all of them to terminate, and then returns to the main eval loop. Each new thread makes one call to eval to start up its own evaluation loop. However, this implementation creates one more thread than necessary: the main thread sits idle while all of the arguments are evaluating. This is especially heinous if you only have 1 argument; why start a new thread just to do one sequential evaluation? That can be fixed by changing just a couple of lines so that the last argument in the list is evaluated in the current thread:

    threads = [Thread(target=arg_thread,args=(i,ax))
                for i, ax in enumerate(x[:-1])]

    for t in threads: t.start()
    arg_thread(arglen-1,x[-1]) #make use of the current thread
    for t in threads: t.join()

      But now we've re-introduced a recursive call to eval (inside of arg_thread)! Let's go ahead and CPS that away:

def cps_map_eval(k,v,*x):
    """
    Evaluates the elements of an argument list,
    creating continuations that will assign values
    to the correct indices in the evaluated list.
    """
    arglen = len(x)
    if arglen == 0: return k([])

    argv = [None]*arglen
    done = [False]
    
    def arg_thread(i,ax):
        def assign_val(val):
            if done[0]:
                new_argv = argv[:]
                new_argv[i] = val
                return k(new_argv)
            else:
                argv[i] = val
        eval(ax,v,assign_val)

    threads = [Thread(target=arg_thread,args=(i,ax))
                for i, ax in enumerate(x[:-1])]

    for t in threads: t.start()
    
    def arg_k(val):
        if done[0]:
            new_argv = argv[:]
            new_argv[-1] = val
            return k(new_argv)
        else:
            argv[-1] = val
            for t in threads: t.join()
            done[0] = True
            return k(argv)
    
    return Tail(x[-1],v,arg_k)
    
      This has some ugly code duplication. We can get rid of that and eliminate all of the conditional branches at the same time. The new version of cps_map_eval looks like this:

def cps_map_eval(k,v,*x):
    """
    Evaluates the elements of an argument list,
    creating continuations that will assign values
    to the correct indices in the evaluated list.
    """
    arglen = len(x)
    if arglen == 0: return k([])

    argv = [None]*arglen
    
    def assign_val(i,val):
        argv[i] = val

    def reassign(i,val):
        new_argv = argv[:]
        new_argv[i] = val
        return k(new_argv)

    def arg_thread(i,ax):
        eval(ax,v,ArgK( lambda val: assign_val(i,val),
                        lambda val: reassign(i,val)))

    threads = [Thread(target=arg_thread,args=(i,ax))
                for i, ax in enumerate(x[:-1])]

    for t in threads: t.start()
    
    def arg_k(val):
        argv[-1] = val
        for t in threads: t.join()
        return k(argv)

    return Tail(x[-1],v,
                ArgK(arg_k,lambda val: reassign(-1,val)))
    
      ArgK is a callable wrapper class that takes a function to run the first time it's called and a function to run every other time it's called. This eliminates a lot of nesting and allows reassign to be shared everywhere. The definition of ArgK looks like this:

### self-modifying continuations for argument evaluation

class ArgK():
    def __init__(self,first,rest):
        def k(val):
            self.k = rest
            return first(val)
        self.k = k

    def __call__(self,val):
        return self.k(val)
        
      And it contains no conditional branches.
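      To make the self-modifying behavior concrete, here's ArgK exercised in isolation (redefined so the snippet stands alone): the first call runs the first function and swaps in the rest function for every call after that.

```python
class ArgK():
    def __init__(self, first, rest):
        def k(val):
            self.k = rest  # replace ourselves before running first
            return first(val)
        self.k = k

    def __call__(self, val):
        return self.k(val)

k = ArgK(lambda v: ('first', v), lambda v: ('again', v))
print(k(1))  # ('first', 1)
print(k(2))  # ('again', 2)
print(k(3))  # ('again', 3)
```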
      Now that we have a mechanism for concurrent evaluation of arguments, we can use that to build a concurrent analog to the sequential "begin" construction. The simplest way to do it is to simply pass concurrent expressions as arguments to "list"; a simple vau expression to do that and return the result of the first expression looks like this:

(define par (vau (a) % (car (eval % (cons list a)))))

      It's rather inefficient to create a list if you're going to throw most of it away, though. Just as we could make the built-in sequence function much simpler and more efficient than the sequential version of cps_map_eval, we can make a much better built-in concurrency construction. We'll have it parallel the semantics of "begin" by returning the value of the last listed expression.

def par(k,v,*x):
    """
    Evaluates arguments in parallel, returning the
    last listed value like sequence does.
    Ensures that all parallel threads terminate
    before continuing
    """
    if len(x) == 0: return k(None)
    final = [None]

    def call_k(val):
        return k(final[0])

    def par_thread(ax):
        eval(ax,v,ArgK(lambda val: None,call_k))
    
    threads = [Thread(target=par_thread,args=(ax,))
                for ax in x[:-1]]
    for t in threads: t.start()

    def par_k(val):
        for t in threads: t.join()
        final[0] = val
        return k(val)
    return Tail(x[-1],v,ArgK(par_k,call_k))
    
      The par function starts up a new thread with its own eval loop for every argument except the last, throws away the results of all those other threads, and saves the results of evaluating its last argument. And what if you don't want to throw away the results of all those other threads? Just call "list", and it will evaluate all of its arguments concurrently! You can test it with this sample program which evaluates a bunch of expressions of varying complexity and prints the results in the order that they are completed:

(define mul (lambda (a b) (* a b)))
(begin
    (print (mul (+ 1 2) 3))
    (print (* 4 5))
    (print (+ 10 12))
    (print 7))
(par
    (print (mul (+ 1 2) 3))
    (print (* 4 5))
    (print (+ 10 12))
    (print 7))
(print
    (list
        (print (mul (+ 1 2) 3))
        (print (* 4 5))
        (print (+ 10 12))
        (print 7)))


As usual, working code can be found at https://github.com/gliese1337/schrodinger-lisp/.