Monday 20 April 2009

State of Play for Perlish PoC

First off, the slides from my talk are here.

A tarball of a Mac MLVM (OpenJDK7 with invokedynamic) dating from 2009-04-10 is here.

So, where are we with the code?
The current codebase uses SableCC, an automated parser generator, to transform an EBNF grammar into a parser, lexer, etc.

I'm not saying that's the most powerful way to do it, or that it will ultimately suffice for a reasonable stab at getting Perl onto the JVM. It was simply what was to hand: I had a modicum of experience with it, and it let me get up to speed with the necessary chunks of parser theory in order to get off the ground.
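For the curious, driving a SableCC-generated parser from Java looks roughly like the sketch below. The package and class names here are hypothetical (SableCC derives them from whatever you call your grammar), but the lexer / parser / visitor shape is the standard generated one:

    import java.io.PushbackReader;
    import java.io.StringReader;

    // Hypothetical package names - SableCC generates these from the grammar file.
    import perlish.lexer.Lexer;
    import perlish.parser.Parser;
    import perlish.node.Start;
    import perlish.analysis.DepthFirstAdapter;

    public class ParseDemo {
        public static void main(String[] args) throws Exception {
            String src = "my $x = 42;";
            // The generated Lexer wraps a PushbackReader; the Parser wraps the Lexer.
            Parser parser = new Parser(new Lexer(new PushbackReader(new StringReader(src), 1024)));
            Start ast = parser.parse();
            // Walk the AST with a generated visitor; codegen hangs off overrides like this.
            ast.apply(new DepthFirstAdapter() {
                // override the in/out/case methods for the productions we care about
            });
        }
    }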

My current grammar has a number of deficiencies, mostly because I cobbled it together by cribbing from some pre-existing grammars which parse Java and PHP.

Longer term we may need a more subtle grammar, a parser written in Java (basically, a port of the existing C-based parser to Java, but one which tries to stay out of the later phases of compilation), or both.

In terms of code generation, I have two branches - one which uses a homegrown ASM codegen, and one which translates the AST to JRuby's AST, and then uses JRuby's codegen (which is also based on ASM).

Going forward, the semantic differences between Perl and Ruby (notably Everything-Is-An-Object and string handling) probably rule out AST translation as a long-term strategy.

However, one thing which did occur to me: if there are parts of the JRuby codegen libs which could be refactored out into a separate package that we could use for Perl and other langs, that would be helpful.

In addition, when designing the AST for use with a homegrown ASM-based codegen, a good hard look at the JRuby AST seems like a good plan - the scoping constructs that they use are directly relevant to Perl, for example (although I'm aware that for performance reasons, they may need to optimise how they handle scope constructs).
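As a flavour of what the homegrown ASM route involves, here is a minimal sketch (nothing to do with the actual branches, just an illustration) that emits a trivial class with one method directly as bytecode - real codegen driven from the AST is of course much more involved:

    import org.objectweb.asm.ClassWriter;
    import org.objectweb.asm.MethodVisitor;
    import static org.objectweb.asm.Opcodes.*;

    public class EmitDemo {
        // Emits the equivalent of:
        //   public class Generated { public static String hello() { return "hello"; } }
        public static byte[] emit() {
            ClassWriter cw = new ClassWriter(ClassWriter.COMPUTE_MAXS);
            cw.visit(V1_5, ACC_PUBLIC, "Generated", null, "java/lang/Object", null);

            MethodVisitor mv = cw.visitMethod(ACC_PUBLIC + ACC_STATIC, "hello",
                    "()Ljava/lang/String;", null, null);
            mv.visitCode();
            mv.visitLdcInsn("hello");
            mv.visitInsn(ARETURN);
            mv.visitMaxs(0, 0); // recomputed by ASM because of COMPUTE_MAXS
            mv.visitEnd();

            cw.visitEnd();
            return cw.toByteArray();
        }
    }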

Places to get started

  • A better grammar. One quick way to improve the current situation is to improve the quality of the EBNF grammar. The current grammar is here. It may not be the long-term plan, but making progress with what we've got in advance of a parser port should help to flush out issues and will help momentum.

  • Design session for how to handle Perlish dispatch in full generality. This is probably the most fundamental design issue which needs to get nailed right now. I have some ideas about how to do it (one very rough sketch follows this list), but they need validating and the input of others. If there's interest, I suggest we get together in the back room of a pub or cafe in London one afternoon and thrash this out.

  • Test cases. Having a set of test cases (grouped by complexity - ie low-hanging fruit first) would be very useful. Ultimately, we want to run as much of the test suite as possible, but little acorns are the first step...

  • Starting up a wiki or similar to track known syntax and semantic issues (eg the semantics around 'new', GC, etc).

  • Help from a guru who understands the OP representations. This would be really useful in starting to think about the ultimate form of the parser.
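On the dispatch point, here is one very rough possible shape, purely as a conversation starter: a per-package symbol table mapping sub names to MethodHandles, resolved at call time (with caching and invalidation as the interesting follow-on questions, since Perl lets you redefine subs at runtime). All the names below are made up, and the MethodHandle API is shown in the form it later took in java.lang.invoke - the current java.dyn builds differ in detail:

    import java.lang.invoke.MethodHandle;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical runtime support class - not part of any existing codebase.
    public class PerlPackage {
        private final Map<String, MethodHandle> stash = new HashMap<String, MethodHandle>();

        // 'sub foo {...}' would compile down to a method, registered here as a handle.
        public void defineSub(String name, MethodHandle impl) {
            stash.put(name, impl);
        }

        // A call site asks the package for the current binding of the name.
        // Caching this lookup, and invalidating it when a sub is redefined,
        // is where the real design work lies.
        public Object call(String name, Object... args) throws Throwable {
            MethodHandle mh = stash.get(name);
            if (mh == null) {
                throw new RuntimeException("Undefined subroutine &" + name);
            }
            return mh.invokeWithArguments(args);
        }
    }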



If any of these are appealing, especially the dispatch design task, please get in touch.

Sunday 19 April 2009

invokedynamic on OS X - Update

OK, so the big news is that the Sun guys (John Rose and co) have formally switched to working primarily off the bsd-port tree, rather than the mainline.

This has the advantage that, as a lot of the people on mlvm-dev are on Macs, the patches should just work.

Based on this, I have also confirmed that something currently appears to be preventing OpenJDK7 from self-hosting - ie on OS X at least, you have to use Java 6 (ie SoyLatte) for builds.

Thus, the sequence is that of http://wikis.sun.com/display/mlvm/Building, but making sure that you start off with this line:


hg fclone http://hg.openjdk.java.net/bsd-port/bsd-port sources


if you're on a Mac - ie use the bsd-port tree, not mainline, as the source base.

For the actual build step, ie the gnumake build line, I recommend that you adapt John's build script from here:

http://blogs.sun.com/jrose/resource/davinci-build.sh

The project should then build cleanly. If you do manage to get it building on a Mac using an OpenJDK7 as the bootstrapping JVM, please let me (and the mlvm-dev group) know.

Once the build completes, install the resulting SDK somewhere sensible (I use /usr/local, labelled something like: openjdk7-sdk-darwin-i386-20090410).

This should now be available to be installed as a primary JVM for your Java IDE (I use Eclipse). You will have to explicitly enable MethodHandle functionality for now. In Eclipse, this is under Preferences > Java > Installed JREs: highlight the new JRE and click 'Edit'.

Within the edit dialog, you need to add -XX:+EnableMethodHandles to the Default VM arguments.

With all of this done, you should be ready to test.

Remi Forax has posted here with a description of a simple test case.

Download the zip file from here and unpack it - if you've built a JVM as above, all you'll need is a snippet of the code in the src directory of the zip file.

Import the package fr.umlv.davinci.test into a suitable test project. It will not compile, because the class 'Magic' does not exist yet - there's no Java-language source representation of a dynamic invocation like: Magic.invoke(mh, element)

Instead, run the MagicPatcher class. This will generate a binary-only Magic.class file, which you should put on the build path of your project. After refreshing and rebuilding the Eclipse project, the IDE should be happy, and you should be able to run MethodHandleTest.
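For orientation (and not as a substitute for Remi's code), here is the general shape of a method handle lookup and call, written against the API as it later stabilised in java.lang.invoke - in the current java.dyn builds, the exactly-typed call is roughly the bit that the generated Magic class works around:

    import java.lang.invoke.MethodHandle;
    import java.lang.invoke.MethodHandles;
    import java.lang.invoke.MethodType;

    public class HandleDemo {
        public static void main(String[] args) throws Throwable {
            // Look up String.length() as a method handle.
            MethodHandle length = MethodHandles.lookup().findVirtual(
                    String.class, "length", MethodType.methodType(int.class));

            // An exactly-typed invocation - the receiver is just the first argument.
            int n = (int) length.invokeExact("invokedynamic");
            System.out.println(n); // prints 13
        }
    }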

For any Perl people who are here - the point which should leap out is the similarity of the second invokedynamic call (ie the one involving the Java Sum class) to a Perl closure. It's this capability which motivates my interest in this area from a Perl perspective (I have other interests from a Java perspective, but that's another story).
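To make the closure analogy concrete: binding a receiver (or captured state) into a handle gives you something you can pass around and call later, much like a Perl closure capturing a lexical. Again, this is written against the later java.lang.invoke API, with made-up names rather than Remi's Sum class:

    import java.lang.invoke.MethodHandle;
    import java.lang.invoke.MethodHandles;
    import java.lang.invoke.MethodType;

    public class ClosureDemo {
        // Plays the role of the Sum-style accumulator in Remi's example.
        public static class Counter {
            private int total;
            public int add(int x) { total += x; return total; }
        }

        public static void main(String[] args) throws Throwable {
            Counter state = new Counter();
            MethodHandle add = MethodHandles.lookup().findVirtual(
                    Counter.class, "add", MethodType.methodType(int.class, int.class));

            // Binding the receiver captures 'state', like a closure over a lexical:
            // roughly Perl's  my $total = 0; my $add = sub { $total += shift };
            MethodHandle closure = add.bindTo(state);

            System.out.println((int) closure.invokeExact(3)); // 3
            System.out.println((int) closure.invokeExact(4)); // 7
        }
    }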

Now I've got all this down on electrons, I'll move on to explaining where my proof-of-concept baby-Perlish dynlang is at, how I see possible approaches from here, and to fleshing out the slides from my talk a bit more so that they make more sense without me talking to them.

Tuesday 7 April 2009

Akamai IP Application Accelerator

This post is all about a PoC I did with Akamai's IP Application Accelerator technology.

The basic idea is that you change your DNS entries for your services (and think services other than just websites here - this is a Layer 3 solution) to be CNAMEs for Akamaised DNS entries, which will of course resolve to IPs close to your end users. The solution then opens a tunnel over Akamai's private network, avoiding standard Internet routing and the main exchanges (and also sending multiple copies of each packet by diverse routes), until the packets are reassembled close to the real origin servers. NATting is used heavily (both SNAT and DNAT) to ensure that this is invisible to the application.

Note that this seamlessness is from an IP perspective - it does not cover every corner case, and depending on the details there may be problems with load balancers and with integrating fully with your DNS.
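A quick way to see the visible half of this is simply to resolve the Akamaised name and compare the result with your origin addresses. The hostname below is made up, but the pattern (your service name CNAMEd to an Akamaised entry, resolving to edge IPs near the client rather than to your datacentre) is the part you can observe from outside:

    import java.net.InetAddress;

    public class ResolveCheck {
        public static void main(String[] args) throws Exception {
            // Hypothetical service name that has been CNAMEd to an Akamaised entry.
            String host = "trading.example.com";
            for (InetAddress addr : InetAddress.getAllByName(host)) {
                // With the accelerator in place these should be edge IPs near the client,
                // not your origin datacentre's addresses.
                System.out.println(addr.getHostAddress());
            }
        }
    }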

Joel Spolsky has written up his description of it here: http://www.joelonsoftware.com/items/2009/02/05.html

Akamai's description of it is here: http://www.akamai.com/ipa

So how did we find it? Well, it definitely has a place for some companies. Joel's setup and customer distribution seem to highlight the upside quite well, so I'll leave you to read that on his site.

Some of the problems we found with it:


  1. It only really works well if your destination is in a different "logical metro". This one probably isn't too surprising - if you're in the same city or data centre you wouldn't expect there to be any gain by routing onto the Akamai network and back again.

  2. It has to be either on for all customers, or off for all customers - there's no way to have it only switched on for (say) just Middle Eastern customers with sucky connectivity.

  3. It's charged for both by number of concurrent sessions, and by total bandwidth. Make sure you tie Akamai down about exactly how the costs are calculated - some of the salespeople we spoke to were overly evasive about the costs.



Taking these together means that you may well have to know more about the geographic distribution of your users and their bandwidth usage patterns than you currently do.

What's also worth noting is the use of the phrase "logical metro". Sure, LA customers might see a speedup back to an NY datacentre (Joel's example), but let's suppose you have 3 regional datacentres (NY, LN, TK) covering the Americas, EMEA and Asia respectively.

It's likely that virtually none of your customers in Europe will see any meaningful speedup (leaving aside outages where you fail those customers over to another region) - and certainly no-one in the LN / Paris / ADM / BRX / Frankfurt area will. The links back to the LN datacentre should just be too damned good for the Akamai solution to be able to shave any time off.

Also, it's plausible that your concentration of Asian customers could be in HK / TK / Shanghai so similar remarks could apply about routing back to TK from within Asia.

Akamai didn't give us a straight answer when we asked a question like: "OK, so suppose we only have a very slight, not user-visible speedup. At what stage will the product not route over the Akamai network (and thus not charge us for an acceleration that didn't really do anything)?"

Given that this is the norm for customers within a logical metro where you already have a datacentre, getting a straight answer here is absolutely critical for finding out how much this is really going to cost.

In the end, we decided not to go for it, simply because when we'd analysed where our customers were in the world, we realised we'd be paying a lot of money to speed up about 2% of our mid-range customers.

If you are thinking of deploying this tech, consider these points as well:


  • Consider how your failover model works in the event of complete loss of a datacentre. Do your customers fail across to another region? What is the impact of that? How often do events like that occur? What is the typical duration?

  • Load balancing and the details of how your DNS is set up. You may be surprised by how much detail you need to understand here in order to get this tech to work smoothly. Do not assume that it will necessarily be straightforward for an enterprise deployment.



Is there a point here? I guess just that you have to see the acceleration tech in the context of your application architecture, understand where your customers are coming from, test it out and check that it's not going to cost the company an arm and a leg.

New technologies like this can be a real benefit, but they must always be seen in the context of the business, and in this case - in the geographic and spend distribution of your customers.