Sunday, 16 August 2009

Thanks For All The Fish

One of the major themes of the blog so far has been my attempts to write a version of (perhaps a subset of) Perl 5 that will run on the JVM, and/or find out what makes this a difficult exercise. And, to have fun while doing so.

For reasons which I outline in this post, I think it's time to give up, and I would strongly encourage anyone reading who is tempted to have a go at carrying on where I'm leaving off to think again. The code we wrote is out there, but I would suggest that anyone who's really interested contact me first.

So, here's the lowdown on the problems we faced.

There are really two separate sets of issues - first of all, there are issues related to the ambiguity (and, ultimately, the undecidability) of parsing Perl.

Because Perl does not require brackets around function arguments, and has functions which do not need to specify their prototypes, this line of code:

my $a = dunno + 4;

can be parsed in one of two ways:
  • $a gets 4 + the result of calling the function dunno(), which takes no arguments. That is, + is treated as a binary operator
  • $a gets the result of calling dunno(4), where dunno takes at least one argument. That is, the + is treated as a unary operator on 4.
This line of argument is expanded upon significantly by Jeffrey Kegler at http://www.perlmonks.org/?node_id=663393 - where he links it to the Halting Problem. I have not fully satisfied myself as to the full implications yet, but the initial points alone amount to a hugely significant parsing problem, with no really good solution.
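
To make the ambiguity concrete, here's a minimal Python sketch of the two readings a parser must choose between once it knows dunno's prototype. The tuple-based AST shapes here are purely illustrative, not anything from the real codebase:

```python
# Illustrative sketch: how knowledge of dunno's prototype picks the parse
# of "my $a = dunno + 4;". The tuple AST shapes are made up for demonstration.

def parse_dunno_plus_4(dunno_takes_args):
    if not dunno_takes_args:
        # dunno() + 4 : '+' is a binary operator
        return ("assign", "$a", ("add", ("call", "dunno", []), ("num", 4)))
    else:
        # dunno(+4) : '+' is unary plus on 4
        return ("assign", "$a", ("call", "dunno", [("uplus", ("num", 4))]))

print(parse_dunno_plus_4(False))
print(parse_dunno_plus_4(True))
```

The point is that the parse cannot be chosen until the prototype is known - and in Perl, whether a prototype is in scope can depend on code run at compile time.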

The second source of problems is that Perl 5 is old (1994) - and dates from a time when automated language tools were rather more lacking than they are today. When Larry Wall was working on the first versions of p5, lex and yacc were pretty much the state of the art in terms of what was practical for autogeneration, and a skilled practitioner could outperform them by modifying the output, or writing from scratch.

Perl wasn't written with formal parser theory in mind, and has now reached the stage where the implementation is really all we have. It does not fit well into a rigorous model of language, and during its development flexible language features were considered to be more important than linguistic concerns (such as static analysis).

Simply put, there's no grammar, and attempting to write something which matches the only existing implementation is a major undertaking - no existing automated language tools will help much, and it's largely a matter of completely reimplementing the existing C code in Java (or bytecode). This is a huge amount of work, if it's possible at all, is not going to be fun - and is likely to be very frustrating for a large chunk of the time spent on it.

So, here we are. I've had a lot of fun working on this (and particular thanks to James Laver and Shevek, both of whom provided insight, help and encouragement - and to the many other people in the Perl, Java and Ruby worlds with whom I had interesting and sometimes amazing conversations) and I'd like to close with a short summary of what I've learned from this project:
  • Too much backwards-compatibility is a huge millstone
  • Always ensure that the people you're talking to have the same definitions you do
  • If you're going to use formal language, you must have proofs available. Declaring a problem to be in a particular class by fiat does not help anyone.
  • Perl's block / closure / subroutine definitions are too overlapping and unclear. This is a major problem
  • Indirect object syntax in Perl 5 was a misstep
So, that's it for now.

I'll be moving on to other problems on my stack now, so my next posts will be about broader topics than just language design / implementation, but I'm sure I'll return to language design in due course - after all, I just can't seem to stay away from it.

Saturday, 1 August 2009

My Interview Checklist

Someone asked me recently about what sort of job interview prep I do, and having recently found myself a new job, I thought I'd post a sample here.

This is the bare bones of what I polished before my most recent job hunt. It's skewed towards Java for some of the actual technology bits, but the CS fundamentals should be language-independent.

My attitude is that the working practitioner should have a good command of a lot of this (especially the CS topics) at all times, and should only need to briefly revisit each subject to ensure the polish and the details are 100% there.

The books I used most heavily were "Introduction to Algorithms" (Cormen, Leiserson, Rivest and Stein) and Doug Lea's "Concurrent Programming in Java".

Comments and suggestions for things other people have found useful would be most welcome.

Algorithms

Details of order notation (eg Omega etc)
Mergesort
Quicksort
String Matching
NP / NP-completeness
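
For what it's worth, the kind of "polish" I mean here is being able to produce something like a textbook mergesort from memory - a quick Python sketch:

```python
def mergesort(xs):
    """Classic top-down mergesort: O(n log n) time, O(n) extra space."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = mergesort(xs[:mid]), mergesort(xs[mid:])
    # Merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(mergesort([5, 2, 9, 1, 5, 6]))  # -> [1, 2, 5, 5, 6, 9]
```

Being able to also state the recurrence (T(n) = 2T(n/2) + O(n)) and why it resolves to O(n log n) is exactly the sort of detail interviewers probe.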

Trees
Basic Trees and Tree Construction
Red / Black Trees
Hashing / Hashtable
HashMap / TreeMap
B-Trees
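
As an example of the baseline material here, a quick Python sketch of basic (unbalanced) tree construction and in-order traversal - the starting point that red / black trees and TreeMap improve on:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Unbalanced BST insert - O(log n) on average, O(n) worst case."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def inorder(root):
    """In-order traversal yields keys in sorted order."""
    return inorder(root.left) + [root.key] + inorder(root.right) if root else []

root = None
for k in [7, 3, 9, 1, 5]:
    root = insert(root, k)
print(inorder(root))  # -> [1, 3, 5, 7, 9]
```

The follow-up question to expect: why the worst case degrades to O(n) on sorted input, and how red / black rebalancing fixes that.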

Graphs
Representations of Graphs in code (object / pointers, matrix, adjacency list)
Graph Traversal (BFS, DFS)
Minimal Spanning Tree
Dijkstra
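
As a refresher, Dijkstra over an adjacency-list representation can be sketched in a few lines of Python (heap-based, lazy deletion rather than decrease-key):

```python
import heapq

def dijkstra(adj, src):
    """Shortest paths over an adjacency-list graph {node: [(neighbour, weight)]}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already found a shorter path
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # -> {'a': 0, 'b': 1, 'c': 3}
```

Knowing why this breaks on negative edge weights (and what Bellman-Ford does about it) is a common follow-up.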

Discrete Maths / Probability / "Logic Puzzles"
Probability Exercises
Decision Trees Exercises
n-choose-k Problems
Permutation Groups, Cycles, Representations, etc
"Perfectly Logical Beings" puzzles
Decision / Ply problems (eg Monty Hall)
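
For the Monty Hall example, I find it helps to have the enumeration at your fingertips - a quick Python sketch that counts outcomes rather than simulating:

```python
from itertools import product

def monty_hall():
    """Enumerate all car positions and first picks (host then opens a goat door)."""
    stay_wins = switch_wins = 0
    for car, pick in product(range(3), repeat=2):
        if pick == car:
            stay_wins += 1      # staying wins only when the first pick was the car
        else:
            switch_wins += 1    # switching always wins when the first pick was a goat
    return stay_wins / 9, switch_wins / 9

print(monty_hall())  # staying wins 1/3 of the time, switching wins 2/3
```

The crux, which the enumeration makes obvious, is that switching wins exactly when the first pick was wrong - which happens 2/3 of the time.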

DB / Hibernate
Normal Form
Having clause
Outer joins
XML Form of Hibernate - Basics
JPA / Hibernate
Indexes and Optimisation

Java Internals and Details
Bitwise operators
Collections / Generics nitty-gritty (ie to bytecode level)
OO nitty-gritty (and nasty edge cases)
Annotations nitty-gritty
Arrays nitty-gritty

OS/Concurrency details
Safety, Liveness, Performance, Reusability
Permanent Fail: Deadlock, Missed Signals, Nested Monitor Locks, LiveLock, Starvation, Resource Exhaustion, Distributed Fail
Immutability
Block-structured Locking, Synchronisation, JMM, Fully Synchronized Objects
Other Constructs: Mutex, Latch, Futures, Callable / Command Adapter
Real-world multithreaded application development

Future Web Tech
HTML 5
ECMAScript 4 hassles
Flex vs Silverlight (ref issues with LCDS and the Adobe approach)
Asynch Messaging for webapps

Thursday, 11 June 2009

Last Night's Dynamic Languages Meeting

Yesterday evening, a group of us gathered to talk about dynamic languages, at the British Computer Society in London. The format of the evening was lightning talks - ie 5 minutes per talk - on any subject connected with dynamic languages.

There were a lot of great talks given - from using Ruby for systems programming to improving the test coverage of PHP.

I spoke about the JVM / MLVM as a platform for dynamic languages - my slides are here.

It was really great to meet a lot of people from different parts of the dynamic languages world - especially Rob and Zoe from IBM's sMash / Zero implementation of PHP.

Many thanks to the BCS for hosting, Leon from London.pm for organising it and Billy for being the liaison with the BCS. Hopefully there'll be another similar event soon.

Wednesday, 10 June 2009

Status Update

It's been a while since my last update, largely because I haven't had many spare cycles.

The grammar rewrite for p5vm is coming on well, but is still being debugged.

If there's anyone reading who is very good with SableCC (or shift-reduce parsers in general) and could spare a few hours helping us debug the new grammar, please get in touch.

I'm giving a lightning talk at the British Computer Society tonight about some of the work being done to put dynamic languages on the JVM - slides will appear here tomorrow.

Monday, 20 April 2009

State of Play for Perlish PoC

First off, the slides from my talk are here.

A tarball of a Mac MLVM (OpenJDK7 with invokedynamic) dating from 2009-04-10 is here.

So, where are we with the code?
The current codebase uses SableCC, an automated parser generator, to transform an EBNF grammar into a parser, lexer, etc.

I'm not saying that's the most powerful way to do it, or that it will ultimately suffice for a reasonable stab at getting Perl onto the JVM. It was simply what was to hand: something I had a modicum of experience with, and with which I could get up to speed on the necessary chunks of parser theory in order to get off the ground.

My current grammar has a number of deficiencies, mostly because I cobbled it together by cribbing from some pre-existing grammars which parse Java and PHP.

Longer term we may need either or both of: a more subtle grammar and/or a parser written in Java (basically, a port of the existing C-based parser to Java, but one which tries to stay out of the later phases of compilation).

In terms of code generation, I have two branches - one which uses a homegrown ASM codegen, and one which translates the AST to JRuby's AST, and then uses JRuby's codegen (which is also based on ASM).

Going forward, the semantic differences between Perl and Ruby (notably Everything-Is-An-Object and string handling) probably make AST translation not viable as a long-term strategy.

However, one thing which did occur to me: if there are parts of the JRuby codegen libs which could be refactored out into a separate package that we could use for Perl / other langs, that would be helpful.

In addition, when designing the AST for use with a homegrown ASM-based codegen, a good hard look at the JRuby AST seems like a good plan - the scoping constructs that they use are directly relevant to Perl, for example (although I'm aware that for performance reasons, they may need to optimise how they handle scope constructs).

Places to get started

  • A better grammar. One quick way to improve the current situation is to improve the quality of the EBNF grammar. The current grammar is here. It may not be the long term plan, but making progress with what we've got, in advance of a parser port, should help to surface issues and maintain momentum.

  • Design session for how to handle Perlish dispatch in full generality. This is probably the most fundamental design issue which needs to get nailed right now. I have some ideas about how to do it, but they need validating and the input of others. If there's interest, I suggest we get together in the back room of a pub or cafe in London one afternoon and thrash this out.

  • Test cases. Having a set of test cases (grouped by complexity - ie low-hanging fruit first) would be very useful. Ultimately, we want to run as much of the test suite as possible, but little acorns are the first step...

  • Starting up a wiki or similar to track known issues with syntax and semantics (eg the semantic issues around 'new', GC, etc)

  • Help from a guru who understands the OP representations. This would be really useful in starting to think about the ultimate form of the parser.



If any of these are appealing, especially the dispatch design task, please get in touch.

Sunday, 19 April 2009

invokedynamic on OS X - Update

OK, so the big news is that the Sun guys (John Rose and co) have formally switched to working primarily off the bsd-port tree, rather than the mainline.

This has the advantage that as a lot of the people on mlvm-dev are on Macs, the patches should just work.

Based on this, I have also confirmed that at present, something appears to be preventing OpenJDK7 from self-hosting - ie that currently, on OS X at least, you have to use Java 6 (ie SoyLatte) for builds.

Thus, the sequence is that of http://wikis.sun.com/display/mlvm/Building, but making sure that you start off with this line:


hg fclone http://hg.openjdk.java.net/bsd-port/bsd-port sources


if you're on a Mac - ie download the bsd-port branch, not mainline as the source base.

For the actual build step, ie the gnumake build line, I recommend that you adapt John's build script from here:

http://blogs.sun.com/jrose/resource/davinci-build.sh

The project should then build cleanly. (If you do manage to get it to build cleanly on a Mac using an OpenJDK7 as the bootstrapping JVM, please let me - and the mlvm-dev group - know.)

Once it has built, install the resulting SDK somewhere sensible (I use /usr/local, labelled something like: openjdk7-sdk-darwin-i386-20090410).

This should now be available to be installed as a primary JVM for your Java IDE (I use Eclipse). You will have to explicitly enable MethodHandle functionality for now. In Eclipse, this is under Preferences > Java > Installed JREs. Highlight the new JRE, and click 'Edit'.

Within the IDE detail, you need to add -XX:+EnableMethodHandles to the Default VM arguments.

With all of this done, you should be ready to test.

Remi Forax has posted here with a description of a simple test case.

Download the zip file from here and unpick it - if you've built a JVM as above, all you'll need is a snippet of the code in the src directory of the zip file.

Import the package fr.umlv.davinci.test into a suitable test project. It will not compile, because the class 'Magic' does not exist yet - there's no Java-language source representation of a dynamic invocation like: Magic.invoke(mh, element)

Instead, run the MagicPatcher class. This will generate a binary-only Magic.class file, which you should put on the build path of your project. After refreshing and rebuilding the Eclipse project, the IDE should be happy, and you should be able to run MethodHandleTest.

For any Perl people who are here - the point which should leap out is the similarity of the second invokedynamic call (ie the one involving the Java Sum class) to a Perl closure. It's this capability which motivates my interest in this area from a Perl perspective (I have other interests from a Java perspective, but that's another story).

Now I've got all this down on electrons, I'll move on to trying to explain where my proof-of-concept baby-Perlish dynlang is at, how I see possible approaches from here, and try to flesh out the slides from my talk a bit more so that they make a bit more sense without me talking to them.

Tuesday, 7 April 2009

Akamai IP Application Accelerator

This post is all about a PoC I did with Akamai's IP Application Accelerator technology.

The basic idea is that you change your DNS entries for your services (and think services other than just websites here - this is a Layer 3 solution) to be CNAMEs for Akamaised DNS entries, which will of course resolve to IPs close to your end users. The solution then opens a tunnel over Akamai's private network - which does not use standard Internet routing or the main exchanges, and also sends multiple copies of each packet by diverse routes - until the packets are reassembled close to the real origin servers. NAT is used heavily (both SNAT and DNAT) to ensure that this is invisible to the application.

Note that this seamlessness is from an IP perspective - but this does not cover all corner cases completely, and there may be problems with load balancers and integrating fully with your DNS - depending on the details.
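
As a rough sketch, the DNS side of the change amounts to something like this (the hostnames and addresses here are purely illustrative, not from any real deployment):

```
; Before: the service resolves directly to your origin
service.example.com.  IN  A      203.0.113.10

; After: the service is a CNAME into the Akamaised namespace, which
; resolves to an edge IP near the end user
service.example.com.  IN  CNAME  service.example.com.edgekey.net.
```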

Joel Spolsky has written up his description of it here: http://www.joelonsoftware.com/items/2009/02/05.html

Akamai's description of it is here: http://www.akamai.com/ipa

So how did we find it? Well, it definitely has a place for some companies. Joel's setup and customer distribution seem to highlight the upside quite well, so I'll leave you to read that on his site.

Some of the problems we found with it:


  1. It only really works well if your destination is in a different "logical metro". This one probably isn't too surprising - if you're in the same city or data centre you wouldn't expect there to be any gain by routing onto the Akamai network and back again.

  2. It has to be either on for all customers, or off for all customers - there's no way to have it only switched on for (say) just Middle Eastern customers with sucky connectivity.

  3. It's charged for both by number of concurrent sessions, and by total bandwidth. Make sure you tie Akamai down about exactly how the costs are calculated - some of the salespeople we spoke to were overly evasive about the costs.



Taking these together means that you may well have to know more about the geographic distribution of your users and their bandwidth usage patterns than you currently do.

What's also worth noting is the use of the phrase "logical metro". Sure, LA customers might see a speedup back to an NY datacentre (Joel's example), but let's suppose you have 3 regional datacentres (NY, LN, TK) covering the Americas, EMEA and Asia respectively.

It's likely that virtually none of your customers in Europe will see any meaningful speedup (leaving aside outages where you fail those customers over to another region) - and certainly no-one in the LN / Paris / AMS / BRU / Frankfurt area will. The links back to the LN datacentre should just be too damned good for the Akamai solution to be able to shave any time off.

Also, it's plausible that your concentration of Asian customers could be in HK / TK / Shanghai so similar remarks could apply about routing back to TK from within Asia.

Akamai didn't give us a straight answer when we asked a question like: "OK, so suppose we only have a very slight, not user-visible speedup. At what stage will the product not route over the Akamai network (and thus not charge us for an acceleration that didn't really do anything)?"

Given that this is the norm for customers within a logical metro where you have a datacentre, getting a straight answer here is absolutely critical to finding out how much this is really going to cost.

In the end, we decided not to go for it, simply because when we'd analysed where our customers were in the world, we realised we'd be paying a lot of money to speed up about 2% of our mid-range customers.

If you are thinking of deploying this tech, consider these points as well:


  • Consider how your failover model works in the event of complete loss of a datacentre. Do your customers fail across to another region? What is the impact of that? How often do events like that occur? What is the typical duration?

  • Load balancing and the details of how your DNS is set up. You may be surprised by how much detail of this you need to understand in order to get this tech to work smoothly. Do not assume that it will necessarily be straightforward for an enterprise deployment.



Is there a point here? I guess just that you have to see the acceleration tech in the context of your application architecture, understand where your customers are coming from, test it out and check that it's not going to cost the company an arm and a leg.

New technologies like this can be a real benefit, but they must always be seen in the context of the business - and in this case, in the geographic and spend distribution of your customers.