Sparse documentation was also a problem, as was the fact that the classes Kawa produces are a long way from being usable as regular Java types - they still show a good deal of their Scheme heritage.
I've found that JRuby does a lot better - in some ways its AST is a lot closer to the one I'm producing. In addition, the code is quite easy to follow, and the low-level bytecode generation is all handled with ASM.
I have some basic examples of programmatically generating a JRuby AST and then compiling it - there's no nice API method to do it, but by cribbing from the compiler source it's relatively easy.
Adapting this approach for purr, the Main class of the purr compiler now has some code like this:
// Lex and parse the purr source into purr's own AST
Lexer lexer = new Lexer(new PushbackReader(new BufferedReader(new FileReader(filename)), 1024));
Parser parser = new Parser(lexer);
Start ast = parser.parse();

// Run semantic analysis over the purr tree
SemanticAnalyzer sem = new SemanticAnalyzer();
ast.apply(sem);

// Map each purr node to a JRuby node, then stitch the results into a tree
NodeMapper nmap = new NodeMapper(sem);
ast.apply(nmap);
nmap.stitchTree(ast);

// The JRuby AST root corresponding to purr's root node
Node jHead = nmap.getNodeMap(ast);
compile(jHead);
where the compile method is as simple as:
private static void compile(Node ast) {
    try {
        // Derive a JVM-safe class name from the source file name
        String classname = JavaNameMangler.mangledFilenameForStartupClasspath(filename);

        // Let JRuby inspect the AST for compiler-relevant features
        ASTInspector inspector = new ASTInspector();
        inspector.inspect(ast);

        // Compile the tree to bytecode via JRuby's ASM-based backend
        StandardASMCompiler asmCompiler = new StandardASMCompiler(classname, filename);
        ASTCompiler compiler = new ASTCompiler();
        compiler.compileRoot(ast, asmCompiler, inspector, false, false);

        // Write the generated .class file out to disk
        File file = new File(filename);
        asmCompiler.writeClass(file);
    }
    catch (Exception x) {
        System.out.println(x.toString());
    }
}
So, still quite a way to go (especially in figuring out all the details of NodeMapper, the class that transforms purr's AST into JRuby's) - but some good progress.
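The NodeMapper internals aren't shown here, but the general shape of that kind of two-pass mapping can be sketched in isolation. Everything below is a simplified, hypothetical stand-in (SrcNode, DstNode, TreeMapper and their methods are all invented for illustration, not purr or JRuby classes): a first pass records a target node for every source node in an IdentityHashMap, and a second pass stitches the targets into a tree by looking up each source node's children in that map.

```java
import java.util.ArrayList;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical source-side node (standing in for purr's AST)
class SrcNode {
    final String label;
    final List<SrcNode> children = new ArrayList<>();
    SrcNode(String label) { this.label = label; }
    SrcNode add(SrcNode child) { children.add(child); return this; }
}

// Hypothetical target-side node (standing in for JRuby's AST)
class DstNode {
    final String label;
    final List<DstNode> children = new ArrayList<>();
    DstNode(String label) { this.label = label; }
}

// Two-pass mapper: first map every node 1:1, then stitch parents to children
class TreeMapper {
    private final Map<SrcNode, DstNode> nodeMap = new IdentityHashMap<>();

    // Pass 1: visit each source node and record its target counterpart
    void map(SrcNode src) {
        nodeMap.put(src, new DstNode("j:" + src.label));
        for (SrcNode child : src.children) map(child);
    }

    // Pass 2: link each target node to the targets of its source's children
    void stitchTree(SrcNode src) {
        DstNode dst = nodeMap.get(src);
        for (SrcNode child : src.children) {
            dst.children.add(nodeMap.get(child));
            stitchTree(child);
        }
    }

    DstNode getNodeMap(SrcNode src) { return nodeMap.get(src); }
}

public class MapperSketch {
    public static void main(String[] args) {
        SrcNode root = new SrcNode("root")
                .add(new SrcNode("lhs"))
                .add(new SrcNode("rhs"));
        TreeMapper nmap = new TreeMapper();
        nmap.map(root);
        nmap.stitchTree(root);
        DstNode jHead = nmap.getNodeMap(root);
        System.out.println(jHead.label + " " + jHead.children.size());
    }
}
```

Splitting the mapping from the stitching means every target node already exists before any parent/child links are made, which sidesteps ordering problems when a visitor fires on children before parents.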