Xp For Optimizing Compilers

See also: ExtremeProgrammingChallengeOne

If you are working in an area with a lot of deep theory (OptimizingCompilers, for example) then XP needs to be modified, at the very least. YAGNI assumes that you know the theory, and that you use it to tell whether you are going to need it. "The simplest thing that could possibly work" depends on your experience; what a novice thinks might work, an expert knows will not. An expert compiler writer will start with a scanner, parser, symbol table, and code generator, even though this is not needed for the first test cases. Perhaps this is what the XP folks mean by picking a metaphor. Maybe what you learn in a compiler course is the "compiler metaphor". -- RalphJohnson

OptimizingCompilers are odd beasts. They rely on several decades of theory, many high-level design patterns (ParserGenerators?, CodeGeneratorGenerators?, AliasAnalysis, StaticSingleAssignmentForm (SSA), to name a few) and lots of empirically-measured domain knowledge. Lots of design decisions turn out to be related to other design decisions in surprising ways.

So this raises our first problem: to build a modern optimizing compiler, it helps to do some up-front design, decide upon a general architecture, and choose optimization and code generation techniques well-suited to that architecture. You can pick such pre-made architectural patterns out of textbooks, and they help a lot--especially for novices like me.

This leads to a second problem: optimizing compilers are hard to test. The optimizer, for example, has the following contract:

Given an input program, the optimizer should produce an equivalent output program.

But thanks to the HaltingProblem, you can't decide in general whether two programs are equivalent. Instead, you must dance around the central question (are these two programs equivalent?) by compiling lots of sample programs with and without optimization, and checking whether they produce the same results for specific inputs.
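
A minimal sketch of that dance, assuming a hypothetical compiler driver named "mycc" with -O0/-O2 flags (the driver name and its flags are invented for illustration):

  # Differential testing sketch: compile each sample program with and
  # without optimization, run both binaries, and compare their output.
  import subprocess

  def compile_and_run(source, flags, exe):
      subprocess.run(["mycc"] + flags + ["-o", exe, source], check=True)
      return subprocess.run(["./" + exe], capture_output=True,
                            text=True).stdout

  def differential_test(sample_programs):
      for source in sample_programs:
          plain = compile_and_run(source, ["-O0"], "a.plain")
          optimized = compile_and_run(source, ["-O2"], "a.opt")
          assert plain == optimized, "optimizer broke " + source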

The HaltingProblem says you can't prove two arbitrary programs are equivalent. But you're not dealing with arbitrary programs. You're dealing here with programs that have been derived from the same source, using known techniques, and using well-understood transformations. The HaltingProblem is completely irrelevant here unless you're doing hack-and-debug programming.

Some Questions for the XP Experts

I've managed to apply XP to small business systems, and I like the methodology. But I'm interested in the experiences of people who've tried to apply XP to compilers and other hairy problems.

I'm not trying to argue against XP, I'm trying to figure out how to apply it. :-)

-- EricKidd?


The thesis of this page seems to be that OptimizingCompilers need a lot of design up front, and so are not Xp-friendly.

For an Optimizing compiler to be an Xp project, a way would need to be found to factor out the optimizations so that they are not deeply intertwined in the 'main' code. If someone can specify optimizations in a table-driven way, then we have a decent chance of doing a compiler within Xp.
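
For illustration, one table-driven form is a peephole optimizer whose rules live in a data table instead of in the 'main' code. A sketch in Python, over an invented stack-machine instruction set:

  # Table-driven peephole sketch: instructions are tuples such as
  # ("push", 0) or ("add",); the table maps a two-instruction window
  # to its replacement. Adding an optimization means adding a row.
  RULES = {
      (("push", 0), ("add",)): [],   # x + 0  ==  x
      (("push", 1), ("mul",)): [],   # x * 1  ==  x
      (("neg",), ("neg",)):    [],   # -(-x)  ==  x
  }

  def peephole(code):
      out, i = [], 0
      while i < len(code):
          window = tuple(code[i:i + 2])
          if window in RULES:
              out.extend(RULES[window])
              i += 2
          else:
              out.append(code[i])
              i += 1
      return out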

Are there any real issues in designing unit tests?


For an Optimizing compiler to be an Xp project, a way would need to be found to factor out the optimizations so that they are not deeply intertwined in the 'main' code.

Generally, the optimization code is fairly easy to separate.

Data formats are still a bit of a black art, though. The compiler's IntermediateRepresentation? (the data format used to pass partially-compiled code between modules) profoundly influences your choice of algorithms and architecture.

See TheSimplestPossibleCompiler.

Are there any real issues in designing unit tests?

Yes and no. :-) Many parts of compilers are easy to test, a few are tedious to test, and some can't be tested directly. I've personally seen several different test patterns used on compiler projects.
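
For example, the easy parts can be covered by ordinary unit tests that exercise one stage in isolation. A sketch, where the nested-tuple AST format and the fold() pass are invented for illustration:

  # Unit-test sketch for one easily-tested stage: a constant folder
  # over a nested-tuple AST of the form (op, left, right).
  import unittest

  def fold(node):
      if not isinstance(node, tuple):
          return node
      op, left, right = node[0], fold(node[1]), fold(node[2])
      if isinstance(left, int) and isinstance(right, int):
          return {"+": left + right, "*": left * right}[op]
      return (op, left, right)

  class FoldTest(unittest.TestCase):
      def test_folds_constant_subtrees(self):
          self.assertEqual(fold(("+", 1, ("*", 2, 3))), 7)

      def test_leaves_variables_alone(self):
          self.assertEqual(fold(("*", "x", ("+", 2, 3))), ("*", "x", 5))

  if __name__ == "__main__":
      unittest.main()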

It's important to run the tests frequently when making changes to the optimizer or code generator. Bugs in these components often show up as test failures in the compiled programs, and it's very hard to trace a failure back to its source if you've made lots of changes since the last run. -- EricKidd?


Don't use XP for writing an optimizing compiler. Instead, do something that works for writing optimizing compilers. (Feel free to be inspired by XP.) -- SunirShah


Well, of course it's always a local option. But YAGNI and DTSTTCPW aren't about naive design; they are about not putting in features before they are needed, and about always keeping the system simple.

Ralph says:

An expert compiler writer will start with a scanner, parser, symbol table, and code generator, even though this is not needed for the first test cases.

I'm no longer an expert, but I've done a few compilers in my day. Let's see what a semi-expert XPer might do. Then maybe we'll poke holes in it. Feel free to factor this elsewhere or into vacuum if it needs it.

When I look at what to do first, I'm mostly thinking that I have to implement an important story completely in the first iteration. That could be difficult. Let's suppose that we have a linker, so all we need to do is go from source to an obj file. All, he says. It seems enough.

So I pick a simple program, doubtless "Hello, World". I write a program that reads the source for hello.cpp, and produces an object file that, when linked and executed, prints "Hello, world" to the console. The acceptance test is "easy": compile, link, run, check the console. The implementation is easy, too: I'll probably just spit out the obj file as a literal.

I don't have a scanner et al. yet, but that's because I don't need them to output a literal. But then I get another story, maybe to print "Hello, Ron", or even to set a variable to a string and print that. Now I have a good excuse to convert to tokens, build a small symbol table, and emit a little bit of special code. So I'll have a rudimentary scanner (probably using regex or something else simple right now), a trivial parser (it only understands =, simple var names, and strings), and a trivial code generator.
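
Such an embryonic scanner can be just a handful of lines. A sketch using Python's re module (the token set is invented for illustration):

  # Embryonic regex-based scanner sketch.
  import re

  TOKEN_RE = re.compile(r'\s*(?:(\d+)|(\w+)|(".*?")|(\S))')

  def tokenize(source):
      tokens = []
      for number, name, string, punct in TOKEN_RE.findall(source):
          if number:   tokens.append(("NUM", int(number)))
          elif name:   tokens.append(("NAME", name))
          elif string: tokens.append(("STR", string[1:-1]))
          else:        tokens.append(("PUNCT", punct))
      return tokens

  # tokenize('x = "Hello, Ron"') ==
  #     [("NAME", "x"), ("PUNCT", "="), ("STR", "Hello, Ron")]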

I've got two weeks to deliver some important running program (or something else the customer agrees is of value). So I don't have time to do a full-blown job on the scanner, parser, symbol table, etc. But I will almost certainly have those objects in there, in some embryonic form.

Will I have those objects? Very likely. Will I wait for the code to "tell me" that I need them? Probably not. It would be fun to try to let the code direct everything just to see what would happen. I'd need a compiler person who was willing, and a grant to pay for our time. I think the compiler person would need some XP coach type person to keep him "honest", though if the compiler person was XP enough he could do it on his own and probably be OK.

Embryonic, incremental, yes. Naive and letting those pieces be discovered? I'm not sure. It would take nerve, and even the XpHammer wonders why one would bother. -- RonJeffries


I've got two weeks to deliver some important running program (or something else the customer agrees is of value). So I don't have time to do a full-blown job on the scanner, parser, symbol table, etc. But I will almost certainly have those objects in there, in some embryonic form.

From experience, I know that it's possible to build a rudimentary lexer, a parser, an AST-based compiler and a simple VM in two weeks, all capable of handling integer expressions of the form "1 + 2 * 20 / 5" (see TheSimplestPossibleCompiler). Variables would require a bit more work, but who cares? We're already well underway, and we can do that in the next iteration. In three or four iterations, we'll start to have a nice little language (though nothing that's especially valuable to the customer quite yet - let's not kid ourselves).
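
To give a feel for the scale, here is a sketch of roughly that first iteration in Python: a lexer, a recursive-descent parser that emits code for a tiny stack VM, and the VM itself. All names and the instruction set are invented; see TheSimplestPossibleCompiler for the real thing.

  # First-iteration compiler sketch: tokenize, parse (emitting stack-VM
  # code as we go), and run. Grammar:
  #   expr   := term (("+" | "-") term)*
  #   term   := factor (("*" | "/") factor)*
  #   factor := NUM
  import re

  def tokenize(src):
      return re.findall(r"\d+|[+\-*/]", src)

  def parse(tokens):
      code, pos = [], 0

      def factor():
          nonlocal pos
          code.append(("push", int(tokens[pos])))
          pos += 1

      def term():
          nonlocal pos
          factor()
          while pos < len(tokens) and tokens[pos] in "*/":
              op, pos = tokens[pos], pos + 1
              factor()
              code.append(("mul",) if op == "*" else ("div",))

      def expr():
          nonlocal pos
          term()
          while pos < len(tokens) and tokens[pos] in "+-":
              op, pos = tokens[pos], pos + 1
              term()
              code.append(("add",) if op == "+" else ("sub",))

      expr()
      return code

  def run(code):
      stack = []
      for instr in code:
          if instr[0] == "push":
              stack.append(instr[1])
          else:
              b, a = stack.pop(), stack.pop()
              stack.append({"add": a + b, "sub": a - b,
                            "mul": a * b, "div": a // b}[instr[0]])
      return stack.pop()

  # run(parse(tokenize("1 + 2 * 20 / 5"))) == 9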

But if we actually want this compiler to succeed, I think we'll have to cheat a bit. We'll have to make some big architectural decisions up front (what optimizations do we potentially need to support? what intermediate representations work well with these optimizations?). We'll need to write lots of implementation-oriented stories ("the compiler converts the parse tree into StaticSingleAssignmentForm using dominator analysis").

This is not to say that XP is without value. Creeping complexity, isolated knowledge, poor refactoring and a sloppy test policy would kill our project as quickly as any other. But this example does suggest that some problems (1) are annoyingly hard, but (2) have reasonably good "cookbook" solutions. And in these cases, perhaps we should begin as follows:

  1. Decide roughly what we need to accomplish.

  2. Pick an appropriate recipe from the cookbook.

  3. Write small, incremental stories about the recipe.

-- EricKidd?

I don't think you need to cheat. Just write a bunch of user stories ("eliminate dead code", "propagate constants" etc.) and then maybe you'll decide that SSA is the easiest way to do these. After all you only do these transforms to deliver user value, which for a compiler is the optimisations you make. -- NoelWelsh
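
For illustration, a story like "eliminate dead code" might come out as a small stand-alone pass. A sketch over an invented three-address-style block, where each instruction is (dest, op, args):

  # Dead-code elimination sketch: walking the block backwards, keep an
  # instruction only if its result is still needed or it has a side
  # effect (here, only "print" does).
  def eliminate_dead_code(instrs, live_out):
      live, kept = set(live_out), []
      for dest, op, args in reversed(instrs):
          if dest in live or op == "print":
              kept.append((dest, op, args))
              live.discard(dest)
              live.update(a for a in args if isinstance(a, str))
      return list(reversed(kept))

  block = [
      ("t1", "mul", ("x", 2)),
      ("t2", "add", ("x", 1)),   # dead: t2 is never used
      (None, "print", ("t1",)),
  ]
  # eliminate_dead_code(block, live_out=set()) drops the t2 line.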

These optimizations typically don't become user stories until late in the project, long after some irrevocable design decisions have been made. But you need to lay the groundwork early on.

For example, would you attempt to replace a compiler's intermediate representation (say, changing the AST to 3-address form) by refactoring? I certainly wouldn't want to do this - in my experience, that would completely break something like 20,000 lines of code in a medium-sized research compiler, and even more in something the size of gcc.
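
To make the scale of that change concrete, here is the same expression in the two formats (both tuple formats are invented for illustration); every pass that pattern-matches on the nested tree form would have to be rewritten against the flat form:

  # Lowering a nested AST to three-address form: every interior node
  # gets a temporary.
  import itertools

  def lower(node, code, temps):
      if not isinstance(node, tuple):
          return node
      op, lhs, rhs = node
      left = lower(lhs, code, temps)
      right = lower(rhs, code, temps)
      dest = "t%d" % next(temps)
      code.append((dest, op, left, right))
      return dest

  code = []
  lower(("*", "x", ("+", "y", 1)), code, itertools.count())
  # code == [("t0", "+", "y", 1), ("t1", "*", "x", "t0")]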

So why don't I just go ahead, apply some domain knowledge a few iterations before the stories suggest it, and avoid the most obvious dead-ends?

This sounds like an excellent MS thesis topic. Write an optimizing compiler completely test-first, and grow the design completely organically. The hypothesis can be either that you will strike some dead-end, where a big refactoring will cause a small change to be very expensive, or that you will be able to add optimizations with relatively smooth cost using only careful thought, not forethought.


I'm sure refactoring a compiler's intermediate representation would be a nightmare if the compiler were implemented in C or C++ or maybe Java. If you were implementing it in Smalltalk or Python or Ruby, maybe that refactoring wouldn't be such a nightmare at all. Ease of refactoring depends on having (or creating) good abstraction and encapsulation, clever applications of OnceAndOnlyOnce, etc...

There is a good book on parsing in Java, written by a member of the XP mailing list, who may have used test-first design in writing his code. Search the archived messages to find where online he put his JUnit tests. The nice thing about his design is that the conversion from something like Extended Backus-Naur Form to high-level OO Java code is close to being "natural".


Building Parsers with Java, by Steven John Metsker. Paperback, 400 pages, Book & CD-ROM edition (March 26, 2001); Addison-Wesley; ISBN 0201719622.


As far as I understand, there are quite a few compilers that basically work by having some internal data format (which typically started out as an AbstractSyntaxTree and then became Something Else), where every optimisation pass converts this data format into something convenient, does its thing, and converts back. In this setup, you can basically add optimizations at will, and easily. At some point you might notice that you spend most of your time converting between your AST and, say, StaticSingleAssignmentForm. At that point, you might want to promote SSA to the "main" internal form of your compiler, a change which can probably be done incrementally.
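
A sketch of that shape, with invented formats: a constant-folding "pass" that flattens the AST to postfix, does its thing there, and rebuilds the tree afterwards.

  # Convert / transform / convert back, in miniature.
  def fold_constants(ast):
      postfix = to_postfix(ast)            # convert to a convenient form
      return from_postfix(fold(postfix))   # transform, then convert back

  def to_postfix(node):
      if not isinstance(node, tuple):
          return [node]
      op, lhs, rhs = node
      return to_postfix(lhs) + to_postfix(rhs) + [op]

  def fold(postfix):   # fold "+" where both operands are known ints
      out = []
      for item in postfix:
          if (item == "+" and len(out) >= 2 and
                  isinstance(out[-1], int) and isinstance(out[-2], int)):
              out.append(out.pop() + out.pop())
          else:
              out.append(item)
      return out

  def from_postfix(postfix):
      stack = []
      for item in postfix:
          if item in ("+", "*"):
              rhs, lhs = stack.pop(), stack.pop()
              stack.append((item, lhs, rhs))
          else:
              stack.append(item)
      return stack.pop()

  # fold_constants(("*", "x", ("+", 2, 3))) == ("*", "x", 5)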

Anyway, the simplest compiler that could possibly work would probably look somewhat like the old TurboPascal compiler, which parsed the source and then basically wrote out the AST in reverse polish notation, i.e. it used your x86 as a plain old stack machine, with possibly a tiny bit of peephole optimization to remove the most glaring inefficiencies. Such an extremely simple back end will allow you to get the front end correct. As soon as your compiler passes the whole ANSI regression test without a glitch, you are allowed to think about optimizations.
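
A sketch of that back end, using the same nested-tuple AST as the earlier sketches (the mnemonics are real x86; everything else is invented): walk the tree and write out stack-machine code in postfix order.

  # TurboPascal-style back end sketch: the x86 as a plain stack machine.
  def emit(node, out):
      if isinstance(node, int):
          out.append("push %d" % node)
      else:
          op, lhs, rhs = node
          emit(lhs, out)
          emit(rhs, out)
          out.append("pop ebx")
          out.append("pop eax")
          out.append({"+": "add eax, ebx", "*": "imul eax, ebx"}[op])
          out.append("push eax")

  code = []
  emit(("+", 1, ("*", 2, 3)), code)
  # push 1 / push 2 / push 3 / pop ebx / pop eax / imul eax, ebx /
  # push eax / pop ebx / pop eax / add eax, ebx / push eax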


CategoryCompilers

