
TITLE

Parrot FAQ - Frequently Asked Questions

VERSION

Revision 0.5 - 04 September 2002
Revision 0.4 - 26 August 2002

Fixed up the licensing bits

Revision 0.3 - 13 March 2002

Translated to POD and added "Why aren't we using external tool or library X?"

Revision 0.2 - 03 December 2001

Added the "Parrot and Perl" section and "Why Re-implement Perl". Incorporated Dan's Q&A items.

Revision 0.1 - 03 December 2001

Adopted from Simon Cozens's article, "Parrot: A Cross-Language Virtual Machine Architecture".

GENERAL QUESTIONS

What is Parrot?

Parrot is the new interpreter being designed from scratch to support the upcoming Perl 6 language. It is a standalone virtual machine that can execute bytecode-compiled dynamic languages, primarily Perl 6 but also Perl 5. Ideally, Parrot will also be able to support other dynamic, bytecode-compiled languages such as Python, Ruby, and Tcl.

Why "Parrot"?

The name "Parrot" relates to Simon Cozens's April Fool's joke, in which Larry Wall and Guido van Rossum announced the merger of the Perl and Python languages.

As penance, Simon spent time as Parrot's lead developer, but he's gotten better.

Is Parrot the same as Perl 6?

No. Parrot is the virtual machine on which Perl 6 programs are expected to run. The Perl 6 language definition is currently (December 2001) being crafted by Larry Wall. While the true nature of Perl 6 is still unknown, it will be substantially similar to Perl as we know it today, and will need a runtime system. For more information on the nascent Perl 6 language definition, check out Larry's apocalypses.

Can I use Parrot today?

Yes.

Parrot is in the early phases of its implementation. The primary way to use Parrot is to write Parrot assembly code, described in PDD6.
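For a taste, here's about the smallest complete Parrot assembly program (a sketch following PDD6; exact op names may shift as Parrot evolves):

```
# print a greeting, then halt the interpreter
print "Hello, Parrot!\n"
end
```

In early Parrot releases, something like perl assemble.pl hello.pasm > hello.pbc followed by parrot hello.pbc would assemble and run it; check the distribution you have for the current tooling.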

You can also create dynamic content within Apache using Ask Bjørn Hansen's mod_parrot module. Be warned, though: mod_parrot is a toy, and should not be used with any production code.

Why should I program in Parrot Assembly language?

Lots of reasons, actually. :^)

  • All the cool kids are doing it.

  • It's a neat hack.

  • You get all the pleasure of programming in assembly language without any of the requisite system crashes.

Seriously, though, programming in Parrot assembly language is an interesting challenge. It's also one of the best ways to write test cases for Parrot.

When can I expect to use Parrot with a real programming language?

It depends on what you mean by real. :^)

  • Leon Brocard has released a proof-of-concept Java bytecode to Parrot bytecode compiler.

  • Gregor Purdy is working on a little language called Jako that targets Parrot bytecode directly. (Available with the Parrot distribution.)

  • Dan Sugalski and Jeff Goff have started work on compiling Scheme down to Parrot bytecode. (Available with the Parrot distribution.)

  • Clint Pierce wrote an Integer Basic implementation in Parrot assembly, which is shipped with the Parrot distribution, as are a few example programs (including Hunt the Wumpus and Eliza).

  • There's a Befunge interpreter in the languages directory.

  • There's an (ahem) BF interpreter in the languages directory. Be aware that BF is not, strictly speaking, the language's name, merely its initials.

  • There is a prototype Perl 6 implementation in the languages directory as well, though it's only as complete as the Perl 6 spec (which, at this writing, isn't sufficiently complete).

What language is Parrot written in?

C.

For the love of God, man, why?!?!?!?

Because it's the best we've got.

That's sad.

So true. Regardless, C's available pretty much everywhere. Perl 5's in C, so we can potentially build any place Perl 5 builds.

Why not write it in [insert favorite language here]?

Because of one or more of the following:

  • Not available everywhere.

  • Limited talent pool for core programmers.

  • Not fast enough.

Why aren't you using external tool or library X?

The most common issues are:

  • License compatibility.

    Parrot has an odd license -- it currently uses the same license as Perl 5: the disjunction of the GNU GPL and the Artistic License, written (Artistic|GPL) for short. Thus, Parrot's license is compatible with the GNU GPL, which means you can combine Parrot with GPL'ed code.

    Code accepted into the core interpreter must fall under the same terms as Parrot. Library code (for example, the ICU library we're using for Unicode) that we link into the interpreter can be covered by other licenses, so long as their terms don't prohibit this.

  • Platform compatibility.

    Parrot has to work on most of Perl 5's platforms, as well as a few of its own. Perl 5 runs on eighty platforms; Parrot must run on Unix, Windows, Mac OS (X and Classic), VMS, Crays, Windows CE, and Palm OS, just to name a few. Among its processor architectures will be x86, SPARC, Alpha, IA-64, ARM, and 68x00 (Palms and old Macs). If something doesn't work on all of these, we can't use it in Parrot.

  • Speed, size, and flexibility.

    Not only does Parrot have to run on all those platforms, but it must also run efficiently. Parrot's core size is currently between 250K and 700K, depending on compiler. That's pushing it on the handheld platforms. Any library used by Parrot must be fast enough to have a fairly small performance impact, small enough to have little impact on core size, and flexible enough to handle the varying demands of Perl, Python, Tcl, Ruby, Scheme, and whatever else some clever or twisted hacker throws at Parrot.

These tests are very hard to pass; currently we expect we'll probably have to write everything but the Unicode support ourselves.

Why your own virtual machine? Why not compile to JVM/.NET?

Those VMs are designed for statically typed languages. That's fine for Java, C#, and lots of other languages that are statically typed; Perl isn't. For a variety of reasons, this means Perl would run more slowly on those VMs than on an interpreter geared towards dynamic languages.

The .NET VM didn't even exist when we started development, or at least we didn't know about it when we were working on the design. We do now, though it's still not suitable.

So you won't run on JVM/.NET?

Sure we will. They're just not our first target. We'll build our own interpreter/VM first, and once that's working, we'll start on the JVM and/or .NET back ends.

What about [insert other VM here]?

While I'm sure that's a perfectly nice, fast VM, it probably has the same issues as the languages in the "why not write it in another language" question above. I realize that the Scheme-48 interpreter is darned fast, for example, but we're looking at the same sort of portability and talent-pool problems as with, say, Erlang or Haskell as an implementation language.

Why is the development list called perl6-internals?

The mailing list precedes the Parrot joke and subsequent unveiling of the True Grand Project by a number of months. We've just not gotten around to renaming the mailing list. We will.

PARROT AND PERL

Why re-implement Perl?

Good question.

At The Perl Conference 4.0, in the summer of 2000, Larry Wall announced that it was time to recreate Perl from the ground up. This included the Perl language, the implementation of that language, the community of open source developers who volunteer to implement and maintain the language, and the larger community of programmers who use Perl.

A variety of reasons were given for embarking on this project:

  • Perl5 is a stable, reliable, robust platform for developing software; it's not going away for a long time, even after Perl6 is released. (Proof: Perl4 is still out there, no matter how much we all want it to go away.)

  • We have the ability to translate Perl5 into Perl6 if necessary. This preserves backward compatibility with a large body of existing Perl code, which is very important.

  • The language can stand some revision: formats don't really belong in the core language, and typeglobs have outlived their usefulness. By revising the language now, we can make Perl better.

  • Some warts really should be removed: system should return true instead of false on success, and localtime should return the year, not the year - 1900.

  • It would be nice to write the Perl to Bytecode compiler in Perl, instead of C. That would make it much easier for Perl hackers to hack on Perl.

You want to write the Perl compiler in Perl?

Sure, why not? C, Java, Lisp, Scheme, and practically every other language is self-hosting.

Isn't there a bootstrapping problem?

No, not really. Don't forget that we can use Perl 5 to run Perl 5 programs, such as a Perl 5 to Parrot compiler.

How will Parrot handle both Perl 5 and Perl 6?

We don't know yet, since it depends on the Perl 6 language definition. But we could use the more appropriate of two Perl compilers, depending on whether we're compiling Perl 5 or Perl 6. Larry has mumbled something about a package statement declaring that the file is Perl 5, but we're still not quite sure how that fits in.

Is this how Parrot will run Python, Ruby, and Tcl code?

Probably.

Latin and Klingon too?

No, Parrot won't be twisted enough for Damian. Perhaps when Parrot is ported to a pair of supercool calcium ions, though...

Huh?

You had to be there.

PARROT IMPLEMENTATION ISSUES

What's with the whole register machine thing?

Not much, why do you ask?

Don't you know that stack machines are the way to go in software?

No, in fact, I don't.

But look at all the successful stack-based VMs!

Like what? There's just the JVM.

What about all the others?

What others? That's it, unless you count Perl, Python, or Ruby.

Yeah them!

Yeah, right. You never thought of them as VMs, admit it. :^)

Seriously, we're already running with a faster opcode dispatch than any of them are, and having registers just decreases the amount of stack thrash we get.
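To see why, compare the code for c = a + b on the two architectures (a sketch, not a literal listing from any particular VM):

```
# register machine (Parrot-style): one op, operands named directly
add I2, I0, I1        # I2 = I0 + I1

# stack machine (pseudocode): four ops, three stack touches
push a
push b
add                   # pops two values, pushes the sum
pop c
```

Each stack touch is a memory operation and a dispatch; the register form does the whole thing in one dispatched op.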

Right, smarty. Then name a successful register-based VM!

The 68K emulator Apple ships with all its PPC-enabled versions of Mac OS.

Really?

Really.

You're not using reference counting. Why not?

Reference counting has three big issues.

Code complexity

Every single place where an object is referenced, and every single place where a reference is dropped, must properly alter the refcount of the objects being manipulated. One mistake and an object (and everything it references, directly or indirectly) lives forever or dies prematurely. Since a lot of code references objects, that's a lot of places to scatter reference counting code. While some of it can be automated, that's a lot of discipline that has to be maintained.

Tracking down problems in a garbage collection system is hard enough as it is; when your garbage collection system is scattered across your entire source base, and possibly across all your extensions, it's a massive annoyance. More sophisticated garbage collection systems, on the other hand, involve much less code. It is, granted, trickier code, but it's a small chunk of code, contained in one spot. Once you get that one chunk correct, you don't have to bother with the garbage collector any more.

Cost

For reference counting to work right, you need to twiddle reference counts every time an object is referenced or unreferenced. This generally includes even short-lived objects that exist only briefly before dying. The cost of a reference-counting scheme is directly linked to the number of times code references or unreferences objects. A tracing system of one sort or another (and there are many) has an average-case cost based on the number of live objects.

There are a number of hidden costs in a reference-counting scheme. Since the code to manipulate the reference counts must be scattered throughout the interpreter, the interpreter code is less dense than it would be without reference counts. That means that more of the processor's cache is dedicated to reference count code, code that is ultimately just interpreter bookkeeping, and not dedicated to running your program. The data is also less dense, as there has to be a reference count embedded in it. Once again, that means more cache used for each object during normal running, and lower cache density.

A tracing collector, on the other hand, has much denser code, since all it's doing is running through active objects in a tight loop. If done right, the entire tracing system will fit nicely in a processor's L1 cache, which is about as tight as you can get. The data is also accessed in a linear fashion, at least in part, which lends itself well to processors' prefetch mechanisms where they exist. The garbage collection data can also be put in a separate area and designed in a way that's much tighter and more cache-dense.

Having said that, the worst-case performance for a tracing garbage collecting system is worse than that of a reference counting system. Luckily the pathological cases are quite rare, and there are a number of fairly good techniques to deal with those. Refcounting schemes are also more deterministic than tracing systems, which can be an advantage in some cases. Making a tracing collector deterministic can be somewhat expensive.

Self-referential structures live forever

Or nearly forever. Since the only time an object is destroyed is when its refcount drops to zero, data in a self-referential structure will live on forever. It's possible to detect this and clean it up, of course... by implementing a full tracing garbage collector. That means that you have two full garbage collection systems rather than one, which adds to the code complexity.
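CPython makes a convenient demonstration here too (a Python sketch; with its cycle collector disabled, CPython behaves as a pure refcounting system):

```python
import gc
import weakref

gc.disable()                 # leave only pure reference counting in play

class Node:
    pass

a = Node()                   # acyclic: dies as soon as the last reference goes
ref_a = weakref.ref(a)
del a
print(ref_a() is None)       # True: refcount hit zero, object destroyed

b, c = Node(), Node()        # cyclic: each keeps the other's count above zero
b.partner, c.partner = c, b
ref_b = weakref.ref(b)
del b, c
print(ref_b() is None)       # False: the cycle leaks under pure refcounting

gc.enable()
gc.collect()                 # a tracing pass is what finally reclaims the cycle
print(ref_b() is None)       # True
```

Note that CPython resolves this exactly as described above: by shipping a full tracing cycle collector alongside its reference counting.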

Could we do a partial refcounting scheme?

Well... no. It's all or nothing. If we were going to do a partial scheme, we might as well do a full one. (A partial refcounting scheme is actually more expensive, since partial schemes must check whether refcounts need twiddling, and checks are more expensive than you might think.)

LINKS

April Fool's Joke: http://www.perl.com/pub/a/2001/04/01/parrot.htm

apocalypses: http://www.panix.com/~ziggy/

cool kids: http://use.perl.org/~acme/journal

Java bytecode to Parrot bytecode: http://archive.develooper.com/perl6-internals@perl.org/msg03864.html

http://www.perl.com/pub/a/2000/10/23/soto2000.html

be there: http://www.csse.monash.edu.au/~damian/papers/#Superpositions

Really.: http://developer.apple.com/techpubs/mac/PPCSoftware/PPCSoftware-13.html