[Techtalk] interpreted vs. compiled languages

Wim De Smet kromagg at gmail.com
Wed Jun 10 07:06:00 UTC 2009


Hi,

On Wed, Jun 10, 2009 at 2:28 AM, Daniel Pittman <daniel at rimspace.net> wrote:
> Carla Schroder <carla at bratgrrl.com> writes:
>
>> Assuming a programmer has reasonable skills, are there (generally speaking)
>> significant performance differences between interpreted vs. compiled
>> languages?
>
> Probably not, for most practical purposes.  Unless your application is
> performing a lot of heavy numeric computation, or if it runs over the
> Internet, performance is probably dominated by I/O waits anyhow.
>
> As Michael mentioned, there is also a significant range of performance
> variation among "interpreted" languages, especially with the widespread use of
> "just in time" compilers.

As you say, there's a range. If you compare Ruby 1.8 with Ruby 1.9, or
either with Python, there's a world of difference. Another well-known
example is JavaScript, which is still a big target for optimization:
compare JS in, say, IE6 against Google Chrome and the gap is just as
dramatic.

How much this matters is a matter of perspective. If you run a Ruby on
Rails deployment on 1.8 you might have to start scaling horizontally
slightly sooner, but claiming your requests to the server are slow
because Ruby is slow is patent nonsense; most web apps are I/O-bound
anyway.

> In fact, in some cases a good JIT can deliver *better* performing code than a
> static compiler, using some variant of a "trace tree" model to use runtime
> profiling information to guide the compiler, where a static compiler needs
> manual tuning to even come close to the same information.

Of course, the crucial point here is that the old compiled-vs.-interpreted
argument is a bit moot in an age where just-in-time compiling
exists and where many of the more popular languages (Java, C#) run
inside a VM anyway, which blurs the line between compiled and
interpreted. I'm not too well versed in .NET, but Java bytecode starts
out being interpreted, so you could sort of call Java an interpreted
language, even though it uses a traditional compile phase just like
the purely compiled languages.
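
A quick way to see both sides of that blurred line, assuming a stock
HotSpot JVM (a rough illustration; exact numbers will vary):

    // HotLoop.java -- compiled ahead of time to bytecode with javac,
    // but how that bytecode actually runs is up to the VM.
    public class HotLoop {
        public static void main(String[] args) {
            long start = System.nanoTime();
            long sum = 0;
            for (int i = 0; i < 100000000; i++) {
                sum += i;
            }
            long ms = (System.nanoTime() - start) / 1000000;
            // Printing the result keeps the loop from being optimized away.
            System.out.println("sum=" + sum + " in " + ms + " ms");
        }
    }

Compile it once with javac HotLoop.java, then run "java -Xint HotLoop"
to force the bytecode to be purely interpreted, and plain "java HotLoop"
to let the JIT kick in: the exact same class file behaves like an
interpreted or a compiled program depending on a flag.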

At some point, of course, all code is interpreted, whether at the
hardware level or above. The question is at what level you can run it
the fastest, which is a tough one. If you run it higher up the stack
you pay a bit more overhead, but you're aware of all the layers below
you, so you can potentially do a lot more optimization.
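
As a concrete (if simplified) Java example of that: a JIT knows which
classes are actually loaded at runtime, so a call through an interface
with only one live implementation can be inlined, where a static
compiler would need whole-program analysis to prove the same thing. A
minimal sketch (the names here are made up for illustration):

    interface Shape { double area(); }

    class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    public class MonomorphicDemo {
        public static void main(String[] args) {
            // Circle is the only Shape ever loaded here, so HotSpot can
            // treat shape.area() as a monomorphic call and inline it
            // into the loop. A static compiler has to assume any
            // implementation could show up and keep the virtual dispatch.
            Shape shape = new Circle(2.0);
            double total = 0;
            for (int i = 0; i < 10000000; i++) {
                total += shape.area();
            }
            System.out.println(total);
        }
    }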

In fact, I once wrote a compression algorithm in Java using a simple
pointer-based tree as the backing data structure, and it outperformed
a C implementation that used the same tree in array form. In principle
mine should have been slower, or at least one would have expected it
to be, but the pointer version was probably a lot easier for the JIT
to optimize.
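
To give you an idea, the Java side used nodes along these lines (a
from-memory sketch, not the original code):

    // Pointer-based tree node: each child is a real object reference.
    // The C version packed the same tree into parallel arrays and used
    // integer indices where this uses pointers.
    class Node {
        Node left;
        Node right;
        int symbol;     // payload, e.g. a byte value
        int frequency;  // e.g. occurrence count used by the compressor
    }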

> Finally, most practical real world interpreted languages make it easy to call
> through to native compiled code for cases when you need the performance that
> hand-tuned assembly can deliver (or the JIT doesn't exist, or can't keep up.)

Which, I hasten to add, has a certain fixed per-call overhead, so you
typically only call into native code for a big chunk of processing,
not, say, to multiply two doubles with your hand-rolled assembly.
That's something beginners seem to get wrong sometimes.
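
In Java that boundary is JNI, and every crossing costs roughly the
same no matter how little work the native side does. A hedged sketch
(the library and method names here are hypothetical, and you would
still have to write and build the C side yourself):

    public class NativeMath {
        static {
            // Hypothetical native library containing the C implementations.
            System.loadLibrary("nativemath");
        }

        // Each call pays the JVM-to-native transition cost. That's fine
        // for compressBlock() on a big buffer, a waste for multiply().
        public static native double multiply(double a, double b);
        public static native byte[] compressBlock(byte[] input);
    }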

regards,
Wim

