[Techtalk] Philosophical question: CPU/memory/disk cheaper than efficiency?

Akkana Peck akkana at shallowsky.com
Tue Apr 10 16:55:48 UTC 2007

Wim De Smet writes:
> Eric S. Raymond claims in his book The Art of UNIX Programming[1] that
> you should only consider recoding the program if you think you'll be
> able to make it an order of magnitude faster, i.e. 10, 100, 1000 times
> faster depending on what you change. 

I haven't read ESR's book, but I suspect he's talking about
refactoring or about complete rewrites from scratch (Joel Spolsky
of joelonsoftware.com has also written against rewrites).

I doubt ESR or Joel or anyone else would argue against
performance bugfixes to make a program more efficient.
Sometimes a program that takes over a machine and "slows other
processes to a crawl", as the original poster described, does
that because it has a memory leak or some other inefficiency,
a bug which could be easily found and fixed. That doesn't involve
anywhere near the programmer effort that a rewrite implies.
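For what it's worth, here's a minimal sketch of what that kind of
leak hunt can look like. The leaky cache below is invented, and I'm
using Python's tracemalloc purely as an illustration (for a C
program you'd reach for valgrind or similar), but the principle is
the same: snapshot the allocations and see which line of code owns
the memory.

```python
import tracemalloc

# Hypothetical leaky cache: entries are added on every request
# but never evicted, so memory grows without bound.
_cache = {}

def handle_request(n):
    _cache[n] = bytearray(1024)  # ~1 KB retained per request, forever

tracemalloc.start()
for i in range(10_000):
    handle_request(i)

snapshot = tracemalloc.take_snapshot()
top = snapshot.statistics("lineno")[0]
print(top)  # the leaky assignment line dominates the report
```

A fix like that (evict old entries, or don't cache at all) is a
one-line bugfix, nothing like the cost of a rewrite.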

Other times, it's a program that really needs a lot of resources
to do its job, and all the tuning in the world won't change that.
As Magni said, it depends on the program. Do you think that it was
written by excellent programmers who spent time on tuning its
performance and efficiency? Or was it written quickly, and not
tested extensively?

If you don't know anything about how the program was written, or
if you suspect it might have been sloppily written, a little time
spent with performance monitoring tools might point out possible
problems and enable it to run well on the hardware you have.

Kelly Jones asked:
> Have there been studies done on this? Articles written?

Sure, but at this point you're asking such a general question that
it would be hard to quantify an answer or to find relevant research.
What sort of program is it? How big is it? Is it CPU or memory
bound? Initially or after running for days? Has any performance
tuning been done on it already? If the answers to these questions
are "don't know", then buying better hardware is probably a
waste, because you don't even know which aspect of the hardware
needs to be better.
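If you can wrap the workload (or a representative piece of it) in a
script, one crude first cut at the CPU-versus-waiting question is to
compare CPU time against wall-clock time. This Python sketch uses an
invented classify helper and a 50% threshold I picked arbitrarily; a
big gap between the two clocks means the process spent its time
blocked (disk, network, swap) rather than computing:

```python
import time

def classify(workload):
    """Run a workload once and report where the time went.
    If CPU time roughly matches wall time, the job is CPU-bound;
    a large gap means it was blocked on something else."""
    wall0 = time.perf_counter()
    cpu0 = time.process_time()
    workload()
    cpu = time.process_time() - cpu0
    wall = time.perf_counter() - wall0
    return "cpu-bound" if cpu / wall > 0.5 else "mostly waiting"

# Invented stand-ins for the real program:
print(classify(lambda: sum(i * i for i in range(2_000_000))))  # pure computation
print(classify(lambda: time.sleep(0.5)))                       # blocked the whole time
```

Note that "memory bound" in the cache-miss sense still shows up as
CPU time here; this only separates computing from blocking.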

Wim again:
> On the other hand that can be dangerous to say. I often come across
> little tidbits of implementation details in gnome that I wonder
> whether the gnome developers weren't a bit too eager to follow ESR's
> advice on, e.g. they picked the stupid implementation that's fast to
> do and easy to understand, but lots of instances of this means gnome
> gets a reputation of being slow. 

Yeah. It is true that it's often a waste of time to guess in
advance which parts of a program will be the performance
bottlenecks and hand-tune them up front (that's "premature
optimization"): it's usually better to write the program the
obvious way, then use performance analysis tools to find out
where the real bottlenecks are and optimize that code.
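In Python, for instance, that measure-first step can be as small as
this (cheap_setup and hot_loop are invented stand-ins for real code;
cProfile and pstats are in the standard library):

```python
import cProfile
import io
import pstats

def cheap_setup():
    return list(range(1000))

def hot_loop(data):
    # The real bottleneck: quadratic behavior hiding in plain sight.
    total = 0
    for x in data:
        for y in data:
            total += x * y
    return total

def main():
    return hot_loop(cheap_setup())

profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
print(report)  # hot_loop accounts for nearly all the time
```

Five minutes with a report like that usually beats an afternoon of
guessing, which is exactly the step that tends to get skipped.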

Unfortunately, since everyone knows that, what happens in the real
world is that people write straightforward code, saying "we
shouldn't do premature optimization; we'll optimize it once it's
written". Then once it's written, the developers with their
state-of-the-art machines and relatively simple setups say "Gee, it
seems to work fine, and performance analysis is boring. Let's go
work on the next new feature!" The optimization never happens, and
users on slower hardware or with different environments are left
with major performance problems.

    "Beginning GIMP: From Novice to Professional": http://gimpbook.com
