Subject: Re: New ESR paper: The Magic Cauldron
From: "Stephen J. Turnbull" <>
Date: Mon, 28 Jun 1999 13:31:28 +0900 (JST)

>>>>> "rn" == Russell Nelson <> writes:

    rn> Stephen J. Turnbull writes:
    >> If you take the "all bugs are shallow with enough eyeballs"
    >> argument seriously, it leads to the conclusion that bugfixing
    >> and feature provision should be increasing returns to scale in
    >> user population.  Now, it is true that with "easy" bugs and
    >> features, congestion and coordination problems will quickly
    >> lead to decreasing returns (pace, Steve McConnell).

    rn> You're presuming the absence of a Linus Torvalds to say "We
    rn> have enough kernel hackers.  Go hack Gnome."  You're further
    rn> presuming that the feedback from the c & c problems won't be
    rn> noticed.

    Eric> Stephen J. Turnbull <>:
    >> I don't understand the "not scale" point.  I've looked at this
    >> in a simple mathematical model, where "hackerishness" is the
    >> innate propensity to hack, and is distributed randomly in
    >> proportion to population.  In that case, the problem is
    >> homogeneous of degree one in the number of users.  In order to
    >> get less than degree one, you need to assume that in some sense
    >> the hackers are "first in" and as the number of users rises the
    >> proportion of hackers falls.

>>>>> "Eric" == Eric S Raymond <> writes:

    Eric> Yes, that's precisely the case as I understand it.  There
    Eric> are at least two good reasons for this:

    Eric> 1. Having the hacker nature is not a binary attribute but a
    Eric> gradient -- and people who have a lot of it aren't merely
    Eric> passively attracted to the culture but actually seek it out.
    Eric> Therefore, as the culture expands it tends to cherry-pick
    Eric> the people with the highest aptitude early on, with a
    Eric> corresponding relative decrease in the yield per unit time
    Eric> later.

>>>>> "Ian" == Ian Lance Taylor <> writes:

    Ian> I think it's correct that as the number of users rises the
    Ian> proportion of hackers falls.  This matches my personal
    Ian> experience, and it matches the predictions of Geoffrey
    Ian> Moore's early-adopter/late-adopter model.  Hackers tend to be
    Ian> early adopters of software.  Late adopters expect the
    Ian> software to work, and are less inclined to fix it, or deal
    Ian> with it at all, if it doesn't.

This is presumably true in the dynamic models you have in mind.
However, I suspect it is _not_ true in cross-section comparisons, such
as GNU Emacs vs. Canna (a much smaller project: a system for inputting
Japanese, with obviously limited appeal elsewhere).

The reason the dynamic vs. static (cross-section) issue is important
is that the dynamic issue is interesting only as long as there is an
essentially limitless pool of users of non-free software to draw
non-hackers from.  If _all_ software were free, then entry into a
given project would be determined by (a) features, not correlated with
hackerishness, and (b) hack value, which is correlated with
hackerishness.  In that context, it is no longer obvious that the
"hack value" factor is sufficiently large to generate a gradient for
successful projects---of course it will be there for startups, but
entrepreneurs are always central to startups.
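A toy calculation may make the dynamic-vs-static distinction concrete.
The particular propensity numbers below are pure illustration, mine
and not estimates of anything:

```python
# Toy model of how hacking capacity scales with user population N.
# Illustrative assumption: the k-th user to join contributes
# "hackerishness" h(k), and total capacity is the sum over k <= N.

def capacity(N, first_in=False):
    """Total hacking capacity of the first N users of a project."""
    if first_in:
        # Hackers are "first in": propensity declines with join rank,
        # so capacity grows sub-linearly in N (decreasing returns).
        return sum(1.0 / (1 + k) for k in range(N))
    # Random mixing: every user draws the same mean propensity (0.1
    # here, arbitrarily), so capacity is homogeneous of degree one.
    return sum(0.1 for _ in range(N))

# Scaling N by 10x scales capacity by 10x in the random-mixing case,
# but by much less in the "first in" case.
linear_ratio = capacity(1000) / capacity(100)
gradient_ratio = capacity(1000, first_in=True) / capacity(100, first_in=True)
```

Under random mixing the ratio comes out at exactly ten; with "first
in" entry it is well under two, which is the gradient in question.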

    Eric> 2. Symmetrically, culture growth is limited by its capacity
    Eric> to educate new members in cultural norms.  When the medium
    Eric> of education is person-to-person contact out of limited time
    Eric> budgets, this introduces non-linearities rather similar to
    Eric> diffusion-limited growth.

But this depends again on your assumption that all potential hackers
are already hacking.  I don't think this is true; I know a few people
who would be hacking free software if they weren't bound to Windose or
Mac systems for some reason.  And I think they'd be hacking a lot more
effectively if they didn't have to reinvent all those proprietary
wheels.

    sjt>    If you take the "all bugs are shallow with enough
    sjt> eyeballs" argument seriously, it leads to the conclusion that
    sjt> bugfixing and feature provision should be increasing returns
    sjt> to scale in user population.  Now, it is true that with
    sjt> "easy" bugs and features, congestion and coordination
    sjt> problems will quickly lead to decreasing returns (pace, Steve
    sjt> McConnell).  But with "hard" bugs and features, the number of
    sjt> discoveries should be sufficiently sparse that the normal OSS
    sjt> procedure of "discuss on Usenet, code, and distribute" will
    sjt> be sufficient coordination.  Presumably these are the highest
    sjt> value-added, too.

    Eric> Russ has already addressed this point in a way I
    Eric> substantially agree with.

Russ?  Did I miss something?  Or do you mean Ian, as follows?

    Ian> I personally only believe that argument in the sense of
    Ian> finding the bugs in the first place.  For finding bugs, I
    Ian> think it's true that the more people you have looking at the
    Ian> program the better.

    Ian> For actually fixing the bugs, I agree that you have to
    Ian> categorize the bugs.  Easy bugs can be fixed by anybody who
    Ian> understands the source.  But they naturally tend to get fixed
    Ian> quickly.  For any given package, I believe there are
    Ian> ordinarily relatively few people who can fix hard bugs.  It's
    Ian> true that as the number of users goes up, the number of
    Ian> people who can fix hard bugs generally goes up.  But the
    Ian> increase is not only not linear, it's exponentially
    Ian> decreasing, and moreover it often tops out at a relatively
    Ian> small number of actual developers.

I agree this is empirically true.  But does it express an underlying
psychological reality, or is it an issue of poor organizational
methods?  Surely there are a hundred people or more who can fix "hard
bugs" in Linux.  In XEmacs, I have seen at least 10 different people
make large-scale modifications (core code, I'm not talking about Lisp
libraries) and suspect there are probably another 10 in the community
capable of fixing "hard bugs".

I wouldn't be surprised if there is a small number, perhaps 1-5, of
"core developers" who can pretty much fix any bug (although in the
case of Linux even this is not clear: the team is hacking away at the
kernel and pushing more and more functionality out into modularized
drivers or even user space, so that number may now be zero).  That cap
of perhaps five introduces a non-linearity at the very top of the "bug
hardness" scale---no matter how big the user base, the number of core
developers doesn't rise above five.  But immediately below that, as
long as we have Well-Modularized Code, I don't see why we shouldn't
have proportionality to user base.
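To put that two-tier picture in concrete terms, here is a toy formula.
The thinning fraction, the hardness scale, and the cap of five are
illustrative assumptions, nothing more:

```python
# Toy model: how many people can fix a bug of hardness h, given N
# users.  Illustrative assumptions: the fraction of users able to fix
# hardness-h bugs thins geometrically with h, and the "core
# developer" tier at the very top is capped at CORE_CAP.

CORE_CAP = 5       # the handful of core developers discussed above
MAX_HARDNESS = 10  # top of the "bug hardness" scale

def fixers(N, h):
    """People in an N-user base who can fix a bug of hardness h."""
    if h >= MAX_HARDNESS:
        # Very top of the scale: capped no matter how big N gets.
        return min(CORE_CAP, N)
    # Below the top: proportional to user base, thinning with h.
    return int(N * 0.1 * (0.5 ** h))
```

Doubling the user base doubles the pool of fixers at every hardness
level except the very top, where the count is pinned at five.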

As long as the user base for free software is high enough, a larger
user base doesn't simply mean sucking in Win-over-dosed clueless
newbies.  And my own limited experience with "clueless newbies" is
that they are far less clueless than I used to think, and enormously
eager to earn hacking creds.  Mostly they have been crippled by
"user-friendly" systems that actually amount to guards in the psych
ward, and by lack of source access.  Mere exposure to source is a
powerful catalyst.

And of course, if the shortfall is the result of free riding by
people who could fix bugs but have no incentive to do so---which will
be exclusively a phenomenon of people attracted by the features;
people there for the hack value will of course hack---then the
"Mentat Computation" is no longer so implausible.

I don't know the answers, but I hope I've managed to completely
muddy the water.  ;-)

University of Tsukuba                Tennodai 1-1-1 Tsukuba 305-8573 JAPAN
Institute of Policy and Planning Sciences       Tel/fax: +81 (298) 53-5091
What are those two straight lines for?  "Free software rules."