Subject: Re: A new(?) account for why free generates quality
From: Craig Burley <burley@gnu.org>
Date: Thu, 23 Apr 1998 16:25:08 -0400 (EDT)

>If this were true, I would expect free software companies doing
>complex stuff to experience lower *variance* in engineer productivity, 
>and therefore better schedulability.  Does anyone have any data that
>would support or refute this?

I don't offhand, but you're on to something there.

To *really* get a "rush" from future-thinking about the
viability of software always accompanied by source code
(in effect, anyway, GPL'ed software, or some very close
approximation thereof), consider what might happen when:

  -  People can hire themselves out for "momentary" work
     with very little transaction cost; e.g. work as small
     as "reproduce a bug", or "debug a problem", or "proofread
     and edit a chapter in a document"

  -  The mechanisms for these transactions, and the work
     itself, include software and Internet-like
     communications services expressly designed to
     facilitate rapid, widespread, source-code development,
     interchange, testing, and so on (thus, it matters
     little where people live vs. what they work on)

  -  People can "reward" others, who do good work, in the
     form of easily-transferred, validated donations, which
     in turn are close to being tax-free (e.g. I get these
     kinds of rewards right now, but my "helpers" don't,
     and it's kind of tedious for others to go to the
     trouble to send me money)

If you exclude "software authoring" from the list of possible
activities, I think it's still safe to say that the above
represents a likely trend in the global community for activities
such as music-making, charitable organizing, and so on.

Now, include "software authoring" in the kind of fast-paced,
do-just-what-you-want-that-moment-and-get-suitably-rewarded-
for-it world that should result.

Personally, I find it hard to see just how *proprietary*
companies could compete, since they depend so completely on
keeping their source code *hidden* from the public, and thus
from a lot of people who could, potentially, help improve it
at lower cost.

Note that public beta testing of things like Microsoft products
comes somewhat close to the overall model, at least for now,
and, while I don't claim it guarantees perfect Windows releases,
I *do* claim that even Microsoft knows it cannot hire all
the best testers in the world.  (Some people still think
they can hire all the best *programmers*, though, somehow.)
So they resort to more public testing than they would if
they had their druthers.

There's a lot of "stuff" that has to happen, however, to
realize the kind of vision in which proprietary companies are
at an actual disadvantage.  It's a huge task.  The question I
have is, at what point does the task itself become "done"
enough to be undertaken by the very community the final
product is designed to serve -- the global Internet community?
It's not unlike the classic question, "at what point will I
have implemented enough of my new computer language to make the
rest of the job so easy that the initial costs become worth
incurring?", a classic for compiler people at least.  I think it
requires so much planning and thought beforehand (for example, I
want to include native interfaces for the blind, for people
speaking languages other than English, and so on) that it's not
wise to undertake it in the same way the FSF got started.

Also, I think a big question is to what extent large *planned*
projects are viable using the "new model" exclusively.  I see
ways it could happen, but I don't think it will necessarily
happen "automatically".  But I also think the same question
applies to reservation of global resources to do things like
"broadcast" an event; the Internet currently is not well-suited
to that in the same way, say, broadcast/cable TV and even the
phone system are.  At the other extreme, no project MS
undertakes ever requires its planners to first assess whether
they have enough air in the atmosphere or water in the Seattle
area to complete it, because they know that, if they start
going short here, they can find more *there*.  To the extent
the global Internet community in the early 21st century begins
to resemble a pool of fairly liquid, moving, marketable
resources, future planning of large software projects might
be doable with a similar degree of inattention to just where
and when the specific R&D resources will appear.

(Personally, I think the result will be that *architects*
will become the people companies hire for long-term work.
Once the product is well-architected, it can be fleshed out
by quality workers who are nearly a commodity.)

The reason I consider this so much of a "rush" to think about
is that I still feel I'm just barely beginning to appreciate
the potential of free, GPL-style software, having been so
entrenched in the world of proprietary, licensed software for
so long.  I used to propose ideas to friends about Mac-like
machines with *built-in* "dongle" ports around the monitor and
keyboard, allowing vendors to put nice icons on their dongles,
which users would plug into an available port to enable use
of the software!  Sure, that'd be much more elegant than a
special doo-dad on a serial or parallel cable, less obnoxious
than copy-protection, and better at advertising "this system
uses FooMaker Plus" to passersby, but, man...what a collective
*waste* of resources that would be compared to the free-software
world we're now beginning to envision!

        tq vm, (burley)