Subject: Re: rocket science
From: "Stephen J. Turnbull" <stephen@xemacs.org>
Date: Mon, 28 Feb 2005 17:19:29 +0900

>>>>> "g" == DV Henkel-Wallace <gumby@henkel-wallace.org> writes:

    g> Commercial software, generally, balances out the optimal
    g> spending on feature development, QA, etc. (let's ignore
    g> transients at the extrema -- startups and monopoly providers).

Aside: monopolists will not be "socially optimal", but they optimize
in response to the same set of incentives.  Startups could be handled
by a Darwinian model and a proper definition of the product.

    g> The question is: I would not be surprised (not on any evidence,
    g> just off the top of my head) if Free Software Businesses handle
    g> these risks poorly, since they don't get valid economic signals
    g> about this.

I think they can get the signals, the same as under a proprietary
model.  I have argued in the past on this list that FLOSS will handle
those risks (and feature choice in general) poorly in development for
just that reason (a bare project gets no such economic signals), but
that was in response to folks who argue from a project-focused point
of view that "such-and-such software is great and the VCs just never
seem to get it!"  Once you wrap a working business model around it,
one that includes warranties, service contracts, and the like to
serve the _customers' perceived_ interests, the incentives to produce
reliability look more conventional again.

    g> The usual model of commercial investment in free software is
    g> based on incrementalism (companies invest at the margins on the
    g> features that matter to them), which doesn't work in this case,
    g> since the security or reliability decision has to be made on
    g> the entire system and not just on the new features being
    g> developed.

So?  NASA doesn't care; it is paying first-copy development costs.
The threat to free software in this context is the same one as always:
NASA wants to reduce its explicit payout, so it will arrange the
contract so that the development organization can exploit the IP and
thus reduce its charges to NASA.  If the software does not have
obvious commercial value, then the IP amounts to a not-very-valuable
option on future "closed" commercialization, and NASA might be able
to buy it and free it quite cheaply.

Or if the software were required to become free as a matter of law, I
don't see any reason why proprietary vendors would have an advantage
(except insofar as they can afford to hire more and better developers,
and see cross-subsidy, that is, underbidding on the high-reliability
project, as useful for their marketing communications effort).

Once you've got the reliable free core in place, then reliable free
extensions are going to arise according to your "usual model".

    g> {GNU/,}Linux was lucky in that it had some serious interest in
    g> reliability back when the code base was small and that focus
    g> has generally remained as it has grown.

I was never under the impression that this was an accident.  There
are a number of FLOSS organizations with a reliability focus, and all
of them produce hacker tools (I'm including systems integrators, such
as cutting-edge web admins, as hackers here).  I think a lot of this
has to do with the importance of reliability to the "customers" (i.e.,
the hackers themselves), and with the accuracy of the specifications.

Specs are much more accurate than for desktop software: desktop users
typically have very little idea of what is possible or how to describe
what they want.  If the developers don't have a good idea of what the
spec means, it's hard to imagine their being able to implement it
reliably.  But when you're talking to yourself, you understand what
you mean no matter how insane it actually is.  In the popular
metaphor, it really helps to be eating your own dog food.

    g> I can't say that's the same, for example for Mozilla (not to
    g> pick on them, just an example).

Isn't this a crucial contrast, especially in view of Tom's
"rockets vs. desktop" comparison?

    g> And Apache went the other way from super-flaky to usably
    g> reliable and has gotten better ever since,

Well, NCSA httpd was super-flaky; "a patchy httpd" was developed in
response to that.  The Apache developers have always acted as though
the reliability of the core, plus enough modularity to let other
people take responsibility for their third-party hacks, were the
butter on their bread.  I can't draw you a circuit diagram, but again
I think this is no accident.

    g> but I don't think it's yet as reliable as, say, Oracle and I
    g> can't imagine anybody knows if it will get there (or even if it
    g> should).

But until you've answered the parenthetical question, the first two
questions are moot, no?  Shouldn't the questions be "is it as reliable
as IIS et al., and will you even notice unreliability in the httpd
given the crap that it's serving?"

-- 
Institute of Policy and Planning Sciences     http://turnbull.sk.tsukuba.ac.jp
University of Tsukuba                    Tennodai 1-1-1 Tsukuba 305-8573 JAPAN
               Ask not how you can "do" free software business;
              ask what your business can "do for" free software.