Subject: Re: *precisely* NOT a commons (with tragedy)
From: "Karsten M. Self" <>
Date: Fri, 4 Jan 2002 18:11:52 -0800
on Fri, Jan 04, 2002 at 03:44:56AM -0800, Tom Lord wrote:
> Karsten opines:
> > Tom wrote:
> > > The bits in a distribution are not rivalrous.  The "commons" is the
> > > engineering infrastructure which produces, maintains, and deploys
> > > the software, not the software itself.
> > 
> > You're not being clear.  This engineering infrastructure isn't a
> > commons.  An arbitrary user can't lay claim to my own resources.  An
> > employer or client can hire my time.  I can volunteer it.  
> Your comments reflect a deep misunderstanding.

...a reasonable outcome of lack of clarity.

> The time or labor power of engineers is not the engineering
> infrastructure.

Addressed, but my $0.02.

> A software engineering infrastructure is a mixture of processes,
> information repositories, and information flows.[1]  It requires an
> investment of engineer-hours to sustain or improve an engineering
> infrastructure, but those engineer-hours are not the infrastructure
> itself.

> A software engineering infrastructure is a consumable resource.  The
> ways in which it is consumed are numerous and subtle, but to choose
> just one easy-to-understand example: the bandwidth for patches
> (contributed proposed changes) to a particular project is finite;
> scheduling the allocation of that bandwidth has to look beyond simple
> fairness-of-access or merit-of-proposed-changes to consider factors
> such as avoiding forking a project.  

This is not consumption (exhaustion).  This is rivalrous use.  The
resource itself isn't exhausted, but the current supply is tapped.  Stop
the consumption, and the supply resumes.

> Therefore, a busy contributor with the resources to fork a project can
> consume an unreasonable amount of that bandwidth, excluding or slowing
> down the processing of contributions which may very well be important
> to the long term health of the project.

This speaks to project organization.  I'll point to the Linux kernel as
an example.

There's a saying in the kernel development community:  "Linus doesn't
scale" (this was part of the justification behind Larry McVoy's
BitKeeper source control system).  Growth of the Linux kernel codebase
is fairly well documented[1], yet the project remains headed by Linus, with
no formal version control system.

This works because of some fundamental project management procedures:

  - Linus drops packets.  The protocol for Linus dealing with email
    overload is to delete mail.  The protocol for getting a patch
    approved is to re-send it if it's not acknowledged.  A similar
    tactic has worked well for Ethernet.  It's simple, it's not
    perfect, but it's satisficing, and effective over a wide range of
    loads.

  - The kernel is highly modularized.  The size of the core kernel has
    remained relatively stable; what's increased most is the number of
    platforms and devices supported.  Functionality is largely isolated
    within modules.

  - Kernel development is highly modularized.  By his own admission,
    Linus has very little involvement with networking code.

  - Kernel developers are highly structured.  There's a "ring of
    lieutenants" around Linus who field specific focus areas -- Alan Cox
    for networking and odd patches; Donald Becker for network card
    drivers; Ted Ts'o and Hans Reiser for filesystems; etc.
    Contributions within an area are passed up through channels, fielded
    by area experts, and are finally assimilated by Linus (or Alan).
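The "drop packets, re-send" convention above can be sketched as a simple
retry loop.  This is only an illustration: the function name, the drop
probability, and the retry cap are hypothetical, not any actual kernel
tooling.

```python
import random

# Sketch of the "Linus drops packets" convention: under overload the
# maintainer silently deletes mail, and the contributor's protocol is
# simply to re-send an unacknowledged patch -- much as Ethernet
# retransmits after a collision.  All names and numbers are hypothetical.

def submit_patch(drop_probability, max_resends=10, rng=random.random):
    """Re-send a patch until it's acknowledged; give up after max_resends.

    Returns the attempt number on which the patch was acknowledged,
    or None if every send was dropped.
    """
    for attempt in range(1, max_resends + 1):
        if rng() >= drop_probability:   # maintainer actually read this one
            return attempt              # acknowledged
        # silence means the mail was dropped: wait a while, then re-send
    return None

if __name__ == "__main__":
    # Even with half of all mail dropped, a patient contributor
    # almost always gets through within a handful of re-sends.
    print(submit_patch(drop_probability=0.5))
```

The point of the sketch is the satisficing property: no acknowledgment
channel or queue is needed, and persistence alone is enough to get a
patch through under a wide range of loads.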

A disruptive developer would generally tend to disrupt only a certain
portion of kernel development, and would likely find themselves walled
off -- their packets would simply stream out of the pipe.

If the project has merit, the putative defector now faces two problems:

  - Attracting interest (and trust) in the process.

  - Dealing with his/her own set of patches.  Old saw about activity:
    for any activity, there is another activity which competes with it,
    attempting to keep it from being accomplished.  Since the competing
    activity is itself trying to do something, and is subject to the
    same interference, some things actually get done.

> A software engineering infrastructure is a renewable resource.

This contradicts your prior statement.  Your use of these terms doesn't
correspond to convention.


> Because it is a consumable, renewable resource which we hold in
> common, the public software engineering infrastructure is *precisely*
> a commons.

This is an inaccurate specification, generally conflicting with
conventional usage of:

  - Consumable resource.
  - Renewable resource.
  - A commons.
  - The tragedy of the commons.

...all adequately addressed in my own and others' comments.  If Tom
wants to give us all a glory, that's fine, but I'm not going looking for
Humpty Dumpty to translate it all for me.

> The long-standing claim made by FSBs is that the mechanisms which
> wall-off the proprietary infrastructures create both significant
> inefficiencies and significant risks to customers.  


> In support of "inefficiencies", they point out that with a public
> infrastructure, there are more chances to spot and act upon
> opportunities to improve programs.  In support of "risks", they point
> out that customers of the public infrastructure can inspect their
> goods and avoid lock-in.

Yes, yes.

> Unfortunately, the predominant FSB practice has been to exploit the
> public infrastructure without making a correspondingly large
> investment in its renewal.  They make some investment, but not enough.

See my response to Dwyer and the description of my ideal FSB as a
library.  You're assuming the FSB has to do the bulk of development.

I'm willing to posit that, by economic analysis, an FSB model will
result in underprovision of software development resources.  I believe,
however, that this is offset by several factors.  Dwyer has touched
(inadequately IMO) on this in his prior draft study on this topic;
there's also the Lerner and Tirole paper, which has a somewhat more
satisfactory treatment[2]:

  - Free software development avoids significant inefficiencies
    introduced by proprietary software development.

  - Alternative economic incentives, and the enabling nature of free
    software (outside contributors/contributions are enabled by the FS
    "bazaar" development process), provide for a larger development pool.

  - Multiple FSBs can work on the same projects.  Viz:  Apache,
    OpenOffice, Mozilla, Emacs, XEmacs.

  - Free software tends to compete on technical merits, rather than
    marketing prerogatives.  Economic incentives to depart from
    technically superior solutions are less prominent.

The result is one system with a tendency to underprovide competing
against another with a tendency to work inefficiently.  You have two
non-optimal systems in competition.  The least worst wins (or an
alternative emerges).


> Proprietary companies charge much higher prices for more or less the
> same kinds of products, obtaining the higher prices via a combination
> of licensing and software license management.  

They do where they can.  Where they cannot, they sell for comparable
prices, typically near zero.  I'll spend ten or twenty bucks
occasionally for a GNU/Linux distro -- burned to CD and mailed to me,
but not much more -- this is about the price cap, and you'll probably
find the bulk of software at a Fry's, Office Depot, or similar retailer
is in the same range.  This needn't be fully voluntary -- this is a
market.  Remember that even a monopolist can't set both price and
quantity -- it can choose one, and the market sets the other (by
contrast, a competitive business can only set quantity; the market sets
price).
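The price/quantity point can be made concrete with a toy model.  This is
purely illustrative: it assumes a linear demand curve P = a - b*Q and a
constant marginal cost c, and all the numbers are hypothetical.

```python
# Toy illustration of "a monopolist can choose price OR quantity, not
# both": once the monopolist picks a quantity, the (assumed) demand
# curve P = a - b*Q dictates the price, and vice versa.  A competitive
# firm can't even do that much on price -- it produces until price
# equals marginal cost.  All parameter values below are hypothetical.

def monopoly_outcome(a, b, c):
    """Profit-maximizing quantity, and the price the market then sets."""
    q = (a - c) / (2 * b)   # from maximizing profit (a - b*q - c) * q
    p = a - b * q           # the demand curve sets the price
    return q, p

def competitive_outcome(a, b, c):
    """Price-takers expand output until price equals marginal cost c."""
    p = c
    q = (a - c) / b
    return q, p

if __name__ == "__main__":
    a, b, c = 100.0, 1.0, 20.0          # assumed demand and cost parameters
    print(monopoly_outcome(a, b, c))     # (40.0, 60.0): low output, premium price
    print(competitive_outcome(a, b, c))  # (80.0, 20.0): price driven to cost
```

The contrast mirrors the argument above: the monopolist's premium only
survives while no gratis or libre equivalent shifts the effective demand
curve out from under it.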


  - MSIE is distributed for free, as are most of its competitors
    (Mozilla, Netscape, Galeon, Konqueror).  Opera is sold at a low fee.  

  - Asian and Eastern European markets are notorious for high levels of
    unauthorized software sales.  There are two possible responses:
    selling authorized software at lower cost, or conceding the market.

  - There are documented instances of Microsoft's OS prices varying by
    market, particularly being higher in France and Canada than in
    neighboring countries.

  - There's evidence that GNU/Linux has cost Microsoft from $2 billion
    to $11 billion in sales[3].  GNU/Linux pricing (from free download
    to supported, box-set sales of up to several hundred dollars)
    serves to provide a floor to pure-play software pricing.  Free
    software doesn't have as pronounced an advantage on the services
    side of revenues, though I'd say it benefits here as well.

The trend is as I showed it previously:  companies with software
products for which they have significant lock-in or other leverage
advantages can charge a premium for product.  They do so at the risk of
cannibalizing new sales (look to the RDBMS market for the next free
software success story, though played out more slowly, as enterprise
system migration exerts significant friction), and only so long as there
is no equivalent gratis or libre product.  Where competition from gratis
or libre equivalents exists the proprietary software market price is at
or near zero.  This is particularly true of consumer software.

Moreover, converting proprietary technology to free software has become
an attractive option to some companies faced with a competitor with a
proprietary product and significant market leverage (Netscape: Mozilla,
Sun:  StarOffice), and has been strongly encouraged for other products
(HP:  OpenMail).

> FSBs have started small 

True[4].  But utterly irrelevant.

I think it would be useful for Tom to define just what "FSB" means to
him.  To me, it's any company with a significant involvement (either
relative to itself or to the free software community) in free software
development, distribution, or use.  By this metric, IBM,
Sun, HP, AOL, the University of California, MIT, the US Government, and
Cisco are all FSBs.  None of them are pure-play FSBs.  Each entered the
FSB arena when it was well established.  Tom and I appear to be
inhabiting different worlds.

> they graze heavily.

Tom seems to find this objectionable.  I don't.  The reasons should be
clear, I'll expand on request.

> Given a shaky looking public software engineering infrastructure, 

Not on my planet.

> and relatively small FSBs, 

Not on my planet.

> its pretty hard to make the case to customers that you now want to
> start charging prices comparable to what a MS would get.  

I've already explained how and why Microsoft is the wrong model to
compare against.

I'd expect to charge prices comparable to what Sun, HP, IBM, Cisco,
Price Waterhouse, or a smaller independent consulting shop with a
technical or geographic focus, might hope to get.



1.  Grossly:  from about 40 KLoC in 1993 to 18 MLoC in 2000, nearly
    three orders of magnitude.
    Cf:  Linux Benchmark Data:
    Mike Godfrey, "Toward an Understanding of Software Evolution"

2.  Josh Lerner, Jean Tirole, "The Simple Economics of Open Source",
    NBER Working Paper No. 7600, issued March 2000.

3.  Robert X. Cringely, July 15, 1999,
    "Linux is already taking what would have been at least $2 billion in
    annual sales away from legacy MS Windows NT. Microsoft spokespeople
    have said as much, citing GNU/Linux as a major threat to NT."

    Stephen Shankland, "GNU/Linux growth underscores threat to
    Microsoft", CNet,  February 28, 2001.
    "Microsoft's Windows held 41 percent of the server OS market in 2000,
    up from 38 percent in 1999....  Linux grabbed 27 percent market
    share in 2000, up from 25 percent the previous year."

    Other estimates put the market value at $18b.  The $11b revenue loss
    seems plausible, though I can't document it directly.

4.  With very, very few exceptions, all companies start small.

Karsten M. Self <>
 What part of "Gestalt" don't you understand?
   Home of the brave
   Land of the free
We freed Dmitry! Boycott Adobe! Repeal the DMCA!
Geek for Hire            
