Subject: Re: Open letter to those who believe in a right to free software
Date: Tue, 26 Oct 1999 11:59:13 -0400

> >>>>> "Ben" == Ben Tilly <> writes:
>     Ben> And their success in modelling is measured by...other
>     Ben> economists.
> Yup.  I'm not happy about that, but ... intermediate micro is _not_ a
> popular course among those who don't plan to become economists.
Neither is topology among people who don't plan to become
mathematicians... (grumble, grumble, *college students*...)

>     Ben> OK, cheap shot.  But still.  [...]

I do apologize.  It was a really cheap shot.

>     Ben> It is the considered opinion of many experts in software
>     Ben> development,
>     >> Etc, etc.
>     Ben> ie You know better.
> Not at all.  Most lists I'm on, it's not considered polite to quote
> slabs of text that are not relevant to the discussion.  I simply don't
> contest any of the facts that you presented.  The facts are relevant,
> but quoting them is not.
The bane of email. :-(

I read that as being close to, "etc, etc, yada yada, shut up
already" when you actually meant something closer to what
I mean by "[...]" meaning "be aware that stuff was omitted to
save space..."

>     Ben> The usual assumptions appear to be somewhat distorted by the
>     Ben> fact that:
> Please, you obviously don't know what "the usual assumptions" are.
> The characteristics you mention are important, and I'm happy to respond
> to them.  But "the usual assumptions" are rather weak; talking about
> them being "distorted" is not very useful.  For technical simplicity,
> something like monotonicity and continuity of the cost functions for
> distributing software, monotonicity, continuity, and convexity of the
> cost function for developing software, and downward-sloping and
> continuous demand functions for the various products is useful.
> "Something like" because the exact assumptions needed are going to
> depend on the exact specification of the model.
I can argue against every one of these except the continuity,
and in the real world the difference between a nasty continuous
function and a discontinuous one is not worth arguing about.
Besides which, at heart I am a constructivist and so I am at
least somewhat dubious about the existence of non-continuous
functions! :-)

And on what grounds can I argue against them?

The cost of distributing free software realistically is largely the
cost of getting people to be aware of and trained in it.  As a
piece of software reaches wider distribution it becomes easier
to tell people about it, and easier for them to learn it.  However
as a piece of software grows it can develop feeping creaturitis,
making it harder to learn.  (mumble, *various window-managers*...)

The cost function for developing software is complex.  Sure, as
you get more interfaces you get increasing costs.  (As Brooks
points out.)  However real projects periodically recognize their
internal problems and go through internal clean-up.  Plus most
significant projects eventually develop reasonable interfaces
through which additional functionality can be added in a pretty
good way.  When that happens the development cost does
the unexpected and drops.

For instance the cost of developing a major Perl project dropped
moving from Perl 4 to Perl 5, and has realistically continued to
drop since then.  As you mention, network effects are a major
factor in software...

> So the results I'm talking about are on the order of generality of
> "the minimum of a strictly convex function on a convex region is
> unique."
Well, if you want to rely on false results, then GIGO.

(OK, OK, so I am quibbling when I point out that a convex region
that is not compact need not have a minimum. :-P )
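To spell out the quibble with a concrete example of my own invention: a strictly convex function on a convex but non-compact region can fail to attain its infimum at all.

```latex
% f(x) = e^x on the convex, non-compact region \mathbb{R}:
f''(x) = e^x > 0 \quad\text{(strictly convex)}, \qquad
\inf_{x \in \mathbb{R}} e^x = 0 \quad\text{is never attained.}
```

Compactness (or some growth condition) restores existence; strict convexity only buys you uniqueness *when* a minimum exists.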

> Lock-in may or may not cause problems with the demand functions.  This
> will depend on exactly how lock-in is specified in the demand
> functions.  But not all software creates lock-in problems.
Possibly not all, I grant you that.  I am having a hard time coming
up with a good example though...

>     Ben>  - Vendor lock-in means that each software product is to some
>     Ben> extent a natural monopoly (lawyers originally were encouraged
> No.  This is a misuse of the term "natural monopoly."
> In fact, it's a misuse of the term "monopoly".  A market which is ex
> ante perfectly competitive can still have technical characteristics
> which result in full lock-in.  In this case, the buyer can force the
> seller to absorb all of the costs of lock-in, except for any
> transactions costs due to playing one seller off against another.

I claim that there are monopolies today in areas like:

 - client operating systems
 - spreadsheets
 - word processors
 - client-side languages
 - personal financial planning software

There would be one in browsers except that Microsoft went to
extreme measures.

The Palm system is trying hard to be one in the PDA arena.

I claim that general factors about the software market cause the
emergence, again and again, of such monopolies.  I further
claim that, theory notwithstanding, in the market the seller is in
fact forcing buyers to absorb the costs of lock-in, and not the
other way around.  Should the resulting monopolies not be
called natural?

>     Ben> to put Word Perfect to full use, they are still locked into
>     Ben> it despite years of pressure from clients to switch - other
>     Ben> examples exist if you are interested)
> Lock-in effects can be analyzed separately from the production side.
> They're well understood, have been since at latest the late 70s.
> Network externalities have only been carefully studied since the early
> 90s.  I don't know whether they are more important than lock-in as
> such.
Both are significant in software.  Isn't it true that software is a major
motivator of studying network externalities?  In any case the
example of lawyers using Word Perfect is an example where the
one factor is pitted against the other.  So far lock-in has won in this
case.

> I don't see how either makes the software market unique.  They do
> imply a first-mover advantage.  From a strategic point of view the
> individual product market is "tippy" (rivalry generally ends with one
> firm winning a monopoly position, not in a stable duopoly or
> oligopoly).  But this is more or less true of any natural monopoly
> (defined as a firm whose production technology exhibits global
> increasing returns to scale).
There *YOU* are using the phrase "natural monopoly"!
AFAICS I was using it to mean the same thing that you are.

>     Ben>  - The bulk of development costs lies in testing and
>     Ben> debugging, both of which are readily distributed
> Ie, their costs are convex, in fact linear, in "size," if I understand
> what you mean by distributed.  Except that I don't think it's so easy
> to "distribute" testing of a complex system.  Unit test, yes; system
> test, no.

Partial disagreement here.  What I mean by readily distributed
is that it is possible to extract considerable parallelism in the
testing.  The overall effort may be substantially increased, but
the time to test plus the individual effort required from any one
participant is decreased.

And yes, I would claim that this is true for many systems as well.
In the real world you get problems on strange combinations.  To
find those at some point you need to simply try a lot of different
combinations and see what breaks.
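A toy sketch of what I mean by extracting parallelism in testing (the names and scheme are hypothetical, not any real project's harness): enumerate the configuration combinations, then deal them out round-robin so that each participant only runs a slice, while the slices together still cover everything.

```python
from itertools import product

def all_combinations(options):
    """Enumerate every configuration combination to be tested."""
    keys = sorted(options)
    for values in product(*(options[k] for k in keys)):
        yield dict(zip(keys, values))

def my_slice(options, participant, n_participants):
    """Round-robin assignment: participant i runs every n-th combination."""
    for i, combo in enumerate(all_combinations(options)):
        if i % n_participants == participant:
            yield combo

# Invented example configuration space: 2 * 2 * 2 = 8 combinations.
options = {"os": ["linux", "bsd"], "compiler": ["gcc", "cc"], "threads": [1, 8]}

# With 4 participants each runs only 2 of the 8 combinations, so the
# time and individual effort drop even though total effort does not.
mine = list(my_slice(options, participant=0, n_participants=4))
```

The design point is exactly the trade-off above: overall effort is not reduced (all 8 combinations still get run, plus some coordination overhead), but no single participant bears more than a fraction of it.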

>      That makes development costs convex, ie, marginal costs
> increase with system size.  This is the main thing I need; it
> basically implies that there are going to be a lot of developers out
> there, because the world is not going to merge into a single enormous
> "do everything" program (notwithstanding Microsoft's attempt to
> integrate the operating system into the browser---or was that vice
> versa?---there's still that 95% of software developed for internal use
> etc).
Here is an interesting question for you.  Suppose that marginal
total costs are increasing, but the marginal costs/participant
are decreasing.  Does this economically behave like you would
normally expect a convex cost-model to behave?  If all
participants are part of a single economic entity, clearly yes.  But
if they are not?
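A toy example of the wedge I have in mind, with numbers invented purely for illustration: give the project a fixed coordination cost plus a convex variable cost. Marginal total cost then rises with every added participant, yet for a while each participant's share of the total keeps falling.

```python
def total_cost(n, fixed=100.0, unit=1.0):
    """Total effort with n participants: fixed overhead plus a
    convex (quadratic) coordination term.  Numbers are illustrative."""
    return fixed + unit * n * n

def marginal_cost(n):
    """Extra total cost incurred by adding the n-th participant."""
    return total_cost(n) - total_cost(n - 1)

def per_participant(n):
    """Each participant's equal share of the total effort."""
    return total_cost(n) / n

# marginal_cost(n) = 2n - 1: strictly increasing (convex total cost),
# while per_participant(n) = 100/n + n falls until n = 10.
```

The point of the example: a single firm paying `total_cost` sees a textbook convex technology, but independent volunteers who each experience only `per_participant` see recruitment getting cheaper and cheaper over the same range, so the two settings need not behave alike.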

In a private discussion of ours, the answer to this question could
well provide a reason that a company might not want to
internally simulate open-source development methods.  OK, I
am not claiming that it is a convincing one, but it is a non-obvious
one.

A concrete example to consider is Perl's test suites.  A
substantial amount of Perl, including most good modules, comes
with test suites.  If something obscure breaks on your system, there
is an excellent chance that the breakage was found and identified
on installation, and your standard perlbug report likely contains the
information that developers will need to debug what happened.
This means that a lot of people sit through a lot of tests, but fixing
things becomes far, far easier!

However one thing is much more sharply true of free
software than of proprietary software.  Poor modularity in your
interfaces much more rapidly becomes a barrier for testing and
development.  As a result free software has a strong immediate
incentive to keep things modular and well-defined.  Is this a
benefit or a disadvantage?  I have seen both sides argued...

>     Ben>  - Software quality is frequently a market externality,
> I don't know what you mean by "market externality."  Please define it.
Poor choice of words.  I mean that the quality of the software is
not something that (for whatever reasons) has a significant
effect on what happens in the market.

>     Ben> indeed the developer *benefits* from a certain level of bugs
>     Ben> - it gives consumers reasons to upgrade!
> Urban legend, IMO.  Optimizing the level of bugs is going to be
> expensive; I can't imagine it would be done.  "Planned obsolescence"
> has been around for a long time.  There are few documented cases.  If
> consumers get the idea this is taking place, they get very angry (as
> they do when locked in).  Unless the consumer is also locked in, this
> is not very conducive to the health of your future income statement.
I hope you are right!  BTW have you looked at Microsoft's
negativity ratings recently?

> Most economists tend to interpret what looks like "planned
> obsolescence" as "developer incompetence."  It's much better to get
> the customer locked in to features that work, rather than give them
> excuses to switch.  I don't say planned obsolescence doesn't happen,
> but without quantitative evidence I would think it's unusual in any
> industry.
Tell me why I cannot buy a slant 6 engine today then.  (This was
an engine sold briefly by Dodge in the 70's that was unbelievably
reliable.  I know people who still swear by it...)

> This is not to say that there isn't a tradeoff between quality and
> cost; there is, and as users become well-informed about TCO it may
> very well behoove the company to raise price for the product in return
> for higher quality and lower aftermarket service costs to the users.
> However, there's no need to attribute to vendor malice what can be
> fully explained by customer ignorance.
Agreed that customers are ignorant of computers.  However I
suspect that rapid change is in the nature of software for the
near future.  Rapid change guarantees ignorance in most of
your consumers...

>     Ben>  - A significant portion of the value to a consumer lies not
>     Ben> in the product, but in guarantees of future support and
>     Ben> development
> From the economic standpoint this is just a component of lock-in, I
> think.
Agreed.  If you were not locked in to the solution, then guarantees
of future support would not matter - you could just switch later.  But
switching is hard, consumers know that, and so said guarantees
are very important to consumers.

>     Ben>  - The incremental cost of distribution is approximately zero
> The size of this cost is irrelevant as long as it's constant.  (To the
> qualitative results.)
For a wide range of prices I would agree.  But I think that
approximations which make price irrelevant as long as it is
constant break down at 0 because then alternate economic
models become viable.  I think that you cannot ignore the shift
when you move from having a vendor selling a product to a
company like LinuxCare which offers you support on a range of
products from a range of sources.

>     Ben> Please don't just wave a hand and say that you are an expert
>     Ben> economist while I am not.  If you are aware of another
>     Ben> product with even half of these characteristics, please tell
>     Ben> us.
> Not my job.  That's like if I asked you for an estimate on a program
> which requires a one-line Emacs-keystroke using editing buffer, the
> ability to fetch URLs, and the ability to produce postscript output,
> and then told you I wouldn't believe you until you produced an example
> of a similar program.
I don't think that's a good comparison...

I claim that these factors result in unusual economic properties
for software, and should not be ignored in an analysis.  If you
explain why they are irrelevant to an analysis, that should also
be sufficient.  I believe that the shift in economic models followed
by the supplier is likely to break hidden assumptions that
economists make...

>     Ben> And also please explain why these factors are irrelevant.
> That's your problem, I didn't say they were.  If you think they are,
> be my guest and explain.
Huh?  I am claiming that they are relevant, why should I try
explaining why they are irrelevant?

>     Ben> A basic research question that I recommend to you.  To what
>     Ben> extent is a proprietary program a natural monopoly?  In
> 100%.  Fixed cost is positive, marginal cost is low and as near
> constant as anything ever could be.  This applies to ALL software.
> You never took an intermediate microeconomics course, I take it.
It was somewhat of a rhetorical question.  I am claiming that
software lends itself strongly to natural monopolies, this fact is
being recognized more and more widely, and the resulting
dynamics are not to be underestimated.  As you know better
than I, monopolies introduce inefficiencies.  But when the
"monopoly" is held by a free software product, which allows
for the product to be supplied by "an internal free market" within
the bounds of what would otherwise be a "monopoly market",
then you get an opportunity for more efficient competition.
(Quotes used because yes, I know I am mutilating, mangling,
and otherwise seriously abusing those terms.)

A major claim of mine is that as a consequence the choice
between a proprietary product and a free software product is
an implicit choice between whether your needs will be provided
by a monopoly with all that entails, or a free market.  If that is
indeed the decision, then this has significant consequences for
the relative economic efficiency of different software
development models.

Note that in several areas of computing, such as mini-computers
and workstations, the market has come to see the choice
between competing technologies in those terms.  Once
that shift happens - as I see it happening in many areas of
software today - the result is guaranteed.  There is a
reason that I am typing on a PC today...

>     Ben> particular what are the real and perceived barriers in the
>     Ben> market to switching products?
> I don't care.  To an economist, perceived barriers are real barriers,
> and vice versa.  A real barrier which isn't perceived has a tendency
> to supply the dreamer with a rude awakening.  Perceived barriers
> which aren't real have a habit of evaporating, to the great profit of
> the lucky sleeper who wakes first.  No guarantees, but close enough
> for government work.
In the long-run at equilibrium, yes.  However a substantial part of
the dynamics of software depends upon what the current
*dis-equilibrium* is.

Transient effects are hard to model, granted.  However that does
not stop them from being incredibly important in a rapidly
changing industry...

PS to FSB:  Is this conversation of general interest?  Should we
take this one to private email, or are people following this?