Subject: Re: Open letter to those who believe in a right to free software
From: Ben_Tilly@trepp.com
Date: Wed, 27 Oct 1999 11:25:25 -0400


Should we take this thread to private email, or are people
still following it?  If I get no response I will assume that
nobody but Stephen and me cares about it.  (Note, using
"me" there is grammatically correct and "I" would be
wrong. :-)

Stephen wrote:
> >>>>> "Ben" == Ben Tilly <Ben_Tilly@trepp.com> writes:
[...]
>     Ben> And what grounds can I argue against them?
>
>     Ben> The cost of distributing free software realistically is
>     Ben> largely the cost of getting people to be aware of and trained
>     Ben> in it.  As a piece of software reaches wider distribution it
>     Ben> becomes easier to tell people about it, and easier for them
>     Ben> to learn it.
>
> Network externalities on the consumer side.  I disagree about the
> empirical significance of these externalities; I think they're
> small compared to the personal cost of learning for most people.
>
I think that the personal cost of learning for most people is
strongly affected by network externalities.  This is known as
the "helpful friend" syndrome...

> But I will keep the point in mind.  I suspect you are right that if
> moderately strong it could affect the conclusion.
>
Compare learning Linux on your own.

Compare learning Linux with a friend you can call when the
going gets tough.

Just how strong an effect are we talking about?

>     Ben> The cost function for developing software is complex.  Sure,
>     Ben> as you get more interfaces you get increasing costs.  (As
>     Ben> Brooks points out.)  However real projects periodically
>     Ben> recognize their internal problems and go through internal
>     Ben> clean-up.  Plus most significant projects eventually develop
>     Ben> reasonable interfaces through which additional functionality
>     Ben> can be added in a pretty good way.  When that happens the
>     Ben> development cost does the unexpected and drops.
>
> This is a good point.  However, it is not clear to me that it is
> relevant to economics.  I think that for economic purposes Perl 5 is
> _not_ a bigger Perl 4.  It is a new product.  I think that this model,
> although unnatural for the developers themselves, is accurate and
> would support the result I proposed (a mixed regime is better).
>
Being a Perl programmer, I disagree.  Most Perl 4 code will
run, unchanged, under Perl 5.  (The reverse is not true.)  The
huge difference is that with Perl 5 anyone can write a module
(if need be they can make it an xsub implemented in C) and
effectively extend the language.  In Perl 4 you cannot make
such a change without changing the interpreter.
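
For concreteness, a minimal pure-Perl module might look
something like this (the module name and function are invented
for illustration, not any real module):

    # Greeting.pm -- a hypothetical pure-Perl module
    package Greeting;
    use strict;
    use vars qw(@ISA @EXPORT);
    require Exporter;
    @ISA    = qw(Exporter);   # inherit Exporter's import()
    @EXPORT = qw(greet);      # make greet() available to callers

    # Return a greeting for the named person.
    sub greet {
        my $name = shift;
        return "Hello, $name!";
    }

    1;   # a module must return a true value

Any script can now say "use Greeting;" and call greet() as if it
were built into the language.  Under Perl 4 the only way to add
such a primitive was to patch the interpreter itself.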

A similar example with Linux is the addition of loadable
modules.  With browsers it is plug-ins and Java.  (Hey,
proprietary vendors apply good ideas as well!)  Shared
libraries serve a similar purpose.  Many other equivalents
exist.

>        [...]     But not all software
>     >> creates lock-in problems.
>
>     Ben> Possibly not all, I grant you that.  I am having a hard time
>     Ben> coming up with a good example though...
>
> Anything under GPL.  You can hire anyone who can read code to fix it
> for you.  At that point, the decision to go back to the original
> supplier means they've come up with such an incredibly redeeming
> improvement that it's equivalent to deciding to buy a new product.
>
No, you are still locked into the basic product and architecture.
That no longer means that you are locked into a single supplier,
granted.  But you have no more freedom to switch to the
Next Big Thing than you had before.  (Unless it implements
compatibility with what you have, including any changes you
chose to make...)

[...]
>     Ben> I claim that there are monopolies today in areas like:
>
> [examples snipped]
>
> I don't contest these examples are monopolies.  I am contesting that
> lock-in necessarily implies monopoly in the important market.  It does
> not imply monopoly in the original installation market, and I claim
> that for many software products that is the important one.
>
No, it does not imply monopoly when the product comes out.
But the threatened future control does constrain decisions.
In software purchasing, people pay a lot of attention to
questions about whose products have the most "momentum".
People know that something will win, and there is a lot of
pressure to be on the right "bandwagon".

>     Ben> I claim that general factors about the software market cause
>     Ben> the emergence, again and again, of such monopolies.  I
>
> Yes, but the factors that cause monopoly are economies of scale in
> distribution and network externalities for users.  Not lock-in.
>
OK.  But the factors that contribute to monopoly are present for
software, and the effects of said monopolies are generally
made worse for consumers because of lock-in.

>     Ben> further claim that, theory notwithstanding, in the market the
>     Ben> seller is in fact forcing buyers to absorb the costs of
>     Ben> lock-in, and not the other way around.
>
> A real example of sellers shouldering the switching cost is in the
> long distance telephone market, where vendors offer significant
> bonuses to customers to switch.  In software, many vendors offer
> significant rebates to users of competing products, and discounts to
> students.  Of course, once you've learned to use Word as a student,
> you or your company must bear the switching cost as a legal assistant.
> But it's not like the company invested in handcuffs from which you get
> no benefit.  To many customers who complain about lock-in, I say
> "didn't you listen when your mom told you not to take candy from
> strangers?"  Or, to switch metaphors: overeaters shouldn't complain
> about the cost of the Alka-Seltzer!
>
Re-training is the least of lock-in worries.  I work for a
company which has a significant investment in a rather
complex VB front-end to their data.  (The company's business
depends on its models of a specific type of financial security.)

At the time they did this development they had to choose a
platform.  At this point to switch they have to build a platform
with similar functionality - while maintaining the existing one
because without it the company would quickly fail.  This is
rather greater lock-in than just, "Retrain a couple of people."

Likewise the Word Perfect example that I gave with lawyers
is not a re-training issue.  Today if you walk into a legal firm
with a potential case, before you walk out they already have
several of the legal documents that you need written,
customized, etc. for you because of infrastructure built in
Word Perfect.  Being able to do this is critical to their business.
Sure, they could rebuild the system from scratch in Microsoft
Word, but the investment required is substantial, plus nobody
wants to go through the inevitable mistakes involved in doing
such a thing.

> If the seller can establish a monopoly, the seller can extract a
> greater share of the surplus from the long-term relationship created
> by lock-in.  That awkward wording is intentional; it focuses on two
> things.  First, the buyer is not forced to absorb the costs of
> lock-in; they are deducted from the surplus.  Second, lock-in is a
> _good_ thing from the (static) point of view of production; costs of
> switching are avoided, thus creating the increased surplus.
>
This is all theoretically true; however, it merely supplies a
general theoretical explanation for my point.  In the software
market the supplier frequently does establish a monopoly,
and lock-in contributes to the ability of the supplier to extract
profits from this situation.  To date Microsoft is the most
successful company at this, but it is neither the only company,
nor the first, to attempt this in the software market.  (Was IBM
first to do it on a large scale?  Possibly...)

> This means that customers who buy products which artificially create
> switching costs (eg, proprietary file formats) are locking the
> handcuffs on their own wrists.  Caveat emptor, I have no sympathy.
> But if the firm was a monopoly ex ante, the unbalanced sharing of
> gains from trade should be ascribed to the monopoly, not to the
> lock-in.
>
Proprietary formats are just the vendor helping the process
along.  There are still large factors creating lock-in.  Try to
switch a body of automated processes from Perl to Python
some time.  Even though the Perl processes may be using
non-proprietary files, and the code is available, the switch is
not very easy...

[...]
>     Ben> Should the resulting monopolies not be called natural?
>
> It depends on what you see as their cause.  Natural monopolies in
> economic analysis have a strong connotation of productive efficiency.
> You don't want to put that gloss on proprietary software, do you?
>
Why not?

It *IS* more productive to agree on a single standard and build
on that base.  Were it otherwise, there would be a few fewer
standards out there and people wouldn't care as much about
the ones that exist.

The only additional issue that proprietary products introduce to
this basic observation is that someone will be trying to turn this
into a (hopefully large) ongoing revenue stream from the
consumers.

[...]
>     Ben> Both are significant in software.  Isn't it true that
>     Ben> software is a major motivator of studying network
>     Ben> externalities?  In any case the example of lawyers using Word
>     Ben> Perfect is an example where the one factor is pitted against
>     Ben> the other.  So far lock-in has won in this case.
>
> Yes.  It's worth noting that the lock-in here is generated by a strong
> network externality in a "local" community, opposed by a weaker global
> network externality.
>
Uh, no.  It is generated by the fact that legal firms have
implemented important infrastructure using features of
Word Perfect that would have to be rebuilt from scratch
in Microsoft Word.

[...]
>     Ben> Partial disagreement here.  What I mean by readily
>     Ben> distributed is that it is possible to extract considerable
>     Ben> parallelism in the testing.  The overall effort may be
>     Ben> substantially increased, but the time to test plus the
>     Ben> individual effort required from any one participant is
>     Ben> decreased.
>
> Unless the cost of time-to-market really dominates the cost of
> production, this is convex costs.
>
But does it act like convex costs?

>     Ben> Here is an interesting question for you.  Suppose that
>     Ben> marginal total costs are increasing, but the marginal
>     Ben> costs/participant are decreasing.  Does this economically
>     Ben> behave like you would normally expect a convex cost-model to
>     Ben> behave?  If all participants are part of a single economic
>     Ben> entity, clearly yes.  But if they are not?
>
> No, this doesn't behave like convex costs.  This is precisely why the

Apparently not... :-)

> free software case is so theoretically interesting.  The costs of
> producing proprietary software are (plausibly) convex, whether
> development is centralized or distributed.  The costs of distributing
> it are not.  However, we can plausibly separate the two processes, so
> that your question about the two margins is analytically separable.
> This is basically the economic state of the art, as represented by
> _Information Rules_.
>
I think that in a well-run company the costs of production need
not be convex.  A simple example: a real company that I know
took the C libraries and modified them so that a core-dump, in
the process of dumping core, would capture various
information including the command-line, the location of the
core, and a stack backtrace, and mail all that to a list of
people.  What do you think this did to their development costs
for maintaining their ongoing cron jobs?  (OK, *after*
development stopped for a month and a half. :-)
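
For flavor, here is roughly what the same trick looks like at
the Perl level (the address and sendmail path are my own
invented stand-ins, not what that company actually did):

    # Mail a stack backtrace to the developers whenever a
    # script dies unexpectedly.
    use Carp;

    $SIG{__DIE__} = sub {
        my $msg = Carp::longmess("Fatal error: $_[0]");
        if (open MAIL, "| /usr/sbin/sendmail -t") {
            print MAIL "To: dev-team\@example.com\n";
            print MAIL "Subject: crash in $0\n\n";
            print MAIL $msg;
            close MAIL;
        }
        die $_[0];   # re-raise so the script still fails visibly
    };

Once every cron job carries something like this, the developers
hear about failures before the users do.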

[...]
>     Ben> A concrete example to consider is Perl's test suites.  A
>     Ben> substantial amount of Perl, including most good modules, comes
>     Ben> with test suites.  If something obscure breaks on your
>     Ben> system, there is an excellent chance that the breakage was
>     Ben> found and identified on installation, and your standard
>     Ben> perlbug report likely contains the information that
>     Ben> developers will need to debug what happened.  This means that
>     Ben> a lot of people sit through a lot of tests, but fixing things
>     Ben> becomes far, far easier!
>
> This can, of course, be emulated by proprietary firms, although their
> customers may tend to be more cranky about sitting through the tests.
>
I think that having to fill in a bug report when they *expected*
to do an install might not fly too well either...

> I don't understand the implications for cost; it looks like developing
> test suites should be convex in size.
>
The introduction and acceptance of this technique will reduce
development costs.  Sure, once the improvement is
implemented, the costs again begin climbing (more shallowly
than before), and there is substantial cost in first creating
these suites.  But while you are creating this infrastructure,
the future per-person development costs are dropping.  This is
true both per developer and per user.
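
For anyone who has not seen one, such a test file is nothing
exotic.  A minimal sketch in the old Test::Harness "ok"
protocol, reusing the invented Greeting module from above:

    # t/greet.t -- a sketch of a module's test file
    use strict;
    use Greeting;

    print "1..2\n";    # declare up front how many tests will run

    # Test 1: greet() returns something at all.
    print defined greet("world") ? "ok 1\n" : "not ok 1\n";

    # Test 2: and it returns the expected string.
    my $ok = greet("world") eq "Hello, world!";
    print $ok ? "ok 2\n" : "not ok 2\n";

"make test" runs every such file and reports which tests failed
on the user's machine, which is exactly the information a
perlbug report needs.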

[...]
>     Ben> However one thing is much more sharply true of free software
>     Ben> than proprietary software.  Poor modularity in your
>     Ben> interfaces much more rapidly becomes a barrier for testing
>     Ben> and development.  As a result free software has a strong
>     Ben> immediate incentive to keep things modular and well-defined.
>     Ben> Is this a benefit or a disadvantage?  I have seen both sides
>     Ben> argued...
>
> I can't see how the beneficial part can do anything but dominate in
> the real world, which is dynamic.  I know that Windows NT started as a
> microkernel architecture but today is anything but, for performance
> reasons.  Well, Linux, for all purposes I know of, kicks that
> "anything but".  And Linux is going to be able to "embrace and extend"
> new modules faster.  I expect this advantage to increase as more and
> more stuff gets modularized.
>
I like that argument.  But how do you respond to the assertion
that many features which consumers want require such tight
integration that free software projects will never be able to
tackle projects above a certain complexity?  (Say, any
project on the order of NT. :-)

While this difference hands proprietary software a lot of rope, it
also puts a barrier on what sorts of techniques are available
to it.  *I* think that this is an advantage for free software.  But
some people disagree.

[...]
> Re: planned obsolescence.
>
>     Ben> Tell me why I cannot buy a slant 6 engine today then.  (This
>     Ben> was an engine sold briefly by Dodge in the 70's that was
>     Ben> unbelievably reliable.  I know people who still swear by
>     Ben> it...)
>
> I remember those.  I can't see that this has anything to do with
> "obsolescence," let alone the "planned" version.  I can think of lots
> of reasons why they might discontinue a technically superior product,
> like lower consumer-willingness-to-pay/producer-cost ratio, higher
> costs of maintaining and improving the design, couldn't meet
> California pollution standards cheaply enough, etc.  If you tell me
> none of those are true, I dunno; I still don't see a connection to
> planned obsolescence.
>
I don't know if those are true.  But I do know people who say
that they would pay a nice premium for one.  Of course these
are also the kind of people who would like to buy a truck
and use it for 15 years or so...

>     Ben> Agreed that customers are ignorant of computers.  However I
>     Ben> suspect that rapid change is in the nature of software for the
>     Ben> near future.  Rapid change guarantees ignorance in most of
>     Ben> your consumers...
>
> True.  This can be a big problem, sufficient to justify legal
> intervention (creation of non-disclaimable implicit warranties).
>
The legal system is moving the other way these days...

[...]
> And remember that distribution costs of _information_ are low.
> Training consumers to understand the meaning of what you say may be
> hard, but I don't think the broad properties of software change that
> fast.  Eg, once a consumer understands the idea of a secure VPN,
> changing the encoding method or the various tunneling/addressing
> methods can be treated as information that the consumer doesn't need
> to understand in detail.  This should improve over time.
>
Cool for helping people understand today's technologies some
5 years too late.  What about the technologies that people will
be arguing about in 5 years?

>     Ben> Agreed.  If you were not locked in to the solution, then
>     Ben> guarantees of future support would not matter - you could
>     Ben> just switch later.  But switching is hard, consumers know
>     Ben> that, and so said guarantees are very important to consumers.
>
> However, one thing that consumers are going to be more aware of as
> time goes on is that one simple measure---using publically
> standardized interfaces---decreases switching costs enormously.
>
And they have not reached this awareness before?  How
did the OSI, POSIX, etc. come into existence then?

Interoperability is like swear-words and euphemisms.  The
new language works for a while, and then the meaning gets
diluted and you are back at square one...

Plus people tend to forget the old lessons with each new
generation of devices...

[...]
>     Ben> It was somewhat of a rhetorical question.  I am claiming that
>     Ben> software lends itself strongly to natural monopolies, this
>     Ben> fact is being recognized more and more widely, and the
>     Ben> resulting dynamics are not to be underestimated.
>
> I don't understand what you're getting at.  The natural monopoly
> aspect is sufficiently obvious that anybody who doesn't understand it
> had better get a job working for somebody who does (or why Wozniak
> worked for Jobs, not vice versa).  It has been at least since
> Visicalc.
>
Thank you for confirming that most of corporate America is
run by idiots.  With that established, as these idiots realize
that the difference makes a difference to them in a particular
area and start to act on that realization, the result is
economically important.

[Interesting comments on "monopolistic competition" snipped]
>     Ben> But when the "monopoly" is held by a free software product,
>     Ben> which allows for the product to be supplied by "an internal
>     Ben> free market" within the bounds what would otherwise be a
>     Ben> "monopoly market", then you get an opportunities for a more
>     Ben> efficient competition.
>
> That's true, as I re-stated above.  Except that I don't see an obvious
> definition for "internal free market".  I don't even really have a
> good idea of what it looks like to the firms and users participating,
> let alone what the crucial economic relations will be.

The current market in selling support for Linux (largely tied
to selling Linux distributions) would be a good example
to look at.  A rough definition would be, "A free market
for products tied to an architecture which is itself in
competition with other architectures."  This definition is more
general than just software; for instance it can be used to
compare the PC market vs. Apple vs. the workstation
vendors.  A common result appears to be a tendency for
the architecture in question to win. :-)

[...]
> True.  But what you're missing in that statement is that you can
> choose the degree of monopoly by specifying the level(s) at which
> public standard interfaces must be used.  [...]
>
I strongly suggest looking back through the history of
software to see how other "open" initiatives have played
out.  You should find no shortage of potential examples
to sharpen your intuition.

>     >> I don't care.  To an economist, perceived barriers are real
>     >> barriers, and vice versa.  A real barrier which isn't perceived
>     >> has a tendency to supply the dreamer with a rude awakening.
>     >> Perceived barriers which aren't real have a habit of
>     >> evaporating, to the great profit of the lucky sleeper who wakes
>     >> first.  No guarantees, but close enough for government work.
>
>     Ben> In the long-run at equilibrium, yes.  However a substantial
>     Ben> part of the dynamics of software depends upon what the
>     Ben> current *dis-equilibrium* is.
>
> True.  I think that the important part of disequilibrium is that
> consumers don't realize things like "all Internet connections are
> publicly accessible" in a certain sense, though.  Once consumers are
> educated to that level, they will start demanding appropriate features
> and standards and the perceived/real differential will vanish.
>
Fine.  Will they be informed about the advantages of various
types of distributed architectures and the relevant
protocols?

Also, how long has it taken to teach consumers the
realities of what leads to viruses and what viruses will
cost?

>     Ben> Transient effects are hard to model, granted.  However that
>     Ben> does not stop them from being incredibly important in a
>     Ben> rapidly changing industry...
>
> On the flip side, robber barons come and go, but government regulatory
> agencies are forever.

Not if Microsoft can get the DOJ's budget slashed
a few times. :-/

Ben